The primary use of ultrasonic transducers, or ultrasonic sensors, is to detect nearby objects and determine their distance by measuring how a transmitted sonic wave is reflected back. A basic ultrasonic setup consists of two components, a transmitter and a receiver, which are housed together or separately depending on the application. A signal that is reflected back to the receiver is registered as a detection.
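The distance calculation behind this setup is a simple time-of-flight computation; a minimal sketch (the function name and the assumed speed of sound in air are illustrative, not from the original text) could look like this:

```python
# Assumption: speed of sound in air at roughly 20 °C.
SPEED_OF_SOUND_M_S = 343.0

def echo_distance_m(round_trip_s: float) -> float:
    """Distance to a reflecting object from the echo's round-trip time.

    The pulse travels to the object and back, so the one-way
    distance is half the total path covered in round_trip_s.
    """
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0

# A 10 ms round trip corresponds to an object about 1.715 m away.
print(echo_distance_m(0.01))
```

Halving the path length is the key step: the timer measures the full out-and-back journey of the pulse, not the one-way distance.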

Current state-of-the-art methods for computationally identifying the view of ultrasonic signals rely on 2-dimensional convolutional neural networks (CNNs), but these classify individual frames of a video in isolation and ignore information describing the movement of structures throughout the cardiac cycle. At YOTASYS, we explore the efficacy of novel CNN architectures, including time-distributed networks and two-stream networks, inspired by advances in human action recognition. We demonstrate that these new architectures more than halve the error rate of traditional CNNs. This gain in accuracy may stem from the networks' ability to track the movement of specific structures, such as heart valves, throughout the sensor measurement cycle.