Catherine Bentley

Seeing the Unseen at the Embedded Vision Summit

Updated: Jun 16

Edge of the Technology Universe...

Mohamed and I first met in 2014. When he told me about his vision of decoding meaning from human movement using a computer and camera, I thought he was on the edge of the technology universe!

He was starting with one use case: translating sign language into text. Because sign language has structure and connected movements with distinct meanings, to "understand" it meant being able to "recognize" those distinct meanings within the language's structure. Our body movements, too, carry distinct meaning in the context in which we move, so sign language was representative of body movement in general. This is where we started.


During those early years, roughly 2014 to 2018, we were watching a new "future" unfold that mattered to us because we were working on computational learning of human movement: the release of Google Glass, body-adapted wearable sensors like Fitbit health trackers, and AR/VR arriving on the world stage for early adoption. And let's not forget Microsoft's popular gaming peripheral, the Kinect, a depth sensor that made gaming interactive, along with the early mixed-reality headset, HoloLens, which connects industrial operations to task-specific interactive information. As you can tell, using artificial intelligence (AI) to process input data from vision and sensors was still very niche.


Today in Edge AI and Vision

What has come of this history is anything but niche. On May 17-18, the Embedded Vision Summit, brought to us by the Edge AI and Vision Alliance, showed me how AI and vision together now form a fundamental technology that generates competitive advantage for all kinds of businesses.


Here are my key takeaways from the conference:

  • Advanced vision functionality is now embedded in all kinds of devices and creates new business value in almost every industry category.

  • Most vision, sensing, or visual-sensing tech processes data at the edge, meaning the AI is designed to learn, understand, and generate information for a task in real time right on the device; alternatively, some of the data is processed on the device and some in the cloud (see the sketch after this list).

  • Vision workloads are now designed specifically for the industry use case.

  • Data processing workloads that use vision inputs have their own performance requirements based on the task; therefore, chip design and performance have emerged as essential to the industry.

  • Chip competition is fierce, as use-case requirements affect pricing and production, and innovation is happening rapidly. Gone are the days of general-purpose chips that ran anything with reasonable performance; chip design and performance are now an essential component of edge AI and embedded vision value.
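
To make the edge/cloud split from the second takeaway concrete, here is a minimal sketch in Python. It is illustrative only: the function names, confidence threshold, and stand-in models are my own assumptions, not any particular vendor's API, and a real device would run an embedded inference runtime where the stubs are.

import random

CLOUD_THRESHOLD = 0.80  # assumed confidence below which we defer to the cloud

def infer_on_device(frame):
    # Stand-in for a quantized on-device vision model; a real system
    # would invoke an embedded inference runtime here.
    return "person", random.uniform(0.5, 1.0)

def infer_in_cloud(frame):
    # Stand-in for a heavier cloud model, called only for hard cases.
    return "person", 0.99

def process_frame(frame):
    # Real-time path: handle the frame entirely on the device when the
    # on-device model is confident enough.
    label, confidence = infer_on_device(frame)
    if confidence >= CLOUD_THRESHOLD:
        return label, confidence, "edge"
    # Split path: escalate only low-confidence frames to the cloud.
    label, confidence = infer_in_cloud(frame)
    return label, confidence, "cloud"

if __name__ == "__main__":
    for i in range(5):
        print(process_frame(f"frame-{i}"))  # placeholder for camera pixels

The design point this illustrates is the one in the takeaway above: latency-sensitive work stays on the device, and only the ambiguous fraction of the data travels to the cloud.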

Looking back, I am still wrapping my head around the eight short years that took us from clunky external cameras for playing games to AI and vision solutions generating trillions of dollars in new value from applications running on everyday devices. Truly, it is astounding.


The AI and vision industry, supported by a chip industry that meets industry-specific processing performance demands, now delivers essential functionality. For example, in transportation, our cars signal us when we are too close to another car; in manufacturing, machine maintenance is now determined by processing internal sensor data and visual inspection. In both cyber and physical security, facial and image recognition is used for a wide variety of purposes, including authentication and alert notifications. AI and vision are playing powerful roles in healthcare, too, supporting diagnostics, remote patient monitoring, and, as with our product Movality, generating objective data to improve delivery and care.


In eight years, we have come to computationally see and understand, at scale, what we couldn't before. The ubiquitous adoption of embedded AI and vision at the enterprise level is driving us closer to mainstream applications for consumers. Every year, as this technology advances, the "edge of the technology universe," as I once thought of it, moves further and further away.


About linedanceAI:

linedanceAI is a machine-learning startup in Dallas, Texas, specializing in human movement. linedanceAI is releasing Movality™, a software solution that produces movement quality records for outpatient musculoskeletal providers who work with patients recovering from accidents or injuries. Movality™ transforms provider observations into explainable, actionable, data-backed assessments that boost patient understanding and increase course-of-care retention. Providers improve the explainability and delivery of musculoskeletal care in their practices by connecting objective movement quality metrics with patient recovery pathways and goals. Visit us at www.linedanceAI.com or email us at info@linedanceAI.com.

