Automatic Video Activity Recognition (AVAR)
Patented AI algorithms for recognizing human activities and movement are the core of linedanceAI. From video files or live streams, the algorithms identify, detect, and understand human patterns of behavior through activities and movement. AI capability for recognizing human activities in video builds a safer tomorrow for everyone, everywhere.
Making Pixels Smart
Billions of videos. Everywhere. Searching, querying, or understanding the interrelations between videos is impractical at that scale. linedanceAI's AVAR converts videos into smart pixels by recognizing and clustering human movements to create new insights.
Classify & Search
Machine-learning models classify human movement by specific activity in a video or stream, and generate annotations for search.
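As a rough illustration of the idea, a classifier can map per-clip movement features to activity labels and emit timestamped annotations that a search index can consume. The features, labels, and nearest-centroid model below are stand-ins for illustration, not linedanceAI's actual stack.

```python
# Hypothetical sketch: classify per-clip movement features and emit
# searchable (timestamp, label) annotations. Feature names, centroids,
# and the nearest-centroid model are illustrative assumptions.
import math

# Toy "trained" centroids for two activity classes; assumed features:
# (mean joint velocity, stride frequency).
CENTROIDS = {
    "walking": (0.4, 1.8),
    "waving":  (1.2, 0.2),
}

def classify(features):
    """Return the activity label whose centroid is nearest (Euclidean)."""
    return min(CENTROIDS, key=lambda label: math.dist(features, CENTROIDS[label]))

def annotate(clips):
    """Map (timestamp, features) pairs to searchable (timestamp, label) tags."""
    return [(t, classify(f)) for t, f in clips]

clips = [(0.0, (0.45, 1.7)), (3.2, (1.1, 0.3))]
print(annotate(clips))  # [(0.0, 'walking'), (3.2, 'waving')]
```

The resulting label stream is what makes the pixels searchable: a text query such as "waving" reduces to a lookup over the annotations rather than a scan over raw frames.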
Movement Unique As A Fingerprint
Deep-learning algorithms run over movement data to measure the similarity of a gait or other signature movement, enabling identity or movement verification, recognition, and performance analysis.
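One common way to operationalize "movement as a fingerprint" is to embed a movement into a vector and compare vectors with cosine similarity against a verification threshold. The embeddings and threshold below are made-up illustrations, not linedanceAI's actual method.

```python
# Hypothetical sketch: verify a movement "signature" by comparing
# embedding vectors (assumed to come from some gait model) with
# cosine similarity. Vectors and threshold are illustrative only.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify(enrolled, probe, threshold=0.9):
    """True if the probe movement matches the enrolled signature."""
    return cosine_similarity(enrolled, probe) >= threshold

enrolled = [0.2, 0.7, 0.1, 0.6]          # stored gait signature
same_person = [0.22, 0.68, 0.12, 0.58]   # new capture, same gait
other = [0.9, 0.1, 0.8, 0.05]            # different movement pattern
print(verify(enrolled, same_person))  # True
print(verify(enrolled, other))        # False
```

The same similarity score supports the other uses named above: ranking matches for recognition, or tracking how far an athlete's movement drifts from a reference for performance analysis.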
Learned sequences of movement that correspond to behaviors generate alert indicators, preemptively identifying risk or threat.
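To make the sequence-to-alert idea concrete, a minimal sketch can scan a stream of recognized activity labels for a learned risk pattern. The labels and pattern here are invented, and a production system would use a sequence model rather than exact matching.

```python
# Hypothetical sketch: raise alerts where a learned "risk" sequence of
# activity labels appears, in order, in an observed label stream.
# Pattern and labels are illustrative assumptions.
RISK_PATTERN = ["loiter", "approach", "reach"]

def find_alerts(stream, pattern=RISK_PATTERN):
    """Return start indices where the risk pattern occurs contiguously."""
    n = len(pattern)
    return [i for i in range(len(stream) - n + 1) if stream[i:i + n] == pattern]

observed = ["walk", "loiter", "approach", "reach", "walk"]
print(find_alerts(observed))  # [1]
```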
Another side of video data...
In 2013 we began to develop novel machine-learning and image-processing approaches to automatically learn and analyze human body movement. Our early source of data was sign language movements. Designing algorithms to learn a body-movement-based language with high fidelity from a small number of samples gave rise to our AI-as-a-Service algorithm stack for video analytics. US Patents US10628664B2 and US20200167555A1 (with filings in Canada, Israel, and the EPO).
If you are curious about our sign language work, visit Project KinTrans, Hands Can Talk.
Sign language has been used across technical research domains to develop and enhance machine-learning models of body movement. Through a Microsoft-sponsored project, linedanceAI owns a dataset of over 600,000 unique sign movements, one of the largest annotated 3D sign language datasets for body movement analysis.