Machine learning consists of two components: data and the algorithms used to process that data. Data is fast becoming the more important half of the equation. Early machine learning use cases focused on numerical data and regression algorithms; the technology then progressed to text and images. Building on all of these advances, it is now possible to apply machine learning to video data as well.
To make this easy, Comp Three created AUGI, a video analytics platform that emphasizes leading-edge technology and extensibility. It leverages images, audio, and transcriptions to unlock insights hiding in video data. The platform can be extended with custom dictionaries and machine learning models to ensure strong performance for any given industry use case. AUGI lets users get to the most relevant parts of their videos quickly.
The best way to understand AUGI is to take a look at the individual component capabilities:
Audio Capabilities: Audio is extracted and analyzed to determine interesting points in videos. AUGI also leverages the latest in machine learning to generate video transcriptions for downstream processing.
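As a rough illustration of how audio interest points might be found, the sketch below flags windows of an audio signal whose RMS energy exceeds a threshold. This is a minimal assumption-laden example, not AUGI's actual pipeline; the window size and threshold are illustrative.

```python
# Hypothetical sketch: flag "interesting" moments in an audio track by
# windowed RMS (root-mean-square) energy. Loud windows are treated as
# points of interest. Window size and threshold are illustrative only.

def interest_points(samples, window=4, threshold=0.5):
    """Return start indices of windows whose RMS energy exceeds threshold."""
    points = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        rms = (sum(s * s for s in chunk) / window) ** 0.5
        if rms > threshold:
            points.append(start)
    return points
```

A real system would work on decoded audio frames and likely combine energy with spectral features, but the windowed-scoring structure is the same.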
Text Capabilities: Video transcriptions are analyzed using NLP to identify entities for semantic querying. Sentiment analysis identifies a video's tone, while a "hot" language classifier flags heated moments. Word2Vec models enable comparisons between videos.
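One common way Word2Vec enables such comparisons is to average the word vectors of each transcript and compare the averages with cosine similarity. The sketch below shows that idea with a tiny hand-made vector table standing in for a trained Word2Vec model; the words and vectors are purely illustrative.

```python
import math

# Hypothetical sketch: compare two transcripts by averaging per-word
# vectors (stand-ins for trained Word2Vec embeddings) and taking the
# cosine similarity of the averages. TOY_VECTORS is illustrative only.
TOY_VECTORS = {
    "goal":   [0.9, 0.1],
    "score":  [0.8, 0.2],
    "recipe": [0.1, 0.9],
    "bake":   [0.2, 0.8],
}

def transcript_vector(words):
    """Average the vectors of known words; zero vector if none match."""
    known = [TOY_VECTORS[w] for w in words if w in TOY_VECTORS]
    if not known:
        return [0.0, 0.0]
    return [sum(dim) / len(known) for dim in zip(*known)]

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

sports = transcript_vector(["goal", "score"])
cooking = transcript_vector(["recipe", "bake"])
```

With real Word2Vec vectors, two transcripts about the same topic score close to 1.0, while unrelated transcripts score much lower.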
Image Capabilities: Object detection provides a baseline understanding of video content. People of interest are identified using facial recognition. Frames are sampled using intelligent keyframe extraction.
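Keyframe extraction can be approximated very simply: keep a frame only when it differs enough from the last frame kept. The sketch below shows that idea, assuming frames are flat lists of pixel values; the threshold and frame representation are illustrative, not AUGI's implementation.

```python
# Hypothetical sketch: naive keyframe extraction. A frame becomes a
# keyframe when its mean absolute pixel difference from the last kept
# keyframe exceeds a threshold. Frames are flat lists of pixel values
# here; the threshold is illustrative only.

def extract_keyframes(frames, threshold=10.0):
    """Return indices of frames that differ enough from the last keyframe."""
    if not frames:
        return []
    keyframes = [0]                      # always keep the first frame
    last = frames[0]
    for i, frame in enumerate(frames[1:], start=1):
        diff = sum(abs(a - b) for a, b in zip(frame, last)) / len(frame)
        if diff > threshold:
            keyframes.append(i)
            last = frame
    return keyframes
```

Production systems typically score perceptual or histogram differences on decoded video frames, but the select-on-change loop is the core of the technique.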
If you’d like to learn more about this project, feel free to reach out to us at firstname.lastname@example.org.