The potential for visual analytics is enormous. The gathering of image and video data through IP-connected cameras, and the intelligent processing of that data either at the edge (close to the device) or in the cloud, have the potential to complement, or perhaps replace, sensor data in many IoT use cases.
From face and object recognition in public surveillance systems, through monitoring in assisted living solutions and smart parking occupancy detection, to retail footfall analysis… the list really is endless.
The diversity of these use cases is breathtaking, in terms of the characteristics of the data and the requirements of the application. In the smart parking example, it’s really sufficient to determine that there is (or is not) an object of the correct size in the specified location; if there is a big rectangular thing there, then the parking space is occupied.
Picking out and recognising a specific face in a crowd of moving people is much more demanding; quality control on a factory production line might be somewhere in the middle. Sometimes the camera needs to move about, sometimes it must be able to pan, sometimes it can stay fixed.
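The smart parking case can be illustrated with a minimal sketch: assuming an upstream object detector has already produced bounding boxes, occupancy reduces to checking whether any detected "big rectangular thing" overlaps the known parking-space rectangle enough. The function names and the 0.5 overlap threshold here are illustrative assumptions, not part of any specific product.

```python
# Hypothetical parking-occupancy check via bounding-box overlap (IoU).
# Boxes are (x1, y1, x2, y2) tuples in image coordinates.

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned rectangles."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (may be empty).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def space_occupied(space_box, detections, threshold=0.5):
    """True if any detected object overlaps the space enough to count."""
    return any(iou(space_box, d) >= threshold for d in detections)

# A car-sized box sitting mostly inside the space counts as occupied;
# a detection elsewhere in the frame does not.
print(space_occupied((0, 0, 10, 10), [(1, 1, 9, 9)]))     # True
print(space_occupied((0, 0, 10, 10), [(20, 20, 30, 30)])) # False
```

Note how little precision this demands compared with face recognition: no identity, no tracking, just a coarse geometric test per frame.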
If it wasn’t for the fact that we humans are creatures that derive much of our primary sense data about the physical world from two paired light-sensing organs, we probably wouldn’t think about all of these different applications as belonging to the same ‘vision’ category at all.
This diversity means that creating solutions is best handled by an open-ended, flexible platform with interfaces to other components and to the widest possible universe of developers, rather than by a series of closed vertical systems. Qualcomm's 11 April announcement of its Vision Intelligence Platform seems to fit the bill rather well.