Intel Aims To Help Partners Accelerate Vision Computing At The Edge
Intel has announced a new developer toolkit that can help channel partners accelerate development of high-performance video analysis on edge devices for Internet of Things deployments.
The new toolkit, announced Wednesday, is called OpenVINO, which stands for Open Visual Inference and Neural Network Optimization. The goal is to give developers who have been training neural networks in existing frameworks such as TensorFlow and Caffe an easy way to bring inference capabilities to the edge for vision computing applications.
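To illustrate the workflow Intel describes, the following is a minimal sketch using OpenVINO's Python inference API: a model trained in TensorFlow or Caffe is converted to OpenVINO's intermediate representation by the Model Optimizer, then loaded and run by the Inference Engine on an edge system. The file names, input shapes, and dummy frame below are placeholders for illustration, and the exact commands and API surface vary by OpenVINO release.

```python
# Conversion step (command line, run once on a development machine; exact
# invocation depends on the OpenVINO release):
#   mo --input_model frozen_model.pb   # produces model.xml + model.bin

import numpy as np
from openvino.inference_engine import IECore

ie = IECore()

# Load the converted network (IR files produced by the Model Optimizer).
net = ie.read_network(model="model.xml", weights="model.bin")
input_name = next(iter(net.input_info))
input_shape = net.input_info[input_name].input_data.shape  # e.g. [1, 3, 224, 224]

# Compile the network for a target device -- here the host CPU.
exec_net = ie.load_network(network=net, device_name="CPU")

# Run inference on a dummy frame shaped like the model's expected input.
frame = np.random.rand(*input_shape).astype(np.float32)
result = exec_net.infer(inputs={input_name: frame})
print({name: out.shape for name, out in result.items()})
```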
Inference is the stage of artificial intelligence in which a trained model is applied to new data to produce predictions or insights. For example, equipment provider Adlink has used OpenVINO to develop a vision computing solution that inspects printed circuit boards on a production line for errors, according to Steen Graham, managing director of IoT channels at Intel.
"What that allows them to do is if you catch an error early in the production line, you don't actually take it through the whole manufacturing process," he told CRN.
Other companies that have been using OpenVINO include GE Healthcare, Amazon Web Services, Current by GE, Honeywell, Dell and Dahua.
While the toolkit is meant more broadly for developers, Graham said that channel partners focusing on vision computing as part of their IoT solutions will find OpenVINO helpful, especially because it can run inference algorithms on existing host systems.
"When you look at our channel, they have a ton of host systems with CPUs and integrated graphics that they're not leveraging for inference today," Graham said. He added that "making use of existing infrastructure to do AI at the edge" gives partners "a great business opportunity to solve their customers' problems."
Beyond working with CPUs and integrated graphics, OpenVINO is also optimized for two Intel products developed more specifically for AI applications: the Movidius vision processing unit and Intel's field-programmable gate arrays (FPGAs), the company's line of reprogrammable chips for high-performance computing.
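In practice, targeting different Intel hardware amounts to changing the device name passed to the Inference Engine. The sketch below assumes the conventional plugin identifiers ("CPU", "GPU" for integrated graphics, "MYRIAD" for the Movidius VPU, and a heterogeneous FPGA-plus-CPU target); which plugins are actually available depends on the installed drivers and the OpenVINO release, so treat this as illustrative rather than definitive.

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")

for device in ("CPU",               # host processor
               "GPU",               # Intel integrated graphics
               "MYRIAD",            # Movidius vision processing unit
               "HETERO:FPGA,CPU"):  # FPGA with CPU fallback for unsupported layers
    try:
        exec_net = ie.load_network(network=net, device_name=device)
        print(f"Loaded network on {device}")
    except Exception as exc:
        print(f"{device} not available: {exc}")
```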
Brian Salisbury, vice president of product management at Melville, N.Y.-based solution provider Comtech Telecommunications, told CRN that with the large amounts of video data generated by vision computing applications, it makes much more sense to process that information at the edge, rather than sending all of it to the cloud.
"One of the things that people are realizing when they start to look at this is that you cannot pass all of this data back to a central place. The volume is massive," he said.
Salisbury said he remembers a presentation that underlined why edge computing is a necessity: the presenter described how the data captured by vehicles each day amounted to roughly "14 Facebooks" worth of information, a gargantuan figure given that Facebook alone has 2.2 billion monthly active users.
"Clearly if you fed all of that amount of data on a daily basis back into some central place, it just wouldn't be practical, so it has to move to the edge so that pre-processing can be done, compression can be done, insights can be gained and fed back into the real world for some of the applications," Salisbury said.