12/2/2023

Shifty eyes pixel

An early-stage company spun out of Johns Hopkins University wants to make machine vision more like human vision by adding memory and computing to each sensor pixel. Oculi is developing products for gesture recognition and eye tracking in consumer AR/VR systems. Other applications include smart city infrastructure and, eventually, automotive vision sensing.

Beyond the buzz over existing event-based vision sensing frameworks, Oculi CEO Charbel Rizk told EE Times there is plenty of room for innovation elsewhere.

"The problem that we're running into right now with machine vision is that we're using sensors and processors that were developed for different purposes, putting them together and thinking if we throw enough processing downstream we solved the problem," Rizk said. "That's not the case, because the problem really starts at the sensor. Machine vision is not about pretty images. It really should be about efficiency: How do we get the information in an efficient way?"

"There are times when you're not looking for changes, you're looking for something in the scene," he said. "Event sensors out there today don't give you any information at that point, they become blind."

Oculi's SPU combines vision sensing, processing and memory at the pixel level. "There are times when you do want the full frame… so we built our architecture to allow you to get all these outputs using software."

With every pixel capable of some basic computation, algorithms can be implemented on the SPU without external processing. On top of full-frame images and events, that allows two additional forms of output. One is "smart events," which use less than 10 percent of the bandwidth of a full-frame image while still carrying sufficient information for an application. Smart events can also be based on color or depth sensing (using two SPUs for high-speed stereo vision).
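The bandwidth claim is easy to sanity-check with a rough calculation. The sketch below is a minimal back-of-envelope model, not Oculi's actual data format: the 640×480 monochrome sensor, the 8-byte event record, and the assumption that roughly 1 percent of pixels are active in a given interval are all illustrative assumptions.

```python
# Hypothetical sketch: comparing per-frame bandwidth of full-frame readout
# vs. a sparse "smart event" readout, as described in the article.
# The pixel format, event record layout, and activity ratio below are
# illustrative assumptions, not Oculi specifications.

WIDTH, HEIGHT = 640, 480   # assumed sensor resolution
BYTES_PER_PIXEL = 1        # assumed 8-bit monochrome full frame

# Assume each "smart event" carries a pixel coordinate, a timestamp and a
# small payload (e.g., polarity/intensity or a local feature), ~8 bytes.
BYTES_PER_EVENT = 8


def full_frame_bytes() -> int:
    """Bytes needed to read out one complete frame."""
    return WIDTH * HEIGHT * BYTES_PER_PIXEL


def smart_event_bytes(active_pixel_fraction: float) -> int:
    """Bytes needed when only 'active' pixels report an event."""
    active_pixels = int(WIDTH * HEIGHT * active_pixel_fraction)
    return active_pixels * BYTES_PER_EVENT


if __name__ == "__main__":
    frame = full_frame_bytes()
    # If ~1% of pixels are active in a given interval, the event stream
    # uses well under 10% of the full-frame bandwidth.
    events = smart_event_bytes(active_pixel_fraction=0.01)
    print(f"full frame:   {frame} bytes")
    print(f"smart events: {events} bytes "
          f"({100 * events / frame:.1f}% of full frame)")
```

With these assumed numbers, one full frame is about 307 kB while the event stream is about 25 kB, roughly 8 percent of the full-frame readout, which is consistent with the "less than 10 percent" figure quoted above.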