A potential revolution in artificial intelligence centers on recreating how the human retina works. If proven to work at scale, this could be a major advancement not only for robots and self-driving cars, but for any device that could be enhanced by pattern recognition and other image-identifying systems.
According to a study in the journal ACS Nano, a team of researchers at the University of Central Florida, taking inspiration from how the human eye perceives its environment, has created a device that captures visual information via a nano-sized image sensor. That information is in turn stored and processed by the team's machine learning algorithm.
The way this works is that the device senses images and, with the assistance of its machine learning algorithm, recognizes their wavelengths. So far, the researchers have achieved an accuracy rate of 70 to 80 percent.
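To give a rough sense of what a sense-then-classify pipeline of this kind involves, here is a minimal toy sketch. Everything in it is hypothetical: the band centers, the two-channel response model, and the nearest-centroid classifier are illustrative stand-ins, not the UCF team's actual device physics or algorithm.

```python
import random

# Hypothetical wavelength bands (center wavelengths in nm) a sensor
# might be asked to distinguish. These values are illustrative only.
BANDS = {"uv": 350, "visible": 550, "infrared": 1500}

def simulate_response(wavelength_nm, rng):
    """Fake two-channel photoresponse that varies with wavelength plus
    noise, standing in for the real device's in-sensor signals."""
    return (wavelength_nm / 2000 + rng.gauss(0, 0.05),
            1 - wavelength_nm / 2000 + rng.gauss(0, 0.05))

def train_centroids(samples_per_band=50, seed=0):
    """'Learn' one mean response (centroid) per band from noisy samples."""
    rng = random.Random(seed)
    centroids = {}
    for band, wl in BANDS.items():
        readings = [simulate_response(wl, rng) for _ in range(samples_per_band)]
        centroids[band] = tuple(sum(ch) / len(readings) for ch in zip(*readings))
    return centroids

def classify(reading, centroids):
    """Label a reading with the band whose centroid is nearest."""
    return min(centroids,
               key=lambda b: sum((r - c) ** 2
                                 for r, c in zip(reading, centroids[b])))

centroids = train_centroids()
rng = random.Random(1)
trials = 300
correct = 0
for _ in range(trials):
    band = rng.choice(list(BANDS))
    reading = simulate_response(BANDS[band], rng)
    correct += classify(reading, centroids) == band
accuracy = correct / trials
print(f"toy accuracy: {accuracy:.0%}")
```

The sketch only shows the general shape of the idea: noisy spectral readings come in, a trained model maps each reading to a wavelength band, and performance is reported as a classification accuracy.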
The research is a major step in the pursuit of artificial intelligence that can both visualize and identify its surroundings in order to make autonomous decisions. In the study, the device takes in data across the ultraviolet, visible, and infrared spectra (light ranging from 300 nm in the ultraviolet to 2 μm in the infrared) – a greater range of wavelengths than the human eye can perceive.
What makes this unique is that the AI-powered device can take in all three spectra and process the data on a single platform. This is a major step forward in intelligent image technology, since these processes have historically been handled separately.
Study Principal Investigator Tania Roy, an assistant professor in UCF's Department of Materials Science and Engineering and NanoScience Technology Center, stated, "It will change the way artificial intelligence is realized today."
Roy also remarked on the device's uniquely compact nature, with hundreds of the devices fitting on a chip one inch wide. "Today, everything is discrete components and running on conventional hardware. And here, we have the capacity to do in-sensor computing using a single device on one small platform."
With economies of scale, such a breakthrough not only pushes artificial intelligence toward processing its environment faster and with higher accuracy, but also lays the groundwork for future advances that will allow AI to visualize the world in a human-like manner.