An eye for an AI: Optic device mimics human retina

The device responds to changes in what it sees, which could help it spot objects much more quickly.

Published: December 10, 2020 at 8:33 am

If our artificial intelligence is able to think like a human brain, why do we feed it data like a normal computer? Scientists are addressing this question by considering the sensory input we receive, and have developed an optical device inspired by the workings of the human eye. Researchers in Oregon have published their research on optical sensors, which could make robotic components far more efficient.

Using ultrathin layers of photosensitive perovskite material, normally used in solar cells, the device adapts its signal as it senses different intensities of light. Perovskites are chemical materials composed of positively charged metal atoms and negatively charged oxygen or halide anions, which stack into a distinctive lattice.

It's this charged lattice structure that gives perovskites their unique properties, as atomic-level changes in the structure can alter the material's electrical behaviour. These properties make perovskites excellent semiconductors, able to switch from insulating electricity to conducting it.

Unlike solar cells, the devices do not store the light they receive and use it as energy; instead, they respond to changes in illumination. In doing so, these new ‘retinomorphic’ sensors send signals to process the image in front of them based on how the light is changing.
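To picture how such a sensor differs from an ordinary light detector, the toy model below treats a single pixel as a change detector: it compares incoming light with a slowly adapting baseline, so constant illumination fades to nothing while a sudden change produces a spike. This is only an illustrative sketch; the function name, time constant and behaviour are assumptions made for the example, not details of Labram's device.

```python
# Minimal sketch of a change-responsive ("retinomorphic") pixel.
# Assumption: the sensor behaves roughly like a high-pass filter on
# illumination, responding strongly when the light changes and settling
# back towards zero under constant light. Names and the time constant
# are illustrative, not taken from the published device.

def retinomorphic_pixel(intensities, tau=5.0):
    """Return a change signal for a sequence of light intensities."""
    baseline = intensities[0]   # slowly adapting estimate of steady light
    outputs = []
    for light in intensities:
        outputs.append(light - baseline)      # respond to the change
        baseline += (light - baseline) / tau  # adapt towards the current level
    return outputs

# Constant illumination reads as ~0; a sudden step produces a spike that decays.
print(retinomorphic_pixel([1.0] * 5 + [5.0] * 5))
```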

Dr John Labram, Assistant Professor of Electrical and Computer Engineering, was initially inspired by a biology lecture he played in the background, which detailed how the human brain and eyes work. Our eyes contain photoreceptors that are sensitive to changes in light, but less responsive to constant illumination. From this, he started sketching potential devices to mimic the processing behaviour of these photoreceptors in our eyes.

Such changes are often associated with motion, making this an important development for the field of artificial intelligence. Looking out across a beach, our eyes are drawn to changes such as a huge, curling wave or a seagull swooping down to steal our chips. By prioritising information in this way, we take less time to interpret our surroundings.


For artificial intelligence, this translates to simpler, more efficient processing at the visual input level, meaning AI systems could bring together different types of information much more quickly than they currently do.

"You can imagine these sensors being used by a robot tracking the motion of objects. Anything static in its field of view would not elicit a response, however a moving object would be registering a high voltage. This would tell the robot immediately where the object was, without any complex image processing," said Dr Labram.

Currently, computers receive information in a step-by-step way, processing inputs as a series of data points, whereas this technology helps build a more integrated system. Artificial intelligence researchers are attempting to emulate the human brain, which contains a network of neurons (communicating cells) able to process information in parallel. Labram's research is an important step in this direction, with the potential to be scaled up for robotics, image recognition and self-driving cars.

Why do we make robots look like humans?

Asked by: Roberta Wild, Lincoln

We’ve always been fascinated by the idea of creating autonomous machines that resemble us, and if they need to interact closely with us, we prefer them to look familiar.

Human-like robots such as Honda’s ASIMO, Boston Dynamics’ Atlas, and the childlike iCub built by the Italian Institute of Technology are amazing demonstrations of our technology, but they still have a long way to go – and when they look nearly human but not quite, they end up looking seriously freaky to us.

Perhaps we should just let robots be the shape they need to be, in order to best carry out their function.
