Human vision inspires new camera system

Inspired by how human gaze works, scientists have created a new way for computer-controlled cameras to 'see'.

Researchers from the University of Glasgow in the UK have developed a method for creating video using single-pixel cameras.

They have found a way to instruct cameras to prioritise objects in images using a method similar to the way our brains make the same decisions.
The eyes and brains of humans, and many animals, work in tandem to prioritise specific areas of their field of view.

During a conversation, for example, visual attention is focused primarily on the other speaker, with less of the brain's 'processing time' given over to peripheral details. The vision of some hunting animals also works along similar lines.

The team's sensor uses just one light-sensitive pixel to build up moving images of objects placed in front of it.
Single-pixel sensors are much cheaper than the dedicated megapixel sensors found in digital cameras, and can build images at wavelengths where conventional cameras are expensive or simply do not exist, such as infrared or terahertz frequencies.

The images the system outputs are square, with an overall resolution of 1,000 pixels. In conventional cameras, those thousand pixels would be evenly spread in a grid across the image.

The team's new system can instead allocate its 'pixel budget' to prioritise the most important areas within the frame, concentrating pixels in those locations and so sharpening the detail of some sections while sacrificing detail in others.
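The idea of spending a fixed pixel budget non-uniformly can be sketched with a quadtree-style refinement: start with one coarse cell covering the frame and repeatedly split the cell an importance function scores highest, until the budget is spent. This is an illustrative sketch only, not the team's published algorithm; the function names and the importance map are assumptions made for the example.

```python
import heapq

def allocate_pixels(importance, budget=1000, size=32):
    """Spend a fixed pixel budget non-uniformly over a size x size frame.

    Repeatedly splits the cell that the importance function scores highest
    into four quarters, so detail concentrates where it is wanted.
    Returns a list of (x, y, w, h) cells that exactly tile the frame.
    """
    # Max-heap via negated scores; start with one cell covering the frame.
    heap = [(-importance(0, 0, size, size), 0, 0, size, size)]
    done = []  # cells already at 1x1 pixel, which cannot be split further
    # Each split replaces one cell with four, a net gain of three cells.
    while heap and len(heap) + len(done) + 3 <= budget:
        neg, x, y, w, h = heapq.heappop(heap)
        if w < 2 or h < 2:
            done.append((0, x, y, w, h))
            continue
        hw, hh = w // 2, h // 2  # size is a power of two, so this is exact
        for nx, ny in ((x, y), (x + hw, y), (x, y + hh), (x + hw, y + hh)):
            heapq.heappush(heap, (-importance(nx, ny, hw, hh), nx, ny, hw, hh))
    return [(x, y, w, h) for _, x, y, w, h in heap + done]
```

For example, an importance function that favours the top-left of the frame, such as `lambda x, y, w, h: -(x + y)`, leaves that corner tiled with fine 1x1 cells while the rest of the frame is covered by a handful of coarse blocks, all within the same 1,000-cell budget.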

This pixel distribution can be changed from one frame to the next, similar to the way biological vision systems work, for example when human gaze is redirected from one person to another.
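One simple way to drive that per-frame reallocation, sketched here as an assumption rather than as the team's published method, is to score each region by how much it changed since the previous frame, so that moving objects attract the finer cells:

```python
def motion_importance(prev_frame, curr_frame):
    """Build an importance function scoring a region by the mean absolute
    change between two frames (more motion -> more detail wanted there).

    Frames are 2D lists of grey levels, indexed as frame[y][x].
    """
    def importance(x, y, w, h):
        diff = sum(abs(curr_frame[j][i] - prev_frame[j][i])
                   for j in range(y, y + h) for i in range(x, x + w))
        return diff / (w * h)
    return importance

# Two 4x4 toy frames; only the top-left 2x2 block changes between them.
prev = [[0] * 4 for _ in range(4)]
curr = [[0] * 4 for _ in range(4)]
for j in range(2):
    for i in range(2):
        curr[j][i] = 10

score = motion_importance(prev, curr)
score(0, 0, 2, 2)  # moving region scores 10.0
score(2, 2, 2, 2)  # static region scores 0.0
```

Recomputing the importance map for every frame lets the pixel distribution follow the action, in the same spirit as gaze shifting from one person to another.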

"Initially, the problem I was trying to solve was how to maximise the frame rate of the single-pixel system to make the video output as smooth as possible," said David Phillips, from Glasgow's School of Physics and Astronomy.

"However, I started to think a bit about how vision works in living things and I realised that building a programme which could interpret the data from our single-pixel sensor along similar lines could solve the problem," said Phillips, who led the research.

"By channelling our pixel budget into areas where high resolution was beneficial, such as where an object is moving, we could instruct the system to pay less attention to the other areas of the frame," the researchers said.
"By prioritising the information from the sensor in this way, we have managed to produce images at an improved frame rate but we have also taught the system a valuable new skill.

"We are keen to continue improving the system and explore the opportunities for industrial and commercial use, for example in medical imaging," Phillips added. The research was published in the journal Science Advances.