UNIVERSITY PARK, Pa. — On a cold, sunny day, you’re driving on a rural road, surrounded by snow-covered fields. In an instant, your eyes process the scene, picking out individual objects to focus on — a stop sign, a barn — while the rest of the scene blurs in the periphery. Your brain stores the focused and blurred images as a memory that can be pictured in your mind later, while sitting at your desk.
Mimicking the human eye's effortless, instantaneous image processing, Penn State electrical engineering researchers created a metasurface: an optical element, akin to a glass slide, that uses tiny nanostructures set at different angles to control light. Led by corresponding author Xingjie Ni, associate professor of electrical engineering and computer science (EECS) at Penn State, the team reported their invention in the journal Nature Communications.
Artificial intelligence (AI) systems require significant computing power and energy, and they can be slow to process images and identify objects, according to the researchers. By contrast, the metasurface can preprocess and transform images before they are captured by a camera, allowing a computer — and AI — to process them with minimal power and data bandwidth.
The metasurface works by converting an image from the Cartesian coordinate system, where image pixels are arranged in straight rows and columns along the x and y axes, to the log-polar system, which uses a bullseye-like pixel distribution.
“Like the arrangement of light receptors inside the human eye, the metasurface takes images and arranges them in a log-polar coordinate system — with denser pixels for the central, focused features and sparser pixels for the peripheral regions,” Ni said. “This allows for the more important aspects of a photo to come through clearly while others remain less in focus, thereby saving data bandwidth.”
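For readers who want to see the idea in code, the remapping can be sketched numerically. The short Python example below is a simplified nearest-neighbor sketch; the function name, output resolution and sampling choices are illustrative assumptions, not parameters from the paper. It resamples an ordinary image onto a log-polar grid that is dense near the center and sparse toward the edges, analogous to what the metasurface does optically without any computation:

    import numpy as np

    def to_log_polar(image, out_shape=(128, 256)):
        """Resample a square grayscale image (Cartesian grid) onto a
        log-polar grid: rows index log-radius, columns index angle.
        Pixels near the image center are sampled densely and the
        periphery sparsely, giving a bullseye-like distribution."""
        h, w = image.shape
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        n_rho, n_theta = out_shape
        max_r = min(cx, cy)

        # Log-spaced radii (dense near the center) and evenly spaced angles.
        rho = np.linspace(0.0, np.log(max_r), n_rho)
        theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
        r = np.exp(rho)

        # For each (log-radius, angle) output pixel, find the Cartesian
        # source pixel and copy it (nearest-neighbor sampling for simplicity).
        rr, tt = np.meshgrid(r, theta, indexing="ij")
        xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
        ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
        return image[ys, xs]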
The metasurface is placed in front of a camera so that light passes through it first, transforming the image from the Cartesian system into log-polar coordinates before the camera digitizes it and transfers it to a computer. Because it works by using nanostructures to bend light, the metasurface needs no power and operates at the speed of light.
“As an image of an object can vary in size or orientation, it is desirable to preprocess images to make them resistant to scale and rotation changes,” Ni said. “This preprocessing helps AI applications more easily recognize them as the same object.”
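Ni's point about scale and rotation can be seen directly in the sketch above: in log-polar coordinates, rotating an image about its center becomes a circular shift along the angle axis, and uniformly scaling it becomes a shift along the log-radius axis. The brief check below, which is illustrative only and reuses the to_log_polar sketch, rotates a synthetic test image by 90 degrees and finds the angular shift that best aligns the two log-polar images:

    # Rotation about the center becomes a circular shift along the angle
    # axis of the log-polar image (illustrative check with synthetic data).
    img = np.zeros((256, 256))
    img[96:160, 140:180] = 1.0              # an off-center bright patch

    lp = to_log_polar(img)
    lp_rot = to_log_polar(np.rot90(img))    # the same scene, rotated 90 degrees

    # Find the circular shift along the angle axis that best aligns the two;
    # a quarter turn lands near a quarter of the 256 angle columns
    # (about 64 or 192, depending on the sign convention for the angle).
    scores = [np.sum(np.roll(lp, s, axis=1) * lp_rot) for s in range(lp.shape[1])]
    print(int(np.argmax(scores)))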
By placing a different metasurface in front of a camera, researchers also can transform the log-polar image back into the original image with Cartesian coordinates.
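Numerically, that inverse mapping might be sketched as follows; this is again a simplified nearest-neighbor approximation using the same conventions as to_log_polar above, whereas the real system performs the step optically with a second metasurface:

    def from_log_polar(lp_image, out_size=256):
        """Approximate inverse of to_log_polar: resample a log-polar image
        back onto a square Cartesian grid (nearest-neighbor lookup)."""
        n_rho, n_theta = lp_image.shape
        cy = cx = (out_size - 1) / 2.0
        max_r = min(cx, cy)

        # For every Cartesian output pixel, compute its radius and angle,
        # then look up the nearest log-polar sample.
        ys, xs = np.meshgrid(np.arange(out_size), np.arange(out_size), indexing="ij")
        dy, dx = ys - cy, xs - cx
        r = np.hypot(dx, dy)
        theta = np.mod(np.arctan2(dy, dx), 2.0 * np.pi)

        i = np.log(np.maximum(r, 1.0)) / np.log(max_r) * (n_rho - 1)
        i = np.clip(np.round(i).astype(int), 0, n_rho - 1)
        j = np.round(theta / (2.0 * np.pi) * n_theta).astype(int) % n_theta
        return lp_image[i, j]

Chaining the two sketches, from_log_polar(to_log_polar(img)), returns an approximation of the original image that is sharp near the center and coarse toward the edges, mirroring the foveated sampling Ni describes.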
The invention has many potential applications, the researchers said, including target tracking and surveillance, such as mapping how a car moves across a city.
“A metasurface can be used in tandem with AI systems as a preprocessor, making it easier to recognize the same car from multiple street view cameras,” Ni said. “Or if it is applied to a satellite, it could potentially track planes from takeoff to landing.”
In addition to Ni, the co-authors include Xingwang Zhang, a former postdoctoral scholar in EECS; Xiaojie Zhang, who was a graduate student in EECS at the time of the research; Yao Duan, who earned a doctorate in EECS from Penn State; and Lidan Zhang, a graduate student in EECS.
The Gordon and Betty Moore Foundation, NASA, the Office of Naval Research, the National Eye Institute of the National Institutes of Health and the U.S. National Science Foundation supported this work.