
This camera lens can focus up close and far away at the same time


Ben Franklin had nothing on trilobites.

Roughly 400 million years before the founding father invented bifocals, the now extinct trilobite Dalmanitina socialis already had a superior version (SN: 2/2/74). Not only could the sea critter see things both near and far, it could also see both distances in focus at the same time — an ability that eludes most eyes and cameras.

Now, a new type of camera sees the world the way this trilobite did. Inspired by D. socialis’ eyes, the camera can simultaneously focus on two points anywhere between three centimeters and nearly two kilometers away, researchers report April 19 in Nature Communications.

“In optics, there was a problem,” says Amit Agrawal, a physicist at the National Institute of Standards and Technology in Gaithersburg, Md. If you wanted to focus a single lens at two different points, you simply could not do it, he says.

If a camera could see like a trilobite, Agrawal figured, it could capture high-quality images with a greater depth of field. A large depth of field — the distance between the nearest and farthest points that a camera can bring into focus — is important for the relatively new technique of light-field photography, which uses many tiny lenses to produce 3-D photos.
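To make that definition concrete, here is a minimal sketch of the classic thin-lens depth-of-field calculation, which computes the nearest and farthest points of acceptable focus from the usual hyperfocal-distance formulas. The lens parameters are illustrative assumptions, not values from the study.

```python
# Minimal sketch: classic thin-lens depth-of-field limits.
# All parameter values below are illustrative assumptions.

def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Return (near_limit_mm, far_limit_mm) of acceptable focus.

    focal_mm   -- lens focal length
    f_number   -- aperture f-number
    subject_mm -- distance to the focused subject
    coc_mm     -- circle of confusion (blur tolerance)
    """
    # Hyperfocal distance: focusing here keeps everything from
    # roughly half this distance out to infinity acceptably sharp.
    hyperfocal = focal_mm**2 / (f_number * coc_mm) + focal_mm

    near = (subject_mm * (hyperfocal - focal_mm)
            / (hyperfocal + subject_mm - 2 * focal_mm))
    if subject_mm >= hyperfocal:
        return near, float("inf")
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far

# A 50 mm f/8 lens focused at 3 m keeps roughly 2.3 m to 4.2 m sharp:
print(depth_of_field(50, 8, 3000))
```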

To mimic the trilobite’s ability, the team constructed a metalens, a type of flat lens made up of millions of differently sized rectangular nanopillars arranged like a cityscape — if skyscrapers were one two-hundredth the width of a human hair. The nanopillars act as obstacles that bend light in different ways depending on their shape, size and arrangement. The researchers arranged the pillars so some light traveled through one part of the lens and some light through another, creating two different focal points.
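As a rough illustration of that idea (a toy model of my own, not the actual NIST nanopillar design), the sketch below interleaves two ideal lens phase profiles across alternating rings of one flat surface, so that different zones focus light to two different distances. The wavelength, focal lengths and zone width are arbitrary assumptions.

```python
import numpy as np

# Toy bifocal phase profile: alternating annular zones carry the ideal
# lens phase for focal length f1 or f2, so one flat surface focuses
# light to two distances at once. (A real metalens encodes this phase
# with nanopillar shape and orientation, not surface height.)

wavelength = 0.5e-6                 # 500 nm light (assumed)
k = 2 * np.pi / wavelength
f1, f2 = 1e-3, 10e-3                # two focal lengths (assumed)

x = np.linspace(-0.5e-3, 0.5e-3, 1001)   # 1 mm square aperture
xx, yy = np.meshgrid(x, x)
r = np.hypot(xx, yy)                # distance from the lens center

# Ideal hyperbolic lens phase for each target focal length
phase1 = -k * (np.sqrt(r**2 + f1**2) - f1)
phase2 = -k * (np.sqrt(r**2 + f2**2) - f2)

# Interleave: even 20-micron-wide rings focus at f1, odd rings at f2
zone = (r // 20e-6).astype(int)
bifocal_phase = np.where(zone % 2 == 0, phase1, phase2)
```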

[Illustration: a metalens captures a faraway tree and a nearby rabbit in a single, highly focused image]
The trilobite-inspired metalens is a flat surface covered in rectangular ‘nanopillars’. Their shapes and orientations bend light in such a way that distant objects and nearby ones can be focused in a single plane, providing an image with high depth of field. S. Kelley/NIST

To use the device in a light-field camera, the team then built an array of identical metalenses that could capture thousands of tiny images. When combined, the result is an image that’s in focus close up and far away, but blurry in between. The blurry bits are then sharpened with a machine-learning program.
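The article describes that sharpening step only as machine learning, so it isn’t reproduced here; as a crude stand-in for the general idea of merging a near-focused view with a far-focused one, this sketch keeps whichever image is locally sharper at each pixel, using Laplacian energy as the sharpness cue. The function name and parameters are my own.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

# Crude stand-in (not the paper's machine-learning step): merge two
# registered photos of the same scene, one focused near and one far,
# by keeping whichever is locally sharper at each pixel.

def merge_by_sharpness(near_img, far_img, window=9):
    """near_img, far_img: 2-D float arrays; returns the merged image."""
    sharp_near = uniform_filter(laplace(near_img) ** 2, window)
    sharp_far = uniform_filter(laplace(far_img) ** 2, window)
    return np.where(sharp_near >= sharp_far, near_img, far_img)
```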

Achieving a large depth of field can help the program recover depth information, says Ivo Ihrke, a computational imaging scientist at the University of Siegen in Germany who was not involved with this research. Standard images don’t contain information about the distances to objects in the photo, but 3-D images do. So the more depth information that can be captured, the better.

The trilobite approach isn’t the only way to extend the range of distances a camera can bring into focus. Other cameras using a different method have achieved a similar depth of field, Ihrke says. For instance, a light-field camera made by the company Raytrix contains an array of tiny glass lenses of three different types that work in concert, with each type tailored to focus light from a particular distance. The trilobite-inspired camera also uses an array of lenses, but all of its lenses are identical, each one capable of doing all the depth-of-focus work on its own — which helps it achieve a slightly higher resolution than mixing lens types.

Regardless of how it’s done, all the recent advances in capturing depth with light-field cameras will improve imaging techniques that depend on that depth, Agrawal says. These techniques could someday help self-driving cars to track distances to other vehicles, for example, or Mars rovers to gauge distances to and sizes of landmarks in their vicinity.


