
This camera lens can focus up close and far away at the same time

Ben Franklin had nothing on trilobites.

Roughly 400 million years before the founding father invented bifocals, the now extinct trilobite Dalmanitina socialis already had a superior version (SN: 2/2/74). Not only could the sea critter see things both near and far, it could also see both distances in focus at the same time — an ability that eludes most eyes and cameras.

Now, a new type of camera sees the world the way this trilobite did. Inspired by D. socialis’ eyes, the camera can simultaneously focus on two points anywhere between three centimeters and nearly two kilometers away, researchers report April 19 in Nature Communications.

“In optics, there was a problem,” says Amit Agrawal, a physicist at the National Institute of Standards and Technology in Gaithersburg, Md. If you wanted to focus a single lens to two different points, you simply could not do it, he says.

If a camera could see like a trilobite, Agrawal figured, it could capture high-quality images with a greater depth of field. A large depth of field — the distance between the nearest and farthest points that a camera can bring into focus — is important for the relatively new technique of light-field photography, which uses many tiny lenses to produce 3-D photos.
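To see why depth of field is so limited for an ordinary single-focus lens, it helps to run the numbers. The sketch below uses the standard thin-lens hyperfocal approximation (a textbook formula, not anything from the paper); the focal length, f-number, and circle-of-confusion values are illustrative assumptions.

```python
# Illustrative sketch: the standard thin-lens depth-of-field calculation,
# showing how narrow the in-focus range of a conventional lens can be.

def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Return the (near, far) limits of acceptable focus, in mm.

    Uses the common hyperfocal-distance approximation; coc_mm is the
    circle of confusion (0.03 mm is a typical full-frame value)."""
    hyperfocal = focal_mm**2 / (f_number * coc_mm) + focal_mm
    near = (subject_mm * (hyperfocal - focal_mm)
            / (hyperfocal + subject_mm - 2 * focal_mm))
    if subject_mm >= hyperfocal:
        far = float("inf")  # everything beyond `near` is acceptably sharp
    else:
        far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far

# A 50 mm f/2 lens focused at 1 m keeps only roughly 978-1023 mm sharp --
# a slice a few centimeters deep, nothing like centimeters-to-kilometers.
near, far = depth_of_field(focal_mm=50, f_number=2.0, subject_mm=1000)
```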

To mimic the trilobite’s ability, the team constructed a metalens, a type of flat lens made up of millions of differently sized rectangular nanopillars arranged like a cityscape — if skyscrapers were one two-hundredth the width of a human hair. The nanopillars act as obstacles that bend light in different ways depending on their shape, size and arrangement. The researchers arranged the pillars so some light traveled through one part of the lens and some light through another, creating two different focal points.
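The general principle — routing light through different parts of one flat lens toward two focal points — can be sketched at the level of phase profiles. A flat lens that imposes the phase φ(r) = −(2π/λ)(√(r² + f²) − f) focuses light at distance f; spatially interleaving two such profiles yields a single element with two foci. The checkerboard interleave, aperture size, and focal distances below are illustrative assumptions, not the paper's actual nanopillar design.

```python
import numpy as np

# Sketch of a bifocal flat-lens phase profile: interleave two hyperbolic
# lens phases so one element focuses light at two distances at once.

def lens_phase(x, y, f, lam):
    """Ideal flat-lens phase (radians) focusing at distance f."""
    r2 = x**2 + y**2
    return -(2 * np.pi / lam) * (np.sqrt(r2 + f**2) - f)

lam = 0.5e-6                              # 500 nm light, in metres
n = 200
coords = np.linspace(-50e-6, 50e-6, n)    # 100-micron-wide aperture
x, y = np.meshgrid(coords, coords)

phi_near = lens_phase(x, y, f=1e-3, lam=lam)    # focus at 1 mm
phi_far = lens_phase(x, y, f=10e-3, lam=lam)    # focus at 10 mm

# Checkerboard interleave: alternating pixels carry the two profiles,
# so half the aperture serves each focal point.
mask = (np.indices((n, n)).sum(axis=0) % 2).astype(bool)
phi = np.where(mask, phi_near, phi_far)
```

In the real device, each nanopillar's shape and orientation sets the local phase instead of an idealized continuous profile, but the division of the aperture between two focal points is the shared idea.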

[Illustration] The trilobite-inspired metalens is a flat surface covered in rectangular ‘nanopillars’. Their shapes and orientations bend light in such a way that distant objects and nearby ones can be focused in a single plane, thus providing an image with high depth of field. S. Kelley/NIST

To use the device in a light-field camera, the team then built an array of identical metalenses that could capture thousands of tiny images. When combined, the result is an image that’s in focus close up and far away, but blurry in between. The blurry bits are then sharpened with a type of machine learning computer program.
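The researchers' actual reconstruction uses machine learning, but the core step of combining differently focused captures can be illustrated more simply: at each pixel, keep whichever source is locally sharper. The Laplacian-based sharpness measure and the tie-breaking rule below are assumptions for illustration, not the team's pipeline.

```python
import numpy as np

# Illustrative sketch: merge two aligned grayscale captures -- one focused
# near, one far -- by picking the locally sharper source at each pixel.

def laplacian(img):
    """Discrete Laplacian magnitude (a simple sharpness cue), same shape."""
    p = np.pad(img.astype(float), 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4 * p[1:-1, 1:-1])
    return np.abs(lap)

def merge_by_sharpness(near_focus, far_focus):
    """Per-pixel selection of the sharper of two aligned images."""
    pick_near = laplacian(near_focus) >= laplacian(far_focus)
    return np.where(pick_near, near_focus, far_focus)
```

A learned model can go further than this per-pixel rule — in particular, it can restore the mid-range regions that neither focal point renders sharply.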

Achieving a large depth of field can help the program recover depth information, says Ivo Ihrke, a computational imaging scientist at the University of Siegen in Germany who was not involved with this research. Standard images don’t contain information about the distances to objects in the photo, but 3-D images do. So the more depth information that can be captured, the better.

The trilobite approach isn’t the only way to boost the range of visual acuity. Other cameras using a different method have accomplished a similar depth of field, Ihrke says. For instance, a light-field camera made by the company Raytrix contains an array of tiny glass lenses of three different types that work in concert, with each type tailored to focus light from a particular distance. The trilobite way also uses an array of lenses, but all the lenses are the same, each one capable of doing all the depth-of-focus work on its own — which helps achieve a slightly higher resolution than using different types of lenses.

Regardless of how it’s done, all the recent advances in capturing depth with light-field cameras will improve imaging techniques that depend on that depth, Agrawal says. These techniques could someday help self-driving cars to track distances to other vehicles, for example, or Mars rovers to gauge distances to and sizes of landmarks in their vicinity.
