What Camera Lens Is Closest to the Human Eye?
This article started after I followed an online discussion about whether a 35mm or a 50mm lens on a full-frame camera gives the equivalent field of view to normal human vision. That particular give-and-take immediately delved into the optical physics of the eye as a camera and lens, an understandable comparison since the eye consists of a front element (the cornea), an aperture ring (the iris and pupil), a lens, and a sensor (the retina).
Despite all the impressive mathematics thrown back and forth regarding the optical physics of the eyeball, the discussion didn't quite seem to make sense logically, so I did a lot of reading of my own on the topic.
There won't be any direct benefit from this article that will let you run out and take better photographs, but you may find it interesting. You may also find it incredibly boring, so I'll give you my conclusion first, in the form of two quotes from Garry Winogrand:
A photograph is the illusion of a literal description of how the camera 'saw' a piece of time and space.
Photography is not about the thing photographed. It is about how that thing looks photographed.
Basically, in doing all this research about how the human eye is like a camera, what I really learned is how human vision is not like a photograph. In a way, it explained to me why I so often find a photograph much more beautiful and interesting than I found the actual scene itself.
The Eye as a Camera System
Superficially, it's pretty logical to compare the eye to a camera. We can measure the front-to-back length of the eye (about 25mm from the cornea to the retina) and the diameter of the pupil (2mm contracted, 7 to 8mm dilated) and calculate lens-like numbers from those measurements.
You'll find some different numbers quoted for the focal length of the eye, though. Some come from physical measurements of the anatomic structures of the eye, others from optometric calculations; some take into account that the lens of the eye, and the size of the eye itself, change with the contractions of various muscles.
To summarize, though, one commonly quoted focal length of the eye is 17mm (calculated from the optometric diopter value). The more commonly accepted value, however, is 22mm to 24mm (calculated from physical refraction in the eye). In certain situations, the focal length may actually be longer.
Since we know the approximate focal length and the diameter of the pupil, it's relatively easy to calculate the aperture (f-stop) of the eye. Given a 17mm focal length and an 8mm pupil, the eyeball should function as an f/2.1 lens. If we use the 24mm focal length and 8mm pupil, it should be f/3.0. There have actually been a number of studies done in astronomy to measure the f-stop of the human eye, and the measured number comes out to be f/3.2 to f/3.5 (Middleton, 1958).
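The arithmetic behind those f-numbers is simply focal length divided by the diameter of the entrance pupil. A minimal sketch (the function name is mine, not from the article):

```python
def f_stop(focal_length_mm: float, pupil_diameter_mm: float) -> float:
    """f-number = focal length / entrance pupil diameter."""
    return focal_length_mm / pupil_diameter_mm

# A fully dilated (8mm) pupil with the two quoted focal lengths:
print(round(f_stop(17, 8), 1))  # 2.1
print(round(f_stop(24, 8), 1))  # 3.0
```

Note that the measured values (f/3.2 to f/3.5) come out a bit slower than this simple geometry suggests, which is typical for real optical systems.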
At this point, both of you who have read this far have probably wondered: "If the focal length of the eye is 17 or 24mm, why is anybody arguing about whether 35mm or 50mm lenses have the same field of view as the human eye?"
The reason is that the measured focal length of the eye isn't what determines the angle of view of human vision. I'll go into this in more detail below, but the main point is that only part of the retina processes the main image we see. (The area of main vision is called the cone of visual attention; the rest of what we see is "peripheral vision".)
Studies have measured the cone of visual attention and found it to be about 55 degrees wide. On a 35mm full-frame camera, a 43mm lens provides an angle of view of 55 degrees, so that focal length provides exactly the same angle of view that we humans have. Damned if that isn't halfway between 35mm and 50mm. So the original argument is settled: the actual 'normal' lens on a 35mm SLR is neither 35mm nor 50mm, it's halfway in between.
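For the curious, the angle of view of a rectilinear lens follows from simple trigonometry: 2·atan(d / 2f), where d is the sensor dimension and f the focal length. A quick sketch using the full-frame diagonal of about 43.3mm (the thin-lens formula gives roughly 53 degrees for a 43mm lens, close to the 55-degree figure quoted above):

```python
import math

def angle_of_view_deg(focal_length_mm: float, sensor_dim_mm: float) -> float:
    """Diagonal angle of view for a rectilinear lens: 2 * atan(d / 2f)."""
    return 2 * math.degrees(math.atan(sensor_dim_mm / (2 * focal_length_mm)))

# A 43mm lens on a full-frame sensor (43.3mm diagonal):
print(round(angle_of_view_deg(43, 43.3)))  # roughly 53 degrees
```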
The Eye Is Not a Camera System
Having gotten the answer to the original discussion, I could have left things alone and walked away with yet another bit of fairly useless trivia filed away to amaze my online friends with. But NOOoooo. When I have a bunch of work that needs doing, I find I'll almost always choose to spend another couple of hours reading more articles about human vision.
You may have noticed that the above section left out some of the eye-to-camera analogies, because once you get past the simple measurements of aperture and lens, the rest of the comparisons don't fit so well.
Consider the eye's sensor, the retina. The retina is almost the same size (32mm in diameter) as the sensor on a full-frame camera (35mm in diameter). After that, though, almost everything is different.
The first difference between the retina and your camera's sensor is rather obvious: the retina is curved along the back surface of the eyeball, not flat like the silicon sensor in the camera. The curvature has an obvious advantage: the edges of the retina are about the same distance from the lens as the center. On a flat sensor, the edges are further away from the lens and the center closer. Advantage retina: it should have better 'corner sharpness'.
The human eye also has a lot more pixels than your camera, about 130 million (you 24-megapixel camera owners feeling a bit humble now?). However, only about 6 million of the eye's pixels are cones (which see color); the remaining 124 million see only black and white. But advantage retina again. Big time.
But if we look further, the differences become even more pronounced…
On a camera sensor, each pixel is laid out in a regular grid pattern. Every square millimeter of the sensor has exactly the same number and pattern of pixels. On the retina there's a small central area, about 6mm across (the macula), that contains the densest concentration of photoreceptors in the eye. The central portion of the macula (the fovea) is densely packed with only cone (color-sensing) cells. The rest of the macula around this central 'color only' area contains both rods and cones.
The macula contains about 150,000 'pixels' in each 1mm square (compare that to 24,000,000 pixels spread over a 36mm × 24mm sensor in a 5D Mark II or D3x) and provides our 'central vision' (the 55-degree cone of visual attention mentioned above). In any event, the central part of our visual field has far more resolving power than even the best camera.
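A rough back-of-the-envelope comparison using the figures quoted above makes the density gap concrete (the numbers are the article's approximations, not precise specifications):

```python
# Photoreceptor density of the macula vs. pixel density of a ~24MP full-frame sensor.
MACULA_PIXELS_PER_MM2 = 150_000   # density quoted above for the macula
SENSOR_PIXELS = 24_000_000        # ~24MP full-frame body
SENSOR_AREA_MM2 = 36 * 24         # full-frame sensor dimensions in mm

sensor_density = SENSOR_PIXELS / SENSOR_AREA_MM2
print(round(sensor_density))                              # ~27778 pixels per mm^2
print(round(MACULA_PIXELS_PER_MM2 / sensor_density, 1))   # macula is ~5.4x denser
```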
The rest of the retina has far fewer 'pixels', most of which sense only black and white. It provides what we usually consider 'peripheral vision', the things we see "in the corner of our eye". This part senses moving objects very well but doesn't provide enough resolution to read a book, for example.
The total field of view (the area in which we can see motion) of the human eye is about 160 degrees, but outside of the cone of visual attention we can't really recognize detail, only broad shapes and motion.
The advantages of the human eye compared to the camera get reduced a bit as we leave the retina and travel back toward the brain. The camera sends every pixel's data from the sensor to a computer chip for processing into an image. The eye has 130 million sensors in the retina, but the optic nerve that carries those sensors' signals to the brain has only 1.2 million fibers, so less than 10% of the retina's data is passed on to the brain at any given instant. (Partly this is because the chemical light sensors in the retina take a while to 'recharge' after being stimulated. Partly because the brain couldn't process that much information anyway.)
And of course the brain processes the signals a lot differently than a still camera does. Unlike the intermittent shutter clicks of a camera, the eye is sending the brain a constant video feed which is being processed into what we see. A subconscious part of the brain (the lateral geniculate nucleus, if you must know) compares the signals from both eyes, assembles the most important parts into 3-D images, and sends them on to the conscious part of the brain for image recognition and further processing.
The subconscious brain also sends signals to the eye, moving the eyeball slightly in a scanning pattern so that the sharp vision of the macula moves across an object of interest. Over a few split seconds the eye actually sends multiple images, and the brain processes them into a more complete and detailed image.
The subconscious brain also rejects a lot of the incoming bandwidth, sending only a small fraction of its data on to the conscious brain. You can control this to some extent: for example, right now your conscious brain is telling the lateral geniculate nucleus "send me data from the central vision only, focus on those typed words in the center of the field of vision, move from left to right so I can read them". Stop reading for a second and, without moving your eyes, try to see what's in your peripheral field of view. A second ago you didn't "see" that object to the right or left of the computer monitor because the peripheral vision wasn't getting passed on to the conscious brain.
If you concentrate, even without moving your eyes, you can at least tell the object is there. If you want to see it clearly, though, you'll have to send another brain signal to the eye, shifting the cone of visual attention over to that object. Notice also that you can't both read the text and see the peripheral objects; the brain can't process that much information.
The brain isn't done when the image has reached the conscious part (called the visual cortex). This area connects strongly with the memory portions of the brain, allowing you to 'recognize' objects in the image. We've all experienced that moment when we see something but don't recognize what it is for a second or two. After we've recognized it, we wonder why in the world it wasn't obvious immediately. It's because it took the brain a split second to access the memory files for image recognition. (If you haven't experienced this yet, just wait a few years. You will.)
In reality (and this is very obvious) human vision is video, not photography. Even when staring at a photograph, the brain is taking multiple 'snapshots' as it moves the center of focus over the picture, stacking and assembling them into the final image we perceive. Look at a photograph for a few minutes and you'll realize that subconsciously your eye has drifted over the picture, getting an overview of the image, focusing in on details here and there and, after a few seconds, realizing some things about it that weren't obvious at first glance.
So What's the Point?
Well, I have some observations, although they're far away from "which lens has the field of view most like human vision?". This information got me thinking about what makes me so fascinated by some photographs, and not so much by others. I don't know that any of these observations are true, but they're interesting thoughts (to me at least). All of them are based on one fact: when I really like a photograph, I spend a minute or two looking at it, letting my human vision scan it, grabbing the detail from it, or perhaps wondering about the detail that's not visible.
Photographs taken at a 'normal' angle of view (35mm to 50mm) seem to retain their appeal whatever their size. Even web-sized images shot at this focal length keep the essence of the shot. The shot below (taken at 35mm) has a lot more detail when seen in a large image, but the essence is obvious even when small. Perhaps the brain's processing is more comfortable recognizing an image it sees at its normal field of view. Perhaps it's because we photographers tend to subconsciously emphasize composition and subjects in a 'normal' angle-of-view photo.
The photo above demonstrates something else I've always wondered about: does our fascination and love for black-and-white photography occur because it's one of the few ways the dense cone (color only) receptors in our macula are forced to send a grayscale image to our brain?
Perhaps our brain likes looking at just tone and texture, without color data clogging up that narrow bandwidth between eyeball and brain.
Like 'normal-angle' shots, telephoto and macro shots often look great in small prints or web-sized JPGs. I have an 8 × 10 of an elephant's eye and a like-sized macro print of a spider on my office wall that look great even from across the room. (At least they look great to me, but you'll notice that they're hanging in my office. I've hung them in a couple of other places in the house and have been tactfully told that "they really don't go with the living room furniture", so maybe they don't look so great to everyone.)
There's no great composition or other factor that makes those photos attractive to me, but I find them fascinating anyway. Perhaps because even at a small size, my human vision can see details in the photograph that I never could see looking at an elephant or spider with the 'naked eye'.
On the other hand, when I get a good wide-angle or scenic shot, I hardly even bother to post a web-sized graphic or make a small print (and I'm not going to start for this article). I want it printed BIG. I think maybe so that my human vision can scan through the image, picking out the little details that are completely lost when it's downsized. And every time I do make a large print, even of a scene I've been to a dozen times, I notice things in the photograph I never saw when I was there in person.
Perhaps the 'video' my brain is making while scanning the print provides much more detail, and I find it more pleasing than what the composition of the photo would give when it's printed small (or than what I saw when I was actually at the scene).
And perhaps the subconscious 'scanning' that my vision makes across a photo accounts for why things like the 'rule of thirds' and selective focus pull my eye to certain parts of the photo. Maybe we photographers simply figured out how the brain processes images and took advantage of it through practical experience, without knowing all the science involved.
But I guess my only real conclusion is this: a photograph is NOT exactly what my eye and brain saw at the scene. When I get a good shot, it's something different and something better, like what Winogrand said in the two quotes above, and in this quote too:
You see something happening and you bang away at it. Either you get what you saw or you get something else — and whichever is better you print.
About the author: Roger Cicala is the founder of LensRentals. This article was originally published here.
Image credits: my eye up close by machinecodeblue, Nikh's eye through camera's eye from my eyes for your eyes :-) by slalit, Schematic of the Human Eye by entirelysubjective, My left eye retina by Richard Masoner / Cyclelicious, Chromatic aberration (sort of) by moppet65535
Source: https://petapixel.com/2012/11/17/the-camera-versus-the-human-eye/