INTRODUCTION
The Japanese photographer Daidō Moriyama once said that photographs are “fossils of light and time”. Indeed, every photograph begins as a two-dimensional pattern of light captured by the camera’s sensor (digital or film) at a specific moment. Most people assume that a camera records the same scene we see with our eyes, but that is only partially true. The camera actually sees more than we do. The images we “see” are heavily processed by our brains, so that human perception is really an interpretation of reality rather than a faithful record of it.
The lens of the eye focuses light reflected from the scene onto the retina. The retina acts as a two-dimensional array of sensors (photoreceptor neurons) that respond to different wavelengths of light. Electrical signals from these neurons are transmitted to the brain, where the two-dimensional array of signals is decoded into what we experience as brightness, color, shape, faces, and so on. The way our brains decode this data is the product of selection over long spans of evolutionary time. The resulting perceived image is built to aid our survival, not necessarily to represent reality accurately. Exact fidelity to the image focused on the retina (that is, to reality) is sacrificed to reduce the brain’s processing load, speed up interpretation, and shorten reaction time. We “see” the image we need to see, not an exact recreation of all there is to see.
Below I will explain how what we see and what a camera sees differ dramatically, and how those differences can be exploited to produce interesting photographs that feel both familiar and unfamiliar at the same time.