Technical Stuff

The Japanese photographer Daidō Moriyama once said that photographs are “fossils of light and time”. Indeed, every photograph begins as a two-dimensional pattern of light captured by the camera’s sensor (digital or film) at a specific moment. Most people believe that a camera records the same scene we see with our eyes, but that is only partially true. The camera actually sees more than we do.

In reality, the images we “see” are heavily processed by our brains. Our eyes merely act as a two-dimensional sensor array that relays the presence and intensity of light across the retina to our brains. There, the two-dimensional data array is decoded into what we know as colors, shapes, faces, and so on. The way our brains decode this data is the result of evolutionary selection over thousands of years, and the resulting image is shaped to benefit our survival. Exact faithfulness to the image actually projected onto the retina - that is, to reality - is sacrificed to reduce the brain's processing load and thereby speed up interpretation and reaction. We “see” the image we need to see, not an exact recreation of all there is to see.
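The sensor-array idea above can be made concrete with a toy sketch. This is only an illustration, not how any real camera or retina works: it models a sensor as a small grid of photosites, each sampling the light intensity of a hypothetical scene (the scene function and grid size are invented for demonstration).

```python
# Toy model: a sensor (camera or retina) as a 2D grid of photosites,
# each recording a raw light-intensity sample. All numbers here are
# hypothetical and chosen only for illustration.

def make_sensor(width, height, light_source):
    """Sample the scene's light intensity at each (x, y) photosite."""
    return [[light_source(x, y) for x in range(width)] for y in range(height)]

def scene(x, y):
    """A made-up scene: brightness falls off away from the center."""
    cx, cy = 2, 2
    return max(0, 255 - 40 * (abs(x - cx) + abs(y - cy)))

frame = make_sensor(5, 5, scene)
for row in frame:
    print(row)
```

The raw `frame` is just numbers, with no notion of shapes or faces; everything we recognize in a photograph is interpretation layered on top of such a grid, whether by the brain or by image-processing software.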

In the posts below I will explain how what we see and what a camera sees differ dramatically, and how those differences can be exploited to produce interesting photographs that are both familiar and unfamiliar at the same time.