Faces of the New Deal
This was an experiment in how best to represent photographs at small scale. The dataset is tens of thousands of images taken by government photographers from 1935 to 1945 for the Farm Security Administration / Office of War Information. Thanks to the efforts of Stace Maples and others on the Photogrammar team, we had locations for a majority of these photographs.
And we’d been able to map them in a variety of ways, showing their distribution as individual dots and “bucketed” into the outlines of 1940s counties (the choropleth visualization is normalized for county area, of course).
Each of these visualization strategies had its advantages. The dots were color-coded to show which photographer was responsible for the image, and the choropleth used varying intensities of green to show how densely photographs were distributed in each county.
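For readers curious about the normalization step, a minimal sketch of that kind of choropleth follows, using geopandas. The file names, the county_id column, and the choice of projection are all assumptions for illustration, not the project’s actual pipeline.

```python
import geopandas as gpd
import pandas as pd

# Hypothetical inputs: a shapefile of 1940s county boundaries and a CSV of
# photo records that already carry a county identifier.
counties = gpd.read_file("counties_1940.shp")   # assumed file name
photos = pd.read_csv("fsa_owi_photos.csv")      # assumed file name

# Count photographs per county.
counts = photos.groupby("county_id").size().rename("n_photos")
counties = counties.merge(counts, left_on="county_id",
                          right_index=True, how="left")
counties["n_photos"] = counties["n_photos"].fillna(0)

# Normalize by county area so large counties don't dominate the map.
# Reproject to an equal-area CRS before measuring area (square km).
counties = counties.to_crs("ESRI:102003")       # US Albers equal-area
counties["density"] = counties["n_photos"] / (counties.geometry.area / 1e6)

# Shade each county by photo density in varying intensities of green.
counties.plot(column="density", cmap="Greens", legend=True)
```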
But all of these mapping strategies were necessarily divorced from the visuality of the original photographs. None of them gave any insight into what the photographs actually represented.
The obvious answer, creating thumbnail images at each point on the map where a photo was taken, wouldn’t have worked very well. Here, for example, is Migrant Mother at 50 pixels in size, substantially larger than our colored dots:
I wasn’t convinced thumbnails of this size would really convey the meaning of the collection. So what would? It turns out that human beings like to look at faces. We know from eye-tracking studies that most observers instinctively look first at the face, and specifically the eyes, of a photographic subject.
I decided to see if I could create a specialized thumbnail dataset focused on human faces — essentially using faces as a proxy for the content of the photo.
Luckily, the state of face detection is very advanced: it’s trivial to run a very large dataset through a face-detection algorithm and receive high-quality results. The results take the form of bounding boxes drawn around each subject’s face, which a bit of Python can transform into crop rectangles. I decided to capture multiple faces per photo, since there were quite a few group shots. Each face would be linked back to its originating photo, so you can always refer to the underlying image for the fuller context.
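A minimal sketch of that step, assuming the face_recognition library (the project may well have used a different detector); the directory names and the manifest format are placeholders:

```python
import json
from pathlib import Path

import face_recognition
from PIL import Image

PHOTO_DIR = Path("photos")   # assumed location of the source images
FACE_DIR = Path("faces")     # where the face crops will be written
FACE_DIR.mkdir(exist_ok=True)

records = []  # each crop is tracked back to its originating photo

for photo_path in sorted(PHOTO_DIR.glob("*.jpg")):
    image = face_recognition.load_image_file(photo_path)
    # Each detection is a (top, right, bottom, left) box in pixel coordinates.
    for i, (top, right, bottom, left) in enumerate(
            face_recognition.face_locations(image)):
        crop = Image.fromarray(image).crop((left, top, right, bottom))
        crop_name = f"{photo_path.stem}_face{i}.jpg"
        crop.save(FACE_DIR / crop_name)
        records.append({"face": crop_name,
                        "source": photo_path.name,
                        "box": [top, right, bottom, left]})

# The manifest lets every face thumbnail point back to the photo it came from.
with open("faces.json", "w") as f:
    json.dump(records, f, indent=2)
```

In practice you would probably pad each box by some percentage before cropping, so that hair and chins aren’t cut off at the edge of the thumbnail.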
This necessarily cuts out a lot of the surrounding context — is this person at work? At home? At church? — but a surprising amount of information remains.
For example, filtering down to the work of Gordon Parks and zooming into Washington, DC, suggests that this photographer had an interest in, and access to, the lives of African-American residents of that city in ways that his White colleagues did not.