Image-Processing-1





As I shoot with an old camera, with low-resolution type on top of the images, I think about image-processing algorithms, especially noise-reduction, sharpening, resolution-enhancers, and those we might, in ambiguous shorthand, call “AI.” I’ve been sending low-quality images (from this earlier post) through some algorithms to see what might happen.

First: an original image: low resolution, with low-resolution text. I think that placing it on the web has decreased its resolution somehow. The image has been darkened, with lower contrast, at the top, so that the text is a bit more legible. This will be the basis, or the control, for the algorithms.

Second: I’ve changed the font of this piece (normally I use Arizona Regular) to Georgia, which is the font I used in the low-resolution image. This will provide a basis for comparison.


I’ve been working with screenshots, not even of the whole image (so much for scientificity), just to get a quick sense of what each algorithm does. I’m beginning to believe that the translation in imagery from an analogue “world” to a digital image is defined less by limits of resolution and pixelation (which would look square or cubic, with a uniform, predictable loss in quality) and more by the particular algorithm applied. How we perceive the digital world, carried by biases baked into images, is shifting.
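To make that distinction concrete, here is a minimal sketch (Python with Pillow; the filename and scale factor are placeholders, not anything from my actual workflow) of the difference between plain pixelation and an interpolating resample: nearest-neighbor keeps every source pixel as a uniform square block, while bicubic already starts “guessing” values between pixels.

```python
from PIL import Image

# Hypothetical low-resolution crop; any small image will do.
src = Image.open("low_res_crop.png")
factor = 8
size = (src.width * factor, src.height * factor)

# Nearest-neighbor: each source pixel becomes an identical square block,
# so the loss in quality is uniform and predictable -- "pixelation."
blocky = src.resize(size, resample=Image.Resampling.NEAREST)

# Bicubic: new pixels are interpolated from their neighbors, so edges
# soften and curve instead of staying square -- a first, mild "guess."
smoothed = src.resize(size, resample=Image.Resampling.BICUBIC)

blocky.save("upscaled_nearest.png")
smoothed.save("upscaled_bicubic.png")
```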

First: Topaz Gigapixel, using a variety of its basic models (here: “Standard” and “Text & Shapes”).

 



In the absence of data, the algorithm modernizes the font. It slurs the words too. In the first image, no two letters look the same. In the second image, the font is more standardized (as you’d expect from an algorithm expecting “text and shapes”). But the font is also shifted, with some distortion. There is little detail to adapt from the background of the image, which is a dog. The colors are more saturated, especially the red. And yet, especially in the second, the pixelated edges are both “smoothed” and rendered with higher contrast: more “sharp.” It’s as if the pixelation still exists, but has become slightly less square.


Next: Photoshop’s “Super Zoom” neural filter, which is a resolution enhancer.


To be honest, I have no words for this sort of resolution enhancer. The first image is sized up to 16x; the second is sized up to 6x. And we can see that the first did more to try to guess the detail of the dog’s fur. And the text is what I might call “biomorphic.” It is more legible, but it does not have the uniform consistency of a font. It also does not look much like Georgia. The second image is so crunchy, and renders the negative space in ‘o,’ ‘p,’ ‘u,’ ‘a,’ ‘e,’ ‘c,’ and ‘d’ in ways that seem triangulated, or geometrically crunched like foil. The dog remains an entire blur, unlike in the last image. As I raise the resolution of the images, the quality that changes is the blurriness of the text: it smooths over details.

If I were you, I would go back and compare all of these fonts to this one in the blog. 

Next, I’ve taken images with my phone. The first series is through Instagram; the second series is from my iPhone’s camera roll. Both of these, in my mind, and especially the iPhone images, represent standards in mobile imaging. What people see on Instagram is often what they expect from trends in photography (I am asserting this).

These are the images I’ve taken through Instagram’s camera. For both of these images, I’ve zoomed in as much as my camera would allow, ideally to the edge of the camera’s resolution. Notice the font is already processed, especially on the bags of coffee (F.TA KORO, F.R.W. GAS, etc.). There is little-to-no artifacting from what we might normally call “pixelation” (although an image is still made of pixels), but instead, the algorithm smooths objects according to their shapes. 
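As a rough analogy only (Instagram’s actual pipeline is not public), an edge-preserving filter such as OpenCV’s bilateral filter produces this kind of look: texture inside a region is flattened while the region’s outline survives, so shapes persist even as fine detail disappears. The filename and parameter values below are placeholders.

```python
import cv2

# Hypothetical crop of the zoomed Instagram frame.
img = cv2.imread("instagram_zoom_crop.jpg")

# Bilateral filtering averages nearby pixels only when their colors are
# similar, so it smooths within objects but keeps their edges -- a crude
# stand-in for "smoothing objects according to their shapes."
smoothed = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)

cv2.imwrite("instagram_zoom_crop_smoothed.jpg", smoothed)
```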



Notice the inconsistent artifacting of ‘06’ in the center bags. The algorithm, rather than retaining the data, blurs it, much as Photoshop’s resolution enhancement did. The bags all look wobbly as well. Notice the line on the beige box, next to the “YPIC” bottle, is nearly erased. In an effort to de-pixelate images while also allowing the user to zoom further, the Instagram algorithm has destroyed detail, yet it presents an image that, without clear pixelation, maintains shapes in some general sense.

Additionally, almost all of the lines are gone in the image of the banner. These are vertical lines; the ones that remain are horizontal. Both are, however, rendered diagonally, given the angle of the camera. Compare to the iPhone image.


And finally, images from my iPhone’s camera roll.



Strangely, my iPhone would not allow me to zoom in as much as Instagram did, so the third image is a crop of the second. Notice there’s more pixelated artifacting; there is lower contrast in the objects in the bags. Yet the text is not wobbly except where it is visible that the bag itself is slanted (the Instagram algorithm exaggerates this). The type is uniformly rendered (notice the YPIC and PIC bottles: where the Instagram algorithm thins and widens the line along its length, the line remains consistent, although pixelated, in the iPhone image).

This is not to say that the iPhone image has NO artifacting: look at the rendering of the first image in the sequence. There is clear sharpening around the type in the “PLAY” sign; there is added contrast to enhance the textural shadows, especially in the wall. Where Instagram destroyed these details, the iPhone raised their clarity, or microcontrast.
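What the iPhone output suggests here is something like an unsharp mask: raising local contrast (“microcontrast”) around edges such as the lettering on the sign, rather than inventing or erasing detail. A minimal sketch, assuming Pillow and placeholder values (this is not Apple’s actual pipeline):

```python
from PIL import Image, ImageFilter

# Hypothetical photo of the "PLAY" sign.
photo = Image.open("iphone_play_sign.jpg")

# Unsharp mask: blur the image, take the difference from the original,
# and add that difference back, so edges and textures gain contrast.
# These parameter values are illustrative only.
sharpened = photo.filter(
    ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3)
)

sharpened.save("iphone_play_sign_sharpened.jpg")
```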


I have provided no “unprocessed” “DSLR” photos for a single reason: as the world becomes more processed, we will often have no reference point for an image that “precedes” an algorithm. All digital imagery requires some translation, some processing, and some interpolation of data. These new(ish), mobile algorithms are just (“JUST!!!” Think of the scale of data it took to train these algorithms: JUST is an understatement) one further step in processing the world, attempting to escape a “pixelation” that declares the opposite of detail. It’s important, I think, to begin to be aware of these ways of processing data, especially with algorithms that prioritize blobby shapes and are allergic to pixelation.

It makes me wonder, finally, whether these ways of interpreting data for us, and rendering some things more or less intelligible (at the expense of style), will raise questions of “photographicity,” “reality,” and the other sorts of questions that pop up around image-based representation. Who knows.