03 Image Processing/

How do computers parse visual data to make our photos #flawless?
Take a selfie to explore.

Draw Your Filters

Choose a filter from the list below and draw on your image to apply it.

ABOUT THIS EXPERIMENT

Digital cameras and software have provided us with the power to capture our world and recolor it with the click of a button. We snap and filter photos every day, sharing them across social media, texts, and email in seconds. But the science behind this digital imaging boom was not initially designed for amateur photographers. This National Science and Technology Medals Foundation interactive encourages you to notice the rapid calculations behind some of your favorite photo filters. By learning how computers read and modify pixel data, you will find broader real-world applications in a wide range of fields including astronomy, medicine, and law enforcement.

NSTMF Laureates

The National Science and Technology Medals Foundation celebrates the amazing individuals who have won the highest science, technology, engineering, and mathematics award in the United States.

Steven Sasson

Invented the digital camera, revolutionizing how images are captured, stored, and shared.

Raymond Vahan Damadian

Developed the first commercial MRI machine, capturing scans of cancer patients.

James E. Gunn

Designed the camera for the Hubble Space Telescope and launched the Sloan Digital Sky Survey.

Charles Geschke

Together with John Warnock, founded Adobe Systems, creating the first desktop publishing system.

Learn more about the pioneering scientists and thinkers behind this experiment at nationalmedals.org

Here are a few of the filters to check out:

The computer reads the grayscale value for each pixel and then inverts it. Black becomes white and vice versa.
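
In code, inversion is a single subtraction per pixel. Below is a minimal sketch (not the experiment's actual source) that assumes the photo has already been converted to a flat array of 0-255 grayscale values:

```ts
// Minimal sketch: invert a grayscale image stored as a flat array of
// 0-255 values, one entry per pixel. Black (0) becomes white (255).
function invert(gray: Uint8ClampedArray): Uint8ClampedArray {
  const out = new Uint8ClampedArray(gray.length);
  for (let i = 0; i < gray.length; i++) {
    out[i] = 255 - gray[i]; // flip each value to the other end of the range
  }
  return out;
}
```
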
Many radiologists prefer to analyze inverted grayscale X-rays. Black bones can make it easier to detect abnormalities.
In the 1970s, advancements in digital image processing led to the development of CT and MRI scans. Scientists have used MRIs to study the brain activity of musicians playing jazz and actors getting into character.
The computer reads the brightness of each pixel and connects points with significant contrast into a set of edges.
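
One simple way to sketch this in code is to compare each pixel with its right-hand and lower neighbors and keep the spots where the brightness jumps. The function name and threshold below are illustrative assumptions, not the experiment's own edge detector (real detectors often use more elaborate operators such as Sobel or Canny):

```ts
// Minimal sketch: mark a pixel as an edge when its brightness differs
// sharply from its right or bottom neighbor. `gray` is a flat array of
// 0-255 brightness values; `threshold` decides what counts as
// "significant" contrast.
function detectEdges(
  gray: Uint8ClampedArray,
  width: number,
  height: number,
  threshold = 30
): Uint8ClampedArray {
  const edges = new Uint8ClampedArray(gray.length); // 0 = background, 255 = edge
  for (let y = 0; y < height - 1; y++) {
    for (let x = 0; x < width - 1; x++) {
      const i = y * width + x;
      const dx = Math.abs(gray[i] - gray[i + 1]);     // horizontal contrast
      const dy = Math.abs(gray[i] - gray[i + width]); // vertical contrast
      if (dx > threshold || dy > threshold) edges[i] = 255;
    }
  }
  return edges;
}
```
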
Edge detection helps fight crime! Security and law enforcement rely on this technique to enhance surveillance footage, recognize faces, and even decipher fingerprints.
The Warren Commission employed edge detection to scrutinize images of the assassination of JFK. With some controversy, they concluded one gunman acted alone.
The computer reads the grayscale value for each pixel and then shrinks or expands the range to make the image low-contrast or high-contrast.
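
A rough sketch of the idea: push every grayscale value away from the middle of the range for higher contrast, or pull it toward the middle for lower contrast. The function and its `amount` parameter are illustrative, not the experiment's actual code:

```ts
// Minimal sketch: amount > 1 expands the grayscale range (more contrast),
// amount < 1 shrinks it (less contrast). Uint8ClampedArray automatically
// clamps the results back into 0-255.
function adjustContrast(gray: Uint8ClampedArray, amount: number): Uint8ClampedArray {
  const out = new Uint8ClampedArray(gray.length);
  for (let i = 0; i < gray.length; i++) {
    out[i] = (gray[i] - 128) * amount + 128; // scale each value around the midpoint
  }
  return out;
}
```
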
Since the 1960s, NASA has used this technique to turn fuzzy telescope images into high-contrast photos. Some of these photos are scrutinized in the search for other habitable worlds.
Exactly how many shades of gray? While computers can read grayscale data from 0 (black) to 255 (white), humans can see only about twenty-five levels of gray.
Beauty tip: low-contrast settings reduce lines and blemishes; high contrast is dramatic!
The computer identifies the red, green, and blue value for each pixel and separates them into RGB channels that can be adjusted independently.
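
For example, canvas-style pixel data interleaves the channels as red, green, blue, alpha for every pixel. A sketch like the one below (the function name is hypothetical, not the experiment's source) pulls the channels apart so each can be adjusted on its own:

```ts
// Minimal sketch: split interleaved RGBA data (r, g, b, a, r, g, b, a, ...)
// into three separate channel arrays.
function splitChannels(rgba: Uint8ClampedArray) {
  const count = rgba.length / 4;
  const r = new Uint8ClampedArray(count);
  const g = new Uint8ClampedArray(count);
  const b = new Uint8ClampedArray(count);
  for (let i = 0; i < count; i++) {
    r[i] = rgba[i * 4];     // red
    g[i] = rgba[i * 4 + 1]; // green
    b[i] = rgba[i * 4 + 2]; // blue
  }
  return { r, g, b };
}
```
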
Satellites orbiting Earth are capable of sensing ultraviolet and infrared light. To make this information visible to the human eye, scientists assign RGB values to the gathered data.
Since the 1970s, Landsat satellite RGB data has been used to track environmental threats including glacier melt, droughts, oil spills and wildfires.
The computer identifies the red, green, and blue values for each pixel and then adjusts the value of each RGB channel.
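
A minimal sketch of independent channel adjustment, assuming the same interleaved RGBA layout as above; the function name and gain parameters are illustrative, not the experiment's code:

```ts
// Minimal sketch: scale each RGB channel by its own gain.
// Gains above 1 strengthen a channel, below 1 weaken it; alpha is untouched.
function adjustChannels(
  rgba: Uint8ClampedArray,
  rGain: number,
  gGain: number,
  bGain: number
): Uint8ClampedArray {
  const out = new Uint8ClampedArray(rgba.length);
  for (let i = 0; i < rgba.length; i += 4) {
    out[i] = rgba[i] * rGain;         // red
    out[i + 1] = rgba[i + 1] * gGain; // green
    out[i + 2] = rgba[i + 2] * bGain; // blue
    out[i + 3] = rgba[i + 3];         // alpha unchanged
  }
  return out;
}
```
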
Filmmakers tweak RGB channels to change moods and dramatize compositions. Many horror films have a blue cast; post-apocalyptic movies are often desaturated.
Sick filter! A balance of red and blue can add a healthy glow to skin tones. But be careful with the green channel -- too fluorescent or desaturated, and you might look unwell.
Ghostly glow. In Vertigo, Alfred Hitchcock applied an eerie green filter each time Kim Novak’s haunting character appeared onscreen.
The computer reads the brightness of each pixel to determine light and dark areas of the image. Light areas are filled with small, dispersed dots. Dark areas are filled with larger, overlapping dots.
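
One way to sketch this: split the image into small square cells, average the brightness inside each cell, and give darker cells larger dots. The cell size and dot-radius formula below are assumptions for illustration, not the experiment's own halftone code; actually drawing the dots is left to the caller:

```ts
// Minimal sketch: one halftone dot per cell, sized by the cell's darkness.
// `gray` is a flat array of 0-255 brightness values.
function halftoneDots(
  gray: Uint8ClampedArray,
  width: number,
  height: number,
  cell = 8
): { x: number; y: number; radius: number }[] {
  const dots: { x: number; y: number; radius: number }[] = [];
  for (let cy = 0; cy < height; cy += cell) {
    for (let cx = 0; cx < width; cx += cell) {
      // Average the brightness of every pixel inside this cell.
      let sum = 0;
      let n = 0;
      for (let y = cy; y < Math.min(cy + cell, height); y++) {
        for (let x = cx; x < Math.min(cx + cell, width); x++) {
          sum += gray[y * width + x];
          n++;
        }
      }
      const darkness = 1 - sum / n / 255;            // 0 = white cell, 1 = black cell
      const radius = (darkness * cell) / Math.SQRT2; // darker cell, bigger dot
      dots.push({ x: cx + cell / 2, y: cy + cell / 2, radius });
    }
  }
  return dots;
}
```
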
Halftone has been used as a mass reprographic technique for photographic images for more than one hundred years. The process was used mainly for newspapers and commercial printing, not for fine art.
But in the 1960s, Roy Lichtenstein enlarged and exaggerated mechanical halftone dots to make fine art comic-strip paintings. Recent photo-sharing apps have turned his halftone Pop Art technique into digital filters!
The boom in desktop publishing in the 1980s also relied on the halftone process, which enabled personal computers and printers to communicate and reproduce images.
The computer reads the RGB values for each pixel and then cuts down the total number of colors. Clusters with similar values are mapped to the smaller selection of colors.
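
The simplest way to sketch the idea is uniform quantization: snap each channel to a handful of evenly spaced levels so that similar colors collapse onto the same reduced color. The experiment may group colors differently (for example, by building a palette from clusters of similar pixels); this is only an illustration:

```ts
// Minimal sketch: with levels = 4 per channel, the palette shrinks from
// roughly 16.7 million possible colors to 4 * 4 * 4 = 64.
function quantize(rgba: Uint8ClampedArray, levels = 4): Uint8ClampedArray {
  const out = new Uint8ClampedArray(rgba.length);
  const step = 255 / (levels - 1); // spacing between the allowed values
  for (let i = 0; i < rgba.length; i += 4) {
    out[i] = Math.round(rgba[i] / step) * step;         // red
    out[i + 1] = Math.round(rgba[i + 1] / step) * step; // green
    out[i + 2] = Math.round(rgba[i + 2] / step) * step; // blue
    out[i + 3] = rgba[i + 3];                           // alpha unchanged
  }
  return out;
}
```
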
Color quantization is an essential process for reducing file sizes. But standard 24-bit color can produce plenty of variations -- over sixteen million shades! Humans and other primates can detect about ten million colors.
In the early days of PCs, color displays were sacrificed for higher resolution. There wasn’t enough memory to support both functions.
Color quantization mimics the techniques of analog printing and of fine art! Pop artist Andy Warhol applied reduced, non-representational color to photographic silkscreens to make iconic portraits of Marilyn Monroe and Liz Taylor.