November 3, 2019
Beyond Calculus
“Nature Always Wears The Colors Of The Spirit” — Ralph Waldo Emerson
Light, color & eyes — invisible energy waves, the core of beauty, & the portal to souls. Few topics have captivated the greatest minds like our collaborative search to more deeply understand how & what we see. In a line of polymaths alone, we have Da Vinci, Newton, Young, & Franklin, all of whom at one time or another pored over the mysteries of sight.
Our curiosity about the sense we depend on most, it seems, will never be satisfied. While we’ve made monumental leaps in understanding sight, it’s clear from macro-trends in biotech, computer vision, & spatial computing that we’ve barely scratched the surface of augmenting it. A new era is on the horizon; in order to keep the principles ahead of the tech, it’s prudent to reach back to our roots. Like all topics deconstructed to their essentials, math weaves & flows through the tale of how we see. We’re going to take a journey &, from a numbers viewpoint, analyze how we’ve evolved from a limited understanding of our own anatomy to digitally replicating color in a few short decades.
Our search for a deeper understanding of the most important of our senses, vision, likely spans the entire history of humanity — after all, it’s literally how we survived. It’s, therefore, a short philosophical leap to question what, why, & how we observe. To keep this journey within a digestible scope, we’ll quickly visit & build on the principles discovered by three specific giants: Young, Helmholtz, & Svaetichin.
Our first stop is none other than the last man who knew everything, British polymath-prodigy Thomas Young. A physician interested in the anatomy of the eye as well as the physics of light, Young’s greatest contribution stems from his publication On the Theory of Light and Colours. This ground-breaking dissertation contained two monumental theories. First, he presented the wave theory of light, using diffraction to measure the wavelengths of different colors of light (preceding his double-slit experiment). Next, he put forth the theory of three-color vision to explain how the eye could detect colors, a theory we now know to be correct given the three types of cones located in the retina.
Fast-forward to the 1860s, when one Hermann von Helmholtz picked up where Young left off. While Young proposed that as few as three types of photoreceptors could suffice, it was Helmholtz who gets the credit for developing the idea into a working theory. Helmholtz proposed that the photoreceptors respond to light of varying wavelengths: short, medium & long. While scientists of the time disagreed on which three colors these wavelengths represented, Helmholtz was able to demonstrate that all the colors we see could be created by a combination of three basic colors.
The basic idea was that if all three receptors were stimulated simultaneously & at equal intensity, the eye would perceive the color white; essentially the opposite of running white light through a prism.
The next quick stop is Gunnar Svaetichin. A Swedish-Finnish-Venezuelan physiologist, he used electroretinograms on the outer layers of fish retinas to show that they display particular sensitivity to three different groups of wavelengths, in the areas of blue, green and red. This provided the first biological demonstration in support of the Young-Helmholtz trichromatic theory.
While textbooks can be (& are) written on the entire anatomical structure of the eye, we’re going to focus on the layer of cells that we now know are mainly responsible for the physical interpretation of color & light. Housed in the retina, the innermost, light-sensitive layer of tissue in the eye, live millions & millions of rods & cones. In short, the former are in charge of light & peripheral vision, while the latter are in charge of color detection.
Rods are the more numerous of the two photoreceptors, with some 120 million cells per retina. They’re hyper-sensitive & incredibly efficient photoreceptors; however, they are not sensitive to color. Instead, they are responsible for our dark-adapted, or scotopic, vision. They also dominate peripheral vision, since it is based on light sensitivity, & it follows that one can rely on these rods & their peripheral vision to more accurately detect motion.
Cones, on the other hand, are less sensitive to light than rods, but are hyper-sensitive to colors. With an estimated 6 million cones per retina, they’re responsible for all high-resolution vision. As theorized & eventually proven by the heroes above — there are three different types of color-sensitive cones.
Each of the three types of cones contains a different photosensitive pigment, & each pigment is especially sensitive to a certain wavelength of light. The three types of cones are L, M, & S, whose pigments respond best to light of long (peaking around 560 nm), medium (530 nm), & short (420 nm) wavelengths, respectively.
Current understanding is that the 6 to 7 million cones can be divided into “red” cones (64%), “green” cones (32%), & “blue” cones (2%) based on measured response curves. Green & red cones are concentrated in the fovea centralis; the “blue” cones sit outside the fovea centralis, yet they have the highest sensitivity of the three.
With a brief look into the history of its principles & its modern understanding, we’re now armed with enough knowledge to run through the math of it all: the transformation from biological trichromacy to digital trichromacy. We leaped over decades of medical research & technical application. For example, how did we evolve from Maxwell’s first color photograph in 1861 to the richly-detailed LCDs of today? From TVs to computers to phones & everything in-between, we’ve managed to duplicate our biological process with a digital standard. How did this happen? And more importantly, how can we check it?
Everyone is familiar with a pixel, a picture element. Zoom into your screen far enough & you’ll see multiples of these individual pixels, each built by driving three small, very close, but still separate RGB light sources. At a common viewing distance, the separate sources are indistinguishable, which tricks the eye into seeing a single solid color.
Around the late 1990s, the dust settled on competing ways of structuring these pixels, & one format became the agreed-upon standard in use by virtually every TV, computer & phone display. Known as True Color (24-Bit) color depth, it is the digital standard across any & all displays; by extension, it’s the color system that surfaces in any GUI or design software. While we’ve explored the history from a biological viewpoint, it’s now time to do what we do best: mathematically double-check exactly why it became the gold standard. We’ll start with the highest-level question when considering the limits of human vision, specifically: how many colors can the average person see?
In order to create a display that covers all possible mixtures of colors, we first need to establish the limits of what humans can perceive. One study, done by psychologist Frank Geldard in 1972, claims that:
Our difference threshold for colors is so low that we can discriminate some 7 million different color variations
A later study published in 1996 by Kurt Kleiner claims that we distinguish around 2.3 million colors. That leaves us with a rough range stretching from lows of 2.3 million to highs of 7 million — all dependent on the individual. With this range, to guarantee a universal color system, it’s a given that we’ll need a way to derive at least 2.3 million colors.
For an additional material fact that helps us double-check our minimum, we can turn to the light-sensitivity of cones. According to research, there are roughly ~150 differentiable shades (from darkest to lightest) that the average eye can distinguish per RGB channel. Double-checking our math against the previous range, we confirm that 150³ (≈3.4 million) does indeed land inside our 2.3M to 7M color estimate.
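As a quick sanity check, here’s a minimal Python sketch that takes the ~150-shades-per-channel figure above as given and confirms the arithmetic:

```python
# Sanity check: ~150 distinguishable shades per RGB channel
# (the figure cited above) yields this many combinations.
shades_per_channel = 150
estimated_colors = shades_per_channel ** 3

# Published range of human color discrimination (2.3M to 7M).
low, high = 2_300_000, 7_000_000

print(f"{estimated_colors:,}")            # 3,375,000
print(low <= estimated_colors <= high)    # True
```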
Let’s now turn our attention to the digital world by starting with the smallest & most basic unit of all information: a bit. Due to the basic framework of circuit building, which marries Boolean algebra with the binary properties of electric switches, a bit underlies all digital information. A single bit, like a Boolean type, can only ever take one of two possible states: 0 & 1.
If each of the three RGB channels needs to represent up to 150 gradients, then we need a way to guarantee at least that many values, as well as a way to efficiently store them. In short, we’re looking for the smallest value n where 2ⁿ ≥ 150. Solving for n yields the minimum number of bits needed to store all shades perceivable in a single RGB channel.
Solving for n, we get that 2⁸ = 256 (note that 2⁷ yields 128, which would not have been enough for our minimum of 150). Eight (8) bits are required per color channel in an RGB model to mathematically guarantee a display of all colors perceptible by the human eye. Perhaps a happy coincidence, or maybe a cosmic happening, but my programming friends likely recognize the significance of that fact. Why? Because 8 bits is particularly critical in the world of computer science, so important in fact that it has its own name: a byte.
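For those who like to see the arithmetic spelled out, here’s a minimal sketch of that bit-count calculation using nothing beyond the standard library:

```python
import math

# Smallest n such that 2^n covers at least 150 distinguishable
# shades in a single channel.
shades_needed = 150
n = math.ceil(math.log2(shades_needed))

print(n)       # 8
print(2 ** 7)  # 128 -> falls short of our 150 minimum
print(2 ** n)  # 256 -> comfortably covers it
```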
Since a single byte (8 bits) can represent one of the three RGB channels, it follows that three bytes (24 bits) can fully describe the RGB model. The fact that a byte stores a single channel also maps neatly onto color notations that are easy enough for the modern layman to use. If you’ve ever opened up any type of design software or app, you’ve no doubt come across both the RGB & hexadecimal systems.
Hopefully, the RGB system is straightforward by now: users can manually input an integer from 0 to 255 (zero-based, so 256 total possibilities) in each of the three channels. We now know that this 256 comes from the 2⁸ possible values per channel.
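It’s also worth tallying what those three channels buy us in total. A quick check confirms the full 24-bit palette comfortably exceeds even the 7-million high estimate of human color discrimination:

```python
# Total palette of 24-bit True Color: 256 values per channel,
# three channels, so 256^3 combinations.
values_per_channel = 2 ** 8           # 256
total_colors = values_per_channel ** 3

print(f"{total_colors:,}")            # 16,777,216
print(total_colors > 7_000_000)       # True -> beyond the highest estimate
```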
On the other hand, you’ve probably come across a hexadecimal string; they usually look something like the following: #012f5b. The core rule of the hexadecimal numbering system is that every character is strictly one of sixteen values: 0–9 & A–F. Additionally, the first two, middle two, & last two characters are paired together, with each pair corresponding to one of the three RGB channels. Mathematically, this again checks out, since each pair is a two-character combination where each character can take on one of 16 states (16 × 16 = 256 = 2⁸):
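To make the pairing concrete, here’s a small illustrative sketch (the helper names are made up for this example) that converts the sample string above between the two notations:

```python
def hex_to_rgb(hex_string: str) -> tuple[int, ...]:
    """Split a #RRGGBB string into its three 0-255 channel values."""
    s = hex_string.lstrip("#")
    return tuple(int(s[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_hex(r: int, g: int, b: int) -> str:
    """Pack three 0-255 channel values back into a #RRGGBB string."""
    return f"#{r:02x}{g:02x}{b:02x}"

print(hex_to_rgb("#012f5b"))   # (1, 47, 91)
print(rgb_to_hex(1, 47, 91))   # #012f5b
```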
And there we go: by breaking down the numbers behind True Color (24-Bit), we uncovered the cosmic coincidence that an RGB channel stored in a single byte covers the full set of shades perceptible by the average eye. Any other allotment, say seven or nine bits per channel, & we would’ve strayed from that fit. From a mathematical viewpoint, it makes sense that True Color (24-Bit) emerged as the universal digital standard.
However, as mentioned in the opening, this is simply the start of a new era. As we pull away from replicating & push towards augmenting, how will our understanding evolve? What patterns & principles will emerge as we continue pushing the boundaries of our greatest sense?