
Color measurement
The Perception of Color
Color sensations are human sensory perceptions, and color measurement technology must express them in descriptive and comprehensible quantities. Part 1 of DIN 5033 defines color as follows:
Color is the visual sensation, associated with a part of the field of view that appears to the eye to be without structure, through which this part can be distinguished from another unstructured neighboring area when observed with a single, unmoving eye.
This rather complicated but unambiguous definition of color allows the visual sensation of “color” to be distinguished from all the other impressions received when seeing. The insertion of “unstructured” into this definition also separates the texture of observed objects from the sensation of color. Thus the texture of a textile, for instance, is not included in the color.
The definition also calls for observation with a “single”, “unmoving” eye, thereby excluding other factors such as spatial sensation, the perception of the location of objects, their direction and even their relative movement from the perception of color. Since single-eyed observation of an unmoving object with an unmoving eye does not allow for the perception of gloss, the evaluation of gloss is also excluded from the perception of color.
In general, unlike mass, volume or temperature, color is not merely a physical property of an object. It is rather a sensation triggered by radiation of sufficient intensity. This can be the radiation of a self-emitting light source, or it can be radiation reflected from a surface. This radiation enters the eye, where receptive cells convert it into nervous stimulation, which is in turn transmitted to the appropriate part of the brain and experienced as color. The sensation of color therefore depends not only on physical laws, but also on the physiological processing of the radiation in the sense organs. Visual conditions, luminance (brightness) and the state of the eye’s adaptation are among the contributory factors.
Color manifests itself in the form of light from self-emitting light sources, as surface colors (of objects that are not self-emitting light sources) and in the intermediate form of luminescent colors of dyestuffs such as optical brightening agents and day-glow paints, which absorb photons from a short-wavelength part of the spectrum and emit the energy in a part of the spectrum with longer wavelengths.
Physiological background
From the fact that spectral decomposition of white light produces the perception of different colors, it can be deduced that color perception is closely connected to the light wavelength (Fig. 1).
As an example, light with a 650 nm wavelength is perceived as “red” and light with a 550 nm wavelength is perceived as “green”. However, there are colors, such as purple, that cannot be directly related to a certain wavelength and therefore do not occur in the spectral decomposition of white light.

The perception of color is formed in our brain by the superposition of the neural signals from three different kinds of photoreceptors which are distributed over the human eye’s retina. These photoreceptors are called cones and are responsible for photopic vision under daylight conditions. Scotopic (night) vision is caused by photoreceptors called rods, which are much more sensitive than cones. Since there is only one kind of rod, night vision is colorless.
The three different kinds of cones differ in their spectral sensitivity to electromagnetic radiation. This is shown in Fig. 1 for the average normal-sighted human eye. If monochromatic radiation irradiates the eye, as is the case with spectral decomposition of white light, the wavelength determines which types of cones are excited. For instance, monochromatic light at 680 nm only excites one type of cones, whereas the two other types are insensitive at this wavelength. The brain interprets signals from only this type of cones as the color “red”. No signal is sent from the other cones. These cones are therefore called “red cones”. Similarly, the two other types are referred to as “blue cones” and “green cones”.
Source: http://lsvl.la.asu.edu/askabiologist/research/seecolor/rodsandcones.html
Color addition
As discussed above, monochromatic light of a certain wavelength might predominantly excite a single type of cones, thus producing the color perception of “blue”, “green” or “red”. Depending on the actual wavelength, monochromatic light might also excite two types of cones simultaneously, thus producing the perception of another color. For instance, red and green cones are both excited by monochromatic light at 580 nm and a signal from these two types of cones – with the simultaneous absence of a signal from blue cones – leads to the perception of the color “yellow”.
However, our visual system cannot differentiate between monochromatic and broadband radiation as long as the excitation of the three types of cones remains the same. Thus, the perception of “yellow” can also be produced by a broadband spectrum between 550 nm and 700 nm as long as the green and red cones are similarly stimulated and the blue cones are not stimulated at all. In the same way, the perception of “cyan” is produced by simultaneous stimulation of blue and green cones, whereas the perception of “magenta” (or purple) is caused by simultaneous stimulation of blue and red cones (Fig. 1). Simultaneous stimulation of all three types of cones results in the perception of “white”.
This fact has an important consequence: Consider a light source consisting of three single sources with the colors red, green and blue. If the intensities of the three single sources can be varied individually, all possible colors can be produced. This is the main idea behind color cathode ray tubes commonly used in TVs and computer monitors – every pixel (a point on the monitor) consists of three smaller individual spots in the colors red, green and blue (Fig. 3).
As these individual spots are so close together, the human eye cannot resolve them. Instead, they produce the perception of a certain color by superposition of their respective intensities. For instance, the pixel appears yellow when only the red and the green spot are emitting light, and the pixel appears white when all three spots are emitting light. The entirety of colors produced through color addition forms the RGB color space, since they are based on the three (additive) primary colors red, green and blue.
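As a minimal illustration of color addition (not taken from the article), the Python sketch below mixes the three primaries channel by channel; the (R, G, B) triples are purely illustrative intensity values between 0 and 1.

    # Minimal sketch of additive color mixing with illustrative (R, G, B)
    # intensity triples; mixing simply adds the contributions per channel.
    def add_colors(*colors):
        """Additively mix colors given as (R, G, B) intensity triples in 0..1."""
        mixed = [0.0, 0.0, 0.0]
        for color in colors:
            for i in range(3):
                mixed[i] = min(1.0, mixed[i] + color[i])  # clip at full intensity
        return tuple(mixed)

    RED, GREEN, BLUE = (1, 0, 0), (0, 1, 0), (0, 0, 1)
    print(add_colors(RED, GREEN))        # (1.0, 1.0, 0.0) -> perceived as yellow
    print(add_colors(RED, GREEN, BLUE))  # (1.0, 1.0, 1.0) -> perceived as white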

Fig. 2: The effect of color addition demonstrated with white light from an overhead projector before (top) and after (bottom) passing through a magenta filter.
The respective spectral decomposition is shown on the left whereas the circle on the right shows the resulting color impressions.
It can be clearly seen that the filter strongly absorbs light from the green part of the visual spectrum, whereas blue and red light passes the filter with low attenuation.
The impression of magenta is produced by simultaneous presence of light from the blue and red regions of the visible spectrum, whereas light from the green region is missing.

Fig. 3: An RGB monitor consists of tiny red, green and blue spots.
Variation of their brightness produces the impression of different colors by color addition.
Source (valid as of 2002): http://www.cs.princeton.edu/courses/archive/fall99/cs426/lectures/raster/img013.gif
Color subtraction
Whereas color addition describes the perception of different colors caused by a superposition of red, green and blue light sources, the concept of color subtraction is based on the absorption of white light by filters or pigments.
As an example, a yellow filter absorbs wavelengths below about 500 nm, corresponding to blue light, but transmits longer wavelengths corresponding to green and red light. Thus, when irradiated with white light, the filter only transmits wavelengths which stimulate the green and red cones, whereas the blue cones are not stimulated. As discussed above, this results in the perception of the color “yellow”.
Similarly, a surface (better: pigments on a surface) absorbing wavelengths below 500 nm and reflecting wavelengths above appears yellow when irradiated with white light. Thus, when irradiated with white light, filters (or pigments) absorbing blue light appear yellow, filters (or pigments) absorbing green light appear magenta, and filters (or pigments) absorbing red light appear cyan. Because the effect of filters on transmitted light is the same as that of pigments on reflected light, the following conclusions derived for pigments are also valid for filters.
What happens if two pigments are combined? The combination of a yellow pigment, which absorbs short (blue) wavelengths, with a cyan pigment, which absorbs long (red) wavelengths, leaves only medium (green) wavelengths to be reflected when irradiated with white light. As a result, the combination of yellow and cyan pigments results in green reflected light. Similarly, the combination of yellow and magenta pigments results in red, and the combination of cyan and magenta pigments results in blue reflected light. In Fig. 4, the effect of color subtraction is demonstrated for filters.
Ideally, a combination of yellow, cyan and magenta pigments should result in total absorption of the whole visible wavelength range and thus in the perception of a black surface.
However, the absorption properties of real pigments are never ideal. This, for instance, explains why a four-color printer uses a black pigment in addition. Colors produced by a combination of cyan, magenta, yellow and black form the so-called CMYK color space.
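As a minimal numerical sketch of color subtraction (not taken from the article), the overall transmittance of stacked filters can be modelled as the product of the individual transmittances; the coarse three-band “spectra” below are illustrative assumptions, not measured data.

    # Color subtraction: stacked filters (or mixed pigments) multiply their
    # band-wise transmittances. The three bands stand for blue, green and red.
    YELLOW  = {"blue": 0.05, "green": 0.95, "red": 0.95}  # absorbs blue
    CYAN    = {"blue": 0.95, "green": 0.95, "red": 0.05}  # absorbs red
    MAGENTA = {"blue": 0.95, "green": 0.05, "red": 0.95}  # absorbs green

    def combine(*filters):
        """Multiply the band-wise transmittances of stacked filters."""
        result = {"blue": 1.0, "green": 1.0, "red": 1.0}
        for f in filters:
            for band in result:
                result[band] *= f[band]
        return result

    print(combine(YELLOW, CYAN))     # only the green band remains -> green
    print(combine(YELLOW, MAGENTA))  # only the red band remains   -> red
    print(combine(CYAN, MAGENTA))    # only the blue band remains  -> blue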
Fig. 4: Overlapping arrangement of yellow, cyan and magenta color filters on an overhead projector.
In the overlapping regions, color subtraction results in green, red and blue light.
Colorimetry
The basic problem of colorimetry is the quantification of the physiological color perception caused by a certain spectral color stimulus function φλ(λ).
When the color of a primary light source has to be characterized, φλ(λ) equals the source’s spectral radiant power Φλ(λ) (or another spectral radiometric quantity, such as radiant intensity or radiance).
When the color of a reflecting or transmitting object (for example a filter) has to be characterized, φλ(λ) equals the incident spectral irradiance impinging upon the object’s surface, multiplied by the object’s spectral reflectance, its spectral radiance coefficient or its spectral transmittance.
Since colors of reflecting or transmitting objects depend on the object’s illumination, the CIE has defined colorimetric standard illuminants. The CIE Standard Illuminant A is defined by a Planckian blackbody radiator at a temperature of 2856 K, and the CIE Standard Illuminant D65 is representative of average daylight with a correlated color temperature of 6500 K (for the definition of color temperature, see below).
RGB and XYZ color matching functions
According to the tristimulus theory, every color which can be perceived by the normal-sighted human eye can be described by three numbers that quantify the stimulation of red, green and blue cones. If two color stimuli result in the same values for these three numbers, they produce the same color perception even when their spectral distributions are different. Around 1930, Wright and Guild performed experiments during which observers had to combine light at 435.8 nm, 546.1 nm and 700 nm in such a way that the resulting color perception matched the color perception produced by monochromatic light at a certain wavelength of the visible spectrum.
Evaluation of these experiments resulted in the definition of the standardized RGB color matching functions r(λ), g(λ) and b(λ), which have been transformed into the CIE 1931 XYZ color matching functions x(λ), y(λ) and z(λ). These color matching functions define the CIE 1931 standard colorimetric observer and are valid for an observer’s field of view of 2°. In practice, this observer can be used for any field of view smaller than 4°. For a 10° field of view, the CIE specifies another set of color matching functions x10(λ), y10(λ) and z10(λ). This set defines the CIE 1964 supplementary standard colorimetric observer, which has to be used for fields of view larger than 4°.

Fig. 1: XYZ color matching functions as defined by the CIE 1931 standard colorimetric observer.
x(λ) (solid black line) consists of a short- and a long-wavelength part, and
y(λ) (solid grey line) is identical with the CIE spectral luminous efficiency function V(λ).
Although the RGB and XYZ color matching functions can equally be used to define three parameters that uniquely describe a certain color perception, the XYZ color matching functions are preferred because they have positive values for all wavelengths (Fig. 1). In addition, y(λ) is identical to the CIE spectral luminous efficiency function V(λ) for photopic vision.
The XYZ tristimulus values of a certain spectral color stimulus function φλ (λ) are calculated by
X = k ∫ φλ(λ) · x(λ) dλ
Y = k ∫ φλ(λ) · y(λ) dλ
Z = k ∫ φλ(λ) · z(λ) dλ
The choice of the normalization constant k depends on the colorimetric task: When the spectral color stimulus φλ (λ) describes a spectral radiometric quantity of a primary light source, k = 683 lm/W and consequently Y yields the corresponding photometric quantity.
When the spectral color stimulus φλ (λ) describes the spectral distribution of optical radiation reflected or transmitted by an object, k is defined by
k = 100 / ∫ Eλ(λ) · y(λ) dλ
with Eλ(λ) denoting the incident spectral irradiance impinging upon the object’s surface.
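As an illustration of these formulas (a sketch, not part of the original article), the integrals can be approximated by a discrete sum over wavelength; the spectral arrays below are placeholders that would have to be filled with tabulated CIE color matching function data and a measured stimulus.

    import numpy as np

    # Placeholder spectra: in practice, phi and x_bar, y_bar, z_bar would hold
    # the measured stimulus and the tabulated CIE 1931 2° color matching
    # functions at the same wavelengths.
    wavelengths = np.arange(380.0, 781.0, 5.0)   # nm
    dlam = 5.0                                   # wavelength step in nm
    phi   = np.ones_like(wavelengths)            # spectral color stimulus φλ(λ)
    x_bar = np.ones_like(wavelengths)            # x(λ), placeholder values
    y_bar = np.ones_like(wavelengths)            # y(λ), placeholder values
    z_bar = np.ones_like(wavelengths)            # z(λ), placeholder values

    def tristimulus(phi, x_bar, y_bar, z_bar, dlam, k=683.0):
        """Approximate the X, Y, Z integrals by a discrete sum over wavelength."""
        X = k * np.sum(phi * x_bar) * dlam
        Y = k * np.sum(phi * y_bar) * dlam
        Z = k * np.sum(phi * z_bar) * dlam
        return X, Y, Z

    X, Y, Z = tristimulus(phi, x_bar, y_bar, z_bar, dlam)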
The (x, y) and (u’, v’) chromaticity diagrams
Although the XYZ tristimulus values define a three-dimensional color space representing all possible color perceptions, a representation in a two-dimensional plane is sufficient for most applications. One possibility for a two-dimensional representation is the CIE 1931 (x, y) chromaticity diagram, with its coordinates x and y calculated from a projection of the X, Y and Z values:
x=X/(X+Y+Z)
y=Y/(X+Y+Z)
Although widely used, the (x, y) chromaticity diagram suffers from non-uniformity: geometric distances in the diagram do not correspond to perceived color differences. For this reason, in 1976 the CIE defined the uniform (u’, v’) chromaticity scale (UCS) diagram, with its coordinates defined by
u’ = 4X/(X + 15Y + 3Z)
v’ = 9Y/(X + 15Y + 3Z)
Although this definition of the u’ and v’ coordinates does not provide a strict correspondence between geometric distances and perceived color differences, there are far fewer discrepancies than in the CIE (x, y) chromaticity diagram.
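Both projections follow directly from the formulas above; a minimal Python sketch (continuing the illustrative code style used earlier, with tristimulus values roughly corresponding to illuminant D65 normalized to Y = 100):

    def xy_chromaticity(X, Y, Z):
        """CIE 1931 (x, y) chromaticity coordinates from tristimulus values."""
        s = X + Y + Z
        return X / s, Y / s

    def uv_chromaticity(X, Y, Z):
        """CIE 1976 (u', v') chromaticity coordinates from tristimulus values."""
        d = X + 15.0 * Y + 3.0 * Z
        return 4.0 * X / d, 9.0 * Y / d

    # Approximate tristimulus values of illuminant D65 (Y normalized to 100):
    print(xy_chromaticity(95.047, 100.0, 108.883))   # about (0.3127, 0.3290)
    print(uv_chromaticity(95.047, 100.0, 108.883))   # about (0.1978, 0.4683)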

Fig. 2: The CIE 1931 (x,y) chromaticity diagram

Fig. 3: The CIE 1976 (u’, v’) chromaticity diagram.
Source (valid as of 2002): http://home.wanadoo.nl/paulschils/10.02.htm
Correlated color temperature
The correlated color temperature is used to characterize the spectral distribution of optical radiation emitted by a light source. This characterization corresponds to the projection of a two-dimensional chromaticity diagram onto a one-dimensional scale and is therefore very coarse.
In detail, the correlated color temperature is given in kelvin (K) and is the temperature of the blackbody (Planckian) radiator whose perceived color most closely resembles that of the given color stimulus.
As a (simplified) rule of thumb, spectral distributions dominated by long (reddish) wavelengths correspond to a low correlated color temperature whereas spectral distributions dominated by short (bluish) wavelengths correspond to a high correlated color temperature. For example, the warm color of incandescent lamps has a correlated color temperature of about 2800 K, average daylight has a correlated color temperature of about 6500 K and the bluish white from a Cathode Ray Tube (CRT) has a correlated color temperature of about 9000 K.
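The exact correlated color temperature is found by locating the point on the Planckian locus closest to the measured chromaticity. A widely used shortcut, not described in this article, is McCamy’s polynomial approximation from the (x, y) coordinates; the sketch below is only meaningful for chromaticities close to the Planckian locus.

    def cct_mccamy(x, y):
        """Approximate CCT in kelvin from CIE 1931 (x, y) chromaticity
        using McCamy's polynomial approximation."""
        n = (x - 0.3320) / (y - 0.1858)
        return -449.0 * n**3 + 3525.0 * n**2 - 6823.3 * n + 5520.33

    print(round(cct_mccamy(0.3127, 0.3290)))  # D65 white point -> roughly 6500 K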
Color rendering index CRI
The color rendering index (CRI) is a numerical description of the color rendition quality of a light source compared with a reference source of the same correlated color temperature. The general color rendering index Ra represents the average of the indices for the first eight test color samples. In total, 14 such test color samples, defined by DIN 6169 and CIE 13.2, are currently used; in many cases an additional index for test color sample no. 15, which was added later, is also calculated. For correlated color temperatures up to 5000 K, a blackbody (Planckian) radiator of the same color temperature serves as the reference; above 5000 K, a daylight spectrum, e.g. D65 (6500 K daylight), is used instead. Fundamentally, the color rendering index does not depend on the color temperature itself. It only depends on the relation of the light source’s spectral distribution within the visible spectral range to that of the reference light source.
Mathematically speaking Ra is defined by:
Ra = (1/8) · (R1 + R2 + … + R8)
where Ri are given by
Ri = 100 – 4.6 × ΔEi
Here, ΔEi is the Euclidean color distance of the respective test color sample when illuminated by the light source under test compared with the reference light source. Fig. 4 shows the spectral functions of the test color samples.
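A minimal sketch of this averaging in Python, assuming the color differences ΔEi for the first eight test color samples have already been determined (the ΔEi values below are made up for illustration):

    def special_cri(delta_e):
        """Special color rendering index Ri = 100 - 4.6 * ΔEi."""
        return 100.0 - 4.6 * delta_e

    def general_cri(delta_e_values):
        """General color rendering index Ra: average of R1 to R8."""
        first_eight = delta_e_values[:8]
        return sum(special_cri(de) for de in first_eight) / len(first_eight)

    # Made-up ΔEi values for the first eight test color samples:
    delta_e = [1.2, 0.8, 2.3, 1.5, 0.9, 1.1, 2.0, 1.7]
    print(round(general_cri(delta_e), 1))  # Ra for these illustrative values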

Fig. 4: Test color samples according to CIE 13.2
Color preference and rendition metric CQS
In addition to the CRI specified by the CIE, there is the color quality scale (CQS), which is calculated in a similar way from 15 test color samples. The CQS method (version 7.5) uses 15 selected (saturated) test colors from the Munsell color system instead of the CIE test color samples.
With the CIE color rendering index, test light sources that increase the saturation of an object’s color compared with the reference light source receive low CRI scores. The CQS method, in contrast, does not rate an increase in saturation produced by the light source under test negatively, in order to take into account an observer’s tendency to prefer colors with higher saturation.
An increased saturation of the rendered colors when an object is illuminated by the light source under test means a change in color perception relative to the reference; this lack of conformity between the test light source and the reference leads to a decreased CRI score.
By definition, the color quality scale is therefore not a pure color rendition metric but a combination of a color preference and a color rendition metric. Its advantage is that color preference is taken into account; at the same time, an objective evaluation of color rendition alone is not possible.
The Planckian locus is the curve in the chromaticity diagram traced by the color coordinates of a blackbody (Planckian) radiator as a function of its color temperature (see Fig. 5).
Fig. 5: Planckian black body locus

The dominant wavelength of a radiator
The dominant wavelength of a radiator is determined by the point where a straight line from the white point through the color coordinates of the radiator intersects with the spectral locus, the outer curved boundary of the CIE 1931 color space diagram. This is shown in Figure 6. The dominant wavelength cannot be determined directly from the spectrum. Instead, it represents an evaluation of a light source’s properties based on the color metric.
Fig. 6: Dominant wavelength in the CIE 1931 color space diagram

The purity of a radiator
The purity describes how close the color coordinate of a radiator lies to the spectral locus in the CIE color space diagram. Geometrically, it can be depicted as shown in Figure 7.
Fig. 7: Purity in CIE 1931 color space diagram

The color coordinate of the measurement is represented by the cross symbol. A line is drawn from the white point via the color coordinate towards the dominant wavelength of the measurement.
The purity is now calculated as follows by evaluation of the sections of the line:
purity = a / (a + b)
where a is the distance between the white point and the measurement point, and b is the distance between the measurement point and the spectral locus at the dominant wavelength. If the purity equals one, i.e. b equals zero, the light source has a pure single-line spectrum, e.g. a laser. If the purity equals zero, the color coordinate coincides with the white point, i.e. the radiator has a spectrum that is as broad as possible.
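A minimal geometric sketch of this calculation in Python, assuming the chromaticity coordinates of the white point, the measurement point and the intersection with the spectral locus are already known; the coordinates below are illustrative and chosen to lie on one line:

    import math

    def distance(p, q):
        """Euclidean distance between two chromaticity points (x, y)."""
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def purity(white, measurement, locus_point):
        """purity = a / (a + b), where a runs from the white point to the
        measurement point and b from the measurement point to the spectral
        locus at the dominant wavelength."""
        a = distance(white, measurement)
        b = distance(measurement, locus_point)
        return a / (a + b)

    # Illustrative, collinear chromaticity coordinates (not from the article):
    white, measurement, locus_point = (0.333, 0.333), (0.450, 0.390), (0.567, 0.447)
    print(purity(white, measurement, locus_point))  # 0.5 for this example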
MacAdam ellipses
Distances in the xy color space, which was defined by the CIE in 1931, do not reflect the distances as perceived by the human eye. This means that if two measurement points in the color diagram have the same distance to an arbitrary reference point, the perceived color contrast generally differs. In 1942, MacAdam took this into account by adding ellipses to the color diagram, as shown in Figure 8. Nowadays, so-called n-step MacAdam ellipses are used, where n is the magnification of the ellipse in comparison to the one originally defined by MacAdam. Established values are 3x, 5x and 10x.

Fig. 8: MacAdam ellipses in xy color space

Fig. 9: MacAdam ellipses in u’v’ color space
MacAdam’s research represented huge progress, and considering the options available at that time in terms of experimental setups and computing power, his results are truly remarkable. In the 1960s, additional research was promoted, which eventually resulted in the CIE 1976 u’v’ color space. Although the xy color space is still the most widely used color space to date, the CIE recommends using the u’v’ color space.
Recent findings indicate that the ellipses are not the ideal choice for modern solid state lighting (SSL) technology such as light emitting diodes (LEDs). The original ellipses were determined using fluorescent lamps of six different color temperatures; recent technologies are not subject to the same restrictions. Thus, new specifications are required, e.g. as defined by ANSI (ANSI NEMA ANSLG. 2011. C78.377-2008), using eight nominal CCTs as well as CCTs in 100 K steps (see Fig. 10).

Fig. 10: ANSI bins in the u’v’ diagram
More recent recommendations by the CIE (IEC 60081, IEC 1997) include the use of n-step circles in the u’v’ color space with their centers at the positions of the MacAdam ellipses. The radius of these circles can be adjusted as required.
Their mathematical description is given by:
(u’ – u’c)² + (v’ – v’c)² = (0.0011 × n)²
where u’c and v’c are the center coordinates of the circle. With such a description, interpolation in terms of CCT becomes an option, which reflects the diversity of LED technology. For binning purposes, either these circles or customized areas defined by the manufacturers are used in the selection and sorting of LEDs based on specific color properties.
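A minimal sketch of such a bin test in Python, checking whether a measured (u’, v’) chromaticity falls inside an n-step circle; the center coordinates and measured values are illustrative assumptions:

    def in_n_step_circle(u, v, u_c, v_c, n):
        """True if (u', v') lies within the n-step circle around (u'c, v'c),
        i.e. (u' - u'c)^2 + (v' - v'c)^2 <= (0.0011 * n)^2."""
        radius = 0.0011 * n
        return (u - u_c) ** 2 + (v - v_c) ** 2 <= radius ** 2

    # Illustrative bin center and measured chromaticities:
    u_c, v_c = 0.2000, 0.4800
    print(in_n_step_circle(0.2012, 0.4791, u_c, v_c, n=3))  # True: inside the 3-step circle
    print(in_n_step_circle(0.2060, 0.4750, u_c, v_c, n=3))  # False: outside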
Such measurements require precise spectrometers with high wavelength accuracy, narrow optical bandwidth (or a bandwidth correction) and a verified absolute calibration. Additionally, highly sensitive devices that enable rapid measurements and fast data transfer are required: a measurement device used for LED binning must meet these speed and precision requirements. Figure 11 shows the color bins of an LED manufacturer in the u’v’ color space.

Fig. 11: Color bins of an LED manufacturer
Source (valid as of 2002): http://whatis.techtarget.com/definition/0,,sid9_gci528813,00.html
Article based on publication Basics of Light Measurement by our partner Gigahertz-Optik