LMS color space

Normalized responsivity spectra of human cone cells, S, M, and L types (SMJ data based on Stiles and Burch RGB color-matching)[1]

LMS (long, medium, short) is a color space that represents the response of the three types of cone cells of the human eye, named for their responsivity (sensitivity) peaks at long, medium, and short wavelengths.

The numerical range is generally not specified, other than that the lower end is bounded by zero. It is common to use the LMS color space when performing chromatic adaptation (estimating the appearance of a sample under a different illuminant). It is also useful in the study of color blindness, in which one or more cone types are defective.

XYZ to LMS

Typically, colors to be adapted chromatically will be specified in a color space other than LMS (e.g. sRGB). The chromatic adaptation matrix in the diagonal von Kries transform method, however, operates on tristimulus values in the LMS color space. Since colors in most colorspaces can be transformed to the XYZ color space, only one additional transformation matrix is required for any color space to be adapted chromatically: to transform colors from the XYZ color space to the LMS color space.[2]
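As an illustration (a minimal sketch, not taken from the cited sources), the linear von Kries step can be written in Python as follows; the Hunt–Pointer–Estévez matrix discussed below stands in for the XYZ-to-LMS matrix, and the function name and sample values are only examples:

    import numpy as np

    # Hunt-Pointer-Estevez matrix (equal-energy normalization); see below.
    M_HPE = np.array([
        [ 0.38971,  0.68898, -0.07868],
        [-0.22981,  1.18340,  0.04641],
        [ 0.00000,  0.00000,  1.00000],
    ])

    def von_kries_adapt(xyz, white_src, white_dst, M=M_HPE):
        """Adapt an XYZ color from a source white point to a destination white
        point by diagonal (von Kries) scaling in the LMS-like space of M."""
        gains = (M @ white_dst) / (M @ white_src)    # per-cone scaling factors
        return np.linalg.inv(M) @ (gains * (M @ xyz))

    # Example: re-render a color measured under D65 for a D50 white point.
    d65 = np.array([0.95047, 1.00000, 1.08883])
    d50 = np.array([0.96422, 1.00000, 0.82521])
    xyz_d50 = von_kries_adapt(np.array([0.30, 0.40, 0.50]), d65, d50)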

In addition, many color adaptation methods, or color appearance models (CAMs), run a von Kries-style diagonal matrix transform in a slightly modified, LMS-like space instead. They may refer to it simply as LMS, as RGB, or as ργβ. The following text uses the "RGB" naming, but note that the resulting space has nothing to do with the additive RGB color model.[2]

The CAT matrices for some CAMs in terms of CIEXYZ coordinates are presented here. The matrices, in conjunction with the XYZ data defined for the standard observer, implicitly define a "cone" response for each cell type.

Notes:

  • All tristimulus values are normally calculated using the CIE 1931 2° standard colorimetric observer.[2]
  • Unless specified otherwise, the CAT matrices are normalized (the elements in a row add up to 1) so the tristimulus values for an equal-energy illuminant (X=Y=Z), like CIE Illuminant E, produce equal LMS values.[2]
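To see why the second point holds, note that an equal-energy stimulus can be scaled to X = Y = Z = 1, so multiplying it by a row-normalized matrix simply sums each row:

\[
\begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix}
\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}
=
\begin{bmatrix} m_{11}+m_{12}+m_{13} \\ m_{21}+m_{22}+m_{23} \\ m_{31}+m_{32}+m_{33} \end{bmatrix}
=
\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}.
\]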

Hunt, RLAB

The Hunt and RLAB color appearance models use the Hunt–Pointer–Estévez transformation matrix (MHPE) for conversion from CIE XYZ to LMS.[3][4][5] This is the transformation matrix that was originally used in conjunction with the von Kries transform method, and it is therefore also called the von Kries transformation matrix (MvonKries).

Equal-energy illuminants:

\[
M_{\text{HPE}} = \begin{bmatrix} 0.38971 & 0.68898 & -0.07868 \\ -0.22981 & 1.18340 & 0.04641 \\ 0 & 0 & 1 \end{bmatrix}
\]

Normalized[6] to D65:

\[
M_{\text{HPE(D65)}} = \begin{bmatrix} 0.40024 & 0.70760 & -0.08081 \\ -0.22630 & 1.16532 & 0.04570 \\ 0 & 0 & 0.91822 \end{bmatrix}
\]

Bradford's spectrally sharpened matrix (LLAB, CIECAM97s)

The original CIECAM97s color appearance model uses the Bradford transformation matrix (MBFD) (as does the LLAB color appearance model).[2] This is a “spectrally sharpened” transformation matrix (i.e. the L and M cone response curves are narrower and more distinct from each other). The Bradford transformation matrix was supposed to work in conjunction with a modified von Kries transform method which introduced a small non-linearity in the S (blue) channel. However, outside of CIECAM97s and LLAB this is often neglected and the Bradford transformation matrix is used in conjunction with the linear von Kries transform method, explicitly so in ICC profiles.[7]
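For reference, the Bradford matrix as commonly published (and as used for the linear transform in ICC profiles) is:

\[
M_{\text{BFD}} = \begin{bmatrix} 0.8951 & 0.2664 & -0.1614 \\ -0.7502 & 1.7135 & 0.0367 \\ 0.0389 & -0.0685 & 1.0296 \end{bmatrix}
\]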

A "spectually sharpened" matrix is believed to improve chromatic adaptation especially for blue colors, but does not work as a real cone-describing LMS space for later human vision processing. Although the outputs are called "LMS" in its original LLAB incarceration, CIECAM97s uses a different "RGB" name to highlight that this space does not really reflect cone cells; hence the different names here.

LLAB proceeds by taking the post-adaptation XYZ values and performing a CIELAB-like treatment to get the visual correlates. CIECAM97s, on the other hand, takes the post-adaptation XYZ values back into the Hunt LMS space and works from there to model the vision system's calculation of color properties.

Later CIECAMs

A revised version of CIECAM97s switches back to a linear transform method and introduces a corresponding transformation matrix (MCAT97s):[8]

\[
M_{\text{CAT97s}} = \begin{bmatrix} 0.8562 & 0.3372 & -0.1934 \\ -0.8360 & 1.8327 & 0.0033 \\ 0.0357 & -0.0469 & 1.0112 \end{bmatrix}
\]

The sharpened transformation matrix in CIECAM02 (MCAT02) is:[9][2]

\[
M_{\text{CAT02}} = \begin{bmatrix} 0.7328 & 0.4296 & -0.1624 \\ -0.7036 & 1.6975 & 0.0061 \\ 0.0030 & 0.0136 & 0.9834 \end{bmatrix}
\]

CAM16 uses a different matrix:[10]

\[
M_{16} = \begin{bmatrix} 0.401288 & 0.650173 & -0.051461 \\ -0.250268 & 1.204414 & 0.045854 \\ -0.002079 & 0.048952 & 0.953127 \end{bmatrix}
\]

As in CIECAM97s, after adaptation, the colors are converted to the traditional Hunt–Pointer–Estévez LMS for final prediction of visual results.

Direct from spectra

From a physiological point of view, the LMS color space describes a more fundamental level of human visual response, so it makes more sense to define the physiopsychological XYZ by LMS, rather than the other way around.

Stockman & Sharpe (2000)

A set of physiologically based LMS functions was proposed by Stockman & Sharpe in 2000 and published in a technical report by the CIE in 2006 (CIE 170).[11] The functions are derived from the Stiles and Burch (1959) RGB CMF data, combined with newer measurements of the contribution of each cone type to the RGB functions. To adjust from the 10° data to 2°, assumptions about photopigment density differences and data on the absorption of light by pigment in the lens and the macula lutea are used.[12]

The Stockman & Sharpe functions can then be turned into a set of three color-matching functions similar to those in CIEXYZ:[13]

The inverse matrix is shown here for comparison with the ones for traditional XYZ:

Applications

Color blindness

The LMS color space can be used to emulate the way color-blind people see color. An early emulation of dichromacy was produced by Brettel et al. in 1997 and was rated favorably by actual patients. An example of a state-of-the-art method is Machado et al. 2009.[14]
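As a rough sketch of how such an emulation is typically structured (the matrices are left as parameters here; in practice the published projection values from Brettel et al. 1997 or Machado et al. 2009 would be used):

    import numpy as np

    def simulate_dichromacy(rgb_linear, rgb_to_lms, lms_projection):
        """Illustrative dichromacy-simulation pipeline: move a linear-RGB color
        into LMS, collapse the missing cone dimension with a 3x3 projection
        matrix, and map the result back to linear RGB."""
        lms = rgb_to_lms @ rgb_linear
        lms_deficient = lms_projection @ lms   # e.g. re-express L from M and S for protanopia
        return np.linalg.inv(rgb_to_lms) @ lms_deficient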

A related application is making color filters for color-blind people to more easily notice differences in color, a process known as daltonization.[15]

Image processing

JPEG XL uses an XYB color space derived from LMS. Its transform matrix is shown here:

This can be interpreted as a hybrid color theory where L and M are opponents but S is handled in a trichromatic way, justified by the lower spatial density of S cones. In practical terms, this allows for using less data for storing blue signals without losing much perceived quality.[16]
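Schematically, and leaving out the bias and non-linearity that JPEG XL applies to the cone mixes before this step, the opponent structure described above amounts to:

\[
X = \tfrac{1}{2}(L - M), \qquad Y = \tfrac{1}{2}(L + M), \qquad B = S.
\]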

The colorspace originates from Guetzli's butteraugli metric,[17] and was passed down to JPEG XL via Google's Pik project.

See also

References

  1. ^ http://www.cvrl.org/database/text/cones/smj2.htm
  2. ^ a b c d e f Fairchild, Mark D. (2005). Color Appearance Models (2E ed.). Wiley Interscience. pp. 182–183, 227–230. ISBN 978-0-470-01216-1.
  3. ^ Schanda, János, ed. (July 27, 2007). Colorimetry. p. 305. doi:10.1002/9780470175637. ISBN 9780470175637.
  4. ^ Moroney, Nathan; Fairchild, Mark D.; Hunt, Robert W.G.; Li, Changjun; Luo, M. Ronnier; Newman, Todd (November 12, 2002). "The CIECAM02 Color Appearance Model". IS&T/SID Tenth Color Imaging Conference. Scottsdale, Arizona: The Society for Imaging Science and Technology. ISBN 0-89208-241-0.
  5. ^ Ebner, Fritz (July 1, 1998). "Derivation and modelling hue uniformity and development of the IPT color space". Theses: 129.
  6. ^ "Welcome to Bruce Lindbloom's Web Site". brucelindbloom.com. Retrieved March 23, 2020.
  7. ^ Specification ICC.1:2010 (Profile version 4.3.0.0). Image technology colour management — Architecture, profile format, and data structure, Annex E.3, pp. 102.
  8. ^ Fairchild, Mark D. (2001). "A Revision of CIECAM97s for Practical Applications" (PDF). Color Research & Application. Wiley Interscience. 26 (6): 418–427. doi:10.1002/col.1061.
  9. ^ Fairchild, Mark. "Errata for COLOR APPEARANCE MODELS" (PDF). The published MCAT02 matrix in Eq. 9.40 is incorrect (it is a version of the Hunt–Pointer–Estévez matrix). The correct MCAT02 matrix is as follows. It is also given correctly in Eq. 16.2.
  10. ^ Li, Changjun; Li, Zhiqiang; Wang, Zhifeng; Xu, Yang; Luo, Ming Ronnier; Cui, Guihua; Melgosa, Manuel; Brill, Michael H.; Pointer, Michael (2017). "Comprehensive color solutions: CAM16, CAT16, and CAM16-UCS". Color Research & Application. 42 (6): 703–718. doi:10.1002/col.22131.
  11. ^ "CIE functions". cvrl.ucl.ac.uk.
  12. ^ "Stockman and Sharpe (2000) 2-deg (from 10-deg) cone fundamentals". cvrl.ucl.ac.uk.
  13. ^ "CIE 2-deg CMFs". cvrl.ucl.ac.uk.
  14. ^ "Color Vision Deficiency Emulation". colorspace.r-forge.r-project.org.
  15. ^ Simon-Liedtke, Joschua Thomas; Farup, Ivar (February 2016). "Evaluating color vision deficiency daltonization methods using a behavioral visual-search method". Journal of Visual Communication and Image Representation. 35: 236–247. doi:10.1016/j.jvcir.2015.12.014. hdl:11250/2461824.
  16. ^ Alakuijala, Jyrki; van Asseldonk, Ruud; Boukortt, Sami; Szabadka, Zoltan; Bruse, Martin; Comsa, Iulia-Maria; Firsching, Moritz; Fischbacher, Thomas; Kliuchnikov, Evgenii; Gomez, Sebastian; Obryk, Robert; Potempa, Krzysztof; Rhatushnyak, Alexander; Sneyers, Jon; Szabadka, Zoltan; Vandervenne, Lode; Versari, Luca; Wassenberg, Jan (September 6, 2019). Tescher, Andrew G; Ebrahimi, Touradj (eds.). "JPEG XL next-generation image compression architecture and coding tools". Applications of Digital Image Processing XLII. 11137: 20. Bibcode:2019SPIE11137E..0KA. doi:10.1117/12.2529237. ISBN 9781510629677.
  17. ^ "butteraugli/butteraugli.h at master · google/butteraugli". GitHub. Retrieved August 2, 2021.