RGB/YUV Pixel Conversion
I recently received an email from Mike Perry thoroughly explaining this whole issue. For the definitive answer, please look here. If you want my original posting, here it is...
Conversion back and forth between RGB and YUV formats is a topic that often causes confusion: are we dealing with gamma-corrected RGB values? Which luminance/chrominance color space are we actually dealing with? Why can't someone just tell me how to do the conversion without giving me five chapters of theory first?
To cut a long story short, here are the formulae that I have used to do the conversions for PC video applications (well, OK, these are not exactly the formulae I used, but I did generate a bunch of lookup tables from them). The color space in question is actually YCbCr and not YUV, which a video purist will tell you is, in fact, the color scheme employed in PAL TV systems and is somewhat different (NTSC TVs use YIQ, which is different again). Why the PC video fraternity adopted the term YUV is a mystery, but I strongly suspect that it has something to do with not having to type subscripts.
The following two sets of formulae are taken from Keith Jack's excellent book "Video Demystified" (ISBN 1-878707-09-4).
RGB to YUV Conversion
Y = (0.257 * R) + (0.504 * G) + (0.098 * B) + 16
Cr = V = (0.439 * R) - (0.368 * G) - (0.071 * B) + 128
Cb = U = -(0.148 * R) - (0.291 * G) + (0.439 * B) + 128
YUV to RGB Conversion
B = 1.164(Y - 16) + 2.018(U - 128)
G = 1.164(Y - 16) - 0.813(V - 128) - 0.391(U - 128)
R = 1.164(Y - 16) + 1.596(V - 128)
In both these cases, you have to clamp the output values to keep them in the [0,255] range. Rumour has it that the valid range is actually a subset of [0,255] (I've seen an RGB range of [16,235] mentioned), but clamping the values into [0,255] seems to produce acceptable results to me.
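If you just want something to drop into a program, here is a rough C sketch of the two formula sets above with the clamp included; the function names are mine, and the floating-point arithmetic is left as-is for clarity rather than speed (in practice you would precompute lookup tables, as mentioned above):

#include <stdint.h>

/* Clamp a floating-point result into the displayable [0,255] range. */
static uint8_t clamp_u8(double v)
{
    if (v < 0.0)   return 0;
    if (v > 255.0) return 255;
    return (uint8_t)(v + 0.5);
}

/* RGB -> YCbCr ("YUV") using the Video Demystified coefficients above. */
static void rgb_to_yuv(uint8_t r, uint8_t g, uint8_t b,
                       uint8_t *y, uint8_t *u, uint8_t *v)
{
    *y = clamp_u8( 0.257 * r + 0.504 * g + 0.098 * b +  16.0);
    *v = clamp_u8( 0.439 * r - 0.368 * g - 0.071 * b + 128.0);   /* Cr */
    *u = clamp_u8(-0.148 * r - 0.291 * g + 0.439 * b + 128.0);   /* Cb */
}

/* YCbCr ("YUV") -> RGB, the reverse direction. */
static void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                       uint8_t *r, uint8_t *g, uint8_t *b)
{
    double c = 1.164 * (y - 16);

    *b = clamp_u8(c + 2.018 * (u - 128));
    *g = clamp_u8(c - 0.813 * (v - 128) - 0.391 * (u - 128));
    *r = clamp_u8(c + 1.596 * (v - 128));
}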
Further Information
Julien (surname unknown) notes that there are problems with the above formulae and suggests the following instead:
Y = 0.299R + 0.587G + 0.114B
U' = (B - Y) * 0.565
V' = (R - Y) * 0.713
With reciprocal versions:
R = Y + 1.403V'
G = Y - 0.344U' - 0.714V'
B = Y + 1.770U'
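Purely as an illustration of Julien's variant (again the names are mine, and the clamp_u8 helper from the earlier sketch is assumed), the same thing in C might look like this; note that U' and V' come out signed and centred on zero rather than offset by 128:

/* Full-range RGB -> Y, U', V' as suggested by Julien.  Y lands in
   [0,255]; U' and V' are signed values in roughly [-128,128]. */
static void rgb_to_yuv_full(uint8_t r, uint8_t g, uint8_t b,
                            double *y, double *u, double *v)
{
    *y = 0.299 * r + 0.587 * g + 0.114 * b;
    *u = (b - *y) * 0.565;
    *v = (r - *y) * 0.713;
}

/* The reciprocal conversion back to RGB, clamped to [0,255]. */
static void yuv_full_to_rgb(double y, double u, double v,
                            uint8_t *r, uint8_t *g, uint8_t *b)
{
    *r = clamp_u8(y + 1.403 * v);
    *g = clamp_u8(y - 0.344 * u - 0.714 * v);
    *b = clamp_u8(y + 1.770 * u);
}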
Scott Scriven has recently sent me a C program he wrote to convert Quantel YUV files, as generated by SGI's "Zapit!" video capture software, into RGB PPM files. You can get hold of the source by clicking here. Please contact Scott with any support questions on this program.
Intel actually has an extensive web site on the topic of color space conversion and you can find it here.
Mike Perry's definitive answer to the conversion question
"I looked at your RGB/YUV page again and it still contains extremely" iffy "wording as if no-one knows what the heck is really the correct way to do conversion. there are two specifications, CCIR 656 and CCIR 601 which define standards for component video, and I'm sure other pages on your site refer to them. in any case, CCIR 601 defines the relationship between ycrcb and RGB values:
Ey = 0.299R + 0.587G + 0.114B
Ecr = 0.713(R - Ey) = 0.500R - 0.419G - 0.081B
Ecb = 0.564(B - Ey) = -0.169R - 0.331G + 0.500B
Where Ey, R, G and B are in the range [0,1] and Ecr and Ecb are in the range [-0.5,0.5] (equations corrected per input from Stephen Bourgeois and Gregory Smith below).
If you form a matrix with those coefficients and compute the inverse you get coefficients equivalent to what Julien suggested.
The reason your first set of coefficients is incorrect, and related to your clamping comment, is because the defined range for Y is [16,235] (220 steps) and the valid ranges for Cr and Cb are [16,239] (235 steps). These are normalized ranges and when the data is used, 16 is added to Y and 128 is added to Cr and Cb to de-normalize them. The reason for this is that control and colorburst information is stored in the components along with the luminance and chrominance data, which is why the full range of [0,255] is not used. There is no "rumor" about this being correct; it is a defined standard for YCrCb video. I wouldn't be surprised if certain PC programs are using a variant where the entire range of [0,255] is used, but technically it is not YCrCb. If you use YCrCb for video purposes, it's probably going to be CCIR 601 compliant. I don't know why you would use YCrCb for anything else as your rendering operations are extremely expensive to do in the non-linear YCrCb color space relative to the linear RGB color space.
Most importantly, you should probably mention that there are more efficient ways of converting from YCrCb to RGB than just doing a straightforward implementation of the floating-point matrix multiplication.
First, the input YCrCb component values you are given may not even be valid within the YCrCb color space. This can occur if the components were obtained by converting from another color space (usually RGB). This places additional overhead on you to implement a pre-clamp phase and also to possibly normalize the component values if they aren't normalized already (subtract 16 from Y, subtract 128 from Cr and Cb, divide by 219 or 235 to obtain normalized ranges). You can reduce this overhead to a minimum by creating two 256-entry lookup tables which contain pre-converted-and-clamped results for input values in the range 0-255. You need one table for luminance and one table for chrominance.
Second, floating point operations are usually much slower than integer operations, at least when it comes to most C compilers, which do very poor instruction scheduling. Speed might not be important if you are converting static data "offline," but floating point is a killer if you're trying to do software conversion of YCrCb frame buffers. I do my conversion using 2.14 fixed point representation stored in 32 bit integers. You can use any number of fraction bits that you want, but my target system (the Nuon) has pixel pack and unpack operations which treat the values as 2.14. You certainly don't need any more than 2 integer bits as you only care about the range [-2.0, 2.0]. You can precalculate the normalize-and-clamp lookup tables to contain fixed point values so that the first stage consists solely of three lookups.
At that point you do the equivalent matrix multiply steps, except that you use fixed point instead of floating point. This is gobs faster than using floating point operations. After the conversion you have obtained R, G, and B. You then need to do another series of clamps to ensure everything is in the range [0, 1]. If you get your coefficients just right, you probably shouldn't need to do this, but it's rather difficult to ensure that underflow and overflow won't occur. If a component is not clamped, you need to obtain the scaled value using a fixed point multiply, usually by 255 if you're trying to convert to 8-bit component values.
Also, it's important to note that if you are only interested in luminance data (for the purposes of creating a greyscale image) then the RGB triple for YCrCb is simply (Ey, Ey, Ey) and you don't have to do the matrix calculation nor post-clamping, and you only have to scale one value. Simply run Y through the luminance lookup table, scale to [0,255] and plug into R, G and B.
I am looking into the possibility of using overlay channels to bypass software conversion altogether, although my emulated system (Nuon) has two channels (main + overlay) with alpha blending, so I would only be able to use this mechanism when only one screen is active, and I would lose chrominance information unless hardware vendors start implementing unpacked YCrCb instead of YUY2. Another interesting method of doing YUV to RGB conversion for the purpose of displaying video is to use programmable pixel shaders. You can almost pull it off on a GeForce3, but the GeForceFX and Radeon 9700/9800 cards are advanced enough that you can pass 32 bit YCrCb textures to the video card and do all conversion calculations on the video card. Using multi-texturing you can even do main channel and overlay channel alpha blending in one pass. Given that converting a single 720x480 32-bit YCrCb frame buffer at 60 frames per second takes up about 25% of my XP2800+ CPU time, this would be a very cool thing indeed."
Further input from Stephen Bourgeois:
In Mike Perry's section, the conversion matrix should be:
Ey = 0.299R + 0.587G + 0.114B
Ecr = 0.713(R - Ey) = 0.500R - 0.419G - 0.081B
Ecb = 0.564(B - Er) = -0.169R - 0.331G + 0.500B
(Gregory Smith points out that Er here should read Ey; the equations above were corrected.)
There is a minus sign missing in the R->Cb coefficient, and a digit 3 missing in G->Cb.
Also, I disagree with Mike on the idea that Y is "normalized" by adding 16. If RGB = [255,255,255], the equations would give Y = 255 + 16. Y should be clamped at 16 to prevent sub-black, the same way as it should be clamped at 235 to prevent white > 100%. This can be done by the use of the 256-entry lookup table.
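To make Mike's lookup-table and fixed-point suggestions (and Stephen's point about clamping through a 256-entry table) a little more concrete, here is a minimal sketch in C. It assumes the nominal CCIR 601 ranges of [16,235] for Y and [16,240] for Cr/Cb and uses 2.14 fixed point; the table names, the use of int32_t and the rounding are my own choices, not Mike's actual Nuon code:

#include <stdint.h>

/* Convert a floating-point constant to 2.14 fixed point. */
#define FIX(x) ((int32_t)((x) * 16384.0 + ((x) >= 0 ? 0.5 : -0.5)))

static int32_t luma_tab[256];    /* Y     -> clamped, normalized, 2.14 */
static int32_t chroma_tab[256];  /* Cr/Cb -> clamped, normalized, 2.14 */

/* Build the two 256-entry tables once.  Each entry is the pre-clamped,
   normalized component value stored as 2.14 fixed point. */
static void init_tables(void)
{
    for (int i = 0; i < 256; i++) {
        int y = i < 16 ? 16 : (i > 235 ? 235 : i);   /* clamp to [16,235] */
        int c = i < 16 ? 16 : (i > 240 ? 240 : i);   /* clamp to [16,240] */

        luma_tab[i]   = FIX((y - 16)  / 219.0);      /*  0.0 .. 1.0 */
        chroma_tab[i] = FIX((c - 128) / 224.0);      /* -0.5 .. 0.5 */
    }
}

/* Clamp a 2.14 value to [0.0, 1.0] and scale it to an 8-bit component. */
static uint8_t fix_to_u8(int32_t v)
{
    if (v < 0)     v = 0;
    if (v > 16384) v = 16384;
    return (uint8_t)((v * 255) >> 14);
}

/* YCrCb -> RGB: three table lookups, a fixed-point matrix multiply,
   then a post-clamp and scale for each component. */
static void ycrcb_to_rgb(uint8_t y, uint8_t cr, uint8_t cb,
                         uint8_t *r, uint8_t *g, uint8_t *b)
{
    int32_t ey  = luma_tab[y];
    int32_t ecr = chroma_tab[cr];
    int32_t ecb = chroma_tab[cb];

    *r = fix_to_u8(ey + ((FIX(1.403) * ecr) >> 14));
    *g = fix_to_u8(ey - ((FIX(0.344) * ecb) >> 14)
                      - ((FIX(0.714) * ecr) >> 14));
    *b = fix_to_u8(ey + ((FIX(1.770) * ecb) >> 14));
}

/* Greyscale shortcut: only Y matters, so skip the matrix entirely. */
static uint8_t ycrcb_to_grey(uint8_t y)
{
    return fix_to_u8(luma_tab[y]);
}

The right shifts of possibly negative intermediate values assume arithmetic shifts, which practically every compiler provides, and the number of fraction bits can be changed to taste.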
Avery Lee's JFIF clarification
"The equations you have from" video demystified "are the ones to use most of the time on PCs, such as for MPEG-1 decoding and handing most (all ?) Of the formats on the YUV fourcc page. however, there is one notable exception: JPEG. the jfif file format that is commonly used with JPEG defines a YCbCr color space that is slightly different than usual, in that all components have the full [0,255] excursion rather than [16,235] and [16,240]. the equations given by "Julien" are for this variant and are very close to the ones in jfif:
R = Y + 1.402(Cr - 128)
G = Y - 0.34414(Cb - 128) - 0.71414(Cr - 128)
B = Y + 1.772(Cb - 128)
It should be noted that some Motion JPEG codecs don't take this consideration into account and improperly exchange YCbCr values between the YUY2 and UYVY formats and their internal JPEG planes without performing the necessary scaling. This results in contrast shifts when using YUV formats."
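A small C sketch of that JFIF variant (the function name is mine, and the same clamp_u8 helper as in the earlier sketches is assumed):

/* JFIF-style YCbCr -> RGB: all three components use the full [0,255]
   excursion, so there is no +16 offset on Y. */
static void jfif_ycbcr_to_rgb(uint8_t y, uint8_t cb, uint8_t cr,
                              uint8_t *r, uint8_t *g, uint8_t *b)
{
    *r = clamp_u8(y + 1.402   * (cr - 128));
    *g = clamp_u8(y - 0.34414 * (cb - 128) - 0.71414 * (cr - 128));
    *b = clamp_u8(y + 1.772   * (cb - 128));
}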
Microsoft's answer
MSDN also has an answer to this question. You can find it here.
Want some sample code?
Peter Bengtsson recently sent code for a sample program showing how to perform YCrCb to RGB conversion with a fragment shader (pixel shader) in OpenGL. He says that it is tested on Linux but should be usable on Windows with only minor rework. Please contact him directly (tesla at och dot nu) with questions or comments.
The thorny issue of HD video
Just when you thought things were quietening down, Richard Salmon correctly points out that the YCrCb color space used for HD video (ITU.BT-709) is actually different from SD (ITU.BT-601). He adds:
"Found your page, and I'm afraid, even with Mike Perry's explanation, which covers the problems of coding ranges and black/white offsets in the Digital Coding of TV signals etc very well, it is still lacking;
In addition to the YUV/RGB equations relating to Rec. 601 (which are used for standard definition TV), there is another completely different set adopted for HDTV (ITU Rec. 709). This change of equation was entirely pointless, but unfortunately we have to live with it, since it is the internationally agreed standard. You will find the information on this at:
http://www.dcs.ed.ac.uk/home/mxr/gfx/faqs/colorconv.faq"