Talk:YIQ

These formulae are a bit much for the main article (and poorly presented), thus removed:

The approximate value of the matrix is:

The exact value of the formula is:

Y = + 0.299R + 0.587G + 0.114B
I = + 0.877(R - Y) cos 33° - 0.492(B - Y) sin 33°
  = + [(0.877 cos 33°)(1 - 0.299) - (0.492 sin 33°)(-0.299)]R
    + [(0.877 cos 33°)(-0.587) - (0.492 sin 33°)(-0.587)]G
    + [(0.877 cos 33°)(-0.114) - (0.492 sin 33°)(1 - 0.114)]B
Q = + 0.877(R - Y) sin 33° + 0.492(B - Y) cos 33°
  = + [(0.877 sin 33°)(1 - 0.299) + (0.492 cos 33°)(-0.299)]R
    + [(0.877 sin 33°)(-0.587) + (0.492 cos 33°)(-0.587)]G
    + [(0.877 sin 33°)(-0.114) + (0.492 cos 33°)(1 - 0.114)]B
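
For reference, a minimal Python sketch evaluating these exact expressions (the helper name rgb_to_yiq is only illustrative); feeding in the R, G and B basis vectors reproduces the familiar approximate coefficients such as 0.596 and 0.211 in the I and Q rows:

import math

# Y weights plus the scaled colour-difference signals 0.877(R - Y) and
# 0.492(B - Y), rotated by 33 degrees, exactly as in the formulas above.
KR, KG, KB = 0.299, 0.587, 0.114
C33, S33 = math.cos(math.radians(33)), math.sin(math.radians(33))

def rgb_to_yiq(r, g, b):
    y = KR * r + KG * g + KB * b
    i = 0.877 * (r - y) * C33 - 0.492 * (b - y) * S33
    q = 0.877 * (r - y) * S33 + 0.492 * (b - y) * C33
    return y, i, q

# Each basis vector gives one column of the RGB-to-YIQ matrix.
for name, rgb in (("R", (1, 0, 0)), ("G", (0, 1, 0)), ("B", (0, 0, 1))):
    print(name, [round(v, 4) for v in rgb_to_yiq(*rgb)])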

The exact value of the matrix is:

The approximate value of the inverse matrix is:

The exact value of the inverse matrix is:

--Dtcdthingy 07:30, 27 Nov 2004 (UTC)

Actually, the exact value of the inverse matrix is:
--Zom-B 04:36, 9 Sep 2006 (UTC)
  • This uses a surreal amount of precision: 19 significant figures for a formula presented as approximate! I don't have the expertise to say how many figures are justified from these inputs, but I have cropped it down to 4 decimal places. I suspect 3 would be better. Notinasnaid 19:48, 21 March 2007 (UTC)[reply]

Crappy NTSC

Is the information reduction used with YIQ the reason why NTSC video pictures look sort of trashy and blurry compared to the YUV-based PAL video system? --Abdull 12:42, 23 Mar 2005 (UTC)

NTSC is only 483 visible lines tall. If you round that down and use square pixels, it's 640x480. (familiar?) PAL is about 576 visible lines tall, and thus about 768x576 if you assume square pixels. This doesn't come free though; PAL is 50 fields (25 frames) per second while NTSC is 59.94 fields (29.97 frames) per second. PAL is jittery compared to NTSC. Even with the low frame rate, PAL generally takes more bandwidth on the air. (it's a radio spectrum hog) AlbertCahalan 03:51, 23 May 2005 (UTC)[reply]
Because of the "483 visible lines" mentioned above, some square pixel calculations cited a figure of 648x486. –Wbwn 02:10, 6 September 2006 (UTC)[reply]
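
For anyone checking the arithmetic behind those square-pixel figures, a short sketch (assuming a 4:3 display aspect ratio; the 486 figure rounds the 483 visible lines up to an even production-friendly count):

# Square-pixel width = visible lines * 4/3 for a 4:3 picture.
for label, lines in (("NTSC, rounded to 480 lines", 480),
                     ("NTSC, 486-line production count", 486),
                     ("PAL", 576)):
    print(label, f"{round(lines * 4 / 3)}x{lines}")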
PAL allocates more bandwidth for color information (1.3MHz each for U and V) than NTSC (0.5MHz and 1.3MHz for I and Q). The PAL system also cancels out the color error that ends up right on the screen in NTSC. There's also much more luma bandwidth (6MHz vs 4.3 MHz). Combined with the lower frame rate, this allows much more detail to be present in each frame. --Dtcdthingy 20:03, 24 May 2005 (UTC)[reply]
I've read several sources that suggest that manufacturers switched to using 1.3MHz for both I and Q, as the lower bandwidth for Q was no longer required. This introduced a phase error of 33° for older televisions, but it could be simply fixed by adjusting the hue control. It had the benefit of standardizing on the same low-pass filter for both I and Q, as well as negating the need for a delay circuit on the I line. The formula became "Q = B-Y cos33° + R-Y cos57°" and "I = -B-Y cos33° + R-Y cos57°" Dinjiin (talk) 09:06, 14 January 2013 (UTC)[reply]
As far as I know, NTSC broadcasters are (or were) required to generate properly filtered I and Q. 99.99% of the sets ever made decode along RGB axes, and filter to the lower bandwidth. This leaves the question of what would happen if one decoded I and Q (or R and B) with full 1.2MHz bandwidth. If that was reasonable, you might think someone would have done it. I suspect that there is additional noise in the decoded signal that overcomes the otherwise advantageous bandwidth of I. I suspect that we will never know. Gah4 (talk) 03:26, 27 July 2020 (UTC)[reply]
Do the above ideas fit the topic of YIQ space? —The preceding unsigned comment was added by Scetpfe (talkcontribs) 03:21, 24 February 2007 (UTC).[reply]
This "comment" is a rude and ignorant attack on NTSC by someone who has little understanding of television systems.
First, NTSC and PAL are "Tweedledum & Tweedledee". There is no fundamental difference between them. In fact, the original NTSC proposal of 55 years ago, as given in an issue of Electronics magazine, was actually what became PAL (equal-bandwidth R & B primaries, polarity alternation).
Second, neither system discards much information. The color-difference signals represent saturation, and as most (not all!) objects (natural or man-made) have hues of constant saturation, this makes highly efficient use of the available color bandwidth.
In the preceding response, the writer says "The PAL system also cancels out the color error that ends up right on the screen in NTSC." I assume he's referring to hue shifts caused by non-constant group delay. This was never a problem in the US, because American coax and microwave distribution systems had low differential phase error. Europe's equipment was not so good, which was one of the reasons polarity alternation was used. Phase errors were of opposite polarity (and thus of complementary hue) on alternate lines, thus causing them to visually cancel. The "gotcha" is that the cancellation reduces saturation, introducing visible artifacts into the image. Yes, NTSC would have been subject to (arguably worse-looking) hue errors under the same conditions -- but the conditions were never the same.
There is another error in the preceding response, which states the PAL chroma signals are 1.3MHz. I've always understood them to be 1.0MHz -- narrower than the NTSC I signal of 1.5MHz -- and this is consistent with the drawing in the Wikipedia article on PAL. If the claimed (non-existent) "information reduction" of the NTSC system makes its pictures "look sort of trashy and blurry", what can one then say about PAL, which has a narrower chroma bandwidth?
Yes, the European 625/25 standard does permit somewhat greater visual detail. Big deal. Neither system produces "trashy" or "blurry" pictures when properly used. The original poster should watch NTSC cable signals on a high-quality receiver and decide for himself whether NTSC has fundamental problems with image quality.
WilliamSommerwerck (talk) 18:26, 1 March 2008 (UTC)[reply]

Anyone?

Does anyone understand this subject enough to clean it up? It's been marked since April last year.

I changed it to {{expert}} to see if that'll attract anyone. 68.39.174.238 02:45, 14 April 2006 (UTC)[reply]

I have some minor expertise in this subject and I believe this article offers a good explanation of the subject.

Two technical statements in the article concerned me. Both were added by Mako098765.

The first was that the I and Q components refer to the "modulation schemes used", but they don't really; they refer to the in-phase and quadrature-phase "components used in quadrature amplitude modulation".

The second was his explanation of the image. The origin he gave was not even on the diagram presented, and he seemed to assume the I and Q components are polar co-ordinates when actually they are typically Cartesian co-ordinates.

I have fixed both these problems, though this involved removing the image.

My only other concern was that the article was a little light in discussing how the YIQ colour space is used in image processing.

That said, this article really does offer a good explanation of the topic, so I am removing the expert tag unless another expert offers a differing opinion.

Cedars 02:50, 18 May 2006 (UTC)[reply]

I got my info from the Buchsbaum TV servicing book that I mentioned. The color space diagram explanation may be somewhat suspect, but I am fairly confident about the modulation aspect, which refers to broadcast NTSC. From your comments about coordinates and image processing, you bring to mind YUV. - mako 06:09, 19 May 2006 (UTC)[reply]

Move page

I suggest that this article, as well as the YUV and YDbDr articles, be renamed so that the words "color space" or "color model" occur afterward. This way they adhere to the convention set by the other color space articles. -SharkD 02:13, 21 October 2006 (UTC)[reply]

I believe this is only necessary when there are conflicts with other article names. Valcumine (talk) 18:23, 28 June 2010 (UTC)[reply]

Image seems to be incorrect.

The first image of the article has incorrect coordinates. I and Q values never go above 0.6 or below -0.6, at least in the real world, which makes me question the authority of whoever made it. I'm not an expert, however. Valcumine (talk) 18:20, 28 June 2010 (UTC)[reply]

I believe that is right, but would rather not go through all the math. Most of the discussion is in bandwidth, but you also have to have a signal that results in not overmodulating the carrier. That is, Y+I, Y-I, Y+Q and Y-Q should never be greater than 1. Gah4 (talk) 00:12, 6 July 2017 (UTC)[reply]

YIQ or YUV for NTSC?

Several other Wikipedia pages (e.g. the German YIQ page and http://en.wikipedia.org/wiki/Color_space) suggest that YIQ is actually no longer being used at all for NTSC. Can anyone confirm this? Beriechil (talk) 08:49, 13 November 2010 (UTC)[reply]

  • Full power NTSC broadcasting in the US was discontinued by June 12, 2009. It's still used for low power stations, but that will be ending sometime in 2012. I'll update the article lead-in appropriately. Msaunier (talk) 14:21, 9 July 2011 (UTC)[reply]
I've read on various places on the internet that most major broadcasters and hardware manufacturers switched from YIQ to YUV in something like the 70s or 80s. YIQ was kept alive after that for a while just like monochrome TV or mechanical scan television were for some time after their heyday. Not much in use anymore, but not actually illegal or dropped from regulatory standards. --79.242.203.134 (talk) 21:40, 5 July 2017 (UTC)[reply]
As far as I know, the FCC requires YIQ for broadcasters. That is, higher resolution (bandwidth) on the I axis than the Q axis. It isn't required that receivers decode it this way, and it is easier in the YUV coordinate system. Since there are millions of decoders for each encoder, it makes some sense to simplify the decoder. By the 1970's, though, analog integrated circuits made it much easier to do. Unless broadcasting equipment sellers offered a better price, and it might save a few dollars, I don't know why anyone would change. It is sad that so few receivers ever did YIQ decoding, though. Gah4 (talk) 23:47, 5 July 2017 (UTC)[reply]

Why 33 degrees?

Why did they choose 33 degrees exactly? Where did they derive that exact number from? — Preceding unsigned comment added by 157.178.2.1 (talk) 19:07, 8 November 2011 (UTC)[reply]

I found a great (well, no) explanation in "Donald G. Fink, ed. (1957). Television Engineering Handbook. McGraw Hill. p. 9-26.": This angle of 33 deg was chosen so that the wideband and narrowband chrominance signals (Ei' and Eq') are conveniently obtainable by quadrature demodulation and at the same time the color-difference signals (Er' — Ey') and (Eb' — Ey') are conveniently obtainable by quadrature demodulation. Tadoritz (talk) 08:32, 26 December 2019 (UTC)[reply]
As well as I know it, NTSC studied the visual ability to resolve detail among different colors, and came up with the 33 degrees. I suspect that it isn't easy to make the measurements, as you need to eliminate (or reduce enough) the luminance differences as seen by viewers. In any case, the study was done and they came up with a number. With the very small number of YIQ decoders, it ended up not making much difference, but they didn't know that at the time. Gah4 (talk) 07:54, 31 December 2019 (UTC)[reply]

The first image on this page appears to be rotated by about 120 degrees (relative to YUV), not 33 degrees! For example, in YUV, Blue appears at the right (+X axis) while on this page Blue is near the top-left quadrant. I'm not saying the offset is exactly 120 degrees, but it is much greater than 33 degrees or the image(s) are wrong. Maybe 90 + 33 = 123 degrees ? Hydradix (talk) 09:54, 28 October 2012 (UTC)[reply]

While human cone cells are sensitive to red, green, and blue, it seems that the colors that we can see in highest resolution are not along the red, green, or blue axes of the color chart. YIQ encodes after matrixing (rotating the coordinate system) from YUV (that is, along the blue and red axes) to those of higher and lower resolution. This is a 33 degree rotation. The color burst, then, has the appropriate phase to make YUV (that is, red and blue axis) decoding easier. A YIQ decoder has to generate the appropriate phase to extract I and Q, amplify and filter them to the appropriate bandwidth, delay Q, as the filters have different delay, then matrix back to RGB. Not so easy in vacuum tube days, but should have been done in the transistor and IC days. Gah4 (talk) 23:58, 5 July 2017 (UTC)[reply]
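
A minimal sketch of that matrixing step, assuming the usual convention that (I, Q) is the (U, V) colour-difference pair rotated by 33 degrees, arranged so that I lies along the wideband axis:

import math

S33, C33 = math.sin(math.radians(33)), math.cos(math.radians(33))

def uv_to_iq(u, v):
    """Rotate the (U, V) pair onto the (I, Q) axes."""
    return -u * S33 + v * C33, u * C33 + v * S33

def iq_to_uv(i, q):
    """This matrix is its own inverse, so a decoder can reuse the same step."""
    return -i * S33 + q * C33, i * C33 + q * S33

A decoder working along the U/V (roughly blue and red) axes simply skips this rotation and filters both components to the same bandwidth; a true YIQ decoder demodulates along the I/Q axes, filters each to its own bandwidth with a matching delay, and then matrixes back to RGB.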
The question is why 33 degrees is the most "human sensitive" and not 30 degrees, and not 35 degrees. — Preceding unsigned comment added by 75.57.173.245 (talk) 05:26, 22 January 2018 (UTC)[reply]

I went down this rabbit hole: You can find the definitive source of 33 degrees here, in the original RCA application to the FCC from 1953: https://worldradiohistory.com/BOOKSHELF-ARH/Regulatory/RCA-Color-TV-Application-FCC-1953.pdf - specifically in Appendix H, page 240 ("A series of Munsell colors covering the color gamut were viewed, each in turn superimposed on each of the others." [...] "As a result of the tests, a preferred pair of vectors was found approximately 33 degrees from the B -Y and R -Y pair."). One other thing to note is that being 33 degrees rotated, the I channel is exactly 1 radian off of the color burst - this perhaps made parts cheaper (I have no source for that though - just speculation.) Daemon404 (talk) 19:17, 6 April 2021 (UTC)[reply]

As well as I know, the processing of color vision is not well understood. It seems that some color differences are done inside the eye, and others after they get to the brain. In any case, the 33 degrees is experimental. They might have taken the average from experiments with a number of people. I suppose there could be some advantage if it is one radian, in terms of values for RC delay circuits. Gah4 (talk) 11:04, 13 April 2021 (UTC)[reply]

YIQ to analog formula?

Is there a formula for converting YIQ to an analog quadrature-modulated chroma signal? -- 15:45, 11 December 2014 (UTC) — Preceding unsigned comment added by 89.182.14.227 (talk)

I and Q are the signals to modulate the two phases for the quadrature amplitude modulated (QAM) subcarrier. This is then added to Y, the luminance signal. I is filtered to 1.3MHz bandwidth, and Q to 400kHz bandwidth, before modulation. Gah4 (talk) 00:01, 6 July 2017 (UTC)[reply]
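
As a rough sketch of that modulation step (band-limiting filters, burst and sync omitted; the subcarrier frequency and the 33-degree reference phase are the standard NTSC values):

import math

F_SC = 3_579_545.0          # NTSC colour subcarrier frequency in Hz
PHASE = math.radians(33)    # I/Q axes sit 33 degrees from the B-Y/R-Y axes

def composite(y, i, q, t):
    """Luma plus quadrature-modulated chroma at time t (seconds).
    In a real encoder, i and q are first low-pass filtered to about
    1.3 MHz and 0.4 MHz respectively, as described above."""
    w = 2.0 * math.pi * F_SC * t
    return y + q * math.sin(w + PHASE) + i * math.cos(w + PHASE)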

1953 vs Modern Equations

Currently, there are 2 groups of equations, the NTSC 1953 colorimetry and the FCC NTSC Standard (SMPTE C). Short version of the issue: while the equations themselves are correct, it appears the titles and descriptions are not.

Before making changes to the main article, it seemed a good idea to discuss first.

The main difference between the 2 sets of equations is that the 1st set in the article is based on 3 digits of accuracy, while the 2nd set is lower accuracy since it is based on only 2 digits.

According to SMPTE 170M-2004 (and the earlier 170M-1999), the 3+ digit accuracy version was the original derivation back in 1953. However, CFR 47 73.682 ("NTSC 1953") was published with only 2 digits of accuracy due to the use of slide rules at the time. Decades later, SMPTE published a "recreation" of the original 3+ digit accuracy in SMPTE 170M. Thus, the 2 sets of equations only differ by round-off error. In actuality, the 2 digit version is considered semi-deprecated, even though CFR 47 73.682 is still on-the-books with 2 digits of accuracy.

Here's the 3+ digit accuracy set of equations (from SMPTE 170M-2004):

Y =  0.299R + 0.587G + 0.114B
I = -0.268(B-Y) + 0.736(R-Y)
Q =  0.413(B-Y) + 0.478(R-Y)

Note that the above equations are rounded to 3 digits of accuracy. The above are 33 degree rotations of the below equations from SMPTE 170M, which are defined as having exactly 3 digits of accuracy.

Y   =  0.299R + 0.587G + 0.114B
B-Y = -0.299R - 0.587G + 0.886B
R-Y =  0.701R - 0.587G - 0.114B
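
A quick numerical check of that 33-degree relationship (a sketch; the 0.877 and 0.492 scale factors applied to R-Y and B-Y are the usual NTSC ones):

import math

s33, c33 = math.sin(math.radians(33)), math.cos(math.radians(33))

# Coefficients of (B-Y) and (R-Y) in the I and Q equations above:
print("I:", round(-0.492 * s33, 3), round(0.877 * c33, 3))   # -0.268  0.736
print("Q:", round( 0.492 * c33, 3), round(0.877 * s33, 3))   #  0.413  0.478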

Here's the 2 digit accuracy set of equations (from CFR 47 73.682):

EY′ =  0.30ER′ + 0.59EG′ + 0.11EB′
EI′ = −0.27(EB′−EY′) + 0.74(ER′−EY′)
EQ′ =  0.41(EB′−EY′) + 0.48(ER′−EY′)

As for SMPTE C, that is a separate specification that defines how R, G, and B are defined in the CIE 1931 color space. Back in 1953, the phosphors on TV screens had a greater color gamut but reacted slowly, causing color smear on fast-moving images. By the late 1960s, manufacturers started switching to different color phosphors that were faster reacting (no smearing) but had a smaller color gamut. The smaller color gamut was finally standardized in SMPTE C. The round-off errors between the 2 sets of equations above are not enough to compensate for the SMPTE C change in color gamut.

Comments?
Lathe26 (talk) 01:00, 7 March 2022 (UTC)[reply]

I suppose so. But note that there is no requirement that Wikipedia use the full precision of any source. Actually users, especially for commercial use, should find the official source. As very few receivers ever used YIQ decoders, and it is even less likely that any will in the future, I suspect no-one will care either way. Common precision resistors, that one might use for building such a matrix, have 1% tolerance. Does NTSC give a tolerance on the values? I was some time ago trying to find the tolerance for the subcarrier frequency, but didn't think about this one. FCC documents are not so easy to find. Gah4 (talk) 01:50, 7 March 2022 (UTC)[reply]
The main thing here is the different color spaces - different color spaces were usually ignored, leading to very weird early DVD transfers (ex: Star Trek DS9 and TNG first seasons) when viewed on modern HD Rec.709 compliant screens. NTSC colorimetry is mentioned here: https://en.wikipedia.org/wiki/NTSC#Colorimetry . Perhaps we should copy that information. Of course, transforming to/from RGB doesn't change the color space gamut, but not mentioning different primaries generates confusion. Many people think that generic RGB = sRGB, and it's not. Equation precision should be as indicated in official sources. If the values end up being the same, the enhanced precision shows technical advancement and should be mentioned. 4throck (talk) 08:53, 7 March 2022 (UTC)[reply]
It is nice to be precise, but WP doesn't require it. Sometimes it takes a lot of work to understand the actual precision, and which one is the official version. (And not always the more precise one.) Some time ago, I was working on an article related to jet engine thrust, with the official documents giving values to about 5 digits. Then they are converted to metric units by the Europeans, again to 5 digits. I suspect you can't measure thrust that accurately in a flying airplane. Gah4 (talk) 20:24, 7 March 2022 (UTC)[reply]
Those are some interesting points. At present, I lean towards still listing both sets of equations for completeness. However, Gah4′s point that WP doesn't need both sets is something to consider. Maybe keep the 3-digit equations and de-emphasize the 2-digit equations. If only one set is to be listed, then the 3-digit seems preferable since that is what modern equipment would use when converting analog signals to digital with 8-bit analog-to-digital converters (~0.4%). Regarding allowed error, the older 1953 (CFR 47 73.682) is really loose and allows 2.5% error in some parts and 10% in others. The doc is littered with statements like "Closer tolerances may prove to be practicable and desirable with advance in the art." I suspect this was to enable cheaper color TVs early on with looser standards and they never bothered tightening the hard requirements later.
Regarding colorimetry, it would be good to mention its relation to the RGB values in this article (e.g., generic RGB is not sRGB) and then link to the official NTSC Colorimetry article. However, copying text from the NTSC Colorimetry article to here would lead to maintenance headaches. Lathe26 (talk) 05:45, 11 March 2022 (UTC)[reply]
Pretty much zero TVs decode this way. The question, then, is how close the encoders were. Since there are far fewer of them, one can put more money into them. Part of the original design is that you can encode YIQ but decode YRB by a phase rotation. Even more conveniently, the color burst has the phase for easy YRB decoding. But YRB loses the extra chroma resolution that YIQ was designed to keep. Sometimes I try to find FCC documents, but it isn't easy. I suspect, though, that at the time 1% was considered pretty good. Gah4 (talk) 06:09, 11 March 2022 (UTC)[reply]
OK, copying from the NTSC article is not a great solution... But the NTSC article has many problems (one is not even linking to YIQ!). Perhaps we should move (and improve) the Colorimetry and Color encoding sub-sections from the NTSC article here? These sub-sections can easily be expanded to mention these issues (extra chroma resolution, relation with RGB). 4throck (talk) 10:53, 11 March 2022 (UTC)[reply]
There is a link above to the RCA proposal to the FCC based on the NTSC study that gives them as two digits multiplied by sin(33) and cos(33). The above link also includes the reference material, and is about 700 pages long. If the FCC proposal does it with two digits, it should be close enough for us. Gah4 (talk) 21:36, 11 March 2022 (UTC)[reply]
Technically, it was derived from YUV using two matrices: a rotation matrix and a swap matrix, so that U and V are swapped. That means it should have all the properties of YUV as used in PAL. Now, BT.470, which describes it, made a dumb typo right in the YUV matrix. So, as SMPTE 170M says, no one should care, and decoders used approximate values anyway. Analog TV in the BT.470 sense is almost over, so only a software-defined radio encoder is still there. https://github.com/fsphil/hacktv Valery Zapolodov (talk) 10:35, 16 April 2022 (UTC)[reply]
I agree with all of your points, @Lathe26. At the very least, the titles and introductions to these formulas should be corrected as they are plain wrong in their current state. The lower-precision formulas should be associated with NTSC-1953 and FCC rules while removing references to SMPTE C. The higher-precision formulas definitely come from SMPTE ST 170 (formerly SMPTE 170M).
The revised SMPTE C primaries and white are interesting here only in that they were NOT used in SMPTE's accuracy/precision revision to the luma separation matrix. Similarly, PAL systems had their own new set of primaries+white but still used NTSC-1953's (and later SMPTE ST 170's) luma separation matrix. By not updating the matrix for new colors+white, TV systems avoided having to make expensive and non-backwards-compatible changes to transceiver and camera electronics. It wasn't until the HDTV era that luma-separation matrices were updated to mathematically reflect new system colors and white point. JustinTArthur (talk) 14:29, 1 April 2024 (UTC)[reply]

Something is wrong with the images

Something is wrong with at least one of the images in this article. The IQ plane shows +I as orange and -I as turquoise, but the component-split image shows +I as red and -I as cyan. There's a huge hue difference, and this cannot be caused by the intensities of the components themselves, so at least one of the images is very wrong. According to https://colorizer.org, both are wrong (I don't know whether colorizer.org is a reliable source). What do you think about this? This topic is different from the discussion named "Image seems to be incorrect.", as that one is about numbers while I'm talking about hues. RuzDD (talk) 05:24, 10 February 2024 (UTC)[reply]

As noted, YIQ is 33 degrees rotated from YUV, where YUV uses the red and blue axes. The graph looks close enough for me. I am not sure that I am good enough to tell the colors of the separation images. I don't even know where they came from. Gah4 (talk) 14:15, 10 February 2024 (UTC)[reply]
The YIQ IQ plane image seems correct so, for consistency, the separation image should use similar color gradients. Here's a corrected version I made: https://images4.imagebam.com/e0/75/5d/MERYFZV_o.jpg
This is only an approximation, as I just changed the colors of the separations. 4throck (talk) 18:06, 10 February 2024 (UTC)[reply]
@4throck Good work on the I axis, but now the Q axis is very wrong... I think even the amplitudes of the IQ components can be wrong, in which case simply applying a hue rotation cannot solve the problem... (Note: the colors of the plane are probably correct, as the difference with colorizer.org is just a few degrees.) RuzDD (talk) 20:51, 10 February 2024 (UTC)[reply]
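
One way to sanity-check the hues is to push pure +I/-I and +Q/-Q samples through the commonly quoted approximate inverse matrix and look at the resulting RGB (a sketch only; the coefficients and the naive clipping are assumptions, not taken from either image):

def yiq_to_rgb(y, i, q):
    """Commonly quoted approximate inverse, with naive clipping to [0, 1]."""
    r = y + 0.956 * i + 0.621 * q
    g = y - 0.272 * i - 0.647 * q
    b = y - 1.106 * i + 1.703 * q
    return tuple(min(1.0, max(0.0, v)) for v in (r, g, b))

for label, (i, q) in (("+I", (0.3, 0.0)), ("-I", (-0.3, 0.0)),
                      ("+Q", (0.0, 0.3)), ("-Q", (0.0, -0.3))):
    print(label, tuple(round(v, 3) for v in yiq_to_rgb(0.5, i, q)))

With these coefficients, +I comes out orange, -I cyan-blue, +Q purple-magenta and -Q green, which is roughly what the IQ-plane image shows.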

Wasn’t YIQ already officially demoted to second tier in 1987?

As far as I can tell, from the release of the first version of the SMPTE-170M standard in 1987, YUV was the preferred encoding and decoding mode for NTSC analog broadcast television signals in North America, with YIQ encoding demoted to a possible alternative method. So the phase-out of YIQ, not just in practice but even officially, did not happen at the time of analog switch-off but much earlier, right? -- 2A02:3030:614:C505:1807:DB80:55A7:6B4B (talk) 10:25, 21 May 2024 (UTC)[reply]

The original standard, using YIQ, allows for more bandwidth for I than Q, following the color resolution of the eye (or eye-brain system), as found by NTSC experimentation. The choice of the color burst phase makes it easier to design Y'UV decoders, especially when they are built from vacuum tubes. As well as I know it, it was in the 1990s when RCA built one that did real YIQ decoding, with the full-resolution I signal.
If you are asking what broadcasters are required to do, I don't see any reason to change it in 1987. The idea is that since there are a small number of encoders and a large number of decoders, the total cost of the encoders is small.
On the other hand, those not generating broadcast quality video can do anything that they want. I am not sure, for example, about Betamax and VHS. Gah4 (talk) 00:14, 22 May 2024 (UTC)[reply]