I have a color processing problem: I want to avoid color distortion when live streaming. I've noticed that colors shift slightly, especially when encoding live through capture cards. I have tested this with different cards, boards, and software (Osprey, Datapath, and Blackmagic cards; Asus and HP boards; Wirecast, FMLE, and OBS).

My input is 4:2:2 SDI in most cases (the signal stays SDI 4:2:2 until it is encoded), and sometimes DVI (throughout the signal chain) in certain cases when encoding an RGB original source.

I understand the difference that arises when YUV (SDI) is converted to RGB, but what I measure seems slightly different from the color conversion explained by Equasys GmbH.
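To show what I mean by "slightly different": as far as I understand it, the limited-range BT.601 and BT.709 conversion matrices give slightly different RGB for the same YCbCr pixel, and the green channel differs the most. This is a sketch using the common textbook coefficients, not the exact math of any particular card or software:

```python
# Compare BT.601 vs BT.709 limited-range YCbCr -> RGB conversion
# for the same pixel. Coefficients are the usual approximations.

def ycbcr_to_rgb(y, cb, cr, coeffs):
    kr_cr, kg_cb, kg_cr, kb_cb = coeffs
    yv = 1.164 * (y - 16)            # expand studio-range luma
    r = yv + kr_cr * (cr - 128)
    g = yv - kg_cb * (cb - 128) - kg_cr * (cr - 128)
    b = yv + kb_cb * (cb - 128)
    clamp = lambda v: max(0, min(255, round(v)))
    return clamp(r), clamp(g), clamp(b)

BT601 = (1.596, 0.392, 0.813, 2.017)
BT709 = (1.793, 0.213, 0.533, 2.112)

# Approximate YCbCr of the 75% yellow bar in standard color bars
pixel = (162, 44, 142)
print(ycbcr_to_rgb(*pixel, BT601))
print(ycbcr_to_rgb(*pixel, BT709))
```

For a neutral gray (Cb = Cr = 128) both matrices agree, but on saturated bar colors they diverge by several code values, mostly in green, which looks a lot like what I am measuring.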

When I capture using, for example, the Datapath Vision capture hardware/software, I receive full-range colors (0-255) in the Vision capture window, but in every encoding software some of the colors are lost, mainly as an increased or decreased green level depending on the color (tested with 100% color bars to get exact numeric values for the distortion).
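Numerically, here is what I suspect is happening somewhere in the chain (a sketch, assuming the usual full-range 0-255 vs studio-range 16-235 conventions):

```python
# Sketch of a range mismatch: the capture window shows full-range RGB
# (0-255), but a stage that assumes studio-range video (16-235) will
# compress the levels; if that happens twice, or the matching expansion
# never happens, 100% bars no longer hit their nominal values.

def full_to_limited(v):
    # compress full range 0-255 into studio range 16-235
    return round(16 + v * 219 / 255)

def limited_to_full(v):
    # expand studio range 16-235 back to full range 0-255
    return round((v - 16) * 255 / 219)

# One compression matched by one expansion is nearly lossless:
print([limited_to_full(full_to_limited(v)) for v in (0, 128, 255)])  # [0, 128, 255]

# But a double compression shifts 100% white well below 255:
print(full_to_limited(full_to_limited(255)))  # 218
```

A level shift like this affects each channel, but on saturated bar colors it can read as a green error, which is why I tested with 100% bars in the first place.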

So, can anybody explain to me what the main issue is here? So far we have been happy to control contrast, brightness, saturation, and tint to reach the best possible video quality, but now that we are also encoding RGB source material (through a DVI/RGB chain), it is critical for our customer that the exact colors of their product are preserved.

After a good ten years of doing this, I am suddenly lost! Am I thinking about this in too complicated a way?