I was comparing it to my TV, which was considered a low-lag TV from 2010 to about 2014. Then TN monitors arrived.
Inputlag.com rated the PS3DTV at 31 milliseconds of lag, or approximately two 60 Hz frames behind reality.
If the capture showed one frame less of delay, then I had caught up one frame versus that TV's lag, and one frame versus reality. I was hoping it would be exactly two frames behind, because that would prove the HDMI-to-VGA and VGA-to-HDMI converters combined add less than one millisecond, and therefore that over 95% of the delay is in the display, not in the conversion.
During this test, I had one direct path to the PS3DTV (lower right), one path straight to the CRT (upper right), and one "double-converted" path, HDMI to VGA then VGA to HDMI, which was HDMI-captured.
I'm not insisting the Twitch stream was zero-ping relative to the game's reality. If OBS or Twitch was late, we factored that out by making everything part of the same broadcast.
If the left and bottom-right are equal, and one route used zero converters while the other used two, and the left is an internal input while the bottom right is an internal input from a camera, then that shows the display delay is more significant than the converter delay.
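The arithmetic behind that conclusion can be sketched like this. The 31 ms figure is the Inputlag.com rating; the 0.5 ms converter figure is purely an assumed placeholder to show why the test can't tell the two routes apart at one-frame resolution:

```python
# Minimal sketch: if the direct route and the double-converted route
# land on the same captured frame, the converters together add less
# than the one-frame resolution of this test.

FRAME_MS = 1000 / 60          # one 60 Hz frame ≈ 16.7 ms

display_lag_ms = 31.0         # PS3DTV lag per Inputlag.com (~2 frames)
converter_lag_ms = 0.5        # assumed combined lag of both converters

direct_frames = round(display_lag_ms / FRAME_MS)
converted_frames = round((display_lag_ms + converter_lag_ms) / FRAME_MS)

# Same frame count on both routes → the converter delay is below what
# this test can resolve, so the display dominates the total delay.
print(direct_frames, converted_frames)  # both 2
```

The point of the sketch: a frame-counting test can only bound the converter delay to "under one frame," which is why landing on exactly the same frame on both routes is the result to hope for.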
VHS and Betamax are considered zero-ping, but they actually throw a light gun off by a couple of frames. A DVD-R in preview mode still lets the light gun work, but is more wildly off: it doesn't register in half the screen, and is thrown off vertically.
The RetroTINK SCART-to-YCbCr converter only throws light guns off about as much as a VCR does.
By the way, how does one "frame advance" and "frame retreat" in Twitch playback on the Android app? I'd like to do a frame-level analysis on Twitch.
If I am right, this is the grocery-list-style order of operations on a CRT TV.
Step 1: Get the sync signal to start at the top of the screen.
Step 2: Draw a line from left to right as one continuous, gradually changing, analog-colored gradient.
Step 3: Do a carriage return to start the next line.
Step 4: Repeat steps 2 and 3 until the end of the frame.
Step 5: Either send a new sync signal (if 240p60), or, if 480i30, send a new field signal and a new sync signal on alternating passes.
Each field takes 16.7 milliseconds, assuming 60 Hz (p60 or i30).
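The steps above can be put to numbers using the standard NTSC figures (roughly 15,734 lines per second and 525 total lines per interlaced frame; these are textbook values, not measurements from my setup):

```python
# Rough NTSC field timing from the standard horizontal scan rate.

H_RATE = 15734.26          # NTSC horizontal rate, lines per second
LINES_PER_FRAME = 525      # total lines per frame, including blanking
FIELDS_PER_FRAME = 2       # 480i interlace: odd field, then even field

line_us = 1e6 / H_RATE                                   # ~63.6 µs per line
field_ms = (LINES_PER_FRAME / FIELDS_PER_FRAME) * line_us / 1000

print(round(line_us, 1), round(field_ms, 1))  # 63.6 16.7
```

So the 16.7 ms figure is just 262.5 scanlines' worth of "carriage returns" in a row: the CRT spends essentially the whole field period physically drawing.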
The color divisions within a row are continuous yet the color divisions within a column are discrete.
I don't know much about digital TV, but I assume it draws the whole frame into a frame buffer and then broadcasts it. Most of the data it receives would be delta frames, i.e., difference frames between that frame and the last, and some could be delta pixels, meaning pixels described in terms of the previous pixel rather than as an absolute analog value. That's how you save bandwidth.
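A toy sketch of that delta-frame idea (the pixel values are made up, and real codecs are vastly more sophisticated, but the bandwidth saving works the same way):

```python
# Toy delta (difference) frames: transmit only the pixels that changed
# relative to the previous frame, keyed by position.

def delta_encode(prev, curr):
    """Return {index: new_value} for every pixel that changed."""
    return {i: c for i, (p, c) in enumerate(zip(prev, curr)) if p != c}

def delta_decode(prev, delta):
    """Rebuild the current frame from the previous frame plus the delta."""
    out = list(prev)
    for i, v in delta.items():
        out[i] = v
    return out

prev = [10, 10, 10, 10]       # hypothetical 4-pixel "frame"
curr = [10, 12, 10, 10]       # only one pixel changed

d = delta_encode(prev, curr)  # only one pixel needs transmitting
assert delta_decode(prev, d) == curr
```

With mostly static scenes, the delta is tiny compared to a full frame, which is exactly why a digital frame's data footprint can be much smaller than its analog scan-time equivalent.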
I know there are video bandwidth limits to stay within for live video. Even with our pathetic 1.5 Mbps DSL, the movie sometimes keeps playing, and only disconnects well after our internet has gone out, because lots of the movie was buffered between now and later. Of course, we have to keep it at 480i to assure that.
A movie you could phone in ahead, because it's predetermined and loads faster than it plays. If a video game did that, you would question the meaning of life, or at least the meaning of a game, if everything is predetermined.
That's the reason there are alternate signals carried on the same digital carrier: for TV to make sense, the video must either be live, or else we get into really big, tough existential questions about the true meaning of video games, displays, and fate. Arguably, most TV is not used for live content. But for a video game to be of any worth, it must be live.
With smaller data footprints (assuming the same time stamp), does that mean a digital signal could be transmitted, stored, processed, and actually fully displayed in under 16 milliseconds, the natural, organic time it takes to display one field in an i30 or p60 game?
I am putting this up for discussion, though I believe what I said is true. If a TN monitor shows a complete p60 picture in an 8 ms instant, complete draw time, then maybe I have to think about this. Maybe greater data-footprint savings account for faster complete frames versus live CRT draw time.
Does anyone have a breakdown of how an analog TV versus a digital TV processes and displays the screen within the 16 milliseconds of one frame? If the smaller data packet arrives faster than 16 milliseconds of live spooling, then it is possible to display the whole thing faster than 16 milliseconds, the natural length of an analog CRT field.
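As a back-of-envelope sketch of that question, with every number assumed (a 2 Mbps stream, a 10 Mbps link, and an 8 ms full-panel draw time like a fast TN monitor; decode time is ignored), the budget could look like:

```python
# Can a compressed frame arrive and be fully drawn in less time than
# the 16.7 ms a CRT spends physically scanning one field?

FIELD_MS = 1000 / 60            # natural CRT field time ≈ 16.7 ms

bitrate_bps = 2_000_000         # assumed average stream bitrate (2 Mbps)
frame_bits = bitrate_bps / 60   # average bits in one 60 Hz frame's share
link_bps = 10_000_000           # assumed link speed (10 Mbps)

transfer_ms = frame_bits / link_bps * 1000   # time to receive the frame
draw_ms = 8.0                   # assumed TN panel full-draw time

total_ms = transfer_ms + draw_ms
print(round(transfer_ms, 1), total_ms < FIELD_MS)  # 3.3 True
```

Under these assumed numbers the answer comes out yes, which would support the idea that smaller data footprints can beat live analog scan time; with a slower link or a heavier codec the budget flips the other way.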