The Physics and Physiology of Cloud Gaming



I have often witnessed conversations online where gamers dismiss Cloud Gaming (and various platforms/services) because the lag/latency must surely be unbearable… because… “Physics.” Apparently, the poor blundering fools behind these services just forgot to check the fundamental laws of the universe, which make cloud gaming infeasible, before they launched their products.

I don’t often do this, but it’s time to whip out the Physics Ph.D. card to demand some authority on this subject. Let’s see what “Physics” actually tells us about the latency limits of cloud gaming.

In a practical sense, optimizing the transfer of information from point A to point B involves complex networking: minimizing hops, managing congestion, ensuring adequate bandwidth at each leg, and so on. This is squarely in Google’s wheelhouse and something their infrastructure team is always improving. But this isn’t what many appear to question; what they assert is that it is fundamentally impossible (in a Physics sense) for a game to appear responsive on a screen in your house when streamed from a box outside that house.

Let’s analyze this by looking at a worst case scenario.

Worst Case Scenario: New York to LA


Let’s imagine you are in Los Angeles, and the lazy buggers at Microsoft set you up with an Xbox Cloud server in New York. You are playing Fortnite, and an enemy pops up on the screen from behind an object. How long does Physics say it will take you to receive that information, and then have your button press register back on the server to credit the headshot you so rightly deserve?

Well, if the information is being carried by a person on a flight, Google can tell you that it will take a bit over 5 hours each way (and $452) to travel the 2,451 miles. However, data transfers these days (you know, the days since the invention of the telegraph) typically travel via electrical or optical signals, which propagate at or near the speed of light.

Hiding that Ph.D. card again for a second while I use an unholy unit of velocity (miles per second): the speed of light is about 186,000 mi/s. That means it takes light 2,451 mi / 186,000 mi/s = 0.013 seconds, or 13 ms (milliseconds), to make the trip from New York to LA. In other words, the round trip takes light only 26 ms!
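
If you want to sanity-check that arithmetic yourself, here is a quick Python sketch using the same figures quoted above:

# Light travel time, LA <-> New York, using the figures above.
SPEED_OF_LIGHT_MI_S = 186_000  # speed of light, in miles per second
DISTANCE_MI = 2_451            # approximate LA-to-NY distance, in miles

one_way_ms = DISTANCE_MI / SPEED_OF_LIGHT_MI_S * 1_000
print(f"One way:    {one_way_ms:.1f} ms")      # ~13.2 ms
print(f"Round trip: {2 * one_way_ms:.1f} ms")  # ~26.4 ms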

Thus, even if you were cloud gaming from a server an entire continent away, the only additional latency you are going to feel because of Physics limitations is ~26 ms. We’ll put this number into context in a second.

As an aside though, in practice your ping time to a server this far away probably won’t hit this Physics limit because, at each hop in its route, your messages spend time on a switch or router, each adding a small amount of additional latency. But, reality check: your server is going to be a lot closer to you than in this worst-case scenario. Ping a geographically close cloud server location (e.g. AWS) and the round-trip time can be as low as 10 ms in practice, with Wi-Fi bands versus Ethernet potentially making up a significant part of the difference.
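
If you’d like to check your own numbers, here is a minimal Python sketch that estimates round-trip time from a TCP connect; the hostname is a placeholder, so substitute a cloud endpoint near you. (A TCP handshake completes after the SYN/SYN-ACK exchange, so this is roughly one network round trip, slightly overstated.)

import socket
import time

def tcp_rtt_ms(host: str, port: int = 443) -> float:
    # Connect completes after SYN/SYN-ACK: roughly one round trip.
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1_000

# Placeholder host -- replace with a geographically close cloud endpoint.
print(f"RTT: {tcp_rtt_ms('example.com'):.1f} ms")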

Let’s now put this input lag coming from using a remote server into context for a second.

Other sources of “input lag”

What else adds to the input lag, and how much is normal?

If you are using a wireless controller (and pretty much everyone is these days), that adds its own 5-10 ms of latency. Then there is the time it takes to compute the response to the button push in the game simulation, and the time to render a frame that includes that response (~17 ms at 60 FPS or ~33 ms at 30 FPS).

Wikipedia says:

It appears that (excluding the monitor/television display lag) 133 ms is an average (game) response time and the most sensitive games (fighting games, first person shooters and rhythm games) achieve response times of 67 ms (excluding display lag)

There is then the time it takes for the display to actually get the image out to you. Display lag varies from about 10-50 ms if you are in the best mode for gaming. And, if you forget to turn on game mode, you could be adding tens of milliseconds more.

At best, you are looking at 80-100 ms of response time (input lag) and, on average, 150-200 ms.

The extra ~10 ms (or even 30-40 ms for the worst-case LA-to-NY scenario, once you add routing overhead) is probably in the noise.
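
To make “in the noise” concrete, here is a back-of-envelope Python tally of the rough estimates quoted above (these are the article’s ballpark figures, not measurements):

# Rough input-lag budget, in milliseconds, from the estimates above.
controller = (5, 10)   # wireless controller latency
game = (67, 133)       # game response time (sim + render), per the Wikipedia quote
display = (10, 50)     # display lag with game mode on

best = controller[0] + game[0] + display[0]    # 82 ms
worst = controller[1] + game[1] + display[1]   # 193 ms
print(f"Local baseline:  {best}-{worst} ms")

cloud_rt = 26  # worst-case LA<->NY speed-of-light round trip
print(f"Plus cloud trip: {best + cloud_rt}-{worst + cloud_rt} ms")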

But!! None of this is the biggest source of latency in the Fortnite scenario above…

The biggest source of latency is you!

When the enemy pops up at you from behind the barrel, it’s your own reaction time that is really the limiting factor! Average human reaction times to visual stimuli are reported to be 200-250 ms depending on the experiment (example source).

If you think a 10-20 ms difference in latency really changes your gaming ability, you are either fooling yourself or you are superhuman. Frankly, this goes for the ~16.7 ms frame-time difference between 30 FPS and 60 FPS as well.

“But, your scenario is special, Jack,” you say? “What about enemies who are running across the screen? We can prepare and time our shot in that scenario.”

Yes, yes you can! And, to do that, you have had to get yourself accustomed to the game’s own inherent timing. This includes the response time we discussed above, whether the game shoots immediately or animates a trigger/arrow/knife action, and whether you have to adjust your timing for distance, etc. In short, you’re going to have to get used to each game’s unique timings no matter what setup you have.

But, what’s this about “Negative Latency”? Surely that isn’t allowed by Physics, right?

Negative Latency was a marketing term that anti-Stadia community members decided to jump on and make fun of for their own nefarious purposes. However, it refers to real, well-proven techniques in the computing industry – speculative execution, for example – that are used by just about every modern system to accelerate computing workloads.

As a thought example: if it is likely a player might press a button within a given window of time, a CPU can simulate both game outcomes – button pressed and button not pressed – concurrently. When the signal for the button press actually arrives, the game can immediately commit to the branch of execution where the button was assumed pressed.
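
To illustrate with a toy Python sketch (the game state and step function here are hypothetical stand-ins; real engines use far more sophisticated prediction and rollback):

import copy

def advance(state: dict, button_pressed: bool) -> dict:
    # Hypothetical step function: advance the simulation one tick.
    nxt = copy.deepcopy(state)
    nxt["tick"] += 1
    if button_pressed:
        nxt["shots_fired"] += 1
    return nxt

state = {"tick": 0, "shots_fired": 0}

# Speculate: compute both outcomes before the input actually arrives...
pressed = advance(state, button_pressed=True)
not_pressed = advance(state, button_pressed=False)

# ...then commit instantly once the real input is known, hiding the
# simulation cost from the player.
button_arrived = True  # e.g. the press packet just came in
state = pressed if button_arrived else not_pressed
print(state)  # {'tick': 1, 'shots_fired': 1}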

This is just one technique that can reduce a gamer’s actual experienced latency. At the end of the day, with optimized networking, performant cloud CPU/GPU hardware, drivers, and a tuned software stack, it is absolutely possible for a user to have a more responsive gaming experience with cloud gaming than playing the game locally – particularly if the local hardware isn’t as performant. Try playing a game on GeForce Now’s 3080 tier (the current cloud fidelity leader) if you don’t believe me.

We hope this helps you understand the Physics and Physiology at play in your cloud gaming experience. Sorry to say, those missed shots probably have more to do with your own limits and skills than cloud gaming tech. We hope you still appreciate the data- and science-driven approach we take here at Cloud Dosage, though.



Jack Deslippe

Jack Deslippe is an HPC professional with a PhD in Physics from the University of California, Berkeley. As a hobby, he is passionate about consumer technology and Cloud Gaming in particular. He volunteers as an editor for Cloud Dosage in his spare time. See the games Jack is Playing at ExoPhase. Like his content? You can follow Jack on Threads: @jackdeslippe and Buy Jack a Beer.
