The SRAM transfer in the netplay handshake now uses autosave_lock and
autosave_unlock. This may fix a hang/crash bug on Android where netplay
and autosave conflict over the SRAM buffer.
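Roughly, the shape of the change (autosave_lock/autosave_unlock are the
calls named above; netplay_send_sram and netplay_send_raw are
hypothetical stand-ins for the real handshake code):

    #include <stdbool.h>
    #include <stddef.h>

    /* The real RetroArch calls, declared here for the sketch. */
    void autosave_lock(void);
    void autosave_unlock(void);

    /* Hypothetical sender standing in for the handshake code. */
    bool netplay_send_raw(void *conn, const void *buf, size_t len);

    /* Sketch: hold the autosave lock while netplay reads SRAM, so the
     * autosave thread can't write the same buffer mid-transfer. */
    static bool netplay_send_sram(void *conn, const void *sram,
          size_t size)
    {
       bool ok;
       autosave_lock();   /* block the autosave thread */
       ok = netplay_send_raw(conn, sram, size);
       autosave_unlock(); /* let autosave resume */
       return ok;
    }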
Previously, if two clients were connected to the same server and one of
them was ahead of the server, the only way to rectify that situation was
for the client to get so far ahead that it stalled, as the server could
only catch up with an ahead client if all clients were ahead. That's
unrealistic. This gives the server the alternative of demanding that a
client stall. This keeps things nicely in line even with >2 players.
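A minimal sketch of the server side of that idea; the command name
NETPLAY_CMD_STALL, its value, the threshold, and the surrounding names
are all assumptions for illustration, not the real wire format:

    #include <stdint.h>
    #include <stddef.h>

    #define NETPLAY_CMD_STALL 0x0B /* placeholder value for the sketch */
    #define AHEAD_LIMIT       10   /* hypothetical tolerance, in frames */

    struct connection /* trimmed stand-in for real per-client state */
    {
       uint32_t frame; /* newest frame this client has reported */
    };

    /* Hypothetical command sender. */
    void send_cmd(struct connection *conn, uint32_t cmd,
          const void *payload, size_t len);

    /* Sketch: if a client has run too far ahead, demand that it stall
     * until the rest of the session catches up. */
    static void check_client_frame(struct connection *conn,
          uint32_t server_frame)
    {
       if (conn->frame > server_frame + AHEAD_LIMIT)
       {
          uint32_t frames = conn->frame - server_frame;
          send_cmd(conn, NETPLAY_CMD_STALL, &frames, sizeof(frames));
       }
    }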
The quirks protocol let a core report a variable savestate size, only
for the frontend to then tell it "no". Netplay should therefore accept
the variable-size quirk, since RetroArch refuses to let cores actually
produce variable-size states.
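For illustration, RETRO_SERIALIZATION_QUIRK_CORE_VARIABLE_SIZE is the
real libretro flag; the check itself, and treating every other quirk as
disqualifying, are assumptions in this sketch:

    #include <stdint.h>
    #include <stdbool.h>
    #include "libretro.h" /* for RETRO_SERIALIZATION_QUIRK_* */

    /* Sketch: a peer that merely reported the variable-size quirk is
     * still compatible, because the frontend denies that capability
     * anyway. */
    static bool quirks_ok(uint64_t quirks)
    {
       quirks &= ~(uint64_t)RETRO_SERIALIZATION_QUIRK_CORE_VARIABLE_SIZE;
       return quirks == 0;
    }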
In the previous catch-up system, we would only try to catch up if we
were falling behind the farthest-behind peer. However, as they would
also only try to catch up to us, everyone basically agreed to the
worst-case latency. It makes more sense to try to be in parity with your
direct peer than with indirect connections.
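A sketch of the difference (all names hypothetical): compare against
the frame of the peer you actually talk to, not the minimum across
every known player.

    #include <stdint.h>
    #include <stdbool.h>

    /* Sketch: catch up relative to the directly connected peer. */
    static bool should_catch_up(uint32_t self_frame,
          uint32_t direct_peer_frame, uint32_t farthest_behind_frame)
    {
       (void)farthest_behind_frame; /* the old criterion; comparing
                                       against it meant everyone agreed
                                       to the worst-case latency */
       return self_frame < direct_peer_frame;
    }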
Previously, we could be stalled on one player while still reading data
from another, which would wedge the client because we never acted upon
the newly-read data. Now we act upon data even if we're stalled.
Fixes bugs in initial connection with high latency.
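The shape of the fix, as a hedged sketch (the loop structure and every
name here are illustrative): incoming data is processed on every
iteration, and the stall gates only the frame advance.

    #include <stdbool.h>

    typedef struct netplay /* trimmed stand-in for the real netplay_t */
    {
       bool stalled;
    } netplay_t;

    void poll_and_apply_input(netplay_t *netplay); /* hypothetical */
    void run_one_frame(netplay_t *netplay);        /* hypothetical */

    static void netplay_iterate(netplay_t *netplay)
    {
       poll_and_apply_input(netplay); /* act on data even when stalled */
       if (netplay->stalled)
          return;                     /* hold back only the simulation */
       run_one_frame(netplay);
    }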
The netplay handshake protocol now sends the core and content as an
explicit command, so that the other side can (notionally) choose to
load it. That loading isn't implemented, of course.
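A sketch of what such a command might carry; the struct name, field
sizes, and layout are guesses for illustration, not the real payload:

    #include <stdint.h>

    /* Sketch: the handshake announces what the sender is running so
     * the other side could, in principle, load the same thing. */
    struct netplay_info_payload
    {
       char     core_name[32];
       char     core_version[32];
       uint32_t content_crc; /* identifies the loaded content */
    };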
The idea:
* Use a fixed number of delay_frames (eventually to be fixed at 120;
currently this still uses the config variable, and 0 will remain an
option)
* Determine how long it takes to simulate a frame.
* Stall only if resimulating the intervening frames would be
sufficiently annoying (currently fixed at three frames' worth of
time); see the sketch after this list
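In code, the stall test from the list above might look like this (the
3-frame budget is from the list; the names and everything else are
illustrative, not the real implementation):

    #include <stdint.h>
    #include <stdbool.h>

    /* Sketch: stall only when replaying the backlog would cost more
     * than three real frames' worth of time. */
    static bool should_stall(uint32_t frames_behind,
          uint64_t sim_usec_per_frame, /* measured cost per frame */
          uint64_t frame_usec)         /* real frame length, ~16667 at 60fps */
    {
       uint64_t resim_cost = (uint64_t)frames_behind * sim_usec_per_frame;
       return resim_cost > 3 * frame_usec; /* too annoying: stall instead */
    }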
Because clients always try to catch up, the actual frame delay works
out automatically to be at minimum zero and at maximum the latency. If
one client is underpowered but the other is fine, the powerful one
automatically takes up the slack. This seems like the most reasonable
system.
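For example (illustrative numbers): at 60fps a frame is about 16.7ms,
so with 50ms of latency the effective input delay floats between 0 and
roughly 3 frames, landing wherever the catch-up logic balances the two
machines.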