/* RetroArch - A frontend for libretro.
 * Copyright (C) 2010-2014 - Hans-Kristian Arntzen
 * Copyright (C) 2011-2016 - Daniel De Matteis
 *
 * RetroArch is free software: you can redistribute it and/or modify it under the terms
 * of the GNU General Public License as published by the Free Software Foundation,
 * either version 3 of the License, or (at your option) any later version.
 *
 * RetroArch is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
 * without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
 * PURPOSE. See the GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License along with RetroArch.
 * If not, see <http://www.gnu.org/licenses/>.
 */

#ifndef __RARCH_NETPLAY_PRIVATE_H
#define __RARCH_NETPLAY_PRIVATE_H

#include "netplay.h"

#include <net/net_compat.h>
#include <retro_endianness.h>

#include "../../core.h"
#include "../../msg_hash.h"
#include "../../verbosity.h"

#ifdef ANDROID
#define HAVE_IPV6
#endif

#define WORDS_PER_FRAME 4 /* Allows us to send 128 bits worth of state per frame. */
#define MAX_SPECTATORS 16
#define RARCH_DEFAULT_PORT 55435
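
/* Sizing note (a sketch of the arithmetic above, not new protocol detail):
 * WORDS_PER_FRAME is counted in uint32_t words, so 4 words * 32 bits
 * = 128 bits of state per frame. The per-frame input arrays below hold
 * WORDS_PER_FRAME - 1 words, which suggests one word of each frame's
 * payload is reserved for bookkeeping (e.g. the frame number) rather
 * than input. */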

#define NETPLAY_PROTOCOL_VERSION 1

#define PREV_PTR(x) ((x) == 0 ? netplay->buffer_size - 1 : (x) - 1)
#define NEXT_PTR(x) (((x) + 1) % netplay->buffer_size)
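
/* Wraparound sketch: with netplay->buffer_size == 8,
 *
 *    NEXT_PTR(7) -> 0   (wraps forward past the end of the ring)
 *    PREV_PTR(0) -> 7   (wraps backward past the start)
 *    NEXT_PTR(3) -> 4 and PREV_PTR(3) -> 2 everywhere else
 */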

struct delta_frame
{
   uint32_t frame;

   void *state;

   uint32_t real_input_state[WORDS_PER_FRAME - 1];
   uint32_t simulated_input_state[WORDS_PER_FRAME - 1];
   uint32_t self_state[WORDS_PER_FRAME - 1];

   /* Have we read local input? */
   bool have_local;

   /* Badly named: this is !have_real(_remote). */
   bool is_simulated;

   /* Is the current state as of self_frame_count using the real data? */
   bool used_real;
};
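
/* Usage sketch (hedged; the real logic lives in the .c files): given the
 * is_simulated flag above, consumers presumably pick the input source
 * per delta along these lines:
 *
 *    const uint32_t *input = delta->is_simulated
 *          ? delta->simulated_input_state
 *          : delta->real_input_state;
 */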

struct netplay_callbacks {
   void (*pre_frame) (netplay_t *netplay);
   void (*post_frame)(netplay_t *netplay);
   bool (*info_cb)   (netplay_t *netplay, unsigned frames);
};
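
/* Call-order sketch (hedged): the active mode's callbacks presumably
 * bracket each emulated frame, along the lines of
 *
 *    netplay->net_cbs->pre_frame(netplay);   // poll/sync before running
 *    core_run();                             // run one core frame
 *    netplay->net_cbs->post_frame(netplay);  // send input, replay if needed
 */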

enum rarch_netplay_stall_reasons
{
   RARCH_NETPLAY_STALL_NONE = 0,
   RARCH_NETPLAY_STALL_RUNNING_FAST
};
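
/* Hedged sketch of when the stall latches (MAX_READAHEAD is illustrative;
 * the actual threshold lives in the implementation):
 *
 *    if (netplay->self_frame_count > netplay->read_frame_count + MAX_READAHEAD)
 *       netplay->stall = RARCH_NETPLAY_STALL_RUNNING_FAST;
 */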

struct netplay
{
   char nick[32];
   char other_nick[32];
   struct sockaddr_storage other_addr;

   struct retro_callbacks cbs;

   /* TCP connection for state sending, etc. Also used for commands. */
   int fd;
   /* UDP connection for game state updates. */
   int udp_fd;
   /* Which port is governed by netplay (other user)? */
   unsigned port;
   bool has_connection;

   struct delta_frame *buffer;
   size_t buffer_size;

   /* Pointer to where we are now. */
   size_t self_ptr;
   /* Points to the last reliable state that self ever had. */
   size_t other_ptr;
   /* Pointer to where we are reading.
    * Generally, other_ptr <= read_ptr <= self_ptr. */
   size_t read_ptr;
   /* A pointer used temporarily for replay. */
   size_t replay_ptr;

   size_t state_size;

   /* Are we replaying old frames? */
   bool is_replay;
   /* We don't want to poll several times on a frame. */
   bool can_poll;
   /* If we end up having to drop remote frame data because it's ahead
    * of us, fast-forwarding is urgent. */
   bool must_fast_forward;

   /* A buffer for outgoing input packets. */
   uint32_t packet_buffer[2 + WORDS_PER_FRAME];
   uint32_t self_frame_count;
   uint32_t read_frame_count;
   uint32_t other_frame_count;
   uint32_t replay_frame_count;

   struct addrinfo *addr;
   struct sockaddr_storage their_addr;
   bool has_client_addr;

   unsigned timeout_cnt;

   /* Spectating. */
   struct {
      bool enabled;
      int fds[MAX_SPECTATORS];
      uint16_t *input;
      size_t input_ptr;
      size_t input_sz;
   } spectate;

   bool is_server;

   /* User flipping.
    * Flipping state: if ptr >= flip_frame, we apply the flip;
    * if not, we apply the opposite, effectively creating a trigger point.
    * To avoid collision we need to make sure our client/host is synced up
    * well after flip_frame before allowing another flip. */
   bool flip;
   uint32_t flip_frame;

   /* Netplay pausing. */
   bool pause;
   uint32_t pause_frame;

   /* And stalling. */
   int stall;

   struct netplay_callbacks* net_cbs;
};
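
/* Invariant sketch: the three heads normally satisfy
 *
 *    other_frame_count <= read_frame_count <= self_frame_count
 *
 * and checks are best expressed on the frame counts rather than on the
 * ring pointers, since pointer order is meaningless once the ring wraps. */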

extern void *netplay_data;

struct netplay_callbacks* netplay_get_cbs_net(void);

struct netplay_callbacks* netplay_get_cbs_spectate(void);

void netplay_log_connection(const struct sockaddr_storage *their_addr,
      unsigned slot, const char *nick);

bool netplay_get_nickname(netplay_t *netplay, int fd);

bool netplay_send_nickname(netplay_t *netplay, int fd);

bool netplay_send_info(netplay_t *netplay);

uint32_t *netplay_bsv_header_generate(size_t *size, uint32_t magic);

bool netplay_bsv_parse_header(const uint32_t *header, uint32_t magic);

uint32_t netplay_impl_magic(void);

bool netplay_get_info(netplay_t *netplay);

bool netplay_is_server(netplay_t* netplay);

bool netplay_is_spectate(netplay_t* netplay);

bool netplay_delta_frame_ready(netplay_t *netplay,
      struct delta_frame *delta, uint32_t frame);
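
/* Hedged usage sketch for netplay_delta_frame_ready(): before writing
 * input for a new frame, callers presumably claim the ring slot first:
 *
 *    struct delta_frame *delta = &netplay->buffer[netplay->self_ptr];
 *    if (!netplay_delta_frame_ready(netplay, delta, netplay->self_frame_count))
 *       return false;  // slot still holds unconsumed data
 */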

#endif
|