/* RetroArch - A frontend for libretro.
 * Copyright (C) 2010-2014 - Hans-Kristian Arntzen
 * Copyright (C) 2011-2021 - Daniel De Matteis
 * Copyright (C) 2016-2017 - Gregor Richards
 * Copyright (C) 2021-2021 - Roberto V. Rampim
 *
 * RetroArch is free software: you can redistribute it and/or modify it under the terms
 * of the GNU General Public License as published by the Free Software Found-
 * ation, either version 3 of the License, or (at your option) any later version.
 *
 * RetroArch is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
 * without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
 * PURPOSE. See the GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License along with RetroArch.
 * If not, see <http://www.gnu.org/licenses/>.
 */

/* Netplay protocol notes (from "Netplay Stuff", #13375)
 *
 * PROTOCOL FALLBACK
 * To support older clients, a protocol fallback system was introduced.
 * The host no longer sends its header automatically once a TCP connection is
 * established; instead, it waits for the client's header before deciding which
 * protocol the connection will operate on.
 * Netplay now has two protocols: a low protocol, the minimum it supports, and
 * a high protocol, the highest it can operate on.
 * To fully support older clients, a hack was necessary: the client sends its
 * high protocol in the otherwise unused salt field of its header, while the
 * protocol field stays at the low protocol. Without this hack, older clients
 * could only be supported when the newer client was the host.
 * Any future system can make use of this by checking
 * connection->netplay_protocol, which is available for both client and host.
 *
 * NETPLAY CHAT
 * Starting with protocol 6, netplay chat is available through the new
 * NETPLAY_CMD_PLAYER_CHAT command. Because the command code disconnects on
 * unknown commands, chat is not possible on protocol 5: such connections can
 * neither send nor receive chat, but other netplay operations are unaffected.
 * Clients send chat as a string to the server, and it is the server's sole
 * responsibility to relay chat messages.
 * For now, sending chat uses RetroArch's input menu, while on-screen chat is
 * displayed through a widget overlay, falling back to RetroArch notifications.
 * If a new overlay and/or input system is desired, no backwards-compatibility
 * changes are needed.
 * Only clients in playing mode (as opposed to spectating mode) can send and
 * receive chat.
 *
 * SETTINGS SHARING
 * Some settings work best when host and clients share the same configuration.
 * As of protocol 6, the following settings are shared from host to clients
 * (without altering a client's configuration file): input latency frames and
 * allow pausing.
 *
 * NETPLAY TUNNEL/MITM
 * With the previous MITM system defunct (at least as of 1.9.X), a new system
 * was needed to solve most, if not all, of its problems. The new system uses a
 * tunneling approach, similar to most VPN and tunneling services.
 *
 * Tunnel commands:
 * RATS[unique id] (RetroArch Tunnel Session) - 16 bytes
 *   When sent with a zeroed unique id, the tunnel server interprets it as a
 *   netplay host requesting a new session and returns the same command
 *   carrying the host's unique session id. When a client wants to connect to
 *   a host, it sends this command with the host's session id, causing the
 *   tunnel server to send a RATL command to the host.
 * RATL[unique id] (RetroArch Tunnel Link) - 16 bytes
 *   Sent by the tunnel server to the host when a client wants to connect. The
 *   host then opens a new connection to the tunnel server and sends this
 *   command, together with the client's unique id, over that connection,
 *   causing the tunnel server to link it to the client's connection.
 * RATP (RetroArch Tunnel Ping) - 4 bytes
 *   Sent by the tunnel server to verify that the host owning a session is
 *   still around; the host replies with the same command. A session is closed
 *   if the tunnel server cannot verify that the host is alive.
 *
 * Operations:
 * Host   -> Instead of listening for and accepting connections, it connects to
 *           the tunnel server, requests a new session and monitors this
 *           connection for linking requests. Once a request is received, it
 *           opens a new connection to the tunnel server for linking with a
 *           client. The tunnel server's address and port are obtained by
 *           querying the lobby server, and the host publishes its session id
 *           together with the rest of its info to the lobby server.
 * Client -> Connects to the tunnel server and sends the session id of the host
 *           it wants to reach. A host's session id is obtained from the JSON
 *           data sent by the lobby server.
 *
 * Improvements over the previous MITM system:
 * - No more risk of TCP port exhaustion; only one port is used at the tunnel
 *   server.
 * - Very little CPU usage; roughly 95% network I/O bound now.
 * - Forward compatible with any changes to netplay, since no netplay logic
 *   runs on MITM servers.
 * - No longer operates the host in client mode, which was a source of many of
 *   the previous problems.
 * - Cleaner and more maintainable system and code.
 *
 * Notable functions:
 * netplay_mitm_query     -> Grabs the tunnel's address and port from the lobby
 *                           server.
 * init_tcp_socket        -> Handles creation and operation mode of the TCP
 *                           socket based on whether it's host, host+MITM or
 *                           client.
 * handle_mitm_connection -> Creates and completes linking connections and
 *                           replies to ping commands (only one of each per
 *                           call, to not affect performance).
 *
 * MISC
 * Ping Limiter: if a client's estimated latency to the server is higher than
 *   this value, the connection is dropped just before the netplay handshake
 *   finishes.
 * Ping Counter: a ping counter (similar to the FPS counter) can be shown in
 *   the bottom-right corner of the screen while connected to a host.
 * LAN Discovery: refactored and moved to its own "Refresh Netplay LAN List"
 *   button.
 *
 * (Illustrative, clearly-marked sketches related to these notes appear further
 * down in this file, next to the related macros and magic defines.)
 */

#if defined(_MSC_VER) && !defined(_XBOX)
#pragma comment(lib, "ws2_32")
#endif

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <string.h>

#include <boolean.h>
#include <retro_assert.h>

#include <compat/strl.h>
#include <net/net_compat.h>
#include <net/net_socket.h>
#include <net/net_http.h>
#include <encodings/crc32.h>
#include <encodings/base64.h>
#include <lrc_hash.h>
#include <retro_timers.h>
#include <math/float_minmax.h>
#include <string/stdstring.h>
#include <file/file_path.h>

#if defined(_WIN32) && defined(IP_MULTICAST_IF)
#include <iphlpapi.h>
#endif

#ifdef HAVE_DISCORD
#include "../discord.h"
#endif

#include "../../file_path_special.h"
#include "../../paths.h"
#include "../../content.h"

#ifdef HAVE_CONFIG_H
#include "../../config.h"
#endif

#include "../../autosave.h"
#include "../../configuration.h"
#include "../../command.h"
#include "../../content.h"
#include "../../driver.h"
#include "../../retroarch.h"
#include "../../version.h"
#include "../../verbosity.h"

#include "../../tasks/tasks_internal.h"

#include "../../input/input_driver.h"

#ifdef HAVE_MENU
#include "../../menu/menu_input.h"
#include "../../menu/menu_driver.h"
#endif

#ifdef HAVE_GFX_WIDGETS
#include "../../gfx/gfx_widgets.h"
#endif

#ifdef HAVE_DISCORD
#include "../discord.h"
#endif

#include "netplay.h"
#include "netplay_private.h"

#if defined(AF_INET6) && !defined(HAVE_SOCKET_LEGACY) && !defined(_3DS)
#define HAVE_INET6 1
#endif
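
/* Disable Nagle's algorithm on netplay TCP sockets so small input packets
 * are sent immediately; batching them would add jitter. */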
#ifdef TCP_NODELAY
#define SET_TCP_NODELAY(fd) \
   { \
      int on = 1; \
      if (setsockopt((fd), IPPROTO_TCP, TCP_NODELAY, \
            (const char *) &on, sizeof(on)) < 0) \
         RARCH_WARN("[Netplay] Could not set netplay TCP socket to nodelay. Expect jitter.\n"); \
   }
#else
#define SET_TCP_NODELAY(fd)
#endif
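
/* Mark netplay sockets close-on-exec so they are not leaked into child
 * processes spawned by the frontend. */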
#if defined(F_SETFD) && defined(FD_CLOEXEC)
#define SET_FD_CLOEXEC(fd) \
   if (fcntl((fd), F_SETFD, FD_CLOEXEC) < 0) \
      RARCH_WARN("[Netplay] Cannot set netplay port to close-on-exec. It may fail to reopen.\n");
#else
#define SET_FD_CLOEXEC(fd)
#endif
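
/* Pull (sz) bytes into (buf) from the connection's buffered receive stream.
 * A short read rewinds the buffer and returns true from the enclosing
 * function (wait for more data); a negative result runs the statement that
 * follows the macro invocation (the error path). */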
#define RECV(buf, sz) \
   recvd = netplay_recv(&connection->recv_packet_buffer, connection->fd, (buf), (sz), false); \
   if (recvd >= 0 && recvd < (ssize_t) (sz)) \
   { \
      netplay_recv_reset(&connection->recv_packet_buffer); \
      return true; \
   } \
   else if (recvd < 0)
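
/* Record the elapsed time since (connection)->ping_timer, in milliseconds,
 * keeping the lowest value observed so far as the connection's ping. */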
#define SET_PING(connection) \
   ping = (int32_t)((cpu_features_get_time_usec() - (connection)->ping_timer) / 1000); \
   if ((connection)->ping < 0 || ping < (connection)->ping) \
      (connection)->ping = ping;

/* Execute the following statement only if the peer speaks at least (version). */
#define REQUIRE_PROTOCOL_VERSION(connection, version) \
   if ((connection)->netplay_protocol >= (version))

/* Execute the following statement only if the peer's protocol lies within [vmin, vmax]. */
#define REQUIRE_PROTOCOL_RANGE(connection, vmin, vmax) \
   if ((connection)->netplay_protocol >= (vmin) && \
         (connection)->netplay_protocol <= (vmax))
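
/* Illustrative sketch only (not referenced elsewhere in this file): given the
 * low/high protocol pair each side advertises (see the protocol fallback notes
 * at the top of this file), pick the newest version both peers support, or 0
 * when the ranges do not overlap. The function name and placement are
 * examples, not the actual handshake code path. */
static uint32_t netplay_example_negotiate_protocol(
      uint32_t our_low,  uint32_t our_high,
      uint32_t peer_low, uint32_t peer_high)
{
   /* Prefer the newest protocol both ends can speak. */
   uint32_t candidate = (peer_high < our_high) ? peer_high : our_high;

   /* The two supported ranges do not overlap -> incompatible peer. */
   if (candidate < our_low || candidate < peer_low)
      return 0;

   return candidate;
}
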
#define NETPLAY_MAGIC 0x52414E50 /* RANP */
#define FULL_MAGIC    0x46554C4C /* FULL */
#define POKE_MAGIC    0x504F4B45 /* POKE */

/* Discovery magics */
#define DISCOVERY_QUERY_MAGIC    0x52414E51 /* RANQ */
#define DISCOVERY_RESPONSE_MAGIC 0x52414E53 /* RANS */

/* MITM magics */
#define MITM_SESSION_MAGIC 0x52415453 /* RATS */
#define MITM_LINK_MAGIC    0x5241544C /* RATL */
#define MITM_PING_MAGIC    0x52415450 /* RATP */
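
/* Illustrative sketch only (not referenced elsewhere in this file): building
 * the 16-byte RATS session request described in the tunnel notes at the top of
 * this file. A zeroed unique id asks the tunnel server to open a new session.
 * The exact layout beyond "4-byte magic followed by the unique id, 16 bytes in
 * total" is an assumption made for illustration. */
static size_t netplay_example_build_rats_request(uint8_t *buf, size_t len)
{
   uint32_t magic = htonl(MITM_SESSION_MAGIC);

   if (len < 16)
      return 0;

   memcpy(buf, &magic, sizeof(magic));                 /* "RATS" */
   memset(buf + sizeof(magic), 0, 16 - sizeof(magic)); /* zeroed unique id */

   return 16;
}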

struct vote_count
{
   uint16_t votes[32];
};

/* Handshake payload layouts; cmd[] carries the command id and payload size
 * pair that prefixes every netplay command. */
struct nick_buf_s
{
   uint32_t cmd[2];
   char nick[NETPLAY_NICK_LEN];
};

struct password_buf_s
{
   uint32_t cmd[2];
   char password[NETPLAY_PASS_HASH_LEN];
};

struct info_buf_s
{
   uint32_t cmd[2];
   uint32_t content_crc;
   char core_name[NETPLAY_NICK_LEN];
   char core_version[NETPLAY_NICK_LEN];
};
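
/* Illustrative sketch only (not referenced elsewhere in this file): the
 * command framing implied by the cmd[2] fields above is a pair of big-endian
 * 32-bit words, command id then payload size, followed by the payload. The
 * chat example assumes NETPLAY_CMD_PLAYER_CHAT (protocol 6+, see the notes at
 * the top of this file) is available from netplay_private.h; the helper name
 * and buffer handling are illustrative, not the code path netplay itself uses. */
static size_t netplay_example_frame_chat(uint8_t *buf, size_t len,
      const char *msg)
{
   uint32_t header[2];
   size_t   msg_len = strlen(msg);

   if (len < sizeof(header) + msg_len)
      return 0;

   header[0] = htonl(NETPLAY_CMD_PLAYER_CHAT); /* command id */
   header[1] = htonl((uint32_t)msg_len);       /* payload size */

   memcpy(buf, header, sizeof(header));
   memcpy(buf + sizeof(header), msg, msg_len);

   return sizeof(header) + msg_len;
}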

/* The keys supported by netplay. netplay_keys.h is an X-macro list: each
 * K()/KL() entry expands to one enum value here and one table entry below. */
enum netplay_keys
{
   NETPLAY_KEY_UNKNOWN = 0,
#define K(k) NETPLAY_KEY_ ## k,
#define KL(k,l) K(k)
#include "netplay_keys.h"
#undef KL
#undef K
   NETPLAY_KEY_LAST
};

/* The mapping of keys from netplay (network) to libretro (host) */
static const uint16_t netplay_key_ntoh_mapping[] = {
   (uint16_t) RETROK_UNKNOWN,
#define K(k) (uint16_t) RETROK_ ## k,
#define KL(k,l) (uint16_t) l,
#include "netplay_keys.h"
#undef KL
#undef K
   0
};
|
|
|
|
|
2021-11-06 00:27:33 +01:00
|
|
|
#define NETPLAY_KEY_NTOH(k) (netplay_key_ntoh_mapping[k])
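
/* How the K()/KL() trick above works (the real entries live in
 * netplay_keys.h and are not shown here; the two below are made up for
 * illustration). Assuming netplay_keys.h contains lines such as
 *
 *    K(BACKSPACE)
 *    KL(PAUSE, RETROK_PAUSE)
 *
 * the first #include expands them into enum entries (NETPLAY_KEY_BACKSPACE,
 * NETPLAY_KEY_PAUSE, ...) and the second expands the very same list into
 * table entries (RETROK_BACKSPACE, RETROK_PAUSE, ...). Because both lists
 * start with an UNKNOWN entry and expand the same file in the same order,
 * the indices line up and NETPLAY_KEY_NTOH() is a single array lookup. */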

static net_driver_state_t networking_driver_st = {0};

net_driver_state_t *networking_state_get_ptr(void)
{
   return &networking_driver_st;
}

#ifdef HAVE_SOCKET_LEGACY
#ifndef htons
/* The fact that I need to write this is deeply depressing */
static int16_t htons_for_morons(int16_t value)
{
   union {
      int32_t l;
      int16_t s[2];
   } val;
   val.l = htonl(value);
   return val.s[1];
}
#define htons htons_for_morons
#endif

#ifndef ntohs
static int16_t ntohs_for_morons(int16_t value)
{
   union {
      int32_t l;
      int16_t s[2];
   } val;
   val.l = ntohl(value);
   return val.s[1];
}
#define ntohs ntohs_for_morons
#endif
#endif
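
/* Note on the HAVE_SOCKET_LEGACY helpers above: some legacy socket stacks
 * only provide the 32-bit htonl()/ntohl() conversions, so the 16-bit ones
 * are synthesized by pushing the value through the 32-bit conversion inside
 * a union; the converted 16 bits always land in val.s[1], on both little-
 * and big-endian hosts. */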

#ifdef HAVE_NETPLAYDISCOVERY
/** Initialize Netplay discovery (client) */
bool init_netplay_discovery(void)
{
   struct addrinfo *addr = NULL;
   net_driver_state_t *net_st = &networking_driver_st;
   /* UDP socket used to broadcast discovery queries and receive replies. */
   int fd = socket_init(
      (void **) &addr, 0, NULL,
      SOCKET_TYPE_DATAGRAM);
   bool ret = fd >= 0 && addr;

   if (ret)
   {
#if defined(_WIN32) && defined(IP_MULTICAST_IF)
      /* Pick the interface Windows would use to reach the wider network and
       * use it for outgoing multicast/broadcast traffic (and as the bind
       * address below). */
      MIB_IPFORWARDROW ip_forward;

      if (GetBestRoute(0xDFFFFFFF, 0, &ip_forward) == NO_ERROR)
      {
         IF_INDEX index = ip_forward.dwForwardIfIndex;
         PMIB_IPADDRTABLE table = malloc(sizeof(MIB_IPADDRTABLE));

         if (table)
         {
            DWORD len = sizeof(*table);
            DWORD result = GetIpAddrTable(table, &len, FALSE);

            if (result == ERROR_INSUFFICIENT_BUFFER)
            {
               /* The table needs more room; grow it and query again. */
               PMIB_IPADDRTABLE new_table = realloc(table, len);

               if (new_table)
               {
                  table = new_table;
                  result = GetIpAddrTable(table, &len, FALSE);
               }
            }

            if (result == NO_ERROR)
            {
               DWORD i;

               for (i = 0; i < table->dwNumEntries; i++)
               {
                  PMIB_IPADDRROW ip_addr = &table->table[i];

                  if (ip_addr->dwIndex == index)
                  {
                     setsockopt(fd, IPPROTO_IP, IP_MULTICAST_IF,
                        (const char *) &ip_addr->dwAddr, sizeof(ip_addr->dwAddr));
                     ((struct sockaddr_in *) addr->ai_addr)->sin_addr.s_addr =
                        ip_addr->dwAddr;
                     break;
                  }
               }
            }

            free(table);
         }
      }
#endif

#if defined(SOL_SOCKET) && defined(SO_BROADCAST)
      /* Make it broadcastable */
      {
         int broadcast = 1;

         if (setsockopt(fd, SOL_SOCKET, SO_BROADCAST,
               (const char *) &broadcast, sizeof(broadcast)) < 0)
            RARCH_WARN("[Discovery] Failed to set netplay discovery port to broadcast.\n");
      }
#endif

      if (!socket_bind(fd, addr))
      {
         socket_close(fd);
         freeaddrinfo_retro(addr);

         net_st->lan_ad_client_fd = -1;

         return false;
      }

      net_st->lan_ad_client_fd = fd;
   }
   else
   {
      if (fd >= 0)
         socket_close(fd);

      net_st->lan_ad_client_fd = -1;

      RARCH_ERR("[Discovery] Failed to initialize netplay advertisement client socket.\n");
   }

   if (addr)
      freeaddrinfo_retro(addr);

   return ret;
}

/** Deinitialize Netplay discovery (client) */
void deinit_netplay_discovery(void)
{
   net_driver_state_t *net_st = &networking_driver_st;

   if (net_st->lan_ad_client_fd >= 0)
   {
      socket_close(net_st->lan_ad_client_fd);
      net_st->lan_ad_client_fd = -1;
   }
}

static bool netplay_lan_ad_client_query(void)
{
   char port[6];
   uint32_t header;
   struct addrinfo *addr = NULL;
   struct addrinfo hints = {0};
   net_driver_state_t *net_st = &networking_driver_st;
   bool ret = false;

   /* Get the broadcast address (IPv4 only for now) */
   snprintf(port, sizeof(port), "%hu",
      (uint16_t) RARCH_DEFAULT_PORT);
   hints.ai_family   = AF_INET;
   hints.ai_socktype = SOCK_DGRAM;
   if (getaddrinfo_retro("255.255.255.255", port, &hints, &addr))
      return ret;
   if (!addr)
      return ret;

   /* Put together the request */
   header = htonl(DISCOVERY_QUERY_MAGIC);

   /* And send it off */
   if (sendto(net_st->lan_ad_client_fd, (char *) &header, sizeof(header),
         0, addr->ai_addr, addr->ai_addrlen) == sizeof(header))
      ret = true;
   else
      RARCH_ERR("[Discovery] Failed to send netplay discovery query (error: %d).\n", errno);

   freeaddrinfo_retro(addr);

   return ret;
}
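
/* The response path below polls the same discovery socket: each loop
 * iteration waits up to 500 ms via socket_select() and only accepts
 * datagrams that are exactly sizeof(net_st->ad_packet_buffer) before
 * validating them. */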

static bool netplay_lan_ad_client_response(void)
{
   net_driver_state_t *net_st = &networking_driver_st;

   /* Check for any ad responses */
   for (;;)
   {
      fd_set fds;
      struct timeval tv = {0};
      struct sockaddr_storage their_addr = {0};
      socklen_t addr_size = sizeof(their_addr);

      FD_ZERO(&fds);
      FD_SET(net_st->lan_ad_client_fd, &fds);
      tv.tv_usec = 500000;
      if (socket_select(net_st->lan_ad_client_fd + 1,
            &fds, NULL, NULL, &tv) <= 0)
         break;

      /* Somebody replied, so check that it's valid */
      if (recvfrom(net_st->lan_ad_client_fd,
            (char *) &net_st->ad_packet_buffer,
            sizeof(net_st->ad_packet_buffer), 0,
            (struct sockaddr *) &their_addr,
            &addr_size) == sizeof(net_st->ad_packet_buffer))
      {
         char address[256];
         struct netplay_host *host = NULL;

         /* Make sure it's a valid response */
if (ntohl(net_st->ad_packet_buffer.header) != DISCOVERY_RESPONSE_MAGIC)
   continue;

/* And that we know how to handle it */
#ifdef HAVE_INET6
if (their_addr.ss_family != AF_INET)
   continue;
#endif

if (!netplay_is_lan_address((struct sockaddr_in *) &their_addr))
   continue;

#ifndef HAVE_SOCKET_LEGACY
if (getnameinfo((struct sockaddr *) &their_addr, sizeof(their_addr),
      address, sizeof(address), NULL, 0, NI_NUMERICHOST))
   continue;
#else
/* We need to convert the address manually */
{
   uint8_t *addr8 = (uint8_t *) &((struct sockaddr_in *) &their_addr)->sin_addr;

   snprintf(address, sizeof(address), "%d.%d.%d.%d",
      (int)addr8[0], (int)addr8[1], (int)addr8[2], (int)addr8[3]);
}
#endif

/* Allocate space for it */
if (net_st->discovered_hosts.size >= net_st->discovered_hosts_allocated)
{
   size_t allocated               = net_st->discovered_hosts_allocated;
   struct netplay_host *new_hosts = NULL;

   if (allocated == 0)
      allocated = 2;
   else
      allocated *= 2;

   if (net_st->discovered_hosts.hosts)
      new_hosts = (struct netplay_host *) realloc(
         net_st->discovered_hosts.hosts,
         allocated * sizeof(struct netplay_host));
   else
      /* Should be equivalent to realloc,
       * but I don't trust screwy libcs */
      new_hosts = (struct netplay_host *)
         malloc(allocated * sizeof(struct netplay_host));

   if (!new_hosts)
      return false;

   net_st->discovered_hosts.hosts     = new_hosts;
   net_st->discovered_hosts_allocated = allocated;
}

/* Get our host structure */
host = &net_st->discovered_hosts.hosts[net_st->discovered_hosts.size++];

/* Copy in the response */
host->content_crc = ntohl(net_st->ad_packet_buffer.content_crc);
host->port        = ntohl(net_st->ad_packet_buffer.port);
strlcpy(host->address,
   address, sizeof(host->address));
strlcpy(host->nick,
   net_st->ad_packet_buffer.nick, sizeof(host->nick));
strlcpy(host->frontend,
   net_st->ad_packet_buffer.frontend, sizeof(host->frontend));
strlcpy(host->core,
   net_st->ad_packet_buffer.core, sizeof(host->core));
strlcpy(host->core_version,
   net_st->ad_packet_buffer.core_version, sizeof(host->core_version));
strlcpy(host->retroarch_version,
   net_st->ad_packet_buffer.retroarch_version, sizeof(host->retroarch_version));
strlcpy(host->content,
Host -> Instead of listening and accepting connections, it connects to the tunnel server, requests a new session and then monitor this connection for new linking requests. Once a request is received, it establishes a new connection to the tunnel server for linking with a client. The tunnel server's address and port are obtained by querying the lobby server. The host will publish its session id together with the rest of its info to the lobby server.
Client -> It connects to the tunnel server and then sends the session id of the host it wants to connect to. A host's session id is obtained from the json data sent by the lobby server.
Improvements (from current MITM system):
No longer a risk of TCP port exhaustion; we only use one port now at the tunnel server.
Very little cpu usage. About 95% net I/O bound now.
Future backwards compatible with any and all changes to netplay as it no longer runs any netplay logic at MITM servers.
No longer operates the host in client mode, which was a source of many of the current problems.
Cleaner and more maintainable system and code.
Notable functions:
netplay_mitm_query -> Grabs the tunnel's address and port from the lobby server.
init_tcp_socket -> Handles the creation and operation mode of the TCP socket based on whether it's host, host+MITM or client.
handle_mitm_connection -> Creates and completes linking connections and replies to ping commands (only 1 of each per call to not affect performance).
## MISC
Ping Limiter: If a client's estimated latency to the server is higher than this value, connection will be dropped just before finishing the netplay handshake.
Ping Counter: A ping counter (similar to the FPS one) can be shown in the bottom right corner of the screen, if you are connected to a host.
LAN Discovery: Refactored and moved to its own "Refresh Netplay LAN List" button.
## FIXES
Many minor fixes to the current netplay implementation are also included.
* Remove NETPLAY_TEST_BUILD
2021-12-19 12:58:01 -03:00
|
|
|
            net_st->ad_packet_buffer.content,
            sizeof(host->content));
         strlcpy(host->subsystem_name,
            net_st->ad_packet_buffer.subsystem_name,
            sizeof(host->subsystem_name));

         /* ad_packet_buffer.has_password: bit 0 = play password set,
          * bit 1 = spectate password set. */
         if (net_st->ad_packet_buffer.has_password & 1)
            host->has_password = true;
         else
            host->has_password = false;
         if (net_st->ad_packet_buffer.has_password & 2)
            host->has_spectate_password = true;
         else
            host->has_spectate_password = false;
      }
   }

   return true;
}
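For reference, the two flag bits decoded above could be packed on the advertising side as follows; this is a minimal sketch assuming only what the checks above show (bit 0 = play password, bit 1 = spectate password), and the helper name is hypothetical.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical helper mirroring the bit checks above; not the code the host
 * actually uses to fill ad_packet_buffer.has_password. */
static uint8_t pack_password_flags(bool has_play_password,
      bool has_spectate_password)
{
   uint8_t flags = 0;
   if (has_play_password)
      flags |= 1; /* bit 0: a password is required to play */
   if (has_spectate_password)
      flags |= 2; /* bit 1: a password is required to spectate */
   return flags;
}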
/** Discovery control */
bool netplay_discovery_driver_ctl(
      enum rarch_netplay_discovery_ctl_state state, void *data)
{
   net_driver_state_t *net_st = &networking_driver_st;

   if (net_st->lan_ad_client_fd < 0)
      return false;

   switch (state)
   {
      case RARCH_NETPLAY_DISCOVERY_CTL_LAN_SEND_QUERY:
         return netplay_lan_ad_client_query();

      case RARCH_NETPLAY_DISCOVERY_CTL_LAN_GET_RESPONSES:
         if (!netplay_lan_ad_client_response())
            return false;
         *((struct netplay_host_list **) data) = &net_st->discovered_hosts;
         break;

      case RARCH_NETPLAY_DISCOVERY_CTL_LAN_CLEAR_RESPONSES:
         net_st->discovered_hosts.size = 0;
         break;

      default:
         return false;
   }

   return true;
}
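A minimal usage sketch of the control interface above, based only on the states visible in this function (not taken from RetroArch's actual menu code); the wrapper name is hypothetical and error handling is omitted.

/* Hypothetical caller: refresh the LAN host list via the discovery driver. */
static void refresh_lan_host_list(void)
{
   struct netplay_host_list *hosts = NULL;

   /* Drop previously discovered hosts, then broadcast a new query. */
   netplay_discovery_driver_ctl(
         RARCH_NETPLAY_DISCOVERY_CTL_LAN_CLEAR_RESPONSES, NULL);
   if (!netplay_discovery_driver_ctl(
         RARCH_NETPLAY_DISCOVERY_CTL_LAN_SEND_QUERY, NULL))
      return;

   /* Later (e.g. on a subsequent frame), collect whatever hosts answered. */
   if (netplay_discovery_driver_ctl(
         RARCH_NETPLAY_DISCOVERY_CTL_LAN_GET_RESPONSES, &hosts))
   {
      /* hosts->size entries are now available to the lobby menu. */
   }
}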
/** Initialize Netplay discovery */
static bool init_lan_ad_server_socket(void)
{
   struct addrinfo *addr = NULL;
   net_driver_state_t *net_st = &networking_driver_st;
   int fd = socket_init(
         (void **) &addr, RARCH_DEFAULT_PORT, NULL,
         SOCKET_TYPE_DATAGRAM);
   bool ret = fd >= 0 &&
         socket_bind(fd, addr);

2021-12-19 12:58:01 -03:00
|
|
|
if (ret)
{
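/* Socket setup succeeded; keep the advertisement socket so LAN discovery queries can be answered. */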
net_st->lan_ad_server_fd = fd;
}
else
{
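/* Setup failed: release the descriptor, if any, and mark LAN discovery as uninitialized. */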
if (fd >= 0)
socket_close(fd);
net_st->lan_ad_server_fd = -1;
RARCH_ERR("[Discovery] Failed to initialize netplay advertisement socket.\n");;
}
if (addr)
freeaddrinfo_retro(addr);
return ret;
}
/** Deinitialize Netplay discovery */
static void deinit_lan_ad_server_socket(void)
{
net_driver_state_t *net_st = &networking_driver_st;
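/* Only tear down the advertisement socket if LAN discovery was actually initialized. */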
if (net_st->lan_ad_server_fd >= 0)
{
socket_close(net_st->lan_ad_server_fd);
net_st->lan_ad_server_fd = -1;
}
}
/**
* netplay_lan_ad_server
*
* Respond to any LAN ad queries that the netplay server has received.
*/
static bool netplay_lan_ad_server(netplay_t *netplay)
{
fd_set fds;
uint32_t header;
struct timeval tv = {0};
struct sockaddr_storage their_addr = {0};
socklen_t addr_size = sizeof(their_addr);
net_driver_state_t *net_st = &networking_driver_st;
/* Check for any ad queries */
FD_ZERO(&fds);
FD_SET(net_st->lan_ad_server_fd, &fds);
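/* tv is zero-initialized, so socket_select() returns immediately: a non-blocking poll of the discovery socket. */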
if (socket_select(net_st->lan_ad_server_fd + 1,
&fds, NULL, NULL, &tv) < 0)
{
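/* select() failed on the discovery socket; shut LAN discovery down and report failure. */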
deinit_lan_ad_server_socket();
return false;
}
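/* No pending ad query on the socket; nothing to do right now. */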
if (!FD_ISSET(net_st->lan_ad_server_fd, &fds))
return true;
|
2021-04-08 18:09:02 +02:00
|
|
|
|
   /* Somebody queried, so check that it's valid */
   if (recvfrom(net_st->lan_ad_server_fd,
            (char *) &header, sizeof(header), 0,
            (struct sockaddr *) &their_addr,
            &addr_size) == sizeof(header))
   {
      const frontend_ctx_driver_t *frontend_drv;
      char frontend_architecture_tmp[32];
      uint32_t content_crc = 0;
      struct retro_system_info *system = &runloop_state_get_ptr()->system.info;
      struct string_list *subsystem = path_get_subsystem_list();
      settings_t *settings = config_get_ptr();

      /* Make sure it's a valid query */
      if (ntohl(header) != DISCOVERY_QUERY_MAGIC)
      {
         RARCH_WARN("[Discovery] Invalid query.\n");
         return true;
      }

#ifdef HAVE_INET6
      if (their_addr.ss_family != AF_INET)
         return true;
#endif
      if (!netplay_is_lan_address(
               (struct sockaddr_in *) &their_addr))
         return true;

      RARCH_LOG("[Discovery] Query received on LAN interface.\n");

      /* Now build our response */
      memset(&net_st->ad_packet_buffer, 0,
            sizeof(net_st->ad_packet_buffer));
      net_st->ad_packet_buffer.header = htonl(DISCOVERY_RESPONSE_MAGIC);
      net_st->ad_packet_buffer.port = htonl((int) netplay->tcp_port);
      strlcpy(net_st->ad_packet_buffer.nick, netplay->nick,
            sizeof(net_st->ad_packet_buffer.nick));
2021-12-19 12:58:01 -03:00
|
|
|
      frontend_drv =
         (const frontend_ctx_driver_t*) frontend_driver_get_cpu_architecture_str(
            frontend_architecture_tmp, sizeof(frontend_architecture_tmp));
      if (frontend_drv)
         snprintf(net_st->ad_packet_buffer.frontend,
               sizeof(net_st->ad_packet_buffer.frontend),
               "%s %s",
               frontend_drv->ident, frontend_architecture_tmp);
      else
         strlcpy(net_st->ad_packet_buffer.frontend,
               "N/A",
               sizeof(net_st->ad_packet_buffer.frontend));

      strlcpy(net_st->ad_packet_buffer.core,
            system->library_name,
            sizeof(net_st->ad_packet_buffer.core));
      strlcpy(net_st->ad_packet_buffer.core_version,
            system->library_version,
            sizeof(net_st->ad_packet_buffer.core_version));

      strlcpy(net_st->ad_packet_buffer.retroarch_version,
            PACKAGE_VERSION,
            sizeof(net_st->ad_packet_buffer.retroarch_version));

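      /* Content field: for subsystem content, advertise a '|'-separated list
       * of the loaded content basenames and report a CRC of 0; for regular
       * content, advertise the single basename and its real CRC. */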
      if (subsystem && subsystem->size > 0)
      {
         unsigned i;
         char buf[4096];

         buf[0] = '\0';
         for (i = 0;;)
         {
            strlcat(buf, path_basename(subsystem->elems[i++].data), sizeof(buf));
            if (i >= subsystem->size)
               break;
            strlcat(buf, "|", sizeof(buf));
         }

         strlcpy(net_st->ad_packet_buffer.content, buf,
               sizeof(net_st->ad_packet_buffer.content));
         strlcpy(net_st->ad_packet_buffer.subsystem_name,
               path_get(RARCH_PATH_SUBSYSTEM),
               sizeof(net_st->ad_packet_buffer.subsystem_name));

         content_crc = 0;
      }
      else
      {
         const char *base = path_basename(path_get(RARCH_PATH_BASENAME));

         strlcpy(net_st->ad_packet_buffer.content,
               !string_is_empty(base) ? base : "N/A",
               sizeof(net_st->ad_packet_buffer.content));
         strlcpy(net_st->ad_packet_buffer.subsystem_name, "N/A",
               sizeof(net_st->ad_packet_buffer.subsystem_name));

         content_crc = content_get_crc();
      }

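      /* The content CRC travels in network byte order. */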
      net_st->ad_packet_buffer.content_crc = htonl(content_crc);

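      /* has_password is a small bit mask: bit 0 = a join password is set,
       * bit 1 = a spectate password is set. */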
      net_st->ad_packet_buffer.has_password = 0;
      if (!string_is_empty(settings->paths.netplay_password))
         net_st->ad_packet_buffer.has_password |= 1;
      if (!string_is_empty(settings->paths.netplay_spectate_password))
         net_st->ad_packet_buffer.has_password |= 2;

      /* Send our response */
      sendto(net_st->lan_ad_server_fd,
            (char *) &net_st->ad_packet_buffer,
            sizeof(net_st->ad_packet_buffer),
            0, (struct sockaddr *) &their_addr, sizeof(their_addr));
   }

   return true;
}
#endif

/* TODO/FIXME - replace netplay_log_connection with calls
 * to inet_ntop_compat and move runloop message queue pushing
 * outside */
#if !defined(HAVE_SOCKET_LEGACY) && !defined(WIIU) && !defined(_3DS)
/* Custom inet_ntop. Win32 doesn't seem to support this ... */
static void netplay_log_connection(
      const struct sockaddr_storage *their_addr,
      unsigned slot, const char *nick, char *s, size_t len)
{
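   /* View the sockaddr_storage through a union so the v4/v6 address can be
    * read without per-family casts. */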
   union
   {
      const struct sockaddr_storage *storage;
      const struct sockaddr_in *v4;
      const struct sockaddr_in6 *v6;
   } u;
   const char *str               = NULL;
   char buf_v4[INET_ADDRSTRLEN]  = {0};
   char buf_v6[INET6_ADDRSTRLEN] = {0};

   u.storage = their_addr;

   switch (their_addr->ss_family)
   {
      case AF_INET:
         {
            struct sockaddr_in in;

            memset(&in, 0, sizeof(in));

            str           = buf_v4;
            in.sin_family = AF_INET;
            memcpy(&in.sin_addr, &u.v4->sin_addr, sizeof(struct in_addr));

            getnameinfo((struct sockaddr*)&in, sizeof(struct sockaddr_in),
                  buf_v4, sizeof(buf_v4),
                  NULL, 0, NI_NUMERICHOST);
         }
         break;
      case AF_INET6:
         {
            struct sockaddr_in6 in;
            memset(&in, 0, sizeof(in));

            str            = buf_v6;
            in.sin6_family = AF_INET6;
            memcpy(&in.sin6_addr, &u.v6->sin6_addr, sizeof(struct in6_addr));

            getnameinfo((struct sockaddr*)&in, sizeof(struct sockaddr_in6),
                  buf_v6, sizeof(buf_v6), NULL, 0, NI_NUMERICHOST);
         }
         break;
      default:
         break;
   }

   if (str)
      snprintf(s, len, msg_hash_to_str(MSG_GOT_CONNECTION_FROM_NAME),
            nick, str);
   else
      snprintf(s, len, msg_hash_to_str(MSG_GOT_CONNECTION_FROM),
            nick);
}
#else
static void netplay_log_connection(
      const struct sockaddr_storage *their_addr,
      unsigned slot, const char *nick, char *s, size_t len)
{
   /* Stub code - will need to be implemented */
   snprintf(s, len, msg_hash_to_str(MSG_GOT_CONNECTION_FROM),
         nick);
}
#endif

/**
 * netplay_impl_magic:
 *
 * A pseudo-hash of the RetroArch and Netplay version, so only compatible
 * versions play together.
 */
static uint32_t netplay_impl_magic(void)
{
   size_t i;
   uint32_t res    = 0;
   const char *ver = PACKAGE_VERSION;
   size_t len      = strlen(ver);

   for (i = 0; i < len; i++)
      res ^= ver[i] << (i & 0xf);

   res ^= NETPLAY_PROTOCOL_VERSION << (i & 0xf);

   return res;
}

/**
 * netplay_platform_magic
 *
 * Just enough info to tell us if our platforms mismatch: Endianness and a
 * couple of type sizes.
 *
 * Format:
 *    bit 31:     Reserved
 *    bit 30:     1 for big endian
 *    bits 29-15: sizeof(size_t)
 *    bits 14-0:  sizeof(long)
 */
static uint32_t netplay_platform_magic(void)
{
   uint32_t ret =
       ((1 == htonl(1)) << 30)
      |(sizeof(size_t) << 15)
      |(sizeof(long));
   return ret;
}
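
/* Worked example (illustrative): on a little-endian platform where
 * sizeof(size_t) == 8 and sizeof(long) == 8, htonl(1) != 1, so the magic is
 *    (0 << 30) | (8 << 15) | 8 = 0x00040008.
 * The same sizes on a big-endian platform would yield 0x40040008. */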

/**
 * netplay_endian_mismatch
 *
 * Do the platform magics mismatch on endianness?
 */
static bool netplay_endian_mismatch(uint32_t pma, uint32_t pmb)
{
   uint32_t ebit = (1<<30);
   return (pma & ebit) != (pmb & ebit);
}
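
/* Typical use (illustrative): compare the locally computed magic with the one
 * received from the peer, e.g.
 *    if (netplay_endian_mismatch(netplay_platform_magic(), remote_magic))
 *       ... the peers differ in endianness ...
 * Only bit 30 is compared; the size fields are ignored by this helper. */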
|
2021-04-08 18:09:02 +02:00
|
|
|
|
Netplay Stuff (#13375)
* Netplay Stuff
## PROTOCOL FALLBACK
In order to support older clients a protocol fallback system was introduced.
The host will no longer send its header automatically after a TCP connection is established, instead, it awaits for the client to send his before determining which protocol this connection is going to operate on.
Netplay has now two protocols, a low protocol and a high protocol; the low protocol is the minimum protocol it supports, while the high protocol is the highest protocol it can operate on.
To fully support older clients, a hack was necessary: sending the high protocol in the unused client's header salt field, while keeping the protocol field to the low protocol. Without this hack we would only be able to support older clients if a newer client was the host.
Any future system can make use of this system by checking connection->netplay_protocol, which is available for both the client and host.
## NETPLAY CHAT
Starting with protocol 6, netplay chat is available through the new NETPLAY_CMD_PLAYER_CHAT command.
Limitations of the command code, which causes a disconnection on unknown commands, makes this system not possible on protocol 5.
Protocol 5 connections can neither send nor receive chat, but other netplay operations are unaffected.
Clients send chat as a string to the server, and it's the server's sole responsability to relay chat messages.
As of now, sending chat uses RetroArch's input menu, while the display of on-screen chat uses a widget overlay and RetroArch's notifications as a fallback.
If a new overlay and/or input system is desired, no backwards compatibility changes need to be made.
Only clients in playing mode (as opposed to spectating mode) can send and receive chat.
## SETTINGS SHARING
Some settings are better used when both host and clients share the same configuration.
As of protocol 6, the following settings will be shared from host to clients (without altering a client's configuration file): input latency frames and allow pausing.
## NETPLAY TUNNEL/MITM
With the current MITM system being defunct (at least as of 1.9.X), a new system was in order to solve most if not all of the problems with the current system.
This new system uses a tunneling approach, which is similar to most VPN and tunneling services around.
Tunnel commands:
RATS[unique id] (RetroArch Tunnel Session) - 16 bytes -> When this command is sent with a zeroed unique id, the tunnel server interprets this as a netplay host wanting to create a new session, in this case, the same command is returned to the host, but now with its unique session id. When a client needs to connect to a host, this command is sent with the unique session id of the host, causing the tunnel server to send a RATL command to the host.
RATL[unique id] (RetroArch Tunnel Link) - 16 bytes -> The tunnel server sends this command to the host when a client wants to connect to the host. Once the host receives this command, it establishes a new connection to the tunnel server, sending this command together with the client's unique id through this new connection, causing the tunnel server to link this connection to the connection of the client.
RATP (RetroArch Tunnel Ping) - 4 bytes -> The tunnel server sends this command to verify that the host, whom the session belongs to, is still around. The host replies with the same command. A session is closed if the tunnel server can not verify that the host is alive.
Operations:
Host -> Instead of listening and accepting connections, it connects to the tunnel server, requests a new session and then monitor this connection for new linking requests. Once a request is received, it establishes a new connection to the tunnel server for linking with a client. The tunnel server's address and port are obtained by querying the lobby server. The host will publish its session id together with the rest of its info to the lobby server.
Client -> It connects to the tunnel server and then sends the session id of the host it wants to connect to. A host's session id is obtained from the json data sent by the lobby server.
Improvements (from current MITM system):
No longer a risk of TCP port exhaustion; we only use one port now at the tunnel server.
Very little cpu usage. About 95% net I/O bound now.
Future backwards compatible with any and all changes to netplay as it no longer runs any netplay logic at MITM servers.
No longer operates the host in client mode, which was a source of many of the current problems.
Cleaner and more maintainable system and code.
Notable functions:
netplay_mitm_query -> Grabs the tunnel's address and port from the lobby server.
init_tcp_socket -> Handles the creation and operation mode of the TCP socket based on whether it's host, host+MITM or client.
handle_mitm_connection -> Creates and completes linking connections and replies to ping commands (only 1 of each per call to not affect performance).
## MISC
Ping Limiter: If a client's estimated latency to the server is higher than this value, connection will be dropped just before finishing the netplay handshake.
Ping Counter: A ping counter (similar to the FPS one) can be shown in the bottom right corner of the screen, if you are connected to a host.
LAN Discovery: Refactored and moved to its own "Refresh Netplay LAN List" button.
## FIXES
Many minor fixes to the current netplay implementation are also included.
* Remove NETPLAY_TEST_BUILD
2021-12-19 12:58:01 -03:00
|
|
|
static int simple_rand(unsigned long *simple_rand_next)
{
   *simple_rand_next = *simple_rand_next * 1103515245 + 12345;
   return((unsigned)(*simple_rand_next / 65536) % 32768);
}
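
/* Combines three 15-bit simple_rand() outputs into one 32-bit value
 * (bits 30-31 of the result come from the low two bits of the first part). */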
static uint32_t simple_rand_uint32(unsigned long *simple_rand_next)
{
   uint32_t part0 = simple_rand(simple_rand_next);
   uint32_t part1 = simple_rand(simple_rand_next);
   uint32_t part2 = simple_rand(simple_rand_next);
   return ((part0 << 30) + (part1 << 15) + part2);
}

/**
 * netplay_init_socket_buffer
 *
 * Initialize a new socket buffer.
 */
static bool netplay_init_socket_buffer(
      struct socket_buffer *sbuf, size_t size)
{
   sbuf->data = (unsigned char*)malloc(size);
   if (!sbuf->data)
      return false;
   sbuf->bufsz = size;
   sbuf->start = sbuf->read = sbuf->end = 0;
   return true;
}

/**
 * netplay_deinit_socket_buffer
 *
 * Free a socket buffer.
 */
static void netplay_deinit_socket_buffer(struct socket_buffer *sbuf)
{
   if (sbuf->data)
      free(sbuf->data);
}

/**
 * netplay_handshake_init_send
 *
 * Initialize our handshake and send the first part of the handshake protocol.
 */
bool netplay_handshake_init_send(netplay_t *netplay,
      struct netplay_connection *connection, uint32_t protocol)
{
   uint32_t header[6];
   settings_t *settings = config_get_ptr();

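   /* Handshake header layout (all fields big-endian):
    * [0] magic, [1] platform magic, [2] supported compression,
    * [3] salt (server) or highest supported protocol (client),
    * [4] protocol in use, [5] implementation magic. */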
   header[0] = htonl(NETPLAY_MAGIC);
   header[1] = htonl(netplay_platform_magic());
   header[2] = htonl(NETPLAY_COMPRESSION_SUPPORTED);

   if (netplay->is_server)
   {
      if (!string_is_empty(settings->paths.netplay_password) ||
            !string_is_empty(settings->paths.netplay_spectate_password))
      {
         /* Demand a password */
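         /* Seed the generator with the current time if it is still in its
          * initial state. */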
         if (netplay->simple_rand_next == 1)
            netplay->simple_rand_next = (unsigned long) time(NULL);
         connection->salt = simple_rand_uint32(&netplay->simple_rand_next);
         if (!connection->salt)
            connection->salt = 1;
         header[3] = htonl(connection->salt);
      }
      else
         header[3] = 0;
   }
   else
   {
      /* HACK ALERT!!!
       * We need to do this in order to maintain full backwards compatibility.
       * Send our highest available protocol in the unused salt field.
       * Servers can then pick the best protocol choice for the client. */
      header[3] = htonl(HIGH_NETPLAY_PROTOCOL_VERSION);
   }

   header[4] = htonl(protocol);
   header[5] = htonl(netplay_impl_magic());

   /* First ping */
   connection->ping = -1;
   connection->ping_timer = cpu_features_get_time_usec();

   if (!netplay_send(&connection->send_packet_buffer, connection->fd, header,
         sizeof(header)) ||
         !netplay_send_flush(&connection->send_packet_buffer, connection->fd, false))
      return false;

   return true;
}

#ifdef HAVE_MENU
static void handshake_password(void *ignore, const char *line)
{
   struct password_buf_s password_buf;
   char password[8+NETPLAY_PASS_LEN]; /* 8 for salt, 128 for password */
   char hash[NETPLAY_PASS_HASH_LEN+1]; /* + NULL terminator */
   struct netplay_connection *connection;
   net_driver_state_t *net_st = &networking_driver_st;
   netplay_t *netplay = net_st->data;

   if (!netplay)
      return;

   connection = &netplay->connections[0];

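   /* The reply is SHA-256 over the salt (as 8 hex digits) concatenated with
    * the password the user entered. */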
   snprintf(password, sizeof(password), "%08lX", (unsigned long)connection->salt);
   if (!string_is_empty(line))
      strlcpy(password + 8, line, sizeof(password)-8);

   password_buf.cmd[0] = htonl(NETPLAY_CMD_PASSWORD);
   password_buf.cmd[1] = htonl(sizeof(password_buf.password));
   sha256_hash(hash, (uint8_t *) password, strlen(password));
   memcpy(password_buf.password, hash, NETPLAY_PASS_HASH_LEN);

   /* We have no way to handle an error here, so we'll let the next function error out */
   if (netplay_send(&connection->send_packet_buffer, connection->fd, &password_buf, sizeof(password_buf)))
      netplay_send_flush(&connection->send_packet_buffer, connection->fd, false);

#ifdef HAVE_MENU
   menu_input_dialog_end();
   retroarch_menu_running_finished(false);
#endif
}
#endif
|
2021-04-08 18:09:02 +02:00
|
|
|
|
|
|
|
/**
|
 * netplay_handshake_nick
 *
 * Send a NICK command.
 */
static bool netplay_handshake_nick(netplay_t *netplay,
      struct netplay_connection *connection)
{
   struct nick_buf_s nick_buf = {0};

   /* Send our nick */
   nick_buf.cmd[0] = htonl(NETPLAY_CMD_NICK);
   nick_buf.cmd[1] = htonl(sizeof(nick_buf.nick));
   strlcpy(nick_buf.nick, netplay->nick, sizeof(nick_buf.nick));

   /* Second ping */
   connection->ping_timer = cpu_features_get_time_usec();

   if (!netplay_send(&connection->send_packet_buffer, connection->fd,
         &nick_buf, sizeof(nick_buf)) ||
         !netplay_send_flush(&connection->send_packet_buffer, connection->fd,
         false))
      return false;

   return true;
}
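
/* Illustrative sketch, not part of the original file: the NICK command above is
 * a fixed-size packet whose two 32-bit header words (command id and payload
 * size) are converted to network byte order with htonl() before being queued on
 * the connection's send buffer. A minimal standalone equivalent, using a
 * hypothetical example_nick_buf layout and an assumed send_all() helper that
 * loops until every byte is written, might look like this (kept in #if 0 so it
 * never affects the build). */
#if 0
#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

struct example_nick_buf
{
   uint32_t cmd[2];   /* cmd[0] = command id, cmd[1] = payload size (big-endian) */
   char     nick[32]; /* fixed-size, NUL-padded nickname payload */
};

static bool example_send_nick(int fd, const char *nick)
{
   struct example_nick_buf buf = {0};
   buf.cmd[0] = htonl(1);                /* hypothetical command id */
   buf.cmd[1] = htonl(sizeof(buf.nick)); /* payload size in bytes */
   strncpy(buf.nick, nick, sizeof(buf.nick) - 1);
   return send_all(fd, &buf, sizeof(buf)); /* assumed helper, not a real API */
}
#endif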

/**
 * netplay_handshake_init
 *
 * Data receiver for the initial part of the handshake, i.e., waiting for the
 * netplay header.
 */
bool netplay_handshake_init(netplay_t *netplay,
      struct netplay_connection *connection, bool *had_input)
{
   ssize_t recvd;
   uint32_t header[6];
   uint32_t netplay_magic = 0;
   uint32_t hi_protocol   = 0;
   uint32_t lo_protocol   = 0;
   uint32_t local_pmagic  = 0;
   uint32_t remote_pmagic = 0;
   uint32_t compression   = 0;
   int32_t ping           = 0;
   struct compression_transcoder *ctrans = NULL;
   const char *dmsg       = NULL;
   settings_t *settings   = config_get_ptr();
   bool extra_notifications = settings->bools.notification_show_netplay_extra;

   memset(header, 0, sizeof(header));

   RECV(header, sizeof(header[0]))
   {
      dmsg = msg_hash_to_str(
            netplay->is_server
            ? MSG_FAILED_TO_CONNECT_TO_CLIENT
            : MSG_FAILED_TO_CONNECT_TO_HOST);
      goto error;
   }
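
   /* Illustrative sketch, not part of the original file: RECV is presumably a
    * macro that pulls the requested number of bytes from the connection's
    * receive buffer and runs the braced block that follows it on failure. A
    * generic, standalone way to read an exact byte count from a socket is the
    * partial-read loop below (hypothetical helper, kept in #if 0). */
#if 0
#include <stdbool.h>
#include <stddef.h>
#include <sys/types.h>
#include <sys/socket.h>

   /* Generic sketch: keep calling recv() until exactly len bytes have arrived,
    * returning false on error or if the peer closes the connection early. */
   static bool example_recv_exact(int fd, void *buf, size_t len)
   {
      char *p = (char*)buf;
      while (len > 0)
      {
         ssize_t n = recv(fd, p, len, 0);
         if (n <= 0)
            return false;
         p   += n;
         len -= (size_t)n;
      }
      return true;
   }
#endif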

   netplay_magic = ntohl(header[0]);

   if (netplay->is_server)
   {
      /* Poking the server for information? Just disconnect */
      if (netplay_magic == POKE_MAGIC)
      {
         /* Send it our highest available protocol. */
         netplay_handshake_init_send(netplay, connection,
            HIGH_NETPLAY_PROTOCOL_VERSION);

         socket_close(connection->fd);
         connection->active = false;
         netplay_deinit_socket_buffer(&connection->send_packet_buffer);
         netplay_deinit_socket_buffer(&connection->recv_packet_buffer);

         return true;
      }
   }
   else
   {
      /* Only the client is able to estimate latency at this point. */
      SET_PING(connection)
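
      /* Illustrative sketch, not part of the original file: connection->ping_timer
       * was stamped just before the NICK command was sent, and SET_PING here
       * presumably derives a latency estimate from how long the reply took.
       * RetroArch's actual macro may differ; a generic way to turn such a
       * timestamp pair into a millisecond figure is sketched below
       * (hypothetical names, kept in #if 0). */
#if 0
#include <stdint.h>

      /* Generic sketch: microsecond timestamps taken before the request and
       * after the reply give the round-trip time in milliseconds. */
      static int32_t example_estimate_ping_ms(int64_t sent_usec, int64_t now_usec)
      {
         int64_t rtt_usec = now_usec - sent_usec;
         if (rtt_usec < 0)
            rtt_usec = 0;
         return (int32_t)(rtt_usec / 1000); /* round-trip time in milliseconds */
      }
#endif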

      if (netplay_magic == FULL_MAGIC)
      {
         dmsg = msg_hash_to_str(MSG_NETPLAY_HOST_FULL);
         goto error;
      }
   }

   if (netplay_magic != NETPLAY_MAGIC)
   {
      dmsg = msg_hash_to_str(MSG_NETPLAY_NOT_RETROARCH);
      goto error;
   }
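
   /* Illustrative sketch, not part of the original file: NETPLAY_MAGIC is a
    * 32-bit tag the peer must present in the first header word. A common way to
    * build such a tag is to pack four ASCII characters into one word and compare
    * it after byte-order conversion; the constant below is a made-up example,
    * not NETPLAY_MAGIC's actual value (kept in #if 0). */
#if 0
#include <stdbool.h>
#include <stdint.h>
#include <arpa/inet.h>

#define EXAMPLE_MAGIC \
   (((uint32_t)'E' << 24) | ((uint32_t)'X' << 16) | ((uint32_t)'M' << 8) | (uint32_t)'P')

   /* The first header word arrives big-endian; convert it to host order before
    * comparing against the host-order tag. */
   static bool example_magic_matches(uint32_t first_word_be)
   {
      return ntohl(first_word_be) == EXAMPLE_MAGIC;
   }
#endif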

   RECV(header + 1, sizeof(header) - sizeof(header[0]))
   {
      dmsg = msg_hash_to_str(
            netplay->is_server
            ? MSG_FAILED_TO_RECEIVE_HEADER_FROM_CLIENT
            : MSG_FAILED_TO_RECEIVE_HEADER_FROM_HOST);
      goto error;
   }

   /* HACK ALERT!!!
    * We need to do this in order to maintain full backwards compatibility.
    * If the client sent a non-zero salt, treat that value as its highest
    * supported protocol. */
   if (netplay->is_server)
   {
      hi_protocol = ntohl(header[3]);
      lo_protocol = ntohl(header[4]);

      if (!hi_protocol)
         /* Older clients don't send a high protocol. */
         connection->netplay_protocol = lo_protocol;
      else if (hi_protocol > HIGH_NETPLAY_PROTOCOL_VERSION)
         /* Run at our highest supported protocol. */
         connection->netplay_protocol = HIGH_NETPLAY_PROTOCOL_VERSION;
      else
         /* Otherwise run at the client's highest supported protocol. */
         connection->netplay_protocol = hi_protocol;

      if (connection->netplay_protocol < LOW_NETPLAY_PROTOCOL_VERSION)
      {
         /* Send it so that a proper notification can be shown there. */
         netplay_handshake_init_send(netplay, connection,
            LOW_NETPLAY_PROTOCOL_VERSION);

         dmsg = msg_hash_to_str(MSG_NETPLAY_OUT_OF_DATE);
         goto error;
      }
   }
   else
   {
      /* Clients only see one protocol. */
      connection->netplay_protocol = ntohl(header[4]);

      if (connection->netplay_protocol < LOW_NETPLAY_PROTOCOL_VERSION ||
            connection->netplay_protocol > HIGH_NETPLAY_PROTOCOL_VERSION)
      {
         dmsg = msg_hash_to_str(MSG_NETPLAY_OUT_OF_DATE);
         goto error;
      }
   }
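
   /* Illustrative sketch, not part of the original file: the branch above
    * implements the protocol fallback. The server picks the highest protocol
    * both sides support, treats a missing high protocol as an old client, and
    * rejects anything below its own minimum. A compact standalone restatement
    * of that decision, with hypothetical names and the LOW/HIGH bounds passed
    * in as parameters, is kept in #if 0 below. */
#if 0
#include <stdint.h>

   /* Returns the protocol to run on, or 0 if the peer is too old to talk to.
    * peer_hi == 0 means the peer predates the high/low scheme and only
    * advertised a single (low) protocol. */
   static uint32_t example_negotiate_protocol(uint32_t peer_lo, uint32_t peer_hi,
         uint32_t our_lo, uint32_t our_hi)
   {
      uint32_t chosen;
      if (!peer_hi)
         chosen = peer_lo; /* older peer: take its single protocol */
      else if (peer_hi > our_hi)
         chosen = our_hi;  /* cap at our highest supported protocol */
      else
         chosen = peer_hi; /* otherwise use the peer's highest */
      return (chosen < our_lo) ? 0 : chosen;
   }
#endif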

   /* We only care about platform magic if our core is quirky */
   local_pmagic  = netplay_platform_magic();
   remote_pmagic = ntohl(header[1]);

   if ((netplay->quirks & NETPLAY_QUIRK_ENDIAN_DEPENDENT) &&
         netplay_endian_mismatch(local_pmagic, remote_pmagic))
   {
      dmsg = msg_hash_to_str(MSG_NETPLAY_ENDIAN_DEPENDENT);
      goto error;
   }
   if ((netplay->quirks & NETPLAY_QUIRK_PLATFORM_DEPENDENT) &&
         (local_pmagic != remote_pmagic))
   {
      dmsg = msg_hash_to_str(MSG_NETPLAY_PLATFORM_DEPENDENT);
      goto error;
   }
|
2021-04-08 18:09:02 +02:00
|
|
|
|
2021-11-05 18:52:56 +01:00
|
|
|
if (ntohl(header[5]) != netplay_impl_magic())
|
|
|
|
{
|
|
|
|
/* We allow the connection but warn that this could cause issues. */
|
|
|
|
dmsg = msg_hash_to_str(MSG_NETPLAY_DIFFERENT_VERSIONS);
|
      RARCH_WARN("[Netplay] %s\n", dmsg);
      if (extra_notifications)
         runloop_msg_queue_push(dmsg, 1, 180, false, NULL,
            MESSAGE_QUEUE_ICON_DEFAULT, MESSAGE_QUEUE_CATEGORY_INFO);
      dmsg = NULL;
   }
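   /* Only zlib compression is negotiated below; the "pipe" backend is the
    * uncompressed fallback used when zlib is unavailable or was not offered
    * by the peer. */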
   /* Check what compression is supported */
   compression  = ntohl(header[2]);
   compression &= NETPLAY_COMPRESSION_SUPPORTED;

   if (compression & NETPLAY_COMPRESSION_ZLIB)
   {
      ctrans = &netplay->compress_zlib;
      if (!ctrans->compression_backend)
      {
         ctrans->compression_backend =
            trans_stream_get_zlib_deflate_backend();
         if (!ctrans->compression_backend)
            ctrans->compression_backend = trans_stream_get_pipe_backend();
      }
      connection->compression_supported = NETPLAY_COMPRESSION_ZLIB;
   }
   else
   {
      ctrans = &netplay->compress_nil;
      if (!ctrans->compression_backend)
      {
         ctrans->compression_backend =
            trans_stream_get_pipe_backend();
      }
      connection->compression_supported = 0;
   }

   if (!ctrans->decompression_backend)
      ctrans->decompression_backend = ctrans->compression_backend->reverse;

   /* Allocate our compression stream */
   if (!ctrans->compression_stream)
      ctrans->compression_stream   = ctrans->compression_backend->stream_new();
   if (!ctrans->decompression_stream)
      ctrans->decompression_stream = ctrans->decompression_backend->stream_new();
   if (!ctrans->compression_stream || !ctrans->decompression_stream)
      return false;
   /* Finally send our header, if server. */
   if (netplay->is_server)
   {
      if (!netplay_handshake_init_send(netplay, connection,
            connection->netplay_protocol))
         return false;
   }
   /* If a password is demanded, ask for it */
   else
   {
      if ((connection->salt = ntohl(header[3])))
      {
#ifdef HAVE_MENU
         menu_input_ctx_line_t line = {0};

         retroarch_menu_running();

         line.label         = msg_hash_to_str(MSG_NETPLAY_ENTER_PASSWORD);
         line.label_setting = "no_setting";
         line.cb            = handshake_password;

         if (!menu_input_dialog_start(&line))
            return false;
#endif
      }
      if (!netplay_handshake_nick(netplay, connection))
         return false;
   }

   /* Move on to the next mode */
   connection->mode = NETPLAY_CONNECTION_PRE_NICK;
   *had_input       = true;
   netplay_recv_flush(&connection->recv_packet_buffer);
   return true;

error:
   if (dmsg)
   {
      RARCH_ERR("[Netplay] %s\n", dmsg);
      /* These notifications are useful to the client in figuring out what
         caused its premature disconnection, but they are quite useless
         (annoying) to the server.
         Let them be optional if server. */
      if (!netplay->is_server || extra_notifications)
         runloop_msg_queue_push(dmsg, 1, 180, false, NULL,
            MESSAGE_QUEUE_ICON_DEFAULT, MESSAGE_QUEUE_CATEGORY_INFO);
   }

   return false;
}
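/* Illustrative sketch only, not part of the upstream source: one way a
 * connecting client could fill the six header words so that the server-side
 * parsing in the function above accepts them.  Word 0 is not visible in this
 * excerpt, so it is passed in as an opaque parameter here; the remaining
 * words mirror what the parser reads (platform magic, compression bits,
 * high/low protocol, implementation magic).  Relies on the same headers this
 * file already uses for htonl() and uint32_t. */
static void example_fill_client_header(uint32_t header[6],
      uint32_t word0, uint32_t platform_magic,
      uint32_t compression_bits, uint32_t impl_magic)
{
   header[0] = htonl(word0);                          /* not covered by this excerpt */
   header[1] = htonl(platform_magic);                 /* checked against the quirk flags */
   header[2] = htonl(compression_bits);               /* masked with NETPLAY_COMPRESSION_SUPPORTED */
   header[3] = htonl(HIGH_NETPLAY_PROTOCOL_VERSION);  /* highest protocol we can speak */
   header[4] = htonl(LOW_NETPLAY_PROTOCOL_VERSION);   /* lowest protocol we can speak */
   header[5] = htonl(impl_magic);                     /* compared with netplay_impl_magic() */
}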
static void netplay_handshake_ready(netplay_t *netplay,
      struct netplay_connection *connection)
{
   char msg[512];
   settings_t *settings     = config_get_ptr();
   bool extra_notifications = settings->bools.notification_show_netplay_extra;

   msg[0] = '\0';

   if (netplay->is_server)
   {
      unsigned slot = (unsigned)(connection - netplay->connections);

      netplay_log_connection(&connection->addr,
         slot, connection->nick, msg, sizeof(msg));

      RARCH_LOG("[Netplay] %s %u\n", msg_hash_to_str(MSG_CONNECTION_SLOT), slot);

      /* Send them the savestate */
      if (!(netplay->quirks &
            (NETPLAY_QUIRK_NO_SAVESTATES|NETPLAY_QUIRK_NO_TRANSMISSION)))
         netplay->force_send_savestate = true;
   }
   else
   {
      netplay->is_connected = true;
      snprintf(msg, sizeof(msg), "%s: \"%s\"",
         msg_hash_to_str(MSG_CONNECTED_TO),
         connection->nick);
   }

   RARCH_LOG("[Netplay] %s\n", msg);
   /* Useful notification to the client in figuring out if a connection was
      successfully made before an error, but not as useful to the server.
      Let it be optional if server. */
   if (!netplay->is_server || extra_notifications)
      runloop_msg_queue_push(msg, 1, 180, false, NULL,
         MESSAGE_QUEUE_ICON_DEFAULT,
         MESSAGE_QUEUE_CATEGORY_INFO);

   /* Unstall if we were waiting for this */
   if (netplay->stall == NETPLAY_STALL_NO_CONNECTION)
      netplay->stall = NETPLAY_STALL_NONE;
}

/**
 * netplay_handshake_info
 *
 * Send an INFO command.
 */
static bool netplay_handshake_info(netplay_t *netplay,
      struct netplay_connection *connection)
{
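   /* The INFO payload filled in below identifies what this side is running:
    * core name, core version and (further down) the content CRC. */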
   struct info_buf_s info_buf = {0};
   struct retro_system_info *system = &runloop_state_get_ptr()->system.info;

   info_buf.cmd[0] = htonl(NETPLAY_CMD_INFO);
   info_buf.cmd[1] = htonl(sizeof(info_buf) - 2*sizeof(uint32_t));
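   /* cmd[0] is the command id and cmd[1] the payload size in bytes,
    * i.e. everything in info_buf after these two command words. */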
   /* Get our core info */
   if (system)
   {
      strlcpy(info_buf.core_name,
         system->library_name, sizeof(info_buf.core_name));
      strlcpy(info_buf.core_version,
         system->library_version, sizeof(info_buf.core_version));
   }
   else
   {
      strlcpy(info_buf.core_name,
         "UNKNOWN", sizeof(info_buf.core_name));
      strlcpy(info_buf.core_version,
         "UNKNOWN", sizeof(info_buf.core_version));
   }

   /* Get our content CRC */
Netplay Stuff (#13375)
* Netplay Stuff
## PROTOCOL FALLBACK
In order to support older clients a protocol fallback system was introduced.
The host will no longer send its header automatically after a TCP connection is established, instead, it awaits for the client to send his before determining which protocol this connection is going to operate on.
Netplay has now two protocols, a low protocol and a high protocol; the low protocol is the minimum protocol it supports, while the high protocol is the highest protocol it can operate on.
To fully support older clients, a hack was necessary: sending the high protocol in the unused client's header salt field, while keeping the protocol field to the low protocol. Without this hack we would only be able to support older clients if a newer client was the host.
Any future system can make use of this system by checking connection->netplay_protocol, which is available for both the client and host.
## NETPLAY CHAT
Starting with protocol 6, netplay chat is available through the new NETPLAY_CMD_PLAYER_CHAT command.
Limitations of the command code, which causes a disconnection on unknown commands, makes this system not possible on protocol 5.
Protocol 5 connections can neither send nor receive chat, but other netplay operations are unaffected.
Clients send chat as a string to the server, and it's the server's sole responsability to relay chat messages.
As of now, sending chat uses RetroArch's input menu, while the display of on-screen chat uses a widget overlay and RetroArch's notifications as a fallback.
If a new overlay and/or input system is desired, no backwards compatibility changes need to be made.
Only clients in playing mode (as opposed to spectating mode) can send and receive chat.
## SETTINGS SHARING
Some settings are better used when both host and clients share the same configuration.
As of protocol 6, the following settings will be shared from host to clients (without altering a client's configuration file): input latency frames and allow pausing.
## NETPLAY TUNNEL/MITM
With the current MITM system being defunct (at least as of 1.9.X), a new system was in order to solve most if not all of the problems with the current system.
This new system uses a tunneling approach, which is similar to most VPN and tunneling services around.
Tunnel commands:
RATS[unique id] (RetroArch Tunnel Session) - 16 bytes -> When this command is sent with a zeroed unique id, the tunnel server interprets this as a netplay host wanting to create a new session, in this case, the same command is returned to the host, but now with its unique session id. When a client needs to connect to a host, this command is sent with the unique session id of the host, causing the tunnel server to send a RATL command to the host.
RATL[unique id] (RetroArch Tunnel Link) - 16 bytes -> The tunnel server sends this command to the host when a client wants to connect to the host. Once the host receives this command, it establishes a new connection to the tunnel server, sending this command together with the client's unique id through this new connection, causing the tunnel server to link this connection to the connection of the client.
RATP (RetroArch Tunnel Ping) - 4 bytes -> The tunnel server sends this command to verify that the host, whom the session belongs to, is still around. The host replies with the same command. A session is closed if the tunnel server can not verify that the host is alive.
Operations:
Host -> Instead of listening and accepting connections, it connects to the tunnel server, requests a new session and then monitor this connection for new linking requests. Once a request is received, it establishes a new connection to the tunnel server for linking with a client. The tunnel server's address and port are obtained by querying the lobby server. The host will publish its session id together with the rest of its info to the lobby server.
Client -> It connects to the tunnel server and then sends the session id of the host it wants to connect to. A host's session id is obtained from the json data sent by the lobby server.
Improvements (from current MITM system):
No longer a risk of TCP port exhaustion; we only use one port now at the tunnel server.
Very little cpu usage. About 95% net I/O bound now.
Future backwards compatible with any and all changes to netplay as it no longer runs any netplay logic at MITM servers.
No longer operates the host in client mode, which was a source of many of the current problems.
Cleaner and more maintainable system and code.
Notable functions:
netplay_mitm_query -> Grabs the tunnel's address and port from the lobby server.
init_tcp_socket -> Handles the creation and operation mode of the TCP socket based on whether it's host, host+MITM or client.
handle_mitm_connection -> Creates and completes linking connections and replies to ping commands (only 1 of each per call to not affect performance).
## MISC
Ping Limiter: If a client's estimated latency to the server is higher than this value, connection will be dropped just before finishing the netplay handshake.
Ping Counter: A ping counter (similar to the FPS one) can be shown in the bottom right corner of the screen, if you are connected to a host.
LAN Discovery: Refactored and moved to its own "Refresh Netplay LAN List" button.
## FIXES
Many minor fixes to the current netplay implementation are also included.
* Remove NETPLAY_TEST_BUILD
2021-12-19 12:58:01 -03:00
|
|
|
info_buf.content_crc = htonl(content_get_crc());
|
2021-09-18 06:14:34 +02:00
|
|
|
|
Netplay Stuff (#13375)
* Netplay Stuff
## PROTOCOL FALLBACK
In order to support older clients a protocol fallback system was introduced.
The host will no longer send its header automatically after a TCP connection is established, instead, it awaits for the client to send his before determining which protocol this connection is going to operate on.
Netplay has now two protocols, a low protocol and a high protocol; the low protocol is the minimum protocol it supports, while the high protocol is the highest protocol it can operate on.
To fully support older clients, a hack was necessary: sending the high protocol in the unused client's header salt field, while keeping the protocol field to the low protocol. Without this hack we would only be able to support older clients if a newer client was the host.
Any future system can make use of this system by checking connection->netplay_protocol, which is available for both the client and host.
## NETPLAY CHAT
Starting with protocol 6, netplay chat is available through the new NETPLAY_CMD_PLAYER_CHAT command.
Limitations of the command code, which causes a disconnection on unknown commands, makes this system not possible on protocol 5.
Protocol 5 connections can neither send nor receive chat, but other netplay operations are unaffected.
Clients send chat as a string to the server, and it's the server's sole responsability to relay chat messages.
As of now, sending chat uses RetroArch's input menu, while the display of on-screen chat uses a widget overlay and RetroArch's notifications as a fallback.
If a new overlay and/or input system is desired, no backwards compatibility changes need to be made.
Only clients in playing mode (as opposed to spectating mode) can send and receive chat.
## SETTINGS SHARING
Some settings are better used when both host and clients share the same configuration.
As of protocol 6, the following settings will be shared from host to clients (without altering a client's configuration file): input latency frames and allow pausing.
## NETPLAY TUNNEL/MITM
With the current MITM system being defunct (at least as of 1.9.X), a new system was in order to solve most if not all of the problems with the current system.
This new system uses a tunneling approach, which is similar to most VPN and tunneling services around.
Tunnel commands:
RATS[unique id] (RetroArch Tunnel Session) - 16 bytes -> When this command is sent with a zeroed unique id, the tunnel server interprets this as a netplay host wanting to create a new session, in this case, the same command is returned to the host, but now with its unique session id. When a client needs to connect to a host, this command is sent with the unique session id of the host, causing the tunnel server to send a RATL command to the host.
RATL[unique id] (RetroArch Tunnel Link) - 16 bytes -> The tunnel server sends this command to the host when a client wants to connect to the host. Once the host receives this command, it establishes a new connection to the tunnel server, sending this command together with the client's unique id through this new connection, causing the tunnel server to link this connection to the connection of the client.
RATP (RetroArch Tunnel Ping) - 4 bytes -> The tunnel server sends this command to verify that the host, whom the session belongs to, is still around. The host replies with the same command. A session is closed if the tunnel server can not verify that the host is alive.
Operations:
Host -> Instead of listening and accepting connections, it connects to the tunnel server, requests a new session and then monitor this connection for new linking requests. Once a request is received, it establishes a new connection to the tunnel server for linking with a client. The tunnel server's address and port are obtained by querying the lobby server. The host will publish its session id together with the rest of its info to the lobby server.
Client -> It connects to the tunnel server and then sends the session id of the host it wants to connect to. A host's session id is obtained from the json data sent by the lobby server.
Improvements (from current MITM system):
No longer a risk of TCP port exhaustion; we only use one port now at the tunnel server.
Very little cpu usage. About 95% net I/O bound now.
Future backwards compatible with any and all changes to netplay as it no longer runs any netplay logic at MITM servers.
No longer operates the host in client mode, which was a source of many of the current problems.
Cleaner and more maintainable system and code.
Notable functions:
netplay_mitm_query -> Grabs the tunnel's address and port from the lobby server.
init_tcp_socket -> Handles the creation and operation mode of the TCP socket based on whether it's host, host+MITM or client.
handle_mitm_connection -> Creates and completes linking connections and replies to ping commands (only 1 of each per call to not affect performance).
## MISC
Ping Limiter: If a client's estimated latency to the server is higher than this value, connection will be dropped just before finishing the netplay handshake.
Ping Counter: A ping counter (similar to the FPS one) can be shown in the bottom right corner of the screen, if you are connected to a host.
LAN Discovery: Refactored and moved to its own "Refresh Netplay LAN List" button.
## FIXES
Many minor fixes to the current netplay implementation are also included.
* Remove NETPLAY_TEST_BUILD
2021-12-19 12:58:01 -03:00
|
|
|
/* Third ping */
|
|
|
|
connection->ping_timer = cpu_features_get_time_usec();
|
2021-09-18 06:14:34 +02:00
|
|
|
|
|
|
|
/* Send it off and wait for info back */
|
|
|
|
if (!netplay_send(&connection->send_packet_buffer, connection->fd,
|
|
|
|
&info_buf, sizeof(info_buf)) ||
|
|
|
|
!netplay_send_flush(&connection->send_packet_buffer, connection->fd,
|
|
|
|
false))
|
|
|
|
return false;
|
|
|
|
|
2021-04-08 18:09:02 +02:00
|
|
|
return true;
|
|
|
|
}
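
Note how the INFO packet above is framed: cmd[0] carries the command id and cmd[1] carries the size of the payload that follows the two header words, both converted to network byte order with htonl(), which is why the size excludes 2*sizeof(uint32_t). The SYNC sender that follows uses the same header. The snippet below restates that framing in isolation; the helper name is hypothetical and is not part of the RetroArch sources.

#include <stdint.h>
#include <arpa/inet.h> /* htonl */

/* Hypothetical helper: write the two-word header shared by these netplay
 * commands, a 32-bit command id followed by a 32-bit payload size, both in
 * network byte order. */
static void example_write_cmd_header(uint32_t hdr[2],
      uint32_t cmd, uint32_t payload_size)
{
   hdr[0] = htonl(cmd);          /* which command this packet carries        */
   hdr[1] = htonl(payload_size); /* payload bytes that follow these 8 bytes  */
}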

/**
 * netplay_handshake_sync
 *
 * Send a SYNC command.
 */
static bool netplay_handshake_sync(netplay_t *netplay,
      struct netplay_connection *connection)
{
   /* If we're the server, now we send sync info */
   size_t i;
   int matchct;
   uint32_t cmd[4];
   retro_ctx_memory_info_t mem_info;
   uint32_t client_num = 0;
   uint32_t device = 0;
   size_t nicklen, nickmangle = 0;
   bool nick_matched = false;

#ifdef HAVE_THREADS
   autosave_lock();
#endif
   mem_info.id = RETRO_MEMORY_SAVE_RAM;
   core_get_memory(&mem_info);
#ifdef HAVE_THREADS
   autosave_unlock();
#endif

   /* Send basic sync info */
   cmd[0] = htonl(NETPLAY_CMD_SYNC);
   cmd[1] = htonl(2*sizeof(uint32_t)
         /* Controller devices */
         + MAX_INPUT_DEVICES*sizeof(uint32_t)
         /* Share modes */
         + MAX_INPUT_DEVICES*sizeof(uint8_t)
         /* Device-client mapping */
         + MAX_INPUT_DEVICES*sizeof(uint32_t)
         /* Client nick */
         + NETPLAY_NICK_LEN
         /* And finally, sram */
         + mem_info.size);
   cmd[2] = htonl(netplay->self_frame_count);
   client_num = (uint32_t)(connection - netplay->connections + 1);

   if (netplay->local_paused || netplay->remote_paused)
      client_num |= NETPLAY_CMD_SYNC_BIT_PAUSED;

   cmd[3] = htonl(client_num);

   if (!netplay_send(&connection->send_packet_buffer, connection->fd, cmd,
            sizeof(cmd)))
      return false;

   /* Now send the device info */
   for (i = 0; i < MAX_INPUT_DEVICES; i++)
   {
      device = htonl(netplay->config_devices[i]);
      if (!netplay_send(&connection->send_packet_buffer, connection->fd,
               &device, sizeof(device)))
         return false;
   }

   /* Then the share mode */
   if (!netplay_send(&connection->send_packet_buffer, connection->fd,
            netplay->device_share_modes, sizeof(netplay->device_share_modes)))
      return false;

   /* Then the device-client mapping */
   for (i = 0; i < MAX_INPUT_DEVICES; i++)
   {
      device = htonl(netplay->device_clients[i]);
      if (!netplay_send(&connection->send_packet_buffer, connection->fd,
               &device, sizeof(device)))
         return false;
   }

   /* Now see if we need to mangle their nick */
   nicklen = strlen(connection->nick);
   if (nicklen > NETPLAY_NICK_LEN - 5)
      nickmangle = NETPLAY_NICK_LEN - 5;
   else
      nickmangle = nicklen;
   matchct = 1;
   do
   {
      nick_matched = false;
      for (i = 0; i < netplay->connections_size; i++)
      {
         struct netplay_connection *sc = &netplay->connections[i];
         if (sc == connection)
            continue;
         if (sc->active &&
               sc->mode >= NETPLAY_CONNECTION_CONNECTED &&
               !strncmp(connection->nick, sc->nick, NETPLAY_NICK_LEN))
         {
            nick_matched = true;
            break;
         }
      }
      if (!strncmp(connection->nick, netplay->nick, NETPLAY_NICK_LEN))
         nick_matched = true;

      if (nick_matched)
      {
         /* Somebody has this nick, make a new one! */
         snprintf(connection->nick + nickmangle,
               NETPLAY_NICK_LEN - nickmangle, " (%d)", ++matchct);
         connection->nick[NETPLAY_NICK_LEN - 1] = '\0';
      }
   } while (nick_matched);

   /* Send the nick */
   if (!netplay_send(&connection->send_packet_buffer, connection->fd,
            connection->nick, NETPLAY_NICK_LEN))
      return false;

   /* And finally, the SRAM */
#ifdef HAVE_THREADS
   autosave_lock();
#endif
   if (!netplay_send(&connection->send_packet_buffer, connection->fd,
            mem_info.data, mem_info.size) ||
         !netplay_send_flush(&connection->send_packet_buffer, connection->fd,
            false))
   {
#ifdef HAVE_THREADS
      autosave_unlock();
#endif
      return false;
   }
#ifdef HAVE_THREADS
   autosave_unlock();
#endif

   /* Send our settings. */
   REQUIRE_PROTOCOL_VERSION(connection, 6)
   {
      uint32_t allow_pausing;
      int32_t frames[2];

      allow_pausing = htonl((uint32_t) netplay->allow_pausing);
      if (!netplay_send_raw_cmd(netplay, connection,
            NETPLAY_CMD_SETTING_ALLOW_PAUSING,
            &allow_pausing, sizeof(allow_pausing)))
         return false;

      frames[0] = htonl(netplay->input_latency_frames_min);
      frames[1] = htonl(netplay->input_latency_frames_max);
      if (!netplay_send_raw_cmd(netplay, connection,
            NETPLAY_CMD_SETTING_INPUT_LATENCY_FRAMES,
            frames, sizeof(frames)))
         return false;
   }

   if (!netplay_send_flush(&connection->send_packet_buffer,
         connection->fd, false))
      return false;

   /* Now we're ready! */
   connection->mode = NETPLAY_CONNECTION_SPECTATING;
   netplay_handshake_ready(netplay, connection);

   return true;
}
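
As a reading aid, the struct below lays out the SYNC payload in the order netplay_handshake_sync() streams it. The real code never builds this struct (it sends each piece separately, and struct padding is ignored here); the struct and field names are illustrative, the sizes mirror the size computation above and reuse the MAX_INPUT_DEVICES and NETPLAY_NICK_LEN constants from the surrounding code, and every multi-byte field is in network byte order.

/* Illustrative layout only, not part of the RetroArch sources. */
struct example_sync_payload
{
   uint32_t cmd;                                /* NETPLAY_CMD_SYNC                       */
   uint32_t size;                               /* bytes following this two-word header   */
   uint32_t frame_count;                        /* netplay->self_frame_count              */
   uint32_t client_num;                         /* may carry NETPLAY_CMD_SYNC_BIT_PAUSED  */
   uint32_t devices[MAX_INPUT_DEVICES];         /* controller devices                     */
   uint8_t  share_modes[MAX_INPUT_DEVICES];     /* per-device share modes                 */
   uint32_t device_clients[MAX_INPUT_DEVICES];  /* device-to-client mapping               */
   char     nick[NETPLAY_NICK_LEN];             /* the (possibly mangled) client nick     */
   uint8_t  sram[];                             /* save RAM contents                      */
};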

/**
 * netplay_handshake_pre_nick
 *
 * Data receiver for the second stage of handshake, receiving the other side's
 * nickname.
 */
static bool netplay_handshake_pre_nick(netplay_t *netplay,
      struct netplay_connection *connection, bool *had_input)
{
   struct nick_buf_s nick_buf;
   ssize_t recvd;
   int32_t ping = 0;
   const char *dmsg = NULL;
   settings_t *settings = config_get_ptr();
   bool extra_notifications = settings->bools.notification_show_netplay_extra;

   RECV(&nick_buf, sizeof(nick_buf)) {}

   /* Expecting only a nick command */
   if (recvd < 0 ||
         ntohl(nick_buf.cmd[0]) != NETPLAY_CMD_NICK ||
         ntohl(nick_buf.cmd[1]) != sizeof(nick_buf.nick))
   {
      dmsg = msg_hash_to_str(
            netplay->is_server
            ? MSG_FAILED_TO_GET_NICKNAME_FROM_CLIENT
            : MSG_FAILED_TO_RECEIVE_NICKNAME_FROM_HOST);
      RARCH_ERR("[Netplay] %s\n", dmsg);
      /* Useful to the client in figuring out
         what caused its premature disconnection,
         but not as useful to the server.
         Let it be optional if server. */
      if (!netplay->is_server || extra_notifications)
         runloop_msg_queue_push(dmsg, 1, 180, false, NULL,
               MESSAGE_QUEUE_ICON_DEFAULT, MESSAGE_QUEUE_CATEGORY_INFO);
      return false;
   }

   SET_PING(connection)
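
   /* SET_PING presumably records this connection's measured handshake latency
    * from the ping_timer samples taken earlier; per the ping limiter described
    * in the commit notes above, that estimate is what allows a client whose
    * latency to the server is too high to be dropped just before the netplay
    * handshake finishes. */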
|
Netplay Stuff (#13375)
* Netplay Stuff
## PROTOCOL FALLBACK
In order to support older clients a protocol fallback system was introduced.
The host will no longer send its header automatically after a TCP connection is established, instead, it awaits for the client to send his before determining which protocol this connection is going to operate on.
Netplay has now two protocols, a low protocol and a high protocol; the low protocol is the minimum protocol it supports, while the high protocol is the highest protocol it can operate on.
To fully support older clients, a hack was necessary: sending the high protocol in the unused client's header salt field, while keeping the protocol field to the low protocol. Without this hack we would only be able to support older clients if a newer client was the host.
Any future system can make use of this system by checking connection->netplay_protocol, which is available for both the client and host.
## NETPLAY CHAT
Starting with protocol 6, netplay chat is available through the new NETPLAY_CMD_PLAYER_CHAT command.
Limitations of the command code, which causes a disconnection on unknown commands, makes this system not possible on protocol 5.
Protocol 5 connections can neither send nor receive chat, but other netplay operations are unaffected.
Clients send chat as a string to the server, and it's the server's sole responsability to relay chat messages.
As of now, sending chat uses RetroArch's input menu, while the display of on-screen chat uses a widget overlay and RetroArch's notifications as a fallback.
If a new overlay and/or input system is desired, no backwards compatibility changes need to be made.
Only clients in playing mode (as opposed to spectating mode) can send and receive chat.
## SETTINGS SHARING
Some settings are better used when both host and clients share the same configuration.
As of protocol 6, the following settings will be shared from host to clients (without altering a client's configuration file): input latency frames and allow pausing.
## NETPLAY TUNNEL/MITM
With the current MITM system being defunct (at least as of 1.9.X), a new system was in order to solve most if not all of the problems with the current system.
This new system uses a tunneling approach, which is similar to most VPN and tunneling services around.
Tunnel commands:
RATS[unique id] (RetroArch Tunnel Session) - 16 bytes -> When this command is sent with a zeroed unique id, the tunnel server interprets this as a netplay host wanting to create a new session, in this case, the same command is returned to the host, but now with its unique session id. When a client needs to connect to a host, this command is sent with the unique session id of the host, causing the tunnel server to send a RATL command to the host.
RATL[unique id] (RetroArch Tunnel Link) - 16 bytes -> The tunnel server sends this command to the host when a client wants to connect to the host. Once the host receives this command, it establishes a new connection to the tunnel server, sending this command together with the client's unique id through this new connection, causing the tunnel server to link this connection to the connection of the client.
RATP (RetroArch Tunnel Ping) - 4 bytes -> The tunnel server sends this command to verify that the host, whom the session belongs to, is still around. The host replies with the same command. A session is closed if the tunnel server can not verify that the host is alive.
Operations:
Host -> Instead of listening and accepting connections, it connects to the tunnel server, requests a new session and then monitor this connection for new linking requests. Once a request is received, it establishes a new connection to the tunnel server for linking with a client. The tunnel server's address and port are obtained by querying the lobby server. The host will publish its session id together with the rest of its info to the lobby server.
Client -> It connects to the tunnel server and then sends the session id of the host it wants to connect to. A host's session id is obtained from the json data sent by the lobby server.
Improvements (from current MITM system):
No longer a risk of TCP port exhaustion; we only use one port now at the tunnel server.
Very little cpu usage. About 95% net I/O bound now.
Future backwards compatible with any and all changes to netplay as it no longer runs any netplay logic at MITM servers.
No longer operates the host in client mode, which was a source of many of the current problems.
Cleaner and more maintainable system and code.
Notable functions:
netplay_mitm_query -> Grabs the tunnel's address and port from the lobby server.
init_tcp_socket -> Handles the creation and operation mode of the TCP socket based on whether it's host, host+MITM or client.
handle_mitm_connection -> Creates and completes linking connections and replies to ping commands (only 1 of each per call to not affect performance).
## MISC
Ping Limiter: If a client's estimated latency to the server is higher than this value, connection will be dropped just before finishing the netplay handshake.
Ping Counter: A ping counter (similar to the FPS one) can be shown in the bottom right corner of the screen, if you are connected to a host.
LAN Discovery: Refactored and moved to its own "Refresh Netplay LAN List" button.
## FIXES
Many minor fixes to the current netplay implementation are also included.
* Remove NETPLAY_TEST_BUILD
2021-12-19 12:58:01 -03:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
strlcpy(connection->nick, nick_buf.nick,
|
Netplay Stuff (#13375)
* Netplay Stuff
## PROTOCOL FALLBACK
In order to support older clients a protocol fallback system was introduced.
The host will no longer send its header automatically after a TCP connection is established, instead, it awaits for the client to send his before determining which protocol this connection is going to operate on.
Netplay has now two protocols, a low protocol and a high protocol; the low protocol is the minimum protocol it supports, while the high protocol is the highest protocol it can operate on.
To fully support older clients, a hack was necessary: sending the high protocol in the unused client's header salt field, while keeping the protocol field to the low protocol. Without this hack we would only be able to support older clients if a newer client was the host.
Any future system can make use of this system by checking connection->netplay_protocol, which is available for both the client and host.
## NETPLAY CHAT
Starting with protocol 6, netplay chat is available through the new NETPLAY_CMD_PLAYER_CHAT command.
Limitations of the command code, which causes a disconnection on unknown commands, makes this system not possible on protocol 5.
Protocol 5 connections can neither send nor receive chat, but other netplay operations are unaffected.
Clients send chat as a string to the server, and it's the server's sole responsability to relay chat messages.
As of now, sending chat uses RetroArch's input menu, while the display of on-screen chat uses a widget overlay and RetroArch's notifications as a fallback.
If a new overlay and/or input system is desired, no backwards compatibility changes need to be made.
Only clients in playing mode (as opposed to spectating mode) can send and receive chat.
## SETTINGS SHARING
Some settings are better used when both host and clients share the same configuration.
As of protocol 6, the following settings will be shared from host to clients (without altering a client's configuration file): input latency frames and allow pausing.
## NETPLAY TUNNEL/MITM
With the current MITM system being defunct (at least as of 1.9.X), a new system was in order to solve most if not all of the problems with the current system.
This new system uses a tunneling approach, which is similar to most VPN and tunneling services around.
Tunnel commands:
RATS[unique id] (RetroArch Tunnel Session) - 16 bytes -> When this command is sent with a zeroed unique id, the tunnel server interprets this as a netplay host wanting to create a new session, in this case, the same command is returned to the host, but now with its unique session id. When a client needs to connect to a host, this command is sent with the unique session id of the host, causing the tunnel server to send a RATL command to the host.
RATL[unique id] (RetroArch Tunnel Link) - 16 bytes -> The tunnel server sends this command to the host when a client wants to connect to the host. Once the host receives this command, it establishes a new connection to the tunnel server, sending this command together with the client's unique id through this new connection, causing the tunnel server to link this connection to the connection of the client.
RATP (RetroArch Tunnel Ping) - 4 bytes -> The tunnel server sends this command to verify that the host, whom the session belongs to, is still around. The host replies with the same command. A session is closed if the tunnel server can not verify that the host is alive.
Operations:
Host -> Instead of listening and accepting connections, it connects to the tunnel server, requests a new session and then monitor this connection for new linking requests. Once a request is received, it establishes a new connection to the tunnel server for linking with a client. The tunnel server's address and port are obtained by querying the lobby server. The host will publish its session id together with the rest of its info to the lobby server.
Client -> It connects to the tunnel server and then sends the session id of the host it wants to connect to. A host's session id is obtained from the json data sent by the lobby server.
Improvements (from current MITM system):
No longer a risk of TCP port exhaustion; we only use one port now at the tunnel server.
Very little CPU usage; it is now roughly 95% network I/O bound.
Compatible with any future changes to netplay, since MITM servers no longer run any netplay logic.
No longer operates the host in client mode, which was a source of many of the current problems.
Cleaner and more maintainable system and code.
Notable functions:
netplay_mitm_query -> Grabs the tunnel's address and port from the lobby server.
init_tcp_socket -> Handles the creation and operation mode of the TCP socket based on whether it's host, host+MITM or client.
handle_mitm_connection -> Creates and completes linking connections and replies to ping commands (only 1 of each per call to not affect performance).
## MISC
Ping Limiter: If a client's estimated latency to the server is higher than this value, the connection is dropped just before the netplay handshake finishes (see the sketch after this list).
Ping Counter: A ping counter (similar to the FPS one) can be shown in the bottom right corner of the screen if you are connected to a host.
LAN Discovery: Refactored and moved to its own "Refresh Netplay LAN List" button.
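A minimal sketch of the ping-limiter check mentioned above: the latency estimated during the handshake is compared against the configured maximum right before the handshake completes. The function and variable names are illustrative, not the actual implementation; a limit of 0 is assumed to mean "no limit".

#include <stdbool.h>
#include <stdint.h>

static bool ping_within_limit(int32_t estimated_ping_ms, unsigned max_ping_ms)
{
   if (max_ping_ms && estimated_ping_ms >= 0 &&
         (unsigned)estimated_ping_ms > max_ping_ms)
      return false; /* caller drops the connection before finishing the handshake */
   return true;
}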
## FIXES
Many minor fixes to the current netplay implementation are also included.
* Remove NETPLAY_TEST_BUILD
         sizeof(connection->nick));

   if (netplay->is_server)
   {
      if (!netplay_handshake_nick(netplay, connection))
         return false;

      /* There's a password, so just put them in PRE_PASSWORD mode */
      if (  !string_is_empty(settings->paths.netplay_password) ||
            !string_is_empty(settings->paths.netplay_spectate_password))
         connection->mode = NETPLAY_CONNECTION_PRE_PASSWORD;
      else
      {
         if (!netplay_handshake_info(netplay, connection))
            return false;
         connection->can_play = true;
         connection->mode = NETPLAY_CONNECTION_PRE_INFO;
      }
   }
   /* Client needs to wait for INFO */
   else
      connection->mode = NETPLAY_CONNECTION_PRE_INFO;

   *had_input = true;
   netplay_recv_flush(&connection->recv_packet_buffer);
   return true;
}

/**
 * netplay_handshake_pre_password
 *
 * Data receiver for the third, optional stage of server handshake, receiving
 * the password and sending core/content info.
 */
static bool netplay_handshake_pre_password(netplay_t *netplay,
      struct netplay_connection *connection, bool *had_input)
{
   struct password_buf_s password_buf;
   char password[8+NETPLAY_PASS_LEN]; /* 8 for salt */
   char hash[NETPLAY_PASS_HASH_LEN+1]; /* + NULL terminator */
   ssize_t recvd;
   bool correct = false;
   settings_t *settings = config_get_ptr();

   /* Only the server should call this */
   if (!netplay->is_server)
      return false;

   /* RECV reads the full password packet into password_buf and leaves the
    * number of received bytes in recvd; it returns early if the whole
    * packet has not arrived yet. */
   RECV(&password_buf, sizeof(password_buf)) {}

   /* Expecting only a password command */
   if (recvd < 0 ||
         ntohl(password_buf.cmd[0]) != NETPLAY_CMD_PASSWORD ||
         ntohl(password_buf.cmd[1]) != sizeof(password_buf.password))
   {
      RARCH_ERR("[Netplay] Failed to receive netplay password.\n");
      return false;
   }

   /* Calculate the correct password hash(es) and compare */
   snprintf(password, sizeof(password), "%08lX", (unsigned long)connection->salt);
   if (!string_is_empty(settings->paths.netplay_password))
   {
      strlcpy(password + 8,
            settings->paths.netplay_password, sizeof(password)-8);

      sha256_hash(hash, (uint8_t *) password, strlen(password));

      if (!memcmp(password_buf.password, hash, NETPLAY_PASS_HASH_LEN))
      {
         correct = true;
         connection->can_play = true;
      }
   }
   if (!correct && !string_is_empty(settings->paths.netplay_spectate_password))
   {
      strlcpy(password + 8,
            settings->paths.netplay_spectate_password, sizeof(password)-8);

      sha256_hash(hash, (uint8_t *) password, strlen(password));

      if (!memcmp(password_buf.password, hash, NETPLAY_PASS_HASH_LEN))
         correct = true;
   }

   /* Just disconnect if it was wrong */
   if (!correct)
   {
      RARCH_WARN("[Netplay] A client tried to connect with the wrong password.\n");
      return false;
   }

   /* Otherwise, exchange info */
   if (!netplay_handshake_info(netplay, connection))
      return false;

   *had_input = true;
   connection->mode = NETPLAY_CONNECTION_PRE_INFO;
   netplay_recv_flush(&connection->recv_packet_buffer);
   return true;
}

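For reference, a client has to produce exactly the digest compared above: the salt rendered as eight upper-case hexadecimal characters, followed by the plaintext password, hashed with SHA-256. A minimal sketch reusing the helpers and constants that appear in this file (the function name itself is illustrative):

/* Sketch: compute the hash the server checks above. Assumes the same
 * sha256_hash()/strlcpy() helpers and NETPLAY_PASS_* constants as this file. */
static void compute_password_hash(uint32_t salt, const char *plain,
      char hash_out[NETPLAY_PASS_HASH_LEN+1])
{
   char buf[8+NETPLAY_PASS_LEN]; /* 8 for salt, as above */

   snprintf(buf, sizeof(buf), "%08lX", (unsigned long)salt);
   strlcpy(buf + 8, plain, sizeof(buf)-8);
   sha256_hash(hash_out, (uint8_t *)buf, strlen(buf));
}
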
/**
 * netplay_handshake_pre_info
 *
 * Data receiver for the third stage of server handshake, receiving
 * the client's core/content info.
 */
static bool netplay_handshake_pre_info(netplay_t *netplay,
      struct netplay_connection *connection, bool *had_input)
{
   struct info_buf_s info_buf;
   uint32_t cmd_size;
   ssize_t recvd;
   uint32_t content_crc = 0;
   int32_t ping = 0;
   const char *dmsg = NULL;
   struct retro_system_info *system = &runloop_state_get_ptr()->system.info;
   settings_t *settings = config_get_ptr();
   bool extra_notifications = settings->bools.notification_show_netplay_extra;
   unsigned max_ping = settings->uints.netplay_max_ping;

   RECV(&info_buf, sizeof(info_buf.cmd))
   {
      if (!netplay->is_server)
      {
         dmsg = msg_hash_to_str(MSG_NETPLAY_INCORRECT_PASSWORD);
         RARCH_ERR("[Netplay] %s\n", dmsg);
         runloop_msg_queue_push(dmsg, 1, 180, false, NULL, MESSAGE_QUEUE_ICON_DEFAULT, MESSAGE_QUEUE_CATEGORY_INFO);
         return false;
      }
   }

   if (recvd < 0 ||
         ntohl(info_buf.cmd[0]) != NETPLAY_CMD_INFO)
   {
      RARCH_ERR("[Netplay] Failed to receive netplay info.\n");
      return false;
   }

   if (netplay->is_server)
   {
      /* Only the server is able to estimate latency at this point. */
      SET_PING(connection)
   }

   cmd_size = ntohl(info_buf.cmd[1]);

   if (cmd_size != sizeof(info_buf) - sizeof(info_buf.cmd))
   {
      /* Either the host doesn't have anything loaded,
         or this is just screwy */
      if (cmd_size != 0)
      {
         /* Huh? */
         RARCH_ERR("[Netplay] Invalid NETPLAY_CMD_INFO payload size.\n");
         return false;
      }

      /* Send our info and hope they load it! */
      if (!netplay_handshake_info(netplay, connection))
         return false;

      *had_input = true;
      netplay_recv_flush(&connection->recv_packet_buffer);
      return true;
   }

   RECV(&info_buf.content_crc, cmd_size)
      return false;

   /* Check the core info */
   if (system)
   {
      if (strncmp(info_buf.core_name,
            system->library_name, sizeof(info_buf.core_name)))
      {
         /* Wrong core! */
         dmsg = msg_hash_to_str(MSG_NETPLAY_DIFFERENT_CORES);
         RARCH_ERR("[Netplay] %s\n", dmsg);
         /* Useful to the client in figuring out
            what caused its premature disconnection,
            but not as useful to the server.
            Let it be optional if server. */
         if (!netplay->is_server || extra_notifications)
            runloop_msg_queue_push(dmsg, 1, 180, false, NULL,
                  MESSAGE_QUEUE_ICON_DEFAULT, MESSAGE_QUEUE_CATEGORY_INFO);
         /* FIXME: Should still send INFO, so the
            other side knows what's what */
         return false;
      }

      if (strncmp(info_buf.core_version,
            system->library_version, sizeof(info_buf.core_version)))
      {
         dmsg = msg_hash_to_str(MSG_NETPLAY_DIFFERENT_CORE_VERSIONS);
         RARCH_WARN("[Netplay] %s\n", dmsg);
         if (extra_notifications)
            runloop_msg_queue_push(dmsg, 1, 180, false, NULL,
                  MESSAGE_QUEUE_ICON_DEFAULT, MESSAGE_QUEUE_CATEGORY_INFO);
      }
   }

   /* Check the content CRC */
   content_crc = content_get_crc();

   if (content_crc != 0)
   {
      if (ntohl(info_buf.content_crc) != content_crc)
      {
         dmsg = msg_hash_to_str(MSG_CONTENT_CRC32S_DIFFER);
         RARCH_WARN("[Netplay] %s\n", dmsg);
         if (extra_notifications)
            runloop_msg_queue_push(dmsg, 1, 180, false, NULL,
                  MESSAGE_QUEUE_ICON_DEFAULT,
                  MESSAGE_QUEUE_CATEGORY_INFO);
      }
   }

   /* Now switch to the right mode */
   if (netplay->is_server)
   {
      if (max_ping && (unsigned)connection->ping > max_ping)
         return false;

      if (!netplay_handshake_sync(netplay, connection))
         return false;
   }
   else
   {
      if (!netplay_handshake_info(netplay, connection))
         return false;
      connection->mode = NETPLAY_CONNECTION_PRE_SYNC;
   }

   *had_input = true;
   netplay_recv_flush(&connection->recv_packet_buffer);
   return true;
}

/**
 * netplay_handshake_pre_sync
 *
 * Data receiver for the client's third handshake stage, receiving the
 * synchronization information.
 */
static bool netplay_handshake_pre_sync(netplay_t *netplay,
      struct netplay_connection *connection, bool *had_input)
{
   uint32_t cmd[2];
   uint32_t new_frame_count, client_num;
   uint32_t device;
   uint32_t local_sram_size, remote_sram_size;
   size_t i, j;
   ssize_t recvd;
   retro_ctx_controller_info_t pad;
   char new_nick[NETPLAY_NICK_LEN];
   retro_ctx_memory_info_t mem_info;

   /* Only the client should call this */
   if (netplay->is_server)
      return false;
2021-12-19 12:58:01 -03:00
   RECV(cmd, sizeof(cmd))
   {
      const char *dmsg = msg_hash_to_str(MSG_PING_TOO_HIGH);
      RARCH_ERR("[Netplay] %s\n", dmsg);
      runloop_msg_queue_push(dmsg, 1, 180, false, NULL,
            MESSAGE_QUEUE_ICON_DEFAULT, MESSAGE_QUEUE_CATEGORY_INFO);
      return false;
   }

   /* Only expecting a sync command */
   if (ntohl(cmd[0]) != NETPLAY_CMD_SYNC ||
         ntohl(cmd[1]) < (2+2*MAX_INPUT_DEVICES)*sizeof(uint32_t) +
            (MAX_INPUT_DEVICES)*sizeof(uint8_t) + NETPLAY_NICK_LEN)
   {
      RARCH_ERR("[Netplay] Failed to receive netplay sync.\n");
      return false;
   }

   /* Get the frame count */
   RECV(&new_frame_count, sizeof(new_frame_count))
      return false;
   new_frame_count = ntohl(new_frame_count);
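   /* Roadmap for the rest of the sync payload, as consumed below (a sketch
    * derived from the reads that follow and the minimum-size check on cmd[1]
    * above; the field names here are illustrative labels, not struct members):
    *    uint32_t client_num;                          (with pause bit)
    *    uint32_t device_types[MAX_INPUT_DEVICES];
    *    uint8_t  share_modes[MAX_INPUT_DEVICES];
    *    uint32_t device_client_masks[MAX_INPUT_DEVICES];
    *    char     nick[NETPLAY_NICK_LEN];
    *    uint8_t  sram[];                              (remainder of cmd[1])
    */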

   /* Get our client number and pause mode */
   RECV(&client_num, sizeof(client_num))
      return false;
   client_num = ntohl(client_num);
   if (client_num & NETPLAY_CMD_SYNC_BIT_PAUSED)
   {
      netplay->remote_paused = true;
      client_num ^= NETPLAY_CMD_SYNC_BIT_PAUSED;
   }
   netplay->self_client_num = client_num;

   /* Set our frame counters as requested */
   netplay->self_frame_count = netplay->run_frame_count =
      netplay->other_frame_count = netplay->unread_frame_count =
      netplay->server_frame_count = new_frame_count;

   /* And clear out the framebuffer */
   for (i = 0; i < netplay->buffer_size; i++)
   {
      struct delta_frame *ptr = &netplay->buffer[i];

      ptr->used = false;

      if (i == netplay->self_ptr)
      {
         /* Clear out any current data but still use this frame */
         if (!netplay_delta_frame_ready(netplay, ptr, 0))
            return false;

         ptr->frame = new_frame_count;
         ptr->have_local = true;
         netplay->run_ptr = netplay->other_ptr = netplay->unread_ptr =
            netplay->server_ptr = i;
      }
   }

   for (i = 0; i < MAX_CLIENTS; i++)
   {
      netplay->read_ptr[i] = netplay->self_ptr;
      netplay->read_frame_count[i] = netplay->self_frame_count;
   }

   /* Get and set each input device */
   for (i = 0; i < MAX_INPUT_DEVICES; i++)
   {
      RECV(&device, sizeof(device))
         return false;

      pad.port = (unsigned)i;
      pad.device = ntohl(device);
      netplay->config_devices[i] = pad.device;
      if ((pad.device&RETRO_DEVICE_MASK) == RETRO_DEVICE_KEYBOARD)
      {
         netplay->have_updown_device = true;
         netplay_key_hton_init();
      }
      core_set_controller_port_device(&pad);
   }

   /* Get the share modes */
   RECV(netplay->device_share_modes, sizeof(netplay->device_share_modes))
      return false;

   /* Get the client-controller mapping */
   netplay->connected_players =
         netplay->connected_slaves =
         netplay->self_devices = 0;
   for (i = 0; i < MAX_CLIENTS; i++)
      netplay->client_devices[i] = 0;
   for (i = 0; i < MAX_INPUT_DEVICES; i++)
   {
      RECV(&device, sizeof(device))
         return false;
      device = ntohl(device);

      netplay->device_clients[i] = device;
      netplay->connected_players |= device;
      for (j = 0; j < MAX_CLIENTS; j++)
      {
         if (device & (1<<j))
            netplay->client_devices[j] |= 1<<i;
      }
   }
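
   /* Worked example of the mapping just read (illustrative): if the host
    * sends device_clients[0] == 0x3, clients 0 and 1 share input port 0, so
    * the loop above sets bit (1<<0) in both client_devices[0] and
    * client_devices[1].  device_clients[] is indexed by port and holds a
    * bitmask of clients; client_devices[] is indexed by client and holds a
    * bitmask of ports. */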

   /* Get our nick */
   RECV(new_nick, NETPLAY_NICK_LEN)
      return false;

   if (strncmp(netplay->nick, new_nick, NETPLAY_NICK_LEN))
   {
      char msg[512];
      strlcpy(netplay->nick, new_nick, NETPLAY_NICK_LEN);
      snprintf(msg, sizeof(msg),
            msg_hash_to_str(MSG_NETPLAY_CHANGED_NICK), netplay->nick);
      RARCH_LOG("[Netplay] %s\n", msg);
      runloop_msg_queue_push(msg, 1, 180, false, NULL,
            MESSAGE_QUEUE_ICON_DEFAULT, MESSAGE_QUEUE_CATEGORY_INFO);
   }

   /* Now check the SRAM */
#ifdef HAVE_THREADS
   autosave_lock();
#endif
   mem_info.id = RETRO_MEMORY_SAVE_RAM;
   core_get_memory(&mem_info);

   local_sram_size = (unsigned)mem_info.size;
   remote_sram_size = ntohl(cmd[1]) -
         (2+2*MAX_INPUT_DEVICES)*sizeof(uint32_t) -
         (MAX_INPUT_DEVICES)*sizeof(uint8_t) - NETPLAY_NICK_LEN;

   if (local_sram_size != 0 && local_sram_size == remote_sram_size)
   {
      RECV(mem_info.data, local_sram_size)
      {
         RARCH_ERR("[Netplay] %s\n",
               msg_hash_to_str(MSG_FAILED_TO_RECEIVE_SRAM_DATA_FROM_HOST));
#ifdef HAVE_THREADS
         autosave_unlock();
#endif
         return false;
      }
   }
   else if (remote_sram_size != 0)
   {
      /* We can't load this, but we still need to get rid of the data */
      uint32_t quickbuf;
      while (remote_sram_size > 0)
      {
         RECV(&quickbuf, (remote_sram_size > sizeof(uint32_t)) ? sizeof(uint32_t) : remote_sram_size)
         {
            RARCH_ERR("[Netplay] %s\n",
                  msg_hash_to_str(MSG_FAILED_TO_RECEIVE_SRAM_DATA_FROM_HOST));
#ifdef HAVE_THREADS
            autosave_unlock();
#endif
            return false;
         }

         if (remote_sram_size > sizeof(uint32_t))
            remote_sram_size -= sizeof(uint32_t);
         else
            remote_sram_size = 0;
      }
   }

#ifdef HAVE_THREADS
   autosave_unlock();
#endif

   /* We're ready! */
   *had_input = true;
   netplay->self_mode = NETPLAY_CONNECTION_SPECTATING;
   connection->mode = NETPLAY_CONNECTION_PLAYING;
   netplay_handshake_ready(netplay, connection);
   netplay_recv_flush(&connection->recv_packet_buffer);

   /* Ask to switch to playing mode if we should */
   {
      settings_t *settings = config_get_ptr();
      if (!settings->bools.netplay_start_as_spectator)
         return netplay_cmd_mode(netplay, NETPLAY_CONNECTION_PLAYING);
   }

   return true;
}

/**
 * netplay_handshake
 *
 * Data receiver for all handshake states.
 */
bool netplay_handshake(netplay_t *netplay,
      struct netplay_connection *connection, bool *had_input)
{
   bool ret = false;

   switch (connection->mode)
   {
      case NETPLAY_CONNECTION_INIT:
         ret = netplay_handshake_init(netplay, connection, had_input);
         break;
      case NETPLAY_CONNECTION_PRE_NICK:
         ret = netplay_handshake_pre_nick(netplay, connection, had_input);
         break;
      case NETPLAY_CONNECTION_PRE_PASSWORD:
         ret = netplay_handshake_pre_password(netplay, connection, had_input);
         break;
      case NETPLAY_CONNECTION_PRE_INFO:
         ret = netplay_handshake_pre_info(netplay, connection, had_input);
         break;
      case NETPLAY_CONNECTION_PRE_SYNC:
         ret = netplay_handshake_pre_sync(netplay, connection, had_input);
         break;
      case NETPLAY_CONNECTION_NONE:
      default:
         return false;
   }

   if (connection->mode >= NETPLAY_CONNECTION_CONNECTED &&
         !netplay_send_cur_input(netplay, connection))
      return false;

   return ret;
}
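
/* Driver sketch (illustrative; the real call site lives in the netplay input
 * polling code): while a connection is still handshaking, call
 * netplay_handshake() whenever its socket has data, until the handlers above
 * advance connection->mode to NETPLAY_CONNECTION_CONNECTED or beyond:
 *
 *    bool had_input;
 *    do
 *    {
 *       had_input = false;
 *       if (!netplay_handshake(netplay, connection, &had_input))
 *          ...drop the connection...
 *    } while (had_input);
 */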

/* The mapping of keys from libretro (host) to netplay (network) */
uint32_t netplay_key_hton(unsigned key)
{
   net_driver_state_t *net_st = &networking_driver_st;
   if (key >= RETROK_LAST)
      return NETPLAY_KEY_UNKNOWN;
   return net_st->mapping[key];
}

/* Because the hton keymapping has to be generated, call this before using
 * netplay_key_hton */
void netplay_key_hton_init(void)
{
   static bool mapping_defined = false;
   net_driver_state_t *net_st = &networking_driver_st;

   if (!mapping_defined)
   {
      uint16_t i;
      for (i = 0; i < NETPLAY_KEY_LAST; i++)
         net_st->mapping[NETPLAY_KEY_NTOH(i)] = i;
      mapping_defined = true;
   }
}
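
/* Usage sketch (illustrative): build the table once, then translate libretro
 * keycodes to their wire representation before packing keyboard input:
 *
 *    netplay_key_hton_init();
 *    uint32_t netkey = netplay_key_hton(RETROK_a);
 *    if (netkey != NETPLAY_KEY_UNKNOWN)
 *       ...set the corresponding bit in the keyboard input words...
 *
 * netplay_key_hton_init() is guarded by a static flag, so the extra call made
 * during the handshake when a keyboard device is announced is harmless. */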

static void clear_input(netplay_input_state_t istate)
{
   while (istate)
   {
      istate->used = false;
      istate = istate->next;
   }
}

/**
 * netplay_delta_frame_ready
 *
 * Prepares, if possible, a delta frame for input, and reports whether it is
 * ready.
 *
 * Returns: True if the delta frame is ready for input at the given frame,
 * false otherwise.
 */
bool netplay_delta_frame_ready(netplay_t *netplay, struct delta_frame *delta,
      uint32_t frame)
{
   size_t i;
   if (delta->used)
   {
      if (delta->frame == frame)
         return true;
      /* We haven't even replayed this frame yet,
       * so we can't overwrite it! */
      if (netplay->other_frame_count <= delta->frame)
         return false;
   }

   delta->used = true;
   delta->frame = frame;
   delta->crc = 0;

   for (i = 0; i < MAX_INPUT_DEVICES; i++)
   {
      clear_input(delta->resolved_input[i]);
      clear_input(delta->real_input[i]);
      clear_input(delta->simlated_input[i]);
   }
   delta->have_local = false;
   for (i = 0; i < MAX_CLIENTS; i++)
      delta->have_real[i] = false;
   return true;
}
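
/* Caller sketch (illustrative): before writing input for a new frame into a
 * buffer slot, check that the slot may be recycled:
 *
 *    struct delta_frame *ptr = &netplay->buffer[netplay->self_ptr];
 *    if (!netplay_delta_frame_ready(netplay, ptr, netplay->self_frame_count))
 *       return;   // slot still awaiting replay; try again next poll
 *
 * A false return means the old frame in this slot has not been replayed yet
 * and must not be overwritten. */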

/**
 * netplay_delta_frame_crc
 *
 * Get the CRC for the serialization of this frame.
 */
static uint32_t netplay_delta_frame_crc(netplay_t *netplay,
      struct delta_frame *delta)
{
   return encoding_crc32(0L, (const unsigned char*)delta->state,
         netplay->state_size);
}

/*
 * Free an input state list
 */
static void free_input_state(netplay_input_state_t *list)
{
   netplay_input_state_t cur, next;
   cur = *list;
   while (cur)
   {
      next = cur->next;
      free(cur);
      cur = next;
   }
   *list = NULL;
}

/**
 * netplay_delta_frame_free
 *
 * Free a delta frame's dependencies
 */
static void netplay_delta_frame_free(struct delta_frame *delta)
{
   uint32_t i;

   if (delta->state)
   {
      free(delta->state);
      delta->state = NULL;
   }

   for (i = 0; i < MAX_INPUT_DEVICES; i++)
   {
      free_input_state(&delta->resolved_input[i]);
      free_input_state(&delta->real_input[i]);
      free_input_state(&delta->simlated_input[i]);
   }
}

/**
 * netplay_input_state_for
 *
 * Get an input state for a particular client
 */
netplay_input_state_t netplay_input_state_for(
      netplay_input_state_t *list,
      uint32_t client_num, size_t size,
      bool must_create, bool must_not_create)
{
   netplay_input_state_t ret;
   while (*list)
   {
      ret = *list;
      if (!ret->used && !must_not_create && ret->size == size)
      {
         ret->client_num = client_num;
         ret->used = true;
         memset(ret->data, 0, size*sizeof(uint32_t));
         return ret;
      }
      else if (ret->used && ret->client_num == client_num)
      {
         if (!must_create && ret->size == size)
            return ret;
         return NULL;
      }
      list = &(ret->next);
   }

   if (must_not_create)
      return NULL;

   /* Couldn't find a slot, allocate a fresh one */
   if (size > 1)
      ret = (netplay_input_state_t)calloc(1, sizeof(struct netplay_input_state) + (size-1) * sizeof(uint32_t));
   else
      ret = (netplay_input_state_t)calloc(1, sizeof(struct netplay_input_state));
   if (!ret)
      return NULL;
   *list = ret;
   ret->client_num = client_num;
   ret->used = true;
   ret->size = (uint32_t)size;
   return ret;
}
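
/* Usage sketch (illustrative): fetch-or-create the input words of one client
 * for one port on a given frame, sized via netplay_expected_input_size():
 *
 *    uint32_t words = netplay_expected_input_size(netplay, 1 << port);
 *    netplay_input_state_t istate = netplay_input_state_for(
 *          &delta->real_input[port], client_num, words, false, false);
 *    if (istate)
 *       istate->data[0] = buttons;
 *
 * Pass must_not_create=true for a pure lookup (NULL when absent), or
 * must_create=true to fail instead of reusing an existing entry. */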

/**
 * netplay_expected_input_size
 *
 * Size in words for a given set of devices.
 */
uint32_t netplay_expected_input_size(netplay_t *netplay, uint32_t devices)
{
   uint32_t ret = 0, device;

   for (device = 0; device < MAX_INPUT_DEVICES; device++)
   {
      if (!(devices & (1<<device)))
         continue;

      switch (netplay->config_devices[device]&RETRO_DEVICE_MASK)
      {
         /* These are all essentially magic numbers, but each device has a
          * fixed size, documented in network/netplay/README */
         case RETRO_DEVICE_JOYPAD:
            ret += 1;
            break;
         case RETRO_DEVICE_MOUSE:
            ret += 2;
            break;
         case RETRO_DEVICE_KEYBOARD:
            ret += 5;
            break;
         case RETRO_DEVICE_LIGHTGUN:
            ret += 2;
            break;
         case RETRO_DEVICE_ANALOG:
            ret += 3;
            break;
         default:
            break; /* Unsupported */
      }
   }

   return ret;
}
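
/* Worked example (illustrative): with a joypad configured on port 0 and an
 * analog pad on port 1, netplay_expected_input_size(netplay,
 * (1 << 0) | (1 << 1)) returns 1 + 3 = 4 words, i.e. 16 bytes of input state
 * per frame for that device set. */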

static size_t buf_used(struct socket_buffer *sbuf)
{
   if (sbuf->end < sbuf->start)
   {
      size_t newend = sbuf->end;
      while (newend < sbuf->start)
         newend += sbuf->bufsz;
      return newend - sbuf->start;
   }

   return sbuf->end - sbuf->start;
}

static size_t buf_unread(struct socket_buffer *sbuf)
{
   if (sbuf->end < sbuf->read)
   {
      size_t newend = sbuf->end;
      while (newend < sbuf->read)
         newend += sbuf->bufsz;
      return newend - sbuf->read;
   }

   return sbuf->end - sbuf->read;
}

static size_t buf_remaining(struct socket_buffer *sbuf)
{
   return sbuf->bufsz - buf_used(sbuf) - 1;
}
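
/* Ring-buffer accounting sketch (illustrative): with bufsz = 8, start = 6 and
 * end = 2 (data wrapping past the end of the ring), buf_used() returns 4 and
 * buf_remaining() returns 8 - 4 - 1 = 3.  One byte is always left unused so
 * that start == end unambiguously means "empty" rather than "full"; 'read'
 * trails between start and end and is what buf_unread() measures. */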

/**
 * netplay_resize_socket_buffer
 *
 * Resize the given socket_buffer's buffer to the requested size.
 */
static bool netplay_resize_socket_buffer(
      struct socket_buffer *sbuf, size_t newsize)
{
   unsigned char *newdata = (unsigned char*)malloc(newsize);
   if (!newdata)
      return false;

   /* Copy in the old data */
   if (sbuf->end < sbuf->start)
   {
      memcpy(newdata,
            sbuf->data + sbuf->start,
            sbuf->bufsz - sbuf->start);
      memcpy(newdata + sbuf->bufsz - sbuf->start,
            sbuf->data,
            sbuf->end);
   }
   else if (sbuf->end > sbuf->start)
      memcpy(newdata,
            sbuf->data + sbuf->start,
            sbuf->end - sbuf->start);

   /* Adjust our read offset */
   if (sbuf->read < sbuf->start)
      sbuf->read += sbuf->bufsz - sbuf->start;
   else
      sbuf->read -= sbuf->start;

   /* Adjust start and end */
   sbuf->end = buf_used(sbuf);
   sbuf->start = 0;

   /* Free the old one and replace it with the new one */
   free(sbuf->data);
   sbuf->data = newdata;
   sbuf->bufsz = newsize;

   return true;
}

/**
 * netplay_send
 *
 * Queue the given data for sending.
 */
bool netplay_send(
      struct socket_buffer *sbuf,
      int sockfd, const void *buf,
      size_t len)
{
   if (buf_remaining(sbuf) < len)
   {
      /* Need to force a blocking send */
      if (!netplay_send_flush(sbuf, sockfd, true))
         return false;
   }

   if (buf_remaining(sbuf) < len)
   {
      /* Can only be that this is simply too big
       * for our buffer, in which case we just
       * need to do a blocking send */
      if (!socket_send_all_blocking(sockfd, buf, len, true))
         return false;
      return true;
   }

   /* Copy it into our buffer */
   if (sbuf->bufsz - sbuf->end < len)
   {
      /* Half at a time */
      size_t chunka = sbuf->bufsz - sbuf->end,
             chunkb = len - chunka;
      memcpy(sbuf->data + sbuf->end, buf, chunka);
      memcpy(sbuf->data, (const unsigned char *)buf + chunka, chunkb);
      sbuf->end = chunkb;
   }
   else
   {
      /* Straight in */
      memcpy(sbuf->data + sbuf->end, buf, len);
      sbuf->end += len;
   }

   return true;
}

/**
 * netplay_send_flush
 *
 * Flush unsent data in the given socket buffer, blocking to do so if
 * requested.
 *
 * Returns false only on socket failures, true otherwise.
 */
bool netplay_send_flush(struct socket_buffer *sbuf, int sockfd, bool block)
{
   ssize_t sent;

   if (buf_used(sbuf) == 0)
      return true;

   if (sbuf->end > sbuf->start)
   {
      /* Usual case: Everything's in order */
      if (block)
      {
         if (!socket_send_all_blocking(
               sockfd, sbuf->data + sbuf->start,
               buf_used(sbuf), true))
            return false;

         sbuf->start = sbuf->end = 0;
      }
      else
      {
         sent = socket_send_all_nonblocking(
               sockfd, sbuf->data + sbuf->start,
               buf_used(sbuf), true);

         if (sent < 0)
            return false;

         sbuf->start += sent;

         if (sbuf->start == sbuf->end)
            sbuf->start = sbuf->end = 0;
      }
   }
   else
   {
      /* Unusual case: the buffered data wraps around the end of the ring */
      if (block)
      {
         if (!socket_send_all_blocking(
               sockfd, sbuf->data + sbuf->start,
               sbuf->bufsz - sbuf->start, true))
            return false;

         sbuf->start = 0;

         return netplay_send_flush(sbuf, sockfd, true);
      }
      else
      {
         sent = socket_send_all_nonblocking(
               sockfd, sbuf->data + sbuf->start,
               sbuf->bufsz - sbuf->start, true);

         if (sent < 0)
            return false;

         sbuf->start += sent;

         if (sbuf->start >= sbuf->bufsz)
         {
            sbuf->start = 0;
            return netplay_send_flush(sbuf, sockfd, false);
         }
      }
   }

   return true;
}
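
/* Usage sketch (illustrative): queue a payload and push it out without
 * blocking, falling back to a blocking flush only when the socket must be
 * fully drained:
 *
 *    if (!netplay_send(&connection->send_packet_buffer, connection->fd,
 *             cmd, sizeof(cmd)) ||
 *        !netplay_send_flush(&connection->send_packet_buffer,
 *             connection->fd, false))
 *       ...treat the connection as failed...
 *
 * netplay_send() itself only blocks when the payload cannot fit in the
 * buffer even after a flush. */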

/**
 * netplay_recv
 *
 * Receive buffered or fresh data.
 *
 * Returns the number of bytes received, which may be short or 0, or -1 on
 * error.
 */
ssize_t netplay_recv(struct socket_buffer *sbuf, int sockfd, void *buf,
      size_t len, bool block)
{
   ssize_t recvd;
   bool error = false;

   /* Receive whatever we can into the buffer */
   if (sbuf->end >= sbuf->start)
   {
      recvd = socket_receive_all_nonblocking(sockfd, &error,
            sbuf->data + sbuf->end, sbuf->bufsz - sbuf->end -
            ((sbuf->start == 0) ? 1 : 0));

      if (recvd < 0 || error)
         return -1;

      sbuf->end += recvd;

      if (sbuf->end >= sbuf->bufsz)
      {
         sbuf->end = 0;
         error = false;
         recvd = socket_receive_all_nonblocking(
               sockfd, &error, sbuf->data, sbuf->start - 1);

         if (recvd < 0 || error)
            return -1;

         sbuf->end += recvd;
      }
   }
   else
   {
      recvd = socket_receive_all_nonblocking(
            sockfd, &error, sbuf->data + sbuf->end,
            sbuf->start - sbuf->end - 1);

      if (recvd < 0 || error)
         return -1;

      sbuf->end += recvd;
   }

   /* Now copy it into the reader */
   if (sbuf->end >= sbuf->read || (sbuf->bufsz - sbuf->read) >= len)
   {
      size_t unread = buf_unread(sbuf);
      if (len <= unread)
      {
         memcpy(buf, sbuf->data + sbuf->read, len);
         sbuf->read += len;
         if (sbuf->read >= sbuf->bufsz)
            sbuf->read = 0;
         recvd = len;
      }
      else
      {
         memcpy(buf, sbuf->data + sbuf->read, unread);
         sbuf->read += unread;
         if (sbuf->read >= sbuf->bufsz)
            sbuf->read = 0;
         recvd = unread;
      }
   }
   else
   {
      /* Our read goes around the edge */
      size_t chunka = sbuf->bufsz - sbuf->read,
             pchunklen = len - chunka,
             chunkb = (pchunklen >= sbuf->end) ? sbuf->end : pchunklen;
      memcpy(buf, sbuf->data + sbuf->read, chunka);
      memcpy((unsigned char *) buf + chunka, sbuf->data, chunkb);
      sbuf->read = chunkb;
      recvd = chunka + chunkb;
   }

   /* Perhaps block for more data */
   if (block)
   {
      sbuf->start = sbuf->read;
      if (recvd < 0 || recvd < (ssize_t) len)
      {
         if (!socket_receive_all_blocking(
               sockfd, (unsigned char *)buf + recvd, len - recvd))
            return -1;
         recvd = len;
      }
   }

   return recvd;
}

/**
 * netplay_recv_reset
 *
 * Reset our recv buffer so that future netplay_recvs
 * will read the same data again.
 */
void netplay_recv_reset(struct socket_buffer *sbuf)
{
   sbuf->read = sbuf->start;
}

/**
 * netplay_recv_flush
 *
 * Flush our recv buffer, so a future netplay_recv_reset will reset to this
 * point.
 */
void netplay_recv_flush(struct socket_buffer *sbuf)
{
   sbuf->start = sbuf->read;
}
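
/* Usage sketch (illustrative of how the RECV() macro used throughout this
 * file combines these helpers): read speculatively, roll back on a short
 * read, and commit once a complete message has been parsed:
 *
 *    recvd = netplay_recv(&connection->recv_packet_buffer, connection->fd,
 *          &cmd, sizeof(cmd), false);
 *    if (recvd >= 0 && recvd < (ssize_t)sizeof(cmd))
 *    {
 *       netplay_recv_reset(&connection->recv_packet_buffer); // re-read later
 *       return true;                                         // need more data
 *    }
 *    if (recvd < 0)
 *       return false;                                        // socket error
 *    ...parse cmd...
 *    netplay_recv_flush(&connection->recv_packet_buffer);    // commit
 */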

static bool netplay_full(netplay_t *netplay, int sockfd)
{
   size_t i;
   settings_t *settings = config_get_ptr();
   unsigned max_connections = settings->uints.netplay_max_connections;
   unsigned total = 0;

   if (max_connections)
   {
      for (i = 0; i < netplay->connections_size; i++)
         if (netplay->connections[i].active) total++;

      if (total >= max_connections)
      {
         /* We send a header to let the client
          * know we are full */
         uint32_t header[6];

         /* The only parameter that we need to set is
          * netplay magic; we set the protocol version
          * parameter too for backwards compatibility */
         memset(header, 0, sizeof(header));
         header[0] = htonl(FULL_MAGIC);
         header[4] = htonl(HIGH_NETPLAY_PROTOCOL_VERSION);

         /* The kernel might close the socket before sending our data.
            This is fine; the header is just a warning for the client. */
         socket_send_all_nonblocking(sockfd, header,
               sizeof(header), true);

         return true;
      }
   }

   return false;
}
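
/* For reference (a reading of the code above, not a separate specification):
 * the "server full" header is six 32-bit words, all zero except header[0],
 * which carries FULL_MAGIC, and header[4], which carries
 * HIGH_NETPLAY_PROTOCOL_VERSION, both in network byte order. A client that
 * receives this header instead of a normal handshake can report that the
 * host is full rather than a generic connection error. */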

/**
 * netplay_cmd_crc
 *
 * Send a CRC command to all active clients.
 */
static bool netplay_cmd_crc(netplay_t *netplay, struct delta_frame *delta)
{
   size_t i;
   uint32_t payload[2];
   bool success = true;

   payload[0] = htonl(delta->frame);
   payload[1] = htonl(delta->crc);

   for (i = 0; i < netplay->connections_size; i++)
   {
      if (netplay->connections[i].active &&
            netplay->connections[i].mode >= NETPLAY_CONNECTION_CONNECTED)
         success = netplay_send_raw_cmd(netplay, &netplay->connections[i],
               NETPLAY_CMD_CRC, payload, sizeof(payload)) && success;
   }

   return success;
}

/**
 * netplay_cmd_request_savestate
 *
 * Send a savestate request command.
 */
static bool netplay_cmd_request_savestate(netplay_t *netplay)
{
   if (netplay->connections_size == 0 ||
         !netplay->connections[0].active ||
         netplay->connections[0].mode < NETPLAY_CONNECTION_CONNECTED)
      return false;
   if (netplay->savestate_request_outstanding)
      return true;

   netplay->savestate_request_outstanding = true;

   return netplay_send_raw_cmd(netplay, &netplay->connections[0],
         NETPLAY_CMD_REQUEST_SAVESTATE, NULL, 0);
}

/**
 * netplay_cmd_stall
 *
 * Send a stall command.
 */
static bool netplay_cmd_stall(netplay_t *netplay,
      struct netplay_connection *connection,
      uint32_t frames)
{
   frames = htonl(frames);
   return netplay_send_raw_cmd(netplay, connection, NETPLAY_CMD_STALL,
         &frames, sizeof(frames));
}
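
/* Note, inferred only from the calls above (not a statement about the wire
 * format beyond what is visible here): every multi-byte payload word handed
 * to netplay_send_raw_cmd in these helpers -- the frame number and CRC for
 * NETPLAY_CMD_CRC, the frame count for NETPLAY_CMD_STALL -- is converted with
 * htonl() first, so receivers are expected to apply ntohl() before use. */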

static void handle_play_spectate(netplay_t *netplay, uint32_t client_num,
      struct netplay_connection *connection, uint32_t cmd, uint32_t cmd_size,
      uint32_t *payload);

#if 0
#define DEBUG_NONDETERMINISTIC_CORES
#endif

/**
 * netplay_update_unread_ptr
 *
 * Update the global unread_ptr and unread_frame_count to correspond to the
 * earliest unread frame count of any connected player.
 */
void netplay_update_unread_ptr(netplay_t *netplay)
{
   if (netplay->is_server && netplay->connected_players <= 1)
   {
      /* Nothing at all to read! */
      netplay->unread_ptr         = netplay->self_ptr;
      netplay->unread_frame_count = netplay->self_frame_count;
   }
   else
   {
      size_t new_unread_ptr           = 0;
      uint32_t new_unread_frame_count = (uint32_t) -1;
      uint32_t client;

      for (client = 0; client < MAX_CLIENTS; client++)
      {
         if (!(netplay->connected_players & (1 << client)))
            continue;
         if ((netplay->connected_slaves & (1 << client)))
            continue;

         if (netplay->read_frame_count[client] < new_unread_frame_count)
         {
            new_unread_ptr         = netplay->read_ptr[client];
            new_unread_frame_count = netplay->read_frame_count[client];
         }
      }

      if (!netplay->is_server &&
            netplay->server_frame_count < new_unread_frame_count)
      {
         new_unread_ptr         = netplay->server_ptr;
         new_unread_frame_count = netplay->server_frame_count;
      }

      if (new_unread_frame_count != (uint32_t) -1)
      {
         netplay->unread_ptr         = new_unread_ptr;
         netplay->unread_frame_count = new_unread_frame_count;
      }
      else
      {
         netplay->unread_ptr         = netplay->self_ptr;
         netplay->unread_frame_count = netplay->self_frame_count;
      }
   }
}
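
/* Worked example (values chosen purely for illustration): with three
 * connected, non-slave clients whose read_frame_count values are 105, 103
 * and 107, the loop above settles on the client at 103, so
 * unread_frame_count becomes 103 and unread_ptr points at that client's
 * read_ptr. On a client, the server's own frame count takes part in the same
 * minimum. */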

/**
 * netplay_device_client_state
 * @netplay  : pointer to netplay object
 * @simframe : frame in which merging is being performed
 * @device   : device being merged
 * @client   : client to find state for
 */
netplay_input_state_t netplay_device_client_state(netplay_t *netplay,
      struct delta_frame *simframe, uint32_t device, uint32_t client)
{
   uint32_t dsize =
      netplay_expected_input_size(netplay, 1 << device);
   netplay_input_state_t simstate =
      netplay_input_state_for(
            &simframe->real_input[device], client,
            dsize, false, true);

   if (!simstate)
   {
      if (netplay->read_frame_count[client] > simframe->frame)
         return NULL;
      simstate = netplay_input_state_for(&simframe->simlated_input[device],
            client, dsize, false, true);
   }

   return simstate;
}

/**
 * netplay_merge_digital
 * @netplay  : pointer to netplay object
 * @resstate : state being resolved
 * @simframe : frame in which merging is being performed
 * @device   : device being merged
 * @clients  : bitmap of clients being merged
 * @digital  : bitmap of digital bits
 */
static void netplay_merge_digital(netplay_t *netplay,
      netplay_input_state_t resstate, struct delta_frame *simframe,
      uint32_t device, uint32_t clients, const uint32_t *digital)
{
   netplay_input_state_t simstate;
   uint32_t word, bit, client;
   uint8_t share_mode = netplay->device_share_modes[device]
      & NETPLAY_SHARE_DIGITAL_BITS;

   /* Make sure all real clients are accounted for */
   for (simstate = simframe->real_input[device];
         simstate; simstate = simstate->next)
   {
      if (!simstate->used || simstate->size != resstate->size)
         continue;
      clients |= 1 << simstate->client_num;
   }

   if (share_mode == NETPLAY_SHARE_DIGITAL_VOTE)
   {
      unsigned i, j;
      /* This just assumes we have no more than
       * three words, will need to be adjusted for new devices */
      struct vote_count votes[3];
      /* Vote mode requires counting all the bits */
      uint32_t client_count = 0;

      for (i = 0; i < 3; i++)
         for (j = 0; j < 32; j++)
            votes[i].votes[j] = 0;

      for (client = 0; client < MAX_CLIENTS; client++)
      {
         if (!(clients & (1 << client)))
            continue;

         simstate = netplay_device_client_state(
               netplay, simframe, device, client);

         if (!simstate)
            continue;
         client_count++;

         for (word = 0; word < resstate->size; word++)
         {
            if (!digital[word])
               continue;
            for (bit = 0; bit < 32; bit++)
            {
               if (!(digital[word] & (1 << bit)))
                  continue;
               if (simstate->data[word] & (1 << bit))
                  votes[word].votes[bit]++;
            }
         }
      }

      /* Now count all the bits */
      client_count /= 2;
      for (word = 0; word < resstate->size; word++)
      {
         for (bit = 0; bit < 32; bit++)
         {
            if (votes[word].votes[bit] > client_count)
               resstate->data[word] |= (1 << bit);
         }
      }
   }
   else /* !VOTE */
   {
      for (client = 0; client < MAX_CLIENTS; client++)
      {
         if (!(clients & (1 << client)))
            continue;
         simstate = netplay_device_client_state(
               netplay, simframe, device, client);

         if (!simstate)
            continue;
         for (word = 0; word < resstate->size; word++)
         {
            uint32_t part;
            if (!digital[word])
               continue;
            part = simstate->data[word];

            if (digital[word] == (uint32_t) -1)
            {
               /* Combine the whole word */
               switch (share_mode)
               {
                  case NETPLAY_SHARE_DIGITAL_XOR:
                     resstate->data[word] ^= part;
                     break;
                  default:
                     resstate->data[word] |= part;
               }
            }
            else /* !whole word */
            {
               for (bit = 0; bit < 32; bit++)
               {
                  if (!(digital[word] & (1 << bit)))
                     continue;
                  switch (share_mode)
                  {
                     case NETPLAY_SHARE_DIGITAL_XOR:
                        resstate->data[word] ^= part & (1 << bit);
                        break;
                     default:
                        resstate->data[word] |= part & (1 << bit);
                  }
               }
            }
         }
      }
   }
}
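
/* Worked example of the vote threshold above (illustrative numbers): with
 * client_count = 3 sharing one device, client_count /= 2 gives 1, so a bit is
 * resolved as pressed only when votes[word].votes[bit] > 1, i.e. when at
 * least two of the three clients press it -- a strict majority. With two
 * clients the threshold is also 1, so both must agree. */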

/**
 * merge_analog_part
 * @netplay  : pointer to netplay object
 * @resstate : state being resolved
 * @simframe : frame in which merging is being performed
 * @device   : device being merged
 * @clients  : bitmap of clients being merged
 * @word     : word to merge
 * @bit      : first bit to merge
 */
static void merge_analog_part(netplay_t *netplay,
      netplay_input_state_t resstate, struct delta_frame *simframe,
      uint32_t device, uint32_t clients, uint32_t word, uint8_t bit)
{
   netplay_input_state_t simstate;
   uint32_t client, client_count = 0;
   uint8_t share_mode = netplay->device_share_modes[device]
      & NETPLAY_SHARE_ANALOG_BITS;
   int32_t value = 0, new_value;

   /* Make sure all real clients are accounted for */
   for (simstate = simframe->real_input[device]; simstate; simstate = simstate->next)
   {
      if (!simstate->used || simstate->size != resstate->size)
         continue;
      clients |= 1 << simstate->client_num;
   }

   for (client = 0; client < MAX_CLIENTS; client++)
   {
      if (!(clients & (1 << client)))
         continue;
      simstate = netplay_device_client_state(
            netplay, simframe, device, client);
      if (!simstate)
         continue;
      client_count++;
      new_value = (int16_t) ((simstate->data[word] >> bit) & 0xFFFF);
      switch (share_mode)
      {
         case NETPLAY_SHARE_ANALOG_AVERAGE:
            value += (int32_t) new_value;
            break;
         default:
            if (abs(new_value) > abs(value) ||
                  (abs(new_value) == abs(value) && new_value > value))
               value = new_value;
      }
   }

   if (share_mode == NETPLAY_SHARE_ANALOG_AVERAGE)
      if (client_count > 0) /* Prevent potential divide by zero */
         value /= client_count;

   resstate->data[word] |= ((uint32_t) (uint16_t) value) << bit;
}
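
/* Layout note drawn from the extraction above and the callers that follow
 * (no new format is being defined here): each analog input word packs two
 * signed 16-bit axis values, one at bit offset 0 and one at bit offset 16,
 * which is why merging reads (int16_t) ((data[word] >> bit) & 0xFFFF) and
 * writes the merged value back shifted left by the same bit offset. */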

/**
 * netplay_merge_analog
 * @netplay  : pointer to netplay object
 * @resstate : state being resolved
 * @simframe : frame in which merging is being performed
 * @device   : device being merged
 * @clients  : bitmap of clients being merged
 * @dtype    : device type
 */
static void netplay_merge_analog(netplay_t *netplay,
      netplay_input_state_t resstate, struct delta_frame *simframe,
      uint32_t device, uint32_t clients, unsigned dtype)
{
   /* Devices with no analog parts */
   if (dtype == RETRO_DEVICE_JOYPAD || dtype == RETRO_DEVICE_KEYBOARD)
      return;

   /* All other devices have at least one analog word */
   merge_analog_part(netplay, resstate, simframe, device, clients, 1, 0);
   merge_analog_part(netplay, resstate, simframe, device, clients, 1, 16);

   /* And the ANALOG device has two (two sticks) */
   if (dtype == RETRO_DEVICE_ANALOG)
   {
      merge_analog_part(netplay, resstate, simframe, device, clients, 2, 0);
      merge_analog_part(netplay, resstate, simframe, device, clients, 2, 16);
   }
}

/**
 * netplay_resolve_input
 * @netplay : pointer to netplay object
 * @sim_ptr : frame pointer for which to resolve input
 * @resim   : are we resimulating, or simulating this frame for the
 *            first time?
 *
 * "Simulate" input by assuming it hasn't changed since the last read input.
 * Returns true if the resolved input changed from the last time it was
 * resolved.
 */
bool netplay_resolve_input(netplay_t *netplay, size_t sim_ptr, bool resim)
{
   size_t prev;
   uint32_t device;
   uint32_t clients, client, client_count;
   netplay_input_state_t simstate, client_state = NULL,
                         resstate, oldresstate, pstate;
   bool ret                     = false;
   struct delta_frame *pframe   = NULL;
   struct delta_frame *simframe = &netplay->buffer[sim_ptr];

   for (device = 0; device < MAX_INPUT_DEVICES; device++)
   {
      unsigned dtype = netplay->config_devices[device] & RETRO_DEVICE_MASK;
      uint32_t dsize = netplay_expected_input_size(netplay, 1 << device);
      clients        = netplay->device_clients[device];
      client_count   = 0;

      /* Make sure all real clients are accounted for */
      for (simstate = simframe->real_input[device]; simstate; simstate = simstate->next)
      {
         if (!simstate->used || simstate->size != dsize)
            continue;
         clients |= 1 << simstate->client_num;
      }

      for (client = 0; client < MAX_CLIENTS; client++)
      {
         if (!(clients & (1 << client)))
            continue;

         /* Resolve this client-device */
         simstate = netplay_input_state_for(
               &simframe->real_input[device], client, dsize, false, true);
         if (!simstate)
         {
            /* Don't already have this input, so must
             * simulate if we're supposed to have it at all */
            if (netplay->read_frame_count[client] > simframe->frame)
               continue;
            simstate = netplay_input_state_for(
                  &simframe->simlated_input[device], client, dsize, false, false);
            if (!simstate)
               continue;

            prev   = PREV_PTR(netplay->read_ptr[client]);
            pframe = &netplay->buffer[prev];
            pstate = netplay_input_state_for(
                  &pframe->real_input[device], client, dsize, false, true);
            if (!pstate)
               continue;

            if (resim && (dtype == RETRO_DEVICE_JOYPAD || dtype == RETRO_DEVICE_ANALOG))
            {
               /* In resimulation mode, we only copy the buttons. The reason for this
                * is nonobvious:
                *
                * If we resimulated nothing, then the /duration/ with which any input
                * was pressed would be approximately correct, since the original
                * simulation came in as the input came in, but the /number of times/
                * the input was pressed would be wrong, as there would be an
                * advancing wavefront of real data overtaking the simulated data
                * (which is really just real data offset by some frames).
                *
                * That's acceptable for arrows in most situations, since the amount
                * you move is tied to the duration, but unacceptable for buttons,
                * which will seem to jerkily be pressed numerous times with those
                * wavefronts.
                */
               const uint32_t keep =
                  (UINT32_C(1) << RETRO_DEVICE_ID_JOYPAD_UP) |
                  (UINT32_C(1) << RETRO_DEVICE_ID_JOYPAD_DOWN) |
                  (UINT32_C(1) << RETRO_DEVICE_ID_JOYPAD_LEFT) |
                  (UINT32_C(1) << RETRO_DEVICE_ID_JOYPAD_RIGHT);
               simstate->data[0] &= keep;
               simstate->data[0] |= pstate->data[0] & ~keep;
            }
            else
               memcpy(simstate->data, pstate->data,
                     dsize * sizeof(uint32_t));
         }

         client_state = simstate;
         client_count++;
      }

      /* The frontend always uses the first resolved input,
       * so make sure it's right */
      while (simframe->resolved_input[device]
            && (simframe->resolved_input[device]->size != dsize
               || simframe->resolved_input[device]->client_num != 0))
      {
         /* The default resolved input is of the wrong size! */
         netplay_input_state_t nextistate =
            simframe->resolved_input[device]->next;
         free(simframe->resolved_input[device]);
         simframe->resolved_input[device] = nextistate;
      }

      /* Now we copy the state, whether real or simulated,
       * out into the resolved state */
      resstate = netplay_input_state_for(
            &simframe->resolved_input[device], 0,
            dsize, false, false);
      if (!resstate)
         continue;

      if (client_count == 1)
      {
         /* Trivial in the common 1-client case */
         if (memcmp(resstate->data, client_state->data,
                  dsize * sizeof(uint32_t)))
            ret = true;
         memcpy(resstate->data, client_state->data,
               dsize * sizeof(uint32_t));
      }
      else if (client_count == 0)
      {
         uint32_t word;
         for (word = 0; word < dsize; word++)
         {
            if (resstate->data[word])
               ret = true;
            resstate->data[word] = 0;
         }
      }
      else
      {
         /* Merge them */
         /* Most devices have all the digital parts in the first word. */
         static const uint32_t digital_common[3]   = {~0u, 0u, 0u};
         static const uint32_t digital_keyboard[5] = {~0u, ~0u, ~0u, ~0u, ~0u};
         const uint32_t *digital = digital_common;

         if (dtype == RETRO_DEVICE_KEYBOARD)
            digital = digital_keyboard;

         oldresstate = netplay_input_state_for(
               &simframe->resolved_input[device], 1, dsize, false, false);

         if (!oldresstate)
            continue;
         memcpy(oldresstate->data, resstate->data, dsize * sizeof(uint32_t));
         memset(resstate->data, 0, dsize * sizeof(uint32_t));

         netplay_merge_digital(netplay, resstate, simframe,
               device, clients, digital);
         netplay_merge_analog(netplay, resstate, simframe,
               device, clients, dtype);

         if (memcmp(resstate->data, oldresstate->data,
                  dsize * sizeof(uint32_t)))
            ret = true;
      }
   }

   return ret;
}
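
/* Example of the resimulation mask above (illustration only): with the keep
 * mask covering the four d-pad bits, a simulated joypad word is rebuilt as
 * (simulated & keep) | (previous_real & ~keep), so held directions keep their
 * simulated duration while the other buttons are frozen at the last real
 * state instead of appearing to re-fire on every rewound frame. */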

static void netplay_handle_frame_hash(netplay_t *netplay,
      struct delta_frame *delta)
{
   if (netplay->is_server)
   {
      if (netplay->check_frames &&
            delta->frame % abs(netplay->check_frames) == 0)
      {
         if (netplay->state_size)
            delta->crc = netplay_delta_frame_crc(netplay, delta);
         else
            delta->crc = 0;
         netplay_cmd_crc(netplay, delta);
      }
   }
   else if (delta->crc && netplay->crcs_valid)
   {
      /* We have a remote CRC, so check it */
      uint32_t local_crc = 0;
      if (netplay->state_size)
         local_crc = netplay_delta_frame_crc(netplay, delta);

      if (local_crc != delta->crc)
      {
         /* If the very first check frame is wrong,
          * they probably just don't work */
         if (!netplay->crc_validity_checked)
            netplay->crcs_valid = false;
         else if (netplay->crcs_valid)
         {
            /* Fix this! */
            if (netplay->check_frames < 0)
            {
               /* Just report */
               RARCH_ERR("[Netplay] Netplay CRCs mismatch!\n");
            }
            else
               netplay_cmd_request_savestate(netplay);
         }
      }
      else if (!netplay->crc_validity_checked)
         netplay->crc_validity_checked = true;
   }
}

/**
 * handle_connection
 * @netplay : pointer to netplay object
 * @error   : value of pointer is set to true if a critical error occurs
 *
 * Accepts a new client connection.
 *
 * Returns: fd of a new connection or -1 if there was no new connection.
 */
static int handle_connection(netplay_t *netplay, bool *error)
{
   fd_set fds;
   int result;
   struct timeval tv = {0};
   int new_fd        = -1;

   FD_ZERO(&fds);
   FD_SET(netplay->listen_fd, &fds);

   /* Check for a connection */
   result = socket_select(netplay->listen_fd + 1, &fds, NULL, NULL, &tv);

   if (result < 0)
      goto critical_failure;

   if (result)
   {
      struct sockaddr_storage their_addr;
      socklen_t addr_size = sizeof(their_addr);

      new_fd = accept(netplay->listen_fd,
            (struct sockaddr*) &their_addr, &addr_size);

      if (new_fd >= 0)
      {
         /* Set the socket nonblocking */
         if (!socket_nonblock(new_fd))
            goto critical_failure;

         SET_TCP_NODELAY(new_fd)
         SET_FD_CLOEXEC(new_fd)
      }
      else
         goto critical_failure;
   }

   return new_fd;

critical_failure:
   if (new_fd >= 0)
      socket_close(new_fd);

   *error = true;

   return -1;
}

static bool netplay_tunnel_connect(int fd, const struct addrinfo *addr)
{
   int result;

   SET_TCP_NODELAY(fd)
   SET_FD_CLOEXEC(fd)

   if (!socket_nonblock(fd))
      return false;

   result = socket_connect(fd, (void*) addr, false);
   if (result && !isagain(result))
#if !defined(_WIN32) && defined(EINPROGRESS)
      return result < 0 && errno == EINPROGRESS;
#else
      return false;
#endif

   return true;
}
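
/* Note on the nonblocking connect above (a reading of the code, not new
 * behaviour): because the socket is switched to nonblocking before
 * socket_connect(), a connect that is merely still in progress is not a
 * failure. On POSIX-style platforms that case is reported as -1 with
 * errno == EINPROGRESS, which is why it is treated as "pending"; the caller
 * later uses select() on the write set to learn when the connection has
 * actually completed. */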

/**
 * handle_mitm_connection
 * @netplay : pointer to netplay object
 * @error   : value of pointer is set to true if a critical error occurs
 *
 * This function does three things:
 * 1: Check if any pending tunnel connection is ready.
 * 2: Check the tunnel server to see if we need to link to a client.
 * 3: Reply to ping requests from the tunnel server.
 *
 * For performance reasons, only create and complete one connection per call.
 *
 * Returns: fd of a new completed connection or -1 if no connection was completed.
 */
static int handle_mitm_connection(netplay_t *netplay, bool *error)
{
   size_t i;
   void *recv_buf;
   size_t recv_len;
   ssize_t recvd;
   int new_fd         = -1;
   retro_time_t ctime = cpu_features_get_time_usec();

   /* We want to call select individually in order to handle errors on a
      per-connection basis. */
   for (i = 0; i < NETPLAY_MITM_MAX_PENDING; i++)
   {
      int fd = netplay->mitm_pending->fds[i];
      if (fd >= 0)
      {
         fd_set wfd, efd;
         struct timeval tv = {0};

         FD_ZERO(&wfd);
         FD_ZERO(&efd);
         FD_SET(fd, &wfd);
         FD_SET(fd, &efd);

         if (socket_select(fd + 1, NULL, &wfd, &efd, &tv) < 0 ||
               FD_ISSET(fd, &efd))
         {
            /* Error */
            RARCH_ERR("[Netplay] Tunnel link connection failed.\n");
         }
         else if (FD_ISSET(fd, &wfd))
         {
            /* Connection is ready.
               Send the linking id. */
            mitm_id_t *lid = &netplay->mitm_pending->ids[i];
            if (socket_send_all_nonblocking(fd, lid, sizeof(*lid), true) == sizeof(*lid))
            {
               new_fd = fd;
               RARCH_LOG("[Netplay] Tunnel link connection completed.\n");
            }
            else
            {
               /* We couldn't send our id in one call. Assume error. */
               socket_close(fd);
               RARCH_ERR("[Netplay] Tunnel link connection failed after handshake.\n");
            }
            netplay->mitm_pending->fds[i] = -1;
            break;
         }
         else
         {
            /* Check whether the connection has timed out. */
            retro_time_t timeout = netplay->mitm_pending->timeouts[i];
            if (ctime < timeout)
               continue;
            RARCH_ERR("[Netplay] Tunnel link connection timeout.\n");
         }

         socket_close(fd);
         netplay->mitm_pending->fds[i] = -1;
      }
   }

   recv_buf = (void*) ((size_t) &netplay->mitm_pending->id_buf +
         netplay->mitm_pending->id_recvd);
   if (netplay->mitm_pending->id_recvd < sizeof(netplay->mitm_pending->id_buf.magic))
      recv_len = sizeof(netplay->mitm_pending->id_buf.magic) -
         netplay->mitm_pending->id_recvd;
   else
      recv_len = sizeof(netplay->mitm_pending->id_buf) -
         netplay->mitm_pending->id_recvd;

   recvd = socket_receive_all_nonblocking(netplay->listen_fd,
         error, recv_buf, recv_len);

   if (recvd < 0 || (size_t)recvd > recv_len)
   {
      RARCH_ERR("[Netplay] Tunnel server error.\n");
      goto critical_failure;
   }

   netplay->mitm_pending->id_recvd += recvd;
   if (netplay->mitm_pending->id_recvd >= sizeof(netplay->mitm_pending->id_buf.magic))
   {
      switch (ntohl(netplay->mitm_pending->id_buf.magic))
      {
         case MITM_LINK_MAGIC:
         {
            if (netplay->mitm_pending->id_recvd == sizeof(netplay->mitm_pending->id_buf))
            {
               netplay->mitm_pending->id_recvd = 0;

               /* Find a free spot to allocate this connection. */
               for (i = 0; i < NETPLAY_MITM_MAX_PENDING; i++)
                  if (netplay->mitm_pending->fds[i] < 0)
                     break;
               if (i < NETPLAY_MITM_MAX_PENDING)
               {
                  int fd = socket(
                     netplay->mitm_pending->addr->ai_family,
                     netplay->mitm_pending->addr->ai_socktype,
                     netplay->mitm_pending->addr->ai_protocol
                  );
                  if (fd >= 0)
                  {
                     if (netplay_tunnel_connect(fd, netplay->mitm_pending->addr))
                     {
                        netplay->mitm_pending->fds[i] = fd;
                        memcpy(&netplay->mitm_pending->ids[i],
                              &netplay->mitm_pending->id_buf,
                              sizeof(*netplay->mitm_pending->ids));
                        /* 30 seconds */
                        netplay->mitm_pending->timeouts[i] = ctime + 30000000;
                        RARCH_LOG("[Netplay] Queued tunnel link connection.\n");
                     }
                     else
                     {
                        socket_close(fd);
                        RARCH_ERR("[Netplay] Failed to connect to tunnel server.\n");
                     }
                  }
                  else
                  {
                     RARCH_ERR("[Netplay] Failed to create socket for tunnel link connection.\n");
                  }
               }
               else
               {
                  RARCH_WARN("[Netplay] Cannot create more tunnel link connections.\n");
               }
            }
            break;
         }
         case MITM_PING_MAGIC:
         {
            /* The tunnel server asked us to reply to a ping request. */
            void *ping = &netplay->mitm_pending->id_buf.magic;
            size_t len = sizeof(netplay->mitm_pending->id_buf.magic);

            netplay->mitm_pending->id_recvd = 0;

            if (socket_send_all_nonblocking(netplay->listen_fd,
                  ping, len, true) != len)
            {
               /* We couldn't send our ping reply in one call. Assume error. */
               RARCH_ERR("[Netplay] Tunnel ping reply failed.\n");
               goto critical_failure;
            }

            break;
         }
         default:
            RARCH_ERR("[Netplay] Received unknown magic from tunnel server.\n");
            goto critical_failure;
      }
   }

   return new_fd;

critical_failure:
   if (new_fd >= 0)
      socket_close(new_fd);

   *error = true;

   return -1;
}
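
/* Reading of the partial-read handling above (no new protocol detail is
 * introduced): id_recvd persists across calls, so the 32-bit magic can arrive
 * on one call and, for MITM_LINK_MAGIC, the rest of id_buf on later calls;
 * only once id_recvd equals sizeof(id_buf) is the link acted on and the
 * counter reset. A ping only ever needs the magic itself, so it is answered
 * and the counter reset as soon as those bytes are in. */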

/**
 * netplay_sync_pre_frame
 * @netplay    : pointer to netplay object
 * @disconnect : disconnect netplay
 *
 * Pre-frame for Netplay synchronization.
 */
bool netplay_sync_pre_frame(netplay_t *netplay, bool *disconnect)
|
2021-04-08 18:09:02 +02:00
|
|
|
{
|
|
|
|
retro_ctx_serialize_info_t serial_info;
|
|
|
|
|
|
|
|
if (netplay_delta_frame_ready(netplay,
|
|
|
|
&netplay->buffer[netplay->run_ptr], netplay->run_frame_count))
|
|
|
|
{
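/* Capture a savestate for this frame in its delta buffer. If remote input
 * later turns out to differ from what we predicted, netplay_sync_post_frame()
 * rewinds to one of these states and replays with the real input. */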
|
|
|
|
serial_info.data_const = NULL;
|
|
|
|
serial_info.data = netplay->buffer[netplay->run_ptr].state;
|
|
|
|
serial_info.size = netplay->state_size;
|
|
|
|
|
|
|
|
memset(serial_info.data, 0, serial_info.size);
|
|
|
|
if ((netplay->quirks & NETPLAY_QUIRK_INITIALIZATION)
|
|
|
|
|| netplay->run_frame_count == 0)
|
|
|
|
{
|
|
|
|
/* Don't serialize until it's safe */
|
|
|
|
}
|
|
|
|
else if (!(netplay->quirks & NETPLAY_QUIRK_NO_SAVESTATES)
|
|
|
|
&& core_serialize(&serial_info))
|
|
|
|
{
|
|
|
|
if (netplay->force_send_savestate && !netplay->stall
|
|
|
|
&& !netplay->remote_paused)
|
|
|
|
{
|
|
|
|
/* Bring our running frame and input frames into
|
|
|
|
* parity so we don't send old info. */
|
|
|
|
if (netplay->run_ptr != netplay->self_ptr)
|
|
|
|
{
|
|
|
|
memcpy(netplay->buffer[netplay->self_ptr].state,
|
|
|
|
netplay->buffer[netplay->run_ptr].state,
|
|
|
|
netplay->state_size);
|
|
|
|
netplay->run_ptr = netplay->self_ptr;
|
|
|
|
netplay->run_frame_count = netplay->self_frame_count;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Send this along to the other side */
|
|
|
|
serial_info.data_const = netplay->buffer[netplay->run_ptr].state;
|
|
|
|
netplay_load_savestate(netplay, &serial_info, false);
|
|
|
|
netplay->force_send_savestate = false;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
/* If the core can't serialize properly, we must stall for the
|
|
|
|
* remote input on EVERY frame, because we can't recover */
|
|
|
|
netplay->quirks |= NETPLAY_QUIRK_NO_SAVESTATES;
|
|
|
|
netplay->stateless_mode = true;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* If we can't transmit savestates, we must stall
|
|
|
|
* until the client is ready. */
|
|
|
|
if (netplay->run_frame_count > 0 &&
|
|
|
|
(netplay->quirks & (NETPLAY_QUIRK_NO_SAVESTATES|NETPLAY_QUIRK_NO_TRANSMISSION)) &&
|
|
|
|
(netplay->connections_size == 0 || !netplay->connections[0].active ||
|
|
|
|
netplay->connections[0].mode < NETPLAY_CONNECTION_CONNECTED))
|
|
|
|
netplay->stall = NETPLAY_STALL_NO_CONNECTION;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (netplay->is_server)
|
|
|
|
{
|
2021-12-19 12:58:01 -03:00
|
|
|
int new_fd = -1;
|
|
|
|
bool server_error = false;
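/* As the server, check for a pending incoming connection: through the
 * tunnel (MITM) server while a link is pending, otherwise directly from
 * our own listen socket. */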
|
2021-04-08 18:09:02 +02:00
|
|
|
|
2021-12-19 12:58:01 -03:00
|
|
|
if (netplay->mitm_pending)
|
|
|
|
new_fd = handle_mitm_connection(netplay, &server_error);
|
|
|
|
else
|
|
|
|
new_fd = handle_connection(netplay, &server_error);
|
2021-04-08 18:09:02 +02:00
|
|
|
|
2021-12-19 12:58:01 -03:00
|
|
|
if (server_error)
|
|
|
|
{
|
|
|
|
*disconnect = true;
|
|
|
|
}
|
|
|
|
else if (new_fd >= 0)
|
|
|
|
{
|
|
|
|
struct netplay_connection *connection;
|
|
|
|
size_t connection_num;
|
2021-04-08 18:09:02 +02:00
|
|
|
|
2021-11-05 18:52:56 +01:00
|
|
|
if (netplay_full(netplay, new_fd))
|
|
|
|
{
|
|
|
|
/* Not accepting any more connections. */
|
|
|
|
socket_close(new_fd);
|
|
|
|
goto process;
|
|
|
|
}
|
|
|
|
|
2021-04-08 18:09:02 +02:00
|
|
|
/* Allocate a connection */
|
|
|
|
for (connection_num = 0; connection_num < netplay->connections_size; connection_num++)
|
|
|
|
if (!netplay->connections[connection_num].active &&
|
|
|
|
netplay->connections[connection_num].mode != NETPLAY_CONNECTION_DELAYED_DISCONNECT)
|
|
|
|
break;
|
|
|
|
if (connection_num == netplay->connections_size)
|
|
|
|
{
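/* No free slot was found: grow the connection array, starting with a
 * single entry and doubling its size afterwards. A failed allocation
 * simply drops the new peer and keeps netplay running. */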
|
|
|
|
if (connection_num == 0)
|
|
|
|
{
|
|
|
|
netplay->connections = (struct netplay_connection*)
|
|
|
|
malloc(sizeof(struct netplay_connection));
|
|
|
|
|
|
|
|
if (!netplay->connections)
|
|
|
|
{
|
|
|
|
socket_close(new_fd);
|
|
|
|
goto process;
|
|
|
|
}
|
|
|
|
netplay->connections_size = 1;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
size_t new_connections_size = netplay->connections_size * 2;
|
|
|
|
struct netplay_connection
|
|
|
|
*new_connections = (struct netplay_connection*)
|
|
|
|
realloc(netplay->connections,
|
|
|
|
new_connections_size*sizeof(struct netplay_connection));
|
|
|
|
|
|
|
|
if (!new_connections)
|
|
|
|
{
|
|
|
|
socket_close(new_fd);
|
|
|
|
goto process;
|
|
|
|
}
|
|
|
|
|
|
|
|
memset(new_connections + netplay->connections_size, 0,
|
|
|
|
netplay->connections_size * sizeof(struct netplay_connection));
|
|
|
|
netplay->connections = new_connections;
|
|
|
|
netplay->connections_size = new_connections_size;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
connection = &netplay->connections[connection_num];
|
|
|
|
|
|
|
|
/* Set it up */
|
|
|
|
memset(connection, 0, sizeof(*connection));
|
|
|
|
connection->active = true;
|
|
|
|
connection->fd = new_fd;
|
|
|
|
connection->mode = NETPLAY_CONNECTION_INIT;
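/* The connection starts in INIT mode; under the protocol-fallback scheme
 * the server now waits for this client's header before settling on the
 * protocol this connection will use. */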
|
|
|
|
|
|
|
|
if (!netplay_init_socket_buffer(&connection->send_packet_buffer,
|
|
|
|
netplay->packet_buffer_size) ||
|
|
|
|
!netplay_init_socket_buffer(&connection->recv_packet_buffer,
|
|
|
|
netplay->packet_buffer_size))
|
|
|
|
{
|
2021-12-19 12:58:01 -03:00
|
|
|
netplay_deinit_socket_buffer(&connection->send_packet_buffer);
|
|
|
|
netplay_deinit_socket_buffer(&connection->recv_packet_buffer);
|
2021-04-08 18:09:02 +02:00
|
|
|
connection->active = false;
|
|
|
|
socket_close(new_fd);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
process:
|
|
|
|
netplay->can_poll = true;
|
|
|
|
input_poll_net();
|
|
|
|
|
|
|
|
return (netplay->stall != NETPLAY_STALL_NO_CONNECTION);
|
|
|
|
}
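/* Illustrative only: a minimal sketch of how these two hooks are meant to
 * bracket the core for one frame, assuming a simplified frontend loop. The
 * real RetroArch caller also handles stalling, pausing, spectating and input
 * polling, so treat the names and control flow below as an approximation,
 * not the actual call site. */
#if 0
static void run_one_netplay_frame(netplay_t *netplay)
{
   bool disconnect = false;

   /* Pre-frame: capture a savestate for this frame, accept pending
    * connections (directly or through the tunnel server) and report
    * whether we have a usable connection to run against. */
   bool can_run = netplay_sync_pre_frame(netplay, &disconnect);

   if (disconnect)
      return; /* hypothetical error path: the caller would hang up here */

   if (can_run)
      core_run(); /* run one emulated frame */

   /* Post-frame: advance frame counters, replay recorded input where our
    * predictions were wrong, and decide whether to catch up or ask peers
    * ahead of us to stall. */
   netplay_sync_post_frame(netplay, !can_run);
}
#endif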
|
|
|
|
|
|
|
|
/**
|
|
|
|
* netplay_sync_post_frame
|
|
|
|
* @netplay : pointer to netplay object
|
|
|
|
*
|
|
|
|
* Post-frame for Netplay synchronization.
|
|
|
|
* We check whether we have new remote input and, if so, replay from the recorded input.
|
|
|
|
*/
|
|
|
|
void netplay_sync_post_frame(netplay_t *netplay, bool stalled)
|
|
|
|
{
|
|
|
|
uint32_t lo_frame_count, hi_frame_count;
|
|
|
|
|
|
|
|
/* Unless we're stalling, we've just finished running a frame */
|
|
|
|
if (!stalled)
|
|
|
|
{
|
|
|
|
netplay->run_ptr = NEXT_PTR(netplay->run_ptr);
|
|
|
|
netplay->run_frame_count++;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* We've finished an input frame even if we're stalling */
|
|
|
|
if ((!stalled || netplay->stall == NETPLAY_STALL_INPUT_LATENCY) &&
|
|
|
|
netplay->self_frame_count <
|
|
|
|
netplay->run_frame_count + netplay->input_latency_frames)
|
|
|
|
{
|
|
|
|
netplay->self_ptr = NEXT_PTR(netplay->self_ptr);
|
|
|
|
netplay->self_frame_count++;
|
|
|
|
}
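/* self_frame_count is allowed to run ahead of run_frame_count by up to
 * input_latency_frames; that gap is what implements the configured input
 * latency. */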
|
|
|
|
|
|
|
|
/* Only relevant if we're connected and not in a desynching operation */
|
|
|
|
if ((netplay->is_server && (netplay->connected_players<=1)) ||
|
|
|
|
(netplay->self_mode < NETPLAY_CONNECTION_CONNECTED) ||
|
|
|
|
(netplay->desync))
|
|
|
|
{
|
|
|
|
netplay->other_frame_count = netplay->self_frame_count;
|
|
|
|
netplay->other_ptr = netplay->self_ptr;
|
|
|
|
|
|
|
|
/* FIXME: Duplication */
|
|
|
|
if (netplay->catch_up)
|
|
|
|
{
|
|
|
|
netplay->catch_up = false;
|
2021-09-30 21:10:12 +02:00
|
|
|
input_state_get_ptr()->nonblocking_flag = false;
|
2021-04-08 18:09:02 +02:00
|
|
|
driver_set_nonblock_state();
|
|
|
|
}
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Reset if it was requested */
|
|
|
|
if (netplay->force_reset)
|
|
|
|
{
|
|
|
|
core_reset();
|
|
|
|
netplay->force_reset = false;
|
|
|
|
}
|
|
|
|
|
|
|
|
netplay->replay_ptr = netplay->other_ptr;
|
|
|
|
netplay->replay_frame_count = netplay->other_frame_count;
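/* 'other' tracks the last frame we will never need to replay again (its
 * input has been confirmed); replaying always resumes from here. */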
|
|
|
|
|
|
|
|
#ifndef DEBUG_NONDETERMINISTIC_CORES
|
|
|
|
if (!netplay->force_rewind)
|
|
|
|
{
|
|
|
|
bool cont = true;
|
|
|
|
|
|
|
|
/* Skip ahead if we predicted correctly.
|
|
|
|
* Skip until our simulation failed. */
|
|
|
|
while (netplay->other_frame_count < netplay->unread_frame_count &&
|
|
|
|
netplay->other_frame_count < netplay->run_frame_count)
|
|
|
|
{
|
|
|
|
struct delta_frame *ptr = &netplay->buffer[netplay->other_ptr];
|
|
|
|
|
|
|
|
/* If resolving the input changes it, we used bad input */
|
|
|
|
if (netplay_resolve_input(netplay, netplay->other_ptr, true))
|
|
|
|
{
|
|
|
|
cont = false;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
netplay_handle_frame_hash(netplay, ptr);
|
|
|
|
netplay->other_ptr = NEXT_PTR(netplay->other_ptr);
|
|
|
|
netplay->other_frame_count++;
|
|
|
|
}
|
|
|
|
netplay->replay_ptr = netplay->other_ptr;
|
|
|
|
netplay->replay_frame_count = netplay->other_frame_count;
|
|
|
|
|
|
|
|
if (cont)
|
|
|
|
{
|
|
|
|
while (netplay->replay_frame_count < netplay->run_frame_count)
|
|
|
|
{
|
|
|
|
if (netplay_resolve_input(netplay, netplay->replay_ptr, true))
|
|
|
|
break;
|
|
|
|
netplay->replay_ptr = NEXT_PTR(netplay->replay_ptr);
|
|
|
|
netplay->replay_frame_count++;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
|
|
|
/* Now replay the real input if we've gotten ahead of it */
|
|
|
|
if (netplay->force_rewind ||
|
|
|
|
netplay->replay_frame_count < netplay->run_frame_count)
|
|
|
|
{
|
|
|
|
retro_ctx_serialize_info_t serial_info;
|
|
|
|
|
|
|
|
/* Replay frames. */
|
|
|
|
netplay->is_replay = true;
|
|
|
|
|
|
|
|
/* If we have a keyboard device, we replay the previous frame's input
|
|
|
|
* just to assert that the keydown/keyup events work if the core
|
|
|
|
* translates them in that way */
|
|
|
|
if (netplay->have_updown_device)
|
|
|
|
{
|
|
|
|
netplay->replay_ptr = PREV_PTR(netplay->replay_ptr);
|
|
|
|
netplay->replay_frame_count--;
|
|
|
|
#ifdef HAVE_THREADS
|
|
|
|
autosave_lock();
|
|
|
|
#endif
|
|
|
|
core_run();
|
|
|
|
#ifdef HAVE_THREADS
|
|
|
|
autosave_unlock();
|
|
|
|
#endif
|
|
|
|
netplay->replay_ptr = NEXT_PTR(netplay->replay_ptr);
|
|
|
|
netplay->replay_frame_count++;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (netplay->quirks & NETPLAY_QUIRK_INITIALIZATION)
|
|
|
|
/* Make sure we're initialized before we start loading things */
|
|
|
|
netplay_wait_and_init_serialization(netplay);
|
|
|
|
|
|
|
|
serial_info.data = NULL;
|
|
|
|
serial_info.data_const = netplay->buffer[netplay->replay_ptr].state;
|
|
|
|
serial_info.size = netplay->state_size;
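/* Roll back to the state recorded at the replay point, then re-run the
 * frames below with the input we now know to be correct. */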
|
|
|
|
|
|
|
|
if (!core_unserialize(&serial_info))
|
|
|
|
{
|
2021-12-19 12:58:01 -03:00
|
|
|
RARCH_ERR("[Netplay] Netplay savestate loading failed: Prepare for desync!\n");
|
2021-04-08 18:09:02 +02:00
|
|
|
}
|
|
|
|
|
|
|
|
while (netplay->replay_frame_count < netplay->run_frame_count)
|
|
|
|
{
|
|
|
|
retro_time_t start, tm;
|
|
|
|
struct delta_frame *ptr = &netplay->buffer[netplay->replay_ptr];
|
|
|
|
|
|
|
|
serial_info.data = ptr->state;
|
|
|
|
serial_info.size = netplay->state_size;
|
|
|
|
serial_info.data_const = NULL;
|
|
|
|
|
|
|
|
start = cpu_features_get_time_usec();
|
|
|
|
|
|
|
|
/* Remember the current state */
|
|
|
|
memset(serial_info.data, 0, serial_info.size);
|
|
|
|
core_serialize(&serial_info);
|
|
|
|
if (netplay->replay_frame_count < netplay->unread_frame_count)
|
|
|
|
netplay_handle_frame_hash(netplay, ptr);
|
|
|
|
|
|
|
|
/* Re-simulate this frame's input */
|
|
|
|
netplay_resolve_input(netplay, netplay->replay_ptr, true);
|
|
|
|
|
|
|
|
#ifdef HAVE_THREADS
|
|
|
|
autosave_lock();
|
|
|
|
#endif
|
|
|
|
core_run();
|
|
|
|
#ifdef HAVE_THREADS
|
|
|
|
autosave_unlock();
|
|
|
|
#endif
|
|
|
|
netplay->replay_ptr = NEXT_PTR(netplay->replay_ptr);
|
|
|
|
netplay->replay_frame_count++;
|
|
|
|
|
|
|
|
#ifdef DEBUG_NONDETERMINISTIC_CORES
|
|
|
|
if (ptr->have_remote && netplay_delta_frame_ready(netplay, &netplay->buffer[netplay->replay_ptr], netplay->replay_frame_count))
|
|
|
|
{
|
2021-04-08 18:16:24 +02:00
|
|
|
RARCH_LOG("PRE %u: %X\n", netplay->replay_frame_count-1, netplay->state_size ? netplay_delta_frame_crc(netplay, ptr) : 0);
|
2021-04-08 18:09:02 +02:00
|
|
|
if (netplay->is_server)
|
|
|
|
RARCH_LOG("INP %X %X\n", ptr->real_input_state[0], ptr->self_state[0]);
|
|
|
|
else
|
|
|
|
RARCH_LOG("INP %X %X\n", ptr->self_state[0], ptr->real_input_state[0]);
|
|
|
|
ptr = &netplay->buffer[netplay->replay_ptr];
|
|
|
|
serial_info.data = ptr->state;
|
|
|
|
memset(serial_info.data, 0, serial_info.size);
|
|
|
|
core_serialize(&serial_info);
|
2021-04-08 18:16:24 +02:00
|
|
|
RARCH_LOG("POST %u: %X\n", netplay->replay_frame_count-1, netplay->state_size ? netplay_delta_frame_crc(netplay, ptr) : 0);
|
2021-04-08 18:09:02 +02:00
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
|
|
|
/* Get our time window */
|
|
|
|
tm = cpu_features_get_time_usec() - start;
|
|
|
|
netplay->frame_run_time_sum -= netplay->frame_run_time[netplay->frame_run_time_ptr];
|
|
|
|
netplay->frame_run_time[netplay->frame_run_time_ptr] = tm;
|
|
|
|
netplay->frame_run_time_sum += tm;
|
|
|
|
netplay->frame_run_time_ptr++;
|
|
|
|
if (netplay->frame_run_time_ptr >= NETPLAY_FRAME_RUN_TIME_WINDOW)
|
|
|
|
netplay->frame_run_time_ptr = 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Average our time */
|
|
|
|
netplay->frame_run_time_avg = netplay->frame_run_time_sum / NETPLAY_FRAME_RUN_TIME_WINDOW;
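/* frame_run_time_avg is a rolling average (over NETPLAY_FRAME_RUN_TIME_WINDOW
 * frames) of how long the core takes to run one frame. */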
|
|
|
|
|
|
|
|
if (netplay->unread_frame_count < netplay->run_frame_count)
|
|
|
|
{
|
|
|
|
netplay->other_ptr = netplay->unread_ptr;
|
|
|
|
netplay->other_frame_count = netplay->unread_frame_count;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
netplay->other_ptr = netplay->run_ptr;
|
|
|
|
netplay->other_frame_count = netplay->run_frame_count;
|
|
|
|
}
|
|
|
|
netplay->is_replay = false;
|
|
|
|
netplay->force_rewind = false;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (netplay->is_server)
|
|
|
|
{
|
|
|
|
uint32_t client;
|
|
|
|
|
|
|
|
lo_frame_count = hi_frame_count = netplay->unread_frame_count;
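/* lo_frame_count/hi_frame_count bracket our peers' progress: lo stays at
 * the oldest unread frame, while hi is raised below to match the
 * furthest-ahead client. */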
|
|
|
|
|
|
|
|
/* Look for players that are ahead of us */
|
|
|
|
for (client = 0; client < MAX_CLIENTS; client++)
|
|
|
|
{
|
|
|
|
if (!(netplay->connected_players & (1 << client)))
|
|
|
|
continue;
|
|
|
|
if (netplay->read_frame_count[client] > hi_frame_count)
|
|
|
|
hi_frame_count = netplay->read_frame_count[client];
|
|
|
|
}
|
|
|
|
}
|
|
|
|
else
|
|
|
|
lo_frame_count = hi_frame_count = netplay->server_frame_count;
|
|
|
|
|
|
|
|
/* If we're behind, try to catch up */
|
|
|
|
if (netplay->catch_up)
|
|
|
|
{
|
|
|
|
/* Are we caught up? */
|
|
|
|
if (netplay->self_frame_count + 1 >= lo_frame_count)
|
|
|
|
{
|
|
|
|
netplay->catch_up = false;
|
2021-09-30 21:10:12 +02:00
|
|
|
input_state_get_ptr()->nonblocking_flag = false;
|
2021-04-08 18:09:02 +02:00
|
|
|
driver_set_nonblock_state();
|
|
|
|
}
|
|
|
|
|
|
|
|
}
|
|
|
|
else if (!stalled)
|
|
|
|
{
|
|
|
|
if (netplay->self_frame_count + 3 < lo_frame_count)
|
|
|
|
{
|
|
|
|
retro_time_t cur_time = cpu_features_get_time_usec();
|
|
|
|
uint32_t cur_behind = lo_frame_count - netplay->self_frame_count;
|
|
|
|
|
|
|
|
/* We're behind, but we'll only try to catch up if we're actually
|
|
|
|
* falling behind, i.e. if we're more behind after some time */
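/* Example: with lo_frame_count == 105 and self_frame_count == 100,
 * cur_behind is 5; if after CATCH_UP_CHECK_TIME_USEC we are still 5 or more
 * frames behind, catch-up kicks in (nonblocking mode) until
 * self_frame_count + 1 reaches lo_frame_count again. */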
|
|
|
|
if (netplay->catch_up_time == 0)
|
|
|
|
{
|
|
|
|
/* Record our current time to check for catch-up later */
|
|
|
|
netplay->catch_up_time = cur_time;
|
|
|
|
netplay->catch_up_behind = cur_behind;
|
|
|
|
|
|
|
|
}
|
|
|
|
else if (cur_time - netplay->catch_up_time > CATCH_UP_CHECK_TIME_USEC)
|
|
|
|
{
|
|
|
|
/* Time to check how far behind we are */
|
|
|
|
if (netplay->catch_up_behind <= cur_behind)
|
|
|
|
{
|
|
|
|
/* We're definitely falling behind! */
|
2021-09-30 21:10:12 +02:00
|
|
|
netplay->catch_up = true;
|
|
|
|
netplay->catch_up_time = 0;
|
|
|
|
input_state_get_ptr()->nonblocking_flag = true;
|
2021-04-08 18:09:02 +02:00
|
|
|
driver_set_nonblock_state();
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
/* Check again in another period */
|
|
|
|
netplay->catch_up_time = cur_time;
|
|
|
|
netplay->catch_up_behind = cur_behind;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
}
|
|
|
|
else if (netplay->self_frame_count + 3 < hi_frame_count)
|
|
|
|
{
|
|
|
|
size_t i;
|
|
|
|
netplay->catch_up_time = 0;
|
|
|
|
|
|
|
|
/* We're falling behind some clients but not others, so request that
|
|
|
|
* clients ahead of us stall */
|
|
|
|
for (i = 0; i < netplay->connections_size; i++)
|
|
|
|
{
|
|
|
|
uint32_t client_num;
|
|
|
|
struct netplay_connection *connection = &netplay->connections[i];
|
|
|
|
|
|
|
|
if (!connection->active ||
|
|
|
|
connection->mode != NETPLAY_CONNECTION_PLAYING)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
client_num = (uint32_t)(i + 1);
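/* Connection slot i corresponds to client number i + 1; client number 0 is
 * the host itself. */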
|
|
|
|
|
|
|
|
/* Are they ahead? */
|
|
|
|
if (netplay->self_frame_count + 3 < netplay->read_frame_count[client_num])
|
|
|
|
{
|
|
|
|
/* Tell them to stall */
|
|
|
|
if (connection->stall_frame + NETPLAY_MAX_REQ_STALL_FREQUENCY <
|
|
|
|
netplay->self_frame_count)
|
|
|
|
{
|
|
|
|
connection->stall_frame = netplay->self_frame_count;
|
|
|
|
netplay_cmd_stall(netplay, connection,
|
|
|
|
netplay->read_frame_count[client_num] -
|
|
|
|
netplay->self_frame_count + 1);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
else
|
|
|
|
netplay->catch_up_time = 0;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
netplay->catch_up_time = 0;
|
|
|
|
}
|
|
|
|
|
2017-09-10 19:42:32 -04:00
|
|
|
#if 0
|
2016-12-17 16:50:06 -05:00
|
|
|
#define DEBUG_NETPLAY_STEPS 1
|
|
|
|
|
|
|
|
static void print_state(netplay_t *netplay)
|
|
|
|
{
|
|
|
|
char msg[512];
|
|
|
|
size_t cur = 0;
|
2017-09-09 20:44:12 -04:00
|
|
|
uint32_t client;
|
2016-12-17 16:50:06 -05:00
|
|
|
|
|
|
|
#define APPEND(out) cur += snprintf out
|
|
|
|
#define M msg + cur, sizeof(msg) - cur
|
|
|
|
|
2017-01-20 14:28:18 -05:00
|
|
|
APPEND((M, "NETPLAY: S:%u U:%u O:%u", netplay->self_frame_count, netplay->unread_frame_count, netplay->other_frame_count));
|
2016-12-17 16:50:06 -05:00
|
|
|
if (!netplay->is_server)
|
|
|
|
APPEND((M, " H:%u", netplay->server_frame_count));
|
2017-09-09 20:44:12 -04:00
|
|
|
for (client = 0; client < MAX_USERS; client++)
|
2016-12-17 16:50:06 -05:00
|
|
|
{
|
2017-09-10 09:15:06 -04:00
|
|
|
if ((netplay->connected_players & (1<<client)))
|
2017-09-11 10:40:34 -04:00
|
|
|
APPEND((M, " %u:%u", client, netplay->read_frame_count[client]));
|
2016-12-17 16:50:06 -05:00
|
|
|
}
|
|
|
|
msg[sizeof(msg)-1] = '\0';
|
|
|
|
|
2021-12-19 12:58:01 -03:00
|
|
|
RARCH_LOG("[Netplay] %s\n", msg);
|
2016-12-17 16:52:02 -05:00
|
|
|
|
|
|
|
#undef APPEND
|
|
|
|
#undef M
|
2016-12-17 16:50:06 -05:00
|
|
|
}
|
|
|
|
#endif
|
|
|
|
|
2016-12-14 11:45:11 -05:00
|
|
|
/**
|
|
|
|
* remote_unpaused
|
|
|
|
*
|
|
|
|
* Mark a particular remote connection as unpaused and, if relevant, inform
|
|
|
|
* everyone else that they may resume.
|
|
|
|
*/
|
2020-05-24 20:38:50 +02:00
|
|
|
static void remote_unpaused(netplay_t *netplay,
|
|
|
|
struct netplay_connection *connection)
|
2016-12-14 11:45:11 -05:00
|
|
|
{
|
|
|
|
size_t i;
|
|
|
|
connection->paused = false;
|
|
|
|
netplay->remote_paused = false;
|
|
|
|
for (i = 0; i < netplay->connections_size; i++)
|
|
|
|
{
|
|
|
|
struct netplay_connection *sc = &netplay->connections[i];
|
|
|
|
if (sc->active && sc->paused)
|
|
|
|
{
|
|
|
|
netplay->remote_paused = true;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (!netplay->remote_paused && !netplay->local_paused)
|
|
|
|
netplay_send_raw_cmd_all(netplay, connection, NETPLAY_CMD_RESUME, NULL, 0);
|
|
|
|
}
|
|
|
|
|
2016-12-13 20:42:53 -05:00
|
|
|
/**
|
|
|
|
* netplay_hangup:
|
|
|
|
*
|
|
|
|
* Disconnects an active Netplay connection due to an error
|
2016-12-15 08:42:03 -05:00
|
|
|
*/
|
2020-05-24 20:38:50 +02:00
|
|
|
void netplay_hangup(netplay_t *netplay,
|
|
|
|
struct netplay_connection *connection)
|
2016-12-13 20:42:53 -05:00
|
|
|
{
|
2016-12-15 17:20:04 -05:00
|
|
|
char msg[512];
|
|
|
|
const char *dmsg;
|
2017-08-25 14:38:21 -04:00
|
|
|
size_t i;
|
2021-11-05 18:52:56 +01:00
|
|
|
bool was_playing = false;
|
|
|
|
settings_t *settings = config_get_ptr();
|
|
|
|
bool extra_notifications = settings->bools.notification_show_netplay_extra;
|
2016-12-15 17:20:04 -05:00
|
|
|
|
2021-11-05 18:53:20 +01:00
|
|
|
if (!netplay || !connection->active)
|
2016-12-13 20:42:53 -05:00
|
|
|
return;
|
|
|
|
|
2021-11-05 18:52:56 +01:00
|
|
|
was_playing =
|
|
|
|
connection->mode == NETPLAY_CONNECTION_PLAYING
|
|
|
|
|| connection->mode == NETPLAY_CONNECTION_SLAVE;
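/* Note whether this peer was actually playing (as opposed to spectating)
 * before tearing the connection down. */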
|
2016-12-15 17:20:04 -05:00
|
|
|
|
|
|
|
/* Report this disconnection */
|
|
|
|
if (netplay->is_server)
|
|
|
|
{
|
2021-12-19 12:58:01 -03:00
|
|
|
if (!string_is_empty(connection->nick))
|
2021-11-05 18:52:56 +01:00
|
|
|
{
|
|
|
|
snprintf(msg, sizeof(msg),
|
|
|
|
msg_hash_to_str(MSG_NETPLAY_SERVER_NAMED_HANGUP), connection->nick);
|
|
|
|
dmsg = msg;
|
|
|
|
}
|
2016-12-15 17:20:04 -05:00
|
|
|
else
|
|
|
|
dmsg = msg_hash_to_str(MSG_NETPLAY_SERVER_HANGUP);
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
dmsg = msg_hash_to_str(MSG_NETPLAY_CLIENT_HANGUP);
|
2019-02-06 19:01:11 -05:00
|
|
|
#ifdef HAVE_DISCORD
|
2021-11-05 13:45:00 +01:00
|
|
|
if (discord_state_get_ptr()->inited)
|
2019-02-06 19:01:11 -05:00
|
|
|
{
|
|
|
|
discord_userdata_t userdata;
|
|
|
|
userdata.status = DISCORD_PRESENCE_NETPLAY_NETPLAY_STOPPED;
|
|
|
|
command_event(CMD_EVENT_DISCORD_UPDATE, &userdata);
|
|
|
|
}
|
|
|
|
#endif
|
2017-05-16 00:15:06 -05:00
|
|
|
netplay->is_connected = false;
|
2016-12-15 17:20:04 -05:00
|
|
|
}
|
2021-12-03 18:00:53 -07:00
|
|
|
RARCH_LOG("[Netplay] %s\n", dmsg);
|
2021-11-05 18:52:56 +01:00
|
|
|
/* This notification is really only important to the server if the client was playing.
|
|
|
|
* Let it be optional if server and the client wasn't playing. */
|
|
|
|
if (!netplay->is_server || was_playing || extra_notifications)
|
|
|
|
runloop_msg_queue_push(dmsg, 1, 180, false, NULL,
|
|
|
|
MESSAGE_QUEUE_ICON_DEFAULT,
|
|
|
|
MESSAGE_QUEUE_CATEGORY_INFO);
|
2016-12-13 20:42:53 -05:00
|
|
|
|
|
|
|
socket_close(connection->fd);
|
|
|
|
connection->active = false;
|
|
|
|
netplay_deinit_socket_buffer(&connection->send_packet_buffer);
|
|
|
|
netplay_deinit_socket_buffer(&connection->recv_packet_buffer);
|
|
|
|
|
|
|
|
if (!netplay->is_server)
|
|
|
|
{
|
|
|
|
netplay->self_mode = NETPLAY_CONNECTION_NONE;
|
2017-09-10 09:15:06 -04:00
|
|
|
netplay->connected_players &= (1L<<netplay->self_client_num);
|
2017-08-25 14:38:21 -04:00
|
|
|
for (i = 0; i < MAX_CLIENTS; i++)
|
|
|
|
{
|
2021-11-05 18:52:56 +01:00
|
|
|
if (i != netplay->self_client_num)
|
|
|
|
netplay->client_devices[i] = 0;
|
2017-08-25 14:38:21 -04:00
|
|
|
}
|
|
|
|
for (i = 0; i < MAX_INPUT_DEVICES; i++)
|
|
|
|
netplay->device_clients[i] &= (1L<<netplay->self_client_num);
|
2016-12-15 17:20:04 -05:00
|
|
|
netplay->stall = NETPLAY_STALL_NONE;
|
2016-12-13 20:42:53 -05:00
|
|
|
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
2017-06-06 21:35:09 -04:00
|
|
|
/* Mark the player for removal */
|
2021-11-05 18:52:56 +01:00
|
|
|
if (was_playing)
|
2016-12-13 20:42:53 -05:00
|
|
|
{
|
2021-11-05 18:52:56 +01:00
|
|
|
uint32_t client_num = (uint32_t)
|
|
|
|
(connection - netplay->connections + 1);
|
|
|
|
|
|
|
|
/* This special mode keeps the connection object
|
|
|
|
alive long enough to send the disconnection
|
|
|
|
message at the correct time */
|
|
|
|
connection->mode = NETPLAY_CONNECTION_DELAYED_DISCONNECT;
|
|
|
|
connection->delay_frame = netplay->read_frame_count[client_num];
|
2017-06-06 21:35:09 -04:00
|
|
|
|
|
|
|
/* Mark them as not playing anymore */
|
2017-09-10 09:15:06 -04:00
|
|
|
netplay->connected_players &= ~(1L<<client_num);
|
|
|
|
netplay->connected_slaves &= ~(1L<<client_num);
|
2017-08-25 14:38:21 -04:00
|
|
|
netplay->client_devices[client_num] = 0;
|
|
|
|
for (i = 0; i < MAX_INPUT_DEVICES; i++)
|
|
|
|
netplay->device_clients[i] &= ~(1L<<client_num);
|
2016-12-13 20:42:53 -05:00
|
|
|
|
|
|
|
}
|
|
|
|
|
|
|
|
}
|
2016-12-14 11:45:11 -05:00
|
|
|
|
|
|
|
/* Unpause them */
|
|
|
|
if (connection->paused)
|
|
|
|
remote_unpaused(netplay, connection);
|
2016-12-13 20:42:53 -05:00
|
|
|
}
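
/* Illustrative sketch (not part of the upstream source): the usual
 * caller-side pattern around netplay_hangup(), as seen in the send paths
 * below. A failed netplay_send() on one connection is answered by hanging
 * that connection up, so the rest of the session keeps running. The helper
 * name is hypothetical; the block is disabled so it does not affect the
 * build. */
#if 0
static void example_send_or_hangup(netplay_t *netplay,
      struct netplay_connection *connection,
      const void *data, size_t size)
{
   if (!netplay_send(&connection->send_packet_buffer, connection->fd,
         data, size))
      netplay_hangup(netplay, connection);
}
#endif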

/**
 * netplay_delayed_state_change:
 *
 * Handle any pending state changes which are ready
 * as of the beginning of the current frame.
 */
void netplay_delayed_state_change(netplay_t *netplay)
{
   unsigned i;

   for (i = 0; i < netplay->connections_size; i++)
   {
      uint32_t client_num = (uint32_t)(i + 1);
      struct netplay_connection *connection = &netplay->connections[i];

      if ((connection->active || connection->mode == NETPLAY_CONNECTION_DELAYED_DISCONNECT) &&
          connection->delay_frame &&
          connection->delay_frame <= netplay->self_frame_count)
      {
         /* Something was delayed! Prepare the MODE command */
         uint32_t payload[15] = {0};
         payload[0] = htonl(connection->delay_frame);
         payload[1] = htonl(client_num);
         payload[2] = htonl(0);

         memcpy(payload + 3, netplay->device_share_modes, sizeof(netplay->device_share_modes));
         strncpy((char *) (payload + 7), connection->nick, NETPLAY_NICK_LEN);

         /* Remove the connection entirely if relevant */
         if (connection->mode == NETPLAY_CONNECTION_DELAYED_DISCONNECT)
            connection->mode = NETPLAY_CONNECTION_NONE;

         /* Then send the mode change packet */
         netplay_send_raw_cmd_all(netplay, connection, NETPLAY_CMD_MODE, payload, sizeof(payload));

         /* And forget the delay frame */
         connection->delay_frame = 0;
      }
   }
}
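
/* Added commentary (not upstream): the payload assembled above follows the
 * MODE layout documented in handle_play_spectate() further down. Word 0 is
 * the delayed frame, word 1 the client number, word 2 a zeroed device
 * bitmap, words 3-6 the device share modes, and words 7-14 the client nick.
 * Broadcasting it with an empty device bitmap is how a delayed disconnect is
 * announced to the remaining clients. */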

/* Send the specified input data */
static bool send_input_frame(netplay_t *netplay, struct delta_frame *dframe,
      struct netplay_connection *only, struct netplay_connection *except,
      uint32_t client_num, bool slave)
{
#define BUFSZ 16 /* FIXME: Arbitrary restriction */
   uint32_t buffer[BUFSZ], devices, device;
   size_t bufused, i;

   /* Set up the basic buffer */
   bufused = 4;
   buffer[0] = htonl(NETPLAY_CMD_INPUT);
   buffer[2] = htonl(dframe->frame);
   buffer[3] = htonl(client_num);

   /* Add the device data */
   devices = netplay->client_devices[client_num];
   for (device = 0; device < MAX_INPUT_DEVICES; device++)
   {
      netplay_input_state_t istate;
      if (!(devices & (1<<device)))
         continue;
      istate = dframe->real_input[device];
      while (istate && (!istate->used || istate->client_num != (slave?MAX_CLIENTS:client_num)))
         istate = istate->next;
      if (!istate)
         continue;
      if (bufused + istate->size >= BUFSZ)
         continue; /* FIXME: More severe? */
      for (i = 0; i < istate->size; i++)
         buffer[bufused+i] = htonl(istate->data[i]);
      bufused += istate->size;
   }
   buffer[1] = htonl((bufused-2) * sizeof(uint32_t));

#ifdef DEBUG_NETPLAY_STEPS
   RARCH_LOG("[Netplay] Sending input for client %u\n", (unsigned) client_num);
   print_state(netplay);
#endif

   if (only)
   {
      if (!netplay_send(&only->send_packet_buffer, only->fd, buffer, bufused*sizeof(uint32_t)))
      {
         netplay_hangup(netplay, only);
         return false;
      }
   }
   else
   {
      for (i = 0; i < netplay->connections_size; i++)
      {
         struct netplay_connection *connection = &netplay->connections[i];
         if (connection == except)
            continue;
         if (connection->active &&
             connection->mode >= NETPLAY_CONNECTION_CONNECTED &&
             (connection->mode != NETPLAY_CONNECTION_PLAYING ||
              i+1 != client_num))
         {
            if (!netplay_send(&connection->send_packet_buffer, connection->fd,
                  buffer, bufused*sizeof(uint32_t)))
               netplay_hangup(netplay, connection);
         }
      }
   }

   return true;
#undef BUFSZ
}
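
/* Added commentary (not upstream): as built above, an INPUT packet is laid
 * out on the wire as
 *
 *    word 0: NETPLAY_CMD_INPUT
 *    word 1: payload size in bytes, i.e. (bufused - 2) * sizeof(uint32_t)
 *    word 2: frame number
 *    word 3: client number
 *    words 4+: the input words of each device owned by that client
 *
 * with every word in network byte order, which is why each one goes through
 * htonl(). */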

/**
 * netplay_send_cur_input
 *
 * Send the current input frame to a given connection.
 *
 * Returns true if successful, false otherwise.
 */
bool netplay_send_cur_input(netplay_t *netplay,
      struct netplay_connection *connection)
{
   uint32_t from_client, to_client;
   struct delta_frame *dframe = &netplay->buffer[netplay->self_ptr];

   if (netplay->is_server)
   {
      to_client = (uint32_t)(connection - netplay->connections + 1);

      /* Send the other players' input data (FIXME: This involves an
       * unacceptable amount of recalculating) */
      for (from_client = 1; from_client < MAX_CLIENTS; from_client++)
      {
         if (from_client == to_client)
            continue;

         if ((netplay->connected_players & (1<<from_client)))
         {
            if (dframe->have_real[from_client])
            {
               if (!send_input_frame(netplay, dframe, connection, NULL, from_client, false))
                  return false;
            }
         }
      }

      /* If we're not playing, send a NOINPUT */
      if (netplay->self_mode != NETPLAY_CONNECTION_PLAYING)
      {
         uint32_t payload = htonl(netplay->self_frame_count);
         if (!netplay_send_raw_cmd(netplay, connection, NETPLAY_CMD_NOINPUT,
               &payload, sizeof(payload)))
            return false;
      }
   }

   /* Send our own data */
   if (netplay->self_mode == NETPLAY_CONNECTION_PLAYING
         || netplay->self_mode == NETPLAY_CONNECTION_SLAVE)
   {
      if (!send_input_frame(netplay, dframe, connection, NULL,
            netplay->self_client_num,
            netplay->self_mode == NETPLAY_CONNECTION_SLAVE))
         return false;
   }

   if (!netplay_send_flush(&connection->send_packet_buffer, connection->fd,
         false))
      return false;

   return true;
}
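
/* Added commentary (not upstream): per frame, this queues in order the other
 * clients' known input (server only), a NOINPUT marker when the server itself
 * is not playing, and finally our own input frame if we are playing or a
 * slave, then flushes the connection's send buffer with the final `false`
 * argument, i.e. without blocking. */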

/**
 * netplay_send_raw_cmd
 *
 * Send a raw Netplay command to the given connection.
 *
 * Returns true on success, false on failure.
 */
bool netplay_send_raw_cmd(netplay_t *netplay,
      struct netplay_connection *connection, uint32_t cmd, const void *data,
      size_t size)
{
   uint32_t cmdbuf[2];

   cmdbuf[0] = htonl(cmd);
   cmdbuf[1] = htonl(size);

   if (!netplay_send(&connection->send_packet_buffer, connection->fd, cmdbuf,
         sizeof(cmdbuf)))
      return false;

   if (size > 0)
      if (!netplay_send(&connection->send_packet_buffer, connection->fd, data, size))
         return false;

   return true;
}
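
/* Illustrative sketch (not upstream code): every raw command is framed as a
 * two-word header, command then payload size, both big-endian, followed by
 * the payload bytes. The call below mirrors the NETPLAY_CMD_NOINPUT send in
 * netplay_send_cur_input() and is disabled so it does not affect the
 * build. */
#if 0
static bool example_send_noinput(netplay_t *netplay,
      struct netplay_connection *connection)
{
   uint32_t payload = htonl(netplay->self_frame_count);
   return netplay_send_raw_cmd(netplay, connection, NETPLAY_CMD_NOINPUT,
         &payload, sizeof(payload));
}
#endif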

/**
 * netplay_send_raw_cmd_all
 *
 * Send a raw Netplay command to all connections, optionally excluding one
 * (typically the client that the relevant command came from)
 */
void netplay_send_raw_cmd_all(netplay_t *netplay,
      struct netplay_connection *except, uint32_t cmd, const void *data,
      size_t size)
{
   size_t i;
   for (i = 0; i < netplay->connections_size; i++)
   {
      struct netplay_connection *connection = &netplay->connections[i];
      if (connection == except)
         continue;
      if (connection->active && connection->mode >= NETPLAY_CONNECTION_CONNECTED)
      {
         if (!netplay_send_raw_cmd(netplay, connection, cmd, data, size))
            netplay_hangup(netplay, connection);
      }
   }
}

/**
 * netplay_send_flush_all
 *
 * Flush all of our output buffers
 */
static void netplay_send_flush_all(netplay_t *netplay,
      struct netplay_connection *except)
{
   size_t i;
   for (i = 0; i < netplay->connections_size; i++)
   {
      struct netplay_connection *connection = &netplay->connections[i];
      if (connection == except)
         continue;
      if (connection->active && connection->mode >= NETPLAY_CONNECTION_CONNECTED)
      {
         if (!netplay_send_flush(&connection->send_packet_buffer,
               connection->fd, true))
            netplay_hangup(netplay, connection);
      }
   }
}
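
/* Added commentary (not upstream): unlike the per-connection flush in
 * netplay_send_cur_input(), which passes `false` to netplay_send_flush(),
 * this helper passes `true` for the final argument, requesting a full
 * (blocking) drain of each output buffer, and hangs up any connection whose
 * flush fails. */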

static bool netplay_cmd_nak(netplay_t *netplay,
      struct netplay_connection *connection)
{
   netplay_send_raw_cmd(netplay, connection, NETPLAY_CMD_NAK, NULL, 0);
   return false;
}
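
/* Added commentary (not upstream): netplay_cmd_nak() always returns false so
 * a command handler can reject a malformed packet and bail out in a single
 * statement, e.g. `return netplay_cmd_nak(netplay, connection);`. */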

/**
 * netplay_settings_share_mode
 *
 * Get the preferred share mode
 */
static uint8_t netplay_settings_share_mode(
      unsigned share_digital, unsigned share_analog)
{
   if (share_digital || share_analog)
   {
      uint8_t share_mode = 0;

      switch (share_digital)
      {
         case RARCH_NETPLAY_SHARE_DIGITAL_OR:
            share_mode |= NETPLAY_SHARE_DIGITAL_OR;
            break;
         case RARCH_NETPLAY_SHARE_DIGITAL_XOR:
            share_mode |= NETPLAY_SHARE_DIGITAL_XOR;
            break;
         case RARCH_NETPLAY_SHARE_DIGITAL_VOTE:
            share_mode |= NETPLAY_SHARE_DIGITAL_VOTE;
            break;
         default:
            share_mode |= NETPLAY_SHARE_NO_PREFERENCE;
      }

      switch (share_analog)
      {
         case RARCH_NETPLAY_SHARE_ANALOG_MAX:
            share_mode |= NETPLAY_SHARE_ANALOG_MAX;
            break;
         case RARCH_NETPLAY_SHARE_ANALOG_AVERAGE:
            share_mode |= NETPLAY_SHARE_ANALOG_AVERAGE;
            break;
         default:
            share_mode |= NETPLAY_SHARE_NO_PREFERENCE;
      }

      return share_mode;
   }
   return 0;
}
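
/* Illustrative sketch (not upstream code): the returned byte simply ORs one
 * digital-share flag with one analog-share flag, so a configuration of
 * "digital OR" plus "analog max" composes both flags. Disabled so it does
 * not affect the build. */
#if 0
static void example_share_mode(void)
{
   uint8_t mode = netplay_settings_share_mode(
         RARCH_NETPLAY_SHARE_DIGITAL_OR, RARCH_NETPLAY_SHARE_ANALOG_MAX);
   /* mode == (NETPLAY_SHARE_DIGITAL_OR | NETPLAY_SHARE_ANALOG_MAX) */
   (void)mode;
}
#endif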

/**
 * netplay_cmd_mode
 *
 * Send a mode change request. As a server, the request is to ourself, and so
 * honored instantly.
 */
bool netplay_cmd_mode(netplay_t *netplay,
      enum rarch_netplay_connection_mode mode)
{
   uint32_t cmd, device;
   uint32_t payload_buf = 0, *payload = NULL;
   uint8_t share_mode = 0;
   struct netplay_connection *connection = NULL;

   if (!netplay->is_server)
      connection = &netplay->one_connection;

   switch (mode)
   {
      case NETPLAY_CONNECTION_SPECTATING:
         cmd = NETPLAY_CMD_SPECTATE;
         break;

      case NETPLAY_CONNECTION_SLAVE:
         payload_buf = NETPLAY_CMD_PLAY_BIT_SLAVE;
         /* no break */

      case NETPLAY_CONNECTION_PLAYING:
         {
            settings_t *settings = config_get_ptr();
            payload = &payload_buf;

            /* Add a share mode if requested */
            share_mode = netplay_settings_share_mode(
                  settings->uints.netplay_share_digital,
                  settings->uints.netplay_share_analog);
            payload_buf |= ((uint32_t) share_mode) << 16;

            /* Request devices */
            for (device = 0; device < MAX_INPUT_DEVICES; device++)
            {
               if (settings->bools.netplay_request_devices[device])
                  payload_buf |= 1<<device;
            }

            payload_buf = htonl(payload_buf);
            cmd = NETPLAY_CMD_PLAY;
         }
         break;

      default:
         return false;
   }

   if (netplay->is_server)
   {
      handle_play_spectate(netplay, 0, NULL, cmd, payload ? sizeof(uint32_t) : 0, payload);
      return true;
   }

   return netplay_send_raw_cmd(netplay, connection, cmd, payload,
         payload ? sizeof(uint32_t) : 0);
}
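
/* Added commentary (not upstream): for a PLAY/SLAVE request the single
 * payload word packs, before the htonl() above, the requested device mask in
 * the low bits, the preferred share mode in bits 16-23, and
 * NETPLAY_CMD_PLAY_BIT_SLAVE when slave mode is being requested. A server
 * honors the request locally through handle_play_spectate(); a client sends
 * it to the host as NETPLAY_CMD_PLAY or NETPLAY_CMD_SPECTATE. */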
|
|
|
|
|
2017-09-12 16:45:38 -04:00
|
|
|
/**
|
|
|
|
* announce_play_spectate
|
|
|
|
*
|
|
|
|
* Announce a play or spectate mode change
|
|
|
|
*/
|
|
|
|
static void announce_play_spectate(netplay_t *netplay,
|
|
|
|
const char *nick,
|
Netplay Stuff (#13375)
* Netplay Stuff
## PROTOCOL FALLBACK
In order to support older clients a protocol fallback system was introduced.
The host will no longer send its header automatically after a TCP connection is established, instead, it awaits for the client to send his before determining which protocol this connection is going to operate on.
Netplay has now two protocols, a low protocol and a high protocol; the low protocol is the minimum protocol it supports, while the high protocol is the highest protocol it can operate on.
To fully support older clients, a hack was necessary: sending the high protocol in the unused client's header salt field, while keeping the protocol field to the low protocol. Without this hack we would only be able to support older clients if a newer client was the host.
Any future system can make use of this system by checking connection->netplay_protocol, which is available for both the client and host.
## NETPLAY CHAT
Starting with protocol 6, netplay chat is available through the new NETPLAY_CMD_PLAYER_CHAT command.
Limitations of the command code, which causes a disconnection on unknown commands, makes this system not possible on protocol 5.
Protocol 5 connections can neither send nor receive chat, but other netplay operations are unaffected.
Clients send chat as a string to the server, and it's the server's sole responsability to relay chat messages.
As of now, sending chat uses RetroArch's input menu, while the display of on-screen chat uses a widget overlay and RetroArch's notifications as a fallback.
If a new overlay and/or input system is desired, no backwards compatibility changes need to be made.
Only clients in playing mode (as opposed to spectating mode) can send and receive chat.
## SETTINGS SHARING
Some settings are better used when both host and clients share the same configuration.
As of protocol 6, the following settings will be shared from host to clients (without altering a client's configuration file): input latency frames and allow pausing.
## NETPLAY TUNNEL/MITM
With the current MITM system being defunct (at least as of 1.9.X), a new system was in order to solve most if not all of the problems with the current system.
This new system uses a tunneling approach, which is similar to most VPN and tunneling services around.
Tunnel commands:
RATS[unique id] (RetroArch Tunnel Session) - 16 bytes -> When this command is sent with a zeroed unique id, the tunnel server interprets this as a netplay host wanting to create a new session, in this case, the same command is returned to the host, but now with its unique session id. When a client needs to connect to a host, this command is sent with the unique session id of the host, causing the tunnel server to send a RATL command to the host.
RATL[unique id] (RetroArch Tunnel Link) - 16 bytes -> The tunnel server sends this command to the host when a client wants to connect to the host. Once the host receives this command, it establishes a new connection to the tunnel server, sending this command together with the client's unique id through this new connection, causing the tunnel server to link this connection to the connection of the client.
RATP (RetroArch Tunnel Ping) - 4 bytes -> The tunnel server sends this command to verify that the host, whom the session belongs to, is still around. The host replies with the same command. A session is closed if the tunnel server can not verify that the host is alive.
Operations:
Host -> Instead of listening and accepting connections, it connects to the tunnel server, requests a new session and then monitor this connection for new linking requests. Once a request is received, it establishes a new connection to the tunnel server for linking with a client. The tunnel server's address and port are obtained by querying the lobby server. The host will publish its session id together with the rest of its info to the lobby server.
Client -> It connects to the tunnel server and then sends the session id of the host it wants to connect to. A host's session id is obtained from the json data sent by the lobby server.
Improvements (from current MITM system):
No longer a risk of TCP port exhaustion; we only use one port now at the tunnel server.
Very little cpu usage. About 95% net I/O bound now.
Future backwards compatible with any and all changes to netplay as it no longer runs any netplay logic at MITM servers.
No longer operates the host in client mode, which was a source of many of the current problems.
Cleaner and more maintainable system and code.
Notable functions:
netplay_mitm_query -> Grabs the tunnel's address and port from the lobby server.
init_tcp_socket -> Handles the creation and operation mode of the TCP socket based on whether it's host, host+MITM or client.
handle_mitm_connection -> Creates and completes linking connections and replies to ping commands (only 1 of each per call to not affect performance).
## MISC
Ping Limiter: If a client's estimated latency to the server is higher than this value, connection will be dropped just before finishing the netplay handshake.
Ping Counter: A ping counter (similar to the FPS one) can be shown in the bottom right corner of the screen, if you are connected to a host.
LAN Discovery: Refactored and moved to its own "Refresh Netplay LAN List" button.
## FIXES
Many minor fixes to the current netplay implementation are also included.
* Remove NETPLAY_TEST_BUILD
2021-12-19 12:58:01 -03:00
|
|
|
enum rarch_netplay_connection_mode mode, uint32_t devices,
|
|
|
|
int32_t ping)
|
2017-09-12 16:45:38 -04:00
|
|
|
{
|
|
|
|
char msg[512];
|
2021-11-05 18:52:56 +01:00
|
|
|
const char *dmsg = NULL;
|
2017-09-12 16:45:38 -04:00
|
|
|
|
|
|
|
switch (mode)
|
|
|
|
{
|
|
|
|
case NETPLAY_CONNECTION_SPECTATING:
|
|
|
|
if (nick)
|
2021-11-05 18:52:56 +01:00
|
|
|
{
|
|
|
|
snprintf(msg, sizeof(msg),
|
|
|
|
msg_hash_to_str(MSG_NETPLAY_PLAYER_S_LEFT),
|
|
|
|
NETPLAY_NICK_LEN, nick);
|
|
|
|
dmsg = msg;
|
|
|
|
}
|
2017-09-12 16:45:38 -04:00
|
|
|
else
|
2021-12-03 18:00:53 -07:00
|
|
|
{
|
2021-11-05 18:52:56 +01:00
|
|
|
dmsg = msg_hash_to_str(MSG_NETPLAY_YOU_HAVE_LEFT_THE_GAME);
|
2021-12-03 18:00:53 -07:00
|
|
|
}
|
2017-09-12 16:45:38 -04:00
|
|
|
break;
|
|
|
|
|
|
|
|
case NETPLAY_CONNECTION_PLAYING:
|
|
|
|
case NETPLAY_CONNECTION_SLAVE:
|
|
|
|
{
|
2021-11-05 18:52:56 +01:00
|
|
|
char device_str[256];
|
Netplay Stuff (#13375)
* Netplay Stuff
## PROTOCOL FALLBACK
In order to support older clients a protocol fallback system was introduced.
The host will no longer send its header automatically after a TCP connection is established, instead, it awaits for the client to send his before determining which protocol this connection is going to operate on.
Netplay has now two protocols, a low protocol and a high protocol; the low protocol is the minimum protocol it supports, while the high protocol is the highest protocol it can operate on.
To fully support older clients, a hack was necessary: sending the high protocol in the unused client's header salt field, while keeping the protocol field to the low protocol. Without this hack we would only be able to support older clients if a newer client was the host.
Any future system can make use of this system by checking connection->netplay_protocol, which is available for both the client and host.
## NETPLAY CHAT
Starting with protocol 6, netplay chat is available through the new NETPLAY_CMD_PLAYER_CHAT command.
Limitations of the command code, which causes a disconnection on unknown commands, makes this system not possible on protocol 5.
Protocol 5 connections can neither send nor receive chat, but other netplay operations are unaffected.
Clients send chat as a string to the server, and it's the server's sole responsability to relay chat messages.
As of now, sending chat uses RetroArch's input menu, while the display of on-screen chat uses a widget overlay and RetroArch's notifications as a fallback.
If a new overlay and/or input system is desired, no backwards compatibility changes need to be made.
Only clients in playing mode (as opposed to spectating mode) can send and receive chat.
## SETTINGS SHARING
Some settings are better used when both host and clients share the same configuration.
As of protocol 6, the following settings will be shared from host to clients (without altering a client's configuration file): input latency frames and allow pausing.
## NETPLAY TUNNEL/MITM
With the current MITM system being defunct (at least as of 1.9.X), a new system was in order to solve most if not all of the problems with the current system.
This new system uses a tunneling approach, which is similar to most VPN and tunneling services around.
Tunnel commands:
RATS[unique id] (RetroArch Tunnel Session) - 16 bytes -> When this command is sent with a zeroed unique id, the tunnel server interprets this as a netplay host wanting to create a new session, in this case, the same command is returned to the host, but now with its unique session id. When a client needs to connect to a host, this command is sent with the unique session id of the host, causing the tunnel server to send a RATL command to the host.
RATL[unique id] (RetroArch Tunnel Link) - 16 bytes -> The tunnel server sends this command to the host when a client wants to connect to the host. Once the host receives this command, it establishes a new connection to the tunnel server, sending this command together with the client's unique id through this new connection, causing the tunnel server to link this connection to the connection of the client.
RATP (RetroArch Tunnel Ping) - 4 bytes -> The tunnel server sends this command to verify that the host, whom the session belongs to, is still around. The host replies with the same command. A session is closed if the tunnel server can not verify that the host is alive.
Operations:
Host -> Instead of listening and accepting connections, it connects to the tunnel server, requests a new session and then monitor this connection for new linking requests. Once a request is received, it establishes a new connection to the tunnel server for linking with a client. The tunnel server's address and port are obtained by querying the lobby server. The host will publish its session id together with the rest of its info to the lobby server.
Client -> It connects to the tunnel server and then sends the session id of the host it wants to connect to. A host's session id is obtained from the json data sent by the lobby server.
Improvements (from current MITM system):
No longer a risk of TCP port exhaustion; we only use one port now at the tunnel server.
Very little cpu usage. About 95% net I/O bound now.
Future backwards compatible with any and all changes to netplay as it no longer runs any netplay logic at MITM servers.
No longer operates the host in client mode, which was a source of many of the current problems.
Cleaner and more maintainable system and code.
Notable functions:
netplay_mitm_query -> Grabs the tunnel's address and port from the lobby server.
init_tcp_socket -> Handles the creation and operation mode of the TCP socket based on whether it's host, host+MITM or client.
handle_mitm_connection -> Creates and completes linking connections and replies to ping commands (only 1 of each per call to not affect performance).
## MISC
Ping Limiter: If a client's estimated latency to the server is higher than this value, connection will be dropped just before finishing the netplay handshake.
Ping Counter: A ping counter (similar to the FPS one) can be shown in the bottom right corner of the screen, if you are connected to a host.
LAN Discovery: Refactored and moved to its own "Refresh Netplay LAN List" button.
## FIXES
Many minor fixes to the current netplay implementation are also included.
* Remove NETPLAY_TEST_BUILD
2021-12-19 12:58:01 -03:00
|
|
|
char ping_str[32];
|
2017-09-12 16:45:38 -04:00
|
|
|
uint32_t device;
|
|
|
|
uint32_t one_device = (uint32_t) -1;
|
2021-11-05 18:52:56 +01:00
|
|
|
char *pdevice_str = NULL;
|
2017-09-12 16:45:38 -04:00
|
|
|
|
|
|
|
for (device = 0; device < MAX_INPUT_DEVICES; device++)
|
|
|
|
{
|
2019-06-26 07:13:50 +02:00
|
|
|
if (!(devices & (1<<device)))
|
|
|
|
continue;
|
2017-09-12 16:45:38 -04:00
|
|
|
if (one_device == (uint32_t) -1)
|
|
|
|
one_device = device;
|
|
|
|
else
|
|
|
|
{
|
|
|
|
one_device = (uint32_t) -1;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (one_device != (uint32_t) -1)
|
|
|
|
{
|
|
|
|
/* Only have one device, simpler message */
|
|
|
|
if (nick)
|
2021-11-05 18:52:56 +01:00
|
|
|
snprintf(msg, sizeof(msg),
|
|
|
|
msg_hash_to_str(
|
|
|
|
MSG_NETPLAY_S_HAS_JOINED_AS_PLAYER_N),
|
2017-09-12 16:45:38 -04:00
|
|
|
NETPLAY_NICK_LEN, nick, one_device + 1);
|
|
|
|
else
|
2021-11-05 18:52:56 +01:00
|
|
|
snprintf(msg, sizeof(msg),
|
|
|
|
msg_hash_to_str(
|
|
|
|
MSG_NETPLAY_YOU_HAVE_JOINED_AS_PLAYER_N),
|
2017-09-12 16:45:38 -04:00
|
|
|
one_device + 1);
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
2021-11-05 18:52:56 +01:00
|
|
|
/* Multiple devices, so step one is to make the
|
|
|
|
device string listing them all */
|
|
|
|
pdevice_str = device_str;
|
|
|
|
|
2017-09-12 16:45:38 -04:00
|
|
|
for (device = 0; device < MAX_INPUT_DEVICES; device++)
|
2021-11-05 19:04:52 +01:00
|
|
|
{
|
2021-11-05 18:52:56 +01:00
|
|
|
if (devices & (1<<device))
|
2021-12-03 18:00:53 -07:00
|
|
|
{
|
2021-11-05 18:52:56 +01:00
|
|
|
pdevice_str += snprintf(pdevice_str,
|
|
|
|
sizeof(device_str) - (size_t)
|
|
|
|
(pdevice_str - device_str),
|
|
|
|
"%u, ",
|
Netplay Stuff (#13375)
* Netplay Stuff
## PROTOCOL FALLBACK
In order to support older clients a protocol fallback system was introduced.
The host will no longer send its header automatically after a TCP connection is established, instead, it awaits for the client to send his before determining which protocol this connection is going to operate on.
Netplay has now two protocols, a low protocol and a high protocol; the low protocol is the minimum protocol it supports, while the high protocol is the highest protocol it can operate on.
To fully support older clients, a hack was necessary: sending the high protocol in the unused client's header salt field, while keeping the protocol field to the low protocol. Without this hack we would only be able to support older clients if a newer client was the host.
Any future system can make use of this system by checking connection->netplay_protocol, which is available for both the client and host.
## NETPLAY CHAT
Starting with protocol 6, netplay chat is available through the new NETPLAY_CMD_PLAYER_CHAT command.
Limitations of the command code, which causes a disconnection on unknown commands, makes this system not possible on protocol 5.
Protocol 5 connections can neither send nor receive chat, but other netplay operations are unaffected.
Clients send chat as a string to the server, and it's the server's sole responsability to relay chat messages.
As of now, sending chat uses RetroArch's input menu, while the display of on-screen chat uses a widget overlay and RetroArch's notifications as a fallback.
If a new overlay and/or input system is desired, no backwards compatibility changes need to be made.
Only clients in playing mode (as opposed to spectating mode) can send and receive chat.
## SETTINGS SHARING
Some settings are better used when both host and clients share the same configuration.
As of protocol 6, the following settings will be shared from host to clients (without altering a client's configuration file): input latency frames and allow pausing.
## NETPLAY TUNNEL/MITM
With the current MITM system being defunct (at least as of 1.9.X), a new system was in order to solve most if not all of the problems with the current system.
This new system uses a tunneling approach, which is similar to most VPN and tunneling services around.
Tunnel commands:
RATS[unique id] (RetroArch Tunnel Session) - 16 bytes -> When this command is sent with a zeroed unique id, the tunnel server interprets this as a netplay host wanting to create a new session, in this case, the same command is returned to the host, but now with its unique session id. When a client needs to connect to a host, this command is sent with the unique session id of the host, causing the tunnel server to send a RATL command to the host.
RATL[unique id] (RetroArch Tunnel Link) - 16 bytes -> The tunnel server sends this command to the host when a client wants to connect to the host. Once the host receives this command, it establishes a new connection to the tunnel server, sending this command together with the client's unique id through this new connection, causing the tunnel server to link this connection to the connection of the client.
RATP (RetroArch Tunnel Ping) - 4 bytes -> The tunnel server sends this command to verify that the host, whom the session belongs to, is still around. The host replies with the same command. A session is closed if the tunnel server can not verify that the host is alive.
Operations:
Host -> Instead of listening and accepting connections, it connects to the tunnel server, requests a new session and then monitor this connection for new linking requests. Once a request is received, it establishes a new connection to the tunnel server for linking with a client. The tunnel server's address and port are obtained by querying the lobby server. The host will publish its session id together with the rest of its info to the lobby server.
Client -> It connects to the tunnel server and then sends the session id of the host it wants to connect to. A host's session id is obtained from the json data sent by the lobby server.
Improvements (from current MITM system):
No longer a risk of TCP port exhaustion; we only use one port now at the tunnel server.
Very little cpu usage. About 95% net I/O bound now.
Future backwards compatible with any and all changes to netplay as it no longer runs any netplay logic at MITM servers.
No longer operates the host in client mode, which was a source of many of the current problems.
Cleaner and more maintainable system and code.
Notable functions:
netplay_mitm_query -> Grabs the tunnel's address and port from the lobby server.
init_tcp_socket -> Handles the creation and operation mode of the TCP socket based on whether it's host, host+MITM or client.
handle_mitm_connection -> Creates and completes linking connections and replies to ping commands (only 1 of each per call to not affect performance).
## MISC
Ping Limiter: If a client's estimated latency to the server is higher than this value, connection will be dropped just before finishing the netplay handshake.
Ping Counter: A ping counter (similar to the FPS one) can be shown in the bottom right corner of the screen, if you are connected to a host.
LAN Discovery: Refactored and moved to its own "Refresh Netplay LAN List" button.
## FIXES
Many minor fixes to the current netplay implementation are also included.
* Remove NETPLAY_TEST_BUILD
2021-12-19 12:58:01 -03:00
|
|
|
(unsigned) (device+1));
|
2021-12-03 18:00:53 -07:00
|
|
|
}
|
2021-11-05 19:04:52 +01:00
|
|
|
}
|
|
|
|
|
2021-11-05 18:52:56 +01:00
|
|
|
if (pdevice_str > device_str)
|
|
|
|
pdevice_str -= 2;
|
2021-11-05 19:04:52 +01:00
|
|
|
*pdevice_str = '\0';
|
2017-09-12 16:45:38 -04:00
|
|
|
|
|
|
|
/* Then we make the final string */
|
|
|
|
if (nick)
|
2021-11-05 18:52:56 +01:00
|
|
|
snprintf(msg, sizeof(msg),
|
2017-09-12 16:45:38 -04:00
|
|
|
msg_hash_to_str(
|
2021-11-05 18:52:56 +01:00
|
|
|
MSG_NETPLAY_S_HAS_JOINED_WITH_INPUT_DEVICES_S),
|
|
|
|
NETPLAY_NICK_LEN, nick,
|
|
|
|
sizeof(device_str), device_str);
|
2017-09-12 16:45:38 -04:00
|
|
|
else
|
2021-11-05 18:52:56 +01:00
|
|
|
snprintf(msg, sizeof(msg),
|
2017-09-12 16:45:38 -04:00
|
|
|
msg_hash_to_str(
|
2021-11-05 18:52:56 +01:00
|
|
|
MSG_NETPLAY_YOU_HAVE_JOINED_WITH_INPUT_DEVICES_S),
|
|
|
|
sizeof(device_str),
|
|
|
|
device_str);
|
2017-09-12 16:45:38 -04:00
|
|
|
}
|
|
|
|
|
Netplay Stuff (#13375)
* Netplay Stuff
## PROTOCOL FALLBACK
In order to support older clients a protocol fallback system was introduced.
The host will no longer send its header automatically after a TCP connection is established, instead, it awaits for the client to send his before determining which protocol this connection is going to operate on.
Netplay has now two protocols, a low protocol and a high protocol; the low protocol is the minimum protocol it supports, while the high protocol is the highest protocol it can operate on.
To fully support older clients, a hack was necessary: sending the high protocol in the unused client's header salt field, while keeping the protocol field to the low protocol. Without this hack we would only be able to support older clients if a newer client was the host.
Any future system can make use of this system by checking connection->netplay_protocol, which is available for both the client and host.
## NETPLAY CHAT
Starting with protocol 6, netplay chat is available through the new NETPLAY_CMD_PLAYER_CHAT command.
Limitations of the command code, which causes a disconnection on unknown commands, makes this system not possible on protocol 5.
Protocol 5 connections can neither send nor receive chat, but other netplay operations are unaffected.
Clients send chat as a string to the server, and it's the server's sole responsability to relay chat messages.
As of now, sending chat uses RetroArch's input menu, while the display of on-screen chat uses a widget overlay and RetroArch's notifications as a fallback.
If a new overlay and/or input system is desired, no backwards compatibility changes need to be made.
Only clients in playing mode (as opposed to spectating mode) can send and receive chat.
## SETTINGS SHARING
Some settings are better used when both host and clients share the same configuration.
As of protocol 6, the following settings will be shared from host to clients (without altering a client's configuration file): input latency frames and allow pausing.
## NETPLAY TUNNEL/MITM
With the current MITM system being defunct (at least as of 1.9.X), a new system was in order to solve most if not all of the problems with the current system.
This new system uses a tunneling approach, which is similar to most VPN and tunneling services around.
Tunnel commands:
RATS[unique id] (RetroArch Tunnel Session) - 16 bytes -> When this command is sent with a zeroed unique id, the tunnel server interprets this as a netplay host wanting to create a new session, in this case, the same command is returned to the host, but now with its unique session id. When a client needs to connect to a host, this command is sent with the unique session id of the host, causing the tunnel server to send a RATL command to the host.
RATL[unique id] (RetroArch Tunnel Link) - 16 bytes -> The tunnel server sends this command to the host when a client wants to connect to the host. Once the host receives this command, it establishes a new connection to the tunnel server, sending this command together with the client's unique id through this new connection, causing the tunnel server to link this connection to the connection of the client.
RATP (RetroArch Tunnel Ping) - 4 bytes -> The tunnel server sends this command to verify that the host, whom the session belongs to, is still around. The host replies with the same command. A session is closed if the tunnel server can not verify that the host is alive.
Operations:
Host -> Instead of listening and accepting connections, it connects to the tunnel server, requests a new session and then monitor this connection for new linking requests. Once a request is received, it establishes a new connection to the tunnel server for linking with a client. The tunnel server's address and port are obtained by querying the lobby server. The host will publish its session id together with the rest of its info to the lobby server.
Client -> It connects to the tunnel server and then sends the session id of the host it wants to connect to. A host's session id is obtained from the json data sent by the lobby server.
Improvements (from current MITM system):
No longer a risk of TCP port exhaustion; we only use one port now at the tunnel server.
Very little cpu usage. About 95% net I/O bound now.
Future backwards compatible with any and all changes to netplay as it no longer runs any netplay logic at MITM servers.
No longer operates the host in client mode, which was a source of many of the current problems.
Cleaner and more maintainable system and code.
Notable functions:
netplay_mitm_query -> Grabs the tunnel's address and port from the lobby server.
init_tcp_socket -> Handles the creation and operation mode of the TCP socket based on whether it's host, host+MITM or client.
handle_mitm_connection -> Creates and completes linking connections and replies to ping commands (only 1 of each per call to not affect performance).
## MISC
Ping Limiter: If a client's estimated latency to the server is higher than this value, connection will be dropped just before finishing the netplay handshake.
Ping Counter: A ping counter (similar to the FPS one) can be shown in the bottom right corner of the screen, if you are connected to a host.
LAN Discovery: Refactored and moved to its own "Refresh Netplay LAN List" button.
## FIXES
Many minor fixes to the current netplay implementation are also included.
* Remove NETPLAY_TEST_BUILD
2021-12-19 12:58:01 -03:00
|
|
|
if (ping >= 0)
|
|
|
|
{
|
2021-12-20 10:19:42 -03:00
|
|
|
snprintf(ping_str, sizeof(ping_str), " (ping: %li ms)",
|
|
|
|
(long int)ping);
|
Netplay Stuff (#13375)
* Netplay Stuff
## PROTOCOL FALLBACK
In order to support older clients a protocol fallback system was introduced.
The host will no longer send its header automatically after a TCP connection is established, instead, it awaits for the client to send his before determining which protocol this connection is going to operate on.
Netplay has now two protocols, a low protocol and a high protocol; the low protocol is the minimum protocol it supports, while the high protocol is the highest protocol it can operate on.
            strlcat(msg, ping_str, sizeof(msg));
         }

         dmsg = msg;
         break;
      }

      default: /* wrong usage */
         return;
   }

   RARCH_LOG("[Netplay] %s\n", dmsg);
   runloop_msg_queue_push(dmsg, 1, 180, false, NULL,
         MESSAGE_QUEUE_ICON_DEFAULT,
         MESSAGE_QUEUE_CATEGORY_INFO);
}

/**
 * handle_play_spectate
 *
 * Handle a play or spectate request
 */
static void handle_play_spectate(netplay_t *netplay,
      uint32_t client_num,
      struct netplay_connection *connection,
      uint32_t cmd, uint32_t cmd_size,
      uint32_t *in_payload)
{
   /*
    * MODE payload:
    * word 0: frame number
    * word 1: mode info (playing, slave, client number)
    * word 2: device bitmap
    * words 3-6: share modes for all devices
    * words 7-14: client nick
    */
   uint32_t payload[15] = {0};
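   /* For reference (derived from the decoding in the PLAY case below, not
    * from any separate spec): the single word of an incoming NETPLAY_CMD_PLAY
    * request packs the requested device bitmap in its low 16 bits, the
    * preferred share mode in bits 16-23, and the slave request as the
    * NETPLAY_CMD_PLAY_BIT_SLAVE flag. */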

   switch (cmd)
   {
      case NETPLAY_CMD_SPECTATE:
      {
         size_t i;

         /* The frame we haven't received is their end frame */
         if (connection)
            connection->delay_frame = netplay->read_frame_count[client_num];

         /* Mark them as not playing anymore */
         if (connection)
            connection->mode = NETPLAY_CONNECTION_SPECTATING;
         else
         {
            netplay->self_devices = 0;
            netplay->self_mode = NETPLAY_CONNECTION_SPECTATING;
         }
         netplay->connected_players &= ~(1 << client_num);
         netplay->connected_slaves &= ~(1 << client_num);
         netplay->client_devices[client_num] = 0;
         for (i = 0; i < MAX_INPUT_DEVICES; i++)
            netplay->device_clients[i] &= ~(1 << client_num);

         /* Tell someone */
         payload[0] = htonl(netplay->read_frame_count[client_num]);
         payload[2] = htonl(0);
         memcpy(payload + 3, netplay->device_share_modes, sizeof(netplay->device_share_modes));
         if (connection)
         {
            /* Only tell the player. The others will be told at delay_frame */
            payload[1] = htonl(NETPLAY_CMD_MODE_BIT_YOU | client_num);
            strncpy((char *) (payload + 7), connection->nick, NETPLAY_NICK_LEN);
            netplay_send_raw_cmd(netplay, connection, NETPLAY_CMD_MODE, payload, sizeof(payload));
         }
         else
         {
            /* It was the server, so tell everyone else */
            payload[1] = htonl(0);
            strncpy((char *) (payload + 7), netplay->nick, NETPLAY_NICK_LEN);
            netplay_send_raw_cmd_all(netplay, NULL, NETPLAY_CMD_MODE, payload, sizeof(payload));
         }

         /* Announce it */
         announce_play_spectate(netplay, connection ? connection->nick : NULL,
               NETPLAY_CONNECTION_SPECTATING, 0, -1);
         break;
      }

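      /* Illustrative only: when client 2 drops to spectator, the MODE payload
       * built above (connection case) carries word 0 = that client's end
       * frame, word 1 = NETPLAY_CMD_MODE_BIT_YOU | 2 (no PLAYING bit, i.e.
       * now spectating), word 2 = 0 (no devices), words 3-6 the current
       * share modes, and words 7-14 the client's nick. */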
      case NETPLAY_CMD_PLAY:
      {
         uint32_t mode, devices = 0, device;
         uint8_t share_mode;
         bool slave = false;
         settings_t *settings = config_get_ptr();

         if (cmd_size != sizeof(uint32_t) || !in_payload)
            return;
         mode = ntohl(in_payload[0]);

         /* Check the requested mode */
         slave = (mode&NETPLAY_CMD_PLAY_BIT_SLAVE)?true:false;
         share_mode = (mode>>16)&0xFF;

         /* And the requested devices */
         devices = mode&0xFFFF;

         /* Check if their slave mode request corresponds with what we allow */
         if (connection)
         {
            if (settings->bools.netplay_require_slaves)
               slave = true;
            else if (!settings->bools.netplay_allow_slaves)
               slave = false;
         }
         else
            slave = false;

         /* Fix our share mode */
         if (share_mode)
         {
            if ((share_mode & NETPLAY_SHARE_DIGITAL_BITS) == 0)
               share_mode |= NETPLAY_SHARE_DIGITAL_OR;
            if ((share_mode & NETPLAY_SHARE_ANALOG_BITS) == 0)
               share_mode |= NETPLAY_SHARE_ANALOG_MAX;
            share_mode &= ~NETPLAY_SHARE_NO_PREFERENCE;
         }
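         /* E.g. a request that only sets NETPLAY_SHARE_NO_PREFERENCE comes
          * out of the normalization above as NETPLAY_SHARE_DIGITAL_OR |
          * NETPLAY_SHARE_ANALOG_MAX (assuming, as the mask clearing implies,
          * that the preference flag sits outside the digital/analog bits). */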

         /* They start at the next frame, but we start immediately */
         if (connection)
         {
            netplay->read_ptr[client_num] = NEXT_PTR(netplay->self_ptr);
            netplay->read_frame_count[client_num] = netplay->self_frame_count + 1;
         }
         else
         {
            netplay->read_ptr[client_num] = netplay->self_ptr;
            netplay->read_frame_count[client_num] = netplay->self_frame_count;
         }
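         /* Word 0 of the MODE command announces the frame at which the new
          * player's input begins, i.e. the read frame chosen just above. */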
         payload[0] = htonl(netplay->read_frame_count[client_num]);

         if (devices)
         {
            /* Make sure the devices are available and/or shareable */
            for (device = 0; device < MAX_INPUT_DEVICES; device++)
            {
               if (!(devices & (1<<device)))
                  continue;
               if (!netplay->device_clients[device])
                  continue;
               if (netplay->device_share_modes[device] && share_mode)
                  continue;

               /* Device already taken and unshareable */
               payload[0] = htonl(NETPLAY_CMD_MODE_REFUSED_REASON_NOT_AVAILABLE);
               /* FIXME: Refusal message for the server */
               if (connection)
                  netplay_send_raw_cmd(netplay, connection, NETPLAY_CMD_MODE_REFUSED, payload, sizeof(uint32_t));
               devices = 0;
               break;
            }
            if (devices == 0)
               break;

            /* Set the share mode on any new devices */
            for (device = 0; device < MAX_INPUT_DEVICES; device++)
            {
               if (!(devices & (1<<device)))
                  continue;
               if (!netplay->device_clients[device])
                  netplay->device_share_modes[device] = share_mode;
            }
         }
         else
         {
            /* Find an available device */
            for (device = 0; device < MAX_INPUT_DEVICES; device++)
            {
               if (netplay->config_devices[device] == RETRO_DEVICE_NONE)
               {
                  device = MAX_INPUT_DEVICES;
                  break;
               }
               if (!netplay->device_clients[device])
                  break;
            }
            if (device >= MAX_INPUT_DEVICES)
            {
               if (netplay->config_devices[1] == RETRO_DEVICE_NONE &&
                     netplay->device_share_modes[0] && share_mode)
               {
                  /* No device free and no device specifically asked for, but only
                   * one device, so share it */
                  device = 0;
                  share_mode = netplay->device_share_modes[0];
               }
               else
               {
                  /* No slots free! */
                  payload[0] = htonl(NETPLAY_CMD_MODE_REFUSED_REASON_NO_SLOTS);
                  /* FIXME: Message for the server */
                  if (connection)
                     netplay_send_raw_cmd(netplay, connection,
                           NETPLAY_CMD_MODE_REFUSED, payload, sizeof(uint32_t));
                  break;
               }
            }
            devices = 1<<device;
            netplay->device_share_modes[device] = share_mode;
         }

         payload[2] = htonl(devices);

         /* Mark them as playing */
         if (connection)
            connection->mode =
               slave ? NETPLAY_CONNECTION_SLAVE : NETPLAY_CONNECTION_PLAYING;
         else
         {
            netplay->self_devices = devices;
            netplay->self_mode = NETPLAY_CONNECTION_PLAYING;
         }
         netplay->connected_players |= 1 << client_num;
         if (slave)
            netplay->connected_slaves |= 1 << client_num;
         netplay->client_devices[client_num] = devices;
         for (device = 0; device < MAX_INPUT_DEVICES; device++)
         {
            if (!(devices & (1<<device)))
               continue;
            netplay->device_clients[device] |= 1 << client_num;
         }

         /* Tell everyone */
         payload[1] = htonl(
               NETPLAY_CMD_MODE_BIT_PLAYING
               | (slave ? NETPLAY_CMD_MODE_BIT_SLAVE : 0) | client_num);
         memcpy(payload + 3, netplay->device_share_modes, sizeof(netplay->device_share_modes));
         if (connection)
            strncpy((char *) (payload + 7), connection->nick, NETPLAY_NICK_LEN);
         else
            strncpy((char *) (payload + 7), netplay->nick, NETPLAY_NICK_LEN);
         netplay_send_raw_cmd_all(netplay, connection, NETPLAY_CMD_MODE,
               payload, sizeof(payload));

         /* Tell the player */
         if (connection)
         {
            payload[1] = htonl(NETPLAY_CMD_MODE_BIT_PLAYING |
                  ((connection->mode == NETPLAY_CONNECTION_SLAVE)?
                   NETPLAY_CMD_MODE_BIT_SLAVE:0) |
                  NETPLAY_CMD_MODE_BIT_YOU |
                  client_num);
            netplay_send_raw_cmd(netplay, connection, NETPLAY_CMD_MODE, payload, sizeof(payload));
         }

         /* Announce it */
         announce_play_spectate(netplay, connection ? connection->nick : NULL,
               NETPLAY_CONNECTION_PLAYING, devices,
               connection ? connection->ping : -1);
         break;
      }
   }
}

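/* For reference, a minimal sketch of the requesting side (hypothetical names;
 * the real sender lives elsewhere in the netplay code): the single payload
 * word decoded by the PLAY case above packs the device bitmap in the low
 * 16 bits, the share mode in bits 16-23 and the slave request as a flag:
 *
 *    uint32_t mode = (devices & 0xFFFF)
 *          | ((uint32_t)share_mode << 16)
 *          | (as_slave ? NETPLAY_CMD_PLAY_BIT_SLAVE : 0);
 *    uint32_t req  = htonl(mode);
 *    netplay_send_raw_cmd(netplay, connection, NETPLAY_CMD_PLAY, &req, sizeof(req));
 */
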
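/**
 * chat_check
 *
 * Check whether chat can be sent and shown right now: the netplay handle
 * must exist, we need a nickname and must be in playing (or slave) mode,
 * and a server additionally needs at least one active playing client whose
 * negotiated protocol supports chat (protocol 6 or newer) to relay to.
 */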
static bool chat_check(netplay_t *netplay)
{
   if (!netplay)
      return false;

   /* Do nothing if we don't have a nickname. */
   if (string_is_empty(netplay->nick))
      return false;

   /* Do nothing if we are not playing. */
   if (netplay->self_mode != NETPLAY_CONNECTION_PLAYING &&
         netplay->self_mode != NETPLAY_CONNECTION_SLAVE)
      return false;

   /* If we are the server,
      check if someone is able to read us. */
   if (netplay->is_server)
   {
      size_t i;
      for (i = 0; i < netplay->connections_size; i++)
      {
         struct netplay_connection *conn = &netplay->connections[i];
         if (conn->active &&
               (conn->mode == NETPLAY_CONNECTION_PLAYING ||
                conn->mode == NETPLAY_CONNECTION_SLAVE))
         {
            REQUIRE_PROTOCOL_VERSION(conn, 6)
               return true;
         }
      }
   }
   /* Otherwise, just check whether our connection is active
      and the server is running protocol 6+. */
   else
   {
      if (netplay->connections[0].active)
      {
         REQUIRE_PROTOCOL_VERSION(&netplay->connections[0], 6)
            return true;
      }
   }

   return false;
}

static void relay_chat(netplay_t *netplay,
      const char *nick, const char *msg)
{
   size_t i;
   size_t msg_len;
   char data[NETPLAY_NICK_LEN + NETPLAY_CHAT_MAX_SIZE - 1];

   msg_len = strlen(msg);

   memcpy(data, nick, NETPLAY_NICK_LEN);
   memcpy(data + NETPLAY_NICK_LEN, msg, msg_len);

   for (i = 0; i < netplay->connections_size; i++)
   {
      struct netplay_connection *conn = &netplay->connections[i];

      /* Only playing clients can receive chat.
         Protocol 6+ is required. */
      if (conn->active &&
            (conn->mode == NETPLAY_CONNECTION_PLAYING ||
             conn->mode == NETPLAY_CONNECTION_SLAVE))
      {
         REQUIRE_PROTOCOL_VERSION(conn, 6)
            netplay_send_raw_cmd(netplay, conn,
                  NETPLAY_CMD_PLAYER_CHAT, data, NETPLAY_NICK_LEN + msg_len);
      }
   }

   /* We don't flush. Chat is not time essential. */
}

static void show_chat(const char *nick, const char *msg)
{
   char formatted_chat[NETPLAY_CHAT_MAX_SIZE];
   net_driver_state_t *net_st = &networking_driver_st;

   /* Truncate the message if necessary. */
   snprintf(formatted_chat, sizeof(formatted_chat),
      "%s: %s", nick, msg);

   RARCH_LOG("[Netplay] %s\n", formatted_chat);

#ifdef HAVE_GFX_WIDGETS
   if (gfx_widgets_ready())
   {
      struct netplay_chat_data *data;

      /* Do we have a free slot for this message?
         If not, get rid of the oldest message to make room for it. */
      if (!net_st->chat.message_slots)
      {
         int i;

         for (i = ARRAY_SIZE(net_st->chat.messages) - 2; i >= 0; i--)
         {
            memcpy(&net_st->chat.messages[i+1].data,
               &net_st->chat.messages[i].data,
               sizeof(*data));
         }
         data = &net_st->chat.messages[0].data;
      }
      else
      {
         data = &net_st->chat.messages[--net_st->chat.message_slots].data;
      }

      strlcpy(data->nick, nick, sizeof(data->nick));
      strlcpy(data->msg, msg, sizeof(data->msg));
      data->frames = NETPLAY_CHAT_FRAME_TIME;
   }
   else
#endif
      runloop_msg_queue_push(formatted_chat,
         1, NETPLAY_CHAT_FRAME_TIME, false, NULL,
         MESSAGE_QUEUE_ICON_DEFAULT, MESSAGE_QUEUE_CATEGORY_INFO);
}

static void send_chat(void *userdata, const char *line)
{
   char msg[NETPLAY_CHAT_MAX_SIZE];
   net_driver_state_t *net_st = &networking_driver_st;
   netplay_t *netplay = net_st->data;

   /* We perform the same checks,
      just in case something has changed. */
   if (!string_is_empty(line) && chat_check(netplay))
   {
      /* Truncate line to NETPLAY_CHAT_MAX_SIZE. */
      strlcpy(msg, line, sizeof(msg));

      /* For servers, we need to relay it ourselves. */
      if (netplay->is_server)
      {
         relay_chat(netplay, netplay->nick, msg);
         show_chat(netplay->nick, msg);
      }
      /* For clients, we just send it to the server. */
      else
      {
         netplay_send_raw_cmd(netplay, &netplay->connections[0],
            NETPLAY_CMD_PLAYER_CHAT, msg, strlen(msg));
         /* We don't flush. Chat is not time essential. */
      }
   }

   menu_input_dialog_end();
   retroarch_menu_running_finished(false);
}

void netplay_input_chat(netplay_t *netplay)
{
#ifdef HAVE_MENU
   if (chat_check(netplay))
   {
      menu_input_ctx_line_t chat_input = {0};

      retroarch_menu_running();

      chat_input.label         = msg_hash_to_str(MSG_NETPLAY_ENTER_CHAT);
      chat_input.label_setting = "no_setting";
      chat_input.cb            = send_chat;

      menu_input_dialog_start(&chat_input);
   }
#endif
}

/**
 * handle_chat
 *
 * Handle a received chat message
 */
static bool handle_chat(netplay_t *netplay,
      struct netplay_connection *connection,
      const char *nick, const char *msg)
{
   if (!connection->active ||
         string_is_empty(nick) || string_is_empty(msg))
      return false;

   REQUIRE_PROTOCOL_VERSION(connection, 6)
   {
      /* Client sent a chat message;
         Relay it to the other clients,
         including the one who sent it. */
      if (netplay->is_server)
      {
         /* Only playing clients can send chat. */
         if (connection->mode != NETPLAY_CONNECTION_PLAYING &&
               connection->mode != NETPLAY_CONNECTION_SLAVE)
            return false;

         relay_chat(netplay, nick, msg);
      }

      /* If we still got a message even though we are not playing,
         ignore it! */
      if (netplay->self_mode == NETPLAY_CONNECTION_PLAYING ||
            netplay->self_mode == NETPLAY_CONNECTION_SLAVE)
         show_chat(nick, msg);

      return true;
   }

   return false;
}

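/* Send a ping request to a fully connected peer and record the send time in
 * connection->ping_timer, so the round-trip time can be measured when the
 * matching NETPLAY_CMD_PING_RESPONSE arrives. Peers below protocol 6 are
 * never sent the request, since they disconnect on unknown commands. */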
static void request_ping(netplay_t *netplay,
      struct netplay_connection *connection)
{
   if (!connection->active ||
         connection->mode < NETPLAY_CONNECTION_CONNECTED)
      return;

   /* Only protocol 6+ supports the ping command. */
   REQUIRE_PROTOCOL_VERSION(connection, 6)
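   /* Note: REQUIRE_PROTOCOL_VERSION is defined elsewhere; judging from its
    * usage here, it guards the compound statement that follows so the request
    * is only sent to peers that negotiated protocol 6 or newer. */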
   {
      connection->ping_timer = cpu_features_get_time_usec();

      netplay_send_raw_cmd(netplay, connection,
         NETPLAY_CMD_PING_REQUEST, NULL, 0);
      /* We need to get this sent asap. */
      netplay_send_flush(&connection->send_packet_buffer,
         connection->fd, false);

      connection->ping_requested = true;
   }
}

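/* Reply to a peer's NETPLAY_CMD_PING_REQUEST with a NETPLAY_CMD_PING_RESPONSE
 * and flush it immediately, so the requester's round-trip measurement is not
 * skewed by our send buffering. */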
static void answer_ping(netplay_t *netplay,
      struct netplay_connection *connection)
{
   netplay_send_raw_cmd(netplay, connection,
      NETPLAY_CMD_PING_RESPONSE, NULL, 0);
   /* We need to get this sent asap. */
   netplay_send_flush(&connection->send_packet_buffer,
      connection->fd, false);
}

#undef RECV
#define RECV(buf, sz) \
   recvd = netplay_recv(&connection->recv_packet_buffer, connection->fd, (buf), \
      (sz), false); \
   if (recvd >= 0 && recvd < (ssize_t) (sz)) goto shrt; \
   else if (recvd < 0)
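
/* RECV(buf, sz) reads exactly `sz` bytes into `buf` from the connection's
 * receive buffer. On a short read it jumps to the `shrt` label so the command
 * can be retried once more data has arrived; on a receive error it executes
 * the statement that follows the macro invocation. Typical usage, as seen in
 * netplay_get_cmd() below:
 *
 *    RECV(&cmd, sizeof(cmd))
 *       return false;
 */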
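
/**
 * netplay_get_cmd
 *
 * Read and dispatch one netplay command from `connection`. Each command
 * starts with a header of two big-endian uint32_t values (command id and
 * payload size) followed by the payload. Returns false when the connection
 * should be dropped; the initial handshake is handled separately by
 * netplay_handshake().
 */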
static bool netplay_get_cmd(netplay_t *netplay,
      struct netplay_connection *connection, bool *had_input)
{
   uint32_t cmd;
   uint32_t cmd_size;
   ssize_t recvd;

   /* We don't handle the initial handshake here */
   if (connection->mode < NETPLAY_CONNECTION_CONNECTED)
      return netplay_handshake(netplay, connection, had_input);

   RECV(&cmd, sizeof(cmd))
      return false;
   cmd = ntohl(cmd);

   RECV(&cmd_size, sizeof(cmd_size))
      return false;
   cmd_size = ntohl(cmd_size);

#ifdef DEBUG_NETPLAY_STEPS
   RARCH_LOG("[Netplay] Received netplay command %X (%u) from %u\n", cmd, cmd_size,
      (unsigned) (connection - netplay->connections));
#endif

   netplay->timeout_cnt = 0;

   switch (cmd)
   {
      case NETPLAY_CMD_ACK:
         /* Why are we even bothering? */
         break;

      case NETPLAY_CMD_NAK:
         /* Disconnect now! */
         return false;

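      /* NETPLAY_CMD_INPUT payload layout (all words big-endian uint32_t):
       *    [frame number][client number (low 16 bits used)][input words ...]
       * The number of input words follows from the devices the client
       * controls, so cmd_size must equal (2 + input_size) * sizeof(uint32_t). */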
      case NETPLAY_CMD_INPUT:
      {
         uint32_t frame_num, client_num, input_size, devices, device;
         struct delta_frame *dframe;

         if (cmd_size < 2*sizeof(uint32_t))
         {
            RARCH_ERR("[Netplay] NETPLAY_CMD_INPUT too short, no frame/client number.\n");
            return netplay_cmd_nak(netplay, connection);
         }

         RECV(&frame_num, sizeof(frame_num))
            return false;
         RECV(&client_num, sizeof(client_num))
            return false;
         frame_num = ntohl(frame_num);
         client_num = ntohl(client_num);
         client_num &= 0xFFFF;

         if (netplay->is_server)
         {
            /* Ignore the claimed client #, must be this client */
            if (connection->mode != NETPLAY_CONNECTION_PLAYING &&
                  connection->mode != NETPLAY_CONNECTION_SLAVE)
            {
               RARCH_ERR("[Netplay] Netplay input from non-participating player.\n");
               return netplay_cmd_nak(netplay, connection);
            }
            client_num = (uint32_t)(connection - netplay->connections + 1);
         }

         if (client_num >= MAX_CLIENTS)
         {
            RARCH_ERR("[Netplay] NETPLAY_CMD_INPUT received data for an unsupported client.\n");
            return netplay_cmd_nak(netplay, connection);
         }

         if (!(netplay->connected_players & (1<<client_num)))
         {
            RARCH_ERR("[Netplay] Invalid NETPLAY_CMD_INPUT player number.\n");
            return netplay_cmd_nak(netplay, connection);
         }

         /* Figure out how much input is expected */
         devices = netplay->client_devices[client_num];
         input_size = netplay_expected_input_size(netplay, devices);

         if (cmd_size != (2+input_size) * sizeof(uint32_t))
         {
            RARCH_ERR("[Netplay] NETPLAY_CMD_INPUT received an unexpected payload size.\n");
            return netplay_cmd_nak(netplay, connection);
         }

         /* Check the frame number only if they're not in slave mode */
         if (connection->mode == NETPLAY_CONNECTION_PLAYING)
         {
            if (frame_num < netplay->read_frame_count[client_num])
            {
               uint32_t buf;
               /* We already had this, so ignore the new transmission */
               for (; input_size; input_size--)
               {
                  RECV(&buf, sizeof(uint32_t))
                     return false;
               }
               break;
            }
            else if (frame_num > netplay->read_frame_count[client_num])
            {
               /* Out of order = out of luck */
               RARCH_ERR("[Netplay] Netplay input out of order.\n");
               return netplay_cmd_nak(netplay, connection);
            }
         }

         /* The data's good! */
         dframe = &netplay->buffer[netplay->read_ptr[client_num]];
         if (!netplay_delta_frame_ready(netplay, dframe, netplay->read_frame_count[client_num]))
         {
            /* Hopefully we'll be ready after another round of input */
            goto shrt;
         }

         /* Copy in the input */
2017-09-12 16:45:38 -04:00
|
|
|
for (device = 0; device < MAX_INPUT_DEVICES; device++)
|
|
|
|
{
|
2021-09-18 06:14:34 +02:00
|
|
|
netplay_input_state_t istate;
|
|
|
|
uint32_t dsize, di;
|
|
|
|
if (!(devices & (1<<device)))
|
|
|
|
continue;
|
|
|
|
|
|
|
|
dsize = netplay_expected_input_size(netplay, 1 << device);
|
|
|
|
istate = netplay_input_state_for(&dframe->real_input[device],
|
|
|
|
client_num, dsize,
|
|
|
|
false /* Must be false because of slave-mode clients */,
|
|
|
|
false);
|
|
|
|
if (!istate)
|
|
|
|
{
|
|
|
|
/* Catastrophe! */
|
|
|
|
return netplay_cmd_nak(netplay, connection);
|
|
|
|
}
|
|
|
|
RECV(istate->data, dsize*sizeof(uint32_t))
|
|
|
|
return false;
|
|
|
|
for (di = 0; di < dsize; di++)
|
|
|
|
istate->data[di] = ntohl(istate->data[di]);
|
|
|
|
}
|
|
|
|
dframe->have_real[client_num] = true;
|
|
|
|
|
|
|
|
/* Slaves may go through several packets of data in the same frame
|
|
|
|
* if latency is choppy, so we advance and send their data after
|
|
|
|
* handling all network data this frame */
|
|
|
|
if (connection->mode == NETPLAY_CONNECTION_PLAYING)
|
|
|
|
{
|
|
|
|
netplay->read_ptr[client_num] = NEXT_PTR(netplay->read_ptr[client_num]);
|
|
|
|
netplay->read_frame_count[client_num]++;
|
2017-09-12 16:45:38 -04:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
if (netplay->is_server)
|
|
|
|
{
|
|
|
|
/* Forward it on if it's past data */
|
|
|
|
if (dframe->frame <= netplay->self_frame_count)
|
|
|
|
send_input_frame(netplay, dframe, NULL, connection, client_num, false);
|
|
|
|
}
|
2017-09-12 16:45:38 -04:00
|
|
|
}
|
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
/* If this was server data, advance our server pointer too */
|
|
|
|
if (!netplay->is_server && client_num == 0)
|
2017-09-12 16:45:38 -04:00
|
|
|
{
|
2021-09-18 06:14:34 +02:00
|
|
|
netplay->server_ptr = netplay->read_ptr[0];
|
|
|
|
netplay->server_frame_count = netplay->read_frame_count[0];
|
2017-09-12 16:45:38 -04:00
|
|
|
}
|
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
#ifdef DEBUG_NETPLAY_STEPS
|
            RARCH_LOG("[Netplay] Received input from %u\n", client_num);
            print_state(netplay);
#endif
            break;
         }

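      /* NETPLAY_CMD_NOINPUT: the server reports a frame for which it has no
       * input; the payload is just the frame number, and the client advances
       * its server frame counter without reading any input data. Clients
       * never send this command to the server. */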
      case NETPLAY_CMD_NOINPUT:
         {
            uint32_t frame;

            if (netplay->is_server)
            {
               RARCH_ERR("[Netplay] NETPLAY_CMD_NOINPUT from a client.\n");
               return netplay_cmd_nak(netplay, connection);
            }

            if (cmd_size != sizeof(frame))
            {
               RARCH_ERR("[Netplay] NETPLAY_CMD_NOINPUT received"
                     " with an unexpected payload size.\n");
               return netplay_cmd_nak(netplay, connection);
            }

            RECV(&frame, sizeof(frame))
               return false;

            frame = ntohl(frame);

            /* We already had this, so ignore the new transmission */
            if (frame < netplay->server_frame_count)
               break;

            if (frame != netplay->server_frame_count)
            {
               RARCH_ERR("[Netplay] NETPLAY_CMD_NOINPUT for invalid frame.\n");
               return netplay_cmd_nak(netplay, connection);
            }

            netplay->server_ptr = NEXT_PTR(netplay->server_ptr);
            netplay->server_frame_count++;
#ifdef DEBUG_NETPLAY_STEPS
            RARCH_LOG("[Netplay] Received server noinput\n");
            print_state(netplay);
#endif
            break;
         }

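      /* NETPLAY_CMD_SPECTATE: a playing (or slave-mode) client asks the
       * server to move it to spectator mode. The command carries no payload
       * and is only accepted by the server. */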
      case NETPLAY_CMD_SPECTATE:
         {
            uint32_t client_num;

            if (!netplay->is_server)
            {
               RARCH_ERR("[Netplay] NETPLAY_CMD_SPECTATE from a server.\n");
               return netplay_cmd_nak(netplay, connection);
            }

            if (cmd_size != 0)
            {
               RARCH_ERR("[Netplay] Unexpected payload in NETPLAY_CMD_SPECTATE.\n");
               return netplay_cmd_nak(netplay, connection);
            }

            if (   connection->mode != NETPLAY_CONNECTION_PLAYING
                && connection->mode != NETPLAY_CONNECTION_SLAVE)
            {
               /* They were confused */
               RARCH_ERR("[Netplay] NETPLAY_CMD_SPECTATE from client not currently playing.\n");
               return netplay_cmd_nak(netplay, connection);
            }

            client_num = (uint32_t)(connection - netplay->connections + 1);

            handle_play_spectate(netplay, client_num, connection, cmd, 0, NULL);
            break;
         }

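      /* NETPLAY_CMD_PLAY: a client asks the server to switch it into playing
       * mode. Unlike SPECTATE, it carries a 32-bit payload; only the server
       * accepts this command. */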
      case NETPLAY_CMD_PLAY:
         {
            uint32_t client_num;
            uint32_t payload;

            if (!netplay->is_server)
            {
               RARCH_ERR("[Netplay] NETPLAY_CMD_PLAY from a server.\n");
               return netplay_cmd_nak(netplay, connection);
            }

            if (cmd_size != sizeof(payload))
            {
         RARCH_ERR("[Netplay] Incorrect NETPLAY_CMD_PLAY payload size.\n");
         return netplay_cmd_nak(netplay, connection);
      }

      RECV(&payload, sizeof(payload))
         return false;

      if (connection->delay_frame)
      {
         /* Can't switch modes while a mode switch
            is already in progress. */
         payload = htonl(NETPLAY_CMD_MODE_REFUSED_REASON_TOO_FAST);
         netplay_send_raw_cmd(netplay, connection, NETPLAY_CMD_MODE_REFUSED, &payload, sizeof(payload));
         break;
      }

      if (!connection->can_play)
      {
         /* Not allowed to play */
         payload = htonl(NETPLAY_CMD_MODE_REFUSED_REASON_UNPRIVILEGED);
         netplay_send_raw_cmd(netplay, connection,
               NETPLAY_CMD_MODE_REFUSED, &payload, sizeof(payload));
         break;
      }

      /* They were obviously confused */
      if (   connection->mode == NETPLAY_CONNECTION_PLAYING
          || connection->mode == NETPLAY_CONNECTION_SLAVE)
      {
         RARCH_ERR("[Netplay] NETPLAY_CMD_PLAY from client already playing.\n");
         return netplay_cmd_nak(netplay, connection);
      }

      client_num = (unsigned)(connection - netplay->connections + 1);

      handle_play_spectate(netplay, client_num, connection, cmd, cmd_size, &payload);
      break;
   }

   case NETPLAY_CMD_MODE:
   {
      uint32_t payload[15];
      uint32_t frame, mode, client_num, devices, device;
      size_t ptr;
      struct delta_frame *dframe;
      const char *nick;

#define START(which) \
      do { \
         ptr = which; \
         dframe = &netplay->buffer[ptr]; \
      } while (0)
#define NEXT() \
      do { \
         ptr = NEXT_PTR(ptr); \
         dframe = &netplay->buffer[ptr]; \
      } while (0)

      if (netplay->is_server)
      {
         RARCH_ERR("[Netplay] NETPLAY_CMD_MODE from client.\n");
         return netplay_cmd_nak(netplay, connection);
      }

      if (cmd_size != sizeof(payload))
      {
         RARCH_ERR("[Netplay] Invalid payload size for NETPLAY_CMD_MODE.\n");
         return netplay_cmd_nak(netplay, connection);
      }

      RECV(payload, sizeof(payload))
         return false;

      frame = ntohl(payload[0]);

      /* We're changing past input, so must replay it */
      if (frame < netplay->self_frame_count)
         netplay->force_rewind = true;

      mode = ntohl(payload[1]);
      client_num = mode & 0xFFFF;
      if (client_num >= MAX_CLIENTS)
      {
         RARCH_ERR("[Netplay] Received NETPLAY_CMD_MODE for a higher player number than we support.\n");
         return netplay_cmd_nak(netplay, connection);
      }

      /* Remaining payload: [2] device bitmap, [3..] device share modes,
       * nick from word 7 onward. */
      devices = ntohl(payload[2]);
      memcpy(netplay->device_share_modes, payload + 3, sizeof(netplay->device_share_modes));
      nick = (const char *) (payload + 7);

      if (mode & NETPLAY_CMD_MODE_BIT_YOU)
      {
         /* A change to me! */
         if (mode & NETPLAY_CMD_MODE_BIT_PLAYING)
         {
            if (frame != netplay->server_frame_count)
            {
               RARCH_ERR("[Netplay] Received mode change out of order.\n");
               return netplay_cmd_nak(netplay, connection);
            }

            /* Hooray, I get to play now! */
            if (netplay->self_mode == NETPLAY_CONNECTION_PLAYING)
            {
               RARCH_ERR("[Netplay] Received player mode change even though I'm already a player.\n");
               return netplay_cmd_nak(netplay, connection);
            }

            /* Our mode is based on whether we have the slave bit set */
            if (mode & NETPLAY_CMD_MODE_BIT_SLAVE)
               netplay->self_mode = NETPLAY_CONNECTION_SLAVE;
            else
               netplay->self_mode = NETPLAY_CONNECTION_PLAYING;

            netplay->connected_players |= (1<<client_num);
            netplay->client_devices[client_num] = devices;
            for (device = 0; device < MAX_INPUT_DEVICES; device++)
               if (devices & (1<<device))
                  netplay->device_clients[device] |= (1<<client_num);
            netplay->self_devices = devices;

            netplay->read_ptr[client_num] = netplay->server_ptr;
            netplay->read_frame_count[client_num] = netplay->server_frame_count;

            /* Fix up current frame info */
            if (!(mode & NETPLAY_CMD_MODE_BIT_SLAVE) && frame <= netplay->self_frame_count)
            {
               /* It wanted past frames, better send 'em! */
               START(netplay->server_ptr);
               while (dframe->used && dframe->frame <= netplay->self_frame_count)
               {
                  for (device = 0; device < MAX_INPUT_DEVICES; device++)
                  {
                     uint32_t dsize;
                     netplay_input_state_t istate;
                     if (!(devices & (1<<device)))
                        continue;
                     dsize = netplay_expected_input_size(netplay, 1 << device);
                     istate = netplay_input_state_for(
                           &dframe->real_input[device], client_num, dsize,
                           false, false);
                     if (!istate)
                        continue;
                     memset(istate->data, 0, dsize*sizeof(uint32_t));
                  }
                  dframe->have_local = true;
                  dframe->have_real[client_num] = true;
                  send_input_frame(netplay, dframe, connection, NULL, client_num, false);
                  if (dframe->frame == netplay->self_frame_count) break;
                  NEXT();
               }
            }
            else
            {
               uint32_t frame_count;

               /* It wants future frames, make sure we don't capture or send intermediate ones */
               START(netplay->self_ptr);
               frame_count = netplay->self_frame_count;

               for (;;)
               {
                  if (!dframe->used)
                  {
                     /* Make sure it's ready */
                     if (!netplay_delta_frame_ready(netplay, dframe, frame_count))
                     {
                        RARCH_ERR("[Netplay] Received mode change but delta frame isn't ready!\n");
                        return netplay_cmd_nak(netplay, connection);
                     }
                  }

                  dframe->have_local = true;

                  /* Go on to the next delta frame */
                  NEXT();
                  frame_count++;

                  if (frame_count >= frame)
                     break;
               }
            }

            /* Announce it */
            announce_play_spectate(netplay, NULL, NETPLAY_CONNECTION_PLAYING, devices,
                  connection->ping);

#ifdef DEBUG_NETPLAY_STEPS
2021-12-19 12:58:01 -03:00
|
|
|
RARCH_LOG("[Netplay] Received mode change self->%X\n", devices);
|
2021-09-18 06:14:34 +02:00
|
|
|
print_state(netplay);
|
|
|
|
#endif
|
2017-08-25 14:38:21 -04:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
}
|
|
|
|
else /* YOU && !PLAYING */
|
|
|
|
{
|
|
|
|
/* I'm no longer playing, but I should already know this */
|
|
|
|
if (netplay->self_mode != NETPLAY_CONNECTION_SPECTATING)
|
2017-09-10 09:12:32 -04:00
|
|
|
{
|
2021-12-19 12:58:01 -03:00
|
|
|
RARCH_ERR("[Netplay] Received mode change to spectator unprompted.\n");
|
2017-09-10 09:12:32 -04:00
|
|
|
return netplay_cmd_nak(netplay, connection);
|
|
|
|
}
|
2021-09-18 06:14:34 +02:00
|
|
|
|
|
|
|
#ifdef DEBUG_NETPLAY_STEPS
|
2021-12-19 12:58:01 -03:00
|
|
|
RARCH_LOG("[Netplay] Received mode change self->spectating\n");
|
2021-09-18 06:14:34 +02:00
|
|
|
print_state(netplay);
|
|
|
|
#endif
|
2017-08-25 14:38:21 -04:00
|
|
|
}
|
2016-12-13 20:42:53 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
}
|
|
|
|
else /* !YOU */
|
|
|
|
{
|
|
|
|
/* Somebody else is joining or parting */
|
|
|
|
if (mode & NETPLAY_CMD_MODE_BIT_PLAYING)
|
2016-12-13 20:42:53 -05:00
|
|
|
{
|
2021-09-18 06:14:34 +02:00
|
|
|
if (frame != netplay->server_frame_count)
|
2017-02-22 20:34:17 -05:00
|
|
|
{
|
2021-12-19 12:58:01 -03:00
|
|
|
RARCH_ERR("[Netplay] Received mode change out of order.\n");
|
2021-09-18 06:14:34 +02:00
|
|
|
return netplay_cmd_nak(netplay, connection);
|
2017-02-22 20:34:17 -05:00
|
|
|
}
|
2016-12-13 20:42:53 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
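/* A new player has joined: mark the client as connected, record which input
 * devices it claimed, and start reading its input from the server's current
 * frame. */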
netplay->connected_players |= (1<<client_num);
|
|
|
|
netplay->client_devices[client_num] = devices;
|
|
|
|
for (device = 0; device < MAX_INPUT_DEVICES; device++)
|
|
|
|
if (devices & (1<<device))
|
|
|
|
netplay->device_clients[device] |= (1<<client_num);
|
|
|
|
|
|
|
|
netplay->read_ptr[client_num] = netplay->server_ptr;
|
|
|
|
netplay->read_frame_count[client_num] = netplay->server_frame_count;
|
|
|
|
|
|
|
|
/* Announce it */
|
2021-12-19 12:58:01 -03:00
|
|
|
announce_play_spectate(netplay, nick, NETPLAY_CONNECTION_PLAYING, devices, -1);
|
2021-09-18 06:14:34 +02:00
|
|
|
|
|
|
|
#ifdef DEBUG_NETPLAY_STEPS
|
2021-12-19 12:58:01 -03:00
|
|
|
RARCH_LOG("[Netplay] Received mode change %u->%u\n", client_num, devices);
|
2021-09-18 06:14:34 +02:00
|
|
|
print_state(netplay);
|
|
|
|
#endif
|
|
|
|
|
2016-12-13 20:42:53 -05:00
|
|
|
}
|
2021-09-18 06:14:34 +02:00
|
|
|
else
|
|
|
|
{
|
|
|
|
netplay->connected_players &= ~(1<<client_num);
|
|
|
|
netplay->client_devices[client_num] = 0;
|
|
|
|
for (device = 0; device < MAX_INPUT_DEVICES; device++)
|
|
|
|
netplay->device_clients[device] &= ~(1<<client_num);
|
|
|
|
|
|
|
|
/* Announce it */
|
2021-12-19 12:58:01 -03:00
|
|
|
announce_play_spectate(netplay, nick, NETPLAY_CONNECTION_SPECTATING, 0, -1);
|
2016-12-17 16:50:06 -05:00
|
|
|
|
|
|
|
#ifdef DEBUG_NETPLAY_STEPS
|
2021-12-19 12:58:01 -03:00
|
|
|
RARCH_LOG("[Netplay] Received mode change %u->spectator\n", client_num);
|
2021-09-18 06:14:34 +02:00
|
|
|
print_state(netplay);
|
2016-12-17 16:50:06 -05:00
|
|
|
#endif
|
2021-09-18 06:14:34 +02:00
|
|
|
|
|
|
|
}
|
|
|
|
|
2016-12-13 20:42:53 -05:00
|
|
|
}
|
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
break;
|
|
|
|
|
|
|
|
#undef START
|
|
|
|
#undef NEXT
|
|
|
|
}
|
|
|
|
|
|
|
|
case NETPLAY_CMD_MODE_REFUSED:
|
2016-12-13 20:42:53 -05:00
|
|
|
{
|
2021-09-18 06:14:34 +02:00
|
|
|
uint32_t reason;
|
|
|
|
const char *dmsg = NULL;
|
2016-12-13 20:42:53 -05:00
|
|
|
|
|
|
|
if (netplay->is_server)
|
2016-12-15 17:20:04 -05:00
|
|
|
{
|
2021-12-19 12:58:01 -03:00
|
|
|
RARCH_ERR("[Netplay] NETPLAY_CMD_MODE_REFUSED from client.\n");
|
2016-12-13 20:42:53 -05:00
|
|
|
return netplay_cmd_nak(netplay, connection);
|
2016-12-15 17:20:04 -05:00
|
|
|
}
|
2016-12-13 20:42:53 -05:00
|
|
|
|
2021-11-05 18:52:56 +01:00
|
|
|
if (cmd_size != sizeof(reason))
|
2016-12-15 17:20:04 -05:00
|
|
|
{
|
Netplay Stuff (#13375)
* Netplay Stuff
## PROTOCOL FALLBACK
In order to support older clients a protocol fallback system was introduced.
The host will no longer send its header automatically after a TCP connection is established, instead, it awaits for the client to send his before determining which protocol this connection is going to operate on.
Netplay has now two protocols, a low protocol and a high protocol; the low protocol is the minimum protocol it supports, while the high protocol is the highest protocol it can operate on.
To fully support older clients, a hack was necessary: sending the high protocol in the unused client's header salt field, while keeping the protocol field to the low protocol. Without this hack we would only be able to support older clients if a newer client was the host.
Any future system can make use of this system by checking connection->netplay_protocol, which is available for both the client and host.
## NETPLAY CHAT
Starting with protocol 6, netplay chat is available through the new NETPLAY_CMD_PLAYER_CHAT command.
Limitations of the command code, which causes a disconnection on unknown commands, makes this system not possible on protocol 5.
Protocol 5 connections can neither send nor receive chat, but other netplay operations are unaffected.
Clients send chat as a string to the server, and it's the server's sole responsability to relay chat messages.
As of now, sending chat uses RetroArch's input menu, while the display of on-screen chat uses a widget overlay and RetroArch's notifications as a fallback.
If a new overlay and/or input system is desired, no backwards compatibility changes need to be made.
Only clients in playing mode (as opposed to spectating mode) can send and receive chat.
## SETTINGS SHARING
Some settings are better used when both host and clients share the same configuration.
As of protocol 6, the following settings will be shared from host to clients (without altering a client's configuration file): input latency frames and allow pausing.
## NETPLAY TUNNEL/MITM
With the current MITM system being defunct (at least as of 1.9.X), a new system was in order to solve most if not all of the problems with the current system.
This new system uses a tunneling approach, which is similar to most VPN and tunneling services around.
Tunnel commands:
RATS[unique id] (RetroArch Tunnel Session) - 16 bytes -> When this command is sent with a zeroed unique id, the tunnel server interprets this as a netplay host wanting to create a new session, in this case, the same command is returned to the host, but now with its unique session id. When a client needs to connect to a host, this command is sent with the unique session id of the host, causing the tunnel server to send a RATL command to the host.
RATL[unique id] (RetroArch Tunnel Link) - 16 bytes -> The tunnel server sends this command to the host when a client wants to connect to the host. Once the host receives this command, it establishes a new connection to the tunnel server, sending this command together with the client's unique id through this new connection, causing the tunnel server to link this connection to the connection of the client.
RATP (RetroArch Tunnel Ping) - 4 bytes -> The tunnel server sends this command to verify that the host, whom the session belongs to, is still around. The host replies with the same command. A session is closed if the tunnel server can not verify that the host is alive.
Operations:
Host -> Instead of listening and accepting connections, it connects to the tunnel server, requests a new session and then monitor this connection for new linking requests. Once a request is received, it establishes a new connection to the tunnel server for linking with a client. The tunnel server's address and port are obtained by querying the lobby server. The host will publish its session id together with the rest of its info to the lobby server.
Client -> It connects to the tunnel server and then sends the session id of the host it wants to connect to. A host's session id is obtained from the json data sent by the lobby server.
Improvements (from current MITM system):
No longer a risk of TCP port exhaustion; we only use one port now at the tunnel server.
Very little cpu usage. About 95% net I/O bound now.
Future backwards compatible with any and all changes to netplay as it no longer runs any netplay logic at MITM servers.
No longer operates the host in client mode, which was a source of many of the current problems.
Cleaner and more maintainable system and code.
Notable functions:
netplay_mitm_query -> Grabs the tunnel's address and port from the lobby server.
init_tcp_socket -> Handles the creation and operation mode of the TCP socket based on whether it's host, host+MITM or client.
handle_mitm_connection -> Creates and completes linking connections and replies to ping commands (only 1 of each per call to not affect performance).
## MISC
Ping Limiter: If a client's estimated latency to the server is higher than the configured limit, the connection will be dropped just before finishing the netplay handshake (a minimal sketch of this check follows the list).
Ping Counter: A ping counter (similar to the FPS counter) can be shown in the bottom-right corner of the screen when you are connected to a host.
LAN Discovery: Refactored and moved to its own "Refresh Netplay LAN List" button.
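As an illustration of the ping limiter described above, a minimal sketch follows; the millisecond unit, the meaning of 0 as "no limit", and the function name are assumptions made for the sketch only.
#include <stdbool.h>
#include <stdint.h>
/* Sketch: decide during the handshake whether a client's estimated latency
 * is acceptable. The caller would hang up before completing the handshake
 * when this returns false. */
static bool netplay_ping_within_limit(int32_t estimated_ping_ms,
      int32_t max_ping_ms)
{
   if (max_ping_ms > 0 && estimated_ping_ms > max_ping_ms)
      return false;
   return true; /* a limit of 0 is treated as "no limit" (assumption) */
}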
## FIXES
Many minor fixes to the current netplay implementation are also included.
* Remove NETPLAY_TEST_BUILD
2021-12-19 12:58:01 -03:00
|
|
|
RARCH_ERR("[Netplay] Received invalid payload size for NETPLAY_CMD_MODE_REFUSED.\n");
|
2016-12-13 20:42:53 -05:00
|
|
|
return netplay_cmd_nak(netplay, connection);
|
2016-12-15 17:20:04 -05:00
|
|
|
}
|
2017-02-28 21:46:48 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
RECV(&reason, sizeof(reason))
|
2021-11-05 18:52:56 +01:00
|
|
|
return false;
|
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
reason = ntohl(reason);
|
2016-12-13 20:42:53 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
switch (reason)
|
|
|
|
{
|
|
|
|
case NETPLAY_CMD_MODE_REFUSED_REASON_UNPRIVILEGED:
|
|
|
|
dmsg = msg_hash_to_str(MSG_NETPLAY_CANNOT_PLAY_UNPRIVILEGED);
|
|
|
|
break;
|
|
|
|
|
|
|
|
case NETPLAY_CMD_MODE_REFUSED_REASON_NO_SLOTS:
|
|
|
|
dmsg = msg_hash_to_str(MSG_NETPLAY_CANNOT_PLAY_NO_SLOTS);
|
|
|
|
break;
|
|
|
|
|
|
|
|
case NETPLAY_CMD_MODE_REFUSED_REASON_NOT_AVAILABLE:
|
|
|
|
dmsg = msg_hash_to_str(MSG_NETPLAY_CANNOT_PLAY_NOT_AVAILABLE);
|
|
|
|
break;
|
|
|
|
|
|
|
|
default:
|
|
|
|
dmsg = msg_hash_to_str(MSG_NETPLAY_CANNOT_PLAY);
|
|
|
|
}
|
|
|
|
|
2021-12-03 18:00:53 -07:00
|
|
|
RARCH_LOG("[Netplay] %s\n", dmsg);
|
2021-11-05 18:52:56 +01:00
|
|
|
runloop_msg_queue_push(dmsg, 1, 180, false, NULL,
|
|
|
|
MESSAGE_QUEUE_ICON_DEFAULT, MESSAGE_QUEUE_CATEGORY_INFO);
|
2016-12-13 20:42:53 -05:00
|
|
|
}
|
2021-11-05 18:52:56 +01:00
|
|
|
break;
|
2016-12-13 20:42:53 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
case NETPLAY_CMD_DISCONNECT:
|
|
|
|
netplay_hangup(netplay, connection);
|
|
|
|
return true;
|
2016-12-13 20:42:53 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
case NETPLAY_CMD_CRC:
|
2016-12-15 17:20:04 -05:00
|
|
|
{
|
2021-09-18 06:14:34 +02:00
|
|
|
uint32_t buffer[2];
|
|
|
|
size_t tmp_ptr = netplay->run_ptr;
|
|
|
|
bool found = false;
|
2016-12-13 20:42:53 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
if (cmd_size != sizeof(buffer))
|
|
|
|
{
|
2021-12-19 12:58:01 -03:00
|
|
|
RARCH_ERR("[Netplay] NETPLAY_CMD_CRC received unexpected payload size.\n");
|
2021-09-18 06:14:34 +02:00
|
|
|
return netplay_cmd_nak(netplay, connection);
|
|
|
|
}
|
2017-08-25 14:38:21 -04:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
RECV(buffer, sizeof(buffer))
|
2021-11-05 18:52:56 +01:00
|
|
|
return false;
|
2016-12-13 20:42:53 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
buffer[0] = ntohl(buffer[0]);
|
|
|
|
buffer[1] = ntohl(buffer[1]);
|
2017-09-10 19:42:32 -04:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
/* Received a CRC for some frame. If we still have it, check if it
|
|
|
|
* matched. This approach could be improved with some quick modular
|
|
|
|
* arithmetic. */
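/* (Sketch of that modular-arithmetic shortcut, stated as an assumption
 * rather than existing code: if frames occupy consecutive slots and
 * run_frame_count - buffer[0] is smaller than the delta buffer size, the
 * slot could be computed directly as
 * (run_ptr + buffer_size - (run_frame_count - buffer[0])) % buffer_size
 * instead of walking the ring backwards.) */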
|
|
|
|
do
|
|
|
|
{
|
|
|
|
if ( netplay->buffer[tmp_ptr].used
|
|
|
|
&& netplay->buffer[tmp_ptr].frame == buffer[0])
|
|
|
|
{
|
|
|
|
found = true;
|
|
|
|
break;
|
|
|
|
}
|
2016-12-13 20:42:53 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
tmp_ptr = PREV_PTR(tmp_ptr);
|
|
|
|
} while (tmp_ptr != netplay->run_ptr);
|
2017-02-22 20:34:17 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
/* Oh well, we got rid of it! */
|
|
|
|
if (!found)
|
|
|
|
break;
|
2016-12-13 20:42:53 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
if (buffer[0] <= netplay->other_frame_count)
|
|
|
|
{
|
|
|
|
/* We've already replayed up to this frame, so we can check it
|
|
|
|
* directly */
|
|
|
|
uint32_t local_crc = 0;
|
|
|
|
if (netplay->state_size)
|
|
|
|
local_crc = netplay_delta_frame_crc(
|
|
|
|
netplay, &netplay->buffer[tmp_ptr]);
|
2016-12-13 20:42:53 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
/* Problem! */
|
|
|
|
if (buffer[1] != local_crc)
|
|
|
|
netplay_cmd_request_savestate(netplay);
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
/* We'll have to check it when we catch up */
|
|
|
|
netplay->buffer[tmp_ptr].crc = buffer[1];
|
|
|
|
}
|
2017-06-06 21:35:09 -04:00
|
|
|
|
2016-12-14 22:18:24 -05:00
|
|
|
break;
|
2016-12-15 11:56:47 -05:00
|
|
|
}
|
2016-12-14 22:18:24 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
case NETPLAY_CMD_REQUEST_SAVESTATE:
|
|
|
|
/* Delay until next frame so we don't send the savestate after the
|
|
|
|
* input */
|
|
|
|
netplay->force_send_savestate = true;
|
2016-12-13 20:42:53 -05:00
|
|
|
break;
|
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
case NETPLAY_CMD_LOAD_SAVESTATE:
|
|
|
|
case NETPLAY_CMD_RESET:
|
|
|
|
{
|
|
|
|
uint32_t frame;
|
|
|
|
uint32_t isize;
|
|
|
|
uint32_t rd, wn;
|
|
|
|
uint32_t client;
|
|
|
|
uint32_t load_frame_count;
|
|
|
|
size_t load_ptr;
|
|
|
|
struct compression_transcoder *ctrans = NULL;
|
|
|
|
uint32_t client_num = (uint32_t)
|
|
|
|
(connection - netplay->connections + 1);
|
2016-12-13 20:42:53 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
/* Make sure we're ready for it */
|
|
|
|
if (netplay->quirks & NETPLAY_QUIRK_INITIALIZATION)
|
|
|
|
{
|
|
|
|
if (!netplay->is_replay)
|
|
|
|
{
|
|
|
|
netplay->is_replay = true;
|
|
|
|
netplay->replay_ptr = netplay->run_ptr;
|
|
|
|
netplay->replay_frame_count = netplay->run_frame_count;
|
|
|
|
netplay_wait_and_init_serialization(netplay);
|
|
|
|
netplay->is_replay = false;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
netplay_wait_and_init_serialization(netplay);
|
|
|
|
}
|
2016-12-13 20:42:53 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
/* Only players may load states */
|
|
|
|
if (connection->mode != NETPLAY_CONNECTION_PLAYING &&
|
|
|
|
connection->mode != NETPLAY_CONNECTION_SLAVE)
|
|
|
|
{
|
2021-12-19 12:58:01 -03:00
|
|
|
RARCH_ERR("[Netplay] Netplay state load from a spectator.\n");
|
2021-09-18 06:14:34 +02:00
|
|
|
return netplay_cmd_nak(netplay, connection);
|
|
|
|
}
|
|
|
|
|
|
|
|
/* We only allow players to load state if we're in a simple
|
|
|
|
* two-player situation */
|
|
|
|
if (netplay->is_server && netplay->connections_size > 1)
|
|
|
|
{
|
2021-12-19 12:58:01 -03:00
|
|
|
RARCH_ERR("[Netplay] Netplay state load from a client with other clients connected disallowed.\n");
|
2021-09-18 06:14:34 +02:00
|
|
|
return netplay_cmd_nak(netplay, connection);
|
|
|
|
}
|
2017-09-10 19:42:32 -04:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
/* There is a subtlety in whether the load comes before or after the
|
|
|
|
* current frame:
|
|
|
|
*
|
|
|
|
* If it comes before the current frame, then we need to force a
|
|
|
|
* rewind to that point.
|
|
|
|
*
|
|
|
|
* If it comes after the current frame, we need to jump ahead, then
|
|
|
|
* (strangely) force a rewind to the frame we're already on, so it
|
|
|
|
* gets loaded. This is just to avoid having reloading implemented in
|
|
|
|
* too many places. */
|
2016-12-13 20:42:53 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
/* Check the payload size */
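/* Expected payload, matching the reads below: NETPLAY_CMD_LOAD_SAVESTATE is
 * [frame (4 bytes)][uncompressed state size (4 bytes)][compressed state
 * (cmd_size - 8 bytes)]; NETPLAY_CMD_RESET carries only the frame number. */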
|
|
|
|
if ((cmd == NETPLAY_CMD_LOAD_SAVESTATE &&
|
|
|
|
(cmd_size < 2*sizeof(uint32_t) || cmd_size > netplay->zbuffer_size + 2*sizeof(uint32_t))) ||
|
2021-11-05 18:52:56 +01:00
|
|
|
(cmd == NETPLAY_CMD_RESET && cmd_size != sizeof(frame)))
|
2021-09-18 06:14:34 +02:00
|
|
|
{
|
2021-12-19 12:58:01 -03:00
|
|
|
RARCH_ERR("[Netplay] CMD_LOAD_SAVESTATE received an unexpected payload size.\n");
|
2021-09-18 06:14:34 +02:00
|
|
|
return netplay_cmd_nak(netplay, connection);
|
|
|
|
}
|
2016-12-13 20:42:53 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
RECV(&frame, sizeof(frame))
|
2021-11-05 18:52:56 +01:00
|
|
|
return false;
|
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
frame = ntohl(frame);
|
2016-12-13 20:42:53 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
if (netplay->is_server)
|
|
|
|
{
|
|
|
|
load_ptr = netplay->read_ptr[client_num];
|
|
|
|
load_frame_count = netplay->read_frame_count[client_num];
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
load_ptr = netplay->server_ptr;
|
|
|
|
load_frame_count = netplay->server_frame_count;
|
|
|
|
}
|
2016-12-13 20:42:53 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
if (frame != load_frame_count)
|
|
|
|
{
|
2021-12-19 12:58:01 -03:00
|
|
|
RARCH_ERR("[Netplay] CMD_LOAD_SAVESTATE loading a state out of order!\n");
|
2021-09-18 06:14:34 +02:00
|
|
|
return netplay_cmd_nak(netplay, connection);
|
|
|
|
}
|
2016-12-13 20:42:53 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
if (!netplay_delta_frame_ready(netplay, &netplay->buffer[load_ptr], load_frame_count))
|
|
|
|
{
|
|
|
|
/* Hopefully it will be after another round of input */
|
|
|
|
goto shrt;
|
|
|
|
}
|
2017-08-25 14:38:21 -04:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
/* Now we switch based on whether we're loading a state or resetting */
|
|
|
|
if (cmd == NETPLAY_CMD_LOAD_SAVESTATE)
|
2016-12-13 20:42:53 -05:00
|
|
|
{
|
2021-09-18 06:14:34 +02:00
|
|
|
RECV(&isize, sizeof(isize))
|
2021-11-05 18:52:56 +01:00
|
|
|
return false;
|
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
isize = ntohl(isize);
|
2016-12-13 20:42:53 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
if (isize != netplay->state_size)
|
2016-12-15 17:20:04 -05:00
|
|
|
{
|
2021-12-19 12:58:01 -03:00
|
|
|
RARCH_ERR("[Netplay] CMD_LOAD_SAVESTATE received an unexpected save state size.\n");
|
2016-12-13 20:42:53 -05:00
|
|
|
return netplay_cmd_nak(netplay, connection);
|
2016-12-15 17:20:04 -05:00
|
|
|
}
|
2016-12-13 20:42:53 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
RECV(netplay->zbuffer, cmd_size - 2*sizeof(uint32_t))
|
2021-11-05 18:52:56 +01:00
|
|
|
return false;
|
2016-12-13 20:42:53 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
/* And decompress it */
|
|
|
|
switch (connection->compression_supported)
|
|
|
|
{
|
|
|
|
case NETPLAY_COMPRESSION_ZLIB:
|
|
|
|
ctrans = &netplay->compress_zlib;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
ctrans = &netplay->compress_nil;
|
2016-12-13 20:42:53 -05:00
|
|
|
}
|
2021-09-18 06:14:34 +02:00
|
|
|
ctrans->decompression_backend->set_in(ctrans->decompression_stream,
|
|
|
|
netplay->zbuffer, cmd_size - 2*sizeof(uint32_t));
|
|
|
|
ctrans->decompression_backend->set_out(ctrans->decompression_stream,
|
|
|
|
(uint8_t*)netplay->buffer[load_ptr].state,
|
|
|
|
(unsigned)netplay->state_size);
|
|
|
|
ctrans->decompression_backend->trans(ctrans->decompression_stream,
|
|
|
|
true, &rd, &wn, NULL);
|
2016-12-13 20:42:53 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
/* Force a rewind to the relevant frame */
|
|
|
|
netplay->force_rewind = true;
|
2016-12-13 20:42:53 -05:00
|
|
|
}
|
2021-09-18 06:14:34 +02:00
|
|
|
else
|
2016-12-13 20:42:53 -05:00
|
|
|
{
|
2021-09-18 06:14:34 +02:00
|
|
|
/* Resetting */
|
|
|
|
netplay->force_reset = true;
|
2016-12-17 16:50:06 -05:00
|
|
|
|
2016-12-13 20:42:53 -05:00
|
|
|
}
|
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
/* Skip ahead if it's past where we are */
|
|
|
|
if (load_frame_count > netplay->run_frame_count ||
|
|
|
|
cmd == NETPLAY_CMD_RESET)
|
2016-12-13 20:42:53 -05:00
|
|
|
{
|
2021-09-18 06:14:34 +02:00
|
|
|
/* This is squirrely: We need to ensure that when we advance the
|
|
|
|
* frame in post_frame, THEN we're referring to the frame to
|
|
|
|
* load into. If we refer directly to read_ptr, then we'll end
|
|
|
|
* up never reading the input for read_frame_count itself, which
|
|
|
|
* will make the other side unhappy. */
|
|
|
|
netplay->run_ptr = PREV_PTR(load_ptr);
|
|
|
|
netplay->run_frame_count = load_frame_count - 1;
|
|
|
|
if (frame > netplay->self_frame_count)
|
2016-12-15 17:20:04 -05:00
|
|
|
{
|
2021-09-18 06:14:34 +02:00
|
|
|
netplay->self_ptr = netplay->run_ptr;
|
|
|
|
netplay->self_frame_count = netplay->run_frame_count;
|
2016-12-15 17:20:04 -05:00
|
|
|
}
|
2021-09-18 06:14:34 +02:00
|
|
|
}
|
2016-12-13 20:42:53 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
/* Don't expect earlier data from other clients */
|
|
|
|
for (client = 0; client < MAX_CLIENTS; client++)
|
|
|
|
{
|
|
|
|
if (!(netplay->connected_players & (1<<client)))
|
|
|
|
continue;
|
2016-12-17 16:50:06 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
if (frame > netplay->read_frame_count[client])
|
|
|
|
{
|
|
|
|
netplay->read_ptr[client] = load_ptr;
|
|
|
|
netplay->read_frame_count[client] = load_frame_count;
|
|
|
|
}
|
2016-12-13 20:42:53 -05:00
|
|
|
}
|
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
/* Make sure our states are correct */
|
|
|
|
netplay->savestate_request_outstanding = false;
|
|
|
|
netplay->other_ptr = load_ptr;
|
|
|
|
netplay->other_frame_count = load_frame_count;
|
2016-12-17 16:50:06 -05:00
|
|
|
|
|
|
|
#ifdef DEBUG_NETPLAY_STEPS
|
2021-12-19 12:58:01 -03:00
|
|
|
RARCH_LOG("[Netplay] Loading state at %u\n", load_frame_count);
|
2021-09-18 06:14:34 +02:00
|
|
|
print_state(netplay);
|
2016-12-17 16:50:06 -05:00
|
|
|
#endif
|
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
break;
|
2016-12-13 20:42:53 -05:00
|
|
|
}
|
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
case NETPLAY_CMD_PAUSE:
|
2016-12-15 11:56:47 -05:00
|
|
|
{
|
2021-09-18 06:14:34 +02:00
|
|
|
char msg[512], nick[NETPLAY_NICK_LEN];
|
2016-12-15 17:20:04 -05:00
|
|
|
|
2021-09-18 06:14:34 +02:00
|
|
|
/* Read in the paused nick */
|
|
|
|
if (cmd_size != sizeof(nick))
|
2016-12-15 17:20:04 -05:00
|
|
|
{
|
               RARCH_ERR("[Netplay] NETPLAY_CMD_PAUSE received invalid payload size.\n");
               return netplay_cmd_nak(netplay, connection);
            }

            RECV(nick, sizeof(nick))
               return false;
            nick[sizeof(nick)-1] = '\0';

            if (netplay->is_server)
            {
               /* We outright ignore pausing from spectators and slaves */
               if (connection->mode != NETPLAY_CONNECTION_PLAYING)
                  break;

               /* If the client does not honor our setting,
                  refuse to globally pause. */
               if (!netplay->allow_pausing)
                  break;

               /* Inform peers */
               snprintf(msg, sizeof(msg),
                     msg_hash_to_str(MSG_NETPLAY_PEER_PAUSED),
                     connection->nick);
               netplay_send_raw_cmd_all(netplay, connection, NETPLAY_CMD_PAUSE,
                     connection->nick, NETPLAY_NICK_LEN);

               /* We may not reach post_frame soon, so flush the pause message
                * immediately. */
               netplay_send_flush_all(netplay, connection);
            }
            else
               snprintf(msg, sizeof(msg),
                     msg_hash_to_str(MSG_NETPLAY_PEER_PAUSED), nick);

            connection->paused     = true;
            netplay->remote_paused = true;
            RARCH_LOG("[Netplay] %s\n", msg);
            runloop_msg_queue_push(msg, 1, 180, false, NULL, MESSAGE_QUEUE_ICON_DEFAULT, MESSAGE_QUEUE_CATEGORY_INFO);
            break;
         }

      case NETPLAY_CMD_RESUME:
         remote_unpaused(netplay, connection);
         break;

      case NETPLAY_CMD_STALL:
         {
            uint32_t frames;

            if (netplay->is_server)
            {
               /* Only servers can request a stall! */
               RARCH_ERR("[Netplay] Netplay client requested a stall?\n");
               return netplay_cmd_nak(netplay, connection);
            }

            if (cmd_size != sizeof(frames))
            {
               RARCH_ERR("[Netplay] NETPLAY_CMD_STALL with incorrect payload size.\n");
               return netplay_cmd_nak(netplay, connection);
            }

            RECV(&frames, sizeof(frames))
               return false;
            frames = ntohl(frames);
            if (frames > NETPLAY_MAX_REQ_STALL_TIME)
               frames = NETPLAY_MAX_REQ_STALL_TIME;

            /* We can only stall for one reason at a time */
            if (!netplay->stall)
            {
               connection->stall       = netplay->stall = NETPLAY_STALL_SERVER_REQUESTED;
               netplay->stall_time     = 0;
               connection->stall_frame = frames;
            }
            break;
         }
      case NETPLAY_CMD_PLAYER_CHAT:
         {
            char nickname[NETPLAY_NICK_LEN];
            char message[NETPLAY_CHAT_MAX_SIZE];

            /* We do not send the sentinel/null character on chat messages
               and we do not allow empty messages. */
            if (netplay->is_server)
            {
               /* If server, we only receive the message,
                  without the nickname portion. */
               if (!cmd_size || cmd_size >= sizeof(message))
               {
                  RARCH_ERR("[Netplay] NETPLAY_CMD_PLAYER_CHAT with incorrect payload size.\n");
                  return netplay_cmd_nak(netplay, connection);
               }

               strlcpy(nickname, connection->nick, sizeof(nickname));
            }
            else
            {
               /* If client, we receive both the nickname and
                  the message from the server. */
               if (cmd_size <= sizeof(nickname) ||
                     cmd_size >= sizeof(nickname) + sizeof(message))
               {
                  RARCH_ERR("[Netplay] NETPLAY_CMD_PLAYER_CHAT with incorrect payload size.\n");
                  return netplay_cmd_nak(netplay, connection);
               }

               RECV(nickname, sizeof(nickname))
                  return false;
               cmd_size -= sizeof(nickname);
            }

            RECV(message, cmd_size)
               return false;
            message[recvd] = '\0';

            if (!handle_chat(netplay, connection,
                  nickname, message))
            {
               RARCH_ERR("[Netplay] NETPLAY_CMD_PLAYER_CHAT with invalid message or from an invalid peer.\n");
               return netplay_cmd_nak(netplay, connection);
            }
         }
         break;

      case NETPLAY_CMD_PING_REQUEST:
         answer_ping(netplay, connection);
         break;

      case NETPLAY_CMD_PING_RESPONSE:
         {
            /* Only process ping responses if we requested them. */
            if (connection->ping_requested)
            {
               connection->ping = (int32_t)
                  ((cpu_features_get_time_usec() - connection->ping_timer)
                     / 1000);
               connection->ping_requested = false;
            }
         }
         break;

      case NETPLAY_CMD_SETTING_ALLOW_PAUSING:
         {
            uint32_t allow_pausing;

            if (netplay->is_server)
            {
               RARCH_ERR("[Netplay] NETPLAY_CMD_SETTING_ALLOW_PAUSING from client.\n");
               return netplay_cmd_nak(netplay, connection);
            }

            if (cmd_size != sizeof(allow_pausing))
            {
               RARCH_ERR("[Netplay] NETPLAY_CMD_SETTING_ALLOW_PAUSING with incorrect payload size.\n");
               return netplay_cmd_nak(netplay, connection);
            }

            RECV(&allow_pausing, sizeof(allow_pausing))
               return false;
            allow_pausing = ntohl(allow_pausing);

            if (allow_pausing != 0 && allow_pausing != 1)
            {
               RARCH_ERR("[Netplay] NETPLAY_CMD_SETTING_ALLOW_PAUSING with incorrect setting.\n");
               return netplay_cmd_nak(netplay, connection);
            }

            netplay->allow_pausing = allow_pausing;
         }
         break;

      case NETPLAY_CMD_SETTING_INPUT_LATENCY_FRAMES:
         {
            int32_t frames[2];

            if (netplay->is_server)
            {
               RARCH_ERR("[Netplay] NETPLAY_CMD_SETTING_INPUT_LATENCY_FRAMES from client.\n");
               return netplay_cmd_nak(netplay, connection);
            }

            if (cmd_size != sizeof(frames))
            {
               RARCH_ERR("[Netplay] NETPLAY_CMD_SETTING_INPUT_LATENCY_FRAMES with incorrect payload size.\n");
               return netplay_cmd_nak(netplay, connection);
            }

            RECV(frames, sizeof(frames))
               return false;
            netplay->input_latency_frames_min = ntohl(frames[0]);
            netplay->input_latency_frames_max = ntohl(frames[1]);
         }
         break;

      default:
         {
            unsigned char buf[1024];
            while (cmd_size)
            {
               RECV(buf, (cmd_size > sizeof(buf)) ? sizeof(buf) : cmd_size)
                  return false;
               cmd_size -= recvd;
            }
            RARCH_WARN("[Netplay] %s\n",
                  msg_hash_to_str(MSG_UNKNOWN_NETPLAY_COMMAND_RECEIVED));
         }
         break;
   }

   netplay_recv_flush(&connection->recv_packet_buffer);
   netplay->timeout_cnt = 0;
   if (had_input)
      *had_input = true;
   return true;

shrt:
   /* No more data, reset and try again */
   netplay_recv_reset(&connection->recv_packet_buffer);
   return true;
}

#undef RECV

/**
 * netplay_poll_net_input
 *
 * Poll input from the network
 */
int netplay_poll_net_input(netplay_t *netplay, bool block)
{
   bool had_input = false;
   int max_fd     = 0;
   size_t i;

   for (i = 0; i < netplay->connections_size; i++)
   {
      struct netplay_connection *connection = &netplay->connections[i];
      if (connection->active && connection->fd >= max_fd)
         max_fd = connection->fd + 1;
   }

   if (max_fd == 0)
      return 0;

   netplay->timeout_cnt = 0;

   do
   {
      had_input = false;

      netplay->timeout_cnt++;

      /* Read input from each connection */
      for (i = 0; i < netplay->connections_size; i++)
      {
         struct netplay_connection *connection = &netplay->connections[i];
         if (connection->active && !netplay_get_cmd(netplay, connection, &had_input))
            netplay_hangup(netplay, connection);
      }

      if (block)
      {
         netplay_update_unread_ptr(netplay);

         /* If we were blocked for input, pass if we have this frame's input */
         if (netplay->unread_frame_count > netplay->run_frame_count)
            break;

         /* If we're supposed to block but we didn't have enough input, wait for it */
         if (!had_input)
         {
            fd_set fds;
            struct timeval tv = {0};
            tv.tv_usec = RETRY_MS * 1000;

            FD_ZERO(&fds);
            for (i = 0; i < netplay->connections_size; i++)
            {
               struct netplay_connection *connection = &netplay->connections[i];
               if (connection->active)
                  FD_SET(connection->fd, &fds);
            }

            if (socket_select(max_fd, &fds, NULL, NULL, &tv) < 0)
               return -1;

            RARCH_LOG(
                  "[Netplay] Network is stalling at frame %u, count %u of %d ...\n",
                  netplay->run_frame_count,
                  netplay->timeout_cnt,
                  MAX_RETRIES);

            if (  netplay->timeout_cnt >= MAX_RETRIES
               && !netplay->remote_paused)
               return -1;
         }
      }
   } while (had_input || block);

   return 0;
}

/**
 * netplay_handle_slaves
 *
 * Handle any slave connections
 */
void netplay_handle_slaves(netplay_t *netplay)
{
   struct delta_frame *oframe, *frame = &netplay->buffer[netplay->self_ptr];
   size_t i;
   for (i = 0; i < netplay->connections_size; i++)
   {
      struct netplay_connection *connection = &netplay->connections[i];
      if (connection->active &&
            connection->mode == NETPLAY_CONNECTION_SLAVE)
      {
         uint32_t devices, device;
         uint32_t client_num = (uint32_t)(i + 1);

         /* This is a slave connection. First, should we do anything at all? If
          * we've already "read" this data, then we can just ignore it */
         if (netplay->read_frame_count[client_num] > netplay->self_frame_count)
            continue;

         /* Alright, we have to send something. Do we need to generate it first? */
         if (!frame->have_real[client_num])
         {
            devices = netplay->client_devices[client_num];

            /* Copy the previous frame's data */
            oframe = &netplay->buffer[PREV_PTR(netplay->self_ptr)];
            for (device = 0; device < MAX_INPUT_DEVICES; device++)
            {
               netplay_input_state_t istate_out, istate_in;
               if (!(devices & (1<<device)))
                  continue;
               istate_in = oframe->real_input[device];
               while (istate_in && istate_in->client_num != client_num)
                  istate_in = istate_in->next;
               if (!istate_in)
               {
                  /* Start with blank input */
                  netplay_input_state_for(&frame->real_input[device],
                        client_num,
                        netplay_expected_input_size(netplay, 1 << device), true,
                        false);
               }
               else
               {
                  /* Copy the previous input */
                  istate_out = netplay_input_state_for(&frame->real_input[device],
                        client_num, istate_in->size, true, false);
                  memcpy(istate_out->data, istate_in->data,
                        istate_in->size * sizeof(uint32_t));
               }
            }
            frame->have_real[client_num] = true;
         }

         /* Send it along */
         send_input_frame(netplay, frame, NULL, NULL, client_num, false);

         /* And mark it as "read" */
         netplay->read_ptr[client_num]         = NEXT_PTR(netplay->self_ptr);
         netplay->read_frame_count[client_num] = netplay->self_frame_count + 1;
      }
   }
}

/**
 * netplay_announce_nat_traversal
 *
 * Announce successful NAT traversal.
 */
void netplay_announce_nat_traversal(netplay_t *netplay)
{
#ifndef HAVE_SOCKET_LEGACY
   char msg[512], host[256], port[6];
   const char *dmsg           = NULL;
   net_driver_state_t *net_st = &networking_driver_st;

   if (net_st->nat_traversal_request.status == NAT_TRAVERSAL_STATUS_OPENED)
   {
Netplay Stuff (#13375)
* Netplay Stuff
## PROTOCOL FALLBACK
In order to support older clients a protocol fallback system was introduced.
The host will no longer send its header automatically after a TCP connection is established, instead, it awaits for the client to send his before determining which protocol this connection is going to operate on.
Netplay has now two protocols, a low protocol and a high protocol; the low protocol is the minimum protocol it supports, while the high protocol is the highest protocol it can operate on.
To fully support older clients, a hack was necessary: sending the high protocol in the unused client's header salt field, while keeping the protocol field to the low protocol. Without this hack we would only be able to support older clients if a newer client was the host.
Any future system can make use of this system by checking connection->netplay_protocol, which is available for both the client and host.
## NETPLAY CHAT
Starting with protocol 6, netplay chat is available through the new NETPLAY_CMD_PLAYER_CHAT command.
Limitations of the command code, which causes a disconnection on unknown commands, makes this system not possible on protocol 5.
Protocol 5 connections can neither send nor receive chat, but other netplay operations are unaffected.
Clients send chat as a string to the server, and it's the server's sole responsability to relay chat messages.
As of now, sending chat uses RetroArch's input menu, while the display of on-screen chat uses a widget overlay and RetroArch's notifications as a fallback.
If a new overlay and/or input system is desired, no backwards compatibility changes need to be made.
Only clients in playing mode (as opposed to spectating mode) can send and receive chat.
## SETTINGS SHARING
Some settings are better used when both host and clients share the same configuration.
As of protocol 6, the following settings will be shared from host to clients (without altering a client's configuration file): input latency frames and allow pausing.
## NETPLAY TUNNEL/MITM
With the current MITM system being defunct (at least as of 1.9.X), a new system was in order to solve most if not all of the problems with the current system.
This new system uses a tunneling approach, which is similar to most VPN and tunneling services around.
Tunnel commands:
RATS[unique id] (RetroArch Tunnel Session) - 16 bytes -> When this command is sent with a zeroed unique id, the tunnel server interprets this as a netplay host wanting to create a new session, in this case, the same command is returned to the host, but now with its unique session id. When a client needs to connect to a host, this command is sent with the unique session id of the host, causing the tunnel server to send a RATL command to the host.
RATL[unique id] (RetroArch Tunnel Link) - 16 bytes -> The tunnel server sends this command to the host when a client wants to connect to the host. Once the host receives this command, it establishes a new connection to the tunnel server, sending this command together with the client's unique id through this new connection, causing the tunnel server to link this connection to the connection of the client.
RATP (RetroArch Tunnel Ping) - 4 bytes -> The tunnel server sends this command to verify that the host, whom the session belongs to, is still around. The host replies with the same command. A session is closed if the tunnel server can not verify that the host is alive.
Operations:
Host -> Instead of listening and accepting connections, it connects to the tunnel server, requests a new session and then monitor this connection for new linking requests. Once a request is received, it establishes a new connection to the tunnel server for linking with a client. The tunnel server's address and port are obtained by querying the lobby server. The host will publish its session id together with the rest of its info to the lobby server.
Client -> It connects to the tunnel server and then sends the session id of the host it wants to connect to. A host's session id is obtained from the json data sent by the lobby server.
Improvements (over the current MITM system):
No more risk of TCP port exhaustion; only one port is now used at the tunnel server.
Very little CPU usage; it is now about 95% network I/O bound.
Compatible with any and all future changes to netplay, since MITM servers no longer run any netplay logic.
No longer operates the host in client mode, which was a source of many of the current problems.
Cleaner and more maintainable system and code.
Notable functions:
netplay_mitm_query -> Grabs the tunnel's address and port from the lobby server.
init_tcp_socket -> Handles the creation and operation mode of the TCP socket based on whether it's host, host+MITM or client.
handle_mitm_connection -> Creates and completes linking connections and replies to ping commands (only one of each per call, to avoid hurting performance).
## MISC
Ping Limiter: If a client's estimated latency to the server is higher than the configured limit, the connection is dropped just before the netplay handshake finishes (a small sketch follows after this list).
Ping Counter: A ping counter (similar to the FPS counter) can be shown in the bottom right corner of the screen while connected to a host.
LAN Discovery: Refactored and moved to its own "Refresh Netplay LAN List" button.
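A minimal sketch of the ping limiter check described above; the function and parameter names are hypothetical, and treating a zero limit as "no limit" is an assumption of this sketch:
```c
#include <stdbool.h>

/* Illustrative only (hypothetical names, not actual RetroArch symbols):
   reject a client whose estimated round-trip latency exceeds the
   configured limit, just before the netplay handshake completes. */
static bool ping_limit_allows(int ping_estimate_ms, int ping_limit_ms)
{
   /* Assumption for this sketch: a limit of 0 disables the check. */
   if (ping_limit_ms <= 0)
      return true;
   return ping_estimate_ms <= ping_limit_ms;
}
```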
## FIXES
Many minor fixes to the current netplay implementation are also included.
* Remove NETPLAY_TEST_BUILD
      netplay->ext_tcp_port =
         ntohs(net_st->nat_traversal_request.request.addr.sin_port);

      if (!getnameinfo(
            (struct sockaddr *) &net_st->nat_traversal_request.request.addr,
            sizeof(net_st->nat_traversal_request.request.addr),
            host, sizeof(host), port, sizeof(port),
            NI_NUMERICHOST | NI_NUMERICSERV))
      {
         snprintf(msg, sizeof(msg), "%s: %s:%s",
            msg_hash_to_str(MSG_PUBLIC_ADDRESS),
            host, port);
         dmsg = msg;
      }
   }
   else
      dmsg = msg_hash_to_str(MSG_UPNP_FAILED);

   if (dmsg)
   {
      RARCH_LOG("[Netplay] %s\n", dmsg);
      runloop_msg_queue_push(dmsg, 1, 180, false, NULL,
         MESSAGE_QUEUE_ICON_DEFAULT, MESSAGE_QUEUE_CATEGORY_INFO);
   }
#endif
}

/**
 * netplay_init_nat_traversal
 *
 * Initialize the NAT traversal library and try to open a port.
 */
void netplay_init_nat_traversal(netplay_t *netplay)
{
   net_driver_state_t *net_st = &networking_driver_st;

   task_push_netplay_nat_traversal(&net_st->nat_traversal_request,
      netplay->tcp_port);
}

/* Undo netplay_init_nat_traversal: request that the opened port be closed. */
void netplay_deinit_nat_traversal(void)
{
   net_driver_state_t *net_st = &networking_driver_st;

   task_push_netplay_nat_close(&net_st->nat_traversal_request);
}
static int init_tcp_connection(netplay_t *netplay, const struct addrinfo *res,
      bool is_server, bool is_mitm)
{
#ifndef HAVE_SOCKET_LEGACY
   char msg[512];
   char host[256], port[6];
#endif
   const char *dmsg = NULL;
   int fd = socket(res->ai_family, res->ai_socktype,
      res->ai_protocol);

   if (fd < 0)
      return -1;
   SET_TCP_NODELAY(fd)
   SET_FD_CLOEXEC(fd)
   if (!is_server)
   {
      if (!socket_connect(fd, (void*)res, false))
      {
         /* If we are connecting to a tunnel server,
            we must also send our session linking request. */
         if (!netplay->mitm_session_id.magic || socket_send_all_blocking(fd,
               &netplay->mitm_session_id, sizeof(netplay->mitm_session_id), true))
            return fd;
      }

#ifndef HAVE_SOCKET_LEGACY
      if (!getnameinfo(res->ai_addr, res->ai_addrlen,
            host, sizeof(host), port, sizeof(port),
            NI_NUMERICHOST | NI_NUMERICSERV))
      {
         snprintf(msg, sizeof(msg),
            "Failed to connect to host %s on port %s.",
            host, port);
         dmsg = msg;
      }
      else
#endif
         dmsg = "Failed to connect to host.";
   }
   else if (is_mitm)
   {
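      /* Tunnel-session request (as inferred from the code below): once
       * connected to the tunnel server, the host sends a mitm_id_t whose
       * magic is set to MITM_SESSION_MAGIC and whose unique field is left
       * zeroed; the server is expected to answer with a fully populated id
       * (same magic, non-zero unique), which is stored in
       * netplay->mitm_session_id for later use. The exact layout of
       * mitm_id_t is defined elsewhere in the netplay headers and is only
       * assumed here. */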
      if (!socket_connect(fd, (void*)res, false))
      {
         mitm_id_t new_session     = {0};
         mitm_id_t invalid_session = {0};

         /* To request a new session,
            we send the magic with the rest of the ID zeroed. */
         new_session.magic = htonl(MITM_SESSION_MAGIC);

         /* Tunnel server should provide us with our session ID. */
         if (socket_send_all_blocking(fd,
               &new_session, sizeof(new_session), true) &&
             socket_receive_all_blocking(fd,
               &netplay->mitm_session_id, sizeof(netplay->mitm_session_id)) &&
             ntohl(netplay->mitm_session_id.magic) == MITM_SESSION_MAGIC &&
             memcmp(netplay->mitm_session_id.unique, invalid_session.unique,
               sizeof(netplay->mitm_session_id.unique)))
         {
            /* Initialize data for handling tunneled client connections. */
            netplay->mitm_pending = calloc(1, sizeof(*netplay->mitm_pending));
            if (netplay->mitm_pending)
            {
               memset(netplay->mitm_pending->fds, -1,
                  sizeof(netplay->mitm_pending->fds));
               netplay->mitm_pending->addr = res;

               return fd;
            }
         }

         dmsg = "Failed to create a tunnel session.";
      }
      else
      {
#ifndef HAVE_SOCKET_LEGACY
         if (!getnameinfo(res->ai_addr, res->ai_addrlen,
               host, sizeof(host), port, sizeof(port),
               NI_NUMERICHOST | NI_NUMERICSERV))
         {
            snprintf(msg, sizeof(msg),
               "Failed to connect to relay server %s on port %s.",
               host, port);
            dmsg = msg;
         }
         else
#endif
            dmsg = "Failed to connect to relay server.";
      }
   }
   else
   {
#if defined(HAVE_INET6) && defined(IPV6_V6ONLY)
      /* Make sure we accept connections on both IPv6 and IPv4 */
      if (res->ai_family == AF_INET6)
      {
         int on = 0;

         if (setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY,
               (const char*) &on, sizeof(on)) < 0)
            RARCH_WARN("[Netplay] Failed to listen on both IPv6 and IPv4.\n");
      }
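      /* Note on the block above: clearing IPV6_V6ONLY (on = 0) asks the
       * platform to let this IPv6 listening socket also accept IPv4
       * connections via IPv4-mapped addresses where dual-stack sockets are
       * supported; a setsockopt() failure is treated as non-fatal and only
       * logged, since an IPv6-only listener still works. */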
#endif

      if (socket_bind(fd, (void*)res))
      {
         if (!listen(fd, 64))
            return fd;
      }
      else
      {
#ifndef HAVE_SOCKET_LEGACY
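         /* On legacy socket targets (HAVE_SOCKET_LEGACY defined), this
          * getnameinfo()-based message is skipped, presumably because the
          * call is not usable there; those builds fall through to the
          * generic "Failed to bind port." message instead. */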
         if (!getnameinfo(res->ai_addr, res->ai_addrlen,
               NULL, 0, port, sizeof(port), NI_NUMERICSERV))
         {
            snprintf(msg, sizeof(msg),
               "Failed to bind port %s.",
               port);
            dmsg = msg;
         }
         else
#endif
            dmsg = "Failed to bind port.";
      }
   }

   socket_close(fd);

   if (dmsg)
      RARCH_ERR("[Netplay] %s\n", dmsg);

   return -1;
}
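/* According to the accompanying commit description, init_tcp_socket()
 * handles the creation and operation mode of the TCP socket depending on
 * whether this instance is a host, a host behind the tunnel (host+MITM),
 * or a client. */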
2021-12-19 12:58:01 -03:00
|
|
|
static bool init_tcp_socket(netplay_t *netplay,
|
|
|
|
const char *server, const char *mitm, uint16_t port)
|
2021-09-18 06:14:34 +02:00
|
|
|
{
|
2021-11-05 18:52:56 +01:00
|
|
|
struct addrinfo *res;
|
|
|
|
const struct addrinfo *tmp_info;
|
Netplay Stuff (#13375)
2021-12-19 12:58:01 -03:00
|
|
|
char port_buf[6];
|
2021-11-05 18:52:56 +01:00
|
|
|
struct addrinfo hints = {0};
|
Netplay Stuff (#13375)
2021-12-19 12:58:01 -03:00
|
|
|
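/* No direct server address but a relay (MITM) address was given: we are hosting through the tunnel server. */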
bool is_mitm = !server && mitm;
|
2021-11-05 18:52:56 +01:00
|
|
|
int fd = -1;
|
2016-12-17 16:50:06 -05:00
|
|
|
|
Netplay Stuff (#13375)
2021-12-19 12:58:01 -03:00
|
|
|
if (!server)
|
2021-09-18 06:14:34 +02:00
|
|
|
{
|
Netplay Stuff (#13375)
2021-12-19 12:58:01 -03:00
|
|
|
if (!is_mitm)
|
2021-11-05 18:52:56 +01:00
|
|
|
{
|
|
|
|
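/* Hosting without a relay: ask getaddrinfo for a wildcard address suitable for bind(). */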
hints.ai_flags = AI_PASSIVE;
|
|
|
|
#ifdef HAVE_INET6
|
|
|
|
/* Default to hosting on IPv6 and IPv4 */
|
2021-09-18 06:14:34 +02:00
|
|
|
hints.ai_family = AF_INET6;
|
2021-11-05 18:52:56 +01:00
|
|
|
#else
|
|
|
|
hints.ai_family = AF_INET;
|
|
|
|
#endif
|
|
|
|
}
|
Netplay Stuff (#13375)
2021-12-19 12:58:01 -03:00
|
|
|
else
|
|
|
|
{
|
|
|
|
/* IPv4 only for relay servers. */
|
|
|
|
hints.ai_family = AF_INET;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
hints.ai_socktype = SOCK_STREAM;
|
2016-12-17 16:50:06 -05:00
|
|
|
|
Netplay Stuff (#13375)
2021-12-19 12:58:01 -03:00
|
|
|
snprintf(port_buf, sizeof(port_buf), "%hu", port);
|
|
|
|
|
|
|
|
if (getaddrinfo_retro(is_mitm ? mitm : server, port_buf,
|
|
|
|
&hints, &res))
|
|
|
|
{
|
|
|
|
if (!server && !is_mitm)
|
2021-09-18 06:14:34 +02:00
|
|
|
{
|
2021-11-05 18:52:56 +01:00
|
|
|
#ifdef HAVE_INET6
|
|
|
|
try_ipv4:
|
Netplay Stuff (#13375)
2021-12-19 12:58:01 -03:00
|
|
|
/* Didn't work with IPv6, try IPv4 */
|
|
|
|
hints.ai_family = AF_INET;
|
|
|
|
if (getaddrinfo_retro(server, port_buf, &hints, &res))
|
2021-11-05 18:52:56 +01:00
|
|
|
#endif
|
|
|
|
{
|
Netplay Stuff (#13375)
2021-12-19 12:58:01 -03:00
|
|
|
RARCH_ERR("[Netplay] Failed to set a hosting address.\n");
|
2021-11-05 18:52:56 +01:00
|
|
|
return false;
|
|
|
|
}
|
2021-09-18 06:14:34 +02:00
|
|
|
}
|
Netplay Stuff (#13375)
2021-12-19 12:58:01 -03:00
|
|
|
else
|
|
|
|
{
|
|
|
|
RARCH_ERR("[Netplay] Failed to resolve host: %s\n", is_mitm ? mitm : server);
|
2021-11-12 18:59:52 +01:00
|
|
|
return false;
|
Netplay Stuff (#13375)
      }
   }

   if (!res)
      return false;
   /* If we're serving on IPv6, make sure we accept all connections, including
    * IPv4 */
#ifdef HAVE_INET6
   if (!server && !is_mitm && res->ai_family == AF_INET6)
   {
      struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *) res->ai_addr;
#if defined(_MSC_VER) && _MSC_VER <= 1200
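      /* MSVC 6.0 and earlier cannot use the in6addr_any assignment below,
       * so the IN6ADDR_SETANY macro is used instead. */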
      IN6ADDR_SETANY(sin6);
#else
      sin6->sin6_addr = in6addr_any;
#endif
   }
#endif

   /* If "localhost" is used, it is important to check every possible
    * address for IPv4/IPv6. */
   tmp_info = res;
   do
   {
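      /* Try each resolved address in turn; keep the first one that
       * yields a usable socket. */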
      fd = init_tcp_connection(netplay, tmp_info, !server, is_mitm);
      if (fd >= 0)
         break;
   } while ((tmp_info = tmp_info->ai_next));
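
   /* A pending tunnel (MITM) link still needs the resolved address list, so it
    * is stashed in base_addr instead of being freed immediately. */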
   if (netplay->mitm_pending && netplay->mitm_pending->addr)
      netplay->mitm_pending->base_addr = res;
   else
      freeaddrinfo_retro(res);
   res = NULL;

   if (fd < 0)
   {
#ifdef HAVE_INET6
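      /* No socket could be set up; if this was a plain (non-tunnel) host
       * attempt over IPv6, retry the setup with IPv4. */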
      if (!server && !is_mitm && hints.ai_family == AF_INET6)
         goto try_ipv4;
#endif
      RARCH_ERR("[Netplay] Failed to set up netplay sockets.\n");
      return false;
   }

   if (!socket_nonblock(fd))
   {
      socket_close(fd);
      return false;
   }
2021-12-19 12:58:01 -03:00
   if (server)
   {
      netplay->connections[0].active = true;
      netplay->connections[0].fd = fd;
      memset(&netplay->connections[0].addr, 0,
            sizeof(netplay->connections[0].addr));
   }
   else
      netplay->listen_fd = fd;

   return true;
}

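/* Editor's note: socket_nonblock() and socket_close() above come from
 * libretro-common's network compatibility layer, which papers over the
 * differences between BSD sockets, Winsock and console SDKs. Purely as an
 * illustration, and not the RetroArch implementation, a POSIX-only version
 * of the non-blocking toggle could be sketched like this: */
#include <stdbool.h>
#include <fcntl.h>

static bool example_socket_nonblock(int fd)
{
   /* Read the current descriptor flags and OR in O_NONBLOCK so that later
    * send()/recv() calls return immediately instead of stalling a frame. */
   int flags = fcntl(fd, F_GETFL, 0);
   if (flags < 0)
      return false;
   return fcntl(fd, F_SETFL, flags | O_NONBLOCK) == 0;
}
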
static bool init_socket(netplay_t *netplay,
      const char *server, const char *mitm, uint16_t port)
{
   if (!network_init())
      return false;

   if (!init_tcp_socket(netplay, server, mitm, port))
      return false;

   if (netplay->is_server && netplay->nat_traversal)
      netplay_init_nat_traversal(netplay);

   return true;
}

static bool netplay_init_socket_buffers(netplay_t *netplay)
{
   /* Make our packet buffer big enough for a save state and stall-frames-many
    * frames of input data, plus the headers for each of them */
   size_t i;
   size_t packet_buffer_size = netplay->zbuffer_size +
      NETPLAY_MAX_STALL_FRAMES * 16;
   netplay->packet_buffer_size = packet_buffer_size;

   for (i = 0; i < netplay->connections_size; i++)
   {
      struct netplay_connection *connection = &netplay->connections[i];
      if (connection->active)
      {
         if (connection->send_packet_buffer.data)
         {
            if (!netplay_resize_socket_buffer(&connection->send_packet_buffer,
                  packet_buffer_size) ||
                !netplay_resize_socket_buffer(&connection->recv_packet_buffer,
                  packet_buffer_size))
               return false;
         }
         else
         {
            if (!netplay_init_socket_buffer(&connection->send_packet_buffer,
                  packet_buffer_size) ||
                !netplay_init_socket_buffer(&connection->recv_packet_buffer,
                  packet_buffer_size))
               return false;
         }
      }
   }

   return true;
}

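/* Editor's note: a quick worked example of the sizing above, using
 * hypothetical numbers (a 512 KiB core savestate and a 60-frame stall
 * window; neither value is taken from the sources). zbuffer_size is twice
 * the savestate size (see netplay_init_serialization() below), and each
 * potentially stalled frame is budgeted 16 bytes of input-plus-header data. */
#include <stdio.h>

static void example_packet_buffer_sizing(void)
{
   size_t state_size         = 512 * 1024;     /* hypothetical savestate size */
   size_t zbuffer_size       = state_size * 2; /* compression scratch buffer  */
   size_t max_stall_frames   = 60;             /* hypothetical stall window   */
   size_t packet_buffer_size = zbuffer_size + max_stall_frames * 16;

   /* 1048576 + 960 = 1049536 bytes, i.e. roughly 1 MiB per buffer. */
   printf("packet buffer: %zu bytes\n", packet_buffer_size);
}
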
static bool netplay_init_serialization(netplay_t *netplay)
{
   unsigned i;
   retro_ctx_size_info_t info;

   if (netplay->state_size)
      return true;

   core_serialize_size(&info);

   if (!info.size)
      return false;

   netplay->state_size = info.size;

   for (i = 0; i < netplay->buffer_size; i++)
   {
      netplay->buffer[i].state = calloc(netplay->state_size, 1);

      if (!netplay->buffer[i].state)
      {
         netplay->quirks |= NETPLAY_QUIRK_NO_SAVESTATES;
         return false;
      }
   }

   netplay->zbuffer_size = netplay->state_size * 2;
   netplay->zbuffer = (uint8_t *) calloc(netplay->zbuffer_size, 1);
   if (!netplay->zbuffer)
   {
      netplay->quirks |= NETPLAY_QUIRK_NO_TRANSMISSION;
      netplay->zbuffer_size = 0;
      return false;
   }

   return true;
}

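/* Editor's note: a worked example of what the allocations above add up to,
 * using the same hypothetical numbers as before (not measured values): a
 * 512 KiB savestate and a 62-entry delta-frame ring (buffer_size is derived
 * in netplay_init_buffers() further down). */
#include <stdio.h>

static void example_serialization_footprint(void)
{
   size_t state_size   = 512 * 1024;               /* hypothetical savestate size    */
   size_t buffer_size  = 62;                       /* hypothetical delta-frame count */
   size_t state_total  = buffer_size * state_size; /* one savestate per delta frame  */
   size_t zbuffer_size = state_size * 2;           /* scratch for compressed states  */

   /* 62 * 512 KiB = 31 MiB of rollback state, plus a 1 MiB zbuffer. */
   printf("states: %zu bytes, zbuffer: %zu bytes\n", state_total, zbuffer_size);
}
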
/**
 * netplay_try_init_serialization
 *
 * Try to initialize serialization. For quirky cores.
 *
 * Returns true if serialization is now ready, false otherwise.
 */
bool netplay_try_init_serialization(netplay_t *netplay)
{
   retro_ctx_serialize_info_t serial_info;

   if (netplay->state_size)
      return true;

   if (!netplay_init_serialization(netplay))
      return false;

   /* Check if we can actually save */
   serial_info.data_const = NULL;
   serial_info.data = netplay->buffer[netplay->run_ptr].state;
   serial_info.size = netplay->state_size;

   if (!core_serialize(&serial_info))
      return false;

   /* Once initialized, we no longer exhibit this quirk */
   netplay->quirks &= ~((uint64_t) NETPLAY_QUIRK_INITIALIZATION);

   return netplay_init_socket_buffers(netplay);
}

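/* Editor's note: core_serialize_size() and core_serialize() are RetroArch's
 * wrappers around the retro_serialize_size()/retro_serialize() entry points
 * that every libretro core exports. Stripped of the retro_ctx_* plumbing,
 * the probe above is conceptually equivalent to this illustrative sketch: */
#include <stdbool.h>
#include <stdlib.h>

/* Stand-ins for the core's exported entry points, however the frontend
 * actually resolves them. */
extern size_t retro_serialize_size(void);
extern bool   retro_serialize(void *data, size_t size);

static bool example_probe_savestate_support(void)
{
   size_t size = retro_serialize_size();
   void  *buf  = NULL;
   bool   ok   = false;

   if (!size)   /* the core reports no savestate support (yet) */
      return false;

   if (!(buf = calloc(size, 1)))
      return false;

   ok = retro_serialize(buf, size);   /* can the core actually save right now? */
   free(buf);
   return ok;
}
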
/**
 * netplay_wait_and_init_serialization
 *
 * Try very hard to initialize serialization, simulating multiple frames if
 * necessary. For quirky cores.
 *
 * Returns true if serialization is now ready, false otherwise.
 */
bool netplay_wait_and_init_serialization(netplay_t *netplay)
{
   int frame;

   if (netplay->state_size)
      return true;

   /* Wait a maximum of 60 frames */
   for (frame = 0; frame < 60; frame++)
   {
      if (netplay_try_init_serialization(netplay))
         return true;

#if defined(HAVE_THREADS)
      autosave_lock();
#endif
      core_run();
#if defined(HAVE_THREADS)
      autosave_unlock();
#endif
   }

   return false;
}

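/* Editor's note: a hypothetical caller, not taken from the RetroArch
 * sources, showing how the two entry points above are meant to be combined:
 * cores flagged with the initialization quirk get the patient, frame-running
 * variant, while everything else gets the cheap one-shot attempt. */
static bool example_ensure_serialization(netplay_t *netplay)
{
   if (netplay->quirks & NETPLAY_QUIRK_INITIALIZATION)
      return netplay_wait_and_init_serialization(netplay);
   return netplay_try_init_serialization(netplay);
}
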
static bool netplay_init_buffers(netplay_t *netplay)
{
   struct delta_frame *delta_frames = NULL;

   /* Enough to get ahead or behind by MAX_STALL_FRAMES frames, plus one for
    * other remote clients, plus one to send the stall message */
   netplay->buffer_size = NETPLAY_MAX_STALL_FRAMES + 2;

   /* If we're the server, we need enough to get ahead AND behind by
    * MAX_STALL_FRAMES frames */
   if (netplay->is_server)
      netplay->buffer_size *= 2;

   delta_frames = (struct delta_frame*)calloc(netplay->buffer_size,
         sizeof(*delta_frames));

   if (!delta_frames)
      return false;

   netplay->buffer = delta_frames;

   if (!(netplay->quirks & (NETPLAY_QUIRK_NO_SAVESTATES|NETPLAY_QUIRK_INITIALIZATION)))
      netplay_init_serialization(netplay);

   return netplay_init_socket_buffers(netplay);
}

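/* Editor's note: the sizing above worked through with the same hypothetical
 * 60-frame stall window used in the earlier examples (the real value of
 * NETPLAY_MAX_STALL_FRAMES may differ). */
#include <stdio.h>

static void example_delta_frame_count(void)
{
   size_t max_stall_frames = 60;                   /* hypothetical                 */
   size_t client_frames    = max_stall_frames + 2; /* + other remote clients       */
                                                   /* + room for the stall message */
   size_t server_frames    = client_frames * 2;    /* server: ahead AND behind     */

   /* Client: 62 delta frames; server: 124 delta frames. */
   printf("client: %zu, server: %zu\n", client_frames, server_frames);
}
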
/**
 * netplay_new:
 * @server             : IP address of server.
 * @mitm               : IP address of the MITM/tunnel server.
 * @port               : Port of server.
 * @mitm_session       : Session id for MITM/tunnel.
 * @stateless_mode     : Shall we use stateless mode?
 * @check_frames       : Frequency with which to check CRCs.
 * @cb                 : Libretro callbacks.
 * @nat_traversal      : If true, attempt NAT traversal.
 * @nick               : Nickname of user.
 * @quirks             : Netplay quirks required for this session.
 *
 * Creates a new netplay handle. A NULL server means we're
 * hosting.
 *
 * Returns: new netplay data.
 */
netplay_t *netplay_new(const char *server, const char *mitm, uint16_t port,
      const char *mitm_session,
      bool stateless_mode, int check_frames,
      const struct retro_callbacks *cb, bool nat_traversal, const char *nick,
      uint64_t quirks)
{
   settings_t *settings = config_get_ptr();
   netplay_t *netplay = calloc(1, sizeof(*netplay));

   if (!netplay)
      return NULL;

   netplay->listen_fd = -1;
   netplay->tcp_port = port;
   netplay->ext_tcp_port = port;
   netplay->cbs = *cb;
   netplay->is_server = !server;
   netplay->is_connected = false;
   netplay->nat_traversal = (!server && !mitm) ? nat_traversal : false;
   netplay->stateless_mode = stateless_mode;
   netplay->check_frames = check_frames;
   netplay->crc_validity_checked = false;
   netplay->crcs_valid = true;
   netplay->quirks = quirks;
   netplay->simple_rand_next = 1;
   netplay->self_mode = netplay->is_server ?
         NETPLAY_CONNECTION_SPECTATING :
         NETPLAY_CONNECTION_NONE;

   if (netplay->is_server)
   {
      netplay->connections = NULL;
      netplay->connections_size = 0;

      /* The host's values for the settings that are shared with clients:
         allow pausing and the input latency frames. */
      netplay->allow_pausing =
            settings->bools.netplay_allow_pausing;
      netplay->input_latency_frames_min =
            settings->uints.netplay_input_latency_frames_min;
      if (settings->bools.run_ahead_enabled)
         netplay->input_latency_frames_min -=
               settings->uints.run_ahead_frames;
      netplay->input_latency_frames_max =
            netplay->input_latency_frames_min +
            settings->uints.netplay_input_latency_frames_range;
   }
   else
   {
      netplay->connections = &netplay->one_connection;
      netplay->connections_size = 1;
      netplay->connections[0].fd = -1;

      if (!string_is_empty(mitm_session))
      {
         int flen;
         unsigned char *buf;

         /* The session id from the lobby is base64-encoded; decode and
            validate it. */
         buf = unbase64(mitm_session, strlen(mitm_session), &flen);
         if (!buf)
            goto failure;
         if (flen != sizeof(netplay->mitm_session_id.unique))
         {
            free(buf);
            goto failure;
         }

         netplay->mitm_session_id.magic = htonl(MITM_SESSION_MAGIC);
         memcpy(netplay->mitm_session_id.unique, buf, flen);
         free(buf);
      }
   }

   strlcpy(netplay->nick, !string_is_empty(nick)
         ? nick : RARCH_DEFAULT_NICK,
         sizeof(netplay->nick));

   if (!init_socket(netplay, server, mitm, port) ||
         !netplay_init_buffers(netplay))
      goto failure;

   if (netplay->is_server)
   {
      unsigned i;

      if (netplay->mitm_session_id.magic)
      {
         int flen;
         char *buf;
         net_driver_state_t *net_st = &networking_driver_st;
         struct netplay_room *host_room = &net_st->host_room;

         /* Publish the tunnel session id, base64-encoded, as part of the
            host's room info for the lobby server. */
         buf = base64(netplay->mitm_session_id.unique,
               sizeof(netplay->mitm_session_id.unique), &flen);
         if (!buf)
            goto failure;

         strlcpy(host_room->mitm_session, buf,
               sizeof(host_room->mitm_session));
         free(buf);
      }

      /* Clients get device info from the server */
      for (i = 0; i < MAX_INPUT_DEVICES; i++)
      {
         uint32_t dtype = input_config_get_device(i);
         netplay->config_devices[i] = dtype;
         if ((dtype&RETRO_DEVICE_MASK) == RETRO_DEVICE_KEYBOARD)
         {
            netplay->have_updown_device = true;
            netplay_key_hton_init();
         }
         if (dtype != RETRO_DEVICE_NONE && !netplay_expected_input_size(netplay, 1<<i))
            RARCH_WARN("[Netplay] Netplay does not support input device %u\n", i+1);
      }
   }
   else
   {
      /* Start our handshake */

      netplay_handshake_init_send(netplay, &netplay->connections[0],
            LOW_NETPLAY_PROTOCOL_VERSION);

      netplay->connections[0].mode = NETPLAY_CONNECTION_INIT;
      netplay->self_mode = NETPLAY_CONNECTION_INIT;
   }

   return netplay;

failure:
   netplay_free(netplay);
   return NULL;
}

/**
 * netplay_free
 * @netplay : pointer to netplay object
 *
 * Frees netplay data.
 */
void netplay_free(netplay_t *netplay)
{
   size_t i;

   if (netplay->listen_fd >= 0)
      socket_close(netplay->listen_fd);

   if (netplay->mitm_pending)
   {
      for (i = 0; i < NETPLAY_MITM_MAX_PENDING; i++)
      {
         int fd = netplay->mitm_pending->fds[i];
         if (fd >= 0)
            socket_close(fd);
      }

      freeaddrinfo_retro(netplay->mitm_pending->base_addr);
      free(netplay->mitm_pending);
   }

   for (i = 0; i < netplay->connections_size; i++)
   {
      struct netplay_connection *connection = &netplay->connections[i];
      if (connection->active)
      {
         socket_close(connection->fd);
         netplay_deinit_socket_buffer(&connection->send_packet_buffer);
         netplay_deinit_socket_buffer(&connection->recv_packet_buffer);
      }
   }

   if (netplay->connections && netplay->connections != &netplay->one_connection)
      free(netplay->connections);

   if (netplay->buffer)
   {
      for (i = 0; i < netplay->buffer_size; i++)
         netplay_delta_frame_free(&netplay->buffer[i]);

      free(netplay->buffer);
   }

   if (netplay->zbuffer)
      free(netplay->zbuffer);

   if (netplay->compress_nil.compression_stream)
      netplay->compress_nil.compression_backend->stream_free(
            netplay->compress_nil.compression_stream);
   if (netplay->compress_nil.decompression_stream)
      netplay->compress_nil.decompression_backend->stream_free(
            netplay->compress_nil.decompression_stream);

   if (netplay->compress_zlib.compression_stream)
      netplay->compress_zlib.compression_backend->stream_free(netplay->compress_zlib.compression_stream);
   if (netplay->compress_zlib.decompression_stream)
      netplay->compress_zlib.decompression_backend->stream_free(netplay->compress_zlib.decompression_stream);

   if (netplay->addr)
      freeaddrinfo_retro(netplay->addr);

   free(netplay);
}

/**
 * netplay_send_savestate
 * @netplay : pointer to netplay object
 * @serial_info : the savestate being loaded
 * @cx : compression type
 * @z : compression backend to use
 *
 * Send a loaded savestate to those connected peers using the given compression
 * scheme.
 */
static void netplay_send_savestate(netplay_t *netplay,
      retro_ctx_serialize_info_t *serial_info, uint32_t cx,
      struct compression_transcoder *z)
{
   uint32_t header[4];
   uint32_t rd, wn;
   size_t i;

   /* Compress it */
   z->compression_backend->set_in(z->compression_stream,
         (const uint8_t*)serial_info->data_const, (uint32_t)serial_info->size);
   z->compression_backend->set_out(z->compression_stream,
         netplay->zbuffer, (uint32_t)netplay->zbuffer_size);
   if (!z->compression_backend->trans(z->compression_stream, true, &rd,
         &wn, NULL))
   {
      /* Catastrophe! */
      for (i = 0; i < netplay->connections_size; i++)
         netplay_hangup(netplay, &netplay->connections[i]);
      return;
   }

   /* Send it to relevant peers */
   header[0] = htonl(NETPLAY_CMD_LOAD_SAVESTATE);
   header[1] = htonl(wn + 2*sizeof(uint32_t));
   header[2] = htonl(netplay->run_frame_count);
   header[3] = htonl(serial_info->size);

   for (i = 0; i < netplay->connections_size; i++)
   {
      struct netplay_connection *connection = &netplay->connections[i];
      if (!connection->active ||
            connection->mode < NETPLAY_CONNECTION_CONNECTED ||
            connection->compression_supported != cx) continue;

      if (!netplay_send(&connection->send_packet_buffer, connection->fd, header,
            sizeof(header)) ||
            !netplay_send(&connection->send_packet_buffer, connection->fd,
               netplay->zbuffer, wn))
         netplay_hangup(netplay, connection);
   }
}

void netplay_frontend_paused(netplay_t *netplay, bool paused)
{
   size_t i;
   uint32_t paused_ct = 0;

   netplay->local_paused = paused;

   /* Communicating this is a bit odd: If exactly one other connection is
    * paused, then we must tell them that we're unpaused, as from their
    * perspective we are. If more than one other connection is paused, then our
    * status as proxy means we are NOT unpaused to either of them. */
   for (i = 0; i < netplay->connections_size; i++)
   {
      struct netplay_connection *connection = &netplay->connections[i];
      if (connection->active && connection->paused)
         paused_ct++;
   }

   if (paused_ct > 1)
      return;

   /* Send our unpaused status. Must send manually because we must immediately
    * flush the buffer: If we're paused, we won't be polled. */
   for (i = 0; i < netplay->connections_size; i++)
   {
      struct netplay_connection *connection = &netplay->connections[i];
      if (  connection->active
         && connection->mode >= NETPLAY_CONNECTION_CONNECTED)
      {
         if (paused)
            netplay_send_raw_cmd(netplay, connection, NETPLAY_CMD_PAUSE,
               netplay->nick, NETPLAY_NICK_LEN);
         else
            netplay_send_raw_cmd(netplay, connection, NETPLAY_CMD_RESUME,
               NULL, 0);

         /* We're not going to be polled, so we need to
          * flush this command now */
         netplay_send_flush(&connection->send_packet_buffer,
               connection->fd, true);
      }
   }
}

/**
 * netplay_force_future
 * @netplay : pointer to netplay object
 *
 * Force netplay to ignore all past input, typically because we've just loaded
 * a state or reset.
 */
static void netplay_force_future(netplay_t *netplay)
{
   /* Wherever we're inputting, that's where we consider our state to be loaded */
   netplay->run_ptr = netplay->self_ptr;
   netplay->run_frame_count = netplay->self_frame_count;

   /* We need to ignore any intervening data from the other side,
    * and never rewind past this */
   netplay_update_unread_ptr(netplay);

   if (netplay->unread_frame_count < netplay->run_frame_count)
   {
      uint32_t client;
      for (client = 0; client < MAX_CLIENTS; client++)
      {
         if (!(netplay->connected_players & (1 << client)))
            continue;

         if (netplay->read_frame_count[client] < netplay->run_frame_count)
         {
            netplay->read_ptr[client] = netplay->run_ptr;
            netplay->read_frame_count[client] = netplay->run_frame_count;
         }
      }
      if (netplay->server_frame_count < netplay->run_frame_count)
      {
         netplay->server_ptr = netplay->run_ptr;
         netplay->server_frame_count = netplay->run_frame_count;
      }
      netplay_update_unread_ptr(netplay);
   }
   if (netplay->other_frame_count < netplay->run_frame_count)
   {
      netplay->other_ptr = netplay->run_ptr;
      netplay->other_frame_count = netplay->run_frame_count;
   }
}

void netplay_core_reset(netplay_t *netplay)
{
   size_t i;
   uint32_t cmd[3];

   /* Ignore past input */
   netplay_force_future(netplay);

   /* Request that our peers reset */
   cmd[0] = htonl(NETPLAY_CMD_RESET);
   cmd[1] = htonl(sizeof(uint32_t));
   cmd[2] = htonl(netplay->self_frame_count);

   for (i = 0; i < netplay->connections_size; i++)
   {
      struct netplay_connection *connection = &netplay->connections[i];
      if (!connection->active ||
            connection->mode < NETPLAY_CONNECTION_CONNECTED) continue;

      if (!netplay_send(&connection->send_packet_buffer, connection->fd, cmd,
            sizeof(cmd)))
         netplay_hangup(netplay, connection);
   }
}

void netplay_load_savestate(netplay_t *netplay,
      retro_ctx_serialize_info_t *serial_info, bool save)
{
   retro_ctx_serialize_info_t tmp_serial_info;

   netplay_force_future(netplay);

   /* Record it in our own buffer */
   if (save || !serial_info)
   {
      /* TODO/FIXME: This is a critical failure! */
      if (!netplay_delta_frame_ready(netplay,
            &netplay->buffer[netplay->run_ptr], netplay->run_frame_count))
         return;

      if (!serial_info)
      {
         tmp_serial_info.size = netplay->state_size;
         tmp_serial_info.data = netplay->buffer[netplay->run_ptr].state;
         if (!core_serialize(&tmp_serial_info))
            return;
         tmp_serial_info.data_const = tmp_serial_info.data;
         serial_info = &tmp_serial_info;
      }
      else
      {
         if (serial_info->size <= netplay->state_size)
            memcpy(netplay->buffer[netplay->run_ptr].state,
               serial_info->data_const, serial_info->size);
      }
   }

   /* Don't send it if we're expected to be desynced */
   if (netplay->desync)
      return;

   /* If we can't send it to the peer, loading a state was a bad idea */
   if (netplay->quirks & (
         NETPLAY_QUIRK_NO_SAVESTATES
       | NETPLAY_QUIRK_NO_TRANSMISSION))
      return;

   /* Send this to every peer */
   if (netplay->compress_nil.compression_backend)
      netplay_send_savestate(netplay, serial_info, 0, &netplay->compress_nil);
   if (netplay->compress_zlib.compression_backend)
      netplay_send_savestate(netplay, serial_info, NETPLAY_COMPRESSION_ZLIB,
         &netplay->compress_zlib);
}

void netplay_toggle_play_spectate(netplay_t *netplay)
{
   switch (netplay->self_mode)
   {
      case NETPLAY_CONNECTION_PLAYING:
      case NETPLAY_CONNECTION_SLAVE:
         {
            uint32_t device;

            /* Switch to spectator mode immediately */
            netplay->self_mode = NETPLAY_CONNECTION_SPECTATING;
            netplay->self_devices = 0;

            netplay->connected_players &= ~(1<<netplay->self_client_num);
            netplay->client_devices[netplay->self_client_num] = 0;
            for (device = 0; device < MAX_INPUT_DEVICES; device++)
               netplay->device_clients[device] &= ~(1<<netplay->self_client_num);

            /* Announce it */
            announce_play_spectate(netplay, NULL, NETPLAY_CONNECTION_SPECTATING, 0, -1);

            netplay_cmd_mode(netplay, NETPLAY_CONNECTION_SPECTATING);
         }
         break;
      case NETPLAY_CONNECTION_SPECTATING:
         /* Switch only after getting permission */
         netplay_cmd_mode(netplay, NETPLAY_CONNECTION_PLAYING);
         break;
      default:
         break;
   }
}

int16_t netplay_input_state(netplay_t *netplay,
      unsigned port, unsigned device,
      unsigned idx, unsigned id)
{
   struct delta_frame *delta;
   netplay_input_state_t istate;
   const uint32_t *curr_input_state = NULL;
   size_t ptr =
      netplay->is_replay
      ? netplay->replay_ptr
      : netplay->run_ptr;

   if (port >= MAX_INPUT_DEVICES)
      return 0;

   /* If the port doesn't seem to correspond to the device, "correct" it. This
    * is common with devices that typically only have one instance, such as
    * keyboards, mice and lightguns. */
   if (device != RETRO_DEVICE_JOYPAD &&
       (netplay->config_devices[port]&RETRO_DEVICE_MASK) != device)
   {
      for (port = 0; port < MAX_INPUT_DEVICES; port++)
      {
         if ((netplay->config_devices[port]&RETRO_DEVICE_MASK) == device)
            break;
      }
      if (port == MAX_INPUT_DEVICES)
         return 0;
   }

   delta = &netplay->buffer[ptr];
   istate = delta->resolved_input[port];
   if (!istate || !istate->used || istate->size == 0)
      return 0;

   curr_input_state = istate->data;

   switch (device)
   {
      case RETRO_DEVICE_JOYPAD:
         if (id == RETRO_DEVICE_ID_JOYPAD_MASK)
            return curr_input_state[0];
         return ((1 << id) & curr_input_state[0]) ? 1 : 0;

      case RETRO_DEVICE_ANALOG:
         if (istate->size == 3)
         {
            uint32_t state = curr_input_state[1 + idx];
            return (int16_t)(uint16_t)(state >> (id * 16));
         }
         break;
      case RETRO_DEVICE_MOUSE:
      case RETRO_DEVICE_LIGHTGUN:
         if (istate->size == 2)
         {
            if (id <= RETRO_DEVICE_ID_MOUSE_Y)
               return (int16_t)(uint16_t)(curr_input_state[1] >> (id * 16));
            return ((1 << id) & curr_input_state[0]) ? 1 : 0;
         }
         break;
      case RETRO_DEVICE_KEYBOARD:
         {
            unsigned key = netplay_key_hton(id);
            if (key != NETPLAY_KEY_UNKNOWN)
            {
               unsigned word = key / 32;
               unsigned bit = key % 32;
               if (word <= istate->size)
                  return ((UINT32_C(1) << bit) & curr_input_state[word]) ? 1 : 0;
            }
         }
         break;
      default:
         break;
   }

   return 0;
}

/**
 * get_self_input_state:
 * @netplay : pointer to netplay object
 *
 * Grab our own input state and send this frame's input state (self and remote)
 * over the network
 *
 * Returns: true (1) if successful, otherwise false (0).
 */
static bool get_self_input_state(
      bool block_libretro_input,
      netplay_t *netplay)
{
   unsigned i;
   struct delta_frame *ptr = &netplay->buffer[netplay->self_ptr];
   netplay_input_state_t istate = NULL;
   uint32_t devices, used_devices = 0, devi, dev_type, local_device;

   if (!netplay_delta_frame_ready(netplay, ptr, netplay->self_frame_count))
      return false;

   /* We've already read this frame! */
   if (ptr->have_local)
      return true;

   devices = netplay->self_devices;
   used_devices = 0;

   for (devi = 0; devi < MAX_INPUT_DEVICES; devi++)
   {
      if (!(devices & (1 << devi)))
         continue;

      /* Find an appropriate local device */
      dev_type = netplay->config_devices[devi]&RETRO_DEVICE_MASK;

      for (local_device = 0; local_device < MAX_INPUT_DEVICES; local_device++)
      {
         if (used_devices & (1 << local_device))
            continue;
         if ((netplay->config_devices[local_device]&RETRO_DEVICE_MASK) == dev_type)
            break;
      }

      if (local_device == MAX_INPUT_DEVICES)
         local_device = 0;
      used_devices |= (1 << local_device);

      istate = netplay_input_state_for(&ptr->real_input[devi],
            /* If we're a slave, we write our own input to MAX_CLIENTS to keep it separate */
            (netplay->self_mode==NETPLAY_CONNECTION_SLAVE)?MAX_CLIENTS:netplay->self_client_num,
            netplay_expected_input_size(netplay, 1 << devi),
            true, false);
      if (!istate)
         continue; /* FIXME: More severe? */

      /* First frame we always give zero input since relying on
       * input from first frame screws up when we use -F 0. */
      if (  !block_libretro_input
         && netplay->self_frame_count > 0)
      {
         uint32_t *state = istate->data;
         retro_input_state_t cb = netplay->cbs.state_cb;
         unsigned dtype = netplay->config_devices[devi]&RETRO_DEVICE_MASK;

         switch (dtype)
         {
            case RETRO_DEVICE_ANALOG:
               for (i = 0; i < 2; i++)
               {
                  int16_t tmp_x = cb(local_device,
                        RETRO_DEVICE_ANALOG, (unsigned)i, 0);
                  int16_t tmp_y = cb(local_device,
                        RETRO_DEVICE_ANALOG, (unsigned)i, 1);
                  state[1 + i] = (uint16_t)tmp_x | (((uint16_t)tmp_y) << 16);
               }
               /* no break */

            case RETRO_DEVICE_JOYPAD:
               for (i = 0; i <= RETRO_DEVICE_ID_JOYPAD_R3; i++)
               {
                  int16_t tmp = cb(local_device,
                        RETRO_DEVICE_JOYPAD, 0, (unsigned)i);
                  state[0] |= tmp ? 1 << i : 0;
               }
               break;

            case RETRO_DEVICE_MOUSE:
            case RETRO_DEVICE_LIGHTGUN:
            {
               int16_t tmp_x = cb(local_device, dtype, 0, 0);
               int16_t tmp_y = cb(local_device, dtype, 0, 1);
               state[1] = (uint16_t)tmp_x | (((uint16_t)tmp_y) << 16);
               for (i = 2;
                     i <= (unsigned)((dtype == RETRO_DEVICE_MOUSE) ?
                           RETRO_DEVICE_ID_MOUSE_HORIZ_WHEELDOWN :
                           RETRO_DEVICE_ID_LIGHTGUN_START);
                     i++)
               {
                  int16_t tmp = cb(local_device, dtype, 0,
                        (unsigned) i);
                  state[0] |= tmp ? 1 << i : 0;
               }
               break;
            }

            case RETRO_DEVICE_KEYBOARD:
            {
               unsigned key, word = 0, bit = 1;
               for (key = 1; key < NETPLAY_KEY_LAST; key++)
               {
                  state[word] |=
                        cb(local_device, RETRO_DEVICE_KEYBOARD, 0,
                              NETPLAY_KEY_NTOH(key)) ?
                                    (UINT32_C(1) << bit) : 0;
                  bit++;
                  if (bit >= 32)
                  {
                     bit = 0;
                     word++;
                     if (word >= istate->size)
                        break;
                  }
               }
               break;
            }
         }
      }
   }

   ptr->have_local = true;
   if (netplay->self_mode == NETPLAY_CONNECTION_PLAYING)
   {
      ptr->have_real[netplay->self_client_num] = true;
      netplay->read_ptr[netplay->self_client_num] = NEXT_PTR(netplay->self_ptr);
      netplay->read_frame_count[netplay->self_client_num] = netplay->self_frame_count + 1;
   }

   /* And send this input to our peers */
   for (i = 0; i < netplay->connections_size; i++)
   {
      struct netplay_connection *connection = &netplay->connections[i];
      if (connection->active && connection->mode >= NETPLAY_CONNECTION_CONNECTED)
         netplay_send_cur_input(netplay, &netplay->connections[i]);
   }

   /* Handle any delayed state changes */
   if (netplay->is_server)
      netplay_delayed_state_change(netplay);

   return true;
}

bool netplay_poll(
      bool block_libretro_input,
      void *settings_data,
      netplay_t *netplay)
{
   size_t i;
   int res;
   uint32_t client;

   if (!get_self_input_state(block_libretro_input, netplay))
      goto catastrophe;

   /* If we're not connected, we're done */
   if (netplay->self_mode == NETPLAY_CONNECTION_NONE)
      return true;

   /* Read Netplay input, block if we're configured to stall for input every
    * frame */
   netplay_update_unread_ptr(netplay);
   if (netplay->stateless_mode &&
       (netplay->connected_players>1) &&
       netplay->unread_frame_count <= netplay->run_frame_count)
      res = netplay_poll_net_input(netplay, true);
   else
      res = netplay_poll_net_input(netplay, false);
   if (res == -1)
      goto catastrophe;

   /* Resolve and/or simulate the input if we don't have real input */
   netplay_resolve_input(netplay, netplay->run_ptr, false);

   /* Handle any slaves */
   if (netplay->is_server && netplay->connected_slaves)
      netplay_handle_slaves(netplay);

   netplay_update_unread_ptr(netplay);

   /* Figure out how many frames of input latency we should be using to hide
    * network latency */
   if (netplay->frame_run_time_avg || netplay->stateless_mode)
   {
      /* FIXME: Using fixed 60fps for this calculation */
      unsigned frames_per_frame = netplay->frame_run_time_avg ?
         (16666 / netplay->frame_run_time_avg) :
         0;
      unsigned frames_ahead = (netplay->run_frame_count > netplay->unread_frame_count) ?
         (netplay->run_frame_count - netplay->unread_frame_count) :
         0;
      int input_latency_frames_min = netplay->input_latency_frames_min;
      int input_latency_frames_max = netplay->input_latency_frames_max;

      /* Assume we need a couple frames worth of time to actually run the
       * current frame */
      if (frames_per_frame > 2)
         frames_per_frame -= 2;
      else
         frames_per_frame = 0;

      /* Shall we adjust our latency? */
      if (netplay->stateless_mode)
      {
         /* In stateless mode, we adjust up if we're "close" and down if we
          * have a lot of slack */
         if (netplay->input_latency_frames < input_latency_frames_min ||
             (netplay->unread_frame_count == netplay->run_frame_count + 1 &&
              netplay->input_latency_frames < input_latency_frames_max))
            netplay->input_latency_frames++;
         else if (netplay->input_latency_frames > input_latency_frames_max ||
             (netplay->unread_frame_count > netplay->run_frame_count + 2 &&
              netplay->input_latency_frames > input_latency_frames_min))
            netplay->input_latency_frames--;
      }
      else if (netplay->input_latency_frames < input_latency_frames_min ||
               (frames_per_frame < frames_ahead &&
                netplay->input_latency_frames < input_latency_frames_max))
      {
         /* We can't hide this much network latency with replay, so hide some
          * with input latency */
         netplay->input_latency_frames++;
      }
      else if (netplay->input_latency_frames > input_latency_frames_max ||
               (frames_per_frame > frames_ahead + 2 &&
                netplay->input_latency_frames > input_latency_frames_min))
      {
         /* We don't need this much latency (any more) */
         netplay->input_latency_frames--;
      }
   }

   /* If we're stalled, consider unstalling */
   switch (netplay->stall)
   {
      case NETPLAY_STALL_RUNNING_FAST:
         if (netplay->unread_frame_count + NETPLAY_MAX_STALL_FRAMES - 2
               > netplay->self_frame_count)
         {
            netplay->stall = NETPLAY_STALL_NONE;
            for (i = 0; i < netplay->connections_size; i++)
            {
               struct netplay_connection *connection = &netplay->connections[i];
               if (connection->active && connection->stall)
                  connection->stall = NETPLAY_STALL_NONE;
            }
         }
         break;

      case NETPLAY_STALL_SPECTATOR_WAIT:
         if (netplay->self_mode == NETPLAY_CONNECTION_PLAYING || netplay->unread_frame_count > netplay->self_frame_count)
            netplay->stall = NETPLAY_STALL_NONE;
         break;

      case NETPLAY_STALL_INPUT_LATENCY:
         /* Just let it recalculate momentarily */
         netplay->stall = NETPLAY_STALL_NONE;
         break;

      case NETPLAY_STALL_SERVER_REQUESTED:
         /* See if the stall is done */
         if (netplay->connections[0].stall_frame == 0)
         {
            /* Stop stalling! */
            netplay->connections[0].stall = NETPLAY_STALL_NONE;
            netplay->stall = NETPLAY_STALL_NONE;
         }
         else
            netplay->connections[0].stall_frame--;
         break;
      case NETPLAY_STALL_NO_CONNECTION:
         /* We certainly haven't fixed this */
         break;
      default: /* not stalling */
         break;
   }

   /* If we're not stalled, consider stalling */
   if (!netplay->stall)
   {
      /* Have we not read enough latency frames? */
      if (netplay->self_mode == NETPLAY_CONNECTION_PLAYING &&
          netplay->connected_players &&
          netplay->run_frame_count + netplay->input_latency_frames > netplay->self_frame_count)
      {
         netplay->stall = NETPLAY_STALL_INPUT_LATENCY;
         netplay->stall_time = 0;
      }

      /* Are we too far ahead? */
      if (netplay->unread_frame_count + NETPLAY_MAX_STALL_FRAMES
            <= netplay->self_frame_count)
      {
         netplay->stall = NETPLAY_STALL_RUNNING_FAST;
         netplay->stall_time = cpu_features_get_time_usec();

         /* Figure out who to blame */
         if (netplay->is_server)
         {
            for (client = 1; client < MAX_CLIENTS; client++)
            {
               struct netplay_connection *connection;
               if (!(netplay->connected_players & (1 << client)))
                  continue;
               if (netplay->read_frame_count[client] > netplay->unread_frame_count)
                  continue;
               connection = &netplay->connections[client-1];
               if (connection->active &&
                   connection->mode == NETPLAY_CONNECTION_PLAYING)
               {
                  connection->stall = NETPLAY_STALL_RUNNING_FAST;
                  connection->stall_time = netplay->stall_time;
               }
            }
         }

      }

      /* If we're a spectator, are we ahead at all? */
      if (!netplay->is_server &&
          (netplay->self_mode == NETPLAY_CONNECTION_SPECTATING ||
           netplay->self_mode == NETPLAY_CONNECTION_SLAVE) &&
          netplay->unread_frame_count <= netplay->self_frame_count)
      {
         netplay->stall = NETPLAY_STALL_SPECTATOR_WAIT;
         netplay->stall_time = cpu_features_get_time_usec();
      }
   }

   /* If we're stalling, consider disconnection */
   if (netplay->stall && netplay->stall_time)
   {
      retro_time_t now = cpu_features_get_time_usec();

      /* Don't stall out while they're paused */
      if (netplay->remote_paused)
         netplay->stall_time = now;
      else if (now - netplay->stall_time >=
               (netplay->is_server ? MAX_SERVER_STALL_TIME_USEC :
                MAX_CLIENT_STALL_TIME_USEC))
      {
         /* Stalled out! */
         if (netplay->is_server)
         {
            bool fixed = false;
            for (i = 0; i < netplay->connections_size; i++)
            {
               struct netplay_connection *connection = &netplay->connections[i];
               if (connection->active &&
                   connection->mode == NETPLAY_CONNECTION_PLAYING &&
                   connection->stall)
               {
                  netplay_hangup(netplay, connection);
                  fixed = true;
               }
            }

            if (fixed)
            {
               /* Not stalled now :) */
               netplay->stall = NETPLAY_STALL_NONE;
               return true;
            }
         }
         else
            goto catastrophe;
         return false;
      }
   }

   return true;

catastrophe:
   for (i = 0; i < netplay->connections_size; i++)
      netplay_hangup(netplay, &netplay->connections[i]);
   return false;
}

bool netplay_is_alive(netplay_t *netplay)
{
   return (netplay->is_server) ||
      (!netplay->is_server &&
       netplay->self_mode >= NETPLAY_CONNECTION_CONNECTED);
}

bool netplay_should_skip(netplay_t *netplay)
{
   if (!netplay)
      return false;
   return netplay->is_replay
      && (netplay->self_mode >= NETPLAY_CONNECTION_CONNECTED);
}

void netplay_post_frame(netplay_t *netplay)
{
   size_t i;

   netplay_update_unread_ptr(netplay);
   netplay_sync_post_frame(netplay, false);

   for (i = 0; i < netplay->connections_size; i++)
   {
      struct netplay_connection *connection = &netplay->connections[i];
      if (connection->active &&
          !netplay_send_flush(&connection->send_packet_buffer, connection->fd,
             false))
         netplay_hangup(netplay, connection);
   }
}

bool init_netplay_deferred(const char *server, unsigned port, const char *mitm_session)
{
2021-12-19 12:58:01 -03:00
|
|
|
net_driver_state_t *net_st = &networking_driver_st;
|
|
|
|
|
|
|
|
if (string_is_empty(server) || !port)
|
2021-11-05 04:42:03 +01:00
|
|
|
{
|
|
|
|
net_st->netplay_client_deferred = false;
|
2021-12-19 12:58:01 -03:00
|
|
|
return false;
|
|
|
|
}
|
|
|
|
|
|
|
|
strlcpy(net_st->server_address_deferred, server,
|
|
|
|
sizeof(net_st->server_address_deferred));
|
|
|
|
net_st->server_port_deferred = port;
|
|
|
|
strlcpy(net_st->server_session_deferred, mitm_session,
|
|
|
|
sizeof(net_st->server_session_deferred));
|
|
|
|
net_st->netplay_client_deferred = true;
|
2021-11-05 04:42:03 +01:00
|
|
|
|
2021-12-19 12:58:01 -03:00
|
|
|
return true;
|
2021-11-05 04:42:03 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
/**
|
|
|
|
* input_poll_net
|
|
|
|
*
|
|
|
|
* Poll the network if necessary.
|
|
|
|
*/
|
|
|
|
void input_poll_net(void)
|
|
|
|
{
|
|
|
|
net_driver_state_t *net_st = &networking_driver_st;
|
|
|
|
netplay_t *netplay = net_st->data;
|
|
|
|
if (!netplay_should_skip(netplay) && netplay && netplay->can_poll)
|
|
|
|
{
|
|
|
|
input_driver_state_t
|
|
|
|
*input_st = input_state_get_ptr();
|
|
|
|
netplay->can_poll = false;
|
|
|
|
netplay_poll(
|
|
|
|
input_st->block_libretro_input,
|
|
|
|
config_get_ptr(),
|
|
|
|
netplay);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Netplay polling callbacks */
|
|
|
|
void video_frame_net(const void *data, unsigned width,
|
|
|
|
unsigned height, size_t pitch)
|
|
|
|
{
|
|
|
|
net_driver_state_t *net_st = &networking_driver_st;
|
|
|
|
netplay_t *netplay = net_st->data;
|
|
|
|
if (!netplay_should_skip(netplay))
|
|
|
|
netplay->cbs.frame_cb(data, width, height, pitch);
|
|
|
|
}
|
|
|
|
|
|
|
|
void audio_sample_net(int16_t left, int16_t right)
|
|
|
|
{
|
|
|
|
net_driver_state_t *net_st = &networking_driver_st;
|
|
|
|
netplay_t *netplay = net_st->data;
|
|
|
|
if (!netplay_should_skip(netplay) && !netplay->stall)
|
|
|
|
netplay->cbs.sample_cb(left, right);
|
|
|
|
}
|
|
|
|
|
|
|
|
size_t audio_sample_batch_net(const int16_t *data, size_t frames)
|
|
|
|
{
|
|
|
|
net_driver_state_t *net_st = &networking_driver_st;
|
|
|
|
netplay_t *netplay = net_st->data;
|
|
|
|
if (!netplay_should_skip(netplay) && !netplay->stall)
|
|
|
|
return netplay->cbs.sample_batch_cb(data, frames);
|
|
|
|
return frames;
|
|
|
|
}
|
|
|
|
|
|
|
|
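/* Task callback for the lobby server announce request: parses the
 * newline-separated key=value pairs returned by the lobby server and
 * stores them into the host room state. */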
static void netplay_announce_cb(retro_task_t *task,
|
2021-12-19 12:58:01 -03:00
|
|
|
void *task_data, void *user_data, const char *error)
|
2021-11-05 04:42:03 +01:00
|
|
|
{
|
2021-12-19 12:58:01 -03:00
|
|
|
char *buf_start, *buf;
|
|
|
|
size_t remaining;
|
|
|
|
net_driver_state_t *net_st = &networking_driver_st;
|
|
|
|
struct netplay_room *host_room = &net_st->host_room;
|
|
|
|
http_transfer_data_t *data = task_data;
|
2021-12-21 11:58:25 -03:00
|
|
|
bool first = !host_room->id;
|
2021-11-05 04:42:03 +01:00
|
|
|
|
2021-12-19 12:58:01 -03:00
|
|
|
if (error)
|
|
|
|
return;
|
|
|
|
if (!data || !data->data || !data->len)
|
|
|
|
return;
|
|
|
|
if (data->status != 200)
|
|
|
|
return;
|
2021-11-05 04:42:03 +01:00
|
|
|
|
2021-12-19 12:58:01 -03:00
|
|
|
buf_start = malloc(data->len);
|
|
|
|
if (!buf_start)
|
|
|
|
return;
|
|
|
|
memcpy(buf_start, data->data, data->len);
|
2021-11-05 04:42:03 +01:00
|
|
|
|
2021-12-19 12:58:01 -03:00
|
|
|
buf = buf_start;
|
|
|
|
remaining = data->len;
|
|
|
|
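/* Walk the response line by line: each line holds one key=value pair. */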
do
|
|
|
|
{
|
|
|
|
char *lnbreak, *delim;
|
|
|
|
char *key, *value;
|
2021-11-05 04:42:03 +01:00
|
|
|
|
2021-12-19 12:58:01 -03:00
|
|
|
lnbreak = (char*) memchr(buf, '\n', remaining);
|
|
|
|
if (!lnbreak)
|
|
|
|
break;
|
|
|
|
*lnbreak++ = '\0';
|
2021-11-05 04:42:03 +01:00
|
|
|
|
2021-12-19 12:58:01 -03:00
|
|
|
delim = (char*) strchr(buf, '=');
|
|
|
|
if (delim)
|
2021-11-05 04:42:03 +01:00
|
|
|
{
|
         *delim++ = '\0';

         key   = buf;
         value = delim;

         if (!string_is_empty(key) && !string_is_empty(value))
         {
            if (string_is_equal(key, "id"))
               sscanf(value, "%i", &host_room->id);
            else if (string_is_equal(key, "username"))
               strlcpy(host_room->nickname, value, sizeof(host_room->nickname));
            else if (string_is_equal(key, "ip"))
               strlcpy(host_room->address, value, sizeof(host_room->address));
            else if (string_is_equal(key, "port"))
               sscanf(value, "%i", &host_room->port);
            else if (string_is_equal(key, "core_name"))
               strlcpy(host_room->corename, value, sizeof(host_room->corename));
            else if (string_is_equal(key, "frontend"))
               strlcpy(host_room->frontend, value, sizeof(host_room->frontend));
            else if (string_is_equal(key, "core_version"))
               strlcpy(host_room->coreversion, value, sizeof(host_room->coreversion));
            else if (string_is_equal(key, "game_name"))
               strlcpy(host_room->gamename, value, sizeof(host_room->gamename));
            else if (string_is_equal(key, "game_crc"))
               sscanf(value, "%08d", &host_room->gamecrc);
            else if (string_is_equal(key, "host_method"))
               sscanf(value, "%i", &host_room->host_method);
            else if (string_is_equal(key, "has_password"))
               host_room->has_password = string_is_equal_noncase(value, "true") ||
                  string_is_equal(value, "1");
            else if (string_is_equal(key, "has_spectate_password"))
               host_room->has_spectate_password = string_is_equal_noncase(value, "true") ||
                  string_is_equal(value, "1");
            else if (string_is_equal(key, "retroarch_version"))
               strlcpy(host_room->retroarch_version, value,
                  sizeof(host_room->retroarch_version));
            else if (string_is_equal(key, "country"))
               strlcpy(host_room->country, value, sizeof(host_room->country));
            else if (string_is_equal(key, "connectable"))
               host_room->connectable = string_is_equal_noncase(value, "true") ||
                  string_is_equal(value, "1");
         }
      }
      remaining -= (size_t)lnbreak - (size_t)buf;
      buf        = lnbreak;
   } while (remaining);

   free(buf_start);

   /* Warn only on the first announce. */
   if (!host_room->connectable && first)
   {
      const char *dmsg = msg_hash_to_str(MSG_ROOM_NOT_CONNECTABLE);

      RARCH_WARN("[Netplay] %s\n", dmsg);
      runloop_msg_queue_push(dmsg, 1, 180, false, NULL,
         MESSAGE_QUEUE_ICON_DEFAULT, MESSAGE_QUEUE_CATEGORY_INFO);
   }

#ifdef HAVE_DISCORD
   if (discord_state_get_ptr()->inited)
   {
      discord_userdata_t userdata;
      userdata.status = DISCORD_PRESENCE_NETPLAY_HOSTING;
      command_event(CMD_EVENT_DISCORD_UPDATE, &userdata);
   }
#endif
}

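For reference, the parsing loop in the callback above consumes newline-separated key=value pairs; a hypothetical lobby announce response (keys taken from the code, all values invented for illustration) would look like this:

static const char example_announce_response[] =
   "id=1\n"
   "username=player1\n"
   "ip=192.0.2.10\n"
   "port=55435\n"
   "core_name=ExampleCore\n"
   "frontend=linux\n"
   "core_version=1.0\n"
   "game_name=ExampleGame\n"
   "game_crc=00000000\n"
   "host_method=0\n"
   "has_password=false\n"
   "has_spectate_password=false\n"
   "retroarch_version=1.15.0\n"
   "country=us\n"
   "connectable=true\n";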
static void netplay_announce(netplay_t *netplay)
{
   /* Locals used to build the netplay lobby announce request. */
   char buf[8192];
   const frontend_ctx_driver_t *frontend_drv;
   char frontend_architecture[PATH_MAX_LENGTH];
   char frontend_architecture_tmp[32];
   uint32_t content_crc;
   char *username = NULL;
   char *corename = NULL;
   char *coreversion = NULL;
   char *gamename = NULL;
   char *subsystemname = NULL;
   char *frontend_ident = NULL;
   char *mitm_session = NULL;
   const char *mitm_custom_addr = "";
   int mitm_custom_port = 0;
   int is_mitm = 0;
   settings_t *settings = config_get_ptr();
   net_driver_state_t *net_st = &networking_driver_st;
   struct netplay_room *host_room = &net_st->host_room;
   struct retro_system_info *system = &runloop_state_get_ptr()->system.info;
   struct string_list *subsystem = path_get_subsystem_list();
#ifndef NETPLAY_TEST_BUILD
   const char *url = "http://lobby.libretro.com/add";
#else
   const char *url = "http://lobbytest.libretro.com/add";
#endif
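/* Illustrative only: a minimal sketch (shown out of line) of how the
 * URL-encoded fields below might be combined into the form-encoded body
 * POSTed to the "/add" endpoint selected above. The field names
 * (username, core_name, ...) are assumptions for the sake of the example
 * and are not taken from this excerpt. Requires <stdio.h>. */
static void build_announce_body_sketch(char *body, size_t len,
      const char *username, const char *corename, const char *coreversion,
      const char *gamename, const char *subsystemname)
{
   /* Every value is expected to be URL-encoded already (see the
    * net_http_urlencode() calls below), so the fields can simply be
    * concatenated into a key=value&key=value body. */
   snprintf(body, len,
         "username=%s&core_name=%s&core_version=%s"
         "&game_name=%s&subsystem_name=%s",
         username, corename, coreversion, gamename, subsystemname);
}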
   net_http_urlencode(&username, netplay->nick);
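   /* net_http_urlencode() appears to allocate a percent-encoded copy of its
    * input into the pointer it is handed (hence the char * locals above being
    * initialized to NULL); e.g. a nick of "Player One" would come back as
    * "Player%20One". The exact allocation contract is an assumption based on
    * this usage pattern and is not shown in this excerpt. */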
   net_http_urlencode(&corename, system->library_name);
   net_http_urlencode(&coreversion, system->library_version);

   if (subsystem && subsystem->size > 0)
   {
      unsigned i;
      buf[0] = '\0';
      for (i = 0;;)
      {
         strlcat(buf, path_basename(subsystem->elems[i++].data), sizeof(buf));
         if (i >= subsystem->size)
            break;
         strlcat(buf, "|", sizeof(buf));
      }
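      /* Illustrative example: with two subsystem content files named
       * "disk1.d64" and "disk2.d64", the loop above leaves
       * buf == "disk1.d64|disk2.d64", which is URL-encoded below. */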
      net_http_urlencode(&gamename, buf);
      net_http_urlencode(&subsystemname, path_get(RARCH_PATH_SUBSYSTEM));
2021-12-19 12:58:01 -03:00
|
|
|
|
2021-11-05 04:42:03 +01:00
|
|
|
content_crc = 0;
|
|
|
|
}
|
|
|
|
else
|
|
|
|
{
|
|
|
|
const char *base = path_basename(path_get(RARCH_PATH_BASENAME));
|
|
|
|
|
|
|
|
net_http_urlencode(&gamename,
|
|
|
|
!string_is_empty(base) ? base : "N/A");
|
|
|
|
/* TODO/FIXME - subsystem should be implemented later? */
|
|
|
|
net_http_urlencode(&subsystemname, "N/A");
|
Netplay Stuff (#13375)
2021-12-19 12:58:01 -03:00
|
|
|
|
|
|
|
content_crc = content_get_crc();
|
2021-11-05 04:42:03 +01:00
|
|
|
}
|
|
|
|
|
|
|
|
frontend_drv =
|
|
|
|
(const frontend_ctx_driver_t*)frontend_driver_get_cpu_architecture_str(
|
|
|
|
frontend_architecture_tmp, sizeof(frontend_architecture_tmp));
|
Netplay Stuff (#13375)
2021-12-19 12:58:01 -03:00
|
|
|
if (frontend_drv)
|
|
|
|
snprintf(frontend_architecture,
|
2021-11-05 04:42:03 +01:00
|
|
|
sizeof(frontend_architecture),
|
|
|
|
"%s %s",
|
|
|
|
frontend_drv->ident,
|
|
|
|
frontend_architecture_tmp);
|
|
|
|
else
|
Netplay Stuff (#13375)
2021-12-19 12:58:01 -03:00
|
|
|
strlcpy(frontend_architecture,
|
|
|
|
"N/A",
|
|
|
|
sizeof(frontend_architecture));
|
2021-11-05 04:42:03 +01:00
|
|
|
net_http_urlencode(&frontend_ident, frontend_architecture);
|
|
|
|
|
2021-12-23 09:54:52 -03:00
|
|
|
if (!string_is_empty(host_room->mitm_session))
|
|
|
|
{
|
|
|
|
is_mitm = 1;
|
|
|
|
net_http_urlencode(&mitm_session, host_room->mitm_session);
|
|
|
|
|
|
|
|
if (string_is_equal(host_room->mitm_handle, "custom"))
|
|
|
|
{
|
|
|
|
mitm_custom_addr = host_room->mitm_address;
|
|
|
|
mitm_custom_port = host_room->mitm_port;
|
|
|
|
}
|
|
|
|
}
|
2021-12-23 17:28:30 -03:00
|
|
|
else
|
|
|
|
{
|
|
|
|
net_http_urlencode(&mitm_session, "");
|
|
|
|
}
|
Netplay Stuff (#13375)
2021-12-19 12:58:01 -03:00
|
|
|
|
|
|
|
snprintf(buf, sizeof(buf),
|
|
|
|
"username=%s&"
|
|
|
|
"core_name=%s&"
|
|
|
|
"core_version=%s&"
|
|
|
|
"game_name=%s&"
|
|
|
|
"game_crc=%08lX&"
|
|
|
|
"port=%hu&"
|
|
|
|
"mitm_server=%s&"
|
|
|
|
"has_password=%d&"
|
|
|
|
"has_spectate_password=%d&"
|
|
|
|
"force_mitm=%d&"
|
|
|
|
"retroarch_version=%s&"
|
|
|
|
"frontend=%s&"
|
|
|
|
"subsystem_name=%s&"
|
2021-12-23 09:54:52 -03:00
|
|
|
"mitm_session=%s&"
|
|
|
|
"mitm_custom_addr=%s&"
|
|
|
|
"mitm_custom_port=%d",
|
Netplay Stuff (#13375)
2021-12-19 12:58:01 -03:00
|
|
|
username,
|
|
|
|
corename,
|
|
|
|
coreversion,
|
|
|
|
gamename,
|
|
|
|
(unsigned long)content_crc,
|
|
|
|
netplay->ext_tcp_port,
|
|
|
|
host_room->mitm_handle,
|
|
|
|
!string_is_empty(settings->paths.netplay_password) ? 1 : 0,
|
|
|
|
!string_is_empty(settings->paths.netplay_spectate_password) ? 1 : 0,
|
2021-12-23 09:54:52 -03:00
|
|
|
is_mitm,
|
Netplay Stuff (#13375)
2021-12-19 12:58:01 -03:00
|
|
|
PACKAGE_VERSION,
|
|
|
|
frontend_ident,
|
|
|
|
subsystemname,
|
2021-12-23 09:54:52 -03:00
|
|
|
mitm_session,
|
|
|
|
mitm_custom_addr,
|
|
|
|
mitm_custom_port);
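For reference, the announcement body assembled by the snprintf above has the following shape. The values are illustrative placeholders (the real ones are URL-encoded where applicable), and the body is a single line; it is wrapped here only for readability.

```
username=Player1&core_name=SomeCore&core_version=1.0&
game_name=SomeGame&game_crc=0011AABB&port=55435&
mitm_server=nyc&has_password=0&has_spectate_password=0&
force_mitm=0&retroarch_version=1.10.0&frontend=unix%20x86_64&
subsystem_name=N%2FA&mitm_session=&mitm_custom_addr=&mitm_custom_port=0
```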
|
2021-11-05 04:42:03 +01:00
|
|
|
|
2021-12-04 02:34:21 +01:00
|
|
|
free(username);
|
|
|
|
free(corename);
|
Netplay Stuff (#13375)
2021-12-19 12:58:01 -03:00
|
|
|
free(coreversion);
|
2021-12-04 02:34:21 +01:00
|
|
|
free(gamename);
|
|
|
|
free(subsystemname);
|
|
|
|
free(frontend_ident);
|
Netplay Stuff (#13375)
2021-12-19 12:58:01 -03:00
|
|
|
free(mitm_session);
|
|
|
|
|
|
|
|
task_push_http_post_transfer(url, buf, true, NULL,
|
|
|
|
netplay_announce_cb, NULL);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void netplay_mitm_query_cb(retro_task_t *task, void *task_data,
|
|
|
|
void *user_data, const char *error)
|
|
|
|
{
|
|
|
|
char *buf_start, *buf;
|
|
|
|
size_t remaining;
|
|
|
|
net_driver_state_t *net_st = &networking_driver_st;
|
|
|
|
struct netplay_room *host_room = &net_st->host_room;
|
|
|
|
http_transfer_data_t *data = task_data;
|
|
|
|
|
|
|
|
if (error)
|
|
|
|
return;
|
|
|
|
if (!data || !data->data || !data->len)
|
|
|
|
return;
|
|
|
|
if (data->status != 200)
|
|
|
|
return;
|
|
|
|
|
|
|
|
buf_start = malloc(data->len);
|
|
|
|
if (!buf_start)
|
|
|
|
return;
|
|
|
|
memcpy(buf_start, data->data, data->len);
|
|
|
|
|
|
|
|
buf = buf_start;
|
|
|
|
remaining = data->len;
|
|
|
|
do
|
|
|
|
{
|
|
|
|
char *lnbreak, *delim;
|
|
|
|
char *key, *value;
|
|
|
|
|
|
|
|
lnbreak = (char*) memchr(buf, '\n', remaining);
|
|
|
|
if (!lnbreak)
|
|
|
|
break;
|
|
|
|
*lnbreak++ = '\0';
|
|
|
|
|
|
|
|
delim = (char*) strchr(buf, '=');
|
|
|
|
if (delim)
|
|
|
|
{
|
|
|
|
*delim++ = '\0';
|
|
|
|
|
|
|
|
key = buf;
|
|
|
|
value = delim;
|
|
|
|
|
|
|
|
if (!string_is_empty(key) && !string_is_empty(value))
|
|
|
|
{
|
|
|
|
if (string_is_equal(key, "tunnel_addr"))
|
|
|
|
strlcpy(host_room->mitm_address, value,
|
|
|
|
sizeof(host_room->mitm_address));
|
|
|
|
else if (string_is_equal(key, "tunnel_port"))
|
|
|
|
sscanf(value, "%i", &host_room->mitm_port);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
remaining -= (size_t)lnbreak - (size_t)buf;
|
|
|
|
buf = lnbreak;
|
|
|
|
} while (remaining);
|
|
|
|
|
|
|
|
free(buf_start);
|
|
|
|
}
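The loop above walks newline-terminated key=value pairs and only reads the tunnel_addr and tunnel_port keys, so the lobby's reply is expected to look roughly like this (address and port are placeholders):

```
tunnel_addr=relay.example.net
tunnel_port=55435
```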
|
|
|
|
|
|
|
|
static bool netplay_mitm_query(const char *mitm_name)
|
|
|
|
{
|
|
|
|
net_driver_state_t *net_st = &networking_driver_st;
|
|
|
|
struct netplay_room *host_room = &net_st->host_room;
|
|
|
|
|
|
|
|
if (string_is_empty(mitm_name))
|
|
|
|
return false;
|
|
|
|
|
2021-12-23 09:54:52 -03:00
|
|
|
/* We don't need to query,
|
|
|
|
if we are using a custom relay server. */
|
|
|
|
if (string_is_equal(mitm_name, "custom"))
|
|
|
|
{
|
|
|
|
char addr[256];
|
|
|
|
char sess[sizeof(addr)];
|
|
|
|
unsigned port;
|
|
|
|
settings_t *settings = config_get_ptr();
|
|
|
|
const char *custom_server =
|
|
|
|
settings->paths.netplay_custom_mitm_server;
|
Netplay Stuff (#13375)
2021-12-19 12:58:01 -03:00
|
|
|
|
2021-12-23 09:54:52 -03:00
|
|
|
addr[0] = '\0';
|
|
|
|
sess[0] = '\0';
|
|
|
|
port = 0;
|
|
|
|
|
|
|
|
netplay_decode_hostname(custom_server,
|
|
|
|
addr, &port, sess, sizeof(addr));
|
Netplay Stuff (#13375)
      strlcpy(host_room->mitm_address, addr,
         sizeof(host_room->mitm_address));
      host_room->mitm_port = port;
   }
   else
   {
      char query[512];
#ifndef NETPLAY_TEST_BUILD
      const char *url = "http://lobby.libretro.com/tunnel";
#else
      const char *url = "http://lobbytest.libretro.com/tunnel";
#endif

      snprintf(query, sizeof(query), "%s?name=%s", url, mitm_name);

      if (!task_push_http_transfer(query,
            true, NULL, netplay_mitm_query_cb, NULL))
         return false;

      /* Make sure we have the tunnel address before continuing. */
      task_queue_wait(NULL, NULL);
   }
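As a usage illustration of the query built above: with a hypothetical tunnel name of "nyc", the snprintf produces http://lobby.libretro.com/tunnel?name=nyc (or the lobbytest host in test builds). A standalone sketch:

#include <stdio.h>

int main(void)
{
   char query[512];
   const char *url       = "http://lobby.libretro.com/tunnel";
   const char *mitm_name = "nyc"; /* hypothetical tunnel name */

   snprintf(query, sizeof(query), "%s?name=%s", url, mitm_name);
   printf("%s\n", query); /* prints http://lobby.libretro.com/tunnel?name=nyc */
   return 0;
}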
   return !string_is_empty(host_room->mitm_address) &&
      host_room->mitm_port;
}

int16_t input_state_net(unsigned port, unsigned device,
      unsigned idx, unsigned id)
{
   net_driver_state_t *net_st = &networking_driver_st;
   netplay_t *netplay = net_st->data;

   if (netplay)
   {
      if (netplay_is_alive(netplay))
         return netplay_input_state(netplay, port, device, idx, id);
      return netplay->cbs.state_cb(port, device, idx, id);
   }
   return 0;
}

/* ^^^ Netplay polling callbacks */

/**
 * netplay_disconnect
 * @netplay : pointer to netplay object
 *
 * Disconnect netplay. Hangs up every active connection,
 * then deinitializes netplay.
 **/
static void netplay_disconnect(netplay_t *netplay)
{
   size_t i;

   for (i = 0; i < netplay->connections_size; i++)
      netplay_hangup(netplay, &netplay->connections[i]);

   deinit_netplay();

#ifdef HAVE_DISCORD
   if (discord_state_get_ptr()->inited)
   {
      discord_userdata_t userdata;
      userdata.status = DISCORD_PRESENCE_NETPLAY_NETPLAY_STOPPED;
      command_event(CMD_EVENT_DISCORD_UPDATE, &userdata);
   }
#endif
}

/**
 * netplay_pre_frame:
 * @netplay : pointer to netplay object
 *
 * Pre-frame for Netplay.
 * Call this before running retro_run().
 *
 * Returns: true (1) if the frontend is cleared to emulate the frame, false (0)
 * if we're stalled or paused.
 **/
static bool netplay_pre_frame(
      bool netplay_public_announce,
      bool netplay_use_mitm_server,
      netplay_t *netplay)
{
   bool netplay_force_disconnect = false;
   bool sync_stalled = false;
   net_driver_state_t *net_st = &networking_driver_st;

   retro_assert(netplay);
   if (netplay->is_server)
   {
      if (netplay_public_announce)
      {
         if (++net_st->reannounce % 1200 == 0)
            netplay_announce(netplay);
      }
      /* Make sure that if announcement is turned on mid-game, it gets announced */
      else
      {
         net_st->reannounce = -1;
      }
   }
   else if (netplay->is_connected)
   {
      if (++net_st->reping % 180 == 0)
         request_ping(netplay, &netplay->connections[0]);
   }

   /* FIXME: This is an ugly way to learn we're not paused anymore */
   if (netplay->local_paused)
      netplay_frontend_paused(netplay, false);

   /* Are we ready now? */
   if (netplay->quirks & NETPLAY_QUIRK_INITIALIZATION)
      netplay_try_init_serialization(netplay);
#ifdef HAVE_NETPLAYDISCOVERY
   if (netplay->is_server && !netplay_use_mitm_server)
   {
      /* Advertise our server */
      if (net_st->lan_ad_server_fd >= 0 ||
            init_lan_ad_server_socket())
         netplay_lan_ad_server(netplay);
   }
#endif

   sync_stalled = !netplay_sync_pre_frame(netplay, &netplay_force_disconnect);
   if (netplay_force_disconnect)
   {
      netplay_disconnect(netplay);
      return true;
   }

   /* If we're disconnected, deinitialize */
   if (!netplay->is_server && !netplay->connections[0].active)
   {
      netplay_disconnect(netplay);
      return true;
   }

   if (sync_stalled ||
         ((!netplay->is_server || (netplay->connected_players > 1)) &&
          (netplay->stall || netplay->remote_paused)))
   {
      /* We may have received data even if we're stalled, so run post-frame
       * sync */
      netplay_sync_post_frame(netplay, true);
      return false;
   }
   return true;
}
|
|
|
|
|
|
|
void deinit_netplay(void)
|
|
|
|
{
|
   size_t i;
   net_driver_state_t *net_st = &networking_driver_st;

   if (net_st->data)
   {
      if (net_st->data->nat_traversal)
         netplay_deinit_nat_traversal();

      netplay_free(net_st->data);
      net_st->data              = NULL;
      net_st->netplay_enabled   = false;
      net_st->netplay_is_client = false;
#ifdef HAVE_NETPLAYDISCOVERY
      deinit_lan_ad_server_socket();
#endif
   }

   for (i = 0; i < ARRAY_SIZE(net_st->chat.messages); i++)
   {
      *net_st->chat.messages[i].data.nick  = '\0';
      *net_st->chat.messages[i].data.msg   = '\0';
      net_st->chat.messages[i].data.frames = 0;
   }
   core_unset_netplay_callbacks();
}
bool init_netplay(const char *server, unsigned port, const char *mitm_session)
{
   size_t i;
   struct retro_callbacks cbs    = {0};
   uint64_t serialization_quirks = 0;
   uint64_t quirks               = 0;
   settings_t *settings          = config_get_ptr();
   net_driver_state_t *net_st    = &networking_driver_st;
   const char *mitm              = NULL;
   if (!net_st->netplay_enabled)
   {
      net_st->netplay_client_deferred = false;
      return false;
   }

   core_set_default_callbacks(&cbs);
   if (!core_set_netplay_callbacks())
   {
      net_st->netplay_client_deferred = false;
      return false;
Host -> Instead of listening and accepting connections, it connects to the tunnel server, requests a new session and then monitor this connection for new linking requests. Once a request is received, it establishes a new connection to the tunnel server for linking with a client. The tunnel server's address and port are obtained by querying the lobby server. The host will publish its session id together with the rest of its info to the lobby server.
Client -> It connects to the tunnel server and then sends the session id of the host it wants to connect to. A host's session id is obtained from the json data sent by the lobby server.
Improvements (from current MITM system):
No longer a risk of TCP port exhaustion; we only use one port now at the tunnel server.
Very little cpu usage. About 95% net I/O bound now.
Future backwards compatible with any and all changes to netplay as it no longer runs any netplay logic at MITM servers.
No longer operates the host in client mode, which was a source of many of the current problems.
Cleaner and more maintainable system and code.
Notable functions:
netplay_mitm_query -> Grabs the tunnel's address and port from the lobby server.
init_tcp_socket -> Handles the creation and operation mode of the TCP socket based on whether it's host, host+MITM or client.
handle_mitm_connection -> Creates and completes linking connections and replies to ping commands (only 1 of each per call to not affect performance).
## MISC
Ping Limiter: If a client's estimated latency to the server is higher than this value, connection will be dropped just before finishing the netplay handshake.
Ping Counter: A ping counter (similar to the FPS one) can be shown in the bottom right corner of the screen, if you are connected to a host.
LAN Discovery: Refactored and moved to its own "Refresh Netplay LAN List" button.
## FIXES
Many minor fixes to the current netplay implementation are also included.
* Remove NETPLAY_TEST_BUILD
2021-12-19 12:58:01 -03:00
|
|
|
}
   /* Map the core's quirks to our quirks */
   serialization_quirks = core_serialization_quirks();

   /* Quirks we don't support! Just disable everything. */
   if (serialization_quirks & ~((uint64_t) NETPLAY_QUIRK_MAP_UNDERSTOOD))
      quirks |= NETPLAY_QUIRK_NO_SAVESTATES;

   if (serialization_quirks & NETPLAY_QUIRK_MAP_NO_SAVESTATES)
      quirks |= NETPLAY_QUIRK_NO_SAVESTATES;
   if (serialization_quirks & NETPLAY_QUIRK_MAP_NO_TRANSMISSION)
      quirks |= NETPLAY_QUIRK_NO_TRANSMISSION;
   if (serialization_quirks & NETPLAY_QUIRK_MAP_INITIALIZATION)
      quirks |= NETPLAY_QUIRK_INITIALIZATION;
   if (serialization_quirks & NETPLAY_QUIRK_MAP_ENDIAN_DEPENDENT)
      quirks |= NETPLAY_QUIRK_ENDIAN_DEPENDENT;
   if (serialization_quirks & NETPLAY_QUIRK_MAP_PLATFORM_DEPENDENT)
      quirks |= NETPLAY_QUIRK_PLATFORM_DEPENDENT;
   if (!net_st->netplay_is_client)
   {
      struct netplay_room *host_room = &net_st->host_room;

      memset(host_room, 0, sizeof(*host_room));
      host_room->connectable = true;
      server = NULL;

      if (settings->bools.netplay_use_mitm_server)
      {
         const char *mitm_handle = settings->arrays.netplay_mitm_server;

         if (netplay_mitm_query(mitm_handle))
         {
            /* We want to cache the MITM server handle in order to
               prevent sending the wrong one to the lobby server if
               we change its config mid-session. */
            strlcpy(host_room->mitm_handle, mitm_handle,
                  sizeof(host_room->mitm_handle));

            mitm = host_room->mitm_address;
            port = host_room->mitm_port;
         }
         else
         {
            RARCH_WARN("[Netplay] Failed to get tunnel information. Switching to direct mode.\n");
         }
      }
      if (!port)
         port = RARCH_DEFAULT_PORT;
   }
   else
   {
      if (net_st->netplay_client_deferred)
      {
         server       = net_st->server_address_deferred;
         port         = net_st->server_port_deferred;
         mitm_session = net_st->server_session_deferred;
      }
   }

#ifdef HAVE_NETPLAYDISCOVERY
   net_st->lan_ad_server_fd = -1;
#endif

   net_st->netplay_client_deferred = false;

   net_st->data = netplay_new(
         server, mitm, port, mitm_session,
         settings->bools.netplay_stateless_mode,
         settings->ints.netplay_check_frames,
         &cbs,
         settings->bools.netplay_nat_traversal,
#ifdef HAVE_DISCORD
         discord_get_own_username()
         ? discord_get_own_username()
         :
#endif
         settings->paths.username,
         quirks);
   if (!net_st->data)
   {
      RARCH_ERR("[Netplay] %s\n", msg_hash_to_str(MSG_NETPLAY_FAILED));
      runloop_msg_queue_push(
            msg_hash_to_str(MSG_NETPLAY_FAILED),
            0, 180, false,
            NULL, MESSAGE_QUEUE_ICON_DEFAULT, MESSAGE_QUEUE_CATEGORY_INFO);

      return false;
   }

   for (i = 0; i < ARRAY_SIZE(net_st->chat.messages); i++)
   {
      *net_st->chat.messages[i].data.nick = '\0';
      *net_st->chat.messages[i].data.msg  = '\0';
      net_st->chat.messages[i].data.frames = 0;
   }
   net_st->chat.message_slots = ARRAY_SIZE(net_st->chat.messages);
   net_st->reannounce = -1;
   net_st->reping = -1;

   if (net_st->data->is_server)
   {
      runloop_msg_queue_push(
            msg_hash_to_str(MSG_WAITING_FOR_CLIENT),
            0, 180, false,
            NULL, MESSAGE_QUEUE_ICON_DEFAULT, MESSAGE_QUEUE_CATEGORY_INFO);

      if (!settings->bools.netplay_start_as_spectator)
         netplay_toggle_play_spectate(net_st->data);
   }

   return true;
}

/**
 * netplay_driver_ctl
 *
 * Frontend access to Netplay functionality
 */
bool netplay_driver_ctl(enum rarch_netplay_ctl_state state, void *data)
{
   settings_t *settings = config_get_ptr();
   net_driver_state_t *net_st = &networking_driver_st;
   netplay_t *netplay = net_st->data;
   bool ret = true;

   if (net_st->in_netplay)
      return true;
   net_st->in_netplay = true;

   if (!netplay)
   {
      switch (state)
      {
         case RARCH_NETPLAY_CTL_ENABLE_SERVER:
            net_st->netplay_enabled = true;
            net_st->netplay_is_client = false;
            goto done;

         case RARCH_NETPLAY_CTL_ENABLE_CLIENT:
            net_st->netplay_enabled = true;
            net_st->netplay_is_client = true;
            break;

         case RARCH_NETPLAY_CTL_DISABLE:
            net_st->netplay_enabled = false;
#ifdef HAVE_DISCORD
            if (discord_state_get_ptr()->inited)
            {
               discord_userdata_t userdata;
               userdata.status = DISCORD_PRESENCE_NETPLAY_NETPLAY_STOPPED;
               command_event(CMD_EVENT_DISCORD_UPDATE, &userdata);
            }
#endif
            goto done;

         case RARCH_NETPLAY_CTL_IS_ENABLED:
            ret = net_st->netplay_enabled;
            goto done;

         case RARCH_NETPLAY_CTL_IS_REPLAYING:
         case RARCH_NETPLAY_CTL_IS_DATA_INITED:
            ret = false;
            goto done;

         case RARCH_NETPLAY_CTL_IS_SERVER:
            ret = net_st->netplay_enabled
                  && !net_st->netplay_is_client;
            goto done;

         case RARCH_NETPLAY_CTL_IS_CONNECTED:
            ret = false;
            goto done;

         case RARCH_NETPLAY_CTL_IS_SPECTATING:
         case RARCH_NETPLAY_CTL_IS_PLAYING:
            ret = false;
            goto done;

         default:
            goto done;
      }
   }

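   /* Anything not handled above is serviced against the current netplay
      instance by the switch below. */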
   switch (state)
   {
      case RARCH_NETPLAY_CTL_ENABLE_SERVER:
      case RARCH_NETPLAY_CTL_ENABLE_CLIENT:
      case RARCH_NETPLAY_CTL_IS_DATA_INITED:
         goto done;

      case RARCH_NETPLAY_CTL_DISABLE:
         ret = false;
         goto done;

      case RARCH_NETPLAY_CTL_IS_ENABLED:
         goto done;

      case RARCH_NETPLAY_CTL_IS_REPLAYING:
         ret = netplay->is_replay;
         goto done;

      case RARCH_NETPLAY_CTL_IS_SERVER:
         ret = net_st->netplay_enabled
               && !net_st->netplay_is_client;
         goto done;

      case RARCH_NETPLAY_CTL_IS_CONNECTED:
         ret = netplay->is_connected;
         goto done;

      case RARCH_NETPLAY_CTL_IS_SPECTATING:
         ret = netplay->self_mode == NETPLAY_CONNECTION_SPECTATING;
         break;

      case RARCH_NETPLAY_CTL_IS_PLAYING:
         ret = netplay->self_mode == NETPLAY_CONNECTION_PLAYING ||
               netplay->self_mode == NETPLAY_CONNECTION_SLAVE;
         break;

      case RARCH_NETPLAY_CTL_POST_FRAME:
         netplay_post_frame(netplay);
         /* If we're disconnected, deinitialize */
         if (!netplay->is_server && !netplay->connections[0].active)
            netplay_disconnect(netplay);
         break;

      case RARCH_NETPLAY_CTL_PRE_FRAME:
         ret = netplay_pre_frame(
               settings->bools.netplay_public_announce,
               netplay->mitm_pending != NULL,
               netplay);
         goto done;

      case RARCH_NETPLAY_CTL_GAME_WATCH:
         netplay_toggle_play_spectate(netplay);
         break;

      case RARCH_NETPLAY_CTL_PLAYER_CHAT:
         netplay_input_chat(netplay);
         break;

      case RARCH_NETPLAY_CTL_ALLOW_PAUSE:
         ret = netplay->allow_pausing;
         break;

      case RARCH_NETPLAY_CTL_PAUSE:
         if (netplay->local_paused != true)
            netplay_frontend_paused(netplay, true);
         break;

      case RARCH_NETPLAY_CTL_UNPAUSE:
         if (netplay->local_paused != false)
            netplay_frontend_paused(netplay, false);
         break;

      case RARCH_NETPLAY_CTL_LOAD_SAVESTATE:
         netplay_load_savestate(netplay, (retro_ctx_serialize_info_t*)data, true);
         break;

      case RARCH_NETPLAY_CTL_RESET:
         netplay_core_reset(netplay);
         break;

      case RARCH_NETPLAY_CTL_DISCONNECT:
         ret = true;
         if (netplay)
            netplay_disconnect(netplay);
         goto done;

      case RARCH_NETPLAY_CTL_FINISHED_NAT_TRAVERSAL:
         netplay_announce_nat_traversal(netplay);
         goto done;

      case RARCH_NETPLAY_CTL_DESYNC_PUSH:
         netplay->desync++;
         break;

      case RARCH_NETPLAY_CTL_DESYNC_POP:
         if (netplay->desync)
         {
            netplay->desync--;
            if (!netplay->desync)
               netplay_load_savestate(netplay, NULL, true);
         }
         break;

      default:
      case RARCH_NETPLAY_CTL_NONE:
         ret = false;
   }

done:
   net_st->in_netplay = false;
   return ret;
}
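
/* Usage sketch (illustrative): frontend code queries or drives netplay through
 * this single entry point. Query-style controls ignore the data argument,
 * while commands such as RARCH_NETPLAY_CTL_LOAD_SAVESTATE expect a
 * retro_ctx_serialize_info_t* in data.
 *
 *    if (netplay_driver_ctl(RARCH_NETPLAY_CTL_IS_PLAYING, NULL))
 *       netplay_driver_ctl(RARCH_NETPLAY_CTL_PAUSE, NULL);
 */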

/* Netplay Utils */

bool netplay_decode_hostname(const char *hostname,
      char *address, unsigned *port, char *session, size_t len)
{
   struct string_list *hostname_data;

   if (string_is_empty(hostname))
      return false;

   hostname_data = string_split(hostname, "|");
   if (!hostname_data)
      return false;

   if (hostname_data->size >= 1 &&
         !string_is_empty(hostname_data->elems[0].data))
      strlcpy(address, hostname_data->elems[0].data, len);

   if (hostname_data->size >= 2 &&
         !string_is_empty(hostname_data->elems[1].data))
   {
      unsigned tmp_port = strtoul(hostname_data->elems[1].data, NULL, 10);
      if (tmp_port && tmp_port <= 65535)
         *port = tmp_port;
   }

   if (hostname_data->size >= 3 &&
         !string_is_empty(hostname_data->elems[2].data))
      strlcpy(session, hostname_data->elems[2].data, len);

   string_list_free(hostname_data);

   return true;
}
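
/* Example (illustrative values): the lobby-style string
 * "192.168.1.10|55435|d3adb33f" splits on '|' into address "192.168.1.10",
 * *port = 55435 and session "d3adb33f" (the session id here is made up; real
 * ids come from the lobby/tunnel server). Empty or missing fields leave the
 * caller-supplied defaults untouched, and out-of-range ports are ignored.
 */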

bool netplay_is_lan_address(struct sockaddr_in *addr)
{
   static const uint32_t subnets[] = {0x0A000000, 0xAC100000, 0xC0A80000};
   static const uint32_t masks[]   = {0xFF000000, 0xFFF00000, 0xFFFF0000};
   size_t i;
   uint32_t uaddr;

   memcpy(&uaddr, &addr->sin_addr, sizeof(uaddr));
   /* sin_addr is stored in network byte order; bring it into host byte order
      so it can be compared against the host-order constants above. */
   uaddr = ntohl(uaddr);

   for (i = 0; i < ARRAY_SIZE(subnets); i++)
      if ((uaddr & masks[i]) == subnets[i])
         return true;

   return false;
}

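/* The (subnet, mask) pairs in netplay_is_lan_address() are the private IPv4
 * ranges 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16. Worked example:
 * 192.168.1.20 is 0xC0A80114 in host byte order, and
 * 0xC0A80114 & 0xFFFF0000 == 0xC0A80000, so it is reported as a LAN address.
 */
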
/* Netplay Widgets */

#ifdef HAVE_GFX_WIDGETS
static void gfx_widget_netplay_chat_iterate(void *user_data,
      unsigned width, unsigned height, bool fullscreen,
      const char *dir_assets, char *font_path,
      bool is_threaded)
{
   size_t i;
   settings_t *settings = config_get_ptr();
   net_driver_state_t *net_st = &networking_driver_st;
#ifdef HAVE_MENU
   bool menu_open = menu_state_get_ptr()->alive;
#endif
   bool fade_chat = settings->bools.netplay_fade_chat;

   /* Move the messages to a thread-safe buffer
      before drawing them. */
   for (i = 0; i < ARRAY_SIZE(net_st->chat.messages); i++)
   {
      struct netplay_chat_data *data =
            &net_st->chat.messages[i].data;
      struct netplay_chat_buffer *buffer =
            &net_st->chat.messages[i].buffer;

      /* Don't show chat while in the menu */
#ifdef HAVE_MENU
      if (menu_open)
      {
         buffer->alpha = 0;
         continue;
      }
#endif

      /* If we are not fading, set alpha to max. */
      if (!fade_chat)
      {
         buffer->alpha = 0xFF;
      }
      else if (data->frames)
      {
         float alpha_percent = (float) data->frames /
               (float) NETPLAY_CHAT_FRAME_TIME;

         buffer->alpha = (uint8_t) float_max(
               alpha_percent * 255.0f, 1.0f);

         data->frames--;
      }
      else
      {
         buffer->alpha = 0;
         continue;
      }

      memcpy(buffer->nick, data->nick,
            sizeof(buffer->nick));
      memcpy(buffer->msg, data->msg,
            sizeof(buffer->msg));
   }
}

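/* With fading enabled, a message's opacity scales linearly with its remaining
 * lifetime: alpha = max((frames / NETPLAY_CHAT_FRAME_TIME) * 255, 1). A
 * message halfway through its lifetime is drawn at roughly 50% opacity and is
 * skipped entirely once its frame counter reaches zero.
 */
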
static void gfx_widget_netplay_chat_frame(void *data, void *userdata)
{
   size_t i;
   video_frame_info_t *video_info = data;
   dispgfx_widget_t *p_dispwidget = userdata;
   net_driver_state_t *net_st = &networking_driver_st;
   int line_height =
         p_dispwidget->gfx_widget_fonts.regular.line_height +
         p_dispwidget->simple_widget_padding / 3.0f;
   int height =
         video_info->height - line_height;

   for (i = 0; i < ARRAY_SIZE(net_st->chat.messages); i++)
   {
      char formatted_nick[NETPLAY_CHAT_MAX_SIZE];
      char formatted_msg[NETPLAY_CHAT_MAX_SIZE];
      int formatted_nick_len;
      int formatted_nick_width;
      const char *nick =
            net_st->chat.messages[i].buffer.nick;
      const char *msg =
            net_st->chat.messages[i].buffer.msg;
      uint8_t alpha =
            net_st->chat.messages[i].buffer.alpha;

      if (!alpha || string_is_empty(nick) || string_is_empty(msg))
         continue;

      /* Truncate the message, if necessary. */
      formatted_nick_len = snprintf(formatted_nick,
            sizeof(formatted_nick), "%s: ", nick);
      strlcpy(formatted_msg, msg,
            sizeof(formatted_msg) - formatted_nick_len);

      formatted_nick_width = font_driver_get_message_width(
            p_dispwidget->gfx_widget_fonts.regular.font,
            formatted_nick, formatted_nick_len,
            1.0f);

      /* Draw the nickname first. */
      gfx_widgets_draw_text(
            &p_dispwidget->gfx_widget_fonts.regular,
            formatted_nick,
            p_dispwidget->simple_widget_padding,
            height,
            video_info->width,
            video_info->height,
            NETPLAY_CHAT_NICKNAME_COLOR | (unsigned) alpha,
            TEXT_ALIGN_LEFT,
            true);
      /* Now draw the message. */
      gfx_widgets_draw_text(
            &p_dispwidget->gfx_widget_fonts.regular,
            formatted_msg,
            p_dispwidget->simple_widget_padding + formatted_nick_width,
            height,
            video_info->width,
            video_info->height,
            NETPLAY_CHAT_MESSAGE_COLOR | (unsigned) alpha,
            TEXT_ALIGN_LEFT,
            true);

      /* Move up */
      height -= line_height;
   }
}

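/* The frame callback above lays the buffered messages out from the bottom of
 * the screen upward: each entry is drawn one line_height above the previous
 * one, with the nickname rendered first and the message text placed right
 * after it on the same line.
 */
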
static void gfx_widget_netplay_ping_iterate(void *user_data,
      unsigned width, unsigned height, bool fullscreen,
      const char *dir_assets, char *font_path,
      bool is_threaded)
{
   settings_t *settings = config_get_ptr();
   net_driver_state_t *net_st = &networking_driver_st;
   netplay_t *netplay = net_st->data;
#ifdef HAVE_MENU
   bool menu_open = menu_state_get_ptr()->alive;
#endif
   bool show_ping = settings->bools.netplay_ping_show;

   if (!netplay || !show_ping)
   {
      net_st->latest_ping = -1;
      return;
   }

   /* Don't show the ping counter while in the menu */
#ifdef HAVE_MENU
   if (menu_open)
   {
      net_st->latest_ping = -1;
      return;
   }
#endif

   if (!netplay->is_server && netplay->is_connected)
      net_st->latest_ping = netplay->connections[0].ping;
   else
      net_st->latest_ping = -1;
}

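/* A ping is only sampled while we are a connected client; in every other case
 * (no netplay, hosting, menu open, or the ping display disabled) latest_ping
 * is reset to -1, which the frame callback below treats as "draw nothing".
 */
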
static void gfx_widget_netplay_ping_frame(void *data, void *userdata)
{
   video_frame_info_t *video_info = data;
   dispgfx_widget_t *p_dispwidget = userdata;
   net_driver_state_t *net_st = &networking_driver_st;
   int ping = net_st->latest_ping;

   if (ping >= 0)
   {
      char ping_str[16];
      int ping_len;
      int ping_width, total_width;
      gfx_display_t *p_disp = video_info->disp_userdata;

      /* Limit ping counter to 999. */
      if (ping > 999)
         ping = 999;

      ping_len = snprintf(ping_str, sizeof(ping_str),
            "PING: %d", ping);

      ping_width = font_driver_get_message_width(
            p_dispwidget->gfx_widget_fonts.regular.font,
            ping_str, ping_len, 1.0f);
      total_width = ping_width +
            p_dispwidget->simple_widget_padding * 2;

      gfx_display_set_alpha(p_dispwidget->backdrop_orig,
            DEFAULT_BACKDROP);
      gfx_display_draw_quad(
            p_disp,
            video_info->userdata,
            video_info->width,
            video_info->height,
            video_info->width - total_width,
            video_info->height -
               p_dispwidget->simple_widget_height,
            total_width,
            p_dispwidget->simple_widget_height,
            video_info->width,
            video_info->height,
            p_dispwidget->backdrop_orig,
            NULL);
      gfx_widgets_draw_text(
            &p_dispwidget->gfx_widget_fonts.regular,
            ping_str,
            video_info->width - ping_width -
               p_dispwidget->simple_widget_padding,
            video_info->height -
               p_dispwidget->gfx_widget_fonts.regular.line_centre_offset,
            video_info->width,
            video_info->height,
            0xFFFFFFFF,
            TEXT_ALIGN_LEFT,
            true);
   }
}

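/* The displayed value is clamped to 999, and both the backdrop and the
 * "PING: %d" text are anchored to the bottom-right corner of the screen
 * (the quad starts at x = width - total_width, y = height - simple_widget_height).
 */
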
const gfx_widget_t gfx_widget_netplay_chat = {
   NULL,
   NULL,
   NULL,
   NULL,
   NULL,
   &gfx_widget_netplay_chat_iterate,
   &gfx_widget_netplay_chat_frame
};

const gfx_widget_t gfx_widget_netplay_ping = {
   NULL,
   NULL,
   NULL,
   NULL,
   NULL,
   &gfx_widget_netplay_ping_iterate,
   &gfx_widget_netplay_ping_frame
};
#endif

#undef NETPLAY_KEY_NTOH

#undef MITM_SESSION_MAGIC
#undef MITM_LINK_MAGIC
#undef MITM_PING_MAGIC

#undef DISCOVERY_QUERY_MAGIC
#undef DISCOVERY_RESPONSE_MAGIC

#undef NETPLAY_MAGIC
#undef FULL_MAGIC
#undef POKE_MAGIC

#undef REQUIRE_PROTOCOL_VERSION
#undef REQUIRE_PROTOCOL_RANGE

#undef SET_PING
|
|
|
|
|
|
|
|
#undef SET_FD_CLOEXEC
|
|
|
|
#undef SET_TCP_NODELAY
|
|
|
|
|
2021-12-21 23:05:35 -03:00
|
|
|
#ifdef HAVE_INET6
|
Netplay Stuff (#13375)
* Netplay Stuff
## PROTOCOL FALLBACK
In order to support older clients a protocol fallback system was introduced.
The host will no longer send its header automatically after a TCP connection is established, instead, it awaits for the client to send his before determining which protocol this connection is going to operate on.
Netplay has now two protocols, a low protocol and a high protocol; the low protocol is the minimum protocol it supports, while the high protocol is the highest protocol it can operate on.
To fully support older clients, a hack was necessary: sending the high protocol in the unused client's header salt field, while keeping the protocol field to the low protocol. Without this hack we would only be able to support older clients if a newer client was the host.
Any future system can make use of this system by checking connection->netplay_protocol, which is available for both the client and host.
## NETPLAY CHAT
Starting with protocol 6, netplay chat is available through the new NETPLAY_CMD_PLAYER_CHAT command.
Limitations of the command code, which causes a disconnection on unknown commands, makes this system not possible on protocol 5.
Protocol 5 connections can neither send nor receive chat, but other netplay operations are unaffected.
Clients send chat as a string to the server, and it's the server's sole responsability to relay chat messages.
As of now, sending chat uses RetroArch's input menu, while the display of on-screen chat uses a widget overlay and RetroArch's notifications as a fallback.
If a new overlay and/or input system is desired, no backwards compatibility changes need to be made.
Only clients in playing mode (as opposed to spectating mode) can send and receive chat.
## SETTINGS SHARING
Some settings are better used when both host and clients share the same configuration.
As of protocol 6, the following settings will be shared from host to clients (without altering a client's configuration file): input latency frames and allow pausing.
## NETPLAY TUNNEL/MITM
With the current MITM system being defunct (at least as of 1.9.X), a new system was in order to solve most if not all of the problems with the current system.
This new system uses a tunneling approach, which is similar to most VPN and tunneling services around.
Tunnel commands:
RATS[unique id] (RetroArch Tunnel Session) - 16 bytes -> When this command is sent with a zeroed unique id, the tunnel server interprets this as a netplay host wanting to create a new session, in this case, the same command is returned to the host, but now with its unique session id. When a client needs to connect to a host, this command is sent with the unique session id of the host, causing the tunnel server to send a RATL command to the host.
RATL[unique id] (RetroArch Tunnel Link) - 16 bytes -> The tunnel server sends this command to the host when a client wants to connect to the host. Once the host receives this command, it establishes a new connection to the tunnel server, sending this command together with the client's unique id through this new connection, causing the tunnel server to link this connection to the connection of the client.
RATP (RetroArch Tunnel Ping) - 4 bytes -> The tunnel server sends this command to verify that the host, whom the session belongs to, is still around. The host replies with the same command. A session is closed if the tunnel server can not verify that the host is alive.
Operations:
Host -> Instead of listening and accepting connections, it connects to the tunnel server, requests a new session and then monitor this connection for new linking requests. Once a request is received, it establishes a new connection to the tunnel server for linking with a client. The tunnel server's address and port are obtained by querying the lobby server. The host will publish its session id together with the rest of its info to the lobby server.
Client -> It connects to the tunnel server and then sends the session id of the host it wants to connect to. A host's session id is obtained from the json data sent by the lobby server.
Improvements (from current MITM system):
No longer a risk of TCP port exhaustion; we only use one port now at the tunnel server.
Very little cpu usage. About 95% net I/O bound now.
Future backwards compatible with any and all changes to netplay as it no longer runs any netplay logic at MITM servers.
No longer operates the host in client mode, which was a source of many of the current problems.
Cleaner and more maintainable system and code.
Notable functions:
netplay_mitm_query -> Grabs the tunnel's address and port from the lobby server.
init_tcp_socket -> Handles the creation and operation mode of the TCP socket based on whether it's host, host+MITM or client.
handle_mitm_connection -> Creates and completes linking connections and replies to ping commands (only 1 of each per call to not affect performance).
## MISC
Ping Limiter: If a client's estimated latency to the server is higher than this value, connection will be dropped just before finishing the netplay handshake.
Ping Counter: A ping counter (similar to the FPS one) can be shown in the bottom right corner of the screen, if you are connected to a host.
LAN Discovery: Refactored and moved to its own "Refresh Netplay LAN List" button.
## FIXES
Many minor fixes to the current netplay implementation are also included.
* Remove NETPLAY_TEST_BUILD
2021-12-19 12:58:01 -03:00
|
|
|
#undef HAVE_INET6
|
|
|
|
#endif
|