If the client reception buffer is bigger than the first frame we receive, the first-packet
test misidentifies the second frame as a first frame whenever the second frame is shorter
than the difference between the reception buffer size and the first frame length.
For example, if we receive a PUBLISH message with length = 1517 (payload len = 1514 +
header len = 3), the total message length is 1517.
altcp_tls hands frames of at most 1516 bytes to the MQTT client, so the PUBLISH
message is split into two frames: the first of 1516 bytes, the second of 1 byte.
If MQTT_VAR_HEADER_BUFFER_LEN is 1520 (1516 + 4 bytes for the stored fixed header), the
second frame of 1 byte is treated as a first publish frame because
client->msg_idx (1517) < MQTT_VAR_HEADER_BUFFER_LEN (1520).
This results in a disconnection AND the application callback never being called for the
end of the payload.
The fix checks `(client->msg_idx - (fixed_hdr_len + length)) == 0`, which can only be
true for the first frame of a message.
The logs below show the bug:
```
April 3rd 2019, 23:14:05.459 lwip_dbg mqtt_parse_incoming: Remaining length after fixed header: 1514
April 3rd 2019, 23:14:05.460 lwip_dbg mqtt_parse_incoming: msg_idx: 1516, cpy_len: 1513, remaining 1
April 3rd 2019, 23:14:05.460 lwip_dbg mqtt_incomming_publish: Received message with QoS 1 at topic: v2/inte...
April 3rd 2019, 23:14:05.461 lwip_dbg mqtt_parse_incoming: Remaining length after fixed header: 1514
April 3rd 2019, 23:14:05.461 lwip_dbg mqtt_parse_incoming: msg_idx: 1517, cpy_len: 1, remaining 0
April 3rd 2019, 23:14:05.461 lwip_dbg mqtt_message_received: Received short PUBLISH packet
```
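For illustration, here is a minimal sketch of the changed condition. The helper names and
the reduced client structure are hypothetical; only the two tests themselves come from the
description above, and the real change lives in mqtt.c:
```c
#include "lwip/apps/mqtt.h"  /* for MQTT_VAR_HEADER_BUFFER_LEN */

/* Reduced, hypothetical stand-in for the MQTT client state: only msg_idx
 * (bytes of the current message consumed so far, as shown in the logs above)
 * matters here. */
struct mqtt_client_sketch {
  u32_t msg_idx;
};

/* Old idea: "still inside the variable-header buffer" was used as a proxy for
 * "first frame", which wrongly matches a short continuation frame. */
static u8_t
is_first_frame_old(const struct mqtt_client_sketch *client)
{
  return client->msg_idx < MQTT_VAR_HEADER_BUFFER_LEN;
}

/* Fixed test: msg_idx equals fixed_hdr_len + length only while processing the
 * very first frame of a message. */
static u8_t
is_first_frame_new(const struct mqtt_client_sketch *client,
                   u8_t fixed_hdr_len, u16_t length)
{
  return (client->msg_idx - (fixed_hdr_len + length)) == 0;
}
```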
When using the MEMP_MEM_MALLOC option, memp_malloc() cannot be relied on to
limit the number of allocations allowed for each MEMP pool, as the ND6 code
had been doing. This caused the ND6 queue to keep growing until the heap
allocation failed. So add an explicit queue size check in ND6.
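A minimal sketch of the idea follows. The helper function, the counter parameter and the
use of MEMP_NUM_ND6_QUEUE as the limit are assumptions made for illustration; the real
change is in nd6.c:
```c
#include "lwip/opt.h"
#include "lwip/err.h"
#include "lwip/memp.h"
#include "lwip/nd6.h"
#include "lwip/pbuf.h"

/* Sketch: with MEMP_MEM_MALLOC, memp_malloc() draws from the heap and never
 * hits a pool limit, so the queue length must be checked explicitly. */
static err_t
nd6_queue_packet_sketch(struct nd6_q_entry **head, u16_t *queue_len, struct pbuf *p)
{
  struct nd6_q_entry *entry;

  if (*queue_len >= MEMP_NUM_ND6_QUEUE) {
    return ERR_MEM; /* queue full: drop instead of growing without bound */
  }
  entry = (struct nd6_q_entry *)memp_malloc(MEMP_ND6_QUEUE);
  if (entry == NULL) {
    return ERR_MEM;
  }
  entry->p = p;
  entry->next = *head;
  *head = entry;
  (*queue_len)++;
  return ERR_OK;
}
```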
Replace '\n' with '<br>', as this allows doxygen to understand reference
names followed by a newline. In some cases, just drop the newline if it's
not required.
Doxygen 1.8.15 doesn't like it when a reference name is followed by
anything other than (selected?) punctuation or whitespace.
bug #56004
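A hypothetical doc comment illustrating the kind of change (not taken from the lwIP
sources):
```c
/*
 * Before: doxygen 1.8.15 no longer auto-links the option name, because the
 * reference is directly followed by the '\n' forced-line-break command:
 *
 *   The default is MEMP_NUM_SYS_TIMEOUT\n
 *   and can be overridden in lwipopts.h.
 *
 * After: writing the break as '<br>' (or dropping it where it is not needed)
 * keeps the layout and lets doxygen resolve the reference again:
 *
 *   The default is MEMP_NUM_SYS_TIMEOUT<br>
 *   and can be overridden in lwipopts.h.
 */
```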
According to RFC 5681:
https://tools.ietf.org/html/rfc5681
Section 3.2, Fast Retransmit/Fast Recovery:
The TCP sender SHOULD use the "fast retransmit" algorithm to detect
and repair loss, based on incoming duplicate ACKs. The fast
retransmit algorithm uses the arrival of 3 duplicate ACKs (as defined
in section 2, without any intervening ACKs which move SND.UNA) as an
indication that a segment has been lost. After receiving 3 duplicate
ACKs, TCP performs a retransmission of what appears to be the missing
segment, without waiting for the retransmission timer to expire.
Now consider the following scenario:
The server sends packets P0, P1, P2 ... PK to the client.
The client sends packets P`0, P`1 ... P`k to the server.
I.e. it is a pipelined conversation. Now let's assume that P1 is lost. The client will
send an empty "duplicate" ACK upon receipt of P2, P3, ... In addition, the client will
also send new packets carrying client data, P`0, P`1, etc., according to the server's
receive window and the client's congestion window.
The current implementation resets the "duplicate" ACK count upon receipt of packets from
the client that hold new data. This in turn prevents the server from entering fast
recovery after receiving 3 duplicate ACKs.
This is not required, as in this case the sender's unacknowledged window (SND.UNA) is not
moving.
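A sketch of the intended behaviour follows. The helper and its parameters are hypothetical
and the sequence-number macro is redefined locally; the real change is in the ACK handling
of tcp_in.c:
```c
#include "lwip/opt.h"
#include "lwip/tcp.h"

/* Local stand-in for lwIP's private sequence-number comparison macro. */
#define SEQ_GT(a, b) ((s32_t)((u32_t)(a) - (u32_t)(b)) > 0)

/* Sketch only: the duplicate-ACK counter is reset only when SND.UNA moves;
 * segments that carry new data but repeat the same acknowledgement number
 * leave the counter alone. */
static void
tcp_dupack_sketch(struct tcp_pcb *pcb, u32_t ackno, u16_t seg_datalen)
{
  if (SEQ_GT(ackno, pcb->lastack)) {
    /* The ACK advances SND.UNA: real progress, restart duplicate counting. */
    pcb->lastack = ackno;
    pcb->dupacks = 0;
  } else if (ackno == pcb->lastack) {
    if (seg_datalen == 0) {
      /* Pure duplicate ACK as defined by RFC 5681: count it. */
      if (pcb->dupacks < 0xFF) {
        pcb->dupacks++;
      }
      if (pcb->dupacks >= 3) {
        /* Third duplicate: fast retransmit the presumed-lost segment here. */
      }
    }
    /* else: a data segment repeating the same ackno no longer resets
     * pcb->dupacks, since the sender's unacknowledged window has not moved. */
  }
}
```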
Signed-off-by: Solganik Alexander <sashas@lightbitslabs.com>
Two new API functions:
err_t mdns_search_service(const char *name, const char *service, enum mdns_sd_proto proto,
struct netif *netif, search_result_fn_t result_fn, void *arg,
s8_t *request_id);
void mdns_search_stop(s8_t request_id);
One compilation flag:
LWIP_MDNS_SEARCH
One option flag:
MDNS_MAX_REQUESTS
Some structure declarations were moved to allow their use by the result callback function.
Result domain names are decompressed early, before the application callback is called,
because this cannot be done by the application itself.
Allow searching for services with multiple labels included, like '_services._dns-sd'.
A search for `_services._dns-sd._udp.local.` is handled in a special way:
only `PTR` answers are sent back to the application.
The `mdns_search_service()` function won't assert if there is no more space in the
`mdns_request` table; it just returns an error if there are too many simultaneous requests.
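A possible usage sketch (the result callback's parameter list is an assumption here, as is
the '_http' example service; only mdns_search_service() and mdns_search_stop() are taken
from the description above):
```c
#include "lwip/apps/mdns.h"
#include "lwip/netif.h"
#include "lwip/err.h"

/* Assumed callback shape; the real prototype is given by search_result_fn_t. */
static void
http_search_result(struct mdns_answer *answer, const char *varpart, int varlen,
                   int flags, void *arg)
{
  LWIP_UNUSED_ARG(answer);  /* answer names arrive already decompressed */
  LWIP_UNUSED_ARG(varpart);
  LWIP_UNUSED_ARG(varlen);
  LWIP_UNUSED_ARG(flags);
  LWIP_UNUSED_ARG(arg);
}

static s8_t http_request_id;

/* Hypothetical helper: browse for HTTP services on one interface. */
void
start_http_discovery(struct netif *netif)
{
  if (mdns_search_service(NULL, "_http", DNSSD_PROTO_TCP, netif,
                          http_search_result, NULL, &http_request_id) != ERR_OK) {
    /* MDNS_MAX_REQUESTS searches already running: an error, not an assert. */
  }
}

void
stop_http_discovery(void)
{
  mdns_search_stop(http_request_id);
}
```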
Apparently the TFTP server now also invokes the error() function in the
tftp_context struct.
Some TFTP clients (for example the Windows 10 TFTP client) will open the
remote file before checking that the local file can be opened - and will then
send an error indication to the server to indicate there was an error
opening the local file. When this happens, the lwIP TFTP server will
invoke the error() member of the tftp_context.
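A sketch of such a handler (the parameter list shown is an assumption based on the
description above; hook the function into the struct tftp_context passed to tftp_init()):
```c
#include "lwip/arch.h"
#include "lwip/apps/tftp_server.h"

/* Hypothetical error() handler: called by the server when the client aborts
 * the transfer, e.g. because it could not open its local file. */
static void
my_tftp_error(void *handle, int err, const char *msg, int size)
{
  LWIP_UNUSED_ARG(handle); /* release any per-transfer state tied to it here */
  LWIP_UNUSED_ARG(size);
  LWIP_PLATFORM_DIAG(("tftp: client reported error %d: %s\n", err, msg));
}
```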
This adds support for RFC 4075 SNTP server configuration via DHCPv6.
The DHCPv6 options transmitted are now conditional on how lwIP is
configured.
A new SNTP application option, SNTP_GET_SERVERS_FROM_DHCPV6, is used
to enable this. For simplicity it is tied to the global
LWIP_DHCP6_GET_NTP_SRV configuration setting.
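A possible lwipopts.h fragment for this setup (a sketch; SNTP_GET_SERVERS_FROM_DHCPV6 and
LWIP_DHCP6_GET_NTP_SRV come from the text above, the remaining defines are the usual
IPv6/DHCPv6 switches assumed to be needed):
```c
/* lwipopts.h sketch: fetch SNTP servers via DHCPv6 (RFC 4075) */
#define LWIP_IPV6                     1
#define LWIP_IPV6_DHCP6               1
#define LWIP_DHCP6_GET_NTP_SRV        1   /* request the NTP/SNTP server option */

/* New SNTP application option; it follows LWIP_DHCP6_GET_NTP_SRV by default,
 * shown explicitly here for clarity. */
#define SNTP_GET_SERVERS_FROM_DHCPV6  LWIP_DHCP6_GET_NTP_SRV
```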
Tests:
- Check in Wireshark that the global options now control the
  DHCPv6 request sent
- Check against 0, 1 and 3 SNTP servers on an odhcpd server
  configured to support RFC 4075 SNTP server lists.
  Verify that the SNTP server list is updated on connection
  establishment on an ESP8266 WeMOS D1.
- Verify that SNTP packets are sent and received from a
  configured server and that system time is updated.
Signed-off-by: David J. Fiddes <D.J@fiddes.net>
Use only one entropy/ctr_drbg context for all altcp_tls_config structures allocated.
(Small adjustments before committing: fix coding style, adapt to changes in master)
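A sketch of the idea (helper names and the reference counting are illustrative
assumptions; the real change is inside the altcp_tls mbedTLS port): one shared
entropy/ctr_drbg pair is seeded once and reused by every config.
```c
#include "mbedtls/entropy.h"
#include "mbedtls/ctr_drbg.h"

/* One entropy source and one DRBG shared by all altcp_tls_config structures. */
static mbedtls_entropy_context shared_entropy;
static mbedtls_ctr_drbg_context shared_ctr_drbg;
static int shared_rng_refcount;

/* Hypothetical helper: seed the shared RNG on first use, then just count users. */
static int
altcp_tls_rng_acquire_sketch(void)
{
  if (shared_rng_refcount == 0) {
    mbedtls_entropy_init(&shared_entropy);
    mbedtls_ctr_drbg_init(&shared_ctr_drbg);
    if (mbedtls_ctr_drbg_seed(&shared_ctr_drbg, mbedtls_entropy_func,
                              &shared_entropy, NULL, 0) != 0) {
      mbedtls_ctr_drbg_free(&shared_ctr_drbg);
      mbedtls_entropy_free(&shared_entropy);
      return -1;
    }
  }
  shared_rng_refcount++;
  return 0;
}

/* Hypothetical helper: drop a reference when a config is freed. */
static void
altcp_tls_rng_release_sketch(void)
{
  if (--shared_rng_refcount == 0) {
    mbedtls_ctr_drbg_free(&shared_ctr_drbg);
    mbedtls_entropy_free(&shared_entropy);
  }
}
```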