Usage of the fail_if/fail_unless macros with a message results in a
warning with the latest version of check (0.15.2+) and GCC.
Ignore this specific warning (promoted to an error since warnings are
treated as errors) for now.
Example failure:
In file included from ../../../../src/../test/unit/lwip_check.h:7,
from ../../../../src/../test/unit/lwip_unittests.c:1:
../../../../src/../test/unit/lwip_unittests.c: In function ‘lwip_check_ensure_no_alloc’:
../../../../src/../test/unit/lwip_unittests.c:55:7: error: too many arguments for format [-Werror=format-extra-args]
55 | "mem heap still has %d bytes allocated", lwip_stats.mem.used);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../../../../src/../test/unit/ip4/test_ip4.c: In function ‘test_ip4_icmp_replylen_short’:
../../../../src/../test/unit/ip4/test_ip4.c:291:35: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]
fail_unless(linkoutput_byte_ctr == icmp_len + sizeof(unknown_proto));
Plus minor cleanup in the second icmp reply test.
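One possible shape of the workaround, as a sketch only (the actual change
may instead adjust the warning flags or the macro usage): a file-level
pragma in test/unit/lwip_check.h would drop just this diagnostic for every
unit test file including it.

    /* Sketch: check's fail_unless(cond, "msg %d", arg) usage trips
     * -Wformat-extra-args on GCC; disable only that diagnostic here. */
    #if defined(__GNUC__) && !defined(__clang__)
    #pragma GCC diagnostic ignored "-Wformat-extra-args"
    #endif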
It used to fail with this error after building the depfiles:
clang -DLWIP_NOASSERT_ON_ERROR -I/usr/include/check -I../../../../src/../test/unit -Wno-gnu-zero-variadic-macro-arguments -g -DLWIP_DEBUG -Wall -pedantic -Werror -Wparentheses -Wsequence-point -Wswitch-default -Wextra -Wundef -Wshadow -Wpointer-arith -Wcast-qual -Wc++-compat -Wwrite-strings -Wold-style-definition -Wcast-align -Wmissing-prototypes -Wredundant-decls -Wnested-externs -Wunreachable-code -Wuninitialized -Wmissing-prototypes -Wredundant-decls -Waggregate-return -Wlogical-not-parentheses -fsanitize=address -fsanitize=undefined -fno-sanitize=alignment -Wdocumentation -Wno-documentation-deprecated-sync -I. -I../../.. -I../../../../src/include -I../../../ports/unix/port/include -c
clang-11: error: no input files
Also don't include depfiles while cleaning, to avoid generating them
just to remove them.
Having just one depfile (.depend) means it has to be fully regenerated
on every change, and it can't be done in parallel.
After this change, the rebuild time after touching a single test file
has gone from 5.0 to 0.9 seconds (make -j12).
Building the tests from clean has gone from 8.1 to 5.5 seconds.
We could go even further and have one depfile per c-file, but this felt
like a simple first step giving a nice improvement.
Fix the build after ppp_output_cb started taking its data argument as
const in commit b2d1fc119d.
Fixes this failure:
../contrib/examples/ppp/pppos_example.c: In function ‘ppp_output_cb’:
../contrib/examples/ppp/pppos_example.c:163:29: error: cast discards ‘const’ qualifier from pointer target type [-Werror=cast-qual]
return sio_write(ppp_sio, (u8_t*)data, len);
^
The authentication timer might still be running when entering the
network phase, for any necessary rechallenge, mostly for PPP server
support.
Update the detailed analysis of simultaneously running PPP timers to
take into account the authentication timer that might still be running,
and choose to increase the base number to 2 instead of adding more
unnecessary complexity.
upap_timeout is not currently stopped on authentication success or
failure events. This may have strange results if the session is
restarted at a high pace: even though the timeout callback has a sanity
check against the PAP state, the session can be restarted and be back
in a valid state before the timeout callback is actually called.
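A minimal sketch of the missing cancellation, reusing the UNTIMEOUT macro
the PPP code already uses for its other timers (the helper name is made up
and would live in upap.c, next to upap_timeout):

    static void upap_stop_timer(ppp_pcb *pcb)   /* hypothetical helper */
    {
      /* Cancel the pending retransmission as soon as authentication has
       * concluded, instead of relying only on the state check inside
       * upap_timeout(). */
      if (pcb->upap.us_clientstate == UPAPCS_AUTHREQ) {
        UNTIMEOUT(upap_timeout, pcb);
      }
    }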
Do not assume LWIP_RAND will return 32 bits of randomness because it is
probably going to be defined to directly return the rand() value. For
example, LCP magic numbers are 32-bit random values.
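A sketch of how a full 32-bit value can still be assembled when LWIP_RAND()
maps directly to rand(), whose guaranteed range is only 15 bits (the helper
name is made up):

    static u32_t ppp_rand32(void)   /* hypothetical helper */
    {
      /* Take only 10-11 bits from each call, which even a 15-bit rand()
       * can provide, and spread them over the whole 32-bit result. */
      return ((u32_t)(LWIP_RAND() & 0x7ff) << 21)   /* bits 21..31 */
           ^ ((u32_t)(LWIP_RAND() & 0x7ff) << 10)   /* bits 10..20 */
           ^  (u32_t)(LWIP_RAND() & 0x3ff);         /* bits  0..9  */
    }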
This is already what we inherently have always done for IPv4/IPv6
packets, so it works. Receivers must handle both cases anyway because
both behaviors are seen in the wild.
A previous call to ppp_input might have disconnected the session while
there were still packets in flight in the tcpip mailbox. Drop incoming
packets because ppp_input must never be called if the upper layer is
down.
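A minimal sketch of the guard, assuming the phase field and PPP_PHASE_DEAD
value from ppp.h (the wrapper name is made up; where exactly the check sits
in the input path is not shown):

    static void ppp_input_guarded(ppp_pcb *ppp, struct pbuf *p)
    {
      /* A packet may still be sitting in the tcpip mailbox after the
       * session was torn down; drop it instead of feeding it to ppp_input(). */
      if (ppp->phase == PPP_PHASE_DEAD) {
        pbuf_free(p);
        return;
      }
      ppp_input(ppp, p);
    }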
Speed up the PPPoS input parser a little bit by checking the open flag
only after calling the ppp_input function, the only call here that can
induce a state change.
ppp_set_* functions that set the PPP session parameters must only be
called when the session is in a dead state (i.e. disconnected);
otherwise, the results are not fatal but may be surprising.
This function calls the notify phase callback, which should be invoked
from the lwIP core thread. This is especially true if the user callback
is not designed to be reentrant.
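If the application does not run in the tcpip thread, one way to honor that
constraint is to defer the call with tcpip_callback(); a sketch, with the
wrapped PPP call left as a placeholder:

    #include "lwip/tcpip.h"
    #include "netif/ppp/ppp.h"

    static void do_ppp_call(void *arg)
    {
      ppp_pcb *ppp = (ppp_pcb *)arg;
      /* ...the PPP call that ends up invoking the phase-notify callback... */
      (void)ppp;
    }

    static void app_trigger(ppp_pcb *ppp)   /* runs in an application thread */
    {
      tcpip_callback(do_ppp_call, ppp);
    }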
There is no good reason why this function should take a non-const
pointer, as the output callback should never modify what lwIP gives it.
While changing that also switch to a more generic `void*` instead of
"byte".
There is no good reason why this function should take a non-const
pointer. While changing that also switch to a more generic `void*`
instead of "byte".
There is no good reason why this function should take a non-const
pointer. While changing that also switch to a more generic `void*`
instead of "byte".
We do not have equivalents in PPPAPI for the ppp_set_* functions
because calling them only makes sense while the session is
disconnected; furthermore, they only set structure members of the
session configuration.
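A usage sketch under those rules, for a PPPoS client (ppp_netif,
ppp_output_cb and ppp_status_cb are presumed to be defined elsewhere by the
application):

    static void pppos_client_start(void)
    {
      /* Configure everything while the session is still dead... */
      ppp_pcb *ppp = pppapi_pppos_create(&ppp_netif, ppp_output_cb, ppp_status_cb, NULL);
      if (ppp == NULL) {
        return;
      }
      ppp_set_auth(ppp, PPPAUTHTYPE_PAP, "login", "secret");  /* plain struct writes */
      ppp_set_default(ppp);
      /* ...then use the thread-safe wrapper once the session goes live. */
      pppapi_connect(ppp, 0);
    }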
We only have to reserve header space for forwarding for IPv4 and IPv6
packets; all other packets are PPP control packets. Doing so reduces
the need to coalesce the pbuf chain before PPP processes control
packets.
The PPP peer can negotiate its MRU, therefore we don't know the MTU we
are going to use before starting PPP. This is an issue because the
netif_add function assumes that the netif init callback will set the
MTU, and netif_add then copies mtu to mtu6. We therefore have to update
mtu6 each time we update mtu to keep them in sync. Doing so is fine
because the PPP netif MTU is only updated when the netif is in the
link-down state.
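A sketch of the kind of private helper that keeps the two fields in sync
(the helper name is illustrative; the mtu6 guard is the one used in
netif.h):

    static void ppp_netif_set_mtu(struct netif *pppif, u16_t mtu)
    {
      /* Only called while the PPP netif is link-down, so this is safe. */
      pppif->mtu = mtu;
    #if LWIP_IPV6 && LWIP_ND6_ALLOW_RA_UPDATES
      pppif->mtu6 = mtu;   /* netif_add() copies mtu only once; keep mtu6 in sync */
    #endif
    }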
Our current HDLC decoder does not prevent a single packet from
starving the Rx PBUF_POOL, most likely due to garbage received on the
serial port.
Prevent starving the Rx pool by checking the incoming packet length
against PPP_MRU with a 10% margin: we only want to avoid filling all
pbufs with garbage, we don't have to be pedantic.
Fixes bug #58441: Invalid PPP data accumulates forever.
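A sketch of the length guard, with a made-up helper name:

    /* Decide whether the frame being assembled has grown past anything a
     * peer could legitimately send. PPP_MRU plus a ~10% margin is enough:
     * we only want to stop garbage from eating the whole Rx PBUF_POOL. */
    static int pppos_frame_too_long(u16_t in_len)
    {
      return in_len > PPP_MRU + PPP_MRU / 10;
    }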
PPP_MRU is now free to be used for what it should have been all along.
Now using it at the PPP init stage to set the wanted MRU value,
triggering an MRU negotiation during the LCP phase.
I doubt anyone needs it anyway, but, well, at least it is fixed and the
MRU/MTU config mess is cleaned up.
And while we are at it, better document the PPP MRU config values.
RFC 1661 mandates that the default MRU value, which must be used prior
to MRU negotiation and if no MRU value is negotiated later, be 1500.
That is, any PPP host must accept control frames of at least 1500 bytes
when the PPP session starts (there is no way to split them into
multiple frames anyway) and must use a value of 1500 if the MRU is not
negotiated during LCP exchanges.
Therefore, having it configurable in ppp_opts is a mistake. It was
wrong and never worked: changing the value never triggered an MRU
negotiation, because it changed both the wanted MRU value and the RFC
default value the wanted value is compared against, and a negotiation
is only triggered when the two differ.
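A simplified sketch of the trigger logic in question (names only
approximate lcp.c):

    #define PPP_DEFMRU 1500   /* RFC 1661 default; must never be configurable */

    /* The MRU option only goes into the LCP configure-request when the
     * wanted value differs from the RFC default. When PPP_MRU was used for
     * both sides of this comparison it could never differ, so changing it
     * never triggered a negotiation. */
    static int lcp_wants_mru_option(int wanted_mru)
    {
      return wanted_mru != PPP_DEFMRU;
    }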
Those are private functions; using the netif_ prefix here is not
really nice, especially for functions named netif_set_mtu and
netif_get_mtu, for obvious reasons.
We currently retry indefinitely if sending packets fails, for example
if the output interface is down. We even do it in the middle of a
connection process. This is not very nice behavior because the PPP low
level will retry connecting indefinitely and the user application will
never be warned that something is wrong.
We have the persist boolean in the PPP settings to achieve more or less
the same thing anyway, except it does it better by only retrying the
initiation packet indefinitely.
Having it configurable does not really make sense anymore; we already
need PBUF_RAM in all transmit paths. There is no real reason to keep
allocating PPP response buffers from the PBUF_POOL, which should now be
reserved for receive paths only.
We have needed PBUF_RAM for PPP for quite a while, e.g. through
pbuf_coalesce and for all PPP transmit paths. There is no real reason
to keep allocating packets from PBUF_POOL for the PPP control packet
transmit path by default today.
When pbuf_coalesce fails, it does nothing and returns the previous
buffer chain. Add checks that pbuf_coalesce succeeded, otherwise drop
the incoming packet.
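A sketch of the check, relying on pbuf_coalesce() returning the original
chain untouched when it cannot allocate a contiguous pbuf (the helper name
is made up):

    /* Return a single contiguous pbuf, or NULL after dropping the packet. */
    static struct pbuf *coalesce_or_drop(struct pbuf *p)
    {
      struct pbuf *n = pbuf_coalesce(p, PBUF_RAW);
      if (n->next != NULL) {
        /* Still a chain: pbuf_coalesce() failed, drop the incoming packet. */
        pbuf_free(n);
        return NULL;
      }
      return n;
    }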
If we fail to receive a full packet, for example if a memory
allocation fails for some reason, we currently do not wait for the next
packet flag character and we start filling a new packet at the next
received byte, expecting the checksum check to discard the packet.
The behavior seems to have been broken one or two decades ago when
support for PFC (Protocol-Field-Compression) and ACFC
(Address-and-Control-Field-Compression) was added.
Rework to drop any character until we receive a flag character, both at
init and when we drop a packet before it is complete.
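A sketch of the resynchronization idea with made-up names; PPP_FLAG (0x7e)
is the HDLC frame delimiter:

    /* Returns 1 when the byte should be fed to the frame parser, 0 while
     * we are still discarding garbage after a dropped frame. */
    static int pppos_resync(u8_t *in_drop, u8_t c)
    {
      if (*in_drop) {
        if (c != PPP_FLAG) {
          return 0;      /* keep discarding until a flag byte arrives */
        }
        *in_drop = 0;    /* flag seen: resynchronized on a frame boundary */
      }
      return 1;
    }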