The previous check also warned about tests that were already skipped
in the reference config, which is not really a problem. The purpose of
this "uselessly ignored" check is to make sure that the ignore list
(together with the config common to driver and reference in all.sh)
always correctly reflects what works or doesn't in driver-only builds.
For this it's enough to warn when a test is ignored but passing.
The previous, stricter check was causing issues like:
Error: uselessly ignored: test_suite_pkcs12;PBE Encrypt, pad = 8 (PKCS7 padding disabled)
Error: uselessly ignored: test_suite_pkcs12;PBE Decrypt, (Invalid padding & PKCS7 padding disabled)
Error: uselessly ignored: test_suite_pkcs5;PBES2 Decrypt (Invalid padding & PKCS7 padding disabled)
These are skipped in the reference config because it has PKCS7 padding
enabled, and that's OK.
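In other words, the relaxed check boils down to something like the
following sketch (names are illustrative, not the script's actual
code):

    def check_uselessly_ignored(test_name, ignored, driver_outcome):
        """Warn only if an ignored test passes in the driver build.

        Sketch only: being skipped in the reference component is no
        longer reported as "uselessly ignored".
        """
        if ignored and driver_outcome == 'PASS':
            print('Warning: uselessly ignored: ' + test_name)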
Signed-off-by: Manuel Pégourié-Gonnard <manuel.pegourie-gonnard@arm.com>
This may come in handy when ignoring patterns in test_suite_cipher,
for example, which is split into several .data files where we'll want
to ignore the same patterns.
Currently, none of the entries has a '.' in the test suite name, so
this doesn't change anything for existing entries.
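For illustration, one way to read an ignore entry's suite name is to
let it also cover the per-.data-file variants of that suite; the
helper below is a sketch under that assumption, not the script's
exact code:

    def suite_covered_by_entry(entry_suite, outcome_suite):
        """Return True if an ignore entry for entry_suite applies to
        outcome_suite, e.g. 'test_suite_cipher' also covers
        'test_suite_cipher.aes', 'test_suite_cipher.gcm', etc."""
        return (outcome_suite == entry_suite or
                outcome_suite.startswith(entry_suite + '.'))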
Signed-off-by: Manuel Pégourié-Gonnard <manuel.pegourie-gonnard@arm.com>
Change from iterating on all available tests to iterating on tests with
an outcome: initially we were iterating on available tests but
immediately ignoring those that were always skipped. That last part
played poorly with the new error (we want to know if a test ignored
due to a pattern was skipped in the reference component), but when
removing it, we were left with iterating on all available tests then
skipping those that don't have an outcome entry: this is equivalent to
iterating on tests with an outcome entry, which is more readable.
Also, give an error if the outcome file contains no passing test from
the reference component: this probably means we're using the wrong
outcome file.
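The new sanity check amounts to something like this (the shape of the
outcome entries is an assumption made for the sake of the sketch):

    def check_reference_has_successes(outcomes, ref_component):
        """Error out if the reference component has no passing test:
        this probably means the wrong outcome file was given."""
        if not any(entry.component == ref_component and entry.passed
                   for entry in outcomes):
            print('Error: no passing test in reference component '
                  '(wrong outcome file?)')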
Signed-off-by: Manuel Pégourié-Gonnard <manuel.pegourie-gonnard@arm.com>
This also adds a new task in analyze_outcomes.py for checking
the acceleration coverage against the reference component.
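The new task can be pictured as one more entry in the script's task
table, along these lines (function, field and component names are
placeholders, not necessarily the real ones):

    def analyze_driver_vs_reference(outcomes, args):
        """Hypothetical checking function; body omitted here."""

    TASKS = {
        'analyze_driver_vs_reference_hash': {
            'test_function': analyze_driver_vs_reference,
            'args': {
                'component_ref': 'reference_component_name',
                'component_driver': 'accelerated_component_name',
                'ignored_suites': [],
                'ignored_tests': {},
            },
        },
    }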
Signed-off-by: Valerio Setti <valerio.setti@nordicsemi.no>
- the script now only terminates in case of hard faults
- each task is assigned a log; this log tracks messages, warnings and
  errors (see the sketch below)
- when a task completes, its errors and warnings are listed and its
  messages are appended to the main log
- on exit the main log is printed and the proper return value is
  returned
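A rough sketch of the kind of per-task log this describes (class and
method names are illustrative, not the script's actual API):

    class TaskLog:
        """Collect messages, warnings and errors for one task."""
        def __init__(self):
            self.messages = []
            self.warning_count = 0
            self.error_count = 0

        def warning(self, text):
            self.warning_count += 1
            self.messages.append('Warning: ' + text)

        def error(self, text):
            self.error_count += 1
            self.messages.append('Error: ' + text)

        def flush_to(self, main_log):
            """On task completion, hand everything to the main log."""
            main_log.extend(self.messages)

The final return value can then simply depend on whether any task
logged an error.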
Signed-off-by: Valerio Setti <valerio.setti@nordicsemi.no>
Restore guards from the previous release, instead of the new, more
permissive guards.
Signed-off-by: Manuel Pégourié-Gonnard <manuel.pegourie-gonnard@arm.com>
* Don't break string literals in the allow list definition
* Comment why each test that belongs to the allow list is there (see
  the sketch below).
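Concretely, the allow list now reads along these lines (the entries
and reasons below are made up for illustration):

    ALLOW_LIST = [
        # Feature not implemented yet, but its test is already written
        'test_suite_example;Test of the not-yet-implemented feature',
        # Only runs in a configuration that no CI component enables
        'test_suite_example;Test needing an unused configuration',
    ]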
Signed-off-by: Tomás González <tomasagustin.gonzalezorlando@arm.com>
* Turn the warnings produced for non-executed tests that are not in
the allow list into errors.
Signed-off-by: Tomás González <tomasagustin.gonzalezorlando@arm.com>
Introduce the --require-full-coverage option in analyze_outcomes.py so
that, when analyze_outcomes.py --require-full-coverage is called, tests
that are not executed and are not in the allow list issue an error
instead of a warning.
Note that it is useful to run analyze_outcomes.py on incomplete test
results, so this error mode needs to remain optional in the long
term.
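A possible wiring for the option, using argparse (the surrounding
names are assumptions; only the flag itself comes from this change):

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument('--require-full-coverage', action='store_true',
                        dest='full_coverage',
                        help='Treat an available test case that is '
                             'never executed (and not allow-listed) '
                             'as an error rather than a warning')
    options = parser.parse_args()

    # Later, when a test case turns out never to have been executed:
    #     report = log.error if options.full_coverage else log.warning
    #     report('Test case not executed: ' + name)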
Signed-off-by: Tomás González <tomasagustin.gonzalezorlando@arm.com>
The allow list makes explicit which test cases are allowed to not be
executed when testing. This may be, for example, because a feature is
yet to be developed but the test for that feature is already in our
code base.
Signed-off-by: Tomás González <tomasagustin.gonzalezorlando@arm.com>