Mirror of https://github.com/Mbed-TLS/mbedtls.git
Update outcome analysis script & documentation

Now that the script only does a before-after comparison, it no longer makes
sense to ignore some test suites.

Signed-off-by: Manuel Pégourié-Gonnard <manuel.pegourie-gonnard@arm.com>
commit 222bc85c6c
parent 10e3963aa4
@@ -1,40 +1,21 @@
 #!/bin/sh
 
-# This script runs tests in various revisions and configurations and analyses
-# the results in order to highlight any difference in the set of tests skipped
-# in the test suites of interest.
+# This script runs tests before and after a PR and analyzes the results in
+# order to highlight any difference in the set of tests skipped.
 #
-# It can be used to ensure the testing criteria mentioned in strategy.md,
+# It can be used to check the first testing criterion mentioned in strategy.md,
 # end of section "Supporting builds with drivers without the software
-# implementation" are met, namely:
-#
-# - the sets of tests skipped in the default config and the full config must be
-#   the same before and after the PR that implements step 3;
-# - the set of tests skipped in the driver-only build is the same as in an
-#   equivalent software-based configuration, or the difference is small enough,
-#   justified, and a github issue is created to track it.
-#   This part is verified by tests/scripts/analyze_outcomes.py
+# implementation", namely: the sets of tests skipped in the default config and
+# the full config must be the same before and after the PR.
 #
 # WARNING: this script checks out a commit other than the head of the current
 # branch; it checks out the current branch again when running successfully,
 # but while the script is running, or if it terminates early in error, you
 # should be aware that you might be at a different commit than expected.
 #
-# NOTE: This is only an example/template script, you should make a copy and
-# edit it to suit your needs. The part that needs editing is at the top.
-#
-# Also, you can comment out parts that don't need to be re-done when
+# NOTE: you can comment out parts that don't need to be re-done when
 # re-running this script (for example "get numbers before this PR").
 
-# ----- BEGIN edit this -----
-# Space-separated list of test suites to ignore:
-# if SSS is in that list, test_suite_SSS and test_suite_SSS.* are ignored.
-IGNORE="md mdx shax" # accelerated
-IGNORE="$IGNORE entropy hmac_drbg random" # disabled (ext. RNG)
-IGNORE="$IGNORE psa_crypto_init" # needs internal RNG
-IGNORE="$IGNORE hkdf" # disabled in the all.sh component tested
-# ----- END edit this -----
-
 set -eu
 
 cleanup() {
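Between the two recording phases, the analysis the script performs boils down to extracting the skipped tests from each outcome file and diffing the two sets. A minimal sketch of that idea, assuming the semicolon-separated outcome format with the result in the fifth field (an assumption about the test framework, not taken from this patch) and hypothetical file names:

```sh
# Sketch only: list tests whose skipped status differs between two outcome
# files. Field 5 holding PASS/FAIL/SKIP is an assumption; before.csv and
# after.csv are hypothetical names.
compare_skipped () {
    for f in "$1" "$2"; do
        awk -F';' '$5 == "SKIP" { print $3 ";" $4 }' "$f" | sort > "$f.skipped"
    done
    # Lines present in only one of the two outputs are tests whose
    # skipped status changed between the two runs.
    diff "$1.skipped" "$2.skipped"
}

compare_skipped before.csv after.csv
```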
@@ -69,7 +50,6 @@ cleanup
 scripts/config.py full
 record "after-full"
 
-
 # analysis
 
 populate_suites () {
@@ -79,11 +59,7 @@ populate_suites () {
     for data in $data_files; do
         suite=${data#test_suite_}
         suite=${suite%.data}
-        suite_base=${suite%%.*}
-        case " $IGNORE " in
-            *" $suite_base "*) :;;
-            *) SUITES="$SUITES $suite";;
-        esac
+        SUITES="$SUITES $suite"
     done
     make neat
 }
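As an aside, the two parameter expansions kept above turn a data file name into a suite name; with an assumed example file name they behave like this:

```sh
# Illustration with an assumed file name (not necessarily a real suite):
data=test_suite_psa_crypto.generate_key.data
suite=${data#test_suite_}   # strips the prefix  -> psa_crypto.generate_key.data
suite=${suite%.data}        # strips the suffix  -> psa_crypto.generate_key
echo "$suite"               # prints: psa_crypto.generate_key
```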
@@ -386,15 +386,16 @@ are expressed (sometimes in bulk), to get things wrong in a way that would
 result in more tests being skipped, which is easy to miss. Care must be
 taken to ensure this does not happen. The following criteria can be used:
 
-- the sets of tests skipped in the default config and the full config must be
-  the same before and after the PR that implements step 3;
-- the set of tests skipped in the driver-only build is the same as in an
-  equivalent software-based configuration, or the difference is small enough,
-  justified, and a github issue is created to track it.
-
-Note that the favourable case is when the number of tests skipped is 0 in the
-driver-only build. In other cases, analysis of the outcome files is needed,
-see the example script `outcome-analysis.sh` in the same directory.
+1. The sets of tests skipped in the default config and the full config must be
+the same before and after the PR that implements step 3. This is tested
+manually for each PR that changes dependency declarations by using the script
+`outcome-analysis.sh` in the present directory.
+2. The set of tests skipped in the driver-only build is the same as in an
+equivalent software-based configuration. This is tested automatically by the
+CI in the "Results analysis" stage, by running
+`tests/scripts/analyze_outcomes.py`. See the
+`analyze_driver_vs_reference_xxx` actions in the script and the comments above
+their declaration for how to do that locally.
 
 
 Migrating away from the legacy API
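As a rough guide to reproducing the automated part locally, assuming the `MBEDTLS_TEST_OUTCOME_FILE` environment variable is what makes the test suites append one line per test case, and that `analyze_outcomes.py` accepts the outcome file as its argument (both should be verified against the scripts themselves):

```sh
# Hypothetical local workflow, not part of this patch.
export MBEDTLS_TEST_OUTCOME_FILE="$PWD/outcomes.csv"
rm -f "$MBEDTLS_TEST_OUTCOME_FILE"      # start from a clean outcome file
make test                               # each test case appends one line
tests/scripts/analyze_outcomes.py "$MBEDTLS_TEST_OUTCOME_FILE"
```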