This commit makes a first step towards modularizing the incoming record
processing by having it operate on instances of the structure mbedtls_record
representing SSL records.
So far, only record encryption/decryption operates in terms of record
instances, while the rest of the parsing doesn't. In particular,
ssl_parse_record_header() operates directly on the fixed input buffer,
setting the various ssl->in_xxx pointers and fields, and these fields are
converted to/from mbedtls_record instances only immediately before/after
calling ssl_decrypt_buf().
This commit does not yet remove the ssl->in_xxx fields, but makes a step
towards extending the lifetime of the mbedtls_record structure representing
incoming records, by modifying ssl_parse_record_header() to set up an
instance of mbedtls_record and to set the ssl->in_xxx fields from that
instance. The instance constructed this way isn't used any further yet;
in particular, it is not yet consolidated with the instance set up for use
in ssl_decrypt_buf(). That's for a later commit.
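For illustration, a sketch of the new flow in ssl_parse_record_header();
variable and field names are approximations, and the DTLS-specific header
fields (epoch, sequence number) are omitted:

    mbedtls_record rec;
    unsigned char *p = ssl->in_hdr;          /* start of record header */

    rec.type        = p[0];                  /* content type           */
    rec.ver[0]      = p[1];                  /* protocol version       */
    rec.ver[1]      = p[2];
    rec.data_len    = ( (size_t) p[3] << 8 ) | p[4]; /* advertised len */
    rec.buf         = p;
    rec.data_offset = mbedtls_ssl_hdr_len( ssl );    /* header size    */

    /* The legacy fields are then set from the record instance: */
    ssl->in_msgtype = rec.type;
    ssl->in_msglen  = rec.data_len;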
Previously, ssl_parse_record_header() did not check whether the current
datagram is large enough to hold a record of the advertised size. This
could lead to records being silently skipped over or backed up on the
basis of an invalid record length. Concretely, the following would happen:
1) In the case of a record from an old epoch, the record would be
'skipped over' by setting next_record_offset according to the advertised
but non-validated length; only the subsequent call to
mbedtls_ssl_fetch_input() would then notice, through an assertion failure,
that the record length is too large for the current incoming datagram.
While not critical, this is fragile, and also contrary to the intent
that MBEDTLS_ERR_SSL_INTERNAL_ERROR should never be triggerable by
external input.
2) In the case of a future record being buffered, we might back up a
record before having validated its length, hence copying parts of the
input buffer that don't belong to the current record.
This is a bug, and it is only by luck that it doesn't seem to have
critical consequences.
This commit fixes both issues by modifying ssl_parse_record_header() to
check that the current incoming datagram is large enough to hold a record
of the advertised length, returning MBEDTLS_ERR_SSL_INVALID_RECORD otherwise.
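Schematically, with variable names that are assumptions and a simplified
bound on the datagram length:

    /* Reject the record if the current datagram can't hold the
     * advertised payload. */
    if( rec.data_offset + rec.data_len > (size_t) ssl->in_left )
    {
        MBEDTLS_SSL_DEBUG_MSG( 1, ( "datagram too short for advertised record length" ) );
        return( MBEDTLS_ERR_SSL_INVALID_RECORD );
    }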
We don't send alerts on other instances of ill-formed records,
so why should we do it here? If we want to keep the alert, it
should rather be sent from ssl_get_next_record().
As explained in the previous commit, if mbedtls_ssl_fetch_input()
is called multiple times, all but the first call are equivalent to
bounds checks on the incoming datagram.
In DTLS, if mbedtls_ssl_fetch_input() is called multiple times without
resetting the input buffer in between, the non-initial calls are functionally
equivalent to mere bounds checks ensuring that the incoming datagram is
large enough to hold the requested data. In the interest of code size
and modularity (removing a call to a non-const function which is logically
const in this instance), this commit replaces such a call to
mbedtls_ssl_fetch_input() with an explicit bounds check in
ssl_parse_record_header().
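Schematically (variable names are assumptions):

    /* Before: in DTLS, this non-initial fetch cannot read more data
     * and only re-checks bounds, at the cost of a non-const call. */
    if( ( ret = mbedtls_ssl_fetch_input( ssl,
                    rec.data_offset + rec.data_len ) ) != 0 )
        return( ret );

    /* After: an explicit, logically const bounds check. */
    if( rec.data_offset + rec.data_len > (size_t) ssl->in_left )
        return( MBEDTLS_ERR_SSL_INVALID_RECORD );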
Previously, `ssl_handle_possible_reconnect()` was part of
`ssl_parse_record_header()`, which was required to return a non-zero error
code to indicate a record which should not be processed further because it
was invalid, unexpected, duplicated, etc. In this case, some error codes
would lead to actions being taken, e.g. `MBEDTLS_ERR_SSL_EARLY_MESSAGE`
to potential buffering of the record, but eventually the record would be
dropped regardless of the precise value of the error code. The error code
`MBEDTLS_ERR_SSL_HELLO_VERIFY_REQUIRED` returned from
`ssl_handle_possible_reconnect()` did not receive any special treatment and
led to silent dropping of the record; in particular, it was never returned
to the user.
In the new logic this commit introduces, `ssl_handle_possible_reconnect()` is
part of `ssl_check_client_reconnect()`, which is triggered _after_
`ssl_parse_record_header()` has found an unexpected record; this is already
in the code path that eventually drops the record. We want to leave this
code path only if a valid cookie has been found and we want to reset, and
do nothing otherwise. That's why `ssl_handle_possible_reconnect()` now
returns `0` unless a valid cookie has been found or a fatal error occurred.
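At the call site, the new convention looks roughly like this (simplified
sketch, not the literal code):

    ret = ssl_check_client_reconnect( ssl );
    if( ret != 0 )
        return( ret ); /* valid cookie (reset) or a fatal error */
    /* ret == 0: fall through and silently drop the record. */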
Availability of sufficient incoming data should be checked when
it is needed, which is in mbedtls_ssl_fetch_input(), and this
function has the necessary bounds checks in place.
The check is in terms of the internal input buffer length and was hence
likely originally intended to protect against overflow of the input
buffer when fetching data from the underlying transport in
mbedtls_ssl_fetch_input(). For locality of reasoning, it's better to
perform such a check close to where it's needed; and in fact,
mbedtls_ssl_fetch_input() _does_ contain an equivalent bounds check of
its own, rendering the bounds check in question redundant.
mbedtls_ssl_decrypt_buf() asserts that the passed transform is not NULL,
but the function is only invoked in a single place, and that invocation
is visibly inside a branch ensuring that the incoming transform isn't
NULL. Remove the assertion to save code size.
The previous code performed architectural maximum record length checks
both before and after record decryption. Since MBEDTLS_SSL_IN_CONTENT_LEN
bounds the maximum length of the record plaintext, it suffices to check
only once after (potential) decryption.
This must not be confused with the internal check that the record
length is small enough to make the record fit into the internal input
buffer; this is done in mbedtls_ssl_fetch_input().
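A sketch of the single remaining check; variable names are assumptions:

    /* After (potential) decryption, rec.data_len is the plaintext
     * length, which is architecturally bounded. */
    if( rec.data_len > MBEDTLS_SSL_IN_CONTENT_LEN )
    {
        MBEDTLS_SSL_DEBUG_MSG( 1, ( "bad message length" ) );
        return( MBEDTLS_ERR_SSL_INVALID_RECORD );
    }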
Adopt a simple method for tracking whether there was a failure: each
fallible operation sets overall_status, unless overall_status is
already non-successful. Thus in case of multiple failures, the
function always reports whatever failed first. This may not always be
the right thing, but it's simple.
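A sketch of the pattern, with hypothetical helper names standing in for
the actual cleanup steps:

    psa_status_t overall_status = PSA_SUCCESS;
    psa_status_t status;

    status = first_fallible_operation( );    /* hypothetical */
    if( overall_status == PSA_SUCCESS )
        overall_status = status;

    status = second_fallible_operation( );   /* hypothetical */
    if( overall_status == PSA_SUCCESS )
        overall_status = status;

    /* Reports whatever failed first, if anything. */
    return( overall_status );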
This revealed a bug whereby if the only failure was the call to
psa_destroy_se_key(), i.e. if the driver reported a failure or if the
driver lacked support for destroying keys, psa_destroy_key() would
ignore that failure.
For a key in a secure element, if creating a transaction file fails,
don't touch storage, but close the key in memory. This may not be
right, but it's no wronger than it was before. Tracked in
https://github.com/ARMmbed/mbed-crypto/issues/215
When a key slot is wiped, a copy of the key material may remain in
operations. This is undesirable, but does not violate the safety of
the code. Tracked in https://github.com/ARMmbed/mbed-crypto/issues/86
The methods to import and generate a key in secure element drivers
were written for an earlier version of the application-side interface.
Now that there is a psa_key_attributes_t structure that combines all
key metadata including its lifetime (location), type, size, policy and
extra type-specific data (domain parameters), pass that to drivers
instead of separate arguments for each piece of metadata. This makes
the interface less cluttered.
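For example, the import method now looks roughly as follows (the exact
signature may differ; see crypto_se_driver.h for the authoritative
declaration):

    typedef psa_status_t (*psa_drv_se_import_key_t)(
        psa_drv_se_context_t *drv_context,
        psa_key_slot_number_t key_slot,
        const psa_key_attributes_t *attributes, /* replaces the separate
                                                 * lifetime, type, bits
                                                 * and policy arguments */
        const uint8_t *data,
        size_t data_length,
        size_t *bits );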
Update parameter names and descriptions to follow general conventions.
Document the public-key output on key generation more precisely.
Explain that it is optional in a driver, and when a driver would
implement it. Declare that it is optional in the core, too (which
means that a crypto core might not support drivers for secure elements
that do need this feature).
Update the implementation and the tests accordingly.
Register an existing key in a secure element.
Minimal implementation that doesn't call any driver method and just
lets the application declare whatever it wants.
Pass the key creation method (import/generate/derive/copy) to the
driver methods to allocate or validate a slot number. This allows
drivers to enforce policies such as "this key slot can only be used
for keys generated inside the secure element".
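Schematically (the exact signature may differ from the sketch below):

    typedef psa_status_t (*psa_drv_se_allocate_key_t)(
        psa_drv_se_context_t *drv_context,
        void *persistent_data,
        const psa_key_attributes_t *attributes,
        psa_key_creation_method_t method, /* import/generate/derive/copy */
        psa_key_slot_number_t *key_slot );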
Let psa_start_key_creation know what type of key creation this is. This
will be used at least for key registration in a secure element, which
is a peculiar kind of creation since it uses existing key material.
Allow the application to choose the slot number in a secure element,
rather than always letting the driver choose.
With this commit, any application may request any slot. In an
implementation with isolation, it's up to the service to filter key
creation requests and apply policies to limit which applications can
request which slot.
This function no longer modifies anything, so it doesn't actually
allocate the slot. Now, it just returns the empty key slot, and it's
up to the caller to cause the slot to be in use (or not).
Add a slot_number field to psa_key_attributes_t and getter/setter
functions. Since slot numbers can have the value 0, indicate the
presence of the field via a separate flag.
In psa_get_key_attributes(), report the slot number if the key is in a
secure element.
When creating a key, for now, applications cannot choose a slot
number. A subsequent commit will add this capability in the secure
element HAL.
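Usage sketch; the behavior when no slot number is set is described above,
but the exact error value is an assumption:

    psa_key_attributes_t attributes = PSA_KEY_ATTRIBUTES_INIT;
    psa_set_key_slot_number( &attributes, 0 ); /* 0 is a valid slot,
                                                * hence the flag */

    psa_key_slot_number_t slot_number;
    psa_status_t status = psa_get_key_slot_number( &attributes,
                                                   &slot_number );
    /* status != PSA_SUCCESS if no slot number has been set. */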
Add infrastructure for internal, external and dual-use flags, with a
compile-time check (if static_assert is available) to ensure that the
same numerical value doesn't get declared for two different purposes
in crypto_struct.h (external or dual-use) and
psa_crypto_core.h (internal).
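An illustrative sketch of the collision check; the macro names and values
below are assumptions, not the actual flag definitions:

    #include <assert.h>

    #define PSA_KA_MASK_EXTERNAL_ONLY 0x0001u /* crypto_struct.h   */
    #define PSA_KA_MASK_DUAL_USE      0x0002u /* crypto_struct.h   */
    #define PSA_KA_MASK_INTERNAL_ONLY 0x0100u /* psa_crypto_core.h */

    #if defined( static_assert )
    static_assert( ( PSA_KA_MASK_INTERNAL_ONLY &
                     ( PSA_KA_MASK_EXTERNAL_ONLY | PSA_KA_MASK_DUAL_USE ) )
                   == 0,
                   "key attribute flag masks must not overlap" );
    #endif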
mbedtls_ctr_drbg_random can only return up to
MBEDTLS_CTR_DRBG_MAX_REQUEST (normally 1024) bytes at a time. So if
more than that is requested, call mbedtls_ctr_drbg_random in a loop.
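A sketch of the loop (simplified; the error mapping is an assumption):

    while( output_size > 0 )
    {
        size_t request = output_size;
        if( request > MBEDTLS_CTR_DRBG_MAX_REQUEST )
            request = MBEDTLS_CTR_DRBG_MAX_REQUEST;
        ret = mbedtls_ctr_drbg_random( &ctr_drbg, output, request );
        if( ret != 0 )
            return( PSA_ERROR_INSUFFICIENT_ENTROPY );
        output += request;
        output_size -= request;
    }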
When psa_generate_random fails, psa_generate_key_internal frees the
key buffer but leaves the pointer to the now-freed buffer in the slot.
Then psa_generate_key calls psa_fail_key_creation, which sees the pointer
and calls free() again.
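The fix, schematically (field names are approximations):

    if( status != PSA_SUCCESS )
    {
        mbedtls_free( slot->data.raw.data );
        /* Clear the dangling pointer so that psa_fail_key_creation()
         * doesn't free the buffer a second time. */
        slot->data.raw.data = NULL;
        slot->data.raw.bytes = 0;
        return( status );
    }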
This bug was introduced by ff5f0e7221d54e5a11db13c5198093a6b6bf4d53
"Implement atomic-creation psa_{generate,generator_import}_key" which
changed how psa_generate_key() cleans up on errors. I went through the
code and could not find a similar bug in cleanup on an error during
key creation.
Fix #207