In Part 1 of this series, we spent some time discussing the history of the Secure Sockets Layer / Transport Layer Security protocols, in particular the characteristics of TLS v1.2. Today we will look at the latest available version of the protocol (TLS v1.3) and understand its key differences and advantages.
Before we begin, I strongly recommend that you read Part 1 of this series as it will give you the necessary background for understanding the topics discussed in this article. If you haven't done it already, get a cup of coffee (or tea) and enjoy your read. Done it? Let's look at TLS 1.3 then!
Despite all the improvements introduced over its predecessors, TLS 1.2 has several flaws, mainly stemming from the significant complexity of its configuration - meaning it's easy to get it wrong and create unintended vulnerabilities.
TLS 1.2 Shortcomings
Let's quickly summarize the main shortcomings of TLS 1.2, so we can better understand how TLS 1.3 improves on it:
The cipher suites available for TLS 1.2 are extensive. There are tens of available combinations, ultimately making it difficult for the end user to configure the best ones. This also increases the chances that a client and a server end up negotiating a weaker cipher suite than they could.
The cipher suites for TLS 1.2 allow for the configuration of four different algorithms (Key Exchange, Authentication, Session Encryption and Hashing/Pseudo Random Function), increasing the configuration complexity.
Adding to the above complexity, the parameter choice for certain algorithms (e.g. the Diffie-Hellman key exchange) is left to the user, which introduces security issues, as insecure choices can be made by the people configuring the server.
Some parts of the TLS handshake are not included in the digital signature that the server generates as proof of private key possession. This left the door open for a series of downgrade attacks which allowed an attacker to force the client and the server to communicate using a weak, easily crackable cipher.
Over time, it was realized that the performance of the TLS 1.2 handshake could be improved, reducing the number of roundtrips required to establish a secure communication channel. This becomes particularly important on mobile or other low powered devices, where either the connectivity may be spotty (and the latency elevated) or the capabilities of the device are limited and maximum economy is required.
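One practical way to sidestep most of this configuration complexity today is simply to refuse anything older than TLS 1.3. A minimal sketch using Python's standard `ssl` module (the context here is purely illustrative, not connected to any real server):

```python
import ssl

# Build a client context with secure defaults (certificate
# verification and hostname checking enabled).
ctx = ssl.create_default_context()

# Restrict the context to TLS 1.3 only: with older versions off the
# table, there are no RSA key exchanges or user-chosen Diffie-Hellman
# parameters left to get wrong.
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
```

Any handshake attempted with this context will now fail unless the peer also speaks TLS 1.3.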
Let's have a look now at what has changed in TLS 1.3 to overcome these and other limitations of the previous version.
TLS 1.3 Handshake
As discussed, the TLS 1.3 handshake is different. You can notice that it saves a full roundtrip compared to TLS 1.2, making it faster to establish a secure connection. This is a great advantage for long-distance communications and for any device that can only rely on patchy or reduced network connections (such as mobiles, IoT or other distributed devices).
As before, I've found the following resource very useful for my understanding:
In TLS v1.3, for the sake of simplification and in order to avoid misconfiguration, the ciphersuite only specifies two algorithms:
Data Encryption Algorithms: the method used to encrypt and decrypt the data to be secured, using the keys derived from the agreed master secret.
Data Integrity Algorithms: the method used to detect data errors or data tampering attempts for the encrypted messages, and also used for deriving key material from the master secret.
As you may have noticed from the Mozilla page I linked, TLS 1.3 suites only specify these two algorithms, for example TLS_AES_256_GCM_SHA384.
The choices around Authentication and Key Exchange have been removed: support for RSA key exchange has been dropped, and only ephemeral Diffie-Hellman exchanges are supported.
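The difference in negotiation surface is visible in the suite names themselves. A small sketch comparing a typical TLS 1.2 suite with a TLS 1.3 one (the string parsing is my own illustration, not a TLS API):

```python
# A typical TLS 1.2 suite encodes four choices: key exchange,
# authentication, session encryption, and hash/PRF.
tls12_suite = "ECDHE-RSA-AES256-GCM-SHA384"
kex, auth, *enc, mac = tls12_suite.split("-")
print(kex, auth, "-".join(enc), mac)   # ECDHE RSA AES256-GCM SHA384

# A TLS 1.3 suite encodes only two: the AEAD cipher and the hash.
tls13_suite = "TLS_AES_256_GCM_SHA384"
_, *aead, hash_alg = tls13_suite.split("_")
print("_".join(aead), hash_alg)        # AES_256_GCM SHA384
```

Two moving parts instead of four means far fewer combinations to negotiate, and far fewer ways to combine them badly.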
The handshake is optimised for the typical use cases, and relies on the fact that there is less variability on the possible cipher suites. This means that the client can optimistically guess which suite the server is going to accept, and therefore is able to send the initialization parameters and keys in the very first message to the server.
This is exactly what allows saving one entire roundtrip in the handshake. The client has sent its key share and parameters in the 'Hello' message, and the server can respond with its half of the key share in the response. At this point, both client and server already possess both halves of the key share and can compute the master secret and derive the keys required to initialize the symmetric encryption algorithm.
Should the server not support the "presumed" ciphersuite that the client has guessed, the client can retry the 'Hello' message - this is an unlikely circumstance, however.
TLS 1.3 Authentication
In TLS 1.3, there are three signature options available:
RSA - only used for signatures and NOT for key exchange.
ECDSA - Elliptic Curve Digital Signature Algorithm.
EdDSA - Edwards-curve Digital Signature Algorithm.
Because, by design, the server can start encrypting data much earlier in the TLS 1.3 handshake, it can sign (using its private key) the entire sequence of the handshake, and include this in the response that is sent to the client. The client can then verify that the signature is valid using the server's public key, which proves that the server possesses the private key.
The other aspects of the certificate validation are the same as described before in the TLS v1.2 scenario.
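The "sign the entire handshake" idea can be pictured with a transcript hash: both sides hash every handshake message they have seen, and the server's signature covers that digest, so any in-flight tampering (e.g. a downgrade attempt) changes the digest and invalidates the signature. A toy sketch using stdlib hashing only - the real protocol signs this digest with the certificate's private key, which I omit here, and the messages are invented for illustration:

```python
import hashlib

def transcript_hash(messages: list[bytes]) -> bytes:
    """Hash the concatenation of all handshake messages seen so far."""
    h = hashlib.sha384()
    for msg in messages:
        h.update(msg)
    return h.digest()

handshake = [b"ClientHello: TLS 1.3, key_share=...",
             b"ServerHello: TLS_AES_256_GCM_SHA384, key_share=..."]
digest = transcript_hash(handshake)

# If an attacker rewrites any message (say, to advertise a weaker
# version), the digest no longer matches what the server signed.
tampered = [b"ClientHello: TLS 1.0, key_share=...", handshake[1]]
assert transcript_hash(tampered) != digest
```

This is what closes the downgrade-attack door left open by TLS 1.2, where parts of the handshake fell outside the signature.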
TLS 1.3 Key Exchange
As we discussed above, the Key Exchange has been vastly improved. The client "guesses" what the server supports, and sends its half and the DH parameters in the initial message out to the Server.
The server in most cases will be able to respond with its half of the key share, and is immediately able to derive the symmetric key, as it already has the client's half and the initialization parameters.
This means that the server can return encrypted data from the very first response to the client (Certificate and Finished messages).
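The exchange can be illustrated with a toy finite-field Diffie-Hellman over deliberately tiny numbers (real TLS 1.3 uses large finite-field or elliptic-curve groups; values this small are trivially breakable and for illustration only):

```python
# Toy Diffie-Hellman: in TLS 1.3 the group is picked from a fixed,
# vetted set, and the client's public half rides in the ClientHello.
p, g = 23, 5                    # absurdly small, illustration only

client_secret = 6               # ephemeral, never sent on the wire
client_public = pow(g, client_secret, p)   # sent in the ClientHello

server_secret = 15              # ephemeral, never sent on the wire
server_public = pow(g, server_secret, p)   # sent in the ServerHello

# Each side combines its own secret with the other's public half and
# lands on the same value - key material, one roundtrip after hello.
client_shared = pow(server_public, client_secret, p)
server_shared = pow(client_public, server_secret, p)
assert client_shared == server_shared
```

Because both ephemeral secrets are discarded after the session, a later compromise of the server cannot decrypt recorded traffic - this is the forward secrecy that made ephemeral Diffie-Hellman the only surviving key exchange.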
Session Resumption: 0-RTT
If we look at a real life scenario, it is very likely that a client which requested a TLS connection earlier will want to do so again in the near future. Imagine for example all the apps on your phone, regularly contacting the APIs that are used to implement them: all these Twitter notifications, Whatsapp messages and Instagram content are coming from somewhere, aren't they? You are also likely to visit the same websites, or use the same services more than once.
It is therefore natural to think that there should be a way to "resume" a previous connection without having to incur the latency cost of the handshake every time.
In TLS 1.2 (although we did not discuss this in my previous article) this is supported with Session IDs or Session Tickets (see RFC 5077 for the details), allowing one round trip to be saved and achieving a one Round Trip Time (1-RTT) resumption. In short, this is a way to allow the server to cache a session key seen from a particular client, so that the client can reuse the same shared key in the future. These methods came with security concerns of their own, however.
TLS 1.3 replaced the above methods with a Pre Shared Key (PSK) based resumption: this is either a shared secret that the server and the client obtained outside of the protocol, or a shared secret that was established during a previous encrypted session. This can be stored on the server and tied to an ID, which is then given to the client, or encrypted by the server and sent back to the client, much as happened with TLS 1.2 Session IDs / Tickets.
When a client wants to resume a connection, it can include the PSK in the first flight out to the server. It can also include application data, already encrypted with the shared master secret established before. This means that the client and the server can start exchanging secure messages ("Early Data") from the very first exchange, achieving 0 Round Trip Time (0-RTT).
This is a significant improvement, however it comes at an expense:
Forward Secrecy is not applicable for the "Early Data", as it was encrypted with keys derived from the PSK and not with fresh keys. The rest of the handshake is still used to compute a fresh key, which is used for any application data coming afterwards. Still, the problem persists for the Early Data.
Replay Attacks: as the Early Data can be obtained independently of the data received from the server, it is possible that an attacker able to capture a 0-RTT message from the client can replay it to the server, and the server may end up serving the request as valid. This could cause significant issues if the application data can modify the server state - for example, if the message is "Take 10 GBP from Account A and move it to Account B". The problem can be alleviated if the specific system using TLS and 0-RTT is architected so that the data messages sent to the server are idempotent - in other words, by guaranteeing that processing the same message multiple times does not modify the final state of the application. Easier said than done!
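One common application-level mitigation (my own illustration, not part of TLS itself) is to attach a unique, client-generated request ID to each state-changing message and have the server deduplicate on it, which turns a replayed message into a no-op:

```python
# Hypothetical server-side handler: names and amounts are invented.
balances = {"A": 100, "B": 0}
seen_request_ids: set[str] = set()

def transfer(request_id: str, src: str, dst: str, amount: int) -> bool:
    """Apply the transfer once; silently drop replayed duplicates."""
    if request_id in seen_request_ids:
        return False                 # replay detected, state untouched
    seen_request_ids.add(request_id)
    balances[src] -= amount
    balances[dst] += amount
    return True

transfer("req-001", "A", "B", 10)    # legitimate early-data request
transfer("req-001", "A", "B", 10)    # attacker replays the capture
```

After both calls the money has moved exactly once - though, as the text notes, retrofitting this discipline onto every state-changing endpoint is easier said than done.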
As with everything in life, you can't have your cake and eat it 🍰
I hope in the last two articles I gave you a good initial understanding of TLS, and specifically of versions 1.2 and 1.3 - these are fairly complex topics and it was definitely a learning curve for me as well. I am sure there are other aspects of note that I have not fully delved into; perhaps we will revisit this again in the future.
If you are interested in this topic, I strongly recommend watching this talk (Feb 2018) from Andy Brodie at OWASP London