Pressure from governments on companies and institutions to provide access to encrypted communications and stored data is increasing. Many people call it the second crypto war. An influential report often cited in the discussion is “Keys Under Doormats: Mandating insecurity by requiring government access to all data and communications”, written by a score of well-known and respected scientists. The report raises many important and relevant points. However, it is very much focused on the argument that government access is a bad idea from a technical perspective. And I happen to disagree with that point of view. There are many good reasons against indiscriminate government access to public infrastructure, but the technical arguments are the least convincing in my mind. In fact I think it is dangerous and ineffective to argue against government access on technical grounds. Instead, the real arguments against indiscriminate government access are of an ethical, legal, political and organisational nature. Here is why.
The technical arguments against government access are weak
The report cites the following main technical arguments against government access: it thwarts forward security, it interferes with authenticated encryption, it increases system complexity, and the point through which government obtains access is an attractive target for bad actors. Let’s address each of the technical arguments one by one.
Government access thwarts forward security
Forward security guarantees that any information exchanged before a compromise remains confidential. A typical cryptographic protocol uses both long term keys and short term keys. The long term keys are used to exchange the short term keys. The short term keys are then used for the task at hand (e.g. encrypting or decrypting a message), after which they are disposed of. If the short term keys are established in such a way (typically through a so-called Diffie-Hellman key exchange) that an eavesdropper cannot recover them even when it knows the long term keys, the system is forward secure. This term captures the fact that for such systems, even if say the NSA records all messages (both the key exchange and the subsequent encrypted communication) and later compromises one of the participants in the conversation, it still cannot recover the short term decryption keys and decrypt the conversation.
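To make the mechanism concrete, here is a minimal sketch of an ephemeral Diffie-Hellman exchange in Python. The finite-field parameters are toy values chosen for illustration only (real systems use elliptic curves like X25519 or 2048-bit+ groups); the point is that the session key is derived only from short term secrets that are deleted after use.

```python
import hashlib
import secrets

# Toy public parameters (illustration only; far too small for real use).
P = 0xFFFFFFFFFFFFFFC5  # the largest prime below 2**64
G = 5

def ephemeral_keypair():
    """Generate a fresh short-term (ephemeral) DH key pair."""
    priv = secrets.randbelow(P - 2) + 1
    pub = pow(G, priv, P)
    return priv, pub

# Each party's long-term key would only *sign* the ephemeral public
# values (signing omitted here); it never encrypts the session key.
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()

# Both sides derive the same session key from the ephemeral secrets.
k_a = hashlib.sha256(str(pow(b_pub, a_priv, P)).encode()).digest()
k_b = hashlib.sha256(str(pow(a_pub, b_priv, P)).encode()).digest()
assert k_a == k_b

# After the conversation both parties delete a_priv and b_priv. An
# eavesdropper who later steals the long-term keys still only has
# a_pub, b_pub and ciphertext; recovering the session key would
# require solving the discrete log problem, so past traffic stays
# confidential.
```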
At first glance it would appear that forward security is indeed at odds with government access: the government wants access after the fact, it wants to later decrypt communication it intercepted. But this is not necessarily the case. Yes, forward security is thwarted if, as the report suggests, the short term decryption key is encrypted against a long term government access key. In that case indeed, if the government access key ever becomes public, all communication can be decrypted after the fact. But this is only one, and apparently not so very smart, way to implement government access.
By way of contrast, the following method for government access does provide forward security. The idea is to set up a second key exchange, between the user and the government access point, to establish a second short term key in a forward secure manner. The message is encrypted separately under both short term keys, allowing government to decrypt the message if it deems this necessary. Cooperation of (the equipment of) the user is required, just as in the case where the short term decryption key is encrypted against a long term government access key.
Also, in both cases, the escrow mechanism must be designed in such a way that the user cannot circumvent the escrow mechanism while still communicating securely.
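The dual-encryption idea can be sketched as follows. The two random session keys stand in for the results of the two separate forward secure key exchanges, and the SHA-256-based keystream is a toy stand-in for a real cipher (an illustration of the structure, not a secure construction).

```python
import hashlib
import secrets

def keystream_encrypt(key: bytes, msg: bytes) -> bytes:
    """XOR msg with a SHA-256-derived keystream (toy cipher; the same
    function decrypts, since XOR is its own inverse)."""
    stream = b""
    counter = 0
    while len(stream) < len(msg):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(m ^ s for m, s in zip(msg, stream))

# Stand-ins for the session keys produced by two independent
# forward-secure key exchanges: one with the recipient, one with the
# government access point.
k_recipient = secrets.token_bytes(32)
k_escrow = secrets.token_bytes(32)

msg = b"meet me at noon"
ct_recipient = keystream_encrypt(k_recipient, msg)
ct_escrow = keystream_encrypt(k_escrow, msg)

# The recipient decrypts its copy with its session key; the access
# point can decrypt the escrow copy with its own session key, but
# only that one.
assert keystream_encrypt(k_recipient, ct_recipient) == msg
assert keystream_encrypt(k_escrow, ct_escrow) == msg
```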
Of course the main drawback of this method is that the government needs to store each short term key separately. Then again: if the government is willing to collect and store all messages we transmit, storing the associated keys (in a separate location) is only an incremental increase in effort and resources. Moreover, the key exchange will involve some kind of active authentication of the government access point, meaning a private key needs to be readily available. This is an attractive target for bad actors (see below).
Another drawback is that this access point needs to be online (compared to the passive, non-forward-secure escrow mechanism). The encrypted communication itself needs to be intercepted in real time too, however, so this does not add much overhead.
In other words, this idea may certainly not be ideal or practical, but that may only be a matter of further research to iron out any practical inconveniences. The point is merely to show that a forward secure way of implementing government access is, in theory, possible. This weakens this particular technical argument against government access.
Government access interferes with authenticated encryption
Authenticated encryption provides both confidentiality (only the intended receiver can decrypt and access the message) and authenticity (the receiver knows the message was sent by the intended sender and was not modified in transit) using a single cryptographic operation with a single key. Now if government is given access to this key, the government too could have been the sender of the message. In other words, the authenticity of the message is no longer guaranteed. At least not cryptographically.
Authenticated encryption is efficient and has proven security properties. But nothing requires us to use authenticated encryption in a context where government believes access is needed.
Interestingly, the report writes
“Going back to the encryption methods of the 1990s, with separate keys for encryption and authentication, would not only double the computational effort required, but introduce many opportunities for design and implementation errors that would cause vulnerabilities.”
This is true, of course, but this again does not mean it is impossible to use traditional methods to provide authenticity and confidentiality using separate keys. It is merely less efficient. And yes, you have to be careful to implement it in a secure manner. Again this is certainly possible; there are even standards for that.
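As a sketch of this ‘1990s’ approach with separate keys, here is a minimal encrypt-then-MAC construction in Python. The keystream cipher is a toy stand-in for a real cipher (illustration only); the relevant property is that the MAC key is never needed for decryption, so authenticity can survive escrow of the encryption key.

```python
import hashlib
import hmac
import secrets

def keystream(key: bytes, data: bytes) -> bytes:
    """Toy XOR cipher from a SHA-256 keystream (illustration only)."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

def encrypt_then_mac(enc_key: bytes, mac_key: bytes, msg: bytes):
    """Encrypt with one key, then authenticate the ciphertext with a
    second, independent key."""
    ct = keystream(enc_key, msg)
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    return ct, tag

def verify_then_decrypt(enc_key: bytes, mac_key: bytes, ct: bytes, tag: bytes):
    """Check the MAC first; only decrypt if it verifies."""
    expected = hmac.new(mac_key, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return keystream(enc_key, ct)

enc_key, mac_key = secrets.token_bytes(32), secrets.token_bytes(32)
ct, tag = encrypt_then_mac(enc_key, mac_key, b"hello")
assert verify_then_decrypt(enc_key, mac_key, ct, tag) == b"hello"
```

Escrowing only `enc_key` would let a third party read the message, yet without `mac_key` it cannot forge a ciphertext that the receiver would accept.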
Government access increases system complexity
The report argues that government access increases system complexity. It cites two examples (healthcare.gov and the FBI Trilogy program) of software systems developed by the government that show how difficult it is to develop complex systems that are usable and secure. Yet, as the report also acknowledges, there are many complex systems out there that actually do work and do not suffer from major security breaches. The fact that secure software engineering is hard is independent of whether government access needs to be implemented, and is therefore not an argument against government access in itself.
Increased complexity has not stopped society from developing complex systems before. One could even argue that innovation and progress are based on our ability to manage the complexity of designing and operating ever more advanced systems.
Clearly some effort needs to be spent on thinking about the right fundamental approach to implementing government access (and this is not easy, because government is not clear about what it wants exactly). But the concept of secondary access is not uncommon and, as the report also mentions, in a business context some kind of escrow for both work-related communication and data storage is common practice, simply because the loss of a company employee’s decryption keys should not endanger the day-to-day running of the business. Also, widely used products like PGP (for secure email) and iMessage (for secure messaging) allow a message to be sent encrypted to several recipients at the same time.
The report argues that “scaling up a corporate mechanism to a global one is hard”. This may sound reasonable. It may even be true. But it needs more substantiation. And there are counterexamples. Some corporations are global, for example, and are able to deal with this complexity (including the complexity introduced by differences in legal regimes across countries).
Also, the Internet and all its basic services (some of them quite complex) are themselves global systems that appear to work… Standardisation is key, which is actually not such a bad thing in itself if it means that communication protocols (which currently differ among service providers, creating walled gardens) will be harmonised… Such standards do in fact exist for so-called lawful interception in telephone switches.
In fact many things increase system complexity. Business requirements, for example. If the requirements are important enough, the system development team will try to satisfy them while keeping the complexity within acceptable margins.
The point through which government obtains access is an attractive target for bad actors
If the government requires (near) real time access to the data (which is at least the case for communications, see below), the keys or credentials that provide access cannot be stored in highly secure facilities, and cannot be stored offline using some secret sharing technique. This is a genuine problem. The mere fact that the keys have to be readily available means they are also easily available to bad actors.
Accessing communications versus accessing data stored on smart phones
The Keys Under Doormats report actually discusses two scenarios.
- Providing exceptional access to globally distributed, encrypted messaging applications.
- Exceptional access to plaintext on encrypted devices such as smartphones.
The (presumed) technical risks discussed earlier all follow from the first scenario. But this scenario is in fact mostly problematic because of organisational and legal reasons, as will be described below.
Accessing data stored on a smart phone is a different matter though. The latest versions of both iOS and Android encrypt the data using a random key (different for each device), which is itself encrypted against a so-called Key-Encryption-Key (KEK) derived from the user password and some device specific information. Escrowing this KEK for each government that might want to have access to the device is unworkable: no smart phone would ever be usable in a business context, for fear of corporate espionage. Less unworkable would be to escrow the KEK against the government key of the country in which the device is sold. This gives consumers the option to choose the surveillance regime they prefer (if there is anything to choose…)
The most obvious choice, however, is to encrypt this KEK against a vendor key. This allows the vendor to decrypt the KEK, recover the random encryption key, and decrypt all data on the device. Government can ask the vendor to cooperate in decrypting data on a smart phone implicated in a serious crime or terrorist attack. The vendor will only cooperate if the request is substantiated and complies with the law. This implies that an appropriate process with proper oversight and guarantees is in place (and this is by no means a given in certain areas of the world). But the fact that a vendor is an essential and independent part of the loop does provide better guarantees against indiscriminate government access. The process could even be strengthened by involving an independent oversight board.
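The key hierarchy described above can be sketched as follows. All names and parameters here are illustrative assumptions, not the actual iOS or Android design, and the XOR-based key wrap is a toy stand-in for a real key-wrapping algorithm.

```python
import hashlib
import secrets

def xor_wrap(wrapping_key: bytes, key: bytes) -> bytes:
    """Toy key wrap: XOR with a hash of the wrapping key (the same
    function unwraps). Real systems use a proper key-wrap mode."""
    pad = hashlib.sha256(wrapping_key).digest()
    return bytes(a ^ b for a, b in zip(key, pad))

device_id = secrets.token_bytes(16)   # device specific information
data_key = secrets.token_bytes(32)    # random per-device key that
                                      # encrypts all user data

# KEK derived from the user password plus device specific information
# (iteration count is an illustrative choice).
kek = hashlib.pbkdf2_hmac("sha256", b"user-password", device_id, 100_000)

vendor_key = secrets.token_bytes(32)  # held by the vendor, not on device

# Two wrapped copies of the same data key are stored on the device.
wrapped_for_user = xor_wrap(kek, data_key)
wrapped_for_vendor = xor_wrap(vendor_key, data_key)

# Entering the password recovers the data key; the vendor can recover
# the same key via its own key, without knowing the password.
assert xor_wrap(kek, wrapped_for_user) == data_key
assert xor_wrap(vendor_key, wrapped_for_vendor) == data_key
```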
Access to the data can be provided in two ways. Either the vendor provides the government agency with the (device specific) random device encryption key. Or it can require that government hands over the device, after which the vendor itself unlocks it and returns the decrypted data found on the phone. The advantage of the second approach is that it involves a significant amount of effort, which means there is only limited capacity in terms of the number of smart phones that can or will be unlocked in this way. The advantage of the first approach is that the vendor itself does not get to see the data. If there are concerns about abuse of this unlocking feature over the air (e.g. through some trojan, or special hacking software like Gamma’s FinFisher), an interesting countermeasure may be to implement the unlocking feature in such a way that it requires physical access to the phone: the decryption key must be submitted over a special hardware port that is otherwise unused (similar to a JTAG port).
Such a vendor access key might sound risky. But it is no riskier than the typical update mechanism all smart phone vendors use to push updates to their smart phones. These updates are typically signed by a global vendor key. Such an update mechanism allows the vendor, in principle, to push an update that bypasses the device encryption altogether, having an even worse effect than a vendor access key…
Round-up concerning the technical arguments against
As I’ve argued above, the technical arguments against government access are not very convincing. A straightforward implementation of government access indeed breaks forward security. But this does not mean a form of access that maintains forward security cannot be designed. Although authenticated encryption is a nice, efficient, primitive to use, there is nothing inherently wrong with the conventional approach of using separate keys for confidentiality and authenticity. Government access indeed increases system complexity, but there are many things that increase system complexity, which has not stopped them being implemented successfully either. The most convincing technical argument is that the point through which government obtains access is an attractive target for bad actors.
All of the above discussion has focused on a rather conventional approach to government access, namely systems that somehow give government direct access to the data (or the keys with which the data is encrypted) whenever they see a need. As I will argue below, one important argument against such proposals for government access is that they lack an inherent balancing mechanism that makes it impossible to pursue a capture and analyse all approach. There are other approaches (for example based on revocable privacy) that impose strict technical limits to what government has access to. I plan to write a detailed blog post about some of these approaches soon.
The ethical, legal, political and organisational arguments against
There are also numerous ethical, legal, political and organisational arguments against government access. These arguments are, in my mind, more important than the technical arguments. I will discuss them next.
Is government access necessary?
First and foremost, the question is whether government access is necessary at all. Contrary to what is often said, law enforcement is not “going dark”. In fact it has better and more effective surveillance capabilities now than ever before. If government access is deemed necessary, because certain crimes and acts of terror are hard to fight without it, the question is what type of access is required, and which procedures and safeguards need to be in place.
This is related to one important argument the ‘Keys Under Doormats’ report raises, namely that government is very unclear about what kind of access it wants, and under which conditions. This makes the concept of government access a moving target, and thus hard to argue for or against (on whichever grounds). Governments’ positions (yes, all of them ;-) have to be made more clear on this, and this involves an active dialogue between policy makers, technical experts, representatives of industry and other stakeholders, and, most importantly, civil society.
Limitless access at odds with democratic society
The current proposals for government access (in so far as they are very clearly articulated, see the previous section), and in fact the current extent of dragnet surveillance as revealed by Edward Snowden, are at odds with the careful system of checks and balances that are the foundation of a democratic society. They provide the government with access to essentially all data, without any inherent limitation in terms of effort required. This is the essential difference between the virtual world (where after an initial investment the cost per data item surveilled becomes marginal) and the real world (where you cannot have half the world surveil the other half — although the Stasi in East Germany came pretty close).
The real risks of any system for government access lie elsewhere: such a system is easily abused by insiders, and function creep looms around the corner. It is very tempting to use a system that is already in operation for other, more invasive, purposes if the perceived need is high and the cost of doing so is essentially zero.
The question of jurisdiction
Although a global, standardised, system for government access to communications is certainly possible (as argued above), this does not mean the question which government is allowed access to what is easily resolved. In fact, as the Keys Under Doormats report rightfully argues, this is a highly problematic issue.
First of all, different countries may have different legal requirements that may have consequences for the way access must be implemented technically. If the differences are large, or if certain legal requirements from different countries are even conflicting, no standard for government access can be agreed on in the first place. This is in fact a very likely scenario, as one might at least hope that the legal framework for government access in democratic societies is fundamentally different from such frameworks in a dictatorial regime.
If no standard can be agreed on, the question indeed is “If a British-based developer deploys a messaging application used by citizens of China, must it provide exceptional access to Chinese law enforcement?” And if so, how is that going to be enforced?
Now let us assume that somehow a standard can be agreed on. Then the question becomes: who gets access to what, who controls the keys that provide access to the data, and how can a meaningful level of oversight be implemented? The first question is especially important because if this is not clearly specified and technically enforced, one essentially opens up the whole Internet to dragnet surveillance by whichever country is interested. For starters this may lead to rampant economic espionage. More importantly, it leaves citizens totally unprotected because they may have no legal means to hold a foreign nation that surveilled them accountable.
In other words: we cannot ask for government access and expect only ‘us’, the good guys, to use it and use it wisely. The bad guys will demand (and get) access too. And in any case, recent history has shown that even the good guys are not using their excessive powers wisely at all.
Note that for plain old telephone systems (POTS) this issue is less problematic. POTS typically are national networks, under control of a few national telecommunication operators. For fixed line systems, the physical location of the caller and callee is known. Therefore, it is clear under which jurisdiction a request for access must be dealt with. For mobile systems, if a caller or callee is abroad the guest network that provides the connection with the phone similarly determines this jurisdiction. Moreover, there are no obvious ways for nations not involved in the communication to obtain access to the (encrypted) communication in the first place.
It transforms the Internet from open to highly regulated
Government access at a global scale means the use of secure communication technology must be highly regulated. In essence, the use of unescrowed products that do not cater for government access must be limited or outright banned. Sale of such products must be prohibited. This is relatively straightforward to achieve, although the ‘virtuality’ of software makes it harder to control its distribution, especially if the volumes are low enough to stay under the radar. But not only must the sale of end products be prohibited; the (free) distribution of libraries, or of source code that can be compiled to recreate the required applications, must be similarly restricted. Also, the distribution of information on how to bypass the access mechanism must be prevented, including any tools that can be used to patch and disable such mechanisms in otherwise approved products. Finally, any research into how such mechanisms for government access work must be banned, as it may reveal information on how to bypass the system. This has repercussions for security research in general, but especially for research that aims to strengthen the security of the government access mechanism.
All this is strikingly similar to the fruitless effort to ban the export of cryptography in the previous century, which was bypassed in an entirely legal way when the source code for PGP was printed in a book (and hence protected as free speech), shipped to Europe, and OCRed there back into compilable source code…
Smart crooks will use unescrowed products
Given the above discussion it is more than likely that unescrowed products that do not provide government access will be available — at least to an inner circle of knowledgeable people. Criminals and terrorists (and perhaps even respectable businessmen) who really care about their security will approach these people to ensure that they will be able to communicate securely, without the risk of government accessing their communications or their data.
The use of such products can only be prevented if all Internet traffic is monitored and anything that looks like encrypted yet unescrowed communication is blocked. This is not an easy task, and most certainly cannot be performed with 100% accuracy. The problem is similar to what certain regimes currently face when trying to prevent their citizens from accessing certain Internet services or websites with unwanted content. Such censorship only partially works, as many citizens successfully bypass it using censorship circumvention tools. In a similar vein, encrypted communication can be made to look like messages of any other (unencrypted) Internet protocol (like the one used for web browsing).
Of course, 100% accuracy is not necessarily required: by making the means for really secure communication sufficiently hard to come by, and the use of them a clearly criminal offense, a significant fraction of the people will simply not bother — for similar reasons, digital rights management sort of works. Moreover, it raises the bar for operational security for those who do try to stay unobserved by government. A mistake, especially in an emergency, is easily made, providing government with just that small piece of data it was waiting for.
The crypto wars are not about crypto.
Some people claim that it is impossible to make a secure system that also provides government access. Which is true if your security requirement is to not give government access. As I have argued above, the purely technical arguments against government access are not very convincing. Unfortunately, it is exactly those arguments that are most often raised in the debate. The Obama administration recently announced that it will not mandate government access ‘at the moment’. This position may well change after the first terrorist attack or other incident that could have been prevented had government access been possible. Instead of arguing against (unrestricted) government access on technical grounds, it is important to argue against it on more fundamental and convincing ethical, political and organisational grounds.
This still leaves the question whether some form of restricted government access should be catered for. This means we should have a societal debate about the extent of this access, and the safeguards that should be in place. In other words, we should start thinking about proper solutions to bridge the dilemma that both security (in the societal sense) and privacy are fundamental rights. These will be technical solutions. But based on what is possible, instead of what is supposedly impossible.
If we don’t, we may win this battle but lose the war, leaving us with a government and its intelligence and security services having unfettered access in the name of security, creating a surveillance state in which privacy no longer meaningfully exists.
Acknowledgements: I would like to thank Ryan Calo for helpful comments on an earlier draft of this article.
Source: Blog Jaap-Henk Hoepman