Can a Lemon Market Remedy a Lemon Market?

It is commonly agreed that the market for cybersecurity products and services is what economists call a lemon market (following the 1970 work of the economist George Akerlof, who jointly received the prestigious Nobel Memorial Prize in Economic Sciences with Michael Spence and Joseph Stiglitz in 2001), and people sometimes argue that certification may remedy the situation.

In this note, I contradict this argument, mainly because the market for certificates is itself a lemon market. So the key question is: Can we remedy a lemon market by putting in place another lemon market, or do we need something else? To reasonably argue about this question, one has to first look at the market for certificates as it stands today.

Since the early 1980s, people have tried to define criteria to evaluate and certify the security of computer systems used for the processing, storage, and retrieval of sensitive or even classified information. In 1983, for example, the U.S. Department of Defense (DoD) released the Trusted Computer System Evaluation Criteria (TCSEC), frequently referred to as the Orange Book. In 1991, the European Communities published the Information Technology Security Evaluation Criteria (ITSEC), based on previous work done in Germany, France, the United Kingdom, and the Netherlands. These and a few other initiatives finally culminated in the Common Criteria (CC), an internationally agreed and standardized set of criteria to evaluate and certify the security of computer systems. Unfortunately, the CC are not self-contained: every nontrivial set of functionalities requires a protection profile (PP) against which the CC can be applied. These PPs are usually defined by the largest manufacturers in the respective field, and hence they tend to be somewhat biased towards what the leading products can do. Also, certificates issued in the context of a CC PP are usually hard for customers to understand. The situation is far from mature and satisfactory for both manufacturers and customers.

A similar lack of maturity applies to certificates issued for information security management systems (ISMS) according to ISO/IEC 27001. The systems can be customized and fine-tuned according to their scopes and statements of applicability, and hence one has to look closely at what an ISO/IEC 27001 certificate actually stands for. It does not necessarily stand for best practices and a reasonable level of security in all cases. As is usually the case in security, the devil is in the details.

For all types of certificates currently on the market, including the CC and ISO/IEC 27001, the owner of the certificate pays for the evaluation and certification processes carried out by some accredited body. This is expensive and time-consuming. Consequently, almost all actors go for a body that is minimally invasive, meaning that the body that makes the best offer usually wins the competition. The market is thus driven by pricing, and low-priced offerings are always preferred. This is exactly how a lemon market works, and it downgrades quality in the long term.

The bottom line is that we have a lemon market for cybersecurity products and services that we want to remedy with another lemon market for certificates. This is not going to work. The manufacturers of low-security products and services are always going to find a body that takes a loose stance and doesn’t really question the security promises of the products and services it looks at. The body will find something (it has to, because it is paid for it), and everybody is happy, as long as the findings are not too embarrassing. Even the customers like a positive statement in favor of security. The economic incentives are not going to change, unless the customers pay for the evaluation and certification. This, however, is illusory and not going to happen. So we have to live with the situation that security certificates are neither expressive nor particularly useful, and we have to find other means to convince us about the security of a product or service. This is not simple, but needed in the field.

Posted in Uncategorized | Leave a comment

Is E2EE Conferencing Meaningful?

These days, my new book entitled End-to-End Encrypted Messaging is being printed and prepared for shipping. Because of this, but mainly because of the Corona crisis, I am often asked whether the various conferencing tools used worldwide, such as Zoom and Microsoft Teams, are reasonably secure and adhere to the state of the art. The short answer is “no,” but it makes a lot of sense to scrutinize both the question and the answer.

With regard to the question, the first counterquestion I would ask is why one would want to encrypt a conference in the first place, especially if the conference has many participants. Remember the famous quote attributed to Benjamin Franklin: “Three may keep a secret if two of them are dead.” It seems exaggerated, but the bottom line still holds: keeping a secret becomes increasingly difficult the more people share it. One may argue about the threshold, and the quote is overly pessimistic here, but beyond a handful of persons it seems very unlikely that a secret can ever be kept. This, in turn, means that secure – maybe even E2EE – conferencing is meaningful for small groups, but it certainly becomes more and more pointless the larger the group is. Note that any group member can tape the audio and/or video streams and redistribute them at will. If we are talking about dozens, hundreds, or even thousands of participants in a large conference, then encrypting it may be a nice engineering exercise, but its actual value may be small. The information is going to leak anyway, even if E2EE. This insight is just a consequence of human behavior and our (in)ability to keep secrets.

With regard to the answer, I am more optimistic. In spite of the fact that most conferencing tools are not truly end-to-end encrypting and sometimes have devastating shortcomings (e.g., Zoom seemingly encrypting with AES-128 in ECB mode), cryptographic research has come up with E2EE protocols that are highly secure and permanently refresh their keying material, such as the Signal protocol that is also used in WhatsApp, Facebook Messenger, and many more. This protocol is optimized for the asynchronous setting, but it works equally well in the simpler (synchronous) setting of a conference. Some messengers are already using this protocol for small-group conferencing (e.g., WhatsApp for groups up to 4 members). Furthermore, the community (in particular the IETF MLS WG) is working on a messaging layer security (MLS) protocol that is particularly well suited for large groups with thousands of members. This protocol can be used for E2EE messaging, but it can also be used for E2EE conferencing. So from a technical perspective, the problem of how to implement E2EE messaging and conferencing in a scalable way seems to be solved. The remaining question is how reasonable and meaningful it is to end-to-end encrypt large conferences. My personal impression is that secret information should not be discussed in large conferences, and hence currently deployed messengers (which support groups of up to a few members) are sufficient here.

If you still want to use an E2EE conferencing tool, then it makes sense to study the details. As is usually the case in security, the devil is in the details, and the details make the difference. These days, marketing departments are good at putting together buzzwords and acronyms (like E2EE) to make product sheets as interesting and promising as possible. It is therefore important to stay critical and ask the right questions. E2EE conferencing may not be the appropriate solution in all situations.


New Book Released Soon

My new book about secure and end-to-end encrypted (E2EE) messaging will be released soon. It addresses E2EE messaging protocols, like OpenPGP and S/MIME, as well as OTR, Signal, iMessage, Wickr, Threema, Telegram, and many more. The core of the book is the Signal protocol that represents the state-of-the-art in E2EE messaging as it stands today. Besides the Signal messenger, it is also used in WhatsApp, Viber, Wire, and the Facebook Messenger. The book can already be preordered from Artech House UK or US.


Intelligence-Driven Cyber Defense

In a 2015 article, I argued that conventional wisdom in information security management is deeply flawed, because it requires a risk-based approach while knowing full well that any form of risk analysis – be it quantitative or qualitative – is somewhat arbitrary and therefore largely useless. In spite of this argument, most information security officers and managers still continue to ask for compliance and audits (some organizations have even made their information security officer double as a compliance manager). Most efforts spent on information security management are therefore wasted, meaning that the respective labor is Sisyphean.

In this post, I want to continue this line of argumentation by proposing something that may replace risk-based information security management at some point in the future. For lack of a better term, I call it intelligence-driven (instead of “risk-based”) cyber defense (instead of “information security management”).

  • The first part of the term should make it clear that any form of risk analysis is better replaced with intelligence, meaning that information security can only be achieved if one knows what is going on in a particular information technology (IT) infrastructure. Without this knowledge, one is blind and doomed to fail. Intelligence is key to anything related to security.
  • The second part of the term should make it clear that cybersecurity takes place in a game-theoretic setting, in which there is an offense – represented by the adversaries – and a defense. A security professional’s job is to defend the IT infrastructure of his or her employer, i.e., to make sure that no adversary is able to successfully mount an attack. This job is very comparable to a defender’s job in a soccer team. It doesn’t matter whether the next offensive is launched by a wing player or the center forward; a good defense must be able to mitigate both. There is no use in arguing about probabilities: if an opposing team attacks with the center forward most of the time, this does not mean that the defense can count on that and forget about the wing players in the next offensive. Instead, a good defense must be prepared for anything, independent of any probability, and it must be able to react dynamically and situationally. The same line of thinking applies in cybersecurity: it is mainly about mitigating all possible attacks.

Putting the parts together, I think that future information security management needs to be intelligence-driven, and that the ultimate goal must be to set up a solid and profound cyber defense. It goes without saying that this requires a major change of mind in future generations of information security professionals. We have to move away from risk analysis to mechanisms and tools that allow us to gather as much intelligence as possible and to use it properly and wisely. We also have to take the stance of a good defense: be prepared for anything, however unlikely or improbable.


CALL FOR QUESTIONS

CRYPTOlog

I have added a cryptology blog named CRYPTOlog to the Web site of eSECURITY Technologies Rolf Oppliger (cryptolog.esecurity.ch). The aim is to answer questions related to cryptology that are of common interest. I am looking forward to receiving many interesting questions to answer.


What is PFS and PCS?

A topic that is ultimately important for understanding the current discussions about secure and E2EE messaging relates to the different notions of secrecy. Assume that some long-term keying material is compromised. What is the impact on the secrecy of the cryptographically protected (i.e., encrypted) data? Is the secrecy of the data still protected? Is there a difference between data sent in the past and data to be sent in the future? Questions like these have led to different notions of secrecy that are sometimes referred to using different (and sometimes confusing) terms.

Since the early 1990s, people have been using the term perfect forward secrecy (PFS) to refer to the property of a cryptographic system, using a particular key agreement protocol, that ensures that session keys don’t get compromised even if a long-term (typically private) key gets compromised. This definition is informal and not mathematically precise, but it is still intuitively clear what it means and what it stands for. Because the word “perfect” misleads people into believing that the notion of PFS is somehow related to Claude Shannon’s notion of “perfect secrecy,” people sometimes leave out the word “perfect” and use the term forward secrecy instead, i.e., synonymously and interchangeably with PFS.

From a practical viewpoint, providing PFS or forward secrecy typically requires an ephemeral Diffie-Hellman key exchange for every session key that is needed, with the long-term private key used only to authenticate the respective key exchange. If this (authentication) key gets compromised, there is still no way to recompute the session key. Such a session key can only get compromised while it (or any of the Diffie-Hellman parameters used to generate it) is stored in memory or in actual use. Once it is deleted, there is no way to recompute it – this, in turn, is what ensures PFS or forward secrecy.
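To make the mechanics concrete, here is a toy Python sketch of an unauthenticated ephemeral Diffie-Hellman exchange. The parameters are deliberately tiny and insecure (a real deployment would use a standardized large group or an elliptic curve), and the code is only meant to show why deleting the ephemeral private exponents is what provides forward secrecy:

```python
import secrets

# Toy, unauthenticated ephemeral Diffie-Hellman exchange. P and G are
# deliberately small and NOT secure -- illustration only.
P = 2**127 - 1  # a Mersenne prime, far too small for real-world use
G = 3

def ephemeral_keypair():
    # A fresh private exponent is drawn for every session and deleted
    # afterwards; this deletion is what yields forward secrecy.
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

# Alice and Bob each generate an ephemeral key pair and exchange the
# public values (in a real protocol, signed with their long-term keys).
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()

# Both sides derive the same session key from the other's public value.
alice_key = pow(b_pub, a_priv, P)
bob_key = pow(a_pub, b_priv, P)
assert alice_key == bob_key

# Once a_priv and b_priv are deleted, the session key cannot be
# recomputed from the public transcript (a_pub, b_pub) alone.
del a_priv, b_priv
```

A compromised long-term signing key lets an attacker impersonate a party in future exchanges, but it does not help recompute past session keys, because those depended only on the deleted ephemeral exponents.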

Things get more involved if one considers alternative approaches to achieving PFS or forward secrecy (other than always performing an ephemeral Diffie-Hellman key exchange). Consider, for example, what happens if one generates a new session key simply by hashing the old one. In this case, if a session key gets compromised, it is not possible to compute any previously used session key (because this would require inverting the hash function in use), but it is still perfectly feasible to compute all subsequently used session keys (because the compromised key can simply be subjected to the hash function again). Hence, this simple key update mechanism provides some sort of PFS or forward secrecy, namely one that is backward-oriented in time: any previously used session key remains secret, but any session key to be used in the future is compromised trivially.
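A minimal Python sketch of this hash-based key update may help. The domain-separation label is made up for illustration; the point is simply that the hash chain can be walked forwards but not backwards:

```python
import hashlib
import secrets

def next_key(key: bytes) -> bytes:
    # Derive the next session key by hashing the current one. The label
    # is a hypothetical domain-separation string, not from any standard.
    return hashlib.sha256(b"session-key-update" + key).digest()

# A chain of session keys k0 -> k1 -> k2 -> ...
k0 = secrets.token_bytes(32)
k1 = next_key(k0)
k2 = next_key(k1)

# Suppose k1 leaks. The attacker can trivially move forward in time...
assert next_key(k1) == k2
# ...but recovering k0 would require inverting SHA-256, which is
# computationally infeasible: forward secrecy in the narrow sense, but
# no post-compromise security.
```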

This insight has led to a more subtle use of the terms PFS and forward secrecy. In fact, the terms are still used, but in the sense that the respective key agreement protocol protects data secrecy against a key compromise that may occur in the future. In contrast, if the key agreement protocol protects data secrecy against a key compromise that may have occurred in the past, then people often use the complementary terms post-compromise security (PCS) or future secrecy. The above-mentioned scheme of generating a new session key by hashing the old one provides PFS and forward secrecy in the new and narrow sense, but it does not provide PCS or future secrecy. So when discussing the level of secrecy a key agreement protocol provides, one usually has to discuss the two cases. The question to ask is what happens if some keying material gets compromised: Is the secrecy of past data still protected or not, and, vice versa, is the secrecy of future data protected or not? The first question leads to the notions of PFS and forward secrecy, whereas the second question leads to the notions of PCS and future secrecy. In the ideal case, both notions of secrecy apply.

Figure 1: The notions of secrecy

The terminology used to refer to the notions of secrecy is summarized in Figure 1. It is confusing, because the two notions of secrecy could also be referred to as pre-compromise security and PCS (but then both acronyms would be the same) or backward secrecy and future secrecy (but then we would have to use the term “backward secrecy” as a synonym for “forward secrecy,” which is not very intuitive). For lack of better terminology, we (as a community) use the terms forward secrecy and PCS in most cases (these are the terms written in bold face in Figure 1). This terminology is neither elegant nor intuitive, but it is in line with the literature. Forward secrecy and PCS are important criteria when it comes to discussing secure and E2EE messaging on the Internet. While OpenPGP and S/MIME provide neither of the two properties, modern approaches and solutions, like OTR and Signal, typically do. In fact, the provision of forward secrecy and PCS is one of the distinguishing features of a modern and state-of-the-art E2EE messaging protocol.


“Off-the-record” (OTR) messaging

After the development and deployment of OpenPGP and S/MIME, it was commonly agreed that the secure messaging problem was solved, and that public key cryptography provides a viable solution: digital signatures for authentication (and nonrepudiation) and digital envelopes for confidentiality protection. It was argued that the slow deployment of OpenPGP and S/MIME in the field was only due to poor usability, rather than technical inadequacy. This changed in 2004, when Nikita Borisov, Ian Goldberg, and Eric Brewer published a research paper in which they challenged this common wisdom and questioned the adequacy of existing technologies for secure messaging. In particular, they stressed the fact that these technologies provide neither forward secrecy nor deniable authentication, and that these shortcomings limit their usefulness in the field. Note what happens if a long-term private key gets compromised: all messages that have been or will ever be enveloped with this key are compromised as well. The respective damage in terms of confidentiality loss is as large as possible. Furthermore, many of these messages will be digitally signed, and hence carry a cryptographic proof of their origin. This, in turn, means that the originator of such a message cannot meaningfully deny having sent it. There are certainly cases in which this undeniability does not pose any problem. But there are also cases in which it may pose a huge problem to the originator of such a message. Think about whistle-blowers, dissidents, and political activists.

Against this background, the above-mentioned paper argued that people sometimes want to hold a casual conversation that is private, informal, and unofficial. It is like a conversation held in a backyard without witnesses. In real life, we attribute the term “off-the-record” to this type of conversation; it leaves neither traces nor records that may prove that the conversation ever took place. The term “off-the-record” can also be used in the realm of secure messaging, and hence OTR messaging refers to this type of message exchange. It is as private as possible, and it provides repudiation, meaning that it can be denied by all participants. For the reasons mentioned above, OTR messaging cannot be implemented with digital envelopes and digital signatures. Instead, technologies are needed that provide forward secrecy and deniability – or even plausible deniability. Forward secrecy is usually achieved by using ephemeral Diffie-Hellman key exchanges, and it can be further improved by restricting the lifetime of the keys in use. OTR messaging uses a Diffie-Hellman ratcheting mechanism to refresh the key as often as possible. Plausible deniability, in turn, is achieved by using MACs instead of digital signatures. Note that a MAC is symmetric in nature, meaning that it can be generated and verified by either side of a communication, i.e., both the sender and the recipient of a message. This, in turn, allows the originator of a message to deny having sent it. There are even some complementary techniques that can be used to further strengthen deniability.
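The deniability argument about MACs can be illustrated with a short Python sketch (the key and message here are made up; in OTR the MAC key is derived from the Diffie-Hellman exchange). Because both sides hold the same key, a tag convinces the recipient but proves nothing to a third party:

```python
import hashlib
import hmac

# Hypothetical shared key and message -- in a real protocol, the MAC key
# would be derived from the Diffie-Hellman exchange.
shared_key = b"key derived from a Diffie-Hellman exchange"
message = b"let's keep this off the record"

# Either party can compute the very same tag...
tag_from_sender = hmac.new(shared_key, message, hashlib.sha256).digest()
tag_from_recipient = hmac.new(shared_key, message, hashlib.sha256).digest()
assert hmac.compare_digest(tag_from_sender, tag_from_recipient)

# ...so while the recipient knows the message is authentic (only a key
# holder could have produced the tag), the tag proves nothing to a third
# party about who authored the message. That is deniable authentication,
# in contrast to a digital signature.
```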

From today’s perspective, OTR messaging has paved the way to end-to-end encrypted (E2EE) messengers that use similar techniques. Most importantly, the Signal protocol (which is also used in WhatsApp and the Facebook Messenger) combines the Diffie-Hellman ratchet with some other technologies that can be used in an asynchronous setting. Keep in mind that OTR messaging was developed for instant messaging, where the participants need to be online. This need not be the case in an asynchronous setting, like the one used for e-mail. This complicates things a little bit, but OTR messaging is still a milestone in the development of contemporary secure and E2EE messaging solutions. It will be explained in detail in a new and upcoming book on E2EE messaging on the Internet that is scheduled for 2020.


TLS 1.3

David Wong has created an animated TLS 1.3 specification that is more readable and accessible than the purely text-based RFC 8446.


TLS transcripts available for download

If you want to delve more deeply into the technical specifics and details of the TLS 1.2 and TLS 1.3 protocols, then you may consider downloading and analyzing two log files that have been captured with Wireshark (TLS12Handshake.pcapng for TLS 1.2 and TLS13Handshake.pcapng for TLS 1.3). You can use the files for self-study or for educational purposes.


eSECURITY Academy

The 2019 program is available at esecurity.academy. There are several new courses and bootcamps on TLS 1.3, messaging security (including Signal and WhatsApp), cryptography, and cybersecurity.
