Wednesday, July 5, 2017
The Coming Quantum Computing Crypto Apocalypse
Modern media, both fact and fiction, loves the Apocalypse and the Dystopian future. The Quantum Apocalypse is just one example, but one close to the subject of this blog. It posits that the coming revolution called quantum computing will render modern encryption obsolete and destroy modern commerce as we have come to know it. It was the hook for the 1992 movie Sneakers starring Robert Redford, Sidney Poitier, Ben Kingsley, and River Phoenix.
This entry will tell the security professional some useful things about the application of Quantum Mechanics to information technology in general, and Cryptography in particular, that will help equip him for, and enlist him in, the effort to ensure that commerce, and our society that depends upon it, survive. Keep in mind that the author is not a Physicist or even a cryptographer. Rather he is an octogenarian, a computer security professional, and an observer of and commentator on the experience that we call modern Cryptography beginning with the Data Encryption Standard.
For a description of Quantum Computing I refer you to Wikipedia. For our purpose here it suffices to say that it is very fast at solving certain classes of otherwise difficult problems. One of these is finding the factors of the product of two large prime numbers, the problem that one must solve in the RSA crypto system to recover the message from the cryptogram and the public key, or to recover the private key from the message, the cryptogram, and the public key.
This vulnerable algorithm is the one that we rely upon for symmetric key exchange in our infrastructure. In fact, because it is so computationally intensive, that is the only thing we use it for.
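To make the dependence concrete, here is a toy illustration in Python, using textbook RSA with absurdly small primes and no padding. It shows that once an attacker knows the factors of the public modulus, deriving the private key is trivial arithmetic; factoring large moduli quickly is exactly what a practical quantum computer running Shor's algorithm would make feasible.

```python
# Toy illustration of why factoring breaks RSA. The primes here are
# absurdly small; real moduli are thousands of bits long, and factoring
# them is the step a large quantum computer would make cheap.

p, q = 61, 53            # secret primes (normally unknown to the attacker)
n = p * q                # public modulus: 3233
e = 17                   # public exponent
phi = (p - 1) * (q - 1)  # Euler's totient, recoverable only by factoring n
d = pow(e, -1, phi)      # private exponent: this line is the "break"

message = 42
cryptogram = pow(message, e, n)     # anyone can encrypt with (n, e)
recovered = pow(cryptogram, d, n)   # factoring n let us compute d
assert recovered == message
```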
In theory, using quantum computing, one might find the factors almost as fast as one could find the product, while the cryptographic cover time of the system relies upon the fact that the former takes much longer than the latter. Cryptographers would certainly say that, by definition, at least in theory, the system would be "broken." However, the security professional would ask about the relative cost of the two operations. While finding the product can be done by any cheap computer, finding the factors quickly will require much rarer and more expensive "quantum" computers.
Cryptanalysis is one of the applications that has always driven cutting edge computing. One of the "Secrets of ULTRA" was that we invented modern computing in part to break the Enigma system employed by Germany. For all that, ULTRA was incredibly expensive. While automation made ULTRA effective, it was German key management practices that made it efficient. On the other hand, the modern computer made commercial and personal cryptography both necessary and cheap.
One can be certain that NSA is supporting QC research and will be using one of the first practical implementations for cryptanalysis. They will be doing it literally before you know it and exclusively for months to years after that.
Since ULTRA, prudent users of cryptography have assumed that, at some cost, nation states (particularly the "Five Eyes," Russia, China, France, and Israel) can read any message that they wish. However, in part because the cost of reading one message includes the cost of not reading others, they cannot read every message that they wish.
The problem is not that Quantum Computing breaks Cryptography, per se, but that it breaks one system on which we rely. It is not that we do not have QC resistant crypto but that replacing what we are using with it will take both time and money. The faster we want to do it, the more expensive it will be. Efficiency demands that we take our time; effectiveness requires that we not be late.
By some estimates we may be as much as 10 years away from an RSA break but then again, we might be surprised. One strategy to avoid the consequences of surprise is called "crypto agility." It implies using cryptography in such a way that we can change the way we do it in order to adapt to changes in the threat environment.
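In code, crypto agility can be as simple as isolating the key exchange choice behind an interface so that an implementation can be swapped without touching its callers. The sketch below is hypothetical; the class and registry names are illustrative, not any particular library's API.

```python
from abc import ABC, abstractmethod

class KeyExchange(ABC):
    """Hypothetical interface; callers never name a specific algorithm."""
    @abstractmethod
    def shared_secret(self, peer_public: bytes) -> bytes: ...

class RsaKeyExchange(KeyExchange):
    def shared_secret(self, peer_public: bytes) -> bytes:
        raise NotImplementedError("today's algorithm goes here")

class QuantumSafeKeyExchange(KeyExchange):
    def shared_secret(self, peer_public: bytes) -> bytes:
        raise NotImplementedError("tomorrow's algorithm goes here")

# The deployment, not the application code, decides which algorithm
# runs; retiring RSA becomes a configuration change, not a rewrite.
REGISTRY = {"rsa": RsaKeyExchange, "pq": QuantumSafeKeyExchange}

def negotiate(algorithm_name: str) -> KeyExchange:
    return REGISTRY[algorithm_name]()
```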
For example, there are key exchange strategies that are not vulnerable to QC. One such has already been described by the Internet Engineering Task Force (IETF). It requires a little more data and cycles than RSA but this is more than compensated for by the falling cost of computing. It has the added advantage that it can be introduced in a non-disruptive manner, beginning with the most sensitive applications.
History informs us that cryptography does not fail catastrophically and that while advances in computing benefit the wholesale cryptanalyst, e.g., nation states, before the commercial cryptographer, in the long run they benefit the cryptographer orders of magnitude more than the cryptanalyst. In short, there will be a lot of work but no "Quantum Apocalypse." Watch this space.
Labels:
Cryptography,
QSH,
Quantum Computing,
Quantum Safe Hybrid Key Exchange,
RSA,
TLS
Wednesday, May 3, 2017
The Next Great Northeastern Blackout
It has already been more than thirteen years since the last great northeastern blackout. The mean time between such blackouts is roughly twenty years.
Blackouts are caused by a number of simultaneous component failures that overwhelm the ability of the system to cope. While the system copes with most component failures so well that the consumer never even notices, there is an upper bound to the number of concurrent failures that it can tolerate. In the face of these failures the system automatically does an orderly protective shutdown that assures its ability to restart within tens of hours to days.
However, such surprising shutdowns are experienced by the community as a "failure." One result is finger pointing, blaming, and shaming. Rather than being seen as a normal and predictable occurrence with a proper and timely response, the Blackout is seen as a "failure" that should have been "prevented."
These outages are less disruptive than an ice storm. However, even though they are as natural and as inevitable as weather related outages, they are not perceived that way. The public and their political representatives see weather related outages as unavoidable, but they see technology failures, however inevitable, as one hundred percent preventable.
Security people understand that perfect prevention has infinite cost. As we increase the mean time between outages, we stop, at about twenty years, well short of infinity. This is in part because the cost of the next increment exceeds the value and in part because we reach a natural limit. We increase resistance to failure by adding redundancy and automation. However, we do this at the cost of ever increasing complexity. There is a point, at about twenty years MTBF, at which increased complexity causes more failures than it prevents.
Much of the inconvenience to the public is a function of surprise. Since they have come to expect prevention, they are not prepared for outages. The message should be that we are closer to the next Blackout than to the last. If you are not surprised and are prepared, you will minimize your own cost and inconvenience.
Sunday, March 26, 2017
Internet Vulnerability
On March 24, 2017 Gregory Michaelidis wrote in Slate on "Why America’s Current Approach to Cybersecurity Is So Dangerous."
He cited an article by Bruce Schneier.
In response, I observed to a number of colleagues, proteges, and students that "One takeaway from this article and the Schneier article that it points to is that we need to reduce our attack surface. Dramatically. Perhaps ninety percent. Think least privilege access at all layers to include application white-listing, safe defaults, end-to-end application layer encryption, and strong authentication."
One colleague responded "I think one reason the cyber attack surface is so large is that the global intel agencies hoard vulnerabilities and exploits..." Since secret "vulnerabilities and exploits" account for so little of our attack surface, I fear that he missed my point.
While it is true that intelligence agencies enjoy the benefits of our vulnerable systems and are little motivated to reduce the attack surface, the "hoarded vulnerabilities and exploits" are not the attack surface and the intel agencies are not the cause.
The cause is the IT culture. There is a broad market preference for open networks, systems, and applications. TCP/IP drove the more secure SNA/SDLC from the field. The market prefers Windows and Linux to OS X, Android to iOS, IBM 360 to System 38, MVS to FS, MS-DOS to OS/2, Z Systems to iSeries, Flash to HTML5, von Neumann architecture [Wintel systems] to almost anything else.
One can get a degree in Computer Science, even in Cyber Security, without ever hearing about a more secure alternative to the von Neumann architecture (e.g., the IBM iSeries: a closed, finite state architecture in which operations can take the system only from one valid state to another, with a limited set of strongly typed objects such that data cannot be executed and programs cannot be modified, a single level store, symbolic-only addressing, etc.).
We prefer to try to stop leakage at the end user device or the perimeter rather than administer access control at the database or file system. We persist in using replayable passwords in preference to strong authentication, even though they are implicated in almost every breach. We terminate encryption at the OS, or even the perimeter, rather than at the application. We deploy user programmable systems where application-only systems would do. We enable escape mechanisms and run scripts and macros by default.
We have too many overly privileged users with almost no multi-party controls. We discourage shared UIDs and passwords for end users but default to them for the most privileged users, where we most need accountability. We store our most sensitive information in the clear, as file system objects, on the desktop, rather than encrypted, in document management systems, on servers. We keep our most sensitive data and mission critical apps on the same systems where we run our most vulnerable applications, browsing and e-mail. We talk about defense in depth but operate our enterprise networks flat, with any-to-any connectivity and trust, neither structured nor architected. It takes us weeks to months just to detect breaches and more time to fix them.
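To make just one of these concrete: "strong authentication," in contrast to the replayable password, can be as simple as a time-based one-time password that binds each login to a short window, so a captured value is useless minutes later. A bare-bones sketch, in the spirit of RFC 6238 but with details simplified:

```python
import hashlib, hmac, struct, time

def one_time_password(shared_key: bytes, window_seconds: int = 30) -> str:
    """Derive a 6-digit code from a shared secret and the current
    time window; unlike a password, a replayed code soon expires."""
    counter = int(time.time()) // window_seconds
    mac = hmac.new(shared_key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"
```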
I can go on and I am sure you can add examples of your own. Not only is the intelligence community not responsible for this practice, they are guilty of it themselves. It was this practice, not secret vulnerabilities, that was exploited by Snowden. It is this culture, not "hoarded vulnerabilities and exploits," that is implicated in the breaches of the past few years. It defies reason that one person acting alone could collect the data that Snowden did without being detected.
Nation states do what they do; their targets of choice will yield to their overwhelming force. However, we need not make it so easy. We might not be able to resist dragons but we are yielding to bears and brigands. I admit that the culture is defensive and resistant to change but it will not be changed by blaming the other guy. "We have met the enemy and he is us."
Wednesday, January 4, 2017
All "Things" are not the Same
My mentor, Robert H. Courtney, Jr. was one of the great original thinkers in security. He taught me a number of useful concepts some of which I have codified and call "Courtney's Laws." At key inflection points in information technology I find it useful to take them out and consider the problems of the day in their light. The emergence of what has been called the Internet of Things (IoT) is such an occasion.
Courtney's First Law cautioned us that "Nothing useful can be said about the security of a mechanism except in the context of a specific application and environment." This law can be usefully applied to the difficult, not to say intractable, problem of the Internet of things (IoT). All "things" are not the same and, therefore do not have the same security requirements or solutions.
What Courtney does not address is what we mean by "security." The security that most seem to think about in this context is resistance to interference with the intended function of the "thing" or appliance. The examples du jour include interference with the operation of an automobile or with a medical device. However, a greater risk is that the general purpose computer function in the device will be subverted and used for denial of service attacks or brute force attacks against passwords or cryptographic keys.
Key to Courtney's advice are "application" and "environment." Consider application first. The security we expect varies greatly with the intended use of the appliance. We expect different security properties, features, and functions from a car, a pacemaker, a refrigerator, a CCTV camera, a baby monitor, or a "smart" TV. This is critical. Any attempt to treat all these things the same is doomed to failure. This is reflected in the tens of different safety standards that the Underwriters Laboratories has for electrical appliances. Their list includes categories that had not even been invented when the Laboratories were founded at the turn of the last century.
Similarly our requirements vary with the environment in which the device is to be used. We have different requirements for devices intended to be used in the home, car, airplane, hospital, office, plant, or infrastructure. Projecting the requirements of any one of these on any other can only result in ineffectiveness and unnecessary cost. For example, one does not require the same precision, reliability, or resistance to outside interference in a GPS intended for use in an automobile as for one intended for use in an airliner or a cruise ship. One does not require the same security in a device intended for connection only to private networks as for those intended for direct connection to the public networks.
When I was at IBM, Courtney's First Law became the basis for the security standard for our products. Product managers were told that the security properties, features, and functions of their product should meet the requirements for the intended application and environment. The more things one wanted one's product to be used for and the more, or more hostile, the environments that one wanted it to be used in, the more robust the security had to be. For example, the requirements for a large multi-user system were higher than those for a single user system. The manager could assert any claims or disclaimers that she liked; what she could not do was remain silent. Just requiring the manager to describe these things made a huge difference. This was reinforced by requiring her to address this standard in all product plans, reviews, announcements, and marketing materials. While this standard might not have accomplished magic, it certainly advanced the objective.
Achieving the necessary security for the Internet of things will require a lot of thought, action and, in some cases, invention. Applying Courtney's First Law is a place to start. A way to start might be to expect every vendor to speak to the intended application and environment of his product. For example, is the device intended "only for home use on a home network; not intended for direct connection to the Internet"? While the baby monitor or doorbell must be able to access the Internet, attackers on the Internet should not be able to access the baby monitor.
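One hypothetical way a vendor might "speak to" intended application and environment in machine-readable form is sketched below. The field names are purely illustrative; they correspond to no existing standard.

```python
from dataclasses import dataclass

@dataclass
class IntendedUse:
    """Hypothetical vendor declaration per Courtney's First Law:
    security claims only make sense for a stated application
    and environment."""
    application: str       # e.g., "baby monitor"
    environment: str       # e.g., "home network"
    direct_internet: bool  # intended for direct connection to the Internet?
    claims: tuple          # security properties the vendor asserts
    disclaimers: tuple     # uses the vendor explicitly does not support

monitor = IntendedUse(
    application="baby monitor",
    environment="home network only",
    direct_internet=False,
    claims=("outbound connections only", "signed firmware updates"),
    disclaimers=("not for direct connection to the public Internet",),
)
```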
Thursday, December 29, 2016
"Women and Children First"
As we approach autonomous cars, some have posed the question "In accident avoidance should the car prefer its passenger or the pedestrian?" It is posed as a difficult ethical dilemma. I even heard an engineer suggest that not only does he not want to make the decision but that he would like Congress to make it as a matter of law.
This is really just another instance of an ethical dilemma that humanity has faced forever. It has many illustrations but one that has been used in teaching ethics is called the "life boat" problem. If there is not enough room in the lifeboat for everyone, who gets in? If there is not enough food and water, who gets preference?
The simple answer is "women and children first." Human drivers will steer away from a woman with a baby carriage even if they do not have time to evaluate the alternative. It is built into the culture, all but automatic, but the reason is that it is pro life. Children are the future of the species. Women can nurture and reproduce. Men can sow but they cannot reap. While the male role takes minutes, the female role takes months. Life needs more females than males.
The reason that we do not apply this pro life rationale to the autonomous automobile is that we assume that the consideration is beyond its capability. However, most of what one expects of an autonomous car today was beyond its capability a decade ago. For the moment, most may not be able to consider all the factors we might like. For example, they may not recognize age and gender, much less consider them. Ten years from now, they certainly will.
In this context it is useful to consider how such systems make a decision. They identify a number of scenarios, depending upon the time available, assign an outcome and a confidence level to each, and choose statistically. The kind of ties implied by the strawman dilemma will be vanishingly rare, even more so as the computers become faster and the number of things they can consider increases.
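A schematic of that decision procedure, with entirely invented numbers: enumerate the candidate maneuvers, weight each projected outcome by the system's confidence in the projection, and pick the least damaging.

```python
# Schematic only: the maneuvers, harm scores, and confidences are
# invented for illustration. Lower harm is better.
scenarios = [
    {"maneuver": "brake hard",   "harm": 2.0, "confidence": 0.95},
    {"maneuver": "swerve left",  "harm": 5.0, "confidence": 0.60},
    {"maneuver": "swerve right", "harm": 8.0, "confidence": 0.80},
]

WORST_CASE = 10.0  # assumed harm when the projection is wrong

def expected_harm(s: dict) -> float:
    # Discount the projected harm by how sure the system is of it;
    # the uncertain remainder is treated pessimistically.
    return s["harm"] * s["confidence"] + WORST_CASE * (1 - s["confidence"])

best = min(scenarios, key=expected_harm)  # -> "brake hard"
```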
Compare the autonomous car to the human driver. In the two tenths of a second that it takes a young adult to recognize and react, the autonomous car will evaluate dozens of possibilities with as many considerations. Like the human driver, the autonomous car may confront instances when there are simply no good options, but the whole reason for using them is that they are less likely than the human driver to overlook the least damaging option.
Wednesday, June 29, 2016
The Role of Risk Assessment in Digital Security
The very idea of Risk Assessment has always been controversial. I have been engaged in the controversy for fifty years. My ideas on the subject are well considered if otherwise no better than anyone else's. I record them here.
I attribute the application of this idea to what was then called Computer Security to my mentors, later colleagues, Robert H. Courtney, Jr. and Robert V. Jacobson. They did it in an attempt to bring rational decision making, more specifically the rational allocation of scarce security resources, to the then nascent field. They did it in response to their observation that many, not to say most, security decisions were being made based upon the intuition of the decision maker, and their belief, a tenet of this blog, that security is a space in which intuition does not serve us well. They wanted to bring a little reason to the process.
They could not possibly have known that in a mere fifty years the resources applied to this effort would grow to tens to hundreds of billions of dollars, or that the safety and liberty of the individual, the health of public and private enterprise, the efficiency and resilience of our economy, and the security of nations would turn on how effectively and efficiently we used those resources.
So, at its core, risk assessment is a decision making tool. It is a tool that we use to answer the question "where to spend the next dollar of our limited resources?" Courtney's Second Law says one should "Never spend more mitigating a risk than tolerating it will cost you." We will, and do, make this decision, with or without tools. We make it intuitively or we make it rationally, but we do make it.
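Courtney's Second Law reduces to simple arithmetic once a risk is quantified. A worked toy example, with all figures invented:

```python
# Invented figures for illustration. Annualized loss expectancy (ALE)
# is the yearly probability of the event times its cost.
probability_per_year = 0.10      # one expected incident per decade
cost_per_incident = 50_000       # dollars
ale = probability_per_year * cost_per_incident   # $5,000/year to tolerate

mitigation_cost_per_year = 8_000
# Courtney's Second Law: never spend more mitigating a risk than
# tolerating it will cost you. Here the mitigation fails the test.
should_mitigate = mitigation_cost_per_year < ale   # False
```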
At its most elaborate, risk assessment is a very expensive tool requiring significant knowledge, skill, ability, and experience to use, more than most of us enjoy. It should be used only for expensive decisions, decisions that are expensive to reverse if we get them wrong. At its simplest, it protects us from making decisions based solely upon the threat, attack, vulnerability, or consequence du jour. It protects us from intuition, from fear.
All that said, few of us are confronting expensive or difficult decisions, decisions requiring sophisticated decision making tools, risk assessment or otherwise. We have yet to implement all those measures that we know to be so effective and efficient as to require no further justification. They are what Peter Tippett calls essential practices. Anyone can implement them with available resources; each is only about eighty percent effective, but they work synergistically to achieve an arbitrary level of security. They fall in that category that we call "no brainers." All we need is the will.
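The synergy claim is easy to check arithmetically. If each practice stops about eighty percent of attacks and the practices fail independently (the independence is, of course, the idealization here), layering them drives residual exposure down geometrically:

```python
# Idealized: assumes the controls fail independently of one another.
effectiveness = 0.8
for layers in range(1, 5):
    residual = (1 - effectiveness) ** layers
    print(f"{layers} control(s): {1 - residual:.2%} effective")
# 1 control(s): 80.00% effective
# 2 control(s): 96.00% effective
# 3 control(s): 99.20% effective
# 4 control(s): 99.84% effective
```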
Monday, April 25, 2016
Compromise of Credit Card Numbers
Recently FireEye published an intelligence report stating that a previously unknown cybercrime group has hacked into numerous organizations in the retail and hospitality sectors to steal an estimated 20 million payment cards, collectively worth an estimated $400 million on the "cybercrime" black market.
To a near approximation, all credit card numbers more than a few months old are public. The market price has dropped to pennies. We are all equally targets of opportunity. That any one of us has not been a victim of fraud is mere chance; the thieves have so many numbers that they simply cannot get to us all.
The brands are at fault for marketing a broken system, one that relies upon the secrecy of credit card numbers but which passes them around and stores them in the clear. Their business model is at risk. They have the technology (EMV, tokenization, and checkout proxies), but the first is too slow for many applications and they are not promoting the other two to merchants or consumers.
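A bare sketch of what tokenization does; a real service is a hardened vault behind authenticated APIs, and this toy illustrates only the idea. The merchant stores and passes around a token of no value to a thief, while the real card number never leaves the vault.

```python
import secrets

class TokenVault:
    """Toy token vault: maps real card numbers (PANs) to random
    tokens so downstream systems never see or store the PAN."""
    def __init__(self):
        self._by_token = {}

    def tokenize(self, pan: str) -> str:
        token = "tok_" + secrets.token_hex(8)  # random, not derived from PAN
        self._by_token[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only the vault, under strict access control, can reverse a token.
        return self._by_token[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
# Merchants store and transmit only `token`; a stolen token is
# worthless without access to the vault.
```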
Issuers take much of the fraud risk. They are attempting, with some short run success, to push this to the merchants. However, with merchants and consumers, they share in the risk of our broken system.
As the referenced report suggests, bricks and mortar merchants, particularly "big box" retailers and hospitality, are finding that both issuers and consumers are blaming them for the disclosure of the numbers. Issuers are charging back fraudulent transactions and suing merchants for the expense of issuing new cards after a breach. Their systems are being penetrated and numbers exfiltrated wholesale. Point of sale devices are being compromised, or even replaced, to capture debit card numbers and PINs. These are used to produce counterfeit cards. Some of these are used to purchase gift cards or get cash at ATMs. Merchant brands have been badly damaged by bad publicity surrounding breaches. While most of these merchants can resist compromise, there are more than enough to guarantee that some will fall. Merchants can reduce fraudulent transactions by preferring mobile and EMV cards, and by checking cards, signatures, and IDs, but all but the first slow the transaction and inconvenience the customer.
Online merchants are the target of all kinds of "card not present" scams and take the full cost of the fraud. While it will not stop the fraud, the online merchants can both protect themselves and speed up the transaction by not accepting credit cards and using only proxies like PayPal, Visa Checkout, Apple Pay, and Amazon.
While, at least by default, consumers are protected from financial loss from credit card fraud, the system relies heavily upon them to be embarrassed by it. At least one court has agreed to hear evidence as to whether or not consumers as a class are otherwise damaged when their card numbers are leaked to the black market.
All this is by way of saying that as long as anyone accepts credit card numbers in the clear, we will be vulnerable to their fraudulent use. There are now alternatives and we need to promote them, not simply tolerate them. Think numberless, card-less, and contact-less.