- Recently the media has reported that, as the result of a gross failure of security at the U.S. Office of Personnel Management, the service and security records of twenty-seven million Americans have been compromised, likely by a foreign power. The compromise of these records has broken faith with these brave Americans and put them at risk of everything from credit fraud to coercion, blackmail, and extortion. More recently, reports have noted that these records include the fingerprints of the subjects of the compromised records and have speculated wildly about the risks that result from that.
- For example, while the ability to spoof Touch ID might be useful in gaining access to the content and capabilities of my mobile, it is far from sufficient. First, one must have the phone. While there have been demonstrations of retrieving latent prints using gelatin and using them to fool biometric systems, that is an easier problem than trying to go from a paper record.
Monday, November 23, 2015
On Resisting Payment Fraud
A recent report suggested that credit card numbers captured by malware installed on point of sale devices at hospitality sites, including twenty at Starwood Property Group hotels, are being used in fraudulent transactions. The Verizon Data Breach Incident Report (DBIR) confirms that point of sale devices at hospitality sites frequently leak credit card numbers.
But there is no shortage of compromised credit card numbers; their street price is approaching a dime a dozen. It is too late to address fraud by keeping credit card numbers secret. We need a new strategy, similar to those being promoted by American Express and described by Ken Chenault at President Obama's Conference at Stanford University.
Chenault told the conference that by confirming every card transaction to the customer's mobile, they are able to detect fraudulent transactions within sixty seconds. This is just one example of how we can use the mobile to resist fraud.
American Express also confirms transactions by e-mail. In order not to overwhelm the mailbox, the customer can set thresholds. One switch is the "card not present" switch. If, as expected, mobile transactions and EMV cards drive fraud to card-not-present (CNP) transactions, then the ability to detect fraud early, for example before goods are shipped, will be key to resisting fraud.
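The threshold idea is simple enough to sketch. Here is a minimal, hypothetical notification filter in Python; the preference names (`amount_threshold`, `alert_card_not_present`) are my own invention for illustration, not any issuer's actual API:

```python
# Hypothetical per-customer notification filter. Preference names are
# illustrative only, not any card network's real interface.
def should_notify(txn, prefs):
    """txn: {"amount": dollars, "card_present": bool}; prefs: customer-set thresholds."""
    # "Card not present" transactions are flagged regardless of amount.
    if not txn["card_present"] and prefs.get("alert_card_not_present", True):
        return True
    # Otherwise, notify only at or above the customer's dollar threshold.
    return txn["amount"] >= prefs.get("amount_threshold", 0.0)
```

The point of the sketch is that the customer, not the issuer, sets the sensitivity, so the mailbox is not overwhelmed while CNP fraud is still caught early.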
We need a strategy that relies not on secrecy but on feedback. The default should be that the subject of a record is notified of any change or query to that record, and that the owner of every account is notified of every transaction. The digital networks not only make this possible but make it cheap enough to be efficient.
Needless to say, the lobby of the credit reporting industry, which is empowered by law to charge the consumer for telling him about the content of and activity to his record, will resist this strategy. Legislation will be required to change this, but it is essential to resisting application fraud.
On the other hand, American Express and its competitors are embracing it. Even bankers are embracing it. My little three branch community bank uses SMS to notify me intra-day of all large (as defined by me) transactions to my account.
Eventually competition and efficiency will force most enterprises to adopt these tactics. You can make it strategic rather than merely tactical.
Monday, November 16, 2015
Lessons From the JPMorgan Chase Breach
A recent report suggested that the JPMorgan Chase breach teaches us the importance of encryption.
We know that information was recovered in the clear. What we do not know is whether encryption would have been effective in protecting it or even whether it was used but was not effective.
What we do know is that the credentials of authorized users were compromised and used to access the data. Authorization to the data includes the ability to see it in clear text even though it might be stored in encrypted form.
Encryption is not magic. It is a tool. Government propaganda to the contrary notwithstanding, it is no more perfect security for the bank than it is for the criminal.
Encryption is used to restrict access to data at rest, for example on file servers, from those who do not have credentials to access the server. It is used to protect data in transit, for example user credentials, as they cross networks. Properly used, it is very powerful. It is not effective against those with the credentials, the authorization, whether legitimate or otherwise, to access the data.
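To make the point concrete, here is a toy Python model of data encrypted at rest. The "cipher" is a deliberately insecure illustration (a hash-derived XOR keystream) and the credential store is hypothetical; the only point is that anyone holding valid credentials, legitimately or otherwise, gets clear text, encryption notwithstanding:

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: hash the key with a counter. Illustration ONLY -- not a real cipher.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def _xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

# Hypothetical credential-to-key store: holding the credential IS holding the key.
CREDENTIALS = {"dba_alice": b"K" * 32}

RECORD = b"SSN=123-45-6789"
stored = _xor(RECORD, CREDENTIALS["dba_alice"])   # what sits on disk: ciphertext

def read_record(user: str) -> bytes:
    key = CREDENTIALS.get(user)
    if key is None:
        raise PermissionError("no credentials")
    # Any holder of valid credentials -- the DBA, or a thief who has
    # stolen the DBA's credentials -- sees clear text.
    return _xor(stored, key)
```

An outsider without credentials sees only ciphertext; an attacker who has captured the DBA's credentials calls `read_record` exactly as the DBA would.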
Are there many banks that are storing information in the clear that should be encrypted? Yes. Was JPMorgan Chase one of these? We do not know; that information has not been disclosed. Should all banks take to heart the lesson that they should be using encryption to protect data at rest? Yes. Does that lesson flow from the "breach" of JPMorgan Chase? No, but it does flow from the "attacks."
What we do know is that of the thousands of applications and servers at JPMorgan Chase, fewer than a hundred were compromised, and none of those were using strong authentication. So the first lesson that I want banks to take from the JPMorgan Chase breach is to use strong authentication, particularly for privileged users of applications, databases, and servers. Without this, encryption is not likely to be effective.
Strong authentication is policy at JPMorgan Chase and appears to have been effective where used.
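One widely deployed form of strong authentication, the time-based one-time password (TOTP, RFC 6238), fits in a few lines of standard-library Python. This is a sketch of the published algorithm, not a claim about any bank's implementation:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, for_time=None, step=30, digits=6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 flavor)."""
    key = base64.b32decode(secret_b32)
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // step))          # moving factor
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code changes every thirty seconds and is computed from a secret held on a device in the user's possession, a captured password alone no longer suffices; this is exactly the property the compromised servers lacked.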
Labels:
banking,
Cryptography,
encryption,
protection,
security,
strong authentication
Security of the Internet of Things (Part III)
As we said in Part I, "While the conversion of 'things' to malicious purposes makes for dramatic Hollywood scenarios, most devices will not be vulnerable to either takeover or malicious use, much less both. However, all those that are vulnerable to takeover can be exploited for their computer function or capacity. This function and capacity can be compromised and turned against the host network for a variety of attacks ranging from simple spoofing through denial of service to brute-force attacks against passwords and cryptographic keys. Moreover, the sheer number of things will dwarf the number of general-purpose computers. It is this that we will argue is the most serious risk."
This risk results in part from the generality and flexibility of the "chips" used to implement the "things," the appliances. Much of the design and implementation of the appliance will involve stripping away and hiding this gratuitous capability.
It will result in part from the method chosen for installation, setup, initialization, administration, or to deal with implementation induced flaws or vulnerabilities. We have already seen a number of cases where the appliance itself, e.g., medication dispenser, worked as intended but the administration capability, dosage setting, was vulnerable to takeover. The appliance function was purpose-built but the administration was done via capabilities, e.g., Telnet, ftp, optionally included in the underlying operating system. This kind of gratuitous functionality, often included without proper consideration of its security or its impact on the security of its environment, the Internet, will dramatically weaken the Internet.
This functionality will be used to mount denial-of-service attacks, spam, and brute-force attacks against passwords and cryptographic keys. This is not speculation on my part; this vulnerability has been demonstrated and the attacks reported. This functionality will be included in part because developers and vendors are reluctant to give up control, in part because they realize that problems will arise in the future and consumers may look to them for remedies, and in part because it is cheap to do. If the problem is in the software, we may simply fix the software, just as we have been doing in information technology for two generations.
This is very different from the way that we have dealt with problems in traditional purpose-built, hardware-only appliances. By default we have dealt with safety flaws in traditional appliances and other products with product "recalls," sometimes by repair but even more often by replacement. Often we have done this even where computer chips have been used; we have simply replaced the chip. We have not attempted to patch the software, either locally or remotely. However, as "chips" have become cheaper and more powerful, we have succumbed to the temptation to treat them like personal computers.
One must act locally but should think globally. If one wishes to use the Internet, one should do so responsibly. That includes not attaching weak, vulnerable, or even gratuitous capability to the Internet. Problems will arise and we must deal with them but we should do so in the most conservative possible manner. Consider the following strategies for fixing problems:
• Replace hardware and software.
• Replace all software and data (like iOS apps) from a secure server, recognized (VPN, public key) by the device.
• Replace software only, retain data.
• Patch software using a secure server.
• Patch using remote control of function on the device.
• Make patch available to owner to apply at his discretion.
These strategies are equal in their ability to fix the problem. They vary in their economics. However, they vary considerably in their security. Even if the problem is limited to the software, in a world of cheap chips, replacing hardware and software as a package may be the most efficient way to repair it. Moreover, as a strategy it can reduce the attack surface of the device to the minimum.
Tuesday, October 13, 2015
A Leapfrog Enterprise Security Strategy
Recently I was quoted in an article on newly reported, but somewhat old, breaches. There I suggested that these breaches show that security has fallen behind and that, just to catch up, we need a "leapfrog" strategy. This post suggests what such a strategy might contain.
Mine would start with strong authentication close to the users, i.e., at the end point. Strong authentication will start with privileged users and move to all employees. We have known about the limitations of passwords and what to do about them for thirty years. It is way past time to get on with it. Going forward, the end point of choice will be the mobile computer, colloquially referred to as a "smartphone." This device already contains powerful sensors that can be used for authentication of claims to identity. Apple Touch ID and Samsung Face Unlock are simply early examples of what can be done. These are quick and easy to use and, in combination with possession of the device, constitute strong authentication.
My strategy would include reducing the number of privileged users, the reduction of their privileges, and accountability for the use and exercise of privileges. It would include involving two or more people in the exercise of sensitive but rarely used privileges. We have too many privileged users and too little visibility into how those privileges are used.
It would include the automatic notification of the subjects of records and the owners and managers of accounts of all use, changes to, or transactions against those records or accounts. If we are to detect breaches on a timely basis, we must increase and improve transparency and accountability.
It will include isolating e-mail and browsing from mission critical and other sensitive systems and data. The intelligence is clear that many, not to say most, compromises begin by duping the users of these two applications.
It will include end-to-end encryption, from end point to application, not perimeter encryption, not operating system encryption. We cannot continue to operate large enterprise networks as flat spaces, spaces in which any system may address any other system in the network.
It will include restrictive, i.e., "white list only," granular access control close to the applications and data. It will probably include access control at every layer, e.g., between the application and the database, and between the database and the file system.
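"White list only" access control is, at bottom, a default-deny lookup. A minimal sketch in Python; the principals, resources, and actions here are hypothetical examples, not any product's schema:

```python
# Default-deny ("white list only") access check, sitting close to the data.
# Every permitted (principal, resource, action) triple is listed explicitly.
ALLOW = {
    ("app_server", "orders_db", "read"),
    ("app_server", "orders_db", "write"),
    ("report_job", "orders_db", "read"),
}

def permit(principal: str, resource: str, action: str) -> bool:
    # Anything not explicitly listed is denied -- there is no "default allow."
    return (principal, resource, action) in ALLOW
```

The design choice worth noting is the direction of the default: an attacker who compromises some other system on the flat network gains nothing unless his triple was enumerated in advance.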
These measures are neither expensive nor disruptive. Google has demonstrated that even strong authentication can be flexible and convenient. They can be implemented in parallel. There are vendors recommending them and with products and services to implement them.
This is an "off-the-top of my head" list; I am sure I have omitted something important. However, it is informed by fifty years of thinking about this problem. I am sure that many of my colleagues have measures not on my list but which they would include in a leapfrog strategy.
Friday, September 4, 2015
Security of the Internet of Things (IoT) Part II
This is the second in a planned series of three posts on the subject of the "Internet of things" (IoT). This one will treat the security of the "thing" itself. It is about how to ensure that the appliance can be used safely, how to protect it from outside interference, and how to use it to protect its data.
The problem of securing appliances is much more tractable than that of securing more open and general-purpose systems like operating systems and browsers. It should not be difficult to develop purpose-built appliances that do only what they are intended to do. This idea is supported by the experience that we have with mobile computer (specifically iOS) "apps": most are orderly and well-behaved. While there are millions that are connected to the Internet, only a handful have been turned against it. While the attack surface of an iPhone might be quite large, that of an individual app is very small. The value of a successful attack against an app is usually limited to that app.
While there have been reports of misuse of apps, they have been far more limited than our experience with PCs might have led us to expect. While the number of mobile computers will soon exceed the number of more general purpose computers, the security problems have been much more limited. We have seen nothing approaching the kind of security problems that we have seen with Windows or Linux based personal computers.
The implication is that we can build safe things. The fear is that we will not do so. The fear is based in part upon the recent history with other kinds of computer systems. It is amplified by reports of so-called "security researchers" who have found and reported on things that were not secure. While we may have done okay with iOS, our experience with browsers has been frightening. Even computers that are otherwise secure can be compromised by duping or baiting a user into simply pushing a button.
Let's look at the iOS experience to see what we might learn about how to build secure things. First, a thing should run in a closed environment, either on dedicated hardware, like a router, or in an environment isolated from other things, like that provided by iOS. Second, it should have a limited and easily understood application or use.
The app makes the device into an application only machine. Unlike a personal computer, it does not present the function to make persistent changes to its own programming. While it is a programmed device, it is not a programmable device. The underlying computing functionality is hidden from the user. The user should not be able to see or control the file system, write or execute an arbitrary program, or even make persistent changes to data by any means other than by operating the thing.
The program for the thing should be written in an appropriate language and using an appropriate software development kit (SDK). One does not have to know the features of the iOS SDK to know that it makes it easier to do things right than otherwise. If that were not so, we would not have well over a million apps with so few problems.
The thing should hide its computer from the network. As with the user interface, the thing should hide its file and operating systems from the network interface. Said another way, it should not answer on any standard "ports," but only those specific to the use or operation of the thing.
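At the application layer, "answering only on things specific to the thing" means the device speaks its own small protocol and nothing else. A sketch, using a hypothetical thermostat with made-up verbs:

```python
# A hypothetical thermostat "thing": it answers only its own application
# verbs. Shells, file transfer, and Telnet-style commands simply do not
# exist in its protocol.
HANDLERS = {
    "STATUS": lambda: "OK temp=68F",
    "SET":    lambda target: f"OK target={target}F",
}

def handle(request: str) -> str:
    verb, *args = request.strip().split()
    fn = HANDLERS.get(verb)
    if fn is None:
        return "ERR unknown command"    # unknown verbs reveal nothing
    try:
        return fn(*args)
    except TypeError:                   # wrong number of arguments
        return "ERR bad arguments"
```

Anything outside the two verbs fails closed; the underlying file system and operating system are simply unreachable through the network interface.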
The thing's software should not present a multi-user interface. As with the app, possession of the thing should be both necessary and sufficient for its use. While many iOS apps do require "login" it is not for use of the app itself but to authenticate the user to a multi-user network application.
While the personal computer expects and supports late changes to its own programming, the app does not. Late changes are made by replacing the app with a new version of itself rather than patching the existing one. This is facilitated in part by the fact that the app is small and self-contained. It is required because the limited function of the app does not contain the capability of making late and persistent changes to itself.
We should not conclude that building secure or "securable" things is easy, but it is fair to conclude that it is possible and definitely easier than for general-purpose computers. However, while most "things" will work pretty much as intended, given the number of things and the number of sources, there will be broken things sold. Some of these will be in sensitive applications like healthcare, transportation, and banking. These failures will not be seen as exceptions but as typical of the Internet of Things and may cause more fear, uncertainty, and doubt than diligence and care.
Labels:
Internet of Things,
iOS,
IoT,
security
Monday, August 3, 2015
Security of the Internet of Things (Part I)
This is the first of a contemplated three posts on the subject of the Internet of Things (IoT). In this post we identify the space and the potential security issues. In subsequent parts we will make recommendations to address the issues. Comment, feedback, explication, and even argument are invited.
As the cost of information technology falls and the wireless Internet becomes more and more ubiquitous, we are witnessing the emergence of smart appliances, "things," that are connected to the public networks. An early example was the smart printer; we have been living with these for almost a decade. Others include baby monitors, front door monitors, lighting controls, and home security devices.
Perhaps an even earlier example was the ATM. While early ATMs used proprietary operating systems, protocols, and network connections, at some point they began to use Windows, Internet protocol, and Ethernet connections because this was the cheapest way to build them. We now have wireless ATMs and wireless point of sale devices. Other examples include the Nest thermostat, the smart TV, the Chromecast, and smart watches.
One particularly interesting example is the Samsung Smart Refrigerator that "will allow you to browse the web, access apps and connect to other Samsung smart devices – opening up a world of interactive communication and entertainment." Note that none of these functions has anything to do with keeping food fresh. They certainly distinguish this refrigerator from others, but it appears that they are included mostly because this is a cheap way to do so.
Perhaps the most ubiquitous thing to date is the mobile computer, popularly called the "smartphone." It is interesting in part because of its ubiquity, in part because of the number of different things that it implements. Each of the hundreds of thousands of its applications is a different thing, everything from toys to banking machines. Because of its falling cost and increasing power, we use it for things that we could hardly contemplate as recently as a decade ago.
These are real world examples but the technology is now so cheap that we must expect to see an accelerating flood of smart connected "things." There will be smart things that are not connected, but connection will be so cheap and add so much value that most will be part of the Internet of things (IoT). While we are going to speak of "things" in the abstract, keep the examples in mind because they have things to teach us about the impact that they will have on our lives and our security.
There are at least three security issues associated with the IoT. The first is that malicious actors will take control of the thing and misuse or abuse its application to do harm to the owner or user of the device. The favorite example is that of an implanted medical device instructed to kill its user. This example is supported by so-called "research" concluding that many early connected medical devices have implementation-induced vulnerabilities that might be exploited for malicious purposes. More about medical devices in a moment.
The second security issue is that the underlying computer functionality or capacity of the device might be taken over to participate in attacks against other entities in the Internet. The things might be co-opted into "botnets" and be used in denial of service attacks or brute force attacks against passwords or encryption keys.
The final security issue is that instances of problems, no matter how sparse, will be used to sow fear, uncertainty, and doubt about things in general. Take the example of smart printers, that is, most of the printers sold in the last decade. Millions of them have been sold by a half dozen or so manufacturers, for prices ranging from as little as a hundred dollars to thousands. They are used to scan, print, and store our most sensitive data, along with the potential to leak it. Collectively they present a huge attack surface. However, the number of reported attacks is exceeded by the number of attack scenarios and "proofs of concept" dreamed up by "researchers" and reported by the media. At least so far, the benefits of "things" and their applications dwarf the problems associated with their use.
While the conversion of "things" to malicious purposes makes for dramatic Hollywood scenarios, most devices will not be vulnerable to either takeover or malicious use, much less both. However, all those that are vulnerable to takeover can be exploited for their computer function or capacity. This function and capacity can be compromised and turned against the host network for a variety of attacks, ranging from simple spoofing through denial of service to brute force attacks against passwords and cryptographic keys. It is this, we will argue, that is the most serious risk.
Saturday, May 9, 2015
On the Resilience of the Power Grid
The power generation and distribution system, for short "the grid," is an interesting system in a number of ways. One of these is that, within fairly narrow limits, the supply must equal the demand. There is a small amount of slack but little storage. Supply decisions are made by utilities in 500 KWH increments, while demand decisions are made by individuals 150 W at a time.
Obviously such a system benefits from scale. At least within limits, the more sources and uses in the system, the easier it is to achieve the necessary balance. In order to achieve the scale, all providers and users within a geographic region, embracing many large and small states, are connected in a market network. Any source of supply, a generator, can be directed to any user. Suppliers with excess capacity offer it for sale to all other suppliers in the grid. Each supplier may buy from any other, and will do so, as long as that supplier's offer is lower than his own marginal cost of generation.
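The buy-when-cheaper rule described above can be sketched as a toy merit-order dispatch. All supplier names, prices, and quantities here are hypothetical; this is an illustration of the market rule, not any utility's actual dispatch software.

```python
# Toy sketch of the market rule described above: a utility buys from its
# peers whenever a peer's offer is below its own marginal cost of
# generation, and self-generates the remainder. Numbers are illustrative.

def dispatch(demand_mw, own_marginal_cost, offers):
    """Fill demand from the cheapest offers that beat our own marginal
    cost; generate the rest ourselves. offers: list of (price, capacity_mw)."""
    bought = []
    remaining = demand_mw
    for price, capacity in sorted(offers):            # cheapest offers first
        if remaining <= 0 or price >= own_marginal_cost:
            break                                     # no longer worth buying
        take = min(capacity, remaining)
        bought.append((price, take))
        remaining -= take
    return bought, remaining                          # remainder is self-generated

purchases, self_gen = dispatch(
    demand_mw=100.0,
    own_marginal_cost=40.0,
    offers=[(55.0, 80.0), (30.0, 50.0), (35.0, 20.0)],
)
# Buys 50 MW at $30 and 20 MW at $35; the $55 offer exceeds our own
# marginal cost, so the remaining 30 MW is generated in-house.
```

Sorting the offers gives the "merit order"; the loop stops at the first offer that is no cheaper than generating the power oneself, which is exactly the rule stated above.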
This market is highly automated and very efficient. Not only does it provide a low average cost for all customers but it also provides reliability. A utility that suffers a component failure, for example, a generator, can draw supply from any of its peers. However, in the short run, the loss of supply may create an imbalance between supply and demand. Other components may be momentarily overloaded. Since a sustained overload might ultimately cause a component to fail in a destructive manner, components are designed to shed load or shut down to protect themselves from damage. Within limits the system can absorb multiple simultaneous component failures and re-balance while maintaining service to most users.
However, this design means that the grid is vulnerable to "cascading failures," in which the failure of one component may cause the protective shutdown of other components. While the resilience of the system is continually improving, there will always be an upper bound to the number of simultaneous component failures that the system can tolerate. When that threshold is crossed, apparently about once a generation, the system is designed to shut down in an orderly and non-destructive manner. These successful shutdowns enable the system to resume normal service in hours to tens of hours. Such successful shutdowns will continue to be described by politicians and the media as "failures." The designers and operators of the network will continue to think of them as successful "power grid security."
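The cascading-failure behavior can be illustrated with a deliberately simple model: each line carries a load up to some capacity; when a line trips, its load is spread over the survivors, which may overload and trip in turn. The numbers are invented for illustration, not grid data, and real redistribution follows network physics rather than an even split.

```python
# A toy model of the cascade described above: failed components' load is
# redistributed to the survivors; any survivor pushed past its capacity
# performs a protective shutdown, possibly triggering further failures.
# All capacities and loads are illustrative.

def cascade(loads, capacity, initial_failures):
    """Return the set of failed lines once the cascade settles."""
    failed = set(initial_failures)
    while True:
        survivors = [i for i in range(len(loads)) if i not in failed]
        if not survivors:
            return failed                      # total, orderly shutdown
        shed = sum(loads[i] for i in failed)
        per_line = shed / len(survivors)       # evenly redistributed load
        newly = {i for i in survivors if loads[i] + per_line > capacity}
        if not newly:
            return failed                      # system has re-balanced
        failed |= newly                        # protective shutdowns cascade

# Five lines carrying 80 units each, capacity 100: one failure is absorbed
# (80/4 = 20 extra per survivor), but two simultaneous failures overload
# the rest and the whole system shuts down.
assert len(cascade([80] * 5, 100, {0})) == 1
assert len(cascade([80] * 5, 100, {0, 1})) == 5
```

The sharp jump from "absorbed" to "everything trips" is the threshold behavior discussed above: below some number of simultaneous failures the system re-balances; above it, the orderly grid-wide shutdown is the designed outcome.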
Notice that once the system has shut down, it must be restarted systematically, with supply and demand added back in such a way as to sustain the necessary balance between them. Said another way, we cannot simply turn everything back on at once. This is complicated by the fact that many consuming components draw significantly more power at start-up than they do while up and running. It is easy to see that restarting all the air-conditioners and refrigerators in a neighborhood at the same time takes dramatically more power than sustaining them as they cycle on and off in normal operation. While most components will restart automatically, some may require manual operation. The more extensive the outage, the longer the restart will take.
Within limits we can increase the reliability of the grid by adding redundant capacity and automatic controls. But redundancy increases cost and drives down revenue per component, so there is an economic limit to the amount of redundancy we will add. Moreover, as we add redundancy we must add more automatic controls, and redundant components and controls increase the complexity of the system. At some point that increased complexity begins to cause more failures than it prevents. A mean time to failure of infinity implies infinite cost; long before we reach that point, somewhere around a mean time between grid-wide shutdowns of about twenty years, we will stop.
Notice that even these massive shutdowns are less disruptive than such natural disasters as ice storms, in which many homes may be without power or heat for days to weeks.
Monday, May 4, 2015
Chip and PIN Compared to Chip and Signature
As we begin the long process of changing credit cards from the obsolete magnetic stripe technology to smart (EMV) "chip" cards, there has been a lot of criticism of the decision of the credit card issuers not to implement "Chip and PIN." Much of this discussion has asserted that "Chip and PIN" is more secure than the chosen chip card and signature strategy. Apparently this position is so obvious that it has stifled analysis.
I assert that Chip and PIN is only marginally more secure than Chip and Signature. It protects against the fraudulent use of lost or stolen cards. However, fraudulent use of lost or stolen cards is only a small portion of the fraud. The largest part uses counterfeit cards; chips resist counterfeiting.
For both the individual and the issuer, the best protection against fraudulent use of lost or stolen cards is to report the card lost or stolen. The individual is now protected against any use of the card. The issuer will revoke the card and is now protected against any online use of the card.
Note that the effectiveness of revocation depends in part upon the market. In the U.S., where most transactions take place online, it is very effective. In markets where the infrastructure is less robust and many transactions take place offline, revocation is less effective. Thus in the U.S. issuers are opting for Chip and Signature while in other markets Chip and PIN is chosen.
Note that only the issuers know what the losses from fraudulent use of lost or stolen cards are, that is, how much fraud might be reduced by requiring a PIN on all transactions. It is fair to assume that they know what they are doing.
Some have asserted that, in the absence of the PIN, security will rely upon clerks to reconcile the signature on the transaction document to the reference signature on the card. For most routine transactions we do not rely upon the clerk to verify the signature or even to touch the card. While in some places we still sign a chit, at checkout stands we sign on a little tablet (I hate them). No one ever checks the signature unless the transaction is disputed. Said another way, at least in the U.S., we rely mostly on possession of a current card to authenticate most transactions; both signatures and PINs are backup, and there is little to choose between them.
Labels: Chip and PIN, credit cards, DSS, magnetic stripe cards, PCI, PIN, POS
Sunday, February 22, 2015
On Trust
Steve Bellovin wrote:
"I'm not looking for concrete answers right now. (Some of the work in secure multiparty computation suggests that we need not trust anything, if we're willing to accept a very significant performance penalty.) Rather, I want to know how to think about the problem. Other than the now-conceptual term TCB, which has been redefined as 'that stuff we have to trust, even if we don't know what it is,' we don't even have the right words. Is there still such a thing? If so, how do we define it when we no longer recognize the perimeter of even a single computer? If not, what should replace it? We can't make our systems Andromedan-proof if we don't know what we need to protect against them."
When I was seventeen I worked as a punched card operator in the now defunct Jackson Brewing Company. I was absolutely fascinated by the fact that the job-to-job controls always balanced. I even commented on it at the family dinner table. My father responded, "Son, those machines are amazing. They are very accurate and reliable. But check on them." Little could either of us know that I would spend my adult life checking on machines.
At the time when the work was being done on the TCB, Ken Thompson gave his seminal Turing Award lecture, in which he asserted that unless one wrote the software oneself, in a trusted environment, one could not trust it.
Peter Capek and I wrote a response to Thompson in which we pointed out that in fact we do trust. That trust comes from transparency, accountability, affinity, independence, contention, competition, and other sources.
I recall having to make a call on Boeing in the seventies to explain to them that the floating point divide on the 360/168 was "unchecked." They said, "You do not understand; we are using that computer to ensure that planes fly." I reminded them that the 727 tape drive was unchecked, that when you told it to write a record, it did the very best it could but it did not know whether or not it had succeeded. The "compensating control" in the application was to back-space the tape, read the record just written and compare it to what was intended. If one was concerned about a floating point divide, the remedy was to check it oneself using a floating point multiply.
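The compensating control in both anecdotes is the same pattern: perform the unchecked operation, then verify it yourself with its inverse. A minimal sketch, with invented function names:

```python
# Sketch of the "compensating control" pattern described above: when an
# operation is unchecked, check it yourself. Writing a tape record and then
# back-spacing to read it back, or checking a divide with a multiply, are
# instances of the same pattern. Names here are illustrative.
import math

def checked_divide(a, b, rel_tol=1e-9):
    """Divide, then verify the quotient by multiplying back."""
    q = a / b
    if not math.isclose(q * b, a, rel_tol=rel_tol):
        raise ArithmeticError("divide failed verification")
    return q

def checked_write(tape, record):
    """Write, then 'back-space and read' to confirm the record landed."""
    tape.append(record)
    if tape[-1] != record:                 # read back what was just written
        raise IOError("write failed verification")

tape = []
checked_write(tape, b"payroll-0001")       # write verified by read-back
q = checked_divide(355.0, 113.0)           # quotient verified by multiply
```

The point is not the arithmetic; it is that trust in an unchecked mechanism is manufactured by the application, by comparing output against input.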
In the early fifties one checked on the bank by looking at one's monthly statement. Before my promotion to punch-card operator, I was the messenger. Part of my duties included taking the Brewery's pass book to the bank every day to compare it to the records of the bank. As recently as two years ago, I had to log on to the bank every day to ensure that there had been no unauthorized transactions to my account. Today, my bank confirms my balance to me daily by SMS and sends another SMS for each large transaction. American Express sends a "notification" to my iPhone for every charge to my account.
In 1976, for IBM, I published Data Security Controls and Procedures. It included the following paragraph:
Compare Output with Input
Most techniques for detecting errors are methods of comparing output with the input that generated it. The most common example is proofreading or inspecting the context to indicate whether a character is correct. Another example, which worked well when input was entered into punched cards, is key verification, in which source data is key entered twice. Entries were mechanically compared keystroke by keystroke, and variances were flagged for later reconciliation.
Said another way, while we prefer preventative controls like checked operations and the TCB, ultimately trust comes late, from checking the results.
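Key verification, as described in the quoted paragraph, is easy to sketch; the function name and data are invented for illustration:

```python
# A sketch of key verification as described above: the same source data is
# keyed twice and the two entries are compared keystroke by keystroke;
# variances are flagged for later reconciliation.

def key_verify(first_entry, second_entry):
    """Return the positions at which the two keyings disagree."""
    variances = [
        i for i, (a, b) in enumerate(zip(first_entry, second_entry)) if a != b
    ]
    if len(first_entry) != len(second_entry):
        # A missing or extra keystroke is flagged at the point of divergence.
        variances.append(min(len(first_entry), len(second_entry)))
    return variances

assert key_verify("JACKSON BREWING", "JACKSON BREWING") == []
assert key_verify("JACKSON BREWING", "JACKS0N BREWING") == [5]
```

A variance does not say which keying was right, only that the two disagree; as with proofreading, resolution is left to a later reconciliation step.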
I often think of the world in which today's toddlers will spend their adult lives, the world that we will leave to them. My sense is that they will have grown up in a world in which their toys talked and listened and generally told the truth, but that every now and then one must check with dad.
Friday, February 20, 2015
Fraud Alerts
Recently Bank Info Security raised the question of whether fraud alerts can be used to garner customer loyalty. I suggest that this is the wrong question.
In a world in which merchant, bank, and insurance systems are routinely breached by nation states and rogue hackers and in which hundreds of millions of credit card numbers, PINs, social security numbers, e-mail addresses, and dates of birth are freely traded for pennies in both white and black markets, it is hardly a question of "fraud alerts and customer loyalty."
I prefer to do business via proxies like PayPal, Amazon, and Apple Pay, that hide my credit card and bank credentials from the merchant. However, I use my American Express card exclusively because all transactions to my AmEx account are communicated to me in real-time via the American Express app on my iPhone. Both AmEx and I understand that this is essential to our mutual security. It is not a mere convenience or customer loyalty gimmick.
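The "activity notice" idea described above, including the customer-set thresholds and the "card not present" switch mentioned earlier, can be sketched as a simple filter. The rules and record fields here are hypothetical, not any issuer's actual API:

```python
# Sketch of an activity-notice filter: every transaction is pushed to the
# customer unless it fails a customer-set switch, such as a minimum amount
# or a "card not present" (CNP) restriction. Fields are illustrative.

def should_notify(txn, min_amount=0.0, cnp_only=False):
    """Decide whether a transaction triggers a push notification."""
    if txn["amount"] < min_amount:
        return False                       # below the customer's threshold
    if cnp_only and not txn["card_not_present"]:
        return False                       # customer only wants CNP alerts
    return True

txns = [
    {"amount": 4.50,   "card_not_present": False},   # coffee, in person
    {"amount": 249.99, "card_not_present": True},    # online order
]
alerts = [t for t in txns if should_notify(t, min_amount=25.0, cnp_only=True)]
# Only the card-not-present online order clears both switches.
```

With no switches set, every transaction is notified; the switches exist only so the mailbox is not overwhelmed, not to hide activity from the customer.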
Kenneth Chenault, CEO of AmEx, speaking at the President's "Cyber Security" conference, urged that the regulation forbidding the use of SMS for this purpose be relaxed. This regulation, intended to discourage nuisance messages, is in fact obstructing a necessary use.
Anthem, the victim of the world's largest breach, has offered to pay for fraud protection services for some of its customers, on an opt-in basis. eBay, the victim of the second largest breach, has not even done that. I think we need a law that requires all banks and credit bureaus to provide automatic notice of all activity on their subjects' accounts, on an opt-out basis. While I am willing to pay for such a service, it really ought to be a cost to those who trade in data about me.
Rogue hackers, data brokers, and the intelligence agencies have all but destroyed the trust on which our commerce is based. Reliance upon periodic statements and late detection of fraud is no longer adequate. "Fraud alerts" are not a marketing feature. In order to restore some order to our markets, "activity notices" need to become standard.
Saturday, February 7, 2015
Crypto Wars Redux
This morning, while researching another question, I found the following from Aaron Schuman to alt.security, quoting a post to the Risks Forum from me. While written a quarter of a century ago, it might have been written this morning.
From: schuman@sgi.com (Aaron Schuman)
Newsgroups: alt.security
Subject: Congress to order crypto trapdoor?
Message-ID: <1991Apr11.231215.19779@dragon.wpd.sgi.com>
Date: 11 Apr 91 23:12:15 GMT
The United States Senate is considering a bill that would require
manufacturers of cryptographic equipment to introduce a trap door,
and to make that trap door accessible to law enforcement officials.
If you feel, as I do, that the risk of abuse far outweighs the
potential benefits, please write to Senators Joseph Biden and Dennis
DeConcini, and to the Senators that represent your state, asking that
they propose a friendly amendment to their bill removing this
requirement.
I don't have exact addresses for Senators Biden and DeConcini, and
I hope someone will post them here, but the Washington DC post office
can deliver letters addressed to
Senator Joseph Biden
United States Senate
Washington, DC 20510
and
Senator Dennis DeConcini
United States Senate
Washington, DC 20510
------------------------------
RISKS-LIST: RISKS-FORUM Digest Wednesday 10 April 1991 Volume 11 : Issue 43
Date: Wed, 10 Apr 91 17:23 EDT
From: WHMurray@DOCKMASTER.NCSC.MIL
Subject: U.S. Senate 266, Section 2201 (cryptographics)
Senate 266 introduced by Mr. Biden (for himself and Mr. DeConcini)
contains the following section:
SEC. 2201. COOPERATION OF TELECOMMUNICATIONS PROVIDERS WITH LAW ENFORCEMENT
It is the sense of Congress that providers of electronic communications
services and manufacturers of electronic communications service equipment shall
ensure that communications systems permit the government to obtain the plain
text contents of voice, data, and other communications when appropriately
authorized by law.
------------------------------
The referenced language requires that manufacturers build trap-doors
into all cryptographic equipment and that providers of confidential
channels reserve to themselves, their agents, and assigns the ability to
read all traffic.
Are there readers of this list that believe that it is possible for
manufacturers of crypto gear to include such a mechanism and also to reserve
its use to those "appropriately authorized by law" to employ it?
Are there readers of this list who believe that providers of electronic
communications services can reserve to themselves the ability to read all the
traffic and still keep the traffic "confidential" in any meaningful sense?
Is there anybody out there who would buy crypto gear or confidential services
from vendors who were subject to such a law?
David Kahn asserts that the sovereign always attempts to reserve the use of
cryptography to himself. Nonetheless, if this language were to be enacted into
law, it would represent a major departure. An earlier Senate went to great
pains to assure itself that there were no trapdoors in the DES. Mr. Biden and
Mr. DeConcini want to mandate them. The historical justification of such
reservation has been "national security;" just when that justification begins
to wane, Mr. Biden wants to use "law enforcement." Both justifications rest
upon appeals to fear.
In the United States the people, not the Congress, are sovereign; it should not
be illegal for the people to have access to communications that the government
cannot read. We should be free from unreasonable search and seizure; we should
be free from self-incrimination. The government already has powerful tools of
investigation at its disposal; it has demonstrated precious little restraint in
their use.
Any assertion that all use of any such trap-doors would be only
"when appropriately authorized by law" is absurd on its face. It is not
humanly possible to construct a mechanism that could meet that
requirement; any such mechanism would be subject to abuse.
I suggest that you begin to stock up on crypto gear while you can still get it.
Watch the progress of this law carefully. Begin to identify vendors across the
pond.
William Hugh Murray, Executive Consultant, Information System Security 21
Locust Avenue, Suite 2D, New Canaan, Connecticut 06840 203 966 4769
We fought this battle once and thought that we won the war.
My Little Mark
One of my conscious life goals is to "leave my little mark on culture." I do this mostly through my work. I often tell my audiences that they are my slate. This blog is part of my mark and the Internet a place to leave it. I do, record, and distribute much of my work using e-mail.
This morning I was listening to Walter Isaacson, journalist, historian, and cultural commentator, on C-SPAN2's BookTV. He was bemoaning the disappearance of letter writing and the loss to the historian of this important source. He noted that most of us now use e-mail for what we used to do with letters but that e-mail is ephemeral, of limited use to the historian.
I wanted to test this assertion, so I did a Google search on whmurray@dockmaster.mil, the first public e-mail address that I ever used. Now this was before the World Wide Web and long before Google, but sure enough, Google found many messages, not all in the same place. After listing 66 messages, Google said, "In order to show you the most relevant results, we have omitted some entries very similar to the 66 already displayed." Few of the items returned point to the origin or destination systems; most are quotes or citations. So, while there are more messages than those returned, it is unlikely that all messages from the era, or even most of those with historical interest, survive.
Dockmaster may be a special case, one of historical interest. It was a domain hosted on a Multics system by the National Computer Security Center, a part of the National Security Agency. It was used by most of the computer security thought leaders of the era and hosted many productive discussions on the topic. Indeed, it was an example of many of the best ideas on the subject.
Isaacson may be right and we may have lost much of the e-mail. The e-mail I found may be exceptional. Perhaps the content of my messages was exceptional; perhaps it was even curated. I found one message on anus.com, The American Nihilist Underground Society. (More on this message later.) However, as storage continues to become cheaper and denser, the potential for e-mail to survive increases. Thanks to Google, Bing, et al., we will be able to sift the tiny number of messages with historical interest from the remainder.
Most of us are not aware of the significance of what we are saying or doing at the time we say or do it. It is only with the passage of time that the significance becomes apparent. The Internet in general, and e-mail in particular, amplify our writings. They have the potential to filter out and preserve that which is important to history. However, the recording and reporting of history are, of their essence, imperfect. History will note and report the impact of paper mail yielding to e-mail.
Isaacson did not comment on blogs, another important source for historians, replacing diaries and journals. Blogs too may prove to be ephemeral but more will be written, some will survive, and historians will be able to find those that do.
I am satisfied that electronic media contribute to "My Little Mark."
Friday, January 9, 2015
Darkness in the City of Light
Last night the Tour Eiffel went dark.
Once more the forces of darkness have donned their black clothing and armor and struck out, not quite blindly. This time their target was freedom, freedom of speech, a freedom associating France with America.
Je suis Charlie!