IT projects are historically, not to say always, late. There are a number of reasons for this. We prioritize schedule over quality; it is part of our culture. We think that schedule is easy to measure. We think that we are on schedule until late in the effort, when quality jumps up and bites us in the ass. Another reason that we are late is that we fix things in the order of their discovery rather than in the order of their importance.
This is a way of behaving that we replicate in cybersecurity. Not only do we fix things in the order of their discovery, but we fix them in the order that someone else discovers them. Microsoft announces forty vulnerabilities, ten critical, on "patch Tuesday." We drop whatever else we might be doing to apply the patches.
Microsoft was shamed into publishing one or more of the patches. Google Project Zero discovered the vulnerability and generously gave Microsoft ninety days to fix it under the threat of a public shaming if they failed.
Ninety days is arbitrary. It is not based on the ease of exploiting the vulnerability, how widespread it is, how costly it is to fix, what the fix might break, or what other vulnerabilities Microsoft may have on its plate. It is one size fits all. Sometimes Microsoft even chooses to accept the shaming, in part because of what it knows that Google does not and cannot know. We often patch without even considering whether or not the vulnerability represents a risk to us.
Again, it is part of our culture. Of course, as a result of this automatic, lemming-like behavior, we may all be at greater risk than we need to be.
Whatever our vendors or our peers may be doing, we need to fix things in order of their risk to our enterprise. We need to resist letting others allocate our scarce resources into "unplanned activity." We need to put aside fear generated by the breach of our neighbor because of an unapplied patch.
Know your risk tolerance. Identify your risks. Mitigate, accept, and assign them in the order of that risk. Document risk acceptance. Plan your work and work your plan. Prefer mitigation measures that are broad over those that are merely most effective. Keep in mind that hiding vulnerabilities, for example behind firewalls, is often more efficient than patching them. At least the mistakes you make will be your own.
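The ordering argued for above can be sketched in a few lines. This is a minimal illustration, not a methodology: the `Finding` fields and the scoring formula (expected loss per unit of remediation effort) are my assumptions, chosen only to make "fix in order of risk, not in order of discovery" concrete.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    likelihood: float  # chance of exploitation in *our* environment, 0..1
    impact: float      # cost to the enterprise if exploited
    fix_cost: float    # effort required to mitigate

def remediation_order(findings):
    # Work the queue by risk to the enterprise, not by discovery date:
    # highest expected loss (likelihood * impact) per unit of effort first.
    return sorted(findings,
                  key=lambda f: f.likelihood * f.impact / f.fix_cost,
                  reverse=True)
```

Under this scoring, a widely publicized vendor patch for a vulnerability that is hidden behind a firewall can legitimately rank below an unglamorous exposure that nobody else is talking about.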
Thursday, October 31, 2019
Wednesday, October 23, 2019
FBI Recommends Use of Biometrics
In its Private Industry Notification of 17 September 2019, PIN Number 20190917-001, the FBI encourages the use of biometrics to resist what they see as the limitations of strong authentication. In fact, what they have observed are effective social-engineering attacks, made necessary by the effectiveness of one-time passwords. Other strong authentication, which might include biometrics, is the solution that I would recommend.
Consider my financial services firm. They offer me strong authentication based upon a software token installed on my mobile computer. I downloaded the token from the App Store and gave its identity, 4 letters and 8 digits, to my financial services firm, and they associated that token with my account. When I log on with my UID and password, I am prompted for a one-time password, six digits, generated by that token, with a life of sixty seconds, and expected by a server used by my financial services firm.
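Software tokens of this kind typically implement TOTP (RFC 6238): both the token and the firm's server derive the six digits from a shared secret and the current time, so the codes agree without any message passing. A minimal sketch, using only the Python standard library; the sixty-second step matches my firm's token, though thirty seconds is the more common default:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=60):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time step, truncated to digits."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(at if at is not None else time.time()) // step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code changes every step and is expected by the server only within a narrow window, a captured one-time password is nearly worthless to replay, which is precisely why attackers resort to social engineering instead.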
Now, suppose I were to lose the mobile. I would have to get a new mobile and download a new token. I would have to associate the replacement token with my account. In the capability to do that lies a potential vulnerability. If an attacker were successful in convincing my financial services firm to associate his token with my account, then he might be able to defeat the strong authentication. Therefore, my financial services firm must be able to resist this "social engineering" attack. This is where biometrics can play a useful role.
When I call my financial services firm to replace my lost token, or for any other purpose, they may recognize me from my "calling number ID." They authenticate me by my voice, a biometric, something that only I can do, one that works over the phone. Yep, they really do; they tell me that that is what they are doing. While I am a stranger to the agent, the computer recognizes my voice as the one to expect for my phone number. The agent also asks me for another piece of shared information, a challenge and response, a second factor. Only then will they honor my request to replace the lost token ID with the new one. I think that this is an instance of the use of biometrics that would meet the expectation of the FBI.
Of course, the process does not end there. My firm e-mails me, out-of-band confirmation, that they have changed the token associated with my account. This gives me the opportunity to recognize a fraudulent change to my token ID.
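The replacement procedure described above amounts to a small protocol: require two independent proofs of identity before re-associating the token, then confirm the change out-of-band. A sketch, under my own assumptions; the account structure, the `notify` callback, and the boolean check results are all hypothetical, standing in for the firm's voice-matching and challenge-response systems:

```python
def replace_token(account, new_token_id, voice_verified, challenge_ok, notify):
    """Honor a token-replacement request only after two independent checks,
    then confirm out-of-band so the owner can detect a fraudulent change."""
    # Both factors must pass: a voice-biometric match AND a correct
    # challenge-response. Either alone can be socially engineered.
    if not (voice_verified and challenge_ok):
        raise PermissionError("caller identity not established")
    old = account["token_id"]
    account["token_id"] = new_token_id
    # Out-of-band confirmation: the legitimate owner sees this e-mail
    # even if an impostor made the request.
    notify(account["email"], f"Token changed from {old} to {new_token_id}")
    return account
```

The out-of-band notice does not prevent the fraud; it bounds the time during which the fraud can go undetected.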
Now, the link above points not only to my blog entry on the limitations of one-time passwords but also to one on the limitations of biometrics. One needs to understand those limitations in order to use biometrics effectively. I like the voice implementation used by my financial services firm because it is dynamic and resists replay attacks; replay attacks are one of the limitations of biometrics. Along with facial recognition, voice is one of two biometrics that both people and machines can reconcile reliably.
(I am sure that you have heard of static facial recognition being duped by a photograph, a limitation, but fooling a four-year-old child in dynamic facial recognition, for example over Skype or FaceTime, as to the identity of her grandmother might be more difficult.)
While there are alternatives to the use of biometrics, the FBI and I agree that they can be both convenient and secure in some applications and environments. The FBI recommends them to resist what they see as limitations of multi-factor authentication. I recommend them as effective and efficient measures for resisting one form of "social engineering."
Sunday, October 20, 2019
EBA Relaxes Requirements for Strong Authentication
"The European Banking Authority (EBA) has issued a new Opinion that provides the European payments industry with an EU-wide additional 15 months to comply with strong customer authentication (SCA) requirements for online ecommerce transactions."
Since there are banks that are already in compliance, the solution for consumers is to do business only with those banks.
While there is no international law on this, there is good banking practice that is universal. All banks have an obligation to "know their customers," and to ensure that "transactions are properly authorized." Passwords that are vulnerable to fraudulent reuse do not meet these standards of good practice.
In an era when most customers have e-mail, mobile computers, or both, strong authentication is not sufficiently difficult to implement to justify an extension. This is an example of "regulatory capture." The authority is derelict. It is serving banks rather than customers. Shame.