Whether or not this is a security problem, it has certainly been a public relations disaster. It is so bad that one government agency that is a SecurID customer has announced that it will switch to another product. Whether or not RSA has done the right thing in this case, it is clear that no one is happy with the way that they have handled it.
This is a case study in how difficult it is to handle a breach. The otherwise disinterested curious want full disclosure. On the other hand, the victim would like as little disclosure as possible. Customers want to know but do not want anyone else to know.
I am reminded of Franklin National Bank. A rogue trader lost about $50M of the bank's money, painful but still only a fraction of the bank's capital. The bank managed to keep the loss "secret" for about ninety days. At that point, the Wall Street Journal reported it. In the next ninety days, the bank lost $2B in deposits and failed. It could have survived the loss but was killed by the publicity.
As this case illustrates, the first concern that a victim has is to ensure that the publicity is not worse than the breach. What could be worse for a security company than to have to admit to ineffective security, or to a breach that reduces the effectiveness of products that they have already sold?
However, in fairness to RSA, they have other concerns. As a security company, they have an obligation to their customers to tell them about anything that diminishes the security that they think they have purchased. They also have a responsibility not to make the situation worse by unnecessary disclosure.
They have a responsibility to cooperate with law enforcement. They want to protect the investigative process and the utility of the product of the investigation.
Now add to this that they really are not sure of the extent of the damage. The longer they delay disclosure, the more they know, and the more certain they are of what they know. However, for more sophisticated and patient attacks, one may never be confident about the extent of their success or what information has been compromised.
Note that, as a target, they owe a certain duty to peer target enterprises to share information that might be useful in protecting themselves. As a security company, they owe a certain duty to the security community at large to share information necessary to judge the effectiveness, or damage thereto, of the products and services that they offer.
As a vendor, they owe a duty to their customers. However, this duty may differ between those who purchase the SecurID tokens and servers and those to whom they also provide identity management and authentication services.
As security professionals, we can sympathize with this over-constrained problem. Few among us would like to be confronted with such a dilemma. None of this is to say that RSA has done the right thing or that this is not a PR disaster of epic proportions but only that we may never know enough to fairly judge what they have done. Microsoft has never divulged the details of the compromise of their development system.
If you are merely among the curious, a peer company, a security professional, or a prospective customer of RSA, you may never know what really happened.
You should know that:
Six pieces of special knowledge are necessary to successfully authenticate to the RSA system:
* the (address of the) system that will accept the credential
* the user ID
* the PIN or passphrase
* the seed value
* the algorithm
* the association or bind among the first four
RSA does not know all of these things. Therefore, while a compromise of its systems might reduce the cost of attack, it cannot make it free or even trivial.
The algorithm has been reverse engineered and software that implements it is available for download.
The token is both a forgery-resistant artifact and a mechanism for resisting replay. Knowledge of the seed lowers the cost of forgery but does not lower the cost of replay.
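The roles of the six pieces of knowledge can be sketched in code. RSA's actual SecurID algorithm is proprietary (AES-based in current tokens), so the sketch below uses an HMAC-based, TOTP-style construction purely for illustration; the seed database, PIN database, and user names are hypothetical. It shows why the attacker needs the seed, the PIN, the user ID, and the bind among them together, and why a time-varying code resists replay.

```python
import hmac
import hashlib
import struct

def token_code(seed: bytes, t: int, interval: int = 60, digits: int = 6) -> str:
    """Derive a time-varying code from a secret seed (TOTP-style sketch,
    not RSA's proprietary algorithm)."""
    counter = struct.pack(">Q", t // interval)
    digest = hmac.new(seed, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# Hypothetical server-side state: the bind between user ID and seed,
# and between user ID and PIN, lives on the authentication server.
SEED_DB = {"alice": b"0123456789abcdef"}
PIN_DB = {"alice": "7431"}

def authenticate(user: str, pin: str, code: str, now: int) -> bool:
    """All pieces must line up: user ID, PIN, seed, algorithm, and bind."""
    seed = SEED_DB.get(user)
    if seed is None or PIN_DB.get(user) != pin:
        return False
    # Accept the current interval and one on either side for clock drift;
    # a captured code stops working once the interval passes.
    return any(code == token_code(seed, now + d) for d in (-60, 0, 60))
```

Note that a stolen seed alone is not enough: without the user ID, the PIN, and the knowledge of which seed is bound to which account on which server, the attacker is still reduced to guessing.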
Since the compromise, RSA has encouraged all of its customers to monitor their authentication servers for evidence of attack against PINs and to encourage their users to employ strong PINs. This is good practice in any case but might be more important if there was reason to believe that any seeds have been compromised.
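The kind of monitoring described above can be as simple as counting failed authentications per user ID. A minimal sketch follows; the log format is an assumption of mine, not any particular server's, so the parsing would have to be adapted to your own logs.

```python
from collections import Counter

def flag_pin_guessing(log_lines, threshold=5):
    """Return user IDs with at least `threshold` failed attempts.
    Assumed (hypothetical) line format: "<timestamp> <user> <AUTH_OK|AUTH_FAIL>".
    """
    failures = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) == 3 and parts[2] == "AUTH_FAIL":
            failures[parts[1]] += 1
    return sorted(user for user, n in failures.items() if n >= threshold)
```

A burst of failures against one account suggests PIN guessing against a known or suspected seed; failures spread thinly across many accounts suggest something else, so the threshold and the grouping both matter.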
Under NDA, RSA has told some customers more. If you are a customer and are willing to agree not to share what they tell you, RSA may tell you more about the compromise. Note that, since you cannot discuss it with others, you cannot verify everything, perhaps anything, that they tell you.
Finally, if one is using strong authentication and one is compromised, the most likely cause is that someone took bait and compromised the network.
The bad news is that RSA may never know exactly what happened; the rest of us will definitely never know.
The good news is that we know enough.
Most of us need only get over the fact that we will never know.
Most users of the tokens need not do anything.
Those of you whose principals are peer targets of RSA must talk to RSA and request a remedy. On the low side the remedy may be nothing. On the high side it may be replacement and re-enrollment of any compromised tokens. Under normal circumstances, one might have weeks to months to get this done. However, since we do not know when the breach took place, days to weeks is a safer time-frame.
A colleague of mine, one who knows this space and this company better than most, wonders how there could be any doubt: token seeds should never have been connected to the enterprise network at all. Can you say hardened system?
I am uncomfortable with the expression "Advanced Persistent Threat" but the clear implication of it is that, at least for some identifiable set of enterprises, the threat environment has changed by an order of magnitude.
The heavy, not to say exclusive, reliance on perimeter security that we have used for a generation is no longer adequate. Real defense in depth must be the new order of the day. Defense in depth implies identification of the "crown jewels." It implies that the compromise of one, two, or even three or four defenses should not compromise them. It implies that no single insider can compromise them on purpose, much less by accident or error.
Since, based upon the Verizon data breach report, the time to detection of a compromise is measured in weeks to months, this data must be protected based on the assumption that there are compromised systems on the enterprise network. Some data must be behind an air-gap.
Systems and users that access external objects, for example, e-mail messages or web pages, may have to use application-only or locked-down systems to reduce compromise by taking bait. VPNs must terminate on the application, not on the perimeter, not on an operating system. These may be just some of the hard choices we will have to make.
Our choice is to adapt our security strategy to deal with the higher threat level or our public relations strategy to deal with the kind of breach that RSA is dealing with now. It is a difficult dilemma but that is why we are called professionals and are paid the big bucks.