Monday, November 24, 2014

Formal Risk Acceptance


The Security Executive's Ultimate Tool

I recently met with seventy-five chief information security officers.  I was reminded that they are staff, not line, executives.  Their authority is limited. They do not own the assets to be protected nor do they have the authority and discretion to allocate resources to that protection.  While they can propose standards and guidelines, they usually do not have the authority to mandate or enforce them. They can neither reward nor punish.

The real work of protecting assets rests with the managers who are responsible for those assets, for allocating them, for prescribing how and by whom they may be used.  As much as we might wish that it were otherwise, the responsibility for protecting assets cannot be separated from the discretion to use them, from "ownership."

Yet when things inevitably go wrong, when systems are breached, data leaks, or applications are fraudulently used, it is likely that the staff executive will be held accountable, perhaps even lose his job.  There are a number of tools available to the staff executive including persuasion, awareness training, standards, guidelines, measurement, and reporting.  Another, and the subject of this blog, is formal risk acceptance.  It is the staff security executive's measure of last resort.

There are three things that management can do with risk.  They can mitigate it, accept it, or assign it to others through insurance.  Unfortunately, risk acceptance is often "seat of the pants" and without accountability.

Formal risk acceptance is a process in which the risk is documented by staff, usually security staff, and accepted by line management.  The expression of the risk may refer to policy, standards, guidelines, or other expressions of good practice.

Documentation of risk will usually involve some negotiation so that the accepting manager understands the real risk, the description or expression of it, and the alternatives to accepting it. Therefore, this negotiation may involve some reallocation between mitigation and acceptance.  As these negotiations proceed, the manager's understanding of the risk and his options will improve and may result in choices that were not apparent when the negotiation began. The document should also describe and price all alternatives to acceptance that were considered. Note that sometimes a risk is accepted in part because it is believed that it is cheaper to mitigate it late than early.

The manager who accepts the risk must have the authority, discretion, and resources to mitigate the risk if he chooses to do so.  This test is necessary to ensure that the risk is accepted by the right manager or executive.  Said another way, risk should be accepted by a manager or executive who could implement one of the alternatives if he or she preferred.  It should not be accepted as a forced choice.

Risk acceptance decisions have to be revisited periodically.  Therefore, they are finite; they expire.  Often, the risk acceptance is part of a plan to tolerate the risk for a fixed period of time but mitigate it before a time certain in the future, for example, in ninety days.  In such cases, the planned date for the mitigation becomes the expiration date.  Where there is no plan, the acceptance should expire after a term set by policy, usually one year.  This ensures the decision will be reviewed periodically.  Managers should understand that risk acceptance is not the same thing as risk dismissal or ignoring.

Finally, risk acceptances should expire with the authority of the accepting manager or executive. When a manager's tenure ends, for whatever reason, all risks accepted by that manager must be revisited and re-accepted.  This will usually be by the manager's successor.  However, in the case of reorganization the risk acceptances may be distributed across multiple other managers.

Staff should keep track of all outstanding risk acceptances, ensure that they are revisited on time, measure whether in the aggregate they are increasing or decreasing, and report on them to higher management.
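
Lest this seem abstract, here is a minimal sketch, in Python, of the record and register that such tracking implies.  The field names and the one-year default term are illustrative assumptions, not a prescription.

    from dataclasses import dataclass, field
    from datetime import date, timedelta

    @dataclass
    class RiskAcceptance:
        risk_id: str
        description: str      # the risk, with reference to policy or standard
        accepted_by: str      # the line manager with the authority to mitigate
        accepted_on: date
        expires_on: date      # planned mitigation date, or the policy default
        alternatives: list = field(default_factory=list)  # priced alternatives considered

    def default_expiration(accepted_on: date, term_days: int = 365) -> date:
        """Where there is no mitigation plan, expire after a term set by policy."""
        return accepted_on + timedelta(days=term_days)

    def due_for_review(register: list, today: date) -> list:
        """The staff report: acceptances that have expired and must be revisited."""
        return [ra for ra in register if ra.expires_on <= today]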

While, as a matter of fact and by default, a manager does accept any risk which he fails to mitigate or assign, some may be reluctant to document the fact.  In such cases, the staff should escalate.  In any case, the risk must be documented and shared with higher management.

Special attention should be given to audit findings.  While some of these may result from oversight, some may result from decisions taken but not documented.  Note that auditors are rarely in a position to assess the risk associated with their findings.  Therefore, risk assessments should be documented for all their findings and used in the planning process as to what to do about them.  Risk acceptances must be documented for any findings that will not be mitigated before the next audit.  Auditors may want to attend to whether cumulative risk is going up or down.

Monday, September 15, 2014

Q & A About Apple Pay

"Nothing useful can be said about the security of a mechanism except in the context of a specific application and environment."


In that context, what can one say about the security of Apple Pay?

We can say with confidence that Apple Pay is more secure than the alternative widely used payment mechanisms such as cash, mag-stripe cards, or contactless (RFID) debit or credit cards.  Its security is comparable to that of EMV ("chip") cards.


What is necessary to use Apple Pay?

One must have one or more credit card or other bank accounts to charge.  (By default, Apple Pay will use the account registered with the Apple Store.)  One must have use of an iPhone 6 or iPhone 6 Plus and Touch ID.  Finally, the merchant must have point-of-sale devices that have contactless readers.  These readers work with both contactless (RFID) credit cards and mobile computers using Near Field Communication (NFC).


If one loses one's iPhone, can a finder use Apple Pay?

No.  Both possession of the iPhone and the right fingerprint are necessary to use Apple Pay.  Similarly, someone with merely a copy of your fingerprint cannot use it.  Of course, one would still want to remotely disable the iPhone.


If my password is disclosed, I can change it, but I cannot change my fingerprint.

True, but there is no need.  Passwords work only because they are secret.  Fingerprints work because they are difficult to counterfeit; there is no need for secrecy.  In fact, one leaves copies of one's fingerprints all around in the normal course of things.


One can do Apple Pay with Apple Watch.  Does it have a fingerprint reader?  

No. The Apple Watch uses a different but equally effective authentication scheme.  After one puts the Watch on, one enters a four-digit personal identification number (PIN).  This lasts until the sensors on the watch indicate that the watch has been taken off.  Both of these authentication schemes are examples of Two-factor Authentication: iPhone and Touch ID, Watch and PIN.  When used with the Secure Element and the one-time digital token to resist replay, Apple Pay has Strong Authentication.


What is "NFC?"  

NFC is a low power, low speed, extremely short range digital radio capability.  Its applications include retail payments.  Apple Pay uses NFC to communicate with the register or point-of-sale device.  While NFC is merely the communications channel, payment systems that use it are often identified as "NFC" systems.


Is NFC secure?

NFC makes no security claims.  All required security must be built into the application.  While it is low power and short range, NFC includes no other security properties, functions or features.  Apple Pay does not rely upon NFC for security.  The token value that Apple Pay uses NFC to send to the point of sale is a one-time value.  Unlike a credit card number, it is not vulnerable to disclosure or reuse.  
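
Apple has not published the details of its token scheme, so the following is only a minimal sketch, in Python, of the general idea of a one-time value: the issuer remembers what it has minted and refuses anything presented twice.  The names are invented for illustration.

    import secrets

    class TokenIssuer:
        """Illustrative single-use token service; not Apple's actual design."""
        def __init__(self):
            self._outstanding = set()

        def issue(self) -> str:
            token = secrets.token_hex(16)   # unpredictable one-time value
            self._outstanding.add(token)
            return token

        def redeem(self, token: str) -> bool:
            """A replayed or forged token is rejected."""
            if token in self._outstanding:
                self._outstanding.discard(token)  # spend it exactly once
                return True
            return False

    issuer = TokenIssuer()
    t = issuer.issue()
    assert issuer.redeem(t) is True    # first presentation clears
    assert issuer.redeem(t) is False   # replay is refused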


How do I know how much I am being charged?

As with credit card transactions, the amount that you will be charged is displayed on the register. As with credit card transactions, you may be asked to "accept" or confirm the amount at the register.  As with credit card transactions, the register will provide you with a paper receipt.


How do I know that the amount that appears on the register, that I confirm, and that is printed on the receipt is what is actually charged to my account?

By benign design and intent, systems will automatically ensure that the displayed amount and the charged amount are the same.  One can imagine a system designed to cheat, but these will be very rare, easily detected, and quickly shut down.  To some degree, this will depend on you.

As with credit cards and checks, some of you will want to reconcile the charges to your account (perhaps using the paper receipt) against what you authorized.  (Some other "wallet" programs immediately confirm the location and amount to the mobile device by SMS.  It remains to be seen whether Apple Pay will do this but it is likely.)  (Your bank or account holder may also offer transaction confirmation features.  For example, American Express offers its customers the option to have "card not present" transactions confirmed in real time by e-mail.  Incidentally, Apple Pay transactions look to American Express like "card not present" transactions.)


What if the charges to my account are not the same as I authorized?

Errors or fraud will be rare but you will continue to enjoy the same right to dispute charges that you have always had.


Lest you think that these questions are trivial, I heard each of them raised seriously by serious people on TV this week.

Monday, September 8, 2014

"Come Back with a Warrant."

Recently, in recognition of my routine contribution, the Electronic Frontier Foundation (EFF) sent me a little sheet of stickers highlighting their areas of interest and action.  Since advocacy of the Fourth Amendment to the US Constitution is one of my pursuits, I particularly liked the one that said "Come Back with a Warrant."  I inferred that, as good custodians of the private information of others, when asked for that information by government, our default response should be "Come back with a warrant."

As one who has had occasion to draft rules and regulations, if not law, I have always stood in awe of those who crafted our Constitution.  It is a model of brevity, clarity, and balance.  While tortured by events and progress, it has served us well.  Not only is the Fourth Amendment not an exception to this observation, it is an example of it.  Having recently thrown off the yoke of tyranny, the Authors were exquisitely sensitive to the potential for abuse of the power of the state.  In the Fourth Amendment the Authors sought to place a limit on the magisterial police powers of their awesome creature.

They stipulated that the people have a right to be secure in their "persons, houses, papers, and effects" from "searches and seizures."  In consideration of police necessity, the Authors qualified the searches and seizures that they were addressing as "unreasonable," leaving open the possibility of reasonable ones, specifically including those where the state had a "warrant" of a specific character.

In recent times, in response to threats real and imagined, the state (Congress, the courts, and the executive) has dramatically limited the right of the people to be secure in "persons, houses, papers, and effects."  Congress has passed laws, such as the USA PATRIOT Act, granting massive exceptions to the requirements for warrants in the name of "counter-terrorism."  Secret courts have permitted seizures so massive as to defy the wildest definitions of reasonable.  The Executive has engaged in secret programs of "warrantless surveillance" and officially lied to the American people about their existence.  They have systematically parsed every word in the Amendment, specifically including "unreasonable," "seizure," "papers," and even "their," so as to eviscerate the protection that the Amendment was intended to afford.

For example, it is hard to imagine a definition of seizure that does not include "taking from another under force of law."  However, for their own convenience, this administration's Departments of Defense and Justice secretly agreed among themselves to a definition under which such an act did not constitute seizure as long as one promised not to look at what one had "taken."  Having gotten a secret court to agree to this definition, the act was now not only "legal" but also, at least by this arguable definition, constitutional.  Such "weasel wording" might be laughable in another context.

So, where should we take our stand?  I propose that we stand with the EFF, that we adopt enterprise policy that, at least by default, we expect a warrant.  We should not wait until we are served with a National Security Letter, which may even say that we may not consult counsel, but we should proactively adopt and direct counsel to implement a policy that we expect a warrant and will resist deficient orders.

I am willing to grant the government access to almost anything for which they have a warrant.  Some even say I have given up.  However, even a capricious warrant offers us fundamental protections.  First, unlike some other orders, it is never unilateral.  Two people, usually with different motives, must cooperate before there can be a warrant.  An investigator must at least have the consent of a magistrate.

Second, a warrant requires probable cause, not merely "articulable suspicion."  It requires that an investigator not only present the court with "probable cause" but do so under oath, subject to penalties for perjury.  The investigator may not simply make an assertion.

Finally, while it may be broad, a warrant must be limited in its scope.  The Amendment requires that it specify the "place to be searched, and the persons or things to be seized."  As the custodians of the personal data of others, we should at least assert that the warrant should specify the data to be searched, the arguments to be used, and the functions that are responsive.  We should be prepared to challenge warrants that we believe to be overly broad, but even if we fail, the specifications will be a matter of record.

The Authors of the Amendment gave the state the, admittedly carefully limited, warrant as an exception to the right of the people to be secure from searches and seizures.  Even those who do not agree with me that warrants should be required have to concede that they are just not that hard to get.  Let's expect them to bring one.

Friday, August 22, 2014

Managing Insider Risk

"Outsiders damage the brand; insiders bring down the business."

"We use automated controls over insiders only to the extent that they are more efficient than management supervision; under no circumstances are they a substitute for supervision."

Management of insider risk is not for the indolent, ignorant, or incompetent.  It requires diligence, special knowledge, and skill.  Here are some ideas that you may find useful.

Focus controls on the early detection and correction of errors.  Not only will such controls resist malice, they will also reduce the temptation that results when employees make errors and recognize that those errors go undetected.

Focus controls on executives, officers and managers rather than clerks and tellers.  History suggests that we often focus on those likely to steal little and be caught early rather than those able to destroy the business but be caught late.

Ensure that supervisors have the necessary knowledge, skills, and abilities to perform and assess the duties of subordinates.  Historic losses from insider errors or malice have involved employees whose superiors did not understand what they did.

Structure duties and roles such that one person, simply performing his assigned duties, without doing anything heroic or exercising extraordinary judgement, acts as a control over others.  This arrangement detects errors and omissions, and discourages and detects malicious acts.

Separate origination from approval, record creation from maintenance, and custody of assets from the records about those assets.  These rules are as old as double-entry bookkeeping and originate with the same little monks.

Require the cooperation of two or more people to exercise extraordinary privileges or capabilities.  No one should have been able to do what Edward Snowden appears to have done.

Consider the rule of "least possible privilege" when granting access and authorizing capabilities.  Said another way, employees should have only those privileges and capabilities necessary to carry out their assignments. Guard against the accretion of privileges as employees move from role to role through their careers.

Use automatic alerts and alarms. Distribute them to those best able to recognize the need for, and with the authority to take, the necessary corrective action. Distribute them such that one person has to deal with only a few a day. Require that individuals make a record of the disposition of all alerts and alarms.

Instruct all employees to report all anomalies and variances from expectation to the attention of at least two people, including one manager and a member of the audit or security staff.  Be sure to treat all such reports and reporters with respect; dismissing them will discourage future reporting.

Measure and report on performance; changes in performance are suspicious.  However, "If the numbers are too good to be true, they are not true."   Massive frauds, including Barings Bank, Enron, and Equity Funding, all began with glowing revenue numbers.  Management fraud has resulted from attempts to keep beating earlier numbers.

Rotate employees in assignments and enforce mandatory vacations; continuity is often necessary to mask malicious activity.  Officers who come into the office when they are supposed to be on vacation should be viewed as suspicious rather than diligent.

Compensate employees in a manner that is consistent with the amount of economic discretion that they exercise.  Underpaying is corrupting.

Use invoices, statements, confirmations and other communications to and from customers, suppliers, investors, and taxing authorities to control insider risk.  While these controls operate late, and may be seen by the media as relying upon chance, they are legitimate, effective, and efficient; management is entitled to rely upon them.  Automatic, i.e., not under the control of the originator, transaction confirmations sent by e-mail or SMS are both timely and cheap.

Say "please" and "thank you." With few exceptions, unhappy insiders believe that their contribution is not recognized or appreciated by management.

Revoke all access, privileges, and capabilities immediately upon termination or separation.  Of course, this requires that one keep track of what they are.
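
A minimal sketch of the bookkeeping that immediate revocation requires: record every grant when it is made, so that "revoke everything" is a single operation.  The names are invented for illustration.

    from collections import defaultdict

    class AccessRegistry:
        """Track grants as they are made so that separation can revoke them all at once."""
        def __init__(self):
            self._grants = defaultdict(set)   # person -> privileges and capabilities

        def grant(self, person: str, privilege: str):
            self._grants[person].add(privilege)

        def revoke_all(self, person: str) -> set:
            """On termination or separation: returns what was revoked, for the record."""
            return self._grants.pop(person, set())

    registry = AccessRegistry()
    registry.grant("departing.employee", "vpn")
    registry.grant("departing.employee", "prod-db:read")
    print(registry.revoke_all("departing.employee"))   # {'vpn', 'prod-db:read'}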


Monday, August 4, 2014

Defensive Ethical Hacking

In 2006 Eric McCarty pleaded guilty to a SQL injection attack on a database at the University of Southern California.  The prosecutor and the court rejected McCarty's defense that he was a "security consultant" just doing what such consultants do.  His defense counsel claimed that he had acted responsibly by giving the records of only seven people to a reporter.  By pleading guilty, McCarty avoided jail and served only six months of house arrest.

Several years earlier, while working on a gap analysis at a major media conglomerate, I became aware of a penetration test by a competitor that ran amok.  It seems that after successfully penetrating file servers, the consultant arbitrarily extended the test to include an AS/400 on the client's network, triggering multiple alarms and involving the FBI.

These are only two examples of so-called "ethical" hacking that went awry.  Without addressing the issue of whether "ethical" is a matter of motive or behavior, I have always had a set of defensive rules that I have imposed upon myself, my clients, and my associates that are intended to, among other things, keep me out of courtrooms and jails.

The first of these rules is that I do not engage in covert or clandestine activities.  My client, including all his personnel, must know about and acknowledge all the activities in which I am to engage.

I do not engage in fraud, deception, or other forms of social engineering, not even for money.  I already know that these attacks will work; they have worked throughout human history.  I do not need to embarrass the client or his people to demonstrate that I am a proficient liar.

I do not work without a contract or letter of agreement.  Such a letter is part of my authority to do what I do.  It also demonstrates that both the client and I understand the extent and limitations of that authority.

I do not work for free.  There is little better proof that I was engaged by the client to do what I did than his check.  McCarty had no letter of agreement, much less a check.  Out of respect for my professional colleagues, I do pro bono work only for bona fide non-profits.  I price my work at my normal rates and require that the beneficiary acknowledge my contribution with a receipt.

I do not work alone.  I prefer to work with the client's people; failing that, I work with my associates.  Not only are my collaborators potential witnesses for the defense, they act as an ethical check on my behavior.  One is far less likely to cross an ethical line with another watching.

I do not share the client's data with others not expressly authorized by the client to see it; not even with the authorities.  If the state wants my client's information it must get it from him, not me.  Short of torture, it will not get it from me.  (I do not contract or commit to resist torture; even if I knew my own capacity to resist it, I would not know how to price it.)

Not all my clients or even my associates like all of these rules all the time.  A client may think that disclosing all of my activities to his employees in advance defeats his purpose.  There are those in my profession who deceive client personnel for the purpose of discovering vulnerabilities or demonstrating naivete.  If the client wants that done, he should engage those professionals.  Some of my associates may feel that such activities are effective or that always working with others is inefficient.

I will not knowingly or willingly engage in any behavior, such that if I were caught in the act of that behavior it might embarrass or alarm me, my associates, the client, or the client's people.

These rules may increase my cost of service or even reduce my potential revenue.  However, they are both defensive and conservative.  They act early to help me avoid ethical conflicts and assist me late in resolving such ethical dilemmas as may arise in the course of an engagement.

They have served me well.  They might have saved McCarty from conviction.  I commend them to you.

Sunday, August 3, 2014

Please do not say "Two Factor"

Thirty years ago I wrote a list for my staff to address what I thought was sloppy and problematic use of special language.  It was of the form "Please do not say _______ when you really mean _______."  I cannot even remember many of the entries but one was "Please do not say 'privacy' when you really mean 'confidentiality.'"  Another was "Do not say 'secure' when you mean 'protected.'"  While the distinctions may seem small, they are nonetheless useful.

In the spirit of that list, I would like to suggest that one should not say "two-factor," or "multi-factor" authentication when what one really intends is "strong authentication."  Strong Authentication is defined as "at least two kinds of evidence, at least one of which is resistant to replay."  Thus, all strong authentication is two-factor but not all two-factor authentication is strong.

For example, a password and a biometric is clearly two-factor but might not be strong.   It is more resistant to brute force attacks than a password alone but might be no stronger against a record and replay attack than the password alone. We are no longer seeing brute force attacks, but credential replay attacks are a major problem.  If all one wants to do is resist brute force, adding bits to the password is likely to be more efficient than adding a biometric.

If one accepts that record and replay attacks are the greater problem, then one wants a second factor that resists replay, something like a one time password (OTP), whether token-based or sent out-of-band to a phone or mobile computer.
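
For the curious, here is a minimal sketch of one such replay-resistant factor, a time-based one-time password (TOTP, RFC 6238), in Python, using the common defaults of a thirty-second step and six digits.  It is illustrative, not a hardened implementation.

    import hmac, hashlib, struct, time

    def totp(secret: bytes, t=None, step: int = 30, digits: int = 6) -> str:
        """RFC 6238: HMAC over the time-step counter; the value changes every 'step' seconds."""
        counter = int((t if t is not None else time.time()) // step)
        digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                      # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    # Because the value depends on the clock, a recorded code is useless
    # once the time step has passed; that is what resists replay.
    print(totp(b"shared-secret-provisioned-at-enrollment"))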

The use of "two factor" enjoys so much currency that it suggests that any second form of evidence is the same as any other.  The irony is that RSA, the vendor of one of the original and most popular OTP tokens, is one of the sources of that currency.  However, when they spoke of two factor, the first factor was the OTP.  The second factor was a PIN, used to resist fraudulent use of a lost or stolen token.

One popular "second factor" with banks is challenge-response based upon shared secrets.  The secret is established at enrollment time.  One popular method is to ask the user to select a number of questions from a list and record his own answers to those questions.  Questions may be similar to "what was the name of your first pet, school, or playmate?"  "In what hospital or city were you born?"  "What were the names of your grandparents?"  "The mascot of your high school?"  Answers should be easy for the subject to remember but not obvious except perhaps to an intimate.  At authentication time one question is chosen at random.  Actually this method can be resistant to replay provided that the set of questions is large enough relative to how often they are used. 

One bank started using this method only for large transactions, those above a threshold value.  However, they reasoned that if it was good for large transactions, it would be even better for all.  They lowered the threshold to zero.  Because the size of the set of questions was not large enough for this kind of use, all the answers for some accounts were soon compromised.
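
The arithmetic is instructive.  If one of n enrolled questions is chosen uniformly at random for each transaction, the expected number of transactions an eavesdropper must observe to collect every answer is n times the n-th harmonic number (the "coupon collector" result).  A quick sketch:

    def expected_transactions_to_expose_all(n: int) -> float:
        """Coupon collector: expected observations until all n answers have been seen."""
        return n * sum(1.0 / k for k in range(1, n + 1))

    for n in (3, 5, 10):
        print(n, round(expected_transactions_to_expose_all(n), 1))
    # 3 -> 5.5, 5 -> 11.4, 10 -> 29.3

Whatever the bank's actual numbers were, a handful of questions is exhausted after a few dozen observed transactions.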

The Verizon Data Breach Investigations Report (DBIR) demonstrates that use of strong authentication would have resisted many of the breaches reported upon.  Because it is so powerful, we should be encouraging its use by all available means.  These means should include distinguishing between it and mere multi-factor authentication.






Good Security Practice for Programmers

This is one of a series of posts on "Good Data Processing Security Practices."  The context for the series can be found here.  The following practices and controls are for enterprise development programmers, the individuals who produce the computer programs on which enterprise managers wish to rely.  Like other posts in this series, this post suggests useful separations of duties to ensure quality and fix accountability.


An individual programmer should not both specify and write a procedure.

Should not both write and test a procedure.

Should not both create and maintain a procedure.

Should not name procedures that he writes.  (Program names are analogous to account numbers, which are normally assigned as part of the approval by management or a designee separate from the originator.)

Should not both write and execute a procedure (exception: data local to himself, as in testing or personal computing).

Should not both program and maintain the program library (exception: they do all maintenance to that library).

Programmers should have personal copies of specifications, data definitions, source code, test data, test results, load modules, and object modules.  All transfers between the programmer's personal libraries and project or production libraries should be controlled by someone else.

The above represents the ideal.  Because of limitations of scale, it may not be realizable in all installations.  However, under no circumstances should one person specify, write, test, name, maintain, and execute the same program.
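
Where scale permits, some of these separations can be checked mechanically.  Here is a minimal sketch of the "no one person does it all" test; the change record and its field names are invented for illustration.

    def violates_separation(record: dict) -> bool:
        """True if a single person filled every assigned sensitive role
        (specify, write, test, name, maintain, execute) for one program."""
        roles = ("specified_by", "written_by", "tested_by",
                 "named_by", "maintained_by", "executed_by")
        people = {record.get(role) for role in roles} - {None}
        return len(people) == 1

    change = {"specified_by": "alice", "written_by": "alice", "tested_by": "alice",
              "named_by": "alice", "maintained_by": "alice", "executed_by": "alice"}
    assert violates_separation(change)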

On Nation States and the Limits of Anonymity - Tor

As a general rule, society has a preference for accountability.  For this reason, governments discourage anonymity.  Among the exceptions to this rule are citizen communications in resistance to government.  In this context, governments in general, and police states in particular, abhor anonymity.

Tor (formerly TOR ("The Onion Router")) is a tool for providing anonymity in the Internet.  It uses thousands of contributed routers, communicating using nested encryption, along a randomly selected path, such that when the communication finally appears in the clear, it cannot be traced back to its origin.  It raises the general problem of attribution in the Internet to a whole new level.  Its uses range from hiding browsing activity from routine state surveillance to hiding criminal or revolutionary communications.  
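
A minimal sketch, in Python, of the nested-encryption idea, using the "cryptography" package; it illustrates onion layering only, not Tor's actual circuit construction, cell format, or key agreement.

    # pip install cryptography
    from cryptography.fernet import Fernet

    # Three relays on a randomly selected path, each with its own key.
    relay_keys = [Fernet.generate_key() for _ in range(3)]

    def wrap(message: bytes, keys) -> bytes:
        """The sender encrypts for the exit relay first, then wraps outward."""
        for key in reversed(keys):
            message = Fernet(key).encrypt(message)
        return message

    def unwrap(onion: bytes, keys) -> bytes:
        """Each relay peels exactly one layer; no single relay sees both
        the origin and the plaintext."""
        for key in keys:
            onion = Fernet(key).decrypt(onion)
        return onion

    onion = wrap(b"hello from nowhere", relay_keys)
    assert unwrap(onion, relay_keys) == b"hello from nowhere"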

The following news item recently appeared:

 --Russian Government Seeking Technology to Break Tor Anonymity (July 25 & 28, 2014) 
The Russian government is offering a 3.9 million rubles (US $109,500) contract for a technology that can be used to identify Tor users. Tor was initially developed by the US Naval Research Laboratory and DARPA, but is now developed by The Tor Project, a non-profit organization. Tor is used by journalists and others who need to keep their identities hidden for their own safety; it is also used by criminals for the same purposes. The entrance fee for the competition is 195,000 rubles (US $5,500).


In my role as a member of the editorial board of SANS Newsbites, I made the observation that:

"In his most recent novel, Richard Clarke implied that NSA had targeted and broken TOR."

A reader responded in part:

"...more out of curiosity, didn’t the NSA have trouble cracking TOR, and at best, could only identify ingress and egress points?  As told by Team CYMRU.org, anyway."

Now you have a context for this post.  I responded to him as follows:


Thanks for your note.  It allows me to know that the comment did what I had hoped it would do, i.e., raise questions.

I was deliberately vague and cited a questionable authority.

My working hypothesis, the advice I give my clients, is that nation states, at least wealthy ones, can read any message that they want to, though rarely in near real time.  However, they cannot read every message that they want to.  Incidentally, that is why they store every cryptogram they see.  Decryption is expensive but storage is cheap.  The cost of decryption is falling but not nearly as fast as that of storage.

When applied to Tor and anonymity, my assumption is similar.  I assume that nation states can identify the origin of any message that they want to, again, probably not in near real time.  However, they cannot identify the source of every message that they want to.   Again, that is why they require acres of storage.   Like breaking ciphers, breaking Tor is expensive.  However, given their resources and determination, it would be foolish to bet one’s life that they cannot do it.   They know the protocol better than anyone and they own some of the routers.

If you think about it, your question implies a point in time.  However, my guidance assumes that what they cannot do today, they will be able to do tomorrow.  Cheap storage buys them time.  It took them fifty years to crack Venona but they never gave up.

As with crypto, the resistance of Tor to nation states depends in part upon how much it is used.  The more they have to deal with, the less efficient they are.  Therefore, one wants to encourage its use while discouraging anyone from betting their life on it.

The net is that Tor is adequate to provide individual privacy.  It is probably adequate for most political discourse, at least in democratic states.  It becomes problematic when fomenting revolution or disclosing state secrets in authoritarian, or even wealthy but vindictive, countries.  

Monday, May 5, 2014

Good Security Practices for Programming

This is one of a series of posts on "Good Data Processing Security Practices."  The context for the series can be found here.  The following practices and controls are for programming, the processes, including management, by which programs are produced.


Procedures should exist for enforcing adherence to rules, standards, and conventions (see Good Practice for Programs).  Such procedures should be sufficiently rigorous to make variances and anomalies obvious to management.

Procedures should exist for enforcing separation of duties and involvement
of multiple people (see Good Practice for Programmers).

Procedures should exist for requiring and recording the approval and authorization of user and development management.  These may be forms or other procedures external to the system, or transactions or procedures internal to it which can be invoked only by the designated managers.

Procedures should exist for maintaining the integrity of module and version names (see Good Practice for Program Libraries).

Procedures should exist for maintaining a record of the creation and modification of all programs.  The record should contain the content of the change and references to the programmer, the date and time, and the process used.  (A minimal sketch of such a record follows this list.)

Procedures should exist for reconciling the program to the specification.  These should include tests, independent reviews, and structured walk-throughs.

Procedures should exist for maintaining a record of the results of all tests, reviews, and walk-throughs.

Procedures should exist for requiring and recording the acceptance of user
management.

Procedures should exist for reconciling resources consumed (e.g., programmer time, computer time) with expectations.


These procedures can effectively be built into the forms, editors, compilers, library managers, test drivers, and other tools used by programmers, librarians, and management.
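
As one example of building these procedures into the tools, here is a minimal sketch of the change record described above; the field names are illustrative assumptions.

    import hashlib
    from datetime import datetime, timezone

    def change_record(program: str, new_source: str, programmer: str,
                      authorized_by: str, process: str) -> dict:
        """One entry in the creation/modification log:
        content, people, time, and process."""
        return {
            "program": program,
            "content_hash": hashlib.sha256(new_source.encode()).hexdigest(),
            "programmer": programmer,
            "authorized_by": authorized_by,   # user/development management approval
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "process": process,               # e.g., the editor/compiler chain used
        }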

The Obama White House Response to Internet Vulnerabilities

The White House has responded to recent reports that the NSA knew about and exploited the Heartbleed vulnerability rather than fixing it.  While denying the report, the White House pointed out that the choice is not quite so obvious as it might seem.  They assert that the narrow intelligence or investigative advantage from the exploitation of the vulnerability might trump the broader security advantage of fixing it.

"Governing is about making (often difficult) choices."
This is a particularly troubling space.  There has been a dramatic loss in public trust and confidence in government over the last two administrations.  "National Security" and "investigative necessity" have contributed to that loss.  Not only do they cover a multitude of sins, they are often used for that purpose.  Finally, neither the NSA nor the FBI tells the White House everything they know or do.  Indeed, in pursuit of "plausible deniability," both agencies often keep their own leadership in the dark.

We are not going to reform these agencies as long as the current leadership remains in place.  While Alexander has retired, Brennan, Clapper, and Holder would be justified in concluding that they are doing the right thing and ought to do it harder.

I do not advocate public disclosure of vulnerabilities.  "Heartbleed" and the recent Internet Explorer vulnerability demonstrate that the angst surrounding such disclosure is often more costly than the risk of the vulnerability.  However, quiet and immediate notification of the developers, distributors, and those others best able to close it should be the default.  Indeed, many of these vulnerabilities are so pervasive that even the best we can hope for in closing them will leave a big window in which the government can exploit them for its other purposes, legitimate and otherwise.

Saturday, March 22, 2014

Thoughts for the White House

The idea of this post is not so presumptuous as it might seem.  I had a note from John Podesta on White House e-stationery asking for my thoughts.  In the interest of transparency, I thought I would share them with you.


Under the Rule of Law, the citizen surrenders to the state the exclusive right to the use of force in return for severe restrictions on all other powers of government, and for transparency and accountability.  Since WWII, and particularly since 9/11, in the name of "national" and "homeland" security, the activities of government have become increasingly powerful, intrusive, and secret, to the point that they violate this fundamental social contract.  The governing classes appear to have a morbid fear of the citizens and see us as the "enemy," not to be trusted.

There must be no secret government.  It is antithetical to any idea of self-government.  Since databases are instruments of government, there must be no secret databases.

We must forgo any intelligence that we cannot collect through transparent and accountable means.  While we may not have to know sources, methods, or results, there can be no secret programs.  There must be protection of whistle-blowers; the government must avoid even the appearance of persecution.  The government must assume full responsibility to keep its secrets; it may not make it a crime to report those secrets when it fails to keep them.  There must be swift and certain punishment of public officials who mislead, lie to, or conceal from Congress and the people.  (It is past time for Directors Alexander, Brennan, and Clapper to retire with their honor intact.)

The government must admit that it "knows" and is accountable for all use, misuse, and abuse of any data that it collects or stores.  The excuse that "we do not look at it" does not reassure; the mere collection and possession is intimidating.  The citizen cannot be said to consent to a government of which he lives in perpetual fear.

The government must admit that it has no right to claim a power for itself just because Google has it.  Google does not have guns, tanks, drones, nukes, or even dogs.  Our contract with Google may be just as asymmetric as the one that we have with government but it is different.

The government must forgo the use of other governments or agencies to avoid the restrictions of the Fourth Amendment.  A search is no less "unreasonable" because it is conducted by the United Kingdom, AT&T, Citibank, Cablevision, or a private investigator.  Information from such sources must not be used as "cause," "probable" or otherwise, to initiate investigations or justify warrants.

If the government is going to exploit modern technology, it must admit to its power.  It must admit that at some point a change in quantity represents a change in kind, that a search that is reasonable at one scale may be unreasonable when amplified by new technology.

The government must admit that it cannot use the departments of Defense, Homeland Security, or Justice to usurp the police powers reserved to the states and municipalities.


They asked me for my thoughts.  I gave them my thoughts.


Thursday, March 20, 2014

Good Security Practices for Program Libraries


This is the second of a series of posts on "Good Data Processing Security Practices."  The context for the series can be found here.  The following practices and controls are for program libraries, i.e., collections of programs related to one another for purposes of management and control.


Only one person should be allowed to make changes to a program (library).

The person who changes the program (library) should not write any of the changes (or he should write all of them).

All changes to a program (library) should be authorized by management.

All changes to a program (library) should be logged as to the event, the authority, and the content.

Content of the library should be known (in terms of named and structured sub-libraries, named modules, bits and bytes) to at least two people (e.g., developer and user, operations and user).

Content of the library should be reconciled to the expected content at an appropriate and adequate frequency.

Different forms (source, object, load) of the same module should be maintained by different people. An audit trail should be maintained such that a load module can be unambiguously related to the associated source module (a minimal sketch follows this list).

Procedures should exist to maintain a consistent relationship between, and to enable the unambiguous association of, different forms, versions, and parts (e.g., specification, test data, source module, object module, load module, test results, and user documentation) of a program or procedure.  (In the literature this subject is often called configuration management.)

Libraries of specifications, test data, test results, procedures, and other documentation should be recorded on structured and responsive media with appropriate functions for update.
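
The audit trail relating a load module to its source can be kept mechanically.  A minimal sketch, assuming content hashes serve to identify each form of a module:

    import hashlib

    def fingerprint(path: str) -> str:
        """A content hash that uniquely identifies one form of a module."""
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def build_manifest(source_path: str, load_path: str, toolchain: str) -> dict:
        """An audit-trail entry relating a load module to the source that produced it."""
        return {
            "source": fingerprint(source_path),
            "load": fingerprint(load_path),
            "toolchain": toolchain,   # compiler/linker versions used
        }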


Wednesday, March 19, 2014

Good Security Practices for Programs

This is the first of a series of posts on "Good Data Processing Security Practices."  The context for the series can be found here.  The following practices and controls are for computer programs, the instructions to a computer as to how it is to proceed.

A program should be limited in size and scope (e.g., 50 verbs, one (1) page of source code, 2K bytes of object code) or be composed of such programs.

A program should be limited in complexity, i.e., should employ limited control structures (e.g., SEQUENCE, IF-THEN-ELSE, DO-WHILE, and CASE), be limited in the total number of paths, and the paths (may embrace but) should not cross each other.

A program should be written in a language appropriate to the application and employ mnemonic symbols.

A program should be predictable in result, use, or effect, i.e., it must be rigorously specified and obvious in its intent.

A program or procedure should be clearly distinct from its data so that its intent can be readily determined and so that it can avoid contamination by its data (i.e., it must not modify itself and must not contain embedded constants).

The specification must be complete.  It should describe the anticipated output or response for all anticipated inputs or stimuli.  It should also describe the output or result (e.g., error message) for all unanticipated inputs or stimuli.

A specification should include the test data.

The specification must describe all relationships (data and control flows) between a program and its environment.

A program or process must pass to other processes in its environment only those resources (data or other processes) that are consistent with its specification.  To the extent that the resource to be delivered is variable, mechanisms must be included to permit only management or the resource owner to control what resource is passed.

A record must be made (in a journal, log, or data-set) of all communications (event and content, stimulus and response) between a program and other processes in its environment (users, operating system, access methods, database manager).

All communications between a process and its environment should contain enough redundancy (e.g., parity bits, counts, longitudinal redundancy checks, hashing codes, control totals, feedback, acknowledgements, confirmations, etc.) to enable the receiving process to recognize that data has been lost, added, or modified, and to fix accountability and facilitate corrective action.  The amount of redundancy required is a function of the reliability of the process.  (Consider, for example, a hardware process, a software process, and a user.  More redundancy is indicated for communication with another program than for communication with hardware, and less than for communication with a user.)
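
A minimal sketch of such redundancy: a count prefixed and a checksum appended by the sender, verified by the receiver, so that lost, added, or modified data is recognized.

    import struct, zlib

    def frame(payload: bytes) -> bytes:
        """Prefix a length count and append a CRC-32 control total."""
        return (struct.pack(">I", len(payload)) + payload
                + struct.pack(">I", zlib.crc32(payload)))

    def unframe(message: bytes) -> bytes:
        """Verify the redundancy; reject data that was lost, added, or modified."""
        (count,) = struct.unpack(">I", message[:4])
        if len(message) != 4 + count + 4:
            raise ValueError("length check failed: data lost or added")
        payload = message[4:4 + count]
        (crc,) = struct.unpack(">I", message[-4:])
        if zlib.crc32(payload) != crc:
            raise ValueError("checksum failed: data modified")
        return payload

    assert unframe(frame(b"control total example")) == b"control total example"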

A failed communication between the program and other processes in its environment must be recognized.
In the presence of an indication of a failed communication between the program and another process in its environment, a program must attempt corrective action (e.g., repeat the exchange); failing that, it must communicate the failure to at least two other processes (e.g., log and console or log and user).
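
A minimal sketch of that rule: repeat the exchange and, failing that, report the failure to at least two other processes (here, a log and the console).  The names are invented for illustration.

    import logging, sys

    logging.basicConfig(filename="exchange.log", level=logging.INFO)

    def exchange_with_recovery(send, retries: int = 3) -> bool:
        """Attempt corrective action (repeat the exchange); failing that,
        communicate the failure to at least two other processes."""
        for attempt in range(1, retries + 1):
            try:
                send()
                return True
            except OSError as e:
                logging.warning("exchange attempt %d failed: %s", attempt, e)
        logging.error("exchange abandoned after %d attempts", retries)  # process 1: the log
        print("exchange failed; see exchange.log", file=sys.stderr)     # process 2: the console
        return False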

The use of a program or procedure must be recorded with reference made to the user.  Where this service is not provided by a superior process, the program itself should provide it.



Good Security Practices

In the early 1980s I published some recommendations on "Good Data Processing Security Practices."  Because "publication" then was not the same as it is today, I plan to reprise those recommendations here over the next several weeks.

The recommendations are based upon the following propositions:

1) Risk is lowered when multiple people are involved in sensitive functions;

2) Risk is minimized when each individual has access to only those resources required to do their jobs (rule of "least possible privilege");

3) Sensitive activity should be independently authorized;

4)  Duties should be assigned in such a way and records kept such that individuals can be held accountable for their actions;

5) Duties should be assigned such that one person doing his job acts as a check upon those in his environment and is checked upon by them;

6) No one should have access to sensitive combinations of resources.

It is not for me to justify or defend these propositions; I did not originate them.  Rather, they are the controls first articulated by the "little monks" who originated double-entry bookkeeping.  While one might be tempted to dispute with the monks, they are long since dead; the ideas survive them.  While I will not defend them, I will be happy to explain or clarify.

I will make recommendations on the application of these principles to programs, libraries of programs, the process of programming, those who engage in the process, and those who use the programs.  Finally, I will make recommendations on their application to systems.

Unlike the principles, I did articulate the recommendations.  While I assert that they follow from the principles above, that is at least arguable.  Therefore, I stand ready to defend them.

Some of the ideas may seem inconsistent with contemporary or common practice.  As I will point out in a few places, all of the recommended controls are subject to considerations of scale. Even with those caveats, I think that you will still find some useful ideas.