Wednesday, April 22, 2026

Computer Security

When I started in this field that we now call cybersecurity, we called it Data Security and Privacy.  Over these seventy years it has had a number of names, as has what we now call information technology, or IT.  

I like to think of what I do as Computer Security, the art and science of keeping the computer safe, using it safely, using it to preserve its contents, and assuring its results.  

In the late eighties or early nineties, I chaired the ISSA committee that undertook to define the Professional Body of Knowledge, that is, the scope and content of the knowledge that information system security professionals expected of one another, the knowledge that defined and limited the profession.  If memory serves, after extensive consideration and discussion, we organized the knowledge into thirteen domains.  Today those have been combined, refined, and reduced to: 

  • Security and Risk Management: Governance, legal/regulatory compliance, ethics, threat modeling, and business continuity.
  • Asset Security: Protection of data assets, data classification, and retention.
  • Security Architecture and Engineering: Secure engineering processes, security models, cryptography, and physical security.
  • Communication and Network Security: Secure network design, components, and communication channels.
  • Identity and Access Management (IAM): Physical/logical access control, managing user identity, and authentication.
  • Security Assessment and Testing: Vulnerability assessment, penetration testing, and security auditing.
  • Security Operations: Incident response, disaster recovery, digital forensics, and investigations.
  • Software Development Security: Security in the software lifecycle, secure coding guidelines, and software configuration management.
The same content, more efficient expression.  Keep in mind that the field and the body of knowledge simply are; that is to say, they exist independent of any and all attempts to describe them.  They are what this blog is all about.  

Wednesday, April 15, 2026

Artificial Intelligence

This blog post is a work in progress.  Comments are solicited.  It is an attempt to bring my seventy years of experience in using, governing, managing, and securing innovations in computing technology to the issue of "artificial" intelligence.

In 1956, I was part of the first generation of computer user/programmers.  We stood on the shoulders of the giants that I call generation zero: Turing, Flowers, Shannon, Aiken, Eckert, Mauchly, von Neumann, Hopper, et al.   

We had to tell the computer exactly how to arrive at the result that we wanted.  We expressed the computations that we wanted in terms of the operation codes of the hardware.  Almost all computers had op codes for add; some could subtract.  The powerful ones even had operations for multiply and divide.  However, even those did multiplication by iterative addition and division by iterative subtraction.  We called that programming.  Those of us who could describe how to achieve a result by such tiny steps were called programmers.

In the mid fifties, one might create a program on a blackboard, flip-chart, or yellow pad.  We would transcribe the program into punched paper cards or tape that could be electro-mechanically "read" by and into the storage of the computer.  The paper and the reader were our user input mechanism, our application programming interface, our API.  We would enter the program first, followed by the data, also in punched paper.  The process was so expensive that the results had to be very valuable in order for the process to be efficient.  

While in the early days I operated the computer, within a decade programmers had pretty much been excluded from the "computer room" and the role of operator had become specialized.  Within another decade the computer terminal was the common user interface.  In 1981 the IBM personal computer had a keyboard and a CRT that could display 80 columns by 25 lines of green alphanumeric characters.  Like the terminals, it provided what we now think of as a command line interface.  Three years later the Apple Macintosh had a graphical user interface (GUI) with a mouse.  Two years later came Windows.  "Point and click" was now part of how one used a computer. 

As computers have become more powerful and cheaper, much of that improved power has gone into the user interface, into making them easier to use.  Early examples include high level languages, like Fortran and Cobol, and interpreters like APL and BASIC.

Today's input devices include touch screens, cameras, microphones, earphones, and speakers.  Modern computers can talk, listen, and recognize and process images.  I can testify that we hardly thought about such capabilities in 1956.  

Games have always been part of computing and a measure of computer program "intelligence."  In 1956, my colleague, Dick Casey, and I wrote a program to teach the IBM 650, using console switches and lights for I/O, to play Tic-Tac-Toe.  My mentor, Dr. Arthur Samuel, wrote a program for the IBM 701 to "learn" to play checkers.  It got really serious in 1997 when IBM's Deep Blue defeated chess champion Garry Kasparov in a six game match.  On February 16, 2011, IBM's Watson defeated champions Ken Jennings and Brad Rutter at the TV game show Jeopardy!  

In 2015, Google's AlphaGo defeated European champion Fan Hui 5-0.  Its most famous victory was against world champion Lee Sedol in 2016, winning the series 4-1.  Its successor, AlphaGo Zero, "taught" itself the game.  

In November 2022, OpenAI introduced ChatGPT and demonstrated its ability to converse in natural language.  Using text-to-speech and speech recognition, it can even talk and listen.  It was to be the first of many large language models (LLMs), the latest user interface to the computer.  However, the ability of these chat bots to communicate in natural language hides the huge data sets, the enormous computing power, and the complexity involved.  This has caused many people to invest them with a personality and autonomy that they merely mimic.

Using these programs, one no longer has to provide the computer with a program, the steps to create the result.  One need merely describe the result, and the program "figures out" how to arrive at it.  

We call the model "large" because it includes a huge amount of data, so much that it all but defies human comprehension.  Indeed, it can solve difficult problems in conversational time.  However, it is correct to think of it as fast "table lookup" in a big table, using so much computing power and energy that it is only now becoming efficient.  

The program organizes its data to optimize its use; we call the data "training" and the organization "learning," expressions that we analogize from human intelligence.  Therefore, the "model" instantiates both the capabilities and limitations of the program and its data.  

In part because of the complexity, the training, and the lack of precision in the natural language of the prompts that express what we want the computer to do, the results are sometimes not what the user intends and expects.  Said another way, in spite of its capabilities, artificial intelligence may be no less error prone than natural intelligence.  Sometimes the results are so far from the user's intention that, again by analogy to the natural, the computer is said to "hallucinate."  Therefore, it is best to restrict the use of AI to the set of problems where, while we might not be able to arrive at the result by ourselves, we can easily check it.  For example, while one might not be able to compute the cube root of a large integer, we can recognize it when we see it.  
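The cube-root example is easy to make concrete: checking a proposed answer takes a single multiplication, while producing it from scratch takes a search.  A minimal Python sketch; the numbers are purely illustrative:

```python
# Verifying a result is often far cheaper than computing it.
# Checking a claimed cube root needs one cube; finding it needs a search.

def is_cube_root(candidate: int, n: int) -> bool:
    """Cheap check: does candidate**3 equal n?"""
    return candidate ** 3 == n

# Suppose an AI claims the cube root of 912673 is 97.
n = 912673
claimed = 97
print(is_cube_root(claimed, n))   # True: easy to check, even by hand
print(is_cube_root(96, n))        # False: a wrong answer is caught at once
```

This is the pattern to look for in AI applications generally: hard to produce, easy to verify.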

Nothing useful can be said about the security of any mechanism except in the context of a specific application and environment.  --Robert H. Courtney, his first law.  

Some human being or enterprise is responsible for everything a computer is asked to do and for all the properties and uses of the result.  

The responsibility of both individuals and enterprises for the application and use of a computer, specifically including those applications that use natural language and mimic people, includes compliance with law, regulation, contract, moral standards, and prudence.  

As with anything else, an individual exercises her responsibility by complying with the law, regulation, contract, ethics, use of appropriate tools, and due care.  This requires knowledge, skill, ability, experience, and good judgement.  One starts by ensuring that one meets the applicable requirements.   One ends by doing nothing rather than doing the wrong thing.  

An enterprise exercises its responsibility by governance and policy, i.e., what it authorizes its agents to do and tells them not to do.  Management must use all available controls including, but not limited to:

  • Direction
  • Training
  • Assignment of duties, roles, responsibility, limits, discretion, and authority
  • Supervision
  • Multi-party controls
  • Automation
  • Budget
  • Recognition and respect
  • Compensation
  • Disciplinary action
  • Other



One of the things that policy must do is to express management's risk tolerance in such a way that managers and professionals at all levels understand what it implies for what they should do.  For example, the management of a mature enterprise might say such things as "Novel technology must be applied in a cautious, conservative manner," "line of business managers (not IT) must authorize and budget for the application of novel technology," "application of novel technology must be initiated, authorized, and budgeted for by two or more levels of line of business management," or "novel technology should be exploited and applied consistent with normal business risk."

Alternatively, the management of a high-business-risk start-up might say "novel technology should be aggressively applied and exploited in pursuit of business opportunity."  Needless to say, this author recommends the more conservative approach for most enterprises.  

Recommendations

Things you Should be Doing Anyway

Use strong authentication: use at least two kinds of evidence (shared secret, possession, biometric, or behavior), at least one of which is resistant to replay, e.g., a one-time password (OTP) or a biometric with a liveness test.  Apply it to Internet-facing applications, mission-critical applications, management and administrative controls, and everywhere else, in that order.  This is the most efficient of all measures; nothing else will give you as much protection per dollar expended.  Consider passkeys, then one-time passwords from hardware tokens, software tokens, e-mail, or SMS messages, in decreasing order of strength.  Consider offering users a choice of methods for their convenience.  
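The replay resistance of a one-time password comes from binding the code to the current time window, as in the standard TOTP construction (RFC 6238).  A minimal Python sketch; the secret and the parameter values (30-second step, 6 digits, the common defaults) are illustrative, not a recommendation:

```python
import hmac
import struct
import time
from hashlib import sha1

def totp(secret: bytes, t: float, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238 construction).
    The code changes every `step` seconds, so a captured value
    cannot be replayed in a later window."""
    counter = int(t) // step                       # current time window
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"shared-secret-provisioned-once"   # hypothetical, shared out of band
now = time.time()
print(totp(secret, now))        # valid during this 30-second window only
print(totp(secret, now + 60))   # a later window yields a different code
```

A password can be replayed forever; a code like this is stale within a minute, which is what makes it "resistant to replay."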

Structure your network.  Start by isolating high-risk Internet-facing applications (e.g., e-mail, browsing, and messaging) from internal enterprise applications, from mission-critical applications, from administrative and management controls, from servers (e.g., database, file, communications), and from storage devices.  

Layer your defenses.

Employ "least privilege" access control.  

Restrict "write" access.  In order to preserve accountability, write access to any object should be restricted to a single individual or process, e.g., an application or a database manager.  If you employ the common risky practice of granting everyone "read/write," changing to this policy will involve some administrative effort and may encounter some user resistance.  However, it will reduce insider risk.   

Employ "zero trust."  Zero trust can be thought of as a special case of least privilege in which every connection between processes is authenticated in both directions, both vertically and horizontally.  For example, users and applications mutually authenticate one another, applications authenticate the database manager, the database manager authenticates the file system, and vice versa.  This policy can often be implemented using existing controls.  
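Mutual authentication of each connection can be illustrated with a two-way challenge-response.  This is a hypothetical HMAC-based sketch, not any particular product's protocol; the shared key is assumed to be provisioned out of band:

```python
import hmac
import os
from hashlib import sha256

class Party:
    """Either end of a connection; each must prove possession of the
    shared key to the other -- authentication in BOTH directions."""
    def __init__(self, key: bytes):
        self.key = key

    def challenge(self) -> bytes:
        self.nonce = os.urandom(16)   # fresh nonce resists replay
        return self.nonce

    def respond(self, nonce: bytes) -> bytes:
        return hmac.new(self.key, nonce, sha256).digest()

    def verify(self, response: bytes) -> bool:
        expected = hmac.new(self.key, self.nonce, sha256).digest()
        return hmac.compare_digest(expected, response)

key = os.urandom(32)                 # provisioned out of band
app, db = Party(key), Party(key)     # e.g., application and database manager

# Each side challenges the other; trust only if BOTH checks pass.
ok_db = app.verify(db.respond(app.challenge()))    # app authenticates db
ok_app = db.verify(app.respond(db.challenge()))    # db authenticates app
print(ok_db and ok_app)   # True only when both directions succeed
```

In practice this is usually mutual TLS rather than raw HMAC, but the principle is the same: neither end assumes the identity of the other.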

Monitor and log all activity.  Monitor and log both the traffic that passes a firewall and the traffic that is rejected.  Changes in traffic patterns may be evidence of an attack.  
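Detecting a change in traffic patterns can be as simple as comparing the current count of rejected connections against a historical baseline.  A toy Python sketch; the counts and the three-sigma threshold are invented for illustration:

```python
from statistics import mean, stdev

def anomalous(history: list[int], current: int, z_threshold: float = 3.0) -> bool:
    """Flag the current count of rejected connections if it is more than
    z_threshold standard deviations above the historical mean."""
    mu, sigma = mean(history), stdev(history)
    return current > mu + z_threshold * sigma

# Hourly counts of firewall-rejected packets (illustrative data).
baseline = [120, 131, 118, 125, 122, 130, 127, 119]
print(anomalous(baseline, 128))   # False: a normal hour
print(anomalous(baseline, 900))   # True: a sudden spike, possible attack
```

The point is not the statistics but the habit: a baseline makes "changes in traffic patterns" something a program, not a human, watches for.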

These policies and controls can greatly increase the cost and time-to-success of an attack.  Their cost is small, usually justified by the reduction in the cost of the risk that they mitigate.  They are both essential and efficient.  

As we have noted, large language models increase both risk and opportunity.  In the next section we will make recommendations to reduce the risk and safely exploit the opportunities. 

Reducing AI Risk






Friday, February 6, 2026

Enterprise Network Security

Taking only the success of ransomware for evidence, one infers that too many of our enterprise networks are flat.  There is a path between every pair of nodes in the enterprise.  That is to say, the ease and latency of connecting between any two selected nodes in the network is roughly the same as between any two chosen at random.  This is the default that network engineers strive for; too often, security is not even on the list of requirements.  The result is that compromise of the credentials of one end user can, and does, bring down the entire enterprise.  

At a minimum, mission-critical applications should be isolated from fundamentally vulnerable applications like e-mail and browsing.  However, isolating users from applications, applications from services, and services from storage is even better.  Remote access should be by end-to-end application-layer encryption.

Taking the isolation strategy further, create multiple layers, for example, Internet, users, applications, services, files, and storage. Nodes on one layer can access and be accessed only by those on adjacent layers.    

Finally, and best, visualize a smart switch; all users, applications, and services are connected only to that switch.  Think of one cable connecting each application or service directly and only to the switch (though dedicated VLANs would be more efficient).  Any connection between a user and an application, or between an application and a service, can only be through this smart switch.  Users connect to the switch via TLS and strong authentication (e.g., FIDO2 for security and convenience).

The switch uses a list of rules that describes all permitted connections between an authenticated user and an application, or between an application and a service.  All other possible connections are denied by default: the restrictive access control policy (see Cheswick and Bellovin), least privilege, or "zero trust."  
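Such a rule list is a default-deny (white-list) policy.  A minimal Python sketch; the user, application, and service names are hypothetical:

```python
# Default-deny connection broker: only pairs on the explicit white list
# may connect; every connection not listed is refused.

ALLOWED: set[tuple[str, str]] = {
    ("alice", "payroll-app"),          # user -> application
    ("payroll-app", "payroll-db"),     # application -> service
    ("bob", "crm-app"),
}

def may_connect(source: str, destination: str) -> bool:
    """Restrictive (white-list) policy: permit only listed pairs."""
    return (source, destination) in ALLOWED

print(may_connect("alice", "payroll-app"))   # True: explicitly permitted
print(may_connect("alice", "payroll-db"))    # False: no direct path to data
```

Note that even an authenticated user cannot reach the database directly; her only path to the data is through the application, which is exactly the lateral-spread resistance the post describes.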

These strategies come at the expense of some inconvenience, administrative cost, reduced function, and an increase in latency.  However, they increase the cost of attack and resist lateral spread within the enterprise network. 

Getting from a flat network to one like those proposed here is not trivial.  The switch must scale to the number of users, applications, and services, and to the traffic.  The necessary and permitted connections, that is, the access rules (the white list), must be identified and recorded.  Mistakes may cause temporary disruption.  Fortunately, there are suppliers and consultants that specialize in this.  

Thursday, December 11, 2025

Bits are Bits

NIST cannot quite get it right.  They have gone from encouraging the use of special characters to discouraging them and relying only on password length.  

The strength issue is not about what the user must enter, in ASCII or another code, but how much work it would take an exhaustive, or brute-force, attack to find it.  Both length and special characters are ways to increase the work of attack.  Each adds bits.
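Those bits can be computed directly: each character multiplies the search space by the size of the alphabet, so the work factor in bits is the length times the log of the alphabet size.  A minimal Python sketch:

```python
from math import log2

def password_bits(alphabet_size: int, length: int) -> float:
    """Work factor of a brute-force search, in bits: each character
    multiplies the search space by the alphabet size."""
    return length * log2(alphabet_size)

# Length and character-set size are interchangeable sources of bits.
print(round(password_bits(26, 12), 1))   # 12 lowercase letters -> 56.4 bits
print(round(password_bits(95, 8), 1))    # 8 printable-ASCII chars -> 52.6 bits
```

Note that twelve lowercase letters beat eight characters drawn from the full printable set, which is the arithmetic behind "prefer length to complexity."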

We first started to insist upon upper and lower case and special characters to get more bits into fixed-length passwords.  While most of you are too young to remember it, for years passwords were limited to 8 characters, or one fetch, for performance reasons.  While modern computers are so fast that performance is no longer an issue, and modern database managers will accommodate passwords of any length, there may still be systems that limit the length of passwords.  Unfortunately, we forgot why we were insisting upon complexity.

Before the Internet, most end users had fewer than a handful of passwords, many only one.  Today many users have tens, even hundreds of passwords. (As I write this, I have 310.)  As the number of passwords grew so did bad practice.  Users chose passwords that were easy to remember and enter, and then reused those that met these tests.   

To resist this user behavior, many managers introduced rules to encourage strong passwords and resist weak or reused ones.  This solution has become the problem.  Choosing passwords was already hard enough; choosing passwords that meet well-intended but otherwise arbitrary rules is often too much.  Otherwise strong passwords, including those generated by a password manager, might not meet the rules.  Forcing periodic changes added insult to injury.  

Thus, NIST now recommends length.  While length adds to strength, the longer the password, the harder it is to enter, particularly without error.  Strength is measured in bits, not characters; use of the entire character set adds bits too, and in some special cases may still be required.  

All this is by way of saying that choosing, remembering, and using strong passwords is not easy; choosing, remembering, and entering more than a handful of them is harder still.  It has become a computer application.  Password managers are somewhere between popular and necessary.   

Courtney taught us that "nothing useful can be said about the security of a mechanism (including passwords) except in the context of a specific application and environment."  Writing guidance that covers all applications and environments has always been what we call a "hard problem."  Writing guidance that will stand up to changes over time is particularly hard.

A final word.  Well-chosen and managed passwords are resistant to brute-force attacks.  Those are not the kinds of attacks that we are seeing.  Rather, we are seeing social engineering followed by fraudulent replay attacks.  Passwords, of whatever strength, are fundamentally vulnerable to replay attacks.  What we need is strong authentication, that is, at least two kinds of evidence, at least one of which is resistant to replay.  Said another way, all strong authentication is multi-factor, but not all multi-factor is strong.  

Prefer length to complexity, but allow the whole character set.  (Encourage complexity if length is otherwise restricted.)  Encourage your users to use a (cross platform?) password manager. Offer them strong authentication options.  Mandate strong authentication for employees.  Consider, indeed prefer, passkeys (https://whmurray.blogspot.com/search?q=passkeys).  Use biometrics for convenience in applications where replay is otherwise resisted. Prefer one-time passwords to mandatory periodic password change.  





Wednesday, September 17, 2025

Security Now

It is Tuesday evening.  I am listening to Security Now.  If you are not, I recommend that you do so.  

Security Now features Steve Gibson, the provider of the personal pen test ShieldsUP! and the author of the storage-integrity program SpinRite.  

I find myself passing over reports of problems, vulnerabilities, attacks and breaches.  I simply wait for Steve's weekly informed analysis.  Now, I admit it, I am both old and lazy.  If I applied what remains of my intellect, I might be able to distill from the media, perhaps, as much as ninety percent of what Steve does.  After all, what I lack in intellect, I make up for in experience. Still, it turns out to be much more efficient for me to wait for Steve's articulate analysis, than to do the work myself.  

Security Now is a weekly two hour podcast on the security and privacy issues of the week.  They pride themselves on being available everywhere in every format.  While I simply rely upon my YouTube subscription, you can expect to find it in your favorite place and format.  

I hope that you find the weekly two hours to be as valuable, efficient, not to say entertaining, as I do.  



  

Wednesday, September 10, 2025

iPadOS 26

The geeks have been militating to make iPadOS more like macOS, Android, or even Windows.  This frightened me.  I take comfort from the fact that with iOS I am more than a click away from contaminating my system.  I take comfort from the fact that one can recommend iOS for children and people born before 1980. 

As beta releases of iPadOS 26 have become available, there have been reviews saying that the iPad is "ready for laptop duty," you "can finally ditch your Mac," and "the iPad is a full-on computer now."  

Thank God!  All the hype to the contrary notwithstanding, one still "cannot change the core system or application code of iPadOS 26 directly through the user interface. Apple's operating systems, including iPadOS, are designed as a "walled garden" for security and stability. This prevents users from altering the compiled code, which is what the system and apps actually run on."  That from Apple; I could have saved myself a lot of angst if I had asked Apple in the first place.


Yes, the screen in 26 is much more like that of the Mac.  The windowing and multi-tasking are more like that of the Mac.  The file system is more capable.  There is a task bar with drop-down menus.   One can copy and paste from one app to another, indeed from one device to another.  One takes comfort in the fact that Apple first figures out how to do a feature or a function safely before adding it to the system.  


But the iPad is still an application-only computer.  It still uses purpose-built apps, nearly two million of them in the store.  It is still a closed system.  Program code is still hidden.  It is a system in which one can enjoy, in safety, most, but not quite all, of the benefits of the general-purpose computer on which it is built.  Rest easy, Steve Jobs.


 


Monday, September 1, 2025

Attack Surface Management

Thanks to our colleague, Ben Carr, for the idea and the title of this post.  I wrote most of what follows in response to a post of his on LinkedIn.

The attack surface of the typical enterprise includes all the users as well as all the other resources.

I think about the desktop where most of the vulnerabilities are in system code, system code that dwarfs the applications.

I think about all the applications that are on that system that are rarely if ever used.  

I think about the orphan data and servers.

I think about the excess privileges that permit entire enterprises to be compromised starting with one user who clicks on bait in an e-mail or on a web page that he visits out of curiosity.  


So, one way to manage the attack surface is to reduce it.

  • Remove unused user IDs.  Reverify and reauthorize users at least annually.  
  • Remove unused or rarely used applications or services.
  • Install only what you really need.
  • Prefer purpose-built apps to general and flexible facilities (e.g., browsers, spreadsheets, word processors).
  • Hide systems, applications, services, and sensitive data behind firewalls and end-to-end application-layer encryption.
  • Employ restrictive access control (i.e., least privilege, zero trust, "white list") at all layers.
  • Scan and patch only what is left (i.e., that which can be seen by potentially hostile people and processes).
  • Other.