In the US government, we have a pervasive problem of over-classification. https://www.cnn.com/videos/tv/2023/01/27/exp-gps-0129-fareeds-take-us-classification-system.cnn This results from a number of factors. First, almost any author or officer can classify data, that is, specify, among other things, how much must be spent to protect it. Said another way, he specifies how much others must spend to protect the data while not necessarily incurring any of the cost of protection himself.
Second, the authority to classify does not include the authority to change the classification. Once the data has been labeled, often literally with a rubber stamp, it is too late to change it. The implicit assumption is that the decision, once made, is irrevocable. The decision is reviewable, even by a higher authority, only by following a procedure specified for the class.
Third, and as already noted, the classification includes a specification of the procedure that must be followed to lower it. The higher the classification, the more rigorous and expensive the process. Since the cost of declassifying may be equal to or even greater than the cost of classifying in the first place, declassification is rare.
In the enterprise, things are a little different. The authority to classify includes the authority to re-classify or declassify. The classifier's authority comes from his role; it is not arbitrary. Classification is normally limited in time. Because sensitivity decreases with age, and because we are normally protecting plans and rarely sources, classification by default ends automatically, usually in no more than three years, unless renewed.
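The enterprise defaults described above, role-based authority to re-classify and automatic expiry after roughly three years unless renewed, can be sketched as a small data model. This is a minimal illustration with hypothetical names, not any particular product's labeling scheme:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Default lifetime of a label: "no more than three years, unless renewed".
DEFAULT_LIFETIME = timedelta(days=3 * 365)

@dataclass
class Label:
    level: str              # e.g. "confidential", "restricted" (illustrative levels)
    classified_on: date
    expires_on: date = None

    def __post_init__(self):
        if self.expires_on is None:
            # Sensitivity decays with age, so expiry is the default, not the exception.
            self.expires_on = self.classified_on + DEFAULT_LIFETIME

    def is_classified(self, today: date) -> bool:
        return today < self.expires_on

    def renew(self, today: date) -> None:
        # The same role that classified the data may extend the label.
        self.expires_on = today + DEFAULT_LIFETIME

label = Label("confidential", classified_on=date(2020, 1, 15))
print(label.is_classified(date(2021, 1, 1)))   # within three years: still classified
print(label.is_classified(date(2024, 1, 1)))   # past the default lifetime: expired
```

The point of the sketch is the default: protection lapses automatically, and continued protection is what requires a deliberate act.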
Do you believe over-classification could have been exacerbated by the tenet of "write-up, read down"?