In 1999, I tracked down every report I could find of a product that had completed a published, formal security evaluation in accordance with trusted systems evaluation criteria. This led to some preliminary results.
At the last National Information Systems Security Conference (the 23rd NISSC) in October 2000, I presented a paper (PDF) that surveyed the trends in the preceding 16 years of formal computer security evaluations.
I collected all of my data in an Excel 97/98 spreadsheet that contains an entry for every evaluation I could find through the end of 1999. The spreadsheet currently includes the reported evaluations from the United States (TCSEC/NCSC and Common Criteria), the United Kingdom (ITSEC and Common Criteria), and Australia, plus whatever evaluations from Canada, France, and Germany were reported by the US, UK, and Australian sites. I am not convinced that this is every published evaluation that took place, but it is every report I could find.
For additional insight, I'd suggest looking at Section 23.3.2 of Ross Anderson's book Security Engineering, which describes the process from the UK point of view. Ross isn't impressed with how the process works in practice; the US process may be somewhat more stringent, but it simply produces different failure modes.
I would be thrilled if anyone interested in a weird research project would use my spreadsheet as a starting point for further analysis of the phenomenon of security evaluations. There are probably other facts to be gleaned from the existing data, and other information that could be collected. As noted, I stopped collecting data at the end of the last century.
Richard E. Smith, email@example.com
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike License.
To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/2.0/
or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.