My textbook lists categories of cyber-attacks that focus on an attack’s lasting impact: how does it affect the target’s assets and resources? Since the categories reflect the attack’s impact on the target, they really represent risks. Here are the categories I use right now:
Denial of service – Pillage – Subversion – Masquerade – Forgery – Disclosure
This is a work in progress as I figure out some conceptual ideas.
Continue reading The Six Types of Cyber-Risks
The big news this week is a protocol flaw in the Wi-Fi Protected Access protocol, version 2 (WPA2). The Ars Technica article covers the details pretty well. Nearly every Wi-Fi router on the planet uses WPA2 these days. The problem does not directly damage your system, but it can uncover data you had intended to encrypt.
The technique tricks the system into reinstalling a cryptographic key it has already used, which resets the associated nonce and yields the same key stream twice. To keep encrypted data safe, we must never encrypt different data with the same key stream (here’s an example of how it fails). Crypto system designs usually account for this, but the attack on WPA2 forces the reuse anyway.
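To see why key-stream reuse is so dangerous, here is a minimal sketch (the messages and the random key stream are hypothetical, not taken from WPA2 itself). A stream cipher encrypts by XORing plaintext with a key stream; if the same key stream encrypts two messages, XORing the two ciphertexts cancels the key stream completely, leaving the XOR of the two plaintexts:

```python
# Sketch: why reusing a key stream is catastrophic (hypothetical data).
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

msg1 = b"Transfer $100 to Alice"
msg2 = b"Transfer $900 to Mallo"

# One random key stream used for BOTH messages -- this is the bug.
keystream = os.urandom(len(msg1))

ct1 = xor_bytes(msg1, keystream)
ct2 = xor_bytes(msg2, keystream)

# The attacker never sees the key stream, yet XORing the two
# ciphertexts cancels it out entirely:
leak = xor_bytes(ct1, ct2)
assert leak == xor_bytes(msg1, msg2)

# If the attacker knows (or guesses) one plaintext, the other
# falls out immediately:
recovered = xor_bytes(leak, msg1)
print(recovered)  # b'Transfer $900 to Mallo'
```

With enough traffic, an attacker doesn’t even need a known plaintext; statistical analysis of the XORed plaintexts does the rest.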
Continue reading The Big Bug in the News: the WPA2 flaw
I sympathize with developers who throw up their hands and say, “I don’t do security stuff.” No matter what you choose, there’s a trade-off that could go wrong. It’s especially troublesome if one deploys a “security website.” I’ve deployed security education websites in many environments over the past 20 years, and I rarely achieve the security level I’d like.
I wanted to watch a security webinar today. But the webinar requires Adobe Flash, in which security researchers seem to uncover one or two vulnerabilities a month. I discarded Flash when upgrading my OS a couple of years ago. It’s ironic that a security webinar might tempt it back onto my machine.
Continue reading Tiptoeing Through Vulnerabilities
Symantec is one of the companies that holds the keys to the Internet: they are a trusted certificate authority for authenticating major web sites. All major browsers recognize Symantec as a trustworthy source of SSL/TLS authentication certificates. Symantec (also known by its subsidiary name Verisign) is part of a chain of trust that keeps our Internet traffic safe.
Recent reports suggest that they have broken their trust with the Internet community. Symantec has apparently delegated some of its authentication authority to Blue Coat software, a company that makes and sells network snooping gear. A 2013 report by Reporters Without Borders contains 2 pages highlighting Blue Coat’s role in helping repressive regimes monitor encrypted web traffic.
Symantec has issued Blue Coat its own authority certificate. Blue Coat can use this to create and distribute bogus certificates that let its gear impersonate web sites and decrypt traffic the users believed was protected.
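The mechanics of that delegation can be sketched in a few lines. This toy model (all names hypothetical, with signatures reduced to issuer links) shows the core of certificate path validation: a browser accepts any certificate that chains, link by link, back to a root in its trust store. Once a root delegates an intermediate authority certificate, any certificate the intermediate mints for any domain chains back just as cleanly as a legitimate one:

```python
# Minimal sketch of chain-of-trust checking (hypothetical names;
# real validation also verifies signatures, expiry, and constraints).

TRUSTED_ROOTS = {"RootCA"}  # the browser's built-in trust store

# Each "certificate" is modeled as (subject, issuer).
certs = {
    "example.com":  ("example.com", "RootCA"),        # legitimate leaf
    "Intermediate": ("Intermediate", "RootCA"),       # delegated authority
    "bank.example": ("bank.example", "Intermediate"), # minted by intermediate
}

def chains_to_root(subject: str) -> bool:
    """Walk issuer links until we reach a trusted root, or give up."""
    seen = set()
    while subject not in TRUSTED_ROOTS:
        if subject in seen or subject not in certs:
            return False
        seen.add(subject)
        subject = certs[subject][1]  # move up to the issuer
    return True

print(chains_to_root("example.com"))   # True
print(chains_to_root("bank.example"))  # True: intermediate-minted cert accepted
```

The browser cannot tell the difference: both chains terminate at the same trusted root, which is why delegating an authority certificate to an interception vendor is so consequential.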
Continue reading Symantec Breaks Trust with the Internet?
The current fight is about whether we will impose a technological infrastructure that is exceptionally vulnerable to attackers, in exchange for nothing more useful than some very, very short-term advantages to people investigating crimes.
Let me say it differently: We put everyone in danger if we weaken cybersecurity. We only help a few detectives in a few investigations.
I don’t want hackers playing with my home thermostats, my car’s computer, my water or electric utility systems, or financial computers. If we make it convenient for police to reach into our computers, we also make it easy for hackers. This threatens people’s lives directly.
Continue reading The Apple case isn’t “privacy” versus “safety”
About 20 years ago, I worked with a fellow who proudly told me that he had once written a flawless piece of software. He kept its inch-thick line printer listing as a shrine in his cubicle. I never asked him for details, because he got angry when people questioned his judgement on computing. After all, he had once been in a panel discussion with Grace Hopper!
I have my own Grace Hopper stories, but today’s interesting panel discussion took place earlier in December at the 2013 ACSAC in New Orleans. Roger Schell, a luminary in the annals of cybersecurity, declared that 1980s techniques had indeed created “bug-free software.”
Roger Schell is wrong. Continue reading The “Bug-Free Software” fallacy
Last week I participated in a very geeky panel discussion about a now-defunct standard for computer system security: the TCSEC. I showed some charts and diagrams about costs, error rates, and adoption of government-sponsored programs for evaluating computer security. During the panel, some audience members made the following claim:
“After its evaluation, Multics never needed a security patch.”
I admit I find this hard to believe, and it’s not consistent with my own Multics experience. However, most of my Multics experience predated the evaluation. So I ask: does anyone know if Multics had a security patch after its B2 TCSEC evaluation?
[see newer posting; also, I’ve added links below to Multics information online] Continue reading Multics was flawless?