The Apple case isn’t “privacy” versus “safety”

The current fight is about whether we will impose a technological infrastructure that is exceptionally vulnerable to attackers, in exchange for nothing more than some very, very short-term advantages to the people investigating crimes.

Let me say it differently: We put everyone in danger if we weaken cybersecurity. We only help a few detectives in a few investigations.

I don’t want hackers playing with my home thermostats, my car’s computer, my water or electric utility systems, or financial computers. If we make it convenient for police to reach into our computers, we also make it easy for hackers. This threatens people’s lives directly.

While many people may see this as a political issue (libertarian vs. others, authoritarian vs. others, peace officers vs. others, etc.), I discuss it here from a technological point of view:

Do we want systems that are both reliable and secure, or do we want to compromise those properties to provide occasional benefits to law enforcement?

Here is how I see the technical problem.

It is hard to build technological systems.

It is easy to break technological systems.

Every “feature” we add to a system makes it more complicated and more error-prone. Good design is simple, as in “Keep It Simple, Stupid.”

We have more than enough trouble already trying to build secure and/or private systems.

When we add “law enforcement access” to all security systems, we make the design a lot harder and a lot more expensive. We encourage many vendors to ignore security entirely even when they shouldn’t.
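
To make that concrete, here is a toy sketch in Python (using the cryptography package’s Fernet recipe) of what an “access” mechanism tends to look like structurally: the per-message key gets wrapped not only for the recipient but also under a hypothetical escrow key. This is not any vendor’s actual design; the names and the escrow key are invented for illustration. The point is simply that the escrow path is a second, parallel way to reach the plaintext, and every copy of that escrow key becomes one more thing that must be defended forever.

```python
from cryptography.fernet import Fernet

recipient_key = Fernet.generate_key()  # held only by the intended recipient
escrow_key = Fernet.generate_key()     # hypothetical: held by some authority

# Encrypt the message under a fresh per-message key.
message_key = Fernet.generate_key()
ciphertext = Fernet(message_key).encrypt(b"meet at noon")

# Wrap the same message key once per party allowed to decrypt.
wrapped_for_recipient = Fernet(recipient_key).encrypt(message_key)
wrapped_for_escrow = Fernet(escrow_key).encrypt(message_key)

# Whoever obtains escrow_key -- by court order, insider abuse, or break-in --
# can unwrap the message key and read the message, exactly like the recipient.
recovered = Fernet(escrow_key).decrypt(wrapped_for_escrow)
print(Fernet(recovered).decrypt(ciphertext))  # b'meet at noon'
```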

Perry Metzger argues that the risks to the public don’t justify the benefits to law enforcement.

In other words, the law enforcement community just doesn’t use this type of interception often enough to justify the risk. There are statistics out there on the number of wiretaps and the number of demands made of ISPs. Given the hundreds of millions of people in this country, I don’t think the wiretap numbers reflect a very large percentage of law enforcement activity.

Metzger is probably right (my political opinion, I suppose): this occasional benefit to an investigation doesn’t justify the dangers posed to all other automated systems today and in the future.

What does “law enforcement access” mean, anyway?

This is an essential and largely unspoken part of the argument.

Politicians and their tech-savvy supporters can construct glib answers to this question, referring to “warrants” and “court orders” and such.

Almost all service providers grant this kind of access by ensuring that they themselves can eavesdrop on your data.

Apple’s iMessage service is a rare outlier in that it encrypts “end-to-end,” which gives Apple itself no access to your message contents.
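
A rough way to see the difference is to ask who holds the key. The sketch below uses symmetric keys for brevity; real end-to-end designs like iMessage rely on per-device public-key cryptography, so this is emphatically not Apple’s protocol, just an illustration of the two architectures.

```python
from cryptography.fernet import Fernet

# Typical service: the provider holds the key, so the provider can comply
# with a demand for plaintext by decrypting your data itself.
provider_key = Fernet.generate_key()            # lives on the provider's servers
stored = Fernet(provider_key).encrypt(b"hello")
assert Fernet(provider_key).decrypt(stored) == b"hello"   # provider can read it

# End-to-end: only the two endpoints hold the key; the provider relays
# ciphertext it cannot read, so a demand to the provider yields only ciphertext.
shared_key = Fernet.generate_key()              # known only to the two devices
ciphertext = Fernet(shared_key).encrypt(b"hello")
relayed = ciphertext                            # all the provider ever sees
assert Fernet(shared_key).decrypt(relayed) == b"hello"    # only the endpoints can read it
```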

If we grant law enforcement access to iMessage, how do we limit it to the specific individual or individuals targeted?

Right now “law enforcement access” seems to be a broad-brush thing that Congress occasionally demands and vendors provide in some easy-for-them manner.