There's a concept from computer security known as a class break. It's a
security vulnerability that breaks not just one system but an entire class of
systems. An example would be a vulnerability in a particular operating system
that allows an attacker to take remote control of every computer that runs
that system's software. It's a particular way computer systems can fail,
exacerbated by the characteristics of computers and software.
It only takes one smart person to figure out how to attack the system. Once they
do that, they can write software that automates the attack. They don't have to
be anywhere near the victim, and the automated attack can run while they sleep.
Then they can pass that ability along to someone else, or to lots of people, who
don't have the skill themselves. This changes the nature of security failures,
and completely upends how we need to defend against them.
By illustration: Picking a mechanical door lock requires both skill and
time. Each lock is a new job, and success at one lock doesn't guarantee success
with another of the same design. Electronic door locks, like the ones found in
hotel rooms, have different vulnerabilities. An attacker can find a flaw in the
design that allows him to create a key card that opens every door. If he
publishes his attack software, anyone, not just the attacker, can now open
every lock. And if those locks are connected to the Internet, attackers could
potentially open every door lock remotely, all at the same time. That's a
class break.
This is how computer systems fail, but it's not how we think about
failures. We still think about vehicle security in terms of individual car
thieves manually stealing cars. We don't think of hackers remotely taking control
of cars over the Internet, or remotely disabling every car at once.
As we move into the world of the Internet of Things, where computers
permeate our lives at every level, class breaks will become increasingly
important. Security notions like the precautionary principle, where the
potential for harm is so great that we err on the side of not deploying a new
technology without proof of security, will become more important in a world
where an attacker can open all the doors at once. It's not an inherently less
secure world, but it is a differently secure world. It's a world where driverless
cars are much safer than human-driven cars, until suddenly they're not. We need
to build systems that assume the possibility of class breaks, and that maintain
security despite them.