
Security through obscurity

Security through obscurity or security by obscurity is a controversial principle in security engineering, which attempts to use secrecy to ensure security. A system relying on security through obscurity may have theoretical or actual security vulnerabilities, but its owners or designers believe that the flaws are not known, and that attackers are unlikely to find them.

For example, if somebody stores a spare key under the doormat in case they are locked out of the house, then they are relying on security through obscurity. The theoretical security vulnerability is that anybody could break into the house by unlocking the door using the spare key. However, the house owner believes that the location of the key is not known to the public, and a burglar is unlikely to find it.

In cryptography, the reverse of security by obscurity is Kerckhoffs' principle, stated in the 1880s, which holds that system designers should assume the entire design of a security system is known to all attackers, with the exception of the cryptographic keys. In Kerckhoffs' original formulation, "the security of a cypher resides entirely in the key". Claude Shannon, the father of information theory (and of much else), rephrased it as "the enemy knows the system". Historically, security through obscurity has been a very feeble reed on which to rely in matters cryptographic: obscure codes, cyphers, and cryptosystems have repeatedly fallen to attack regardless of the obscurity of their vulnerabilities.
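Kerckhoffs' principle can be illustrated with a toy sketch in Python. The construction below (a keystream derived by hashing a key, nonce, and counter) is an assumption chosen for brevity, not a production cipher: the point is only that every line of the algorithm is public, and security is meant to rest entirely in the secret key.

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by hashing key || nonce || counter (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with the keystream; the same call also decrypts."""
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))

# The algorithm above is fully public; only `key` is secret.
key = secrets.token_bytes(32)
nonce = secrets.token_bytes(16)
ciphertext = encrypt(key, nonce, b"attack at dawn")

# Applying the same operation with the same key recovers the plaintext;
# an attacker who knows the algorithm but not the key gains nothing useful.
assert encrypt(key, nonce, ciphertext) == b"attack at dawn"
```

A system designed the other way around, whose safety depends on the algorithm itself staying hidden, fails completely once the design leaks, whereas a leaked key here can simply be replaced.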

The full disclosure movement goes further, arguing that security flaws should be disclosed as soon as possible, withholding the information no longer than is necessary to fix or work around the immediate threat.

Table of contents
1 Advantages and Disadvantages
2 In Practice
3 Historical note

Advantages and Disadvantages

It is sometimes argued that security by obscurity is better than no security at all. In the example above, the claim would be that hiding a spare key under the mat is better than leaving the door unlocked. Critics reply that these are not the only possibilities.

Many people nevertheless consider 'security through obscurity' to be a flawed approach.

In Practice

Operators of systems that rely on security by obscurity often keep secret the fact that their system has been broken, so as not to destroy confidence in their service or product. In some cases this may amount to fraudulent misrepresentation of the security of their products.

Typically, the designers believe that they have ensured security by keeping the design of the system secret. Those who approach security by trying to keep things secret rarely have enough perspective to realize they are inviting trouble, sometimes very big trouble. Self-delusion or ignorance here is a very difficult problem with many, almost universally unfortunate, consequences.

This security practice occasionally sets the world up for debacles like the RTM worm of 1988 (see Morris worm), which exploited several (somewhat obscure) holes in common software to spread itself across the Internet.

Historical note

There are conflicting stories about the origin of this term. It has been claimed that it was first used in the Usenet newsgroup news:comp.sys.apollo during a campaign to get HP/Apollo to fix security problems in its Unix clone Aegis/DomainOS (they didn't change a thing). ITS fans, on the other hand, say it was coined years earlier in opposition to the incredibly paranoid Multics people down the hall, for whom security was everything. In the ITS culture it referred to (1) the fact that by the time a tourist figured out how to make trouble he'd generally got over the urge to make it, because he felt part of the community; and (2) (self-mockingly) the poor coverage of the documentation and the obscurity of many commands.

One instance of deliberate security through obscurity is recorded: the command to allow patching the running ITS system (altmode altmode control-R) echoed as $$^D. If you actually typed alt alt ^D, that set a flag that would prevent patching the system even if you later got it right.