Luke Sheppard's blog about information security, web development, and hacking 


Security Through Obscurity Is Widely Misunderstood

Native American Code Talkers, speaking in Navajo, Apache, and other languages, provided critical tactical communications during combat in the Pacific in WWII. Japanese military code breakers never figured out which “obscure” languages the Americans were speaking. Shown here are Navajo Code Talkers Henry Bake and George Kirk, operating a portable radio set in the South Pacific, 1943. Courtesy National Archives and Records Administration.

Pretty much any experienced hacker you talk to, whether an actual crook or a professional IT security researcher, will tell you that “Security Through Obscurity” is useless: at best a waste of your time, at worst a source of a false sense of security. But if you’re serious about IT security, you’ve got to employ several heterogeneous layers of security, and one of those layers should be security through obscurity. Assuming you’re still talking to the hacker or IT security person mentioned above, about now she’d remind you that you’re wasting your time. This attitude is rampant among hardcore security purists. The most emphatic of them border on fundamentalism, wanting to protect all secrets with nothing more than strong, publicly transparent encryption algorithms.

But try to imagine any number of high-stakes secrets: national security secrets, proprietary industrial formulas, or your own passwords, SSN, and ATM PIN. Even if you are a talented cryptographer, would you really be willing to post the encrypted versions of these secrets on your public blog? Remember, we’re being fundamentalists in this scenario. Obviously this kind of puritanical avoidance of security through obscurity would itself give a false sense of security, and would probably attract unwanted attacks on your encrypted files.

But a sensible, thorough IT security system is layered: for example, encrypted files on a password-protected system with some kind of user authorization scheme, presumably behind some kind of firewall logic (put that firewall on the host itself; your life will be easier). In such a layered system, it is wise to design in some obscurity. Don’t name the computer “password-storage-server.example.com”. Name it something obscure, like “pss.example.com”. Or at least entertain yourself by basing the name on an inside joke, like “CSC-STD-002-85.example.com” (Google the host portion of that FQDN), or maybe “iso9564.example.com”.
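The naming advice above is easy to automate. Here is a minimal sketch (not from any real tool; the function name and wordlist are hypothetical examples) of a lint check that flags hostnames whose labels advertise the machine’s role to anyone who sees them in DNS, logs, or certificate transparency feeds:

```python
# Hypothetical sketch: flag hostnames that leak the machine's role.
# ROLE_WORDS is an illustrative wordlist; tune it to your own environment.

ROLE_WORDS = {
    "password", "passwords", "secret", "secrets", "vault",
    "storage", "backup", "finance", "payroll", "admin",
}

def leaks_role(fqdn: str) -> bool:
    """Return True if any label in the host portion contains an obvious role word."""
    host = fqdn.lower().split(".")[0]          # host portion of the FQDN
    labels = host.replace("_", "-").split("-") # split names like password-storage-server
    return any(label in ROLE_WORDS for label in labels)

# A descriptive name leaks intent; an obscure one does not.
print(leaks_role("password-storage-server.example.com"))  # True
print(leaks_role("pss.example.com"))                      # False
```

This is a naive substring-free match on hyphen-separated labels; a real check might also catch abbreviations and concatenations, but the point stands: the obscure name gives an attacker scanning your network one less free clue.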

In fact, an IT security design without intentional obscurity at key points will be lacking. On the surface, this position seems to directly contradict one of the sages of information security, Hal Tipton. In the sixth edition of his famous book, the Information Security Management Handbook, Tipton pretty much trashes security through obscurity and all forms of obfuscation. This book is one of the canonical treatises in an exhaustive (and exhausting) information security education. So why contradict it? Well, Tipton, like the disdainful hacker and IT security researcher imagined above, was talking about the so-called “Security Through Obscurity Model”. Ya. Of course. As the overall model or philosophy for an information security system, you could hardly find a worse guiding principle than security through obscurity. Fortunately, even Tipton eventually acknowledges that “Sometimes Security Through Obscurity is not such a bad thing after all…But a dash of obscurity added to an overall security recipe…can make things even stronger.” I just wish the vocal and respected IT security experts handing out advice and criticism to sysadmins and users would go a little deeper and make this distinction. They’re doing everyone a disservice by denouncing all forms of security through obscurity.

 