I know this seems like a pretty weak byline, but bear with me. In studying for the GCIH exam I have found myself pondering some of the wisdom handed down by John Strand, the instructor on the recorded VoD. In the courseware, he stresses the need for an organization to truly understand its environment and patch efficiently, and suggests that the best way to facilitate that may be to standardize on as few platforms as possible. The homogeneity of the environment will both simplify patching and vulnerability management AND make the environment easier to understand and thus protect. This gets back to a fundamental concept in securing anything: you can't protect what you aren't aware of.

This resonated with me: many of my customers have too many systems, both in their infrastructure and in the end systems they are defending. The result is that they are continually out of date and unable to deal with Patch Tuesday, or any other fixes, holistically. So they don't deal with them at all, and are left open for attackers to pillage.

All was well until I listened to last week's Paul's Security Weekly episode, in which they discussed an opinion piece/Twitter feud over patching vulnerabilities promptly in the enterprise. An argument was made that most attackers will use older, tried-and-true exploits or vectors whenever they can, because 1) developing new ones takes time, and 2) it is unlikely that the old ones have been patched on a majority of systems. Furthermore, patching as soon as a fix is available can lead to instability, and that danger far outweighs the exposure of waiting until regression testing can be completed. Here is the statement by the writer (Woody Leonhard) that they focused on:

With a few notable exceptions, in the real world, the risks of getting clobbered by a bad patch far, far outweigh the risks of getting hit with a just-patched exploit. Many security “experts” huff and puff at that assertion. The poohbahs preach Automatic Update for the unwashed masses, while frequently exempting themselves from the edict.

While they took issue with the tone (Tweeps were pretty up-in-arms as well), Paul & Co. made some great arguments for and against, but several times it was mentioned that companies should strive for heterogeneous environments, almost as if the variety of platforms would baffle the adversary. They also argued in favor of holding off on patches until testing is conducted. Those are two things I needed to unpack.

My own experience cannot hold a candle to that of any of the PSW folks, but I am having a hard time seeing heterogeneity as favorable to homogeneity in the environment. Time after time, we see companies big and small struggle to patch against even older flaws. Complexity == confusion for the Blue Team, not robustness, and there is no better way to step that up than to double or triple the number of operating systems or application platforms in play. Using heterogeneity as a clunky means of segmenting your network makes troubleshooting and monitoring it that much harder. Tuning and configuring tools is hard enough; accounting for more platforms only increases the burden.

[Image: simple vs. complex]

Now, if all C-suites took cybersecurity seriously, fully supported a rigorous testing process, didn't skimp on tools, and fully staffed and trained for this, great! But in the companies I have direct contact with, that isn't workable. Security by obfuscation is more harmful to the Blue Team than it is to the Red Team: attackers thrive in environments where the defenders are overwhelmed or confused, and having many different platforms increases the noise floor and stresses the tools and operators alike.

Einstein was fond of saying, “Everything should be made as simple as possible, but not simpler.” I believe simplification (which a smaller number of well-understood platform variants can assist with) benefits the defenders more than the attackers. Even a resource-constrained Blue Team has a much better shot against attackers if it has less to understand, process, and monitor, and it can then leverage real segmentation (network- and host-based) more effectively and uniformly. That better understanding also helps them identify anomalies and incidents. As for patching, it would be too optimistic to assume that anyone has the time to properly evaluate the system impacts of every fix, but having a minimal number of variants in the environment certainly makes this more achievable for small and medium businesses.
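To make that concrete, here is a minimal sketch (Python, and not any particular vendor's tooling) of the kind of quick check I have in mind. It assumes a hypothetical assets.csv export from an asset inventory with hostname, os_name, os_version, and last_patched (YYYY-MM-DD) columns; every name in it is illustrative, not something from the course or the podcast.

# Summarize platform variants and patch staleness from a hypothetical
# asset-inventory export. Fewer distinct (os_name, os_version) pairs
# means fewer patch streams to regression-test and track.
import csv
from collections import Counter
from datetime import date, datetime, timedelta

STALE_AFTER = timedelta(days=30)  # arbitrary threshold, purely for illustration

variants = Counter()   # (os_name, os_version) -> host count
stale_hosts = []       # hosts whose last patch is older than the threshold

with open("assets.csv", newline="") as f:
    for row in csv.DictReader(f):
        variant = (row["os_name"], row["os_version"])
        variants[variant] += 1
        last = datetime.strptime(row["last_patched"], "%Y-%m-%d").date()
        if date.today() - last > STALE_AFTER:
            stale_hosts.append((row["hostname"], variant))

print(f"{len(variants)} distinct platform variants to track:")
for (name, version), count in variants.most_common():
    print(f"  {name} {version}: {count} hosts")

print(f"{len(stale_hosts)} hosts past the {STALE_AFTER.days}-day patch threshold")

Nothing fancy, but the point stands: every extra line in that first report is another platform the team has to understand, tune tools for, and test patches against.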

What do you folks think?  Please let me know your thoughts – this struggle between simplicity and complexity has been on my mind a lot lately!