IT Security’s 50 Shades Of Gray
By Leonid Shtilman
Any pragmatic security professional understands all too well that there’s a world of difference between theory and practice when it comes to protecting corporate data and managing risk. In theory, antivirus detects malicious software and stops it from ever executing on endpoints. In practice, intruders exploit the window of time before new malware is added to AV databases and can be detected. In theory, meeting compliance mandates is all it takes to run a successful security department. In practice, many risks must be managed beyond the rudimentary requirements instituted by regulators. In theory, security professionals have an endless supply of resources at hand to install every control they identify as beneficial to enterprise security. In practice, well, we all know that never happens.
It’s that differential between sounds-good-at-first theories and the day-to-day realities of IT operations that has kept the very rational approach of application whitelisting from ever taking root in the enterprise in any meaningful way. The idea behind whitelisting is to allow only a tightly constrained list of known-good applications to run on enterprise endpoints and block all other unknowns. The problem is that on any given endpoint’s application list there is white and black, and then there are about 50 shades of gray. For every obviously productive program and every obviously malicious one, there are about a dozen more that are not so obvious, and administrators simply do not have the time to handle the categorization process. Some programs aren’t malicious but might not really be appropriate on a work machine, such as an employee’s copy of a Porsche simulator.
There are some very productive programs that only a select audience might use and that IT might not see very often. And then there are still other programs that might be somewhat beneficial to employees but carry behaviors so dangerous that they aren’t worth the risk of having on a machine. But the work it takes to sift through all of those gray applications and scenarios to categorize them as good or bad quickly becomes onerous.
In our study of customer endpoints, we’ve found that it is not unheard of to find more than 20,000 distinct applications once you consider all of the processes associated with executables under the hood. At that scale, the task of going through all of those applications to build a working whitelist is a monumental first step. And that’s not even the most difficult part. The most difficult part comes the day after the first day, because few applications are ever really static: they need to be updated and patched. So now IT needs to distinguish whether an update is legitimate or not. For example, when something is updating Adobe Reader, is it really Adobe’s own updater at work, or is it a malicious intruder installing something on the computer?
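One pragmatic way to vet such an update is to hash the incoming binary and compare it against digests the vendor has published for that release. The sketch below assumes a hypothetical hash feed (KNOWN_GOOD_HASHES); it illustrates the idea rather than describing any particular product’s mechanism.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist: SHA-256 digests that the vendor (or a
# reputation service) has published for legitimate builds.
KNOWN_GOOD_HASHES = set()

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large binaries never sit in RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def vet_update(path: Path) -> str:
    """Return 'white' if the new binary matches a published digest;
    otherwise 'gray': hold it for investigation, don't auto-trust it."""
    return "white" if sha256_of(path) in KNOWN_GOOD_HASHES else "gray"
```

A matching hash answers only part of the question, which is where the forensic trail comes in.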
It’s a serious problem, and one that can’t really be solved without looking at some sort of forensic information. For any application file, right-clicking and viewing its properties will tell you how many bytes it is and who produced the program. But to really determine whether or not it is ‘white,’ you need much more than that. You need to know how the program arrived on the endpoint, who installed it, which software performed the installation, and so on, until you have the whole history of what happened and can make the right decision.
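To make that concrete, here is a minimal sketch of what such a forensic history might look like as data. Every field name is illustrative rather than drawn from any real product’s schema, and the vendor-matching check is a deliberately crude heuristic.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ProvenanceRecord:
    """Illustrative forensic trail for one executable on one endpoint."""
    file_path: str
    sha256: str
    publisher: Optional[str]           # from the digital signature, if any
    downloaded_by: Optional[str]       # e.g., browser or updater process
    installed_by_user: Optional[str]   # who ran the installation
    installing_process: Optional[str]  # which software performed the install
    first_seen: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def looks_like_vendor_update(rec: ProvenanceRecord, vendor: str) -> bool:
    """Crude heuristic: the signature and the installing process should
    both point back at the same vendor before an 'update' is trusted."""
    return (rec.publisher == vendor
            and rec.installing_process is not None
            and vendor.lower() in rec.installing_process.lower())
```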
Some people say you can automate this decision-making process against a few criteria, for example by automatically putting programs signed by Microsoft onto the whitelist. But, again, we run up against that differential between theory and reality. The reality is that in an enterprise environment, not every program is signed by its vendor, and not every vendor is consistent about signing its programs. For example, Microsoft Word is signed by Microsoft, and Microsoft Notepad is not. If you followed the criterion that an unsigned program can’t run, you would never find Notepad on the whitelist.
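To see why a signature-only rule needs a fallback, consider a sketch along these lines. It is Windows-only, shelling out to PowerShell’s Get-AuthenticodeSignature cmdlet, and it assumes a hash allowlist maintained elsewhere. The point is that “unsigned” lands in gray, to be investigated, rather than in black.

```python
import subprocess

def signature_status(path: str) -> str:
    """Ask PowerShell for a file's Authenticode status (Windows only).
    Typical values include 'Valid', 'NotSigned', and 'HashMismatch'."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         f"(Get-AuthenticodeSignature -FilePath '{path}').Status"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

def auto_verdict(path: str, file_hash: str, allowlist: set) -> str:
    """Signed-and-valid goes white; unsigned goes gray, not black.
    A known-good hash can still clear an unsigned binary like notepad.exe."""
    if signature_status(path) == "Valid":
        return "white"   # real policy would also check *who* signed it
    if file_hash in allowlist:
        return "white"   # unsigned but known good
    return "gray"        # hold for investigation rather than hard-block
```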
If automatic criteria are that imperfect, then IT clearly has to put the whitelist status of many applications on hold in order to investigate. Theoretically, that’s the right thing to do. Practically, it means a lot of people in the organization who have legitimate work to do on legitimate applications have to wait until IT finds the time to investigate. These users start flooding IT’s voicemail box. They demand the programs they need, and they generally make IT’s life miserable.
So, what do you do with these gray applications? You shouldn’t just allow them to run if you don’t know their status, but it is inefficient to block them all.
Well, if you can’t put them into heaven (automatic allow) or hell (automatic block), then why not try purgatory? What I mean by that is: allow the gray programs to run, but limit their access to resources until a decision can be made about their white or black status. The worker can use the application, but the program can’t access the Internet, for instance, or reach certain servers in the organization, or overwrite certain registry keys.
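A simple way to picture that purgatory is as a capability table keyed by an application’s current verdict. The capability names below are illustrative, and actually enforcing them would take OS-level controls (firewall rules, ACLs, registry virtualization) that this sketch only models, not implements.

```python
# Illustrative capability table: what a process may touch, by verdict.
POLICY = {
    "white": {"run", "network", "internal_servers", "registry_write"},
    "gray":  {"run"},   # purgatory: it executes, but little else
    "black": set(),     # never executes
}

def is_allowed(verdict: str, capability: str) -> bool:
    """Check whether a program with this verdict may use a capability."""
    return capability in POLICY.get(verdict, set())

# A gray application can run, but it can't reach the Internet or the registry.
assert is_allowed("gray", "run")
assert not is_allowed("gray", "network")
assert not is_allowed("gray", "registry_write")
```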
Some people might ask: isn’t this what application sandboxing accomplishes? Not really. The thought behind sandboxing is to put an application in a bubble and run it in complete isolation from every other application and the operating system, so that it cannot damage your computer. The difficulty with that approach is that the inconvenient factor of reality rears its head again: when applications run in isolation, things tend to break. The Windows OS is not built for full sandboxing. The simplest example is that an isolated application may be unable to reach a shared DLL, or its writes to a virtualized registry may create other problems. I’m not criticizing sandboxing, but I am saying that there is a difference between what you can do in the lab and what you can do in an enterprise-level working environment.
It’s that disparity between theoretical approaches and real-life operations that makes it necessary to approach whitelisting with pragmatism. Right now the major problem with whitelisting is that it is very expensive in terms of human involvement. You can’t completely eliminate that expense, but you can at least minimize it by keeping user workflows unimpeded while the decision-makers look closely into those 50 shades of gray.
Leonid Shtilman is CEO of Viewfinity.