President Obama has ordered federal employees to monitor the behavioral patterns of coworkers and report suspicious observations to officials. Under this policy a coworker’s failure to cast suspicion on another coworker could result in penalties that include criminal charges.
Seriously! This is the current policy for preventing the next insider threat: pit coworker against coworker!
Well…interestingly enough, they are half-right. Behavior profiling is the only way to identify an insider threat. Typically these “threats” are clever people who conceal a hidden agenda, often in plain sight. If a trusted insider is careful, as both Bradley Manning and Edward Snowden were, then we shouldn’t expect to catch them in the act of stealing, spying, or exfiltrating. They will do their jobs normally, act normally, and do nothing careless that would arouse suspicion. Of course, that’s just on the surface. There will always be little behaviors these people can’t control that are different from “normal” because, let’s face it, they are different from normal coworkers. Insider threats have a secret agenda and the burden of carrying whatever motivates them to embrace that agenda. They might be good fakers, but at some level they are different, and those differences will manifest in behavior – maybe not in big things, but in little things they do every day. If it were possible to monitor their behaviors with a sensitive enough instrument then, theoretically, we could detect that they are different and isolate “suspicious” differences from “normal” differences.
Of course the experts in the field (behavior psychologists and security researchers) have no idea what constitutes suspicious behavior. Heck, in any given group of workers we don’t even know what we should consider normal, let alone suspicious. If you typically print 20 pages per week and suddenly have a week where you print 100 pages, is that evidence you are the insider threat, or were you just assigned a big presentation where lots of copies are needed? If someone sends you a link to a file or website that is unrelated to your normal work, is opening that link or downloading that file evidence you are a threat? Perhaps, but probably 99.999% of the time the answer is no.
Fooled by Randomness
The problem with the Government’s Insider Threat Program is it asks squishy human beings to be the sensor, the profiler, and the alarm. Suddenly coworkers are jotting down notes when a cubemate takes an unusual number of bathroom breaks. Is she the next Edward Snowden or is she pregnant? It’s left to an individual’s imagination to decide what is “normal” vs. “abnormal”. Naturally, people will inject identity and cultural bias, they will show favor to coworkers they like and disfavor to those they dislike, office politics will weigh in, and people will err when attempting to read suspicion into normal events. Nassim Nicholas Taleb’s great book, “Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets”, illustrates so clearly how human beings are easily fooled into seeing causality where there is only correlation, or into misreading the presence of correlation. Behaviors one might think are clear indicators of suspicion, like printing 100 pages when 20 is the norm, are just part of the everyday “jitter” in the normal behavior of individuals, departments, and organizations.
Behavior profiling results in a tremendous number of false positives, i.e. false accusations. The experts don’t know which behaviors to monitor, there is no proper baseline for “normal”, and there is no objective way of discerning whether a novel behavior is threatening or benign. Moreover, because differences and novelty stand out simply by being different, humans are biased toward labeling them as suspicious. Imagine an already clogged government bureaucracy that is further impeded by a flood of false accusations, each requiring a non-trivial investigation to clear the names of good people. Also imagine that the actual insider threat, the next Bradley Manning or Edward Snowden, doesn’t behave in any of the “obvious” ways that would trigger coworker suspicion; their behavioral tells are subtle and go unnoted, allowing them to continue inflicting damage.
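The base-rate arithmetic behind this flood of false positives is easy to sketch. The numbers below are hypothetical assumptions, chosen generously in the detector’s favor, yet even then almost every flag lands on an innocent person:

```python
# Hypothetical numbers illustrating the base-rate problem with
# behavior profiling: even an unrealistically accurate detector
# drowns its one true hit in false accusations.

employees = 10_000          # workforce size (assumed)
true_insiders = 1           # actual threats in that workforce (assumed)
sensitivity = 0.99          # P(flagged | insider) -- generous assumption
false_positive_rate = 0.01  # P(flagged | innocent) -- also generous

true_alarms = true_insiders * sensitivity
false_alarms = (employees - true_insiders) * false_positive_rate

# Probability that a flagged person is actually a threat
precision = true_alarms / (true_alarms + false_alarms)
print(f"false alarms: {false_alarms:.0f}")  # ~100 innocent people flagged
print(f"precision:    {precision:.3f}")     # under 1% of flags are real
```

With a 1-in-10,000 base rate, roughly 100 innocent coworkers get investigated for every real threat caught; now imagine the numbers when the “sensor” is an untrained, biased human rather than a 99%-accurate instrument.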
Analytics to the Rescue
The worst thing we could do, far worse than doing nothing, would be exactly what the administration’s policy requires: using coworkers to monitor each other and report suspicious behavior in a context where underreporting is punishable under the criminal code. There’s absolutely no way that ends well.
If the goal is to solve the problem and mitigate the insider threat then we need to take the human out of the loop. The correct approach is to use digital sensors to collect a wide array of features that are representative of daily activity in a workforce, and then feed those collection streams into an Analytics Process that objectively profiles behavior, separating normal from unusual and classifying behavior as threatening or non-threatening. This is the best way to identify an insider threat, but it is not without its own set of problems.
First, the problem of determining “normal” behavior and separating it from “abnormal/anomalous” behavior. On the surface this appears easily done with simple statistical methods, but on deeper reflection it gets much more complicated when dealing with human behaviors. Even when people do the same job every day, they do it differently each time. There is variation, i.e. “jitter”, in almost every aspect of both human behavior and organizational behavior. For example, is suddenly printing 100 pages when 20 is the norm something we should consider an outlier, or is it ok despite being statistically unusual? We end up needing a technology that can discriminate between “normal anomalies” and “abnormal anomalies”; which, despite the apparent contradiction in terms, is exactly the challenge.
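To see why simple statistics fall short, here is a minimal sketch (my own illustration, not Personam’s method) that scores the printing example against an individual’s own history using a robust z-score (median and median absolute deviation, which resist distortion by past spikes):

```python
# A toy robust outlier score: deviation from an individual's own
# baseline, measured in median-absolute-deviations (MAD) rather
# than standard deviations, so old spikes don't skew the baseline.

import statistics

def robust_z(history, value):
    """How far `value` sits from the individual's typical behavior."""
    med = statistics.median(history)
    mad = statistics.median(abs(x - med) for x in history)
    if mad == 0:
        mad = 1.0  # avoid divide-by-zero on perfectly flat histories
    return (value - med) / mad

# ~20 pages/week is this person's norm, with ordinary weekly jitter
weekly_pages = [18, 22, 20, 25, 19, 21, 30, 17, 23, 20]

print(robust_z(weekly_pages, 100))  # huge score: statistically an outlier
print(robust_z(weekly_pages, 30))   # moderate score: outlier or jitter?
```

The score tells us the 100-page week is statistically extreme, but it cannot tell us whether that week is exfiltration or a big presentation. That judgment, separating “normal anomalies” from “abnormal anomalies”, is precisely what the naive approach lacks.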
Second, the problem of false positives. Because bona fide threats are so rare compared to the number of everyday things people do that are different or unusual, false positives are inevitable. The social system breaks down if we are constantly casting a shadow of suspicion on good workers who express normal everyday behaviors. This has prevented behavior profiling technology from succeeding in the past. Although researchers have invented various ways to crack the normal vs. anomaly problem, the technologies still produce a flood of false positives that make them impractical and unusable.
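One common mitigation (an assumption on my part, not Personam’s disclosed method) is to require an anomaly to persist across several independent observation windows before raising an alert, since benign jitter is usually a one-off while a sustained hidden agenda keeps producing signal:

```python
# Suppress one-off spikes: only alert when the anomaly score stays
# elevated across multiple observation windows.

def should_alert(window_scores, threshold=3.0, min_windows=3):
    """Alert only if at least `min_windows` windows exceed the
    anomaly threshold (hypothetical parameters for illustration)."""
    return sum(1 for s in window_scores if s > threshold) >= min_windows

print(should_alert([0.5, 4.1, 0.7, 0.9]))  # one-off spike: no alert
print(should_alert([3.5, 4.1, 3.8, 4.4]))  # sustained pattern: alert
```

The tradeoff is latency: demanding persistence trades a slower first alert for far fewer false accusations, which is usually the right bargain when each alert triggers a human investigation.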
Third, the problem of scoring the threat itself. Even when a behavior has been correctly profiled as “truly unusual”, it might still be ok. Radically unusual behavior is often a good thing, especially if we want a workforce to innovate, adapt, and progress; otherwise we might as well use robots. It’s no easy task to profile a behavior and determine its potential as a threat. Expert psychologists and security researchers have never found any reliable predictive patterns; there is no “standard model” for a bad actor in a high-trust environment.
Insider Threat Detector (Shameless Plug Time)
At Personam, we have developed, and are currently field testing, the only technology in the world that actually does this in a practical sustainable way, at scale, and in real-time. We use a common type of cyber-security appliance as a sensor to collect features that are representative of a workforce’s daily activities. That sensor drives real-time streams into a unique Analytics Processor that incorporates advanced profiling and unsupervised machine learning to create behavioral profiles and to identify human (and non-human) actors with truly anomalous behavior. One of our most important secret ingredients is our approach to radically reducing false positives, something that makes this type of technology practical for the first time. Another secret ingredient is our solution to the problem of profiling behaviors in real-time, at scale, on incredibly large data streams. The final stage in processing is to analyze the profiling output with a supervised machine learning layer that scores the threat.
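The two-layer shape of this pipeline, an unsupervised profiling stage feeding a supervised scoring stage, can be sketched in miniature. This is a deliberately toy illustration of the architecture described above, with made-up features and parameters, not Personam’s actual code:

```python
# Toy two-stage pipeline: unsupervised anomaly profiling, then a
# supervised layer that maps the anomaly onto a bounded threat scale.

import math

def anomaly_score(actor, population):
    """Unsupervised stage: distance of an actor's feature vector from
    the population centroid, normalized by each feature's range."""
    dims = len(actor)
    centroid = [sum(p[i] for p in population) / len(population)
                for i in range(dims)]
    spread = [max(1e-9, max(p[i] for p in population) - min(p[i] for p in population))
              for i in range(dims)]
    return math.sqrt(sum(((actor[i] - centroid[i]) / spread[i]) ** 2
                         for i in range(dims)))

def threat_score(anomaly, weight=1.0, bias=-3.0):
    """Supervised stage: a logistic layer (weight/bias would be learned
    from labeled incidents) squashes anomaly into a 0..1 threat scale."""
    return 1.0 / (1.0 + math.exp(-(weight * anomaly + bias)))

# Hypothetical features per actor: (logins/day, MB moved, hosts contacted)
population = [(8, 120, 5), (9, 150, 6), (7, 90, 4), (8, 110, 5)]
insider    = (9, 900, 40)  # hypothetical exfiltration-like pattern

print(threat_score(anomaly_score(insider, population)))    # near 1.0
print(threat_score(anomaly_score((8, 120, 5), population)))  # near 0.0
```

The key design point mirrored here is the separation of concerns: the unsupervised layer needs no labeled insider data (which barely exists), while the small supervised layer on top is the only part that must learn from known incidents.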
Our technology has thus far proven effective at finding insider threats, simulated with AI bots, early in their activity cycle and before they would defect or go public. Unlike previous experiments and prototypes that have been developed by others in this area, ours is a practical and fieldable technology that effectively detects insider threats without clogging the bureaucracy with false positives.
Barriers to Adoption
Anytime an employer considers deploying a technology that collects data on the behaviors of its workforce there will be concerns about ethics, privacy, and civil liberties. People don’t like being monitored while they work, particularly if they think a subconscious tic might expose something private or interfere with reputation and career advancement. These are valid concerns that cannot be easily dismissed. Some workforces will be more sensitive than others. For example, people who work in classified environments already expect to be monitored and agree to random search every time they enter a facility; the same isn’t necessarily true for people who work at the insurance company, hospital, brewery, or bank.
We don’t open people’s mail!
Personam developed Insider Threat Detector technology with these concerns in mind. The cyber-based sensor component doesn’t invasively snoop into what people are doing on the computer or the network. We don’t analyze payload data and the contents of private communications remain private. Our technology doesn’t provide security staff any access to private communications or the ability to eavesdrop. In fact, it works just as well on encrypted data streams as it does on unencrypted streams. Our design goal was to be no more intrusive than technologies that are already common in large enterprises.
That leaves false positives. Our technology reduces false positives from a flood to a very manageable trickle, but there will always be some, because the science is based on math, not magic. This technology is intended to provide early warning and alert, not to accuse or indict. We don’t label people as threats; we identify suspicious behavior and score it on a threat scale, where the scale is calibrated so even the highest score is still low. Investigation and forensics are required before anyone can be considered a threat. That said, even this will worry some people; hence, there will be barriers to universally adopting this technology. Regardless of those barriers, however, this is far less intrusive than pitting coworker against coworker under a cloud of universal suspicion, as the current policy does.
Shout Out for a Demo
If you would like a private demonstration of this technology please contact us. We love showing off!