
Personam ITD Case Study – IP Law Firm

INSIDER THREAT DETECTION

Detecting a conspiracy

PERSONAM detected a conspiracy to steal client data from an overseas office of an international IP law firm. The four employees involved included one of the firm’s senior partners.


Download the Case Study


Why the Government Insider Threat Program Will Fail

President Obama has ordered federal employees to monitor the behavioral patterns of their coworkers and report suspicious observations to officials. Under this policy, an employee’s failure to report a coworker’s suspicious behavior could result in penalties up to and including criminal charges.

Seriously! This is the current policy for preventing the next insider threat: pit coworker against coworker!

Well…interestingly enough, they are half-right. Behavior profiling is the only way to identify an insider threat. Typically these “threats” are clever people who conceal a hidden agenda, often in plain sight. If a trusted insider is careful, as both Bradley Manning and Edward Snowden were, then we shouldn’t expect to catch them in the act of stealing, spying, or exfiltrating. They will do their jobs normally, act normally, and do nothing careless that would arouse suspicion.

Of course, that’s just on the surface. There will always be little behaviors these people can’t control that differ from “normal” because, let’s face it, they are different from normal coworkers. Insider threats have a secret agenda and carry the burden of whatever motivates them to embrace that agenda. They might be good fakers, but at some level they are different, and those differences will manifest in behavior – maybe not in big things, but in the little things they do every day. If it were possible to monitor their behaviors with a sensitive enough instrument then, theoretically, we could detect that they are different and isolate the “suspicious” differences from the “normal” ones.

Of course, the experts in the field (behavioral psychologists and security researchers) have no idea what constitutes suspicious behavior. Heck, in any given group of workers we don’t even know what we should consider normal, let alone suspicious. If you typically print 20 pages per week and suddenly have a week where you print 100 pages, is that evidence you are the insider threat, or were you just assigned a big presentation where lots of copies are needed? If someone sends you a link to a file or website that is unrelated to your normal work, is opening that link or downloading that file evidence you are a threat? Perhaps, but probably 99.999% of the time the answer is no.

Fooled by Randomness

The problem with the Government’s Insider Threat Program is that it asks squishy human beings to be the sensor, the profiler, and the alarm. Suddenly coworkers are jotting down notes when a cubemate takes an unusual number of bathroom breaks. Is she the next Edward Snowden, or is she pregnant? It’s left to an individual’s imagination to decide what is “normal” vs. “abnormal”. Naturally, people will inject identity and cultural bias, they will show favor to coworkers they like and disfavor to those they dislike, office politics will weigh in, and people will err when attempting to read suspicion into normal events. Nassim Nicholas Taleb’s great book, “Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets,” illustrates clearly how easily human beings are fooled into seeing causality where there is only correlation, or into misreading whether correlation is present at all. Behaviors one might think are clear indicators of suspicion, like printing 100 pages when 20 is the norm, are just part of the everyday “jitter” in the normal behavior of individuals, departments, and organizations.

False Positives

Behavior profiling results in a tremendous number of false positives, i.e. false accusations. The experts don’t know which behaviors to monitor, there is no proper baseline for “normal”, and there is no objective way of discerning whether a novel behavior is threatening or benign. Moreover, because differences and novelty stand out simply by being different, humans are biased toward labeling them as suspicious. Imagine an already clogged government bureaucracy further impeded by a flood of false accusations, each requiring a non-trivial investigation to clear the names of good people. Now imagine that the actual insider threat, the next Bradley Manning or Edward Snowden, doesn’t behave in any of the “obvious” ways that would trigger coworker suspicion; their behavioral signals are subtle and go unnoticed, allowing them to keep inflicting damage.

Analytics to the Rescue

The worst thing we could do, far worse than doing nothing, would be what the administration’s policy requires: use coworkers to monitor each other and report suspicious behavior in a context where underreporting is punishable under the criminal code. There’s absolutely no way that ends well.

If the goal is to solve the problem and mitigate the insider threat, then we need to take the human out of the loop. The correct approach is to use digital sensors to collect a wide array of features representative of daily activity in a workforce, and then feed those collection streams into an analytics process that objectively profiles behavior, separating normal behavior from unusual behavior and classifying it as non-threatening or threatening. This is the best way to identify an insider threat, but it is not without its own set of problems.

First, there is the problem of determining “normal” behavior and separating it from “abnormal” or anomalous behavior. On the surface this appears easily done with simple statistical methods, but on deeper reflection it gets much more complicated when dealing with human behavior. Even when we do the same job every day, we do it differently each time. There is variation, i.e. “jitter”, in almost every aspect of both human behavior and organizational behavior. For example, is suddenly printing 100 pages when 20 is the norm something we should consider an outlier, or is it fine despite being statistically unusual? We end up needing a technology that can discriminate between “normal anomalies” and “abnormal anomalies” – which, despite the apparent contradiction in terms, is exactly the challenge.
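To make the “normal anomaly” problem concrete, here is a minimal sketch, with invented numbers, of how one might score the printing example against a user’s own history. A robust statistic flags the 100-page week as extreme, but the score alone still cannot tell a big presentation apart from a threat:

```python
# Minimal sketch of scoring one behavior against a personal baseline.
# All numbers are invented for illustration.
import statistics

def robust_zscore(history, value):
    """Score a new observation against a user's own history using the
    median and median absolute deviation (MAD), which are harder to
    fool with everyday jitter than the mean and standard deviation."""
    med = statistics.median(history)
    mad = statistics.median(abs(x - med) for x in history) or 1.0
    return (value - med) / (1.4826 * mad)  # 1.4826: consistency factor

weekly_pages = [18, 22, 19, 25, 20, 21, 24, 17]  # a typical "20 pages" user
print(robust_zscore(weekly_pages, 100))  # ~27: statistically extreme
print(robust_zscore(weekly_pages, 28))   # ~2.5: ordinary jitter
```

The gap between “statistically extreme” and “actually suspicious” is exactly where simple methods stop helping: the math can rank the 100-page week as an outlier, but it cannot say whether the cause was a big presentation or a threat.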

Second, there is the problem of false positives. Because bona fide threats are so rare compared to the number of everyday things people do that are different or unusual, false positives are inevitable. The social system breaks down if we are constantly casting a shadow of suspicion on good workers expressing normal everyday behaviors. This is what has prevented behavior-profiling technology from succeeding in the past: although researchers have invented various ways to crack the normal-vs-anomalous problem, the technologies still produce a flood of false positives that makes them impractical and unusable.
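A quick back-of-envelope calculation shows why the flood is inevitable. Assume, purely for illustration, one genuine threat per 10,000 employees and a detector with a 95% detection rate and a mere 1% false-alarm rate:

```python
# Base-rate arithmetic with assumed (not measured) numbers.
prevalence = 1 / 10_000   # fraction of employees who are real threats
tpr = 0.95                # true positive rate: threats we catch
fpr = 0.01                # false positive rate: innocents we flag

true_alarms = prevalence * tpr
false_alarms = (1 - prevalence) * fpr
precision = true_alarms / (true_alarms + false_alarms)
print(f"Fraction of alarms that are real threats: {precision:.1%}")
# ~0.9% -- roughly 99 of every 100 alarms accuse an innocent worker.
```

Even with those generous assumptions, about 99 of every 100 alarms would point at an innocent worker – precisely the social breakdown described above.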

Third, there is the problem of scoring the threat itself. Even when a behavior has been correctly profiled as “truly unusual”, it might still be fine. Radically unusual behavior is often a good thing, especially if we want a workforce to innovate, adapt, and progress; otherwise we might as well use robots. It is no easy task to profile a behavior and determine its potential as a threat. Expert psychologists and security researchers have never found reliable predictive patterns; there is no “standard model” for a bad actor in a high-trust environment.

Insider Threat Detector (Shameless Plug Time)

At Personam, we have developed, and are currently field testing, the only technology in the world that actually does this in a practical, sustainable way, at scale, and in real time. We use a common type of cyber-security appliance as a sensor to collect features representative of a workforce’s daily activities. That sensor drives real-time streams into a unique Analytics Processor that incorporates advanced profiling and unsupervised machine learning to create behavioral profiles and to identify human (and non-human) actors with truly anomalous behavior. One of our most important secret ingredients is our approach to radically reducing false positives, which makes this type of technology practical for the first time. Another is our solution to the problem of profiling behaviors in real time, at scale, on very large data streams. The final stage of processing analyzes the profiling output with a supervised machine-learning layer that scores the threat.
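To illustrate only the general shape of such a pipeline – this sketch is not Personam’s implementation, and every feature, model, and label below is a stand-in – here is how an unsupervised profiling stage might feed a supervised scoring layer using off-the-shelf tools:

```python
# Generic two-stage sketch: unsupervised profiling feeding a supervised
# threat-scoring layer. NOT Personam's system; all data is simulated.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stage 1: unsupervised profiling over per-entity activity features
# (stand-ins for things like bytes out, distinct destinations, hours).
activity = rng.normal(size=(5000, 8))
profiler = IsolationForest(random_state=0).fit(activity)
anomaly_score = -profiler.score_samples(activity)  # higher = more unusual

# Stage 2: supervised layer that scores how threatening an anomaly is,
# trained on past investigation outcomes (labels simulated here).
labels = rng.random(5000) < 0.01          # ~1% "confirmed" past cases
features = np.column_stack([anomaly_score, activity[:, :2]])
scorer = LogisticRegression().fit(features, labels)
threat_score = scorer.predict_proba(features)[:, 1]
print("Highest-scored entity:", threat_score.argmax())
```

The design point the sketch captures is the separation of concerns: the unsupervised stage asks “is this unusual?” with no labels at all, while the supervised stage asks the very different question, “is this unusual thing threatening?”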

Our technology has thus far proven effective at finding insider threats, simulated with AI bots, early in their activity cycle and before they would defect or go public. Unlike previous experiments and prototypes developed by others in this area, ours is a practical, fieldable technology that effectively detects insider threats without clogging the bureaucracy with false positives.


Barriers to Adoption

Anytime an employer considers deploying a technology that collects data on the behaviors of its workforce, there will be concerns about ethics, privacy, and civil liberties. People don’t like being monitored while they work, particularly if they think a subconscious tic might expose something private or interfere with their reputation and career advancement. These are valid concerns that cannot be easily dismissed. Some workforces will be more sensitive than others. For example, people who work in classified environments already expect to be monitored and agree to random search every time they enter a facility; the same isn’t necessarily true for people who work at an insurance company, hospital, brewery, or bank.

We don’t open people’s mail!

Personam developed the Insider Threat Detector technology with these concerns in mind. The cyber-based sensor component doesn’t invasively snoop into what people are doing on the computer or the network. We don’t analyze payload data, and the contents of private communications remain private. Our technology doesn’t give security staff any access to private communications or any ability to eavesdrop. In fact, it works just as well on encrypted data streams as it does on unencrypted ones. Our design goal was to be no more intrusive than technologies already common in large enterprises.
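To illustrate why payload access isn’t needed: behavioral features can be computed from flow metadata alone – sizes, timing, endpoints – which encryption leaves untouched. The field names below are generic stand-ins, not Personam’s schema:

```python
# Sketch: per-entity features from flow metadata only, no payload read.
# Field names and values are invented for illustration.
from dataclasses import dataclass

@dataclass
class Flow:
    src: str
    dst: str
    bytes_out: int
    start_hour: int  # 0-23, local time

def features(flows):
    """Behavioral features computed without touching any payload."""
    total = sum(f.bytes_out for f in flows)
    off_hours = sum(f.bytes_out for f in flows
                    if f.start_hour < 6 or f.start_hour > 20)
    return {
        "bytes_out": total,
        "distinct_destinations": len({f.dst for f in flows}),
        "off_hours_ratio": off_hours / total if total else 0.0,
    }

flows = [Flow("10.0.0.5", "203.0.113.9", 120_000, 2),
         Flow("10.0.0.5", "198.51.100.7", 4_000, 14)]
print(features(flows))
```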

That leaves false positives. Our technology reduces false positives from a flood to a very manageable trickle, but there will always be some, because the science is based on math, not magic. This technology is intended to provide early warning and alerting, not to accuse or indict. We don’t label people as threats; we identify suspicious behavior and score it on a threat scale – one calibrated so that even the highest score is still low. Investigation and forensics are required before anyone can be considered a threat. That said, even this will worry some people – hence, there will be barriers to universal adoption of this technology. Regardless of those barriers, however, this approach is far less intrusive than pitting coworker against coworker under a cloud of universal suspicion, as the current policy does.

Shout Out for a Demo

If you would like a private demonstration of this technology, please contact us. We love showing off!



The world was awed two years ago when IBM’s Watson defeated Jeopardy! champions Brad Rutter and Ken Jennings. Watson’s brilliant victory reintroduced the potential of machine learning to the public. Ideas flowed, and the technology is now being applied practically in healthcare, finance, and education. Watson’s success lies in its ability to emulate human learning: it formulates hypotheses using models built from training questions and texts.

Three years ago, Army Private First Class Bradley Manning leaked massive amounts of classified information to WikiLeaks and brought the significance of data breaches to public awareness. In response to this and several other highly publicized data breaches, government committees and task forces established recommendations and policies and invested heavily in cyber technologies to prevent such an event from recurring. Surely, we thought, if anyone had the motivation and resources to get a handle on the insider threat problem, it was the government. But Edward Snowden, who caused the recent NSA breach, has made it painfully obvious how impotent that response was.

Lest we assume this is just a government problem, abundant evidence shows how vulnerable commercial industry is to the insider. We are inundated with articles describing how malicious insiders have cost private enterprise billions of dollars in lost revenue, so why has no one offered a plausible solution?

The insider threat remains an unmitigated problem for most organizations, not because the technologies do not exist, but because the cyber defense industry is still attempting to discover the threat using a rules-based paradigm. Virtually all cyber defense solutions on the market today apply explicit rules, whether they are antivirus programs, firewalls with access control lists, deep packet inspectors, or protocol analyzers. This paradigm is very effective in defending against known malware and network exploits, but it fails utterly when confronted with new attacks (i.e. “zero-days”) or the surreptitious insider.
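The limitation is easy to see in miniature. In this toy example (the hashes are invented), a signature check is exact on what it enumerates and blind to everything else:

```python
# Toy signature-based check: catches only what its rules enumerate.
# Hash values here are invented placeholders.
KNOWN_BAD_HASHES = {"9a1f77c2", "c42d0be1"}  # hypothetical signature list

def signature_check(file_hash: str) -> bool:
    """Return True if the file matches a known-bad signature."""
    return file_hash in KNOWN_BAD_HASHES

print(signature_check("9a1f77c2"))  # True: known malware is caught
print(signature_check("e77b3a90"))  # False: a zero-day sails through
```

An insider who uses only the tools and access they are authorized to have never trips a rule like this, because no rule describes them.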

In contrast, IBM acknowledged that it was impossible to build a winning system by enumerating all possible questions, so it designed Watson to generalize: to learn patterns from previous questions and use those models to hypothesize answers to novel questions. The hypothesis with the highest confidence was selected as the answer.

Like Watson, an effective technology for detecting the insider must adaptively learn historical network patterns and then use those patterns to automatically discover anomalous activity. Such anomalous traffic is symptomatic of unauthorized data collection and exfiltration.

Inspired by the WikiLeaks incident, Sphere’s R&D team has investigated machine learning algorithms that construct historical models by grouping users according to their network fingerprints. For example, without any rules or specifications, the algorithms learn that bookkeeping applications transmit a distinctive pattern that lets accountants be grouped together, while HR professionals are grouped by the recruiting sites they visit. These behavioral models generalize normal activity and can be used as templates to detect outliers. While all users generate some outliers, suspicious users deviate significantly from their cohorts – such as a network administrator who accesses the HR department’s personnel records. Like Watson’s models, these allow the system to form hypotheses.
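A hedged sketch of what such cohort-based profiling could look like: cluster entities by their network fingerprints, then measure how far a given entity sits from its nearest cohort. The algorithms, features, and numbers below are invented for illustration, not Sphere’s actual method:

```python
# Sketch of cohort-based profiling. Feature columns are stand-ins
# (e.g., visit rates to a bookkeeping app, recruiting sites,
# admin tools, HR records).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
fingerprints = np.vstack([
    rng.normal([5, 0, 0, 0], 0.5, size=(50, 4)),   # "accountants"
    rng.normal([0, 5, 0, 1], 0.5, size=(50, 4)),   # "HR professionals"
])
cohorts = KMeans(n_clusters=2, n_init=10, random_state=0).fit(fingerprints)

def cohort_distance(x):
    """Distance from an entity's activity to its nearest cohort center."""
    return np.linalg.norm(cohorts.cluster_centers_ - x, axis=1).min()

# Dynamic, data-driven threshold: the 99th percentile of distances seen
# in historical traffic. It moves as behavior shifts, so there is no
# fixed rule for an insider to learn and stay safely under.
threshold = np.quantile([cohort_distance(x) for x in fingerprints], 0.99)

admin = np.array([0.0, 0.0, 4.0, 5.0])  # "admin" reading HR records
print(cohort_distance(fingerprints[0]) > threshold)  # False: conforms
print(cohort_distance(admin) > threshold)            # True: outlier
```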

Applied to cyber security, every time an entity accesses the network, the algorithms hypothesize whether the activity conforms to its model; if it does not, that activity is labeled an outlier. Because these methods use statistical confidence to dynamically balance internal thresholds on network activities (e.g., sources and destinations, direction and amount of data transferred, times, protocols), they are extremely hard for a malicious insider to outsmart. The simple fact that the system does not reveal its thresholds can have a significant deterrent effect.

A paradigm shift in cyber technologies is happening now. Cyber security professionals agree that preventing data breaches by a malicious insider is a difficult task, and the past suggests that the next major breach will not be detected by existing rules-driven cyber defense solutions. Next-generation cyber security technology developers must seek inspiration from IBM’s Watson and other successful implementations of machine learning before we can hope to prevail against the insider threat.