
Congrats on keeping out the hackers. Now, you've taken care of rogue insiders, right? Hello?

And you're doing it in real time, yes? Is that a no?

Comment It's exasperating how each high-profile computer security breach reveals similar patterns of failure, no matter the organization involved.

One recurring theme is that IT departments find it hard to stop employees going rogue, or spilling their login details into the wrong hands, ultimately leading to damage or to sensitive data leaking out. Now why is that?

Insider attacks are difficult to detect and thwart because businesses prefer their staff to access networks and be productive without security barriers, false positives, and complexity getting in the way and slowing them down.

However, lacking effective controls, organizations discover a network breach the hard way when customer data turns up on the dark web, or a sample is emailed to the boss as part of an extortion threat.

It’s an issue that lights up like a beacon in Verizon’s most recent Data Breach Investigations Report (DBIR). This dossier covers 2,216 reported network breaches, and 53,000 security incidents across 65 countries, in the 12 months to October 2017 – and concluded that 28 per cent of the breaches were classified as involving insiders in one way or another.

Intriguingly, while cyber-espionage is often seen as a bigger menace, it accounted for only 310 security incidents leading to 151 known breaches. This stands in striking contrast to privilege misuse, defined as “any unapproved or malicious use of organizational resources,” which accounted for 212 breaches and 10,556 incidents – almost one in five of the total recorded.

The breakdown for the health sector in Verizon’s Protected Health Information Data Breach Report (PHIDBR) is even more stark, with more than half of all 1,368 network breaches traced to insiders. Where motivation could be discerned, money topped the list, but 94 incidents were blamed on “fun and curiosity,” a reference to employees peeking at the medical records of famous people or relatives and friends.

Groundhog Day

In the past, insiders were thought of as being employees sitting on the organization side of a firewall. This perspective has become almost meaningless. Today’s networks are accessed by numerous partners and contractors, who count as insiders despite being outside the network, as well as a mass of remote users. What matters is where a user’s credentials are, not where the user is.

Reading dossiers on IT security blunders is a depressing pastime, but it offers some important lessons.

The first is that focusing cybersecurity defenses solely on external actors is a flawed strategy. The second is that breach reports, and the failures that led to the intrusions, tell us about the past, not what might be happening in the present. Many of the companies whose hacker invasions made it to Verizon’s pages had probably been doing things the same way for years or even decades. Months or years later, many organizations will have no clear idea what role, if any, an insider played in a security breach.

Monitoring ‘exfiltraitors’

The logical answer to misbehaving insiders is user activity monitoring (UAM) and/or user and entity behavior analytics (UEBA), but what is it that should be monitored? Traffic is one possibility. All traitorous insiders have to get their stolen data out of the network at some point, so defenders inspect traffic for outbound connections, the creation of new and possibly unauthorized accounts, unusual emails and database searches, large print jobs, and suspicious use of USB drives – any one of these, or a combination, might be tied to accessing and exfiltrating valuable data.
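To make that concrete, here is a minimal sketch of what such crude indicator-matching might look like against an event log. Everything in it – the event fields, the categories, the thresholds – is an illustrative assumption, not any vendor's actual rule set:

```python
# Hypothetical sketch: flag events matching crude exfiltration indicators.
# Field names (user, kind, bytes_out, hour) and thresholds are illustrative.

SUSPICIOUS_KINDS = {"usb_write", "new_account", "bulk_print", "db_export"}
LARGE_TRANSFER_BYTES = 500 * 1024 * 1024  # 500 MB, an arbitrary cutoff

def flag_event(event: dict) -> list[str]:
    """Return the list of crude indicators an event trips, if any."""
    reasons = []
    if event.get("kind") in SUSPICIOUS_KINDS:
        reasons.append(f"suspicious activity: {event['kind']}")
    if event.get("bytes_out", 0) > LARGE_TRANSFER_BYTES:
        reasons.append("unusually large outbound transfer")
    if event.get("hour", 12) < 6:  # activity in the small hours
        reasons.append("off-hours activity")
    return reasons

events = [
    {"user": "alice", "kind": "db_export", "bytes_out": 700_000_000, "hour": 3},
    {"user": "bob", "kind": "email", "bytes_out": 20_000, "hour": 10},
]
for e in events:
    for reason in flag_event(e):
        print(f"{e['user']}: {reason}")
```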

In practice, spotting a skilled insider adversary in this way can be like hunting for a needle in a field of haystacks. There are simply too many layers and protocols to be analyzed, too many security logs to check, and not enough time to translate this kind of monitoring into real-time detection. At the very least it can lead to sprawl, with organizations deploying layers of tech such as data loss prevention (DLP) to keep a lid on insider risks. Arguably, crude user profiling would be quicker: picking out users based on risk and analyzing their computer activity for bad behavior.

Standing back, it’s not hard to understand why user behavior analytics (UBA) and its big brother, user and entity behavioral analytics (UEBA), have started to look like one path out of the morass. Instead of simply measuring an insider or insider account against a static series of rules, UEBA asks whether that user is behaving as they normally do or departing from that pattern. Getting to the point of understanding a "normal" state takes time, of course, but once in place it offers the chance of detecting anomalous deviations more quickly.
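For illustration, the baselining idea can be reduced to a toy sketch: learn a per-user "normal" for a single metric and flag large departures from it. Real UEBA systems model many signals at once; the metric, history, and cutoff below are assumptions made up for the example:

```python
# Toy illustration of baselining: learn each user's "normal" for one metric
# (daily megabytes sent out of the network) and flag large deviations.
from statistics import mean, stdev

def build_baseline(history: list[float]) -> tuple[float, float]:
    """Summarize a user's historical metric as (mean, stdev)."""
    return mean(history), stdev(history)

def is_anomalous(value: float, baseline: tuple[float, float],
                 z_cutoff: float = 3.0) -> bool:
    """Flag values more than z_cutoff standard deviations from the mean."""
    mu, sigma = baseline
    return sigma > 0 and abs(value - mu) / sigma > z_cutoff

history = [120.0, 135.0, 110.0, 128.0, 140.0, 125.0]  # MB per day, invented
baseline = build_baseline(history)
print(is_anomalous(130.0, baseline))   # False: within this user's normal range
print(is_anomalous(2048.0, baseline))  # True: a 2 GB day stands out
```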

Real-time detection v insiders

It’s a truism that frontline security systems, including UEBA, set out to detect threats in real (or near real) time. The complex part has always been designing what happens when an anomalous event is detected, how much of any subsequent action should be automated, and at what point humans need to step in.

This is tough enough when detecting external threats but throws up even bigger challenges when pitted against insiders. Attacks arriving from the outside generate network traffic and traditional indicators of compromise (malware contacting unusual domains from a PC, for instance), none of which are present when insiders do something risky or go rogue. The conceptual strength of UEBA is that it makes no distinction between internal and external – what counts is what is defined as a "normal" state for that user and user account. An event is either anomalous, non-anomalous, or somewhere in between.


Making this work requires an organization to first identify its valuable assets and create a baseline of access to them for every user. This means monitoring how data is copied to and from different points in the network, and especially out of the network, as has become common when integrating with cloud services. Numerous indicators must be assessed, including the privilege level of the user, the size of a transfer, its time and place, the destination IP of the data, and even failed login attempts.
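As a rough sketch of how those indicators might be folded into something a scoring model can consume, consider the following; the field names and weights are hypothetical, chosen only to illustrate the shape of the data, and a real system would learn rather than hand-pick them:

```python
# Hypothetical sketch: fold the indicators above into one per-event feature
# record. All field names and weights are illustrative, not a vendor schema.
from dataclasses import dataclass

@dataclass
class AccessFeatures:
    user: str
    privilege_level: int      # e.g. 0 = standard user, 2 = domain admin
    bytes_transferred: int
    hour_of_day: int
    source_location: str      # "office", "home", "unknown"
    dest_ip: str
    failed_logins_24h: int

def risk_weight(f: AccessFeatures) -> float:
    """Naive weighted score; a real system would learn these weights."""
    score = 0.0
    score += f.privilege_level * 1.5
    score += f.bytes_transferred / 1e9          # scale: ~1 point per GB
    score += 2.0 if f.hour_of_day < 6 else 0.0  # off-hours access
    score += 1.0 if f.source_location == "unknown" else 0.0
    score += 0.5 * f.failed_logins_24h
    return score

f = AccessFeatures("carol", 2, 3_000_000_000, 4, "unknown", "203.0.113.9", 3)
print(round(risk_weight(f), 2))  # 10.5 on this toy scale
```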

In UEBA, this monitoring will also be correlated to devices and user accounts (admins having more than one) and compared to the history of behavior associated with these. UEBA’s claim is that by overlaying machine intelligence built specifically to spot subtle changes in the patterns of connectivity and behavior, an alert can be generated in real time (usually defined as anything from seconds to minutes) so that a security operations center team member can review and intervene.
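A bare-bones sketch of that correlation step might look like this: events are keyed to a (user, device) entity, each entity keeps its own rolling history, and an alert fires when a score departs sharply from that entity's own past. The window size and alert ratio below are invented for illustration:

```python
# Sketch of the correlation step: group scored events by (user, device)
# entity and page the SOC when a score departs from that entity's norm.
from collections import defaultdict, deque

WINDOW = 100          # events of history kept per entity
ALERT_RATIO = 4.0     # alert when score is 4x the entity's rolling average

histories: dict[tuple[str, str], deque] = defaultdict(lambda: deque(maxlen=WINDOW))

def ingest(user: str, device: str, score: float) -> bool:
    """Return True if this event should raise an alert."""
    past = histories[(user, device)]
    alert = bool(past) and score > ALERT_RATIO * (sum(past) / len(past))
    past.append(score)
    return alert

for s in [1.0, 1.2, 0.9, 1.1]:
    ingest("dave", "laptop-42", s)
print(ingest("dave", "laptop-42", 9.0))  # True: far above this entity's norm
```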

The precise chain of interventions varies from system to system – some will intervene to stop data from being copied unless the user elevates his or her privileges – but human intervention is usually a priority. UEBA has been characterized as a three-dimensional way of understanding security monitoring but it is not yet an automated system for stopping bad things from happening without the need for trained eyes.

There’s always a ‘but’

The appeal of UEBA is that deploying it doesn’t require binning existing security technologies, which serve as sensors feeding data into its big data back end. The question is how organizations differentiate one UEBA from another.

All work along similar-sounding principles, and yet, up close, not all might turn out to be the same thing. The first consideration is that a UEBA should handle the "entity" piece of the puzzle, monitoring things like devices, applications, servers, IP addresses, and even data itself – essential when putting what users are doing into context. Another is the big data itself: does the architecture underpinning this part of the system stand up to technical scrutiny? A UEBA system should do as much of this as possible out of the box, without the need for complex customizations.

Arguably, the biggest challenge of all is that UEBA should be something that a network’s security teams understand. Given how much of machine-driven UEBA depends on specialized, hard-to-find skills, this can’t be taken for granted, especially when asking vendors to explain the basis for the opaque algorithms they use to conduct baselining. Containing the insider risk using a UEBA should always be about a system that delivers on its promises today, not at some idealized point in the future. ®
