Original URL: https://www.theregister.com/2007/02/20/online_risk_assessment/

Laptop losses and phishing fruit salad

The need for accurate risk assessment

By Dr Neal Krawetz, SecurityFocus

Posted in Channel, 20th February 2007 11:54 GMT

Dr Neal Krawetz takes a look at the numbers behind reports of laptop thefts and phishing attacks, showing inconsistent metrics and the difficulty in using numbers to determine the real level of threat.

Security is about evaluating risks. And who knows more about evaluating risks than insurance companies? For example, the automobile insurance industry invests in studies about driver safety, likelihood of an accident, estimated amount of damage, and the average cost of repair. This is how they measure risk.

In the computer field, risk is based on attributes such as ease of exploitation, required skillset to conduct the exploit, number of impacted systems, estimated loss, and amount of damage. It doesn't make sense to spend $10,000 on a high-end firewall to protect a $2,000 computer containing little intellectual property.
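
The article does not spell out a formula, but one common way to put numbers on that trade-off is the insurance-style notion of annualized loss expectancy: the expected loss per incident multiplied by the expected incidents per year, compared against the yearly cost of the control. The rough Python sketch below uses invented figures purely to illustrate the firewall example; none of the numbers come from the article.

# A back-of-the-envelope version of the firewall trade-off above.
# Every number here is an illustrative assumption, not a figure from the article.
asset_value = 2000          # the $2,000 computer being protected
exposure_factor = 1.0       # assume a successful attack is a total loss
incidents_per_year = 0.5    # assumed likelihood: one expected incident every two years

single_loss_expectancy = asset_value * exposure_factor
annual_loss_expectancy = single_loss_expectancy * incidents_per_year   # $1,000 per year

firewall_cost_per_year = 10000 / 3.0   # $10,000 firewall amortised over three years

print(f"expected loss: ${annual_loss_expectancy:,.0f}/year")
print(f"firewall cost: ${firewall_cost_per_year:,.0f}/year")
print("worth it" if firewall_cost_per_year < annual_loss_expectancy else "not worth it")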

Whether it is car, medical, or liability coverage, insurance companies have very specific metrics. My insurance agent can quickly look up my chances of being in a serious auto accident based on my occupation, distance from work, number of miles driven per year, and type of car - and that's before adding in my driving history. Banks have similar metrics and in-depth understandings of their risks.

However, few computer organisations have equivalent metrics. What are your odds of being attacked? What is the likelihood of a successful attack? What is the estimated loss from an attack? Many of the metrics we use today are based on half-truths and floating numbers - random statistics without context. When we hear that a laptop was stolen and that it contained thousands of pieces of personal information, should we be worried? What is the likelihood of the compromised information actually being used?

Just as fear, uncertainty, and doubt (FUD) can sway opinions about our security, these random statistics also influence our opinion about how safe we are online. But exactly how safe are we?

Playing with numbers

In September 2006, the Washington Post reported that 1,137 government laptops had been stolen since 2001 from the Commerce Department. That's a big number... However, it is a number without context. How many laptops has the Commerce Department had since 2001?

The US Commerce Department employs about 36,000 people. So if we assume that they all have laptops, then 1,137 lost laptops becomes three per cent of their workforce. Now we have context - and it seems like a high number. The percentage increases if we assume that only 10 per cent of the people have laptops (30 per cent lost), and decreases if we count replacement laptops.

For example, few people use laptops longer than three years. Between dead batteries, damage from long-term use, and an inability to run the latest-and-greatest software, laptops get replaced. If we assume a replacement every three years, then every laptop at the Commerce Department would have been replaced twice, tripling the number of laptops that could be stolen. That initial assumption of a three per cent loss rate suddenly drops to one per cent, and the 30 per cent assumption drops to 10 per cent.
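
For the record, the arithmetic behind those percentages fits in a few lines. The Python sketch below takes the 1,137 stolen laptops as given and treats the 36,000 headcount, the 10 per cent laptop-ownership figure, and the three-year replacement cycle as the same assumptions used above.

# Stolen Commerce Department laptops as a percentage of the assumed laptop population.
stolen = 1137            # laptops reported stolen since 2001 (Washington Post figure)
employees = 36000        # approximate Commerce Department headcount

def loss_rate(stolen_count, laptops_in_service):
    return 100.0 * stolen_count / laptops_in_service

print(loss_rate(stolen, employees))              # everyone has one laptop: ~3 per cent
print(loss_rate(stolen, employees * 0.10))       # only 10 per cent have laptops: ~30 per cent
print(loss_rate(stolen, employees * 3))          # replaced every three years: ~1 per cent
print(loss_rate(stolen, employees * 0.10 * 3))   # both assumptions combined: ~10 per cent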

Now, 10 per cent (and even one per cent) sounds like a lot, and it accounts for a significant amount of lost personal information. However, I don't know anyone with a laptop who doesn't have some kind of personal or sensitive information on the hard drive. If a laptop is stolen, then personal or sensitive information will be stolen. The only real question is whether the information is useful to the thief. If the data is obscured or encrypted, then the answer is "maybe not". Remember: most laptops are believed to be stolen for the hardware and not the data.

Retailers, big companies, universities, and non-profit organisations expect "shrinkage" - they know that a percentage of merchandise and equipment will be lost, stolen, or broken. Knowing that every missing laptop contains something of importance, we can then start asking: Is "one per cent" an unexpected loss rate?

Unfortunately, I cannot find any laptop-loss statistics for any big companies - we hear about individual laptop losses, but not the total percentage. However, I have worked for a couple of Fortune 500 companies and universities. Every few years (or every year, depending on the company), they do an inventory of equipment. The inventory is almost always followed by an obligatory email saying, "Does anyone know where the <equipment name> is? We're looking for the one with <tracking number>. We're also looking for <long list>."

Shrinkage. It always seems worse after a large round of layoffs. Some of the missing equipment can be physically big, like computers the size of Volvos - these are usually found. However, many items are small, such as laptops, cameras, projectors, and other portable devices. These small items rarely turn up. And remember: every missing computer contains some kind of sensitive information - the only question is whether the data is valuable to the thief. Yet, these data losses are rarely reported, even in publicly traded companies.

All of this loss adds to the amount of information potentially compromised. However, the general public does not know these numbers and cannot measure this risk.

By the way, according to law enforcement officers at JustStolen.net, one in ten laptops will be stolen. That is 10 per cent, so the Commerce Department doesn't look that bad by comparison.

Phishing for numbers

The percentage of lost items is not the only number regularly taken out of context. For example, consider the question: how much email is spam? In 2005, values from respected experts ranged from 70 per cent to 95 per cent. There was no consensus among experts, but all of the numbers sounded "bad."

Today, some companies no longer report the "percentage of spam" - they only report raw values. The only thing we really know is that it is a big number. But we don't know what the number is (such as 86 per cent) or the accuracy range (a five per cent margin of error?). We actually have better numbers and statistics for American Idol voting than for spam volume.

The same issue arises when we ask where the spam comes from. The general consensus is that today's botnets generate a majority of spam. However, we do not actually know how big the majority is.

This counting problem also shows up in reports on phishing. Every few months the Anti-Phishing Working Group (APWG) releases their Phishing Trends Report. For example, the APWG Sept-Oct 2006 report shows an increase in phishing emails. In fact, their reports over the last few years have shown a nearly steady increase intermixed with a few sharp increases in volume.

The problem with the APWG numbers is that they don't match other sightings. For example, Usenet's "news.admin.net-abuse.sightings" (NANAS) is a high-volume newsgroup where people post their spam messages. NANAS receives thousands of postings per day - approximately 40,000 spam postings just for December 2006. The postings are sample spam emails submitted by people all over the world, and the samples appear to match the distribution of world-wide spam. If you don't have access to hundreds of honeypot accounts for collecting spam and want to do spam research, then NANAS is the next best thing.

Back in 2004, NANAS had literally hundreds of phishing emails posted every day. Phishing was big. In 2005, the volume dropped. By December 2006, there were 10 to 20 phishing emails posted per day. This is a significant drop compared to previous years, and it is a measurable contradiction to the APWG findings.

So what is going on? In 2004, the APWG was growing their membership and bringing in partners. This means that they were increasing their ability to capture and measure phishing emails. The growth at APWG seems to correspond with sharp increases in phishing volume. How should you interpret this? The numbers show an increase in phish sightings by the APWG, but they do not necessarily indicate an increase in phishing; they may only mean that the APWG is getting better at seeing phish, not that there is more of it.

In late 2004, the APWG repeatedly modified their definition of phishing, corresponding with additional increases in volume. Was the increase because there was more of it? Or was it because they expanded their definition to include more? In any case, they do not appear to have revised all of their old numbers to match their new definitions. Thus, new months cannot be directly compared against old months since they measure different things.

What the APWG does not mention is that 2005 heralded a profound change in how most phishing operations work. Rather than sending blast-o-gram phishing emails to everyone and "hoping" that the recipient might have an account at eBay (or Citibank or Amazon or ...), phishers began spear-phishing. In spear-phishing, they use market research (and stolen email lists) to better target potential victims. For example, if you are likely to have a Bank of America account, then you will receive a BofA phish. However, if you are unlikely to have a BofA account, then today you are unlikely to receive a BofA phish (maybe one a month or less, not the one a week you would have seen a year ago).

This trend of directed phishing actually started in 2004, when phishers began to target based on countries. For example, Wells Fargo does not exist in the United Kingdom, so phishers stopped sending Wells Fargo phish to Blueyonder accounts (a UK ISP). Then they started narrowing by state. For example, if you are likely to be in Arizona, then you are more likely to receive an Arizona Credit Union phish. Phishers can guess where you are based on the forums you use. If you post in a Tucson forum or write about Flagstaff and Phoenix, then you might be in Arizona.

Today, there are very few blast-o-gram phishing e-mails. I'm measuring one to two per month per honeypot account. That's down from eight per account per month in November 2005 and 15 in October 2005. Other people measuring phishing volume may have different raw numbers, but should have similar ratios for blast-o-gram phishing. Today, nearly all phishing emails are targeted.

Count what, exactly?

The inability to accurately count phish and compare results with previous months comes down to a basic question: what should you count? For example, on 29 December 2006, NANAS recorded 17 phishing email sightings - some of the NANAS phishing posts were for phish received by the recipient up to three days prior (not everyone posts to NANAS immediately). The 17 postings represented six companies: Bank of America (nine sightings), Fifth Third Bank (three sightings), Halifax (two sightings), Nationwide, Western Union, and PayPal (one sighting each). Yet, many of these sightings actually account for what is likely the same mass mailing. For example, both Halifax sightings used the same email content and the same phishing server. This is one mass mailing counted twice. The 17 sightings likely represent eight distinct mass mailings (three for Bank of America).
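
A minimal sketch of that de-duplication, using hypothetical data and field names: group each sighting by the spoofed brand, the phishing server it points at, and a hash of the message body, then count groups rather than postings.

# Collapse individual phishing sightings into distinct mass mailings.
# The records below are invented; real sightings would come from NANAS posts.
import hashlib
from collections import defaultdict

sightings = [
    {"brand": "Halifax", "server": "203.0.113.7",  "body": "Dear customer, please verify..."},
    {"brand": "Halifax", "server": "203.0.113.7",  "body": "Dear customer, please verify..."},
    {"brand": "PayPal",  "server": "198.51.100.9", "body": "Your account has been limited..."},
]

def mailing_key(sighting):
    body_hash = hashlib.sha1(sighting["body"].encode()).hexdigest()
    return (sighting["brand"], sighting["server"], body_hash)

mailings = defaultdict(int)
for s in sightings:
    mailings[mailing_key(s)] += 1

print(len(sightings), "sightings")                # 3 sightings...
print(len(mailings), "distinct mass mailings")    # ...but only 2 mass mailings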

Like spammers, phishers do not send out one email; they send hundreds of thousands of emails. When groups like the APWG, Websense, Ironport, and even the Federal Trade Commission release numbers about phishing and spam, you need to ask yourself: are they counting the raw number of emails, or the number of mass mailings?

As an aside, note that in the APWG Sept-Oct 2006 report, they state that they measure individual phishing campaigns based on unique subject lines. This does not take into account mass mailing tools that randomly modify the subject of each email. This method also incorrectly assumes that different mass mailings reusing a common subject (e.g., "Security Measures") are actually the same mass mailing.
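
A tiny illustration of why unique subject lines are a fragile yardstick (the subjects here are made up): a mailing tool that appends a random token turns one campaign into several, while two unrelated mailings that reuse a stock subject collapse into one.

# Counting "campaigns" by unique subject lines cuts both ways.
import random

# One mass mailing whose tool appends a random token to every subject.
one_mailing = [f"Security Measures #{random.randint(1000, 9999)}" for _ in range(5)]

# Two unrelated mass mailings that happen to reuse the same stock subject.
two_mailings = ["Security Measures", "Security Measures"]

print(len(set(one_mailing)))    # usually 5: one mailing counted as five campaigns
print(len(set(two_mailings)))   # 1: two mailings counted as a single campaign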

Consider this alternate example: In 2006, two companies lost laptops that contained personal information. One company lost ten laptops, while the other lost six. Which is worse? At face value, ten is worse than six.

However, I can add in additional information. The ten laptops were all stolen at once, while the six were stolen over three separate occasions. Just based on this information, which is worse? Six, because it shows an ongoing pattern compared to one big mistake.

Note that I am intentionally ignoring the data loss in this example - HCA Inc compromised 7,000 people when 10 laptops were stolen, while Ernst & Young compromised hundreds of thousands of people across three separate incidents. In this case, yes - losing six laptops was worse than losing ten.

This example is analogous to the reporting of phishing and spam trends. Are 800 phish sightings bad? How many mass mailings do they represent, and how many victims are estimated? What percentage of the total do those 800 sightings represent? Just as raw values give a sense of scope, the size of each incident, the number of incidents, and the estimated effectiveness of each mailing campaign also provide valuable information needed to assess risk.

Summary

With the explosive growth in identity theft, the increase in botnets for spam and network attacks, and the rise in zero-day exploits, now more than ever we need to be able to quickly and effectively evaluate risks. Unfortunately, we are only beginning to see metrics, and they are not consistent. Rather than being shown threat levels, we have floating numbers without any context, respected experts citing vastly different values, and no means to compare threats. Apples and oranges make for a good fruit salad, but they do not help risk assessment.

This article originally appeared in Security Focus.

Copyright © 2007, SecurityFocus