Monthly Archives: April 2010

That False Positive: the Real Positive

If you’re expecting me to try to capitalize on the misfortunes of McAfee (and, more so, of its customers) because I work for another vendor, boy, are you looking at the wrong blog. This is yet another case of “there but for the grace of God…”: no vendor is immune to false positives, and while we would all like to achieve the goal of 100% detection with 0% false positives, it isn’t achievable: not with anti-virus, not with any of the panaceas du jour already being touted in some quarters, not with whatever operating system you happen to prefer to Microsoft’s. That’s a technical issue, and no amount of shouting “this shouldn’t happen” and suggesting that red-hot pokers be thrust into McAfee’s collective eyes will change it.

Any honest researcher will acknowledge that there is a constant, unavoidable trickle of false positives that mostly go unnoticed. Unfortunately, every so often a false positive does enough damage to cause a PR disaster. Most of us have been there, and those who haven’t surely will be.

That doesn’t mean I want to trivialize the impact of an event like this on the customers affected by it. But the measure of a vendor’s worth isn’t whether it generates a false positive, or whether it offers a convincing auto-da-fé before being burned at the stake on a fire fed by its own product packaging: it’s the positive act of remediation with which it responds.

There is plenty of comment around demonstrating the impact of this FP on McAfee customers, and while I suspect that some of it will be seized on with the intention of proving that the AV industry is staffed with incompetents and worse – it isn’t (in general)! – that doesn’t mean that the community at large doesn’t have a right to know what happened.

What I’m not seeing is acknowledgement that McAfee have made strenuous attempts to offer help to the people and companies affected by this issue, or pointers to those attempts. The company did what any responsible company would do: it withdrew the update as soon as it became aware of the problem, and generated an amended update as quickly as possible. I don’t see corporate spin here: I see a company concerned with limiting the damage to its customers, not just to its own reputation.

So here are a couple of pointers and some relevant extracts.

http://us.mcafee.com/en-us/landingpages/npdatupdate.asp?cid=77151 offers a quick guide to remediation for consumers.

http://siblog.mcafee.com/support/mcafee-response-on-current-false-positive-issue/ 

Corporate Customers
– These entries in our virus information library and the knowledge base provide workarounds for this issue for corporate customers
– Customers are discussing the issue in our online support community

Consumers
– This support page provides information for impacted consumers
– Consumers are also discussing the topic in the online community

http://siblog.mcafee.com/support/a-long-day-at-mcafee/ 

“If you are a enterprise/corporate account, and you have an issue these entries in our virus information library and the knowledge base provide workarounds for this issue. If you are a consumer and have an issue, this support page provides information for impacted consumers or call +1 866 622 3911. We have teams of people standing by to help. (To contact McAfee by phone in your region, go to the “Contact Us” page on our Web site and select your country for the correct number.)”

The essential steps are:

  • checking that you don’t have the defective DAT
  • if you do, and you have the looping-boot problem, safe-booting to remove the defective DAT and de-quarantining or replacing svchost.exe (a sketch of the restore step follows this list)
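
To make that second step concrete, here’s a minimal, purely illustrative Python sketch of the svchost.exe restore, relying on the fact that Windows File Protection on XP keeps a cached copy of system files in system32\dllcache. This is my own sketch, not a McAfee-supplied tool (the defective DAT was 5958, which misidentified svchost.exe as a Wecorl variant on Windows XP), and it assumes the machine can be brought up far enough – Safe Mode, say – to run a script; a manual copy from the Recovery Console achieves the same thing.

    import os
    import shutil

    # Windows File Protection keeps a spare copy of system files here on XP.
    system32 = os.path.join(os.environ.get("SystemRoot", r"C:\WINDOWS"), "system32")
    target = os.path.join(system32, "svchost.exe")
    cached = os.path.join(system32, "dllcache", "svchost.exe")

    def restore_svchost():
        """Put svchost.exe back if the false positive quarantined or deleted it."""
        if os.path.exists(target):
            print("svchost.exe is present; nothing to restore.")
        elif os.path.exists(cached):
            shutil.copy2(cached, target)  # restore from the WFP cache
            print("Restored svchost.exe from dllcache.")
        else:
            print("No cached copy: restore from install media or a known-good XP machine.")

    if __name__ == "__main__":
        restore_svchost()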

The McAfee knowledgebase article at https://kc.mcafee.com/corporate/index?page=content&id=KB68780 is also relevant.

David Harley FBCS CITP CISSP
AVIEN Chief Operations Officer
Mac Virus
Small Blue-Green World
ESET Research Fellow & Director of Malware Intelligence

Also blogging at:
http://www.eset.com/blog
http://macvirus.com/
http://smallbluegreenblog.wordpress.com/
http://blogs.securiteam.com
http://blog.isc2.org/
http://dharley.wordpress.com
http://chainmailcheck.wordpress.com
http://amtso.wordpress.com

Changing Passwords: Should You Pass On It?

I’m seeing a lot of traffic about a story in the Boston Globe, taken up elsewhere, suggesting that changing passwords is “a waste of time”. Well, actually, the study by Cormac Herley doesn’t exactly say that, and I suggest that you read the actual study to see what it does say. It’s well worth reading and makes some excellent points, though it’s not a particularly new paper, and some of the points it makes are much older.

Should you stop changing passwords? Well, you probably don’t have much choice, in general. You should certainly use strong passwords, where possible (some systems actively work against you in that respect, by only accepting limited password options). Randy Abrams and I wrote a paper for ESET last year that discussed some password strategies, and one of the points made there was: 

 “It’s sometimes useful to consider whether frequent changes are really necessary or desirable. After all, if you’re encouraging the use of good password selection and resistance to social engineering attacks, and making it difficult for an attacker to use unlimited login attempts, a good password should remain a safe password for quite a while.”
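
That last point – limiting login attempts – is worth a moment. Here’s a minimal sketch of the kind of throttling the quote alludes to, with made-up policy numbers (five failures, a fifteen-minute lockout); a real system would persist this state rather than hold it in memory.

    import time

    MAX_FAILURES = 5         # hypothetical policy: lock after 5 consecutive failures
    LOCKOUT_SECONDS = 900    # hypothetical policy: 15-minute lockout

    failures = {}  # username -> (consecutive failure count, time of last failure)

    def login_allowed(username):
        """Refuse further guesses while an account is locked out."""
        count, last = failures.get(username, (0, 0.0))
        return count < MAX_FAILURES or time.time() - last >= LOCKOUT_SECONDS

    def record_attempt(username, succeeded):
        """Reset the counter on success; otherwise count the failure."""
        if succeeded:
            failures.pop(username, None)
        else:
            count, _ = failures.get(username, (0, 0.0))
            failures[username] = (count + 1, time.time())

Even a crude limit like that cuts an online guessing attack from thousands of tries a minute to a trickle, which is precisely why a well-chosen password can stay safe for quite a while.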

I don’t think that the “change passwords every thirty days” mantra has been as universally enthused over by security specialists as the Globe suggests. System administrators (not always the same thing as security specialists) do often enforce such measures, of course. But while I was working on some notes on social engineering for a journalist today, I came across this quote in a paper I presented at EICAR in 1998. (I’ll have to put that paper up somewhere: it’s actually not bad, and not particularly outdated.)

“Documented research into social engineering hasn’t kept pace with dialogue between practitioners, let alone with real-world threats. Of course password stealing is important, but it’s [also] important not to think of social engineering as being concerned exclusively with ways of saying “Open, sesame…..”

Even within this very limited area, there is scope for mistrusting received wisdom. No-one doubts the importance of secure passwords in most computing environments, though the efficacy of passwording as a long-term solution to user authentication could be the basis of a lively discussion. Still, that’s what most systems rely on. It’s accepted that frequent password changes make it harder for an intruder to guess a given user’s password. However, they also make it harder for the user to remember his/her password. He/she is thus encouraged to attempt subversive strategies such as:

  • changing a password by some easily guessed technique such as adding 1, 2, 3 etc. to the password they had before the latest enforced change.
  • changing a password several times in succession so that the password history expires, allowing them to revert to a previously held password.
  • using the same password on several systems and changing them all at the same time so as to cut down on the number of passwords they need to remember.
  • aides-memoire such as PostIts, notes in the purse, wallet or personal organizer, biro on the back of the wrist…..

How much data is there which ‘validates’ ‘known truths’ like “frequent password changes make it harder for an intruder to guess a given user’s password”? Do we need to examine such ‘received wisdom’ more closely?”
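
To make the first of those quoted strategies concrete: an attacker who has obtained one expired password can enumerate the obvious successors almost for free. A toy illustration (mine, not the 1998 paper’s):

    import re

    def increment_variants(old_password, depth=5):
        """Guess likely successors of a known old password under the
        'add 1, 2, 3...' habit that enforced changes encourage."""
        match = re.match(r"^(.*?)(\d+)$", old_password)
        if match:
            stem, num = match.group(1), int(match.group(2))
            return [f"{stem}{num + i}" for i in range(1, depth + 1)]
        return [f"{old_password}{i}" for i in range(1, depth + 1)]

    print(increment_variants("Summer3"))
    # ['Summer4', 'Summer5', 'Summer6', 'Summer7', 'Summer8']

Five guesses, comfortably inside any sane lockout threshold: a forced change that merely increments a suffix buys almost nothing.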

Nor do I claim that those thoughts were particularly original: luminaries like Gene Spafford and Bruce Schneier have made similar observations. That doesn’t mean you should accept uncritically what they, or I, say. But it’s always worth wondering if received wisdom is really wise.

And as Neil Rubenking points out, an attacker isn’t going to waste time on trying to crack your password with brute force if he can trick you into telling it to him, or into running a keylogger. Which takes me right back to that social engineering paper… [Update: now available at http://smallbluegreenblog.wordpress.com/2010/04/16/re-floating-the-titanic-social-engineering-paper/]

David Harley FBCS CITP CISSP
AVIEN Chief Operations Officer
ESET Research Fellow & Director of Malware Intelligence
Mac Virus
Small Blue-Green World

Also blogging at:
http://www.eset.com/blog
https://avien.net/blog/
http://smallbluegreenblog.wordpress.com/
http://blogs.securiteam.com
http://blog.isc2.org/
http://dharley.wordpress.com
http://chainmailcheck.wordpress.com
http://amtso.wordpress.com

Testing AV: Why VB Tests are still relevant

The latest Virus Bulletin anti-malware product test, the largest ever of its type (a mammoth 60-product test), demonstrates several things: that testing anti-virus products never gets any easier; that discussing (or dissing) the tests never gets any less popular; and that the results of testing are never less than controversial.

Virus Bulletin has been in the testing game a very long time: its comparative testing and the VB100 award have been around since early 1998, and before that VB had been reviewing AV products since its inception in 1989. The test methodology is well known, and is based on a combination of WildList testing, tests for ‘zoo’ viruses (that is, known malware not on the WildList) and false positive (FP) testing. The full current methodology is documented on the Virus Bulletin website.

Despite the large number of people decrying this sort of WildList-based testing, and indeed some vendors withdrawing entirely from ‘static’ tests (i.e. tests based on scanning predetermined files rather than live incoming threats), the fact that 60 products participated in a test like this shows that there is still life, and worth, in this type of testing.

The surprising thing is that, while many criticize WildList-based tests for being limited in scope (the WildList is certainly not a comprehensive list of malware), so many products fail to pass them. This, perhaps more than anything, highlights their usefulness as a baseline: if your product isn’t reasonably consistent in achieving the VB100 award, perhaps you should think about a different one. Often the problem is not detection so much as false detection, which makes the FP part of the test very important. Any product could detect 100% of all viruses very easily; it’s much more difficult to detect ONLY viruses and nothing else, as the sketch below illustrates.
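
A toy example of that point, scoring a ‘detector’ that simply flags every file (my illustration; the corpus sizes are invented):

    def evaluate(detector, malware_files, clean_files):
        """Return (detection rate, false positive rate) for a detector."""
        detection = sum(map(detector, malware_files)) / len(malware_files)
        false_pos = sum(map(detector, clean_files)) / len(clean_files)
        return detection, false_pos

    flag_everything = lambda path: True  # 'perfect' detection, the cheap way

    malware = [f"sample{i}.exe" for i in range(1000)]      # hypothetical corpus
    clean = [f"goodfile{i}.exe" for i in range(100000)]    # hypothetical corpus

    detection, false_pos = evaluate(flag_everything, malware, clean)
    print(f"detection {detection:.0%}, false positives {false_pos:.0%}")
    # detection 100%, false positives 100% -- a product nobody could live with

A product that misses a WildList sample forfeits the VB100; one that flags a clean file forfeits it just as surely, which is why the FP set matters so much.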

The other aspect of the testing, perhaps not obvious from the results but highlighted in the short review written of each product, is the tester’s experience of installing and using the product.

John Leyden, writing in The Register, points out that 20 of the 60 products (1/3, for those of you who still remember how fractions work) failed to achieve certification. He also quotes John Hawes (VB’s tireless tester) as saying “It was pretty shocking how many crashes, freezes, hangs and errors we encountered in this test” – damning words indeed, considering that the test was run on Windows XP, a mature platform that has been a standard for many years.

So, while attaining VB100 awards is not the be-all and end-all of testing anti-malware products, it’s still a good place to start looking. Congratulations to all those whose products did pass, from someone who knows only too well just how high that particular bar is set.