Microsoft vs. McAfee: How free antivirus outperformed paid

Started by DorsetBoy, Nov 19, 2010, 18:01:06

Previous topic - Next topic

0 Members and 1 Guest are viewing this topic.

DorsetBoy


http://www.zdnet.com/blog/bott/microsoft-vs-mcafee-how-free-antivirus-outperformed-paid/2614


Quote: How effective is free antivirus software? I had a chance to see a real, in-the-wild example just this month, and the results were, to put it mildly, unexpected. The bottom line? Microsoft's free antivirus solution found and removed a threat that two well-known paid products missed. Here are the details. [Update: After I published this post, a second example appeared, courtesy of a rogue commenter in the Talkback section. See the results at the end of this post.]

I've had Microsoft Security Essentials (MSE) installed on my main working PC for most of the past year. Mostly, I use it for real-time protection. I typically disable the scheduled virus scans on my PCs and instead occasionally do a manual scan just to confirm that nothing out of the ordinary has snuck through. Last month I decided to perform a scan using the Full option. Because I have 2.5 terabytes of hard disk space, with roughly 40% of it in use, I knew the scan would take a long time. So I scheduled it to run while I was out running errands... (more)

Rik

An interesting report, Dorset. Maybe we're all going to have to think again about MSE?
Rik
--------------------

This post reflects my own views, opinions and experience, not those of IDNet.

DorsetBoy

Quote from: Rik on Nov 19, 2010, 18:06:42
An interesting report, Dorset. Maybe we're all going to have to think again about MSE?

It was shown previously that MSE outperformed many of the paid apps, and that Avast 5 Free protected better than full suites, even though it's nothing but an AV, with no firewall etc.

Steve

It's not going to do the image of the security software business any good if they're outperformed by MS. Personally, I don't see a reason to use anything else at the moment.
Steve
------------
This post reflects my own views, opinions and experience, not those of IDNet.

zappaDPJ

I think it's a bit unfair to compare MSE to McAfee, McAfee is dreadful :laugh:

I've used both. MSE has to date detected and removed a number of threats for me with no false positives. It's also let nothing through as far as I'm aware. McAfee, on the other hand, is a virus magnet touting for business; it detects and stops nothing that I've seen. My daughter's laptop was riddled with nasty things despite running an up-to-date McAfee. Since I removed it and installed MSE, we haven't had a problem.
zap
--------------------

This post reflects my own views, opinions and experience, not those of IDNet.

armadillo

The author of that article should know better than to make stupid claims based on anecdotal evidence.

All AV software misses threats. The fact that he found three threats that were picked up by a particular AV and missed by many others means absolutely nothing.

If he had run a test pack of say, 500,000 viruses and trojans through 25 different AV products, free and paid, he would have been able to choose a combination of a few viruses that were stopped by any single AV of his choice and missed by most of the others.

That is just by the laws of probability.

He is misled by the fact that his event occurred by chance rather than by his choice.

Now, it may be true that a given free product outperforms one or more paid products. But his article and anecdote are not relevant to the demonstration of that. I have said before and will say again: only large statistically analysed tests have any significance.

For example, in the latest published test (Aug 2010 - they are three-monthly) from
http://www.av-comparatives.org

they ran a few more than 900,000 malware nasties through the 20 AVs they tested. MSE and McAfee are amongst those.

MSE stopped 97.6%
McAfee stopped 99.4%

Gary

Quote from: armadillo on Nov 20, 2010, 20:03:20
The author of that article should know better than to make stupid claims based on anecdotal evidence. ...

MSE stopped 97.6%
McAfee stopped 99.4%

Yes, and the next week that could have changed. Quoting percentages helps, but it's only a small part of a bigger picture, and that's sensible layered protection that doesn't cause untold issues with your system or eat CPU cycles madly.

I used Kaspersky Internet Security for years when I used Windows, until it made a mess of my PC with its hooks and, alas, many bugs. I still use Nod32 on my wife's laptop, but that's going when the licence is up. A free AV like MSE, with the other free tools available out there, is I believe now just as good as any paid-for suite. So 97.6% vs 99.4% doesn't really tell me anything, apart from the fact that both were good on that day; the user forums generally give a bigger picture.

Google the forums for whatever security product you may be about to use and see what issues there could be with the various suites/standalone AVs, then see how they run on your box, because no two AVs will run the same on two computers given the number of software combinations possible, let alone what's in your registry. I would rather use an AV that gets 96.4% alongside other tools than one that gets 99.4% but may have a high false-positive rate, or that brings your machine to a grinding halt.

PC security is about safety and stability, not solely percentages. I always think percentages are nasty things that never show a true picture anyway.   ;)
Damned, if you do damned if you don't

armadillo

Yes, I completely agree about percentages. No single statistic gives a full picture. My point was not that 99.4% is better than 97.6%. Certainly those could be the other way round next time. The point was that the original blog article was meaningless.

All the top 10 (at least) AV products are probably roughly as good as each other. The article was wrongly claiming that a particular free product outperformed the others and he was "surprised". He should not have been surprised, as that result was no more remarkable than pure chance. He might as well have been surprised by throwing a die and scoring a 5.

The point about false positives is very important indeed too.

Funnily enough, I also tried Kaspersky. For quite a long time actually as I was a beta tester for it. It made my machine almost unusable, with extremely slow bootup and shutdown and lots and lots of BSODs. Nod32 runs with no obvious impact on anything. Other users may have the opposite experience. I am keeping Nod32 because it works and its cost is so negligible compared with everything else I spend money on that it is not worth the hassle of replacing it with something else which I may later find I do not like as much, even if it is free. Trying to save £30 per year is not a priority for me. I fully agree you need to try out a product to see if it runs well on your own set up.  But I get irritated by articles that try to show a product is bad (or great) simply because there is some difference between success rates on a tiny sample of malware that the writer happens to have encountered.

I do not agree about placing more reliance on forum posts than on large-sample tests though (maybe that is not what you meant anyway). Certainly, forum posts should not be a guide to how successful or otherwise a product is at trapping malware or generating false positives. The big-sample tests are much better for that (even if they show that there is not much significant difference between the top players). All forum posts are discussions of anecdotal evidence regarding success, impact on machine running, or whatever. They can be useful as discussions on how to solve particular problems, such as difficulties installing or uninstalling, or as comments on how intrusive a product might be. The slight problem with product forums is that they concentrate on problems. So if you look at the forums for all the products, you can find a reason not to install anything.

I agree about the layered approach. Though I prefer to have only one product that gives real-time protection and supplement that with other stand-alone scans. In principle, I do not approve of having two or more real time protection processes running together and no amount of anecdotal evidence from people who have successfully done that will make me change my mind.

tehidyman

One point I have not yet spotted in the tests discussed above is the speed at which the AV program is updated. Given that we hear of new threats on a frequent basis, that could be important. It may be that a program catches 99.9% of last week's viruses, but if it misses one of this week's, one may have a short-term problem. Thanks to Armadillo and Gary for enlightening comments.

Gary

Quote from: armadillo on Nov 21, 2010, 00:39:58
Yes, I completely agree about percentages. No single statistic gives a full picture. My point was not that 99.4% is better than 97.6%. The point was that the original blog article was meaningless. ...

It's not about saving £30; it's just that I feel it doesn't perform as it once did, and it's on a machine that's not used 24/7, or even once a week sometimes. Kaspersky, yes, I agree with you completely on that.

I did not mean using forums as a guide to malware-detection credibility, though. It is possible to use two real-time products, but you have to be careful, I agree. I think Prevx and Nod worked fine together on Justina's laptop, except, as I said, I cannot see the value of spending money now on a machine that is so rarely used. I know you had an issue with Prevx, but a quick chat with them resolves a lot; they will see what the issue is, as it may be at your end only, and if not, the next release should fix it.

There are others that work well apart from Prevx. Wilders is a great place for security tips; I used Returnil and Sandboxie as well when I used Windows. A lot depends on where you browse, too. Having spent time on 4chan, I liked the extra layers  ;)
Damned, if you do damned if you don't

Gary

Quote from: tehidyman on Nov 21, 2010, 08:37:06
One point I have not yet spotted in the tests discussed above is the speed at which the AV program is updated.  Given that we hear of new threats on a frequent basis that could be important. It may be that a program gets 99.9% of last weeks viruses but if it misses one of this weeks one may have a short-term problem. Thanks to Armadillo and Gary for enlightening comments.

Heuristics help with this, but tbh most AVs will leave a window in which you are potentially vulnerable. Good heuristics should catch them, as will behavioural analysis/HIPS, but there is no absolute with a zero-day with any product.
Damned, if you do damned if you don't

zappaDPJ

Quote from: armadillo on Nov 20, 2010, 20:03:20
MSE stopped 97.6%
McAfee stopped 99.4%

I find those figures astonishing for both products. I take on work for a local PC repair shop, the majority of which centres on retrieving data from infected hard drives. In recent times McAfee has been given away free with new hardware, particularly laptops. I'd stake my lunch that the current version isn't capable of stopping over 99% of viruses. I don't know what the test environment is, but to my mind it can't emulate real-world usage, such as users running Limewire.
zap
--------------------

This post reflects my own views, opinions and experience, not those of IDNet.

Gary

Quote from: zappaDPJ on Nov 21, 2010, 13:40:26
I find those figures astonishing for both products. I take on work for a local PC repair shop, the majority of which centres on retrieving data from infected hard drives. In recent times McAfee has been given away free with new hardware, particularly laptops. I'd stake my lunch that the current version isn't capable of stopping over 99% of viruses. I don't know what the test environment is, but to my mind it can't emulate real-world usage, such as users running Limewire.
I agree, Zap, especially since Limewire no longer works  ;) It was shut down in October, I believe, which explains why I don't get so many 'Gary' style calls from muppets with malware in return for free music. Either that or they got the message  ::)
Damned, if you do damned if you don't

zappaDPJ

Hmmm, if the court order on Limewire isn't lifted I can see my income diminishing :laugh:

I can't put a figure on it, but I'd say I find the Limewire client installed more often than not if it's a laptop I'm cleaning out.
zap
--------------------

This post reflects my own views, opinions and experience, not those of IDNet.

armadillo

Quote from: zappaDPJ on Nov 21, 2010, 13:40:26
I find those figures astonishing for both products. I take on work for a local PC repair shop, the majority of which centres on retrieving data from infected hard drives. In recent times McAfee has been given away free with new hardware, particularly laptops. I'd stake my lunch that the current version isn't capable of stopping over 99% of viruses. I don't know what the test environment is, but to my mind it can't emulate real-world usage, such as users running Limewire.

I would imagine that their test environment is more representative of real-world malware than yours. It is a huge bank of 900,000 malware agents and it attempts to include all wild malware. But it also depends on using fully up to date definitions databases in all the products tested. So it is probably not a good reflection of real-world user behaviour.

I would suggest that in your situation, where you are presented with users' infected drives, what you are seeing is a subset of users who do not keep their virus definitions up to date, or who operate without a firewall, or both, or who were infected as a result of those things by a trojan which then opened their system to further infection. What you are seeing is how badly those products perform when they are not kept up to date. Although McAfee is given away free, it does not include a lifetime subscription. Users are quite likely not to update it even while it is still within its free period, and I would guess they are very unlikely to pay for a subscription, given that their use of Limewire suggests a predilection for not paying for things.

I will comment on non-up-to-date databases and heuristics elsewhere in this thread.

The avcomparatives tests are highly rigorous, much more so than any anecdotal evidence from any individual user or repairer (me or you) can possibly be. They publish their methodology on their website and they will even accept submissions of malware from anyone who cares to offer it to them.

Their test results are also consistent with the ones reported by http://www.virusbtn.com

The virusbtn tests are much harder to extract because the navigation on their website is very heavy going, requiring you to work through nested tables layered by product, year and OS, and it also requires registration.

The virusbtn pass/fail criteria are very strict, though they do report much more detail than pass/fail, if you have the will to dig for it. For a pass, the product must catch 100% of the approx 2000 in-the-wild viruses and trojans that they use, and must have no false positives at all. They also test with not-yet-in-the-wild malware and simply report the success rate. They test every two months, but not always on the same OS, and report the results for all those tests, going back as long as they have kept records. So they have results for all OSs, but not every OS in each test.

I will not quote detailed comparative percentages, since people seem intent on not wanting to be influenced by percentages. Suffice it to say that most of the products tested scored 100% on wild viruses, with fully up-to-date databases in most tests though they all missed a few in some months.

It is interesting that the few products which have Linux versions did not achieve 100% on that platform, though they all did for wild viruses. Some of them had detection rates as low as 40% for Trojans on Linux.

The tests are complicated and I would not be happy to summarise them, nor to name specific products.

What this says to me about Linux is not that it is inherently more secure than Windows but is simply less generally targeted because its takeup is still very small compared to Windows. If it ever achieves general use, particularly in corporate, web-facing environments, it will become as insecure as Windows, despite its open-source nature.



armadillo

Quote from: Gary on Nov 21, 2010, 11:48:41
Heuristics help with this, but tbh most AVs will leave a window in which you are potentially vulnerable. Good heuristics should catch them, as will behavioural analysis/HIPS, but there is no absolute with a zero-day with any product.

Very good point.

The avcomparatives people also tested the AV products using databases a month old, combined with heuristics. As with the tests on fully up-to-date databases, there was little to choose between the top 10 products. Without fully up-to-date databases, they all detected only around 50% of threats.

What this tells me is that the best bang for buck, as it were, in a layered approach is to combine your on-access, real-time protection with regular full on-demand scans of your system, always with fully updated databases.

The real-time protection with fully up-to-date databases will pick up some 99% of wild threats. Virtually anything that gets past that (zero day threats) will be picked up by a subsequent on-demand scan once the databases have caught up. You just have to hope you did not get stung by a rootkit.
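Put in rough numbers, this is why the second layer pays off. The figures below are assumed purely for illustration, and treating the two layers as failing independently is a simplification:

```python
# Illustrative figures only, not measured detection rates.
realtime_miss = 1 - 0.99   # real-time scanner with current definitions
ondemand_miss = 1 - 0.99   # later on-demand scan, after definitions catch up

# If the layers failed independently, a threat would have to slip past
# both, so the combined miss rate is the product of the two:
combined_miss = realtime_miss * ondemand_miss
print(f"combined miss rate: {combined_miss:.4%}")
```

In reality the layers are not independent (malware no vendor has yet classified can evade both), so the true combined rate is worse than this product suggests; the sketch only shows why a later scan with caught-up definitions adds real value.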

I think that is a more productive and less risky solution than running more than one real-time protector. You never know when one on-access scanner will trip up another one, with your own unique combination of 100s of low level drivers running on the system.

So far, my own anecdotal and hence completely atypical and unreliable experience is that, in ten years of web use, my system has never been infected with a virus or trojan. And I do have a tendency to visit some high-risk areas of the web (though never with IE).



armadillo

Quote from: tehidyman on Nov 21, 2010, 08:37:06
One point I have not yet spotted in the tests discussed above is the speed at which the AV program is updated.  Given that we hear of new threats on a frequent basis that could be important. It may be that a program gets 99.9% of last weeks viruses but if it misses one of this weeks one may have a short-term problem. Thanks to Armadillo and Gary for enlightening comments.

See my reply #15 to a similar point made by Gary. The tests do cover that and the results may be even more alarming than you hoped. Thanks for the appreciation. I enjoy Gary's comments too.

zappaDPJ

I completely agree about the integrity of avcomparatives, I browse their results from time to time and often base my own AV protection on their findings. However if I get a hard drive that is completely recoverable I look to see what AV/Firewall (if any) the user has employed. Occasionally I find a fully up-to-date AV installed with current definitions although admittedly more often I don't. Under these circumstances I've run a full AV scan to see what will be picked up and I don't recall many occasions where a specific AV will detect and clean off all the infections. Antivirus Pro 2009 sailed past a fully up-to-date McAfee on my daughter's laptop and when I came to remove it (manually) I found it to be just the tip of the iceberg. I just find 99.4% almost impossible to believe.

Perhaps it's that lab tests don't fully emulate real-world conditions. For example, I'd be surprised if MSE wasn't detecting and removing far more infections than any other product: it's Microsoft, it's free, and once installed it'll go about its merry way without any interaction from the user. McAfee, on the other hand, requires the user to maintain a subscription. Something else I've seen recently are infections that shut down background control processes. If the virus delivery system goes undetected, the payload continues by shutting down background processes, e.g. the Background Intelligent Transfer Service that controls the method by which MSE connects to Microsoft's servers. I've not found a single AV product, free or otherwise, that can cope with that situation. It is possible to clean it all off and restart the processes, but it's tortuous and usually requires running scripts from a DOS window.

I don't know, I'd like to believe the figures as they inspire confidence but they just don't tally with my experience even when an infected PC has a properly installed firewall and AV with the latest definitions.

Sorry, that was probably a bit of a ramble!
zap
--------------------

This post reflects my own views, opinions and experience, not those of IDNet.

armadillo

Hi zappaDPJ

Not a ramble at all. I am pleased we agree about the integrity of avcomparatives.

Quote from: zappaDPJ on Nov 21, 2010, 23:33:17
Perhaps it's that lab tests don't fully emulate real-world conditions. For example, I'd be surprised if MSE wasn't detecting and removing far more infections than any other product: it's Microsoft, it's free, and once installed it'll go about its merry way without any interaction from the user. McAfee, on the other hand, requires the user to maintain a subscription.

Well, MSE simply is not detecting far more infections than any other product. Not only avcomparatives but virusbtn as well confirm that. No large independent test I have ever seen shows MSE as superior. All the products hover around, say, 98% to 99.9% and their relative ranking shifts about like top racehorses in consecutive events.

Given that MSE is free and automatically updated, it will certainly detect far more infections than paid products which the user has not updated. And, for reasons I try to explain below, in some circumstances, observed results with small sample sizes may not reflect the true underlying relative performance even when the products are fully up to date.

I am sure that the avcomparatives figure (and the virusbtn similar figure) for McAfee of 99.4% is absolutely correct and fully believable when applied to in-the-wild malware and a fully current database.

Quote: I don't know, I'd like to believe the figures as they inspire confidence but they just don't tally with my experience even when an infected PC has a properly installed firewall and AV with the latest definitions.

You really should believe the figures. The fact that they do not tally with your experience means only that your experience is not representative. Actually, it is slightly more complex than that and I shall try to explain. It follows from probability theory.

Suppose we take two products; for the sake of argument they can be MSE and McAfee but it does not matter what we take. Suppose there is some underlying, and unknown, success rate for each of those products and we want to estimate what that success rate is. We apply a batch of test infections to each product and measure the success rate. What we are doing is testing the products on a sample. The larger that sample, the more likely are the results to be an accurate estimate of the underlying actual success rate.

In fact, if the sample is large enough, there is only a very tiny probability that the "poorer" product will achieve the higher measured success rate. With sample sizes of the 900,000 that avcomparatives use, the probability of the results not being in the correct order is absolutely infinitesimal.

But the same is not true of small samples. Your experience, and mine, are small samples as far as the mathematics is concerned.
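The gap between a 900,000-sample test and personal experience can be sketched numerically: the standard error of an observed detection rate shrinks with the square root of the sample size. The 99% true rate below is just an assumed figure:

```python
import math

def stderr_of_rate(p, n):
    """Standard error of an observed success rate when the true
    rate is p and n independent threats are tested."""
    return math.sqrt(p * (1 - p) / n)

# A repair shop's handful of cases vs a large lab test (illustrative).
for n in (50, 1000, 900_000):
    print(f"n={n:>7}: standard error {stderr_of_rate(0.99, n):.4%}")
```

With 50 samples, the uncertainty (around 1.4 percentage points) dwarfs the 97.6% vs 99.4% gap under discussion; at 900,000 it is around 0.01 points.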

If we run small samples through the two products, there is a probability, which can be calculated, that the "poorer" product will achieve the higher success rate. For smallish samples, the probability of this "wrong" result is surprisingly, astonishingly high.

So, just to take some illustrative figures out of thin air - we might look at a few repair shops such as yours, and consider the observed performance on machines that have been exposed to a comparatively small number of threats.

Let us say we look at 100 such repair shops. The probability calculations would show something like a 30% or 40% probability of getting the "wrong" result in sample sizes typical for the exposure of users' machines. Hence, we would expect that some 30 or so of those 100 repair shops would report that their experience was contrary to the large sample results reported by avcomparatives. So the result you reported is not at all surprising and it does not cast any doubt at all on avcomparatives' ranking or their 99.4% success rate for McAfee. What you reported is simply fully consistent with the fact that there is a very substantial probability of a small sample not reflecting the true underlying performance.
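That "wrong order" probability is easy to estimate with a short simulation. The true detection rates below are assumed purely for illustration, not taken as the products' real performance:

```python
import random

def reversal_probability(p_better, p_worse, sample_size,
                         trials=2000, seed=42):
    """Estimate how often the product with the LOWER true detection
    rate scores at least as well as the better one when both face
    sample_size threats."""
    rng = random.Random(seed)
    reversals = 0
    for _ in range(trials):
        better = sum(rng.random() < p_better for _ in range(sample_size))
        worse = sum(rng.random() < p_worse for _ in range(sample_size))
        if worse >= better:
            reversals += 1
    return reversals / trials

# Assumed true rates of 99.4% vs 97.6%, on samples of varying size.
for n in (20, 200, 2000):
    print(f"{n:>5} threats: reversal probability ~ "
          f"{reversal_probability(0.994, 0.976, n):.0%}")
```

On a couple of dozen threats, the "worse" product matches or beats the better one more often than not (ties included); by a couple of thousand threats, the reversal all but disappears.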

This is far more important in medicine than in AV tests! Researchers all too often base conclusions on small samples. I have even read some medical papers where conclusions were drawn from fewer than 10 cases. (One of those was written by a doctor who was subsequently struck off the medical register). Drugs have to be tested on very large samples. Indeed, the necessary sample sizes are so large that they cannot be done within testing that precedes release of the drug for prescription. There are rules for the testing procedures and how large the samples need to be but drugs are released once it is shown that there is no evidence of harm. Once they get out into "the wild", they get actually prescribed for hundreds of thousands of patients. It is at that stage that side effects and success rates get a true measure and those measures may well not reflect the results achieved in trials.

They have to be careful not to forbid the use of a drug simply because some (possibly rare) side effect arose in a limited trial. They may also see dangerous side effects in the wild that did not show up in trials and that justify withdrawal of the drug.

All this is fully understandable through probability theory.

So, do not be surprised by your experience. The avcomparatives results are valid. Your contrary experience is simply showing something which is fully consistent with the theory.

Hey, and you thought you were rambling!

zappaDPJ

Quote from: armadillo on Nov 22, 2010, 13:51:47
Hey, and you thought you were rambling!

Well it was certainly a worthy one and I thank you for it :)

I agree with your probability synopsis; it's something I've often argued myself, usually to no avail. I'd also say that my perception of AV software is almost certainly tainted by the sheer number of infected computers I get to deal with, coupled with the fact that I only get those units which the repair shop can't deal with. It's not unusual to find multiple types of infections compromising thousands of files. In fact, quite often the PC won't boot at all.

Up until now, if I find a PC using McAfee and it's restorable, I usually remove it and install MSE, assuming a legitimate copy of Windows is found. Perhaps I need to review that if there's an active subscription to McAfee.

This is proving to be an interesting topic :)
zap
--------------------

This post reflects my own views, opinions and experience, not those of IDNet.

Rik

Quote from: armadillo on Nov 22, 2010, 13:51:47
There are rules for the testing procedures and how large the samples need to be but drugs are released once it is shown that there is no evidence of harm. Once they get out into "the wild", they get actually prescribed for hundreds of thousands of patients. It is at that stage that side effects and success rates get a true measure and those measures may well not reflect the results achieved in trials.

A very good point, Dill. Look what happened with thalidomide. Tests were tightened as a result, but a tricyclic anti-depressant, imipramine, was subsequently released with initially good results. Only after some 20 years of use did the medical profession find that it produced violent reactions in patients, mood swings and a tendency to self harm.

In many respects, an older drug can be better than a newer one, because its shortcomings are known.
Rik
--------------------

This post reflects my own views, opinions and experience, not those of IDNet.

magicred

I had a problem a couple of weeks back. I decided to use MSE for the first time. I was really surprised that it found 2 bugs that my so-called other virus detector couldn't. Now I have it on constantly.

pctech

Quote from: zappaDPJ on Nov 21, 2010, 14:31:00
Hmmm, if the court order on Limewire isn't lifted I can see my income diminishing :laugh:

I can't put a figure on it but I'd say I find the Limewire client installed more often than not if it's laptop I'm cleaning out.

You too, eh Zap?

Spent many an hour ridding machines of that garbage, but alas I didn't charge.