
I like Secunia, so no hard feelings from our side.

But truly, the test they published the other day, purporting to show that “security suites fail exploit tests,” is a silly and useless PR stunt. I think they were just trying to drum up some publicity for their patch-scanning business, and decided to kick the AV players around for fun.

Testing guru Andreas Marx of AV-Test.org pretty much sums up the issues with it:

– Some critical details are missing, for example, the time of the last update of the scanners, the exact product versions, and the like.

– Only the on-demand scanner and the on-access guard were tested, so the only thing checked was whether the file scanner would trigger an alert.

– The paper also speaks about a test with HTML/web pages, but I cannot see a single test case for this part in the review (is it missing, or was it excluded?)

The “scan some files only” approach especially concerns me, as only one of the many built-in security features of a suite was tested (but it’s very fast: such a test might take just a minute or two to complete, scanning the entire set of files).

In most cases, it is simply not practical to scan all data files for possible exploits, as it would slow down scanning dramatically. Instead, most companies focus on a few widely used file-based exploits (like the ANI exploits), and some companies also remove the detection of such exploits after some time has passed (since most users should have patched their systems in the meantime, and to avoid further slow-downs).

There are many more practical protections built into security suites, like the URL filter (which checks and blocks known URLs hosting malware or phishing websites) and the exploit filter in the browser (which would also block access to many “bad” websites). Some tools also include virtualization and buffer/stack/heap overflow protection mechanisms.

Then there is the traditional “scanner” itself. And even if some exploit code gets executed, a HIPS, IDS or personal firewall might still be able to block the attack. For example, some security suites know that Word, Excel or WinAmp won’t write EXE files to disk, so potentially dropped malware cannot get executed and the system is left in a “good” state.
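To make that idea concrete, here is a minimal, purely illustrative Python sketch of such a behavioral rule. The process names, extensions and decision function are assumptions chosen for illustration, not any vendor’s actual implementation; a real HIPS engine hooks file-write events at the kernel or file-system level rather than in a script like this.

```python
# Hypothetical, heavily simplified sketch of the behavioral rule described
# above: applications such as Word, Excel or WinAmp have no legitimate reason
# to drop executables, so an attempt by one of them to write an .exe is
# treated as a likely exploit payload and blocked.

# Processes assumed to never write executable files to disk (illustrative list).
NO_EXE_WRITERS = {"winword.exe", "excel.exe", "winamp.exe"}

# File extensions treated as executable payloads (illustrative list).
EXECUTABLE_EXTENSIONS = (".exe", ".dll", ".scr")

def should_block_write(process_name: str, target_path: str) -> bool:
    """Return True if this file-write event should be blocked."""
    is_payload = target_path.lower().endswith(EXECUTABLE_EXTENSIONS)
    return process_name.lower() in NO_EXE_WRITERS and is_payload

# Example: a malformed media file exploits WinAmp and tries to drop malware.
print(should_block_write("winamp.exe", r"C:\Users\victim\dropped.exe"))   # True
print(should_block_write("winamp.exe", r"C:\Users\victim\playlist.m3u"))  # False
```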

A few weeks back, I wrote the following text for our own test report:

“A comprehensive review should not only concentrate on detection scores of the on-demand scanner, as this would give a user only a very misleading and limited view of the product’s capabilities.

“When comparing the security of cars, we would not only focus on the safety belts, but also check the ABS system (anti-lock braking system), one or more airbags, crumple zones, the ESP (electronic stability program), as well as constructional changes and many other features which make a car safe. The different detection types have to be taken together to make a valid statement about the detection mechanisms as a whole: neither static nor proactive detection mechanisms alone can catch all malware.

“It is important to have good heuristics, generic signatures and dynamic detection and prevention in place to be able to handle new, unknown malware without any updates. It is crucial to have good response times, to be able to react to new malware when proactive mechanisms fail to detect it. It is essential to have good static detection rates, to be able to handle already known malware before it is even executed on a system. So comparing single features makes little sense; we should keep in mind that a user has not bought an AV product merely to find some viruses and report them, but rather a service to keep his system malware-free.”

Therefore, a better test setup would be to actually have the vulnerable applications installed on the test PC, together with the security suite. (BTW: I’m sure no user would have all of the different applications on Secunia’s list on his PC, so one might concentrate on the most recent or most widespread exploits only.) The tester would then need to trigger the exploit and see whether the machine was exploited successfully or not. (Please note that the scanner or guard might not see a file at all if it’s a memory-based exploit, so the quoted detection rates might not even be relevant in some cases, as no files are written to disk.)

This would actually be a much more interesting and relevant test, one that really focuses on the entire suite’s features and not only on the “traditional” scanner part of an AV product. A few more points are mentioned in two papers published by AMTSO, the Anti-Malware Testing Standards Organization.
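As a rough illustration of what such a whole-system test could look like, here is a conceptual Python sketch. It is not a working harness: the test cases are invented, and the “lab” functions are stand-in stubs so the sketch runs as-is; a real harness would replace them with VM snapshot control, UI automation and system-state monitoring.

```python
# Conceptual sketch of the whole-system exploit test described above.
# Everything below is illustrative; the stub functions only print what a
# real lab tool would do.

TEST_CASES = [
    {"exploit": "ani_cursor_sample.ani", "app": "explorer.exe"},
    {"exploit": "malformed_playlist.pls", "app": "winamp.exe"},
]

def restore_snapshot(name: str) -> None:
    # Stub: revert the test VM (vulnerable apps + security suite installed).
    print(f"  reverting VM to snapshot '{name}'")

def open_in_target_app(app: str, sample: str) -> None:
    # Stub: open the exploit sample in the vulnerable application.
    print(f"  opening {sample} in {app}")

def system_compromised() -> bool:
    # Stub: a real harness would check for dropped files, new processes,
    # registry changes, outbound connections, etc.
    return False

def run_case(case: dict) -> str:
    restore_snapshot("clean_vm_with_suite")
    open_in_target_app(case["app"], case["exploit"])
    # Judge the outcome by system state, not by whether a file was flagged:
    # a memory-only exploit never gives the file scanner anything to detect.
    return "compromised" if system_compromised() else "protected"

for case in TEST_CASES:
    print(case["exploit"])
    print("  result:", run_case(case))
```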

Alex Eckelberry