Sunday, February 24, 2013

Antivirus Testing 101

At various locations around the world, teams of dedicated researchers put dozens of antivirus products through grueling tests. Some of these antivirus testing labs run procedures that take months. Others challenge antivirus products to detect hundreds of thousands of samples. There's no way a lone reviewer like me could duplicate those efforts, but I persist in performing hands-on testing for every antivirus review. Why? There are several reasons.

Timeliness is one reason. I do my best to review each new security product as soon as it's released. The labs perform their tests on a schedule that rarely matches my needs. Comprehensiveness is another. Not every security company participates with every lab; some don't participate at all. For those that don't participate, my own results are all I've got to go on. Finally, hands-on testing gives me a feel for how the product and company handle tough situations, like malware that prevents installation of the protective software.

To get a reasonable comparison, I need to run each antivirus product against the same set of samples. Yes, that means I'm never testing with zero-day, never-before-seen malware. I rely on the labs, with their greater resources, to perform that kind of testing. Creating a new set of infested test systems takes a long time, so I can only afford to do it once a year. Given that my samples aren't remotely new, you'd think all security products would handle them well, but that's not what I observe.

The big independent labs maintain a watch on the Internet, constantly capturing new malware samples. Of course they have to evaluate hundreds of suspects to identify those that are truly malicious, and determine what sort of malicious behavior they exhibit.

For my own testing, I rely on help from experts at many different security companies. I ask each group to supply real-world URLs for ten or so "interesting" threats. Of course not every company wants to participate, but I get a representative sample. Grabbing the files from their real-world location has two benefits. First, I don't have to deal with email or file-exchange security wiping out samples in transit. Second, it eliminates the possibility that one company might game the system by supplying a one-off threat that only their product can detect.

Malware writers are constantly moving and morphing their software weapons, so I download suggested samples immediately upon receiving the URLs. Even so, some of them have already vanished by the time I try to grab them.
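
To make that step concrete, here's a minimal Python sketch of the grab-it-now approach. The file names and layout are placeholders, not a description of any real tool: it assumes the suggested URLs arrive one per line in a text file, saves whatever it can, and flags the URLs that have already vanished.

```python
import hashlib
import os
import urllib.request

def fetch_samples(url_list_path, dest_dir="samples"):
    """Grab each suggested sample right away; report the ones already gone."""
    # Run this only inside an isolated virtual machine -- these are live malware URLs.
    os.makedirs(dest_dir, exist_ok=True)
    with open(url_list_path) as f:
        urls = [line.strip() for line in f if line.strip()]
    for url in urls:
        try:
            data = urllib.request.urlopen(url, timeout=30).read()
        except Exception as err:
            print(f"GONE  {url} ({err})")      # already vanished, or host unreachable
            continue
        # Store each download under its SHA-256 hash so duplicates stand out.
        name = hashlib.sha256(data).hexdigest()
        with open(os.path.join(dest_dir, name), "wb") as out:
            out.write(data)
        print(f"SAVED {url} -> {name}")

fetch_samples("suggested_urls.txt")
```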

The next step, an arduous one, involves launching every suggested sample in a virtual machine, under the scrutiny of monitoring software. Without giving away too much detail, I use a tool that records all file and Registry changes, another that detects changes using before-and-after system snapshots, and a third that reports on all running processes. I also run a couple of rootkit scanners after each installation, since in theory a rootkit might evade detection by other monitors.
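
The before-and-after comparison amounts to diffing two snapshots of system state. The following simplified sketch captures the idea for the file system only; the real monitoring tools also track the Registry and running processes, and the scan root here is just a placeholder.

```python
import hashlib
import os

def snapshot(root):
    """Map every file under root to a SHA-256 hash of its contents."""
    state = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    state[path] = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                pass   # locked, permission-denied, or vanished mid-scan
    return state

def diff(before, after):
    """Return the files added, removed, and modified between two snapshots."""
    added = sorted(set(after) - set(before))
    removed = sorted(set(before) - set(after))
    changed = sorted(p for p in before.keys() & after.keys() if before[p] != after[p])
    return added, removed, changed

before = snapshot("C:/")      # clean baseline
# ... launch the sample and let it finish installing ...
after = snapshot("C:/")
added, removed, changed = diff(before, after)
```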

The results are frequently disappointing. Some samples detect when they're running in a virtual machine and refuse to install. Others want a specific operating system, or a specific country code, before they'll take action. Still others may be waiting for instruction from a command-and-control center. And a few damage the test system to the point that it doesn't work any longer.

Out of my most recent set of suggestions, 10 percent were already gone by the time I tried to download them, and about half of the rest were unacceptable for one reason or another. From those that remained, I chose three dozen, looking to get a variety of malware types suggested by a mix of different companies.

Selecting malware samples is just half the work. I also have to go through reams and reams of log files generated during the monitoring process. The monitoring tools record everything, including changes not related to the malware sample. I wrote a couple of filtering and analysis programs to help me winnow out the specific files and Registry traces added by the malware installer.
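
The filtering boils down to set subtraction: any change that also shows up after a clean control run is background noise, and whatever is left should belong to the malware installer. A stripped-down sketch, assuming hypothetical one-change-per-line log files, looks like this:

```python
def load_entries(path):
    """One recorded change per line, e.g. 'FILE C:\\Users\\...\\dropper.exe' or 'REG HKCU\\...\\Run\\x'."""
    with open(path, encoding="utf-8", errors="replace") as f:
        return {line.strip() for line in f if line.strip()}

# Changes seen on a clean control run are noise (Windows Update, prefetch churn,
# and so on); subtracting them leaves the traces added by the sample.
noise = load_entries("clean_run.log")
infected = load_entries("after_sample.log")
traces = sorted(infected - noise)

with open("sample_traces.log", "w", encoding="utf-8") as out:
    out.write("\n".join(traces))
```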

After installing three samples apiece in twelve otherwise-identical virtual machines, I run another little program that reads my final logs and checks that the running programs, files, and Registry traces associated with the samples are actually present. Quite often, I have to adjust my logs because a polymorphic Trojan installed using different filenames than it used when I ran my analysis. In fact, over a third of my current collection needed adjustment for polymorphism.
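
The checkup program itself can be quite small. Here's a rough sketch of the idea, assuming a hypothetical trace file with lines like "FILE C:\path\to\dropper.exe" and "PROC dropper.exe" (Registry checks are omitted to keep it short); anything it reports missing is a candidate for the polymorphism adjustment.

```python
import os
import subprocess

def processes_running():
    """Names of currently running processes, via Windows' built-in tasklist command."""
    out = subprocess.run(["tasklist", "/fo", "csv", "/nh"],
                         capture_output=True, text=True).stdout
    return {line.split('","')[0].strip('"').lower()
            for line in out.splitlines() if line.strip()}

def verify(trace_file):
    """Confirm that the files and processes logged for a sample are actually present."""
    running = processes_running()
    missing = []
    with open(trace_file, encoding="utf-8") as f:
        for line in f:
            kind, _, value = line.strip().partition(" ")
            if kind == "FILE" and not os.path.exists(value):
                missing.append(line.strip())
            elif kind == "PROC" and value.lower() not in running:
                missing.append(line.strip())
    return missing

print(verify("sample_traces.log"))
```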

With all of this preparation complete, analyzing a particular antivirus product's cleanup success is a simple matter. I install the product on all twelve systems, run a full scan, and run my checkup tool to determine what (if any) traces remain behind. A product that removes all executable traces and at least 80 percent of the non-executable junk scores ten points. If it removes at least 20 percent of the junk, that's worth nine points; less than 20 percent gets eight points. If executable files remain behind, the product scores five points; that goes down to three points if any of the files are still running. And of course a total miss gets no points at all.
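
Expressed as code, that rubric looks roughly like the function below. The names and parameters are just an illustration of the rules spelled out above, not the actual checkup tool; it takes counts of leftover traces and returns the per-sample score.

```python
def score_cleanup(detected, exe_left, exe_running, junk_total, junk_left):
    """Score one product against one sample, per the rubric described above."""
    if not detected:
        return 0                      # total miss
    if exe_running > 0:
        return 3                      # executable traces still running
    if exe_left > 0:
        return 5                      # executable files left behind
    # All executables removed; grade on how much non-executable junk went too.
    removed = 1.0 if junk_total == 0 else (junk_total - junk_left) / junk_total
    if removed >= 0.8:
        return 10
    if removed >= 0.2:
        return 9
    return 8

# Hypothetical results for two samples: a near-clean sweep, and one with junk left over.
scores = [score_cleanup(True, 0, 0, 40, 2),    # 95% of junk removed -> 10
          score_cleanup(True, 0, 0, 40, 25)]   # 37.5% removed -> 9
overall = sum(scores) / len(scores)            # averaged across all samples
```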

Averaging the points for each of three dozen samples gives me a pretty good view of how well the product handles cleaning up malware-infested test systems. In addition, I get hands-on experience with the process. Suppose two products get identical scores, but one installed and scanned without issues and the other required hours of work by tech support; the first is clearly better.
