
Tuesday, June 19, 2012

Virus Bulletin RAP

Source
The VB RAP (Reactive And Proactive) test has been running as part of the bi-monthly VB100 comparative reviews since February 2009. The test measures products' detection rates over the freshest samples available at the time the products are submitted to the test, as well as samples not seen until after product databases are frozen, thus reflecting both the vendors' ability to handle the huge quantity of newly emerging malware and their accuracy in detecting previously unknown malware.
The following chart shows average scores achieved in the last four comparative reviews:
[Chart: RAP averages quadrant, October - April 2012]
In compiling this chart, any RAP score achieved in a test in which the product in question generated one or more false positives is NOT counted towards that product's average. Products that entered only one of the comparatives used to generate the chart (or for which only one set of results is counted because of false positives in other tests) are marked in RED; this indicates that the score is a less reliable indicator of detection capability than one averaged across several tests.
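To make the averaging rule concrete, here is a minimal sketch in Python, assuming each test result is a (reactive, proactive, false-positive count) triple; the function and data layout are illustrative, not Virus Bulletin's own tooling:

    def rap_average(results):
        # results: list of (reactive, proactive, false_positive_count) tuples,
        # one per comparative review entered.
        usable = [(r, p) for r, p, fps in results if fps == 0]
        if not usable:
            return None  # no false-positive-free result to average
        reactive = sum(r for r, _ in usable) / len(usable)
        proactive = sum(p for _, p in usable) / len(usable)
        single = len(usable) == 1  # would be marked in RED on the chart
        return reactive, proactive, single

    # One of four tests had a false positive, so only three results count:
    print(rap_average([(91.2, 80.5, 0), (89.7, 78.1, 1),
                       (92.4, 82.0, 0), (90.1, 79.3, 0)]))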

Procedures

The RAP tests are run according to the following procedures:
RAP samples are split into four sets. The set known as 'week +1' is gathered in the period from one to seven days after the product submission deadline. The 'week -1' set covers the deadline day itself and the six previous days. The 'week -2' set includes samples gathered eight to 14 days before the deadline, and the 'week -3' set consists of samples gathered 15 to 21 days before the deadline.
All samples are counted as dating from the point at which they are first seen by the Virus Bulletin test lab processing systems, or the date label of the batch with which they were received, whichever is earlier. Sample sources are not considered when compiling sets.
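As a rough sketch, the two rules above (the four collection windows and the 'whichever is earlier' dating rule) could be expressed as follows; the day ranges mirror the text exactly, and all names are illustrative:

    from datetime import date

    def sample_date(first_seen, batch_label):
        # A sample is dated from its first sighting by the lab's processing
        # systems or from its batch's date label, whichever is earlier.
        return min(first_seen, batch_label)

    def rap_set(sample_day, deadline):
        # Offset in days relative to the product submission deadline.
        offset = (sample_day - deadline).days
        if 1 <= offset <= 7:
            return "week +1"   # one to seven days after the deadline
        if -6 <= offset <= 0:
            return "week -1"   # deadline day and the six previous days
        if -14 <= offset <= -8:
            return "week -2"   # 8 to 14 days before the deadline
        if -21 <= offset <= -15:
            return "week -3"   # 15 to 21 days before the deadline
        return None            # outside the RAP windows

    deadline = date(2012, 4, 1)
    print(rap_set(date(2012, 3, 29), deadline))  # -> week -1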
Samples are validated using our standard lab protocols, and classified to exclude certain inappropriate sample types. These include adware and other items considered 'potentially unwanted' by some products, partial samples requiring other components to operate, and original samples of true viruses received from external sources. Self-replicating viruses are replicated in-house and only new replications are considered for inclusion in the RAP sets.
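Expressed as a simple filter (with invented classification labels, since VB's internal categories are not published), the exclusions might look like this:

    EXCLUDED = {"adware", "potentially unwanted", "partial sample"}

    def include_in_rap(sample):
        # Exclusion rules paraphrased from the text above.
        if sample["category"] in EXCLUDED:
            return False
        # Original true-virus samples from external sources are excluded;
        # only fresh in-house replications make it into the RAP sets.
        if sample["category"] == "true virus" and not sample["replicated_in_house"]:
            return False
        return True

    print(include_in_rap({"category": "trojan", "replicated_in_house": False}))     # True
    print(include_in_rap({"category": "true virus", "replicated_in_house": True}))  # True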
Samples are rated by prevalence and significance as accurately as possible, using prevalence data from a wide range of sources. Sets are weighted to remove the least prevalent items. Scores are also weighted to minimise the impact of large quantities of similar items - for example, large batches of server-side morphed trojans and replicated true viruses are given a lower weighting than one-off unique items.
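The weighting scheme itself is not published; as a toy illustration only, one could give each member of a large batch of near-identical samples a reduced weight so that the batch as a whole counts roughly as one item:

    from collections import Counter

    def weighted_detection_rate(samples):
        # samples: list of (batch_id, detected) pairs; one-off items sit in
        # a batch of size 1 and keep full weight. The 1/batch_size weighting
        # is invented here purely for illustration.
        sizes = Counter(batch for batch, _ in samples)
        total = hits = 0.0
        for batch, detected in samples:
            weight = 1.0 / sizes[batch]
            total += weight
            hits += weight * detected
        return 100.0 * hits / total

    # Two unique samples plus 500 server-side morphs of one trojan:
    samples = [("uniq-a", True), ("uniq-b", False)] + [("morph-x", True)] * 500
    print(round(weighted_detection_rate(samples), 1))  # 66.7, not 99.8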
For each product entered in a review, we measure detection using our standard on-demand scanning procedure; this uses default product settings and ignores detections labelled only as 'suspicious'. Scores used in the per-test RAP quadrants are labelled 'Proactive' (the 'week +1' score) and 'Reactive' (the average of the scores for weeks -1, -2 and -3). Scores used in the four-test RAP averages quadrant are the averages of each of these scores over the last four tests.
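In sketch form, assuming each product's four week scores are already measured, the two per-test headline figures reduce to:

    def per_test_rap(week_scores):
        # week_scores: dict of detection percentages keyed by RAP set name.
        reactive = (week_scores["week -1"] + week_scores["week -2"]
                    + week_scores["week -3"]) / 3
        proactive = week_scores["week +1"]
        return reactive, proactive

    print(per_test_rap({"week -3": 93.0, "week -2": 91.5,
                        "week -1": 88.0, "week +1": 79.2}))
    # -> (90.83..., 79.2): detection typically drops on unseen samples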
In the per-test quadrants, products with false positives in the test in question are marked by striking through the product identifier. For the four-test RAP averages quadrant, such scores are excluded when calculating averages.
Product identifiers on quadrant charts may be simplified or abbreviated to keep the chart readable.
