Tips and Tricks from Daily Testing

Testing security products in an enterprise environment can be a tedious task. Between pressing timelines, infrastructure nightmares and pushy sales personnel, the actual product evaluation often falls short. In contrast, a customer-driven Proof of Concept (PoC) installation can give valuable insights into a product and showcase its strengths and (especially) weaknesses. The aim of this blog post is to give an insight into when and how security products should be evaluated, enhanced with tips and stories from a team dedicated to doing just that.

How do I find a good security product?

There are many answers to this question. The first, and probably most important one, is to focus not on how to find it, but on when to search. Based on our experience, there are a few main drivers that lead to the evaluation or purchase of a new product:

  1. The Key Account or Sales Manager of a security company comes around the corner with a new, revolutionary product.
  2. Gartner, KuppingerCole or some other analyst company publishes a new Quadrant/Wave/Report.
  3. The marketing looks shiny and somebody got a little overenthusiastic.

Granted, marketing often actually is shiny, but all of the above should be considered bad decision drivers. They lead to products that are either unused or ill-suited, sitting on shelves all around the world or claiming ESXi resources some developer could put to better use. Instead, one should concentrate on two other main drivers for an evaluation:

  • There is a new technological approach or architecture that requires additional security measures. Examples could be BYOD, Office 365 or macOS devices introduced into the enterprise.
  • An existing product has either proven to be patchy (probably during the last outbreak of ransomware or cryptominers) or needs replacement for some other reason, such as uncompetitive pricing, or one’s general unwillingness to keep coping with its Windows-Vista-only fat client.

To conclude the pre-evaluation: never change a running system unless there is a good reason to do so. If the current system is just about right, fulfills its requirements and is well integrated, spend time on something else.

How do I evaluate/PoC the product?

Never conduct an evaluation if the result is already predetermined. If the team is already convinced that a certain product will be bought, there is no reason to do so (apart from “our purchasing department forces us to”). A proper evaluation should start with a number of criteria the product needs to fulfill. These should not be a list of several hundred fine-grained checks, but a high-level description of required capabilities and features. The criteria can be subjective (“The UI should be usable”) and fuzzy (“Should not drain too much CPU”). Counter-intuitively, a little room for interpretation usually leads to better results. There will be a lot of intuition in this process, and that is intentional. “Intuition is usually right, listen to it.”
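As a minimal sketch of what such a high-level criteria catalogue could look like (all names, kinds and descriptions here are illustrative assumptions, not prescriptions):

```python
# A deliberately small, high-level criteria catalogue.
# All names, kinds and descriptions are illustrative assumptions.
criteria = [
    {"name": "Detection quality", "kind": "functional",
     "description": "Catches the malware families relevant to us"},
    {"name": "UI usability", "kind": "non-functional",
     "description": "The UI should be usable"},        # subjective on purpose
    {"name": "Resource footprint", "kind": "non-functional",
     "description": "Should not drain too much CPU"},  # fuzzy on purpose
]

for criterion in criteria:
    print(f"[{criterion['kind']:<14}] {criterion['name']}: {criterion['description']}")
```

A handful of entries like these is enough; the point is to keep the catalogue short and readable, not exhaustive.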

After the high-level criteria have been identified, market research will lead to a number of potentially suitable products. Out of those, at least two should be chosen for a PoC. There is little sense in PoC’ing only a single product: comparison between products leads to insights that would otherwise be lost.

The next step is to agree on and document the timescale and objective of the PoC with the product’s vendor or distributor. A reasonable time frame should not be shorter than a week of testing (assuming you spend the majority of your day on the PoC), plus the time required for installation. Once this has been settled, the security product should be installed in an isolated environment. Testing security products in production networks can lead to unforeseen and unpleasant consequences! If the solution requires real data to work with, all users of the network should be warned. It is advisable to use a network of tech-savvy people, since they will generally better understand what the solution does and how their data is affected, and will probably be a little more patient if incidents arise.

“… the main work should be conducted and understood by the customer.”

Installation of the product should be conducted by the (potential) customer. Most vendors would usually install the product themselves, but the installation itself is a great opportunity to start the evaluation. It can give a very good impression of the product documentation and general usability. Is a lot of manual work necessary? How extensive is the documentation? Of course, the vendor can assist during that step, but the main work should be conducted and understood by the customer. Setting up the product should also be possible by consulting the documentation alone. If a security product’s setup is hard to understand and poorly documented, its operation will probably be as well.

The actual product testing should be conducted without a vendor representative present. Having one around might be convenient, but later on there won’t be a support engineer at hand during operations. The documentation and a quick start from the vendor should be enough to master a product.

Which criteria should be tested for?

There are a number of ways to generate a proper test set. The approach documented in this blog entry has proven to work well in real life, but there might be other options. In this approach, the test set is split into two halves: functional and non-functional criteria.

Functional criteria

These are the criteria that directly reflect product capabilities: for an AV product, one criterion could be the percentage of detected malware. An encryption solution would probably be rated by the number of supported ciphers, a CASB by its support of relevant cloud services. It is a good idea to work in a use-case-oriented way and rate the execution of typical workflows the product is involved in; a minimal scoring sketch is given below.
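To make the detection-rate example concrete, here is a minimal sketch of how such a criterion could be scored from PoC results. The sample names, categories and outcomes are hypothetical assumptions:

```python
from collections import defaultdict

# Hypothetical PoC results: (sample, malware category, detected by product?)
results = [
    ("sample_001.exe", "ransomware",  True),
    ("sample_002.exe", "ransomware",  False),
    ("sample_003.js",  "cryptominer", True),
    ("sample_004.doc", "macro",       True),
]

per_category = defaultdict(lambda: [0, 0])  # category -> [detected, total]
for _, category, detected in results:
    per_category[category][0] += int(detected)
    per_category[category][1] += 1

for category, (hits, total) in sorted(per_category.items()):
    print(f"{category:<12} {hits}/{total} detected ({100 * hits / total:.0f}%)")
```

Breaking the rate down per category (rather than one overall percentage) makes it easier to spot where a product is weak, e.g. strong on ransomware but blind to cryptominers.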

Non-functional criteria

These are the “soft”, less capability-driven criteria: What are the product’s usability, documentation and security like? How well does the product integrate into the enterprise’s IT? A few suggestions for non-functional criteria are given below:

  • Documentation: Availability, searchability, overall quality, available training
  • Compliance: Certifications, GDPR readiness, role and rights management, audit-safe logging
  • Enterprise Readiness: Architecture (IPv6, NAT), High-Availability, Backup/Restore
  • Lifecycle: Installation, Update, Product removal
  • Usability
  • Security: Check the low-hanging fruit (XSS, SQL injection, SSL integrity, man-in-the-middle); a minimal TLS check is sketched after this list. Does the vendor have a process for publishing security incidents/CVEs?
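As an example of such a low-hanging-fruit check, the following sketch verifies the TLS setup of a product’s management interface using only Python’s standard library. The hostname is a hypothetical placeholder for the appliance under test:

```python
import socket
import ssl

def check_tls(host: str, port: int = 443) -> None:
    """Connect with full certificate chain and hostname verification enabled."""
    context = ssl.create_default_context()  # verifies chain and hostname
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print(f"protocol: {tls.version()}")
            print(f"cipher:   {tls.cipher()[0]}")
            print(f"expires:  {tls.getpeercert()['notAfter']}")

# Hypothetical management interface of the product under test
check_tls("appliance.poc.example")
```

If this connection fails with a verification error, or the reported protocol is something like TLSv1.0, that alone is a useful data point for the security criterion.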

Both functional and non-functional criteria should be equally weighted in the end result of testing. Testing criteria do not necessarily need to be precise; it’s a good idea to give them a little room for interpretation. Nobody should need to measure usability by eye-tracking an analyst’s workflow. Simply talking to the analyst is usually sufficient. A sketch of the weighting is shown below.
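A minimal sketch of how the equal weighting could be computed, assuming each criterion has been rated on a simple 1–5 scale during the PoC (the criteria and scores are illustrative):

```python
# Hypothetical ratings on a 1-5 scale, collected during the PoC.
functional     = {"malware detection": 4, "workflow coverage": 3}
non_functional = {"documentation": 5, "usability": 2, "security": 4}

def average(scores: dict) -> float:
    return sum(scores.values()) / len(scores)

# Equal weighting: both halves contribute 50% to the end result.
total = 0.5 * average(functional) + 0.5 * average(non_functional)
print(f"functional: {average(functional):.2f}, "
      f"non-functional: {average(non_functional):.2f}, total: {total:.2f}")
```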

After all the testing has been conducted, a purchase decision can be made based on the results.

A few tips and tricks from daily testing:
  • Attend conferences and trade fairs to get a good idea of the security market and collect business cards. (But don’t get hyped or too deeply involved too early.)
  • Get an overview of products already deployed in your organization, and see if there are synergies that could be used.
  • Use test-management tools to track testing progress. Most of them are tailored towards software development but can also be used for evaluation purposes.

TL;DR

Don’t get pushed into an assessment by vendors; only evaluate if it makes sense. Take your time. Watch non-functional aspects. Testing pays!

Who we are
The “Technology Scouting & Evaluation” (TSE) service identifies and evaluates promising IT security solutions. With this service, DCSO supports companies in staying ahead of a dynamic and ever-changing market. The centralized and unbiased evaluation process is supplemented with the experience of all community members.