
M&E Journal: Apples to Apples

The vendor selection process is something that organizations go through many times per year, each time they are looking to implement a new technology or service.

In a B2B environment, the process usually involves a lot of research, and can take months or even years.

The goal of the process is, of course, to find the most appropriate vendor for the solution being sought; the process itself consists of sequentially whittling down the list of potential suppliers until the most attractive one remains.

Are we sucking eggs yet, granny?

Throughout this whittling down, the buyer is making comparisons between potential solutions and suppliers, checking each supplier’s offering against a list of desirables, which might include certain features, functions, pricing, ease of implementation, support capabilities, sustainability credentials and many more. These lists are usually long and will, of course, differ depending on the technology or service the buyer is looking for.

Once you (the customer) have compiled the checklist, it should be a simple task to submit it to the various vendors you’ve shortlisted (this often forms the basis of a formal RFP process).

They supply the answers to your questions, you compare how well each performs, and select based on that. Simple, right?

Well, yes. But no. To be able to make an informed decision based on the appropriate data, you must be sure that the answers you’ve received from all vendors are responding to the same question, using the same metrics. This may sound obvious, but technology solutions are multi-faceted, and different solutions may take a different approach to solving a problem.

So, they may answer the questions positively (that is, they solve the problem you outline), but to reach that positive outcome, a vendor may be using a different metric, or a different scale, to another vendor.

As a content security example, let’s use take-down of pirated content.

Your RFP asks, “On average how many instances of pirated content that you discover does your solution disable/take down within 3 days?” (this is a simplified question for illustrative purposes).

Vendor A replies “100 percent.” Vendor B replies “16,000.” This might appear to be a tick in the box for Vendor A, of course, but how many streams has each vendor actually discovered and dealt with? Vendor A’s monitoring technology might only uncover 20 pirated streams per event and disable them all, whereas Vendor B might discover 20,000 and disable 16,000.

In the example above, the issue stems from the lack of a consistent, appropriate metric. This tells us that, wherever possible, it is important to quantify the desired outcome for a given feature or service, then make sure that all vendors are responding on the same scale, in the same unit of measurement.
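To make the point concrete, here is a minimal sketch (in Python, using the hypothetical figures from the illustration above) of how a buyer might put both responses onto a single metric, showing the take-down rate alongside the absolute volume discovered:

```python
# A minimal sketch with hypothetical figures: normalising vendor answers onto
# one metric, take-down rate = streams disabled / streams discovered.

vendor_responses = {
    # Both figures would need to be requested from each vendor; the numbers
    # here simply mirror the illustration in the text.
    "Vendor A": {"discovered": 20, "disabled": 20},          # quoted "100 percent"
    "Vendor B": {"discovered": 20_000, "disabled": 16_000},  # quoted "16,000"
}

for vendor, figures in vendor_responses.items():
    rate = figures["disabled"] / figures["discovered"]
    print(f"{vendor}: {figures['disabled']:,} of {figures['discovered']:,} "
          f"discovered streams disabled ({rate:.0%})")
```

Viewed this way, Vendor A’s “100 percent” covers only 20 streams, while Vendor B disables 80 percent of 20,000; neither figure on its own tells the whole story.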

Of course, the customer might not always be aware of what is the best metric to use for a given problem, and this is where we as vendors should be able to help.

While vendor honesty and transparency are crucial to helping the customer understand the full extent of their requirements, it goes without saying that vendors will try to frame their solution in the most positive light and may use the measurements that best enable them to do so. When in doubt, it’s essential that the customer consults a range of suppliers and uses the information they provide to form a judgement on the most suitable metric(s).

The slight fly in the ointment (or maggot in the apple, in our case) is that not all questions can be answered in easily measurable terms; they may not have an answer that can be given in a specific unit of measurement.

Some might require qualitative rather than quantitative data, for example when comparing reputations, or types of existing customers.

But even if an answer isn’t given in numbers, it should be able to be backed up with evidence — and it’s crucial that the customer does request access to this evidence as part of the process.

Far be it from me to suggest that vendors might sometimes embellish or overstate, but … suffice to say, it’s important to always check and validate what you’re being told.

SOME OF THE CONTENT SECURITY APPLES

So, apples to apples, like-for-like comparisons are critical.

For video content security solutions, what might some of those apples be? Below are some real-world examples that we’ve encountered at Friend MTS during RFP response processes in the past:

Robustness (in particular watermarking). The very value of the content protected by security solutions means that pirates will do whatever they can to disable or circumvent this protection, and therefore the security itself has to be robust against attacks.

When comparing robustness of, for example, watermarking, make sure that each vendor’s solutions protect against all forms of attack: do they all protect against manipulation of watermarked content? Are they all able to track across stream switching? Are they effective against collusion attacks?

Make sure that each vendor has a response to your full list of required protections.

Watermark extractions. There are many considerations around watermarking extraction — let’s look at a couple we encounter quite often. The first is around volume: how many extractions do you require per month?

This is a more complex question than it might seem, and the answer depends on the type of content you’re seeking to protect, the value of that content (which impacts the amount of piracy you’re likely to encounter) and the quantity and type of your distribution platforms.

Low numbers of extractions per month (double digits) might be suitable for, say, production content (screeners, etc.) that is not likely to suffer significant source leaks. But we’ve seen vendors quote similar numbers for live sports environments, which, in our extensive experience, can require hundreds of thousands or even millions of extractions per month.

The second consideration in watermark extraction is duration. How long does it take to extract the watermark? This varies hugely with the different implementations of watermarking (e.g., A/B variant, client-side, client-composited), and understanding which will be most effective for your content, in your environment, is crucial.

Likewise, the result of each vendor’s extraction process is important: rapid extraction of a watermark is great, but is that extracted information viable enough to be used for subsequent remedial action?

Again, we’ve seen vendors quote rapid extraction times, but the extraction is of insufficient quality to be of any further use.

Does it work, and does it work at scale? This question doesn’t just apply to content security, it’s a question we all need to ask about any solution we implement.

Marketing materials and presentations are one thing, but they don’t necessarily reflect usage.

Is the solution already deployed at one or more customer sites? And are there customers using this solution who are of similar scale and scope to you, the buyer?

Ask your vendors for reference customers that you can contact independently to see how they use a particular solution.

Monitoring – automated or manual? And what do I get for my money? Many companies offer content monitoring, and customers may also implement their own in-house monitoring systems.

But the methods a vendor uses to search for and locate infringing content may differ. Some are more automated than others, using technology to search and identify; others rely very much on manual methods, with large teams eyeballing content.

Each has its pros and cons, but to make a fair comparison you should have a clear picture of the types of processes that vendors use.

In a similar fashion, the output from these monitoring processes may also be different.

For a given price, how much feedback do you get? Will you receive regular reports? A dedicated resource to help you derive business intelligence from the results? If a vendor comes in with a higher price than another, it might be that they offer a more comprehensive service.

Pricing. Finally, pricing.

All the examples set out above will influence pricing. The more functions or services a vendor provides, generally the more expensive it’s likely to be. This doesn’t necessarily mean that it’s the right solution for you, but knowing what you’re getting for your money is key.

And what about discounts? We are all very fond of those, for sure.

Are any services that you’re investigating bundled as part of other services with a commensurate reduction in price? That’s great of course, but if you’re comparing this against a vendor who isn’t offering a bundled solution, make sure you split out the pricing for this component so that you can compare specifics.
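As a rough illustration only (all figures below are invented), the same apples-to-apples principle applies here: strip the comparison back to the component you actually need before lining vendors up side by side.

```python
# A rough sketch with invented figures: comparing a bundled quote against a
# standalone quote by splitting out the component being evaluated.

vendor_x_bundle_total = 250_000      # hypothetical annual price for the whole bundle
vendor_x_other_components = 190_000  # itemised price of everything else in the bundle
vendor_y_monitoring_only = 70_000    # hypothetical standalone price for the same component

# The like-for-like figure is the bundle total minus the other components.
vendor_x_monitoring_equivalent = vendor_x_bundle_total - vendor_x_other_components

print(f"Vendor X (split out of bundle): {vendor_x_monitoring_equivalent:,}")
print(f"Vendor Y (standalone):          {vendor_y_monitoring_only:,}")
```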

And it’s always worth asking, if a vendor is heavily discounting one component of a bundle to reduce the overall price, what does that say about how that vendor perceives the component?

These are just a few considerations in the weighing-up process.

There are a huge number of others, and this makes it a tricky and time-consuming exercise.

But these are generally expensive purchases, which affect operational margins, revenue, subscriber churn, brand value – in fact most aspects by which a business measures success.

So, it’s important to get it right, and this starts with knowing what you’re asking (and why) and understanding how the answers of each potential vendor relate to those of others.

When you’re shopping for apples, make sure you look closely inside every basket.

* By Nik Forman, Marketing Director, Friend MTS *
