
A CAPTCHA for Ads? Playable and Interactive Ads as a Turing Test

Ad fraud has been a pressing issue for advertisers and publishers for as long as ads have existed, and it features in most discussions about ad-tech in the media. But the news coverage of the last few months, specifically the reports of fake video ad views amounting to millions of dollars daily and of Uber suing Fetch over click fraud, has really brought the subject into the spotlight.

The change in the status quo has forced advertisers to do their homework and take action: using third-party solutions (such as Adjust’s fraud prevention suite), demanding more transparency from publishers, and generally asking far more questions about each traffic source.

On the surface, this seems like a positive change: the more awareness of ad fraud there is, the harder it is for fraud to go unnoticed. While that is true, a lot of confusion and misinformation is causing legitimate traffic sources and publishers to be falsely flagged as fraudulent. And while much of media buying is now programmatic and therefore more transparent, plenty of quality traffic still comes through manual buys from sources that aren’t as transparent.

Advertisers still want users from sources that aren’t fully transparent.

But they also want to make sure the traffic isn’t fraudulent. This leaves both advertisers and publishers in quite the pickle. I won’t pretend to know how to solve all of ad-tech’s problems, but I do have a proposition that could help both advertisers and the publishers that drive legitimate, quality traffic.

What if you could effectively determine whether a user is real at the ad level, regardless of where they were exposed to the ad, based on how they interact with it? You couldn’t do that with banners or videos, since clicks are easily fabricated. That’s where playable and interactive ads come in. We believe that playable ads can act as a seamless Turing test, like a ‘CAPTCHA’, but without any effect on the user experience.

To get through a playable or interactive ad, the user has to tap a series of buttons in an order that makes sense within the gameplay, and each one of those taps is logged, whether in a database or on a heat map. At Persona.ly, we have accumulated plenty of experience creating and running playable ads for our partnered advertisers; by monitoring these in-ad events we were able to optimize each ad’s flow, and we also noticed how easily they could be used to determine whether a user is real.
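
As a rough illustration of what that logging could look like on the client side of an HTML5 playable, here is a minimal sketch; the event names, payload shape, and collection endpoint are hypothetical, not our actual implementation.

```typescript
// A minimal sketch of client-side event logging inside an HTML5 playable.
// Names, payload shape, and the collection endpoint are hypothetical.
interface InAdEvent {
  sessionId: string;  // one id per playable-ad session
  source: string;     // traffic source the impression came from
  action: string;     // e.g. "tap_start_button", "swipe_board", "tap_cta"
  x: number;          // tap coordinates, useful for heat maps
  y: number;
  elapsedMs: number;  // time since the playable loaded
}

const loadedAt = Date.now();

function logEvent(sessionId: string, source: string, action: string, x: number, y: number): void {
  const event: InAdEvent = { sessionId, source, action, x, y, elapsedMs: Date.now() - loadedAt };
  // Fire-and-forget POST to a collection endpoint (hypothetical URL);
  // logging must never break the playable itself, so errors are swallowed.
  fetch("https://collector.example.com/in-ad-events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
    keepalive: true,
  }).catch(() => { /* ignore */ });
}

// Example call from the playable's tap handler:
// logEvent("a1b2c3", "network_42", "tap_start_button", 160, 420);
```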

By measuring the order of the events, the time between them, and their frequency, it’s rather easy to spot repeated, suspicious patterns. Advertisers already measure the time between the ad click and the install, which is a useful metric but not an accurate one on its own: some users are simply quicker than others, and some have a better connection, so flagging every quick install as fraud catches some of it but also falsely flags sources that aren’t fraudulent. Measuring an entire playable-ad session, consisting of several taps as well as other gestures such as swipes, can be exponentially more effective.
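
To make that concrete, here is a hedged sketch of the kind of per-session timing check such measurement enables; the thresholds are illustrative assumptions rather than tuned values from our systems.

```typescript
// A rough sketch of per-session timing checks over logged in-ad events.
// The 150 ms and 20 ms thresholds are illustrative assumptions, not tuned values.
interface LoggedEvent {
  action: string;
  elapsedMs: number; // time since the playable loaded
}

// Gaps between consecutive taps/gestures in one session.
function gaps(events: LoggedEvent[]): number[] {
  return events.slice(1).map((e, i) => e.elapsedMs - events[i].elapsedMs);
}

// In a guided flow, sub-150 ms gaps between distinct required actions are unusual,
// and human timing is never metronome-perfect, so near-zero variance across a
// whole session is suspicious.
function looksAutomated(events: LoggedEvent[]): boolean {
  const g = gaps(events);
  if (g.length < 3) return false; // too little signal to judge
  const tooFast = g.some((ms) => ms < 150);
  const mean = g.reduce((a, b) => a + b, 0) / g.length;
  const variance = g.reduce((a, b) => a + (b - mean) ** 2, 0) / g.length;
  const tooRegular = Math.sqrt(variance) < 20;
  return tooFast || tooRegular;
}
```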

For example, check out the playable ad we created for Netmarble’s Travelling Millionaire:

Like the actual game, the ad offers the user multiple choices, each with different consequences. Theoretically, if we ran the ad and saw that a high percentage of players from a specific traffic source made the exact same choices, our systems would automatically flag that source as suspicious. We would then evaluate the rest of the data to see whether there were other reasons to suspect it, and perhaps even change the flow to see whether players from the suspected source would still finish it.
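
A simplified sketch of that per-source check might look like the following; the choice identifiers, the 30% identical-path threshold, and the minimum sample size are all made up for illustration.

```typescript
// Flags sources where too many sessions follow the exact same choice path.
interface ChoiceSession {
  source: string;
  choices: string[]; // e.g. ["buy_hotel", "skip_event", "roll_again"] (hypothetical ids)
}

function flagSuspiciousSources(
  sessions: ChoiceSession[],
  minSessions = 200,
  maxIdenticalShare = 0.3
): string[] {
  const pathCounts = new Map<string, Map<string, number>>(); // source -> choice path -> count
  const totals = new Map<string, number>();

  for (const s of sessions) {
    const path = s.choices.join(">");
    totals.set(s.source, (totals.get(s.source) ?? 0) + 1);
    const counts = pathCounts.get(s.source) ?? new Map<string, number>();
    counts.set(path, (counts.get(path) ?? 0) + 1);
    pathCounts.set(s.source, counts);
  }

  const flagged: string[] = [];
  pathCounts.forEach((counts, source) => {
    const total = totals.get(source) ?? 0;
    if (total < minSessions) return; // not enough sessions to judge
    const mostCommonPath = Math.max(...Array.from(counts.values()));
    if (mostCommonPath / total > maxIdenticalShare) flagged.push(source);
  });
  return flagged;
}
```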

In this day and age, when hacking and fraud methods are sometimes more advanced than the apps and tracking solutions themselves, you can’t take too many precautions, and you must always try to stay at least one step ahead of the curve; we, personally, recommend being at least two steps ahead.

As mentioned earlier, we optimize the flow of our playable ads to increase the number of users who finish it (which naturally also increases the conversion rate). This optimization includes making significant changes to the ad’s flow: the order of the actions the user needs to take, the amount of time required to finish the ad, and so on.

But can’t bots learn how to play these games and appear to be more human?

Assume some hoodie-wearing, malicious hacker somewhere managed to create a bot that follows the ad’s flow. That is highly improbable in the first place, since it would be extremely hard and time-consuming compared to simply faking clicks on video ads. Even then, making these flow changes on the fly, and in some cases in real time, helps confirm that no fraudulent clicks are getting through, because users still acting in accordance with the old flow are easy to detect. In some of the game types we have created playables for, such as match-3, the pattern required to “solve” the playable is randomly generated, which makes building a bot that could solve it even less likely.
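
As a hedged sketch of the “old flow” part of that argument, one could tag every playable build with a flow version and flag sessions whose action order matches a retired version; the flow definitions and action names below are hypothetical.

```typescript
// Each playable build carries a flow version; sessions whose action order matches
// a retired flow get flagged. Flow definitions and action names are hypothetical.
const FLOWS: Record<string, string[]> = {
  v1: ["tap_start", "swipe_board", "tap_collect", "tap_cta"], // retired flow
  v2: ["tap_start", "tap_collect", "swipe_board", "tap_cta"], // flow currently being served
};

const CURRENT_FLOW = "v2";

function followsRetiredFlow(actions: string[]): boolean {
  return Object.entries(FLOWS).some(
    ([version, flow]) =>
      version !== CURRENT_FLOW &&
      flow.length === actions.length &&
      flow.every((action, i) => action === actions[i])
  );
}

// A bot scripted against v1 will keep replaying the v1 order and be caught here,
// while real users naturally follow whatever flow is currently live.
```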

Objectively speaking, ad fraud is always going to be something advertisers and publishers have to contend with: as our tools for detection and prevention improve, so does the technology on the fraudsters’ side. We believe that verifying a user is human based on their interaction with the ad is an approach that anyone with the technology to run playable ads and measure in-ad events could use to vet their traffic sources.


Feel free to contact us at info@persona.ly if you’re looking for interactive monetization solutions for your apps or quality user acquisition. We’d love to connect.
