Allow the BeOp Crawler
The BeOp semantic engine uses the page contents to refine ad targeting, so that it serves the most relevant creatives in a given context.
Our bot, which we call the BeOp Bot, fetches your pages for that purpose.
# Updating your robots.txt
If you don't specify anything particular in your robots.txt file, things should work by default. If you use a restrictive robots.txt, you might need to add the following lines to the file:
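A minimal sketch of such an entry, assuming the bot honors the `BeOpBot` token from its user agent string:

```
User-agent: BeOpBot
Allow: /
```

Place this block alongside your existing rules; per the robots.txt convention, a more specific `User-agent` group takes precedence over the `*` group for that bot.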
# Allowing access through paywalls
As with other bots, giving BeOp access to the full page contents helps it make the most relevant ad matches, as well as better matches for your own editorial campaigns.
You can add our bot's user agent string to your filters so that it can read the full page contents:
```
Mozilla/5.0 (compatible; BeOpBot/VERSION)
```
If that helps you, here's a regular expression to match it:
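As an illustrative sketch (not an official BeOp-provided pattern), a simple regular expression that matches the `BeOpBot/VERSION` token in the user agent string could look like this:

```python
import re

# Hypothetical pattern: matches the "BeOpBot/<version>" token found in
# "Mozilla/5.0 (compatible; BeOpBot/VERSION)". Adjust to taste if your
# filtering layer expects a stricter or looser match.
BEOP_BOT_RE = re.compile(r"BeOpBot/[\w.]+")

def is_beop_bot(user_agent: str) -> bool:
    """Return True when the user agent string belongs to the BeOp bot."""
    return BEOP_BOT_RE.search(user_agent) is not None

print(is_beop_bot("Mozilla/5.0 (compatible; BeOpBot/1.0)"))  # True
print(is_beop_bot("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))  # False
```

You would typically run a check like this in the paywall layer, serving the full page contents when it returns `True`.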
With all that configured, everything should work nicely!