Why do adserver and web analytics stats differ so much?

May 9th, 2011

We decided to post this article, written by the Head of The Performance Network, Jaysen Juplessis, as we found clients were asking us the common question:

Why do adserver and web analytics stats differ so much?

As long as there are different measuring systems there will always be different figures. The key, however, is understanding why the differences exist and at what point they become unacceptable. The reasons and typical discrepancies below should give you a benchmark to compare against, and perhaps help you set up some business rules: you agree to accept a stated margin of difference, and anything over that margin gets reviewed.
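
As an illustration of that kind of business rule, here is a minimal sketch in Python; the campaign figures and the 20% tolerance are assumptions rather than recommendations, so substitute whatever margin you and your partners have agreed on.

# Hypothetical discrepancy check: flag campaigns whose two reported
# counts (e.g. publisher vs advertiser ad server) differ by more than
# an agreed tolerance. All numbers here are illustrative.
ACCEPTABLE_DISCREPANCY = 0.20  # accept up to a 20% difference

campaigns = [
    # (campaign, publisher-reported impressions, advertiser-reported impressions)
    ("Autumn brand push", 120_000, 104_500),
    ("Winter retargeting", 80_000, 52_000),
]

for name, publisher_count, advertiser_count in campaigns:
    discrepancy = abs(publisher_count - advertiser_count) / publisher_count
    status = "OK" if discrepancy <= ACCEPTABLE_DISCREPANCY else "REVIEW"
    print(f"{name}: {discrepancy:.1%} difference -> {status}")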

Adserver vs Adserver

If you are a media buyer you are well aware that a website's or network's impressions and clicks are higher than your ad server's. The difference normally ranges from 10% to 20%, and the more ad servers there are in the chain, the higher the discrepancy. For instance, if the advertiser serves their banners through DoubleClick and books the campaign on a network, the network runs on another ad server such as OpenX, and the network then places the banners on websites that use their own systems, DFP or Atlas, you end up very far removed from the original impression count. In a chain like that you could be as far out as 30%.
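
To see why the numbers drift so far, here is a rough, illustrative calculation (the per-hop loss figures are assumptions, not measurements): if each extra ad server in the chain records somewhat fewer events than the one before it, two or three hand-offs quickly push the end-to-end difference towards 30%.

# Illustrative only: compound a per-hop discrepancy across a serving chain.
def end_to_end_discrepancy(per_hop_loss, hops):
    """Fraction of the original count lost after `hops` hand-offs."""
    return 1 - (1 - per_hop_loss) ** hops

for loss in (0.10, 0.15, 0.20):
    for hops in (1, 2, 3):
        print(f"{loss:.0%} per hop x {hops} hop(s): "
              f"{end_to_end_discrepancy(loss, hops):.0%} end to end")
# e.g. a 15% loss per hop over 2 hops is roughly a 28% end-to-end difference.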

Below are some of the main reasons why those discrepancies exist:

• Impression definitions: Publishers count an ad when it is requested; advertisers count it when it is displayed.

• Creative load times: Large creatives take a long time to load, resulting in differences in impression counts.

• Latency: Any lag between the ad request and the display of the ad can create differences in counts; the user may navigate away before seeing the ad or page.

• Network connection and server reliability: An ad server may fail briefly, not receive a connection, or encounter an issue while logging a request, resulting in different counts.

• Ad blockers: Publishers issue an ad request, but the ad is prevented from being displayed by an ad blocker.

• Caching: A creative may be cached in the browser or on a proxy server; no ad request is seen by the advertiser's ad server, which results in impression count differences.

• Trafficking errors: An ad tag may be implemented incorrectly so that one ad server is able to see the impressions and clicks while another server doesn’t (or only receives a subset of the statistics).

• Frequency capping: An advertiser’s frequency cap could prevent an ad request from being filled, which may cause different impression counts.

• Timing differences: Ad servers may operate on different time intervals or time zones, which results in temporal differences.

• Spam filtering: Ad servers may filter out spam impressions and clicks, impressions from robots and spiders, back-to-back clicks, and other activity. These filtering technologies are implemented in different ways; some servers are more or less aggressive in their filtering, which results in spam and click count differences (a minimal sketch of such a filter follows this list).
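
As an illustration of the spam-filtering point only, here is a minimal sketch of a filter that discards back-to-back clicks from the same user; the 30-second window and the rule itself are assumptions, not any particular vendor's algorithm, which is exactly why counts differ between servers.

# Illustrative click filter: drop repeat clicks from the same user that
# arrive within a short window. The window length is an assumption.
from datetime import datetime, timedelta

DEDUP_WINDOW = timedelta(seconds=30)

clicks = [
    ("user-1", datetime(2011, 5, 9, 10, 0, 0)),
    ("user-1", datetime(2011, 5, 9, 10, 0, 5)),   # back-to-back: filtered out
    ("user-2", datetime(2011, 5, 9, 10, 1, 0)),
    ("user-1", datetime(2011, 5, 9, 10, 5, 0)),   # outside the window: counted
]

last_click = {}
counted = 0
for user, ts in clicks:
    if user in last_click and ts - last_click[user] < DEDUP_WINDOW:
        continue  # treated as a duplicate click and not counted
    last_click[user] = ts
    counted += 1

print(f"{counted} of {len(clicks)} clicks counted after filtering")  # 3 of 4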

Google Analytics vs Adserver

As mentioned above, different systems measure impressions and clicks differently. There is almost always a discrepancy between ad serving and web analytics because they are two different platforms that measure data in different ways. In my experience the web analytics figures are often lower than the ad server's. Discrepancies are okay if they are consistent over time (“consistently inconsistent”). It helps to know how an ad server “defines” a set of data versus how your web analytics platform does. For example, a “visit” may be defined differently by DART (clicks) and Omniture (visits to an actual web page): a user could click on an ad a few times and repeatedly visit your site, yet the web analytics platform would count this as one visit.
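
To make the clicks-versus-visits point concrete, here is a minimal sketch of how a web analytics platform might collapse several ad clicks from one user into a single visit; the 30-minute session timeout is a common convention, used here as an assumption.

# Illustrative sessionisation: the ad server reports every click, while
# analytics groups hits within a session window into one "visit".
from datetime import datetime, timedelta

SESSION_TIMEOUT = timedelta(minutes=30)

# One user clicking the same ad three times in quick succession.
click_times = [
    datetime(2011, 5, 9, 14, 0),
    datetime(2011, 5, 9, 14, 10),
    datetime(2011, 5, 9, 14, 20),
]

visits = 0
last_hit = None
for ts in sorted(click_times):
    if last_hit is None or ts - last_hit > SESSION_TIMEOUT:
        visits += 1  # a new session (visit) starts
    last_hit = ts

print(f"Ad server clicks: {len(click_times)}, analytics visits: {visits}")
# -> 3 clicks but only 1 visit, so the two reports legitimately differ.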

Here are some of the reasons for those discrepancies:

• Ad servers report clicks that result in a redirect to a web page. There is no guarantee that the visitor actually makes it to that page or isn't redirected further.

• The statistics are affected by users who close the browser after clicking an ad, by hijacking (toolbars that redirect traffic), by bots, and in some cases by an ad server that times out. Ad servers accurately measure ad displays and clicks; they are not so accurate at telling you how many people visited a website.

• A log analyser reports on pages served by a web server; it doesn't see pages served from caching proxies used by ISPs, nor pages served from a browser's cache. Log analysers accurately report server activity and nothing else.

• JavaScript-based metrics (like Google Analytics): these report accurately only if the end user has JavaScript enabled and no software that blocks your tracker (7-15% of computers have such blocking, depending on whose figures you use). JavaScript-based metrics therefore tell you, within roughly 7-15%, which pages have been viewed.

• Coding errors on the website: checking web analytics tags is laborious, which means it is easy to miss something. You can use this tool to check tags: http://wasp.immeria.net/ (a simple scripted check is also sketched below).
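
For illustration only, here is a quick scripted check for pages missing a classic Google Analytics tag; the URLs and the tag signatures searched for are assumptions, and this is no substitute for a proper tag auditing tool such as the one linked above.

# Illustrative check for pages missing a web analytics tag. Adjust the
# page list and the snippets you search for to your own site and tag.
import urllib.request

PAGES = [
    "http://www.example.com/",
    "http://www.example.com/products",
]

# Classic (ga.js-era) Google Analytics pages typically reference ga.js
# or the _gaq queue; we simply look for either string in the page source.
TAG_SIGNATURES = ("google-analytics.com/ga.js", "_gaq")

for url in PAGES:
    try:
        html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
    except OSError as exc:
        print(f"{url}: could not fetch page ({exc})")
        continue
    tagged = any(sig in html for sig in TAG_SIGNATURES)
    print(f"{url}: {'tag found' if tagged else 'TAG MISSING - investigate'}")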

Get in touch with our team for more advice on display media and best practice.

Australia: experts@3dinteractive.com.au

New Zealand: enquiry@3dinteractive.co.nz

The Performance Network (TPN) is a premium performance-based advertising network offering key benefits to many of Australia's and New Zealand's leading advertisers and publishers. TPN offers a wide range of CPC (Cost per Click) and CPA (Cost per Acquisition) opportunities across display, email, links and co-registration media placements. Advertiser and Publisher opportunities with TPN are available exclusively from 3dinteractive.

NEW: 10 Second Survey

April 10th, 2011

Ten Second Survey

This month Great Sites, via its research offering Great Research, launched a new weekly survey: the “Ten Second Survey”.

The survey asks members of the popular portal short topical or interesting questions, resulting in one-off polls and surveys which are then published on the site and via social media channels. This activity is part of Great Sites' ongoing mission to engage with its members in fun and rewarding ways. Thousands of members have already taken part in these surveys and polls and have been rewarded with extra chances to win Great Sites' grand prize of $30,000 and additional monthly prizes.

The Ten Second Survey can be utilised by advertisers in a range of innovative ways to meet various marketing objectives. The only rule is that members should be able to read and answer the question(s) in around ten seconds.

Advertiser options are listed below:

1) Poll Campaigns: Members are invited to answer a short Great Research poll on a question relevant to the advertiser's products or services; the advertiser then sends their solus EDM campaign to the respondents who answered in a certain way. Pricing is typically at a premium CPM rate for each pre-qualified member targeted by the main EDM.

2) Poll Research: Marketers and researchers who want quick answers to specific, simple questions can use the Ten Second Survey to put those questions to a ‘general population’ normalised panel of members. It's a fast and simple way to get insights into what the people of Australia think about a particular issue. Pricing is typically on a per-respondent (cost per interview) basis.

3) Link Bait Content Generation: In the world of SEO (search engine optimisation), links are highly prized as they increase an advertiser's site's search engine rankings for desirable keywords. One popular method for generating links is to produce interesting and topical articles and blog posts that other sites and blogs link to. This viral form of marketing is effective but requires great content.

The Ten Second Survey is one way of quickly generating content that can be published and blogged. Results are broken down by key demographic groups, offering additional insights and opportunities to generate catchy headlines. Pricing is normally a fixed price (1,000 respondents is recommended).

To find out more about these options contact 3di.

Examples of the invites and survey are below:

Images: on-site invite (Great Research), email invite, and the Ten Second Survey.

Great Research is Hard to Find

March 30th, 2011


Great Research (exclusive to 3di) operates a research panel across Australia, New Zealand and the UK, with over 350,000 active members in 2010 (AU=170,000, NZ=126,000 & UK=54,000). The panel is used by market researchers to carry out comprehensive research projects.

Online Sample

Great Research sends out emails to a target group of its members (a profile) and invites them to participate in the research project, normally an online survey run by the market researcher. A wide range of projects can be accommodated, and the exact structure of each research project is devised in consultation with the Great Research Manager. Prize draws are used as an incentive.
Pricing is normally on a CPM (cost per invitation) or Cost per interview (CPI) basis.

Monitors
These are ongoing research projects run at regular sample intervals to help the market researcher monitor metrics relating to their client over time. Metrics can include things like brand awareness and purchase preference. Monthly monitors are typical, with a small sample group of members across defined demographics responding to each sample. Normally the number and constitution of each sample is specified by the researcher. Prize draws are used as an incentive.
Pricing is normally on a cost per sample and cost per interview basis with a 12-month contract.

Top Ups
Many clients have access to their own online panel; however, they often find it is not possible to obtain enough responses from all the demographic groups required by the study. Great Research is then called in to help complete the research project by filling in the gaps: we send research invitations to the missing demographics. A fast turnaround is available to meet the most urgent requests. Prize draws are used as an incentive.
Pricing is normally on a CPM (cost per invitation) basis.

Hosted Interviews (Surveys)
For researchers and direct clients who are interested in a completely outsourced survey, Great Research offers a hosted survey solution where an online survey is created based on questions supplied by the client and invitations are sent out to a group of members. The survey results are supplied in an Excel spreadsheet along with the demographic profile of each respondent. Prize draws are used as an incentive.
Pricing is normally on a cost per interview basis; question writing is available on request.

“The 10 Second Survey”
Every week Great Research sends its members a topical or point-of-interest poll or survey. The intention is to engage with members in a light-hearted but interesting way and to generate stats that can be used to create catchy public opinion headlines and articles. The results are published on the Great Research site, but publishing rights are also given to sponsors, who use the catchy headlines and articles as “link bait” on their own sites to support their SEO link building strategy. Link baiting is normally managed by First Rate, the search marketing agency in the Q Ltd group (Q Ltd also owns Great Research). Prize draws are used as an incentive.
Pricing is per “10 Second Survey”; each survey runs for one week and includes email invites.

Panel Building
Via its partner www.TPN.com.au, Great Research offers panel and fieldwork providers the ability to build and recruit for panels. TPN offers a comprehensive performance-based Lead Generation service that uses multiple channels to drive panel joins. Incentives are usually handled by the client.
Pricing is normally per panel recruit (CPA).

Focus Group Invites
Researchers who are interested in speaking to a select group of people can use Great Research to recruit for their focus groups. Our panel members have agreed to be contacted by email, phone and SMS, meaning we can invite them to, and remind them of, upcoming focus groups they have signed up for. We can target by postcode.
Pricing is normally on a cost per invitation (CPM) or Cost per interview (CPI) basis.

Free Sample Giveaways (IHUT or in-home usage tests)
Fast-moving consumer goods companies, or researchers interested in consumer feedback on consumables, send their product to our members and ask them questions after they have used the product. Great Research is in a great position to handle these products because we know what kinds of products people use (lifestyle data) and hold validated addresses for our members.
Pricing is normally on a cost per invitation (CPM) or Cost per interview (CPI) basis.

Watch and Answer (show TVC and ask questions)
Companies and researchers interested in measuring the effectiveness of their television commercials (TVC) use Great Research to show a TVC to our members and then ask questions about the commercial.
Pricing is on a per completed view basis.

Telephone Surveys (data)
Great Research also collects phone and mobile numbers from its panel members and can provide this data to researchers wishing to carry out phone interviews. Targeted phone number lists are generated using Great Research's detailed member demographic, interest and intent profile data, providing the best possible response rates for the phone interviewer.
Pricing is on a Cost per record basis.

Australia: 1300 806 986 or Email
New Zealand: 09 920 1755 or Email