Continuing my research into the effectiveness of post-view (view-through) advertising, I put this question to several international specialists. Here are some of the responses received so far:
I have to share my bias: I think view through is a useless metric cooked up by sellers of advertising to show some value from the billions of banner ads that are ignored (or blocked by people like me who use AdBlock).
All studies that I have seen that show "value" from "exposure" come from ad agencies, and they all tend to be one-time, tightly controlled studies (because there is really no other way to do 'em).
It is possible that there is some value in "ad exposure", but there are many ways to get concrete metrics that measure outcomes from marketing campaigns, and I prefer to rely on those.
See the thoughts outlined in my "engagement is an excuse" post; the same points apply here.
I think you make excellent points. However, we are always zeroing in on a shifting target, aren't we? For instance, I have run huge media blitzes for companies and managed affiliate networks of tens of thousands of affiliates. In the early days, an affiliate would often run a test and fail to track a conversion properly because the buyer already had a cookie on their computer from another affiliate, who would then be credited with the sale. If you have an interest in a niche, you tend to surf in and around many related sites, and often all or many of those sites are displaying the same ad. So… who gets credit?
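To make that credit question concrete, here is a rough sketch of the two most common cookie-attribution rules. Everything in it is hypothetical (the partner names, timestamps, and function names are illustrative, not from any respondent); the point is only that the same purchase gets credited to different affiliates depending on which rule you pick:

```python
from datetime import datetime

# Hypothetical touch log: each record is a cookie dropped by an
# affiliate's ad or link before the eventual purchase.
touches = [
    {"partner": "affiliate_A", "time": datetime(2009, 5, 1, 10, 0)},
    {"partner": "affiliate_B", "time": datetime(2009, 5, 2, 14, 30)},
    {"partner": "affiliate_C", "time": datetime(2009, 5, 2, 18, 45)},
]

def last_touch(touches):
    """Credit the most recent touch before conversion (last-cookie-wins)."""
    return max(touches, key=lambda t: t["time"])["partner"]

def first_touch(touches):
    """Credit the earliest touch (first-cookie-wins)."""
    return min(touches, key=lambda t: t["time"])["partner"]

print(last_touch(touches))   # affiliate_C gets the sale
print(first_touch(touches))  # affiliate_A gets the same sale
```

Neither rule is "correct"; each simply encodes a different assumption about which exposure caused the purchase, which is exactly the ambiguity described above.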
I think that banner blindness tends to evaporate when you are interested in a subject. It's like becoming a new parent and suddenly noticing how many other people are out pushing strollers, or noticing all the bike shops in your neighborhood when you are in the market for a new bike.
All of these measures are such poor proxies of reality, and all of our approximations and assumptions are generally either marginally or grossly wrong. However, the question is, can they provide us with insights and help us improve our performance? I think the answer is yes, as long as we don't jump to conclusions, and understand the inherent flaws in the model.
To me, that is the most important factor: educating users of performance metrics to understand the inherent flaws in the system, and systematically hunting down those flaws. That requires that we never make assumptions or conclude that there is "standard wisdom".
I have worked recently with companies that offer to design post-impression ad campaigns. The model is: they write a cookie on the computer of people who visit the site but do not make a purchase, then serve ads for the product on other network sites, and take a percentage of revenue if the person later returns to make a purchase. Would that person have made the purchase anyway? It is all arguable. The efficacy of the campaign is in the details, and that requires granular tracking: being able to follow the individual, defining the characteristics of an acquisition, and extensive cross-checking.
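A minimal sketch of the billing logic that model implies, assuming a 30-day lookback window (the window length, function name, and click-exclusion rule are my illustrative assumptions, not the vendor's actual system):

```python
from datetime import datetime, timedelta

# Assumed lookback window: purchases more than 30 days after the
# impression are not credited to the campaign.
VIEW_THROUGH_WINDOW = timedelta(days=30)

def is_view_through_conversion(impression_time, purchase_time,
                               clicked_ad=False):
    """Count a sale as post-impression (view-through) only if the user
    saw but did not click the ad, and purchased within the window."""
    if clicked_ad:
        return False  # a click-through, credited under a different rule
    elapsed = purchase_time - impression_time
    return timedelta(0) <= elapsed <= VIEW_THROUGH_WINDOW

seen = datetime(2009, 5, 1, 9, 0)
bought = datetime(2009, 5, 20, 16, 0)
print(is_view_through_conversion(seen, bought))  # True: inside the window
```

Note what the sketch cannot answer: whether the purchase would have happened without the impression. The rule only tests timing, which is why the counterfactual question above remains arguable.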
I think the greatest danger of these stats being used as a billing model is that they are often skewed to overreport the relationship, counting on the client's lack of understanding or attention to the exceptions. If such measures are generated and used internally, with a clear understanding that they are descriptive (suggesting possible relationships) rather than predictive (useful for predicting specific outcomes), they can be extremely useful in suggesting strategic opportunities… but not as a means of allocating revenue.
Thanks for including me in this discussion. I think I am going to go and write a piece about descriptive versus predictive modelling.
I'll look forward to hearing your thoughts. I hope I didn't ramble too much.