
Analytics

The short answer is yes – the product/team will definitely benefit from having web/app analytics tracking as part of the definition of done (DoD).

A separate analytics tracking story should typically only be written and played in one of two scenarios:

  1. There’s no existing analytics tracking, so there’s tracking debt to deal with including the initial API integration
  2. A migration from one analytics provider to another

Baking analytics/tracking into the actual feature acceptance criteria/DoD is super important for three reasons:

  1. It doesn’t get forgotten
  2. It forces analytics tracking to be included in the MVP/each product iteration by default
  3. It drives home that having tracking attached to a feature before it goes live is just as important as QAing, load testing, regression testing or code reviews

Unless you can measure the impact of a feature, it's hard to celebrate success, prove the hypothesis, or know whether it delivered the expected outcome or any business value. The purpose of product development isn't to deliver stories or points; it's to deliver outcomes.

Having a data-driven strategy isn't the future, it's now: the advertising industry adopted this analytics tracking philosophy over two decades ago. Including analytics tracking within the DoD will only help set the product/team in the right direction.

Velocity

Velocity = the projected number of story points a team can burn over a set period

A development team's velocity under Scrum or Kanban can be worked out by totalling the points burned across 3-5 sprints/set periods and dividing by the number of periods, i.e. taking an average across the periods.

It's important to use an average across the last 3-5 periods so that holiday seasons, or a sprint where items rolled over into the following sprint, don't skew the numbers as dramatically as they would if you only looked at the last period.
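The averaging described above amounts to a couple of lines of code. A minimal sketch (the sprint totals here are made up for illustration):

```python
def average_velocity(points_per_period):
    """Average story points burned per sprint/period across recent periods."""
    if not points_per_period:
        raise ValueError("need at least one completed period")
    return sum(points_per_period) / len(points_per_period)

# Last five sprints, including a holiday sprint (8 points) that would
# badly skew the estimate if you only looked at a single period:
recent_sprints = [21, 18, 8, 20, 23]
print(average_velocity(recent_sprints))  # 18.0
```

Note how the holiday sprint drags one period down to 8 points, yet the five-period average still lands at a usable 18 points per sprint.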

A team can use their velocity in many ways, for example:

  • Understanding how many points they can commit to during sprint planning, i.e. how many PBIs (Product Backlog Items) could be done across the next two weeks
  • Aiding prioritisation (the 'I' in ROI)
  • Predicting when software in the backlog can be delivered, which can then be used to forecast future feature delivery
  • Understanding the impact of any resourcing changes, e.g. Scrum team member changes or adding extra teams to the product
  • Understanding the impact dependencies are having, which can be reviewed in the retro, a great example being build pipelines
  • Providing a more accurate estimate than a t-shirt size
  • As a KPI for efficiency improvements

I tend to refer to points being ‘burned’ rather than ‘delivered’ because it’s quite easy to fall into the velocity/story point delivery trap of obsessing about points being delivered rather than obsessing about delivering outcomes (business value).

There are so many awesome ideas from so many people to improve the product, but it will always be impossible to fulfil every desire within a time frame acceptable to stakeholders, making prioritisation not only challenging but extremely important.

Process, data, collaboration and determination can certainly make prioritisation more effective and smoother, so let's look at these areas in more detail:

Process: Holding the status of projects, where product requests/bugs sit in the pecking order, ETAs on delivery, investment cost and the projected value of projects in a transparent way will reduce the communication overhead and help maintain trust.

Data: To ensure that high-value items are being worked on, you need data to back up assumptions. It can be easy to flap and make a problem out to be bigger than it is in order to get it done, but there should always be some kind of data behind it. Examples include incremental revenue, which can be reverse-engineered from retention uplift rates, or projected acquisition volume increases using ARPU. Other ways of projecting value or determining the scale of a problem include customer support queries and customer feedback, site loading times, and efficiency in terms of £££ savings, e.g. man hours/days or software costs.
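The kinds of back-of-the-envelope value projections mentioned above can be sketched as simple arithmetic. The helper names and figures below are hypothetical, purely to show the shape of the calculation:

```python
def incremental_revenue_from_acquisitions(extra_acquisitions, arpu):
    """Projected incremental revenue from an expected uplift in acquisition volume."""
    return extra_acquisitions * arpu

def efficiency_saving(hours_saved_per_month, hourly_cost, months=12):
    """Projected efficiency value of a change, expressed as a man-hour saving."""
    return hours_saved_per_month * hourly_cost * months

# e.g. a change projected to add 500 customers at a £120 ARPU:
print(incremental_revenue_from_acquisitions(500, 120.0))  # 60000.0

# e.g. saving 20 man-hours a month at £40/hour over a year:
print(efficiency_saving(20, 40.0))  # 9600.0
```

Even crude numbers like these give the backlog item something concrete to be ranked against.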

Collaboration: Discussing value and priority options openly with your colleagues will help you deliver a product in a more confident and focused way. It's not easy making the big prioritisation decisions, because whatever sits at (or moves to) the top means the items below won't be done now, or perhaps anytime soon. Checking and agreeing on the focus/roadmap gives you the confidence to just get on with delivering a high-quality, high-value product without having to justify a decision you've made alone every minute of the day.

Determination: Prioritisation changes frequently if you work in an agile environment, so being positive and determined to deliver upcoming projects you’ve been discussing for months or even years helps to keep focus on delivering the key business goals and provides reminders that it’s still on the agenda, no matter the level of incoming bombshells / distractions.

If someone asks for something to be done urgently without providing any numbers for the projected value, or anything to give an idea of the scale of the problem you're looking to solve, then asking 'why do it?' or 'what happens if we don't do it in the next 12 months?' should quickly prompt the need to research the value further.

Projecting the investment cost and taking the time to dig into the real value a product change will make, in a collaborative way, will ensure that you're delivering frequent value to customers internally and externally in a happy, fun and relaxed environment.

It’s powerful, flexible, customisable, saves thousands of man hours, provides valuable customer insights / behaviour and most importantly ensures that you get a healthy ROI if used in the right way.

Meet The Brain: The Brain is MediaMath's proprietary algorithm; it ingests data (60 billion opportunities every day, to be exact) and makes decisions against that data.

The algorithm's left brain and right brain work together to analyse large pools of impressions, looking at dozens of user and media variables, to determine which impressions will best meet an advertiser's goal. The Brain values each impression based on its likelihood of driving an action, and bids accordingly.


It continues to disappoint me when I hear about large blue chip clients working on the default 30 day PV (post view) cookie window for display campaigns and then accepting 100% of the PV conversions. Not only this, but in most cases no viewability tech is being used.

When looking at your PV cookie window, it should typically be set to mirror what you have deemed to be the average consideration time to purchase, while also taking the ad format into account.

On the other hand, you want to avoid coming up with an arbitrary PV window which so many brands do.

Fortunately there is a robust way of finding out what percentage of PV conversions are genuine, which you can then use for future campaigns: a 'Placebo Test'. You run an A/B test with one of your creatives adserved alongside a charity creative. Post campaign, you subtract the in-view PV conversions the charity creative delivered (which are obviously not genuine) from the in-view PV conversions your brand creative delivered. The remainder are the in-view PV conversions you can class as genuine. Work out the genuine percentage, and you can then use it within the buying platform, so only the proportion proven genuine in the past will be accepted and attributed for current and future campaigns.
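The arithmetic of the test is simple subtraction and a ratio. A minimal sketch (the function name and conversion figures are illustrative, not from a real campaign):

```python
def genuine_pv_percentage(brand_inview_pv, charity_inview_pv):
    """Percentage of in-view post-view conversions that can be classed as genuine.

    Subtracts the placebo (charity creative) conversions from the brand
    creative's conversions; the remainder, as a share of the brand total,
    is the percentage to accept in the buying platform going forward.
    """
    if brand_inview_pv <= 0:
        return 0.0
    genuine = max(brand_inview_pv - charity_inview_pv, 0)
    return 100.0 * genuine / brand_inview_pv

# e.g. 400 in-view PV conversions against the brand creative vs. 300
# against the charity creative -> accept only 25% of PV conversions:
print(genuine_pv_percentage(400, 300))  # 25.0
```

If the charity creative delivers as many conversions as the brand creative, the genuine percentage drops to zero, which is exactly the "both lines are similar" failure case described below.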

Ideally you should expect the 'Placebo Test' results to look something like the below. If both lines are similar, the banners are not working on a brand basis and therefore don't offer any value beyond the click. 'Placebo' below refers to the charity creative.

[Chart: in-view PV conversions over time, brand creative vs. placebo (charity) creative]

Things to consider:

  1. You need around £10k of media investment
  2. Banners, incl. charity banners
  3. A partner, e.g. MediaMath, DBM or Media IQ
  4. Viewability tech, e.g. Spider.io
  5. You only have to run it once per product

Overvaluing a channel like display has two main consequences: 1. wasted marketing budget, as you could re-allocate some of the display budget to other better-performing channels; and 2. an algorithm optimising on bad data will only aim to optimise towards that bad data even more.

On the subject of display wastage, I recently worked with Exchange Wire on an article about my frustrations with DSPs not integrating with third-party viewability tech, and the impact this has.

If agencies and brands stop wasting marketing budget and run display campaigns as they should be done in a more genuine way, the channel will then get the respect it deserves.


Can we place a pixel across your whole site and we’ll give you free customer insights? Can we place a pixel on each stage of the user journey so that we can optimise towards all site traffic data?

These are two very common questions which originated from traditional ad networks and still live on, even though the technology has evolved.

If you asked marketers who they would target with advertising given no restrictions, it would no doubt be their competitors' customers.

I am fortunate enough to have bought display remarketing campaigns targeting competitor customers in the past. This was when I worked across the largest hotel chain in the UK at an ad agency via an ad network. That level of targeting, special offer creative and high frequency reaped rewards as you’d expect.

Marketers spend £millions a year on advertising, and driving quality traffic can be expensive, so the last thing they want is a competitor simply remarketing to all of the users who visit their site, whether through FBX or display.

Fortunately this can be avoided if marketing deploys a strict policy of only allowing media pixels to fire on an attributed basis. Yes, some partners might say they need all of the data to optimise, but when you weigh up value vs. risk, it's simply not worth it. Optimising on attributed traffic only is good enough for third-party ad partners.

On the analysis front, e.g. Google Analytics, Click Tale, Quantcast etc., it's a case of applying a bit of logic, experience and research when deploying tracking/pixels on site, so that your data will not be sold in a data exchange or given to a competitor for remarketing. When it comes to big blue-chip companies like Facebook, Adobe and Google, there's little need to hesitate over data security, because if it got out that they were selling your data it would be disastrous for them. Whereas the likes of Quantcast, who are very well known for giving you FREE customer insights just for placing a pixel across your whole site, are one of those cases where big red warning lights should appear: in this world nothing is really free, and the likes of Quantcast make money from using your data.

Having a strict cookie/tracking policy is safe and advisable; not having one could cause your market share to decrease as your competitors steal your customers.

You don’t walk across a busy road without looking in either direction so think twice before implementing code on your site.


With ad spend still over £15bn / year in the UK, there are a few digital suppliers and publishers who continue looking for the quick buck by cookie stuffing.

Worryingly, some marketing consultants and CMOs turn a blind eye, or knowingly use these dodgy practices to improve tracked marketing performance.

A few examples of cookie stuffing:

  • Managed-service media buyers who are told to run prospecting campaigns only, yet use remarketing aggressively to grab the last post-view conversion.
  • Suppliers popping banners across the net on a blank page to grab the last post-view conversion.
  • Publishers delivering multiple banners below the footer of a site to grab the last post-view conversion and generate more revenue for themselves.
  • Ad networks requesting a click tracker for a piece of copy and a logo, but then using the click command to pop the site, cookie bombing on a post-click basis.
  • Pop suppliers popping the site when people search for your brand on Google, dropping a cookie on someone who is just about to visit your site anyway.
  • Pop suppliers popping the site using a click tracker, thereby dropping a post-click cookie on the view – another form of cookie bombing.
  • Affiliates, who have an abundance of click trackers at their disposal where CTR doesn't get monitored, many of which are used to pop the site and cookie bomb on a post-click basis too.

These are just a few of the common practices which go on, but none of this helps the industry improve, is fair to the genuine suppliers who do things by the book, or helps advertisers grow volume incrementally.

Fortunately there are a few tech suppliers out there, such as Traffic Cake, who can at least help you identify whether traffic is showing a fraudulent pattern.

Agencies and marketing managers need a stricter policy on cookie stuffing so that it can finally be put to bed, along with the suppliers who do it.


Firstly you need access to a DSP and adserver container tags across your whole site. When implementing the container tags, it's essential to pass back as much data as possible through the custom variables, e.g. age, gender, bucket amount, revenue, customer loyalty type.

Container tags should be placed across every site/product page, then throughout the whole conversion funnel from the homepage to the sales thank-you page. A tag across the current-customer login section is also required.

Now it’s a case of building up your CRM database within the DSP. A pixel within the DSP represents a targeting segment to be included / excluded such as:

  • Main acquisition homepage
  • March 2013 microsite landing page
  • Car Insurance homepage
  • Home Insurance homepage
  • Quote confirmation page
  • Logged in
  • Deposit confirmation page
  • Business awards landing page
  • Affiliate landing page
  • CRM email – non converting
  • Males
  • Age 25-34
  • Gold customers

Once your pixels have been created in the DSP, it's a case of implementing them within the adserver container tags using the existing variables which have already been set up. This will allow you to set up basic scripts that conditionally fire the pixels matching each segment. To increase cookie volume, implement separate pixels across all of your CRM emails too.
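In practice the conditional firing lives inside the adserver container tag as JavaScript, but the matching logic itself is simple. The sketch below (with made-up variable names and segment pixel names) just illustrates the idea of mapping the container-tag variables to the DSP segment pixels that should fire:

```python
# Hypothetical rules mapping container-tag variables to DSP segment pixels.
# Each rule pairs a segment pixel name with a condition on the page's variables.
SEGMENT_RULES = [
    ("quote_confirmation", lambda v: v.get("page") == "quote_confirmation"),
    ("males",              lambda v: v.get("gender") == "m"),
    ("age_25_34",          lambda v: v.get("age") is not None and 25 <= v["age"] <= 34),
    ("gold_customers",     lambda v: v.get("loyalty_type") == "gold"),
]

def pixels_to_fire(variables):
    """Return the DSP segment pixels whose conditions match this page view."""
    return [name for name, rule in SEGMENT_RULES if rule(variables)]

view = {"page": "quote_confirmation", "gender": "m", "age": 29, "loyalty_type": "gold"}
print(pixels_to_fire(view))
# ['quote_confirmation', 'males', 'age_25_34', 'gold_customers']
```

One page view can legitimately fire several segment pixels at once, which is what lets a single container tag feed the whole segment list above.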

The tech part is out of the way; now you just need to set up all of the relevant strategies in the DSP, including/excluding the newly created CRM segments accordingly.

As new product pages, websites, microsites and CRM email campaigns get created, then the same process needs to take place in order to keep the cookie CRM database updated.

As the cookie database is held within a DSP such as MediaMath, you can deliver the CRM campaigns across ad exchanges, yield optimisers and FBX.

Winner

You’ve spent months working with the data team setting up all of the marketing data feeds for the DMP and now it’s a case of setting the briefs for multi and custom attribution models.

Last-click attribution is typically the default and the most common. It's not wrong to stick to a single model, and if there's no motivation to work with others, last click isn't a bad choice.

Viewing multiple custom attribution models gives you insight into the campaigns which are being undervalued, for example by contributing more higher up the funnel than lower down. Off the back of that data, you can then adjust targets/goals/CPA accordingly for the relevant campaigns/media buys.

The benefit of using custom attribution models is that you can amend certain exposures/campaigns so the output makes more sense in an actionable way, e.g. remove all banner impressions which did not get viewed, remove brand search clicks, remove remarketing impressions etc.

Firstly, the data team will need to set up the five key out-of-the-box attribution models:

  • Last interaction
  • Linear
  • First interaction
  • Time decay
  • Position based
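As a rough illustration of how these five models divide a single conversion's credit across an ordered user journey. The weighting choices below (40/20/40 for position-based, doubling weights for time decay) are common conventions, not any specific vendor's implementation:

```python
def attribute(touchpoints, model):
    """Split one conversion's credit across an ordered list of touchpoints."""
    n = len(touchpoints)
    if model == "last_interaction":
        credit = [0.0] * (n - 1) + [1.0]
    elif model == "first_interaction":
        credit = [1.0] + [0.0] * (n - 1)
    elif model == "linear":
        credit = [1.0 / n] * n
    elif model == "position_based":
        # Common convention: 40% first, 40% last, 20% spread over the middle.
        if n == 1:
            credit = [1.0]
        elif n == 2:
            credit = [0.5, 0.5]
        else:
            mid = 0.2 / (n - 2)
            credit = [0.4] + [mid] * (n - 2) + [0.4]
    elif model == "time_decay":
        # Later touchpoints weighted more heavily (weight doubles each step).
        weights = [2.0 ** i for i in range(n)]
        credit = [w / sum(weights) for w in weights]
    else:
        raise ValueError(f"unknown model: {model}")
    return list(zip(touchpoints, credit))

journey = ["display impression", "generic search click", "brand search click"]
print(attribute(journey, "position_based"))
# [('display impression', 0.4), ('generic search click', 0.2), ('brand search click', 0.4)]
```

Running the same journey through all five models side by side is exactly what surfaces the upper-funnel campaigns that last click undervalues.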

Once built out, your visualisation tool should offer options to customise the data further, e.g. remove banners which weren't in view, remove brand search, remarketing and CRM campaigns, leaving you with insight into the real performance of your prospecting campaigns across the different attribution models.

Google have been attempting attribution modelling over the past few years via DFA. Unfortunately they still have a couple of bugs making the tool unusable, but they are still further ahead than any other third party attempting custom attribution modelling on a self-service basis.

It will always be difficult for third-party companies to deal with attribution successfully, because attribution models should be built using the data from the in-house DMP, which includes back-end customer/revenue/LTV data.

In order to understand how all of your ad campaigns are really performing and what role they fully play, viewing performance data across multiple custom attribution models is key.


Offline brand activity has been measured in the same way for decades through econometrics, mainly by looking at the correlation that offline activity has with brand search volume and bottom-line acquisitions/revenue.

Many digital specialists claim that this way of measuring brand activity was built for offline and that it would be unfair to use it for measuring online brand. Yet those same digital specialists are more than happy to attribute post-view data to all online advertising without analysing actual cause and effect.

The reason many feel it's unfair is that online branding is expensive, and when comparing online brand spend vs. offline spend through an econometrics model, offline shows a greater ROI for many advertisers. Also, when it comes to banners, in many instances there is zero correlation between banner impression volume and brand search uplift/bottom-line acquisitions.

Just because you can track post view, it doesn't mean that you should attribute post-view conversions to campaigns. Most digital planners who have been around for a while know how easily this can be abused; you only have to look back at the classic Yahoo Marketplace placement on the Yahoo homepage, where an impression counter could be attached to the ad, to remember this.

The key objective of all brand activity is to deliver a positive ROI, no matter how the consumer got to your site/store or whether the ad was delivered online or offline. I can't imagine any marketer spending money on advertising and never wanting a return from that spend, so it's pretty safe to treat that key objective as fact.

So what is the most robust way of measuring the ROI of online brand activity?

Analysing the correlation between any medium-to-large-weight brand campaign (online or offline) and uplift in both brand search volume and bottom-line acquisitions/revenue is the most effective way of viewing impact/ROI in a robust and truthful way. This means econometrics is perfectly placed to measure the effectiveness of online brand campaigns too.

In order to determine cause and effect, the brand activity has to be significant, e.g. portal/social network takeovers, online video or high-volume display burst campaigns, so that the signal shows up in an econometrics model.

For very low-volume online branding, there is the option of using in-view post-view data as a proxy for success, but it's essential to remember that you won't know whether the conversions would have happened anyway unless you have run a placebo-controlled test.

The ultimate goal is to know which brand opportunities are the most cost-efficient way of increasing conversions/revenue. Basic econometrics is still the most effective way of reaching this goal across all marketing channels.

DMPs 2.0

Posted: Jul 21, 2013 in Business, Data, Marketing


DMPs have been around for decades, but the acronym only started getting bandied around the ad industry recently.

Until recently, DMPs pretty much only included back-end data, overlaid with a visualisation tool such as QlikView or Omniscope. Media planner/buyers and marketing execs typically used adservers to pull off basic performance reports, as all costs were flat, i.e. not biddable, and held within the adserver.

Since programmatic buying became more popular, media buyers have been spending a significant amount of time pulling data together from different sources just to see how campaigns are performing, combining bid tool, adserver and back-end data manually.

Programmatic media buyers should be spending as much time as possible setting up strategies and optimising campaigns, rather than spending days merging data or reconciling costs.

Clearly things needed to change, and they have started to, resulting in programmatic buyers working closer than ever with the database team that manages the DMP.

Due to this change, the workload and volume of data briefs has tripled overnight for data teams. To deal with the new data demand from marketing, it's essential to add incremental resource for the additional work, because otherwise it will either take years to get done or get done in a shoddy way.

Giving marketing the extra data resource to support a data-led marketing strategy is essential for business success. A DMP should now include log-level data, updated in real time or within three hours, as standard, including:

  • Back-end data showing cohort conversion and revenue data
  • Paid search bid tool spend and impression/click data
  • Social media bid tool and fan page spend and impression/click data
  • Display bid tool spend and impression/click data
  • Banner in-view data
  • CRM email impression/click data
  • Affiliate spend and impression/click data
  • Natural search impression/click data with any flat agency fee attached
  • Mobile spend and click data
  • TV spots and any other offline channel activity with the relevant spend and volumes attached
  • Adserver data incl. adserving fees, making all channel spends fully loaded
  • Site traffic data
  • Weather data
  • Competitor exposure data
  • Site/product issue data

All of this data is essential for knowing exactly what is happening across the business, and why. At the click of a button, marketing should be able to view real-time campaign performance (CPA and projected ROI) across all campaigns and channels, as well as the impact that branding, weather, competitor activity and any site/product downtime have on revenue/acquisitions. User journey analysis from first touchpoint to last, and the five key attribution models, should also be built out from the data, all taking CRM into account.

Without this, marketing cannot be expected to grow the business profitably.