Posts Tagged ‘data’

Analytics

The short answer is yes – the product/team will definitely benefit from having web/app analytics tracking as part of the definition of done (DoD).

A separate analytics tracking story should typically only be written and played in one of two scenarios:

  1. There’s no existing analytics tracking, so there’s tracking debt to deal with, including the initial API integration
  2. A migration from one analytics provider to another

The reason it’s so important to bake analytics tracking into the actual feature acceptance criteria/DoD is that:

  1. It doesn’t get forgotten
  2. It forces analytics tracking to be included in MVP/each product iteration as default
  3. It drives home that having tracking attached to a feature before it goes live is just as important as QAing, load testing, regression testing or code reviews
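
As a minimal illustration of point 3, ‘tracking attached to a feature’ can be as simple as the feature code firing its own event. The client, event name and function below are hypothetical placeholders, not any specific provider’s API:

    # Hypothetical analytics client, standing in for whichever
    # provider (GA, Segment, Adobe etc.) the team actually uses.
    class AnalyticsClient:
        def track(self, event: str, properties: dict) -> None:
            print(f"tracked: {event} {properties}")

    analytics = AnalyticsClient()

    def save_search(user_id: str, query: str) -> None:
        # ... feature logic ...
        # DoD: the story isn't done until this event fires and
        # has been verified in the analytics tool.
        analytics.track("Search Saved", {"user_id": user_id, "query": query})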

Unless you can measure the impact of a feature, it’s hard to celebrate success, prove the hypothesis or know whether it delivered the expected outcome and any business value. The purpose of product development isn’t to deliver stories or points; it’s to deliver outcomes.

Having a data-driven strategy isn’t the future, it’s now; the advertising industry adopted this analytics tracking philosophy over two decades ago. Including analytics tracking within the DoD will only help set the product/team in the right direction.

Velocity

Velocity = the projected number of story points a team can burn over a set period

A development team’s velocity, using Scrum or Kanban, can be worked out by totalling the points burned across the last 3-5 sprints/set periods and dividing by the number of periods, ie. taking an average.

It’s important to use an average across the last 3-5 periods, so that holiday seasons, or a sprint where items rolled over into the following sprint, don’t skew the numbers as dramatically as they would if you only looked at the last period.
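
As a quick sketch of the calculation (the sprint figures below are made up):

    # Points burned in each of the last 5 sprints (example figures).
    points_burned = [34, 28, 31, 19, 33]  # the 19 was a holiday-heavy sprint

    # Velocity = average points burned across the periods.
    velocity = sum(points_burned) / len(points_burned)
    print(velocity)  # 29.0 -- fairer than taking the last sprint's 33 alone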

A team can use their velocity in many ways, for example:

  • Understanding how many points they can commit to during sprint planning, ie. how many PBIs (Product Backlog Items) could be done across the next two weeks
  • To aid prioritisation (the ‘I’ in ROI)
  • Predicting when items in the backlog can be delivered, which can then be used to forecast future feature delivery
  • Understanding the impact of resourcing changes, eg. Scrum team member changes or adding extra teams to the product
  • Understanding the impact dependencies are having, which can be reviewed in the retro – build pipelines being a great example
  • Providing a more accurate estimate than a t-shirt size
  • As a KPI for efficiency improvements

I tend to refer to points being ‘burned’ rather than ‘delivered’, because it’s quite easy to fall into the trap of obsessing over points being delivered rather than obsessing over delivering outcomes (business value).

Prioritisation

So many awesome ideas from so many people to improve the product, but it will always be impossible to fulfil every desire within a time frame acceptable to stakeholders, which makes prioritisation not only challenging but extremely important.

Process, data, collaboration and determination can certainly make prioritisation smoother and more effective, so let’s look at each of these areas in more detail.

Process: Holding the status of projects, where product requests/bugs sit in the pecking order, delivery ETAs, investment costs and the projected value of projects in a transparent way will reduce the communication overhead and help maintain trust.

Data: To ensure that high value items are being worked on, you need data to back up assumptions. It can be easy to flap and make a problem out to be bigger than it is in order to get it done, but there should always be some kind of data behind it. Examples include incremental revenue, which can be reverse engineered from retention uplift rates, or projected acquisition volume increases using ARPU. Other ways of projecting value or determining the scale of the problem include customer support queries and customer feedback, site loading times, and efficiency in terms of £££ savings, eg. man hours/days or software costs.
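
As a hypothetical worked example of reverse engineering incremental revenue from a retention uplift (all figures below are invented for illustration):

    # Invented figures for illustration only.
    monthly_active_users = 100_000
    retention_uplift = 0.02   # assume the change lifts retention by 2 points
    arpu = 15.0               # average revenue per user per month, in £

    extra_retained_users = monthly_active_users * retention_uplift  # 2,000
    incremental_revenue = extra_retained_users * arpu
    print(f"£{incremental_revenue:,.0f} per month")  # £30,000 per month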

Collaboration: Discussing value and priority options openly with your colleagues will help you deliver the product in a more confident and focused way. Making the big prioritisation decisions isn’t easy, because whatever sits at the top, or moves to the top, means the items below won’t be done now, or perhaps anytime soon. Checking and agreeing the focus/roadmap together gives you the confidence to get on with delivering a high quality, high value product without having to justify a decision you made alone every minute of the day.

Determination: Prioritisation changes frequently if you work in an agile environment, so staying positive and determined to deliver the projects you’ve been discussing for months or even years helps keep the focus on the key business goals, and reminds everyone that they’re still on the agenda no matter the level of incoming bombshells/distractions.

If someone asks for something to be done urgently without providing any numbers for the projected value, or anything that gives an idea of the scale of the problem to be solved, then asking why we should do it, or what happens if we don’t do it in the next 12 months, should quickly prompt more research into the value.

Projecting the investment cost and taking the time, collaboratively, to dig into the real value a product change will make ensures that you’re delivering frequent value to customers, internal and external, in a happy, fun and relaxed environment.

Meet The Brain

It’s powerful, flexible, customisable, saves thousands of man hours, provides valuable customer insights into behaviour and, most importantly, ensures that you get a healthy ROI if used in the right way.

The Brain is MediaMath’s proprietary algorithm: it ingests data (60 billion opportunities every day, to be exact) and decisions against that data.

Their algorithm’s left-brain and right-brain work together to analyse large pools of impressions, looking at dozens of user and media variables, to determine which impressions will best meet an advertiser’s goal. The Brain values each impression based on its likelihood of driving an action, and bids accordingly.
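
The Brain itself is proprietary, but the underlying principle of valuing an impression by its likelihood of driving an action can be sketched as simple expected-value bidding. The function and figures below are illustrative assumptions, not MediaMath’s actual model:

    # Illustrative expected-value bidding, not MediaMath's actual model.
    def impression_value(p_action: float, value_per_action: float) -> float:
        # Value an impression by its likelihood of driving an action.
        return p_action * value_per_action

    # eg. a 0.01% predicted action rate against £40 of value per action
    bid = impression_value(p_action=0.0001, value_per_action=40.0)
    print(f"bid up to £{bid:.3f} per impression")  # £0.004, ie. a £4 CPM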

Post View Cookie Windows

It continues to disappoint me when I hear about large blue chip clients running the default 30-day PV (post view) cookie window for display campaigns and then accepting 100% of the PV conversions. Not only this, but in most cases no viewability tech is being used.

When setting your PV cookie window, it should typically mirror what you have deemed to be the average consideration time to purchase, while also taking the ad format into account.

Equally, you want to avoid coming up with an arbitrary PV window, as so many brands do.

Fortunately there is a robust way of finding out what percentage of PV conversions are genuine, which you can then use for future campaigns: the ‘Placebo Test’. You run an A/B test with one of your creatives ad-served alongside a charity creative. Post campaign, you subtract the in-view PV conversions the charity creative delivered (which are obviously not genuine) from the in-view PV conversions your brand creative delivered. The remainder is the in-view PV conversions you can class as genuine. Work out what percentage of the brand creative’s in-view PV conversions were genuine, then apply that percentage within the buying platform, so that only the share proven genuine in the past is accepted and attributed for current and future campaigns.
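
As a worked example with invented numbers:

    # Invented Placebo Test figures.
    brand_in_view_pv = 1_000    # in-view PV conversions, brand creative
    placebo_in_view_pv = 700    # in-view PV conversions, charity creative

    genuine_pv = brand_in_view_pv - placebo_in_view_pv       # 300
    genuine_pct = genuine_pv / brand_in_view_pv * 100        # 30.0

    # Enter 30% in the buying platform so that only this share of
    # in-view PV conversions is accepted and attributed in future.
    print(f"{genuine_pct:.0f}% of in-view PV conversions are genuine")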

Ideally you should expect the ‘Placebo Test’ to look something like the chart below. If both lines are similar, then the banners are not working on a brand basis and therefore don’t offer any value outside the click. ‘Placebo’ in the chart refers to the charity creative.

[Chart: in-view PV conversions over time, brand creative vs. placebo (charity) creative]

Things to consider:

  1. You need £10k media investment
  2. Banners incl. charity banners
  3. Partner eg. MediaMath, DBM or Media IQ
  4. Viewability tech eg. Spider.io
  5. You only have to run it once per product

Overvaluing a channel like display has two main consequences: 1. marketing budget is wasted, as some of the display budget could be re-allocated to better performing channels; and 2. an algorithm optimising on bad data will only aim to optimise towards that bad data even more.

On the subject of display wastage, I recently worked with Exchange Wire on an article about my frustrations with DSPs not integrating with third party viewability tech, and the impact of that.

If agencies and brands stop wasting marketing budget and run display campaigns in a more genuine way, as they should be run, the channel will get the respect it deserves.

Pixels

Can we place a pixel across your whole site and we’ll give you free customer insights? Can we place a pixel on each stage of the user journey so that we can optimise towards all site traffic data?

These are two very common questions which originated with traditional ad networks and still live on even though technology has evolved.

If you asked a marketer who they would target if there were no restrictions on their advertising, it would no doubt be their competitors’ customers.

I have been fortunate enough to buy display remarketing campaigns targeting competitor customers in the past, when I worked on the largest hotel chain in the UK at an ad agency, buying through an ad network. That level of targeting, special offer creative and high frequency reaped rewards, as you’d expect.

Marketers spend £millions a year on advertising, and driving quality traffic can be expensive, so the last thing they want is a competitor simply remarketing all of the users who visit their site, whether through FBX or display.

Fortunately this can be avoided if marketing deploys a strict policy of only allowing media pixels to fire on an attributed basis. Yes, some partners might say they need all the data to optimise, but when you weigh up value vs. risk it’s simply not worth it; optimising on attributed traffic only is good enough for third party ad partners.
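
As a minimal sketch of what an attributed-only pixel policy could look like server-side – the function and partner names are hypothetical placeholders, assuming your attribution system can tell you which partner earned the conversion:

    # Hypothetical server-side gate: only the partner the conversion is
    # attributed to gets its pixel on the confirmation page.
    def pixels_to_fire(attributed_partner: str, candidates: list[str]) -> list[str]:
        return [p for p in candidates if p == attributed_partner]

    # Everyone else sees nothing, so no partner can harvest all site traffic.
    print(pixels_to_fire("partner_a", ["partner_a", "partner_b", "partner_c"]))
    # ['partner_a']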

On the analysis front, eg. Google Analytics, Click Tale, Quantcast etc., it’s a case of applying a bit of logic, experience and research, so that when you deploy tracking/pixels on site your data will not be sold in a data exchange or given to a competitor for remarketing. With big blue chip companies like Facebook, Adobe and Google there’s little need to hesitate over data security, because if it got out that they were selling your data it would be disastrous for them. Whereas the likes of Quantcast, who are very well known for giving you FREE customer insights just for placing a pixel across your whole site, are one of those cases where big red warning lights should appear: nothing in this world is really free, and the likes of Quantcast make money from using your data.

Having a strict cookie/tracking policy is safe and advisable; not having one could see your market share decrease as your competitors steal your customers.

You don’t walk across a busy road without looking in both directions, so think twice before implementing code on your site.

Cookie Stuffing

With ad spend still over £15bn / year in the UK, there are a few digital suppliers and publishers who continue looking for the quick buck by cookie stuffing.

Worryingly, some marketing consultants and CMOs turn a blind eye, or knowingly use these dodgy practices to improve tracked marketing performance.

A few examples of cookie stuffing:

  • Managed service media buys that are told to only run prospecting campaigns, yet use remarketing aggressively to grab the last post view conversion.
  • Suppliers popping banners on a blank page across the net to grab the last post view conversion.
  • Publishers delivering multiple banners below the footer of a site to grab the last post view conversion and generate more revenue for themselves.
  • Ad networks requesting a click tracker for a piece of copy and a logo, but then using the click command to pop the site and cookie bomb on a post click basis.
  • Pop suppliers popping the site when people search for your brand on Google, dropping a cookie when someone is just about to visit your site anyway.
  • Pop suppliers popping the site via a click tracker, thereby dropping a post click cookie on a view – another form of cookie bombing.
  • Affiliates, who have an abundance of click trackers at their disposal where CTR doesn’t get monitored; many use these to pop the site and post click cookie bomb too.

These are just a few of the common practices which go on, but none of this helps the industry improve, is fair on genuine suppliers who do things by the book, or helps advertisers grow volume incrementally.

Fortunately there are a few tech suppliers out there, such as Traffic Cake, who can at least help you identify whether traffic is showing a fraudulent pattern.
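
As a simple illustration of one such pattern you could look for in your own logs (not Traffic Cake’s method; the placements, figures and threshold below are invented), cookie-bombed ‘clicks’ tend to show almost no real on-site engagement:

    # Flag placements whose tracked clicks almost never lead to any
    # genuine on-site engagement -- one signature of cookie bombing.
    # All data and the 5% threshold are invented for illustration.
    placements = {
        # placement: (tracked clicks, sessions with more than one page view)
        "affiliate_123": (5_000, 40),
        "display_abc": (5_000, 2_600),
    }

    for name, (clicks, engaged_sessions) in placements.items():
        engagement_rate = engaged_sessions / clicks
        if engagement_rate < 0.05:
            print(f"{name}: {engagement_rate:.1%} engaged -- investigate")
    # affiliate_123: 0.8% engaged -- investigate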

Agencies and marketing managers need a stricter policy on cookie stuffing, so that it can finally be put to bed along with the suppliers who do it.