Archive for the ‘Tracking’ Category

Analytics

The short answer is yes: the product and team will definitely benefit from having web/app analytics tracking as part of the definition of done (DoD).

A separate analytics tracking story should typically only be written and played in one of two scenarios:

  1. There’s no existing analytics tracking, so there’s tracking debt to deal with, including the initial API integration
  2. A migration from one analytics provider to another

Baking analytics tracking into the feature’s actual acceptance criteria/DoD is important because:

  1. It doesn’t get forgotten
  2. It’s included in the MVP/each product iteration by default
  3. It drives home that having tracking attached to a feature before it goes live is just as important as QA, load testing, regression testing or code reviews

Unless you can measure the impact of a feature, it’s hard to celebrate success, prove the hypothesis or know whether it delivered any business value. The purpose of product development isn’t to deliver stories or points; it’s to deliver outcomes.

Having a data-driven strategy isn’t the future, it’s now. The advertising industry adopted this tracking philosophy over two decades ago, so including analytics tracking within the DoD will only help set the product and team in the right direction.

Velocity

Velocity = the projected number of story points a team can burn over a set period

A development team’s velocity in Scrum or Kanban can be worked out by totalling the points burned across the last 3-5 sprints/set periods and dividing by the number of periods, i.e. taking an average across them.

It’s important to use an average across the last 3-5 periods so that a holiday season, or a sprint where items rolled over into the following sprint, doesn’t skew the numbers as much as it would if you only looked at the last period.
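
As a quick sketch of that calculation (the sprint figures and backlog size below are illustrative, not real data):

```python
import math

def average_velocity(recent_burns):
    """Mean story points burned per sprint across the supplied periods."""
    return sum(recent_burns) / len(recent_burns)

def sprints_to_deliver(backlog_points, velocity):
    """Forecast how many sprints a remaining backlog will take."""
    return math.ceil(backlog_points / velocity)

# Last four sprints; the 18 is a holiday-season dip the average smooths out.
recent_burns = [30, 28, 18, 32]
velocity = average_velocity(recent_burns)
print(velocity)                           # 27.0
print(sprints_to_deliver(135, velocity))  # 5
```

Note how the single low sprint drags the average down only modestly, whereas looking at that sprint alone would suggest a far lower velocity.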

A team can use their velocity in many ways, for example:

  • Understanding how many points they can commit to during sprint planning, i.e. how many PBIs (Product Backlog Items) could be done across the next two weeks
  • Aiding prioritisation (the ‘I’ in ROI)
  • Predicting when items in the backlog can be delivered, which can then be used to forecast future feature delivery
  • Understanding the impact of resourcing changes, e.g. Scrum team member changes or adding extra teams to the product
  • Understanding the impact dependencies are having, which can be reviewed in the retro – build pipelines being a great example
  • Providing a more accurate estimate than a t-shirt size
  • As a KPI for efficiency improvements

I tend to refer to points being ‘burned’ rather than ‘delivered’ because it’s quite easy to fall into the velocity/story point delivery trap of obsessing about points being delivered rather than obsessing about delivering outcomes (business value).

DevOps

Development effort isn’t cheap, but it’s extremely valuable no matter what industry you work in. Once a product iteration is ready to ship, automating the final steps (the software build, deployment, environment and release process) helps continuously deliver customer value without unnecessary delays or bottlenecks.

“DevOps is the combination of cultural philosophies, practices, and tools that increases an organisation’s ability to deliver applications and services at high velocity: evolving and improving products at a faster pace than organizations using traditional software development and infrastructure management processes. This speed enables organisations to better serve their customers and compete more effectively in the market.” – AWS

There’s often a significant amount of thought and effort that goes into getting an idea into development, so when the code (solution) is ready to kick off the build (ship) process, it’s important that this is as automated as possible, so customers get hold of the feature in a timely fashion.

Due to the rise of the DevOps culture, it’s now possible to automate the whole build, deployment and release process. As well as customers getting features sooner, as mentioned above, other benefits of adopting a DevOps culture include:

  • The software development division remaining competitive
  • Less waste from waiting for software to build and deploy, dealing with environment issues and working with the operations team to handle the release
  • Increasing the R in ROI (Return on Investment), as less waste results in delivering more value to customers
  • Improved team morale – dealing with environment, build and release issues manually isn’t fun
  • Improved sprint goal completion rates, as stories are less likely to drag over multiple sprints because of build/release issues
  • Decreased elapsed time of development work
  • Improved security
  • Easier tracking of build-to-release timeframes
  • Automation
  • Scalability

Adopting a DevOps culture should ideally come from the bottom up rather than the top down. A Product Owner shouldn’t need to create stories, sell its importance to dev teams or prioritise it; optimising the software build and release process should be BAU (Business as Usual), continually reviewed and improved.

As development teams adopt a DevOps culture and they start migrating over to a fully automated process, the benefits will be obvious and lucrative.

KPIs

In order to prioritise effectively you need both the projected value and the effort, but these aren’t always easy to come by. Projecting value can be particularly challenging if the data isn’t easily accessible, which can have a knock-on effect when analysing your KPIs (Key Performance Indicators).

Ensuring that a product/feature has KPIs is beneficial for a few reasons, including: aiding prioritisation, celebrating success, feeding back on software development iterations and feeding into the general product vision and wider business goals.

Your KPIs don’t have to be a financial value (although a good attempt at projecting a monetary value should be made to aid ROI projections), and you don’t have to stop at one; they just need to be measurable, an indication of success and linked in some way to the overall business goals. So how can you identify your KPIs?

  • Incremental revenue – benchmark against existing revenue volumes for the feature in question. What do you anticipate increasing revenue/ARPU by?
  • How many customer queries are you hoping to reduce, and how much does it cost per contact?
  • Is it solving a common problem/request that high-value players have been submitting?
  • Will solving the problem increase website stability, reducing downtime for customers?
  • Are you expecting to increase customer acquisition numbers/conversion rate?
  • Will it increase retention rates? Churn rate/drop-off, as well as LTV, are measures of this
  • Efficiency savings – could completing a piece of work increase team output/velocity, whether for a development or a marketing team?
  • Feature traffic/usage – if conversions or direct revenue from the feature aren’t relevant, then at a minimum sessions, dwell time and the value of customers using the feature can be used as KPIs
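
To make the prioritisation angle concrete, here is a minimal sketch of turning two of the KPI types above into a projected ROI; every figure is a made-up assumption:

```python
def projected_roi(incremental_revenue, contact_savings, effort_cost):
    """(Projected value - cost) / cost, i.e. the R over the I."""
    value = incremental_revenue + contact_savings
    return (value - effort_cost) / effort_cost

# Hypothetical: £40k incremental revenue, 2,000 fewer contacts at £5 each,
# against £20k of development effort.
roi = projected_roi(40_000, 2_000 * 5, 20_000)
print(roi)  # 1.5
```

A projection like this is only as good as its inputs, but even rough numbers make two competing features comparable.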

Identifying your KPIs is one thing, but having the data available on a self-service basis to cut, analyse and share is just as fundamental. Once you have identified your KPIs and have access to the data, you can be confident that you’re well equipped to contribute to the Agile piece, and that you’re helping meet the wider business goals.

Tag Management

The top two tag management tools currently on the market are BrightTag and Tagman. Both offer more than just tag management, such as attribution modelling and performance reporting.

As clients add front-end marketing data into their in-house DMPs, elements such as attribution modelling and performance reporting are pulled from the DMP. I have mentioned this before, but the reason attribution modelling and performance reporting should always be driven from the in-house DMP is that:

  1. The data covers all channels and data sources.
  2. You can build your own custom attribution models.
  3. Performance is defined across a multitude of KPIs such as cohort ROI, projected ROI, cohort CPA and projected CPA.

Let’s focus on the actual tag management feature. This has certainly been attractive to many brands over the years, especially those whose dev teams take 12+ months to implement a new tag (I have personally witnessed this at a car insurance brand). Times have changed: websites are now more advanced than ever, alongside more advanced products across multiple devices, which has led to a prioritisation of in-house developer recruitment. As a result, creating a tag management feature within their back-office system would be second nature to the majority of dev teams.

What they’d be able to build, which would typically take one developer two weeks to at most a month including QA, would be:

  • Compatibility with all tags, including floodlights, analytics such as GA, and SEO tracking.
  • No limit to the volume of tags.
  • Implementation by URL string.
  • Firing pixels based on variables.
  • Passing variables back to pixel suppliers.
  • Killing a tag from loading if the response from the third party is slow.
  • Asynchronous tag loading.
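
The conditional items above (implementation by URL string, firing pixels based on variables) boil down to simple rule evaluation. A minimal sketch, where the tag definitions and variable names are purely hypothetical:

```python
# Hypothetical tag definitions: each tag fires when its URL substring
# matches and its variable condition holds.
TAGS = [
    {"name": "ga_pageview", "url_contains": "/", "when": lambda v: True},
    {"name": "quote_floodlight", "url_contains": "/quote-confirmation",
     "when": lambda v: v.get("quote_value", 0) > 0},
]

def tags_to_fire(url, variables):
    """Evaluate each tag's URL rule and variable condition for a page view."""
    return [t["name"] for t in TAGS
            if t["url_contains"] in url and t["when"](variables)]

print(tags_to_fire("https://example.com/quote-confirmation",
                   {"quote_value": 250}))
# ['ga_pageview', 'quote_floodlight']
```

A real implementation would layer asynchronous loading and per-tag timeouts (the "kill slow tags" item) on top of this rule evaluation.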

Sending a work request to your dev team to build a tag management feature in your in-house back-end system certainly proves cost-efficient, as you’ll find it saves at least £60k to £100k every year. For those who aren’t motivated to build a brief like this, or don’t have much dev resource, there is the completely free Google Tag Manager.

Cookies

Firstly, you need to have access to a DSP and have adserver container tags across your whole site. When implementing the container tags, it’s essential to pass back as much data through the custom variables as possible, e.g. age, gender, bucket amount, revenue, customer loyalty type.

Container tags should be placed across each site/product page, and then a tag from the homepage throughout the whole conversion process to the sales thank-you page. A tag across the current-customer login section is also required.

Now it’s a case of building up your CRM database within the DSP. A pixel within the DSP represents a targeting segment to be included/excluded, such as:

  • Main acquisition homepage
  • March 2013 microsite landing page
  • Car Insurance homepage
  • Home Insurance homepage
  • Quote confirmation page
  • Logged in
  • Deposit confirmation page
  • Business awards landing page
  • Affiliate landing page
  • CRM email – non-converting
  • Males
  • Age 25-34
  • Gold customers

Once your pixels have been created in the DSP, it’s a case of implementing them within the adserver container tags using the existing variables which have already been set up. This allows you to set up basic scripts to conditionally fire the pixels to match each segment. To increase cookie volume, also implement separate pixels across all of your CRM emails.
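
That conditional firing amounts to matching the container tag’s custom variables against segment rules; the rules and variable names below are assumptions, mirroring the example segments:

```python
# Hypothetical segment rules keyed off the container tag's custom variables.
SEGMENTS = {
    "males": lambda v: v.get("gender") == "m",
    "age_25_34": lambda v: 25 <= v.get("age", 0) <= 34,
    "gold_customers": lambda v: v.get("loyalty") == "gold",
}

def segments_for(visitor_vars):
    """Return the DSP segment pixels to fire for this visitor."""
    return sorted(name for name, rule in SEGMENTS.items()
                  if rule(visitor_vars))

print(segments_for({"gender": "m", "age": 29, "loyalty": "gold"}))
# ['age_25_34', 'gold_customers', 'males']
```

Page-level segments (quote confirmation, logged in, etc.) follow the same pattern, keyed off the page the container tag loads on rather than visitor variables.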

The tech part is out of the way; now you just need to set up all of the relevant strategies in the DSP, including/excluding the newly created CRM segments accordingly.

As new product pages, websites, microsites and CRM email campaigns get created, the same process needs to take place in order to keep the cookie CRM database updated.

As the cookie database is held within a DSP such as MediaMath, you can deliver the CRM campaigns across ad exchanges, yield optimisers and FBX.

Winner

You’ve spent months working with the data team setting up all of the marketing data feeds for the DMP, and now it’s a case of setting the briefs for multi and custom attribution models.

Last click attribution is typically the default and the most common. It’s not wrong to stick to just one model, and if there’s no motivation to work with others, last click isn’t a bad choice.

Viewing multiple custom attribution models gives you insight into the campaigns which are being undervalued, for example by contributing more higher up the funnel than lower down. Off the back of the data, you can then adjust targets/goals/CPAs accordingly for the relevant campaigns/media buys.

The benefit of using custom attribution models is that you can amend certain exposures/campaigns so the output makes more sense in an actionable way, e.g. remove all banner impressions which did not get viewed, remove brand search clicks, remove remarketing impressions etc.

Firstly, the data team will need to set up the five key out-of-the-box attribution models:

  • Last interaction
  • Linear
  • First interaction
  • Time decay
  • Position based
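
The five models above can be sketched as credit-weight functions over an ordered path of touchpoints. The 40/20/40 position split and the time-decay factor are common defaults assumed here, not figures from the text:

```python
def linear(n):
    """Equal credit to every touchpoint."""
    return [1 / n] * n

def last_interaction(n):
    """All credit to the final touchpoint."""
    return [0.0] * (n - 1) + [1.0]

def first_interaction(n):
    """All credit to the first touchpoint."""
    return [1.0] + [0.0] * (n - 1)

def position_based(n):
    """40% first, 40% last, remaining 20% split across the middle."""
    if n == 1:
        return [1.0]
    if n == 2:
        return [0.5, 0.5]
    middle = 0.2 / (n - 2)
    return [0.4] + [middle] * (n - 2) + [0.4]

def time_decay(n, factor=0.5):
    """Each earlier touch is worth `factor` of the next, normalised to 1."""
    raw = [factor ** (n - 1 - i) for i in range(n)]
    total = sum(raw)
    return [w / total for w in raw]

print(position_based(4))  # [0.4, 0.1, 0.1, 0.4]
```

Running a conversion path’s media cost and revenue through each weight vector is what surfaces the campaigns last click undervalues.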

Once built out, within your visualisation tool there should be options to customise the data further, e.g. remove banners which weren’t in view, and remove brand search, remarketing and CRM campaigns, which will leave you with insight into the real performance of your prospecting campaigns across different attribution models.

Google has been attempting attribution modelling over the past few years via DFA. Unfortunately there are still a couple of bugs making the tool unusable, but they remain further ahead than any other third party attempting custom attribution modelling on a self-service basis.

It will always be difficult for third-party companies to successfully deal with attribution, because attribution models should be built using the data from the in-house DMP, which includes back-end customer/revenue/LTV data.

In order to understand how all of your ad campaigns are really performing and what role they fully play, viewing performance data across multiple custom attribution models is key.