Posts Tagged ‘data’

This is the best book I’ve read on DevOps and it follows on nicely from Gene Kim’s other book The Phoenix Project.

It’s quite easy to think that DevOps practices are just something dev teams deal with and that the value is simply an increase in throughput, but the book provides clarity on the colossal value that adopting a DevOps culture and its principles can have on teams, the business, and customers.

Throughout the book, Gene echoes the importance of having the whole product team (product manager, designer and several engineers) involved in the transformation, as well as focusing on outcomes; to achieve outcomes you need to collect data and learn through experimentation, which the book also covers.

Gene also advises funding services and products rather than projects: “A way to enable high-performing outcomes is to create stable service teams with ongoing funding to execute their own strategy and road map of initiatives”.

This is the most comprehensive and practical DevOps guide out there and the layout makes the content easy to digest. The book covers:

– History leading up to DevOps, and Lean thinking
– Agile, and continuous delivery
– Value streams
– How to design your organisation and architecture
– Integrating security, change management, and compliance

The principles and tech practices of:
1. Flow
2. Feedback
3. Continual Learning and Experimentation

“Our goal is to enable market-oriented outcomes where many small teams can quickly and independently deliver value to the customer”

Previously, ‘Product Management’ as a function typically sat within the tech or marketing department, and it’s still a relatively new concept to have Product Management as a separate department in an organisation.

As a result, a common misconception is that the main function of Product Management and the Product Manager/Owner is to define features themselves and work with tech to deliver them, which makes it somewhat frustrating when marketing, insights, the commercial team or any other department outside product makes a request that takes up tech effort which could otherwise have been spent on pushing your own changes.

So if this is a misconception, what is the role of Product Management in the wider organisation?

Product Management as a function/department sits in the middle of the organisation: the Product Manager is a generalist who collaborates with specialists across the business to help manage the product business and develop the product, which includes working with:

  • Technology / DevOps and Designers/UX to learn through experimentation and reach outcomes early and often by developing the product continuously
  • Marketing to grow the product
  • Customer support to help them provide an A* customer service
  • Legal / compliance team to ensure the product is compliant
  • PMO / Project Managers to support them on cross-cutting high-value initiatives
  • Commercial / Bus Dev to take advantage of opportunities
  • Data / Insights team to gain access to qual/quant data, learn and understand how you can use data better to deliver more effective outcomes
  • The C-suite, especially the CEO, to understand the business goals and ensure your product goals align with them
  • Yourself, the market and customers to analyse qual/quant data to find out what problems there could be to solve

As a Product Manager, you may feel overwhelmed by a sudden bombardment of requests from certain departments, for example marketing. A positive way of looking at this is that these inputs are essentially all product ideas and part of the qual/quant data analysis that helps improve the product/product business, which as a Product Manager you need to manage.

You may also find that you are spending the majority of tech resources on marketing requests for months, which is absolutely fine if this is the highest-priority work – the importance of growing the product business should not be underestimated.

With lots of valuable input incoming at a frequent rate, as a Product Manager, it means that you need to be organised, proactive and utilise your emotional intelligence to ensure you get the most out of everyone and that you handle situations rationally. What will help you is:

  • Accepting and believing that you are one team working together to improve the product/product business
  • Having a tidy data-driven prioritised product backlog which anyone can access
    • This will make it easier to say why someone can’t have what they want now!
  • Presenting your product roadmap, successes and what’s up next to key stakeholders on a regular heartbeat, but also ensuring that stakeholders have access to real-time updates of the product roadmap. Aha.io is a great tool for this
  • Knowing your customers, market, product strategy, backlog and data, so you can be assertive and lean into tense situations – Managing Product = Managing Tension is a great book to help give you the confidence to lean into tension

A Product Manager is accountable for the success of their product and therefore also needs to manage the product business, not just develop a product.

Steven Haines is a globally recognised expert in Product Management who has done incredible work professionalising the discipline. I’d recommend reading his books.

As Haines says “The system of product management touches and influences all the organic supporting structures-all the business functions. Think of the human body; product management is in the circulatory system, the neural network, and, of course, the command and control center (the brain).”

A PRD (Product Requirement Document) helps a product manager write a story on how to get from problem -> solution methodically. Often problems are so big, complex and ambiguous that it’s hard to know where to start, so a PRD will help you approach the problem rationally and get the support you need to reach an efficient and effective outcome.

It’s important to point out that it’s not necessary to:

  1. Use a PRD for every single problem/idea; normally they are only needed for epics/initiatives/features/medium or large-sized items.
  2. Complete the whole document; only fill out the parts you need – you may find that you only complete the problem/value and hypothesis sections.
  3. Work on it in a silo. You will achieve a more efficient and effective outcome if you get the rest of the product team (designer and engineers) and stakeholders involved at the beginning in a workshop format so that you can work as one team across the discovery phase.

A good place to create a PRD is in Confluence, so it’s easily accessible across the business and colleagues can easily comment remotely. I’ve also used Google Docs previously, copying some of the information into the epic in Jira to reinforce for the development team the problem we’re looking to solve and the value we’re projecting.

You might find the first few PRDs you write quite slow while you work out how to get hold of the qual/quant data, but you soon pick up speed as you become familiar with the key elements of discovery and already have the data accessible at your fingertips.

It would also be helpful for new starters if you add a completed example PRD at the top of the PRD template to give them some context. Some PRDs can be quite lengthy if the problem is big, complex and ambiguous, so it’s worth adding a table of contents at the top to make the document easy to navigate.


Include a list of the people who are involved:

Role | Contact
Product Manager | <tag person / name>
UX/Design | <tag person / name>
Technical | <tag people / names>
Stakeholders | <tag people / names>
Jira/Design/Helpful Links | <tag person / name>

Problem/Value/Idea

A description of the problem or idea along with projected value if you have this. This is a good chance to spend quality time digging into the problem to build up a business case using qual/quant data.

  • Idea: Supporting Dark Mode in our apps…
  • Problem: Our iOS app is currently a 3* rating and Android app 2*, with the top problems being…
  • Problem: 70% of our customer support queries relate to promotions which cost us £x / month
  • Problem: Our day 1 churn rate is higher than we expect…
  • Problem: We currently release new software iterations monthly, rather than daily…
  • Problem: In the latest customer survey, 10% of respondents report the product being slow and unstable…
  • Problem: 20% of customers don’t feel rewarded for their loyalty
  • Problem: Our total marketing communication engagements have gone down since complying with the new GDPR regulations, making it harder to talk to customers frequently
  • Projected Value: Saving customer support costs by £
  • Projected Value: App rating > 4* resulting in healthier ASO and ranking, and therefore an increase in organic installs creating 20 new customers a day
  • Projected Value: Day 1 churn rate under 20% resulting in £xx revenue increase
  • Projected Value: New customer conversion rate > 30% resulting in £xx revenue increase

Hypothesis

List out one or more hypotheses you come up with to test out. This is a great opportunity to collaborate with the whole product team (designer and engineers) and stakeholders, not only to compile the list of experiments but also what data you need to learn and what tools you might need to execute the experiments.

  • Providing existing customers with the ability to refer their friends will increase new customers incrementally by 10%, worth an extra £xx revenue increase
  • Having a stepped registration form will increase the registration rate by 10%
  • Solving all customer queries in the app store and responding to every app store review within 3 hours will increase customer satisfaction and therefore our app store rating, which will in turn help ASO and our app store ranking

KPIs

  • Volume of engagements with feature x
  • Conversion rate
  • ARPU
  • CPA
  • Retention rate
  • Day 1 churn rate
  • LTV
  • Registration rate
  • Crash levels
  • Deposit elapsed time
  • Session time
  • Login elapsed time

Market Analysis

If you have competitors who have solved the problem already, this is a good place to document the UX. Also, detail your target personas and other details about the market which will give you a better idea of who the product iteration is for.


Customer Research / Validation

Detail any qual/quant data you have gathered relating to the problem/idea eg. customer interviews, financial/engagement data, trends, and any historical experiment results which are relevant.


Constraints

  • Regulatory live date of x
  • Marketing TV campaigns going live on x
  • Low front-end development capacity
  • Utilising platform/tool x
  • Time to market
  • Dependencies on teams x

High-Level Requirements / Use Cases

This shouldn’t be at user story level (detailed spec), but instead, just an idea of customer flows/use cases and considerations covering:

  • Functional
  • Non-functional (since the rise of DevOps, this gets covered as BAU/as part of development in most cases)
  • Customer support
  • Marketing
  • Tracking

Flow

Embed a mock-up, flow, UX or prototype.


Risks

  • Lots of ambiguity, so it could take a while to reach the desired outcome
  • Other higher priority work could mean that we don’t have the tech capacity to get the solution to market in time to reach the optimal outcome
  • The problem may not be such a big problem when we go to market
  • We may only solve part of the problem because of x
  • It could take more than 3 months before we learn because of x

Technical Considerations

  • We have plans to replace the existing platform in the next 12 months.
  • We need to conduct an RFP on tooling
  • This is the first time we are conducting experiments, so we need to consider process and tooling

Go-to-market Strategy

This is where you can detail the elements you need to consider/action to have a successful go-live covering:

  • What product support marketing require to market the product iteration effectively
    • Special promotions
    • Signposting across the product
    • Training, user guides
  • Customer support training
  • Release preparation to coordinate with any time-sensitive, fixed-timeframe marketing campaigns, especially TV ads
    • Day 1 plan
    • Feature switch process
  • Regulatory approval
  • Production access for end-users

Q&A

Add a questions and answers section at the bottom, allowing you to capture notes from meetings and tag whoever is responsible for answering each question.

Questions | Answers

Analytics

The short answer is yes – the product/team will benefit by having web/app analytics tracking as part of the definition of done (DoD).

The only time that a separate analytics tracking story should be written and played is typically in the scenario of:

  1. There’s no existing analytics tracking, so there’s tracking debt to deal with including the initial API integration
  2. A migration from one analytics provider to another

The reason why it’s important to ensure that analytics/tracking is baked into the actual feature acceptance criteria/DoD is so that:

  1. You can measure the value/outcome which the output had on the customer
  2. It doesn’t get forgotten
  3. It drives home that having tracking attached to a feature before it goes live is just as important as QAing, load testing, regression testing or code reviews

Unless you can measure the impact of a feature, it’s hard to celebrate success, prove the hypothesis/whether it delivered the expected outcome or know whether it delivered any business value – the purpose of product development isn’t to deliver stories or points, it’s to deliver outcomes.

Having a data-driven strategy isn’t the future, it’s now – the advertising industry adopted this analytics tracking philosophy over two decades ago – so including analytics tracking within the DoD will only help set the product/team up for success.
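
As a rough sketch of what “tracking baked into the feature” can look like in code (the analytics client, event name and function here are hypothetical, not a specific vendor’s API):

```typescript
// Hypothetical analytics client and event name, purely for illustration.
interface AnalyticsClient {
  track(event: string, properties?: Record<string, string | number>): void;
}

function completeRegistrationStep(step: number, analytics: AnalyticsClient): void {
  // ...the feature logic for the registration step would live here...

  // The tracking call ships with the feature itself, as part of its
  // acceptance criteria/DoD, rather than as a follow-up story.
  analytics.track("registration_step_completed", { step });
}
```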

Velocity

Velocity = the projected number of story points which a team can burn over a set period

A development team’s velocity using Scrum or Kanban can be worked out by totalling up the points burned across the last 3-5 sprints/set periods and then dividing by the number of periods (ie. taking an average across the periods).

It’s important to use an average across the last 3-5 periods so that holiday seasons, or a sprint where items moved over to the following sprint, don’t skew the numbers as much as they would if you only looked at the last period.
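
As a rough sketch of that averaging (the sprint numbers below are invented for illustration):

```typescript
// Rolling-average velocity over the last 3-5 sprints/set periods.
function velocity(pointsBurnedPerSprint: number[]): number {
  if (pointsBurnedPerSprint.length === 0) {
    throw new Error("Need at least one completed sprint to estimate velocity");
  }
  const total = pointsBurnedPerSprint.reduce((sum, points) => sum + points, 0);
  return total / pointsBurnedPerSprint.length;
}

// e.g. the last four sprints burned 21, 34, 18 (holiday season) and 27 points:
const recentSprints = [21, 34, 18, 27];
console.log(velocity(recentSprints)); // 25 points per sprint, a fairer planning
                                      // basis than the last sprint alone
```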

A team can use their velocity in many ways, for example:

  • Understanding how many points they can commit to during sprint planning/work out how many PBIs (Product Backlog Items) could be done across the next 2 weeks
  • To aid prioritisation (The ‘I’ in ROI)
  • Predicting when software in the backlog can be delivered, which can then be used to forecast future feature delivery
  • Understanding the impact on any resources eg. Scrum team member changes or adding extra teams to the product
  • Understanding the impact which dependencies are having which can be reviewed in the retro, great example being build pipelines
  • Providing a more accurate estimate than a t-shirt size
  • As a KPI for efficiency improvements

I tend to refer to points being ‘burned’ rather than ‘delivered’ because it’s quite easy to fall into the velocity/story point delivery trap of obsessing about points being delivered rather than obsessing about delivering outcomes (business value).

So many fantastic ideas from so many people to improve the product, but it’ll always be impossible to fulfil all desires in a time frame acceptable to stakeholders, making prioritisation not only challenging but extremely important.

Communication, data, collaboration and determination can certainly make prioritisation all the more effective and smoother, so looking at these areas in more detail:

Communication: The status of each iteration, where product requests/bugs sit in the pecking order, ETAs on delivery, investment cost and the projected value of iterations, all held transparently so stakeholders can pull the data whenever they want, will reduce the communication overhead and help maintain trust.

Data: To ensure that high-value items are being worked on, you need data to back up assumptions. It can be easy to flap and make a problem out to be bigger than it is in order to get it done, but there should always be some kind of data to back it up, for example incremental revenue reverse-engineered from retention uplift rates, or projected acquisition volume increases valued using LTV. Other ways of projecting value or determining the scale of the problem are customer support queries or customer feedback, site loading times, and efficiency in terms of £££ savings, eg. man-hours/days or software costs.
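
As a made-up example of that reverse-engineering, with every number below an assumption purely for illustration:

```typescript
// Back into incremental revenue from a projected acquisition uplift and an assumed LTV.
const currentNewCustomersPerDay = 100; // assumed baseline
const projectedUplift = 0.2;           // assume the change adds 20% more new customers
const assumedLtv = 150;                // assumed lifetime value per customer, in £

const extraCustomersPerYear = currentNewCustomersPerDay * projectedUplift * 365; // 7,300
const projectedIncrementalRevenue = extraCustomersPerYear * assumedLtv;          // £1,095,000

console.log(`£${projectedIncrementalRevenue.toLocaleString("en-GB")} projected incremental revenue`);
```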

Collaboration: Discussing value and priority options openly with your colleagues will help you deliver a product in a more confident and focused way. The big prioritisation decisions aren’t easy, because whatever sits at or moves to the top means the items below won’t be done now, or perhaps anytime soon. Checking and agreeing the focus/roadmap gives you the confidence to just get on with delivering a high-quality, valuable product without having to justify a decision you’ve made alone every minute of the day. The final decision of course lands with the Product Manager, who is accountable for the success of the product and its prioritisation, but getting feedback from stakeholders/colleagues in an inclusive way can yield even more positive outcomes.

Determination: Prioritisation changes frequently in an agile environment, so staying positive and determined to deliver the upcoming high-priority / high-effort iterations you’ve been discussing for months or even years keeps the focus on the key product goals and reminds everyone that the work is still on the agenda, no matter the level of incoming bombshells/distractions.

If someone asks for something to be done urgently without providing any numbers representing the projected value, or anything to give an idea of the scale of the problem you’re looking to solve, then asking why we should do it, or what happens if we don’t do it in the next 12 months, should quickly prompt the need to research the value further.

Projecting the investment cost, and taking the time to dig into the projected value of the product iteration collaboratively, will ensure that you’re delivering frequent value to customers internally and externally in a happy, fun and relaxed environment.

It’s powerful, flexible and customisable, saves thousands of man-hours, provides valuable insight into customer behaviour and, most importantly, ensures that you get a healthy ROI if used in the right way.

Meet The Brain: The Brain is MediaMath’s proprietary algorithm, which ingests data (60 billion opportunities every day, to be exact) and makes decisions against that data.

Their algorithm’s left-brain and right-brain work together to analyse large pools of impressions, looking at dozens of user and media variables, to determine which impressions will best meet an advertiser’s goal. The Brain values each impression based on its likelihood of driving an action, and bids accordingly.

It continues to disappoint me when I hear about large blue chip clients working on the default 30 day PV (post view) cookie window for display campaigns and then accepting 100% of the PV conversions. Not only this, but in most cases no viewability tech is being used.

When setting your PV cookie window, it should typically mirror what you have deemed to be the average consideration time to purchase, while also taking the ad format into account.

Equally, you want to avoid coming up with an arbitrary PV window, which so many brands do.

Fortunately there is a robust way of finding out what percentage of PV conversions are genuine, which you can then use for future campaigns. This is called a ‘Placebo Test’. You run an A/B test with one of your creatives adserved alongside a charity creative. Post-campaign, you subtract the in-view PV conversions which the charity creative delivered (which are obviously not genuine) from the in-view PV conversions your brand creative delivered. This leaves you with the remaining in-view PV conversions, which you can class as genuine. Work out what percentage of in-view PV conversions were genuine, and then use this percentage within the buying platform so that only the share proven genuine in the past is accepted and attributed for current and future campaigns.
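
A worked sketch of that calculation, with invented numbers:

```typescript
// Placebo test arithmetic; all figures made up for illustration.
const brandInViewPvConversions = 500;   // in-view PV conversions against the brand creative
const charityInViewPvConversions = 350; // in-view PV conversions against the charity (placebo) creative

// The placebo's conversions would have happened anyway, so only the difference is genuine.
const genuinePvConversions = brandInViewPvConversions - charityInViewPvConversions; // 150
const genuinePvRate = genuinePvConversions / brandInViewPvConversions;              // 0.3

// In the buying platform, accept only this share of in-view PV conversions going forward.
console.log(`${(genuinePvRate * 100).toFixed(0)}% of in-view PV conversions treated as genuine`);
```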

Ideally you should expect the ‘Placebo Test’ to look something like the chart below. If both lines are similar then the banners are not working on a brand basis and therefore don’t offer any value outside the click. ‘Placebo’ in the chart refers to the charity creative.

[Chart: in-view PV conversions over time, brand creative vs placebo (charity) creative]

Things to consider:

  1. You need £10k media investment
  2. Banners incl. charity banners
  3. Partner eg. MediaMath, DBM or Media IQ
  4. Viewability tech eg. Spider.io
  5. You only have to run it once per product

Overvaluing a channel like display has two main consequences: 1. wasted marketing budget, as some of the display budget could be re-allocated to other, better-performing channels; and 2. an algorithm optimising on bad data will only aim to optimise towards that bad data even more.

On the subject of display wastage, I recently worked with Exchange Wire on an article about my frustrations with DSPs not integrating with third-party viewability tech, and the impact that has.

If agencies and brands stop wasting marketing budget and run display campaigns as they should be run, in a more genuine way, the channel will get the respect it deserves.

Can we place a pixel across your whole site and we’ll give you free customer insights? Can we place a pixel on each stage of the user journey so that we can optimise towards all site traffic data?

These are two very common questions which originated from traditional ad networks and still live on even though the technology has evolved.

If you ask a marketer who they would target if they could advertise to anyone in the world with no restrictions, it would no doubt be their competitors’ customers.

I am fortunate enough to have bought display remarketing campaigns targeting competitor customers in the past. This was when I worked across the largest hotel chain in the UK at an ad agency via an ad network. That level of targeting, special offer creative and high frequency reaped rewards as you’d expect.

Marketers spend £millions a year on advertising, and driving quality traffic can be expensive, so the last thing they want is a competitor simply remarketing to all of the users who visit their site, whether through FBX or display.

Fortunately this can be avoided if marketing deploys a strict policy of only allowing media pixels to fire on an attributed basis. Yes, some partners might say that they need all the data to optimise, but when you weigh up value vs. risk, it’s simply not worth it. Optimising on attributed traffic only is good enough for third-party ad partners.

On the analysis front, eg. Google Analytics, Click Tale, Quantcast etc., it’s a case of applying a bit of logic, experience and research so that when deploying tracking / pixels on site, your data will not be sold in a data exchange or given to a competitor for remarketing. When it comes to big blue chip companies like Facebook, Adobe and Google, there’s little need to hesitate about data security, because if it got out that they were selling your data it would be disastrous for them. Whereas the likes of Quantcast, who are very well known for giving you FREE customer insights just for placing a pixel across your whole site, are one of those cases where big red warning lights should appear: in this world nothing is really free, and the likes of Quantcast make money from using your data.

Having a strict cookie / tracking policy is safe and advisable, but not having one could cause your market share to decrease as your competitors steal your customers.

You don’t walk across a busy road without looking in both directions, so think twice before implementing code on your site.

With ad spend still over £15bn / year in the UK, there are a few digital suppliers and publishers who continue looking for the quick buck by cookie stuffing.

Worryingly, some marketing consultants and CMOs turn a blind eye, or knowingly use these dodgy practices to improve tracked marketing performance.

A few examples of cookie stuffing:

  • Managed-service media buys that are told to only run prospecting campaigns, yet use remarketing aggressively to get the last post view conversion.
  • Suppliers popping banners across the net on a blank page to get the last post view conversion.
  • Publishers delivering multi banners below the footer of a site to get the last post view conversion and generate more revenue for themselves.
  • Ad networks requesting a click tracker for a piece of copy and logo, but then just use the click command to pop the site to post click cookie bomb.
  • Pop suppliers popping site when people search for your brand on Google – dropping a cookie when someone is just about to visit your site already.
  • Pop suppliers popping site using a click tracker and therefore dropping a post click cookie on the view – another form of cookie bombing.
  • Affiliates have an abundance of click trackers at their disposal where CTR doesn’t get monitored. Many use these to pop the site to post click cookie bomb also.

These are just a few of the common practices which go on, but it neither helps the industry improve, nor is fair to genuine suppliers who do things by the book, nor helps advertisers grow volume incrementally.

Fortunately there are a few tech suppliers out there who can at least help you identify whether traffic is showing a fraudulent pattern such as Traffic Cake.

Agencies and marketing managers need to have a stricter policy on cookie stuffing so then it can finally be put to bed along with the suppliers who do it.

Firstly you need access to a DSP and adserver container tags across your whole site. When implementing the container tags, it’s essential to pass back as much data as possible through the custom variables, eg. age, gender, bucket amount, revenue, customer loyalty type.

Container tags should be placed across each site / product page, and then a tag on every step from the homepage through the whole conversion process to the sales thank-you page. A tag across the current-customer login section is also required.

Now it’s a case of building up your CRM database within the DSP. A pixel within the DSP represents a targeting segment to be included / excluded such as:

  • Main acquisition homepage
  • March 2013 microsite landing page
  • Car Insurance homepage
  • Home Insurance homepage
  • Quote confirmation page
  • Logged in
  • Deposit confirmation page
  • Business awards landing page
  • Affiliate landing page
  • CRM email – non converting
  • Males
  • Age 25-34
  • Gold customers

Once your pixels have been created in the DSP, it’s a case of implementing them within the adserver container tags using the existing variables which have already been set up. This allows you to set up basic scripts to conditionally fire the pixels to match the segment, as sketched below. To increase cookie volume, implement separate pixels across all of your CRM emails as well.
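
As a rough sketch of what those conditional scripts can look like (the variable names, segments and pixel URLs below are invented; the real syntax depends on your adserver’s container tag and your DSP):

```typescript
// Illustrative container-tag logic only; nothing here is a specific vendor's API.
interface PageVariables {
  pageType: string;      // eg. "quote_confirmation", "login"
  gender?: string;
  age?: number;
  customerTier?: string; // eg. "gold"
}

function firePixel(pixelUrl: string): void {
  // A segment pixel is just a 1x1 image request to the DSP.
  new Image().src = pixelUrl;
}

function fireSegmentPixels(vars: PageVariables): void {
  if (vars.pageType === "quote_confirmation") {
    firePixel("https://dsp.example.com/pixel?segment=quote_confirmation");
  }
  if (vars.pageType === "login") {
    firePixel("https://dsp.example.com/pixel?segment=logged_in");
  }
  if (vars.gender === "male") {
    firePixel("https://dsp.example.com/pixel?segment=males");
  }
  if (vars.age !== undefined && vars.age >= 25 && vars.age <= 34) {
    firePixel("https://dsp.example.com/pixel?segment=age_25_34");
  }
  if (vars.customerTier === "gold") {
    firePixel("https://dsp.example.com/pixel?segment=gold_customers");
  }
}
```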

The tech part is out of the way; now you just need to set up all of the relevant strategies in the DSP, including/excluding the newly created CRM segments accordingly.

As new product pages, websites, microsites and CRM email campaigns get created, then the same process needs to take place in order to keep the cookie CRM database updated.

As the cookie database is held within a DSP such as MediaMath, you can deliver the CRM campaigns across ad exchanges, yield optimisers and FBX.

You’ve spent months working with the data team setting up all of the marketing data feeds for the DMP and now it’s a case of setting the briefs for multi and custom attribution models.

Last click attribution is typically the default and the most common. It’s not wrong to stick to a single model, and if there’s no motivation to work with others, then last click isn’t a bad choice.

Viewing multiple custom attribution models gives you insight into the campaigns which are being undervalued, for example by contributing more higher up the funnel than lower down. Off the back of the data, you can then increase targets / goals / CPA accordingly for the relevant campaigns / media buys.

The benefit of using custom attribution models is that you can amend certain exposures / campaigns in order for the output to make more sense in an actionable way eg. remove all banner impressions which did not get viewed, remove brand search clicks, remove remarketing impressions etc.

Firstly the data team will need to set up the five key out-of-the-box attribution models (a sketch of how two of them split credit follows the list):

    • Last interaction
    • Linear
    • First interaction
    • Time decay
    • Position based
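
As a rough sketch of how two of these models split credit across a single, invented user journey (the 40/20/40 split for position based is a common convention, not the only one):

```typescript
type Touchpoint = string;

function addCredit(credit: Map<Touchpoint, number>, tp: Touchpoint, amount: number): void {
  credit.set(tp, (credit.get(tp) ?? 0) + amount);
}

// Linear: every touchpoint gets an equal share of the conversion.
function linearAttribution(path: Touchpoint[]): Map<Touchpoint, number> {
  const credit = new Map<Touchpoint, number>();
  for (const tp of path) {
    addCredit(credit, tp, 1 / path.length);
  }
  return credit;
}

// Position based: commonly 40% to the first touch, 40% to the last, 20% spread across the middle.
function positionBasedAttribution(path: Touchpoint[]): Map<Touchpoint, number> {
  const credit = new Map<Touchpoint, number>();
  if (path.length === 1) {
    addCredit(credit, path[0], 1);
  } else if (path.length === 2) {
    addCredit(credit, path[0], 0.5);
    addCredit(credit, path[1], 0.5);
  } else {
    addCredit(credit, path[0], 0.4);
    addCredit(credit, path[path.length - 1], 0.4);
    const middle = path.slice(1, -1);
    for (const tp of middle) {
      addCredit(credit, tp, 0.2 / middle.length);
    }
  }
  return credit;
}

// eg. display view -> generic search click -> brand search click -> conversion
const path = ["display_view", "generic_search_click", "brand_search_click"];
console.log(linearAttribution(path));        // one third of the conversion credited to each
console.log(positionBasedAttribution(path)); // 0.4 / 0.2 / 0.4
```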

Once built out, your visualisation tool should offer options to customise the data further, eg. remove banners which weren’t in view, and remove brand search, remarketing and CRM campaigns, which will leave you with insight into the real performance of your prospecting campaigns across different attribution models.

Google have been attempting attribution modelling over the past few years via DFA. They unfortunately still have a couple of bugs making the tool unusable, but they are still further ahead than any other third party attempting custom attribution modelling on a self-service basis.

It will always be difficult for third-party companies to deal with attribution successfully, because attribution models should be built using the data from the in-house DMP, which includes back end customer / revenue / LTV data.

In order to understand how all of your ad campaigns are really performing and what role they fully play, viewing performance data across multiple custom attribution models is key.

Offline brand activity has been measured in the same way for decades through econometrics – mainly looking at the correlation which offline activity has with brand search volume and bottom line acquisitions / revenue.

Many digital specialists claim that this way of measuring brand activity was built for offline and that it would be unfair to use this method for measuring online brand. Yet those same digital specialists are more than happy to attribute post view data to all online advertising without analysing actual cause and effect.

The reason why many feel that it’s unfair is because online branding is expensive, and when looking at the correlation of online brand spend vs. offline spend through an econometrics model, offline shows a greater ROI for many advertisers. Also, when it comes to banners, in many instances there is zero correlation between banner impression volume and brand search uplift / bottom line acquisitions.

Just because you can track post view, it doesn’t mean that you should attribute post view conversions to campaigns. Most digital planners who have been around for a while know how easily this can be abused; you only have to look back at the classic Yahoo Marketplace placement on the Yahoo HP, where an impression counter could be attached to the ad, to remember this.

The key objective for all brand activity is to deliver a positive ROI, no matter how the consumer got to your site / store or whether the ad was delivered online or offline. I can’t imagine any marketer spending money on advertising and never wanting a return from that spend, so it’s pretty safe to say that the key objective above is fact.

So what is the most robust way of measuring the ROI of online brand activity?

Analysing the correlation which any medium-to-large-weight brand campaign (online or offline) has with uplift in brand search volume and bottom line acquisitions / revenue is the most effective way of viewing impact / ROI in a robust and truthful way. This means that econometrics is perfectly suited to measuring the effectiveness of online brand campaigns too.

In order to determine cause and effect, the brand activity has to be significant, eg. portal / social network takeovers, online video or high-volume display burst campaigns, so that the impact shows up above the noise in an econometrics model.

For very low volume online branding, there is an option to use in-view post view data as a proxy for success, but it’s essential to remember that you won’t know whether the conversions would have happened anyway, unless you have run a placebo-controlled test.

The ultimate goal is to know which brand opportunities are the most cost-efficient way of increasing conversions / revenue. Basic econometrics is still the most effective way of reaching this goal across all marketing channels.

DMPs 2.0

Posted: Jul 21, 2013 in Business, Data, Marketing

DMPs have been around for decades, but the acronym only recently started getting bandied around the ad industry.

Until recently, DMPs pretty much included only back end data, overlaid with a visualisation tool such as QlikView or Omniscope. Typically, media planner/buyers and marketing execs used adservers to pull off basic performance reports, as all costs were flat, ie. not biddable, and held within the adserver.

Since programmatic buying became more popular, media buyers have been spending a significant amount of time pulling data together from different sources just to see how campaigns are performing – combining bid tool, adserver and back end data manually.

Programmatic media buyers should be spending as much time as possible setting up strategies and optimising campaigns, rather than spending days merging data or reconciling costs.

Clearly things needed to change, and they have started to, resulting in programmatic buyers working closer than ever with the database team that manages the DMP.

Due to this change, the volume of data briefs and workload has tripled overnight for data teams. To deal with the new data demand from marketing, it’s essential to have incremental resource for the additional work, because otherwise it will either take years to get done or get done in a shoddy way.

Allowing marketing the extra data resource to support a data-led marketing strategy is essential for business success. A DMP should now include log-level data, updated in real time / within three hours as standard, including:

    • Back end data showing cohort conversion and revenue data
    • Paid Search bid tool spend and impression / click data
    • Social Media bid tool and fan page spend and impression / click data
    • Display bid tool spend and impression / click data
    • Banner in-view data
    • CRM email impression / click data
    • Affiliate spend and impression / click data
    • Natural Search impression / click data with any flat agency fee attached
    • Mobile spend and click data
    • TV spots and any other offline channel activity with the relevant spend and volumes attached
    • Adserver data incl. adserving fee making all channel spends fully loaded
    • Site traffic data
    • Weather data
    • Competitor exposure data
    • Site / product issue data

All of this data is essential for knowing exactly what is happening across the business and why. With a click of a button, marketing should be able to view real-time campaign performance (CPA and projected ROI) across all campaigns and channels, as well as the impact that branding, weather, competitor activity and any site / product downtime have on revenue / acquisitions. User journey analysis from first touchpoint to last, and the five key attribution models, should also be built out from the data, all taking CRM into account.
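
As a minimal sketch of the kind of roll-up that should be a click of a button once the DMP holds fully loaded spend and back end data (the channels and numbers are invented):

```typescript
// Toy roll-up of fully loaded spend vs conversions and revenue per channel.
interface ChannelPerformance {
  channel: string;
  spend: number;       // fully loaded, incl. adserving fees
  conversions: number;
  revenue: number;
}

function summarise(rows: ChannelPerformance[]): void {
  for (const row of rows) {
    const cpa = row.spend / row.conversions;
    const roi = (row.revenue - row.spend) / row.spend;
    console.log(`${row.channel}: CPA £${cpa.toFixed(2)}, ROI ${(roi * 100).toFixed(0)}%`);
  }
}

summarise([
  { channel: "Paid Search", spend: 50000, conversions: 2500, revenue: 120000 },
  { channel: "Display", spend: 30000, conversions: 600, revenue: 45000 },
]);
// Paid Search: CPA £20.00, ROI 140%
// Display: CPA £50.00, ROI 50%
```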

Without this, marketing cannot be expected to grow the business profitably.