Virtual DA Hub, May 29th - Agenda & Summaries

Please note this is a live event. All times are Eastern Time.
We do not record the discussions, to ensure delegates are comfortable sharing openly and honestly.

First Huddle Round

Round #1
1:25 - 2:45 pm ET
June Dershewitz
Amazon Music

Analytics leaders face multiple challenges that can undermine the value they deliver to their organizations. In this huddle, led by June Dershewitz, Head of Data Strategy at Amazon Music, we will tackle some of these challenges head-on.

Participants will share success stories as well as roadblocks relating to these questions:

  • How to turn data and analytics into a primary driver for business strategy?
  • How to prioritize cultural change and foster a data-enabled orientation?
  • How to expand the value gleaned from data?

The conversation will focus on practical examples to help you reassess your analytics program and breathe fresh ideas into your strategic thinking.

  • Instrumentation is often an afterthought. To highlight the need, make sure that data work has a dedicated section in every launch checklist. If you continue to have chronic data gaps, keep a tally and share it with your execs
  • Your analytics org should be seen as a strategic partner, not a service desk. A productive first question when an analyst meets a stakeholder is, “How are you measured?” Better yet, ask, “What analysis can I do for you that would help you make your boss look good?”
  • Put information in front of all staff so they get used to it being there. In a physical office, analysts can make charts to post in elevators or other common areas. New challenge: finding ways to adapt that tactic to a remote work culture
  • There are no walk-by conversations when you WFH. Analysts must find new ways to remain visible and accessible in a virtual environment. Consider a random 1:1 series with stakeholder teams
  • Where is Your Organization on the Marketing Data Literacy Spectrum? https://medium.com/aimarketingassociation/where-is-your-organization-on-the-marketing-data-literacy-spectrum-b0988740b9e2
Shane Murray
NY Times

Correlation may not be causation, but it’ll give you a few hints that might transform your business.

In this huddle, Shane Murray, SVP Data at the New York Times, will guide us through the following questions:

  • How might you apply data science to understand the causal drivers of your business?
  • How do you communicate this to senior executives, in ways that result in accountability and action?
  • How can you establish a culture of learning at the executive level, to continually strengthen this causal understanding?

Expect to come away with a set of ideas for engaging senior executives in the science of cause and effect.

How might you apply data science to understand the causal drivers of your business?

  • Participants discussed the use of causal reasoning and analysis to tackle the problems at the heart of their businesses, with a focus on explanatory models that can be shared across executives and business operators
  • Commonly used techniques include logistic regression, random forest and decision tree approaches (see the sketch after this list). Multiple participants found value in having executives state their hypotheses so that they could be proven / disproven
  • Participants talked about how this understanding of “drivers” allowed business leaders to focus on what matters, rather than an output metric (e.g. customer satisfaction) that might be driven by internal and external factors
  • Repeating these studies over time can build confidence in the causal drivers, as we see which relationships withstand changes in the business, while also deepening our understanding and uncovering new relationships
  • Reading recommendations:
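
As a concrete illustration of the techniques mentioned above, here is a minimal, hypothetical sketch of a "driver" model: a logistic regression on standardized behavioral features, whose coefficients rank candidate drivers of a binary outcome. The DataFrame and column names are invented for illustration, and on observational data the coefficients surface candidate associations to validate (e.g., through experiments), not proven causes.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    # Hypothetical behavioral features and a binary outcome (retained or not)
    df = pd.DataFrame({
        "sessions_per_week": [1, 4, 7, 2, 6, 8, 3, 5],
        "support_tickets":   [3, 1, 0, 4, 1, 0, 2, 1],
        "retained":          [0, 1, 1, 0, 1, 1, 0, 1],
    })

    features = ["sessions_per_week", "support_tickets"]
    X = StandardScaler().fit_transform(df[features])  # standardize so coefficients are comparable
    y = df["retained"]

    model = LogisticRegression().fit(X, y)

    # Rank candidate drivers by the magnitude of their standardized coefficients
    drivers = pd.Series(model.coef_[0], index=features).sort_values(key=abs, ascending=False)
    print(drivers)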

How do you communicate this to senior executives, in ways that result in accountability and action?

  • Focus executive presentations on insights and recommendations, rather than methodology and caveats
  • Language is important – translation of metrics/features to behavioral concepts
  • Some education on statistical concepts (correlation, regression, confidence, etc) can go a long way
  • Repetition – repeat the analysis, adjust your model, reiterate in monthly or quarterly metrics, focusing on the underlying drivers of the business
  • Impact-empathy sandwich – one participant talked about the use of impact and/or empathy to encourage action
  • Using enterprise goals and targets as a way to build accountability. One participant talked about cascading these to OKRs, with operational leaders responsible for driving these OKRs
  • Participants discussed the friction that can come from narrow focus on a metric, at the expense of a true understanding of the strategy
  • Reading recommendations:
    • Don’t Let Metrics Undermine Your Business by Michael Harris and Bill Tayler (HBR)
    • Resonate: Presenting Visual Stories that Transform Audiences by Nancy Duarte
    • DataStory: Explain Data and Inspire Action Through Story by Nancy Duarte (Stories have a beginning, middle and end; also the "What is / What could be" story structure.)
    • Storytelling With Data by Cole Nussbaumer Knaflic (Very practical, hit-the-ground-running book)
    • Let's Practice Storytelling With Data Workbook by Cole Nussbaumer Knaflic
    • Avoiding Data Pitfalls by Ben Jones (This has been an awesome read for me.)
    • Effective Data Storytelling by Brent Dykes

How can you establish a culture of learning at the executive level, to continually strengthen this causal understanding?

  • Commit to high impact, risky experiments with an end state in mind that may change the causal model or dramatically advance your understanding
  • Participants talked about how these large-scale experiments required strong executive buy-in as well as a larger role from their data teams to ensure the project succeeds
  • Consider regular updates to executives on the state of experimentation across the company, focused on discussing hypotheses, results and lessons learned

Fred Smith
CDC

Occasionally we experience a dramatic shift in our key metrics. Whether or not we can explain that shift, we are often unable to address it quickly enough to effect a change.

As you would expect, the CDC often experiences massive surges in traffic to its websites during public health emergencies and other events. What you might not expect is that the surge is not confined to the current event response.

In this huddle, Fred Smith, Technology Team Lead and Digital Metrics Lead in the Digital Media Branch of the CDC, will moderate a discussion about surges in metrics and what we can do about them. We will look to answer the following questions:

  • How do you profile change in usage?
  • How do you change communication both internally and externally?
  • How can analytics shape responses and operations?
  • What are the most effective ways to communicate with senior decision-makers, both during and after a surge?
  • How can you plan for a surge/crisis?

This discussion will be relevant to both the commercial and non-commercial sectors. Come share your experiences so together we can improve outcomes delivered through analytics when metrics shift unexpectedly.

How do you profile change in usage?

  • Know your baselines
  • Segmentation is your friend. Know your audience and be ready to capitalize on changes, capturing a halo effect that increases overall traffic after the surge (new repeat visitors or customers)

How do you change communication both internally and externally?

  • Fight knee-jerk reactions with data. Internal perceptions are often different from external perceptions
  • Gather data and use it: customer satisfaction, search and other referrers, bounce rates and paths, search term analysis. Everything you normally do, but faster and more. E.g., the CDC redesigns sites several times during a sustained response; the event changes rapidly and the audience changes rapidly, so your site needs to keep pace

How can analytics shape responses and operations?

  • What’s working?  What’s not working?  What can be better?
  • Use data and predictions to support operations and maintenance.  Using predictions, for example, CDC built out several servers before they were needed during the early days of the surge
  • Let data inform UX decisions.  Know that surge traffic may be different than normal traffic.  Gauge the shift in audience segmentations and adjust your site to the change in audience
  • UX, Referrers, Trends, Partners, outreach
  • Measure, measure, measure
  • How does visitor behaviour change during the surge?  Can you capitalize on the increased traffic to gain more repeat customers?
  • Depending on your audience, analyse metrics differently.  For highly charged topics that may attract distinct, entrenched views not reflective of your site, discard x percent of the top positive and bottom negative responses and analyse the middle portion separately (a sketch follows below)
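
One hedged way to read such a trimmed "middle portion", sketched with invented survey data; the x percent to discard is a judgment call:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical 1-10 satisfaction responses on a polarizing topic
    scores = rng.choice(range(1, 11), size=1000)

    trim = 0.10  # discard the top and bottom 10%
    middle = np.sort(scores)[int(len(scores) * trim): int(len(scores) * (1 - trim))]

    print("overall mean:", scores.mean())
    print("trimmed mean:", stats.trim_mean(scores, trim))  # library shortcut for the same idea
    print("middle-portion mean:", middle.mean())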

What are the most effective ways to communicate with senior decision-makers, both during and after a surge?

  • Keep the baseline in sight and present surge data along with the baseline (e.g., 2019 average daily page views overlaid on the daily surge page views; see the sketch after this list)
  • Communicate the decline clearly as well.  Know (if possible) what is driving the decline.  For example, when traffic is primarily through Google searches, how closely does your traffic follow the Google Trends curve for the most common terms?
  • Manage expectations!  Name a surge a surge and communicate the expected decrease
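
A minimal matplotlib sketch of the baseline overlay described above, using invented traffic numbers:

    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt

    # Hypothetical daily page views during a 60-day surge
    dates = pd.date_range("2020-03-01", periods=60, freq="D")
    surge = pd.Series(np.random.default_rng(1).normal(4e6, 4e5, 60), index=dates)
    baseline = 1.2e6  # e.g., 2019 average daily page views

    fig, ax = plt.subplots()
    ax.plot(surge.index, surge.values, label="Daily page views (surge)")
    ax.axhline(baseline, linestyle="--", label="2019 average daily page views")
    ax.set_ylabel("Page views")
    ax.legend()
    plt.show()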

How can you plan for a surge/crisis?

  • Establish your baselines; know your data
  • Plan for a multiple-factor scale-up of infrastructure and operations: 5x, 10x, even 20x, depending on your knowledge of your industry and traffic. The decision is driven by a cost/benefit analysis of the cost to scale and maintain against the loss of revenue/image if the site is unavailable
  • Set up trip wires and alerts to notice a potential surge early (see the sketch after this list). It's better to have several false positives than to miss a real surge
  • Pay attention to “talking heads” and trends that might inform other changes particularly in retail
  • Internal communications: Include infrastructure increases in advertising campaign budgets. Marketing should inform infrastructure and metrics
  • Budget plans.  Know what changes for a surge.  Are you ready for it?  If you triple your metrics, can all your contracts and vendors absorb the extra load, or do you need to increase them as well?  Do you have increase clauses?  Do you have alternate mechanisms for collecting and analysing your data that are less expensive but maybe not as robust?
  • Partnerships beforehand.  It's best to have working relationships with internal and external stakeholders before a surge rather than trying to forge them during one, when everyone may already be busy.  Know your contacts in marketing, infrastructure, budgeting, leadership and other metrics teams, and engage them regularly
  • Continually review the skill sets on your team
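
A hypothetical trip wire of the kind described above: flag a potential surge when the latest day exceeds a multiple of the trailing median, tuned to prefer false positives over misses. The window and multiple are assumptions to adjust to your own traffic:

    import pandas as pd

    def surge_alert(daily_views: pd.Series, window: int = 28, multiple: float = 2.0) -> bool:
        """Flag a potential surge: latest day exceeds `multiple` times the trailing median."""
        baseline = daily_views.iloc[-(window + 1):-1].median()
        return daily_views.iloc[-1] > multiple * baseline

    # A flat month followed by a day at 2.6x baseline trips the wire
    views = pd.Series([1_000_000] * 28 + [2_600_000])
    print(surge_alert(views))  # True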

Amy Sample
PBS

Growing and developing analyst talent remains a priority on every analytics leader's agenda. Training and resources for the tools abound. But how do you turn a junior analyst into a rock-star senior analyst?

In this huddle, led by Amy Sample, VP, Business Intelligence at PBS, we are going to discuss successful approaches to developing the skills and savvy we need from our analysts. We will ask:

  • What are the key skills to develop in analysts and data engineers?
  • What mentoring techniques are being used to help them?
  • How do we become better mentors to our team members?
  • How do we encourage analysts to produce more meaningful and actionable insights, improve storytelling and data visualization, and develop soft skills such as persuasion, influence and professional presence?

Join this huddle to find out how other businesses help analysts hone their skills and stay motivated.

Key Takeaways

  • Skip-level meetings are great to have higher ups share strategy and engage in meaningful feedback. It makes employees feel more valued and gives them a voice.
  • Remember that everyone is different and has different motivations – create a mentoring strategy for each person, which results in both retention and refresh strategies.
  • Find mentors outside of the department – Many skills that need to be developed are less technical. Expanding the mentoring network allows employees to understand business strategy and see how their work connects to the bigger picture.
  • Appeal both to tactical skills and professional development
  • Training on when to ask questions – knowing when to ask for help and when it is appropriate to figure things out on their own. Sometimes a great investment in your future is to learn the hard way. Learning by failing is important too.
  • Not talking about analytics in the mentorship – Usually the analytics skills are there, but it is the soft skills that need developing.
  • Carve out the time for mentoring – Most of us make it a part of 1:1s. Schedule mentoring time for your team as well as yourself and prioritize it like you would a meeting. Sometimes it is good to look to an outside resource for mentoring too.
  • When mentoring remotely – Documentation and structure are really important. Document goals and progress and make sure to discuss on a regular cadence.
  • Make an effort to acknowledge people’s success – When we are remote it is harder to share successes and learning moments. Make an effort to recognize performance with upper management and acknowledge success in front of the team. Virtual lunches, shout outs in slack, happy hours, and employee of the week were all ways cited to engage and motivate employees.
  • There is a difference between coaching and mentoring – coaching navigates situations and mentoring provides guidance for the arc of a career.
  • “In order to be a mentor, and an effective one, one must care. You must care. You don’t have to know how many square miles are in Idaho, you don’t need to know what is the chemical makeup…of blood or water. Know what you know and care about the person, care about what you know and care about the person you’re sharing with.” - Maya Angelou

Rusty Rahmer
GSK

Digital marketing solutions continue to grow at an unprecedented pace. Who does all of that technology belong to? What staffing and skills are needed to successfully support the practice? Each of these technologies brings new and unique challenges and implications for our analytics organizations.

Join Rusty Rahmer, Head of Customer Experience Solutions at GSK, for a discussion on how we are coping with these challenges: figuring out the organizational design options, roles, skills, and operating models to support the new solutions, and understanding the people, process, and org design components necessary to support the new capabilities.

Together we will conquer this tidal wave and come away from this huddle more than just afloat.

Key Takeaways:

  • Success has very little to do with staying aware of the technologies, platforms, architecture, organizational design, etc., and much more to do with change management. In fact, achieving successful digital transformation in your organization is really an exercise in managing massive change
  • Because of this, it is paramount to have a strong vision of digital success at the C-suite level to drive through the challenges of leading massive change; groundswell is not enough
  • The change in tech, platforms, and org designs is both a spectrum of maturity and continuous. This reinforces the importance of framing the problem as setting your organization up for successful, continuous change management
  • Consultants can really help with the inputs and direction of the C-suite vision, and they can help in the trenches with the tasks, but the real transformation is the work in the middle and has to be desired and led by the organization itself
  • This is a monumental lift. Assess all aspects of your organization's readiness for it, and design and apply solutions in the areas of greatest need... from C-suite support and vision, to middle management and beyond
  • The recommendations here are to...
    1. Acknowledge what the challenge of your transformation really is (Change Management)
    2. Focus first on developing a passionate executive/C-suite sponsor to own the vision of success for your digital transformation
    3. Over-estimate the challenge of change and design strategies accordingly
      1. Slice thin – Target smaller/leaner business lines to be pilot areas for transformation
      2. Get early wins and demonstrate the power of the model in these smaller areas
      3. Use the success as momentum to work through progressively larger areas
    4. Don't worry about the tech stack, worry about defining the right problems, in the right order of priorities, and designing the right reusable processes for dealing with them (i.e. Business/Analytics/IT joint Team Vendor Product Evaluation Processes, etc.)

Paula Sappington
Hilton

You have grown from one part-time A/B testing resource to an optimization program of nine team members running both quantitative and qualitative testing.  You had lots of growing pains!  You manage a governance team which sets your roadmap and priorities. And yet you feel that optimization is not delivering to its full potential.

This is exactly the challenge Paula Sappington of Hilton faced. In this huddle we will examine how to drive more out of our optimization programmes. We will look at examples of effective prioritization processes, how to measure your programme's health, the role of analytics and executive reporting, and how to scale the programme up.

Time permitting, we will touch upon utilizing qualitative research to inform the testing roadmap.

Looking for fresh perspectives and tangible new ideas for your experimentation program? This is the huddle for you.

Telling the Story

  • Important to determine the best way to tell the story: Share how learnings from experiments are relevant to that individual / team; may need to customize findings
  • Many participants indicated they often have to start from square one: What is experimentation? Why is reaching statistical significance important? Etc.
  • Roadshows are very effective ways of getting buy-in

Prioritization/Roadmapping: Often a challenge.

  • Some teams have large projects handed down, dictating roadmap for months or more. Often, they must find ways to declare a 'win.' One recommendation here was to create 2 (or more) teams: one strategic and one operational. Also, for large projects/goals, decompose into smaller, incremental goals.
  • For smaller teams with limited resources, how do we balance strategic vs. tactical? Study the learn rate for each type of experiment, also look at what we can technically get done to unlock potential, try to move faster
  • Does it make sense for a company that won't allow you to prioritize a product to even have a program?
  • Appetite for integrating optimization grows more slowly when the organization is not data-driven
  • Important to have leadership buy-in, but not leadership control

Team Structure

  • Include testers and analytics in scrum teams
  • One group uses a triad: UX, Analytics, Optimization
  • Two-pizza teams are effective: a small group doing the work = fewer handoffs, less red tape

General challenges

  • Some people don't understand/believe in experimentation; sometimes we can't detect a signal; sample size challenges are ubiquitous. For all of these, education is key.
  • Some teams aim for proper hypothesis testing; some want adaptive learning
  • Some problems are physics problems that we can't solve; some are computational problems we can solve; then there's the business problem.
  • One company is challenging itself: If it's important to experiment for everything and achieve statistical significance, how do we go from idea, to hypothesis to launch in a day?
Scott Breitenother
Brooklyn Data Co.

Successful leaders know data is critical to taking their business to the next level. However, despite the best intentions and well-resourced initiatives, many companies fail to build data capabilities. This can result in expensive tools left unused, high employee turnover, and losing ground to more data-driven competitors. Why is that the case, and more importantly, what can we learn from each other to fix it?

Scott Breitenother of Brooklyn Data Co. specializes in building data capabilities that last. In this discussion, Scott and huddle participants will share their views, experiences, and use cases for building best-in-class data capabilities.

We will use the people, process, and technology framework to ask and answer the following questions:

  • Why do many data and analytics initiatives underperform?
  • What are the characteristics of a best-in-class data function?
  • What is the path to success? What steps should you take to get there?

This huddle will focus on practical examples and use cases to empower participants and help them significantly improve their data capabilities. Come ready to share your experiences and hear about the successes and challenges faced by other leading organizations.

Discussion
Why do many data and analytics initiatives underperform? (“Where do we go wrong?”)

  • People are protective of their current processes. People get used to doing something a certain way. People push back on change because the processes they use are a proxy for their identity at work
  • Trying to move from manual (e.g., Excel) to automated processes
  • Organizational political issues. Lack of communication from and with stakeholders
  • It’s easier to communicate with stakeholders as a consultant (i.e. external) than if part of the organization
  • I have needed consultants in the past just to validate what I’ve already been saying
  • Change can take some time
  • Different roles / different motivations (leader vs. practitioners)
  • Directors <> practitioners should communicate and teach each other about their perspectives / priorities / challenges
  • There can be different levels of maturity in our organizations. Maybe the more mature ones need to develop norms
  • Amount of money / investment available is part of the challenge. With finite resources, how do you allocate / prioritize?
  • Speed to market sometimes increases pressure to deliver (maybe sacrificing optimal processes / solutions along the way). Too much governance can hinder the team’s progress

What is the path to success? What steps should you take to get there? (“How do we get there?”)

  • Speak with the analysts and see what their reasons / arguments are. They may be right
  • Be clear on definitions. It’s possible 2 Google Analytics metrics are “right” but because they have different definitions, some may think one is “right” and the other is “wrong”
  • Recruit allies in the organization who can help develop / defend your plan. Sometimes you’ll have to be persistent...it can take 1+ year to get people to hear / accept your ideas
  • There might be disparate needs: get these people in a room together and let them “fight it out.” Map out what's most valuable: is it speed, accuracy, intimacy, or maybe some other criterion?
  • I see data as a product. You want everyone drinking from the same firehose: same definitions, same data
  • You need to prioritize what you really want
  • Should start with business questions and then break them down into data questions (and identify use cases to help plan)
  • One big challenge we have: too many teams owning the data: because everyone owns it, no one owns it
  • It’s difficult when you have so many teams. At [organization redacted] we have 12 different research teams from 5 different sources
  • Should data roll up ultimately up to one person in the organization?
    • Support idea of having a data gatekeeper / ultimate arbiter of data
    • Data governance guy was hated at my last organization. Even though he had the important job of protecting data quality / integrity
    • At [organization redacted] data governance is very much distributed. Some platforms are centrally managed, but we also use various open APIs to make the data more accessible. In some ways it’s like the wild west when it comes to how teams use the data. We have some governance to keep consistency, but teams have flexibility, too. It’s tough also with federal regulations involved

Wrap up

  • It’s a journey
  • Nothing’s perfect (at any stage)
  • You have to really think about the data organization you want to create. It doesn’t build itself organically
  • We don’t work in a vacuum: we have to deal with personalities and legacy systems
  • Start with audit and getting each team’s needs / wants
  • Having internal or executive champions to push your ideas can contribute immensely to your success
  • Last words of advice from the group:
    • Some kind of framework is required
    • If you have the data you usually have a position of power in the organization
    • Process is very important (not just a plan)
    • Transparency and buy-in
    • “Asking for forgiveness instead of permission” works well (in some environments)
    • Plan for the near term (long-term plans are likely to become obsolete / inadequate before you get there)
Jason Harris
Fivetran

Cloud-based technologies have changed the face and pace of business. Deep, customized integrations are fading away. Self-serve applications are flourishing, and enterprise-level SaaS has gained great momentum because its flexibility creates the opportunity to build better technology than traditional methodologies allow. Yet integration challenges have not disappeared completely.

In this huddle, Jason Harris of Fivetran will engage participants in a discussion around these key questions:

  • What are your current key pain points with respect to data integration?
  • How are you solving for these challenges?
  • What are the use cases for automated vs manual integrations?

Whether or not you have conquered all your integration demons, this is the huddle for you. You will emerge from this conversation with new ideas and use cases to boost your data stack's efficacy and value to your org.

  • Many different data silos still exist – on-prem and cloud silos of all kinds
  • Biggest challenges for our round table: data quality and freshness
  • Great opportunities for professional development:
    • DataCamp was identified as fantastic training
    • Self-learning with college textbooks (really!)
    • The Digital Analytics Association was identified as a great organization as well
    • Reach out for coffee dates with colleagues and peers at other companies

Second Huddle Round

Round #2
3:10 - 4:30 pm ET
Gary Angel
Digital Mortar

Some of my DA Hub conversations are relentlessly tactical in focus – and that's a good thing. But I think it might be interesting to have a different kind of conversation. I would like to kick back a bit and explore what we might do with our analytics teams that might be dangerous or interesting or transformative.

Should we be deconstructing product? Investigating consumer cognitive psychology? Analyzing our own budgets? Gamifying optimization programs? If you’ve ever had a wild idea that you knew would never get budget or that seemed too hard or too radical to tackle, I (and I think others) would love to hear it and kick it around.

If this sounds like a risky conversation that is unlikely to be productive…all the better.

This huddle was designed to be an open discussion where the group collectively decides which topics to address. The session started with a quick survey of topics suggested by the group, including:

  • Data Literacy
  • Zoomification
  • Bridging the online and offline worlds
  • Customer Archetypes
  • Handling Massive Growth
  • Post Cookie World
  • How to Capture Data Behavior to Make Decisions
  • Data Science vs Data Analyst

Given the available time, only a few of these were discussed.

The underlying thread among the topics chosen is that they all dealt with leveraging data points to understand behavior and gain insight into potential audiences for customer acquisition.

  1. Capturing Data Behavior - we discussed this in the context of making editorial decisions based on data. We can arrive at a hypothesis, but there is no metric to measure whether it was a good decision. So how can we measure a good editor? Several things to keep in mind:
    • Look at either customer preferences that have been self selected by the user base or third party data from segments that browse our site to curate articles for our news site
    • Based on readership and customer satisfaction results, we could measure the "success" of the editor's choices of articles to present to their readers
    • Some suggested there aren't metrics to measure the value of decision making but that was contested. There was more consensus that post mortems seldom take place
  2. Post Cookie World - this was a lively discussion, sparked by the need for a call to action and for being proactive about third-party cookies going away. Someone speculated whether this is the end of behavioral targeting. While first-party data gives us a host of demographic data, it is the third-party data that helps us understand the interests, or "most likely to" behaviors, of our customer base. As we wrapped up this discussion, another key point was raised: have organizations even fully tapped the data they do have for a good ROI to begin with?
  3. Customer Archetypes - rounding out our topics and staying focused on the customer, we discussed creating surveys to capture information that can give us more insight into customer behavior and preferences. Gary made a good point that an effective survey can potentially gain more insight because people naturally like to talk about themselves. Better survey design is required to get this valuable data

Caitlin Moorman
Trove Recommerce

Analytics roles are framed around the analyses you produce, the models you code, and the platforms you build. But for your team to be effective in that work, someone must spend a lot of time in meetings coordinating with stakeholders and users, onboarding and mentoring, improving team processes, and doing lots of other less visible work.

Often, spending too much time on that invisible work can hold back junior team members from advancing in their field. Tanya Reilly coined the term "glue work" to capture all those tasks that make the team more successful but are not explicitly valued for individual contributors. Unmanaged, glue work can result in deep inequity on your team and sometimes push employees out of the field.
In this huddle, led by Caitlin Moorman, Head of Analytics at Trove Recommerce, we will discuss how you, as a leader, can solve these challenges:

  • Recognize glue work conducted by your team
  • Explicitly value all the work that makes your team successful
  • Fairly allocate all the other work that just needs to be done
  • Help your team members choose career paths they truly want to be on

We will draw on participants' experience to come up with tangible outcomes you can apply to your program immediately following this huddle.

What is the glue work in your role?

  • Getting access to the data
  • Cleaning up the data
  • All the backend work to go from 0 to dashboard, or to implement a seemingly user-friendly tool
  • Data integrity - proactive work and break-fixes
  • Peer review processes
  • Ad-hoc requests
  • Scoping roles to be specific and effective prior to hiring
  • Planning and organization. Measure twice, cut once - get team to buy in up front and it really makes things more efficient

How do you help others recognize the scale and value of this work? How do you get recognition for your team for doing this work? How do you communicate to execs?

  • Visualize complexity
  • Create a giant diagram of what is involved – what pieces must be brought together, what data sources, etc. More overwhelming can be better for helping non-technical folks understand.
  • Lead stakeholders through it, and try to get them to volunteer that it is hard and complicated, or to identify the areas where you will run into ambiguity, confusion, or other difficulties
  • Share the output of audits/tag scans - help them see complexity + understand how their behavior adds to it (new vendors, etc)
  • Try to make clear up front how much is going into a project. Set expectations appropriately. Can be hard in a consulting environment where the impulse is to say yes to the client    
  • If that fails, or a project includes much more complexity than expected, document time spent and share retroactively. Help improve estimation for next time
  • In an Agile environment, point tasks to include this glue work.
  • Let the person closest to the work point it.
  • Takes time to learn how to give yourself the right number of points.
  • Phase the work - break projects out into phase 1, phase 2, phase 3.
  • Separate exploratory phase from implementation phase. Those play to different strengths (some folks want to do exploratory work, others do not)
  • Make it an explicit goal
  • Align on the % of time a role should spend on different types of glue work and try to stick to that
  • Put proactive projects clearly on the roadmap
  • Easier to do once it is already a problem - proactive work can be hard to get resources for
  • Count the hours
  • In all environments, document everything
  • Meetings to clarify
  • Hours on break-fix, etc
  • Time tracking sounds onerous, but can be incredibly productive
  • Example: one team measured for a month how much time they spent on data aggregation/cleansing – 80% aggregation/cleaning and 20% analysis – put a dollar amount on it, and made the case for a tool that would help with the problem (see the sketch after this list)
  • Team may push back but will be happy to see their work valued.
  • Do not make it permanent and use the data for change.
  • Test it on yourself first so you can show the team you are in it too.
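
A back-of-envelope version of the time-tracking example above; every number here is hypothetical:

    team_size = 5
    hours_per_month = 160
    loaded_hourly_cost = 75   # fully loaded $/hour per analyst
    cleaning_share = 0.80     # measured share of time on aggregation/cleaning

    cleaning_cost = team_size * hours_per_month * loaded_hourly_cost * cleaning_share
    print(f"Monthly cost of aggregation/cleaning: ${cleaning_cost:,.0f}")  # $48,000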

How does this change mentoring? How do you help the team identify their strengths and what types of work they want to be doing, and help them do more of that?  How does it work for different personalities and career paths?

  • Ask either/or questions on projects: focusing on the immediate need / specific question can help illuminate the underlying preferences, but open-ended questions can be very hard to answer effectively
  • Knowing someone does not enjoy a glue task may not make it go away, but at least helps you present it in a way that resonates more with them.
  • Understand strengths and how to pair folks to divide work well.
  • Personality type tests, etc can help identify work styles to prevent misunderstandings, pair strengths, etc.
  • Give folks time on a new team to help them figure out how they work best
  • How did you personally choose between staying technical and moving into management, making glue work your role?
  • Understand your relative strengths. Can you solve a technical problem quickly? Are you better at solving org problems? Try to find a need within the organization you can effectively solve to craft a role that works for you + company.
  • The way up is often in people management.
  • Orgs should have solid IC paths, but that is not always the case
  • Many of us like/love people management but do not want to lose touch with the technical
  • Helpful to try to find technical project you can get involved with periodically even if it is not your day to day
  • How do you upskill, and get better at being a manager?
  • Invest in training for folks even if you are playing to their strengths
  • Small session discussion format works really well
  • Mentoring former employee - no power or control - pure mentorship. Can give more honest feedback about how the mentor is not being helpful.
  • Local environments, doing technical hands on training, etc. Forking GH repos and learning from them.
  • Writing exercises after reading a book/article. Reflect on how you would have handled a recent situation differently if you had known then what you just read

What are you changing coming out of this?

  • Writing after reading a book to cement knowledge
  • Tracking time: communicating to others how you spend your time + figuring out how you wish you were spending your time. x 3
  • Name it (make glue work clear) and document it
  • More clearly phase projects
  • Visualize the “glue” - make it more real how all the small pieces interconnect and work together. x 2
  • Focus on identifying representative projects and talking to team more as either/or than open ended preference/future questions
  • Communicate the difference between circular processes in data (when you complete phase 1 you have learned just enough to go back and need to do it again) v. linear process (sequential building like traditional software engineering)
Michelle Ballen
Billie

Every company needs analytics, but some have more resources than others to dedicate toward it. Start-ups in particular, with high ambitions and limited funds, need to be strategic in their approach to building and running an analytics practice.

In this discussion, led by Michelle Ballen, Head of Data at beauty start-up Billie, we will rely on the group’s collective experience to explore the methods and processes used in digital and business analytics to speed up analysis and gain a competitive advantage.

Questions we will answer include:

  • What techniques have proven successful for running an analytics practice efficiently?
  • Which parts of the agile methodology, typically reserved for software engineers, should data teams adopt?
  • How to get your organization and team to embrace agile approaches?
  • What are the drawbacks of speed in analytics? How do we overcome them?

Come share your experiences and discover how other leading organizations are successfully speeding up their analytics practice.

This discussion went all over the place – and for good reason! There are many different variables that contribute to how efficient an analytics practice can be. Some revolve around workflow and communication, while others revolve around personal motivators and team dynamics.
 
Most of the participants in the session have at least experimented with the agile methodology, some more successfully than others. There was unanimous consensus that the agile workflow adapts well to the engineering/development aspects of analytics but needs to be adjusted slightly to support the research/exploratory side.
 
The key takeaways from the discussion included:

  • Conduct regular meetings with business stakeholders to align on short and long term priorities for analytics within your organization
  • Develop quarterly roadmaps based on the feedback from business stakeholders
  • Keep the backlog of tasks well-groomed and prioritized
  • Experiment with sprint schedules vs. ongoing kanban boards to see what works best for you
  • Keep team members motivated by playing into their strengths and switching up tasks
  • Prioritize time for research and creativity whether that be through an 80/20 type commitment, regular time-bound “spikes”, or even data hackathons
  • A good data product manager is hard to find but can be immensely helpful. If a data product manager doesn’t click, it can cause more harm than good
  • Regular “ceremonies”, including data demos and retrospectives, can help spark new ideas and identify areas for improvement

Further reading/viewing:

 

Amber Zaharchuk
Disney

Building and maintaining trust in analytics data has always been critically important. The challenge is becoming more complex in an evolving technology landscape with an increasing number of data sources. Do you have a comprehensive strategy for data governance?

In this huddle, Amber Zaharchuk of Disney will focus the discussion on how to improve data integrity and stakeholder trust. Questions we would look to answer include:

  • What strategies are at our disposal to ingest and groom data from disparate sources?
  • How do we maintain consistency across products and business units?
  • Which stakeholder/s should be responsible for data quality and governance?
  • How do you make your data accessible to all the users that need it? What if their skill levels vary?

If you are looking to improve trust in your data then join this huddle and obtain some real world examples of how to achieve that goal.

Key Takeaways...

  • The day of reckoning is coming - we need to standardize!
  • Some government agencies need to get in line with the commercial world: Section 508 accessibility compliance, agile training, and determining ways to communicate the full data picture
  • Web analytics still has a lot of issues with the basics, even though it's been around for some time; perhaps start a small team or hire someone from the outside to make sure the data we report is correct
  • Contribute to the shift from owning everything yourself to working more with end users to make sure we increase data usability
  • Educate your colleagues and stakeholders on the value of the metrics you’re providing
  • Employ a RACI matrix for data governance – document the source for all data and who is accountable, then make this part of public documentation
  • Determine the ROI of governance activities. I can tell you you're going 60 mph, but what if I told you the speedometer was broken? We need to make people aware of the consequences of poor decisions made with unreliable data
  • Organization and documentation must be more of a priority. When new people are onboarded, make that part of their role from the start. Devote time in your day or week for governance efforts, such as documentation
  • We must stop taking shortcuts, and invest the time and effort into keeping data clean and usable

What do we mean when we say data governance? What is the scope?

  • Increasingly, we think more about privacy, controlling data, and preventing leaks. Privacy is more of a concern now than when we had this conversation in 2017
  • Governance includes privacy, but is really about the processes in place and the people to manage them
  • Ensuring the data is integral, usable, correct, well documented, and ready for analysis
  • Documentation is key, especially when there are personnel changes
  • People, processes, and technology are all a part of the governance wheel

Does your organization have a formal data governance strategy or team?

  • It still seems pretty rare for organizations to have a dedicated data governance team that is focused on consistency and quality
  • A few of our delegates’ organizations do have data governance teams, but they're not responsible for looking at digital analytics data
  • Ultimately, the responsibility still falls to individuals to step up and take measures to ensure data quality
  • What are the best practices for the governance entity to work with siloed teams across the organization?
    • Analytics Team works with engineering on governance to create alerts to let you know if there are lags or spikes in the data
    • The best way is to tether local to enterprise, like a hub-and-spoke model: the people who know the data best are those closest to it, but it's also important to think enterprise-wide. Centralized data architecture is the way of the future.
  • Ways we can foster a more collaborative approach to data governance:
    • People across the org need to have a desire to do things the right way
    • Reaching out and communicating with other teams is the first step, as the bigger group is sometimes not aware of the smaller initiatives. Build that awareness by collaborating
    • Flags should be triggered for things like, “I have a feeling you’re trying to build something on your own and we should talk.”
    • One delegate’s org brought in an expert with governance experience, and they built out a governance team, with “Data Governance” in their titles
    • Another delegate has someone on her analytics team focused on data quality. There is also a separate data governance team, but the scope of that team is unclear, and they are not closely connected

Where does a data governance team fit in an organizational structure?

  • We all have so many sources of data, from HR to finance, and it’s hard to know “what data is good.”
  • The people producing the data should define quality; there should be a SME to evaluate and sign off on data
  • There should be a centralized model, but working with people in each team to ensure integrity, as a centralized team can’t manage all the pieces
  • A centralized entity can set rules for ETL and copying, data flows, etc, but we need input from the source/producer owners, otherwise this is too complex

We know documentation is important, but it’s a constant challenge. What are some tips for managing this important component of our roles?

  • Keep track of version numbers and dates when changes are made, bugs are fixed, or data losses occur, so that we can go back and reference when and where things were changed. Provide detail so that others have context
  • Some are doing this in Excel, though we know this is not the best tool for sharing. Google Sheets is a bit better for sharing/collaborating because of version control, filtering, and searching, and no one can overwrite it on a hard drive. Confluence, JIRA, and SharePoint are also used so that documentation is accessible
  • Make use of the features in your analytics tool! For example, calendar notations or Workspace notes and descriptions in Adobe Analytics
  • Teams are stretched, so they are using sub-optimal solutions to govern, such as “DO NOT USE” or “USE THIS” notations on variables, segments and report suites. WE MUST STOP DOING THIS!
  • We should invest time in cleaning up our implementations. Do it bit by bit, allotting a percentage of your time each week as not to get overwhelmed
  • When analysis/reporting and implementation teams are separate, both groups have the responsibility for data governance

Is QA part of your role? What is your relationship with Technology teams?

  • Analysts learning how to QA can be beneficial. When you inherit an SDR, the best thing to do is start with a QA audit yourself while going through the SDR to make sense of it and ensure that it is still valid
  • QA teams may not do as extensive or robust testing on the analytics implementations
  • QA needs to run all the way from the product itself through logs to the actual analytics tool to ensure the data appears properly
  • One example is that a delegate has an implementation team that does initial QA, and then it goes to the analysts on the analytics team to do their checks all the way to the Adobe Analytics interface
    • They have a RACI matrix to help define ownership
    • Each team is equally responsible for quality
    • Some teams brought in ObservePoint to automate some of the QA, but who should be responsible for creating the journeys?

Anyone else using automation? If so, who is responsible for set-up and maintenance?

  • The cost of a bad implementation is so high, but there is also a cost to test everything manually, or to spend time setting up automation use cases
  • ObservePoint hasn't worked well for native apps/dynamic content
  • Automation is a challenge when we have different implementations across channels
    • Naming conventions and casing, so simple, but so problematic
    • Creating the variables needs to be done by the analytics team; you should ask them for new events/variables, not just create them.
  • Think about the end user and how the analyst is going to use this data

Is all this governance work helpful to meaningful analysis on the back-end?

  • Cleanup leads to noticeable improvements and efficiencies in analysts’ work, but so far, it’s been tough to quantify a financial benefit
  • What is the motivation for an org to build these teams internally? Can you prove the ROI?
  • There's a difference between messy and unreliable – am I guiding the business with the wrong data?

What about other stakeholder groups? How do you build trust in the data with other users of the data?

  • With social media groups, as you are talking to them about campaign parameters, show them how they can grab only their parameters and their results
  • A standardized checklist is the best way to manage parameters
  • Tools or templates that generate the parameters for you help make sure you don't use the same one twice (a sketch follows below)
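
A hypothetical sketch of such a template: it enforces a naming convention and refuses to reissue a parameter set that has already been used. In practice the record of issued parameters would live in a shared sheet or database rather than in memory:

    from urllib.parse import urlencode

    _issued = set()  # in practice, a shared sheet or database

    def build_campaign_url(base_url: str, source: str, medium: str, campaign: str) -> str:
        params = {
            "utm_source": source.strip().lower().replace(" ", "_"),
            "utm_medium": medium.strip().lower().replace(" ", "_"),
            "utm_campaign": campaign.strip().lower().replace(" ", "_"),
        }
        key = tuple(params.values())
        if key in _issued:
            raise ValueError(f"Campaign parameters already in use: {key}")
        _issued.add(key)
        return f"{base_url}?{urlencode(params)}"

    print(build_campaign_url("https://example.com", "Facebook", "Social", "Spring Launch"))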

Dylan Lewis
Intuit

Experimentation is something everyone in your organization should be doing, but how do you increase adoption? And as adoption grows, how do you ensure trustworthy decisions are being made through sound practices in experiment design, delivery and analysis?

Join Dylan Lewis, Product Manager for Intuit's internal experimentation tool, for a discussion about how to navigate the path to experimentation everywhere in your organization. The conversation will cover some of the critical requirements for success, and some of the things to avoid along the way.

Come share success stories as well as letdowns and ongoing challenges. In return you will discover how your peers have solved for these challenges and learn new ideas to help turn your experimentation program into a product that teams cannot live without.

Key takeaways...

Three methods to bring experimentation into the organization:

  1. COE - created and run from a central organization that drives capability and education, and is always on and available.
  2. Third-party apps - often brought in when a COE doesn't meet the needs, or there isn't enough organizational support for standing up and resourcing an entire tech + capability team.
  3. Lone-wolf - single, passionate person brings in the tool to drive the change and capability (most difficult)

How do you gain adoption of that critical second team?

  1. Demystify the technical complexity for teams - make it so easy to understand that even a kindergartner can do it
  2. Showcase outcomes from the first team - create a sense of FOMO or competition (I can do better than them)

What are the critical requirements for successful experiments?

  1. Qualitative insights - know the real customer problem
  2. Quantitative insights - understand the relative impact solving the problem will have.
  3. Historical experiment outcomes that are generalizable to solving the customer problem

What are some of the challenges (and solutions) found in experimentation programs?

  1. Lack of leadership or frontline support - You must have top-down alignment and resourcing of experimentation as a priority for the company. Show the many wins across the program to demonstrate impact on meaningful outcomes. Benchmark against other companies and their capabilities. Over-communicate the message that experimentation is critical to learning fast
  2. It takes too long (..with the treatments that we have deployed) - Flip the discussion: instead of "with this lift we will need to run the experiment for four months," ask "if we want an answer in two weeks, what level of impact do we need on our primary metrics?" (see the sketch after this list)
  3. Lack of treatments with impact - Look for meaningful issues that users are having by doing follow-me-homes or rapid prototyping. Walking in the user's shoes will often reveal larger problems that can be used to create better experiences. Use the design-4-delight methodology
  4. Lack of cross-team coordination leads to cancelled or invalidated experiments if teams aren't communicating about their plans broadly enough. Look for ways to maintain connectivity with teams in related areas to ensure alignment if it isn't coming from leadership cascades
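
A sketch of that "flip the discussion" arithmetic, using statsmodels power analysis; the traffic and conversion numbers are invented:

    from math import asin, sin, sqrt
    from statsmodels.stats.power import NormalIndPower

    baseline_rate = 0.05             # current conversion rate (hypothetical)
    visitors_per_arm_per_day = 2000  # hypothetical traffic
    n_two_weeks = 14 * visitors_per_arm_per_day

    # Smallest standardized effect (Cohen's h) detectable in two weeks
    h = NormalIndPower().solve_power(nobs1=n_two_weeks, alpha=0.05, power=0.8,
                                     ratio=1.0, alternative="two-sided")

    # Invert the arcsine transform to express h as a conversion rate
    detectable_rate = sin(asin(sqrt(baseline_rate)) + h / 2) ** 2
    lift = detectable_rate / baseline_rate - 1
    print(f"An answer in two weeks needs roughly a {lift:.0%} relative lift")
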
Asif Rahman
AccuWeather

Most analytics teams are already lean to begin with. Team members often wear multiple hats and work across disciplines. From storyteller to implementation expert there is never a shortage of work or responsibility for the average analytics professional.  How can already lean teams maintain productivity levels in the face of shrinking budgets and resources due to the economic downturn?  

In this huddle, led by Asif Rahman of AccuWeather, we will discuss the role automation plays in growing and sustaining productivity in an analytics organization.  Automation is already (or should be) a part of our daily lives – how can we take it a step further and do more with less in these challenging times?

Questions we will answer include:

  • Can automation finally flip the “80/20 rule” on its head? (80% finding the issue or cleaning the data; 20% analyzing or solving the problem)
  • What can we automate beyond alerts, reports and dashboards?
  • What are the most effective approaches to get your team’s buy-in to automation?
  • What tools and technologies can be used for automation?

We encourage managers looking to optimize resources available to them to join this huddle and benefit from the collective insight shared by the group.

 

Key points:

  • Have a good idea of what makes a good candidate for automation
  • Don't automate for the sake of automating
  • Knowing what can be automated will speed up the process (API available? Open source?)
  • Out-of-the-box automation software does not provide the flexibility needed
  • If you are building rather than buying, you need proper documentation and a common open-source language (e.g., Python or R)
  • If the team is not tech-savvy enough to dig into languages that can help with automation (Python, R, JS, etc.), ask for help from an external vendor or from other development teams within the organization, such as data engineering (see the sketch below)
  • Data engineering is a big part of automation; maybe we need to think about a dedicated data engineering person within the team
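
A minimal sketch of the "build" route: pull yesterday's metrics from a reporting API and write a dated CSV, scheduled via cron. The endpoint is hypothetical, and the code assumes the API returns a JSON list of flat records:

    import csv
    import datetime as dt
    import requests

    API_URL = "https://api.example.com/v1/metrics"  # hypothetical endpoint

    def export_daily_report() -> str:
        day = (dt.date.today() - dt.timedelta(days=1)).isoformat()
        rows = requests.get(API_URL, params={"date": day}, timeout=30).json()
        path = f"daily_metrics_{day}.csv"
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=rows[0].keys())
            writer.writeheader()
            writer.writerows(rows)
        return path

    if __name__ == "__main__":
        print("wrote", export_daily_report())

    # crontab entry, e.g.: 0 6 * * * /usr/bin/python3 export_daily_report.py
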
Puja Ghosh
JP Morgan

We strive to build a strong analytics and experimentation capability to support our stakeholders and impact business change. In most cases the desire is to build an in-house team which would give us more control at a more affordable cost.

However, challenges lie ahead. How do we get the right people? What is the right balance? Can we get the head count? How do we ramp up our programme quickly enough to show a return on investment? These are testing challenges, some of which could be eased or relieved by bringing in subject matter experts to augment our in-house competencies.

In this huddle, led by Puja Ghosh, VP Digital Relationship Manager & Product Owner at JP Morgan, we will answer the following questions:  

  • What are the benefits and use cases of each model, and what are their hidden costs?
  • What functions can be accelerated through vendor support?
  • How to transition from one to the other?
  • Did you inherit an existing model or did you create it? What were your considerations?

The conversation will focus on tips and insights from senior managers who have tackled similar challenges in staffing and building out teams.

Best Situations to bring in an external agency to help

  • Well defined, time bound projects with the client/brand learning to manage it themselves during the engagement. Training and continuity documents should be part of the scope of work
  • Agencies can help you scale and help you have stability in your talent pool while you train/hire and build pipelines
  • They can help drive standardization of best practices as they have seen the problem across multiple organizations

Not a good fit for

  • Agencies can slow things down if they are trying to sell you their products or shoehorn in a one-size-fits-all solution. (Make sure you have a good culture fit with the agency you are bringing on, and understand whether they are trying to drive their own agenda)
  • Handing off a fully developed process. The agency may lack the context and relationships to understand the barriers to completion. However, they may ask good questions and have suggestions that improve on the process you would have come up with, because they have external context

Vendor Perspective

  • Sometimes agencies can help evangelize products, create best practices, and increase tool retention
  • However, some agencies have relationships with, and are more familiar with, certain vendors and will push those solutions because they are in their comfort zone. If this happens, is the brand genuinely getting the best capability or advice?

How do we overcome some of the negatives?

  • Many issues with agencies and brands can be overcome with clear and constant communication. Make expectations very clear and if those are not met the brand needs to address that with the agency immediately to ensure a healthy relationship and success
  • Brand and agency should be unified on the problem and have each other's back. Agency should be training the brand and the brand should be communicating institutional barriers and context

Managing talent pipelines

  • Talent should be rotated every so often to prevent burnout and data blindness in both brand and agency. This should be married with developed career tracks within the firm, which leads to better talent retention while continuing to grow skillsets
  • Talent should not be viewed in a territorial manner. Agency benefits when someone gets hired by a brand because they are more likely to get hired in the future by the brand
  • Brands can attract/retain better talent if analysts feel their careers have mobility. Good brands will have good people return and returning with stronger skillsets is an asset

Taking a proactive approach to governing your analytics and marketing tags can provide huge benefits for the accuracy of your implementation, boosting your data quality and team's credibility.

In this huddle, led by Chris O'Neill, we will discuss ways you can become your company’s data quality guru and catch analytics errors before they happen, including how to:

  • Address data quality issues before they reach a production environment, with less risk and cost
  • Focus on implementing tag governance in early development environments, such as staging, dev and QA
  • Instill a culture of proactive tag governance

Data quality is a continuous challenge. Come share your experiences and learn practical techniques for improving your data so your team can shift focus to analytics rather than QA.

Closing Notes & Happy Hour