Here's a quick feature on data storytelling, drawn from a couple of post-event questions submitted by attendees at our recently concluded webinar.
Prannoy D'souza: What is your advice when the stories are weak or nothing stands out in the data? Unfortunately it does happen :(
Laura Warren: Generally speaking, when we say a story is weak what we really mean is that it hasn’t met our expectation. (A personal truth.) We wanted the data to say something, and are at loose ends when it doesn’t. When that happens, I suggest we reframe what’s happening. It’s not a weak story, it’s just not what we were expecting. Step back and identify what we were expecting and why. If the data didn’t prove that theory, why is that? What does that teach us about the original problem – why should we care? What will we do next to resolve the original problem? Your story lies there.
Weak stories are simply those that don’t connect the dots between problem, finding, and action. If we change our perspective on what we’re trying to communicate, every analysis, regardless of the finding, can become a strong story.
Khalid Hasan: Currently (due to COVID), we see many research agencies moving from statistics-based sampling to web-based sampling, which is not as statistically representative. The data may therefore be unrepresentative and biased, yet agencies are still publishing it, and people are interpreting it. What do you think?
Laura Warren: Here’s the thing. The reality is that in the absence of data, it can sometimes feel like any data is good enough. The more caution a data set requires, the greater the provider’s responsibility to tell a clear story: to clearly communicate both the potential and the limitations of the information, and the appropriate actions based on the findings.
It’s a scary time for a lot of businesses (and people) right now. From a human perspective, I empathize with the need for *any* data to help make sense of what’s happening. Because of that, I think the onus is on research providers to make a clear distinction between bad, good enough, and ideal data sets. Bad or misleading data stays in the bin. Good enough data is carefully shared with a strong story to minimize the risk of misinterpretation – what it means, what it doesn’t mean, and what you should do about it. (Channel your internal Dr. Anthony Fauci. He does this really well.) Ideal data is shared widely and proudly with a clear purpose, explanation, and action. If guidance is not given by the provider, then the receiver tells their own story. And that’s where it gets dangerous. Minimize the risk by owning the story at the outset.
Laura Warren is the Founder and CEO of Storylytics. You can connect with her here.
© Marketing Research and Intelligence Association