Better Data, Not More: What Data Science Can Learn From NBC

While in the “What is Data Science?” session at analytics camp, someone asked this question:

“I understand that you can overcome a lot of problems with more data. When do you know you need a bigger data set?”

“Not bigger,” I said. “Better.”

We looked at each other for a second, and then we both smiled. That’s one of the great things about un-conferences, those little “Aha” moments when the talk turns into a conversation and you both realize that you’ve hit upon something important. I had never thought about the “more vs better” idea, but I realized then that it was something that wasn’t really on anyone’s radar. We keep saying “Big Data” like it’s a cure-all, but collecting more data without thinking about what you’re trying to measure can cause more problems than it solves. Randomness has a pattern, and it’s the data scientist’s job to separate informative randomness from uninformative randomness. Expanding your data can add not only more randomness but new and different kinds of randomness, further muddying the waters and hiding the patterns you’re looking for.

That’s why I’m so impressed with NBC’s handling of Community, a sitcom in NBC’s prime comedy slot that just couldn’t seem to get the ratings it needed. NBC executives did everything right with this show, and they used the right mix of analysis and business knowledge to solve a riddle that would have flummoxed most TV executives.

Community is a critical darling. It’s well-written, funny, and unique in a lot of ways. The favored 18-34 demographic loves this show. A glance at Twitter or Facebook would tell you in a moment that its sentiment score was high. Hulu’s numbers (which are arguably more accurate than the Nielsen ratings) were solid. Yet no one was watching the show live. Every number was right except the one that mattered: the all-powerful Nielsen rating.

Now, NBC could have pooh-poohed Nielsen and its rating system, but that would have been a losing battle. They needed those numbers to sell ads, and they needed ad money to keep the show on the air. They could have blamed the show, interfering in the scriptwriting, casting, and story arc, but they didn’t–they knew this show was unique, the writing staff and cast were solid, and it was going to succeed or fail as itself. Changing it to be more like the shows it was competing with would just sink its numbers further.

So, with a lot of conflicting evidence on their hands, the show runners and the network executives took a chance: they put the show on hiatus. The fan outcry was loud. People continued to talk about the show and how much they wanted to see what happened next. Sentiment not only held at its existing level, it held for a long while after the show disappeared–a great sign for the television statistic called attachment: will viewers stay with a show through time slot changes or (much worse) a hiatus? They did.

Most importantly, however, NBC took 30 Rock, a show that had been a huge winner for them, and put it into Community’s time slot. This was a designed experiment, and the results were key. 30 Rock, a witty, quirky comedy that shared a lot of features with Community, did just as poorly, if not worse, in Community’s time slot. NBC now had a key piece of data they could not have gotten otherwise. The 8:30pm Thursday time slot was a tough one–they all knew that–and now they knew that even their best comedy couldn’t compete there.

So, when Community came back, it was placed in the 8:00 time slot. NBC invested in a trailer and supported the show’s return–and the show’s numbers soared. The staff was hoping for a modest increase to a Nielsen rating of 1.7–they got a 2.2, and 4.9 million viewers. Moreover, in that all-important 18-34 year-old demographic, Community outperformed every show in the 8:00 time slot, including Fox powerhouse American Idol.

The best ideas look easy once someone else succeeds with them. It’s obvious in hindsight: All of Community’s numbers were right except one (live viewing as measured by Nielsen), and it makes sense that someone would do an experiment to check the key features of that one. When you’re making decisions, however, with millions of dollars of advertising money on the line, with a few months to rescue an expensive project that you’ve put your heart and soul into, you don’t always think of doing something smart. We’re all enamored with “Big Data” right now, and that’s partly because it sounds easy. You put your math wizard into a room alone with a lot of information that costs you nothing to collect, and they come out with the solution to your problem. It’s also a safe solution: Everyone else is doing it, so if it doesn’t work out, you’re covered. You did what you were “supposed” to do.

The first people who mined a huge data set for nuggets of information, however, were taking big risks with an idea no one thought would work. They probably didn’t succeed the first time, and I’m guessing the first executive who heard a pitch about marketing or operations from a person who sits in a cube with a computer all day was pretty skeptical. Data mining (what we called big data before big data became classy) has been around for decades, and it took years to develop those methods to a point where they produced reliable and consistent answers.

Designed experiments, in my opinion, are where data science will make its next big leap. Following NBC’s model, a designed experiment can be used to test insights gleaned from your big data analysis (sentiment scores, social media analysis, or what-have-you) and prove or disprove your recommended course of action. Testing your idea before you roll it out can save your company millions if you’re wrong–or, as it did with NBC, it can encourage you to push forward even more aggressively if you’re right.

No one likes to make mistakes, but failing quickly (hopefully before you’ve made a big investment) is a key to innovation. The “scientist” in the data scientist title indicates a willingness to use the scientific method, including hypothesis formation and testing. We owe it to our clients to do what we can to give them reliable answers. That includes working with them to test the results. Designed experiments are the idea behind A/B testing, but they go a lot further. A well-designed experiment gives you a ton of information with the smallest possible investment.
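To make the A/B testing idea concrete, here’s a minimal sketch of the statistics behind the simplest designed experiment: comparing two proportions with a z-test. The numbers below are entirely hypothetical (they are not NBC’s actual panel data), and the helper function name is my own; in practice you’d likely reach for a library like statsmodels rather than roll your own.

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference in proportions (classic A/B test)."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pool the two samples to estimate the shared proportion under the null
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical example: 420 of 10,000 panelists watched in time slot A,
# 510 of 10,000 watched in time slot B. Is the difference real or noise?
z, p = two_proportion_ztest(420, 10_000, 510, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these made-up numbers the difference is statistically significant, which is exactly the kind of evidence a designed experiment buys you: a straight answer to “did the change matter?” instead of an argument about conflicting dashboards.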

For more about designed experiments, Wikipedia’s page is a solid introduction, and its “more information” links are good. If you have a background in statistics, this book is the gold standard.

About Melinda Thielbar

Melinda Thielbar is a co-founder of Research Triangle Analysts, Ph.D. statistician, spinner of fine yarn, martial artist, fraud analyst, and fiction writer. In other words, she's a polymath. Follow Melinda on Twitter @mthielbar, or join the Research Triangle Analysts group on G+ to join the conversation about data science.

