What Research Respondents Can Tell You About Songs
What can respondents tell you about songs? Keep in mind, they’re respondents – by definition they’ll respond to anything you ask them. People who find a question irritating, meaningless, or confusing often end up becoming non-respondents. But those who keep giving answers to questions, even when those answers aren’t very meaningful, are the backbone of market research the world over.
We believe that music testing is best when it keeps respondents out of their heads – when they’re reacting without consciously considering their choices. In the car, listeners aren’t making reasoned choices concerning whether to turn a song up or switch to another station.
Good song? Her thumb glides over the thumbwheel on one side and gooses the volume. Bad song? Her other thumb nudges the switch on the other side of the steering wheel and the station is changed. No analysis.
To keep her in that part of her brain, keep the interview simple. Our respondents face a scale of six choices for each song. Except for the unfamiliar option (we wouldn’t ask people to rate a song they don’t know), every choice is tied to an action.
- It’s one of my FAVORITES – turn it up and give me more!
- LIKE the song – I’ll always listen.
- I’LL STICK AROUND – won’t turn it up and won’t turn it off.
- TIRED – they’ve played it too much and I change the station when I hear it.
- NEVER LIKED IT – I didn’t like it the first time and I still change the station when I hear it.
- NEVER HEARD the song before now.
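As a sketch, the scale above could be encoded like this. The enum names and comments paraphrase the six choices; the identifiers themselves are illustrative assumptions, not the researchers’ actual coding scheme:

```python
from enum import Enum

class HookResponse(Enum):
    """The six choices a respondent sees for each hook (illustrative names)."""
    FAVORITE = "favorite"        # turn it up and give me more
    LIKE = "like"                # I'll always listen
    STICK_AROUND = "stick"       # won't turn it up, won't turn it off
    TIRED = "tired"              # played too much; I change the station
    NEVER_LIKED = "never_liked"  # disliked it from the first listen
    UNFAMILIAR = "unfamiliar"    # never heard the song before now

# Every choice except UNFAMILIAR describes an action the listener takes.
ACTION_CHOICES = {r for r in HookResponse if r is not HookResponse.UNFAMILIAR}
```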
We know that it takes each respondent a few songs to get comfortable with the scale, so hooks are randomized to negate that placement bias. Once they get the hang of it, they fly through the interview – and that’s the way we think it should be.
Sure, you could ask whether songs fit on your station. They’ll give you an answer, because they’re respondents. But they really don’t know whether the song fits – that’s your job. Respondents know what songs and types of songs they’ve heard on your station, so when you ask them about “fit,” they’ll respond with what you’ve taught them. Act on that information and you create a tautology, reinforcing the too-narrow, same-sounding, repetitive playlists that have given radio a black eye for decades.
Worst of all, asking additional questions about fit pulls respondents back into their heads to think critically about an issue they’ve never considered before. And knowing those extra questions are lurking after each hook means they’re now weighing concerns that never cross their minds when they’re tooling down the highway at 60 miles per hour.
Or you could ask whether they perceive a song to be going up or down the chart. We believe that if they really think it’s “played out,” the song will have a high burn score (and a savvy programmer will pull back on exposure). If the song is really increasing in popularity, its scores will improve week over week. Again, respondents will answer the question because they’re respondents, but each additional answer means they’re trying to think more like a program director and less like a listener.
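The burn and trend checks above can be sketched in a few lines. One common industry convention defines burn as the share of respondents familiar with a song who say they’re tired of it; that formula, and the simple week-over-week trend test, are assumptions here, not necessarily this firm’s exact calculations:

```python
def burn_score(responses):
    """Burn: fraction of familiar respondents who chose 'tired'.
    `responses` holds one string per respondent, e.g. 'favorite',
    'like', 'stick', 'tired', 'never_liked', or 'unfamiliar'."""
    familiar = [r for r in responses if r != "unfamiliar"]
    if not familiar:
        return 0.0  # nobody knows the song, so burn is undefined; report zero
    return familiar.count("tired") / len(familiar)

def is_trending_up(weekly_scores, weeks=3):
    """True if a song's average score improved in each of the last
    `weeks` week-over-week comparisons -- a crude proxy for a song
    that is genuinely gaining popularity."""
    recent = weekly_scores[-(weeks + 1):]
    return len(recent) == weeks + 1 and all(
        later > earlier for earlier, later in zip(recent, recent[1:])
    )
```

A savvy programmer would read a high `burn_score` as the cue to pull back on exposure, and steadily improving weekly scores as the popularity signal, without ever asking respondents to play chart analyst.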
We’d prefer respondents not think about our interview very much. Maybe you’ve heard the effect in a focus group, as listeners start rationalizing their behavior when they talk about radio stations for an hour. Or you’ve listened to a friend justify a new car purchase by citing its great safety record or tremendous gas mileage – when you know the decision was really driven by how the car makes them feel.
Valuable data will tell you which songs are familiar and which songs people really like. From there, it’s up to expert programmers to make decisions using experience and intuition. Online services like Pandora do the best they can with algorithms, but human-curated playlists and schedules, augmented by actionable information from consumers, have created magic for many stations for many years.