Who Should be in Your Music Test Sample?
If you’re thinking about a library test, or just pondering the screener used to recruit respondents for weekly music testing, you’re probably considering who you want in the sample. You’ll want to avoid getting the demo too wide. The axiom “narrow focus yields broad results” holds true when applied wisely. A truly wide test demo makes it tougher to find consensus beyond the biggest hits tested. Also, the sample sizes typically used for music testing don’t leave enough sample to populate reliable sub-samples for more than two or three demo breakouts. In most cases, an age range of 10 to 15 years is ideal.
Single-gender samples make sense when the targeted AQH composition of the station leans more than 60% toward one gender. Even at the 60% mark, there are good arguments for keeping a stable cell of the non-dominant gender, perhaps 25%-33% of the sample. While the non-dominant gender’s scores alone shouldn’t be able to put songs on the station, the additional information lets you reduce overall exposure of a title or use sound coding to eliminate runs of songs that gender doesn’t favor.
100% cume is usually a good specification for a sample, since music testing is a TSL tool (and you can’t impact TSL of people who don’t listen to your station). However, if your station is brand new or doesn’t have enough cumers, you may want to reach outside your cume for some of the respondents. In these cases, using montages to identify the people you want to include makes great sense.
If montage screening is employed with ongoing music testing, like NuVoodoo OMR (Online Music Research) or its telephone counterpart, “callout,” montages need to be updated regularly to ensure they remain in step with the desired programming archetype. Using at least two formatically-identical montages (with no titles in common), randomized or rotated across respondents, is good practice.
Unless your station is undergoing a sharp turn in positioning, we recommend against using montages to screen out respondents who are already in your core. Sure, most P1’s will make it through well-designed montages that represent the station’s format. But screening out any existing station P1’s, perhaps because one title in a montage didn’t excite them, risks losing TSL from the portion of the P1’s those screened-out respondents would have represented.
If your station is healthy, you’re wise to specify 40%-60% of your sample as your station’s P1’s, but it’s important to have a reliable breakout of “outer cumers” (those P2’s and P3’s you’re hoping will become P1’s in the future). While it’s tempting to limit these to P1’s from two or three close competitors, you should use ratings analytics to make sure those few competitors are the right ones.
You can use cume duplication numbers to estimate the level of competing-station P1’s in your cume. Cume duplication showing that 40% of your cume listens to one of your competitors does not mean you should expect that 40% of your cume is P1 to the competitor. As a rule of thumb, among cume shared by two stations, about one quarter is P1 to one station, another quarter is P1 to the other station, and the remaining half are P1’s to other stations. For example, if WZZZ shares 40% of WAAA’s cume, it’s reasonable to guess that one quarter of that 40% (10% of WAAA’s cume) is P1 to WZZZ, and another quarter (another 10%) is P1 to WAAA.
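The rule of thumb above is simple arithmetic, and it can help to see it worked through. Here’s a minimal sketch; the function name is ours, and the 40% figure and station call letters come from the example in the text:

```python
# Rule-of-thumb split of the cume shared by two stations:
# ~1/4 is P1 to station A, ~1/4 is P1 to station B,
# and the remaining ~1/2 is P1 to other stations entirely.
def shared_cume_p1_split(shared_cume_pct: float) -> dict:
    """Estimate P1 composition of a shared-cume percentage."""
    return {
        "p1_station_a": shared_cume_pct * 0.25,
        "p1_station_b": shared_cume_pct * 0.25,
        "p1_elsewhere": shared_cume_pct * 0.50,
    }

# WZZZ shares 40% of WAAA's cume:
split = shared_cume_p1_split(40.0)
print(split)
# {'p1_station_a': 10.0, 'p1_station_b': 10.0, 'p1_elsewhere': 20.0}
```

So of the 40% of WAAA’s cume that also listens to WZZZ, only about 10 cume points are likely P1 to WZZZ, which is the number that matters when you’re deciding how many competitor P1’s your sample can realistically contain.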
You can bend the levels of competitive P1’s a bit in once- or twice-a-year library tests and over-represent the P1’s of these key competitors. To reduce the problem of “over-fishing” these P1’s in weekly music testing, you may need to reach outside your cume to allow in a few P1’s who don’t cume your station. Or you may want to rely on montages to identify the outer cumers for you, regardless of which station they say is their P1.
Music testing isn’t about getting high test scores. It’s about using the available science to artfully construct a playlist and manage music scheduling so that one programming stream keeps the widest possible portion of listeners satisfied.