Gerrymandering & Music Test Samples

Wikipedia says, “Gerrymandering is the practice of setting boundaries of electoral districts to favor specific political interests within legislative bodies.” The term goes back to an 1812 redistricting map in Massachusetts signed by Governor Elbridge Gerry. Political scientists describe gerrymandering as letting candidates choose their voters, rather than voters choosing their candidates.

When established stations use music montages to screen research samples, the reasoning is that the station only wants to consider the music opinions of people who like the mix it plays. So, they use montages to exclude everyone else within the station’s cume. Like gerrymandering, these montage-screened samples use songs to pick listeners, rather than allowing listeners to pick songs.

In practice, such montages screen out a small-to-moderate percentage of the station’s core and a large share of its P2’s and P3’s. The resulting music test data are generally pleasing to work with, showing higher average scores and easier common-threading across the list. While these are good things for programmers wearing too many hats, we’re less certain they’re good for listening levels.

  • Screening out any of the station’s existing P1’s risks building a playlist that is less friendly to those specific P1 listeners, who may end up listening more to another station.
  • Ignoring the opinions of the wider group of the station’s P2’s and P3’s reduces the impression of variety on the station, resulting in a narrower, same-sounding playlist … and fewer listening occasions over time.
  • While it’s prudent to exclude some cross-cuming P1’s from music test samples (the stray Urban P1 who shows up as a P3 to a Country station, for example), we think it’s wisest to allow a realistic sample of a station’s P2’s and P3’s.

We DO support the use of montages when screening a test for a new station without an established cume or a station in the midst of repositioning – with two caveats:

  • There need to be at least two functionally identical montages (with no titles in common), randomized across respondents; a brief sketch of this assignment follows the list.
  • When montages are used for ongoing music testing, they need to be updated regularly to ensure they remain in step with the desired programming archetype.
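
To make the first caveat concrete, here is a minimal Python sketch of the setup we have in mind: it checks that two montages share no titles and randomly assigns each respondent to one of them. The title lists and respondent IDs are hypothetical placeholders, not real test content.

```python
import random

# Hypothetical, functionally identical montages -- placeholder titles only.
MONTAGE_A = ["Title A1", "Title A2", "Title A3", "Title A4", "Title A5"]
MONTAGE_B = ["Title B1", "Title B2", "Title B3", "Title B4", "Title B5"]

# The two montages must have no titles in common.
assert not set(MONTAGE_A) & set(MONTAGE_B), "montages share a title"

def assign_montages(respondent_ids):
    """Randomly assign each respondent to montage 'A' or 'B'."""
    return {rid: random.choice(("A", "B")) for rid in respondent_ids}

if __name__ == "__main__":
    print(assign_montages(["r001", "r002", "r003", "r004"]))
```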

Other best practices for music test samples:

  • Single-gender samples make sense when the targeted AQH composition of the station leans more than two-thirds toward one gender.
  • Below the two-thirds mark, there may be good arguments for having at least a single cell of the non-dominant gender, perhaps 25%-33% of the sample. The non-dominant gender shouldn’t be able to “vote” songs onto the station, but the additional information allows you to better manage the songs on your playlist.
  • If your station is healthy, you’re wise to specify 40%-60% of your sample as your station’s P1’s, but it’s important to have a reliable breakout of “outer cumers” (those P2’s and P3’s you’re hoping will become P1’s in the future).
  • Use cume duplication numbers to estimate the levels of competing-station P1’s in your cume. Remember that cume duplication showing that 40% of your cume listens to one of your competitors does not mean 40% of your cume is P1 to that competitor: among cume shared between two stations, in general, about one quarter is P1 to one station, another quarter is P1 to the other, and the remaining half is scattered among P1’s of many other stations. The sketch after this list walks through that arithmetic.
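
As a quick back-of-the-envelope illustration of that rule of thumb, here is a minimal Python sketch. The cume figure and duplication percentage are hypothetical, and the 25/25/50 split is simply the heuristic above, not a measured result.

```python
def split_shared_cume(total_cume, duplication_pct):
    """Apply the rough 25/25/50 rule of thumb to cume shared with one competitor."""
    shared = total_cume * duplication_pct      # listeners you share with that competitor
    return {
        "p1_to_competitor": shared * 0.25,     # ~one quarter is P1 to the competitor
        "p1_to_us": shared * 0.25,             # ~one quarter is P1 to our station
        "p1_elsewhere": shared * 0.50,         # ~half is P1 to other stations entirely
    }

# Hypothetical example: 200,000 weekly cume with 40% duplication against a competitor.
estimate = split_shared_cume(200_000, 0.40)
for group, listeners in estimate.items():
    print(f"{group}: {listeners:,.0f}")
# The competitor's P1's work out to ~20,000 listeners -- about 10% of the
# total cume, not the 40% the raw duplication number might suggest.
```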

Music testing isn’t about getting high test scores. It’s about using the available science to artfully construct a playlist and manage music scheduling so that one programming stream keeps the widest-possible portion of listeners satisfied.