The BURN Factor: How much is too much? A new way to think about Burn and interpret your music research

By Ken Benson

 

BURN may be the most misunderstood and misused tool in music research over the past 25 years. We often hear programmers debate the burn threshold: is it 25%? 30%? More or less? Frankly, what programmers really need to know is how their audience will react the next time that song plays on their station. How many listeners will turn it up, leave it on or turn it off?

 

King of Pollsters on Burn

 

Back in the early development of digital-dial music research technology, one of the world’s leading researchers, George Gallup of the Gallup Organization, made a startling proclamation: “The best research asks only one question at a time! If your goal is to find the most appealing songs to play on the radio, then it doesn’t matter why songs do or don’t test well.” Wow! In other words, Gallup believed that a song’s negative qualities, whether burn, unfamiliarity, indifference, dislike, or hate, would be reflected in its overall appeal score. So why ask? Why confuse your respondents with four or five separate questions within a seven-second window, especially during auditorium testing, where songs are racing by them at lightning speed?

 

Want-to-Hear

 

In Hollywood, the movie studios measure ONLY what we “want to see.” Either we have an interest in seeing a movie or we don’t. But in radio we measure all kinds of things that don’t necessarily correlate with what we really NEED to know: what do we “want to hear”? If you buy into the “want-to-hear” logic, perhaps the best thing to do is simply ask respondents how often they want to hear each song on the radio: more often, often, sometimes, rarely, or never. This eliminates burn altogether and makes the preference score far more relevant, namely: do we play it or not, and how much? But that may be too radical a concept to explore this time.
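
Still, for the curious, tabulating a want-to-hear question would be trivial: one question, one weighted score. Here is a minimal Python sketch; the response weights are our illustrative assumption, not a tested scale:

```python
# Illustrative weights for a single "want-to-hear" question
# (hypothetical scale; calibrate against your own format).
WANT_TO_HEAR = {"more often": 2, "often": 1, "sometimes": 0,
                "rarely": -1, "never": -2}

def want_to_hear_score(responses: list[str]) -> float:
    """Mean weighted response: positive = play it more, negative = play it less."""
    return sum(WANT_TO_HEAR[r] for r in responses) / len(responses)

sample = ["more often", "often", "often", "sometimes", "never"]
print(want_to_hear_score(sample))  # 0.4 -> a modest appetite for more spins
```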

 

The Process

 

Most music researchers ask respondents a minimum of three questions after each callout or auditorium hook is played:

 

1. “Are you familiar with that song?”
2. If the respondent says “yes, it’s familiar,” they are then asked to give an appeal score. Appeal scores are generally on a 1-to-5 scale: a 1 means the respondent “strongly dislikes or hates the song,” while a 5 means “they love it” or “it’s one of their favorites.”
3. The final question is “Are you tired of that song… yes or no?”
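
For stations that tabulate their own data, that three-question flow maps onto a simple record per respondent per hook. A minimal Python sketch (the field names and skip logic here are ours, not any vendor’s format):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HookResponse:
    """One respondent's answers to the standard three callout questions."""
    familiar: bool                 # Q1: "Are you familiar with that song?"
    appeal: Optional[int] = None   # Q2: 1-5 appeal, asked only if familiar
    tired: Optional[bool] = None   # Q3: "Are you tired of that song?"

def score_song(familiar: bool, appeal: Optional[int] = None,
               tired: Optional[bool] = None) -> HookResponse:
    # Unfamiliar respondents skip the appeal and burn questions entirely.
    if not familiar:
        return HookResponse(familiar=False)
    if appeal not in range(1, 6):
        raise ValueError("appeal must be on the 1-to-5 scale")
    return HookResponse(familiar=True, appeal=appeal, tired=tired)

# Familiar, scored it a 5, yet tired of it: the exact combination
# behind the dilemma discussed below.
print(score_song(familiar=True, appeal=5, tired=True))
```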

 

The Dilemma – A song tests Top 10 but is 40% burned

A respondent just scored “Song X.” They were familiar with it, scored it a “5 – one of my favorites,” and then, when asked if they were tired of it, they said “Yes.” How can that be?

 

Many programmers equate a respondent answering “Yes, I’m tired of the song” with not wanting to hear it any more, and they sharply reduce a song’s rotation or pull it off the air even if it’s testing in the Top 10. We believe this is a big mistake.

 

Burn vs. Appeal

 

If a song is testing in the top 10 week after week in callout, does burn really matter? BURN becomes an issue when the overall appeal score begins to trend downward. When reviewing the trending in your callout, you can see a direct correlation between BURN and APPEAL: as the BURN score increases, the overall APPEAL score decreases. A downward trend in appeal (for true hit product) is the best indicator for reducing a song’s rotation, and when a song’s appeal score finally drops below minimum acceptable levels, that is the point at which we recommend pulling it from current rotation.
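
Put into rule form, that guidance might look like the sketch below. The three-week decline test and the 3.5 appeal floor are illustrative assumptions; every station sets its own “minimum acceptable” level:

```python
def rotation_advice(appeal_trend: list[float], min_acceptable: float = 3.5) -> str:
    """Advise on a current based on its weekly overall-appeal trend
    (scores listed oldest to newest on the 1-to-5 scale)."""
    if appeal_trend[-1] < min_acceptable:
        return "pull from current rotation"
    # A sustained downward trend (here, three consecutive weekly
    # declines) is the signal to ease rotation, not the burn score alone.
    last_four = appeal_trend[-4:]
    declining = len(appeal_trend) >= 4 and all(
        earlier > later for earlier, later in zip(last_four, last_four[1:])
    )
    return "reduce rotation" if declining else "hold rotation"

print(rotation_advice([4.4, 4.3, 4.1, 3.9]))  # reduce rotation
print(rotation_advice([3.8, 3.6, 3.4]))       # pull from current rotation
```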

 

Retired consultant Rik Delisle, Director of Alan Burns & Associates – Europe, says: “I try and approach ‘burn’ scores as positive results and not negative. That may sound like a head game. In reality, though, if 30% are saying they have heard the song enough, 70% are saying they haven’t heard it enough yet. That’s good enough for me to keep playing it as long as it tests.”

 

“As long as it tests” means as long as the song’s overall appeal score still tests at playable levels.

 

Actionable Burn

 

If you are not ready to do away with burn scores just yet, then consider this: BURN is a function of current music. Library songs don’t burn; they either test well enough to be played or they don’t, and they should be rotated based on overall appeal. Here’s how we would recommend modifying the “burn” question for your callout and/or online research. After respondents score a song based on its appeal, we would ask the burn question as follows:

 

Are you:

a) Not at all tired of hearing the song,
b) Somewhat tired of the song, or
c) Very tired of hearing the song.

 

We believe respondents who choose response c), “very tired of hearing the song on the radio,” give the only truly actionable response. Burn scores in the 30%+ range for “very tired” responses, along with a decreasing appeal-score trend, would be reason to consider a lesser rotation.
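
As a sketch, that recommendation reduces to a two-part check: the 30% threshold applies only to “very tired” answers, and it only fires alongside a declining appeal trend (the function and its name are ours, for illustration):

```python
def is_actionable_burn(very_tired_pct: float,
                       appeal_trend: list[float],
                       threshold: float = 30.0) -> bool:
    """Flag a song for lesser rotation only when BOTH conditions hold:
    'very tired' responses at 30%+ AND a declining overall-appeal trend.
    'Somewhat tired' answers are deliberately ignored."""
    appeal_declining = len(appeal_trend) >= 2 and appeal_trend[-1] < appeal_trend[0]
    return very_tired_pct >= threshold and appeal_declining

print(is_actionable_burn(5.0, [4.2, 4.3, 4.4]))   # False: keep it in power
print(is_actionable_burn(34.0, [4.4, 4.1, 3.9]))  # True: consider less rotation
```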

 

Here are a few examples from our own clients. The number-one testing song for a client recently was “Lose Yourself” by Eminem. It scored 92% familiar, with a 4.37 overall appeal score, and it was also the most tired-of song of the week at 37%. But when we examined the burn score further, we found that of that 37%, 32% said they were only “somewhat tired” of it, leaving roughly 5% who were “very tired,” far below the actionable threshold. So our recommendation was to continue to play the song in power rotation.

 

Beyond Burn and Appeal

 

Listeners love to hear their favorite songs. Programmers are constantly looking for the most popular and passionate songs to play for their listeners in order to maximize Time Spent Listening. Research is an excellent tool for determining which songs your listeners love, which ones they hate, and which ones they will tolerate. The most successful stations play songs that their audience is passionate about.

 

Most weekly callout reports provide an enormous amount of data for each song, well beyond the overall appeal, familiarity and burn scores. For many years, both as a radio programmer and as Vice President of Music Programming for MTV in New York, I would review the passion/favorite scores: the respondents who voted 5 and claimed these were their favorite songs. The songs with the highest 5 scores are the songs that get the most consideration for power rotation.

 

Conversely, I would also look at high hate scores. It is not unusual, especially with novelty songs, to see both high favorite scores and high hate scores.
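
Surfacing that pattern from raw 1-to-5 scores is straightforward. A minimal sketch (illustrative only; most callout reports already break these columns out for you):

```python
from collections import Counter

def passion_and_hate(scores: list[int]) -> tuple[float, float]:
    """Return (% of 5 'favorite' votes, % of 1 'hate' votes) among familiars."""
    counts = Counter(scores)
    return 100 * counts[5] / len(scores), 100 * counts[1] / len(scores)

# A classic novelty-song pattern: lots of 5s AND lots of 1s at once.
novelty = [5, 5, 5, 1, 1, 5, 1, 3, 5, 1]
passion, hate = passion_and_hate(novelty)
print(f"passion {passion:.0f}%, hate {hate:.0f}%")  # passion 50%, hate 40%
```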

 

Trending

 

One of the biggest complaints programmers have about callout research is the dramatic changes that can occur in scores from week to week. One week a song may rank in the top 5, the following week it ranks number 20, then the next week it is back in the Top 10. As a result, many programmers move songs in and out of power rotation week to week. Do you really believe your audience’s feelings about music swing back and forth like a pendulum? Of course not. Utilize the trending page in your weekly music research.

 

If a song has been growing in popularity, overall appeal, and 5 scores (passion/favorite), you should probably disregard one week of poor testing. Likewise, when a song has been testing poorly in callout week after week and then out of nowhere it is Top 10, you should probably wait for another week of research before moving it to a power rotation.
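
A simple way to enforce that discipline is to act on a short moving average rather than on any single week’s score. A sketch (the three-week window is our assumption; use whatever your sample sizes support):

```python
def smoothed(weekly_scores: list[float], window: int = 3) -> float:
    """Average the most recent `window` weeks to damp one-week anomalies."""
    recent = weekly_scores[-window:]
    return sum(recent) / len(recent)

weekly_appeal = [4.1, 4.2, 3.4, 4.2]      # week three looks like an outlier
print(round(smoothed(weekly_appeal), 2))  # 3.93: no reason to panic yet
```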

 

A Song’s Life Cycle:

Like any other product, every hit song has a four-stage life cycle.

Introduction – when a song first goes into rotation and/or begins to be exposed in a marketplace.

Growth – listeners decide they like it and exposure increases accordingly.

Maturity – it becomes a favorite of the audience and exposure is at its peak.

Decline – Audience interest begins to wane and exposure is reduced.

 

Remember that every song’s life cycle is different and unique. We can all recall a hit song that went from the introduction to the maturity phase within a couple of days or even a few spins, such as Adele’s “Hello.” There are also many songs that never make it past the introduction phase. In the maturity stage, some songs may be in power rotation for only a week or two, while others may stay there for 10 weeks or more. And some hit songs never make it to recurrent, while others will still be played on the radio 40 years from now.

 

Research Limitations

 

While callout and auditorium music research are incredibly valuable tools, they don’t provide all the answers. If they did, we wouldn’t need programmers. Great stations are a combination of art and science. Research is the science. Strategic and perceptual studies help identify market opportunities, strengths and weaknesses. Tactical research, like callout and library music testing, helps you execute your strategy. Unfortunately, it is the art that is missing from many stations around the world. The art is what the program directors, the presenters and the imaging producers bring to a radio station. They are the ones responsible for creating the magic over the airwaves and building a one-on-one emotional connection with each listener.

 

Great programmers, through years of experience and intuition, ultimately rely on their instincts to make their programming decisions.

 

About us

P1 Media Group prides itself on going beyond the ordinary to give our partners the services they need to achieve their goals. These services include, but are not limited to, Strategy and Planning, Music Research, and Talent Development. Our collaborative approach is both personable and personalized. We’re more concerned with doing what’s best for you than with following “best practices” or telling you only what you want to hear. We’re real people with real answers, united with our partners to propel radio forward.