Updated January 11, 2022
Want a balanced and actionable way to know if your content is doing what it has to do?
Create a content scorecard.
The content scorecard calculates a standardized score against a benchmark, which is determined either by the performance of content in your industry or by your company’s own content standards.
It combines both qualitative and quantitative assessment. Quantitative scores are based on performance metrics like views, engagement, and SEO ranking. Qualitative scores are derived from predefined criteria, such as readability, accuracy, and brand voice consistency.
Let’s start creating a content scorecard template that you can tailor to your situation.
Set your quantitative success indicators
First, you have to measure what matters. What is the job of that piece of content?
For example, an index or landing page is rarely designed to be the ultimate destination. If readers spend too much time on that type of page, that might not be a good sign. On the other hand, time spent on a detailed article or white paper is a positive reflection of user engagement. Be specific about the goals of your content when deciding what to measure.
What should you measure, given the content’s intent? Here are some ideas:
After you have defined your quantitative criteria, you need to define benchmarks. What are you measuring against? Industry standards? Internal standards? A bit of both?
A good starting point for researching common standards of user behavior is Nielsen Norman Group. If you want to focus on your industry, look at what marketing teams in your industry publish, or search for something like “best user experience web metrics for [your industry].”
Here is a sample benchmark key. The left column identifies the metric, while the top row indicates the resulting score on a scale of 1 to 5. Each row lists the parameters for the metric to score in its column.
Content Quantitative Sample Scores, 1–5

| Metric | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| Pageviews / total section | < 2% | 2–3% | 3–4% | 4–5% | > 5% |
| Return visitors | < 20% | 20–30% | 30–40% | 40–50% | > 50% |
| Trend in pageviews | Decrease > 50% | Decrease | Static | Increase | Increase > 50% |
| Pageviews / visit | < 1.2 | 1.2–1.8 | 1.9–2.1 | 2.2–2.8 | > 2.8 |
| Time / page | < 20 seconds | 20–40 seconds | 40–60 seconds | 60–120 seconds | > 120 seconds |
| Bounce rate | > 75% | 65–75% | 35–65% | 25–35% | < 25% |
| Links | 0 | 1–5 | 5–10 | 10–15 | > 15 |
| SEO | < 35% | 35–45% | 45–55% | 55–65% | > 65% |
Values must be determined against industry or company benchmarks.
Using a scale of 1 to 5 makes it easier to analyze content that can have different goals and still identify the good, the bad, and the ugly. Your scorecard may look different depending on the benchmark you choose.
How to build the spreadsheets
You will create two quantitative spreadsheets.
Label the first sheet “Quantitative Benchmark.” Create a table (similar to the one above) listing your key metrics and the range required to earn each score. Use it as your reference.
Label a new sheet “Quantitative Analysis.” Your first columns should be the asset URL, subject, and type. Label subsequent columns with your quantitative metrics (e.g., pageviews, returning visitors, pageview trends).
After adding details for each piece of content, add a score for each one of the respective columns.
Remember that a 1 to 5 rating is based on objective standards that you have recorded on a quantitative reference spreadsheet.
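The scoring logic above can be automated if you export your analytics data. Here is a minimal sketch, using two of the metrics from the sample benchmark table (time on page and return visitors); the function names and data layout are illustrative assumptions, not part of any particular tool.

```python
# Map a raw metric value to a 1-5 score using benchmark ranges like
# those in the sample table above. Note: "lower is better" metrics
# such as bounce rate would need their bounds reversed.

BENCHMARKS = {
    # metric: ascending thresholds for scores 2-5; below the first scores 1
    "time_on_page_sec": [20, 40, 60, 120],
    "return_visitor_pct": [20, 30, 40, 50],
}

def score(metric: str, value: float) -> int:
    """Return a 1-5 score: one point per benchmark threshold cleared."""
    points = 1
    for bound in BENCHMARKS[metric]:
        if value >= bound:
            points += 1
    return min(points, 5)

print(score("time_on_page_sec", 75))   # 75s falls in the 60-120s band -> 4
print(score("return_visitor_pct", 55)) # above 50% -> 5
```

From here, scoring a whole content inventory is just a loop over the exported rows.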
Define your qualitative analysis
It’s easy to look at your content inventory, shrug, and say, “Let’s just delete everything that didn’t get eyeballs.” But if you do, you run the risk of throwing out great content whose only flaw is that it was never discovered. Scoring your content for quality (using a different five-point scale) helps you identify valuable content that may be buried in the long tail.
In this content scorecard process, a content strategist or someone equally qualified on your team or agency analyzes the content against your goals.
TIP: Have the same person review all content to avoid any deviations in the qualitative scoring standards.
Here are some of the qualitative criteria we used:
- Consistency – Does the content fit your brand voice and style?
- Clarity and accuracy – Is the content easy to understand, accurate, and current?
- Discoverability – Does the layout of the information support the main information flows?
- Engagement – Does the content use appropriate techniques to influence or attract visitors?
- Relevance – Does the content meet the needs of all types of intended users?
To standardize the assessment, use yes/no questions. One point is earned for each yes answer; a no earns zero. The average quality score for a category is calculated by adding the yes answers, dividing by the total number of questions in the category, and multiplying by 5 to put the result on the same five-point scale.
The section below illustrates how to do this for the clarity-and-accuracy and discoverability categories. Bold indicates a yes answer.
Clarity and accuracy: Is the content easy to understand, accurate, and current?
- Is the content understandable to all types of users?
- Does it use the appropriate language?
- Is the content clearly labeled?
- Do images, video and audio meet technical standards so that they are clear?
Score: 3/4 × 5 = 3.8
Discoverability: Does the layout of information on the page support the main information flows? Is the user path to related answers and next steps clear and user-friendly?
Score: 1/5 × 5 = 1.0
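The two category scores above can be reproduced with a one-line calculation. This sketch simply encodes the yes/no answers as booleans; the function name is an illustrative assumption.

```python
def category_score(answers: list[bool]) -> float:
    """Average the yes (True) answers and scale to the five-point scale."""
    return round(sum(answers) / len(answers) * 5, 1)

# Clarity and accuracy: 3 of 4 questions answered yes
print(category_score([True, True, True, False]))          # -> 3.8
# Discoverability: 1 of 5 questions answered yes
print(category_score([True, False, False, False, False])) # -> 1.0
```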
TIP: Tailor questions in the relevancy category based on the information you have access to. For example, if the reviewer knows the audience, the question “Relevant to viewer interests” is valid. If the reviewer doesn’t know the audience, don’t ask that question. But almost any reviewer can answer if the content is current. So that would be a valid question for analysis.
How to build the spreadsheets
Create two qualitative sheets.
Label the first sheet “Qualitative Questions.”
The first columns are the asset URL, subject, and type. Then add columns for each category and its questions. Add an average formula to the cell under each category label.
Let’s illustrate this with the example above:
After the content columns, label the next column “Clarity and Accuracy,” then add a column for each of its four questions.
Then, go through each piece of content and question, entering 1 for yes and a 0 for no.
To calculate the average rating for clarity and accuracy, enter the formula `=(B5+B6+B7+B8)/4*5` in the cell under the category label to determine the score for the first piece of content (the `*5` converts the yes/no average to the five-point scale).
For simpler viewing, create a new sheet labeled “Qualitative Analysis.” Include only the content information, with the average for each category in each subsequent column.
Putting it all together
With quantitative and qualitative measurements defined, you can now create your scorecard spreadsheet.
Here’s what it looks like based on the previous example (minus specific asset URLs).
| | Article A | Article B | Article C | Article D | Article E |
|---|---|---|---|---|---|
| Average qualitative score | 3.8 | 2.2 | 3.4 | 2.6 | 2.2 |
| Average quantitative score | 2.4 | 2.2 | 2.6 | 2.4 | 2.6 |
| Recommended action | Review and revise | Delete and avoid | Review the distribution plan | Review the distribution plan | Review and revise |
Each average on the scorecard is calculated by summing the scores for each category and dividing by the total number of categories.
You now have a side-by-side comparison of the average quantitative and qualitative scores for each content URL. Here’s how to break down the numbers and then optimize your content:
- Qualitative score is higher than quantitative score: Analyze your distribution plan. Consider changing the timing, channels, or formats for this “good” content.
- Quantitative score is higher than qualitative score: Review the content to determine how to improve it. Can its quality be improved with a rewrite? What about adding data-backed research?
- Low quantitative and qualitative scores: Remove this content from circulation and adjust your content plan to avoid this type of content in the future.
- High quantitative and qualitative scores : Promote and reuse this content as much as possible. Update your content plan to recreate this type of content in the future.
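The four rules above amount to a simple decision table. Here is a minimal sketch; the 3.0 cutoff separating “low” from “high” on the 1–5 scale is an assumed threshold, and the action labels are illustrative.

```python
THRESHOLD = 3.0  # assumed midpoint of the 1-5 scale

def recommend(quant: float, qual: float) -> str:
    """Map an item's two average scores to a recommended action."""
    if quant >= THRESHOLD and qual >= THRESHOLD:
        return "Promote and reuse"           # both high
    if quant < THRESHOLD and qual < THRESHOLD:
        return "Remove from circulation"     # both low
    if qual > quant:
        return "Review the distribution plan"  # good content, weak reach
    return "Revise to improve quality"       # reach without quality

print(recommend(2.4, 3.8))  # -> Review the distribution plan
print(recommend(2.2, 2.2))  # -> Remove from circulation
```

Adjust the threshold (or use separate thresholds per axis) to match your own benchmarks.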
Of course, sometimes the gap between the quantitative and qualitative scores can indicate that the qualitative assessment itself is off the mark. Use your judgment, but at least consider the alternatives.
When should you create a content scorecard? Although it may seem like a daunting task, don’t let that stop you. Don’t wait until the next big migration. Take bite-sized pieces and make it a continuous process. Start now and optimize every quarter, then the process won’t feel too complicated.
The choice of how much and what content should be evaluated depends largely on the variety of content types and the consistency of content within the same category. You need to select enough content to see samples of topics, content types, traffic, etc.
While there’s no hard and fast science on sample size, in our experience 100 to 200 suffice. Your number will depend on:
- Total inventory size
- Consistency within a content type
- Rating frequency
Work in batches so you don’t get overwhelmed. Set a review cycle, and each quarter review, modify, remove, or reposition your content based on this process. And remember to choose content from across the performance spectrum. If you focus solely on high-performing content, you won’t be able to identify the hidden gems.
Cover photo by Joseph Kalinowski / Content Marketing Institute