Scorecard to Evaluate SEO Agencies: Template with Weights + How to Use It

When comparing 2–5 proposals, the risk isn’t “choosing expensive vs. cheap.” The risk is choosing an agency with incomparable deliverables, vague promises, or an approach that doesn’t fit your business.

A scorecard helps you make an objective decision: you define criteria, weights, what evidence they need to show, and how it’s scored. If you want to raise the bar, you can add semantic SEO criteria (entities, topic mapping, and internal linking QA) as an advanced layer.

💡 Want to start with concrete data before comparing?
A semantic SEO audit gives you priorities, quick wins, clusters, and an executable roadmap.

When to Use a Scorecard

A scorecard is worth the effort when:

  • You’re comparing 3+ proposals with different structures
  • The decision involves multiple stakeholders (marketing, IT, management)
  • Budget is significant and you need to justify the choice
  • You’ve been burned before by agencies that overpromised

If you’re only comparing two agencies with similar proposals, a simple pros/cons list might suffice. But for anything more complex, a weighted scorecard removes gut-feel bias.


Scorecard Template (Copy/Paste)

Criteria                            Weight   Agency A   Agency B   Agency C
Technical SEO expertise             15%      __/10      __/10      __/10
Content strategy depth              15%      __/10      __/10      __/10
Proven results (case studies)       20%      __/10      __/10      __/10
Reporting & transparency            10%      __/10      __/10      __/10
Team experience & credentials       10%      __/10      __/10      __/10
Communication & responsiveness      10%      __/10      __/10      __/10
Price vs. scope alignment           10%      __/10      __/10      __/10
Semantic SEO capabilities (bonus)   10%      __/10      __/10      __/10
WEIGHTED TOTAL                      100%

How to calculate: Multiply each criterion’s score by its weight, then sum the products.

Example: If Agency A scores 8/10 on Technical SEO expertise (15% weight), that contributes 8 × 0.15 = 1.2 points to the weighted total.
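The calculation above can be sketched in a few lines of Python. The weights match the template; the agency scores are illustrative placeholders, not real data.

```python
# Weighted-total calculation from the scorecard template.
# Weights are the template's defaults; scores below are made-up examples.

WEIGHTS = {
    "Technical SEO expertise": 0.15,
    "Content strategy depth": 0.15,
    "Proven results (case studies)": 0.20,
    "Reporting & transparency": 0.10,
    "Team experience & credentials": 0.10,
    "Communication & responsiveness": 0.10,
    "Price vs. scope alignment": 0.10,
    "Semantic SEO capabilities (bonus)": 0.10,
}

def weighted_total(scores):
    """Multiply each 0-10 score by its weight and sum the products."""
    return sum(scores[criterion] * weight for criterion, weight in WEIGHTS.items())

# Hypothetical scores for Agency A
agency_a = {
    "Technical SEO expertise": 8,
    "Content strategy depth": 7,
    "Proven results (case studies)": 9,
    "Reporting & transparency": 8,
    "Team experience & credentials": 7,
    "Communication & responsiveness": 9,
    "Price vs. scope alignment": 6,
    "Semantic SEO capabilities (bonus)": 5,
}

print(round(weighted_total(agency_a), 2))
```

The same logic works in a spreadsheet with a single SUMPRODUCT over the weight and score columns.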


Core Criteria (90%)

1. Technical SEO Expertise (15%)
– Evidence: Site audit sample, technical recommendations, Core Web Vitals knowledge
– Red flag: Can’t explain crawl budget or rendering issues

2. Content Strategy Depth (15%)
– Evidence: Content brief sample, keyword research methodology, topic clustering approach
– Red flag: “We’ll write X articles per month” without strategic rationale

3. Proven Results (20%)
– Evidence: Case studies with before/after data, client references, traffic screenshots
– Red flag: Generic “we increased traffic 300%” without context

4. Reporting & Transparency (10%)
– Evidence: Sample report, reporting frequency, metrics tracked
– Red flag: Proprietary metrics you can’t verify independently

5. Team Experience (10%)
– Evidence: Team bios, certifications, years in SEO, industry experience
– Red flag: Junior team with senior pricing

6. Communication (10%)
– Evidence: Response time during proposal phase, dedicated point of contact
– Red flag: Slow responses before they’ve even won the contract

7. Price vs. Scope (10%)
– Evidence: Detailed scope document, clear deliverables, payment terms
– Red flag: Vague “monthly retainer” without specific outputs


How to Score: Evidence to Request

For each criterion, ask specific questions:

Technical SEO

  • “Walk me through how you’d approach a site migration”
  • “What’s your process for JavaScript SEO?”
  • “Show me a technical audit you’ve done (anonymized)”

Content Strategy

  • “How do you decide what content to create?”
  • “Show me a content brief template”
  • “How do you handle content in multiple languages?”

Proven Results

  • “Can I speak with a current client in my industry?”
  • “Show me traffic charts with timeline of your work”
  • “What happened to rankings after an algorithm update?”

Scoring Guide

  • 9-10: Exceeds expectations, clear evidence, proactive insights
  • 7-8: Meets expectations, adequate evidence, solid responses
  • 5-6: Partially meets, some gaps, acceptable but not impressive
  • 3-4: Below expectations, weak evidence, concerning gaps
  • 1-2: Significantly below, red flags, likely disqualify

Semantic SEO Bonus Criteria

If you want to evaluate cutting-edge capabilities, add these:

Entity Optimization (ask about):
– Do they mention entities, not just keywords?
– Can they show entity analysis of competitor content?
– Do they understand Knowledge Graph optimization?

Topic Architecture:
– Do they create topic clusters with clear pillar/support relationships?
– Is internal linking strategic or just “add some links”?
– Can they show a topic map for a client site?

Future-Readiness:
– What’s their approach to AI search (GEO/LLMO)?
– How does their strategy prepare for answer engines?
– Are they tracking AI Overviews or similar features?

Scoring for Semantic SEO:
9-10: Demonstrates deep entity/semantic understanding with examples
7-8: Mentions semantic concepts, has some implementation experience
5-6: Aware of semantic SEO but limited practical experience
1-4: Still keyword-focused, no entity or topic cluster methodology


Common Mistakes When Comparing Agencies

Choosing based on price alone
The cheapest agency often delivers the least. Calculate cost per expected outcome, not just monthly fee.

Ignoring cultural fit
An agency might be technically excellent but terrible at communication. Both matter.

Not checking references
Case studies are curated. Ask to speak with actual clients—preferably ones they don’t suggest.

Comparing incompatible proposals
If one agency offers “10 articles” and another offers “topic cluster development,” you’re not comparing the same thing. Normalize before scoring.

Skipping the technical deep-dive
Ask uncomfortable questions. A good agency welcomes scrutiny; a weak one gets defensive.


Quick Decision Framework

After scoring, you’ll likely have one of these situations:

Clear winner (15%+ higher total): Move forward with confidence.

Two close options (within 5%): Schedule follow-up calls, ask harder questions, check references again.

All options underwhelming: Consider expanding your search or adjusting expectations/budget.

One much cheaper but lower score: Calculate if the savings justify the capability gap.
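The thresholds above (a 15%+ lead means a clear winner, within 5% means too close to call) can be applied mechanically. A minimal sketch, assuming the weighted totals have already been computed and using the runner-up as the baseline for the relative gap:

```python
# Decision framework from the article: classify scorecard outcomes by the
# relative gap between the top two weighted totals. Totals below are hypothetical.

def decision(totals):
    """Return a rough verdict based on the gap between the top two totals."""
    ranked = sorted(totals.values(), reverse=True)
    best, runner_up = ranked[0], ranked[1]
    gap = (best - runner_up) / runner_up  # relative lead over the runner-up
    if gap >= 0.15:
        return "clear winner"
    if gap <= 0.05:
        return "two close options"
    return "follow up before deciding"

print(decision({"Agency A": 7.55, "Agency B": 6.2, "Agency C": 5.9}))
```

Treat the output as a prompt for the next step (proceed, re-interview, or expand the search), not as the decision itself.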


Next Steps

  1. Download/copy the scorecard template above
  2. Customize weights based on your priorities
  3. Send the same RFP to all agencies for comparable responses
  4. If multiple stakeholders are involved, have each score independently, then compare
  5. Make the call based on weighted totals + gut check

If you want a head start on understanding your SEO needs before talking to agencies, a semantic SEO diagnostic can give you the baseline data and priorities that make RFP conversations more productive.


This scorecard methodology is used by Pos1 clients evaluating agency partners across markets.
