How to Write a Competitive Intelligence Report (Template + Examples)

Most competitive intelligence reports die in a shared drive. Someone compiles 40 pages of data — competitor pricing, feature lists, LinkedIn posts — and sends it to the client. The client skims the executive summary, clicks through three slides, and never opens it again.

The problem isn't the data. It's the structure. Raw data without analysis is just noise. A CI report isn't a research dump — it's a decision-support tool. Every section should answer a question the client is actually trying to answer.

This guide covers what goes in a competitive intelligence report, how to structure each section, and what the output looks like when done right — with B2B SaaS examples for each section. If you're still building these reports manually, also read why most consultants are still doing competitive analysis like it's 2010 — because the process matters as much as the format.


Why Most CI Reports Get Ignored

Before the template, the honest diagnosis: most CI reports are ignored because they answer the wrong question. They answer "what data can we find?" instead of "what should our client do differently next quarter?"

Three patterns kill CI reports:

  • Data dumps without synthesis. Forty bullet points about competitor features with no interpretation of what those features mean for your client's positioning.
  • No recommendations. A thorough analysis that ends with "we'll continue monitoring." That's not analysis — it's note-taking.
  • Stale data presented as current. A report built on a quarterly research sprint that's already three months out of date when it lands. Competitors have already moved.

A good CI report is shorter than you think, opinionated, and built around decisions — not just observations. The format below is designed to force that discipline.


The 5 Sections Every CI Report Needs

This is the core template. Not every section needs to be exhaustive — a tight 8-page report that's actionable is better than a 40-page deck that nobody reads. Use each section to answer one question, then stop.

Market Overview

Question it answers: What is the shape of the market right now, and where is it moving?

Cover: total addressable market, dominant segments, 2–3 macro trends affecting the category, and how the competitive dynamic is shifting. Keep this tight — 1 page max. The goal is shared context, not a market sizing exercise.

What to avoid: long trend lists that don't connect to the client's position. If a trend isn't going to change what your client should do, cut it.

Example — B2B Project Management SaaS
"The project management software market is consolidating around two modes: all-in-one platforms (Asana, Monday.com, Notion) competing on breadth, and vertical-specific tools competing on depth for single industries. Enterprise deals (>500 seats) increasingly require SSO, audit logging, and SCIM provisioning as table stakes — not differentiators. Mid-market (<200 seats) is still winnable on UX and time-to-value. AI-assisted task creation and automated status updates are being commoditized across all tiers over the next 12–18 months."

Competitor Profiles

Question it answers: Who are the real threats, and what are they actually doing?

Cover 3–5 direct competitors maximum. For each: positioning statement (in their own words from their website), pricing tier structure, key product capabilities, recent moves (last 90 days — new features, pricing changes, leadership changes, funding), and ICP (ideal customer profile). One paragraph per competitor, plus a quick-reference table.

Recent moves matter most. A competitor's static pricing page is less valuable than what they changed last month. If you're pulling this data manually, you're already behind — automated monitoring catches these changes as they happen.

Example — Competitor Profile: Linear (for a dev tools client)
"Linear positions as 'the issue tracker for high-performance teams' — deliberately excluding enterprise and casual users. Pricing: Free (10 members), Standard $8/user/month, Plus $14/user/month. Recent moves: launched Linear Asks (customer feedback integration) in March 2026, raised Series C at $400M valuation February 2026. ICP: seed to Series B engineering teams of 5–50, technical founders who care about tool speed. Key threat: they are expanding from issue tracking into full project management, moving toward your client's core use case without losing their positioning advantage with engineering buyers."
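The five profile fields above (positioning, pricing, recent moves, ICP, capabilities) map naturally onto a structured record, which is what makes them automatable in the first place. A minimal sketch in Python, with illustrative values drawn from the Linear example — this is a hypothetical schema for illustration, not Reconbase's actual data model:

```python
from dataclasses import dataclass

@dataclass
class CompetitorProfile:
    """One competitor, captured once per research cycle."""
    name: str
    positioning: str            # in their own words, from their website
    pricing: dict[str, float]   # tier name -> per-user monthly price
    capabilities: list[str]     # key product capabilities
    recent_moves: list[str]     # last 90 days: launches, pricing, funding
    icp: str                    # ideal customer profile

# Illustrative instance based on the Linear example above
linear = CompetitorProfile(
    name="Linear",
    positioning="the issue tracker for high-performance teams",
    pricing={"Free": 0, "Standard": 8, "Plus": 14},
    capabilities=["issue tracking", "Linear Asks feedback intake"],
    recent_moves=[
        "launched Linear Asks (March 2026)",
        "raised Series C at $400M valuation (February 2026)",
    ],
    icp="seed to Series B engineering teams of 5-50",
)
```

Keeping each cycle's snapshot in a consistent shape like this is what makes the quick-reference table, and later the change detection, a mechanical step rather than a rewrite.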

Feature & Pricing Comparison

Question it answers: Where does our client win, where do they lose, and what's the pricing context?

A comparison table across the 5–8 features that matter most to buyers in this category. Mark your client's position clearly. Add a pricing row so the relative value is visible. The table should be scannable in 30 seconds.

Focus only on features that come up in sales conversations or drive purchase decisions. Don't list every feature — list the ones buyers compare. See our competitive analysis template for a full 8-section framework that complements this report format.

Example — Feature Comparison Table (dev tools category)

Feature                        Client          Linear          Jira      Shortcut
Git integration depth          Deep            Deep            Medium    Medium
Custom workflow automation     Yes             Limited         Yes       No
SSO / SCIM                     Plus plan only  Plus plan only  Standard  Standard
Starting price (per user/mo)   $12             $8              $8.15     $8.50

SWOT / Strengths & Gaps Analysis

Question it answers: Given this competitive landscape, where is our client positioned to win — and where are they vulnerable?

A classic SWOT works if you use it to synthesize the previous three sections, not as a standalone brainstorm. Strengths and weaknesses are internal (product, pricing, brand, team). Opportunities and threats come from what the competitive landscape reveals.

The key discipline here: every item in the SWOT should be backed by evidence from the previous sections. "Strong brand recognition" means nothing without the data that supports it. "Threat from enterprise expansion by Competitor X" should reference the funding round or product launch you documented in the profiles section.

Example — SWOT Snippet (dev tools client)
"Strength: Superior Git integration depth is a genuine differentiator in deals where engineering managers are the economic buyer — confirmed by 7/10 win-call transcripts citing it as a deciding factor.
Gap: SSO only available on Plus tier ($14/mo) vs. Jira offering SSO on Standard ($8.15/mo) — creates a friction point in mid-market enterprise deals where IT mandates SSO.
Threat: Linear's move into project management (Asks product, March 2026) narrows the product gap for engineering-led teams, which is the client's core segment.
Opportunity: 'Automation without code' is an unoccupied position — neither Linear nor Shortcut offers no-code workflow builders, and 4 of 5 recent lost deals cited needing developer help for automations."

Strategic Recommendations

Question it answers: What should the client do about this, specifically?

This is the section most reports skip — and it's the only one clients remember. Give 3–5 concrete recommendations, each with: the action, the rationale (what in the data supports it), and the timing/owner. Vague recommendations ("invest in enterprise features") are useless. Specific ones ("move SSO to the Standard tier and reprice Plus at $18 to capture the 23% of churned accounts who cited SSO pricing as the blocker") are the whole point.

If you can't generate specific recommendations from your analysis, the analysis wasn't opinionated enough. Go back to the SWOT and sharpen it. This is also where the work you've done productizing competitive intelligence as a service pays off — clients who receive actionable recommendations renew retainers. Clients who receive data dumps do not.

Example — Recommendation (dev tools client)
"Recommendation 1: Move SSO to Standard tier.
Rationale: SSO is now table stakes at mid-market (>50 seats). Competitors Jira and Shortcut offer it on base tiers. Win-loss analysis shows SSO pricing friction in 4 of last 12 enterprise losses.
Timing: Q3 pricing change, bundled with Plus tier reprice to $18.
Owner: Product + RevOps.
Expected impact: reduces friction in 20%+ of mid-market deals where IT is a veto holder."

Common Report Mistakes That Kill Credibility

Even with a solid structure, these three mistakes show up in CI reports constantly:

Mistake 1: Data dumps without synthesis
Listing every feature a competitor has without interpreting what it means for your client. "Competitor X has 47 integrations" is useless without "and 12 of those integrations overlap with your client's top-10 customer workflows, which means the switching cost argument is weakening."

Mistake 2: No recommendations
A report that ends with "we will continue monitoring" has failed. Every CI report should close with decisions the client can make. If the data doesn't support a recommendation, say what's missing and what evidence would unlock a position — not just "more research needed."

Mistake 3: Stale data presented as current
A competitor's pricing page from 90 days ago, presented as "current pricing," is a credibility risk. Competitors move fast. Reconbase's monitoring tracked an average of 3.2 pricing or positioning changes per competitor per quarter across the B2B SaaS clients we analyzed in 2025. If your CI pipeline runs quarterly, you're missing most of them.

The stale data problem is the hardest to solve manually. It's also the one that does the most damage — because a client who catches an outdated fact in your report stops trusting the rest of it. See the full comparison of CI tools to understand which approaches keep data current vs. which require manual refresh cycles.


How to Automate the Hard Parts

The five-section structure above is straightforward to execute once. The challenge is doing it at scale — multiple clients, multiple competitors, monthly or quarterly refresh cycles — without it consuming your entire practice.

The parts that are actually hard to automate:

  • Continuous data freshness. You need to know when a competitor changes their pricing, launches a feature, or shifts their messaging — not in 90 days, but when it happens. That requires monitoring, not just periodic research.
  • Structured competitor profiles. Pulling the five key fields for each competitor (positioning, pricing, recent moves, ICP, capabilities) from web sources and keeping them current is hours of work per refresh cycle if done manually.
  • Change detection. Knowing that Competitor X changed their homepage headline last week is only possible if you're tracking it continuously.

Reconbase handles all three automatically. You set up the competitors you want to monitor, and the system runs daily research cycles — Brave Search queries, AI-extracted structured data, and change detection against the previous cycle. The output maps directly to the Competitor Profiles section of this template: positioning, pricing, recent moves, and ICP, refreshed daily.
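At its core, "change detection against the previous cycle" is a field-by-field diff between two snapshots of the same competitor. A minimal illustration — the field names and dict schema here are assumptions for the sketch, not Reconbase's actual pipeline:

```python
def detect_changes(previous: dict, current: dict) -> list[str]:
    """Return a human-readable list of profile fields that changed
    between two research cycles."""
    changes = []
    for field in ("positioning", "pricing", "recent_moves", "icp"):
        old, new = previous.get(field), current.get(field)
        if old != new:
            changes.append(f"{field}: {old!r} -> {new!r}")
    return changes

# Two hypothetical snapshots of the same competitor, one cycle apart
prev = {"positioning": "the issue tracker for high-performance teams",
        "pricing": {"Standard": 8, "Plus": 14}}
curr = {"positioning": "the issue tracker for high-performance teams",
        "pricing": {"Standard": 8, "Plus": 16}}

print(detect_changes(prev, curr))  # flags the Plus tier price change only
```

The hard part in practice isn't the diff — it's producing clean, comparable snapshots on every cycle, which is exactly what structured extraction buys you.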

For consultants running CI as a retainer product, this is the leverage point. Automated monitoring handles the data collection. You handle the interpretation and recommendations — the sections clients actually pay for. The result is a retainer that takes 2–3 hours per month to produce instead of 15–20.

The combination that works: Reconbase for continuous monitoring and structured data collection, your judgment for the SWOT synthesis and strategic recommendations. The report template above works with either approach — but with automated data feeding Section 2, you spend your time on Sections 4 and 5, where the actual value is.



Generate CI reports without the manual research.

Reconbase monitors your competitors daily — pricing changes, feature launches, messaging shifts — and delivers structured profiles ready to drop into your reports. The 40 hours you'd spend on data collection, automated.

Generate Your First Report →
Not sure yet? See a sample report first →