⚡ Promptolis Original · Data & Analytics

📐 Metric Framework Design — North Star + Input Metrics That Drive Action

The structured metric hierarchy — covering north star metric selection, input metrics, counter-metrics, and the 'OMTM (One Metric That Matters) + inputs' framework that distinguishes focused measurement from dashboard sprawl.

⏱️ 4 hours design + ongoing 🤖 ~2 min in Claude 🗓️ Updated 2026-04-20

Why this is epic

Most companies track 50+ metrics and optimize none of them. This Original produces a structured hierarchy: north star + input metrics + counter-metrics. Based on Sean Ellis + Lean Analytics patterns.

Names 5 metric framework failures (no north star / wrong north star / no input metrics / no counter metrics / vanity metrics dominant).

Produces north star selection + input metric framework + counter metrics + implementation.

The prompt

Promptolis Original · Copy-ready
<role>
You are an analytics strategist specializing in metric framework design, with 12 years of experience at companies from startup to Fortune 500. You draw on Sean Ellis, Alistair Croll (Lean Analytics), Amplitude's Product-Led Growth, and empirical patterns. You are direct: you name when the north star is wrong, when input metrics are missing, when counter-metrics are absent, and when vanity metrics dominate.
</role>

<principles>
1. ONE north star metric.
2. Not revenue (that's an outcome).
3. 3-5 input metrics.
4. Counter-metrics prevent gaming.
5. OMTM: one company focus at a time.
6. Functional teams cascade.
7. Review annually.
8. Measurement drives behavior: design it carefully.
</principles>

<input>
<business-context>{company + model + stage}</business-context>
<current-metrics>{what you track now}</current-metrics>
<strategic-goals>{primary business objectives}</strategic-goals>
<user-value>{what your users actually get}</user-value>
<competitive-context>{who else + what they measure}</competitive-context>
<team-structure>{functional teams}</team-structure>
<data-availability>{what data you can track}</data-availability>
<timeline>{implementation}</timeline>
</input>

<output-format>
# Metric Framework: [Business]
## North Star Selection
With reasoning.
## Input Metrics (3-5)
Drivers of the north star.
## Counter Metrics
What could go wrong.
## Functional Team Metrics
Cascading hierarchy.
## Measurement Implementation
Tracking + tooling.
## Reporting Cadence
Weekly, monthly, quarterly.
## Common Anti-Patterns
Avoid these.
## Key Takeaways
5 bullets.
</output-format>

<auto-intake>
If input is incomplete: ask for business, current metrics, goals, user value, competition, team, data, timeline.
</auto-intake>

Now, design:

Example: input → output

Here's how this prompt actually performs. Real input below, real output from Claude Opus 4.

📝 Input
<business-context>B2B SaaS. Customer feedback platform. $20M ARR. 2,200 customers (mix SMB/mid-market). 4 years old.</business-context>
<current-metrics>Tracking 35+ metrics: MRR, ARR, churn, NPS, MAU, WAU, DAU, feature adoption rates (per 15 features), trial-to-paid rate, CAC, LTV, email open rates, support ticket volume, etc. No single focus.</current-metrics>
<strategic-goals>(1) Grow to $40M ARR in 2 years, (2) Expand enterprise segment, (3) Improve product-market fit signals (currently mixed), (4) Build moat against competitors.</strategic-goals>
<user-value>Customers learn what their customers think. Primary value: surveys + response analysis + action taken on feedback. Loyal customers run monthly/weekly feedback cycles.</user-value>
<competitive-context>Competitors (Qualtrics, UserVoice) track: NPS, survey volume, response rates. Some track 'closed loop' actions (when customers respond to feedback).</competitive-context>
<team-structure>Product (3 PMs), Engineering (25), Sales (8 AEs), Marketing (4), CS (6). Small data team (1 analyst).</team-structure>
<data-availability>Full event + usage data. Mixpanel + Snowflake.</data-availability>
<timeline>Rolling out new framework over 60 days. Board review quarterly.</timeline>
🤖 Output

Metric Framework: B2B SaaS Customer Feedback Platform, $20M ARR → $40M Target

A metric framework produces a structured hierarchy: north star + inputs + counters + functional cascade. Your current state (35+ metrics, no focus) is classic 'tracking without action.' This framework gives you ONE company north star, 4 input metrics, 2 counter-metrics, and a functional team cascade.

North Star Selection

Recommended North Star: Weekly Active Feedback Cycles (WAFC)

Definition: Number of customer accounts that complete a full feedback cycle (create survey → launch → review responses → take documented action) per week.

Why this north star:

1. Captures core user value. Customers don't pay for 'surveys sent' — they pay for the INSIGHT + ACTION that improves their business. Full cycle = value delivered.

2. Leading indicator of retention. Customers completing cycles are engaged; sporadic users churn.

3. Aligns with competitive moat. Closed-loop action is differentiator from survey-only tools.

4. Drives revenue indirectly. Retention × expansion = long-term revenue.

5. Scalable measurement. Countable weekly, with trends visible monthly/quarterly.

Alternatives considered:

  • MRR/ARR: outcome, not driver. Lagging.
  • MAU: includes passive users not getting value.
  • NPS: static, doesn't capture action depth.
  • Survey volume: can game without real value.

North Star formula: WAFC = accounts completing all 4 steps (create → launch → review → action) in a 7-day window.

Target: currently ~1,200/week; goal: 3,000/week by end of year 2 (2.5×, in line with the $40M ARR goal).
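
To make the definition concrete, here is a minimal sketch of the WAFC computation, assuming an event log of (account_id, event, timestamp) tuples and calendar-week windows. The event names are illustrative placeholders, not this platform's actual schema.

```python
from datetime import timedelta

# The four cycle steps; names are illustrative, not a real schema.
CYCLE_STEPS = {"survey_created", "survey_launched", "response_reviewed", "action_logged"}

def wafc(events, week_start):
    """Count accounts completing all 4 cycle steps inside one 7-day window.

    events: iterable of (account_id, event, timestamp) tuples.
    """
    week_end = week_start + timedelta(days=7)
    steps_by_account = {}
    for account_id, event, ts in events:
        if event in CYCLE_STEPS and week_start <= ts < week_end:
            steps_by_account.setdefault(account_id, set()).add(event)
    return sum(1 for steps in steps_by_account.values() if steps == CYCLE_STEPS)
```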

Input Metrics (4 Drivers)

Metrics that, if improved, drive the north star.

Input 1: Survey Creation Rate

Metric: # new surveys created per active customer per month.

Why: First step of cycle. More creation = more potential cycles.

Current: 1.8/customer/month

Target: 3.2/customer/month (a ~78% increase)

Input 2: Launch Success Rate

Metric: % of created surveys that get launched (not abandoned in draft).

Why: Conversion from creation to launch. Surveys are often drafted but never deployed.

Current: 65% (35% abandoned)

Target: 80%+

Input 3: Response Review Engagement

Metric: % of launched surveys where creator reviews responses within 7 days.

Why: Reviewing = engaging with insight. Without review, data dies.

Current: 72%

Target: 88%+

Input 4: Action-Taken Rate

Metric: % of reviewed surveys where customer documents action taken (feature built, process changed, customer reached out, etc.).

Why: Closing the loop. THIS is the differentiator — separates from survey-only tools.

Current: 35% (significant improvement opportunity)

Target: 55%+
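
Inputs 2-4 are step-to-step conversion rates of the same funnel (input 1 is a simple per-customer count). A toy illustration of how they compose, using roughly the current-state numbers above; the function and field names are hypothetical:

```python
def funnel_rates(counts):
    """Inputs 2-4 as step-to-step conversion rates of the cycle funnel."""
    return {
        "launch_success_rate": counts["launched"] / counts["created"],      # Input 2
        "review_engagement_rate": counts["reviewed"] / counts["launched"],  # Input 3
        "action_taken_rate": counts["actioned"] / counts["reviewed"],       # Input 4
    }

# Roughly the current state: 65% launch, 72% review, 35% action.
print(funnel_rates({"created": 1000, "launched": 650, "reviewed": 468, "actioned": 164}))
```

Multiplying the three rates (0.65 × 0.72 × 0.35 ≈ 16%) shows how few created surveys currently become completed cycles, which is why input 4 is the biggest lever.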

Counter Metrics (Prevent Gaming)

What could go wrong if you optimized solely for the north star?

Counter 1: Survey Quality (Response Rate)

Risk: optimizing volume → lower-quality surveys → lower response rates.

Metric: average response rate per survey.

Protective threshold: response rate must stay above 25% (industry benchmark).

Counter 2: Customer Satisfaction (NPS)

Risk: pushing customers to complete more cycles = pressure = unhappy customers.

Metric: NPS from customer satisfaction surveys.

Protective threshold: NPS must stay above 50 (current: 55).
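
A counter metric only protects you if its threshold is checked automatically. A minimal alert sketch, using the two floors above; the function name is hypothetical:

```python
COUNTER_THRESHOLDS = {
    "avg_response_rate": 0.25,  # Counter 1: survey quality floor
    "nps": 50,                  # Counter 2: satisfaction floor
}

def counter_metric_alerts(current):
    """Return counter metrics that have dropped below their protective floor."""
    return {name: value for name, value in current.items()
            if name in COUNTER_THRESHOLDS and value < COUNTER_THRESHOLDS[name]}

# counter_metric_alerts({"avg_response_rate": 0.22, "nps": 55})
# -> {"avg_response_rate": 0.22}: quality is slipping even though NPS looks fine.
```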

Functional Team Metrics (Cascade)

Product

Team north star: Feature adoption rate (% customers using 5+ features).

  • Input: feature discovery rate, onboarding completion, in-product tutorials
  • Connection to company NS: more feature adoption → more creation → more cycles

Engineering

Team north star: Platform reliability (% uptime + p95 response time).

  • Input: incident count, MTTR, bug escape rate
  • Connection to company NS: a reliable platform enables cycles

Sales

Team north star: Enterprise share of ACV (%).

  • Input: pipeline conversion, ACV expansion, enterprise win rate
  • Connection: enterprise customers run more cycles

Marketing

Team north star: MQL → SQL conversion rate.

  • Input: lead quality score, content engagement, channel mix
  • Connection: better leads → better customers → more cycles

CS

Team north star: Customers active in all 4 stages (WAFC customer count).

  • Input: onboarding completion, CSM QBR cadence, expansion motion
  • Connection: direct ownership of the cycle completion rate

Measurement Implementation

Data Architecture:

Events tracked (Mixpanel):

  • Survey Created
  • Survey Launched
  • Response Reviewed
  • Action Logged
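
A minimal tracking sketch using the official mixpanel Python client; the token is a placeholder, and tagging events with a framework property is an assumption. Event names mirror the list above:

```python
from mixpanel import Mixpanel  # pip install mixpanel

mp = Mixpanel("YOUR_PROJECT_TOKEN")  # placeholder project token

def log_cycle_event(account_id: str, event_name: str) -> None:
    """Send one of the four cycle events to Mixpanel for the WAFC rollup."""
    mp.track(account_id, event_name, {"framework": "wafc"})

log_cycle_event("acct_123", "Survey Created")
```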

Weekly rollup (Snowflake + dbt):

  • WAFC calculation per customer
  • Aggregate company-level
  • Trend analysis
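
In production this rollup would live as a dbt model over the Snowflake event tables; as a sketch of the same logic in pandas (column names assumed, events pre-filtered to the four cycle steps):

```python
import pandas as pd

def weekly_wafc_rollup(events: pd.DataFrame) -> pd.Series:
    """Company-level WAFC per calendar week.

    events: DataFrame with columns account_id, event, and a datetime
    timestamp, filtered to the four cycle events only.
    """
    events = events.assign(week=events["timestamp"].dt.to_period("W"))
    steps = events.groupby(["week", "account_id"])["event"].nunique()
    return (steps == 4).groupby("week").sum()  # accounts with all 4 steps, per week
```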

Dashboard (Metabase):

  • Company north star (weekly)
  • Input metrics trend
  • Counter metrics alerts
  • Functional team dashboards

Team Training:

Week 1-2 rollout:

  • All-hands: what's changing + why
  • Team-specific briefings
  • Documentation of metric definitions
  • Dashboard tutorials

Week 3-4:

  • Integration into team OKRs
  • First weekly metrics reviews
  • Feedback on framework

Reporting Cadence

Weekly (Monday):

  • North star trend + change
  • Input metrics dashboard
  • Counter metric alerts if triggered
  • Exec summary (5-min read)

Monthly:

  • Deep-dive on north star drivers
  • Attribution analysis (what moved the needle)
  • Experiment results tied to metrics
  • Board-ready report

Quarterly:

  • Full framework review
  • Target recalibration
  • Strategic adjustments
  • Functional team retrospectives

Annually:

  • Framework review: still the right north star?
  • Competitive landscape shifts
  • Strategic alignment check

Common Anti-Patterns

Anti-pattern 1: Multiple 'north star' metrics

Problem: teams pull in different directions

Solution: ONE north star. Functional teams have their own, but all cascade up.

Anti-pattern 2: Revenue as north star

Problem: revenue is a lagging outcome, not an actionable driver

Solution: leading indicators that drive revenue long-term

Anti-pattern 3: Gaming the metric

Problem: teams optimize the metric at the expense of the business

Solution: counter-metrics + qualitative review

Anti-pattern 4: Too many input metrics

Problem: 15 'inputs' = no focus

Solution: 3-5 max, each tied to the north star

Anti-pattern 5: Measuring what teams can't influence

Problem: NPS depends on every team at once, so no single team can attribute changes to its own work

Solution: metrics teams can actually move via their work

Key Takeaways

  • North star: Weekly Active Feedback Cycles (WAFC) — accounts completing full cycle (create → launch → review → action). Captures real user value + aligns with competitive moat.
  • 4 input metrics drive north star: survey creation rate, launch success rate, response review engagement, action-taken rate. Improve these = improve north star.
  • 2 counter metrics prevent gaming: response rate (quality) + NPS (satisfaction). Protect against 'more volume but worse quality' optimization.
  • Functional cascades: Product, Engineering, Sales, Marketing, CS each have team north star that ties to company north star. Aligned without being identical.
  • Rollout 60 days: documentation, training, OKR integration, dashboards, weekly reviews. Annual full framework review. Single focus beats 35-metric tracking.

Common use cases

  • Companies defining north star metric
  • Product teams establishing metric hierarchy
  • Execs standardizing leadership metrics
  • Growth teams aligning on focus metric
  • Post-reorganization metric reset

Best AI model for this

Claude Opus 4 or Sonnet 4.5. Metric design requires strategy + analytics + organizational understanding. Top-tier reasoning matters.

Pro tips

  • North star: one metric that, if optimized, drives long-term value.
  • Not revenue typically. Revenue = outcome. North star = leading indicator.
  • Examples: Airbnb 'nights booked', Facebook 'DAU', Slack 'messages sent by team of 3+'.
  • Input metrics: 3-5 that drive north star.
  • Counter metrics: what could go wrong if optimizing solely for north star.
  • OMTM: focus on ONE metric at a time at company level.
  • Functional teams have their own north star that ties to the company's.
  • Review annually. Change only after a fundamental business shift.

Customization tips

  • Don't change north star frequently. Annual review. Change only for fundamental business shifts.
  • Every employee should know the north star: print it on the wall, mention it in all-hands, share weekly progress.
  • Tie compensation partially to north star. Alignment follows incentives.
  • Review counter metrics as vigilantly as north star. Gaming is real risk.
  • Post-quarter, validate: did the north star actually predict outcomes? If not, you picked the wrong north star; investigate and adjust (a quick check is sketched below).
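
One way to run that validation, as a rough heuristic rather than causal proof: correlate the weekly north star series with revenue a quarter later. The function name and lag are hypothetical choices; both series are assumed weekly-indexed and aligned:

```python
import pandas as pd

def north_star_predicts(wafc: pd.Series, revenue: pd.Series, lag_weeks: int = 12) -> float:
    """Correlation between WAFC and revenue `lag_weeks` later.

    A good leading indicator should correlate with future revenue; a weak or
    negative value after a few quarters is evidence of the wrong north star.
    """
    return wafc.corr(revenue.shift(-lag_weeks))
```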

Variants

SaaS North Star

For B2B/B2C SaaS.

E-commerce North Star

For retail/DTC.

Marketplace North Star

For two-sided platforms.

Content/Media North Star

For content-driven businesses.

Frequently asked questions

How do I use the Metric Framework Design — North Star + Input Metrics That Drive Action prompt?

Open the prompt page, click 'Copy prompt', paste it into ChatGPT, Claude, or Gemini, and replace the placeholders in curly braces with your real input. The prompt is also launchable directly in each model with one click.

Which AI model works best with Metric Framework Design — North Star + Input Metrics That Drive Action?

Claude Opus 4 or Sonnet 4.5. Metric design requires strategy + analytics + organizational understanding. Top-tier reasoning matters.

Can I customize the Metric Framework Design — North Star + Input Metrics That Drive Action prompt for my use case?

Yes: every Promptolis Original is designed to be customized. Key levers: the north star (one metric that, if optimized, drives long-term value) and keeping revenue out of that slot (revenue is an outcome; the north star is a leading indicator).
