

The rise of usage-based and credit-based pricing has made it easier than ever to track what customers do.

Every revenue team has more consumption data than they know what to do with. Credits burned per month. Seats active. API calls made. Logins tracked. Health scores calculated from activity patterns.

And yet — NRR is declining across the sector. Renewals are getting harder. CS teams walk into renewal meetings with dashboards full of activity and leave without a signed contract.

The data is there. The value story is not.

Usage data is supplier-side accounting

Stefan Kontschinsky, Product Management Director at Ping Identity, put it directly last week in his guest article "AI Pricing Has a Value Problem" on Ed Arnold’s The Valorizer: usage-based pricing is supplier-oriented cost recovery.

When a vendor tracks credits burned, seats active, and logins per month, they are counting their own output — not documenting the customer's outcome.

These two things look similar from the inside. They feel completely different to a customer who is deciding whether to renew at full price.

A health score tells the vendor how engaged the customer is. It does not tell the customer what return they got on their investment. A usage dashboard shows the customer is using the product. It does not show what the product did for them.

This is not a problem of dashboard design. It is a problem of evidence category.

You are bringing usage data to a value conversation.

Three problems with using usage as a proxy for value

Problem 1: Usage metrics require interpretation

“Usage is up 18%” is a fact. What it means for the customer’s business is an interpretation.

That interpretation is where value arguments get weakened.

The CS manager has to translate “18% usage increase” into a claim about productivity, efficiency, or revenue impact. The customer hears the claim and asks: “Where is the calculation that connects usage to revenue?”

Most of the time, there is no calculation. There is a narrative. And customers are not persuaded by narratives.

The further the metric from customer economics, the harder the renewal conversation.

Problem 2: Activity is not impact

A customer can use your product heavily and still not see value. A customer can use it lightly and see outsized impact.

Activity-based health scores measure engagement. They do not measure outcomes.

When a renewal comes up for discussion, the customer does not care how many times someone logged in. They care what changed in their business because they paid for your product.

If the only evidence you bring is activity data, you are asking the customer to infer value from usage. That is not a customer-friendly argument.

Problem 3: Generic AI and dashboards give you numbers — not cited calculations

Most usage dashboards are powered by BI tools or generic AI summarization. They give you numbers. They do not give you the cited, outcome-connected calculation that links your product to this customer’s revenue, cost, or risk.

A customer evaluating a renewal does not want a summary of what happened. They want a quantified account of what was delivered — with equations they can scrutinize, assumptions they can challenge, and a payback calculation they can defend internally.

Usage data does not give them that. It gives them a proxy metric that someone in the room has to translate into a value claim.

Translation is where credibility breaks down.

What value proof looks like

Value proof connects directly to customer economics — not through a proxy metric, but through cited calculations that link your product to revenue, cost, or risk.

Here is what that looks like in practice:

A SaaS company sells workflow automation software. At deal close, they build a value case showing the customer will save 1,200 hours per quarter by automating manual approvals. The value case cites the equations: average hourly cost of the approval team, number of approvals per quarter, time saved per approval, total quarterly savings.
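A value case like this is just arithmetic the customer can audit. Here is a minimal sketch of the cited equations. The 4,500 approvals per quarter come from the example; the hourly cost, minutes saved per approval, and contract price are illustrative assumptions chosen so the totals match the 1,200 hours and 6-month payback above.

```python
# Sketch of a cited value case for workflow-automation savings.
# Inputs marked "assumed" are illustrative, not figures from the article;
# a real value case would cite the customer's own numbers for each one.

HOURLY_COST = 55.0                # avg. loaded hourly cost of approval team (assumed)
APPROVALS_PER_QUARTER = 4_500     # approvals to be automated (from the example)
MINUTES_SAVED_PER_APPROVAL = 16   # time saved per approval (assumed: 4,500 * 16/60 = 1,200 h)
ANNUAL_CONTRACT_PRICE = 132_000.0  # contract price (assumed, to match 6-month payback)

hours_saved = APPROVALS_PER_QUARTER * MINUTES_SAVED_PER_APPROVAL / 60
quarterly_savings = hours_saved * HOURLY_COST
monthly_savings = quarterly_savings / 3
payback_months = ANNUAL_CONTRACT_PRICE / monthly_savings

print(f"Hours saved per quarter: {hours_saved:,.0f}")
print(f"Quarterly savings: ${quarterly_savings:,.0f}")
print(f"Payback period: {payback_months:.1f} months")
```

Every line of that calculation is an assumption the customer can challenge, which is exactly the point: a challenged assumption can be corrected; an uninterpreted usage metric cannot.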

Six months later, the CS team walks into the renewal meeting. They bring the original value case — and an updated version showing what was actually delivered.

Approvals automated: 4,800 (target was 4,500). Hours saved: 1,320 (target was 1,200). Payback period: 4.2 months (forecast was 6 months).

The customer sees a quantified record of what was promised and what was delivered. The renewal is not a negotiation. It is a confirmation.

That is value proof.

The difference between usage metrics and value proof

Usage metrics measure activity: credits burned, seats active, API calls, logins. They are counted from the supplier's side and require interpretation to mean anything ("usage is up 18%").

Value proof measures outcomes: revenue, cost, or risk, stated in the customer's economics, with equations the customer can scrutinize ("1,320 hours saved, 4.2-month payback").

Why this matters now
Why this matters now

The customers who feel like they are not getting value are not wrong. They are just not being shown the right evidence.

When the only proof you bring to a renewal conversation is usage data — credits consumed, seats active, dashboards showing engagement — you are asking the customer to infer value from activity.

That inference is where renewals stall.

The companies that retain at full price in the next renewal cycle will not be the ones with the best dashboards. They will be the ones who can show, in the customer’s language, what they actually delivered.

Value proof. Not usage proxies.

What connects deal close to renewal proof

The business case that closes the deal should become the baseline for proving value at renewal.

Here is the structure that makes that connection work:

At deal close: Build a value case — cited equations showing the ROI the customer will see, the payback period they can expect, and the outcomes the product will deliver.

During onboarding: Create a value realization plan — a document that tracks what was promised in the business case and how it will be measured after close.

At QBR: Update the value realization plan with delivered outcomes. Show what was promised, what was delivered, and where the gaps are.

At renewal: Bring the updated value case to the customer. Show the quantified record of what was delivered against what was promised. Defend price with proof, not activity dashboards.

This is the value lifecycle. The companies that build this structure stop losing renewals to “we are not sure it is worth the price” and start renewing at price — or expanding beyond it.

What you can do differently

If you are a CS leader walking into renewal meetings with usage dashboards and leaving without signed contracts — the problem is not your product. The problem is the evidence.

Stop bringing activity data to a value conversation. Start bringing cited calculations that connect your product to customer economics.

The customer does not care how many times someone logged in. They care what changed in their business because they paid for your product.

Show them that. In their language. With equations they can scrutinize.

That is how renewals close at full price.

FAQ

What is the difference between usage data and value proof?

Usage data measures activity: what the customer did with your product. It is supplier-side accounting. Value proof measures customer outcomes: what your product did for the customer's business. Usage data tells you the product was used. Value proof tells you what the customer got for their investment.

Can usage metrics ever be part of a value story?

Yes — if they are connected to outcomes. “You made 10,000 API calls” is usage data. “You made 10,000 API calls, which automated 4,800 approvals, saving 1,320 hours and recovering $180K in labour cost” is value proof. The usage metric becomes meaningful when it is cited in a calculation that connects to customer economics.
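The conversion in that answer is a calculation, not a restatement. As a sketch (the minutes per approval and hourly cost below are illustrative assumptions; the call and approval counts come from the example):

```python
# Sketch: turning a usage metric into a value-proof statement by
# citing the calculation that connects it to customer economics.
# minutes_per_approval and hourly_cost are assumed inputs a real
# value case would source from the customer.

def value_proof(api_calls: int, approvals: int,
                minutes_per_approval: float, hourly_cost: float) -> str:
    hours_saved = approvals * minutes_per_approval / 60
    labour_recovered = hours_saved * hourly_cost
    return (f"{api_calls:,} API calls automated {approvals:,} approvals, "
            f"saving {hours_saved:,.0f} hours and recovering "
            f"${labour_recovered:,.0f} in labour cost")

print(value_proof(10_000, 4_800, minutes_per_approval=16.5, hourly_cost=135.0))
```

The API-call count is the same in both statements; what changes is that the second one exposes the assumptions the customer can challenge.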

What does a value realization plan include?

A value realization plan connects the business case built at deal close to the value proof delivered at renewal. It includes: what was promised in the original business case, how those outcomes will be measured, what milestones will be tracked during the customer lifecycle, and what evidence CS will bring to the renewal conversation to prove value was delivered.

Why do customers reject usage-based health scores at renewal?

Because health scores measure engagement, not outcomes. A customer evaluating a renewal does not care how engaged their team was. They care what return they got on their investment. A high health score with no quantified outcome is not a renewal defense — it is a correlation hoping to be treated as causation.

What is the most common mistake CS teams make at renewal?

Walking into the renewal meeting with a dashboard showing activity — and no quantified record of what was delivered. The customer asks “what did we get for this investment?” and the CS manager says “your team used it heavily.” That is not an answer. That is a usage proxy hoping to be accepted as value proof. The renewal stalls because the evidence is in the wrong category.
