The SQL Server Query That Looks Fast… Until It Hits Production

SQL Server Performance Series – Real World Query Pitfalls

Hi SQL Server folks,

Today we talk about a query pattern that every SQL Server developer has seen at least once.

A query that runs perfectly in development.

It returns results instantly.
Execution plan looks simple.
Everyone thinks the problem is solved.

Then the code goes to production. And…

…suddenly the query that ran in milliseconds now takes seconds. Or minutes.

What happened?

The problem is usually not the query itself.

The problem is the data.


P.S. if you missed my last post: 

SQL Server Performance Weekly Recap – What You Might Have Missed This Week!


The Typical Development Query

In development databases the amount of data is typically very small.

Let's say we write something like this:


SELECT *
FROM Orders
WHERE CustomerID = @CustomerID

With a few thousand rows this query looks incredibly fast.

The execution plan is trivial.

Everything seems perfect.


Now Add Real Production Data

In production things change dramatically.

  • Orders table grows to millions of rows
  • Some customers have thousands of orders
  • Data distribution becomes uneven

Now SQL Server must choose a plan based on parameter values.

And that is where the trouble begins.


The Hidden Enemy: Parameter Sniffing

When SQL Server compiles a parameterized query or stored procedure, it "sniffs" the parameter values of the execution that triggers compilation and builds the plan around them.

If the first execution uses a very selective value, SQL Server might choose an index seek.

But if later executions use a very common value, that same plan can become extremely inefficient.

The query is correct.

The plan is wrong for the data distribution.
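The scenario above can be reproduced with a minimal sketch. The procedure name, table, and column are hypothetical, assuming an Orders table with a nonclustered index on CustomerID:

```sql
-- Hypothetical setup: dbo.Orders with a nonclustered index on CustomerID.
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT *
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END;
GO

-- First call compiles the plan for a rare value (an index seek fits well)...
EXEC dbo.GetOrdersByCustomer @CustomerID = 1;

-- ...and this call reuses that same seek plan for a very common value,
-- where a scan would likely have been cheaper.
EXEC dbo.GetOrdersByCustomer @CustomerID = 999;
```

The second call does not recompile: it inherits whatever plan the first call produced, which is exactly the sniffing problem.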


A Simple Example

Imagine a table with 10 million rows.

Customer 1 has 10 orders.
Customer 999 has 200,000 orders.

The same query:


SELECT *
FROM Orders
WHERE CustomerID = @CustomerID

For CustomerID = 1 an index seek is perfect.

For CustomerID = 999 a scan may be much more efficient.

But SQL Server only chooses one plan.


Benchmark Scenario

Table size: 10 million rows

Query compiled using a rare customer value.

  • Execution time: 15 ms
  • Logical reads: 150

Now executed with a very common customer value:

  • Execution time: 4.2 seconds
  • Logical reads: 2,500,000+

Same query. Same plan. Completely different performance.
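You can reproduce this kind of measurement yourself. A sketch, assuming the same hypothetical Orders table; SET STATISTICS IO and SET STATISTICS TIME report the logical reads and elapsed time per statement:

```sql
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Parameterized so the plan is cached and reused:
EXEC sp_executesql
    N'SELECT * FROM dbo.Orders WHERE CustomerID = @CustomerID',
    N'@CustomerID INT',
    @CustomerID = 1;      -- run the rare value first, then rerun with 999
                          -- and compare the logical reads in the Messages tab

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```

Because sp_executesql caches the plan for the parameterized text, the second execution with a common value reuses the plan compiled for the rare one, matching the benchmark above.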


Possible Solutions

Depending on the workload, several strategies can help:

  • OPTION (RECOMPILE)
  • OPTIMIZE FOR hint
  • Plan guides
  • Better indexing strategies
  • Separate queries for different workloads

The key is understanding the data distribution.
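The two query hints from the list look like this in practice (a sketch against the same hypothetical Orders table):

```sql
-- Option 1: compile a fresh plan for every execution.
-- Each value gets its ideal plan, at the cost of extra compile CPU.
SELECT *
FROM dbo.Orders
WHERE CustomerID = @CustomerID
OPTION (RECOMPILE);

-- Option 2: ignore the sniffed value and optimize for the "average" case
-- using the density information in the column statistics.
SELECT *
FROM dbo.Orders
WHERE CustomerID = @CustomerID
OPTION (OPTIMIZE FOR (@CustomerID UNKNOWN));
```

RECOMPILE suits low-frequency queries with extreme skew; OPTIMIZE FOR UNKNOWN gives one stable, middle-of-the-road plan for high-frequency queries.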


Why This Happens So Often

Because development environments rarely mirror production data.

Developers test logic.

But SQL Server performance depends on:

  • data volume
  • data distribution
  • statistics
  • execution plans

Without realistic data, performance testing becomes misleading.
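If you cannot copy production data, you can at least fake its shape. A hypothetical sketch (table and columns assumed) that bulk-loads a deliberately skewed distribution, so one "hot" customer dominates just like in production:

```sql
-- Generate 1,000,000 rows with a skewed distribution:
-- ~20% of rows go to one hot customer (999), the rest are spread thin.
;WITH N AS (
    SELECT TOP (1000000)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM sys.all_objects a CROSS JOIN sys.all_objects b
)
INSERT INTO dbo.Orders (CustomerID, OrderDate)
SELECT CASE WHEN n % 5 = 0 THEN 999        -- hot customer
            ELSE n % 10000 END,            -- 10,000 ordinary customers
       DATEADD(DAY, -(n % 365), GETDATE())
FROM N;

UPDATE STATISTICS dbo.Orders;              -- refresh stats after the bulk load
```

With skewed data and fresh statistics, the dev environment finally exposes the same sniffing behavior production will.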


The Real Lesson

A query that runs fast in development proves almost nothing.

Real performance appears only with real data.

Always test queries with production-like datasets.

Execution plans tell the real story.


Final Takeaways

  1. Fast queries in development can fail in production.
  2. Parameter sniffing can produce inefficient plans.
  3. Data distribution matters more than query syntax.
  4. Always test with realistic datasets.

And remember…

No SQL Server was harmed during these benchmarks 😉

See you in the next deep dive — where we keep pushing SQL Server to its limits!!!

Luca Biondi © 2026
