SQL Server, more about statistics and the "Ascending Key Problem"

Hi Guys!

Here is today's post, which I wrote in one go.
So please forgive the punctuation.

In the last post we talked about cardinality estimation, with a short introduction to statistics.

Before moving forward I would like to say a few words about how they work, and especially about a problem that can arise with them, called the "Ascending Key Problem".

So, are you ready for another deep dive? GO!


Statistics and the automatic update of statistics

I will try to explain an important concept through the example below.

Suppose we have an already populated table (called listofdata) with up-to-date statistics.
The structure of our table is:


CREATE TABLE listofdata (id INTEGER, data DATETIME, value FLOAT)
CREATE INDEX idx_listodata_data ON listofdata(data)
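
As a reference, here is a minimal sketch of how such a table could be populated for this test; the date, the loop and the FULLSCAN update are my assumptions, not taken from the original example:

-- Populate the table with 1000 rows that all share the same value in the column "data"
SET NOCOUNT ON;
DECLARE @i INT = 1;
WHILE @i <= 1000
BEGIN
    INSERT INTO listofdata (id, data, value)
    VALUES (@i, '2023-01-01', RAND() * 100);
    SET @i = @i + 1;
END;

-- Make sure the statistics of the index are up to date before the test
UPDATE STATISTICS listofdata idx_listodata_data WITH FULLSCAN;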


Looking at the statistics associated with the index, we see that we have 1000 rows, all with the same value in the field data.


DBCC SHOW_STATISTICS (listofdata, 'idx_listodata_data') WITH HISTOGRAM
(screenshot: the histogram returned by DBCC SHOW_STATISTICS for the index)

Let's now insert another 1000 rows into the table, but with a different value for the field data.
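
For example (the date below is just a hypothetical value, different from the one already in the table):

DECLARE @i INT = 1001;
WHILE @i <= 2000
BEGIN
    INSERT INTO listofdata (id, data, value)
    VALUES (@i, '2023-02-01', RAND() * 100);
    SET @i = @i + 1;
END;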
Now observe our statistics again.


IMPORTANT:

What can you observe?  
Well: our new 1000 rows are not in the statistics!

Why?
Because the inserted rows are not numerous enough to trigger the automatic update of the statistics.
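
As a side note, you can check how many modifications have accumulated since the last statistics update with the sys.dm_db_stats_properties DMV (available from SQL Server 2008 R2 SP2 and 2012 SP1 onward). A minimal sketch:

-- For each statistics object of the table, show when it was last updated
-- and how many modifications have accumulated since then
SELECT s.name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('listofdata');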

This becomes a problem when we search for the data we have just inserted.
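
For example, a query like the following (using the hypothetical date of the newly inserted rows) searches only within the fresh range of values:

SELECT id, data, value
FROM listofdata
WHERE data = '2023-02-01';  -- this value is not yet present in the histogram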

Look at the execution plan:
(screenshot: the execution plan of the query, with Estimated Number of Rows = 1)

Oops! The Estimated Number of Rows is equal to 1 (rather than 1000!)

This is a big problem because, if the optimizer does not know the correct cardinality, it can choose a non-optimal execution plan.

For example, the optimizer could use an index scan instead of an index seek, or vice versa.
This is the so-called "Ascending Key Problem".

But the question is: why is the value 1 used?
It is simply the default estimate used when the value we search for is not present in the statistics!

P.S. Some trace flags exist to mitigate this problem, but we won't talk about them today!

Old Cardinality estimator vs. New Cardinality estimator

What we have seen is related to the old cardinality estimator used by SQL Server up to and including version 2012.

SQL Server 2014 brings us the so-called new cardinality estimator.

So, in order to use the new cardinality estimator, you need SQL Server 2014 or later running with its native compatibility level.
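
For example, assuming a database named MyDb (the name is just a placeholder), switching to the SQL Server 2014 compatibility level enables the new cardinality estimator:

ALTER DATABASE MyDb SET COMPATIBILITY_LEVEL = 120;
-- 120 = SQL Server 2014; higher levels (130, 140, ...) also use the new estimator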

One thing the new cardinality estimator brings us is a solution to the "Ascending Key Problem".

How?

Important:

The new cardinality estimator assumes that some rows may exist!
So, to estimate the number of rows, it takes the "All density" value and multiplies it by the cardinality of the table.

In the picture below you can see the "All density" value. SQL Server simply multiplies this value by the number of rows in the table.
This leads to a more realistic estimated number of rows and usually avoids the "Ascending Key Problem".

(screenshot: the density vector of the statistics, showing the "All density" value)
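
The same value can also be read with the DENSITY_VECTOR option of DBCC SHOW_STATISTICS. Below is a small sketch with purely illustrative numbers, not taken from the example above:

DBCC SHOW_STATISTICS (listofdata, 'idx_listodata_data') WITH DENSITY_VECTOR

-- Purely illustrative numbers: if "All density" were 0.001 and the table held
-- 2,000,000 rows, the new cardinality estimator would estimate
-- 0.001 * 2,000,000 = 2000 rows for a value outside the histogram,
-- instead of the fixed estimate of 1 used by the old estimator.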

That's all for today!
I hope you enjoy this post!
See you soon.

Luca Biondi @ SQLServerPerformance blog!


Next post: Your biggest customer can’t work! Is it stuck because of a random SQL Server crash? And now?

Previous Post: Inside the SQL Server Query Optimizer - part 3 Cardinality estimation

Listen to a song with my mood of the day: 13 buone ragioni
