Posts

Showing posts from September, 2021

SQL Server queries, write them Sargable: the ISNULL function

Hi Guys, welcome back to this blog. This will be a light post, just to review some important concepts that I have already discussed in the past. We talked about the Sargable concept here: Write high performing query: Is your query SARGABLE? and Sargable Queries part 2. Examples. In short, your query is sargable when it can use an index with a SEEK (and not a SCAN) operation. This way you read from a table only the specific rows you need. This approach is also the only one that leads to scalable performance. We have already seen, with examples, that functions are not sargable, and therefore our ISNULL function is not either! So how can we proceed? How can we write the same logic in a sargable way?

How to replace the ISNULL function using Sargable logic

Suppose you are reading data from a table. The table in our example is JDT1, a standard Journal Entry table from SAP Business One. You need to extract the rows where the column SourceId is equal to 420. Now if you use the ISNULL func…
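In a nutshell, the rewrite looks like this (a minimal sketch of the general pattern; JDT1 and SourceId come from the excerpt above, while the TransId column is only illustrative and the full post may use a different query):

-- Non-sargable: the ISNULL wrapper hides the column from the optimizer, forcing a scan
SELECT TransId FROM JDT1 WHERE ISNULL(SourceId, -1) = 420;

-- Sargable: 420 can never equal the replacement value -1, so NULL rows can never match
-- and the predicate can be evaluated with an index seek on SourceId
SELECT TransId FROM JDT1 WHERE SourceId = 420;

-- If NULL rows must match too (replacement value equal to the searched value),
-- say it explicitly: both predicates are still seekable on the same index
SELECT TransId FROM JDT1 WHERE SourceId = 420 OR SourceId IS NULL;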

SQL Server, Using the OPTIMIZE FOR SEQUENTIAL KEY Clause to reduce the last page insert latch contention

Welcome back guys! Today we conclude this series of posts on the last page insert contention issue by looking at a new feature introduced with SQL Server 2019. This new feature is called OPTIMIZE_FOR_SEQUENTIAL_KEY and it is intended to reduce latch contention. It is actually a clause that you can specify when creating an index. First we will see how it works, then we will do some tests. Are you ready? Go!

The OPTIMIZE_FOR_SEQUENTIAL_KEY feature

In order to use this feature you need to add OPTIMIZE_FOR_SEQUENTIAL_KEY = ON to the definition of the index:

CREATE CLUSTERED INDEX [IDX_ARTICO_CLUSTERED_ID] ON [dbo].[ARTICO] ([ID] ASC)
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
      DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON,
      ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = ON)
ON [PRIMARY]

Just remember to set the compatibility level to 150, which is the native SQL Server 2019 level. The magic behind the movie! Ye…
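As a quick reference, turning the option on can look like this (a minimal sketch; MyDb is a placeholder database name, and the ALTER INDEX form is an alternative to recreating the index as above):

-- 150 is the SQL Server 2019 compatibility level
ALTER DATABASE [MyDb] SET COMPATIBILITY_LEVEL = 150;

-- The option can also be switched on for an existing index without a rebuild
ALTER INDEX [IDX_ARTICO_CLUSTERED_ID] ON [dbo].[ARTICO]
SET (OPTIMIZE_FOR_SEQUENTIAL_KEY = ON);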

SQL Server, How to Reduce the Last Page Insert Latch Contention

Welcome back guys! Two posts ago I blogged about choosing the clustered key and the last page insert latch contention problem. Last time we left off discussing some options you can adopt, such as:

- Adopt a random clustered key value
- Reverse index technique
- Enable the OPTIMIZE_FOR_SEQUENTIAL_KEY option in SQL Server 2019
- In-memory technique
- Partitioning technique

Today we make some considerations on how to reduce the contention on the last page latch (with examples) by applying these methods:

- Adopt a random clustered key value (GUID)
- Reverse index technique

We will run some tests to measure both the contention generated and the execution time. To generate a workload by running multiple simultaneous sessions of our query we use a Microsoft tool named RML Utilities, which I mentioned (at the moment only in Italian) here. We also run a test to get our baseline, which is inserting data into a table with an autoincrement integer ID column and a clustered index def…
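To make the first technique concrete, here is a minimal sketch of a baseline table versus a random-key table (table and column names are illustrative, not taken from the full post):

-- Baseline: ever-increasing key, every insert targets the same last page
CREATE TABLE dbo.TestIdentity (
    ID INT IDENTITY(1,1) NOT NULL,
    Payload CHAR(100) NOT NULL,
    CONSTRAINT PK_TestIdentity PRIMARY KEY CLUSTERED (ID)
);

-- Random key: NEWID() spreads the inserts across the whole leaf level,
-- relieving the last page latch at the price of page splits and fragmentation
CREATE TABLE dbo.TestGuid (
    ID UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID(),
    Payload CHAR(100) NOT NULL,
    CONSTRAINT PK_TestGuid PRIMARY KEY CLUSTERED (ID)
);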

SQL Server, Difference between a seek and a scan operation and the Latch coupling

Hi Guys, in my last post I blogged about buffer latches (*) and the last page insert latch contention problem.

(*) What is a latch? As written in my previous post, which I recommend reading: latches are synchronization primitives that are used by SQL Server to manage data concurrency and guarantee consistency of its in-memory structures like indexes, data pages and internal structures, such as non-leaf pages in a B-Tree.

Since it is very important to understand how latches work, today I will try to go deeper. I will try to be as clear as possible even if the topic is not the simplest. We will also understand what the difference is between a seek and a scan operation.

Seek operation and Latch Coupling

Surely you already know that you can access data in SQL Server via an index with a seek or a scan operation. Doing a seek operation means that you will use the B-Tree structure to reach the leaf-level data pages. First the Index Root Page is read, then an Index Page in the u…
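To compare the two access methods yourself (a minimal sketch, assuming the ARTICO table used elsewhere in this series; the FORCESCAN hint just forces the second plan for the sake of comparison):

-- Seek: the engine latch-couples from the root page through the intermediate
-- levels down to the single leaf page that can contain ID = 42
SELECT * FROM dbo.ARTICO WHERE ID = 42;

-- Scan: the whole leaf level is read page after page, each one latched in turn
SELECT * FROM dbo.ARTICO WITH (FORCESCAN) WHERE ID = 42;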

SQL Server, How to choose a clustered key value and the Last Page Insert Latch Contention problem

Hi Guys, today you are the data architect! Your first task is to design a table from scratch. But... how do you choose the clustered key value? It sounds easy, but remember that your decisions will have a lot of consequences for the long life of the project... and you can't go wrong!

How to choose a clustered key value

Today let's review some theory. A good clustered key must be narrow. It is not only a question of storage space (today storage devices are spacious): a lot of data also takes time to be read. Always consider that the value will be used not only in the clustered key: it will in fact be used in every non-clustered index. So, for example, fields like id, page_id and category_id must have the same data type. Normally you should use a surrogate key value (like an IDENTITY ID, a progressive number) instead of a natural key (whose values are formed by attributes that exist and are used in the external world, for example surname+name). A surrogate key value can…
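For example, the narrow surrogate key versus the natural key could look like this (a minimal sketch; the Person table is illustrative, not from the full post):

-- Surrogate key: 4 bytes, cheap to repeat in every non-clustered index
CREATE TABLE dbo.Person (
    ID INT IDENTITY(1,1) NOT NULL,
    Surname NVARCHAR(50) NOT NULL,
    Name NVARCHAR(50) NOT NULL,
    CONSTRAINT PK_Person PRIMARY KEY CLUSTERED (ID)
);

-- The natural key alternative (Surname + Name) would be up to 200 bytes wide,
-- and that width would be copied into every non-clustered index on the table.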

SQL Server. Statistics, cardinality estimation and some thoughts about my previous post

Hi Guys, a big thanks to Jeff Moden, who commented on my previous post SQL Server, statistics, performance and the Ascending key problem. He made me think more deeply about statistics, so I decided to write this second post just to clarify and add some thoughts. Enjoy the reading!

Statistics, cardinality and indexes

In the example paragraph of my previous post, from the point of view of returning a correct cardinality estimate (and therefore getting a precise execution plan), we referred to the existence of a clustered index on the table. It should be noted that it does not matter whether the index is clustered or non-clustered. I will demonstrate this with an example.

Example

Suppose we create a heap table (therefore without any index). Suppose our table has an id field and a value field. Now let's fill our table with 100 rows by putting a value from 1 to 100 in both the id and value fields. Create Table Example (id int, [v…
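Based on that description, the setup probably looks like this (a sketch reconstructed from the truncated excerpt, not necessarily the exact script of the full post):

-- Heap table: no clustered (or non-clustered) index at all
CREATE TABLE Example (id INT, [value] INT);

-- Fill it with 100 rows, values 1 to 100 in both columns
;WITH n AS (
    SELECT TOP (100) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS i
    FROM sys.all_objects
)
INSERT INTO Example (id, [value])
SELECT i, i FROM n;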