Posts

Cloud + Tuning = Money Saving, a big deal!

Hi guys, welcome back! Today just a small post for a small thought. It feels as if we were back many years ago. I remember the first years when (at least in Italy) internet connections were not flat at all: every minute of browsing had a cost, and it was not cheap. After some time things changed a bit, and we paid for the amount of data downloaded. Now, with the cloud, it is a bit the same logic: if you have a database in the cloud, the more data you read, the more you pay. So what is our "favorite weapon" to avoid paying a lot of money in the cloud? Well, tuning! It is still really useful. In this case we don't use tuning only to make our queries faster and to eliminate or reduce locks and deadlocks; we can and must also use tuning to reduce the amount of data read from the database, which is particularly welcome if our database is in the cloud. I will show a simple example just to give you the concept. Example: Suppose
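The teaser above stops before the post's own example, so here is a minimal sketch of the idea: the table, column, and index names below are invented for illustration, and the "before/after" pattern is just one common way tuning reduces the pages (and therefore the cloud-billed data) a query reads.

```sql
-- Measure how many pages each query actually reads:
SET STATISTICS IO ON;

-- Before: SELECT * on an unindexed filter scans the whole table
-- and drags every column of every page off storage.
SELECT * FROM dbo.Orders WHERE CustomerId = 42;

-- After: ask only for the columns you need and support the filter
-- with a covering index, so far fewer pages are read.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    INCLUDE (OrderDate, Total);

SELECT OrderDate, Total
FROM dbo.Orders
WHERE CustomerId = 42;
```

Comparing the "logical reads" figures reported by `SET STATISTICS IO ON` before and after is a quick way to see the saving.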

SQL Server 2022: Accelerated Database Recovery enhancements from SQL 2019 to SQL 2022

Hi guys, welcome back! There are now really a lot of you reading this blog; just two words: thank you very much! Today we will talk again about the Accelerated Database Recovery (ADR) feature. We already covered this topic four years ago, here: SQL Server 2019 and the Accelerated Database Recovery. Yeah, it's been four years. In 2019 we described in detail what ADR is; now it's time to talk about the improvements this feature received in SQL 2022. I want to remind you that SQL 2022 is the latest version of the famous RDBMS produced by Microsoft. Accelerated Database Recovery: a short summary! As we saw some time ago, ADR was introduced in SQL 2019 (15.x) with the objective of improving database availability, especially in the presence of long-running transactions. To achieve this, the database engine recovery process has been redesigned: we no longer have to scan the transaction log from the beginning of the oldest active transaction; instead the log is read only
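For readers who want to try ADR while following along, it is enabled per database (the database name below is a placeholder):

```sql
-- Turn ADR on for one database:
ALTER DATABASE [MyDb] SET ACCELERATED_DATABASE_RECOVERY = ON;

-- Check whether it is enabled:
SELECT name, is_accelerated_database_recovery_on
FROM sys.databases
WHERE name = 'MyDb';
```

Note that in Azure SQL Database ADR is on by default and cannot be disabled; the `ALTER DATABASE` step applies to on-premises SQL Server 2019/2022.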

SQL Server, multiple "GROUP BY" in a single statement? GROUPING SETS explained in a CLEAR way and with EXAMPLES!

Hi guys! Welcome back! Today we will talk about GROUPING SETS. As usual I will try to explain the topic in the clearest possible way, and, as always, with examples. So what is a GROUPING SET? GROUPING SETS can be considered an extension of the GROUP BY clause: it gives you the possibility to combine multiple GROUP BY clauses. How many times do we need to display a series of data, summing it up and regrouping it by more than one field? How many times have we had to use UNION clauses to do it? Often, many UNIONed queries together make the code less readable and less maintainable. The solution is to use GROUPING SETS. Enjoy the reading! GROUPING SETS As said, a grouping set can be considered an extension of the GROUP BY clause. Technically speaking, SQL grouping sets are a way to group data in SQL queries by multiple dimensions. In a typical SQL GROUP BY clause, data is grouped by one or more columns. For example, if you hav
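Since the excerpt is cut off before the post's examples, here is a minimal sketch of the idea it describes; the table and column names are invented for illustration. One statement produces the totals by Region, the totals by Product, and the grand total, which would otherwise take three GROUP BY queries glued together with UNION ALL:

```sql
SELECT Region, Product, SUM(Amount) AS Total
FROM dbo.Sales
GROUP BY GROUPING SETS (
    (Region),    -- totals per region
    (Product),   -- totals per product
    ()           -- empty set = the grand total
);
```

In the result, the columns that are not part of a given grouping set come back as NULL, which is how you tell the three levels of aggregation apart.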

"SQL Query Design Patterns and Best Practices" My review!

Hi guys! It’s been a while since the last post... it’s been a lot of work! In the meanwhile SQL Server 2022 got its second CU ( download ) and SSMS has reached version 19.02 ( download ). However, we are here, and so… welcome! I must say I like reviewing books because it is also a good way to stay updated. For this I say a big thanks to both the publisher Packt and the kind Nivedita Singh for offering me, once again, the opportunity to talk about their new book. Packt is an English publishing house known worldwide and specialized in books related to information technology, including programming, web design, data analysis and hardware. This new book will be published on March 22nd (but yes, you can preorder it on Amazon here: https://packt.link/AYWje ) and it is about the topic of writing queries in SQL Server. Its title is "SQL Query Design Patterns and Best Practices" and the authors are: Steve Hughes, Dr. Ram Babu Singh, Leslie Andrews, Dennis Neer, Shabbir H.

How to get the n-th row in a SQL table… in 4 different ways! Simple, clear and with examples!

Hi guys, after the deep previous post about SQL statistics, today we will talk about a much easier topic. Have you ever needed to extract from a table the second row, or the third, or the n-th row? This post is for you! Extract… Get… Select… the n-th row in a table. We can solve this task in more than one way. To follow my example you can simply execute the following commands, which create and fill the Ord table with some data:

CREATE TABLE Ord (Id int identity(1,1) primary key, Code varchar(10))
INSERT INTO Ord (Code) VALUES ('A'),('B'),('C'),('D'),('E'),('F'),('G'),('H'),('I')

This is our table. The first way I have seen many times is using a derived table. Yes, you can solve the request by splitting the problem in two parts. In the first part you get the first n rows: this will be the derived table. Then you get the first row of this derived table orde
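The excerpt stops mid-explanation, so here is a sketch of the derived-table technique it describes, plus OFFSET/FETCH as one other possible way (not necessarily one of the post's four). Both use the Ord table created above and fetch the 3rd row ordered by Id:

```sql
-- Derived-table approach: take the first 3 rows ascending (the derived
-- table), then take the last of those 3 by re-sorting descending.
SELECT TOP (1) Id, Code
FROM (SELECT TOP (3) Id, Code FROM Ord ORDER BY Id ASC) AS d
ORDER BY Id DESC;   -- returns Id = 3, Code = 'C'

-- OFFSET/FETCH (SQL Server 2012 and later): skip n-1 rows, take 1.
SELECT Id, Code
FROM Ord
ORDER BY Id
OFFSET 2 ROWS FETCH NEXT 1 ROWS ONLY;   -- also returns Id = 3, Code = 'C'
```

In both cases the ORDER BY is essential: without an explicit ordering, "the n-th row" of a table is not well defined.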

A proposal for a new way to manage SQL Statistics for data that is not evenly distributed. A dive into the logic of SQL Statistics!

Hi guys, I love database theory because it includes mathematics, statistics, logic, programming and data organization. I wrote this post because I read a post on LinkedIn about the well-known "201 bucket limitation" in the statistics used by SQL Server. Bob Ward (the Principal Architect at Microsoft, in the photo on the right while dreaming of statistics and rugby... lol) responded to the author of the LinkedIn post, asking him to give a real example where this is a limitation. He also said he would verify it. I then asked myself whether the logic with which statistics are managed could be improved in some way without increasing their size. The LinkedIn post also referred to a post by the well-known Mr. Brent Ozar: The 201 Buckets Problem, Part 1: Why You Still Don’t Get Accurate Estimates. Brent used the Stack Overflow 2010 database for his examples and I will do the same in this post. If you want to follow my examples then, first of all, download
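If you want to see the histogram (and its at-most-201 steps) behind the discussion yourself, you can inspect a statistics object directly. This sketch assumes the Stack Overflow 2010 database mentioned above; the index name is an assumption, so substitute any statistics object that exists on your copy:

```sql
-- Show only the histogram steps of one statistics object
-- (SQL Server caps the histogram at 201 steps):
DBCC SHOW_STATISTICS ('dbo.Users', 'IX_Users_Reputation') WITH HISTOGRAM;

-- List the statistics objects available on the table, to pick a real name:
SELECT name FROM sys.stats WHERE object_id = OBJECT_ID('dbo.Users');
```

Counting the rows returned by the first command shows concretely how coarse the histogram becomes on a large, unevenly distributed column.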