Make your SQL Server fly! Still on Batch Mode… – Part 6
Still on Batch Mode… – Part 6: Compatibility & Query Settings That Influence Execution
Back to the series: Part 5 – How to Force Batch Mode on Rowstore
Hi guys, we're still on Batch Mode, because understanding when it works is just as important as knowing how to force it.
Today we go deeper: we'll look at compatibility levels, memory grants, cardinality estimation, and trace flags.
Happy reading and may your CPU stay cool under heavy aggregation!
Compatibility Requirements
Batch Mode on Rowstore requires both:
- ✔ SQL Server 2019 or newer
- ✔ Database compatibility level ≥ 150
So, if your database runs at compatibility level 140 or lower, Batch Mode on Rowstore simply cannot activate.
Check Compatibility
SELECT name, compatibility_level
FROM sys.databases
WHERE name = DB_NAME();
Change Compatibility
ALTER DATABASE CURRENT
SET COMPATIBILITY_LEVEL = 150;
GO
Always test before upgrading — optimizer behavior changes across compatibility levels.
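On compatibility level 150 you also get a database scoped configuration switch, BATCH_MODE_ON_ROWSTORE, so you can turn the feature off (or back on) per database without touching the compatibility level. A minimal sketch:

```sql
-- Disable Batch Mode on Rowstore for the current database
-- (the compatibility level stays at 150)
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ON_ROWSTORE = OFF;
GO
-- Re-enable it
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ON_ROWSTORE = ON;
GO
```

This is handy during testing: you can rule Batch Mode in or out without an ALTER DATABASE ... SET COMPATIBILITY_LEVEL round trip.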
Deep Dive: Query Settings That Influence Batch Mode
A) Memory Grants
Batch Mode relies heavily on correct memory grants. If the optimizer estimates too little memory, it may avoid Batch Mode.
Hash joins and aggregations in Batch Mode are sensitive to memory availability.
-- Check memory grant info in actual execution plan
SET STATISTICS XML ON;
SELECT CustomerID, SUM(Amount)
FROM Sales
GROUP BY CustomerID;
Look for:
- MemoryGrantInfo in execution plan XML
- Spills to tempdb
- Excessive granted memory
Underestimated memory → spills → Batch Mode advantage reduced.
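If you want to watch grants live instead of digging into plan XML, the sys.dm_exec_query_memory_grants DMV shows requested vs. granted vs. used memory for queries currently executing (a sketch; run it while the workload is active):

```sql
-- Requested vs granted vs actually used memory, per running query
SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,
       used_memory_kb,
       max_used_memory_kb
FROM sys.dm_exec_query_memory_grants
ORDER BY granted_memory_kb DESC;
```

A grant far below the request, or a big gap between granted and used memory, is exactly the kind of situation where spills appear.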
B) Cardinality Estimation
Batch Mode decisions depend on row count estimates.
If the optimizer estimates very small row counts, it may choose Row Mode because the overhead of Batch Mode is not considered worthwhile.
Compatibility level 150 also enables newer Cardinality Estimation behavior.
You can compare estimation behavior using:
-- Force legacy CE
OPTION (QUERYTRACEON 9481);
-- Force new CE
OPTION (QUERYTRACEON 2312);
Be cautious with trace flags — use only for testing or troubleshooting.
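If you would rather avoid trace flags entirely, the same comparison can be done with documented USE HINT options, which stay scoped to the single query:

```sql
-- Legacy CE via a documented hint (equivalent to TF 9481)
SELECT CustomerID, SUM(Amount)
FROM Sales
GROUP BY CustomerID
OPTION (USE HINT('FORCE_LEGACY_CARDINALITY_ESTIMATION'));
```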
C) Trace Flags & Engine Behavior
Certain trace flags and hints influence optimizer behavior and Batch Mode eligibility.
- QUERYTRACEON 9347 (disables the Batch Mode sort operator)
- QUERYTRACEON 9453 (disables Batch Mode processing, useful to get a Row Mode baseline for comparison)
- USE HINT('ALLOW_BATCH_MODE') (lets the optimizer consider Batch Mode on Rowstore even when the BATCH_MODE_ON_ROWSTORE database scoped configuration is OFF)
SELECT CustomerID, SUM(Amount)
FROM Sales
GROUP BY CustomerID
OPTION (USE HINT('ALLOW_BATCH_MODE'));
Even when Batch Mode kicks in, it does not guarantee better performance.
Always compare:
- Elapsed time
- CPU time
- Logical reads
When Batch Mode May Not Appear
- Low estimated row counts
- Small tables
- Insufficient memory grant
- Unsupported operators in plan
- Wrong compatibility level
Batch Mode is a cost-based decision — not a guaranteed feature.
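To see which mode you actually got, open the actual execution plan and check the operator's Actual Execution Mode property. As a rough shortcut, you can also search the plan cache for Batch Mode operators (a sketch; string-matching plan XML is crude, but fine for a quick look):

```sql
-- Cached plans that contain at least one Batch Mode operator
SELECT TOP (10)
       qs.execution_count,
       qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
WHERE CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE N'%ExecutionMode="Batch"%'
ORDER BY qs.total_worker_time DESC;
```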
Takeaways
- Batch Mode on Rowstore requires SQL Server 2019+ and compatibility level ≥ 150.
- Memory grants strongly influence performance.
- Cardinality estimation affects whether Batch Mode is chosen.
- Trace flags are powerful but must be used carefully.
- Always benchmark Row Mode vs Batch Mode.
Batch Mode is not magic. It’s math, memory, and cost-based optimization.
As usual, let's benchmark everything properly, so you can do it too!
Theory is good. Execution plans are better. But benchmarks are the truth.
Let’s compare Row Mode vs Batch Mode under controlled conditions.
Step 1 – Prepare a Large Dataset
-- Example large dataset generator
SELECT TOP (5000000)
ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS ID,
ABS(CHECKSUM(NEWID())) % 1000 AS CustomerID,
ABS(CHECKSUM(NEWID())) % 500 AS ProductID,
ABS(CHECKSUM(NEWID())) % 1000 AS Amount
INTO SalesBig
-- sys.all_columns is large enough that the cross join
-- comfortably produces 5,000,000 rows (sys.objects x sys.objects often is not)
FROM sys.all_columns a
CROSS JOIN sys.all_columns b;
Make sure the database compatibility level is at least 150.
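Before timing anything, a quick sanity check that the table really has the rows and the database is at the right level:

```sql
-- Expect 5,000,000
SELECT COUNT_BIG(*) AS RowsLoaded FROM SalesBig;
-- Expect 150 or higher
SELECT compatibility_level FROM sys.databases WHERE name = DB_NAME();
```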
Step 2 – Row Mode Benchmark
SET STATISTICS IO, TIME ON;
SELECT CustomerID, SUM(Amount)
FROM SalesBig
GROUP BY CustomerID;
Record:
- CPU time
- Elapsed time
- Logical reads
- Execution plan operator mode
Step 3 – Batch Mode Benchmark
SET STATISTICS IO, TIME ON;
SELECT CustomerID, SUM(Amount)
FROM SalesBig
GROUP BY CustomerID
OPTION (USE HINT('ALLOW_BATCH_MODE'));
Compare the metrics again.
On large aggregations, CPU reduction can be dramatic.
Step 4 – Clear Cache (Optional Controlled Test)
CHECKPOINT;
DBCC DROPCLEANBUFFERS;
DBCC FREEPROCCACHE;
⚠ Never run this on production.
Step 5 – Inspect Runtime Statistics via DMV
SELECT
qs.total_worker_time,
qs.total_elapsed_time,
qs.total_logical_reads,
qs.execution_count,
SUBSTRING(qt.text, qs.statement_start_offset/2 + 1,
(CASE
WHEN qs.statement_end_offset = -1
THEN DATALENGTH(qt.text)
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2 + 1) AS query_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
ORDER BY qs.total_worker_time DESC;
This gives you real aggregated performance data.
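If Query Store is enabled on the database, you can pull the same kind of aggregated numbers from there as well, with the advantage that they survive a plan cache flush (a sketch):

```sql
-- Aggregated runtime stats per query from Query Store
SELECT qt.query_sql_text,
       rs.count_executions,
       rs.avg_cpu_time,
       rs.avg_duration,
       rs.avg_logical_io_reads
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
    ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
    ON rs.plan_id = p.plan_id
ORDER BY rs.avg_cpu_time DESC;
```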
What You Should See
- Lower CPU usage in Batch Mode
- Same logical reads (usually)
- Better performance on large scans and aggregations
- Minimal benefit on small datasets
If row count is small, Batch Mode overhead may negate benefits.
Final Thought
Batch Mode is not about forcing features.
It’s about measuring, comparing, and understanding.
Performance tuning without benchmarking is guessing. Batch Mode without understanding memory and cardinality is gambling.
Seeeee you on Part 7!
Luca Biondi @ 2026
