Hi there,

#1 Data Platform Summit 2022 Session Recordings are now available on YouTube. Click Here.

Sessions include content from Microsoft Azure Data Teams, MVPs & Industry Experts. Make sure to watch the keynote by Bob Ward & Buck Woody.

#2 Last week, Part 1 of “Top 3 SQL Server Tuning Techniques” was delivered. More than 350 SQL folks joined. See the feedback here.

Part 2 is happening on March 23. Make sure you do not miss it. Block your seat today.

Session Title: Top 3 SQL Server Tuning Techniques – Part 2
Date: 23 March 2023
Time: 10:30 AM EST | 3:30 PM GMT | 9:00 PM IST

Register now.

#3 New SQL Server Tutorials:

New videos have been added to our Video Library.
SELECT Query But No Access Method
Why is the CPU Time more than the Elapsed Time?
How is the query cost derived?
and more…

The Azure Data learning calendar leading up to DPS 2023 will be announced soon. Stay tuned – more learning is coming your way.

With warm regards,
DPG Team

SQL Server Notes by AB | Note #1 (THREADPOOL wait type)
A thread is the smallest unit of execution: every SQL query needs at least one thread to run. Depending on the instance configuration and the eligibility of the query, SQL Server may execute it on multiple threads – this is called parallelism. If too many workloads are parallelized and they are long-running, subsequent incoming queries may not get even a single thread. In other words, they wait for a thread to be assigned, because SQL Server has a limited number of worker threads by default, based on the number of processors on the hardware. This wait is called THREADPOOL. It is not a good situation to be in, and it can easily go unnoticed.

If you are facing this wait type, one remedy is to reduce parallelism. Consider increasing the "cost threshold for parallelism" server configuration so that inexpensive workloads do not go parallel, or set MAXDOP to a reasonable number, say 4 – especially on hardware with many cores. On NUMA hardware, setting MAXDOP to the number of cores per NUMA node is a common starting point. Reducing parallelism leaves more worker threads available in the thread pool for incoming queries.
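As a rough sketch of the steps above: you can check whether THREADPOOL waits have occurred using sys.dm_os_wait_stats, and adjust the two server settings via sp_configure. The values 50 and 4 below are illustrative examples, not recommendations – test any change in your own environment first.

```sql
-- Check cumulative THREADPOOL waits since the last instance restart
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'THREADPOOL';

-- Raise the cost threshold so inexpensive queries stay serial
-- (both options require 'show advanced options' to be enabled)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 50;  -- example value
RECONFIGURE;

-- Cap MAXDOP; 4 is an example, or use the number of cores per NUMA node
EXEC sp_configure 'max degree of parallelism', 4;  -- example value
RECONFIGURE;
```

A non-zero waiting_tasks_count for THREADPOOL in the first query is the signal that queries have had to wait for a worker thread at some point since the last restart.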

Like the notes? Read more.

Subscribe to DataPlatformGeeks Newsletter | Access Past Newsletters