Tag Archives: SQL Server

Fourth day at #SQLPass

Started early today with a 07:00 breakfast session on “SANless clustering with FusionIO and SIOS”. An interesting session where I learned about a limitation in AlwaysOn that I didn’t know of: apparently only one thread per availability node is used for SQL replication between nodes in the cluster, limiting throughput to about 40 Mb/s. That could potentially become a problem in a larger environment. Think about SharePoint…

SIOS clustering seemed like a nice add-on to Windows clustering; combined with FusionIO it looks like a great solution that seems very simple to install and maintain. Need to check pricing on that.

Next was a wonderful session with Kimberly Tripp on statistics and skewed data. She has put together a fantastic collection of stored procedures to find and create statistics for skewed tables. I would really love to get home and test this on a customer table that shows some strange query performance from time to time. I bet it will solve a lot of problems. I think the scripts will be available on the http://www.sqlskills.com website shortly.

After lunch Rob Farley from Down Under hosted a session called “These aren’t the joins you’re looking for”, where he dived deep into the different forms of joins and how they act on the returned rows. I won’t even try to get into details, but he is a great presenter with a good deal of jokes and conversations with the audience. It was very entertaining.

The CAT team had put together a session comparing the performance of ROLAP vs. Tabular when running on Parallel Data Warehouse. This session turned out to be a bit different from what I had expected; it was actually the first session so far where I totally lost concentration and started thinking about other stuff. The topic was interesting even for me, who is not a BI developer, but the presentation skills were not that good and too many really weird questions kept popping up all the time. Too bad, since I wanted to learn more about PDW.

Last, Paul Randal was on stage to talk about waits, latches and spinlocks. This was a great level 400-500 session on deep internals. He is really awesome, and his presentation ran 25 minutes over. No one cared; it was so interesting to see how to pin down contention at this level. I think I have a couple of environments back home where I can actually make use of this new knowledge, can’t wait to get home…

The evening finished up in the NASCAR Hall of Fame with the Community Appreciation Party. Kind of a cool place, where you go on an interactive tour to prepare yourself and the car you build during the tour, to finally drive a NASCAR simulation. Given that we were like 3-4000 people I skipped that part and went for food and beverages.


Third day at #SQLPass, the real sessions begin #sqlblog

Finally the conference started for real. So many people all over the place, crowded sessions, and even the dining hall was almost full for lunch.

Today I attended four sessions. First was the keynote with Bill Graziano from PASS and Quentin Clark from Microsoft. SQL Server 2014 CTP2 was announced, and a brief walk-through of all the new features and functionality was very interesting. In-memory databases were demonstrated, and this looks cool: huge improvements will be available just by altering a table or procedure to take advantage of this functionality. No code rewriting is necessary.

Power BI Q&A was also a new feature that looks very promising, and the audience seemed to like it very much.

SQL backups to Azure storage also seemed like a nice feature, especially since it’s also available for 2005 and 2008 via a downloadable tool. There was also a new feature that would automatically trigger transaction log backups based on the amount of changed data in the tables.
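For reference, the native backup-to-URL syntax shown for 2014 looks roughly like this. A minimal sketch; the storage account, container, credential and database names below are placeholders, not from the demo:

```sql
-- Create a credential holding the Azure storage account name and access key
-- (names and key are hypothetical placeholders).
CREATE CREDENTIAL AzureBackupCred
WITH IDENTITY = 'mystorageaccount',
     SECRET = '<storage account access key>';

-- Back up straight to blob storage using that credential.
BACKUP DATABASE MyDatabase
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/MyDatabase.bak'
WITH CREDENTIAL = 'AzureBackupCred', COMPRESSION;
```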

The keynote is available on http://www.sqlpass.org/summit/2013/Live.aspx

After that there was a great session with Erin Stellato, “Making the Leap from Profiler to Extended Events”. Very good presentation, and it really made me understand that there is so much more you can do with Extended Events than with Profiler. I need to start using that when I get back home.

Then there was a session with Glenn Alan Berry, the “Professor of DMVs”, where he went through his pack of DMV scripts that can be used to troubleshoot SQL Server performance problems. I have used these DMV scripts before, but I have never really been able to utilize them to their full potential. Now I know how to use them better and will dig into them again. They can be downloaded here: http://sqlserverperformance.wordpress.com/tag/dmv-queries/

Then I attended a session about automating your SQL Server deployment with PowerShell. The tool is called SPADE and is available on http://www.codeplex.com. I have started to automate SQL Server installations myself, but my still limited knowledge of PowerShell prevented me from going all the way. This tool will save tons of time spent on installing and configuring SQL Server.

The last session was about troubleshooting clusters, where Allan Hirt gave a nice walk-through of some pitfalls in cluster installation and configuration and how to troubleshoot them.

Then we attended the Exhibitor Reception for some food and chat, and ended up at a pub for a couple of beers.


Make your SQL Server Apps go faster, day 2 pre-con at #SQLPass

My second pre-conference day at the PASS Summit was a session with Brent Ozar, Kendra Little and Jes Schultz Borland called “Make your SQL Server Apps go faster”.

Here are some of the things we learned today.

The key focus of the presentation was what can be done to make applications run faster from a DBA’s point of view: what can I as a DBA do without involving a developer or changing any of the application code?

First we need to know what our worst-performing queries are, and there are a number of ways to find that out. Activity Monitor is NOT the best tool for this, because it’s not telling the truth all the time.

sp_whoisactive is a free stored procedure that can be downloaded and installed from http://sqlblog.com/files/folders/beta/entry42453.aspx. Brent Ozar also has a quick guide here: http://www.brentozar.com/archive/2010/09/sql-server-dba-scripts-how-to-find-slow-sql-server-queries/.
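A typical call looks something like this. The parameter names below come from the sp_WhoIsActive documentation rather than from the session, so check the version you download:

```sql
-- Show active requests, including their plans, and flag the sessions
-- sitting at the head of a blocking chain.
EXEC sp_WhoIsActive
    @get_plans = 1,           -- include the query plan for each request
    @find_block_leaders = 1;  -- count sessions blocked by each session
```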

Kendra Little also gave a quick guide to Extended Events and how they can be used to give us information about our top queries; more about that in my next post, after I have attended the “Making the Leap from Profiler to Extended Events” session tomorrow.

We are also interested in what kinds of waits we have on our server; this, together with the information we get about the heaviest queries, will help us determine what tuning choices we have.

SQL Server has some DMVs that can be queried to find missing indexes, and there is also the Database Engine Tuning Advisor that is commonly used by developers to create indexes on tables. Neither of these is a very good tool: just implementing the recommendations from these tools might give you worse performance in the long run. Recommendations should really be seen as just that. It is vital to know how your tables are accessed and queried to be able to build the correct indexes.
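For reference, the missing-index DMVs can be queried along these lines; a sketch I use, not from the session, and the output should be treated as hints to evaluate, never as indexes to create blindly:

```sql
-- Rank missing-index suggestions by a rough impact estimate.
SELECT TOP 20
    d.statement AS table_name,
    d.equality_columns,
    d.inequality_columns,
    d.included_columns,
    gs.user_seeks,
    gs.avg_user_impact
FROM sys.dm_db_missing_index_group_stats gs
JOIN sys.dm_db_missing_index_groups g
    ON gs.group_handle = g.index_group_handle
JOIN sys.dm_db_missing_index_details d
    ON g.index_handle = d.index_handle
ORDER BY gs.user_seeks * gs.avg_total_user_cost * gs.avg_user_impact DESC;
```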

Correct indexing can not only solve direct IO problems but can in some cases also help fix implicit conversions.
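A quick illustration of the implicit-conversion problem, using a hypothetical Orders table where OrderCode is an indexed varchar(20) column:

```sql
-- The Unicode literal forces CONVERT_IMPLICIT on the column,
-- which turns an index seek into a scan.
SELECT * FROM Orders WHERE OrderCode = N'A-1001';

-- A literal of the matching type lets the optimizer seek the index.
SELECT * FROM Orders WHERE OrderCode = 'A-1001';
```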

sp_BlitzIndex is a great tool to figure out your index needs. It’s available here: http://brentozar.com/go/blitzindex. It will also help you find duplicate indexes.

It’s also important to know that some simple queries get a trivial plan and will not generate an index recommendation. If they run frequently it might be worthwhile to look at them anyway. Find them with sp_whoisactive.

Remember that in SQL 2012 index recommendations are flushed when an index rebuild takes place; this was not the case in earlier versions. Index defragmentation does not flush the recommendations, which can make it tricky to maintain indexes in SQL 2012.

Index maintenance should be carried out every 4-6 weeks, because usage patterns and data volumes change a lot over time.

CXPACKET is a very common wait stat in SQL Server, and it is often thought of as a problem. A common reason why it can become one is the “cost threshold for parallelism” value in the SQL Server configuration. The default has been 5 for a very long time; on today’s high-performance servers that is way too low, and a starting value of 50 is nowadays the recommendation. MAXDOP also plays a part here and should not be left at the default value of 0. Less than or equal to the number of cores on the server is a good starting point; there is usually no benefit above 16.
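As a sketch, both settings are changed with sp_configure. The values below are just the starting points mentioned above, not universal recommendations; test in your own environment:

```sql
-- Both options are "advanced", so expose them first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Raise the parallelism cost threshold from the old default of 5.
EXEC sp_configure 'cost threshold for parallelism', 50;

-- Cap parallelism at (for example) 8 instead of the default 0 (unlimited).
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
```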

Always try to avoid table variables unless you’re 100% sure they will never return more than one row. http://brentozar.com/go/serialudf describes this in more detail.
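The reason, as I understand it: a table variable carries no statistics, so the optimizer assumes very few rows, while a temp table gets real row estimates. A minimal illustration (table names are made up):

```sql
-- Table variable: no statistics; the optimizer assumes roughly one row,
-- which can produce bad plans when thousands of rows land in it.
DECLARE @ids TABLE (id int PRIMARY KEY);

-- Temp table: has statistics, so joins against it get realistic estimates.
CREATE TABLE #ids (id int PRIMARY KEY);
```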

These were just some takeaways from today’s pre-con. It will be interesting to get home and practice this on real data…



Planning my #SQLPass session schedule….

Started planning my session schedule for SQLPASS while I was waiting in line for immigration at O’Hare airport. It was a long line, so I had time to go through most of the details in the #Guidebook and pick everything I liked.

Now I have a schedule with 3, 4 or even more sessions I want to go to at any given time. How should I choose? As a #sqlfirsttimer I’m not sure what the best approach is. Should I go for the speakers I have read and listened to on webinars and YouTube, the ones I’m following on Twitter and who really seem to know their stuff? Or should I go for the ones I have never heard of and hope that they have something completely new to teach me?

There are so many interesting areas I would like to dig into. Should I choose one area all the way, or should I mix them as much as possible to get new input?

Big Data, SANless clustering, Extended Events, Power BI and Power Map, database corruption, AlwaysOn, PowerShell, PDW, 2014 In-Memory OLTP, clustered updatable columnstore indexes, indexes, Central Management Server, SQLCAT sessions, SQL Clinic, query optimising, etc., etc…

Not to mention all the other activities: First-Timers orientation, Welcome Reception, Exhibitor Reception, #SQLRun, sponsor breakfasts, keynotes, the Community Appreciation Party at the NASCAR Hall of Fame, the Birds of a Feather Luncheon, the SQL Server Clinic and SQLCAT, SQL Kilt Day and #SQLKaraoke.

I’m starting to believe I will not make it to all of these events, but I will sure try. I will probably end up buying the DVDs…

Monitor SQL Server blocking sessions in Nagios

Here is a script that uses FreeTDS to monitor SQL Server blocking sessions; it can be used by Nagios to trigger alarms.

#!/bin/bash
#
# Script is designed to be used by Nagios. It checks for blocking sessions in MSSQL.
#
# Requirements:
# FreeTDS 6.0+ (http://www.freetds.org/)
#
# Original check_mssql.sh written by Tom De Blende (tom.deblende@village.uunet.be) in 2003.
# Changed script to check for blocking sessions in a database by Staffan.olofsson@b3it.se in 2013.
#
# Script takes 4 parameters: 1. servername 2. username 3. password 4. version ("2000" for SQL 2000 and up)
# FreeTDS has to be configured with connection information for the server.
#
# Current threshold for blocking is 35000 ms. This can be changed in the SQL code below.

# You might want to change these values:
tsqlcmd=`which tsql`
catcmd=`which cat`
grepcmd=`which grep`
rmcmd=`which rm`
mktempcmd=`which mktemp`
wccmd=`which wc`
sedcmd=`which sed`
trcmd=`which tr`
uniqcmd=`which uniq`

hostname=$1
usr=$2
pswd=$3
srv=$4

if [ ! "$#" == "4" ]; then
  echo -e "\nYou did not supply enough arguments. \nUsage: $0 <host> <username> <password> <version> \n\n$0 checks for blocking sessions on a Microsoft SQL Server."
  exit 3
elif [ "$tsqlcmd" == "" ]; then
  echo -e "tsql not found! Please verify you have a working version of tsql (included in FreeTDS version 6.0+) and enter the full path in the script."
  exit 3
fi

# The DMVs used below require SQL Server 2005 or later, so only the "2000 and up" mode is supported.
if [ "$srv" != "2000" ]; then
  echo -e "$srv is not a supported MS SQL Server version!"
  exit 3
fi

# Creating the command file that contains the sql statement that has to be run on the SQL server.
tmpfile=`$mktempcmd /tmp/$hostname.XXXXXX`

echo -e "SELECT host_name,program_name,login_time,login_name,convert(nvarchar(4000), [text]),wait_duration_ms
FROM sys.dm_os_waiting_tasks w
INNER JOIN sys.dm_exec_sessions s
ON w.session_id = s.session_id
INNER JOIN sys.dm_exec_requests r
ON s.session_id = r.session_id
OUTER APPLY sys.dm_exec_sql_text (r.sql_handle) t
WHERE s.is_user_process = 1
AND wait_duration_ms > 35000\ngo" > $tmpfile

# Running tsql to get the results back.
resultfile=`$mktempcmd /tmp/$hostname.XXXXXX`
errorfile=`$mktempcmd /tmp/$hostname.XXXXXX`
$tsqlcmd -S $hostname -U $usr -P $pswd < $tmpfile 2>$errorfile > $resultfile

$grepcmd -q "Login failed for user" $errorfile
if [ "$?" == "0" ]; then
  $rmcmd -f $tmpfile $resultfile $errorfile
  echo "CRITICAL - Could not make connection to SQL server. Login failed."
  exit 2
fi

$grepcmd -q "There was a problem connecting to the server" $errorfile
if [ "$?" == "0" ]; then
  $rmcmd -f $tmpfile $resultfile $errorfile
  echo "CRITICAL - Could not make connection to SQL server. Incorrect server name or SQL service not running."
  exit 2
fi

resultfileln=`$catcmd $resultfile | $wccmd -l | $sedcmd 's/ //g'`
if [ "$resultfileln" == "2" ]; then
  $rmcmd -f $tmpfile $resultfile $errorfile
  echo "CRITICAL - Could not make connection to SQL server. No data received from host."
  exit 2
fi

# Counting the blocked sessions and building a summary of the hosts they come from.
nmbr=`$catcmd $resultfile | $grepcmd -v locale | $grepcmd -v charset | $grepcmd -v 1\> | $sedcmd '/^$/d' | $sedcmd 's/ //g' | $wccmd -l | $sedcmd 's/ //g'`
hosts=`$catcmd $resultfile | $grepcmd -v locale | $grepcmd -v charset | $grepcmd -v 1\> | $sedcmd '/^$/d' | $sedcmd 's/ //g' | $uniqcmd -c | $trcmd \\\n , | $sedcmd 's/,$/./g' | $sedcmd 's/,/, /g' | $sedcmd 's/ //g' | $trcmd \\\t " " | $sedcmd 's/ \./\./g' | $sedcmd 's/ ,/,/g'`

# Cleaning up.
$rmcmd -f $tmpfile $resultfile $errorfile

if [ "$nmbr" == "0" ]; then
  echo "OK - MS SQL Server has $nmbr blocking sessions."
  exit 0
else
  echo "CRITICAL - MS SQL Server has $nmbr blocking sessions: $hosts"
  exit 2
fi