Use SQL Server Central Management Server remotely through PowerShell

Ever wanted to run a scheduled PowerShell script that uses the Central Management Server to connect to all your SQL Servers?

After trying out a bunch of different solutions I found this piece of code on a Microsoft blog.

I have modified it and combined it with several other code pieces to get the result I wanted.
The code runs against the Central Management Server you have set up, and it will work from any server
that has the following installed:

* Shared Management Objects
* SQLSysClrTypes
* PowershellTools

I had some initial problems creating the output file because the SQLPS module doesn't allow UNC paths as output,
so I had to use the New-PSDrive command to map a drive letter.

To use the script, change the paths and file names to your liking,
change the CMS server name, and change the name of your group in CMS: .ServerGroups["All SQL servers"].
It is possible to add more levels in the folder structure if you have organized your servers into prod, test and dev or similar,
like this: .ServerGroups["All SQL servers"].ServerGroups["Test"]

In this code I have just included code to get the size of all databases on all servers in my CMS.
You could of course change it to execute commands or pull any other information from your servers.
I have added some PowerShell formatting to show the size in MB with no decimals.

$date = Get-Date -Format "yyMMdd"
#Need to map a drive letter since UNC paths are not supported with SQLPS
New-PSDrive -Name "G" -PSProvider FileSystem -Root "\\server\path" -Persist
$file = "G:\SQL_dev_stats_$date.csv"

function GetSQLStats {
    #Load SMO assemblies
    $CentralManagementServer = "CMS-server\instancename"
    $MS = 'Microsoft.SQLServer'
    @('.SMO', '.Management.RegisteredServers', '.ConnectionInfo') |
        ForEach-Object { if ([System.Reflection.Assembly]::LoadWithPartialName("$MS$_") -eq $null) { "missing SMO component $MS$_" } }

    $connectionString = "Data Source=$CentralManagementServer;Initial Catalog=master;Integrated Security=SSPI;"
    $sqlConnection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
    $conn = New-Object Microsoft.SqlServer.Management.Common.ServerConnection($sqlConnection)
    $CentralManagementServerStore = New-Object Microsoft.SqlServer.Management.RegisteredServers.RegisteredServersStore($conn)

    $My = "$MS.Management.Smo"
    $CentralManagementServerStore.ServerGroups["DatabaseEngineServerGroup"].ServerGroups["All SQL servers"].GetDescendantRegisteredServers() |
        ForEach-Object { New-Object ("$My.Server") $_.ServerName } | # create an SMO server object
        Where-Object { $_.ServerType -ne $null } |                   # did we actually reach the server?
        ForEach-Object { $_.Databases } |                            # databases on every server successfully reached
        # Put your PowerShell SQL statements here
        Select-Object @{Name = "Server"; Expression = { $_.Parent }}, Name,
                      @{Name = "Size (MB)"; Expression = { ("{0:N0}" -f $_.Size).Replace(",", "") }}
        # End of your code here
}

########## Main Program ###########
GetSQLStats | Export-Csv $file -NoTypeInformation -Delimiter ";"

Virtualizing SQL server on Hyper-V

Here is a very good whitepaper about virtualizing SQL Server on Hyper-V. It's a Microsoft-written paper, but I think it gives very good insight into what to consider when virtualizing SQL Server.

Nutanix, is it a gamechanger?

Ever heard of Nutanix?

I think it's time now to tell the story about something that might change the data center as we know it.

Traditional infrastructure for virtualization includes servers, network and storage, normally operated by different departments within the IT organization. This often leads to problems when there is a performance issue: everybody starts blaming each other.

This is also in general a very expensive solution, with servers, switches and SAN equipment. All of this is often duplicated to make it highly available, with redundant extra hardware often doing nothing most of the time.

With VDI becoming more and more common, even in smaller organizations, it has become a challenge to provide adequate performance for that environment without being affected by other storage activity.

One way could be to separate the VDI environment onto its own storage, preferably something with low latency, able to provide high IOPS during boot storms etc. Here flash-based arrays like Nimbus, Pure Storage and Violin, or hybrid arrays like Nimble, could play a part. Even the big storage providers like NetApp, EMC, HP and IBM now have both hybrid and pure flash-based arrays to compete with the new kids on the block.
But even with these solutions we still have a solution divided into three different areas. Server, network and storage are just faster, but they will still need to be maintained by different groups of people. And they will still need to communicate over a network.

We are moving more and more towards the software-defined data center, with public, hybrid and private cloud becoming the standard for the new data center. We need to adapt to this and get rid of all the old legacy systems to be able to smoothly migrate resources between in-house and hosted solutions.

With IT becoming more centralized and outsourcing into cloud solutions becoming some kind of best practice, IT technicians must strive to make the infrastructure as easy to maintain as possible, otherwise it will be outsourced to someone doing it faster and cheaper.

Nutanix, delivering a kind of appliance box with CPU and storage locally in the box, is definitely on the right track here. It delivers a 2U box with four servers, each with 2 CPUs, up to 512 GB RAM, and SSD and SATA drives with a total of 8 TB usable storage with high IOPS, plus a distributed file system where everything scales linearly when adding more 2U boxes. That makes it a very easy solution to manage.

Inline dedupe, compression, remote replication to a DR-site box, and support for running VMware, Hyper-V or KVM as hypervisor make it a very attractive solution for running VMs.

There are a number of different hardware specs to choose from: high-density boxes with four nodes, low-latency boxes with two nodes and added storage capacity, or even boxes with Nvidia GPUs to provide VDI solutions capable of delivering 3D and CAD performance.

If you are already running a VMware shop, just add the box to your VMware data center or DRS cluster and live migrate machines over to the new hosts.

There is so much new about this that I must urge you all to look deeper into it. Below are links to some interesting sites and some whitepapers.

Update #1:

vSAN is a new feature in vSphere 5.5 that has some of the features Nutanix offers. This article explains Nutanix in more detail and also compares it to vSAN.

SQL Server HA and DR solutions?

I have been trying to figure out the best way to protect SQL Server for HA and DR for some time now.

My customer is currently running SQL Server 2008 R2 on a Veritas cluster with Veritas Volume Manager as block-level replication between the primary and DR site. The cluster has 4 physical nodes on each site with multiple instances for different systems. Two of the hosts on each site have FusionIO flash cards for tempdb storage. All other data is stored in a NetApp SAN connected through dual 8 Gb FC HBAs. Symantec NetBackup 7.6 is used for backups; it uses the latest snapshot technology in NetApp and VMware to make backups of databases in just seconds. Restores are also done in just minutes.

An upgrade to SQL 2012 is in the pipeline and we are also looking into the possibility of making these servers virtual in VMware. Load on the physical boxes is not that high and the performance of the VMware environment would be sufficient. VMware is running on HP blades; the blade chassis is connected to NetApp through dual (or more) 10 Gb Ethernet. Datastores in VMware use the NFS protocol. Veritas is considered expensive and the goal is to get rid of it.

IOmeter and CrystalDiskMark give us almost the same disk performance between physical and virtual servers. We are looking into the possibility of using InfinIO and/or IOTurbine in the VMware hosts to get even better performance.

For the last 8 months we have been testing AlwaysOn to see if that is the solution for a new environment. AlwaysOn is very nice when used with non-shared storage, and both synchronous and asynchronous replication between hosts and sites seem to be working very well. We set up a cluster with 3 plus 3 hosts on the primary and DR site and created AGs for SharePoint environments as well as other databases, both big and small. We set up AGs local to only one site and AGs spanning both sites, and we used a file share witness that could also be moved to the DR site.

All worked well until we performed a planned DR test where we failed over all AGs to the DR site and then pulled the plug between the sites. The primary site cluster stayed up but the DR site stopped because of lost quorum. My mistake of course: I had to manually change the quorum configuration after failing over. I have fixed that now with a PowerShell script that fails over all AGs, resumes replication and changes the quorum. So far so good.
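A minimal sketch of what such a failover script can look like, assuming the SQLPS and FailoverClusters modules are available. The instance, AG, cluster and witness names below are made-up examples:

```powershell
Import-Module SQLPS -DisableNameChecking
Import-Module FailoverClusters

$drInstancePath = "SQLSERVER:\SQL\DRSQL01\INST1"   # replica on the DR site

foreach ($ag in @("AG_Sharepoint", "AG_Other")) {
    # Fail the availability group over to the DR replica
    Switch-SqlAvailabilityGroup -Path "$drInstancePath\AvailabilityGroups\$ag"

    # Resume data movement for all databases in the group
    Get-ChildItem "$drInstancePath\AvailabilityGroups\$ag\AvailabilityDatabases" |
        Resume-SqlAvailabilityDatabase
}

# Re-point the quorum so the DR site survives losing the link to the primary site
Set-ClusterQuorum -Cluster "SQLCLUSTER" -NodeAndFileShareMajority "\\drserver\witness"
```

Note that for an unplanned failover to an asynchronous replica, Switch-SqlAvailabilityGroup also needs the -AllowDataLoss switch.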

But there is still a problem. As I said, one of the AGs I have created is local to the servers on the DR site only; it's a database used only by an application installed on the DR site, so we don't need it on the primary site. The problem is that the cluster on the DR site goes down when we lose connectivity between the sites. Microsoft clustering is designed this way to avoid a split-brain situation, but it's not what I want.

One way to solve this would be to have separate AlwaysOn clusters, but that's expensive and it would be better to utilize the same servers for more than one thing.

Another way would be to have two FCI clusters and put AlwaysOn on top of that, but that would require shared storage for each FCI on each site, and in VMware this requires FC or iSCSI. If I understand it correctly it could work in Windows 2012 and VMware 5.5. But we are not there yet.

On top of this there is the problem with logins and jobs that need to be synced between the servers. Having six servers and 4 AGs makes this a bit complicated.

Jonathan Kehayias at SQLskills has written a small application to help out with this, but it’s still not enough. Check it out here

So, I have started to look into different solutions. I have found the following alternatives that we should consider:

  • VMware native HA with SRM for DR protection
  • SIOS
  • DH2i
  • Veritas

What I'm considering right now is to just put the SQL servers in VMware as-is and use SRM for DR protection; it would give basically the same protection level as the Veritas solution: block-level snapshot replication in the NetApp instead of block-level volume replication with Veritas. Failover between local hosts on the primary site would be slightly slower because it would require the virtual machine to be restarted from scratch on a new host, compared to Veritas where only SQL Server has to start up. But Veritas has to take care of shared storage and unmounting and mounting disk groups, and that can also take some time. Failing over to the DR site is manual in Veritas and would be the same in VMware; data loss would possibly be the same if the line between the sites is not fast enough for high peak load.

Using some other third-party clustering like SIOS or DH2i would maybe be better than Veritas, less complicated, possibly cheaper but something new that needs to be implemented.

With the protection that VMware, NetApp and Symantec provide, is it really necessary to add AlwaysOn protection as well if we don't need readable secondaries or backup offloading?

What's your opinion?

SQL Pass Rally 2013


Rally is back in Stockholm; apparently this year's event was the largest Rally event ever outside the US, even bigger than last time. That is nice, and even though I only participated part-time this year I just love it.

This blog has been written during the event but was finished almost a week later. Timeline within the blog is therefore a bit confusing. But I prefer to leave it that way so that you can read it as I experienced it.

This conference is obviously small compared to the SQL Pass Summit in Charlotte some weeks ago, but the speakers are as good, the food seems even better and Tuesday's entertainment sounds promising.


“Setup best practice of the year” – Brent Ozar


I attended Brent Ozar's pre-con the first day. It's amazing that there is always something new to pick up. Brent has the same kind of background as me: he learned SAN and virtualization because the admins in those areas didn't understand SQL-related problems. SQL Server is, after all, the most demanding server in most SAN or virtualization environments.

I won't go into detail about all that was discussed during the day; I'm just going to comment on some of the notes I took during the session. Brent's site is a great source to dig deeper into all of this. There are tons of free materials to explore.

Try out sp_AskBrent with @ExpertMode = 1 to find out what your SQL Server is waiting on.

Try sp_Blitz to get a start on finding trouble in your SQL Server.

Combine these with Adam Machanic's sp_WhoIsActive, using the @get_plans = 1 parameter, and you have a wonderful combination of tools to troubleshoot your environment.
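As a quick sketch of how these procedures are called (the instance name is a made-up example, and the procedures must of course be installed on the server first):

```powershell
Import-Module SQLPS -DisableNameChecking

# What is the server waiting on right now?
Invoke-Sqlcmd -ServerInstance "MyServer\MyInstance" -Query "EXEC sp_AskBrent @ExpertMode = 1"

# A general health check of the instance
Invoke-Sqlcmd -ServerInstance "MyServer\MyInstance" -Query "EXEC sp_Blitz"

# Who is running what right now, including the actual query plans?
Invoke-Sqlcmd -ServerInstance "MyServer\MyInstance" -Query "EXEC sp_WhoIsActive @get_plans = 1"
```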

Look also for the triage_waitstats script to look deeper into waits.

The best advice to boost the performance of your SQL Server is to buy memory instead of upgrading the SAN. RAM is so cheap these days that getting enough RAM to hold your complete DB in memory is the best way to get rid of storage IO.

To define your hardware for HA and DR the best way, you need to define RPO (Recovery Point Objective) and RTO (Recovery Time Objective). In other words: how much data can you lose, and how long can your environment be down?
All systems need to be placed in the following matrix.

For both HA and DR, place each system in one of these tiers:

* none to 1 sec
* 1 min
* 1 hour
* 1 day

Depending on your choices above, you then have several options to achieve that RPO and RTO: AlwaysOn, clustering, mirroring and replication in various combinations, together with underlying storage replication and such.
AlwaysOn is not always the best choice; it is still a product that requires a lot of hands-on work, and it's not as easy as just running a wizard. You might end up with a less reliable environment if you don't do it right.

Run more virtual hosts instead of instances. Combining several instances on one host is often more difficult than running multiple VMs instead. And if you are running Enterprise edition, licensing it at the hypervisor level means you can run an unlimited number of instances on that host. That might be a solution for your environment.

Always start all VMs with 2 CPUs and increase when you can see waits on CPU; giving a VM more CPUs can actually make it slower.

I'm a big fan of PCI flash cards, and the Intel 910 seems to be a bargain, I need to check it out. It comes in 600 and 800 GB sizes and delivers up to 2 GB/s.

PCI cards with caching software are maybe a good idea, but products like OCZ ZD-XL or FusionIO with IOTurbine add an extra layer of problems with bugs etc. I was interested in these at first, but I think for SQL Server it is better to avoid them.

"Putting SSD in a SAN is like flushing money down the toilet", according to Brent Ozar. SSD is better local to the SQL Server; the fabric will still be the biggest bottleneck.

A normal core these days is capable of receiving about 200 MB/s. To saturate all cores in a 2 socket server with 4 cores each the SAN needs to deliver 1600 MB/s.

Imagine the SAN needed to saturate 2 socket, 10 cores….

VMware 5.5 with local SSD storage is ideal for SQL Server. It can be very good for Standard edition with its limited memory, since the cache can make memory appear bigger with this solution. Less SAN IO.

Infinio also released a product last week that uses RAM in the VMware hosts to cache IO. All hosts form a pool of RAM that is shared to get a better cache hit rate. I would like to try that out in the future.

You can also check out for more info on this subject.

To check if there is enough RAM in your server, run perfmon. Target server memory should be the same as max server memory; otherwise we have memory pressure.


“Getting the most from your SAN – file and filegroup layout” – Stephen Archbold


User databases actually have the same hotspot problem as tempdb; adding extra files to a filegroup gives an immediate performance increase of up to 15%.

Check that read-ahead reaches 512 KB per read: test with an aggregate select on a fact table and watch Avg. Disk Bytes/Transfer in perfmon.
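Adding files to a filegroup is a plain ALTER DATABASE statement. A sketch with made-up database, file and path names (the sizes are examples; adjust for your environment):

```powershell
Import-Module SQLPS -DisableNameChecking

$sql = @"
ALTER DATABASE MyDb
ADD FILE
    (NAME = MyDb_data2, FILENAME = 'D:\Data\MyDb_data2.ndf', SIZE = 4096MB, FILEGROWTH = 1024MB),
    (NAME = MyDb_data3, FILENAME = 'E:\Data\MyDb_data3.ndf', SIZE = 4096MB, FILEGROWTH = 1024MB)
TO FILEGROUP [PRIMARY];
"@

Invoke-Sqlcmd -ServerInstance "MyServer\MyInstance" -Query $sql
```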

The following startup parameters for SQL Server can be used to get better IO performance; investigate and make sure you understand how they should be used and for which environments. Google for the SQL Server Data Warehouse Fast Track papers from Microsoft and read it all.


“Use your brain to beat SQL Server” – Adam Machanic and Thomas Kejser


There was a Sherlock Holmes kind of feeling in this session. Thomas and Adam tried to teach us tricks we can use to manipulate how the optimizer creates query plans, especially when the query plans are not looking the way we want and we can't fix it with normal solutions. I will not even try to recap what was said during the session; I was just amazed. I'm looking forward to the presentation material when it becomes available.

“Index Impact Workload Analysis” – Klaus Aschenbrenner


Klaus Aschenbrenner had an interesting session about the new SQL 2012 feature named Distributed Replay. This tool can be used to record a trace of all activity in a system for a specified time. The trace can then be moved to another system where it can be re-run. This gives excellent opportunities to test out indexes or other changes to a system without affecting the production environment. Load testing can also be done by utilizing up to 16 clients to run the trace.
I have only one problem with this, and that is that the prod and test environments need to be very similar so that we don't get different query plans and results just because of hardware differences.

In the evening on Tuesday there was a dinner with Brynolf and Ljung entertaining; they are two Swedish magicians who gave us an amazing show of magic and illusions.



“Data Warehouse in the Cloud – Marketing or Reality?” – Alexei Khalyako


Alexei tried to convince us that we can actually run DWs in Azure. He has a point, and for some customers this is probably a viable solution, but for the majority of customers I still think the performance is not enough. Give it a year and it will be possible.

“Automating Data Warehouse Patterns Through Metadata” – Davide Mauri


Davide is a great speaker and has a way of getting your attention that only Brent Ozar can beat. In this presentation he discussed different DW load patterns and how we can solve them, and in the end he gave us insight into how we can automate much of the "monkey work" in DW development with BIML (Business Intelligence Markup Language).

“HA and DR in the context of the SLA” – Tobiasz Koprowski


This was a nice recap of what we actually have SLAs for. It's not always as easy as just saying that a DB has 99.999% uptime; all components a system relies on must be added to the equation to get the real uptime. This Polish guy did a great job describing the consequences.

“Does Optimistic Concurrency Resolve Your Blocking Problems?” – Margarita Naumova


Maggie Naumova walked us through the problems with locking and how we can solve some of them with row-level versioning. I have personally used this on several systems to solve locking problems, with good results.
She finished off by explaining how most of the locking problems we have today will be solved by SQL 2014.

In summary, I had some very nice days at Arlanda Airport where the conference was held. I met old friends and also made some new ones. I learned a lot, especially from Brent, who made me think twice about some coming purchases.

I’m sorry I could not attend SQL Rally in Amsterdam also. That would have been nice.

SQLPassTV session recordings

During SQLPass some of the sessions were recorded, and they have now been made available on YouTube. Please follow the links below to watch them.

KEYNOTE: Microsoft’s Data Platform (Quentin Clark) …

KEYNOTE: Hekaton: Why, What, and How (David DeWitt) …

SESSION: Professor DMV: How to Use DMVs to Diagnose Performance Issues, with Glenn Alan Berry's performance DMVs …

SESSION: Automate Your ETL Infrastructure with SSIS and PowerShell (Allen White ) …

SESSION: Building Your T-SQL Tool Kit: Window Function Fundamentals …

SESSION: Skewed Data, Poor Cardinality Estimates, and Plans Gone Bad (Kimberly Tripp) …

SESSION: 10 Query Tuning Techniques Every SQL Programmer Should Know …

SESSION: Troubleshooting Windows Clusters (Allan Hirt) …

SESSION: SQL Server Index Internals: A Deep Dive (Tim Chapman) …

SESSION: Index Fragmentation: Internals, Analysis, and Solutions (Paul Randal) …

SESSION: Parallel Query Execution (Paul White) …

Last day at #SQLPass summit 2013

It's been a very long week but now it comes to an end. I'm already looking forward to next year in Seattle and I have not even left Charlotte yet.

I'm getting a bit tired; all this new knowledge every day, and then all the social activities at night, is beginning to take its toll.

But this last day at SQLPass was also a great one. I started out with a session with Tim Chapman called "SQL Server Index Internals: A Deep Dive"; that was almost a bit too much at 8.00 in the morning, especially after last night's party at the NASCAR Hall of Fame. But Tim has great presentation skills and he kept us all awake with great demos.

Then I went to a session about Central Management Server and policy-based management with Ryan Adams. I have never really gotten the hang of policy-based management, but Ryan sorted it out just fine and I now think it's time to start using this feature to get better control that all servers adhere to a given standard.

Then I went to a great session with David Klee, "Squeezing Top Performance from Your Virtualized SQL Server", and got out with some really useful tips on how to configure VMware, Windows and SQL Server to get top-notch performance. A lot of this can be found on his website.

Then for lunch there was an event called "Birds of a Feather luncheon", where each table has a sign describing the topic to be discussed. I went for a table called AlwaysOn and got to talk to some great guys about the difficulties of clustering and AlwaysOn.

Then I continued on that track after lunch with a session called "CAT: AlwaysOn Customer Panel – Lessons Learned & Best Practices". Four guys from different companies described their environments and the problems they had encountered while setting up various HA & DR solutions. Very interesting.

The last session of this year's summit was Paul White describing parallel query execution in detail. I didn't know there was something called a parallel deadlock, but Paul managed to create such a scenario and then describe why it happened and how we may avoid it. He also went into detail about a lot of other parallel execution problems one might run into. Paul is also a very skilled presenter and managed to keep us awake even though it was late on Friday afternoon.

To summarize, this has been a fantastic week; there have been so many interesting sessions and so many magnificent speakers that it is hard to imagine where one could get this elsewhere.

I'm so looking forward to next year's SQL Pass summit in Seattle, see you all there.



Fourth day at #SQLPass

Started early today with a breakfast session at 07.00 on "SANless clustering with FusionIO and SIOS". An interesting session where I learned about a limitation in AlwaysOn that I didn't know of: apparently only one thread per availability node is used for SQL replication between nodes in the cluster, limiting throughput to about 40 Mb/s. That could potentially become a problem in a larger environment. Think about SharePoint…

SIOS clustering seemed like a nice add-on feature to Windows clustering; combined with FusionIO it looks like a great solution which seems very simple to install and maintain. I need to check the pricing on that.

Next was a wonderful session with Kimberly Tripp on statistics and skewed data. She has put together a fantastic collection of stored procedures to find and create statistics for skewed tables. I would really love to come home and test this on a customer table which has some strange query performance from time to time. I bet it will solve a lot of problems. I think they will be available on her website shortly.

After lunch Rob Farley from Down Under hosted a session called "These aren't the joins you're looking for", where he deep-dived into the different forms of joins and what kind of effects they have on returned rows. I won't even try to get into details, but he is a great presenter with a good deal of jokes and conversations with the audience. It was very entertaining.

The CAT team had put together a session where they compared performance of ROLAP vs. Tabular when running on Parallel Data Warehouse. This session got a bit outside of what I had thought it would be about; it was actually the first session so far where I totally lost concentration and started thinking about other stuff. The topic was interesting even for me, who is not a BI developer, but the presentation skills were not that good and too many really weird questions kept popping up all the time. Too bad, I wanted to learn more about PDW.

Last, Paul Randal was on stage to talk about waits, latches and spinlocks. This was a great level 400-500 session on deep internal stuff. He is really awesome, and his presentation ran 25 minutes over. No one cared; it was so interesting to see how to pin down contention at this level. I think I have a couple of environments back home where I can actually make use of this new knowledge, I can't wait to get home…

The evening finished up in the NASCAR Hall of Fame with the community appreciation party. A kind of cool place where you take an interactive tour to prepare yourself and the car you build during the tour, to finally drive a NASCAR simulation. Given that we were like 3-4000 people I skipped that part and went for food and beverages.


Third day at #SQLPass, the real sessions begin #sqlblog

Finally the conference started for real. So many people all over the place; it was crowded in all sessions and even the dining hall was almost full for lunch.

Today I attended four sessions. First was the keynote with Bill Graziano from Pass and Quentin Clark from Microsoft. SQL 2014 CTP 2 was announced, and the brief walkthrough of all new features and functionality was very interesting. In-memory databases were demonstrated and this looks cool. Huge improvements will be available by just altering a table or procedure to take advantage of this functionality. No code rewriting is necessary.

Power BI Q&A was also a new feature that looks very promising, and the audience seemed to like it very much.

SQL backups to Azure storage also seemed like a nice feature, especially since it's available for 2005 and 2008 as well via a downloadable tool. There was also a new feature that will automatically trigger transaction log backups based on the amount of changed data in the tables.

The keynote is available on

After that there was a great session with Erin Stellato about "Taking the Leap from Profiler to Extended Events". A very good presentation, and it really made me understand that there is so much more you can do with Extended Events than with Profiler. I need to start using that when I get back home again.

Then there was a session with Glenn Alan Berry, the "Professor of DMVs", where he went through his pack of DMV scripts that can be used to troubleshoot SQL Server performance problems. I have used these DMV scripts before, but I have never really been able to utilize them to their full potential. Now I know how to use them better and will dig into them again. They can be downloaded here:

Then I attended a session about automating your SQL Server deployment with PowerShell. The tool is called SPADE. I have started to automate SQL Server installations myself, but my still limited knowledge of PowerShell prevented me from going all the way. This tool will save tons of time spent on installing and configuring SQL Server.

The last session was about troubleshooting clusters; Allan Hirt gave a nice walkthrough of some pitfalls in cluster installation and configuration and how to troubleshoot them.

Then we attended the exhibition reception for some food and chat, and ended up in a pub for a couple of beers.


Make your SQL Server Apps go faster, day 2 pre-con at #SQLPass

Second day pre-conference at SQL Pass Summit for me was a session with Brent Ozar, Kendra Little and Jes Schultz Borland called “Make your SQL Server Apps go faster”

Here is some of the things that we learned today.

Key focus in the presentation was to see what can be done to make applications run faster from a DBA point of view: what can I as a DBA do without involving a developer or changing any of the application code?

First we need to know which our worst-performing queries are; there are a number of ways to find that out. Activity Monitor is NOT the best tool for this, because it's not telling the truth all the time.

sp_WhoIsActive is a free stored procedure that can be downloaded and installed. It can be downloaded from here. Brent Ozar also has a quick guide here.

Kendra Little also gave a quick guide to Extended Events and how they can be used to give us information about our top queries; more about that in my next post, after I have attended "Making the Leap from Profiler to Extended Events" tomorrow.

We are also interested in what kind of waits we have on our server; this, together with the information we get about the heaviest queries, will help us determine what choices we have for tuning.

SQL Server has some DMVs that can be queried to find missing indexes, and there is also the Database Tuning Advisor that is commonly used by developers to create indexes on tables. Neither of these are very good tools; just implementing recommendations from these tools might give you worse performance in the long run. Recommendations should really be seen as just that. It is vital to know how your tables are accessed and queried to be able to build the correct indexes.

Correct indexing can not only solve direct IO problems but can in some cases also help to fix implicit conversions.

sp_BlitzIndex is a great tool to figure out your index needs. It's available here. It will also help you find your duplicates.

It's also important to know that some simple queries are executed as trivial plans and will not generate an index recommendation. If they are run frequently it might be worthwhile to look at them anyway. Find them with sp_WhoIsActive.

Remember that in SQL 2012 index recommendations are flushed when an index rebuild takes place; this was not the case in earlier versions. Index defragmentation does not flush the recommendations. This can make it tricky to maintain indexes in SQL 2012.

Index maintenance should be carried out every 4-6 weeks, because usage patterns and data volumes change a lot over time.

CXPACKET is a wait stat that is very common in SQL Server, and it is often thought of as a problem. A common reason why it can become a problem is the "cost threshold for parallelism" value in the SQL Server configuration. The default value has been 5 for a very long time; on today's high-performance servers that is way too low, and a starting value of 50 is nowadays the recommendation. MAXDOP also plays a part here, and it should not be left at the default value of 0. Less than or equal to the number of cores on the server is a good starting point; there is usually no benefit above 16.
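Both settings live in sp_configure. A sketch with example values (50 and a MAXDOP of 8; pick values for your own hardware and test first, and note the instance name is made up):

```powershell
Import-Module SQLPS -DisableNameChecking

$sql = @"
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 50;
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
"@

Invoke-Sqlcmd -ServerInstance "MyServer\MyInstance" -Query $sql
```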

Always try to avoid table variables unless you're 100% sure they will never return more than one row. This is described in more detail here.

These were just some takeaways from today's pre-con. It will be interesting to get home and practice this on real data…