
Use SQL Server Central Management Server remotely through PowerShell

Ever wanted to run a scheduled PowerShell script that uses the Central Management Server to connect to all your SQL Servers?

After trying out a bunch of different solutions I found this piece of code on a Microsoft blog.

I have modified it and combined it with several other code pieces to get the result I wanted.
This code runs against the Central Management Server you have set up, and it will work from any other server
which has the following installed on it:

* Shared Management Objects
* SQL Server System CLR Types (SQLSysClrTypes)
* PowerShell Extensions (PowerShellTools)

All three are available in the SQL Server Feature Pack downloads.

I had some initial problems creating the output file because the SQLPS module doesn't allow the use of UNC paths as output,
therefore I had to use the New-PSDrive command to map a drive letter.

To use the script, change the paths and file names to your liking,
change the CMS server name, and change the name of your group in CMS in .ServerGroups["All SQL servers"].
Here it is possible to add more levels in the folder structure if you have organized your servers in prod, test and dev or similar,
like this: .ServerGroups["All SQL servers"].ServerGroups["Test"]

In this code I have only included code to get the size of all databases on all servers in my CMS server,
but you could of course change that code to execute things or get any other information out of your servers.
I have added some PowerShell formatting to get the size in MB with no decimals.

$date = Get-Date -Format "yyMMdd"
# Need to map a drive letter since UNC paths are not supported with SQLPS
New-PSDrive -Name "G" -PSProvider FileSystem -Root "\\server\path" -Persist
$file = "G:\SQL_dev_stats_$date.csv"

function GetSQLStats {
# Load the SMO assemblies; warn if a component is missing
$CentralManagementServer = "CMS-server\instancename"
$MS = 'Microsoft.SqlServer'
@('.SMO', '.Management.RegisteredServers', '.ConnectionInfo') |
    Foreach-Object { if ([System.Reflection.Assembly]::LoadWithPartialName("$MS$_") -eq $null) { Write-Warning "missing SMO component $MS$_" } }

$connectionString = "Data Source=$CentralManagementServer;Initial Catalog=master;Integrated Security=SSPI;"
$sqlConnection = new-object System.Data.SqlClient.SqlConnection($connectionString)
$conn = new-object Microsoft.SqlServer.Management.Common.ServerConnection($sqlConnection)
$CentralManagementServerStore = new-object Microsoft.SqlServer.Management.RegisteredServers.RegisteredServersStore($conn)

$My="$ms.Management.Smo" #
$CentralManagementServerStore.ServerGroups[ "DatabaseEngineServerGroup" ].ServerGroups[ "All SQL servers" ].GetDescendantRegisteredServers() |
    Foreach-object {new-object ("$My.Server") $_.ServerName } | # create an SMO server object
       Where-Object {$_.ServerType -ne $null} | # did you positively get the server?
          Foreach-object {$_.Databases } | #logins for every server successfully reached

          # Put your Powershell SQL statements here
              Select-object @{Name="Server"; Expression={$_.parent}}, Name, @{Label="Size (MB)";Expression={"{0:N0}" -f $_.Size | foreach {$_.replace(",","")}} } |
          # End of your code here
              format-table
}
########## Main Program ###########
GetSQLStats | Export-Csv -Path $file -NoTypeInformation -Delimiter ";"
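
Since the pipeline hands you a live SMO server object for every registered server, the payload step can be swapped for anything SMO exposes. As a minimal sketch under the same assumptions (it reuses $My and $CentralManagementServerStore from the script above), this variant would collect version and edition per server instead of database sizes:

$CentralManagementServerStore.ServerGroups["DatabaseEngineServerGroup"].ServerGroups["All SQL servers"].GetDescendantRegisteredServers() |
    Foreach-Object { new-object ("$My.Server") $_.ServerName } | # same SMO server objects as above
        Where-Object { $_.ServerType -ne $null } |
            Select-Object Name, Version, Edition, ProductLevel # standard SMO Server properties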

Virtualizing SQL Server on Hyper-V

Here is a very good whitepaper about virtualizing SQL Server on Hyper-V. It's a Microsoft-written paper, but I think it gives very good insight into what to consider when virtualizing SQL Server.

http://download.microsoft.com/download/6/1/D/61DDE9B6-AB46-48CA-8380-D7714C9CB1AB/Best_Practices_for_Virtualizing_and_Managing_SQL_Server_2012.pdf

Nutanix, is it a gamechanger ?

Ever heard of Nutanix?

I think it's time now to tell the story about something that might change the data center as we know it.

Traditional infrastructure for virtualization includes servers, network and storage, normally operated by different departments within the IT organization. This often leads to problems when there is a performance issue: everybody starts blaming each other.

This is also in general a very expensive solution, with servers, switches and SAN equipment. All of this is often duplicated with redundant extra hardware to make it highly available, hardware that is often doing nothing most of the time.

With VDI becoming more and more common, even in smaller organizations, it has become a challenge to provide adequate performance for that environment without being affected by other storage activity.

One way could be to separate the VDI environment onto its own storage, preferably something with low latency that can provide high IOPS during boot storms etc. Here flash-based arrays like Nimbus, Pure Storage and Violin, or hybrid arrays like Nimble, could play a part. Even the big storage providers like NetApp, EMC, HP and IBM now have both hybrid and pure flash-based arrays to compete with the new kids on the block.
But even with these solutions we still have an infrastructure divided into three different areas. Server, network and storage are just faster; they will still need to be maintained by different groups of people, and they will still need to communicate over a network.

We are moving more and more towards the software-defined data center, with public, hybrid and private cloud becoming the standard for the new data center. We need to adapt to this and get rid of all the old legacy systems to be able to smoothly migrate resources between in-house and hosted solutions.

With IT becoming more centralized and outsourcing into cloud solutions becoming some kind of best practice, IT technicians must strive to make the infrastructure as easy to maintain as possible, otherwise it will be outsourced to someone doing it faster and cheaper.

Nutanix, delivering a kind of appliance box with CPU and storage locally in the box, is definitely on the right track here. They deliver a 2U box with four servers, each with two CPUs, up to 512 GB RAM, and SSD and SATA drives for a total of 8 TB usable storage with high IOPS, plus a distributed file system where everything scales linearly when you add more 2U boxes. That makes it a very easy solution to manage.

Inline dedupe, compression, remote replication to a DR-site box, and support for running VMware, Hyper-V or KVM as the hypervisor make it a very attractive solution for running VMs.

There are a number of different hardware specs to choose from: high-density boxes with four nodes, low-latency boxes with two nodes and added storage capacity, or even boxes with Nvidia GPUs to provide VDI solutions capable of delivering 3D and CAD performance.

If you are already running a VMware shop, just add the box to your VMware data center or DRS cluster and live migrate machines over to the new hosts.

There is so much new about this that I must urge you all to look deeper into it. Below are links to some interesting sites and some whitepapers.

Update #1:

vSAN is a new feature in vSphere 5.5 that has some of the features that Nutanix offers. This article explains Nutanix in more detail and also compares it to vSAN.

http://up2v.nl/2013/08/26/nutanix-virtual-computing-platform-compared-to-vmware-vsan-1-0/

http://www.nutanix.com

http://stevenpoitras.com/

http://stevenpoitras.com/the-nutanix-bible/

http://stevenpoitras.com/2013/11/advanced-nutanix-sql-server-nutanix-best-practices-released/

http://www.nutanix.com/blog/2013/01/02/2013-predictions-the-end-of-big-iron/

SQL Server HA and DR solutions?

I have been trying to figure out the best way to protect SQL Servers for HA and DR for some time now.

My customer is currently running SQL Server 2008 R2 on a Veritas cluster with Veritas Volume Manager providing block-level replication between the primary and DR site. The cluster has 4 physical nodes on each site with multiple instances for different systems. Two of the hosts on each site have Fusion-io flash cards for tempdb storage. All other data is stored on a NetApp SAN connected through dual 8 Gb FC HBAs. Symantec NetBackup 7.6 is used for backups; it uses the latest snapshot technology in NetApp and VMware to make backups of databases in just seconds, and restores are done in just minutes.

An upgrade to SQL 2012 is in the pipe and we are also looking into the possibility of making these servers virtual in VMware. The load on the physical boxes is not that high and the performance of the VMware environment would be sufficient. VMware is running on HP blades; the blade chassis is connected to NetApp through dual (or more) 10 Gb Ethernet. Datastores in VMware use the NFS protocol. Veritas is considered expensive and the goal is to get rid of it.

IOmeter and CrystalDiskMark give us almost the same disk performance between physical and virtual servers. We are looking into the possibility of using either Infinio and/or ioTurbine in the VMware hosts to get even better performance.

For the last 8 months we have been testing AlwaysOn to see if that is the solution for a new environment. AlwaysOn is very nice when used with non-shared storage, and both synchronous and asynchronous replication between hosts and sites seem to be working very well. We set up a cluster with 3 plus 3 hosts on the primary and DR site and created AGs for both SharePoint environments and other big and small databases. We set up AGs local to only one site and AGs spanning both sites, and we used a file share witness that could also be moved to the DR site.

All worked well until we performed a planned DR test where we failed over all AGs to the DR site and then pulled the plug between the sites. The primary site cluster stayed up but the DR site stopped because of lost quorum... My mistake of course: I had to manually change the quorum configuration after failing over. I have fixed that now with a PowerShell script that fails over all AGs, resumes replication and changes the quorum. So far so good.
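
For reference, a minimal sketch of such a failover script (the instance, AG and node names below are placeholders, and it assumes the SQLPS and FailoverClusters modules are available on the box running it):

Import-Module SQLPS -DisableNameChecking
Import-Module FailoverClusters

$agRoot = "SQLSERVER:\SQL\DRSQL01\INST1\AvailabilityGroups"   # hypothetical DR replica
foreach ($ag in "AG_Sharepoint", "AG_Apps") {                 # hypothetical AG names
    # Force the AG over to the DR replica; for a planned failover of
    # synchronized AGs, omit -AllowDataLoss (data loss is possible on async replicas)
    Switch-SqlAvailabilityGroup -Path "$agRoot\$ag" -AllowDataLoss -Force
    # A forced failover suspends data movement, so resume it per database
    Get-ChildItem "$agRoot\$ag\AvailabilityDatabases" | Resume-SqlAvailabilityDatabase
}

# Remove the votes from the primary-site nodes so the DR site can keep quorum on its own
foreach ($node in "PRIM1", "PRIM2", "PRIM3") {                # hypothetical node names
    (Get-ClusterNode -Name $node).NodeWeight = 0
}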

But there is still a problem. As I said, one of the AGs I have created is local to the servers on the DR site only; it's a database used only by an application installed on the DR site, so we don't need it on the primary site. The problem is that the cluster on the DR site goes down when we lose connectivity between the sites. Microsoft clustering is designed this way to avoid a split-brain situation, but it's not what I want...

One way to solve this would be to have separate AlwaysOn clusters, but that's expensive and it would be better to utilize the same servers for more than one thing.

Another way would be to have two FCI (failover cluster instance) clusters and put AlwaysOn on top of that, but that would require shared storage for each FCI on each site, and in VMware this requires FC or iSCSI. If I understand it correctly it could work in Windows 2012 and VMware 5.5, but we are not there yet.

On top of this there is the problem with logins and jobs that need to be synchronized between the servers. Having six servers and four AGs makes this a bit complicated.

Jonathan Kehayias at SQLskills has written a small application to help out with this, but it's still not enough. Check it out here: http://www.sqlskills.com/blogs/jonathan/synchronize-availability-group-logins-and-jobs/
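
Until a complete solution exists, logins can at least be scripted out centrally with SMO. A minimal sketch (the server name and share are placeholders; note that SMO's Script() emits CREATE LOGIN statements with random passwords, so SQL logins still need an sp_help_revlogin-style transfer if you want to keep password hashes and SIDs):

[void][System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo')
$server = New-Object Microsoft.SqlServer.Management.Smo.Server 'PRIM1\INST1'  # hypothetical primary replica
$server.Logins |
    Where-Object { $_.Name -notlike '##*' -and $_.Name -ne 'sa' } |  # skip system logins
    ForEach-Object { $_.Script() } |
    Out-File '\\server\share\logins.sql'                             # hypothetical share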

So, I have started to look into different solutions. I have found the following alternatives that we should consider:

  • VMware native HA with SRM for DR protection
  • SIOS
  • DH2i
  • Veritas

What I’m considering right now is to just put the SQL servers in Vmware as is and use SRM for DR protection, it would give basically the same protection level as the Veritas solution. Block level snapshot replication in the NetApp instead of block level volume replication with Veritas. Failover between local host on primary would be slightly slower because  it would require the virtual machine to restarted from scratch on a new hosts, compared to Veritas where only SQL server has to startup. But Veritas has to take care of shared storage and unmounting and mounting diskgroups and that can also take some time. Failing to DR site is manual in Veritas and would be the same in Vmware, data loss would possibly be the same if the line between the sites is not fast enough for high peak load.

Using some other third-party clustering like SIOS or DH2i would maybe be better than Veritas: less complicated, possibly cheaper, but something new that needs to be implemented.

With the protection that VMware, NetApp and Symantec provide, is it really necessary to add AlwaysOn protection as well if we don't need readable secondaries or backup offloading?

What’s your opinion ?

SQL Pass Rally 2013


Rally is back in Stockholm; apparently this year's event was the largest Rally event ever outside the US, even bigger than last time. That is nice, and even though I only participated part-time this year I just love it.

This blog post was written during the event but finished almost a week later. The timeline within the post is therefore a bit confusing, but I prefer to leave it that way so that you can read it as I experienced it.

This conference is obviously small compared to the SQL Pass Summit in Charlotte some weeks ago, but the speakers are just as good, the food seems even better and Tuesday's entertainment sounds promising.

Monday

“Setup best practice of the year” – Brent Ozar


I attended Brent Ozar's pre-con the first day, and it's amazing that there is always something new to pick up. Brent has the same kind of background as me: he learned SAN and virtualization because the admins in these areas didn't understand SQL-related problems. SQL Server is, after all, the most demanding server in most SAN or virtualization environments.

I won’t go into detail about all that was discussed during the day, I’m just going to comment on some of the notes I took during the session.

BrentOzar.com is a great source to get deeper into all of this. There are tons of free materials to explore.

Try out sp_AskBrent with @expertmode=1 to find what your SQL Server is waiting on.

Try sp_Blitz to get a start on troubles in your SQL Server.

Combine with Adam Machanic's sp_WhoIsActive, use the @get_plans=1 parameter, and you have a wonderful combination of tools to troubleshoot your environment.
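
All three can of course be run straight from PowerShell as well; a minimal sketch, assuming the procedures are installed in master on the target instance (the server name is a placeholder):

Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance 'SQL01' -Query 'EXEC sp_AskBrent @ExpertMode = 1'
Invoke-Sqlcmd -ServerInstance 'SQL01' -Query 'EXEC sp_Blitz'
Invoke-Sqlcmd -ServerInstance 'SQL01' -Query 'EXEC sp_WhoIsActive @get_plans = 1'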

Look also for the triage_waitstats script to look deeper into waits.

The best advice to boost the performance of your SQL Server is to buy memory instead of upgrading the SAN. RAM is so cheap these days that getting enough RAM to hold your complete DB in memory is the best way to get rid of storage IO.

To define your hardware for HA and DR you need to define the RPO (Recovery Point Objective) and RTO (Recovery Time Objective); in other words, how much data you can lose and how long your environment can be down.
All systems need to be placed in the following matrix:

                HA      DR
none to 1 sec
1 min
1 hour
1 day

Depending on your choices above you then have several options to achieve this RPO and RTO: AlwaysOn, clustering, mirroring and replication in various combinations, and with underlying storage replication and such.
AlwaysOn is not always the best choice; it is still a product that requires a lot of hands-on work, and it's not as easy as just running a wizard. You might end up with a less reliable environment if you don't do it right.

Run more virtual hosts instead of instances. Combining several instances on one host is often more difficult than running multiple VMs instead. And if you are running Enterprise edition, licensing it on the hypervisor level means you can run an unlimited number of instances on that host. That might be a solution for your environment.

Always start all VMs with 2 CPUs and increase when you can see waits on CPU; giving a VM more CPUs can actually make it slower.

I’m a big fan of PCI flashcards and Intel 910 seems to be a bargain, need to check it up. It comes in 600 and 800 Gb size and delivers up to 2 Gb/s.

PCI cards with caching software are maybe a good idea, but products like OCZ ZD-XL or Fusion-io with ioTurbine add an extra layer of problems, bugs etc. I was interested in these at first but I think for SQL Server it is better to avoid them.

"Putting SSD in a SAN is like flushing money down the toilet", according to Brent Ozar. SSD is better local to the SQL Server; the fabric will still be the biggest bottleneck.

A normal core these days is capable of receiving about 200 MB/s. To saturate all cores in a 2-socket server with 4 cores each, the SAN needs to deliver 2 x 4 x 200 = 1,600 MB/s.

Imagine the SAN needed to saturate 2 sockets with 10 cores each: that is 2 x 10 x 200 = 4,000 MB/s...

VMware 5.5 with local SSD storage is ideal for SQL Server. It can be very good for Standard edition with its limited memory, since the cache can appear bigger with this solution. Less SAN IO.

Infinio also released a product last week that uses RAM in the VMware hosts to cache IO. All hosts form a pool of RAM that is shared to get a better cache hit rate. I would like to try that out in the future.

You can also check out brentozar.com/go/filecache for more info on this subject.

To check whether there is enough RAM in your server, run perfmon and compare the SQL Server: Memory Manager counters: Total Server Memory should be the same as Target Server Memory, otherwise there is memory pressure.
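
A quick way to eyeball those counters without opening the perfmon GUI is Get-Counter; the counter paths below assume a default instance (named instances use MSSQL$INSTANCENAME):

Get-Counter -Counter '\SQLServer:Memory Manager\Target Server Memory (KB)',
                     '\SQLServer:Memory Manager\Total Server Memory (KB)' |
    Select-Object -ExpandProperty CounterSamples |
    Select-Object Path, CookedValue   # the two values should be roughly equal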

Tuesday

“Getting the most from your SAN – file and filegroup layout” – Stephen Archbold


User databases actually have the same hotspot problem as tempdb; adding extra files to a filegroup gives an immediate performance increase of up to 15%.

Check that read-ahead reaches 512 KB per transfer: test with an aggregate SELECT on a fact table and watch Avg. Disk Bytes/Transfer in perfmon.
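
To watch this from PowerShell while the test query runs, something like this should do (it samples all physical disks; narrow the instance name to your data drive):

Get-Counter -Counter '\PhysicalDisk(*)\Avg. Disk Bytes/Read' -SampleInterval 1 -MaxSamples 30 |
    ForEach-Object { $_.CounterSamples | Select-Object InstanceName, CookedValue }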

The following startup parameters for SQL Server can be used to get better IO performance. Investigate and make sure you understand how they should be used and for which environments; google for the SQL Server Data Warehouse Fast Track reference architecture from Microsoft and read it all.

-E (larger contiguous extent allocations per file, which helps sequential scans)
-T1117 (grow all files in a filegroup at the same time)
-T834 (use large-page allocations for the buffer pool)
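
Startup parameters are normally added through SQL Server Configuration Manager, but it can also be scripted against the registry. A minimal sketch (the instance key MSSQL11.MSSQLSERVER is an assumption for a default SQL 2012 instance):

$key = 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQLServer\Parameters'
$next = @((Get-Item $key).GetValueNames() | Where-Object { $_ -like 'SQLArg*' }).Count
foreach ($arg in '-E', '-T1117', '-T834') {
    New-ItemProperty -Path $key -Name "SQLArg$next" -Value $arg -PropertyType String
    $next++
}
# Restart the SQL Server service afterwards for the parameters to take effect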

“Use your brain to beat SQL Server” – Adam Machanic and Thomas Kejser


There was a Sherlock Holmes kind of feeling in this session. Thomas and Adam tried to teach us tricks we can use to manipulate how the optimizer creates query plans, especially when the query plans don't look the way we want and we can't fix it with normal solutions. I will not even try to recap what was said during the session; I was just amazed. Looking forward to the presentation material when it becomes available.

“Index Impact Workload Analysis” – Klaus Aschenbrenner


Klaus Aschenbrenner had an interesting session about the new SQL 2012 feature named Distributed Replay. This tool can be used to record a trace of all activity in a system for a specified time. The trace can then be moved to another system where it can be re-run. This gives excellent opportunities to test out indexes or other changes to a system without affecting the production environment. Load testing can also be done by utilizing up to 16 clients to run the trace.
I have only one problem with this: the prod and test environments need to be very similar so that we don't get different query plans and results just because of the hardware differences.
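
For reference, the Distributed Replay command-line flow looks roughly like this (controller name, paths and server names are placeholders):

# 1. Preprocess the captured trace on the controller
dreplay preprocess -m DReplayController -i "D:\traces\prod_capture.trc" -d "D:\DReplay\Working"
# 2. Replay against the test server, capturing result output, with two clients
dreplay replay -m DReplayController -d "D:\DReplay\Working" -o -s TESTSQL01 -w "Client1,Client2"
# 3. Poll replay progress every 30 seconds
dreplay status -m DReplayController -f 30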

In the evening on Tuesday there was a dinner with Brynolf and Ljung entertaining; they are two Swedish magicians who gave us an amazing show of magic and illusions.


Wednesday

“Data Warehouse in the Cloud – Marketing or Reality?” – Alexei Khalyako


Alexei tried to convince us that we can actually run DWs in Azure. He has a point, and for some customers this is probably a viable solution, but for the majority of customers I still think the performance is not enough. Give it a year and it will be possible.

“Automating Data Warehouse Patterns Through Metadata” – Davide Mauri


Davide is a great speaker and has a way of getting your attention that only Brent Ozar can beat. In this presentation he discussed different DW load patterns and how we can solve them, and in the end he gave us insight into how we can automate much of the "monkey work" in DW development with BIML (Business Intelligence Markup Language).

“HA and DR in the context of the SLA” – Tobiasz Koprowski


This was a nice reminder of what we actually have SLAs for. It's not always as easy as just saying that a DB has 99.999% uptime; all components a system is relying on must be added to the equation to get the real uptime. This Polish guy did a great job describing the consequences.

“Does Optimistic Concurrency Resolve Your Blocking Problems?” – Margarita Naumova


Maggie Naumova walked us through the problems with locking and how we can solve some of them with row-level versioning. I have personally used this on several systems to solve locking problems with good results.
She finished off by explaining how most of the locking problems we have today will be solved by SQL 2014.

In summary I had some very nice days at Arlanda Airport where the conference was held. I met old friends and also made some new ones. I learned a lot, especially from Brent, who made me think twice about some coming purchases.

I’m sorry I could not attend SQL Rally in Amsterdam also. That would have been nice.

SQLPassTV session recordings

During SQL PASS some of the sessions were recorded, and they have now been made available on YouTube. Please follow the links below to watch them.

KEYNOTE: Microsoft's Data Platform (Quentin Clark) http://www.youtube.com/watch?v=A5WgNvA1kg8&feature=share&list=PLoGAcXKPcRvbTr23ujEN953pLP_nDyZJC

KEYNOTE: Hekaton: Why, What, and How (David DeWitt) http://www.youtube.com/watch?v=aW3-0G-SEj0&feature=share&list=PLoGAcXKPcRvbTr23ujEN953pLP_nDyZJC

SESSION: Professor DMV: How to Use DMVs to Diagnose Performance Issues (Glenn Berry) http://www.youtube.com/watch?v=AWFg6KdVoh4&feature=share&list=PLoGAcXKPcRvbTr23ujEN953pLP_nDyZJC

SESSION: Automate Your ETL Infrastructure with SSIS and PowerShell (Allen White) http://www.youtube.com/watch?v=rlrw0dNmFdU&feature=share&list=PLoGAcXKPcRvbTr23ujEN953pLP_nDyZJC

SESSION: Building Your T-SQL Tool Kit: Window Function Fundamentals http://www.youtube.com/watch?v=tUK3ohs7uDI&feature=share&list=PLoGAcXKPcRvbTr23ujEN953pLP_nDyZJC

SESSION: Skewed Data, Poor Cardinality Estimates, and Plans Gone Bad (Kimberly Tripp) http://www.youtube.com/watch?v=li5HwaZF8tc&feature=share&list=PLoGAcXKPcRvbTr23ujEN953pLP_nDyZJC

SESSION: 10 Query Tuning Techniques Every SQL Programmer Should Know http://www.youtube.com/watch?v=XUCxQkFoqpw&feature=share&list=PLoGAcXKPcRvbTr23ujEN953pLP_nDyZJC

SESSION: Troubleshooting Windows Clusters (Allan Hirt) http://www.youtube.com/watch?v=ELtmP_1uEi8&feature=share&list=PLoGAcXKPcRvbTr23ujEN953pLP_nDyZJC

SESSION: SQL Server Index Internals: A Deep Dive (Tim Chapman) http://www.youtube.com/watch?v=MpAKdy54Eqg&feature=share&list=PLoGAcXKPcRvbTr23ujEN953pLP_nDyZJC

SESSION: Index Fragmentation: Internals, Analysis, and Solutions (Paul Randal) http://www.youtube.com/watch?v=p3SXxclj_vg&feature=share&list=PLoGAcXKPcRvbTr23ujEN953pLP_nDyZJC

SESSION: Parallel Query Execution (Paul White) http://www.youtube.com/watch?v=yzW5cqI0uZs&feature=share&list=PLoGAcXKPcRvbTr23ujEN953pLP_nDyZJC

Last day at #SQLPass summit 2013

It’s been a very long week but now it comes to and end. I already looking forward to next year in Seattle and I have not even left Charlotte yet.

I’m getting a bit tiered, all this new knowledge everyday and then all the social activities at night is beginning to take its toll.

But this last day at SQL PASS was also a great one. I started out with a session with Tim Chapman called "SQL Server Index Internals: A Deep Dive"; that was almost a bit too much at 8:00 in the morning, especially after last night's party at the NASCAR Hall of Fame. But Tim has great presentation skills and he kept us all awake with great demos.

Then I went to a session about Central Management Server and policy-based management with Ryan Adams. I have never really gotten the hang of policy-based management, but Ryan sorted it out just fine, and I now think it's time to start using this feature to get better control over whether all servers adhere to a given standard.

Then I went to a great session with David Klee, "Squeezing Top Performance from Your Virtualized SQL Server", and got out with some really useful tips on how to configure VMware, Windows and SQL Server to get top-notch performance. A lot of this can be found on his website.

Then for lunch there was an event called the "Birds of a Feather luncheon", where every table has a sign describing the topic to be discussed. I went for a table called AlwaysOn and got to talk to some great guys about the difficulties of clustering and AlwaysOn.

Then I continued on that track after lunch with a session called "CAT: AlwaysOn Customer Panel – Lessons Learned & Best Practices". Four guys from different companies described their environments and the problems they had encountered while setting up various HA & DR solutions. Very interesting.

The last session of this year's summit was Paul White describing parallel query execution in detail. I didn't know there was something called a parallel deadlock, but Paul managed to create such a scenario and then describe why it happened and how we may avoid it. He also went into detail about a lot of other parallel execution problems one might run into. Paul is also a very skilled presenter and managed to keep us awake even though it was late on Friday afternoon.

To summarize, this has been a fantastic week. There have been so many interesting sessions and so many magnificent speakers that it's hard to imagine where one could get this elsewhere.

I’m so looking forward to next years SQL Pass summit in Seattle, se you all there.