SQL Server 2012: Building a Failover Cluster

Just a note to self after tonight's exam preparation on how to create a SQL Server failover cluster.

Every database server that hosts a critical system should be set up as a high-availability solution. This can be done with a “Failover Cluster” or with “AlwaysOn Availability Groups”. This post only includes notes on the important points for a failover cluster.

  1. Configure shared storage. Make sure the servers are in the OU for SQL Servers. Apply firewall rules for ports TCP/135 and TCP/3260, in addition to UDP/138 and any SAN target software. Bring the SAN volumes online and create disk partitions. Add a DNS record for the cluster name.
  2. Configure the Windows Server R2 failover cluster. Install .NET Framework 3.5.1 on all nodes. Configure the cluster servers (nodes). Set the cluster name and IP.
  3. Run “Advanced Cluster Preparation” on each node. Each node in the cluster needs to perform this preparation task, followed by a node reboot. The installation includes the database engine, replication, and management tools on the default instance.
  4. Run “Advanced Cluster Completion” on the node with the shared storage. Verify the SAN disks. Set the server network name, resource group, cluster IP and subnet mask, and verify that everything comes online.
  5. Perform a cluster failover. This can be done with the PowerShell command “Move-ClusterGroup <resourcegroup> -Node <node>”.
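The failover in step 5 can be sketched with the FailoverClusters module cmdlets. The group and node names below are placeholders for your own environment:

```powershell
# Load the failover clustering cmdlets (available once the
# Failover Clustering feature is installed)
Import-Module FailoverClusters

# Placeholder names -- substitute your own resource group and target node
$group = "SQL Server (MSSQLSERVER)"
$node  = "SQLNODE2"

# Check which node currently owns the group before moving it
Get-ClusterGroup -Name $group | Select-Object Name, OwnerNode, State

# Move the resource group to the other node (the actual failover)
Move-ClusterGroup -Name $group -Node $node

# Verify that the group came online on the new owner
Get-ClusterGroup -Name $group | Select-Object Name, OwnerNode, State
```

Checking OwnerNode and State before and after the move is a quick way to confirm that the failover actually succeeded.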

Measuring and Monitoring Disk Performance

Performance on a SQL Server comes down largely to how the server handles disk I/O operations. Many enterprise SQL Servers use large SANs instead of local storage systems, and the price development in high-speed storage cards might reduce the concern about poor disk performance. Large datacenters use these high-performance storage cards when systems need the best I/O performance possible. There are essentially four main storage types, each built on either traditional magnetic drives or SSDs:

  • Internal drives
  • PCI-E: high speed storage cards
  • DAS: Direct-Attached Storage
  • SAN: Storage Area Networks

How can you measure disk performance? Has the vendor or IT department given you disks with poor, good, or excellent performance? These are questions database administrators have to ask when they receive hardware for a new server. When you are specifying new hardware, don’t just ask for storage based on space requirements; ask the vendor or IT department for performance requirements. Performance for different database types is often measured as:

  • Sequential performance in MB/second or GB/second
  • Random performance in input/output operations per second (IOPS)

Sequential performance is very important for data warehouse (DWH) databases, where large sequential reads are performed on the data files. Random I/O performance is very important for Online Transaction Processing (OLTP) databases, where there are frequent writes to the data and log files. The same applies to Online Analytical Processing (OLAP), where many random reads hit the cube files.
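The two metrics are related through the I/O block size (throughput ≈ IOPS × block size), so you can sanity-check one number against the other. A quick sketch in PowerShell, using made-up example figures:

```powershell
# Throughput (MB/s) = IOPS x block size (KB) / 1024
# Example: an OLTP-style workload doing 5000 random 8 KB I/Os per second
$iops    = 5000
$blockKB = 8          # SQL Server reads/writes data pages in 8 KB blocks
$mbps    = $iops * $blockKB / 1024
"{0} IOPS at {1} KB per I/O = {2:N1} MB/s" -f $iops, $blockKB, $mbps
# -> 5000 IOPS at 8 KB per I/O = 39.1 MB/s
```

This also shows why a disk with impressive sequential MB/s numbers can still be a poor fit for an OLTP workload: small random I/Os add up to surprisingly modest throughput.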

  • CrystalDiskMark: used to measure sequential and random read/write throughput.
  • SQLIO: similar to CrystalDiskMark, but a command-line Microsoft tool used to determine the I/O capacity of a given configuration. You can vary the block size of the I/O requests, the duration of the tests, and the number of outstanding requests.
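As I recall the SQLIO switches (-k read/write, -f access pattern, -b block size in KB, -o outstanding I/Os, -t threads, -s duration in seconds), a typical run might look like this; adjust the test file path and parameters to your own setup:

```powershell
# Random 8 KB reads for 120 seconds, 8 outstanding I/Os, 2 threads,
# with latency statistics (-LS). Path and figures are placeholders.
.\sqlio.exe -kR -frandom -b8 -o8 -t2 -s120 -LS C:\Temp\testfile.dat
```

Run both a random small-block test (OLTP-like) and a sequential large-block test (DWH-like) so the numbers match the workload the server will actually carry.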

I have done some testing locally on my laptop, where I have an SSD as Disk 0. Here is a screenshot of the result from my SQLIO test. So the performance should be pretty good.