SQL Server 2014 CTP1 – New bits & bytes

Brent Ozar has put together a nice list of the changes in SQL Server 2014 CTP1, with emphasis on DMVs:
http://www.brentozar.com/archive/2013/06/new-dmvs-in-sql-server-2014-ctp1/


SQL Server 2014 – Memory Optimized Table (CTP1)

The new SQL Server 2014 CTP1 comes with a cool new feature called “memory optimized data”. This feature allows us to create a table in pretty much the same manner as before, with some different switches when defining the table filegroup and some options when the table is created. The feature is available from the Management Studio GUI, but it will open a new query window, as when we define a FileTable. I guess this will change in future releases.

[Image: OLTP_Table]

The below example shows how to create a database with a memory optimized filegroup.

[Image: sql-in-memory-create_db]
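
Since the example is only available as a screenshot, here is a minimal sketch of such a CREATE DATABASE statement. The database name, filegroup name and file paths are placeholders, not the values from the screenshot:

    -- Create a database with a filegroup that can hold memory optimized data
    CREATE DATABASE InMemoryDemo
    ON PRIMARY
        (NAME = InMemoryDemo_Data, FILENAME = 'C:\Data\InMemoryDemo_Data.mdf'),
    -- CONTAINS MEMORY_OPTIMIZED_DATA marks the filegroup for in-memory tables;
    -- note that the container is a folder path, not a file
    FILEGROUP InMemoryDemo_MOD CONTAINS MEMORY_OPTIMIZED_DATA
        (NAME = InMemoryDemo_MOD, FILENAME = 'C:\Data\InMemoryDemo_MOD')
    LOG ON
        (NAME = InMemoryDemo_Log, FILENAME = 'C:\Data\InMemoryDemo_Log.ldf');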

The below example shows how to create a table in the newly created memory optimized filegroup. This is done with the option “MEMORY_OPTIMIZED = ON”. A memory-optimized table can be defined with one of two DURABILITY values: SCHEMA_AND_DATA or SCHEMA_ONLY, with the former being the default. With DURABILITY = SCHEMA_ONLY, changes to the table’s data are not logged and the data in the table is not persisted on disk. This increases performance dramatically, at the price that the data is lost if the server restarts.

[Image: sql-in-memory_create_table]
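
Again a minimal sketch, not the exact statement from the screenshot. Table and column names are placeholders, and since CTP1 only supports hash indexes on memory-optimized tables, the primary key is declared as a hash index with a BUCKET_COUNT:

    -- Create a memory optimized table in the filegroup created above
    CREATE TABLE dbo.OrderDemo
    (
        OrderId   INT       NOT NULL,
        OrderDate DATETIME2 NOT NULL,
        -- CTP1: indexes on memory optimized tables must be hash indexes
        CONSTRAINT PK_OrderDemo PRIMARY KEY
            NONCLUSTERED HASH (OrderId) WITH (BUCKET_COUNT = 1048576)
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);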

Remember that NONCLUSTERED indexes are not available in CTP1.

New – “Microsoft SQL Server 2014 CTP1” available

Today Microsoft released SQL Server 2014 CTP1. I have just completed my install of a new VM with CTP1 on Windows Server 2012. Highlights:

  • Project code-named “Hekaton”: “Hekaton” provides in-memory OLTP capabilities built into the core SQL Server database to significantly improve the performance of your database applications.
  • xVelocity ColumnStore:  xVelocity ColumnStore provides in-memory capabilities for data warehousing workloads that result in dramatic improvement for query performance, load speed, and scan rate, while significantly reducing resource utilization (i.e., I/O, disk and memory footprint).
  • Extending Memory to SSDs: Seamlessly and transparently integrates solid-state storage into SQL Server by using SSDs as an extension to the database buffer pool, allowing more in-memory processing and reducing disk I/O (see the configuration sketch after this list).
  • Enhanced High Availability
    • New AlwaysOn features: Availability Groups now support up to 8 secondary replicas that remain available for reads at all times, even in the presence of network failures. Failover Cluster Instances now support Windows Cluster Shared Volumes, improving the utilization of shared storage and increasing failover resiliency. Finally, various supportability enhancements make AlwaysOn easier to use.
    • Improved Online Database Operations: includes single partition online index rebuild and managing lock priority for table partition switch, greatly increasing enterprise application availability by reducing maintenance downtime impact.
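
Of these, the buffer pool extension is the simplest to try out. Below is a minimal sketch based on the released syntax; the file path and size are placeholders and should point at a local SSD volume:

    -- Use a file on an SSD volume as an extension of the buffer pool
    ALTER SERVER CONFIGURATION
    SET BUFFER POOL EXTENSION ON
        (FILENAME = 'F:\SSDCACHE\BufferPoolExtension.BPE', SIZE = 50 GB);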

Measuring and Monitoring Disk Performance

Performance on a SQL Server comes down largely to how the server handles disk I/O operations. Many enterprise SQL Servers use a large SAN instead of local storage, although the falling prices of high-speed storage cards may reduce the concern for poor disk performance. Large datacenters use these high-performance storage cards when systems need the best I/O performance possible. There are essentially four main storage types, each built on either traditional magnetic drives or SSDs:

  • Internal drives
  • PCI-E: high speed storage cards
  • DAS: Direct-Attached Storage
  • SAN: Storage Area Networks

How can you measure disk performance? Has the vendor or IT department given you disks with poor, good or excellent performance? These are questions database administrators have to ask when they receive hardware for a new server. When you are specifying new hardware, don’t just ask for storage based on space requirements; also give the vendor or IT department your performance requirements. Performance parameters for different database types are often measured in the following (see the quick conversion after the list):

  • Sequential performance in MB/second or GB/second
  • Random performance in input/output operations per second (IOPS)
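
The two figures are linked by the block size of the I/O requests, so the same drive can look very different depending on the workload. A rough conversion, assuming a fixed block size:

    throughput (MB/s) ≈ IOPS × block size (KB) / 1024
    e.g. 10,000 IOPS at 8 KB blocks ≈ 78 MB/s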

Sequential performance is very important for data warehouse (DWH) databases, where large sequential reads are performed on the data files. Random I/O performance is very important for Online Transaction Processing (OLTP) databases, where there are frequent writes to the data files and log file. The same applies to Online Analytical Processing (OLAP), where many random reads are made against the cube files. Two useful tools for measuring both:

  • CrystalDiskMark: used to measure sequential and random read/write throughput.
  • SQLIO: similar to CrystalDiskMark, but a command-line Microsoft tool used to determine the I/O capacity of a given configuration. You can simulate different block sizes for the I/O requests, test durations and numbers of outstanding requests; a sample invocation is shown below.
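
A minimal sketch of an SQLIO run; the test file path is a placeholder, and the parameters should be adjusted to mimic your own workload:

    REM 60 seconds of 8 KB random reads with 8 outstanding requests,
    REM capturing latency statistics (-LS); the test file is a placeholder
    sqlio -kR -s60 -frandom -o8 -b8 -LS C:\temp\testfile.dat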

I have done some testing locally on my laptop, where I have an SSD as Disk 0. Here is a screenshot of the result from my SQLIO test; the performance should be pretty good.

[Image: SQLIO_Result]