Moving tempdb to another directory

After installing a few standalone SQL Servers yesterday, I discovered today that the tempdb databases were located in the wrong directories according to the EVRY installation guide: tempdb should be placed on a different disk from the data and system databases. In our project we have allocated the K: drive for tempdb.mdf, while templog.ldf should be located together with the other log files.

You can use the SELECT below to find out where the tempdb files are currently located:

SELECT name, physical_name AS CurrentLocation
FROM sys.master_files
WHERE database_id = DB_ID(N'tempdb');

In my case, the files were installed in the SQL Server default location on the C: drive, and I needed to move the MDF (data) file to the K: drive and the LDF (log) file to the J: drive. I used the following statements to configure the new directories for tempdb:

USE master;
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'K:\MSSQL\Data\tempdb.mdf');
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'J:\MSSQL\Data\templog.ldf');
GO

You then need to restart the SQL Server service; the tempdb files will be recreated in the new location, and you can delete the old files.
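On a default instance, the restart can be done from an elevated command prompt as sketched below (this assumes the standard service name MSSQLSERVER; a named instance uses a service name of the form MSSQL$InstanceName):

```
net stop MSSQLSERVER
net start MSSQLSERVER
```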


How to test SQL Server connectivity

When you install a new SQL Server instance, you need to confirm that other machines are able to connect to it properly. Since I’m a developer, I have created a custom application for this, where developers can test the .NET connection string they use in app.config and web.config.

But there is an easier way of doing this, by using “Microsoft Data Link” with the following steps:

  1. Create a .txt file on your desktop.
  2. Rename the file to .udl.
  3. Double-click on the file.
  4. Enter the appropriate connection parameters.
  5. Click the “Test Connection” button to test.

When you double-click on the .udl file, the “Connection” tab will appear in the Data Link Properties dialog box. In this tab you enter the server name, the user name and password (unless the current Windows login is to be used), and the database you want to test. But first you have to select the correct database provider.


The following screenshot shows the “Provider” tab. In our case, we should select the “SQL Server Native Client” provider.


When the SQL Server Native Client provider is selected, go back to the “Connection” tab and press the “Test Connection” button.



If this test is successful, you have confirmed that

  • the SQL Server service is running and the correct TCP/IP network protocol is enabled
  • the ports are open between the two machines
  • the logon credentials (user name and password) are correct and can connect to the target SQL Server instance and database
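For reference, the .udl file itself is just a small text file. After a successful configuration it contains an OLE DB connection string roughly like the sketch below (the provider version, server and database names here are examples, not values from my setup):

```
[oledb]
; Everything after this line is an OLE DB initstring
Provider=SQLNCLI10.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=MyDatabase;Data Source=MyServer
```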


Perfmon – store counter data in SQL

In my previous two posts I tried to categorize performance counters for the OS and SQL Server, and showed how these counters can be collected using Performance Monitor and collector sets. The next step is to store the counter values in a database on a regular basis.

When the counter values are stored in a SQL database, we can select and aggregate them in a better fashion, and even create a data warehouse where fast reports on the state of server performance can be presented.

When you have one or more BLG files stored on disk, you can use the relog utility to store all counter definitions and values in a 32-bit ODBC data source. An example of the syntax is shown below:

relog MyCounterData.blg -f SQL -o SQL:MyOdbc!MyServer

This command will take all performance counter values from the file MyCounterData.blg and put them into a database whose connection parameters are defined in the ODBC DSN MyOdbc on the server MyServer. The database must be created before the command is run, but relog will create the tables needed for storing the counter data, provided it has the necessary rights.

The output will look something like:

     MyCounterData.blg (Binary)

Begin:    23.5.2013 15:29:36
End:      23.5.2013 15:55:21
Samples:  104


File:     SQL:MyOdbc!MyServer

Begin:    23.5.2013 15:29:36
End:      23.5.2013 15:55:21
Samples:  104

The command completed successfully.

If the user defined in the ODBC DSN has sufficient access rights, the following tables will be created (slightly changed for better readability):

CREATE TABLE [dbo].[DisplayToID](
	[GUID] [uniqueidentifier] NOT NULL,
	[RunID] [int] NULL,
	[DisplayString] [varchar](1024) NOT NULL,
	[LogStartTime] [char](24) NULL,
	[LogStopTime] [char](24) NULL,
	[NumberOfRecords] [int] NULL,
	[MinutesToUTC] [int] NULL,
	[TimeZoneName] [char](32) NULL
);

CREATE TABLE [dbo].[CounterDetails](
	[CounterID] [int] IDENTITY(1,1) NOT NULL,
	[MachineName] [varchar](1024) NOT NULL,
	[ObjectName] [varchar](1024) NOT NULL,
	[CounterName] [varchar](1024) NOT NULL,
	[CounterType] [int] NOT NULL,
	[DefaultScale] [int] NOT NULL,
	[InstanceName] [varchar](1024) NULL,
	[InstanceIndex] [int] NULL,
	[ParentName] [varchar](1024) NULL,
	[ParentObjectID] [int] NULL,
	[TimeBaseA] [int] NULL,
	[TimeBaseB] [int] NULL
);

CREATE TABLE [dbo].[CounterData](
	[GUID] [uniqueidentifier] NOT NULL,
	[CounterID] [int] NOT NULL,
	[RecordIndex] [int] NOT NULL,
	[CounterDateTime] [char](24) NOT NULL,
	[CounterValue] [float] NOT NULL,
	[FirstValueA] [int] NULL,
	[FirstValueB] [int] NULL,
	[SecondValueA] [int] NULL,
	[SecondValueB] [int] NULL,
	[MultiCount] [int] NULL
);

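As a sketch of how the stored data can be queried afterwards (using only columns from the table definitions above), something like the following joins the counter values with their definitions and the log run they belong to:

```sql
SELECT d.DisplayString,    -- the name of the relog run, e.g. 'MyOdbc!MyServer'
       cd.MachineName,
       cd.ObjectName,
       cd.CounterName,
       cd.InstanceName,
       c.CounterDateTime,
       c.CounterValue
FROM dbo.CounterData AS c
JOIN dbo.CounterDetails AS cd ON cd.CounterID = c.CounterID
JOIN dbo.DisplayToID AS d ON d.[GUID] = c.[GUID]
ORDER BY c.CounterDateTime;
```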
This should be enough detail for this time, but I will most probably come back later with refined approaches.

Happy debugging and monitoring….


Perfmon – Configuring counter collector sets

I am currently working with “Performance Monitor”, perfmon for short, to measure Windows performance. Performance Monitor is a standard tool that is installed by default and will probably exist on every server you work on. The screenshot below shows the main window of perfmon. In it, you can see that I have created two collector sets: one for “SQL Server 2008 R2” counters and one for standard “Windows Server 2008 R2” counters. In my previous post, I listed a range of basic and extended counters for Windows Server and SQL Server; the corresponding sets are named “Sql2008R2” and “StdWinSrv2008R2” below.


In Real-Time

When you define collector sets, they can be created based on a template or on manually selected counters. You can also monitor server performance in real time; this option is available under “Monitoring Tools | Performance Monitor” in the screenshot above. The screenshot below shows what this window looks like on a server, with the counters available on the “System Summary” page.


Create new set

To create a counter collector set, right-click on “User Defined” in the menu on the left and select “New | Data Collector Set”. In the first screen you give the collector set a name and decide whether it shall be created based on a template or on manually selected counters. I will not cover the template-based collectors in this post, but the available templates are as follows:

  1. Basic
  2. System Diagnostics
  3. System Performance

Each of these collector sets will run for approximately one minute, collect data, and generate an HTML report. I will use the manual approach in this post. The screenshot below shows the first screen, where name and type are defined.


In the next step you define that the collector set shall apply to performance counters. The other options will be covered later.


The next step is to select the performance counters you want the collector set to include in the data collection session. In the screenshot below, I have selected only two performance counters: “Memory – Available Mbytes” and “Processor Information (_Total)\% Idle Time“. The collector set will collect data every 15 seconds when it runs, but this interval can be changed.
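As an alternative to the GUI, the same kind of collector set can be sketched from the command line with the logman utility (the set name and output path below are examples of my own choosing):

```
logman create counter StdWinSrv2008R2 -c "\Memory\Available MBytes" "\Processor Information(_Total)\% Idle Time" -si 00:00:15 -o C:\PerfLogs\StdWinSrv2008R2
logman start StdWinSrv2008R2
```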


The next step is to save the collector set, and here you may change the user that shall run it. You also have the option to open the properties of the collector set, where schedule and file settings can be configured. These settings are covered later in this post.


Automatic scheduling

When you have defined the collector set with the proper number of performance counters, you can start the set manually or schedule it to run at predefined times. Right-click on the collector set, select Properties, and go to the Schedule tab. Click the Add button to create a schedule for your collector set. These settings look like those in many other scheduling applications: you can set a start date (including time) and an end date, in addition to which days the collector set shall run.


When the schedule is defined, you should also define the stop condition for the run. In my case, I want to collect information during normal working hours, which are 08:00 to 16:00. In addition, I want to collect plus/minus two hours, resulting in counter values being collected between 06:00 and 18:00. This will help us see when people start using the applications on the server and what the workload looks like during the working day.


Perfmon also offers the option to run a Windows scheduled task when the perfmon run is completed.


This task can copy the BLG files to a file share, where they can be picked up and stored in a database for further analysis. The process of putting counter data into a database will be the topic of my next post.

SQL Server: collecting and analysing performance data – part 1

In my quest for the “MCSE: Business Intelligence – Solutions Expert” certification, I am building up my knowledge of debugging and optimizing server performance. Almost every enterprise solution uses some kind of database. Microsoft technologies such as SharePoint and Dynamics CRM use SQL Server as their storage platform.

When the amount of data and the number of users increase, server performance needs to remain optimal to give users a good experience. This can be achieved either by scaling up the hardware or by optimizing the current hardware and software.

There are lots of tools for collecting and analyzing performance data, but Windows Server offers a standard solution in “Performance Monitor” and the use of “Performance Counters”. The tool can be used to monitor performance in real time or to capture metrics over a period of time; in my case there are counters for both the OS and SQL Server. Data collection can be performed in real time or scheduled to run automatically, storing the performance data in files or a database. There are also tools for processing and analyzing the data.

The lists below show some counters I have found after heavy googling over the last few weeks. So far, I have created collector sets for these counters and run them in Performance Monitor.

Generic OS counters (StdWinSrv):

  • Processor – % Processor Time
  • Processor – % Privileged Time
  • System – Processor Queue Length
  • Memory – Available Mbytes
  • Physical Disk – Avg Disk sec/Read
  • Physical Disk – Avg Disk sec/Write
  • Network Interface – Current Bandwidth
  • Network Interface – Bytes Sent/sec
  • Network Interface – Bytes Received/sec

Some basic SQL Server counters (SQLBasicCounters):

  • SQL Server: Cache manager – Cache Hit ratio
  • SQL Server: Databases – Transactions/sec
  • SQL Server: General statistics – User connections
  • SQL Server: Buffer Manager – Page Life Expectancy
  • SQL Server: Locks – Number of Deadlocks/sec
  • SQL Server: Access Methods – Full Scans/sec
  • SQL Server: Access Methods – Index Searches/sec

Some extended SQL Server counters (SQLExtCounters):

  • SQL Server: Access Methods – Forwarded Records/sec
  • SQL Server: Buffer Manager – Free List Stalls/sec
  • SQL Server: Buffer Manager – Lazy Writes/sec
  • SQL Server: Buffer Manager – Page Reads/sec
  • SQL Server: Buffer Manager – Page Writes/sec
  • SQL Server: Memory Manager – Total Server Memory (KB)
  • SQL Server: Memory Manager – Target Server Memory (KB)
  • SQL Server: SQL Statistics – Batch Requests/sec
  • SQL Server: SQL Statistics – SQL Compilations/sec
  • SQL Server: SQL Statistics – SQL Recompilations/sec
  • SQL Server: Latches – Latch Waits/sec
  • SQL Server: Locks – Lock Waits/sec

After watching a Pluralsight video presented by Erin Stellato from SQLskills, I became aware of a pretty neat tool, “Performance Analysis of Logs” (PAL). This is a free tool made and used by Microsoft Field Engineers that analyzes perfmon logs based on predefined and customizable threshold templates. The PAL application has a GUI and generates HTML reports that can easily be used for management reporting.

If we need more custom analysis, we probably want to store the perfmon data in a SQL database and do our own analysis. The process of collecting data from a SQL Server, storing the data in daily files, copying the files to a file share, and loading the data into a database for closer analysis is the topic of my next blog post.


SQL Server – simple query performance analysis

I’m still reading for my next exam, 70-461. In the chapter “Using Tools to Analyze Query Performance” I found a quite simple way of measuring query performance by using the SET STATISTICS command.


DBCC DROPCLEANBUFFERS;
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

SELECT * FROM Sales.Customers;
SELECT * FROM Sales.Orders;

Note that DBCC DROPCLEANBUFFERS clears data from the cache; SQL Server caches data in addition to query and procedure plans. The TIME switch will display how many milliseconds the query took to execute, while the IO switch will tell how many logical and physical page reads were involved.

A page in SQL Server is a physical unit on disk. The size of a page is fixed at 8,192 bytes. A page belongs to a single database object, such as a table, an index, or an indexed view. Pages are further grouped into logical groups of eight pages called extents. An extent can be mixed if the pages in it belong to multiple objects.

Output from the “SELECT * FROM Sales.Customers” command:

(91 row(s) affected)
Table 'Customers'. Scan count 1, logical reads 5, physical reads 1,   
read-ahead reads 3,   
lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:    CPU time = 15 ms,  elapsed time = 125 ms.

Output from “SELECT * FROM Sales.Orders” command:

(830 row(s) affected)
Table 'Orders'. Scan count 1, logical reads 21, physical reads 1,   
read-ahead reads 19,   
lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:    CPU time = 0 ms,  elapsed time = 330 ms.

The meaning of the information returned is as follows (from the Training Kit for 70-461):

  • Scan count – the number of index or table scans performed.
  • Logical reads – the number of pages read from the data cache. When you read a whole table, as in the queries from the example, this number gives you an estimate of the table size.
  • Physical reads – the number of pages read from disk. This number is lower than the actual number of pages because many pages are cached.
  • Read-ahead reads – the number of pages SQL Server reads ahead.
  • Lob logical reads – the number of large object (LOB) pages read from the data cache. LOBs are columns of the types VARCHAR(MAX), NVARCHAR(MAX), VARBINARY(MAX), TEXT, NTEXT, IMAGE, XML, or large CLR data types, including the system CLR spatial types GEOMETRY and GEOGRAPHY.
  • Lob physical reads – the number of large object pages read from disk.
  • Lob read-ahead reads – the number of large object pages SQL Server reads ahead.
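Since a page is fixed at 8 KB, the logical reads figure translates directly into a rough size estimate. The arithmetic for the Sales.Orders output above can be checked with a one-line query:

```sql
-- 21 logical reads * 8192 bytes per page = 172032 bytes, i.e. about 168 KB
SELECT 21 * 8192 AS ApproxTableBytes,
       (21 * 8192) / 1024 AS ApproxTableKB;
```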


Today I discovered a command in T-SQL that I have never used before: WAITFOR.

This is not the most important command 🙂 But it is nice to have if you need to wait for a predefined number of seconds, or until a certain time of day, in your scripts.


  • WAITFOR DELAY '00:00:20';
    pauses code execution for 20 seconds
  • WAITFOR TIME '23:46:00';
    pauses execution until the specified time of day

This command can be used to synchronize processes in a simple fashion from your scripts. Such scripts are often run from the command line, scheduled tasks, or similar applications.
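As a sketch of the kind of synchronization I have in mind (the table name dbo.StagingQueue is made up for illustration):

```sql
-- Poll a hypothetical staging table every 20 seconds until rows arrive
WHILE NOT EXISTS (SELECT 1 FROM dbo.StagingQueue)
BEGIN
    WAITFOR DELAY '00:00:20';
END;
PRINT 'Rows available - continue processing';
```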