SQL Server – sending email tsql + powershell

Today, I wanted to send an email when a long-running script completed successfully. The SQL Server was already configured with the Database Mail Configuration Wizard, so I just had to add code similar to the example below to send the mail:

EXEC msdb.dbo.sp_send_dbmail
    @recipients = 'john.doe@someplace.com',
    @body = 'The custom script was successfully completed',
    @subject = 'Automated Success Message' ;
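
If the message does not show up, Database Mail also keeps its queue and error log in msdb, so the send can be verified from T-SQL. A quick check, for reference (the sysmail views ship with Database Mail; the TOP filter is just for convenience):

-- Most recent mail items and their delivery status ('sent', 'unsent', 'retrying', 'failed')
SELECT TOP (10) mailitem_id, recipients, subject, sent_status, sent_date
FROM msdb.dbo.sysmail_allitems
ORDER BY mailitem_id DESC ;

-- Error details for items that failed
SELECT * FROM msdb.dbo.sysmail_event_log ORDER BY log_date DESC ;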

In addition, yesterday I had to send email from a PowerShell script to notify the owners of SharePoint sites. Here is the send_email function I used from PowerShell:

function send_email ([string]$emailTo, [string]$emailFrom,
                     [string]$emailSubject, [string]$emailBody)
{
    $emailSmtpServer = "smtp.company.com"
    $emailSmtpServerPort = "25"
    $emailSmtpUser = "COMPANY\johndoe"
    $emailSmtpPass = "johndoe"   # hard-coded for testing only, see note below

    # Build the message (HTML body)
    $emailMessage = New-Object System.Net.Mail.MailMessage( $emailFrom, $emailTo )
    $emailMessage.IsBodyHtml = $true
    $emailMessage.Subject = $emailSubject
    $emailMessage.Body = $emailBody

    # Send through the company SMTP relay
    $SMTPClient = New-Object System.Net.Mail.SmtpClient( $emailSmtpServer, $emailSmtpServerPort )
    $SMTPClient.Credentials = New-Object System.Net.NetworkCredential( $emailSmtpUser, $emailSmtpPass )

    $SMTPClient.Send( $emailMessage )
}

Of course, I need to rework the security here and not hard-code the password, but for testing purposes this worked smoothly yesterday.

Happy emailing 🙂


CRM4 – Creating Dashboard reports

A customer has a CRM4 installation and doesn't want to upgrade to CRM 2011 just yet, but he still wants to be able to create dashboards with important KPIs. Our CRM expert found a nice video on how to create such a dashboard in CRM through SQL Server 2008 Reporting Services and Visual Studio. The video is located at channel9.

This video describes the following steps:

  1. Create the placeholder for the report in CRM4
  2. Export the report's RDL file
  3. Create a new SQL Server 2008 report based on this RDL file (a sample KPI query is sketched right after this list)
  4. Update the report within CRM4 again
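
As an illustration of step 3, a dashboard KPI dataset in SSRS is typically just a query against the CRM filtered views, which respect CRM's security model for the calling user. The query below is only a hypothetical sketch; it assumes the standard CRM 4.0 FilteredOpportunity view in the organization's _MSCRM database, and column names may differ in a customized installation:

-- Hypothetical pipeline KPI per sales person (FilteredOpportunity is the standard CRM 4.0 filtered view)
SELECT
    owneridname         AS SalesPerson,
    COUNT(*)            AS OpenOpportunities,
    SUM(estimatedvalue) AS PipelineValue
FROM dbo.FilteredOpportunity
WHERE statecodename = 'Open'
GROUP BY owneridname
ORDER BY PipelineValue DESC ;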

The result is given in the  screenshot below:

Example CRM4 Dashboard

ER modelling and “Database project”

I have tried out the "Database Project" type for some of my database projects now, and I have found a preferred way to approach future projects. Overall, I think the Solution Explorer view of a "Database Project" is too heavy when a new database has to be built from scratch. Essentially, the programming model is ordinary scripting of the schema with tables, views, constraints, keys and so on. Since I enjoy the possibility of working in a graphical environment like Visio and Database Diagrams, I would like to continue doing that.

I normally use the Visio "Database Model Diagram" and often start with a reverse engineering of an existing database. I then make changes in the diagram and use the forward-engineering plug-in (described in this post) to create a database script that I can import into a new database project. This process helps me a lot when creating a new database from scratch or remodelling an existing one, since it gives me a good starting point for any new project.

VS2010: Database project – import script

Earlier this month I discovered the new project type "SQL Server 2008 Database Project" in my Visual Studio 2010 Ultimate installation – described in this blog post. Of course I was excited, because I already had a way of generating my databases from scratch with schema and configuration data for the different configurations, but that solution is not as out-of-the-box as this new project type.

If you want to explore how this project type works, you can import a full database script into your project to get a quick start. To generate a script for your database, this is one way of doing it:

  1. Open SQL Server Management Studio (SSMS)
  2. Right-click the database you want to create script for importing to Visual Studio Database project
  3. Select “Tasks | Generate Script” from the context menu
  4. Select “Script entire database and all database objects”
  5. See my standard scripting settings in the screenshots below.

I normally just script to file, in case the database is big and too hard to handle in memory. As seen from the advanced options, I set SSMS to script both schema and data. This is done to be able to script basic configuration data that has to be handled in development, test and production scenarios.
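
To illustrate what "schema and data" means in practice, the generated file contains the CREATE statements followed by INSERT statements for the scripted rows. The snippet below is only a made-up example of the shape of that output; the ConfigSetting table is hypothetical and not part of AdventureWorks:

CREATE TABLE [dbo].[ConfigSetting](
    [SettingName]  [nvarchar](100) NOT NULL,
    [SettingValue] [nvarchar](400) NOT NULL,
    CONSTRAINT [PK_ConfigSetting] PRIMARY KEY CLUSTERED ([SettingName] ASC)
)
GO

-- Data rows are emitted after the schema when data scripting is enabled
INSERT [dbo].[ConfigSetting] ([SettingName], [SettingValue]) VALUES (N'SmtpServer', N'smtp.company.com')
INSERT [dbo].[ConfigSetting] ([SettingName], [SettingValue]) VALUES (N'Environment', N'Test')
GO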

DB scripting options

It's a good idea to go over the advanced scripting options; it's possible to add so much information to the scripts that it makes your life harder. Keep It Simple, Stupid – KISS.

Advanced DB scripting options

This post is based on the AdventureWorks database, which can be downloaded from "AdventureWorks Sample Databases". I have exported the AW database, and you can download the complete script here.

The next step is to create a new project in Visual Studio 2010 Ultimate and import the script that was created in SSMS. Create the new project according to my post. When the solution/project is created, right-click the DB project and choose "Import" as shown in the screenshot below.

Follow the instructions in the import wizard.

Oops!… The AdventureWorks database script is 500 MB, so the import might take some time. But for the lazy among you, here is the full VS2010 project with the final, imported AdventureWorks database.

This has to be enough for today… explore the AdventureWorks project, and I will come back with more.

SQL Server 2011 (Denali): Columnstore Index

The SQL Team blog just published a post on Columnstore Indexes, a new feature in the upcoming SQL Server 2011 (Denali) release. A columnstore index organizes the indexed data column-wise instead of row-wise, which gives two main benefits:

  1. Compression.  Most general-purpose relational database management systems, including SQL Server, store data in row-wise fashion.  This organization is sometimes called a row store.  Both heaps and B-trees are row stores because they store the values from each column in a given row contiguously.  When you want to find all the values associated with a row, having the data stored together on one page is very efficient.  Storing data by rows is less ideal for compressing the data.  Most compression algorithms exploit the similarities of a group of values.  The values from different columns usually are not very similar.  When data is stored row-wise, the number of rows per page is relatively few, so the opportunities to exploit similarity among values are limited.  A column store organizes data in column-wise fashion.  Data from a single column are stored contiguously.  Usually there is repetition and similarity among values within a column.  The column store organization allows compression algorithms to exploit that similarity.
  2. Fetching only needed columns.  When data is stored column-wise, each column can be accessed independently of the other columns. If a query touches only a subset of the columns in a table, IO is reduced.  Data warehouse fact tables are often wide as well as long.  Typical queries touch only 10 – 15% of the columns.  That means a column store can reduce IO by 85 – 90%, a huge speedup in systems that are often IO bound, meaning the query speed is limited by the speed at which needed data can be transferred from disk into memory.

This will be interesting to try out once I have installed my Denali copy on a VMware instance.
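
For reference, creating such an index in Denali is a single statement. Here is a minimal sketch against a hypothetical fact table (dbo.FactSales and its columns are made up, and the CTP syntax may still change before release):

-- Columnstore index over the columns a typical reporting query touches
CREATE NONCLUSTERED COLUMNSTORE INDEX csi_FactSales
ON dbo.FactSales (OrderDateKey, ProductKey, OrderQuantity, SalesAmount) ;
GO

Queries that only touch a few of those columns can then read just the needed column segments instead of whole rows.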