SOA and Microsoft

What’s next for me? SOA (service-oriented architecture) has been a buzzword for a long time, but I have never done any work using SOA. I came across a case study on www.microsoft.com where SOA Software gave some input. I decided to take a closer look over the next few weeks at how this should be implemented.

I’m grabbing the few sentences in the article that contain the solution 🙂

“With the Windows Communication Foundation 4 Routing Service, we can now address customer challenges surrounding interplatform message routing for Microsoft solutions—challenges that were difficult or impossible before,” Slivker says. “These challenges included support for noninterruptible Microsoft-specific transports like Message Queuing, and infrastructure use cases related to Windows-specific security models like the Kerberos and NTLM protocols.”

Much of this is also related to business intelligence – one of my working areas at the moment.

Some links on SOA: MSDN, Wikipedia, plus a couple of miscellaneous articles.

Visual Studio: Problems adding projects

An old problem reappeared. I was going to add existing projects (2 class libraries, 1 test project and 1 WCF service) to my current Visual Studio 2010 solution. The projects I was adding are stored in another team project than my current Visual Studio 2010 solution, and I had problems adding them by just copying the projects to my workspace and using “Add Existing Project”. The solution is to follow the procedure below for each project you need to “copy” into another solution:

  1. Make the project directory and all sub-directories writable
  2. Delete <ProjectFile.csproj.vspscc>
  3. Delete bin and obj directory
  4. Edit <ProjectFile.csproj> and remove the lines with <SccProjectName>, <SccLocalPath>, <SccAuxPath> and <SccProvider>. These tags are used by TFS and need to be removed before the project can be re-added to any solution.
  5. Add project to the new solution

This will add the projects directly to the solution that is connected to the TFS server.
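If you have many projects to move, steps 2 and 4 can be scripted. Here is a rough C# sketch (the SccCleaner name and the simple line-based stripping are my own; a real tool should parse the project XML properly):

```csharp
using System;
using System.IO;
using System.Linq;

// Hypothetical helper automating steps 2 and 4: deletes the .vspscc file
// and strips the TFS source-control tags from the project file.
public static class SccCleaner
{
    static readonly string[] SccTags =
        { "<SccProjectName", "<SccLocalPath", "<SccAuxPath", "<SccProvider" };

    // Keep every line that does not start with one of the Scc* tags.
    public static string StripSccLines(string projectXml)
    {
        var kept = projectXml.Split('\n')
            .Where(line => !SccTags.Any(tag => line.TrimStart().StartsWith(tag)));
        return string.Join("\n", kept);
    }

    public static void CleanProject(string csprojPath)
    {
        File.Delete(csprojPath + ".vspscc");                // step 2
        File.WriteAllText(csprojPath,
            StripSccLines(File.ReadAllText(csprojPath)));   // step 4
    }
}
```

Note that this only strips single-line tags, which is how the Scc* elements normally appear in a .csproj file.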

Read/Write chunked data in C#

Summary

This last week I have been working on a kind of file transfer protocol that shall be used to implement file access for a file explorer in a web portal that doesn’t have direct access to the database server where the files are stored. In addition, the portal needs load balancing, where the web servers don’t have access to each other’s resources directly. As a result, I needed to implement this file transfer protocol for sending and receiving file data between the web and application servers. Each of the application servers has access to the database server where the documents are stored as BLOBs.

During the implementation of the protocol, I realized that I needed to do each task step by step. First you need to create or update a file header with the essential information, such as id, name, MIME type, file size, “is directory” bit, description and CRC. This file header must be inserted if the file doesn’t exist, otherwise updated with the new information.

When the file header is inserted or updated, you can write file data to the varbinary column. This column, Items.Content, is defined as varbinary(MAX). One important issue is that when a row is created, all varbinary columns need to be initialized. This is fixed by writing a “null” value (not NULL, but 0x0) when the row is inserted. This 0x0 value must be overwritten when the actual file data is written in chunks to the database, by controlling the write offset. Set the offset to 0 (zero) and you will overwrite the data in the content column.
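The insert-or-update of the header can be sketched roughly like this. Only the table name Items and the Content column come from the post; the other column names, the helper name and the parameter list are my assumptions. The important detail is the 0x0 placeholder that initializes the varbinary(MAX) column on insert:

```csharp
using System;
using System.Data.SqlClient;

public static class FileHeaders
{
    // Insert-or-update of the file header. Column names besides Items.Content
    // are assumptions based on the fields listed above. Note the 0x0 value
    // that initializes the varbinary(MAX) column when the row is inserted.
    public const string UpsertSql = @"
IF EXISTS (SELECT 1 FROM Items WHERE Id = @id)
    UPDATE Items
    SET Name = @name, MimeType = @mime, FileSize = @fileSize,
        IsDirectory = @isDir, Description = @descr, Crc = @crc
    WHERE Id = @id
ELSE
    INSERT INTO Items (Id, Name, MimeType, FileSize, IsDirectory, Description, Crc, Content)
    VALUES (@id, @name, @mime, @fileSize, @isDir, @descr, @crc, 0x0)";

    public static void UpsertHeader(SqlConnection conn, Guid id, string name,
        string mime, long fileSize, bool isDir, string descr, long crc)
    {
        using (var cmd = new SqlCommand(UpsertSql, conn))
        {
            cmd.Parameters.AddWithValue("@id", id);
            cmd.Parameters.AddWithValue("@name", name);
            cmd.Parameters.AddWithValue("@mime", mime);
            cmd.Parameters.AddWithValue("@fileSize", fileSize);
            cmd.Parameters.AddWithValue("@isDir", isDir);
            cmd.Parameters.AddWithValue("@descr", descr);
            cmd.Parameters.AddWithValue("@crc", crc);
            cmd.ExecuteNonQuery();
        }
    }
}
```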

Write chunked data

The method WriteChunkedFileData below is taken straight from the test program, and Connection is a property that returns a SqlConnection instance to the connected database. In addition, the test program creates and closes the connection for each operation. This is not optimal programming, but useful during testing and debugging.

When handling binary data writes to my varbinary column Content, I need to use the [Content].WRITE(chunk, offset, size) SQL command. This writes the chunk byte array to the Content column in the Items table, starting at position offset and writing size bytes from the chunk array.

The offset is calculated based on the chunk number given in the input parameter idx.
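A minimal sketch of what WriteChunkedFileData can look like, using the [Content].WRITE command and the offset calculation described above. The chunk size, the id parameter and the connection handling are assumptions from my side:

```csharp
using System;
using System.Data.SqlClient;

public static class FileTransfer
{
    public const int ChunkSize = 64 * 1024; // assumption: 64 KB chunks

    // Offset into the varbinary(MAX) column for chunk number idx.
    public static long OffsetFor(int idx) => (long)idx * ChunkSize;

    // Writes one chunk at position idx. Assumes the row already exists and
    // that Content was initialized with 0x0 on insert, so chunk 0 (offset 0)
    // overwrites the placeholder.
    public static void WriteChunkedFileData(SqlConnection conn, Guid id, int idx, byte[] chunk)
    {
        const string sql = @"UPDATE Items
                             SET Content.WRITE(@chunk, @offset, @size)
                             WHERE Id = @id";
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@chunk", chunk);
            cmd.Parameters.AddWithValue("@offset", OffsetFor(idx));
            cmd.Parameters.AddWithValue("@size", chunk.Length);
            cmd.Parameters.AddWithValue("@id", id);
            cmd.ExecuteNonQuery();
        }
    }
}
```

With idx = 0 the 0x0 placeholder is overwritten; each later chunk lands at idx * ChunkSize, which is exactly the end of the data written so far.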

Read chunked data

When reading data from a varbinary column I gladly discovered that I could use the SUBSTRING function in SQL to retrieve the exact chunk I needed. No special handling. Like in the WriteChunkedFileData method, we need to calculate the offset from the chunk number – this time from the chunkNo input parameter. Sorry for the inconsistent coding style in this test program 🙂
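The read side can be sketched like this, with the same assumed chunk size and id parameter as on the write side. One detail worth a comment: SUBSTRING in T-SQL is 1-based, also for binary data, so chunk chunkNo starts at chunkNo * ChunkSize + 1:

```csharp
using System;
using System.Data.SqlClient;

public static class FileTransferRead
{
    public const int ChunkSize = 64 * 1024; // assumption: same chunk size as the writer

    // SUBSTRING is 1-based in T-SQL, so chunk n starts at n * ChunkSize + 1.
    public static long SubstringStart(int chunkNo) => (long)chunkNo * ChunkSize + 1;

    // Reads one chunk from the Content column; the last chunk may be shorter
    // than ChunkSize, which SUBSTRING handles by returning what is left.
    public static byte[] ReadChunkedFileData(SqlConnection conn, Guid id, int chunkNo)
    {
        const string sql = @"SELECT SUBSTRING(Content, @start, @size)
                             FROM Items
                             WHERE Id = @id";
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@start", SubstringStart(chunkNo));
            cmd.Parameters.AddWithValue("@size", ChunkSize);
            cmd.Parameters.AddWithValue("@id", id);
            return (byte[])cmd.ExecuteScalar();
        }
    }
}
```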

I will come back when the final FileTransferWCF is completed and optimized. Hope this helps someone in the meantime 😉

How to access .chm files

Have you ever downloaded a help file (.chm extension) and couldn’t read the content? I had this issue a few weeks back when I had to check something in the “Team Foundation Installation Guide for Visual Studio 2010”. I had to think a few seconds before I remembered that I needed to unblock the content.

  1. Open Windows Explorer
  2. Right-click .chm file and select Properties
  3. Choose Unblock

…and the content is readable.

Run large SQL command files

I just had a problem importing a large amount of binary data into a SQL Server 2005 Express database used for a project at work.

It is not a large database in terms of tables, views and such, but the document table contains quite a lot of data. The file with insert statements was 200 MB, and I had problems importing the script in SQL Server Management Studio. Therefore, I needed a command-line tool to do the work for me.

It didn’t take me long to find the correct SQL server syntax:

  1. Open a cmd window
  2. Type "sqlcmd -i c:\temp\script.sql" and press ENTER

It took some time to run, but after a couple of hours and heavy server load, the script completed successfully 🙂