Tuesday, November 12, 2013

Using Service Broker Queuing between servers in different domains

For good reason, Microsoft has made this a secure process.  At the same time, securing it is a technical process, and they have not made it easy to use.

I am writing these steps as I do this for the first time, so you will be learning as I learn.

1. Create Database Master Key
I already noted that the queues do not work if you do not have a database master key.  It is still needed now.

Make sure that you are connected to the database that contains your Service Broker objects:

CREATE MASTER KEY ENCRYPTION BY PASSWORD='YourK3yH3r3'

As previously noted in another post, I am depending upon Professional Microsoft SQL Server 2012 Administration for much of the information in this post.

It is recommended that you back up the key and save it externally.  The command to back up the key is:

backup master key to file='d:\temp\databaseMasterKey.key' encryption by password ='YourPasswordHere'

Question for home study:  If you restore the database and have to recreate the database master key, can you do it with the CREATE MASTER KEY command, or must you use your backed up version of the original key?
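
For reference, Books Online documents a RESTORE MASTER KEY statement that reads the backup file.  A minimal sketch, reusing the placeholder path and passwords from above:

restore master key
    from file = 'd:\temp\databaseMasterKey.key'
    decryption by password = 'YourPasswordHere'  -- the password used when backing up
    encryption by password = 'YourK3yH3r3'       -- re-encrypts the key inside the database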

Rather than tempt fate, or take the time to experiment, I backed up the master key and saved it in another cloud.

I found that the master key backup file was owned by the Administrator account.  As I don't run as that account, I did not have rights to move or copy the backup files.  I had to take ownership myself before I could copy the file and email it offsite.

2. Create Certificates on each server
Since we are not using domain security, SQL Server will depend upon certificates to validate the identity of each server.  The following command needs to be run on each server, with the appropriate server names inserted.

use master
go

CREATE CERTIFICATE YourCertificateName
WITH SUBJECT = 'Description of Certificate',      
START_DATE = '1/1/2011',      
EXPIRY_DATE = '12/31/2099'

The certificate name should reference the server name and the SQL Server instance that the certificate is being created for.

REMEMBER: This has to be done on each server.

3. Exchanging Certificates
We have to back up the certificate on EACH server and move it to the other server.  

First, we have to back up the certificate to a file so that we have something we can move.  Issue the following command:

backup certificate YourCertificateName 
       to File ='d:\temp\YourCertificateName.cer'

The filename is completely arbitrary, but the certificate name is the name that you used when creating the certificate above.

I use RDP to connect from one server to the other.  Again, I have to change the ownership of the file to allow me to copy it to the other server.

Now that the certificates have been copied to the other servers, you need to instantiate the certificates on each server:

create certificate OtherComputerCertificate
       from File ='d:\temp\OtherComputerCertificate.cer'

This has to be done once on each server, referencing the certificate from the other server.
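
Depending on your security setup, you may also need a login to own the other server's certificate, and that login will need CONNECT rights on the endpoint created in the next step.  A minimal sketch, with hypothetical names:

use master
go

CREATE LOGIN OtherComputerLogin WITH PASSWORD = 'Us3AStr0ngPassw0rd!'
CREATE USER OtherComputerUser FOR LOGIN OtherComputerLogin

-- same import as above, but owned by that user
CREATE CERTIFICATE OtherComputerCertificate
       AUTHORIZATION OtherComputerUser
       FROM FILE = 'd:\temp\OtherComputerCertificate.cer'

-- once the endpoint from the next step exists:
GRANT CONNECT ON ENDPOINT::ServiceBrokerEndpoint TO OtherComputerLogin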

4. Establishing Endpoints
Defining the endpoint tells the Service Broker how to talk to other servers.  It defines the TCP parameters, as well as the security used.

The following commands have to be issued on EACH server:



CREATE ENDPOINT ServiceBrokerEndpoint
STATE = STARTED
AS TCP (LISTENER_PORT = 4022, LISTENER_IP = ALL)
FOR SERVICE_BROKER (
       AUTHENTICATION = CERTIFICATE YourCertificateName,
       ENCRYPTION = REQUIRED ALGORITHM RC4
)

Note that the endpoint uses its own certificate in the command.  That text will have to be changed to support each computer's naming convention.

The default port for Service Broker is 4022, but any unassigned port between 1024 and 32767 may be used, as long as it is open.
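
To confirm that the endpoint took, you can query the catalog views (a sketch; sys.service_broker_endpoints and sys.tcp_endpoints share an endpoint_id):

SELECT e.name, e.state_desc, t.port
  FROM sys.service_broker_endpoints AS e
 INNER JOIN sys.tcp_endpoints AS t
       ON e.endpoint_id = t.endpoint_id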

5.  Open port on firewalls
Make sure that the port that you specify is open through any firewalls that you have in place.  On my server, I have Windows Firewall, and I also have the cloud vendor's firewall service.

6. Setting up the Route
The initiator needs to know who to connect to.  It uses a route to do this.

The route needs to know a few things, starting with the Broker Instance.  On the target machine:

select service_broker_guid from sys.databases where name= 'yourTargetDatabase'

Back on the source machine, create the route, using the broker guid that you retrieved above:

create route ConnectToCore
with SERVICE_NAME = 'MyDestinationService',
BROKER_INSTANCE = '9FA8BB55-FEC7-4A6B-AED1-F3BAFC2D7190',
ADDRESS = 'TCP://www.targetmachine.net:52309'

The SERVICE_NAME is the name of the remote service that messages are addressed to, and the ADDRESS refers to your target machine for the queue, including the port that you are using.

Now when you begin a dialog to MyDestinationService, Service Broker will connect to the target machine.
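
One more thing worth knowing: acknowledgments flow back from the target, so the target generally needs a route pointing at the initiator as well.  A sketch, with hypothetical names and a placeholder for the initiator database's service_broker_guid:

-- run on the target machine
create route ConnectBackToSatellite
with SERVICE_NAME = 'MySourceService',
BROKER_INSTANCE = 'InitiatorBrokerGuidHere',  -- paste the initiator's service_broker_guid
ADDRESS = 'TCP://satellite.example.net:4022'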




Setting up Queuing on SQL Server with Service Broker

To move data from satellite servers to our core cloud database, we are using Service Broker, included in SQL Server.  I have been using the book Professional Microsoft SQL Server 2012 Administration, by Adam Jorgensen et al., as a guide as I set this up.

1. Create the database master key
Before you can send and receive messages by using the Service Broker, you have to set a database master key, if one doesn’t already exist in the database. To do this, use the CREATE MASTER KEY statement, and specify the password for the database master key.
CREATE MASTER KEY ENCRYPTION BY PASSWORD='MyPa$$w0rd'

Thanks to SQL Magazine for this tip.  It is not in most references for Service Broker.  More information is available at http://sqlmag.com/database-administration/configuring-sql-server-service-broker.

If you don't have this set, everything will look like it is working, but no data will flow.  No error messages appear anywhere I could find.  This hung me up for several hours.
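
A diagnostic worth checking in this situation: outbound messages that have not been delivered sit in sys.transmission_queue, and its transmission_status column will sometimes tell you why:

SELECT to_service_name, enqueue_time, transmission_status
  FROM sys.transmission_queue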

2. Enable the Service Broker on your database


Right-click your database, and open Properties.  Click on Options and scroll to the bottom of the list.  Set Broker Enabled to True. [Note:  No other users may be connected to this database while you are making this change.]

The following query will return 1 if the broker is enabled.

select is_broker_enabled from sys.databases where name='gmgqueue'

It may have to be re-enabled following a database restore.
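
If you prefer T-SQL to the properties dialog, the same switch can be flipped with ALTER DATABASE (a sketch; WITH ROLLBACK IMMEDIATE disconnects the other users mentioned in the note above):

ALTER DATABASE gmgqueue SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE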

3. Create the Message Type

CREATE MESSAGE TYPE MyMessageType
AUTHORIZATION dbo
VALIDATION=NONE;

The VALIDATION setting could require that messages of this type contain only well-formed XML, or even XML that matches a schema.
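
For example, a variant of the message type that insists on well-formed XML would look like this (a sketch; the name is hypothetical):

CREATE MESSAGE TYPE MyValidatedMessageType
AUTHORIZATION dbo
VALIDATION=WELL_FORMED_XML;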

4. Create the Contract

CREATE CONTRACT MyContract
AUTHORIZATION dbo
(
      MyMessageType SENT BY ANY
);

The contract defines who is allowed to send which message types.
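
A contract can also restrict which side sends which message type.  A sketch, with hypothetical message type names:

CREATE CONTRACT MyRequestReplyContract
AUTHORIZATION dbo
(
      RequestMessageType SENT BY INITIATOR,
      ReplyMessageType SENT BY TARGET
);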

5. Create the Queue

CREATE QUEUE MySourceQueue

CREATE QUEUE MyDestinationQueue


These queues can be in the same database, in different databases, or on different machines.  We will be exploring different machines in a later post.
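
A quick way to confirm that both queues exist and can receive is the sys.service_queues catalog view:

SELECT name, is_receive_enabled, is_enqueue_enabled
  FROM sys.service_queues
 WHERE name IN ('MySourceQueue', 'MyDestinationQueue')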

6. Create a Service

The service attaches the contract to the queue.  Messages are sent to the service.
CREATE SERVICE MySourceService
AUTHORIZATION dbo
ON QUEUE MySourceQueue
(
      MyContract
);

CREATE SERVICE MyDestinationService
AUTHORIZATION dbo
ON QUEUE MyDestinationQueue
(
      MyContract
);




7. Start sending data


DECLARE @conversation_handle UNIQUEIDENTIFIER

BEGIN DIALOG @conversation_handle
FROM SERVICE [MySourceService]
      TO SERVICE 'MyDestinationService'
      ON CONTRACT [MyContract]

DECLARE @message_body XML

SET @message_body = (SELECT * FROM sys.objects FOR XML AUTO, ROOT('root'));

SEND ON CONVERSATION @conversation_handle
MESSAGE TYPE [MyMessageType]
(@message_body);

END CONVERSATION @conversation_handle


What is going on here?  We declare a conversation handle, then begin a dialog between the two services.  The SELECT statement stuffs some sample XML (the contents of sys.objects) into @message_body, which is then sent across using the SEND command.  We clean up after ourselves with END CONVERSATION.

8. Receiving the packet

We can look at the queue without removing items from it by using a select statement against the queue:

declare @messageBody xml

select top 1 @messageBody = cast (message_body as xml) 
  from myDestinationQueue


select @messageBody

But, more often than not, we want to remove work from the queue.  This is done by making the following change:

DECLARE @message_body XML

DECLARE @conversation_handle UNIQUEIDENTIFIER;

RECEIVE TOP (1) @conversation_handle = conversation_handle,
                @message_body = cast(message_body AS XML)
FROM MyDestinationQueue

select @message_body

END CONVERSATION @conversation_handle;
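
In a real consumer, the RECEIVE is usually wrapped in WAITFOR so the call blocks until a message arrives or a timeout passes.  A sketch, reusing the declarations above:

WAITFOR (
    RECEIVE TOP (1) @conversation_handle = conversation_handle,
                    @message_body = cast(message_body AS XML)
    FROM MyDestinationQueue
), TIMEOUT 5000  -- milliseconds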

Monday, November 11, 2013

Shredding XML

To turn the XML into data, we have a number of steps.

First, let us create some XML to play with:

SELECT cust.CustomerID, 
       cust.CustomerType, 
       [Order].InvoiceID, 
       [Order].OrderDate, 
       OrderDetail.ProductID, 
       OrderDetail.Quantity 
FROM Customer AS cust 
INNER JOIN SalesInvoice AS [Order] 
           ON Cust.CustomerID = [Order].CustomerID 
INNER JOIN SalesInvoiceDetail AS OrderDetail 
           ON [Order].InvoiceID = OrderDetail.InvoiceID 
WHERE Cust.CustomerID = 2
for xml auto

Now, we take that XML and load it into memory

declare @doc xml

set @doc='<cust CustomerID="2" CustomerType="Consumer">
  <Order InvoiceID="943" OrderDate="2008-02-07T02:45:03.840">
    <OrderDetail ProductID="72" Quantity="4" />
  </Order>
</cust>'

declare @hdoc int
exec sp_xml_prepareDocument @hdoc output, @doc

Then we can work with the XML:
select * from openxml(@hdoc, '/cust/Order/OrderDetail')

id  parentid  nodetype  localname    prefix  namespaceuri  datatype  prev  text
7   4         1         OrderDetail  NULL    NULL          NULL      NULL  NULL
8   7         2         ProductID    NULL    NULL          NULL      NULL  NULL
10  8         3         #text        NULL    NULL          NULL      NULL  72
9   7         2         Quantity     NULL    NULL          NULL      NULL  NULL
11  9         3         #text        NULL    NULL          NULL      NULL  4

While all the data is there, this is still a little fugly.

Adding one line to the bottom of the query makes it much cleaner:

select * 
from openxml(@hdoc, '/cust/Order/OrderDetail')
with (ProductID INT, Quantity int)

ProductID Quantity
72       4

Now life is better.

To reclaim the memory used when loading the XML, issue the command:

exec sp_xml_removedocument @hdoc
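
As an aside, the xml data type's own nodes() and value() methods can do the same shred without the prepare/remove bookkeeping.  A sketch using the @doc variable from above:

select d.od.value('@ProductID', 'int') as ProductID,
       d.od.value('@Quantity', 'int') as Quantity
  from @doc.nodes('/cust/Order/OrderDetail') as d(od)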



XML on SQL Server

We are going to be using Microsoft SQL Server's internal queuing mechanism, Service Broker.  This mechanism can move any kind of data, but it is expecting XML payloads.

There are two technologies that have to be mastered here:  Service Broker and conversion between XML and SQL data.

SQL Server has a set of tools used for this.  I am using the book 'SQL Queries Volume 5: XML Querying Techniques for SQL Server 2012' by Pinal Dave and Rick Morelan.

The first problem is creating well formed XML.

Using the SQL:
select * from location for xml raw('location'), root ('locations')

I can generate the following output:
<locations>
  <location LocationID="1" Street="111 First ST" City="Seattle" State="WA" />
  <location LocationID="2" Street="222 Second AVE" City="Boston" State="MA" />
  <location LocationID="3" Street="333 Third PL" City="Chicago" State="IL" />
  <location LocationID="4" Street="444 Ruby ST" City="Spokane" State="WA" />
</locations>

Here is the magic:
for xml raw creates the output as XML
('location') names each row as a location
root forces the creation of well formed xml, with a root element
('locations') assigns the name of the root element.  Without this, the root would be called root.

Changing the SQL by adding ELEMENTS at the end:
select * 
from location 
for xml raw('location'), 
    root ('locations'), 
    elements

puts each column into its own XML element, instead of into attributes on the row:
<locations>
  <location>
    <LocationID>1</LocationID>
    <Street>111 First ST</Street>
    <City>Seattle</City>
    <State>WA</State>
  </location>
  <location>
    <LocationID>2</LocationID>
    <Street>222 Second AVE</Street>
    <City>Boston</City>
    <State>MA</State>
  </location>
  <location>
    <LocationID>3</LocationID>
    <Street>333 Third PL</Street>
    <City>Chicago</City>
    <State>IL</State>
  </location>
  <location>
    <LocationID>4</LocationID>
    <Street>444 Ruby ST</Street>
    <City>Spokane</City>
    <State>WA</State>
  </location>
</locations>

If any elements are null, they will be omitted.  If you want them to be included, you need to add the XSINIL argument to the ELEMENTS directive.

select * from employee where lastname ='marshbank' for xml raw ('employee'), root ('Employees'), elements xsinil

Notice how LocationID has an attribute named xsi:nil

<Employees xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <employee>
    <EmpID>8</EmpID>
    <LastName>Marshbank</LastName>
    <FirstName>John</FirstName>
    <HireDate>2001-11-15T00:00:00</HireDate>
    <LocationID xsi:nil="true" />
    <ManagerID>4</ManagerID>
    <Status>Active      </Status>
  </employee>
</Employees>

Things start getting more interesting with the AUTO mode:

SELECT City, 
       FirstName, 
       LastName 
  FROM Location 
 INNER JOIN Employee ON Location.LocationID = Employee.LocationID 
 ORDER BY City
 FOR XML AUTO, ROOT

The output is automatically grouped by table source:

<root>
  <Location City="Boston">
    <Employee FirstName="Lee" LastName="Osako" />
    <Employee FirstName="James" LastName="Newton" />
    <Employee FirstName="Terry" LastName="O'Haire" />
  </Location>
  <Location City="Seattle">
    <Employee FirstName="Alex" LastName="Adams" />
    <Employee FirstName="Barry" LastName="Brown" />
    <Employee FirstName="David" LastName="Kennson" />
    <Employee FirstName="Eric" LastName="Bender" />
    <Employee FirstName="David" LastName="Lonning" />
    <Employee FirstName="Sally" LastName="Smith" />
    <Employee FirstName="Phil" LastName="Wilconkinski" />
  </Location>
  <Location City="Spokane">
    <Employee FirstName="Lisa" LastName="Kendall" />
    <Employee FirstName="Barbara" LastName="O'Neil" />
  </Location>
</root>

For additional control, we can use the PATH mode.  This gives us the ability to determine whether columns are treated as elements or attributes.  Note how the @ prefix forces City to be an attribute instead of an element.  We are also renaming columns by using aliases for the column names.

SELECT City AS [@CityName], 
       FirstName AS [First], 
       LastName AS [Last]
  FROM location AS loc 
 INNER JOIN Employee AS emp 
       ON loc.LocationID = emp.LocationID 
   FOR XML PATH('Employee'), 
  ROOT('Employees')

<Employees>
  <Employee CityName="Seattle">
    <First>Alex</First>
    <Last>Adams</Last>
  </Employee>
  <Employee CityName="Seattle">
    <First>Barry</First>
    <Last>Brown</Last>
  </Employee>
  <Employee CityName="Boston">
    <First>Lee</First>
    <Last>Osako</Last>
  </Employee>
  <Employee CityName="Seattle">
    <First>David</First>
    <Last>Kennson</Last>
  </Employee>
  <Employee CityName="Seattle">
    <First>Eric</First>
    <Last>Bender</Last>
  </Employee>
  <Employee CityName="Spokane">
    <First>Lisa</First>
    <Last>Kendall</Last>
  </Employee>
  <Employee CityName="Seattle">
    <First>David</First>
    <Last>Lonning</Last>
  </Employee>
  <Employee CityName="Boston">
    <First>James</First>
    <Last>Newton</Last>
  </Employee>
  <Employee CityName="Boston">
    <First>Terry</First>
    <Last>O'Haire</Last>
  </Employee>
  <Employee CityName="Seattle">
    <First>Sally</First>
    <Last>Smith</Last>
  </Employee>
  <Employee CityName="Spokane">
    <First>Barbara</First>
    <Last>O'Neil</Last>
  </Employee>
  <Employee CityName="Seattle">
    <First>Phil</First>
    <Last>Wilconkinski</Last>
  </Employee>
</Employees>

Finally, we can make artificial parent nodes.  Notice the Name/ at the beginning of the aliases for FirstName and LastName.  This will create an element called Name that contains the First and Last elements.

SELECT City AS [@CityName], 
       FirstName AS [Name/First], 
       LastName AS [Name/Last] 
  FROM location AS loc 
 INNER JOIN employee AS emp 
       ON loc.LocationID = emp.LocationID 
 WHERE City = 'Boston'
   FOR XML PATH('Employee'), 
       ROOT('Employees')

Notice the Name element:

<Employees>
  <Employee CityName="Boston">
    <Name>
      <First>Lee</First>
      <Last>Osako</Last>
    </Name>
  </Employee>
  <Employee CityName="Boston">
    <Name>
      <First>James</First>
      <Last>Newton</Last>
    </Name>
  </Employee>
  <Employee CityName="Boston">
    <Name>
      <First>Terry</First>
      <Last>O'Haire</Last>
    </Name>
  </Employee>
</Employees>

Thursday, August 8, 2013

Doing an AJAX (JavaScript, jQuery) post using MS MVC on the server

On the Javascript side, the code looks like this:

    var url = window.WEBROOT + 'InventoryItem/_Data_UpdateInventoryTransaction';
    var it = {};  // \Model\ActivityTypes\ActivityHistoryInventoryTransaction
    it.InventoryTransactionId = id;
    it.UsageQuantity = usageQuantity;
    it.LocationId = gridLocationId;
    $.ajax({
        type: 'POST',
        url: url,
        data: it,
        async: true
    });

What's going on here?

The URL says where to go on the server.  window.WEBROOT is a variable that I set as the page loads, in the Microsoft-provided layout file shared/_Layout.cshtml, which runs for every page.  (Setting the variable there is my addition, of course.)  Now I don't have to hard-code the path through the website to the root of this web.

it is a JavaScript object that mimics the structure of the corresponding C# class in the MVC world.  I assign values to attributes that match the C# property names, and then the object is passed in the ajax call, later in the code.

The $ is jQuery; $.ajax posts the object asynchronously.  In this case, I do not need to wait for returned data.

On the server side, in the InventoryItem controller, I have a method called _Data_UpdateInventoryTransaction which takes an InventoryTransaction object.  This object is defined in exactly the same way as the it object above.  At that point, I can pass the object to a lower layer, or I can use any of the data elements.

Wednesday, August 7, 2013

Setting up a Kendo Paging Grid

I was pulling my hair out trying to get the Grid to function nicely, with the paging enabled.  I finally went back to fundamentals, copying the format from the Kendo site.  Here is my final configuration:

        $("#ip_reportGrid").kendoGrid({
            dataSource: {
                data: myDataSource,
                pageSize: 15
            },
            height: '532',
            scrollable: true,
            sortable: true,
            filterable: false,
            pageable: {
                input: true,
                numeric: false
            },
            columns: [
                { field: "Action", title: "Action", encoded: false, width: 205 },
                { field: "ItemDescription", title: "Item" },
                { field: "BatchDesignator", title: "Batch #" },
                { field: "SerialNumber", title: "Run #" },
                { field: "LocationName", title: "Location" },
                { field: "Quantity", title: "Quantity" },
                { field: "UnitOfMeasure", title: "UoM" }
            ]
        });

The data element 'myDataSource' is an array of objects.  Each object has elements that are listed in the columns section above.

The piece that I had been missing was the pageSize element in the dataSource section, which indicates how many rows to pull from the datasource for every page.

Friday, August 2, 2013

Capturing an item passing on a conveyor belt

We are taking the following steps to see every time that an item passes by on a conveyor belt:

We have a PLC (programmable logic controller), but it does not need any programming, just a one-line ladder logic program that says END, to allow us to put the unit into run mode.  We set up port 2 on the device at 38400 baud, odd parity, 1 stop bit, and connected it via a serial-to-USB converter to a PC running Kepware.

The basic KEPServerEX can be set up as a Modbus RTU serial device, while the PLC is a Modbus slave.  Once the communication parameters are set up to match the PLC, Kepware can monitor the PLC, looking at the inputs numbered 100001 and 100002.  On the PLC, these are known as X1 and X2.

We use the Kepware DataLogger to capture the changes to MS SQL Server.  We have to create a new logger, and then identify the Kepware OPC registers we wish to monitor.  We are currently only watching the ON/OFF flag, but we can watch the timestamp to get more accurate readings, since we are only polling the PLC once a second.

These registers are mapped back to a table in SQL Server, and Kepware triggers (not SQL Server triggers!) are configured to write a row to the database whenever the data changes.  Our next step would be to create a SQL Server trigger to perform the appropriate behavior following each database write.
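
A rough sketch of what that trigger might look like, assuming hypothetical names for the table DataLogger writes to and for a table holding the pass events:

-- dbo.ConveyorLog and dbo.ItemPassEvent are hypothetical names
CREATE TRIGGER trg_ItemPassed
ON dbo.ConveyorLog
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- react only when the sensor flag transitions to ON
    INSERT INTO dbo.ItemPassEvent (PassedAt)
    SELECT i.LogTimestamp
      FROM inserted AS i
     WHERE i.SensorValue = 1;
END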