Keeping Things Fast for Large n

Some years ago, we had customers reporting poor network response time when fetching content from the server. Our product was not anywhere near being wildly popular, and the number of records in the database was still counted in the tens of thousands. Even our puny instance was able to cache the entire database in memory. Not just a single table or the results of a few queries. The entire database could be cached in RAM. So the slowdown probably wasn't caused by something in the database. All customers reported more or less similar latency, irrespective of their geographical location, internet service provider or time of day. That also ruled out network problems.

So I rolled up my sleeves and began investigating.

One of the features of this product was that while it used sequential 128-bit integers for the primary key columns, the data retrieval was done with a shorter, 5-character identifier called a short-code. The short-code was also unique, but was made up of fewer characters for legibility when users passed links around. The short-code wasn't appropriate for a primary key column though, as its randomness would cause too much index fragmentation.

The short-code was generated by hashing the primary key value using the MD5 algorithm, and truncating it to the first 5 characters of the result. If there was a collision…well nobody had thought about that back then. It was one of the subtle bugs that would come back to trouble us years later. But that’s another story.

Someone had decided to implement this feature in the application code. When a content link was required, it was computed using the .NET cryptography libraries and the result was embedded into a URL string template for the user to share. The hashed value was not stored in the database, even though it was going to remain the same every time. And we would be paying heavily later for this oversight.
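For illustration, here is a minimal sketch of what that application-side computation might have looked like. The class and method names are hypothetical, and the exact string form of the identifier that was hashed is an assumption (this sketch uses the uppercase, hyphenated form that matches the CAST to CHAR(36) used later in the SQL). Usings are omitted, as elsewhere in this post.

public static class ShortCode
{
    public static string FromId(Guid id)
    {
        using (var md5 = System.Security.Cryptography.MD5.Create())
        {
            // Hash the 36-character textual form of the identifier
            var bytes = System.Text.Encoding.ASCII.GetBytes(id.ToString("D").ToUpperInvariant());
            var hash = md5.ComputeHash(bytes);

            // Convert to hexadecimal and keep only the first 5 characters
            return BitConverter.ToString(hash).Replace("-", string.Empty).Substring(0, 5);
        }
    }
}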

Now since the application had no way to identify a record directly by its short-code, the developers had to come up with a Rube Goldberg-esque contraption to retrieve it again. For this, they fetched the ID column for all the content records, ran the MD5 function on each row, truncated the result and compared it to the value given in the incoming request, until a match was found. The CS101 crowd can already see where this is going. Since everything is fast for small n, this technique worked flawlessly on the developers' own computers. It was only when the application was deployed to production, and stayed there for a few months, that the performance bottlenecks began to show up.
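A rough reconstruction of that lookup, again with hypothetical names, to make the linear scan explicit. The allArticleIds parameter is a stand-in for the query that pulled the ID column for the whole table, and ShortCode.FromId is the sketch from above.

public Guid? FindByShortCode(string shortCode, IEnumerable<Guid> allArticleIds)
{
    // Hash every identifier in the table until one matches the requested short-code
    foreach (var id in allArticleIds)
    {
        if (ShortCode.FromId(id) == shortCode)
        {
            return id;
        }
    }

    return null;
}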

Locating the bug itself was easy. I set up a network trace using Wireshark, inspecting the queries between the application and the database server, and promptly proceeded to fall off my chair in disbelief.

After excluding essential communications such as handshakes and authorisation, the application was still receiving almost half a megabyte of data, split into 400 packets, for a table containing only 31,000 rows. All this activity before it could even begin looking for a match.

This was going to require some re-engineering to fix.

For unrelated reasons, our goal was only to reduce the amount of data being received from the database server, with zero changes to the public API or modifications to the database tables. We could only change the application code and deploy a new build. This code was written to use ADO.NET and inline queries to perform data operations, so changing its behaviour was going to be relatively easy.

The first thing was to assemble a query that could generate the MD5 of an integer.

SELECT HASHBYTES('MD5', CAST([Article].[Id] AS CHAR(36))) AS [Hash]
FROM [Article]
WHERE [Article].[Id] = '6BA1CE84-FDB1-EA11-8269-C038960D1C7A';

Since the HASHBYTES function in T-SQL works only with character or binary data, the uniqueidentifier had to be cast into a fixed-width char. The output of this function was like so.

-- 0x704E87BA59EB6F930C020E5D6DA6B444

This hash was converted into a string by using the CONVERT function, and finally truncated to the first 5 characters, resulting in the output shown below.

SELECT LEFT(
            CONVERT(
                CHAR(32),
                HASHBYTES(
                    'MD5',
                    CAST([Article].[Id] AS CHAR(36))
                ), 2), 5) AS [Hash]
FROM [Article]
WHERE [Article].[Id] = '6BA1CE84-FDB1-EA11-8269-C038960D1C7A';
-- 704E8

Cool!

Now came the retrieval by the short-code. The hash-computation query was nested inside another simple select query.

SELECT *
FROM
(
    SELECT [Article].[Id],
           [Article].[Name] AS [ArticleName],
           LEFT(CONVERT(CHAR(32), HASHBYTES('MD5', CAST([Article].[Id] AS CHAR(36))), 2), 5) AS [Hash]
    FROM [Article]
) [Article_]
WHERE [Article_].[Hash] = '704E8';

This was executed against the database and measured again using Wireshark.

The results were remarkably different, but not at all unexpected. Only 882 bytes of data were transferred between the database and the application, and of that, 630 bytes were the query string going into the database server. The only record the server now returned was 252 bytes long, and required no further processing in the application.

There was still a lot of processing going on in the database itself, which was ripe for further optimisation. Storing the short-code in the table permanently and indexing the column would improve the product's performance even further.

But for that moment, I was king of the world.

This story has been altered slightly to protect the guilty and gloss over irrelevant details. The performance bottleneck was made much worse by nested loops (yay, quadratic growth!) and suboptimal data types.

Runtime Resource Authorisation in ASP.NET MVC

The Authorize attribute is a feature of the ASP.NET MVC framework that programmers learn early on. While it is a good out of the box solution for general cases, it doesn’t work well for dynamic authorisation. Take the HTTP request shown below.

GET /posts/edit/12 HTTP/1.1
Host: www.example.com

In conventional MVC terms, this asks the PostsController to retrieve the contents of the post with ID 12 and display them in a form. The Authorize attribute does not determine whether the currently logged-in user has been granted editing rights for that specific post. At best, operations are allowed based on roles or claims, which is still an all-or-nothing situation. Either an individual user can edit all posts, or none at all.
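For contrast, this is roughly what the stock attribute offers. The controller and action names come from the request above; the Editors role name is made up for the example.

public class PostsController : Controller
{
    // Anybody in the Editors role can edit every post; nobody else can edit any
    [Authorize(Roles = "Editors")]
    public ActionResult Edit(int id)
    {
        …
    }
}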

Finer-grained control over individual resources for each user in the system requires a custom solution.

The system described below eschews the Authorize attribute entirely, and chooses to instead use filters in the ASP.NET request pipeline. It imposes the restriction that the name of the resource identifier parameter should always be well-known, such as id. Since the default route already follows this convention, this usually isn’t a problem.

Identifying the What

The first piece of the puzzle is a custom attribute called SecuredAttribute. This class inherits from System.Attribute and is applied to methods. Any controller action method marked with this attribute is identified as a sensitive access point that requires some kind of screening before it is invoked.

But this attribute only identifies the method. It does not perform any kind of screening on incoming requests. This is also why it doesn't inherit from any of the higher-level attributes in the MVC framework, such as ActionFilterAttribute.

[AttributeUsage(AttributeTargets.Method)]
public class SecuredAttribute : Attribute
{
}

The SecuredAttribute is used by applying it to the top of the controller method that needs runtime screening.

public class AdminController : Controller
{
    [Secured]
    public ActionResult Edit(int id)
    {
        …
    }
}

Implementing the How

The screening is performed by a class that implements IActionFilter. There can be multiple screening filters, and they are queued up in the GlobalFilterCollection during Application_Start(). The screening process is performed before the action method is executed, by implementing it in the OnActionExecuting method of the filter class.

public class AuthorizationFilter : IActionFilter
{
    … 
    public void OnActionExecuting(ActionExecutingContext context)
    {
        var secured = context.ActionDescriptor.GetCustomAttributes(typeof(SecuredAttribute), false).FirstOrDefault();
        if (null == secured)
        {
            return;
        }

        var user = context.HttpContext.User;
        var param = context.ActionParameters.Where(p => p.Key == "id").FirstOrDefault();
        var id = Convert.ToInt32(param.Value);

        // Invoke a service to check if the request should be allowed
        var isAllowed = securityService.IsAllowed(user, id);
        if (!isAllowed)
        {
             context.Result = new HttpStatusCodeResult(HttpStatusCode.Unauthorized);
        }
    }
}

The filter looks for the [Secured] attribute. If the method being invoked doesn't have the attribute, the filter returns immediately and lets the method execution proceed. If the attribute is found, the filter performs a screening procedure to determine whether the request should be allowed. It may use an injected service class or even a third-party API to perform this check.

Since the attribute only identifies the method, it remains simple. Discrete behaviours can be attached to the same action method, and these can even depend on the request context (e.g. invocation through the web vs. an API), while maintaining a clean separation of concerns.

Some of these techniques are shown below.

Extending Beyond Simple Authorisation

The method attribute can be leveraged for other cross-functional requirements which are tangential to authorisation. For example, the secured method may require an audit trail.

public class AuditTrailFilter : IActionFilter
{
    public void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var secured = filterContext.ActionDescriptor.GetCustomAttributes(typeof(SecuredAttribute), false).FirstOrDefault();
        if (null == secured)
        {
            return;
        }

        // Invoke a service to log the method access
        Logger.Info(…);
    }
}

The authorisation and audit trail filters can coexist and are fired independently. They use the same marker to identify the methods, but perform widely different tasks with different resources at their disposal. AuditTrailFilter can be programmed to log requests to secured locations in one store and all other requests in another store, while AuthorizationFilter always allows requests to unsecured locations.
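A minimal registration sketch, assuming classic ASP.NET MVC global filters and that both filter classes can be constructed here (the securityService dependency is elided, as in the listings above).

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Both filters inspect every request and decide independently whether to act
        GlobalFilters.Filters.Add(new AuthorizationFilter());
        GlobalFilters.Filters.Add(new AuditTrailFilter());
        …
    }
}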

Another example is to return different responses to the client based on its type. When a request comes from a browser, its Accept header is set to text/html, while an API client such as a SPA or a mobile app sets it to application/xml or application/json. The WebAuthorizationFilter class returns the access-denied error as an HTML view, which the browser displays as a user-friendly error page.

public class WebAuthorizationFilter : IActionFilter
{
    … 
    public void OnActionExecuting(ActionExecutingContext context)
    {
        // Only screen requests coming from a browser
        var acceptTypes = HttpContext.Current.Request.AcceptTypes;
        if (!acceptTypes.Contains("text/html"))
        {
            return;
        }

        var secured = context.ActionDescriptor.GetCustomAttributes(typeof(SecuredAttribute), false).FirstOrDefault();
        if (null == secured)
        {
            return;
        }

        var user = context.HttpContext.User;
        var param = context.ActionParameters.Where(p => p.Key == "id").FirstOrDefault();
        var id = Convert.ToInt32(param.Value);

        // Invoke a service to check if the request should be allowed
        var isAllowed = securityService.IsAllowed(user, id);
        if (!isAllowed)
        {
            context.Result = new ViewResult()
            {
                ViewName = "AccessDenied",
            };
        }
    }
}

The ApiAuthorizationFilter class, on the other hand, returns an HTTP status code 403 in the response. The API client generates an appropriate error view on the client-side.

public class ApiAuthorizationFilter : IActionFilter
{
    … 
    public void OnActionExecuting(ActionExecutingContext context)
    {
        // Return if a non-API request is received
        var acceptTypes = HttpContext.Current.Request.AcceptTypes;
        if (!acceptTypes.Contains("application/xml"))
        {
            return;
        }

        var secured = context.ActionDescriptor.GetCustomAttributes(typeof(SecuredAttribute), false).FirstOrDefault();
        if (null == secured)
        {
            return;
        }

        var user = context.HttpContext.User;
        var param = context.ActionParameters.Where(p => p.Key == "id").FirstOrDefault();
        var id = Convert.ToInt32(param.Value);

        // Invoke a service to check if the request should be allowed
        var isAllowed = securityService.IsAllowed(user, id);
        if (!isAllowed)
        {
            context.Result = new HttpStatusCodeResult(HttpStatusCode.Forbidden);
        }
    }
}

How to Write Unmaintainable Code – ASP.NET Redux

No matter how far technology progresses, it seems that we still remain bound to the past by an innate ability to write poorly structured programs. To me, this points to a rot that is far deeper than languages and platforms. It is a fundamental failure of people who claim to be professionals to understand their tools and the principles that guide their usage.

It has been eight years since I wrote the previous piece in this series that demonstrated poorly written PHP code. The language gets a bad rap due to the malpractices that abound among users of the platform. But this was a theme I was hoping would be left behind after graduating to the .NET framework in the past few years.

It turns out that I was wrong. Bad programmers will write bad code irrespective of the language or platform that is offered to them. And the most shocking bit is that so many of the points from the previous article (and the original by Roedy Green) are still applicable that it feels like we learned nothing at all.

Reinvent the wheel again. Poorly.

Maintainable code adheres to standards – industry, platform, semantics, or just simply internal to the company. Standard practices make it easy to build, maintain and extend software. As such, they’re anathema to anybody who aims to exclude newcomers from modifying his program.

Therefore, ignore standards.

Take the case of date and time. It is 2018, and people want to and expect to be able to use any software product irrespective of their personal regional settings.

Be merciless in thrashing their expectations. Tailor your product to work exclusively with the regional settings used on your development computer. If you are using the American date format, say you’re paying homage to the original home of the PC. If you’re using British settings, extol upon the semantic benefits of the dd-mm-yy structure over the unintelligible American format.

Modern programming platforms have a dedicated date and time data type precisely to avoid this problem. Sidestep it by transmitting and storing dates as strings in your preferred formats (there doesn’t have to be just one). That way, you also get to scatter a 200-line snippet of code to parse and extract individual fields from the string.

For extra points, close all bug reports about the issue from the test engineers with a “Works for me” comment. Your development computer is the ultimate benchmark for your software. Everybody who wishes to run your program should aspire to replicate the immaculate state of existence of your computer. They have no business running or modifying your program otherwise.

Never acknowledge the presence of alternative universal standards.

Ignorance is bliss

Nobody writes raw C# code if they are going to deploy on the web. A standard deployment of ASP.NET contains significant amounts of framework libraries that enable the web pipeline and extensions to work with popular third-party tools. Frameworks in the ecosystem are a programming language unto themselves, and require training before use.

Skip the books and dive into writing code headfirst.

Write your own code from scratch to do everything from form handling to error logging. Only n00bs use frameworks. Real programmers write their own frameworks to work inside of frameworks. This gives rise to brilliant nuggets such as this.

public class FooController : Controller
{
    …
    // Hides the framework's own OnActionExecuting instead of overriding it
    public new void OnActionExecuting(ActionExecutingContext filterContext)
    {
    }
    …
}

By essentially reinventing the framework, you are the master of your destiny and that of the company that you are working for. Each line of custom-built code that you write to replace the standard library tightens your chokehold on their business, and makes you irreplaceable.

Allow unsanitised input

Protecting against SQL injection is difficult and requires constant vigilance. If everything is open to injection, the maintenance programmer will be bogged down under the sheer volume of things to repair and will hopefully either go away or be denied permission to fix it due to a lack of meaningful effort estimates.

Mask these shortcomings by only writing client-side validation. That way, the bugs remain hidden until the day some script kiddie uses the contact form on the site to send “; DROP TABLE Users” to your server.

Try…catch…swallow

Nobody wants to see those ugly-ass “Server Error” pages in the browser. So do the most obvious thing and wrap your code in a try-catch block. But write only one catch handler for the most general exception possible. Then leave it empty.

This becomes doubly difficult to diagnose if you still return something which looks like a meaningful response, but is actually utterly incorrect. For example, if your method is supposed to return a JSON object for the active record, return a mock object from the error handler which looks like the real thing. But populate it with empty or completely random values. Leave some of the values correct to avoid making it too obvious.

Maintenance programmers have no business touching your code if they do not have an innate ken for creating perfect conditions where errors do not occur.

String up performance

Fundamental data types such as strings and numbers are universal. Especially strings. Therefore, store all your data as strings, including obvious numeric entities such as record identifiers.

This strategy has even more potential when working with complex data types containing multiple data fields. Eschew standard schemes such as CSV. Instead come up with your own custom scheme using uncommonly used text characters. The Unicode standard is very vast. I personally recommend using pips from playing cards. The “♥” character is appropriately labelled “Black Heart Suit”, because it lets the maintenance programmer perceive the hatred you bear towards him for attempting to tarnish the pristine beauty of the code you have so lovingly written.

This technique also has a lot of potential in the database. Storing numeric data as strings increases the potential for writing custom parsers or making type-casts mandatory before the data can be used.

Use the global scope

Global variables are a fundamental weapon in the war against maintainable code. Never miss an opportunity to use them.

JavaScript is a prime environment for unleashing them upon the unwary maintenance programmer. Every variable that is not explicitly wrapped up inside a function automatically becomes accessible to all other code loaded on that page. This is an increasingly rare scenario in modern languages. The closest approximation in C# is a global class with several public properties that are referenced directly all over the application. While it looks similar, the scope remains explicit. Try these snippets as an example.

JavaScript –

var a = 0; // Variable a declared in global scope

function doFoo() {
    a++; // Modifies the variable in global scope
}

function doBar() {
    var a = 1;
    a++; // Modifies the variable in local scope
}

C# –

public class AppGlobals
{
    public static int A = 0;
}

public class Foo
{
    public void DoSomething()
    {
        // Scope of A is abundantly clear
        AppGlobals.A++;
        var A = 0;
        A++;
    }
}

It is very easy to overlook the scope of the variable in JavaScript if the method is lengthy. Use it to your advantage. Camouflage any attempts to detect the scope correctly by using different conventions across the application. Modify the global variable in some functions. Define and use local variables with the same name in others. Modifying a function must require extensive meditation upon it first. Maintenance programmers must achieve a state of Zen and become one with your code in order to change it.

Use unconventional data types

Libraries often leverage conventions to eliminate the need to write custom code. For example, Entity Framework can automatically handle table-per-type inheritance if the primary key column in the base class is an identity column.

You can sideline this feature by using string or UUID columns as primary keys. Columns with these data types cannot be marked as identity. This necessitates writing custom code to operate upon the data entities. As you must be aware by now, every extra line of code is invaluable.

Database tables without relationships

If you are working at a small organisation, chances are there is no dedicated database administrator role and developers manage their own database. Take advantage of this lack of oversight and build tables without any relationships or meaningful constraints. Extra points if you can pull it off with no primary keys or indexes.

Combine this with the practice of creating and leaving behind several unwanted tables with similar names to give rise to a special kind of monstrosity that nobody has the courage to deal with. For even more marks, perform updates in all the tables, including the dead ones. Fetch data from different tables in different parts of the application. They cannot be called unwanted tables if even one part of your application depends on them. Call it “sharding” if anybody questions your design.

Conclusion

This post is not meant to trigger language wars. Experienced developers have seen bad code written in many languages. Some languages are just more amenable to poor practices than others.

The same principle applies to the .NET framework, which was supposed to be a clean break from the monstrosities of the past. On the web, the ASP.NET framework and its associated libraries are still one of the best environments I have used to build applications.

That people still write badly structured code in spite of all these advances cements my original point – bad programmers write bad code irrespective of the language thrown at them.

A Model for Sequential Workflow Execution

Many features like automatic memory management have made modern programming technically easier. But pesky business requirements still remain a formidable challenge. A large portion of the complexity in modern applications originates from ever-evolving business rules. In an ideal scenario, there would be no functional requirements and programmers would be paid directly in Cheetos and Mountain Dew for doing cool stuff all day. Unfortunately, that’s not the case, and all payments must be made in fiat currency rather than snacks. So a business is necessary in order to generate revenue. And with it come its own requirements for things like processes, regulations and laws.

In spite of this, a smart programmer can notice that the application of rules to a process is easily separated from the rules themselves. They can be applied in a linear sequence or driven by the outcomes of their component steps (such as offering the customer a choice between a cash discount or complimentary products instead). Linear processing is straightforward to implement – execute each step in a queue one at a time until they are all done. Conditional processing depends on the outcome of the previous step, making the workflow a gigantic mishmash of if-else statements if not handled carefully from the start.

Both types of processing can incorporate structures such as loops, sub-routines and interrupts.

This post demonstrates an implementation of a sequential workflow where the process pipeline is separated from the steps in the process. This architecture allows for the execution of the pipeline to remain unchanged even if the steps in the process change.

The workflow model consists of the entities described below.

Sequential Workflow Execution

Activities are the steps which must be performed in a workflow. The type IActivity<TParameter> defines the common minimum standard that all activities must implement. It requires a method called Execute which takes one parameter.

void Execute(TParameter parameter)

The parameter is the input that this activity may require. Its type should match the generic type argument used when constructing the interface. Consuming the Sequence class is easy if this is a reference type. The client simply has to call its Execute method and wait for it to return. The modifications will show up in the input instance that the client already has. But if it is a value type, the caller has to subscribe to the ExecuteCompleted event from the Sequence, whose handler receives the modified value as a parameter.

The IActivity<TParameter> type exposes an ExecuteCompleted event. The Sequence must subscribe to this event in order to be notified when the activity completes its execution successfully. The event delegate receives a parameter of type ExecuteCompletedEventArgs. The Result property of this instance contains the modified value of the input.
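The exact member signatures are not reproduced in this post, so the following is a sketch of the contract as described above; making ExecuteCompletedEventArgs generic over TParameter is an assumption.

public class ExecuteCompletedEventArgs<TParameter> : EventArgs
{
    // Carries the (possibly modified) input back to the subscriber
    public TParameter Result { get; set; }
}

public interface IActivity<TParameter>
{
    event EventHandler<ExecuteCompletedEventArgs<TParameter>> ExecuteCompleted;

    void Execute(TParameter parameter);
}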

Activity<TParameter> is an abstract class that provides a minimal implementation of the IActivity interface. Derived classes which override the Execute method must ensure that the method in the base class is called, or otherwise ensure that the OnExecuteCompleted method is called when the method completes successfully.
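A possible base implementation, built on the interface sketched above.

public abstract class Activity<TParameter> : IActivity<TParameter>
{
    public event EventHandler<ExecuteCompletedEventArgs<TParameter>> ExecuteCompleted;

    // Derived classes either call base.Execute or raise OnExecuteCompleted themselves
    public virtual void Execute(TParameter parameter)
    {
        OnExecuteCompleted(parameter);
    }

    protected void OnExecuteCompleted(TParameter result)
    {
        ExecuteCompleted?.Invoke(this, new ExecuteCompletedEventArgs<TParameter> { Result = result });
    }
}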

Sequence<TParameter> is the primary execution path of the workflow. It lets the client add activities to the workflow and execute them in the order that they were added.

The Sequence class stores the activity class instances in a queue. It uses an enumerator to ratchet through the list. The enumerator points to the first activity instance and executes it. The Sequence class subscribes to the ExecuteCompleted event from the Activity instance, which causes the enumerator to move to the next activity in the list and execute it. This process continues until all the activities in the list have been executed. At this point, the Sequence itself dispatches the ExecuteCompleted event, which the client must subscribe to.

The Sequence class exposes the following methods.

void Add(IActivity<TParameter> activity)

This method accepts an IActivity instance, whose generic parameter must match the generic type of the Sequence class instance itself.

public void Execute(TParameter input)

This method triggers the execution of the Sequence. It takes a single parameter of the type declared in TParameter. This input is passed as a parameter to the Execute method of all the Activity instances in the sequence.
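Putting these pieces together, a bare-bones Sequence might look like the sketch below. It follows the behaviour described in this post; the original implementation is not shown, so treat it as an approximation.

public class Sequence<TParameter>
{
    private readonly Queue<IActivity<TParameter>> activities = new Queue<IActivity<TParameter>>();
    private IEnumerator<IActivity<TParameter>> enumerator;
    private TParameter current;

    public event EventHandler<ExecuteCompletedEventArgs<TParameter>> ExecuteCompleted;

    public void Add(IActivity<TParameter> activity)
    {
        // Each activity's completion advances the sequence to the next one
        activity.ExecuteCompleted += OnActivityCompleted;
        activities.Enqueue(activity);
    }

    public void Execute(TParameter input)
    {
        current = input;
        enumerator = activities.GetEnumerator();
        ExecuteNext();
    }

    private void ExecuteNext()
    {
        if (enumerator.MoveNext())
        {
            enumerator.Current.Execute(current);
        }
        else
        {
            // All activities are done; notify the client with the final value
            ExecuteCompleted?.Invoke(this, new ExecuteCompletedEventArgs<TParameter> { Result = current });
        }
    }

    private void OnActivityCompleted(object sender, ExecuteCompletedEventArgs<TParameter> e)
    {
        current = e.Result;
        ExecuteNext();
    }
}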

Activities are further classified into filters and transformations. A filter scans the input and either allows or disallows further processing. It does not modify the input in any manner. A transformation activity modifies the input in some way and returns the modified value as output. In the case of the former, there needs to be a mechanism to signal a break in the process to the client. For this, the activity must throw a SequenceException. The client of the Sequence class must wrap the call to the Execute method in a try block and handle any failure to complete the process in the catch block.

These types are collectively sufficient to provide the framework for any simple linear workflow. However, the actual steps to be performed are not part of the framework. The client must provide the concrete implementations of the Activity class, one for each step in the process. These classes are instantiated and added to the Sequence class.

Examples

The following section demonstrates how a transformation and a filter can be implemented and consumed by this framework.

Classes which derive from Activity are part of the client implementation and must be stored in the client namespace. In this example we use the Notadesigner.Text namespace to implement a HyperlinkTransformation and a DeDupFilter.

HyperlinkTransformation scans the input string for any sequences that begin with http:// or https:// and wraps them within an anchor tag. This example uses a very simple regex pattern to perform this step. We are not really interested in the versatility of the regex for this throwaway example.

public class HyperlinkTransformation : Activity<string>
{
    public override void Execute(string input)
    {
        // Wrap anything that looks like a URL in an anchor tag
        input = Regex.Replace(input, @"http(s)?://[a-z.]+", "<a href=\"$0\">$0</a>");
        base.Execute(input);
    }
}

…
var activity = new HyperlinkTransformation();
activity.ExecuteCompleted += (sender, e) =>
{
    Console.WriteLine("Result: {0}", e.Result);
};
activity.Execute("Visit http://www.notadesigner.com for best deals in programming snippets.");
…

When the Execute method completes, it dispatches the ExecuteCompleted event.

DeDupFilter compares the string with existing values in the database. If it is a duplicate, then the previous string is maintained as is and the new one is discarded. This is achieved by throwing a SequenceException from the Execute method of this class if an existing match is found.

public class DeDupFilter : Activity<string>
{
    public override void Execute(string input)
    {
        // CurrentEntries is of type List<string> and is populated previously with string entries
        if (CurrentEntries.IndexOf(input) > -1)
        {
            throw new SequenceException("Entry already exists");
        }

        base.Execute(input);
    }
}

…
try
{
    var activity = new DeDupFilter();
    activity.Execute("Talk is cheap. Show me the code.");
}
catch (SequenceException)
{
    Trace.TraceError("Entry already exists");
}
…

The client can then handle the exception and proceed with the understanding that the input being inserted was already present in the database.
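To close the loop, both activities can be composed into a single Sequence. This usage sketch is built on the classes outlined above.

…
var sequence = new Sequence<string>();
sequence.Add(new DeDupFilter());
sequence.Add(new HyperlinkTransformation());
sequence.ExecuteCompleted += (sender, e) =>
{
    Console.WriteLine("Final result: {0}", e.Result);
};

try
{
    sequence.Execute("Visit http://www.notadesigner.com for best deals in programming snippets.");
}
catch (SequenceException)
{
    Trace.TraceError("Entry already exists");
}
…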

Reifying Your Commands – Interprocess Communications by Example

In the first part of this series, I introduced readers to the word reify, which means to make something real. So far, we have seen how the ActionScript application converts a logging request into a command object, serialises it into a byte array, and sends it over a TCP socket connection into the waiting arms of a server. The server, for its part, must deserialise the byte array back into a command object and execute it.

This last part in the series explains how this is done.

The Story So Far

We have seen how the Puppeteer receives a message. In the receive callback method is a call to deserialise the message into an object.

byte[] message = new byte[messageLength];
Buffer.BlockCopy(buffer, 4, message, 0, messageLength);

ICommand command = Util.Deserialize(message);

The Deserialize utility method receives only the portion of the message that constitutes the actual data. The first 32 bits are discarded as they are not relevant to the deserialisation process. The Deserialize method is extremely simple.

public static ICommand Deserialize(byte[] message)
{
    var instructionClassMap = new Dictionary<byte, Type>() { { 0x02, typeof(Trace) } };

    Type commandType = null;
    ICommand command = null;
    if (instructionClassMap.TryGetValue(message[0], out commandType))
    {
        command = (ICommand)Activator.CreateInstance(commandType, new object[] { message });
    }

    return command;
}

It reads the first byte from the message, which contains the instruction to be executed. The instruction is then used as the key to look up the type in the instruction map. The Activator.CreateInstance() API is used to instantiate the type into a variable. The instance is then returned from the function.

The receive callback then dispatches a CommandReceived event. The application implements the plumbing from that point onward to handle the event notification and act upon it.
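The event itself is not listed in this post; the following is a hedged sketch of what the dispatch might look like inside the Puppeteer. The CommandReceivedEventArgs class shown here is an assumption.

public class CommandReceivedEventArgs : EventArgs
{
    public CommandReceivedEventArgs(ICommand command)
    {
        Command = command;
    }

    public ICommand Command { get; private set; }
}

…
public event EventHandler<CommandReceivedEventArgs> CommandReceived;

private void OnCommandReceived(ICommand command)
{
    // Hand the deserialised command to whatever the application has wired up
    CommandReceived?.Invoke(this, new CommandReceivedEventArgs(command));
}
…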

At this point, we need to take a step back and observe the command object instantiation in detail. Each command type has its own implementation detail which interprets and utilises the message. The Trace class, for example, reads the level, category and message values from the message. Its constructor is listed below.

public Trace(byte[] message)
{
    int unixTimeStamp = message[1] << 24 | message[2] << 16 | message[3] << 8 | message[4];
    TimeStamp = Util.UnixTimeStampToDateTime((double)unixTimeStamp);
    int paramCount = message[5] << 24 | message[6] << 16 | message[7] << 8 | message[8];
    var parameters = new string[paramCount];
    int index = 9;

    for (int i = 0; i < paramCount; i++)
    {
        int length = message[index] << 8 | message[index + 1];
        parameters[i] = Encoding.UTF8.GetString(message, index + 2, length);

        index += (2 + length);
    }

    Level = parameters[0];
    Category = parameters[1];
    Parameters = parameters;
}

The first byte contains the instruction. This is ignored since we already know that the instruction is Trace (0x02).

The next four bytes contain the timestamp of the message as a 32-bit integer. The value is converted into a DateTime object through a utility method.

The next four bytes contain the number of parameters that are passed into the Trace command. The command uses this number to determine the number of string objects to retrieve from the message. Remember that each string object is prefixed by a 16-bit integer that contains the length of its encoded bytes. That's where the index + 2 comes from, which offsets the current position in the array by another 2 bytes. Once the parameters are loaded into an array, they are assigned to public accessors of the Trace class.
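For reference, here is a sketch of the sending side that mirrors this layout. The real serialiser lives in the ActionScript client described in the earlier posts, so this C# version is only an illustration; it also leaves out the outer 32-bit length prefix that the transport adds before the instruction byte.

public static byte[] BuildTraceMessage(int unixTimeStamp, params string[] parameters)
{
    var stream = new MemoryStream();
    stream.WriteByte(0x02);                          // instruction: Trace

    WriteInt32(stream, unixTimeStamp);               // 4-byte timestamp, big-endian
    WriteInt32(stream, parameters.Length);           // 4-byte parameter count

    foreach (var parameter in parameters)
    {
        var bytes = Encoding.UTF8.GetBytes(parameter);
        stream.WriteByte((byte)(bytes.Length >> 8)); // 16-bit length prefix
        stream.WriteByte((byte)(bytes.Length & 0xFF));
        stream.Write(bytes, 0, bytes.Length);
    }

    return stream.ToArray();
}

private static void WriteInt32(MemoryStream stream, int value)
{
    stream.WriteByte((byte)(value >> 24));
    stream.WriteByte((byte)(value >> 16));
    stream.WriteByte((byte)(value >> 8));
    stream.WriteByte((byte)value);
}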

The application uses these public members to display the Trace command on screen and to store it in a database for persistence.