Valknut – Workout Tracking

Valknut is a workout tracking application. It is designed to keep logs of any kind of weightlifting activity, such as barbell training, bodyweight training or dumbbell workouts.

The original motivation for this project was to serve as a personally tailored replacement to commercial offerings.

History

The initial product idea germinated in the year 2016 as a means to store personal health information safely on an individual desktop computer, away from prying eyes. To that end, the application never considered the possibility of multiple simultaneous users, authentication, or non-local persistence. Each individual user would store their own records in their personal file, which would be protected by the user’s own encryption key. The project was rather unimaginatively titled Fit Net.

After the project had remained shelved for a very long time, I picked it up again in 2020 to breathe some life back into it. Among other things, I changed it into a web application, since that was a platform I knew well by then, and renamed it to the much more distinguished Valknut, invoking imagery of Norse mythology, Viking warriors and Valhalla.

Does a workout even count if you don’t feel at the threshold of Valhalla by the time it’s complete?

Project Status

The product is in what I call a functionally usable state. It provides all necessary features to capture, store, retrieve, edit and delete the most essential facets of a weightlifting regimen. A basic summary report has been implemented.

The application still aspires to be reworked into a desktop-based product someday, and the architecture keeps that option open.

Architecture

Valknut follows classical MVC architecture. The application is separated into three projects: the user interface, the querying engine and the entity model classes. The website project contains the classes that implement the web controller interfaces, along with the views. The querying engine provides repositories, data-access abstractions, and query and filtering operations. The models project is a class library containing the entities that make up the domain model.

The following links expand upon select architectural aspects of the product.

Domain Model

Localisation

Validation

View Models

JSON Handler

Keeping Things Fast for Large n

Some years ago, we had customers reporting poor network response times when fetching content from the server. Our product was not anywhere near being wildly popular, and the number of records in the database was still counted in the tens of thousands. Even our puny instance was able to cache the entire database in memory. Not just a single table or the results of a few queries. The entire database could be cached in RAM. So the slowdown probably wasn't caused by something in the database. All customers reported more or less the same latency, irrespective of their geographical location, internet service provider or time of day. That also ruled out network problems.

So I rolled up my sleeves and began investigating.

One of the features of this product was that while it used sequential 128-bit integers for the primary key columns, data retrieval was done with a shorter 5-character identifier called a short-code. The short-code was also unique, but made up of fewer characters for legibility when users passed links around. The short-code wasn't appropriate for a primary key column though, as its randomness would cause too much index fragmentation.

The short-code was generated by hashing the primary key value with the MD5 algorithm and truncating the result to its first 5 characters. If there was a collision… well, nobody had thought about that back then. It was one of the subtle bugs that would come back to trouble us years later. But that's another story.

Someone had decided to implement this feature in the application code. When a content link was required, it was computed using .NET cryptography libraries and the result embedded into a URL string template for the user to share. The hashed value was not stored in the database, even though it was going to remain the same every time. And we would be paying heavily later for this oversight.
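
For illustration, the computation might have looked something like the sketch below; the helper name is hypothetical, not the actual production code.

using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

public static class ShortCode
{
    // Hash the upper-case GUID string with MD5 and keep the first 5 hex characters.
    // SQL Server renders uniqueidentifiers in upper case, which is what makes the
    // T-SQL version shown later produce the same values.
    public static string FromId(Guid id)
    {
        using (var md5 = MD5.Create())
        {
            var hash = md5.ComputeHash(Encoding.ASCII.GetBytes(id.ToString().ToUpperInvariant()));
            var hex = string.Concat(hash.Select(b => b.ToString("X2")));
            return hex.Substring(0, 5);
        }
    }
}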

Now since the application had no way to identify a record directly by its short-code, the developers had to come up with a Rube Goldberg-esque contraption to retrieve it again. For this, they fetched the ID column for all the content records, ran the MD5 function on each row, truncated the result and compared it to the value in the incoming request, until a match was found. The CS101 crowd already knows where this is going. Since everything is fast for small n, this technique worked flawlessly on the developers' own computers. It was only when the application was deployed to production, and stayed there for a few months, that the performance bottlenecks began to show up.
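
In rough ADO.NET terms, the lookup amounted to something like the sketch below, reusing the hypothetical ShortCode helper from above; the table and column names are illustrative.

using System;
using System.Data.SqlClient;

public static class NaiveLookup
{
    public static Guid? FindArticleIdByShortCode(string connectionString, string shortCode)
    {
        // Pull every primary key in the table across the network...
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT [Id] FROM [Article]", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                // ...then hash each one on the client until something matches.
                while (reader.Read())
                {
                    var id = reader.GetGuid(0);
                    if (ShortCode.FromId(id) == shortCode)
                    {
                        return id;
                    }
                }
            }
        }

        return null;
    }
}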

Locating the bug itself was easy. I set up a network trace using Wireshark, inspected the queries between the application and the database server, and promptly proceeded to fall off my chair in disbelief.

Even after excluding essential communications such as handshakes and authorisation, the application was still receiving almost half a megabyte of data, split across 400 packets, for a table containing only 31,000 rows. All this before it could even begin looking for a match.

This was going to require some re-engineering to fix.

For unrelated reasons, our goal was limited to reducing the amount of data received from the database server, with zero changes to the public API and no modifications to the database tables. We could only change the application code and deploy a new build. This code used ADO.NET and inline queries to perform data operations, so changing its behaviour was going to be relatively easy.

The first step was to assemble a query that could generate the MD5 hash of a primary key value.

SELECT HASHBYTES('MD5', CAST([Article].[Id] AS CHAR(36))) AS [Hash]
FROM [Article]
WHERE [Article].[Id] = '6BA1CE84-FDB1-EA11-8269-C038960D1C7A';

Since the HASHBYTES function in T-SQL works only on character or binary data, the uniqueidentifier had to be cast into a fixed-width char first. The output of this function was as follows.

-- 0x704E87BA59EB6F930C020E5D6DA6B444

This hash was then converted into a hexadecimal string using the CONVERT function (style 2 omits the 0x prefix), and finally truncated to its first 5 characters, resulting in the output shown below.

SELECT LEFT(
            CONVERT(
                CHAR(32),
                HASHBYTES(
                    'MD5',
                    CAST([Article].[Id] AS CHAR(36))
                ), 2), 5) AS [Hash]
FROM [Article]
WHERE [Article].[Id] = '6BA1CE84-FDB1-EA11-8269-C038960D1C7A';
-- 704E8

Cool!

Now came the retrieval by the short-code. The hash-computation query was nested inside another simple select query.

SELECT *
FROM
(
    SELECT [Article].[Id],
           [Article].[Name] AS [ArticleName],
           LEFT(CONVERT(CHAR(32), HASHBYTES('MD5', CAST([Article].[Id] AS CHAR(36))), 2), 5) AS [Hash]
    FROM [Article]
) [Article_]
WHERE [Article_].[Hash] = '704E8';

This was executed against the database and measured again using Wireshark.

The results were remarkably different, but not at all unexpected. Only 882 bytes of data were transferred between the database and the application, and of that, 630 bytes were the query string going into the database server. The only record the server now returned was 252 bytes long, and required no further processing in the application.
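
From the application side, the new lookup can be issued as a single parameterised ADO.NET command, roughly as sketched below; the names and the surrounding data-access plumbing are illustrative.

using System;
using System.Data.SqlClient;

public static class ShortCodeLookup
{
    private const string Sql = @"
        SELECT *
        FROM
        (
            SELECT [Article].[Id],
                   [Article].[Name] AS [ArticleName],
                   LEFT(CONVERT(CHAR(32), HASHBYTES('MD5', CAST([Article].[Id] AS CHAR(36))), 2), 5) AS [Hash]
            FROM [Article]
        ) [Article_]
        WHERE [Article_].[Hash] = @shortCode;";

    public static Guid? FindIdByShortCode(string connectionString, string shortCode)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(Sql, connection))
        {
            // The short-code arrives from the incoming request; at most one row comes back.
            command.Parameters.AddWithValue("@shortCode", shortCode);

            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                return reader.Read() ? reader.GetGuid(0) : (Guid?)null;
            }
        }
    }
}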

There was still a lot of processing going on in the database itself, which remained ripe for optimisation. Storing the short-code in the table permanently and indexing the column would improve the product's performance even further.
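
One way to do that, shown here for illustration only (it was not part of this fix), would be a persisted computed column with an index on it.

ALTER TABLE [Article]
    ADD [ShortCode] AS
        LEFT(CONVERT(CHAR(32), HASHBYTES('MD5', CAST([Id] AS CHAR(36))), 2), 5)
        PERSISTED;

CREATE NONCLUSTERED INDEX [IX_Article_ShortCode]
    ON [Article] ([ShortCode]);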

But for that moment, I was king of the world.

This story has been altered slightly to protect the guilty and gloss over irrelevant details. The performance bottleneck was made much worse by nested loops (yay, quadratic growth!) and suboptimal data types.

Runtime Resource Authorisation in ASP.NET MVC

The Authorize attribute is a feature of the ASP.NET MVC framework that programmers learn early on. While it is a good out-of-the-box solution for general cases, it doesn't work well for dynamic authorisation. Take the HTTP request shown below.

GET /posts/edit/12 HTTP/1.1
Host: www.example.com

In MVC parlance, this asks the PostsController to retrieve the contents of the post with ID 12 and display them in a form. The Authorize attribute cannot determine whether the currently logged-in user has been granted editing rights for that specific post. At best, operations are allowed based on roles or claims, which is still an all-or-nothing situation: either an individual user can edit all posts, or none at all.

Finer-grained control over individual resources for each user in the system requires a custom solution.

The system described below eschews the Authorize attribute entirely and instead uses filters in the ASP.NET request pipeline. It imposes the restriction that the name of the resource identifier parameter must always be well-known, such as id. Since the default route already follows this convention, this usually isn't a problem.
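
For reference, the convention comes from the default route registered by the standard ASP.NET MVC project template, where the {id} segment carries the resource identifier.

using System.Web.Mvc;
using System.Web.Routing;

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // The {id} segment is the well-known parameter name that the filters below look up.
        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional });
    }
}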

Identifying the What

The first piece of the puzzle is a custom marker attribute called SecuredAttribute. This class inherits from System.Attribute and is applied to methods. Any controller action method marked with this attribute is identified as a sensitive access point that requires some kind of screening before it is invoked.

But this attribute only identifies the method; it does not perform any screening on incoming requests. This is also why it doesn't inherit from any of the higher-level attributes in the MVC framework, such as ActionFilterAttribute.

public class SecuredAttribute : Attribute
{
}

The SecuredAttribute is used by applying it to the top of the controller method that needs runtime screening.

public class AdminController : Controller
{
    [Secured]
    public ActionResult Edit(int id)
    {
        …
    }
}

Implementing the How

The screening is performed by classes that implement IActionFilter. There can be multiple screening filters, and they are queued up in the GlobalFilterCollection during Application_Start(). The screening runs before the action method executes, implemented in the OnActionExecuting method of each filter class.

public class AuthorizationFilter : IActionFilter
{
    … 
    public void OnActionExecuting(ActionExecutingContext context)
    {
        var secured = context.ActionDescriptor.GetCustomAttributes(typeof(SecuredAttribute), false).FirstOrDefault();
        if (null == secured)
        {
            return;
        }

        var user = context.HttpContext.User;
        var param = context.ActionParameters.Where(p => p.Key == "id").FirstOrDefault();
        var id = Convert.ToInt32(param.Value);

        // Invoke a service to check if the request should be allowed
        var isAllowed = securityService.IsAllowed(user, id);
        if (!isAllowed)
        {
            context.Result = new HttpStatusCodeResult(HttpStatusCode.Unauthorized);
        }
    }
}

The filter looks for the [Secured] attribute. If the method being invoked doesn't have the attribute, the filter returns immediately and lets the method execution proceed. If the attribute is found, the filter performs a screening procedure to determine whether the request should be allowed. It may use an injected service class or even a third-party API to perform this check.

Since the attribute only identifies the method, it remains simple. Discrete behaviours can be attached to the same action method, and these can also depend on the request context (e.g. invocation through the web vs. an API), all while maintaining a clean separation of concerns.

Some of these techniques are shown below.

Extending Beyond Simple Authorisation

The marker attribute can also be leveraged for other cross-cutting requirements that are tangential to authorisation. A secured method may, for example, require an audit trail.

public class AuditTrailFilter : IActionFilter
{
    public void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var secured = filterContext.ActionDescriptor.GetCustomAttributes(typeof(SecuredAttribute), false).FirstOrDefault();
        if (null == secured)
        {
            return;
        }

        // Invoke a service to log the method access
        Logger.Info(…);
    }
}

The authorisation and audit trail filters can coexist and fire independently. They use the same marker to identify the methods, but perform widely different tasks with different resources at their disposal. AuditTrailFilter can be programmed to log requests to secured locations in one store and all other requests in another, while AuthorizationFilter always lets requests to unsecured locations through.
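
As a sketch, the two filters could be registered side by side during Application_Start() through the conventional FilterConfig class; construction of the security service dependency is left out for brevity.

using System.Web.Mvc;

public class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        // Both filters run on every request; each decides for itself whether
        // the [Secured] marker is present and what to do about it.
        filters.Add(new AuthorizationFilter());
        filters.Add(new AuditTrailFilter());
    }
}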

Another example is returning different responses to the client based on its type. When a request comes from a browser, its Accept header is set to text/html, while an API client such as a SPA or a mobile app sets it to application/xml or application/json. The WebAuthorizationFilter class returns the access-denied error as an HTML view, which the browser displays as a user-friendly error page.

public class WebAuthorizationFilter : IActionFilter
{
    … 
    public void OnActionExecuting(ActionExecutingContext context)
    {
        // Only handle browser requests; bail out if the client does not accept HTML
        var acceptTypes = HttpContext.Current.Request.AcceptTypes;
        if (!acceptTypes.Contains("text/html"))
        {
            return;
        }

        var secured = context.ActionDescriptor.GetCustomAttributes(typeof(SecuredAttribute), false).FirstOrDefault();
        if (null == secured)
        {
            return;
        }

        var user = context.HttpContext.User;
        var param = context.ActionParameters.Where(p => p.Key == "id").FirstOrDefault();
        var id = Convert.ToInt32(param.Value);

        // Invoke a service to check if the request should be allowed
        var isAllowed = securityService.IsAllowed(user, id);
        if (!isAllowed)
        {
            context.Result = new ViewResult()
            {
                ViewName = "AccessDenied",
            };
        }
    }
}

The ApiAuthorizationFilter class, on the other hand, returns HTTP status code 403 in the response, and the API client generates an appropriate error view on its side.

public class ApiAuthorizationFilter : IActionFilter
{
    … 
    public void OnActionExecuting(ActionExecutingContext context)
    {
        // Only handle API requests; bail out unless XML or JSON is accepted
        var acceptTypes = HttpContext.Current.Request.AcceptTypes;
        if (!acceptTypes.Contains("application/xml") && !acceptTypes.Contains("application/json"))
        {
            return;
        }

        var secured = context.ActionDescriptor.GetCustomAttributes(typeof(SecuredAttribute), false).FirstOrDefault();
        if (null == secured)
        {
            return;
        }

        var user = context.HttpContext.User;
        var param = context.ActionParameters.Where(p => p.Key == "id").FirstOrDefault();
        var id = Convert.ToInt32(param.Value);

        // Invoke a service to check if the request should be allowed
        var isAllowed = securityService.IsAllowed(user, id);
        if (!isAllowed)
        {
            context.Result = new HttpStatusCodeResult(HttpStatusCode.Forbidden);
        }
    }
}

Practical Design Patterns in C# – Bridge

The bridge pattern is designed to separate the implementation of a functionality from its interface. The benefits of this approach are seen when the functionality has multiple implementations which can be swapped out without changing the API. But the separation of concerns can also prove useful when the system is backed by only a single implementation. The client API can continue to remain stable even if the entire implementation changes, because the client is shielded from its effects.

The source code for this design pattern, and all the others, can be viewed in the Practical Design Patterns repository.

This example demonstrates the use of this pattern by building a playlist which stores and cycles through audio tracks. Tracks can be retrieved in linear or random order. The playlist can either stop after it has cycled over all the items, or loop back and begin afresh.

public class Playlist
{
    private readonly IPlaylistImpl _playlistImpl;

    public async Task PlayAsync()
    {
        …
        var item = _playlistImpl.Next();
        while (item != null)
        {
            // Perform an operation on the item.
            …

            // Pick the next item.
            item = _playlistImpl.Next();
        }
    }
}

This class defines the public API of the playlist. The client populates the audio tracks through the usual collection API (not shown here), after which it invokes the PlayAsync method to start iterating through the list. Once it reaches the end of the list, it stops.

This is coupled with the implementation, defined by the IPlaylistImpl interface, and referenced by the _playlistImpl field.

public interface IPlaylistImpl
{
    bool IsEmpty();

    string Next();

    void Reset();
}

This interface is implemented by the LinearPlaylistImpl and RandomizedPlaylistImpl classes, each of which approaches the collection of items differently. The linear playlist iterates over each audio track in the same order that they are stored in the items array.

public class LinearPlaylistImpl : IPlaylistImpl
{
    private readonly string[] _items;

    private readonly IEnumerator _enumerator;

    public LinearPlaylistImpl(IEnumerable<string> items)
    {
        _items = items.ToArray();
        _enumerator = _items.GetEnumerator();
    }

    public bool IsEmpty()
    {
        return _items.Length == 0;
    }

    public string Next()
    {
        if (_enumerator.MoveNext())
        {
            return _enumerator.Current as string;
        }

        return null;
    }

    public void Reset()
    {
        _enumerator.Reset();
    }
}

The randomized playlist picks an item at random from the list, marks it visited so it is not repeated, and stops after all audio tracks have been visited.

public class RandomizedPlaylistImpl : IPlaylistImpl
{
    private readonly List<string> _items;

    private readonly Random _random = new Random((int)DateTime.Now.Ticks);

    private readonly Queue<string> _usedItems;

    public RandomizedPlaylistImpl(IEnumerable<string> items)
    {
        _items = new List<string>(items);
        _usedItems = new Queue<string>();
    }

    public bool IsEmpty()
    {
        var c1 = _items.Count;
        var c2 = _usedItems.Count;

        return c1 + c2 == 0;
    }

    public string Next()
    {
        if (_items.Count > 0)
        {
            // Pick a random unvisited track and move it to the used queue.
            var index = _random.Next(_items.Count);
            var item = _items[index];
            _items.RemoveAt(index);
            _usedItems.Enqueue(item);

            return item;
        }

        return null;
    }

    public void Reset()
    {
        while (_usedItems.Count > 0)
        {
            var item = _usedItems.Dequeue();
            _items.Add(item);
        }
    }
}
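
With both implementations in place, a client wires the bridge together by handing the chosen implementation to the abstraction. The sketch below assumes that Playlist exposes a constructor accepting an IPlaylistImpl, which the elided members above hint at.

using System.Threading.Tasks;

public static class PlaylistDemo
{
    public static async Task RunAsync()
    {
        var tracks = new[] { "Intro.mp3", "Warmup.mp3", "Cooldown.mp3" };

        // Same abstraction, two interchangeable implementations.
        var linear = new Playlist(new LinearPlaylistImpl(tracks));
        var shuffled = new Playlist(new RandomizedPlaylistImpl(tracks));

        await linear.PlayAsync();
        await shuffled.PlayAsync();
    }
}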

Emergent Behaviour

The real magic of this approach becomes more evident once you add looping to the Playlist class. Since the effect of looping is the same on all implementations, it is best stored in the Playlist itself.

public class Playlist
{
    …
    public bool IsLooping
    {
        get;
        set;
    }
    …
}

When all items have been iterated through, the state of this property is checked. If looping is not enabled, the playback loop exits. If it is enabled, the playlist implementation is reset to its initial state and the iteration begins afresh.
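
A sketch of how the playback loop from the Playlist class might honour the new property; PlayItemAsync is a hypothetical stand-in for whatever the elided playback code actually does.

public async Task PlayAsync()
{
    var item = _playlistImpl.Next();
    while (item != null)
    {
        // Perform an operation on the item.
        await PlayItemAsync(item);

        // Pick the next item.
        item = _playlistImpl.Next();

        // End of the list: loop back to the start, or let the while condition end playback.
        // With IsLooping set, playback continues until it is stopped externally.
        if (item == null && IsLooping)
        {
            _playlistImpl.Reset();
            item = _playlistImpl.Next();
        }
    }
}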

Practical Design Patterns in C# – Proxy

The intent of this pattern is to substitute an object with a placeholder for any reason, such as to transform the input or output or to provide an alternative representation of the original. This pattern is quite similar to the adapter pattern. The difference is that an adapter changes the interface to suit the client’s needs, whereas a proxy mirrors the interface of its underlying object.

This example uses the SOAP client services framework that ships as part of the .NET Framework. SOAP messages are encoded as XML and can be transmitted over any transport, such as HTTP or SMTP. But parsing XML gets tedious and is prone to errors, and SOAP itself is a gargantuan pile of specifications that few people understand. That's why vendors provide tooling to generate language-specific bindings that abstract the XML documents away behind types and methods mirroring the web API. As a result, programmers can consume the service by writing statically-typed imperative code.

This example uses the Number Conversion Service available at the link below.

https://www.dataaccess.com/webservicesserver/numberconversion.wso?WSDL

This service exposes an operation called NumberToWords, which takes an instance of NumberToWordsRequest as a parameter and returns a NumberToWordsResponse. The request object has a Body property of type NumberToWordsRequestBody, which in turn encapsulates an unsigned integer holding the number to be converted. The response contains the string that spells out the value of the number in words.

This entire hierarchy is easily represented in classes that the wsdl.exe utility can generate automatically. The client consumes this API using C# language statements without having to directly interact with the service, XML or network protocols.

try
{
  var client = new NumberConversionSoapTypeClient();
  var response = await client.NumberToWordsAsync(100); // The response body carries the words "one hundred"
}
catch (Exception)
{
  // Handle the exception.
}

If you were to inspect the contents of the NumberConversionSoapTypeClient, you would see that it has a corresponding method for each operation that is described in the WSDL document for the service.

public class NumberConversionSoapTypeClient
{
  …
  public Task<NumberToWordsResponse> NumberToWordsAsync(ulong ubiNum)
  {
    NumberToWordsRequest inValue = new NumberToWordsRequest();
    inValue.Body = new NumberToWordsRequestBody();
    inValue.Body.ubiNum = ubiNum;
    return ((NumberConversionSoapType)(this)).NumberToWordsAsync(inValue);
  }
  …
}

The framework itself provides a general-purpose implementation of the method that invokes the API over the network, awaits its response, deserializes its contents into class instances and throws an exception in case an error occurs.

The proxy has 1:1 parity with the methods, and with the request and response types, that the service exposes.
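
To make the structure of the pattern explicit, a hand-written proxy over the generated client could mirror that same surface and slot in extra behaviour, such as a small cache in front of the network call. The sketch below is illustrative; the caching class is not part of the generated code.

using System.Collections.Concurrent;
using System.Threading.Tasks;

public class CachingNumberConversionProxy
{
    private readonly NumberConversionSoapTypeClient _client = new NumberConversionSoapTypeClient();

    private readonly ConcurrentDictionary<ulong, NumberToWordsResponse> _cache =
        new ConcurrentDictionary<ulong, NumberToWordsResponse>();

    // Same signature as the generated client, so callers cannot tell the difference.
    public async Task<NumberToWordsResponse> NumberToWordsAsync(ulong ubiNum)
    {
        if (_cache.TryGetValue(ubiNum, out var cached))
        {
            return cached;
        }

        // Delegate to the real client only when the result is not already cached.
        var response = await _client.NumberToWordsAsync(ubiNum);
        _cache[ubiNum] = response;
        return response;
    }
}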