Notes On Ray Tracing in One Weekend

Peter Shirley’s Ray Tracing in One Weekend had been on my reading list for a long time. From brief incursions into its first few chapters, I was aware of the intensely hands-on approach the book takes to its subject matter.

In addition to the mathematics, this also became a solid opportunity to hone my skills with C#, with several forays into implementing concepts from first principles. I also encountered some language-specific peculiarities that are uncommon in web development, along with others that arose from translating the C++ code in the book to C#.

This blog post contains an ongoing list of lessons learned during this exercise.

Console output can be redirected to a file

Now this I knew from earlier, but I’m adding it here for the sake of completeness. Anything that the program prints to the console (through Console.Write() or Console.WriteLine()) can be saved to a file by using the > operator in PowerShell (or most other shells).

MyApp.exe > output.log

The commonly used redirection operators are >, >> and N> (where N is a stream number), which are present in most shells across platforms. PowerShell also supports piping output to files through the Out-File and Tee-Object cmdlets.

The C# equivalent to std::cerr is Console.Error

The use of stderr is common in several environments because of its convenience and ubiquity. I’ve seen it less often in colloquial C# for runtime logging, where the ILogger interface has far more traction.

Console.Error is a TextWriter instance that writes to the console by default. It can be redirected to point to any other stream, such as a file or a network endpoint.

Console.Error.WriteLine("The Thing Broke");
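Redirecting the stream in code is done with Console.SetError(). A minimal sketch, where the file name is just an example:

```csharp
using System;
using System.IO;

var original = Console.Error;
using (var writer = new StreamWriter("errors.log"))
{
    // Point the error stream at a file instead of the console.
    Console.SetError(writer);
    Console.Error.WriteLine("The Thing Broke");

    // Restore the default writer before the file is closed.
    Console.SetError(original);
}
```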

But it’s also possible to use the 2> operator in PowerShell to redirect the standard error output to any other destination, which is far more convenient and flexible.

The unary minus (-) operator negates a value

Easy-peasy. Standard mathematical rules apply. When overloading the operator on your own types, though, the language imposes its own rules, which must be adhered to.

  1. The operator takes only 1 parameter (hence, unary).
  2. The parameter must be of the same type as the one that contains the operator declaration.
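The two rules above can be sketched with a small vector type (a stand-in for the book’s Vec3, not its actual implementation):

```csharp
public readonly struct Vec3
{
    public readonly double X, Y, Z;
    public Vec3(double x, double y, double z) => (X, Y, Z) = (x, y, z);

    // Rule 1: exactly one parameter.
    // Rule 2: the parameter must be of the enclosing type (Vec3).
    public static Vec3 operator -(Vec3 v) => new Vec3(-v.X, -v.Y, -v.Z);
}
```

With this in place, `-new Vec3(1, 2, 3)` yields (-1, -2, -3).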

The unary * operator does not multiply a number by itself

This was my “Doh!” moment. It’s a pointer dereference operator, of course. I had a temporary brain fade when I read this in the book at first. But the documentation set things right.

Compound assignment operators cannot be explicitly overloaded

All compound assignment operators are implicitly overloaded by overloading their corresponding binary operator. These operators are +=, -=, *=, /=, %=, &=, |=, ^=, <<=, >>= and >>>=.
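For instance, declaring only the binary + operator on a type makes += available automatically (the vector type here is illustrative):

```csharp
public readonly struct Vec2
{
    public readonly double X, Y;
    public Vec2(double x, double y) => (X, Y) = (x, y);

    // Only the binary operator is declared...
    public static Vec2 operator +(Vec2 a, Vec2 b) => new Vec2(a.X + b.X, a.Y + b.Y);
}

// ...yet `a += b` compiles, because it expands to `a = a + b`.
```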

The in modifier on a parameter is a poor man’s substitute for const

Applying the in modifier to a parameter turns it into a read-only reference. The compiler does not prevent the invocation of mutating methods on an in parameter, but when the parameter is a value type, the original instance is never modified; the method silently operates on a defensive copy.

public struct Apple
{
    public int Size;

    public void Bite() => Size -= 1;
}

public class Program
{
    public static void Main()
    {
        var fruit = new Apple { Size = 10 };
        Mutate(fruit);
        // fruit.Size is still 10 here.
    }

    private static void Mutate(in Apple fruit)
    {
        // fruit.Size--; // Won't compile
        fruit.Bite(); // Compiles, but operates on a defensive copy of fruit
    }
}

When the Bite() method is called, it transparently operates on a copy of the original object. The value of Size is never altered in the original instance. This can introduce subtle bugs for the unwary programmer. The const keyword in C++ comes with much stronger guarantees of immutability by preventing the invocation of non-const member functions altogether.

And remember, property getters and setters in C# are methods under the hood.

C# 10 adds support for global aliases

Declaring an alias allows the programmer to step around class name conflicts when using unalterable code (such as third-party libraries). Earlier versions of the language required that the alias be declared separately in each file where it was to be used. This is no longer necessary. The global modifier makes the alias available across the entire project.

global using NativeScrollBar = System.Windows.Forms.HScrollBar;
global using LibraryScrollBar = Vendor.Library.HScrollBar;

var a = new NativeScrollBar();
var b = new LibraryScrollBar();

This feature comes in handy when declaring the Color and Point3 types as aliases for the Vec3 type during the course of the book.

Method inlining is not guaranteed

Inline expansion is an optimisation technique that copies the body of a method to its call site. The compiler will almost always do a better job of selecting methods to inline than the programmer. Besides, C# does not have forced inlining. The JIT heuristics will always be the final authority on whether a method should be inlined, even when the programmer has explicitly decorated it with the MethodImplAttribute. So it’s best to leave the attribute out of your code.
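For completeness, this is what the hint looks like; as noted above, the JIT is free to ignore it (the method itself is just a stand-in):

```csharp
using System.Runtime.CompilerServices;

public static class VecMath
{
    // AggressiveInlining is a request, not a command; the JIT
    // heuristics still make the final call.
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public static double Dot(double ax, double ay, double bx, double by)
        => ax * bx + ay * by;
}
```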

Convert.ToInt32() is different from static_cast<int>

This was a significant stumbling block that had a material effect on the colour output. In C++, the expression static_cast<int>(255.999) truncates the fractional part, returning the value 255. The Convert.ToInt32() method in C# rounds to the nearest integer. So Convert.ToInt32(255.999) results in 256, which is outside the 8-bit range of allowed values for each RGB channel. This can give rise to some weird outputs, ranging from wildly incorrect colours, to mosquito noise patterns.

Math.Truncate() could have been used instead, because it truncates the fractional portion of the number and returns only the integral part. But the return type of this method is still a floating-point value (a double or a decimal, depending on the overload), which then has to be cast to an int anyway.

Eventually, the solution turned out to be simpler than anticipated. Since casting to int also discards the decimal portion, the call to Math.Truncate() can be excluded, and the type can be directly cast to an int.

(int)255.999; // Returns 255
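The three behaviours discussed above, side by side:

```csharp
using System;

Console.WriteLine(Convert.ToInt32(255.999)); // 256: rounds to the nearest integer
Console.WriteLine((int)255.999);             // 255: truncates toward zero
Console.WriteLine(Math.Truncate(255.999));   // 255: but the result is still a double
```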

Users Don’t Read Error Messages

I’m reminded of the story of the Microsoft usability test in which an error message was displayed saying, “There is a $50 bill taped to the bottom of your chair.” There was. It was still there at the end of the day.

Joel Spolsky

A particular staffer in a customer’s internal IT team was focused on keeping down incident metrics at all costs. They’d sooner wipe clean and reinstall an application to get the system back online as quickly as possible, than figure out why it suffered downtime to begin with.

But when one particularly gnarly ticket refused to go away in spite of their numerous attempts at reinstalling, they were forced to reach out to our support team for help. When the ticket eventually bubbled its way up to my desk, my first reaction was to ask for the error log. So it came as a surprise when the customer reported back that there was no log. That was impossible, because it had been our policy since forever to always log errors, and make them easy to access. It was one of those things that we got right early on.

When I remoted into the customer’s screen, this dialog was sitting there in plain sight.

Say what, now?

I proceeded to click the Details button, which showed an in-depth description of the error along with a helpful stack trace. In the end it turned out to be something completely external to the application. I think it was a misconfigured proxy, which had to be doubled right back to the customer’s own IT team to be resolved. The employee could have saved themselves a lot of time and anguish by being a bit more attentive.

Improved Application Performance with .NET 7

Speed has been a big selling point for .NET 7. Stephen Toub wrote an extremely lengthy and detailed article in August 2022 about all the performance improvements that went into it. And it’s ridiculously easy to take advantage of the speed boost in your own applications: in most cases, it only requires recompiling your project to target the new runtime. Developers who don’t make the switch are leaving a lot of performance gains on the table. I took some effort to measure the speed improvements by running some benchmarks on my own projects.

Shades is a .NET port of the Python module of the same name, for creating generative art. The module was originally authored by Ben Rutter. The library operates by decomposing any drawing operation, no matter how complex, into a series of single-pixel modifications, executed one at a time across the entire bitmap. To draw a line, the library first calculates every pixel that makes up the line, then iterates through that list, computing a colour value and making a discrete set-pixel call for each one. Drawing primitive shapes in solid colours is delegated to the much faster SkiaSharp library. But the more advanced effects, which have no equivalent operation in SkiaSharp, have to be implemented with custom routines.

So of course it is slow. And anything that makes it go faster is a welcome improvement.

For this exercise, I chose the PixelsInsideEdge method, which operates on the coordinates that make up the outer edge of a shape, and identifies the pixels that fall within that boundary. It uses a simplified implementation of a ray-casting algorithm to determine whether a point is within the shape boundary or falls outside of it.

public ICollection<SKPoint> PixelsInsideEdge(ICollection<SKPoint> edgePixels)
{
    // Maps each distinct value along the X axis, extracted from edgePixels,
    // to the sorted set of values along the Y axis that intersect with it.
    var xs = new SortedDictionary<int, SortedSet<int>>();
    int minX = int.MaxValue, minY = int.MaxValue, maxX = int.MinValue, maxY = int.MinValue;
    foreach (var e in edgePixels)
    {
        var ex = Convert.ToInt32(e.X);
        var ey = Convert.ToInt32(e.Y);
        maxX = Math.Max(maxX, ex);
        minX = Math.Min(minX, ex);
        maxY = Math.Max(maxY, ey);
        minY = Math.Min(minY, ey);

        if (xs.TryGetValue(ex, out var points))
            points.Add(ey);
        else
            xs[ex] = new SortedSet<int>() { ey };
    }

    // The points that make up the inside of the shape formed by the
    // bounds of edgePixels.
    var innerPixels = new List<SKPoint>();
    for (var x = minX; x <= maxX; x++)
    {
        var ys = xs[x];

        // Find the lowest values along the Y axis, i.e. values
        // that begin a run of edge pixels in this column.
        var temp = new SortedSet<int>();
        foreach (var y in ys)
            if (!ys.Contains(y - 1))
                temp.Add(y);

        // Cast a ray down the column; a pixel is inside the shape
        // when an odd number of edges has been crossed.
        var rayCount = 0;
        for (var y = temp.Min; y <= temp.Max; y++)
        {
            if (temp.Contains(y))
                rayCount++;

            if (rayCount % 2 == 1)
                innerPixels.Add(new SKPoint(x, y));
        }
    }

    return innerPixels;
}

It was executed on a circular shape with a radius of 2048 pixels, whose edge comprised 12869 points. The location of the shape on the canvas had no effect; negative coordinates were also calculated and stored in the result. The benchmark ran only the geometric computations, and did not perform any modifications on the image itself.

[Benchmark]
[ArgumentsSource(nameof(LargeDataSource))]
public ICollection<SKPoint> PixelsInsideEdgeCollection(SKPoint[] Edges) => shade.PixelsInsideEdge(Edges);

public static IEnumerable<SKPoint[]> LargeDataSource()
{
    var shade = new BlockShade(SKColor.Empty);

    yield return shade.GetCircleEdge(new SKPoint(0, 0), 2048.0f).ToArray();
}

The results of this benchmark are shown below.

dotnet run -c Release --filter PixelsInsideEdgeCollection --runtimes \
net5.0 net6.0 net7.0

// * Summary *

BenchmarkDotNet=v0.13.4, OS=Windows 11 (10.0.22621.1105)
AMD Ryzen 5 5600H with Radeon Graphics, 1 CPU, 12 logical and 6 physical cores
.NET SDK=7.0.102
[Host] : .NET 5.0.17 (5.0.1722.21314), X64 RyuJIT AVX2
Job-SOHORL : .NET 5.0.17 (5.0.1722.21314), X64 RyuJIT AVX2
Job-PPCKCW : .NET 6.0.13 (6.0.1322.58009), X64 RyuJIT AVX2
Job-EBVIPG : .NET 7.0.2, X64 RyuJIT AVX2

|           Method |  Runtime |          Edges |     Mean | Ratio |
|----------------- |--------- |--------------- |---------:|------:|
| PixelsInsideEdge | .NET 5.0 | SKPoint[12869] | 217.7 ms |  1.00 |
| PixelsInsideEdge | .NET 6.0 | SKPoint[12869] | 213.2 ms |  0.98 |
| PixelsInsideEdge | .NET 7.0 | SKPoint[12869] | 161.9 ms |  0.75 |

But what does this mean for the performance of the application as a whole? To test this, I ran Doom Fire, which replicates the effect from the title screen of the classic video game. I generated a 5 second animated sequence of images in sizes of 128 × 128, 256 × 128, 512 × 128 and 1024 × 128 pixels.

These videos show the output of the effect at 128 × 128, 256 × 128, and 512 × 128 pixels.

The height of the image was restricted to minimise the amount of empty pixels being pushed during the test. Time elapsed was measured with an instance of System.Diagnostics.Stopwatch. Each test was first run 5 times to warm up the JIT and allow the progressive compiler to thoroughly kick in and optimise the code. The results of these warm-up tests were discarded. Then 3 further iterations were run, and the fastest result was considered for comparison.
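The measurement methodology described above can be sketched like this (the harness and its names are illustrative, not the actual test code):

```csharp
using System;
using System.Diagnostics;

public static class Harness
{
    public static TimeSpan Measure(Action render, int warmUps = 5, int runs = 3)
    {
        // Discarded warm-up runs give the JIT a chance to optimise the hot paths.
        for (var i = 0; i < warmUps; i++) render();

        // Keep only the fastest of the measured runs.
        var best = TimeSpan.MaxValue;
        for (var i = 0; i < runs; i++)
        {
            var sw = Stopwatch.StartNew();
            render();
            sw.Stop();
            if (sw.Elapsed < best) best = sw.Elapsed;
        }
        return best;
    }
}
```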

|  Runtime |           Size |    Time |
|--------- |---------------:|--------:|
| .NET 5.0 |   128 x 128 px | 13.59 s |
| .NET 5.0 |   256 x 128 px | 13.74 s |
| .NET 5.0 |   512 x 128 px | 25.67 s |
| .NET 5.0 |  1024 x 128 px | 49.17 s |
| .NET 7.0 |   128 x 128 px | 13.35 s |
| .NET 7.0 |   256 x 128 px | 13.52 s |
| .NET 7.0 |   512 x 128 px | 23.23 s |
| .NET 7.0 |  1024 x 128 px | 44.53 s |

This wasn’t quite as stark an improvement as in the standalone microbenchmark. A single method executed in quick succession has vastly different performance characteristics from a complete application. This benchmark also saved each frame to disk, which adds an I/O penalty. As expected, smaller data sets showed minimal variation. The numbers started diverging as the magnitude of data increased.


Based on these benchmarks, .NET 5 and 6 show similar performance. With .NET 5 now out of support, and .NET 7 earmarked as a short-term support release, developers are likely to upgrade to .NET 6 instead, which enjoys an additional six months of support over .NET 7. Many customers would accept the tradeoff of slower runtime execution for longer official support. If your application is performance sensitive, moving to .NET 7 is a no-brainer. But even otherwise, if you can accept the shorter support cycle (and .NET 8 will certainly have been released by then), it may be a worthwhile investment to target .NET 7 instead.

Breaking Circular Dependencies in Microsoft DI with Lazy Resolution

The service locator pattern is understandably looked down upon. I’m not the kind to get swept away with hubris, but injecting a service provider into other classes, and telling them to pick whatever they like out of it is just global variables with more steps.

There are certain cases where this is the most straightforward solution to a problem. But this anti-pattern can still be avoided with a bit of thought. This is a real-world scenario that I faced a few months back.


A wizard is a UI pattern used to guide the user through the stages of a process (license key verification and user registration, in this case). I wrote a simple application that would lead the user through each step.

  1. Welcome and short instruction note.
  2. License Key Entry and verification.
  3. Registration Form input and validation.
  4. Review form input.
  5. Completion.

But there may be cases when somebody might need to change their license key after it has been entered. For this, the application had a button to go back to the previous screen.

@startuml
object Welcome
object "License Key Entry" as LicenseKeyEntry
object "Registration Form" as RegistrationForm
object Review
object Completed

Welcome -down-> LicenseKeyEntry

LicenseKeyEntry -right-> RegistrationForm : Next
RegistrationForm -left-> LicenseKeyEntry : Back

RegistrationForm -down-> Review : Next
Review -up-> RegistrationForm : Back

Review -> Completed : Register
@enduml

Each screen was implemented in its own class as a window object. Ideally, each screen object would be injected wherever needed through Microsoft Dependency Injection. The Welcome class would receive a reference to License Key Entry, which would receive a reference to the Registration Form, and so on. But some of the screens also needed a reference to the previous screen: Registration Form would require a reference to License Key Entry, and Confirmation would require a reference to the Registration Form.

This would create a circular dependency, which is not supported by Microsoft Dependency Injection.

public class LicenseKeyEntry : Form
{
    public LicenseKeyEntry(RegistrationForm next) { … }
}

public class RegistrationForm : Form
{
    public RegistrationForm(LicenseKeyEntry previous, Confirmation next) { … }
}

private static void ConfigureHostServices(HostBuilderContext _, IServiceCollection services)
{
    services.AddSingleton<RegistrationForm>(); // Welp! Circular dependency!
}

System.InvalidOperationException: A circular dependency was detected for the service of type 'LicenseKeyEntry'.

My first instinct was that this was an impossible problem to solve. Classes need references. References must come from the DI container. The container won’t allow creating circular dependencies between types. Just hand over a reference to the service provider to the objects, and let them ask for whatever objects they need at runtime.

public class RegistrationForm : Form
{
    private IServiceProvider _services;

    // Don't do this!
    public RegistrationForm(IServiceProvider services)
    {
        _services = services;
    }

    public void Next()
    {
        var next = _services.GetRequiredService<Confirmation>();
    }

    public void Previous()
    {
        var previous = _services.GetRequiredService<LicenseKeyEntry>();
    }
}

But this reeks of a code smell. The class was now bound to the types of a specific dependency injection framework. Switching to another framework, or removing it altogether, would be difficult. Testing and mocking became more complicated, because the unit tests would now have to construct a DI container holding the mock types.

A Workaround

While the approach shown above is not advisable, it does demonstrate the strategy of using lazy resolution to break circular dependencies. .NET contains a built-in Lazy<T> type that can be leveraged for the job.

Begin by changing the constructor signatures to use Lazy<T>, where T is the actual type that is required.

public RegistrationForm(Lazy<LicenseKeyEntry> previous, Lazy<Confirmation> next) { … }

Then, inject the Lazy types into the service container.
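The original registration code isn’t reproduced here, so this is a sketch of one possible way to do it, reusing the ConfigureHostServices signature from the earlier snippet. Each Lazy<T> is registered with a factory that defers resolution of T until .Value is accessed.

```csharp
private static void ConfigureHostServices(HostBuilderContext _, IServiceCollection services)
{
    services.AddSingleton<LicenseKeyEntry>();
    services.AddSingleton<RegistrationForm>();
    services.AddSingleton<Confirmation>();

    // Each Lazy<T> factory defers resolution of T until .Value is accessed,
    // so no constructor needs a fully built instance of its neighbour.
    services.AddTransient(p => new Lazy<LicenseKeyEntry>(() => p.GetRequiredService<LicenseKeyEntry>()));
    services.AddTransient(p => new Lazy<RegistrationForm>(() => p.GetRequiredService<RegistrationForm>()));
    services.AddTransient(p => new Lazy<Confirmation>(() => p.GetRequiredService<Confirmation>()));
}
```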


This removes the circular dependency because these types don’t have a direct reference to each other. But it’s important that the value of the dependent types is not realised in the constructor. They should continue to hold a reference to the Lazy<T> type instead, and only reify the underlying instance later down the line.

public class RegistrationForm : Form
{
    private Lazy<LicenseKeyEntry> _previous;
    private Lazy<Confirmation> _next;

    public RegistrationForm(Lazy<LicenseKeyEntry> previous, Lazy<Confirmation> next)
    {
        _previous = previous;
        _next = next;
    }

    public void Next()
    {
        var next = _next.Value;
    }

    public void Previous()
    {
        var previous = _previous.Value;
    }
}


Thomas Levesque wrapped this into an elegant implementation that is exposed by a simple extension method on the service collection. The full article is available on his blog.

public static class ServiceCollectionExtensions
{
    public static IServiceCollection AddLazyResolution(this IServiceCollection services)
    {
        return services.AddTransient(typeof(Lazy<>), typeof(LazilyResolved<>));
    }

    private class LazilyResolved<T> : Lazy<T>
    {
        public LazilyResolved(IServiceProvider serviceProvider)
            : base(serviceProvider.GetRequiredService<T>)
        {
        }
    }
}
Porting a Windows Forms Application to .NET – Part 2

Previously, I described the legacy of the Vertika Player, the bottlenecks in its initial development, some elementary efforts to refactor the code, and a major roadblock that came from the deprecation of the Flash Player.

Once we decided to move to .NET, work began in earnest. Enthusiasm was running high, and nothing seemed impossible. Don’t mind the old code. It was trash. We’d rewrite everything from scratch! Heck, we were so smart, we could do it twice over. But this fervour lasted all of 15 minutes before we folded up and rolled back all our changes.

I’m exaggerating, of course. We did write a new application shell from scratch, using newer APIs like the Microsoft.Extensions.Hosting.IHost interface and its dependency injection framework for some of the most fundamental classes that were needed. But there was immense pressure to get the product out of the door. Remember that the Flash Player uninstaller was well and truly active now, and support staff were working overtime to keep up with restoring it every time it got nuked. After a couple of weeks of this effort, the enormity of the exercise hit us and we fell back to copying files wholesale from the old code-base. On the brighter side, in spite of rewriting only a small portion of the code, the groundwork had been laid for more significant breakthroughs in the near future.

The singular monolith had been deconstructed into separate projects based on functionality, such as the primary executable, model classes, and the web API host. Over the following months, we added more projects to isolate critical portions of data synchronisation, the network download library (eventually replaced by the Downloader library, written by Behzad Khosravifar) and background services.

A SOAP-y Muddle

There’s a significant chunk of the application functionality that depends on communicating with a remote SOAP service (stop sniggering; this stuff is from 15 years ago). Visual Studio continues to provide tools to automatically generate a SOAP client from a service. But the output does not maintain API compatibility with the client generated with earlier versions of the tool. Among the methods missing from the new client are the synchronous variants of the service calls, which, unfortunately, were a mainstay in the earlier application code.

That’s right. Microsoft used to ship tools that allowed developers to make synchronous network calls.

But this is all gone now. And I had a problem on my hands.

public void ClickHandler(object sender, MouseEvent e)
{
    var users = serviceClient.GetUsers();
}

CS1061 'ServiceClient' does not contain a definition for 'GetUsers' and no accessible extension method 'GetUsers' accepting a first argument of type 'object' could be found (are you missing a using directive or an assembly reference?)


Calling asynchronous code from previously written synchronous methods was not going to be easy. Web service calls were tightly integrated in many classes that were still oversized even after aggressive refactoring. Stephen Cleary’s AsyncEx library came to our rescue. Asynchronous method invocations were wrapped inside calls to AsyncContext.Run(), which we liberally abused to build a synchronous abstraction over the TAP interface.

public void ClickHandler(object sender, MouseEvent e)
{
    var users = AsyncContext.Run(() => serviceClient.GetUsersAsync());
}

This was much better than the alternative of calling Wait() or Result directly on the task. In addition to blocking what could potentially be the UI thread, it would also wrap any exceptions thrown during the invocation into an additional AggregateException. And anybody who’s dealt with InnerException hell knows how bad that can be.
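The wrapping behaviour is easy to demonstrate with plain tasks; the failing service call below is a stand-in, not real application code:

```csharp
using System;
using System.Threading.Tasks;

static async Task<int> GetUsersCountAsync() // stand-in for a real service call
{
    await Task.Yield();
    throw new InvalidOperationException("service unavailable");
}

string caught = "";
try
{
    var _ = GetUsersCountAsync().Result; // blocks, and wraps the failure
}
catch (AggregateException ex)
{
    // The real error is buried one level down.
    caught = ex.InnerException?.GetType().Name ?? "";
}
```

Had the call been awaited (or run through AsyncContext.Run), the original InvalidOperationException would have surfaced directly instead.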

The second API incompatibility was in the collection types returned from the service. The earlier service client returned a System.Data.DataSet type for collections. This was changed to a new type called ArrayOfXElement. Fortunately, this was an easy fix with a simple extension method to convert the ArrayOfXElement into a DataSet.

Wrapping Up and Rolling Out

The hour of reckoning arrived about 5 months later, when we finally removed references to the Flash Player ActiveX control from the project, replacing them with the WebView2 control. The minuscule amount of effort required for this step belies the enormity of its significance. Flash had been our rendering mainstay for over a decade. All those thousands of man-hours invested into the application were gone in the blink of an eye, replaced with a still-wet-behind-the-ears port to Blazor. This was the first time in the history of the company that legacy code was discarded entirely, and rewritten from scratch on a clean slate.

The new product was deployed to several test sites for a month to ensure that everything worked as expected. Other than a few basic layout errors, we encountered no problems. The porting exercise was a success, and offered a major lifeline to the business.