Pipe: Another approach for chaining methods in C#

A colleague recently linked me to this post and we got talking about how Ramda’s pipe function could be implemented in C#. We already have similar functionality using the Map operator I wrote about here. However, this was a fun exercise and may be preferred by some people.

Piping

The pipe function in Ramda takes a number of functions and produces a new function which, when called, executes the given functions from left to right, passing the output of each function as the input to the next.

Pipe is a little difficult to write in C# because of the stricter typing. We need a way of preserving types through the pipe without making it inconvenient to use or hard to read.

I decided to start with a 3 function pipe and worry about extending it out at a later date. My first attempt looked like this:

public static Func<TIn, TOut> Pipe<TIn, TInOut1, TInOut2, TOut>(
    Func<TIn, TInOut1> func1,
    Func<TInOut1, TInOut2> func2,
    Func<TInOut2, TOut> func3)
    => input => func3(func2(func1(input)));

This does work but has a bit of a drawback. When used in code you get this:

var pipeFunc = Pipe(
    (Func<string, int>)int.Parse,
    Negate,
    Increment);
pipeFunc("123"); // Returns -122

As you can see, the first function needs an explicit cast to Func<string, int>. This is because C# struggles to infer delegate types from method groups. It isn’t the worst thing in the world, but it could be far better.
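
Negate and Increment aren’t shown in the post; for these examples you could imagine them as simple static methods along these lines (with double overloads assumed for the Math.Pow example later on):

public static int Negate(int x) => -x;
public static int Increment(int x) => x + 1;

// Assumed overloads for the Math.Pow example further down:
public static double Negate(double x) => -x;
public static double Increment(double x) => x + 1;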

I knew how I could resolve this, but it loses a little bit of functionality. If we make Pipe an extension method then TIn can be inferred from the value it’s called on, so the cast disappears. However, it also means that we no longer get a function back, so we can’t build the pipe once and re-use it.
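
To illustrate what we’d be giving up, the pipeFunc built by the static version above can be stored and called as many times as we like:

pipeFunc("7");  // Returns -6
pipeFunc("50"); // Returns -49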

The extension method looks like this:

public static TOut Pipe<TIn, TInOut1, TInOut2, TOut>(
    this TIn input,
    Func<TIn, TInOut1> func1,
    Func<TInOut1, TInOut2> func2,
    Func<TInOut2, TOut> func3)
    => func3(func2(func1(input)));

Which looks like this when called:

"123".Pipe(
    int.Parse,
    Negate,
    Increment); // Returns -122

I personally prefer the look of this due to the lack of a cast, but it does have the previously mentioned problem.

Multiple parameters

Both approaches above recreate some of the functionality of Ramda’s pipe, but still lack a major component. Ramda’s pipe is able to call the first function with any number of parameters. The example given in Ramda’s documentation is:

const f = R.pipe(Math.pow, R.negate, R.inc);

f(3, 4); // -(3^4) + 1

The C# code as it stands couldn’t cope with this. However, it’s perfectly possible to extend the functions.

The static function looks like this:

public static Func<TIn1, TIn2, TOut> Pipe<TIn1, TIn2, TInOut1, TInOut2, TOut>(
    Func<TIn1, TIn2, TInOut1> func1,
    Func<TInOut1, TInOut2> func2,
    Func<TInOut2, TOut> func3)
    => (input1, input2) => func3(func2(func1(input1, input2)));

var pipeFunc = Pipe(
    (Func<double, double, double>)Math.Pow,
    Negate,
    Increment);
pipeFunc(3, 4); // Returns -80.0

And the extension method looks like this:

public static class PipeExtensionMethods
{
    public static TOut Pipe<TIn1, TIn2, TInOut1, TInOut2, TOut>(
        this (TIn1, TIn2) input,
        Func<TIn1, TIn2, TInOut1> func1,
        Func<TInOut1, TInOut2> func2,
        Func<TInOut2, TOut> func3)
        => func3(func2(func1(input.Item1, input.Item2)));
}

(3.0, 4.0).Pipe(
    Math.Pow,
    Negate,
    Increment); // Returns -80.0

As you can see, the static method still has the issue of requiring a cast. However, it otherwise works much like Ramda’s pipe.

The extension method once again doesn’t require the cast but parameters now need passing in via a tuple.

Different numbers of functions and parameters

So far I’ve just been dealing with 3 functions with the first function taking 1 or 2 parameters. However, Ramda can deal with any number of functions with any number of parameters.

It’d be great if we could just use C#’s params keyword to take in any number of functions. However, params wouldn’t let us preserve the types flowing between the functions, which would lead to all sorts of trouble when trying to use Pipe. So, we need multiple overloads.

We can’t provide an unlimited number of functions in a pipe, but let’s assume that no-one is going to want more than 20; well before that point the code would become difficult to read. Similarly, let’s assume that no first function will take more than 10 parameters, for the same reason. Even with those limits, that’s 200 overloads for each flavour of Pipe (static and extension method). Now imagine we need to make a small change to the pattern; updating them all by hand is not going to be fun.

T4 text templates

The solution to this (at least in Visual Studio) is to use a text template to generate the code for us. Normally I shy away from these, but I think in this scenario they’re justified.

The template for the static functions looks like this:

<#@ template debug="false" hostspecific="false" language="C#" #>
<#@ output extension=".cs" #>
namespace awsxdr.Pipe
{
    using System;

    public static class PipeOperators
    {
    <# for(var i = 0; i < 20; ++i) { #>
        <# for(var j = 0; j < 10; ++j) { #>
        public static Func<<# WriteInTypes(j); #>TOut> Pipe<<# WriteInTypes(j); #><# for(var k = 0; k < i; ++k) { #>TInOut<#= k+1 #>, <# } #>TOut>(<# for(var k = -1; k < i; ++k) { #>Func<<# if(k == -1) { if(j == 0) { #>TIn, <# } else { for(var l = 0; l <= j; ++l) { #>TIn<#= l + 1 #>, <# } } } else { #>TInOut<#= k + 1 #>, <# } if(k == i - 1) { #>TOut<# } else { #>TInOut<#= k + 2 #><# } #>> func<#= k + 2 #><# if (k < i - 1) { #>, <# } #><# } #>)
            => (<# WriteInputVariables(j); #>) => <# for(var k = 0; k <= i; ++ k) { #>func<#= (i - k) + 1 #>(<# } WriteInputVariables(j); for(var k = 0; k <= i; ++k) { #>)<# } #>;
        <# } #>
    <# } #>
    }
}
<#+
    private void WriteInTypes(int j)
    {
        if(j == 0) 
        { 
            #>TIn, <#+
        } 
        else
        { 
            for(var k = 0; k <= j; ++k)
            {
                #>TIn<#= k + 1 #>, <#+
            }
        }
    }

    private void WriteInputVariables(int j)
    {
        if(j == 0) 
        {
            #>input<#+
        } 
        else
        { 
            for(var k = 0; k <= j; ++k)
            { 
                #>input<#= k + 1 #><#+
                if(k < j) { #>, <#+ }
            }
        }
    }
#>

And the template for the extension methods looks like this:

<#@ template debug="false" hostspecific="false" language="C#" #>
<#@ output extension=".cs" #>
namespace Pipe
{
    using System;

    public static class PipeExtensionMethods
    {
    <# for(var i = 0; i < 20; ++i) { #>
        <# for(var j = 0; j < 10; ++j) { #>
        public static TOut Pipe<<# WriteInTypes(j); for(var k = 0; k < i; ++k) { #>TInOut<#= k+1 #>, <# } #>TOut>(this <# WriteInTypesForTuple(j); #> input, <# for(var k = -1; k < i; ++k) { #>Func<<# if(k == -1) { WriteInTypes(j); } else { #>TInOut<#= k + 1 #>, <# } if(k == i - 1) { #>TOut<# } else { #>TInOut<#= k + 2 #><# } #>> func<#= k + 2 #><# if (k < i - 1) { #>, <# } #><# } #>)
            => <# for(var k = 0; k <= i; ++ k) { #>func<#= (i - k) + 1 #>(<# } WriteInputVariables(j); for(var k = 0; k <= i; ++k) { #>)<# } #>;
        <# } #>
    <# } #>
    }
}
<#+
    private void WriteInTypes(int j)
    {
        if(j == 0) 
        { 
            #>TIn, <#+
        } 
        else
        { 
            for(var k = 0; k <= j; ++k)
            {
                #>TIn<#= k + 1 #>, <#+
            }
        }
    }

    private void WriteInTypesForTuple(int j)
    {
        if(j == 0) 
        { 
            #>TIn<#+
        } 
        else
        { 
            #>(<#+
            for(var k = 0; k <= j; ++k)
            {
                #>TIn<#= k + 1 #><#+
                if(k < j) { #>, <#+ }
            }
            #>)<#+
        }
    }

    private void WriteInputVariables(int j)
    {
        if(j == 0) 
        {
            #>input<#+
        } 
        else
        { 
            for(var k = 0; k <= j; ++k)
            { 
                #>input.Item<#= k + 1 #><#+
                if(k < j) { #>, <#+ }
            }
        }
    }
#>
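
For reference, the generated overloads follow the same shape as the hand-written methods above. For instance, the output for two functions whose first function takes two parameters should look roughly like this (static version first, then the extension method):

public static Func<TIn1, TIn2, TOut> Pipe<TIn1, TIn2, TInOut1, TOut>(
    Func<TIn1, TIn2, TInOut1> func1,
    Func<TInOut1, TOut> func2)
    => (input1, input2) => func2(func1(input1, input2));

public static TOut Pipe<TIn1, TIn2, TInOut1, TOut>(
    this (TIn1, TIn2) input,
    Func<TIn1, TIn2, TInOut1> func1,
    Func<TInOut1, TOut> func2)
    => func2(func1(input.Item1, input.Item2));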

Conclusion

I personally prefer using Map and Tee, but this provides another way of piping functions together. C#’s typing constraints mean that we can’t be quite as expressive as in ES, but this approximates what Ramda’s pipe function does.

In future it might be worth looking at allowing currying within these pipes. I believe this is possible, but it would massively increase the number of overloads.

Functional Programming in C# – Part 1: Method Chaining

This is the first part in what is currently planned to be a 4-part series on functional programming, though I may end up adding more parts in future as there are other functional topics I could cover.

This first post is greatly influenced by Dave Fancher’s Functional C#: Fluent Interfaces and Functional Method Chaining but with a few additions of my own that I’ve found useful.

I’m not going to go into the history of functional programming or try to extol its virtues; there are plenty of sources out there that do that, and I assume that if you’re here reading this then you’ve already seen some of them. However, I will say that I’ve been using functional-style C# in a commercial setting for 4 years now and have found it to be a powerful tool for writing clean, reliable code.

Map and Tee

Functional languages can chain functions together in such a way that each one acts on the output of the previous one. For example, this F# expression evaluates to 13:

10
|> Add 1
|> Add 5
|> Subtract 3

This behaviour is similar to the fluent syntax found in some libraries but can be applied without having to design the class specially for it.

This is achieved using a couple of generic extension methods called Map and Tee.

Map

public static TOut Map<TIn, TOut>(
    this TIn @this,
    Func<TIn, TOut> func)
    =>
    func(@this);

Map takes a function and applies it to the value that Map is called on, returning the result of the function call.

This gives us enough to replicate what we saw in the F# example above:

10
.Map(Add(1))
.Map(Add(5))
.Map(Subtract(3));

I’ll deal with exactly how the Add and Subtract methods are defined shortly. But for now we can see how Map has allowed us to replicate the F# example.

However, we run into an issue if we want to output this result to the console: Console.WriteLine has no return value, so it can’t be used as an argument to Map. F#’s |> operator, on the other hand, will happily accept Console.WriteLine.

We could get around this by writing a wrapper function for Console.WriteLine which returns the same value passed in. Such a wrapper might look something like this (a hypothetical helper, shown only to make the repetition concrete):
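
public static int WriteLineAndReturn(int value)
{
    // Log the value, then hand it back unchanged so the chain can continue.
    Console.WriteLine(value);
    return value;
}

However, this would mean writing the same boilerplate over and over for every void method we want to chain with Map. To solve this, we have Tee.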

Tee

public static TInOut Tee<TInOut>(
    this TInOut @this,
    Action<TInOut> func)
{
    func(@this);
    return @this;
}

Tee accepts an action, calls it with the value that Tee is called on, and then returns the original value. This allows us to chain logging functions like Console.WriteLine into our code.

For example:

10
.Map(Add(1))
.Map(Add(5))
.Map(Subtract(3))
.Tee(Console.WriteLine);

So that’s it, right? Well, not quite.

There’s also a use of Tee for when you want to call a side-effecting function that does return a value, but you just want to carry on with the current value. This is arguably less functionally pure, but in practice it’s extremely useful given the nature of working in a functional style in a traditionally OO language.

So, we add an overload of Tee that looks like this:

public static TIn Tee<TIn, TOut>(
    this TIn @this, 
    Func<TIn, TOut> func)
{
    func(@this);
    return @this;
}
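
As a quick sketch of where this comes in handy, HashSet<int>.Add returns a bool that we rarely care about mid-chain; this overload lets us call it and carry on with the original value (Add(1) here is the curried helper from the earlier examples):

var seen = new HashSet<int>();

10
.Map(Add(1))
.Tee(seen.Add)           // HashSet.Add returns a bool; we ignore it and keep the int
.Tee(Console.WriteLine); // Prints 11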

Dealing with IDisposable

The above is all fine when dealing with normal objects, but we hit a slight issue with disposable ones. If we just chained them in with Map they would soon get lost in the chain and never be disposed correctly.

We could just break out the bits of code which use disposable objects and wrap them in using statements, but then we’re just back to standard procedural programming. So, we add a method called Use.

Use

public static TOut Use<TDisposable, TOut>(
    this TDisposable @this,
    Func<TDisposable, TOut> func)
where TDisposable : IDisposable
{
    using (@this)
        return func(@this);
}

Use takes a disposable object and passes it to a function. It then disposes the object and returns the result of the function.

This allows us to chain using statements right into the middle of other functions without risking their Dispose methods not getting called. For example, imagine we wanted a function that read a name from a file and then output a string saying hello to that name:

path
.Map(File.OpenRead)
.Use(stream => new StreamReader(stream)
    .Use(reader => reader.ReadToEnd()))
.Map(x => $"Hello, {x}");

Ideally we wouldn’t have this as a single function as it does too much, but I think it serves as an example of Use.

Aligning to LINQ

Those of you familiar with LINQ will have spotted the similarity between the above functions and functions like Select. This isn’t by accident: Map, Tee and Use are designed to work well in conjunction with LINQ.

There are a couple more functions needed to fully round out the functional toolkit and integrate it with LINQ, though. These are ForEach and Evaluate.

ForEach

public static IEnumerable<TInOut> ForEach<TInOut>(
    this IEnumerable<TInOut> @this,
    Action<TInOut> func)
    =>
    @this.Select(x => x.Tee(func));

public static IEnumerable<TInOut> ForEach<TInOut, TIgnored>(
    this IEnumerable<TInOut> @this,
    Func<TInOut, TIgnored> func)
    =>
    @this.Select(x => x.Tee(func));

A ForEach is often a standard extension method in projects. This one is designed to fit in with the LINQ functions and so uses Select to produce an IEnumerable without enumerating the collection.

In my experience, this laziness needs communicating clearly to the rest of the project team, as some people expect ForEach to enumerate immediately.

I find it’s also useful to have another method which performs the enumeration without returning a value, just so the intention is clear. This is the Evaluate method.

Evaluate

public static void Evaluate<T>(
    this IEnumerable<T> @this)
    =>
    @this.ToList();

As you can see, this is just a wrapper around ToList. However, it makes it clearer that we want the enumeration for its side effects and don’t need the resulting list.
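
To make the laziness concrete, this little sketch prints nothing until Evaluate forces the enumeration:

var numbers = new[] { 1, 2, 3 };

var pipeline = numbers.ForEach(Console.WriteLine); // Nothing printed yet

pipeline.Evaluate(); // Enumeration happens here, printing 1, 2, 3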

Comparison of Functions

Now we have all the functions we need (for now), we can see how they relate to each other and to existing LINQ methods.

Map operates on a value and transforms it into another by applying the given function to it. This is much like Select, which applies a function to each element of a collection. Some people like Map to operate on either a single item or a whole collection, effectively replacing Select; I prefer to keep Map for operating on the value as a whole (even when that value is a collection) and retain Select for operating on each item.

Similarly, Tee and ForEach call a function with the current value (or each value in a collection) as the parameter and then return that value unchanged.

Currying

The final topic I’ll discuss in this post is the idea of currying.

Currying is where a function which takes multiple parameters is converted into a sequence of functions which each take a single parameter and build up the result. This allows us to use structures like Map and Tee to call these multi-parameter functions.
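
As a quick illustration of the idea (a minimal sketch, separate from the Add function used below):

// An ordinary two-parameter function:
Func<int, int, int> add = (x, y) => x + y;

// Its curried equivalent: a function taking x that returns a function waiting for y.
Func<int, Func<int, int>> addCurried = x => y => x + y;

addCurried(1)(10); // 11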

Functional languages give you this without any special syntax, but we need to do a little extra work in C# to get the same behaviour.

There are two approaches to this that I’m going to cover. The first one I’ll talk about now, the second one I’ll leave for a later post.

First we’ll look at how to deal with this when we have control over the functions and so can structure them how we like. This approach is fine when a function has two or three parameters, but I wouldn’t recommend it beyond that as it starts to make the code difficult to comprehend.

We’ll take the Add function we used in the earlier examples. Obviously a traditional C# implementation of this would look like:

int Add(int x, int y) => x + y;

However, if we used this function as is with Map we would have to add lambdas like so:

10
.Map(x => Add(x, 1))
.Map(x => Add(x, 5));

This isn’t necessarily bad but isn’t as clean as the examples I showed above.

To get to that cleaner syntax we have to jump further into the functional world of functions being passed around like any other variable. Rather than returning an int, we’re going to return a function that accepts an int and returns an int. This turns the function into:

Func<int, int> Add(int y) => x => x + y;

Now, if you think back to the declaration of Map, you’ll recall that it takes as an argument a function with a single parameter, which is exactly what this version of Add returns. So we can put the call to Add directly inside Map, as shown in the earlier examples, and the remaining argument is passed through.
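
For completeness, the Subtract function from the earlier examples can be written the same way:

Func<int, int> Subtract(int y) => x => x - y;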

There are also other benefits of writing functions like this. In the following example we provide a function which calculates force given mass and acceleration and then produce other functions from this which have already had acceleration provided and are waiting for mass:

void Main()
{
    var getForceOfGravityOnEarth = GetForce(GravityOnEarth);
    var getForceOfGravityOnMars = GetForce(GravityOnMars);
    var getForceOfGravityOnMoon = GetForce(GravityOnMoon);

    12.3
    .Map(getForceOfGravityOnEarth)
    .Tee(Console.WriteLine);
}

// GetForce takes the acceleration first and returns a function waiting for the mass.
Func<double, double> GetForce(double acceleration) => mass =>
    mass * acceleration;

const double GravityOnEarth = 9.8;
const double GravityOnMars = 3.1;
const double GravityOnMoon = 1.6;

As mentioned, there is another way of doing this which means that functions can be written normally, but there’s probably enough involved in it that it deserves its own post.

Wrapping Up

That’s it for this part. You should now be able to use the tools described here to chain simple methods together in a functional style.

There are still things we’re missing, though. We still have no way of handling error conditions; there’s still no good way of handling async methods; and we still have no way of using currying with methods we didn’t write.

These topics will be covered in future posts and I’ll link them here once they’re written.

Making roller derby accessible for people with colour vision deficiencies

What is colour vision deficiency?

Colour vision deficiency (often referred to as ‘colour-blindness’) affects a significant proportion of the population. In Europe the most common form of it affects 1 in 12 people with a single X chromosome and 1 in 200 people with two X chromosomes (https://ghr.nlm.nih.gov/condition/color-vision-deficiency#statistics).

Despite this prevalence, many aspects of society are not made accessible to colour-blind individuals.

Exactly how colour-blindness affects someone varies from individual to individual. By far the most common form is red-green colour blindness which means that the person has abnormal red or green cones in their retina. The impact this can have ranges from a person struggling to tell the difference between a reddish-brown and a greenish-brown through to a person being completely unable to see one of the colours.

Examples

Colour-blindness is very hard to describe to someone without it. It’s incredibly difficult for a person with normal colour vision to imagine not being able to do something that comes so naturally.

NOTE: Please remember that with all of these examples, the fact that you’re viewing them on a screen rather than in real life will alter how the colours appear.

The example I often use when trying to describe my personal experience of colour-blindness is that I can’t play snooker. The moment the brown ball ends up amongst the reds I can’t tell where it is anymore because the colours are just too similar for me. The image below is a simulation of how I view snooker balls – it’s not perfect but I hope it will give you an idea.

Picture of a standard set of snooker balls. The image on the left shows the original colours and the picture on the right shows a simulation of viewing the balls with colour vision deficiency.

Original image by barfisch – Own work, CC BY-SA 3.0, Link

Another example is the standard tests used to diagnose colour-blindness: the Ishihara plates. In the image below, people with normal colour vision are expected to see the number 74 whereas those with red-green colour vision deficiencies are expected to see 21.

Further examples can be found at colourvisiontesting.com

Impact in roller derby

So, let’s get to the meat of this: exactly what impact does colour-blindness have in roller derby? I’m going to approach this as an official, as that’s my only real experience of the sport, but remember that most of these issues are likely to affect other roles connected to the sport as well.

Numbers

The biggest issue I’ve come across is the colour of people’s numbers. It is very common for skaters to choose red numbers on a black (or other dark-coloured) background. This isn’t particularly easy for anyone to see, but it can be a real problem for those with a red-cone deficiency.

Take the image below: with full colour vision it appears to contrast quite well. (Apologies to any skaters with the number 123; this isn’t picking on you, I just had to choose a number.)

123 in red text on a black background

Now I’ll apply a filter to simulate colour-blindness.

The above image with the red component reduced

This number is obviously a lot harder to see. Now remember that people will be trying to read it as it moves around on a skater’s back or arm, and hopefully you’ll be able to imagine how hard that can be.

Shirt colours

The other problem I’ve come across on a few occasions is the two teams’ kit colours being too similar. The rules state that the teams must wear contrasting kits, but interpretations of ‘contrasting’ can differ, and those with full colour vision sometimes don’t consider how the kits may look to someone with colour-blindness.

I’ve only ever had one kit combination where I’ve been completely unable to tell the two teams apart. This was green versus grey, and the players’ sweat had turned both colours into a dark greenish-grey to my eyes. Fortunately I was an OPR and it was Sur5al, so I stepped out for a few jams and all was fine. However, if it had been a full-length game then I would likely have had to withdraw from the crew.

The more common issue I have is where the team colours are somewhat similar but different enough to be identifiable (for instance green and black). This is all fine before the game, but once skaters start moving, things like pack definition become very hard.

For pack definition I generally rely a lot on my peripheral vision, where colour vision is poorer. However, when the colours are closer I have to use my central vision more, which increases the mental load of pack definition and noticeably reduces my ability to focus on other areas of the game.

What can be done to improve accessibility?

There are a hundred tools out there for simulating colour-blindness that you might be able to use to determine if kits are sufficiently readable. However, none of them can simulate it perfectly and there’s a much easier way to be sure: luminance.

Colour-blindness affects how hues are perceived, but not how dark or light something appears. So, if two things have a completely different brightness, they’ll always be distinguishable regardless of colour vision.

The image below depicts a particularly hard number for me to see: red on a dark-coloured background. (Again, these colours were chosen for difficulty, not to single out any team or skater.)

Red 123 on a dark turquoise background

If we convert this to greyscale then we get the following:

The above image converted to greyscale

Now the number in this image is definitely visible, but I wouldn’t describe it as highly contrasting, which is what we want. So, let’s look at a colour that could make this a lot better.

Light yellow 123 on a dark turquoise background

Whether or not yellow is a particularly aesthetically pleasing colour to go for in this case, it does make the number a lot clearer, and I wanted to choose an actual colour rather than plain white to show that it can be something more interesting. Converting this to greyscale, we get the following:

The above image converted to greyscale

This is obviously much higher contrast and could easily be seen even by someone with complete colour-blindness.

The other option is to use a highly contrasting border around the number. This way the desired colours can be kept while still making the number visible to people with colour vision deficiencies.

Conclusion

Roller derby is a wonderfully inclusive sport, but more could be done to improve accessibility for people with colour vision deficiencies, whatever their role in the sport.

The simplest steps are for teams to make sure their numbers have a strong luminance contrast with the rest of their kit, and for head officials and game coordinators to make sure the two teams playing have a high luminance contrast between their kits.

Testing for luminance difference can be done by taking a photo of the kit and converting it to greyscale. Remember that some colours (green is especially bad for this) may appear bright to your eyes but are actually quite dark when converted.
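
For anyone who prefers to check this programmatically rather than by eye, a rough comparison can be made with the standard Rec. 709 luminance weighting. This is a quick sketch using made-up example colours, not a formal accessibility test:

using System;

public static class LuminanceCheck
{
    // Approximate luminance (0 = black, 255 = white) using Rec. 709 weights.
    public static double Luminance(byte r, byte g, byte b)
        => 0.2126 * r + 0.7152 * g + 0.0722 * b;

    public static void Main()
    {
        var redNumber = Luminance(200, 0, 0);      // a typical red number
        var darkBackground = Luminance(0, 80, 80); // a dark turquoise background

        // The bigger this gap, the easier the number is to read regardless of colour vision.
        Console.WriteLine($"Luminance difference: {Math.Abs(redNumber - darkBackground):F0}");
    }
}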

Further reading