Effect Programming in C#

Programming with algebraic effects and handlers, a method for reasoning about computational effects of programs that originates from functional programming research, has recently found increasing adoption in more mainstream languages. Implementations can be found in OCaml, Haskell, Scala, F# and the Koka language.

Algebraic effects are an immensely powerful language feature that can be used to implement a diverse array of features such as dependency injection, cancellation, nondeterminism, debug tracing, first-class continuations and replayable computations.

For a beginner’s introduction to the concept of algebraic effects, I would recommend reading Dan Abramov’s excellent article on the subject, which provides a primer using a hypothetical dialect of JavaScript.

In this article I will be talking about Eff, an experimental C# library that provides a pragmatic adaptation of the concept by taking advantage of the async method extensibility features available since C# 7. It was originally created by my former colleague, Nick Palladinos, and recently I have also been contributing to the project.

I have decided to start writing about Eff since I strongly believe in its experimental potential, and since hardly any documentation exists about it online.

Key Concepts

At its core, the library defines a task-like type, Eff, which can be built using async methods:

using Nessos.Effects;

async Eff<int> Sqr(int x) => x * x;

and can be composed using the await keyword:

async Eff<int> SumOfSquares(params int[] inputs)
{
    int sum = 0;
    foreach (int input in inputs)
    {
        sum += await Sqr(input);
    }
    return sum;
}

An Eff computation has to be run explicitly by passing an effect handler:

using Nessos.Effects.Handlers;

Eff<int> computation = SumOfSquares(1,2,3);

computation.Run(new DefaultEffectHandler());

It should be noted that unlike Task, Eff values have delayed semantics and so running

async Eff HelloWorld() => Console.WriteLine("Hello, World!");

Eff hello = HelloWorld();

will have no observable side-effect. It is useful to think of Eff values as delegates, as opposed to futures.

Finally, Eff methods are capable of awaiting regular async awaitables:

async Eff Delay()
{
    await Task.Delay(1000);
}

It should be noted, however, that the opposite is not possible: Eff values cannot be awaited inside regular async methods.

Programming with Effects

The examples thus far illustrate what is effectively an alternative implementation for async method builders. We have not talked about effects yet.

In the Eff library, abstract effects can be defined by inheriting the abstract Effect<T> class:

public class DateTimeNowEffect : Effect<DateTime>
{

}

Abstract effects can be awaited inside Eff computations:

async Eff<bool> IsSummerWeekend()
{
    DateTime now = await new DateTimeNowEffect();
    return (now.Month, now.DayOfWeek) switch
    {
        (6 or 7 or 8, DayOfWeek.Saturday or DayOfWeek.Sunday) => true,
        _ => false,
    };
}

And Eff computations depending on Effects can be further composed with other Eff computations:

async Eff SomeDomainLogic(Customer customer)
{
   if (await IsSummerWeekend())
   {
       await SendIceCreamPromo(customer);
   }
}

Let’s now see how we can run this effect computation. Suppose I try to pass it the default effect handler:

SomeDomainLogic(customer).Run(new DefaultEffectHandler());

Then execution will fail with an exception:

System.NotSupportedException: Abstract effects not supported by this handler.
   at Nessos.Effects.Handlers.DefaultEffectHandler.Handle[TResult](EffectAwaiter`1 awaiter) 

The failure is caused by the fact that the effect handler we used does not provide semantics for DateTimeNowEffect. In order to get this to work we need to implement an effect handler of our own:

public class DateTimeEffectHandler : EffectHandler
{
    public override async ValueTask Handle<TResult>(EffectAwaiter<TResult> awaiter)
    {
        switch (awaiter)
        {
            case EffectAwaiter<DateTime> { Effect: DateTimeNowEffect } dateTimeEffectAwaiter:
                dateTimeEffectAwaiter.SetResult(DateTime.Now);
                break;
        }
    }
}

The handler runs a pattern match over an EffectAwaiter argument, and populates any DateTimeNowEffect awaiter with the result of System.DateTime.Now. We can now run the previous computation with our new effect handler:

SomeDomainLogic(customer).Run(new DateTimeEffectHandler());

which will execute with the expected result.

We can also use effect handlers to mock the result of an effect:

public class MockedDateTimeEffectHandler : EffectHandler
{
    public DateTime Now { get; set; }

    public override async ValueTask Handle<TResult>(EffectAwaiter<TResult> awaiter)
    {
        switch (awaiter)
        {
            case EffectAwaiter<DateTime> { Effect: DateTimeNowEffect } dateTimeEffectAwaiter:
                dateTimeEffectAwaiter.SetResult(Now);
                break;
        }
    }
}

var handler = new MockedDateTimeEffectHandler() { Now = DateTime.Parse("2020-07-19") };
SomeDomainLogic(customer).Run(handler);

Combining Effects

Consider the following set of abstract console effects

public static class ConsoleEffect
{
    public static WriteLineEffect WriteLine(string line) => new WriteLineEffect() { LogEntry = line };
    public static ReadLineEffect ReadLine() => new ReadLineEffect();

    public class WriteLineEffect : Effect
    {
        public string? LogEntry { get; set; }
    }

    public class ReadLineEffect : Effect<string>
    {

    }
}

and relevant handler that is backed by System.Console:

public class ConsoleEffectHandler : EffectHandler
{
    public override async ValueTask Handle<TResult>(EffectAwaiter<TResult> awaiter)
    {
        switch (awaiter)
        {
            case EffectAwaiter { Effect: ConsoleEffect.WriteLineEffect writeLineEffect } writeLineAwaiter:
                Console.WriteLine(writeLineEffect.LogEntry);
                writeLineAwaiter.SetResult();
                break;

            case EffectAwaiter<string> { Effect: ConsoleEffect.ReadLineEffect } readLineAwaiter:
                string line = Console.ReadLine();
                readLineAwaiter.SetResult(line);
                break;
        }
    }
}

Now suppose we have an Eff computation that combines effects from both examples:

async Eff Test()
{
    DateTime now = await new DateTimeNowEffect();
    await ConsoleEffect.WriteLine($"The current date is {now}.");
}

Then defining an effect handler that supports both effect families is very straightforward:

public class CombinedDateTimeConsoleEffectHandler : EffectHandler
{
    private readonly EffectHandler[] _nestedHandlers = new EffectHandler[]
    {
        new ConsoleEffectHandler(),
        new DateTimeEffectHandler(),
    };

    public override async ValueTask Handle<TResult>(EffectAwaiter<TResult> awaiter)
    {
        foreach (EffectHandler handler in _nestedHandlers)
        {
            await handler.Handle(awaiter);
            if (awaiter.IsCompleted)
            {
                return;
            }
        }
    }
}

Test().Run(new CombinedDateTimeConsoleEffectHandler());

It should by now be clear from the examples that effects can be used to define a form of dependency injection. There is, however, a twist: since effect handlers flow with function calls, there is no need to pass dependencies via constructor parameters. Domain logic can be expressed by composing static methods, with the effect handler serving as the composition root for the application. This provides a truly functional approach to dependency injection.

Encoding Cancellation

A common criticism of C# is that async methods do not flow cancellation: any cancellation tokens have to be passed manually to the operations that require them. We will now see how one might use effects to work around that limitation; the Eff library comes with a baked-in effect handler that provides cancellation semantics. Consider the following example:

using Nessos.Effects.Cancellation;

async Eff Test()
{
    while (true)
    {
        await Delay();

        async Eff Delay()
        {
            await Task.Delay(1000);
        }
    }
}

var cts = new CancellationTokenSource(millisecondsDelay:10_000);
var handler = new CancellationEffectHandler(cts.Token);
Test().Run(handler);

While one might expect the above to diverge, it will actually throw an OperationCanceledException after ten seconds. This is because the effect handler will actively check for cancellation before evaluating the state machine of a nested Eff call.

But what happens if we need to pass the cancellation token to a non-Eff asynchronous operation? We can recover it by awaiting the cancellation token effect:

async Eff<HttpResponseMessage> Get(HttpClient httpClient, string requestUri)
{
    // get the cancellation token from the effect handler
    CancellationToken token = await CancellationTokenEffect.Value;
    return await httpClient.GetAsync(requestUri, token);
}

For those familiar with F#, this is precisely how its async workflow implementation flows cancellation.

Conclusions

This concludes my introductory overview of the Eff library. I have tried to convey its most basic concepts, however this article hardly covers all capabilities of effects. I will try to follow up with future articles digging deeper into both applications and the implementation details of Eff itself. For the impatient, there is a fairly extensive catalog of samples in the Eff repo itself, including a proof-of-concept AspNetCore application.

I do hope that you have found the concepts interesting, and that this might furthermore spark a conversation about future programming paradigms.


Applying the Tagless-Final pattern in F# Generic Programs

A few years back, I blogged about how one could use the TypeShape library for writing practical generic programs in F#. While the library has seen success in both OSS and proprietary applications, from the beginning there have been fairly self-evident usability issues when working with the library itself.

On simple inspection of a basic example one can easily detect the following problems:

  • Accessing generic types requires extensive use of visitors, which results in awkward-looking code that is hard to read and hard to write.
  • The type checker can hardly handle such expressions, and explicit type annotations are required almost everywhere.
  • Most importantly, unsafe casts are required at the boundaries of generic program composition. While these generally surface at program generation time rather than at execution time, they are still a source of runtime errors.

In this article, I propose an alternative approach to generic programming in F#, one that provides a simpler and type-safe approach to building polytypic programs. The approach combines techniques that are long established in the literature.

Encoding Higher-Kinded Types

A core ingredient for the new approach is the ability to express higher-kinded types. While F# does not natively support higher-kinded types, they can be encoded using an approach first described by Yallop and White in the context of OCaml. Various ports of that encoding exist in F#, most prominent being the Higher library.

For the purposes of this application, I’m going to use a simplistic variant of the encoding:

type App<'F, 't> = App of payload : obj

module HKT =

    // associate HKT encoding to underlying type using SRTPs
    let inline private assoc<'F, 'a, 'Fa when 'F : (static member Assign : App<'F, 'a> * 'Fa -> unit)> = ()

    // pack and unpack functions
    let inline pack (value : 'Fa) : App<'F, 'a> = assoc<'F, 'a, 'Fa> ; App value
    let inline unpack (App value : App<'F, 'a>) : 'Fa = assoc<'F, 'a, 'Fa> ; unbox value
        
    // helper active pattern
    let inline (|Unpack|) app = unpack app

Let’s put the above definitions into use. Here’s how we can encode optionals as a higher-kinded type:

type Option =
    static member Assign(_ : App<Option, 'a>, _ : 'a option) = ()

let encoded : App<Option, _> = HKT.pack (Some 42)
let payload = HKT.unpack encoded

The HKT.pack function takes an instance of type int option and returns an instance of type App<Option, int>, whereas unpack does the inverse. The type Option is an uninhabited type representing the _ option type constructor and is known as the brand of the encoding. The compiler infers the association between App<Option, int> and int option by virtue of the Assign method type signature using SRTP method constraints.

Let’s see how we can use this encoding to express programs that are generic w.r.t. type constructors. Here’s the obligatory functor example:

type Functor<'F> =
    abstract member Fmap : ('a -> 'b) -> App<'F, 'a> -> App<'F, 'b>

let fmap f xs : _ when 'F :> Functor<'F> = (new 'F()).Fmap f xs

let incrSqr x = x |> fmap ((+) 1) |> fmap (fun x -> x * x)

Here we describe the functor abstraction using an interface that is generic w.r.t the higher-kinded brand. We also use default constructor constraints and type inference to define a generic fmap combinator and a sample pipeline on top of that.

We can create functor instances as follows:

[<Struct>]
type List =
    static member Assign(_ : App<List, 'a>, _ : 'a list) = ()
    interface Functor<List> with
        member __.Fmap f (HKT.Unpack xs) = HKT.pack (List.map f xs)

let lst : App<List,_> = HKT.pack [1;2;3;4]

incrSqr lst |> HKT.unpack
// val it : int list = [4; 9; 16; 25]

If you’re interested in more elaborate HKT encodings, please refer to the original paper or the samples in the Higher library.

Application to Generic Programming

A very common generic programming application is the pretty printer. It involves taking an arbitrary type 'a and generating a function of type 'a -> string. It can be represented using a higher-kinded type encoding like so:

type PrettyPrinter =
    static member Assign(_ : App<PrettyPrinter, 'a>, _ : 'a -> string) = ()

Thus, generating a pretty-printer for type 'a boils down to obtaining an instance of type App<PrettyPrinter,'a>.
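For instance, a hand-rolled printer for int can be packed into and recovered from the encoding using nothing but the definitions above (a minimal illustration):

// a pretty-printer for int, packed into the HKT encoding
let intPrinter : App<PrettyPrinter, int> = HKT.pack (sprintf "%d")

// recover the underlying 'int -> string' function and apply it
(HKT.unpack intPrinter) 42
// val it : string = "42"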

Defining Higher-Kinded Generic Programs

We can combine the ideas described above to obtain an abstraction capable of describing most generic programming applications. Consider the interface:

type ITypeBuilder<'F> =
    // base types
    abstract Bool : unit -> App<'F, bool>
    abstract Int : unit -> App<'F, int>
    abstract String : unit -> App<'F, string>
    // combinators
    abstract Option : App<'F, 't> -> App<'F, 't option>
    abstract List : App<'F, 't> -> App<'F, 't list>
    abstract Tuple : App<'F, 't1> -> App<'F, 't2> -> App<'F, 't1 * 't2>

Which defines a set of constructors capable of producing generic programs for a small subset of types. As before, we can use a bit of type inference to expose the interface methods as proper combinators:

let inline private inst() : 'F when 'F :> ITypeBuilder<'F> = new 'F()
let bool () = inst().Bool()
let int () = inst().Int()
let string () = inst().String()
let option t = inst().Option t
let list t = inst().List t
let tuple t = inst().Tuple t

Then, writing

let sample () = int () |> list |> option |> tuple (bool ())

Produces a generic constructor for instances of type App<'F,bool * int list option>.

We can now produce a generic pretty-printer by implementing the ITypeBuilder interface:

[<Struct>]
type PrettyPrinter =
    static member Assign(_ : App<PrettyPrinter, 'a>, _ : 'a -> string) = ()

    interface ITypeBuilder<PrettyPrinter> with
        member __.Bool () = HKT.pack (function true -> "true" | false -> "false")
        member __.Int () = HKT.pack (fun i -> i.ToString())
        member __.String() = HKT.pack (sprintf "\"%s\"")

        member __.Option (HKT.Unpack ep) = HKT.pack(function None -> "None" | Some x -> sprintf "Some(%s)" (ep x))
        member __.List (HKT.Unpack ep) = HKT.pack(Seq.map ep >> String.concat "; " >> sprintf "[%s]")
        member __.Tuple (HKT.Unpack lp) (HKT.Unpack rp) = HKT.pack (fun (l,r) -> sprintf "(%s, %s)" (lp l) (rp r))

Which we can consume as follows:

let mkPrinter (x : App<PrettyPrinter,_>) = HKT.unpack x

let p = sample() |> mkPrinter

p (false, Some [1;2])
// val it : string = "(false, Some([1; 2]))"

The same sample value can be reused in the context of other generic programs. Here is one that returns values with zeroed out fields:

[<Struct>]
type Zero =
    static member Assign(_ : App<Zero, 'a>, _ : 'a) = ()

    interface ITypeBuilder<Zero> with
        member __.Bool () = HKT.pack false
        member __.Int () = HKT.pack 0
        member __.String() = HKT.pack ""

        member __.Option (HKT.Unpack z) = HKT.pack(Some z)
        member __.List (HKT.Unpack z) = HKT.pack [z]
        member __.Tuple (HKT.Unpack lz) (HKT.Unpack rz) = HKT.pack (lz,rz)


let mkZero (x : App<Zero,_>) = HKT.unpack x

sample() |> mkZero
// val it : bool * int list option = (false, Some [0])

For the sake of completeness, I should mention that this application is a special case of the tagless-final pattern, which was originally described in a paper by Kiselyov et al.

Folding arbitrary types

One issue with the above approach is that we need to explicitly construct the App<_,_> instances by calling the combinators. While this might be acceptable in the context of simple applications, we’d still like a way to generate programs just by passing a simple type argument.

Luckily, this can be achieved by harnessing the usual TypeShape constructs:

let rec fold<'F, 't when 'F :> ITypeBuilder<'F> and 'F : (new : unit -> 'F)> () : App<'F, 't> =
    let wrap (x : App<'F,_>) : App<'F, 't> = unbox x
    match shapeof<'t> with
    | Shape.Bool -> bool() |> wrap
    | Shape.Int32 -> int() |> wrap
    | Shape.String -> string() |> wrap
    | Shape.FSharpOption s ->
        s.Element.Accept {
            new ITypeVisitor<App<'F, 't>> with
                member __.Visit<'e> () =
                    let e = fold<'F, 'e>()
                    option e |> wrap
        }

    | Shape.FSharpList s ->
        s.Element.Accept {
            new ITypeVisitor<App<'F, 't>> with
                member __.Visit<'e> () =
                    let e = fold<'F, 'e>()
                    list e |> wrap
        }

    | Shape.Tuple s when s.Elements.Length = 2 ->
        let ls = s.Elements.[0].Member
        let rs = s.Elements.[1].Member
        ls.Accept {
            new ITypeVisitor<App<'F, 't>> with
                member __.Visit<'l> () =
                    rs.Accept {
                        new ITypeVisitor<App<'F, 't>> with
                            member __.Visit<'r>() =
                                let l = fold<'F, 'l>()
                                let r = fold<'F, 'r>()
                                tuple l r |> wrap
                    }
        }
    | _ -> failwithf "I do not know how to fold type %O" typeof<'t>

It is then possible to define a generic pretty-printer by doing the following:

let mkPrettyPrinter<'a> () = fold<PrettyPrinter, 'a> () |> HKT.unpack

let p' = mkPrettyPrinter<int * (bool list * string option)> ()

p' (42, ([false;true], Some "string"))
// val it : string = "(42, ([false; true], Some("string")))"

As before, we can reuse the fold function for the zero program:

let zero<'a> = fold<Zero, 'a> () |> HKT.unpack

zero<int * (string option * bool list)>  
// val it : int * (string option * bool list) = (0, (Some "", [false]))

You can find the code sample above in fssnip, complete with type annotations.

Conclusions

While the ideas above are in an early prototype state, the approach in general seems very promising. It significantly simplifies the generic program authoring process by being type safe, eliminating visitors and requiring almost no type annotations. While it may lack the flexibility of the TypeShape constructs, it seems to be good enough for most generic programming applications.

You can find more involved implementations of the above idea in the TypeShape samples folder.

A Contract Pattern for Schemaless DataStores

At Jet we do a lot of event sourcing. Our event types are typically modelled using discriminated unions:

type CartEvent =
    | CartCreated
    | ItemAdded of ItemAdded
    | ItemRemoved of ItemRemoved
    | CartCheckedOut of CartCheckedOut

and ItemAdded = { skuId : string ; quantity : int }
and ItemRemoved = { skuId : string }
and CartCheckedOut = { paymentDetails : string }

Since DUs carry no canonical representation in most popular serialization formats, encoding and decoding typically involves a bit of manual work:

let encode = function
    | CartCreated -> "cartCreated", "null"
    | ItemAdded e -> "itemAdded", serialize e
    | ItemRemoved e -> "itemRemoved", serialize e
    | CartCheckedOut e -> "cartCheckedOut", serialize e

let decode = function
    | "cartCreated", _ -> CartCreated
    | "itemAdded", json -> ItemAdded(deserialize json)
    | "itemRemoved", json -> ItemAdded(deserialize json)
    | "cartCheckedout", json -> CartCheckedOut(deserialize json)
    | t,_ -> failwithf "unrecognized event type '%s'" t

where the functions serialize : 'T -> string and deserialize : string -> 'T represent some serialization library like Json.NET. The purpose of the encoder functions is to embed domain events into a tupled representation, the first element being a string identifier for the event type, and the second potentially containing pickled payload data. Sending encoded events over the wire is then a relatively straightforward undertaking.
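For instance, with a JSON-backed serialize function one would expect behaviour roughly along these lines (illustrative output):

encode (ItemAdded { skuId = "a" ; quantity = 2 })
// ("itemAdded", "{"skuId":"a","quantity":2}")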

While this approach works fine with small examples, the boilerplate it introduces tends to become unwieldy as domain complexity grows. The keen-eyed reader might have noticed that I’ve inserted a couple of bugs in my decode function.

While one could argue that using techniques such as property based testing might be sufficient to catch such classes of bugs, the issue remains that making changes in event types generally comes with high maintenance. What if there existed a way to canonically represent union types in a way that is properly generic and serialization format agnostic?

The UnionContract Module

The TypeShape library that I maintain ships with a small module called UnionContract. The surface API is tiny, exposing only a single function and a couple of interfaces. At its core it defines a datatype generic program which accepts a serializer instance, represented by instances of the interface:

type IEncoder<'Format> =
    abstract Encode<'T> : value:'T -> 'Format
    abstract Decode<'T> : fmt:'Format -> 'T
    abstract Empty : 'Format

and generates functions capable of encoding and decoding union values in the sense described previously. We can equivalently express the above encoders using the code:

open System.Runtime.Serialization
open TypeShape.UnionContract
open Newtonsoft.Json

type CartEvent =
    | [<DataMember(Name = "cartCreated")>] CartCreated
    | [<DataMember(Name = "itemAdded")>] ItemAdded of ItemAdded
    | [<DataMember(Name = "itemRemoved")>] ItemRemoved of ItemRemoved
    | [<DataMember(Name = "cartCheckedOut")>] CartCheckedOut of CartCheckedOut
with
    interface IUnionContract // marker interface

// object -> json string encoder used for union case payloads
let jsonEncoder =
    { new IEncoder<string> with
        member __.Empty = "null" // payload to be inserted in nullary cases
        member __.Encode(t : 'T) = JsonConvert.SerializeObject(t)
        member __.Decode(json : string) = JsonConvert.DeserializeObject<'T>(json) }

// creates a union contract encoder
let unionEncoder = UnionContractEncoder.Create<CartEvent, string>(jsonEncoder)

The UnionContractEncoder.Create method accepts a datatype generic encoder that pickles union case payloads into a fixed format type (in this case string), and generates an encoder that acts on the union level.

We can use the newly created encoder to encode and decode union instances:

unionEncoder.Encode(CartCreated)
// val it : EncodedUnion<string> = {CaseName = "cartCreated";
//                                  Payload = "null";}

unionEncoder.Encode(ItemAdded { skuId = "a" ; quantity = 2 })
// val it : EncodedUnion<string> = {CaseName = "itemAdded";
//                                  Payload = "{"skuId":"a","quantity":2}";}

Union instances can be reconstructed by applying the inverse function to the encoded unions:

unionEncoder.Decode { CaseName = "cartCreated" ; Payload = null }
// val it : CartEvent = CartCreated

So how does it all work? Under the hood, the implementation uses constructs from the TypeShape library to duplicate the logic of the original encode and decode functions in a datatype generic manner.

Plugging Serialization Formats

Serialization formats can be controlled by plugging different IEncoder implementations. For instance:

open Newtonsoft.Json.Linq

let jtokenEncoder =
    { new IEncoder<JToken> with
        member __.Empty = JValue.CreateNull() :> _
        member __.Encode(t : 'T) = JToken.FromObject t
        member __.Decode(t : JToken) = t.ToObject<'T>() }        

let jtokenUnionEncoder = UnionContractEncoder.Create<CartEvent,JToken>(jtokenEncoder)

produces an encoder that pickles event payloads as Newtonsoft JToken values.
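Usage mirrors the string-based encoder, except that payloads are now carried as JToken values rather than strings (sketch):

jtokenUnionEncoder.Encode(ItemAdded { skuId = "a" ; quantity = 2 })
// yields an EncodedUnion<JToken> with CaseName = "itemAdded" and a JObject payload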

Versioning Events

In the original example, suppose we want to alter the ItemRemoved event so that it additionally includes a quantityRemoved field like so:

type ItemRemoved = { skuId : string ; quantityRemoved : int}

Assuming that events using the original shape have already been persisted to the database, it is evident that this introduces a breaking change to the schematisation. Many databases typically used for event sourcing are of the "schemaless" variety; however, as the author of Designing Data-Intensive Applications put it:

Document databases are sometimes called schemaless, but that’s misleading, as the code that reads the data usually assumes some kind of structure—i.e., there is an implicit schema, but it is not enforced by the database. A more accurate term is schema-on-read (the structure of the data is implicit, and only interpreted when the data is read), in contrast with schema-on-write (the traditional approach of relational databases, where the schema is explicit and the database ensures all written data conforms to it).

-Chapter 2, Data Models and Query Languages

So how do we ensure that union contracts enforce a version tolerant schema-on-read? I propose an approach adhering to the following set of principles:

  1. Apply separation of domain types from DTOs.
  2. Model DTOs using the UnionContract construct.
  3. Make DTO union contracts append-only.
  4. Freely evolve domain types as schema changes.
  5. Push versioning and migration responsibility to the DTO layer.

Applying this to the above example would result in the following arrangement for domain events:

namespace Domain

type CartEvent =
    | CartCreated
    | ItemAdded of ItemAdded
    | ItemRemoved of ItemRemoved
    | CartCheckedOut of CartCheckedOut

and ItemRemoved = { skuId : string ; quantityRemoved : int}
...

whereas the contract type would look as follows:

namespace Contracts

type CartEventDto =
    | [<DataMember(Name = "cartCreated")>] CartCreated
    | [<DataMember(Name = "itemAdded")>] ItemAdded of ItemAdded
    | [<DataMember(Name = "itemRemoved")>] ItemRemovedV1 of ItemRemovedV1
    | [<DataMember(Name = "itemRemoved/v2")>] ItemRemovedV2 of ItemRemovedV2
    | [<DataMember(Name = "cartCheckedOut")>] CartCheckedOut of CartCheckedOut
with
    interface IUnionContract

and ItemRemovedV1 = { skuId : string }
and ItemRemovedV2 = { skuId : string ; quantityRemoved : int}
...

Notice how the two versions of ItemRemoved are defined as separate, explicitly qualified DTO union cases. On the other hand, the Domain event type only contains a single, canonicalized ItemRemoved case.

We need a fromDomain function to map domain events to DTOs:

let fromDomain = function
    | CartEvent.CartCreated -> CartEventDto.CartCreated
    | CartEvent.ItemAdded e -> CartEventDto.ItemAdded { skuId = e.skuId ; quantity = e.quantity }
    | CartEvent.ItemRemoved e -> CartEventDto.ItemRemovedV2 { skuId = e.skuId ; quantityRemoved = e.quantityRemoved }
    | CartEvent.CartCheckedOut e -> CartEventDto.CartCheckedOut { paymentDetails = e.paymentDetails }

The opposite toDomain function is more interesting, as it contains migration logic:

let toDomain = function
    | CartEventDto.CartCreated -> CartEvent.CartCreated
    | CartEventDto.ItemAdded e -> CartEvent.ItemAdded { skuId = e.skuId ; quantity = e.quantity }
    | CartEventDto.ItemRemovedV1 e -> CartEvent.ItemRemoved { skuId = e.skuId ; quantityRemoved = 1 }
    | CartEventDto.ItemRemovedV2 e -> CartEvent.ItemRemoved { skuId = e.skuId ; quantityRemoved = e.quantityRemoved }
    | CartEventDto.CartCheckedOut e -> CartEvent.CartCheckedOut { paymentDetails = e.paymentDetails }

In this function, we choose to interpret “v1” ItemRemoved events as having removed quantity 1. By using the two functions in conjunction with the union encoder we obtain serialization/schema-on-read for the domain types as follows:

let serialize (e : CartEvent) = fromDomain e |> unionEncoder.Encode
let deserialize (c : EncodedUnion<string>) = unionEncoder.Decode c |> toDomain

A nice property of this approach is that we are forcing any versioning and migration concerns outside of the domain, making business logic cleaner to work with. We have been using this pattern with success in a few of our systems.

Versioning Snapshots

This pattern is not limited to event serialization. We can also use it for versioning snapshot schemata:

type SnapshotV1 = { ... }
type SnapshotV2 = { ... }
type SnapshotV3 = { ... }

type SnapshotDto =
    | [<DataMember(Name = "snapshot/v1")>] SnapshotV1 of SnapshotV1
    | [<DataMember(Name = "snapshot/v2")>] SnapshotV2 of SnapshotV2
    | [<DataMember(Name = "snapshot/v3")>] SnapshotV3 of SnapshotV3 // current
with
    interface IUnionContract

let unionEncoder = UnionContractEncoder.Create<SnapshotDto,_>(jtokenEncoder)

We can then define our conversion functions, with a slight variation:

let fromDomain snapshot = SnapshotV3 snapshot

let toDomain = function
    | SnapshotV1 snap -> migrateFromV1 snap
    | SnapshotV2 snap -> migrateFromV2 snap
    | SnapshotV3 snap -> snap // current snapshot, no migration needed

Then, as before, we obtain the serialization/migration functions by composition:

let serialize (e : SnapshotV3) = fromDomain e |> unionEncoder.Encode
let deserialize (c : EncodedUnion<JToken>) = unionEncoder.Decode c |> toDomain

Again, we have achieved pushing all versioning and migration concerns to the DTO layer of our codebase.

Conclusions

I have attempted to illustrate the utility of using F# Discriminated Unions as a means to encode code-first schematisations for “schemaless” (schema-on-read) datastores. The use of the TypeShape UnionContract module in conjunction with separating domain types from DTOs provides us with an effective pattern to isolate versioning and data migration concerns from core domain logic. The versioning logic itself is pure and as such highly amenable to testing. We have used this pattern effectively in multiple production systems across at least two different teams within Jet.

Property Testing Generic Programs

One of the interesting challenges when dealing with generic programming is testing. How could you possibly achieve extensive test coverage for a piece of code that works with arbitrary types? Suppose for instance that we have implemented a generic equality function:

val equals : 'T -> 'T -> bool

One of the properties we would expect this function to satisfy is reflexivity.
Put formally, the predicate

let reflexivity (t:'T) = equals t t

should return true for any value of any type.

The Standard Approach

Using a random testing tool like FsCheck, a common strategy to validate this would unfold as follows:

[<Property>]
let ``Int Reflexivity`` (x : int) = reflexivity x
[<Property>]
let ``String Reflexivity`` (x : string) = reflexivity x
[<Property>]
let ``2-Tuple Reflexivity`` (x : string * int) = reflexivity x
[<Property>]
let ``List Reflexivity`` (x : (string * int) list) = reflexivity x
[<Property>]
let ``List of List Reflexivity`` (x : (string * int) list list) = 
    reflexivity x
...

Expanding on tested types ad nauseam, until we finally convince ourselves that coverage is sufficient. But is it? How can we be sure that (string * int) list correctness implies correctness for (int * string) list? What about large tuple arities? We will always be forgetting something.

Generic Property Tests

I have been toying with the idea of generic property tests for quite a while now, but only the recent performance improvements in FsCheck 3 (currently in prerelease) have made this approach feasible. For starters, we will need to provide a rank-2 encoding for predicates:

type Predicate =
   abstract Invoke : 'T -> bool

Which permits declaration of generic predicate values (currently not a possibility with F# lambdas, which must be fixed on their argument types)

let reflexivity = 
    { new Predicate with
        member __.Invoke t = equals t t }

The goal is to use FsCheck in a way where generic predicates can be tested against random values of random types.

Defining a Type Algebra

Let’s start by defining the kind of types we would like to be testing. To do this we define a type algebra, or a “type of types”:

type TypeAlg =
    | BaseType of BaseType
    | List of TypeAlg 
    | Array of TypeAlg 
    | Option of TypeAlg 
    | Tuple of TypeAlg list

and BaseType = Bool | Byte | Int | String

Each instance of type TypeAlg represents an actual F# type. For instance, the value Tuple [BaseType Int; Option(BaseType String)] represents the type int * string option. As a first exercise, let’s try implementing a pretty-printer for the algebra:

let rec prettyprint ta =
    match ta with
    | BaseType Bool -> "bool"
    | BaseType Byte -> "byte"
    | BaseType Int  -> "int"
    | BaseType String -> "string"
    | List element -> sprintf "(%s) list" (prettyprint element)
    | Array element -> sprintf "(%s) []" (prettyprint element)
    | Option element -> sprintf "(%s) option" (prettyprint element)
    | Tuple elements -> 
        elements 
        |> Seq.map prettyprint 
        |> String.concat " * " 
        |> sprintf "(%s)"
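For instance, applied to the example value from above:

prettyprint (Tuple [BaseType Int; Option (BaseType String)])
// val it : string = "(int * (string) option)"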

Importantly, it is possible to map any TypeAlg representation to its corresponding System.Type runtime instance like so:

let rec meaning ta : System.Type =
    match ta with
    | BaseType Bool -> typeof<bool>
    | BaseType Byte -> typeof<byte>
    | BaseType Int  -> typeof<int>
    | BaseType String -> typeof<string>
    | List element -> typedefof<_ list>.MakeGenericType(meaning element)
    | Array element -> (meaning element).MakeArrayType()
    | Option element -> typedefof<_ option>.MakeGenericType(meaning element)
    | Tuple [] -> typeof<unit>
    | Tuple elements -> 
        elements 
        |> Seq.map meaning 
        |> Seq.toArray 
        |> FSharp.Reflection.FSharpType.MakeTupleType

We can use TypeShape to take that semantic mapping one step further: bring the type into scope as a generic argument.

open TypeShape.Core

let visitTAlg (visitor : ITypeShapeVisitor<'R>) (tAlg : TypeAlg) : 'R =
    let sysType : System.Type = meaning tAlg
    let shape : TypeShape = TypeShape.Create sysType
    shape.Accept visitor

We can verify the behaviour by running

let getType = 
    { new ITypeShapeVisitor<System.Type> with 
        member __.Visit<'T>() = typeof<'T> }
    |> visitTAlg

getType (BaseType Int) // typeof<int>
getType (Option (BaseType String)) // typeof<string option>

Putting it all Together

Let’s now see how we can use this technique to obtain a random tester for generic predicates. Consider the following function:

open FsCheck

let checkTAlg (predicate : Predicate) (tAlg : TypeAlg) : bool =
    printfn "Testing %s" (prettyprint tAlg)
    let checker = 
        { new ITypeShapeVisitor<bool> with 
            member __.Visit<'T>() =
                Check.QuickThrowOnFailure<'T -> bool> predicate.Invoke 
                true }
    
    visitTAlg checker tAlg

The implementation accepts a generic predicate and a type representation, uses TypeShape to bring the type into scope as a type variable, then calls FsCheck with the predicate instantiated to the reified type.

The key observation for converting this function into a generic property test is that TypeAlg is an algebraic data type, and FsCheck is capable of generating random instances for it out of the box:

let checkGeneric (predicate : Predicate) =
    Check.QuickThrowOnFailure<TypeAlg -> bool>(checkTAlg predicate)

We are now ready to random test our reflexivity property:

checkGeneric reflexivity

which yields the following output:

Testing (bool) []
Ok, passed 100 tests.
Testing byte
Ok, passed 100 tests.
Testing ((bool) option) list
Ok, passed 100 tests.
Testing bool
Ok, passed 100 tests.
Testing (string * byte)
Ok, passed 100 tests.
Testing ((string) option) option
Ok, passed 100 tests.
Testing ((bool * ((((((()) []) []) option) option) list) [] * byte * int * string * string * int * bool * int * int * int * byte)) option
Ok, passed 100 tests.
...

In effect this call will test 100 randomly generated types with 100 randomly generated values each.

Conclusions

Applying the above technique when testing generic programs has proven to be extremely effective. It has allowed me to discover both implementation bugs in the generic programs themselves and obscure bugs in the core TypeShape library. My previous, handwritten test suite had failed to catch any of those issues. If interested, have a look at this source file for an example of a more complete implementation of the technique.

On the Significance of Recursion

During a recent conversation with a group of F# developers, I started making the case for recursion and its importance in day-to-day programming. My peers reacted with surprise to this opinion of mine, responding that they thought that recursion is an inherently inefficient way of defining programs.

We are taught this rule when studying procedural languages: “for optimal results, just use iteration”. In functional languages however, this is not necessarily true: tail-recursive functions are optimized into imperative loops at compile time and most recursive functions can be readily transformed into tail-recursive workflows by applying accumulators or CPS. These FP fundamentals are what make the functional paradigm a feasible alternative to the imperative style.
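To make this concrete, here is a minimal sketch (not from the original text) of the accumulator transformation applied to a naive list sum:

// naive, non-tail-recursive sum: one stack frame per element
let rec sum xs =
    match xs with
    | [] -> 0
    | x :: rest -> x + sum rest

// tail-recursive variant using an accumulator; the compiler turns this into a loop
let sum' xs =
    let rec loop acc xs =
        match xs with
        | [] -> acc
        | x :: rest -> loop (acc + x) rest
    loop 0 xs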

I was thus legitimately surprised to discover that working F# programmers carried this attitude towards recursion, which made me wonder whether this could be an artifact of how the language is being taught to FP newcomers. I researched some of the popular online material and did a bit of asking around, which seemed to confirm my suspicion: there exists a tendency to de-emphasize recursion in beginner F# educational material. An important reason is that the topic of recursion itself can appear puzzling and counter-intuitive to beginners; quoting Graham Hutton:

Defining recursive functions is like riding a bicycle: it looks easy when someone else is doing it, may seem impossible when you first try to do it yourself, but becomes simple and natural with practice.

Omitting recursion can also be justified by a desire to appear pragmatic; recursion has a reputation for being too academic, a mathematical trivium that is only useful when defining Fibonacci numbers but not relevant in real code. Algorithms should be declared by composing available combinators, and if all else fails we can always fall back to the imperative style. Recursion is relegated to the “Advanced” chapters, bundled together with other academic curiosities such as call/cc and catamorphisms.

An Example

While there are merits in this stance, I claim that avoiding recursion entirely in expression-based languages comes with some very important limitations. I will try to illustrate why using an example. Consider the core combinator

val List.tryFind : ('T -> bool) -> 'T list -> 'T option

which performs linear searches on its supplied list, returning the first element that satisfies the supplied predicate. This is a simple yet extremely useful combinator that often finds use in F# codebases. It works very well for most applications, assuming our predicate is a regular lambda.
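For example, finding the first even element of a list:

[1; 3; 4; 7] |> List.tryFind (fun x -> x % 2 = 0)
// val it : int option = Some 4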

But what happens if the predicate is of type 'T -> Async<bool>, in other words asynchronous? We would probably require a variation of tryFind, namely

val tryFindAsync : ('T -> Async<bool>) -> 'T list -> Async<'T option>

which, however, is not available in the core library. In such circumstances, F# developers will have to improvise.

As an exercise, let’s attempt to provide a good implementation that avoids recursion. A novice user would quickly end up with the following solution:

let tryFindAsync (pred : 'T -> Async<bool>) (inputs : 'T list) =
    async { return List.tryFind (pred >> Async.RunSynchronously) inputs }

which is evidently a misguided solution, since it defeats the purpose of asynchrony.

The requirement for asynchrony could still be satisfied using smarter manipulation of generic combinators, as seen in the following example:

let tryFind (f : 'T -> Async<bool>) (ts : 'T list) =
    let folder acc t = async {
        let! result = acc
        match result with
        | Some _ -> return result
        | None ->
            let! success = f t
            return if success then Some t else None
    }

    List.fold folder (async.Return None) ts

In this case, we are folding over asynchronous computations, so the final result is a non-blocking asynchronous workflow that can be consumed accordingly. There are however two problems with this approach:

  1. The solution is too smart for its own good. Sure, folding over functions feels good to the person that came up with it, however there is a nontrivial cognitive overhead associated with what should be a very simple task. In the context of a bigger team, such heroics aren’t appreciated.
  2. More importantly, fold operations always traverse the entire input list. The point of tryFind is that it should stop as soon as a result is found. By delegating iteration to other combinators, we lose the ability to tune aspects of control flow. Assuming our input type allowed for infinite lists (for instance IEnumerable), this approach would necessarily result in non-terminating computation.

An Imperative Solution

Let us now try addressing the same problem using the imperative style. First, here’s how it could be defined elegantly using C#:

async Task<Option<T>> TryFindAsync<T>(Func<T, Task<bool>> predicate, IEnumerable<T> inputs)
{
    foreach (var t in inputs)
        if (await predicate(t)) return Option.Some(t);
    
    return Option.None<T>();
}

What happens if we attempt to port this implementation to F#? First of all, F# is an expression-based language, which means that return keywords (in the imperative sense) are not permitted. Rather, we need to introduce additional variables to simulate breaks:

let tryFindAsync (predicate : 'T -> Async<bool>) (ts : 'T list) = async {
    let mutable result = None
    let mutable ts = ts
    while Option.isNone result && not (List.isEmpty ts) do
        let head :: tail = ts
        let! r = predicate head
        if r then result <- Some head
        else ts <- tail

    return result
}

Few would disagree with the conclusion that this solution is ugly and easy to get wrong. The lack of return, break and continue constructs in expression-based languages is the primary reason why an imperative approach is awkward here.

Real-World Recursion

This is precisely where recursion comes in handy. It can be used to effectively emulate any sort of imperative break in expression-based programs:

let rec tryFind (f : 'T -> Async<bool>) (ts : 'T list) = async {
    match ts with
    | [] -> return None
    | head :: tail ->
        let! r = f head
        if r then return Some head
        else return! tryFind f tail
}

Similar examples can be provided that illustrate how break and continue constructs can be emulated using recursion.
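For instance, here is a small illustrative sketch (not from the original text) of how continue- and break-like control flow can be rendered recursively:

// emulating 'continue': skip negative entries while accumulating a sum
let rec sumPositives acc xs =
    match xs with
    | [] -> acc
    | x :: rest when x < 0 -> sumPositives acc rest   // "continue"
    | x :: rest -> sumPositives (acc + x) rest

// emulating 'break': stop traversing as soon as a zero is encountered
let rec takeUntilZero xs =
    match xs with
    | [] | 0 :: _ -> []                                // "break"
    | x :: rest -> x :: takeUntilZero rest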

Conclusion

When working with expression-based languages, recursion is an essential tool for achieving control flow patterns typically only attainable via imperative constructs. In the context of F#, working programmers can only benefit from acquainting themselves with the standard recursion patterns commonly used in FP languages.

Programming in the Point-Free Style

Point-free programming (or point-less programming, for the more cynically inclined) is a paradigm that advocates the formulation of programs by means of function composition. In the point-free style, programs avoid explicitly nominating function arguments (or “points”), instead deriving complex function definitions by applying higher-order combinators to simpler functions.

To give an F# example, the code

let f xOpt =
    match xOpt with
    | Some x when x >= 0. -> Some(sqrt x)
    | _ -> None

can be refactored into the following one-liner:

let f = Option.map sqrt << Option.filter ((<=) 0.)

This is an extremely powerful technique, deeply rooted in concepts that are foundational to mathematics and logic. It fully embraces the realization that most meaningful concepts can be defined in terms of functions and function composition.

A Taste of Category Theory

If we were to trace the influence (and indeed, sexiness) of the point-free style to a single origin, that would undoubtedly be category theory. Category theory is the field of mathematics that (roughly) deals with the study of function composition in the abstract.

A category consists of abstract functions (or morphisms, or “arrows”) that have an origin and destination (or domain and codomain, or “argument type” and “return type”). Morphisms can be composed in a way that certain laws are satisfied. Importantly, categories carry a notion of function equality. Ultimately, all meaningful properties of a category are expressed in terms of that equality. Category theory allows for the study of entities that are not functions in the strict sense, but which nevertheless behave like functions.
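Concretely, the laws referenced above are the standard associativity and identity requirements, stated here for reference:

h \cdot (g \cdot f) = (h \cdot g) \cdot f \qquad 1_B \cdot f = f = f \cdot 1_A \quad \text{for every } f : A \to B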

This idea is powerful enough to serve as a foundation for the whole of mathematics, and then some. Its application has also been fruitful in logic and computer science, particularly in the field of denotational semantics, where it has been used to define domains not attainable using regular set theory.

Thus the point-free style isn’t just the natural way of working with categories, it is really the only way (which is kind of the point). Whereas sets have elements and types have terms, there is no corresponding concept baked into categories in the general case (thunks being the next best thing). This has interesting ramifications in terms of expressivity. To illustrate, consider the following expression from the typed lambda calculus:

f, g \vdash \lambda (x,y).\ g(x, f(x,y)) : A\times B \to C

This otherwise simple expression is defined in category theory as follows:

A\times B \xrightarrow{\Delta\times 1_B} (A\times A)\times B \xrightarrow{\alpha_{A,A,B}} A\times (A\times B)\xrightarrow{1_A\times f} A\times D\xrightarrow{\enspace g\enspace }C

rendered as the expression

g \cdot{} (1_A\times f) \cdot{} \alpha_{A,A,B} \cdot{} (\Delta\times 1_B)

where \Delta : A \to A\times A denotes the diagonal morphism \lambda x.\ (x,x), 1_A and 1_B are identity morphisms, and \alpha_{A,A,B} is the associativity isomorphism (A\times A)\times B \cong A\times (A\times B).

If the above definition seems abstruse, it’s because it is. Two points particularly add to the complexity of the point-free definition:

  • The requirement that x is passed as an argument twice: we are forced to simulate this duplication by precomposing with the diagonal (rendered \lambda x . (x,x) using the lambda calculus).
  • Our inability to pattern match and rearrange inputs: arguments have to be meticulously juggled around using associativity transformations.

Ultimately, what was a lambda expression that could be read and understood in seconds has been turned into an expression that takes minutes to grasp.

The point of this exercise is to illustrate how the lambda calculus with pattern matching (or set theory for that matter) is superior to the point-free style in terms of expressive efficiency. Not only does it naturally incorporate the point-free style, it goes far beyond it in its capacity to concisely express complex definitions. This goes to show that unless your work requires reasoning about large categories, you are probably better off steering clear of an exclusively point-free style.

Programming Take-Aways

So how does this all translate to our coding practices? Let us bring back that F# example referenced at the beginning of the article:

let f xOpt =
    match xOpt with
    | Some x when x >= 0. -> Some(sqrt x)
    | _ -> None

versus

let f = Option.map sqrt << Option.filter ((<=) 0.)

I claim that the first implementation provides the simpler and more idiomatic solution, for many reasons:

  • it’s easier to read and understand.
  • it assigns a name to the argument, which is exposed as part of the type signature.
  • it provides the more efficient implementation, avoiding gratuitous allocations/invocations of lambda objects.
  • it allows setting of breakpoints, a crucial ability when debugging large codebases.

The central thesis of this article then is more-or-less a reflection of the conclusions drawn from the previous section: ML affords a style of programming that both incorporates and is more expressively efficient than strict function composition. Point-free implementations also tend to be brittle, with slight alterations to either implementation or type signature often prompting substantial rewrites and introduction of new classes of combinators.

These observations can be extended to function usage patterns that are not point-free in the strict sense. Let’s consider a very simple example:

let x = pair |> fst

This is a very common way for deconstructing tuples. It works well, as long as our tuple is indeed a pair. When different arities come into play, it often is necessary to introduce non-standard projection functions:

let proj1Of3 (x,_,_) = x
let proj2Of3 (_,y,_) = y
let proj3Of3 (_,_,z) = z

List.exists (proj2Of3 >> (=) 3)

This approach has a few problems: each tuple type requires its own set of projector definitions, and the introduction of non-standard combinators adds comprehension overhead. The idiomatic solution afforded by ML does not suffer from either problem:

let x,_ = pair
List.exists (fun (_,y,_) -> y = 3)

The same can be said about non-standard combinators like flip, curry and uncurry. For example

let flip f = fun x y -> f y x

Is a combinator commonly used in functional pipelines like the following:

List.filter ((=) 0 << flip (%) 2)

Again, this implementation suffers from similar problems: it is hard to comprehend; it relies on a non-standard combinator; supporting combinators for every permutation of curried parameters is intractable, with n! possible combinators for functions accepting n curried arguments. I contend that code can be greatly simplified in the general case by employing a lambda literal:

List.filter (fun n -> n % 2 = 0)

The Value of Computation Expressions

“Monad” in the context of programming is a term that carries multiple meanings. First and foremost, a monad denotes a type augmented with a set of functions that satisfy the monad laws. Secondarily, it refers to the capacity of a programming language to provide syntactic sugar for monad instances. A language can have monadic types without syntactic sugar support and conversely, a syntactic monad does not necessarily satisfy the monad laws.

The importance of the latter kind of monad cannot be overstated. It provides a mechanism for lifting user-defined combinators into language-integrated expressions. The impact on readability is dramatic. Consider for instance an expression using the option monad:

option {
   let! x = Some 1
   let! y = Some 2
   let! z = Some 3
   return (x,y,z)
}

which desugars into

Some 1
|> Option.bind (fun x ->
    Some 2
    |> Option.bind (fun y -> 
        Some 3
        |> Option.bind (fun z -> Some(x,y,z))))

Notice that the nesting of the bind operations is essential due to scoping considerations: the final operation depends on all three bindings, thus converting this to a flattened pipeline is not possible unless we somehow accumulate the arguments:

Some 1
|> Option.bind (fun x -> Some (x,2))
|> Option.bind (fun (x,y) -> Some (x,y,3))

These examples serve to demonstrate that extended usage of monadic combinators without syntactic sugar support becomes intractable as complexity of workflows increases.
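As an aside, the option builder used above does not ship with FSharp.Core; a minimal definition supporting the let! and return keywords from the example might look like this (illustrative sketch):

// a minimal computation expression builder for the option monad
type OptionBuilder() =
    member __.Bind(x, f) = Option.bind f x
    member __.Return x = Some x

let option = OptionBuilder()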

F# computation expressions embrace the significance of syntax, by design: rather than being tied to monads in particular, they provide a general-purpose mechanism for overloading almost every language keyword. Computation expressions can be monads, sequence builders, imperative DSLs, query languages or anything in between. Their goal is to abstract away the noise of combinators and enable DSLs that can be written using human-readable expressions. We can write

let f xs =
    [ for x in xs do
        let y = x + 1
        if y % 5 = 0 then 
          yield string y ]

instead of

let f =
    List.map ((+) 1)
    >> List.filter (flip (%) 5 >> (=) 0)
    >> List.map string

It is this pragmatic design that explains the resounding success of F# async. Computation expressions liberate us from the burden of doing asynchronous programming using callbacks and rather let us write it as if it were synchronous. In that sense, I must say that I am puzzled by a certain tendency within the F# community to use Kleisli-style combinators on top of async:

let (>=>) (f : 'a -> Async<'b>) (g : 'b -> Async<'c>) =
    fun a -> async.Bind(f a, g)

let catch (h : exn -> Async<'b>) (f : 'a -> Async<'b>) =
    fun a -> async.TryWith(f a, h)

(async.Return << serialize)
>=> sendRequest Post "/foo"
>=> (async.Return << deserialize)
|> catch (function :? IOException -> async.Return None
                   | _ -> failwith "unexpected error")

I contend that this technique is a regression, in fact just a fancy way of doing async programming using explicit callbacks. Compare for instance with a slightly similar snippet using TPL:

Task.FromResult(x)
    .ContinueWith(fun x -> serialize x.Result)
    .ContinueWith(fun req -> sendReqAsync Post "/foo" req.Result)
    .ContinueWith(fun res -> deserialize res.Result)
    .ContinueWith(fun res ->
        if res.IsFaulted && res.Exception :? IOException then None
        else res.Result)

The power of async workflows lies in their simplicity. Compare the samples above with idiomatic async:

async {
    try
        let request = serialize x
        let! response = sendRequest Post "/foo" request
        return deserialize response 
    with :? IOException -> return None }

By relying on the compiler to tuck away the visual noise of callbacks, we obtain clean, readable, flexible, composable and debuggable code. Embedded pattern matching, nested expressions, lexical scoping and recursion provide a mix of tools that is strictly more expressive and concise than any ad-hoc forest of point-free monadic combinators.

Conclusion

It is not the case that the point-free style is useless. In fact, libraries often adopt the point-free style out of performance considerations. Good examples are parser combinators and pickler combinators, and there are more. These typically make the informed trade-off of sacrificing readability in the interest of eliminating the allocations typically associated with expression builders. In rare cases there can even be point-free DSLs that are actually legible in the large. However, adopting this approach always carries a big burden of proof, and should not be motivated merely by stylistic considerations.

Why OO Matters (in F#)

F# is a functional-first programming language that comes with a substantial object-oriented feature set. It is so feature-complete in fact, that almost any C# class can be ported over to F# code with little substantial alteration.

However significant, this subset of the language is seeing limited appreciation from the community, which I suspect is partly fuelled by the known criticisms of OOP and partly by a desire to be different from C#. After all, this is a functional-first language, so we can just replace all our classes with functions. There is also the opinion that OOP in F# merely serves as a compatibility layer for .NET, so it’s really only there to cover those unfortunate scenarios of having to use a library that accepts interfaces.

Enabling Abstraction

One of the most important aspects of maintaining a nontrivial codebase is controlling complexity. Complexity can be contained by partitioning code into logically standalone components whose implementation details are hidden behind appropriately designed abstractions. In his excellent Solid and Functional article, Vesa Karvonen argues that selecting the correct abstraction is a hard problem, and that functional programming is no silver bullet in dealing with that. This resonates a lot with me, and I strongly encourage everyone to read the full article.

That said, Vesa is framing the article in Standard ML which supports a full-blown module system. Modules can be abstracted using signatures or they can be parameterized by other modules using functors. Modules are the predominant means of abstraction in the ML ecosystem. In Haskell, it is type classes and higher-kinded types. In F#, modules are intentionally stripped of any complicated features, effectively functioning as a mere namespacing construct.

My claim is that there are inherent limits to what can be expressed using just F# modules and functions, in terms of enabling good abstraction. Luckily, we can always make use of the next best thing, which is F# OO. The thesis of this article is that strategically admitting elements of OO in an F# codebase significantly improves quality and maintainability. While I cannot conclusively prove this within the confines of a single blog post, I will try to provide hints as to why this is.

Classes as Value Parametric Modules

It is often the case that an API exposed through a module must be context aware. Typically F# developers address this by adding extra parameters in every function:

module MyApi =
    let function1 dep1 dep2 dep3 arg1 = doStuffWith dep1 dep2 dep3 arg1
    let function2 dep1 dep2 dep3 arg2 = doStuffWith' dep1 dep2 dep3 arg2

While this does work well in simple cases, it does not scale nicely as dependencies increase. It would typically prompt the developer to group arguments in context records:

type MyApiContext = { Dep1 : Dep1 ; Dep2 : Dep2 ; Dep3 : Dep3 }

module MyApi =
    let function1 (ctx : MyApiContext) arg1 = doStuffWith ctx.Dep1 ctx.Dep2 ctx.Dep3 arg1
    let function2 (ctx : MyApiContext) arg2 = doStuffWith' ctx.Dep1 ctx.Dep2 ctx.Dep3 arg2

This complicates the implementation even further, both at the definition site and at the consumption site. In practice, you either end up with one context type per component or one God context for the entire application. Even more importantly, this approach often violates encapsulation, pushing the burden of gathering dependencies onto the consumers of the API, every single time they consume it. Partial application also does little to address any of these concerns in nontrivial contexts.

Less experienced developers might be prompted to do something even worse: lift dependencies to module values.

module MyApi =
    let dep1 = File.ReadAllText "/Users/eirik/connectionstring.txt"
    let dep2 = Environment.GetEnvironmentVariable "DEP_2"
    let dep3 = Random().Next()

    let function1 arg = doStuffWith dep1 dep2 dep3 arg
    let function2 arg = doStuffWith' dep1 dep2 dep3 arg

This is bad for many reasons: it makes the API reliant on global state, introduces unneeded side-effects, and pushes app configuration concerns deep inside the guts of our codebase. What’s more, module value initialization compiles to a static constructor for the entire compilation unit so the exact moment of execution is largely unpredictable. Initialization errors manifest as TypeInitializationExceptions which are difficult to debug.

Contrast the situation above with the elegance afforded by a plain old class:

type MyParametricApi(dep1, dep2, dep3) =
    member __.Function1 arg1 = doStuffWith dep1 dep2 dep3 arg1
    member __.Function2 arg2 = doStuffWith' dep1 dep2 dep3 arg2

An API object could be created once at application initialization, or as many times as required depending on context. It’s also more amenable to testing. I should add that this approach is essentially just as “functional” as the approaches above, since it merely composes dependencies to expose a context-sensitive API. Importantly, it achieves this in a much simpler way, both at the definition site and at the consumption site, which pays great dividends in big codebases.

Expressive APIs

An important attribute of method-based APIs is that they allow for greater expressive power, in two important ways:

  1. Named/Optional parameters: unlike OCaml, whose functions support out-of-order named argument passing and omitted optional arguments, F# functions support neither. Luckily, we can do both using F# methods. I find this to be an immensely powerful tool when exposing non-trivially parameterizable functionality. A function forced to accept 10 explicit parameters just to emulate optional ones is not acceptable; a method that accepts 10 optional arguments works like a charm (see the sketch after this list).
  2. Method overloading: because function names like connect' and bind2 are simply not good enough when exposed in a public API.
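
To illustrate the first point, here is a minimal sketch of a method exposing named, optional arguments, something plain F# functions cannot do. The HttpClientFactory type and its parameters are hypothetical:

open System
open System.Net.Http

type HttpClientFactory =
    // Optional parameters surface as option values inside the method;
    // callers may omit them or pass them by name, in any order.
    static member Create(?timeout : TimeSpan, ?baseAddress : string) =
        let client = new HttpClient()
        client.Timeout <- defaultArg timeout (TimeSpan.FromSeconds 30.)
        baseAddress |> Option.iter (fun uri -> client.BaseAddress <- Uri uri)
        client

// Call sites remain readable even as the number of optional arguments grows:
let client = HttpClientFactory.Create(baseAddress = "https://example.com")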

More Powerful Types

The type system afforded by .NET is strictly more powerful than what can be expressed using modules and functions. For example, the interface

type Scheduler =
    abstract Run<'T> : Async<'T> -> 'T

encodes a kind of function that cannot be expressed in terms of proper F# lambdas. When combined with subtyping, it is possible to effectively encode existential types and Rank-N polymorphism. Even GADTs are possible, with minimal augmentations of the type system.

In practice, it is possible to leverage that additional power very effectively. In fact, F# makes it easy to define generic function literals using object expressions. This is also how the TypeShape library has been made possible.
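
For instance, here is a minimal sketch of such a generic “function literal”: an object expression implementing the Scheduler interface above. The syncScheduler and describe names are illustrative:

// An object expression acting as a polymorphic function value:
let syncScheduler =
    { new Scheduler with
        member __.Run workflow = Async.RunSynchronously workflow }

// The same Scheduler argument can be applied at several types inside the
// consumer, which an ordinary Async<'T> -> 'T parameter cannot offer:
let describe (scheduler : Scheduler) =
    let n = scheduler.Run(async { return 42 })
    let s = scheduler.Run(async { return "hello" })
    sprintf "%s: %d" s n

describe syncScheduler |> printfn "%s"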

Abstracting Modules

Functions are the unit of abstraction in F#, but that unit is often insufficient when abstracting APIs. This prompts developers to adopt an approach where abstract APIs are surfaced as either records or tuples of functions:

type Serializer =
    {
        Serialize : bool -> obj -> string
        Deserialize : bool -> string -> obj
    }

The F# design guidelines discourage the use of records for building APIs and recommend regular interfaces instead.

I strongly agree with this recommendation for a multitude of reasons: interfaces are more powerful since they support generic methods, named arguments and optional arguments. An interface is also less likely to be defined in terms of closures, making it easier to reason about when viewed from a debugger.

So the example above could be rendered as an interface like so:

type Serializer =
    abstract Serialize<'T> : preserveRefEq:bool -> value:'T -> string
    abstract Deserialize<'T> : preserveRefEq:bool -> pickle:string -> 'T

The most important aspect of this approach is readability. It is easier for a consumer of this interface to anticipate what the purpose of each argument is, in the same way that it is easier to understand a record of functions over a tuple of functions.
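
As a quick illustration, a consumer of this interface reads naturally thanks to the named, generic parameters; the roundtrip helper below is hypothetical:

// One serializer instance can handle values of any type via the generic methods:
let roundtrip (serializer : Serializer) (value : 'T) : 'T =
    let pickle = serializer.Serialize true value   // preserveRefEq = true
    serializer.Deserialize true pickle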

Representing Illegal State

A lot of proverbial ink has been spent describing how we should strive to make illegal states unrepresentable. However, I do fail to see how this could be fully realized given that the functional core of F# only consists of algebraic data types. Take for example an oft-quoted email type:

type Email = Email of string

The following values are valid instances of type Email:

Email null
Email "John Smith"
Email "eirik@foo.bar'; DROP TABLE dbo.Users"

If we truly care about illegal states, the obvious alteration to the type above ought to be the following

type Email = private | Email of string
with
    member this.Address = let (Email addr) = this in addr
    static member TryParse(address : string) =
        if isValidEmail address then Some(Email address)
        else None

But really, this is just a class encoded as a union. The implementation below is simpler:

type Email private (address : string) =
    member __.Address = address
    static member TryParse(address : string) =
        if isValidEmail address then Some(Email address)
        else None

NB the previous implementation might in fact be warranted in cases where free structural equality or comparison are needed. But for all intents and purposes, both approaches effectively subscribe to OO-style encapsulation.
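
If structural equality is genuinely needed, the class version simply has to spell it out; a sketch (isValidEmail remains a placeholder):

type Email private (address : string) =
    member __.Address = address
    static member TryParse(address : string) =
        if isValidEmail address then Some(Email address)
        else None
    // Equality that the union encoding would have provided for free:
    override __.Equals(other : obj) =
        match other with
        | :? Email as e -> e.Address = address
        | _ -> false
    override __.GetHashCode() = address.GetHashCode()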

OO And Purity

The relationship between OO and purity is a frequent source of misconception. Occasionally someone will claim that by admitting objects we are ipso facto forsaking purity. On the contrary, I claim that these really are orthogonal concerns. Just as a lambda is capable of producing side-effects, objects can be designed for purity. Good examples of this are Map and Set in the core library. A lambda is really just an abstract class with a single virtual method and lots of syntactic sugar; there is nothing fundamentally setting it apart from objects once you exclude the syntactic aspect.
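
To make the point concrete, here is a minimal sketch of an object designed for purity; the Counter type is illustrative, and every “mutation” returns a fresh instance, exactly as Map.add and Set.add do:

type Counter private (count : int) =
    new () = Counter(0)
    member __.Count = count
    // Returns a new Counter; the receiver is never modified.
    member __.Increment() = Counter(count + 1)

let c0 = Counter()
let c2 = c0.Increment().Increment()
// c0.Count = 0 and c2.Count = 2: no observable side-effects anywhere.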

Conclusions

So, is this a call to go full-blown OO in F# projects? Should we be digging up our old GoF copies? Are design patterns up there in the functional curriculum together with profunctor optics? Are inheritance and class hierarchies sexy again? No!

I am in fact proposing that there is a third way, where functional and OO components coexist, with one paradigm complementing the other. This is hardly a new idea. Quoting from the F# design guidelines:

F# is commonly called a functional-first language: object, functional and imperative paradigms are all well supported, but functional programming tends to be the first technique used. Many of the defaults of F# are set up to encourage functional programming, but programming in the other paradigms is effective and efficient, and a combination is often best of all. It is a common misconception that the functional and object programming methodologies are competing. In fact, they are generally orthogonal and largely complementary. Often, functional programming plays a stronger role “in the small” (e.g. at the implementation level of functions/methods and the code contained therein) and object programming plays a bigger role “in the large” (e.g. at the structural level of classes, interfaces, and namespaces, and the organization of APIs for frameworks).

In my 6 years of working with F#, my style has gradually shifted towards embracing this approach. A few examples:

  • I typically write the implementation of a large component in the functional style behind a private module, then expose its public API as part of a standalone class. I find that method-based APIs are friendlier to consumers unfamiliar with the implementation.
  • I use records and unions for encoding internal representations and classes for encapsulating publicly visible instances. A very good example of this is the F# map implementation.
  • I rarely expose records and unions as part of a public API unless it is abundantly evident that all possible instances for the given types are valid in their context of use. This does not happen often in my experience.
  • If a module is exposed as part of a public API, care must be taken so that the number of arguments is small and behaviour can be predicted by reading the type signature of the function alone. The core List and Array modules are a good example. Avoid using modules to expose complex functionality like the Async API.

I remember reading a few years back Simon Cousins’ NOOO manifesto, which stands for Not Only Object-Oriented development. In retrospect I find this to be an excellent name for a manifesto, if only because “Not Only OO” is not the same thing as “No OO”. So here’s a proposal to revive that manifesto, perhaps with the understanding that “Not Only OO” also implies “Not Only FP” in the context of F#.

F# and Purity

Purely functional programming, according to Wikipedia,

designates a programming paradigm that treats all computation as the evaluation of mathematical functions. Purely functional programming may also be defined by forbidding changing state and mutable data.

The importance of the purely functional approach cannot be overstated: it eliminates entire classes of mutation-related bugs, it encourages composable abstraction, pure code can be reasoned about equationally, and the explicit segregation of side-effects acts as a forcing function for better abstraction. At the same time, immutable persistent data structures can be updated cheaply and incrementally while being shared safely across multiple threads. Pure programs also admit compile-time optimizations that would be tricky to achieve in the presence of state.

Benefits aside, purity is an attribute of expressions and their formal semantics. A pure program still ends up executing on regular computer hardware, where it can exhibit effects such as high CPU utilization, excessive heap allocations, stack overflows, division by zero errors, and so on. Thus the connection of pure code to functions of the mathematical variety should not be considered a literal one, otherwise all correct Fibonacci implementations would be equal by virtue of extensionality. Being able to reason about how code —pure or otherwise— executes on the underlying hardware is critically important for the working programmer.

An ML Tradition

Transforming purely functional declarations into equivalent, optimal imperative code is a holy grail, still very much a topic of active research. In the general case though, performance can only be attained by forsaking purity. This is very much compatible with the ML philosophy, which has traditionally embraced imperative features in its language core. Quoting from Real World OCaml

Pure code is the default in OCaml, and for good reason—it’s generally easier to reason about, less error prone and more composable. But imperative code is of fundamental importance to any practical programming language, because real-world tasks require that you interact with the outside world, which is by its nature imperative. Imperative programming can also be important for performance. While pure code is quite efficient in OCaml, there are many algorithms that can only be implemented efficiently using imperative techniques.

OCaml offers a happy compromise here, making it easy and natural to program in a pure style, but also providing great support for imperative programming.

This also applies particularly well to F#, an OCaml dialect for the .NET framework. In my personal experience, performance gains when switching from purely functional to imperative code in F# can be particularly dramatic. Strategically adopting imperative implementations in performance-critical components is often vital to ensuring sustainability of a system.

Imperative Programming done right

So what does this all mean? Should we just give up on purely functional code in the interest of performance? Is F# really just a way to write C# programs in ML-style syntax?

Well, not really. I do claim though that there is a sweet spot where impure features can be utilized to combine the benefits of pure FP with a lot of the efficiency of procedural code. Defining that sweet spot is hard, however I cannot think of a more concise example than Nessos Streams, which provides a highly efficient functional streams implementation in just over 40 lines of code. Other examples could include FParsec and Hopac, whose core implementations have been written in C# out of performance considerations.

If one had to nominate a key guiding principle for using imperative features in F# code, it would surely be referential transparency. Very roughly, a function is referentially transparent if it behaves like a pure function, even if its implementation uses imperative constructs. For example, the function

let incr (cell : int ref) : unit = cell := !cell + 1

is not referentially transparent, because substituting a call to incr with the result of the computation is not a behaviour-preserving transformation in general.
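
A two-line demonstration: incr always returns (), yet replacing the calls below with their results would leave the cell at 0 instead of 2.

let cell = ref 0
incr cell
incr cell
// !cell now evaluates to 2; substituting the calls with () would change this.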

We can still use imperative features to define functions that are in fact referentially transparent. A good example is implementing groupBy for lists. A purely functional groupBy could be written as follows:

let groupBy (proj : 'T -> 'Key) (ts : 'T list) : ('Key * 'T list) list =
    let folder (map : Map<'Key, 'T list>) (t : 'T) =
        let key = proj t
        let grouped = defaultArg (Map.tryFind key map) []
        Map.add key (t :: grouped) map

    ts
    |> List.fold folder Map.empty
    |> Map.toList
    |> List.map (fun (k,v) -> k, List.rev v)

This implementation may be pure, however performance-wise it is not ideal. We can still produce a more efficient, imperative equivalent of the same function:

let groupBy' (proj : 'T -> 'Key) (ts : 'T list) : ('Key * 'T list) list =
    let dict = new Dictionary<'Key, ResizeArray<'T>>()
    for t in ts do
        let key = proj t
        let mutable array = null
        if not <| dict.TryGetValue(key, &array) then
            array <- new ResizeArray<'T>()
            dict.Add(key, array)
        array.Add t

    dict
    |> Seq.map (function KeyValue(k,v) -> k, Seq.toList v)
    |> Seq.toList

The second function provides order-of-magnitude performance gains, but is also referentially transparent: it can be substituted with the pure implementation or the resulting value without affecting behaviour of the overall program. In fact, this approach very much reflects how the core library implements its own functions.
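
A quick sanity check of that substitutability claim; the inputs are arbitrary, and the results are sorted by key only because the two implementations emit groups in different key orders:

let xs = [1 .. 20]
let viaMap  = groupBy  (fun x -> x % 3) xs |> List.sortBy fst
let viaDict = groupBy' (fun x -> x % 3) xs |> List.sortBy fst
printfn "%b" (viaMap = viaDict)  // prints true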

Conclusion

Purely functional programming in F# provides a very effective means of reasoning about code, however in the interest of efficiency it is often preferable to leverage the imperative aspects of the language. When done carefully, it can result in better performing code that sacrifices few if any of the benefits of the purely functional approach.

In fact, my claim is that this is very much the idiomatic way of using ML in general and F# in particular. Pretending that F# is a purely functional language à la Haskell is highly counterproductive, particularly when authoring production-grade projects.

You’re better off using Exceptions

Exception handling is an error management paradigm that has often been met with criticism. Such criticisms typically revolve around scoping considerations, exceptions-as-control-flow abuse or even the assertion that exceptions are really just a type safe version of goto. To an extent, these seem like valid concerns but it is not within the scope of this article to address those per se.

Such concerns resonate particularly well within FP communities, often taken to the extreme: we should reject exceptions altogether, since code that throws is necessarily impure. In the F# community, this opinion is in part realized by advocating alternatives like result types and railway-oriented programming. In essence, these approaches follow the Either monad found in Haskell, but often intentionally avoiding the use of do notation/computation expressions (since that’s just interpreted exception semantics).

The TL;DR version of the approach is that we define a union type for results that looks like this:

type Result<'TSuccess, 'TError> =
    | Ok of 'TSuccess
    | Error of 'TError

then require that any function which admits the possibility of error should return a Result value. This way we force the type system to acknowledge error cases, which makes our error handling logic more explicit and predictable. We can also use combinators like bind to encode common error handling flows.
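
For reference, the bind combinator in question is the customary one, as used later in this article:

let bind (f : 'T -> Result<'S, 'Error>) (result : Result<'T, 'Error>) : Result<'S, 'Error> =
    match result with
    | Ok t -> f t
    | Error e -> Error e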

I would like to illustrate in this article why I believe that this often causes a lot of problems — while simultaneously not achieving the advertised benefits, particularly in the context of maintaining large codebases. It also gives rise to a multitude of anti-patterns, particularly when used by people new to F#.

An issue of Runtime

Most programming languages out there carry the concept of a runtime error. Such errors sometimes stem from unexpected locations, in code otherwise considered pure. In Haskell for instance, consider what the pure expressions “2 `div` 0” or “head []” evaluate to. In the case of the CLR, runtime errors manifest in the form of thrown exceptions, which arguably happens more often than in Haskell.

Because runtime errors are difficult to anticipate, I claim that using result types as a holistic replacement for error handling in an application is a leaky abstraction. Consider the following example:

type Customer = { Id : string; Credit : decimal option }

let average (customers : Customer list) =
    match customers with
    | [] -> Error "list was empty"
    | _ ->
        customers
        |> List.averageBy (fun c -> c.Credit.Value)
        |> Ok

Notice that the above snippet now admits two possible classes of errors, string and the potential NullReferenceException admitted by careless access of the optional field in the customer record. This eventually delivers an unpleasant surprise to consumers of the function, since the type signature communicates an exception-free implementation.

An Awkward Reconciliation

It’s by such misadventure that the working F# programmer soon realizes that any function could still potentially throw. This is an awkward realization, which is typically addressed by catching exceptions as early as possible. This, of course, is the purely functional equivalent of the Pokemon anti-pattern:

let avg : Result<decimal, string> =
    try average customers
    with e -> Error e.Message // gotta catch em all!

Boilerplate

In the majority of codebases using result types that I have reviewed, people typically just end up re-implementing exception semantics on an ad-hoc basis. This results in extremely noisy code, as illustrated in the following example:

let combine x y z =
    match x, y, z with
    | Ok x, Ok y, Ok z -> Ok (x,y,z)
    | Error e, _, _ | _, Error e, _ | _, _, Error e -> Error e

or in using pre-baked combinators

let combine x y z =
    x |> bind (fun x -> y |> bind (fun y -> z |> bind (fun z -> Ok(x,y,z))))

We really shouldn’t be doing this. It is essentially the type equivalent of using a linter that demands inserting catch-alls and rethrows in every method body.

Given sufficient implementation complexity, things can get really ugly: binding on results of differing error types and other incidental complexities require substantial refactoring to get things right. Often, this will prompt developers to cut corners by doing unexpected things, like inserting a throw just to get rid of an annoying error branch or returning nested result types like

Result<Result<string, string>, string list>
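
To see where the friction comes from, consider a sketch of composing two results whose error types differ; all types and helpers below are hypothetical, and mapError is the customary error-mapping combinator:

let mapError (f : 'E1 -> 'E2) (result : Result<'T, 'E1>) : Result<'T, 'E2> =
    match result with
    | Ok t -> Ok t
    | Error e -> Error (f e)

type ParseError = ParseError of string
type DbError = DbError of string

type AppError =
    | Parse of ParseError
    | Db of DbError

let parseId (input : string) : Result<int, ParseError> =
    match System.Int32.TryParse input with
    | true, n -> Ok n
    | _ -> Error (ParseError input)

let loadUser (id : int) : Result<string, DbError> =
    if id > 0 then Ok "some user" else Error (DbError "user not found")

// The error channels must be unified before the two results can even be composed:
let getUser (input : string) : Result<string, AppError> =
    parseId input
    |> mapError Parse
    |> bind (loadUser >> mapError Db)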

Where’s my Stacktrace?

An important property of exceptions, which cannot be stressed enough, is that they are entities managed and understood by the underlying runtime, endowed with metadata critical to diagnosing bugs in complex systems. They can also be tracked and highlighted by tooling such as debuggers and profilers, providing invaluable insight when probing a large system.

By lifting all of our error handling to passing result values, we are essentially discarding all that functionality. There is nothing worse than having to deal with a production issue which comes up in the logs simply as "list was empty".

The Problem with IO

The points discussed thus far have typically focused on applications that might as well have been purely functional. As might be expected, these problems become even more pronounced when IO enters the picture or when interfacing with third-party .NET libraries. Consider the following example:

let tryReadAllText (path : string) : string option =
    try System.IO.File.ReadAllText path |> Some
    with _ -> None

I have spotted this approach and variations in many production-grade F# codebases, and have been responsible for some of these myself back when I was younger and more naive. This adapter is motivated by the desire to have an exception-free file reader, however it is deeply flawed precisely because of that.

While it is fairly unambiguous what a Some result means, the None bit hardly conveys any information other than the obvious. The documentation for File.ReadAllText lists the many exceptions that can be raised by that particular call. By discarding the exception, it becomes impossible to diagnose what actually went wrong.

Stringly-typed error handling

One could quickly point out that the snippet above can be amended so that data loss is minimized.

let readAllText (path : string) : Result<string, string> =
    try System.IO.File.ReadAllText path |> Ok
    with e -> Error e.Message

This alteration still makes error handling awkward and unsafe

match readAllText "foo.txt" with
| Error e when e.Contains "Could not find file" -> // ...

particularly when compared to how the idiomatic approach works

try File.ReadAllText path
with :? FileNotFoundException -> // ...

Interop issues

It is often easy to forget that Result values are plain objects that carry no special meaning in the context of other frameworks/libraries. The following example illustrates an innocent mistake when working with the TPL:

let task =
    Task.Run(fun () ->
        if Random().Next() % 2 = 0
        then Ok ()
        else Error ())

task.Wait()
if task.IsFaulted then printfn "Task failed"
else printfn "Task succeeded"

Or perhaps it could be mistakenly believed that F# Async somehow recognizes result types

let job1 = async { return Ok () }
let job2 = async { return Error () }

let result =
    [job1 ; job2]
    |> Async.Parallel
    |> Async.RunSynchronously
    |> Ok

match result with
| Ok _ -> printfn "All jobs completed successfully"
| Error _ -> printfn "Some of the jobs failed"

Conclusions

It is not the intention of this article to argue that result types shouldn’t be used at all. I have written DSLs and interpreters using result types. MBrace uses result types internally. In most cases though, the benefits are maximized by writing custom result types that model a particular domain. Rather than having a general-purpose error branch that encodes infinitely many classes of failure, assign each class of errors to an individual branch of the result type:

type MoneyWithdrawalResult =
    | Success of amount:decimal
    | InsufficientFunds of balance:decimal
    | CardExpired of DateTime
    | UndisclosedFailure

This approach constrains error classes in a finite domain, which also allows for more effective testing of our code.
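
A consumer can then match on every failure case exhaustively, with the compiler flagging any forgotten branch; the report function is illustrative:

let report (result : MoneyWithdrawalResult) =
    match result with
    | Success amount -> sprintf "Withdrew %M" amount
    | InsufficientFunds balance -> sprintf "Insufficient funds: balance is %M" balance
    | CardExpired expiry -> sprintf "Card expired on %O" expiry
    | UndisclosedFailure -> "The withdrawal failed for an undisclosed reason"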

That said, I strongly believe that using result types as a general-purpose error handling mechanism for F# applications should be considered harmful. Exceptions should remain the dominant mechanism for error propagation when programming in the large. The F# language has been designed with exceptions in mind, and has achieved that goal very effectively.

Reconciling Stacktraces with Computation Expressions, Revisited.

About a year ago, I wrote an article arguing for adding stacktrace support to F# computation expressions. Thanks to work by Lincoln Atkinson and Avi Avni, the upcoming F# 4.1 will be providing this functionality.

On the occasion of the release of VS2017 RC, I decided to spend some time adapting my continuation monad implementation to the new feature. The Bind implementation now looks as follows:

member __.Bind(f : Cont<'T>, g : 'T -> Cont<'S>,
                [<CallerMemberName>]?callerName : string,
                [<CallerFilePath>]?callerFilePath : string,
                [<CallerLineNumber>]?callerLineNumber : int) : Cont<'S> =

    fun sc ec ->
        let sc' (t : 'T) =
            match (try Ok(g t) with e -> Error e) with
            | Ok g -> g sc ec
            | Error e -> ec (SymbolicException.capture e)

        let ec' (se : SymbolicException) =
            let stackMsg =
                sprintf "   at %s in %s:line %d" 
                    callerName.Value 
                    callerFilePath.Value
                    callerLineNumber.Value

            ec (SymbolicException.append stackMsg se)

        f sc' ec'

and ReturnFrom:

member __.ReturnFrom(f : Cont<'T>,
                        [<CallerMemberName>]?callerName : string,
                        [<CallerFilePath>]?callerFilePath : string,
                        [<CallerLineNumber>]?callerLineNumber : int) : Cont<'T> =
    fun sc ec ->
        let ec' (se : SymbolicException) =
            let stackMsg =
                sprintf "   at %s in %s:line %d" 
                    callerName.Value 
                    callerFilePath.Value
                    callerLineNumber.Value

            ec (SymbolicException.append stackMsg se)

        f sc ec'

Running the odd/even example

let rec odd (n : int) = 
    cont {
        if n = 0 then return false
        else
            return! even (n - 1)
    }
 
and even (n : int) =
    cont {
        if n = 0 then return failwith "bug!"
        else
            return! odd (n - 1)
    }

odd 5 |> Cont.run

Now generates the stacktrace:

System.Exception: bug!
   at Program.even@119.Invoke(Unit unitVar) in C:\Users\eirik\devel\public\cont\Program.fs:line 119
   at Program.sc'@54-1.Invoke(a t) in C:\Users\eirik\devel\public\cont\Program.fs:line 54
   at odd in C:\Users\eirik\devel\public\cont\Program.fs:line 114
   at even in C:\Users\eirik\devel\public\cont\Program.fs:line 121
   at odd in C:\Users\eirik\devel\public\cont\Program.fs:line 114
   at even in C:\Users\eirik\devel\public\cont\Program.fs:line 121
   at odd in C:\Users\eirik\devel\public\cont\Program.fs:line 114
   at Program.ContModule.ec@102-1.Invoke(SymbolicException se) in C:\Users\eirik\devel\public\cont\Program.fs:line 102
   at Program.ContModule.run[T](FSharpFunc`2 cont) in C:\Users\eirik\devel\public\cont\Program.fs:line 103
   at <StartupCode$ConsoleApplication3>.$Program.main@() in C:\Users\eirik\devel\public\cont\Program.fs:line 106

The results look pretty promising. Looking forward to applying this technique to async and cloud builders!

You can find the full implementation here.