# On the Significance of Recursion

During a recent conversation with a group of F# developers, I started making the case for recursion and its importance in day-to-day programming. My peers reacted with surprise to this opinion of mine, responding that they thought that recursion is an inherently inefficient way of defining programs.

We are taught this rule when studying procedural languages: “for optimal results, just use iteration”. In functional languages, however, this is not necessarily true: tail-recursive functions are optimized into imperative loops at compile time, and most recursive functions can be readily transformed into tail-recursive form by applying accumulators or continuation-passing style (CPS). These fundamentals are what make the functional paradigm a feasible alternative to the imperative style.
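To make the accumulator transformation concrete, here is a small sketch (my own example, not from the original text): a naive recursive sum whose pending additions grow the stack, rewritten so that the recursive call is in tail position.

```fsharp
// Naive recursion: the addition happens *after* the recursive call
// returns, so each element consumes a stack frame.
let rec sum (xs : int list) =
    match xs with
    | [] -> 0
    | x :: rest -> x + sum rest

// Accumulator variant: the running total is threaded through the
// recursion, making the recursive call the last operation.
// The compiler turns this into a plain loop.
let sumAcc (xs : int list) =
    let rec loop acc xs =
        match xs with
        | [] -> acc
        | x :: rest -> loop (acc + x) rest
    loop 0 xs
```

The tail-recursive version runs in constant stack space regardless of input length, whereas the naive version eventually overflows on large lists.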

I was thus legitimately surprised to discover that working F# programmers carried this attitude towards recursion, which made me wonder whether it could be an artifact of how the language is taught to FP newcomers. I researched some of the popular online material and did a bit of asking around, which seemed to confirm my suspicion: there is a tendency to de-emphasize recursion in beginner F# educational material. An important reason is that the topic of recursion itself can appear puzzling and counter-intuitive to beginners. Quoting Graham Hutton:

Defining recursive functions is like riding a bicycle: it looks easy when someone else is doing it, may seem impossible when you first try to do it yourself, but becomes simple and natural with practice.

Omitting recursion can also be justified by a desire to appear pragmatic; recursion has a reputation for being too academic, a mathematical trivium that is only useful when defining Fibonacci numbers but not relevant in real code. Algorithms should be declared by composing available combinators, and if everything fails we can always fall back to the imperative style. Recursion is relegated to the “Advanced” chapters, bundled together with other academic curiosities such as call/cc and catamorphisms.

### An Example

While there are merits in this stance, I claim that avoiding recursion entirely in expression-based languages comes with some very important limitations. I will try to illustrate why using an example. Consider the core combinator

```fsharp
val List.tryFind : ('T -> bool) -> 'T list -> 'T option
```


which performs a linear search on its supplied list, returning the first element that satisfies the supplied predicate. This is a simple yet extremely useful combinator that finds frequent use in F# codebases. It works very well for most applications, assuming our predicate is a regular lambda.
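For instance:

```fsharp
// Returns the first element satisfying the predicate, if any.
let firstEven = [1; 3; 4; 6] |> List.tryFind (fun x -> x % 2 = 0)   // Some 4
let firstNeg  = [1; 3; 4; 6] |> List.tryFind (fun x -> x < 0)       // None
```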

But what happens if the predicate is of type 'T -> Async&lt;bool&gt;, in other words asynchronous? We would probably require a variation of tryFind, namely

```fsharp
val tryFindAsync : ('T -> Async<bool>) -> 'T list -> Async<'T option>
```


which, however, is not available in the core library. In such circumstances, F# developers have to improvise.

As an exercise, let’s attempt to provide a good implementation that avoids recursion. A novice user would quickly end up with the following solution:

```fsharp
let tryFindAsync (pred : 'T -> Async<bool>) (inputs : 'T list) =
    async { return List.tryFind (pred >> Async.RunSynchronously) inputs }
```


which is evidently a misguided solution, since it invalidates the purpose of asynchrony.

The requirement for asynchrony could still be satisfied using smarter manipulation of generic combinators, as seen in the following example:

```fsharp
let tryFind (f : 'T -> Async<bool>) (ts : 'T list) =
    let folder acc t = async {
        let! result = acc
        match result with
        | Some _ -> return result
        | None ->
            let! success = f t
            return (if success then Some t else None)
    }

    List.fold folder (async.Return None) ts
```


In this case, we are folding over asynchronous computations, so the final result is a non-blocking asynchronous workflow that can be consumed accordingly. There are however two problems with this approach:

1. The solution is too smart for its own good. Sure, folding over functions feels good to the person who came up with it; however, there is nontrivial cognitive overhead associated with what should be a very simple task. In the context of a bigger team, such heroics aren’t appreciated.
2. More importantly, fold operations always traverse the entire input list. The point of tryFind is that it should stop as soon as a result is found. By delegating iteration to other combinators, we lose the ability to tune aspects of control flow. Assuming our input type allowed for infinite lists (for instance IEnumerable), this approach would necessarily result in non-terminating computation.

### An Imperative Solution

Let us now try addressing the same problem using the imperative style. First, here’s how it could be defined elegantly using C#:

```csharp
async Task<Option<T>> TryFindAsync<T>(Func<T, Task<bool>> predicate, IEnumerable<T> inputs)
{
    foreach (var t in inputs)
        if (await predicate(t)) return Option.Some(t);

    return Option.None<T>();
}
```


What happens if we attempt to port this implementation to F#? First of all, F# is an expression-based language, which means that return keywords (in the imperative sense) are not permitted. Rather, we need to introduce additional variables to simulate breaks:

```fsharp
let tryFindAsync (predicate : 'T -> Async<bool>) (ts : 'T list) = async {
    let mutable result = None
    let mutable ts = ts
    while Option.isNone result && not (List.isEmpty ts) do
        let head, tail = List.head ts, List.tail ts
        let! r = predicate head
        if r then result <- Some head
        else ts <- tail

    return result
}
```


Few would disagree with the conclusion that this solution is ugly and easy to get wrong. The lack of return, break and continue constructs in expression-based languages is the primary reason why an imperative approach is awkward here.

### Real-World Recursion

This is precisely where recursion comes in handy. It can be used to effectively emulate any sort of imperative break in expression-based programs:

```fsharp
let rec tryFind (f : 'T -> Async<bool>) (ts : 'T list) = async {
    match ts with
    | [] -> return None
    | head :: tail ->
        let! r = f head
        if r then return Some head
        else return! tryFind f tail
}
```


Similar examples can be provided that illustrate how break and continue constructs can be emulated using recursion.
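For instance, a hypothetical loop that skips blank entries (a continue) and stops at a sentinel (a break) collapses into a single recursive function; the names and the "quit" sentinel here are illustrative:

```fsharp
/// Collects trimmed lines until a "quit" sentinel is reached,
/// skipping blank lines along the way.
let collectUntilQuit (lines : string list) =
    let rec loop acc lines =
        match lines with
        | [] -> List.rev acc                             // natural termination
        | "quit" :: _ -> List.rev acc                    // emulates break
        | l :: rest when l.Trim() = "" -> loop acc rest  // emulates continue
        | l :: rest -> loop (l.Trim() :: acc) rest
    loop [] lines
```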

### Conclusion

When working with expression-based languages, recursion is an essential tool for achieving control flow patterns typically only attainable via imperative constructs. In the context of F#, working programmers can only benefit from acquainting themselves with the recursion patterns commonly used in FP languages.

# Programming in the Point-Free Style

Point-free programming (or point-less programming, for the more cynically inclined) is a paradigm that advocates the formulation of programs by means of function composition. In the point-free style, programs avoid explicitly naming function arguments (or “points”), instead deriving complex function definitions by applying higher-order combinators to simpler functions.

To give an F# example, the code

```fsharp
let f xOpt =
    match xOpt with
    | Some x when x >= 0. -> Some(sqrt x)
    | _ -> None
```


can be refactored into the following one-liner:

```fsharp
let f = Option.map sqrt << Option.filter ((<=) 0.)
```


This is an extremely powerful technique, deeply rooted in concepts that are foundational to mathematics and logic. It fully embraces the realization that most meaningful concepts can be defined in terms of functions and function composition.

## A Taste of Category Theory

If we were to trace the influence (and indeed, sexiness) of the point-free style to a single origin, then that would undoubtedly be category theory. Category theory is the field of mathematics that (roughly) deals with the study of function composition in the abstract.

A category consists of abstract functions (or morphisms, or “arrows”) that have an origin and destination (or domain and codomain, or “argument type” and “return type”). Morphisms can be composed in a way that certain laws are satisfied. Importantly, categories carry a notion of function equality. Ultimately, all meaningful properties of a category are expressed in terms of that equality. Category theory allows for the study of entities that are not functions in the strict sense, but which nevertheless behave like functions.

This idea is powerful enough to serve as a foundation for the whole of mathematics, and then some. Its application has also been fruitful in logic and computer science, particularly in the field of denotational semantics, where it has been used to define domains not attainable using regular set theory.

Thus the point-free style isn’t just the natural way of working with categories, it is really the only way (which is kind of the point). Whereas sets have elements and types have terms, there is no corresponding concept baked into categories in the general case (thunks being the next best thing). This has interesting ramifications in terms of expressivity. To illustrate, consider the following expression from the typed lambda calculus:

$f, g \vdash \lambda (x,y).\ g(x, f(x,y)) : A\times B \to C$

This otherwise simple expression is defined in category theory as follows:

$A\times B \xrightarrow{\Delta\times 1_B} (A\times A)\times B \xrightarrow{\alpha_{A,A,B}} A\times (A\times B)\xrightarrow{1_A\times f} A\times D\xrightarrow{\enspace g\enspace }C$

rendered as the expression

$g \cdot{} (1_A\times f) \cdot{} \alpha_{A,A,B} \cdot{} (\Delta\times 1_B)$

where $\Delta : A \to A\times A$ denotes the diagonal morphism and $\alpha_{A,A,B} : (A\times A)\times B \to A\times(A\times B)$ the associativity isomorphism.

If the above definition seems abstruse, it’s because it is. Two points particularly add to the complexity of the point-free definition:

• The requirement that $x$ is passed as an argument twice: we are forced to simulate this duplication by precomposing with the diagonal (rendered $\lambda x . (x,x)$ using the lambda calculus).
• Our inability to pattern match and rearrange inputs: arguments have to be meticulously juggled around using associativity transformations.

Ultimately, what was a lambda expression that could be read and understood in seconds has been turned into an expression that takes minutes to grasp.

The point of this exercise is to illustrate how the lambda calculus with pattern matching (or set theory, for that matter) is superior to the point-free style in terms of expressive efficiency. Not only does it naturally incorporate the point-free style, it extends far beyond its capacity to concisely express complex definitions. This goes to show that unless your work requires reasoning about large categories, you are potentially better off steering clear of an exclusively point-free style.

## Programming Take-Aways

So how does this all translate to our coding practices? Let us bring back that F# example referenced at the beginning of the article:

```fsharp
let f xOpt =
    match xOpt with
    | Some x when x >= 0. -> Some(sqrt x)
    | _ -> None
```


versus

```fsharp
let f = Option.map sqrt << Option.filter ((<=) 0.)
```


I claim that the first implementation provides the simpler and more idiomatic solution, for many reasons:

• it’s easiest to read and understand.
• it assigns a name to the argument, which is exposed as part of the type signature.
• it provides the most efficient implementation, avoiding gratuitous allocations/invocations of lambda objects.
• it allows setting of breakpoints, a crucial ability when debugging large codebases.

The central thesis of this article then is more-or-less a reflection of the conclusions drawn from the previous section: ML affords a style of programming that both incorporates and is more expressively efficient than strict function composition. Point-free implementations also tend to be brittle, with slight alterations to either implementation or type signature often prompting substantial rewrites and introduction of new classes of combinators.

These observations can be extended to function usage patterns that are not point-free in the strict sense. Let’s consider a very simple example:

```fsharp
let x = pair |> fst
```


This is a very common way of deconstructing tuples. It works well, as long as our tuple is indeed a pair. When different arities come into play, it is often necessary to introduce non-standard projection functions:

```fsharp
let proj1Of3 (x,_,_) = x
let proj2Of3 (_,y,_) = y
let proj3Of3 (_,_,z) = z

List.exists (proj2Of3 >> (=) 3)
```


This approach has a few problems: each tuple type requires its own set of projector definitions, and the introduction of non-standard combinators adds comprehension overhead. The idiomatic solution afforded by ML does not suffer from either problem:

```fsharp
let x,_ = pair
List.exists (fun (_,y,_) -> y = 3)
```


The same can be said about non-standard combinators like flip, curry and uncurry. For example

```fsharp
let flip f = fun x y -> f y x
```

is a combinator commonly used in functional pipelines like the following:

```fsharp
List.filter ((=) 0 << flip (%) 2)
```


Again, this implementation suffers from similar problems: it is hard to comprehend; it relies on a non-standard combinator; supporting combinators for every permutation of curried parameters is intractable, with $n!$ possible combinators for functions accepting $n$ curried arguments. I contend that code can be greatly simplified in the general case by employing a lambda literal:

```fsharp
List.filter (fun n -> n % 2 = 0)
```


## The Value of Computation Expressions

“Monad” in the context of programming is a term that carries multiple meanings. First and foremost, a monad denotes a type augmented with a set of functions that satisfy the monad laws. Secondarily, it refers to the capacity of a programming language to provide syntactic sugar for monad instances. A language can have monadic types without syntactic sugar support and conversely, a syntactic monad does not necessarily satisfy the monad laws.
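To ground the first sense, the monad laws can be checked directly against the option monad; here `ret` and `bind` stand for `Some` and `Option.bind`, and the functions `f`, `g` are arbitrary examples of my own:

```fsharp
let ret x = Some x
let bind f m = Option.bind f m

let f x = if x > 0 then Some (x * 2) else None
let g x = Some (x + 1)
let m = Some 21

// Left identity:  bind f (ret a) = f a
let leftId  = bind f (ret 21) = f 21
// Right identity: bind ret m = m
let rightId = bind ret m = m
// Associativity:  bind g (bind f m) = bind (fun x -> bind g (f x)) m
let assoc   = bind g (bind f m) = bind (fun x -> bind g (f x)) m
```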

The importance of the latter kind of monad cannot be overstated. It provides a mechanism for lifting user-defined combinators into language-integrated expressions. The impact on readability is dramatic. Consider for instance an expression using the option monad:

```fsharp
option {
    let! x = Some 1
    let! y = Some 2
    let! z = Some 3
    return (x,y,z)
}
```


which desugars into

```fsharp
Some 1
|> Option.bind (fun x ->
    Some 2
    |> Option.bind (fun y ->
        Some 3
        |> Option.bind (fun z -> Some(x,y,z))))
```


Notice that the nesting of the bind operations is essential due to scoping considerations: the final operation depends on all three bindings, thus converting this to a flattened pipeline is not possible unless we somehow accumulate the arguments:

```fsharp
Some 1
|> Option.bind (fun x -> Some (x,2))
|> Option.bind (fun (x,y) -> Some (x,y,3))
```


These examples serve to demonstrate that extended usage of monadic combinators without syntactic sugar support becomes intractable as complexity of workflows increases.

F# computation expressions embrace the significance of syntax, by design: rather than being tied to monads in particular, they provide a general-purpose mechanism for overloading almost every language keyword. Computation expressions can be monads, sequence builders, imperative DSLs, query languages or anything in between. Their goal is to abstract away the noise of combinators and enable DSLs that can be written using human-readable expressions. We can write

```fsharp
let f xs =
    [ for x in xs do
        let y = x + 1
        if y % 5 = 0 then
            yield string y ]
```

instead of

```fsharp
let f =
    List.map ((+) 1)
    >> List.filter (flip (%) 5 >> (=) 0)
    >> List.map string
```


It is this pragmatic design that explains the resounding success of F# async. Computation expressions liberate us from the burden of doing asynchronous programming using callbacks and rather let us write it as if it were synchronous. In that sense, I must say that I am puzzled by a certain tendency within the F# community to use Kleisli-style combinators on top of async:

```fsharp
let (>=>) (f : 'a -> Async<'b>) (g : 'b -> Async<'c>) =
    fun a -> async.Bind(f a, g)

let catch (h : exn -> Async<'b>) (f : 'a -> Async<'b>) =
    fun a -> async.TryWith(f a, h)

async.Return << serialize
>=> sendRequest Post "/foo"
>=> async.Return << deserialize
|> catch (function
    | :? IOException -> async.Return None
    | _ -> failwith "unexpected error")
```


I contend that this technique is a regression, in fact just a fancy way of doing async programming using explicit callbacks. Compare for instance with a similar snippet using the TPL:

```fsharp
Task.FromResult(x)
    .ContinueWith(fun x -> serialize x.Result)
    .ContinueWith(fun req -> sendReqAsync Post "/foo" req.Result)
    .ContinueWith(fun res -> deserialize res.Result)
    .ContinueWith(fun res ->
        if res.IsFaulted && res.Exception.InnerException :? IOException then None
        else Some res.Result)
```


The power of async workflows lies in their simplicity. Compare the samples above with idiomatic async:

```fsharp
async {
    try
        let request = serialize x
        let! response = sendRequest Post "/foo" request
        return Some (deserialize response)
    with :? IOException -> return None }
```


By relying on the compiler to tuck away the visual noise of callbacks, we obtain clean, readable, flexible, composable and debuggable code. Embedded pattern matching, nested expressions, lexical scoping and recursion provide a mix of tools that is strictly more expressive and concise than any ad-hoc forest of point-free monadic combinators.

## Conclusion

It is not the case that the point-free style is useless. In fact, libraries often adopt the point-free style out of performance considerations. Good examples are parser combinators and pickler combinators, and there are more. These typically make the informed trade-off of sacrificing readability in the interest of eliminating allocations typically associated with expression builders. In rare cases there can even be point-free DSLs that are actually legible in the large. However the utility of adopting this approach always carries a big burden of proof, and should not be motivated merely out of stylistic considerations.

# Why OO Matters (in F#)

F# is a functional-first programming language that comes with a substantial object-oriented feature set. It is so feature-complete in fact, that almost any C# class can be ported over to F# code with little substantial alteration.

However significant, this subset of the language sees limited appreciation from the community, which I suspect is partly fuelled by the known criticisms of OOP and partly by a desire to be different from C#. After all, this is a functional-first language, so we can just replace all our classes with functions. There is also the opinion that OOP in F# merely serves as a compatibility layer for .NET, so it’s really only there to cover those unfortunate scenarios of having to use a library that accepts interfaces.

### Enabling Abstraction

One of the most important aspects of maintaining a nontrivial codebase is controlling complexity. Complexity can be contained by partitioning code into logically standalone components whose implementation details are hidden behind appropriately designed abstractions. In his excellent Solid and Functional article, Vesa Karvonen argues that selecting the correct abstraction is a hard problem, and that functional programming is no silver bullet in dealing with that. This resonates a lot with me, and I strongly encourage everyone to read the full article.

That said, Vesa is framing the article in Standard ML which supports a full-blown module system. Modules can be abstracted using signatures or they can be parameterized by other modules using functors. Modules are the predominant means of abstraction in the ML ecosystem. In Haskell, it is type classes and higher-kinded types. In F#, modules are intentionally stripped of any complicated features, effectively functioning as a mere namespacing construct.

My claim is that there are inherent limits to what can be expressed using just F# modules and functions, in terms of enabling good abstraction. Luckily, we can always make use of the next best thing, which is F# OO. The thesis of this article is that strategically admitting elements of OO in an F# codebase significantly improves quality and maintainability. While I cannot conclusively prove this within the confines of a single blog post, I will try to provide hints as to why this is.

### Classes as Value Parametric Modules

It is often the case that an API exposed through a module must be context aware. Typically F# developers address this by adding extra parameters in every function:

```fsharp
module MyApi =
    let function1 dep1 dep2 dep3 arg1 = doStuffWith dep1 dep2 dep3 arg1
    let function2 dep1 dep2 dep3 arg2 = doStuffWith' dep1 dep2 dep3 arg2
```


While this does work well in simple cases, it does not scale nicely as dependencies increase. It would typically prompt the developer to group arguments in context records:

```fsharp
type MyApiContext = { Dep1 : Dep1 ; Dep2 : Dep2 ; Dep3 : Dep3 }

module MyApi =
    let function1 (ctx : MyApiContext) arg1 = doStuffWith ctx.Dep1 ctx.Dep2 ctx.Dep3 arg1
    let function2 (ctx : MyApiContext) arg2 = doStuffWith' ctx.Dep1 ctx.Dep2 ctx.Dep3 arg2
```


This complicates the implementation even more, both at the definition site and at the consumption site. In practice, you either end up with one context type per component, or one God context for the entire application. Even more importantly, this approach often violates encapsulation, pushing the burden of gathering dependencies onto the consumers of the API, every single time they consume it. Partial application also does little to address any of these concerns in nontrivial contexts.

Less experienced developers might be prompted to do something even worse: lift dependencies to module values.

```fsharp
module MyApi =
    let dep1 = File.ReadAllText "/var/config/dep1.txt" // hypothetical file-based dependency
    let dep2 = Environment.GetEnvironmentVariable "DEP_2"
    let dep3 = Random().Next()

    let function1 arg = doStuffWith dep1 dep2 dep3 arg
    let function2 arg = doStuffWith' dep1 dep2 dep3 arg
```


This is bad for many reasons: it makes the API reliant on global state, introduces unneeded side-effects, and pushes app configuration concerns deep inside the guts of our codebase. What’s more, module value initialization compiles to a static constructor for the entire compilation unit so the exact moment of execution is largely unpredictable. Initialization errors manifest as TypeInitializationExceptions which are difficult to debug.

Contrast the situation above with the elegance afforded by a plain old class:

```fsharp
type MyParametricApi(dep1, dep2, dep3) =
    member __.Function1 arg1 = doStuffWith dep1 dep2 dep3 arg1
    member __.Function2 arg2 = doStuffWith' dep1 dep2 dep3 arg2
```


An API object could be created once at application initialization, or as many times as required depending on context. It’s also more amenable to testing. I should add that this approach is essentially just as “functional” as the approaches above, since it merely composes dependencies to expose a context-sensitive API. Importantly, it achieves this in a much simpler way, both at the definition site and at the consumption site, which pays great dividends when realized in big codebases.
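A self-contained sketch (all names hypothetical) of how such an object is composed once at startup:

```fsharp
// Hypothetical dependency type
type Logger = { Log : string -> unit }

type MyParametricApi(logger : Logger, connectionString : string, retries : int) =
    member __.Function1 (arg : string) =
        logger.Log (sprintf "connecting to %s" connectionString)
        sprintf "%s (retries=%d)" arg retries

// Composition root: dependencies are gathered exactly once...
let api = MyParametricApi({ Log = ignore }, "db://localhost", 3)
let result = api.Function1 "query"

// ...and tests can swap in stubs at construction time.
let testApi = MyParametricApi({ Log = printfn "%s" }, "db://test", 0)
```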

### Expressive APIs

An important attribute of method-based APIs is that they allow for greater expressive power, in two important ways:

1. Named/Optional parameters: unlike OCaml, whose functions support out-of-order named argument passing and omitted optional arguments, F# functions support neither. Luckily, we can do this using F# methods. I find this to be an immensely powerful tool when exposing non-trivially parameterizable functionality. A function that explicitly accepts 10 optional parameters is not acceptable; a method that accepts 10 optional arguments works like a charm.
2. Method overloading: because function names like connect' and bind2 are simply not good enough when exposed in a public API.
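A sketch of the first point (type and argument names are my own), using `defaultArg` to supply fallbacks for omitted arguments:

```fsharp
// A hypothetical factory method taking optional, named arguments.
type HttpConfig =
    static member Create(?timeoutMs : int, ?retries : int, ?userAgent : string) =
        let timeoutMs = defaultArg timeoutMs 30000
        let retries   = defaultArg retries 3
        let userAgent = defaultArg userAgent "fsharp-client"
        sprintf "timeout=%d;retries=%d;agent=%s" timeoutMs retries userAgent

// Callers name only what they care about, in any order:
let a = HttpConfig.Create()
let b = HttpConfig.Create(retries = 5, timeoutMs = 1000)
```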

### More Powerful Types

The type system afforded by .NET is strictly more powerful than what can be expressed using modules and functions. For example, the interface

```fsharp
type Scheduler =
    abstract Run<'T> : Async<'T> -> 'T
```


encodes a kind of function that cannot be expressed in terms of proper F# lambdas. When combined with subtyping, it is possible to effectively encode existential types and Rank-N polymorphism. Even GADTs are possible, with minimal augmentations of the type system.

In practice, it is possible to leverage that additional power very effectively. In fact, F# makes it easy to define generic function literals using object expressions. This is also how the TypeShape library has been made possible.
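As an illustration, an object expression yields an instance of the Scheduler interface inline, without declaring a class; note that the single value below can be applied at many different types, something no F# lambda can encode (the blocking implementation is merely a sketch):

```fsharp
type Scheduler =
    abstract Run<'T> : Async<'T> -> 'T

// A blocking scheduler, defined inline via an object expression.
let syncScheduler =
    { new Scheduler with
        member __.Run (workflow : Async<'T>) = Async.RunSynchronously workflow }

// The same value instantiated at two different types:
let n = syncScheduler.Run (async { return 42 })
let s = syncScheduler.Run (async { return "hello" })
```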

### Abstracting Modules

Functions are the unit of abstraction in F#, but that unit is often insufficient when abstracting APIs. This prompts developers to adopt an approach where abstract APIs are surfaced as either records or tuples of functions:

```fsharp
type Serializer =
    {
        Serialize : bool -> obj -> string
        Deserialize : bool -> string -> obj
    }
```


The F# design guidelines discourage the use of records for building APIs, recommending regular interfaces instead.

I strongly agree with this recommendation for a multitude of reasons: interfaces are more powerful since they support generic methods, named arguments and optional arguments. An interface is less likely to be defined in terms of closures, making it easier to reason about when viewing from a debugger.

So the example above could be rendered as an interface like so:

```fsharp
type Serializer =
    abstract Serialize<'T> : preserveRefEq:bool -> value:'T -> string
    abstract Deserialize<'T> : preserveRefEq:bool -> pickle:string -> 'T
```


The most important aspect of this approach is readability. It is easier for a consumer of this interface to anticipate the purpose of each argument, just as a record of functions is easier to understand than a tuple of functions.

### Representing Illegal State

A lot of proverbial ink has been spent describing how we should strive to make illegal states unrepresentable. However, I do fail to see how this could be fully realized given that the functional core of F# only consists of algebraic data types. Take for example an oft-quoted email type:

```fsharp
type Email = Email of string
```


The following values are valid instances of type Email:

```fsharp
Email null
Email "John Smith"
Email "eirik@foo.bar'; DROP TABLE dbo.Users"
```


If we truly care about illegal states, the obvious alteration to the type above ought to be the following

```fsharp
type Email = private | Email of string
with
    static member TryParse(address : string) =
        // simplistic validation, for illustration only
        if address <> null && address.Contains "@" then Some (Email address)
        else None
```


But really, this is just a class encoded as a union. The implementation below is simpler:

```fsharp
type Email private (address : string) =
    member __.Address = address
    static member TryParse(address : string) =
        if address <> null && address.Contains "@" then Some (Email address)
        else None
```


N.B. the union-based implementation might in fact be warranted in cases where free structural equality or comparison are needed. But for all intents and purposes, both approaches effectively subscribe to OO-style encapsulation.

### OO And Purity

The relationship between OO and purity can be a frequent avenue for misconception. Occasionally someone will claim that by admitting objects we are ipso facto forsaking purity. On the contrary, I do claim that these really are orthogonal concerns. Just as a lambda is capable of producing side-effects, objects can be designed for purity. Good examples of this are Map and Set in the core library. The lambda is really just an abstract class with a single virtual method and lots of syntactic sugar. There is nothing fundamentally setting it apart from objects once you exclude the syntactic aspect.
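A minimal illustration of my own: a class whose methods return fresh instances instead of mutating state, making it every bit as pure as an immutable record:

```fsharp
// An immutable, persistent counter: Increment allocates a new
// instance and leaves the receiver untouched.
type Counter private (count : int) =
    new () = Counter 0
    member __.Count = count
    member __.Increment() = Counter(count + 1)

let c0 = Counter()
let c1 = c0.Increment().Increment()
// c0 still reports 0: no observable side-effects anywhere.
```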

### Conclusions

So, is this a call to go full-blown OO in F# projects? Should we be digging up our old GoF copies? Are design patterns up there in the functional curriculum together with profunctor optics? Are inheritance and class hierarchies sexy again? No!

I am in fact proposing that there is a third way, where functional and OO components coexist, with one paradigm complementing the other. This is hardly a new idea. Quoting from the F# design guidelines:

F# is commonly called a functional-first language: object, functional and imperative paradigms are all well supported, but functional programming tends to be the first technique used. Many of the defaults of F# are set up to encourage functional programming, but programming in the other paradigms is effective and efficient, and a combination is often best of all. It is a common misconception that the functional and object programming methodologies are competing. In fact, they are generally orthogonal and largely complementary. Often, functional programming plays a stronger role “in the small” (e.g. at the implementation level of functions/methods and the code contained therein) and object programming plays a bigger role “in the large” (e.g. at the structural level of classes, interfaces, and namespaces, and the organization of APIs for frameworks).

In my 6 years of working with F#, my style has gradually shifted towards embracing this approach. A few examples:

• I typically write the implementation of a large component in the functional style behind a private module, then expose its public API as part of a standalone class. I find that method-based APIs are friendlier to consumers unfamiliar with the implementation.
• I use records and unions for encoding internal representations and classes for encapsulating publicly visible instances. A very good example of this is the F# map implementation.
• I rarely expose records and unions as part of a public API unless it is abundantly evident that all possible instances for the given types are valid in their context of use. This does not happen often in my experience.
• If a module is exposed as part of a public API, care must be taken so that the number of arguments is small and behaviour can be predicted by reading the type signature of the function alone. The core List and Array modules are a good example. Avoid using modules to expose complex functionality like the Async API.

I remember reading a few years back Simon Cousins’ NOOO manifesto, which stands for Not Only Object-Oriented development. In retrospect I find this to be an excellent name for a manifesto, if only because “Not Only OO” is not the same thing as “No OO”. So here’s a proposal to revive that manifesto, perhaps with the understanding that “Not Only OO” also implies “Not Only FP” in the context of F#.

# F# and Purity

Purely functional programming, according to Wikipedia,

designates a programming paradigm that treats all computation as the evaluation of mathematical functions. Purely functional programming may also be defined by forbidding changing state and mutable data.

The importance of the purely functional approach cannot be overstated: it eliminates entire classes of mutation-related bugs; it encourages composable abstraction; pure code can be reasoned about equationally; and the explicit segregation of side-effects acts as a forcing function for better abstraction. At the same time, immutable persistent data structures can be cheaply updated while being shared safely across multiple threads. Pure programs also admit compile-time optimizations that would be tricky to achieve in the presence of state.

Benefits aside, purity is an attribute of expressions and their formal semantics. A pure program still ends up executing on regular computer hardware, where it can exhibit the following effects: high CPU utilization, excessive heap allocations, stack overflows, division-by-zero errors, etc. Thus the connection of pure code to functions of the mathematical variety should not be considered a literal one, otherwise all correct Fibonacci implementations would be equal by virtue of extensionality. Being able to reason about how code —pure or otherwise— executes on the underlying hardware is critically important for the working programmer.

Transforming purely functional declarations into equivalent, optimal imperative code is a holy grail, still very much a topic of active research. In the general case though, performance can only be attained by forsaking purity. This is very much compatible with the ML philosophy, which has traditionally embraced imperative features in its language core. Quoting from Real World OCaml:

Pure code is the default in OCaml, and for good reason—it’s generally easier to reason about, less error prone and more composable. But imperative code is of fundamental importance to any practical programming language, because real-world tasks require that you interact with the outside world, which is by its nature imperative. Imperative programming can also be important for performance. While pure code is quite efficient in OCaml, there are many algorithms that can only be implemented efficiently using imperative techniques.

OCaml offers a happy compromise here, making it easy and natural to program in a pure style, but also providing great support for imperative programming.

This also applies particularly well to F#, an OCaml dialect for the .NET framework. In my personal experience, performance gains when switching from purely functional to imperative code in F# can be dramatic. Strategically adopting imperative implementations in performance-critical components is often vital to ensuring the sustainability of a system.

### Imperative Programming done right

So what does this all mean? Should we just give up on purely functional code in the interest of performance? Is F# really just a way to write C# programs in ML-style syntax?

Well, not really. I do claim though that there is a sweet spot where impure features can be utilized to combine the benefits of pure FP with a lot of the efficiency of procedural code. Defining that sweet spot is hard, however I cannot think of a more concise example than Nessos Streams, which provides a highly efficient functional streams implementation in just over 40 lines of code. Other examples could include FParsec and Hopac, whose core implementations have been written in C# out of performance considerations.

If one had to nominate a key guiding principle for using imperative features in F# code, then surely it would be referential transparency. Very roughly, a function is referentially transparent if it behaves like a pure function, even if its implementation might use imperative constructs. For example, the function

let incr (cell : int ref) : unit = cell := !cell + 1


is not referentially transparent because substituting a call to incr with the result of the computation is not a behaviour-preserving transformation, in general.
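
To see why, compare a program that calls the function twice with one where each call has been substituted by its result (the unit value):

```fsharp
// the imperative counter from above
let incr (cell : int ref) : unit = cell := !cell + 1

let cell = ref 0

// calling incr twice mutates the cell...
incr cell
incr cell
printfn "%d" !cell // 2

// ...but substituting each call with its result leaves a fresh cell untouched
let cell' = ref 0
()
()
printfn "%d" !cell' // 0
```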

We can still use imperative features to define functions that are in fact referentially transparent. A good example is implementing groupBy for lists. A purely functional groupBy could be written as follows:

let groupBy (proj : 'T -> 'Key) (ts : 'T list) : ('Key * 'T list) list =
    let folder (map : Map<'Key, 'T list>) (t : 'T) =
        let key = proj t
        let grouped = defaultArg (Map.tryFind key map) []
        Map.add key (t :: grouped) map

    ts
    |> List.fold folder Map.empty
    |> Map.toList
    |> List.map (fun (k,v) -> k, List.rev v)


This implementation may be pure, however performance-wise it is not ideal. We can still produce a more efficient, imperative equivalent of the same function:

open System.Collections.Generic

let groupBy' (proj : 'T -> 'Key) (ts : 'T list) : ('Key * 'T list) list =
    let dict = new Dictionary<'Key, ResizeArray<'T>>()
    for t in ts do
        let key = proj t
        let mutable array = null
        if not <| dict.TryGetValue(key, &array) then
            array <- new ResizeArray<'T>()
            dict.Add(key, array)
        array.Add t

    dict
    |> Seq.map (function KeyValue(k,v) -> k, Seq.toList v)
    |> Seq.toList


The second function provides order-of-magnitude performance gains, but is also referentially transparent: it can be substituted with the pure implementation or the resulting value without affecting behaviour of the overall program. In fact, this approach very much reflects how the core library implements its own functions.

### Conclusion

Purely functional programming in F# provides a very effective means of reasoning about code, however in the interest of efficiency it is often preferable to leverage the imperative aspects of the language. When done carefully, it can result in better performing code that sacrifices few if any of the benefits of the purely functional approach.

In fact, my claim is that this is very much the idiomatic way of using ML in general and F# in particular. Pretending that F# is a purely functional language à la Haskell is highly counterproductive, particularly when authoring production-grade projects.

# You’re better off using Exceptions

Exception handling is an error management paradigm that has often been met with criticism. Such criticisms typically revolve around scoping considerations, exceptions-as-control-flow abuse or even the assertion that exceptions are really just a type-safe version of goto. To an extent, these seem like valid concerns, but it is not within the scope of this article to address them per se.

Such concerns resonate particularly well within FP communities, often taken to the extreme: we should reject exceptions altogether, since code that throws is necessarily impure. In the F# community, this opinion is in part realized by advocating alternatives like result types and railway-oriented programming. In essence, these approaches follow the Either monad found in Haskell, but often intentionally avoiding the use of do notation/computation expressions (since that’s just interpreted exception semantics).

The TL;DR version of the approach is that we define a union type for results that looks like this:

type Result<'TSuccess, 'TError> =
    | Ok of 'TSuccess
    | Error of 'TError


then require that any function which admits the possibility of error should return a Result value. This way we force the type system to acknowledge error cases, making our error handling logic more explicit and predictable. We can also use combinators like bind to encode common error handling flows.
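
For reference, a minimal sketch of such a bind combinator (repeating the result type for completeness):

```fsharp
type Result<'TSuccess, 'TError> =
    | Ok of 'TSuccess
    | Error of 'TError

let bind (f : 'T -> Result<'S, 'E>) (result : Result<'T, 'E>) =
    match result with
    | Ok t -> f t          // feed the success value into the continuation
    | Error e -> Error e   // short-circuit on the first error

// example: chaining two validations
let parseInt (s : string) =
    match System.Int32.TryParse s with
    | true, n -> Ok n
    | _ -> Error "not an integer"

let checkPositive n =
    if n > 0 then Ok n else Error "not positive"

let result = parseInt "42" |> bind checkPositive // Ok 42
```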

I would like to illustrate in this article why I believe that this often causes a lot of problems — while simultaneously not achieving the advertised benefits, particularly in the context of maintaining large codebases. It also gives rise to a multitude of anti-patterns, particularly when used by people new to F#.

### An issue of Runtime

Most programming languages carry the concept of a runtime error. Such errors sometimes stem from unexpected locations, in code otherwise considered pure. In Haskell for instance, consider what the pure expressions “div 2 0” or “head []” evaluate to. In the case of the CLR, runtime errors are manifested as thrown exceptions, which arguably happens more often than in Haskell.

Because runtime errors are difficult to anticipate, I claim that using result types as a holistic replacement for error handling in an application is a leaky abstraction. Consider the following example:

type Customer = { Id : string; Credit : decimal option }

let average (customers : Customer list) =
    match customers with
    | [] -> Error "list was empty"
    | _ ->
        customers
        |> List.averageBy (fun c -> c.Credit.Value)
        |> Ok


Notice that the above snippet now admits two possible classes of errors: the string error branch, and the potential NullReferenceException caused by careless access of the optional field in the customer record. The latter eventually delivers an unpleasant surprise to consumers of the function, since the type signature communicates an exception-free implementation.
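
A quick session makes the leak visible (assuming the definitions above):

```fsharp
let customers =
    [ { Id = "a" ; Credit = Some 10m }
      { Id = "b" ; Credit = None } ]

// the signature advertises Result<decimal, string>, yet this call
// escapes it entirely by throwing a NullReferenceException at runtime
average customers
```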

### An Awkward Reconciliation

It’s by such misadventure that the working F# programmer soon realizes that any function could still potentially throw. This is an awkward realization, which has to be addressed by catching as soon as possible. This of course is the purely functional equivalent of the Pokemon anti-pattern:

let avg : Result<decimal, string> =
    try average customers
    with e -> Error e.Message // gotta catch em all!


### Boilerplate

In the majority of codebases using result types that I’ve reviewed, people typically just end up re-implementing exception semantics on an ad-hoc basis. This can result in extremely noisy code, as illustrated in the following example:

let combine x y z =
    match x, y, z with
    | Ok x, Ok y, Ok z -> Ok (x,y,z)
    | Error e, _, _ | _, Error e, _ | _, _, Error e -> Error e


or in using pre-baked combinators

let combine x y z =
    x |> bind (fun x -> y |> bind (fun y -> z |> bind (fun z -> Ok(x,y,z))))


We really shouldn’t be doing this. It is essentially the type equivalent of using a linter that demands inserting catch-alls and rethrows in every method body.

Given sufficient implementation complexity, things can get really ugly: binding on results of differing error types and other incidental complexities require substantial refactoring to get things right. Often, this will prompt developers to cut corners by doing unexpected things, like inserting a throw just to get rid of an annoying error branch or returning nested result types like

Result<Result<string, string>, string list>


### Where’s my Stacktrace?

An important property of exceptions, one which cannot be stressed enough, is that they are entities managed and understood by the underlying runtime, endowed with metadata critical to diagnosing bugs in complex systems. They can also be tracked and highlighted by tooling such as debuggers and profilers, providing invaluable insight when probing a large system.

By lifting all of our error handling to passing result values, we are essentially discarding all that functionality. There is nothing worse than having to deal with a production issue which comes up in the logs simply as "list was empty".

### The Problem with IO

The points discussed thus far have typically focused on applications that might as well have been purely functional. As might be expected, these become even more pronounced when bringing in IO or when interfacing with third-party .NET libraries. Consider the following example:

open System.IO

let tryReadAllText (path : string) : string option =
    try File.ReadAllText path |> Some
    with _ -> None


I have spotted this approach and variations in many production-grade F# codebases, and have been responsible for some of these myself back when I was younger and more naive. This adapter is motivated by the desire to have an exception-free file reader, however it is deeply flawed precisely because of that.

While it is fairly unambiguous what a Some result means, the None bit hardly conveys any information other than the obvious. Here is an official list of possible errors that can be raised by that particular call. By discarding the exception, it becomes impossible to diagnose what could have gone wrong.

### Stringly-typed error handling

One could quickly point out that the snippet above can be amended so that data loss is minimized.

let readAllText (path : string) : Result<string, string> =
    try File.ReadAllText path |> Ok
    with e -> Error e.Message


This alteration still makes error handling awkward and unsafe

match readAllText "foo.txt" with
| Error e when e.Contains "Could not find file" -> // ...


particularly when compared to how the idiomatic approach works

try File.ReadAllText path
with :? FileNotFoundException -> // ...


### Interop issues

It is often easy to forget that Result values are plain objects, carrying no special significance when used in the context of other frameworks/libraries. The following example illustrates an innocent mistake when working with the TPL:

let random = Random()

// innocent mistake: an Error result does not fault the task
let task =
    Task.Run(fun () ->
        if random.Next() % 2 = 0
        then Ok ()
        else Error ())



Or perhaps it could be mistakenly believed that F# Async somehow recognizes result types

let job1 = async { return Ok () }
let job2 = async { return Error () }

let! result =
    [job1 ; job2]
    |> Async.Parallel
    |> Async.map Ok

match result with
| Ok _ -> printfn "All jobs completed successfully"
| Error _ -> printfn "Some of the jobs failed"


## Conclusions

It is not the intention of this article to argue that result types shouldn’t be used at all. I have written DSLs and interpreters using result types. MBrace uses result types internally. In most cases though, the benefit is only maximized by writing custom result types that model a particular domain. Rather than having a general-purpose error branch that encodes infinite classes of failures, assign each class of errors to an individual branch in that result type:

type MoneyWithdrawalResult =
    | Success of amount:decimal
    | InsufficientFunds of balance:decimal
    | CardExpired of DateTime
    | UndisclosedFailure


This approach constrains error classes to a finite domain, which also allows for more effective testing of our code.
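
Consumers can then pattern match on the result exhaustively, with the compiler warning about any unhandled failure class (repeating the type above for a self-contained sketch):

```fsharp
open System

type MoneyWithdrawalResult =
    | Success of amount:decimal
    | InsufficientFunds of balance:decimal
    | CardExpired of DateTime
    | UndisclosedFailure

// exhaustive handling: removing a branch produces a compiler warning
let report (result : MoneyWithdrawalResult) =
    match result with
    | Success amount -> sprintf "withdrew %M" amount
    | InsufficientFunds balance -> sprintf "insufficient funds: balance is %M" balance
    | CardExpired date -> sprintf "card expired on %O" date
    | UndisclosedFailure -> "withdrawal failed"
```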

That said, I strongly believe that using result types as a general-purpose error handling mechanism for F# applications should be considered harmful. Exceptions should remain the dominant mechanism for error propagation when programming in the large. The F# language has been designed with exceptions in mind, and has achieved that goal very effectively.

# Reconciling Stacktraces with Computation Expressions, Revisited.

About a year ago, I wrote an article arguing for adding stacktrace support to F# computation expressions. Thanks to work by Lincoln Atkinson and Avi Avni, the upcoming F# 4.1 will be providing this functionality.

On the occasion of the release of VS2017 RC, I decided to spend some time adapting my continuation monad implementation to the new feature. The Bind implementation now looks as follows:

member __.Bind(f : Cont<'T>, g : 'T -> Cont<'S>,
                [<CallerMemberName>]?callerName : string,
                [<CallerFilePath>]?callerFilePath : string,
                [<CallerLineNumber>]?callerLineNumber : int) : Cont<'S> =

    fun sc ec ->
        let sc' (t : 'T) =
            match (try Ok(g t) with e -> Error e) with
            | Ok g -> g sc ec
            | Error e -> ec (SymbolicException.capture e)

        let ec' (se : SymbolicException) =
            let stackMsg =
                sprintf "   at %s in %s:line %d"
                    callerName.Value
                    callerFilePath.Value
                    callerLineNumber.Value

            ec (SymbolicException.append stackMsg se)

        f sc' ec'


and ReturnFrom:

member __.ReturnFrom(f : Cont<'T>,
                        [<CallerMemberName>]?callerName : string,
                        [<CallerFilePath>]?callerFilePath : string,
                        [<CallerLineNumber>]?callerLineNumber : int) : Cont<'T> =
    fun sc ec ->
        let ec' (se : SymbolicException) =
            let stackMsg =
                sprintf "   at %s in %s:line %d"
                    callerName.Value
                    callerFilePath.Value
                    callerLineNumber.Value

            ec (SymbolicException.append stackMsg se)

        f sc ec'


Running the odd/even example

let rec odd (n : int) =
    cont {
        if n = 0 then return false
        else
            return! even (n - 1)
    }

and even (n : int) =
    cont {
        if n = 0 then return failwith "bug!"
        else
            return! odd (n - 1)
    }

odd 5 |> Cont.run


Now generates the stacktrace:

System.Exception: bug!
at Program.even@119.Invoke(Unit unitVar) in C:\Users\eirik\devel\public\cont\Program.fs:line 119
at Program.sc'@54-1.Invoke(a t) in C:\Users\eirik\devel\public\cont\Program.fs:line 54
at odd in C:\Users\eirik\devel\public\cont\Program.fs:line 114
at even in C:\Users\eirik\devel\public\cont\Program.fs:line 121
at odd in C:\Users\eirik\devel\public\cont\Program.fs:line 114
at even in C:\Users\eirik\devel\public\cont\Program.fs:line 121
at odd in C:\Users\eirik\devel\public\cont\Program.fs:line 114
at Program.ContModule.ec@102-1.Invoke(SymbolicException se) in C:\Users\eirik\devel\public\cont\Program.fs:line 102
at Program.ContModule.run[T](FSharpFunc`2 cont) in C:\Users\eirik\devel\public\cont\Program.fs:line 103
at <StartupCode$ConsoleApplication3>.$Program.main@() in C:\Users\eirik\devel\public\cont\Program.fs:line 106


The results look pretty promising. Looking forward to applying this technique to the async and cloud builders!

You can find the full implementation here.

# TypeShape: Practical Generic Programming in F#

Last week I announced a new library, TypeShape, with claims that it provides a practical way of doing generic programming in F#. I’m following up with this blog post to elaborate why I believe this to be genuinely useful, and how it could benefit the day-to-day life of the working .NET developer.

## The pain of Reflection

Almost everybody who has worked with .NET will at some point need to dabble in the murky ways of reflection. Reflection is needed in scenarios where we need to access data in an indirect fashion, or where circumvention of the type system is necessary.

For example, assume that we have defined the following static method

type Foo =
    static member Bar<'T>(?optionalParam : 'T) : unit =
        printfn "Invoked with parameter %A" optionalParam


Assume now that we would like to invoke that method, with a value whose type cannot be known at compile time. In other words, we want to define a function

val invokeUntyped : obj -> unit


which takes an input of type obj and invokes the generic method using the underlying type of the object instance. How do we do this? By using reflection of course!

open System.Reflection

let invokeUntyped (value:obj) =
    // Step 1: get the underlying System.Type for the value
    let t = value.GetType()
    // Step 2: locate the required method and apply the type argument
    let methodInfo =
        typeof<Foo>
            .GetMethod("Bar", BindingFlags.Public ||| BindingFlags.Static)
            .MakeGenericMethod [|t|]

    // Step 3: since the parameter is optional, it must be wrapped
    let optTy = typedefof<_ option>.MakeGenericType [|t|]
    let optCtor = optTy.GetConstructor [|t|]
    let optVal = optCtor.Invoke [|value|]

    // Step 4: invoke the method with the constructed optional parameter
    methodInfo.Invoke(null, [|optVal|]) :?> unit


This is cumbersome code to implement and is highly susceptible to breakage; even minor changes to the method signature will result in runtime errors. What’s more, reflection-based implementations are known to be significantly slower than their IL counterparts.

## Using TypeShape

The TypeShape library can be used to implement the same functionality, but in a significantly safer and easier-to-read fashion:

open TypeShape

let invokeUntyped' (value:obj) =
let shape = TypeShape.Create (value.GetType())
shape.Accept { new ITypeShapeVisitor<unit> with
member __.Visit<'T> () = Foo.Bar(value :?> 'T)}


Let’s have a look at the code, line by line.

The first line takes the underlying type of the input value and uses that to create an object of type TypeShape. This object encapsulates essential information on the type of the object.

The remaining lines pass an object expression of type ITypeShapeVisitor, which in turn invokes the method Foo.Bar. This is an instance of the visitor pattern, a design pattern commonly found in object-oriented programming. In this case, our visitor takes no arguments other than a type variable 'T. Passing this visitor to the TypeShape instance will have it invoked using the object type as argument, hence the downcast is expected to be successful. Importantly, the invocation is performed normally, thus any disagreement in the method signature will be picked up by the compiler.

In other words, TypeShape lets us introduce type variables into scope using the relatively concise approach of F# object expressions.
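
For instance, calling the TypeShape-based version with values of different runtime types (assuming the definitions above):

```fsharp
invokeUntyped' (box 42)      // prints "Invoked with parameter Some 42"
invokeUntyped' (box "hello") // prints "Invoked with parameter Some "hello""
```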

## Nothing magical

The implementation of TypeShape is surprisingly simple to define:

open System

type ITypeShapeVisitor<'R> =
    abstract Visit<'T> : unit -> 'R

[<AbstractClass>]
type TypeShape() =
    abstract Type : Type
    abstract Accept : ITypeShapeVisitor<'R> -> 'R

type TypeShape<'T>() =
    inherit TypeShape()
    override __.Type = typeof<'T>
    override __.Accept v = v.Visit<'T>()

type TypeShape with
    static member Create(t : Type) =
        let tsTy = typedefof<TypeShape<_>>.MakeGenericType [|t|]
        Activator.CreateInstance tsTy :?> TypeShape


In essence, TypeShape uses a minimal amount of reflection to bootstrap typed instances, then takes advantage of the ordinary .NET type system to access type information on-demand. TypeShape instances encapsulate and bear witness to types that may not be known at compile time.

## Going Further

Let’s take a look at a different application: suppose we have a tuple whose precise type cannot be known at compile time. A common example of this is the object returned by the ShapeCombination active pattern in the F# quotations module. Suppose we would like to extract either or both of the items contained in the tuple. Here’s how it could be done using reflection:

let extractTupleElements (value:obj) =
    let t = value.GetType()
    if not t.IsGenericType || t.GetGenericTypeDefinition() <> typedefof<_ * _> then
        invalidArg "value" "not a tuple type!"
    let m_Item1 = t.GetProperty("Item1")
    let m_Item2 = t.GetProperty("Item2")
    m_Item1.GetValue(value), m_Item2.GetValue(value)


Again, the same application could be simplified using the TypeShape library:

let extractTupleElements' (value : obj) =
    match TypeShape.Create (value.GetType()) with
    | Shape.Tuple2 (s : IShapeTuple2) ->
        s.Accept {
            new ITuple2Visitor<obj * obj> with
                member __.Visit<'T, 'S>() =
                    let t,s = value :?> 'T * 'S
                    box t, box s
        }

    | _ -> invalidArg "value" "not a tuple type!"


In this case, we use the included Shape.(|Tuple2|_|) active pattern that checks against our shape being a 2-tuple. If successful, it returns an instance of type IShapeTuple2 that accepts a different visitor, ITuple2Visitor, which introduces the tuple element types in scope.

Similarly, here’s how we can check whether an unknown F# map contains a particular key:

let mapContainsKeyUntyped (key:obj) (map:obj) =
    match TypeShape.Create(map.GetType()) with
    | Shape.FSharpMap (s : IShapeFSharpMap) ->
        s.Accept {
            new IFSharpMapVisitor<bool> with
                member __.Visit<'K,'V when 'K : comparison> () =
                    (map :?> Map<'K,'V>).ContainsKey(key :?> 'K)
        }

    | _ -> invalidArg "map" "not an F# map!"


## Generic Programming

TypeShape active patterns can be used to orchestrate what could be considered generic programming. For instance, take this value printer generator:

let rec mkPrinter<'T> () : 'T -> string = mkPrinterUntyped typeof<'T> :?> _
and private mkPrinterUntyped (t : Type) : obj =
    match TypeShape.Create t with
    | Shape.Unit -> box(fun () -> "()")
    | Shape.Bool -> box(sprintf "%b")
    | Shape.Int32 -> box(sprintf "%d")
    | Shape.String -> box(sprintf "\"%s\"")
    | Shape.FSharpOption s ->
        s.Accept {
            new IFSharpOptionVisitor<obj> with
                member __.Visit<'T> () =
                    let tp = mkPrinter<'T>()
                    box(function None -> "None" | Some t -> sprintf "Some (%s)" (tp t))
        }

    | Shape.Tuple2 s ->
        s.Accept {
            new ITuple2Visitor<obj> with
                member __.Visit<'T, 'S> () =
                    let tp = mkPrinter<'T>()
                    let sp = mkPrinter<'S>()
                    box(fun (t : 'T, s : 'S) -> sprintf "(%s, %s)" (tp t) (sp s))
        }

    | Shape.FSharpList s ->
        s.Accept {
            new IFSharpListVisitor<obj> with
                member __.Visit<'T> () =
                    let tp = mkPrinter<'T>()
                    box(fun ts -> ts |> List.map tp |> String.concat "; " |> sprintf "[%s]")
        }

    | Shape.FSharpSet s ->
        s.Accept {
            new IFSharpSetVisitor<obj> with
                member __.Visit<'T when 'T : comparison> () =
                    let tp = mkPrinter<'T>()
                    box(fun (s:Set<'T>) -> s |> Seq.map tp |> String.concat "; " |> sprintf "set [%s]")
        }

    | _ -> failwithf "unsupported type '%O'" t


The implementation can be used to generate printers for anything within the prescribed algebra of types:

let printer = mkPrinter<(bool * string) option * Set<int * string list option>>()
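
Invoking the generated printer on a value of the matching type then yields a formatted rendering, for example (assuming the definitions above):

```fsharp
let value = Some (false, "foo"), set [ (1, Some ["a" ; "b"]) ; (2, None) ]

printer value
// evaluates to the string
// (Some ((false, "foo")), set [(1, Some (["a"; "b"])); (2, None)])
```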


More importantly, any reflection code will only be executed at generation time, meaning that generated printers execute very efficiently:

// Real: 00:00:00.561, CPU: 00:00:00.562, GC gen0: 32, gen1: 0, gen2: 0
for i = 1 to 1000 do ignore <| sprintf "%A" value
// Real: 00:00:00.010, CPU: 00:00:00.000, GC gen0: 1, gen1: 0, gen2: 0
for i = 1 to 1000 do ignore <| printer value

This technique is being utilized in libraries such as FsPickler and FSharp.AWS.DynamoDB, and is an important contributor to their performance.

## Conclusion

If your project relies heavily on reflection, you should consider giving TypeShape a try. It could improve readability and maintainability of your code, and may in some cases lead to better performance. Check it out at https://github.com/eiriktsarpalis/TypeShape and please submit your feedback and/or bug reports!