# Programming in the Point-Free Style

Point-free programming (or point-less programming, for the more cynically inclined) is a paradigm that advocates building programs by means of function composition. In the point-free style, programs avoid explicitly naming function arguments (or “points”), instead deriving complex function definitions by applying higher-order combinators to simpler functions.

To give an F# example, the code

let f xOpt =
    match xOpt with
    | Some x when x >= 0. -> Some (sqrt x)
    | _ -> None


can be refactored into the following one-liner:

let f = Option.map sqrt << Option.filter ((<=) 0.)


This is an extremely powerful technique, deeply rooted in concepts that are foundational to mathematics and logic. It fully embraces the realization that most meaningful concepts can be defined in terms of functions and function composition.

## A Taste of Category Theory

If we were to trace the influence (and indeed, sexiness) of the point-free style to a single origin, it would undoubtedly be category theory. Category theory is the field of mathematics that (roughly) deals with the study of function composition in the abstract.

A category consists of abstract functions (or morphisms, or “arrows”) that have an origin and destination (or domain and codomain, or “argument type” and “return type”). Morphisms can be composed in a way that certain laws are satisfied. Importantly, categories carry a notion of function equality. Ultimately, all meaningful properties of a category are expressed in terms of that equality. Category theory allows for the study of entities that are not functions in the strict sense, but which nevertheless behave like functions.
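Concretely, composition is required to be associative and to respect identities: for all morphisms $f : A \to B$, $g : B \to C$ and $h : C \to D$,

$h \circ (g \circ f) = (h \circ g) \circ f \qquad\text{and}\qquad 1_B \circ f = f = f \circ 1_A$

where $1_A$ and $1_B$ denote the identity morphisms on $A$ and $B$.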

This idea is powerful enough to serve as a foundation for the whole of mathematics, and then some. Its application has also been fruitful in logic and computer science, particularly in the field of denotational semantics, where it has been used to define domains not attainable using regular set theory.

Thus the point-free style isn’t just the natural way of working with categories, it is really the only way (which is kind of the point). Whereas sets have elements and types have terms, there is no corresponding concept baked into categories in the general case (thunks being the next best thing). This has interesting ramifications in terms of expressivity. To illustrate, consider the following expression from the typed lambda calculus:

$f : A\times B \to D,\ g : A\times D \to C \vdash \lambda (x,y).\ g(x, f(x,y)) : A\times B \to C$

This otherwise simple expression is defined in category theory as follows:

$A\times B \xrightarrow{\Delta\times 1_B} (A\times A)\times B \xrightarrow{\alpha_{A,A,B}} A\times (A\times B)\xrightarrow{1_A\times f} A\times D\xrightarrow{\enspace g\enspace }C$

rendered as the expression

$g \cdot{} (1_A\times f) \cdot{} \alpha_{A,A,B} \cdot{} (\Delta\times 1_B)$

where $\Delta : A \to A\times A$ is the diagonal morphism, $\alpha_{A,A,B} : (A\times A)\times B \to A\times (A\times B)$ is the associativity isomorphism, and $1_X$ denotes the identity morphism on $X$.

If the above definition seems abstruse, it’s because it is. Two points in particular add to the complexity of the point-free definition:

• The requirement that $x$ is passed as an argument twice: we are forced to simulate this duplication by precomposing with the diagonal (rendered $\lambda x . (x,x)$ using the lambda calculus).
• Our inability to pattern match and rearrange inputs: arguments have to be meticulously juggled around using associativity transformations.
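To make the comparison tangible, the categorical composite can be transcribed into F#. This is a sketch: `f` and `g` are hypothetical placeholders, instantiated at int for illustration only.

```fsharp
// Hypothetical placeholders playing the roles of f : A×B → D and g : A×D → C.
let f (a: int, b: int) = a + b
let g (a: int, d: int) = a * d

let diag x = (x, x)                     // the diagonal Δ : A → A×A
let assoc ((a, b), c) = (a, (b, c))     // the associator α : (A×A)×B → A×(A×B)
let cross h k (x, y) = (h x, k y)       // the product map h × k

// g · (1_A × f) · α_{A,A,B} · (Δ × 1_B), read left to right with (>>):
let pointFree = cross diag id >> assoc >> cross id f >> g

// versus the pointful lambda-calculus definition:
let pointful (x, y) = g (x, f (x, y))
```

Both definitions compute the same function; the point-free version merely makes every plumbing step explicit.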

Ultimately, what was a lambda expression that could be read and understood in seconds has been turned into an expression that takes minutes to grasp.

The point of this exercise is to illustrate how the lambda calculus with pattern matching (or set theory, for that matter) is superior to the point-free style in terms of expressive efficiency. Not only does it naturally incorporate the point-free style, it extends far beyond its capacity to concisely express complex definitions. This goes to show that unless your work requires reasoning about large categories, you are probably better off steering clear of an exclusively point-free style.

## Programming Take-Aways

So how does this all translate to our coding practices? Let us bring back that F# example referenced at the beginning of the article:

let f xOpt =
    match xOpt with
    | Some x when x >= 0. -> Some (sqrt x)
    | _ -> None


versus

let f = Option.map sqrt << Option.filter ((<=) 0.)


I claim that the first implementation provides the simpler and more idiomatic solution, for several reasons:

• it’s the easiest to read and understand.
• it assigns a name to the argument, which is exposed as part of the type signature.
• it provides the most efficient implementation, avoiding gratuitous allocations and invocations of lambda objects.
• it allows setting breakpoints, a crucial ability when debugging large codebases.
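The second point is visible in the signatures F# Interactive infers for the two definitions (the exact formatting varies across compiler versions):

```fsharp
// Pattern-matching version: the argument name surfaces in the signature.
let f1 xOpt =
    match xOpt with
    | Some x when x >= 0. -> Some (sqrt x)
    | _ -> None
// val f1 : xOpt:float option -> float option

// Point-free version: no argument name anywhere.
let f2 = Option.map sqrt << Option.filter ((<=) 0.)
// val f2 : (float option -> float option)
```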

The central thesis of this article then is more-or-less a reflection of the conclusions drawn from the previous section: ML affords a style of programming that both incorporates and is more expressively efficient than strict function composition. Point-free implementations also tend to be brittle, with slight alterations to either implementation or type signature often prompting substantial rewrites and introduction of new classes of combinators.

These observations can be extended to function usage patterns that are not point-free in the strict sense. Let’s consider a very simple example:

let x = pair |> fst


This is a very common way of deconstructing tuples. It works well, as long as our tuple is indeed a pair. When different arities come into play, it is often necessary to introduce non-standard projection functions:

let proj1Of3 (x,_,_) = x
let proj2Of3 (_,y,_) = y
let proj3Of3 (_,_,z) = z

List.exists (proj2Of3 >> (=) 3)


This approach has a few problems: each tuple type requires its own set of projector definitions, and the introduction of non-standard combinators adds comprehension overhead. The idiomatic solution afforded by ML does not suffer from either problem:

let x,_ = pair
List.exists (fun (_,y,_) -> y = 3)


The same can be said about non-standard combinators like flip, curry and uncurry. For example

let flip f = fun x y -> f y x


is a combinator commonly used in functional pipelines like the following:

List.filter ((=) 0 << flip (%) 2)


Again, this implementation suffers from similar problems: it is hard to comprehend; it relies on a non-standard combinator; supporting combinators for every permutation of curried parameters is intractable, with $n!$ possible combinators for functions accepting $n$ curried arguments. I contend that code can be greatly simplified in the general case by employing a lambda literal:

List.filter (fun n -> n % 2 = 0)


## The Value of Computation Expressions

“Monad” in the context of programming is a term that carries multiple meanings. First and foremost, a monad denotes a type augmented with a set of functions that satisfy the monad laws. Secondarily, it refers to the capacity of a programming language to provide syntactic sugar for monad instances. A language can have monadic types without syntactic sugar support and conversely, a syntactic monad does not necessarily satisfy the monad laws.
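To make the first sense concrete, here is the option type viewed as a monad, with instances of the three laws spelled out (a sketch; `Option.bind` ships with FSharp.Core, while `f`, `g` and `m` are hypothetical samples):

```fsharp
// Option as a monad: ret wraps a value, bind sequences computations.
let ret x = Some x
let bind f m = Option.bind f m

// Hypothetical monadic functions and value used to exercise the laws:
let f x = if x > 0 then Some (x * 2) else None
let g x = Some (x + 1)
let m = Some 3

// Instances of the three monad laws:
let leftId  = bind f (ret 3) = f 3                                  // left identity
let rightId = bind ret m = m                                        // right identity
let assocL  = bind g (bind f m) = bind (fun x -> bind g (f x)) m    // associativity
```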

The importance of the latter kind of monad cannot be overstated. It provides a mechanism for lifting user-defined combinators into language-integrated expressions. The impact on readability is dramatic. Consider for instance an expression using the option monad:

option {
    let! x = Some 1
    let! y = Some 2
    let! z = Some 3
    return (x, y, z)
}


which desugars into

Some 1
|> Option.bind (fun x ->
    Some 2
    |> Option.bind (fun y ->
        Some 3
        |> Option.bind (fun z -> Some (x, y, z))))


Notice that the nesting of the bind operations is essential for scoping reasons: the final operation depends on all three bindings, so converting this to a flattened pipeline is not possible unless we somehow accumulate the arguments:

Some 1
|> Option.bind (fun x -> Some (x,2))
|> Option.bind (fun (x,y) -> Some (x,y,3))


These examples serve to demonstrate that extended usage of monadic combinators without syntactic sugar support becomes intractable as complexity of workflows increases.
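As an aside, the option builder used in the example above is not defined in FSharp.Core; a minimal sketch needs only Bind and Return members:

```fsharp
// Minimal builder backing the option { ... } syntax (a sketch).
type OptionBuilder() =
    member _.Bind(m, f) = Option.bind f m
    member _.Return(x) = Some x

let option = OptionBuilder()
```

The compiler desugars let! into calls to Bind, and return into Return, exactly as in the expansion shown above.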

F# computation expressions embrace the significance of syntax, by design: rather than being tied to monads in particular, they provide a general-purpose mechanism for overloading almost every language keyword. Computation expressions can be monads, sequence builders, imperative DSLs, query languages or anything in between. Their goal is to abstract away the noise of combinators and enable DSLs that can be written using human-readable expressions. We can write

let f xs =
    [ for x in xs do
        let y = x + 1
        if y % 5 = 0 then
            yield string y ]


instead of

let f =
    List.map ((+) 1)
    >> List.filter (flip (%) 5 >> (=) 0)
    >> List.map string

It is this pragmatic design that explains the resounding success of F# async. Computation expressions liberate us from the burden of doing asynchronous programming using callbacks and rather let us write it as if it were synchronous. In that sense, I must say that I am puzzled by a certain tendency within the F# community to use Kleisli-style combinators on top of async:

let (>=>) (f : 'a -> Async<'b>) (g : 'b -> Async<'c>) =
    fun a -> async.Bind(f a, g)

let catch (h : exn -> Async<'b>) (f : 'a -> Async<'b>) =
    fun a -> async.TryWith(f a, h)

(async.Return << serialize)
>=> sendRequest Post "/foo"
>=> (async.Return << deserialize)
|> catch (function
    | :? IOException -> async.Return None
    | _ -> failwith "unexpected error")


I contend that this technique is a regression; in fact, it is just a fancy way of doing async programming using explicit callbacks. Compare, for instance, with a roughly equivalent snippet using the TPL:

Task.FromResult(x)
    .ContinueWith(fun t -> serialize t.Result)
    .ContinueWith(fun req -> sendReqAsync Post "/foo" req.Result)
    .ContinueWith(fun res -> deserialize res.Result)
    .ContinueWith(fun res ->
        if res.IsFaulted && (res.Exception.InnerException :? IOException) then None
        else res.Result)


The power of async workflows lies in their simplicity. Compare the samples above with idiomatic async:

async {
    try
        let request = serialize x
        let! response = sendRequest Post "/foo" request
        return deserialize response
    with :? IOException -> return None }


By relying on the compiler to tuck away the visual noise of callbacks, we obtain clean, readable, flexible, composable and debuggable code. Embedded pattern matching, nested expressions, lexical scoping and recursion provide a mix of tools that is strictly more expressive and concise than any ad-hoc forest of point-free monadic combinators.

## Conclusion

This is not to say that the point-free style is useless. In fact, libraries often adopt the point-free style out of performance considerations. Parser combinators and pickler combinators are good examples, among others. These typically make the informed trade-off of sacrificing readability in the interest of eliminating the allocations typically associated with expression builders. In rare cases there can even be point-free DSLs that remain legible in the large. However, adopting this approach always carries a heavy burden of proof, and should never be motivated merely by stylistic considerations.
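To illustrate that trade-off, here is a minimal parser-combinator sketch; the operator names are FParsec-inspired, and all definitions are hypothetical:

```fsharp
// A parser consumes input and yields a result plus the remaining input.
type Parser<'a> = string -> ('a * string) option

// Parse a single expected character.
let pChar (c: char) : Parser<char> = fun s ->
    if s.Length > 0 && s.[0] = c then Some (c, s.[1..]) else None

// Map over a parser's result.
let (|>>) (p: Parser<'a>) (f: 'a -> 'b) : Parser<'b> =
    fun s -> p s |> Option.map (fun (a, rest) -> f a, rest)

// Sequence two parsers, pairing their results.
let (.>>.) (p: Parser<'a>) (q: Parser<'b>) : Parser<'a * 'b> = fun s ->
    match p s with
    | Some (a, rest) -> (q |>> fun b -> a, b) rest
    | None -> None

// An "ab" parser built purely by composition:
let pAB = pChar 'a' .>>. pChar 'b'
```

Because each combinator simply composes functions, no builder state is allocated per parse step, which is the kind of saving the paragraph above alludes to; the price is that the resulting pipelines read far less naturally than a computation expression.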

## 11 thoughts on “Programming in the Point-Free Style”

1. Eirik, isn’t the F# idiom (and it’s fairly readable, too): let f xOpt = xOpt |> Option.filter ((>=) 0.) |> Option.map sqrt

?

1. eirik says:

I might have written it like that in the past. No more.

2. It’s, by the way:

let f1 = Option.map sqrt << Option.filter ((<=) 0.)

not:

let f2 = Option.map sqrt << Option.filter ((>=) 0.)

This mistake is often the reason why I don’t think we should use currying with operators.

let f3 = (Option.map sqrt) << (Option.filter (fun x -> x >= 0.))

But besides that, “easy to read” has nothing to do with simple. Easy is subjective; simple is not. And it depends on the person how easy it is to read. The last line says: only keep numbers greater than or equal to “0.”, then “sqrt” the number. It’s harder for a lot of people to read because:

1) Option.filter is used rarely, and I guess people don’t know what it does.
2) Composition in general is also used sparingly in F#, especially (<<).

So it is, moreover, harder to read for many people, because they don’t use this style of writing. I have also discussed this with a lot of people who have never used “lambdas” or higher-order functions like “List.map”, “List.filter” and so on. What do those people usually say? That a chain of “List.map”, “List.filter”, “List.reduce”, or LINQ or Java 8 Streams in general, is hard to understand, and they prefer loops instead.

1. eirik says:

Fixed, thanks!

3. piaste says:

A perhaps interesting addendum to this article could be the comparison between Freya and Suave’s routing syntaxes.

Freya uses computation expressions (https://docs.freya.io/en/latest/reference/routers/uri-template/routes.html), while Suave prefers combinators (https://suave.io/routing.html).

I find that in this case Suave ends up with the friendlier style, for a few reasons that have to do with the specific problem domain.

First, there is no possible confusion about the overall function signature – you know that a router needs to take in a HTTP request and return (maybe, at some point) a HTTP response. Accordingly, there are few combinators in use – you won’t find yourself scanning the Intellisense for Seq. looking for the right signature to complete your pipeline.

Second, the individual components are fairly simple, and so do not especially benefit from being assigned to their own variables. GET >=> pathScan “/user/%s” >=> showUser is not much longer and arguably just as readable as routeGetUserByName.

Third, as you add more routes Suave’s combinators sort of naturally form a hierarchical structure in which you still get a bird’s-eye view of your API on a single page, whereas Freya’s expressions are more likely to get refactored into separate declarations. It’s somewhat related to the second point, as the latter would be a more manageable approach for a truly complex structure, but web routes tend to be straightforward and somewhat homogeneous.

1. eirik says:

If we restrict Kleisli composition to just defining routes, then I have no doubt it can function as a neat little DSL. It is the abuse of fishbones as a general-purpose async-bind mechanism that my criticism was addressing, which I’ve seen in contexts other than Suave.

That said, the fundamental design flaw of Suave lies in the composable WebPart abstraction itself. By integrating HTTP routes into the Kleisli arrows themselves, it effectively makes the API of the website a runtime property. In other words, there is no way to tell whether the website serves a “GET /foo” endpoint unless we actually pass it such a request. This coupling of routing and execution can be problematic, particularly w.r.t. documentation tools and static analyzers. It is telling that the fork of Suave that adds Swagger support has changed precisely the routing aspect of the framework.

It would appear that Freya avoids that mistake.

Disclaimer: I’ve not used either Freya or Suave; my opinions are formed just by scanning docs/source code, so they could be wrong.