
Golang functional options and overhead: anonymous functions vs. methods

While reading the source code for the AWS SDK for Go, I noticed a neat technique for implementing functional options.

Functional options

Dave Cheney has written about them in detail, but to quickly recap, “functional options” are a common pattern for taking optional arguments to a function in Go. Go doesn’t natively support optional arguments, but it does support variadic functions. The pattern is to expose the function’s options as a struct type, then take as arguments any number of functions which mutate an instance of that type:

func DoSomething(
	mandatoryArgument1 int,
	mandatoryArgument2 string,
	options ...func(*SomethingOptions),
) error {
	opts := SomethingOptions{
		// Configure default option values if/as necessary.
	}

	// Apply provided options.
	for _, option := range options {
		option(&opts)
	}

	// Now the rest of the function can use those options along with the
	// mandatory arguments.

	// ...
}

type SomethingOptions struct {
	OptionalFlag  bool
	SomethingElse string
}

Invocation looks something like this:

err := DoSomething(42, "nice", func(options *SomethingOptions) {
	options.OptionalFlag = true
})

A very common extension of this is for the package that defines the function to also provide some option function constructors, usually named with the prefix “With”, that take the value you want to modify as an argument, and return an instance of an option function which closes over that value in order to bind it to the options on invocation:

func WithSomethingElse(somethingElse string) func(*SomethingOptions) {
	return func(options *SomethingOptions) {
		options.SomethingElse = somethingElse
	}
}

// Then...

err := DoSomething(420, "blaze", WithSomethingElse("it"))

Done to excess, this starts to look a lot like Objective-C. But the pattern carries a number of really nice benefits. Most meaningful for me: it allows a genuine distinction between passing a zero value as an option and not passing the option at all (in comparison to taking options as regular arguments and documenting them as being ignored when given as a zero value), and it is strongly typed (in comparison to taking an ...any argument and type-switching over each value to figure out what to do with it).
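To make that zero-value distinction concrete: a caller who passes an option with a zero value is explicitly asking for zero, while a caller who passes no option gets the default; a plain parameter can't tell those apart. Here's a minimal sketch of that idea (the names Fetch, RetryOptions, and WithRetries are invented for illustration, not from any real API):

```go
package main

import "fmt"

// RetryOptions is a hypothetical options struct.
type RetryOptions struct {
	Retries int
}

// WithRetries binds an explicit retry count, even zero.
func WithRetries(n int) func(*RetryOptions) {
	return func(o *RetryOptions) { o.Retries = n }
}

// Fetch defaults to 3 retries unless an option overrides it.
func Fetch(url string, options ...func(*RetryOptions)) string {
	opts := RetryOptions{Retries: 3} // default
	for _, option := range options {
		option(&opts)
	}
	return fmt.Sprintf("fetching %s with %d retries", url, opts.Retries)
}

func main() {
	fmt.Println(Fetch("https://example.com"))                 // default applies: 3 retries
	fmt.Println(Fetch("https://example.com", WithRetries(0))) // explicit zero: 0 retries
}
```

With a plain `retries int` parameter, a caller passing 0 would be indistinguishable from a caller relying on the default.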

The twist

Alright, so what did I find in the AWS SDK for Go that’s got me so intrigued? Well, instead of defining the option function as above, it was done something like this:

func WithSomethingElse(somethingElse string) func(*SomethingOptions) {
	return withSomethingElse(somethingElse).options
}

type withSomethingElse string

func (option withSomethingElse) options(options *SomethingOptions) {
	options.SomethingElse = string(option)
}

Well, that’s certainly a lot more verbose. What is the value of doing it this way? And actually, before we cover that, what even is this way?

Fundamentally, the exported function is still doing the same thing: taking the optional value, and returning a function that will bind that value to the function’s options struct. The behaviour has not changed, just how we express it.

So: what’s this withSomethingElse(somethingElse).options business? Well, withSomethingElse(somethingElse) is a type conversion. Many other languages use C-style cast/conversion syntax with parentheses around the type name (e.g., (withSomethingElse) somethingElse); Go does it the other way around, so type conversions look like function calls. (Type conversions are not to be confused with type assertions, which, along with type switches, are used to get from an interface type to the implementing type.)
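As a quick aside, here's a toy illustration of that difference, with types invented just for this example:

```go
package main

import "fmt"

// celsius is a defined type with float64 as its underlying type.
type celsius float64

func main() {
	// Type conversion: between types with compatible underlying types.
	// It looks like a function call, but no function is invoked.
	temp := celsius(21.5)
	back := float64(temp)
	fmt.Println(temp, back)

	// Type assertion: from an interface type down to a concrete type.
	var x any = "hello"
	s, ok := x.(string) // ok reports whether the assertion held
	fmt.Println(s, ok)
}
```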

If that’s a type conversion, then what is the withSomethingElse type? In order to accept conversions from a string, it must be defined from a string. And indeed, that’s what we have: type withSomethingElse string. But still, why do this?

Because defined types can have associated methods. By defining our own string type, we can add a method to it: options. This method does exactly the same thing that the anonymous function does in the original implementation, except, hypothetically, rather than having to define and return a new instance of the anonymous function every time, we can simply capture a reference to the method. And instead of an anonymous function closing over the option value, the method takes the value as its receiver.

That brings us back to withSomethingElse(somethingElse).options; specifically, the .options bit at the end. And now that we know that options is the name of a method on withSomethingElse, we can see that this whole line is setting up the receiver for this method, then returning a reference to the method.
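This "set up a receiver, hand back the method" move is just Go's method values in action. A small standalone sketch, with a type invented purely for illustration:

```go
package main

import "fmt"

type greeting string

func (g greeting) message() string {
	return "hello, " + string(g)
}

func main() {
	// greeting("world").message is a method value: the receiver
	// greeting("world") is captured now, and the resulting func() string
	// can be stored and called later, just like any other function value.
	f := greeting("world").message
	fmt.Println(f()) // prints "hello, world"
}
```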

The impact

Is this a good way to do functional options, then? Does this really have an impact, beyond making it a bit harder to read?

To find out, I wrote some benchmarks:

package main

import (
	"testing"
)

func DoSomething(options ...SomethingOption) (string, int) {
	opts := SomethingOptions{"abc"}
	for _, option := range options {
		option(&opts)
	}
	return opts.Value, len(opts.Value)
}

type SomethingOption func(*SomethingOptions)

type SomethingOptions struct {
	Value string
}

func WithNoop(options *SomethingOptions) {}

func WithValueByAnonymousFunction(value string) SomethingOption {
	return func(options *SomethingOptions) {
		options.Value = value
	}
}

func WithValueByMethod(value string) SomethingOption {
	return withValueByMethod(value).options
}

type withValueByMethod string

func (option withValueByMethod) options(options *SomethingOptions) {
	options.Value = string(option)
}

func BenchmarkDefault(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_, _ = DoSomething()
	}
}

func BenchmarkWithNoop(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_, _ = DoSomething(WithNoop)
	}
}

func BenchmarkWithValueByAnonymousFunction(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_, _ = DoSomething(WithValueByAnonymousFunction("def"))
	}
}

func BenchmarkWithValueByMethod(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_, _ = DoSomething(WithValueByMethod("ghi"))
	}
}
Here, the DoSomething function does some minimal work: returning the string and its length. There are four benchmark functions:

  • BenchmarkDefault calls the function with no argument. This lets us establish a baseline measurement of how long the function “should” take to run.
  • BenchmarkWithNoop calls the function with a static option function that does nothing. This lets us understand the overhead of calling with one option (the time required to iterate over a slice of one option function, and invoke it).
  • BenchmarkWithValueByAnonymousFunction calls it with an anonymous function option (the “classic” implementation).
  • BenchmarkWithValueByMethod calls it with a method option (the new way).

Results under Go 1.20.2:

$ ~/opt/go1.20.2.linux-amd64/bin/go test -bench=. -benchmem
goos: linux
goarch: amd64
pkg: functional-options
cpu: AMD Ryzen 7 3700X 8-Core Processor
BenchmarkDefault-8                              38808385                30.94 ns/op           16 B/op          1 allocs/op
BenchmarkWithNoop-8                             35837532                32.55 ns/op           16 B/op          1 allocs/op
BenchmarkWithValueByAnonymousFunction-8         34458160                33.90 ns/op           16 B/op          1 allocs/op
BenchmarkWithValueByMethod-8                    33709026                34.38 ns/op           16 B/op          1 allocs/op
ok      functional-options      4.844s

The actual times moved around a bit over successive runs, but ordering was consistent.

From this, we can see that there is less overhead in returning an anonymous function than there is in the multiple type conversions required to bind the value to the method. The more verbose approach taken by the AWS SDK is not only more difficult to read, but it carries a performance penalty, not a benefit.

Was this always the case? No! I reran the benchmarks with the latest point release of each Go version back to 1.10.

Plotted, we can see some clear movement:

[Figure: a line chart plotting the ns/op of each of the four benchmarks at each Go release from 1.10 through 1.20. The default and no-op times closely track each other all along (25–30 ns, trending upward with later Go versions). The by-anonymous-function and by-method times closely track each other from Go 1.10 through Go 1.15 (55–60 ns). The by-method cost plummets to ~30 ns at Go 1.16; the by-anonymous-function cost drops similarly with Go 1.17. From there onward, both track each other again (30–35 ns).]

Across releases, the anonymous function and method approaches have always been similar, but there is a clear change in Go 1.16, and another in Go 1.17. Sure enough, the Go 1.16 release notes point out a compiler improvement to inlining of functions which return methods. If we dive into the disassembly, we can see that the code generated for the invocation of WithValueByMethod is slightly longer under 1.16 than 1.15 (look for the region in pale yellow near the top of the output, corresponding to line 4 on the left panel – lines 14–26 in the 1.15 output, and lines 15–31 in the 1.16 output). More meaningfully, the CALL to WithValueByMethod has disappeared, meaning that it has been inlined.

But there’s a similar improvement for the anonymous function implementation in Go 1.17. Again, the Go 1.17 release notes point out another compiler improvement to inlining, this time for “functions containing closures” – exactly what we’ve got here. And once again, the disassembly corroborates that story, with a similar increase in the generated code for the invocation of WithValueByAnonymousFunction, and the disappearance of the corresponding CALL instruction.

The takeaway

First off, let’s put this in perspective: even in the most extreme case – ~58ns vs. ~30ns – the difference is so insignificant that you should never need to care about this. Even in a hot loop, the “real work” of your function will far exceed the cost of passing options the “wrong” way.

I suspect that the AWS SDK development team made this design decision around the time of the release of Go 1.16, or at least when it was still important to support Go 1.16. I think, a year and a half on, most people are probably running Go 1.17 or higher. And if they aren’t, then, again, the performance impact is so minimal you should absolutely choose based on syntactic preference and developer ergonomics.

Do what makes you happy. And for me, that is not faffing about with obscure type definitions to gain 30ns on operations not on the hot path when running under outdated language versions.
