Golang functional options and overhead: anonymous functions vs. methods

While reading the source code for the AWS SDK for Go, I noticed a neat technique for implementing functional options.

Functional options

Dave Cheney has written about them in detail, but to quickly recap, “functional options” are a common pattern for taking optional arguments to a function in Go. Go doesn’t natively support optional arguments, but it does support variadic functions. The pattern is to expose the function’s options as a struct type, then take as arguments any number of functions which mutate an instance of that type:

func DoSomething(
	mandatoryArgument1 int,
	mandatoryArgument2 string,
	options ...func(*SomethingOptions),
) error {
	opts := SomethingOptions{
		// Configure default option values if/as necessary.
	}

	// Apply provided options.
	for _, option := range options {
		option(&opts)
	}

	// Now the rest of the function can use those options along with the
	// mandatory arguments.

	// ...
}

type SomethingOptions struct {
	OptionalFlag  bool
	SomethingElse string
}

Invocation looks something like this:

err := DoSomething(42, "nice", func(options *SomethingOptions) {
	options.OptionalFlag = true
})

A very common extension is for the package that defines the function to also provide option function constructors, usually named with the prefix “With”. Each constructor takes the value you want to set as an argument, and returns an option function which closes over that value, binding it to the options struct on invocation:

func WithSomethingElse(somethingElse string) func(*SomethingOptions) {
	return func(options *SomethingOptions) {
		options.SomethingElse = somethingElse
	}
}

// Then...

err := DoSomething(420, "blaze", WithSomethingElse("it"))

Done to excess, this starts to look a lot like Objective-C. But the pattern carries a number of really nice benefits. Most meaningful for me: it allows a real distinction between passing a zero value as an option and not passing the option at all (in comparison to taking options as regular arguments and documenting zero values as ignored), and it is strongly typed (in comparison to taking an ...any argument and type-switching over each to figure out what to do with it).
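To illustrate that first benefit, here’s a minimal sketch (the names and the set-tracking field are my invention, not from any particular library) in which zero is a meaningful option value and must be distinguishable from “option not passed”:

```go
package main

import "fmt"

type RetryOptions struct {
	// MaxRetries of 0 is meaningful ("never retry"), so we track
	// whether it was set at all rather than inspecting the value.
	MaxRetries    int
	maxRetriesSet bool
}

func WithMaxRetries(n int) func(*RetryOptions) {
	return func(o *RetryOptions) {
		o.MaxRetries = n
		o.maxRetriesSet = true
	}
}

func Fetch(url string, options ...func(*RetryOptions)) string {
	opts := RetryOptions{MaxRetries: 3} // default
	for _, option := range options {
		option(&opts)
	}
	if opts.maxRetriesSet && opts.MaxRetries == 0 {
		return "retries explicitly disabled"
	}
	return fmt.Sprintf("retrying up to %d times", opts.MaxRetries)
}

func main() {
	fmt.Println(Fetch("https://example.com"))                    // default applies
	fmt.Println(Fetch("https://example.com", WithMaxRetries(0))) // zero, but deliberately set
}
```

With plain positional arguments, both calls would hand the function a `0` and the distinction would be lost.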

The twist

Alright, so what did I find in the AWS SDK for Go that’s got me so intrigued? Well, instead of defining the option function as above, it was done something like this:

func WithSomethingElse(somethingElse string) func(*SomethingOptions) {
	return withSomethingElse(somethingElse).options
}

type withSomethingElse string

func (option withSomethingElse) options(options *SomethingOptions) {
	options.SomethingElse = string(option)
}

Well, that’s certainly a lot more verbose. What is the value of doing it this way? And actually, before we cover that, what even is this way?

Fundamentally, the exported function is still doing the same thing: taking the optional value, and returning a function that will bind that value to the function’s options struct. The behaviour has not changed, just how we express it.

So: what’s this withSomethingElse(somethingElse).options business? Well, withSomethingElse(somethingElse) is a type conversion. Many other languages use C-style cast/conversion syntax with parentheses around the type name (e.g., (withSomethingElse) somethingElse); Go does it the other way around, so type conversions look like function calls. (Type conversions are not to be confused with type assertions, which, along with type switches, are used to get from an interface type to the implementing type.)

If that’s a type conversion, then what is the withSomethingElse type? In order to accept conversions from a string, it must be defined from a string. And indeed, that’s what we have: type withSomethingElse string. But still, why do this?

Because defined types can have associated methods. By defining our own string type, we can add a method to it: options. This method does exactly the same thing that the anonymous function does in the original implementation, except, hypothetically, rather than having to define and return a new instance of the anonymous function every time, we can simply capture a reference to the method. And instead of an anonymous function closing over the option value, the method takes the value as its receiver.

That brings us back to withSomethingElse(somethingElse).options; specifically, the .options bit at the end. And now that we know that options is the name of a method on withSomethingElse, we can see that this whole line is setting up the receiver for the method, then returning a reference to it (what the Go spec calls a “method value”).
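To see the mechanics in isolation, here’s a tiny standalone sketch (mine, not from the SDK) showing that a method value carries its receiver with it, just as a closure carries its captured variables:

```go
package main

import "fmt"

type greeting string

// greet is an ordinary method on the defined string type greeting.
func (g greeting) greet() string {
	return "hello, " + string(g)
}

func main() {
	// Convert the string, then capture a reference to the method.
	// f is a plain func() string with the receiver already bound.
	f := greeting("world").greet
	fmt.Println(f()) // prints "hello, world"
}
```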

The impact

Is this a good way to do functional options, then? Does this really have an impact, beyond making it a bit harder to read?

To find out, I wrote some benchmarks:

package main

import "testing"

func DoSomething(options ...SomethingOption) (string, int) {
	opts := SomethingOptions{"abc"}
	for _, option := range options {
		option(&opts)
	}
	return opts.Value, len(opts.Value)
}

type SomethingOption func(*SomethingOptions)

type SomethingOptions struct {
	Value string
}

func WithNoop(options *SomethingOptions) {}

func WithValueByAnonymousFunction(value string) SomethingOption {
	return func(options *SomethingOptions) {
		options.Value = value
	}
}

func WithValueByMethod(value string) SomethingOption {
	return withValueByMethod(value).options
}

type withValueByMethod string

func (option withValueByMethod) options(options *SomethingOptions) {
	options.Value = string(option)
}

func BenchmarkDefault(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_, _ = DoSomething()
	}
}

func BenchmarkWithNoop(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_, _ = DoSomething(WithNoop)
	}
}

func BenchmarkWithValueByAnonymousFunction(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_, _ = DoSomething(WithValueByAnonymousFunction("def"))
	}
}

func BenchmarkWithValueByMethod(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_, _ = DoSomething(WithValueByMethod("ghi"))
	}
}

Here, the DoSomething function does some minimal work: returning the string and its length. There are four benchmark functions:

  • BenchmarkDefault calls the function with no argument. This lets us establish a baseline measurement of how long the function “should” take to run.
  • BenchmarkWithNoop calls the function with a static option function that does nothing. This lets us understand the overhead of calling with one option (the time required to iterate over a slice of one option function, and invoke it).
  • BenchmarkWithValueByAnonymousFunction calls it with an anonymous function option (the “classic” implementation).
  • BenchmarkWithValueByMethod calls it with a method option (the new way).

Results under Go 1.20.2:

$ ~/opt/go1.20.2.linux-amd64/bin/go test -bench=. -benchmem
goos: linux
goarch: amd64
pkg: functional-options
cpu: AMD Ryzen 7 3700X 8-Core Processor
BenchmarkDefault-8                              38808385                30.94 ns/op           16 B/op          1 allocs/op
BenchmarkWithNoop-8                             35837532                32.55 ns/op           16 B/op          1 allocs/op
BenchmarkWithValueByAnonymousFunction-8         34458160                33.90 ns/op           16 B/op          1 allocs/op
BenchmarkWithValueByMethod-8                    33709026                34.38 ns/op           16 B/op          1 allocs/op
ok      functional-options      4.844s

The actual times moved around a bit over successive runs, but ordering was consistent.

From this, we can see that there is less overhead in returning an anonymous function than there is in the multiple type conversions required to bind the value to the method. The more verbose approach taken by the AWS SDK is not only more difficult to read, but it carries a performance penalty, not a benefit.

Was this always the case? No! I reran this test with the latest point release of each Go version back to 1.10.

Plotted, we can see some clear movement:

A line chart plotting the ns/op of each of the four benchmarks at each Go release from 1.10 through 1.20. The default and no-op times closely track each other all along (25-30ns, trending upward with later Go versions). The by-anonymous-function and by-method times closely track each other from Go 1.10 through Go 1.15 (55-60ns). The by-method cost plummets to ~30ns at Go 1.16; the by-anonymous-function cost drops similarly with Go 1.17. From there onward, both track each other again (30-35ns).

Over the years, the anonymous-function and method approaches have tracked each other closely, but there is a clear change in Go 1.16, and another in Go 1.17. Sure enough, the Go 1.16 release notes point out a compiler improvement to inlining of functions which return methods. If we dive into the disassembly, we can see that the code generated for the invocation of WithValueByMethod is slightly longer under 1.16 than 1.15 (look for the region in pale yellow near the top of the output, corresponding to line 4 on the left panel – lines 14–26 in the 1.15 output, and lines 15–31 in the 1.16 output). More meaningfully, the CALL to WithValueByMethod has disappeared, meaning that it has been inlined.

But there’s a similar improvement for the anonymous function implementation in Go 1.17. Again, the Go 1.17 release notes point out another compiler improvement to inlining, this time for “functions containing closures” – exactly what we’ve got here. And once again, the disassembly corroborates that story, with a similar increase in the generated code for the invocation of WithValueByAnonymousFunction, and the disappearance of the corresponding CALL instruction.

The takeaway

First off, let’s put this in perspective: even in the most extreme case – ~58ns vs. ~30ns – the difference is so insignificant that you should never need to care about this. Even in a hot loop, the “real work” of your function will far exceed the cost of passing options the “wrong” way.

I suspect that the AWS SDK development team made this design decision around the time of the release of Go 1.16, or at least when it was still important to support Go 1.16. I think, a year and a half on, most people are probably running Go 1.17 or higher. And if they aren’t, then, again, the performance impact is so minimal you should absolutely choose based on syntactic preference and developer ergonomics.

Do what makes you happy. And for me, that is not faffing about with obscure type definitions to gain 30ns on operations not on the hot path when running under outdated language versions.

Building Hasura v2.0.8

This is a quick update to/errata list for Building the Hasura GraphQL Engine, as I recently built GraphQL Engine v2.0.8 locally and found a couple of things have changed:

Cabal project structure change

cabal.project.freeze now lives at the top level of the Git repository, not inside server/. You’ll need to grep it for GHC/Cabal versions instead of server/cabal.project.freeze (which no longer exists).

MySQL client library build shenanigans

Something having to do with the mysql dependency has changed, and the cabal v2-build process complains that it can’t find my Homebrew-installed OpenSSL library. Thanks to haskell/cabal#2997, the workaround requires modifying the cabal.project file in the top level of the project:

diff --git a/cabal.project b/cabal.project
index c7f79ed63..61f39635c 100644
--- a/cabal.project
+++ b/cabal.project
@@ -59,6 +59,10 @@ source-repository-package
   location: https://github.com/fpco/odbc.git
   tag: 7c0cea45d0b779419eb16177407c4ee9e7ba4c6f

+package mysql
+  extra-include-dirs: /usr/local/opt/openssl/include
+  extra-lib-dirs: /usr/local/opt/openssl/lib
 package odbc
   ghc-options: -Wwarn
   -- Our CI compiles with -Werror, which is also applied to those packages

Thanks to this file change, the local repository is now in a dirty state, and the resulting binary will tell the browser to load console assets from https://graphql-engine-cdn.hasura.io/console/assets/channel/dirty/v2.0/main.js.gz instead of https://graphql-engine-cdn.hasura.io/console/assets/channel/stable/v2.0/main.js.gz. This, of course, 404s. In order to make the build think the repository state is clean, we need to commit the changes and move the version tag to the new commit. And just to make things ever so slightly more complicated, the repository contains a pre-commit hook, so we need to commit with --no-verify so that doesn’t run:

$ # Make the requisite changes to cabal.project:
$ $EDITOR cabal.project
$ git commit --no-verify --message "Bodging in extra MySQL dependency paths."
$ git tag --force v2.0.8

Good to go

Then you can cd server && cabal v2-update && cabal v2-build as before.

Put guards on your destructive operations

tl;dr: Sanity-checking your input should go beyond making sure the input is acceptable. Try to predict contextually-likely mistakes your user will make (even – especially! – when your user is you) and prevent them, particularly when your operation is destructive.

I’m back on a macOS machine more of the time again, which means I’m using Music.app (née iTunes) more again. It’s not my favourite, but I don’t hate it. (Actually, I normally use Plexamp, but my Plex server is offline at the moment – long story.) My biggest frustration with Music.app has always been the lack of native FLAC support, which prevents me from simply network-mounting my music library from my file server and playing it back directly. I used to use Max to batch-convert FLAC to MP4-ensconced AAC (.m4a), because CoreAudio has about the best AAC encoder going, and afconvert is too inscrutable to use directly. But these days, FFmpeg’s native encoder is good enough, so I wrote a simple shell script to:

  • find all FLAC files under the current directory and – in parallel – use FFmpeg to convert them all to AAC-in-MP4;
  • find all M4A files under the current directory and use AtomicParsley to populate their album art from an artwork file in the same directory;
  • delete all non-M4A files under the current directory;
  • add everything left under the current directory into my Music.app library (by moving it into ~/Music/Music/Media.localized/Automatically Add to Music.localized/).

If those last two steps haven’t already set off alarm bells in your head, they should’ve! That’s some pretty destructive stuff, particularly because:

  • I’m applying it to the current directory, not some fixed “ingest” directory, and
  • this script probably lives somewhere in my $PATH, so I can invoke it (maybe accidentally) from anywhere.

In fact, I have accidentally invoked it, from my Downloads directory (which contains nothing of consequence), and from my home directory (which contains approximately everything of value stored locally on my machine). But: I didn’t lose any data in either instance. Why? Because I thought ahead, and added two preconditions to the conversion script:

  1. No files are permitted in the immediate working directory. When I convert music, I do so album by album, not collections of loose files, and those albums are always in their own directories. E.g., I’m always working from a structure like this:
    ./Casualties of Cool/Casualties of Cool (2014)
    ./Casualties of Cool/Casualties of Cool (2014)/CD 1
    ./Casualties of Cool/Casualties of Cool (2014)/CD 1/01 - Daddy.flac
    ./Casualties of Cool/Casualties of Cool (2014)/CD 1/02 - ...
    So a stray file at the top level of the working directory is distinctly abnormal, and might mean that I’m in the wrong place.
  2. The number of FLAC files (files with names ending in .flac) contained within the working directory must exceed the number of all other files.

These checks take more space in the script than the actual conversion work (15 lines vs. 4). And they’re totally worth it.
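For illustration, here’s a sketch of what those two checks might look like in shell (this is my reconstruction, not the actual script; the messages and structure surely differ):

```shell
# Guard checks for the conversion script: refuse to run if the working
# directory doesn't look like a normal album-conversion layout.
guard_checks() {
    dir="$1"
    # 1. No loose files directly in the working directory; albums
    #    should always be in their own subdirectories.
    if [ -n "$(find "$dir" -maxdepth 1 -type f | head -n 1)" ]; then
        echo "refusing to run: loose files in working directory" >&2
        return 1
    fi
    # 2. FLAC files must outnumber all other files underneath.
    flacs=$(find "$dir" -type f -name '*.flac' | wc -l | tr -d ' ')
    others=$(find "$dir" -type f ! -name '*.flac' | wc -l | tr -d ' ')
    if [ "$flacs" -le "$others" ]; then
        echo "refusing to run: working directory is not mostly FLAC" >&2
        return 1
    fi
}
```

Either check failing aborts the run before anything destructive happens.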

Building the Hasura GraphQL Engine

2021-09-01: I’ve written a follow-up post covering changes introduced in Hasura v2.0.8.

$WORKPLACE is using Hasura for a project. The core server component of Hasura – the “GraphQL Engine” – is distributed only as a Docker container. This works well enough on my Linux machine, but I have a strong aversion to running Docker on non-Linux machines. I find the idea that containers can still be considered “lightweight” when they’re running inside a full-fat VM to be a bit laughable. I prefer to run things on “bare metal” in my development environment where I can. And while Hasura don’t distribute first-party native binaries, there’s hypothetically nothing stopping us from building our own; it is, after all, open source (Apache 2.0). So let’s do that.

The Engine is written in Haskell, so as a prerequisite, install ghcup. If it’s been added to your PATH correctly, you should be able to run ghcup list and get a list of available Haskell compiler (GHC) and build/package manager (Cabal) versions.

Check out the GraphQL Engine Git repository:

$ git clone https://github.com/hasura/graphql-engine.git
$ cd graphql-engine
$ # Make sure we're building a specific release version, not just the master branch.
$ git checkout v1.3.3
$ # The server-side source code is all in the server/ directory.
$ cd server

The Haskell build manager, Cabal, is almost capable of building the project without intervention. The fly in the ointment is that Hasura seems to be quite picky about the GHC version it’s built with, and Cabal isn’t high enough up the dependency food chain to pick which compiler version gets used. So instead, we’ll need to pull desired GHC and Cabal versions out of the freeze file, and install/set those as the defaults with ghcup:

# Install the GHC version corresponding to the required Haskell language base version.
$ grep 'any.base ==' cabal.project.freeze
any.base ==,
$ ghcup install ghc base-
[ Info ] downloading: https://downloads.haskell.org/~ghc/8.10.1/ghc-8.10.1-x86_64-apple-darwin.tar.xz
[ Info ] GHC installation successful
$ ghcup set ghc base-
[ Info ] GHC 8.10.1 successfully set as default version
$ grep 'any.Cabal ==' cabal.project.freeze
constraints: any.Cabal ==,
$ ghcup install cabal
[ Info ] downloading: https://downloads.haskell.org/~cabal/cabal-install-
[ Info ] Cabal installation successful
$ ghcup set cabal
[ Info ] Cabal successfully set as default version

Before you can go off and run make, you’ll also need a couple of libraries installed: unixODBC and libpq are the important ones. On a Mac, you can brew install unixodbc libpq.

Then you can kick Cabal into action:

$ cabal v2-update
(git junk happens)
Downloading the latest package list from hackage.haskell.org
To revert to previous state run:
    cabal v2-update 'hackage.haskell.org,...'
$ cabal v2-build
(git junk happens)
Resolving dependencies...
Build profile: -w ghc-8.10.1 -O1
In order, the following will be built (use -v for more details):
(looong build process happens)
Building executable 'graphql-engine' for graphql-engine-1.0.0..
Linking /path/to/graphql-engine/server/dist-newstyle/build/x86_64-osx/ghc-8.10.1/graphql-engine-1.0.0/x/graphql-engine/opt/build/graphql-engine/graphql-engine ...

And that’s it. The long path given by the last line (“Linking…”) is the final executable. It’ll be about 64MB. You can run it as-is and it’ll complain about missing arguments. The documented configuration arguments and environment variables are really just passed through to the binary running in the container, so they’ll work just fine with the binary running out of the container, too. For example, to connect to the local database testdb as the user testuser, and enable the console:

/path/to/graphql-engine \
    --database-url postgres://testuser@localhost/testdb \
    serve \
    --enable-console

Then point a browser at http://localhost:8080/console.

Note that the graphql-engine executable is dynamically linked against a number of other libraries:

$ otool -L /path/to/graphql-engine
        /usr/local/opt/postgresql/lib/libpq.5.dylib (compatibility version 5.0.0, current version 5.13.0)
        /usr/lib/libz.1.dylib (compatibility version 1.0.0, current version 1.2.11)
        /usr/lib/libiconv.2.dylib (compatibility version 7.0.0, current version 7.0.0)
        /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1252.250.1)
        /usr/lib/libcharset.1.dylib (compatibility version 2.0.0, current version 2.0.0)

Not very portable. To physically relocate these libraries and modify the linking of the graphql-engine executable for easier distribution, you can either do this yourself with install_name_tool, or use something like locallink to do it for you.

Limitations of the <meta> tag for specifying character encoding

A question came up on reddit: why is my webpage being interpreted with the incorrect character encoding? The question (which has since been removed, or else I’d link to it) involved some specifics about how the page was being served, and, paraphrased, the answer was that PHP-generated pages were served with an HTTP Content-Type header which included encoding information, and static HTML pages weren’t.

But the markup included a <meta http-equiv="Content-Type"> tag. Shouldn’t the encoding have gotten interpreted correctly regardless of the Content-Type header, then? I threw off a quick, cargo-cult-tinged remark about being sure to place a <meta> tag specifying character encoding before the <title> tag, and someone else said they thought it was interpreted anywhere in the first 128 bytes of the document.

Rather than continue to perpetuate hearsay and questionably-shiny pearls of wisdom, I’m going to try to nail down some factual observations about the behaviour of character-encoding-information-bearing <meta> tags. I want to know how the tag interacts with real HTTP headers, how it behaves when it’s placed at different offsets in the document (before rendered content? After rendered content? A long way after the start of the document?), and whether different browsers handle it differently.
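For orientation, this is the kind of in-document declaration under test (for what it’s worth, the current HTML spec requires the declaration to fall entirely within the first 1024 bytes of the document – one of the claims I want to check against real browser behaviour):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Either of these forms declares the encoding in-document: -->
  <meta charset="utf-8">
  <!-- or the older, equivalent form:
       <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> -->
  <title>Encoding test</title>
</head>
<body>…</body>
</html>
```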

Continue reading

Update, weeks of 2020-07-27 and 2020-08-03

It’s been a slower couple of weeks. M is back into her studies, so my own days are following a bit more of a pattern. I still have regular events keeping me up until the wee hours with people back in Canada on Tuesday and Thursday nights, but I’m trying to get in and out of bed at reasonable times otherwise. I went through a few days where I was pretty into Slime Rancher, and now Newto is pulling me back into TIS-100 and Infinifactory. The end of the summer is visible. Days are slowly getting shorter; it’s dark before 22:00.

Cottage cleaning continued

With the cottage empty for a few days and a bit of better weather, I undertook to power wash the paving tiles around the cottage.

Satisfying results, for sure. Need to do around the house next.

Bubble waffles

A café in Aberlour has started serving bubble waffles, so of course we had to take a walk down to try them. Hard to measure up to Moo Shu, but it was pretty good. In my excitement, I neglected to take any photos; but I did get a couple on the walk back up to the house.

Employment and writing

It’s time to start thinking about the fall. M is planning on moving to Strathkinness for the winter, and I need to start looking for something to do – something which will look good on a rental application. “Occasional open source contributor” probably isn’t enough. And I’m struggling to push ahead on that work, anyway, without anyone else interested in me doing it. Guess it’s time to start looking for real work. I started poking around for contract opportunities on Upwork. Lots of people looking for lots of things I can’t do. Imposter syndrome hits hard and fast. It’s the sort of thing to make one dream of switching careers. Too bad my obvious backup, live sound, is still in such an overwhelming state of COVID-induced decline.

I wrote an exploratory article on the performance of a particular design decision I made in passing on a project years ago. I don’t really remember what I was doing which brought it to mind, but I’m glad it did. It was fun to write up, especially because I hadn’t put much thought into the performance of the choice at the time: it was originally stylistically-motivated. While writing the code samples for the post, I found that the amount of material I wanted to provide was unpleasantly long to read casually. Rather than provide a condensed version of the code in the post and link to a longer form, I annotated the unimportant sections and wrote a plugin to control visibility. I’m still struggling with the language to use to describe this feature: is it expansion? Compression? “Squeezing” is the term I used in the plugin name, but that was mostly for the pun value. And what are the bits that get hidden? Layers? Blocks? Expansions? Would like to get back to it and give it a proper readme, a configuration interface, and release it to the WordPress Plugin Directory.

Cullen and the Knock

We spent most of yesterday afternoon in Cullen, sitting on a bench overlooking the harbour. We had lunch, and M read while I watched the goings-on and enjoyed the day. People watching isn’t usually my thing, but people don’t usually move about that much, either.

In the evening, we climbed the Knock and had a picnic supper. That’s Ben Rinnes in the background.

It’s been a beautiful couple of days here. Clear, bright and sunny, just enough of a breeze.

Next week

I’d like to land at least one contract on Upwork, even if just something small.

I’d like to beat (or at least match) Newto’s cycle count on Interrupt Handler.

I’d like to write two more sections for the library logging analysis project/post I started on months ago.

I’d like to at least look into building a universal macOS binary for NRJS to support both x86-64 and ARM64.

I’d like to go at least one whole day without having an existential crisis.

Silly micro-optimization: branching vs. dynamic dispatch

Once upon a time, I wrote a PHP script to export some data as CSV. If memory serves, it looked something like this:


class ExportController
{
    private $fooRepo;

    public function __construct($fooRepo)
    {
        $this->fooRepo = $fooRepo;
    }

    public function get()
    {
        $data = $this->fooRepo->getRecords();
        echo "Time,Column 1,Column 2,Column 3\n";
        foreach ($data as $record) {
            printf("%s,%s,%s,%s\n",
                $record->time, $record->a, $record->b, $record->c);
        }
    }
}
Continue reading

Update, week of 2020-07-20

Rebooting Seefilms by way of DCGSFT, and on the use of Syncplay

This is the eleventh summer some friends and I have run the DCGSFT drama group. Due to COVID-19 inhibiting its usual format, the group isn’t doing a performance this year; instead, we’re taking the opportunity (and advantage of everyone’s newly-developed familiarity with video conference calls) to do group readings of plays. It’s not very formal; whoever shows up on the call gets a part, and parts are assigned randomly (and can rotate from scene to scene). So far, we’ve read through Dear Brutus by J. M. Barrie, and Saint Joan by George Bernard Shaw.

This week, we took a break from reading plays, and watched a film adaptation of one instead: the 2002 adaptation of The Importance of Being Earnest by Oscar Wilde. We performed this play last summer, so it was pretty familiar to most; beyond that, I’d already re-read the script and watched this cinematic adaptation a month or two ago with KJJ’s Windstone book club. It was neat to rewatch it again with a different crew, and see different things pulled out of it.

Outside of the actual watching, there was a little technical effort involved on my part to get things set up for synchronous watching. I wanted to do something a little more structured than someone in the call doing a countdown and having everyone try to hit “play” at the same time. And because some people watching (self included, but also our fearless director) are doing so from fairly crappy Internet connections, I wanted a system where some people could stream the movie directly, while other people could download a copy ahead of time. The obvious solution is Syncplay: it’s free (in both of the usual senses), it’s cross-platform, and it supports a number of media players.

Syncplay, however, is only one piece of the puzzle (the other being a media player). Setup requires a little more attention than I (pessimistically, and perhaps uncharitably) expected all participants to be able to muddle through by themselves. I spent several hours trying to package up a turnkey distribution of Syncplay, mpv, and a configuration file for Syncplay which would automatically log into the right server and room with a preconfigured username. The intent was to throw together a web interface which would ask for a username, then patch the configuration file, and deliver a customized archive to the user. Unfortunately, the Syncplay configuration dialog doesn’t appear to be skippable. When loading the configuration file, Syncplay substitutes default values for unspecified configuration options, so I could generate a file specifying only the things I care about (username, server, room, etc.) – but the configuration dialog would always pop up anyway when launching Syncplay. Rightly or wrongly, I felt that if I couldn’t totally achieve my objective of a one-click launch then it wasn’t worth trying to build my own package at all. In the end, I wrote some instructions for installing the individual bits and pieces, and it worked out okay. We had a few issues with the stream pausing and spuriously rewinding/skipping, and I’m not sure it was really worth the effort to get everyone to use it; but it did function in the end.

Cottage cleanup

Most of the big effort of the week went into cleaning up around the cottage in anticipation of the arrival of the first Airbnb guests of the season. This involved the usual mundane deep-clean things we hadn’t done earlier in the year, like windows and the fridge, along with extra, pandemic-related disinfecting.

Earlier in the year, M and I cut down a whole bunch of Scotch broom near the cottage:

We had piled it all in the cottage driveway, because we thought that would be the easiest place from which to load it all into the trailer and take it to the dump. Unfortunately, the dump has stopped taking trailer-loads for now, so there the pile sat. We ended up hand-carrying it down the hill and re-piling it in the main parking lot next to the drive shed.

The trouble with cleaning things is that you keep finding more things to clean. I took a brush to the garden bench thinking I could quickly knock some of the moss off it; a couple hours (and one serious application of wire brush and elbow grease) later, the bench is much cleaner, but really ought to be properly sanded and stained. To be done some time when there’s no one in the cottage to miss the bench – maybe in the fall.

M spent a lot of her time trying to beat the overgrown cottage garden into submission. In particular, one large rosebush had pulled itself off the wall and needed reanchoring. And a barrel planter needed replanting; and the interior of the cottage is much improved for the addition of several little plants. It’s not a great time to be cutting things back, horticulturally, but you’ve gotta do what you’ve gotta do. My thumbs are pretty brown, so I keep getting pretty surprised by what a few green things can accomplish.

It does feel really good to get this stuff off of the list. Some of it has been on there since we got here.

Next week

I’d still like to do the things which I said last week that I wanted to do this week. This week has been pretty physically demanding, so it was hard to muster the interest in expending much mental effort even when I did get the time. Hopefully, with the cottage pressure off, I’ll feel a bit more like tackling some deeper technical challenges in the coming week.

Update, weeks of 2020-07-06 and 2020-07-13

I am a great eater of beef

I made steaks (and frites). First time trying a reverse sear. The ribeyes we got were a bit thinner than necessary to take full advantage of the technique, but it was a good first stab at it. Even if they did come out closer to medium-well, they tasted phenomenal, and stayed wonderfully juicy. I’ll definitely be doing that again. I was a bit too ginger with the temperature for the fries, and while they were well-cooked and seasoned, they were a bit on the limp side. Would’ve been a good first pass for double-fried fries, but I was trying to do it in one. Live and learn.


M found what she believed to be an expansive patch of wild blueberries in the woods, and, after confirming that they were indeed wild blueberries, we collected probably half a cup of them. A bit bitter by themselves, but fantastic with (somewhat more than half a cup of) ice cream.

We repeated the experiment with raspberries from the garden, and appreciated the results similarly (although not quite as much – the blueberries really were great).


I’ve been mulling over whether and how best to get involved with the local community radio station. I mocked up a new website layout, but it’s not done yet and I ran into a few snags, so I put it back on the shelf. I did build prototype apps for both Android and iOS:

I tried to build something functionally equivalent to this many years ago for Axe Radio, but never got it finished. This seemed like a good opportunity to prove to myself that I could do it, given the time.

I ran into frustrations building both. The Android player was built over parts of three days. The first day was a false start based on the template Android Studio gives you if you start a new app and accept all of its default proposals. I ended up with Kotlin and the AppCompat/AndroidX support libraries. That sounded well and good (I was looking forward to playing with Kotlin), but combined, those options carry over 2 MB of overhead into your published app package. To me, for an app which should be able to get by on media playback functionality that has been built into Android for years now, that’s unacceptable. I spent the rest of the day in a new, Java-based project trying to pare back the Gradle build definition to a point where I could understand everything it was doing.

On day two, I got down to business wiring things up. On Android, you need to manage your own background task for long-lived audio playback. It’s not hard, but it is a little tedious to get set up properly. Beyond that, the built-in programmatic media player, the aptly-named MediaPlayer, doesn’t give very good error messages. I stopped in frustration when I got to the point where older versions of Android were working fine, but Android 9+ failed with an obscure error code (1, meaning MEDIA_ERROR_UNKNOWN, with extra information -2147483648, meaning MEDIA_ERROR_SYSTEM – helpful and specific, I’m sure you’ll agree).

On day three, I proved that my implementation worked on some level when I succeeded in playing back media embedded in the app. After much fumbling and floundering, I tried side-stepping MediaPlayer’s built-in HTTP retrieval mechanism: I made my own HTTP request for a fixed-length chunk of the audio stream, and fed the bytes into MediaPlayer. And… it didn’t even make it to MediaPlayer, because I got an error making the HTTP request. As of Android 9, all HTTP traffic must be secure (HTTPS), or you need to configure your app to allow plaintext traffic. And yet, I had already configured that back on day 2. Turns out that redeploying the app through the Android Studio debugger may not cause the device to pick up on changes to the app manifest. After I uninstalled the app from the emulated device and reinstalled it, it worked perfectly. Aaugh.
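For reference, the manifest opt-in in question is a single attribute on the `application` element. On Android 9 (API 28) and up, plain-HTTP traffic is blocked by default unless you set it. A minimal sketch (the package name and everything else here is hypothetical):

```
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.radioplayer">
    <!-- Android 9+ blocks cleartext HTTP unless this is set. -->
    <application android:usesCleartextTraffic="true">
        <!-- activities, services, etc. -->
    </application>
</manifest>
```

The more surgical alternative is a network security config file that permits cleartext traffic only for the stream’s domain, rather than app-wide.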

As for iOS, I ran into slightly similar backwards-compatibility issues trying to start the project. The default Xcode template for a new Swift app relies on SwiftUI, which is only available as of iOS 13. I wanted to target something at least a version or maybe two older than that. Even after starting afresh with a Storyboard-based app, I still had to rip out a lot of references to scenes before it would compile. After that, it was mostly straightforward to get something working. iOS has similar restrictions on HTTP content in recent versions, but it gives helpful error messages about it. It’s also much more restrictive about what can run in the background than Android is, but the trade-off is that the process for doing that background work is much simpler: tick off the “Audio, AirPlay, and Picture in Picture” background mode permission, and your AVPlayer will do the right thing. No coordinating state between your app’s UI and some background task/service, because it’s managed for you. Much less flexible, but much easier to do – as long as what you want to do is on the well-trodden path. Even then, not in all cases: error handling for AVPlayer is a mess, and the iOS version of the app mostly doesn’t do it right now. HTTP errors from the source (e.g., 404 if the stream is offline) are handled separately from playback/stream errors (e.g., bad decoding), and are accessed through clumsy, Objective-C-style key-value observers. Even knowing when the player has started actually playing back media versus just starting to buffer it is tedious.

On one hand, it’s nice to have knocked out proofs-of-concept for both of these. On the other hand, neither experience was welcoming. Both Android Studio and Xcode were massive downloads on my wee, limited Internet connection here. Android Studio started off better, with a “mere” 850 MB installer, but immediately after install it kicked off downloading several hundred more megabytes of updates and plugins. Then, to emulate a device, you need to download an image for each Android version you want to run, each of which weighs in at 700 MB to 1.1 GB. Total weight, by the time all was said and done, was probably in the neighbourhood of 5.5 GB. Xcode, on the other hand, is a single massive 7.8 GB package, but it does have all of the bits in the box, so to speak.

I don’t like using either IDE much. IntelliJ (even for more traditional Java projects) always feels like death by a thousand cuts, with every little thing being just a little different than I’m used to. And without getting into its performance issues, Xcode continues to make worse and worse use of screen real estate. I feel like I could see more code on the screen at once in the QBASIC editor on a CGA monitor than in a contemporary Xcode session. This whole experience has been a good reminder of why I never finished doing this for Axe Radio: too much tooth-pulling to go through voluntarily, unless you’re feeling particularly stubborn.

I also sent an e-mail to KCR asking if and how I could get involved. Haven’t heard back yet, but maybe something beyond my own edification will come of this, eventually.


I added a “The webcam is located at…” note to the webcam page. I realized that I’ve been sending this link to friends who don’t really have a precise idea of where I am, so this answers the question nicely.

I’m hoping to get a timelapse interface built soon. I have weeks of images now at five-minute intervals, and it would be fun to be able to browse through them. In addition to the obvious daily timelapse (animation of all of the images from the last 24 hours in sequence), I think it would be neat to do a time-by-day timelapse, showing images from (around) the same time across multiple days. It wouldn’t look as fluid (the clouds would jump around a lot), but it would be an easy way to compare day-by-day weather.
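Both variants would be easy to sketch with ffmpeg, assuming a hypothetical naming scheme where the webcam writes one frame every five minutes as webcam/YYYY-MM-DD_HH-MM.jpg. These are dry-run functions – they print the command rather than running it (drop the leading echo to actually encode):

```shell
# Hypothetical frame layout: webcam/YYYY-MM-DD_HH-MM.jpg, one every five minutes.

# Daily timelapse: every frame from one day, played back at 12 fps.
daily_timelapse() {
  day=$1  # e.g. 2020-07-13
  echo ffmpeg -framerate 12 -pattern_type glob -i "webcam/${day}_*.jpg" \
    -c:v libx264 -pix_fmt yuv420p "daily-${day}.mp4"
}

# Time-by-day timelapse: the same time slot across every day on disk.
time_by_day_timelapse() {
  hhmm=$1  # e.g. 14-00
  echo ffmpeg -framerate 2 -pattern_type glob -i "webcam/*_${hhmm}.jpg" \
    -c:v libx264 -pix_fmt yuv420p "same-time-${hhmm}.mp4"
}
```

The glob patterns sort lexicographically, which with date-stamped filenames conveniently matches chronological order.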

I’m pretty frustrated with the reflections on the inside of the window. I’d really like to put the camera outside the house. The webcam I’m using (Logitech C920) is not weather-protected in the slightest, but maybe I can build a box to put it in. Or maybe I can replace the plastic housing with aluminum. Then I could put a nicer lens on it, too. But that’s a fairly expensive option, as cool as the result would be. I’ve also thought about replacing the camera entirely with a security camera. Some of those are pretty cheap. But they can also be pretty sketchy, with unknown data leakage, and sometimes the image can only be accessed through some cloud service, not directly. A CUBE would be perfect, only it isn’t out yet. I could even resort to a Raspberry Pi High Quality Camera + a lens + a PoE HAT + some waterproof case, and have a self-contained unit. But that’s probably the most expensive solution of the lot.

Ben Rinnes

M and I climbed up Ben Rinnes.

Ben Rinnes, seen from the webcam at around the time we were at its peak.

The climb covers a difference of 525 m in altitude from base to peak. The round trip took just minutes shy of three hours, with about an hour and three-quarters of that being the trek up, and maybe ten or fifteen minutes spent at the top.

We were pretty spent afterwards, but after staring out the window at it for a few months, it felt pretty good to climb it and look down from the opposite perspective.

Next week

I’m hoping to get back to some research/analysis I was doing for NRJavaSerial. It would also be nice to finish up error handling for the iOS version of the KCR app, and maybe start on design work for both the Android and iOS versions. Displaying some information on the currently-playing show and music would be nice, too.

Stupid simple command line wall timer

I was making dinner and needed to time something. My phone already had another timer going, and running two timers simultaneously is clearly beyond the capacity of a modern smartphone, so I reached for my laptop instead. And as I Googled “macos desktop timer” (because having any timer at all built into macOS is clearly an extravagance, and not one Apple has seen fit to bestow upon us), my Wi-Fi dropped. So I grabbed a terminal, and ran:
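The exact one-liner is past the cut, but the general shape of such a timer is a countdown loop over sleep. A sketch (hypothetical – not necessarily the command from that evening), wrapped in a function so the duration is easy to change:

```shell
# Count down the given number of seconds, then ring the terminal bell.
timer() {
  s=$1
  while [ "$s" -gt 0 ]; do
    printf '\r%3ds remaining' "$s"
    sleep 1
    s=$((s - 1))
  done
  printf '\r  0s remaining\a\n'
}

# timer 600   # a ten-minute timer
```

The \r redraws the countdown on one line, and \a sounds the bell when time is up – no Wi-Fi required.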
