Golang functional options and overhead: anonymous functions vs. methods

While reading the source code for the AWS SDK for Go, I noticed a neat technique for implementing functional options.

Functional options

Dave Cheney has written about them in detail, but to quickly recap, “functional options” are a common pattern for taking optional arguments to a function in Go. Go doesn’t natively support optional arguments, but it does support variadic functions. The pattern is to expose the function’s options as a struct type, then take as arguments any number of functions which mutate an instance of that type:

func DoSomething(
	mandatoryArgument1 int,
	mandatoryArgument2 string,
	options ...func(*SomethingOptions),
) error {
	opts := SomethingOptions{
		// Configure default option values if/as necessary.
	}
	// Apply provided options.
	for _, option := range options {
		option(&opts)
	}

	// Now the rest of the function can use those options along with the
	// mandatory arguments.

	// ...
}

type SomethingOptions struct {
	OptionalFlag  bool
	SomethingElse string
}

Invocation looks something like this:

err := DoSomething(42, "nice", func(options *SomethingOptions) {
	options.OptionalFlag = true
})

A very common extension of this is for the package that defines the function to also provide some option function constructors, usually named with the prefix “With”, that take the value you want to modify as an argument, and return an instance of an option function which closes over that value in order to bind it to the options on invocation:

func WithSomethingElse(somethingElse string) func(*SomethingOptions) {
	return func(options *SomethingOptions) {
		options.SomethingElse = somethingElse
	}
}

// Then...

err := DoSomething(420, "blaze", WithSomethingElse("it"))

Done to excess, this starts to look a lot like Objective-C. But the pattern carries a number of really nice benefits. Most meaningful for me: it allows a real distinction between passing a zero value as an option and not passing the option at all (in comparison to taking options as regular arguments and documenting that zero values are ignored), and it is strongly typed (in comparison to taking an ...any argument and type-switching over each element to figure out what to do with it).
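To make that zero-value distinction concrete, here’s a minimal, self-contained sketch (the DoSomething and SomethingOptions shapes here are simplified from the snippets above, not taken from any real API):

```go
package main

import "fmt"

// Simplified from the snippets above.
type SomethingOptions struct {
	SomethingElse string
}

func DoSomething(options ...func(*SomethingOptions)) string {
	// Configure a non-zero default so the distinction is visible.
	opts := SomethingOptions{SomethingElse: "default"}
	for _, option := range options {
		option(&opts)
	}
	return opts.SomethingElse
}

func main() {
	// No option passed: the default survives.
	fmt.Println(DoSomething()) // default

	// The zero value passed explicitly: the caller's intent is unambiguous.
	fmt.Println(DoSomething(func(o *SomethingOptions) { o.SomethingElse = "" })) // (empty line)
}
```

With plain arguments, a caller passing "" would be indistinguishable from a caller passing nothing; here the two cases behave differently.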

The twist

Alright, so what did I find in the AWS SDK for Go that’s got me so intrigued? Well, instead of defining the option function as above, it was done something like this:

func WithSomethingElse(somethingElse string) func(*SomethingOptions) {
	return withSomethingElse(somethingElse).options
}

type withSomethingElse string

func (option withSomethingElse) options(options *SomethingOptions) {
	options.SomethingElse = string(option)
}

Well, that’s certainly a lot more verbose. What is the value of doing it this way? And actually, before we cover that, what even is this way?

Fundamentally, the exported function is still doing the same thing: taking the optional value, and returning a function that will bind that value to the function’s options struct. The behaviour has not changed, just how we express it.

So: what’s this withSomethingElse(somethingElse).options business? Well, withSomethingElse(somethingElse) is a type conversion. Many other languages use C-style cast/conversion syntax with parentheses around the type name (e.g., (withSomethingElse) somethingElse); Go does it the other way around, so type conversions look like function calls. (Type conversions are not to be confused with type assertions, which, along with type switches, are used to get from an interface type to the implementing type.)

If that’s a type conversion, then what is the withSomethingElse type? In order to accept conversions from a string, its underlying type must be string. And indeed, that’s what we have: type withSomethingElse string. But still, why do this?

Because defined types can have associated methods. By defining our own string type, we can add a method to it: options. This method does exactly the same thing the anonymous function does in the original implementation, except that rather than defining and returning a new anonymous function every time, we can simply take a “method value” – a function bound to a specific receiver. Instead of an anonymous function closing over the option value, the method carries the value as its receiver.

That brings us back to withSomethingElse(somethingElse).options; specifically, the .options bit at the end. Now that we know that options is the name of a method on withSomethingElse, we can see that the whole expression converts the string to set up the receiver, then returns the resulting method value.
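As a quick illustration of what a method value is (names reused from the example above; this is a sketch, not the SDK’s actual code):

```go
package main

import "fmt"

type SomethingOptions struct {
	SomethingElse string
}

type withSomethingElse string

func (option withSomethingElse) options(options *SomethingOptions) {
	options.SomethingElse = string(option)
}

func main() {
	// withSomethingElse("it") is a type conversion; .options then yields a
	// "method value": a func(*SomethingOptions) bound to that receiver.
	fn := withSomethingElse("it").options

	var opts SomethingOptions
	fn(&opts)
	fmt.Println(opts.SomethingElse) // it
}
```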

The impact

Is this a good way to do functional options, then? Does this really have an impact, beyond making it a bit harder to read?

To find out, I wrote some benchmarks to test this out:

package main

import (
	"testing"
)

func DoSomething(options ...SomethingOption) (string, int) {
	opts := SomethingOptions{"abc"}
	for _, option := range options {
		option(&opts)
	}

	return opts.Value, len(opts.Value)
}

type SomethingOption func(*SomethingOptions)

type SomethingOptions struct {
	Value string
}

func WithNoop(options *SomethingOptions) {}

func WithValueByAnonymousFunction(value string) SomethingOption {
	return func(options *SomethingOptions) {
		options.Value = value
	}
}

func WithValueByMethod(value string) SomethingOption {
	return withValueByMethod(value).options
}

type withValueByMethod string

func (option withValueByMethod) options(options *SomethingOptions) {
	options.Value = string(option)
}

func BenchmarkDefault(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_, _ = DoSomething()
	}
}

func BenchmarkWithNoop(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_, _ = DoSomething(WithNoop)
	}
}

func BenchmarkWithValueByAnonymousFunction(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_, _ = DoSomething(WithValueByAnonymousFunction("def"))
	}
}

func BenchmarkWithValueByMethod(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_, _ = DoSomething(WithValueByMethod("ghi"))
	}
}

Here, the DoSomething function does some minimal work: returning the string and its length. There are four benchmark functions:

  • BenchmarkDefault calls the function with no argument. This lets us establish a baseline measurement of how long the function “should” take to run.
  • BenchmarkWithNoop calls the function with a static option function that does nothing. This lets us understand the overhead of calling with one option (the time required to iterate over a slice of one option function, and invoke it).
  • BenchmarkWithValueByAnonymousFunction calls it with an anonymous function option (the “classic” implementation).
  • BenchmarkWithValueByMethod calls it with a method option (the new way).

Results under Go 1.20.2:

$ ~/opt/go1.20.2.linux-amd64/bin/go test -bench=. -benchmem
goos: linux
goarch: amd64
pkg: functional-options
cpu: AMD Ryzen 7 3700X 8-Core Processor
BenchmarkDefault-8                              38808385                30.94 ns/op           16 B/op          1 allocs/op
BenchmarkWithNoop-8                             35837532                32.55 ns/op           16 B/op          1 allocs/op
BenchmarkWithValueByAnonymousFunction-8         34458160                33.90 ns/op           16 B/op          1 allocs/op
BenchmarkWithValueByMethod-8                    33709026                34.38 ns/op           16 B/op          1 allocs/op
PASS
ok      functional-options      4.844s

The actual times moved around a bit over successive runs, but ordering was consistent.

From this, we can see that there is slightly less overhead in returning an anonymous function than in the type conversion and method value required to bind the value to the method. The more verbose approach taken by the AWS SDK is not only harder to read; it now carries a (small) performance penalty, not a benefit.

Was this always the case? No! I reran the benchmarks with the latest point release of each Go version back to 1.10.

Plotted, we can see some clear movement:

[Figure: A line chart plotting the ns/op of each of the four benchmarks at each Go release from 1.10 through 1.20. The default and no-op times closely track each other all along (25-30ns, trending upward with later Go versions). The by-anonymous-function and by-method times closely track each other from Go 1.10 through Go 1.15 (55-60ns). The by-method cost plummets to ~30ns at Go 1.16; the by-anonymous-function cost drops similarly with Go 1.17. From there onward, both track each other again (30-35ns).]

Throughout time, the anonymous function and method approaches have always been similar, but there is a clear change in Go 1.16, and another in Go 1.17. Sure enough, the Go 1.16 release notes point out a compiler improvement to inlining of functions which return methods. If we dive into the disassembly, we can see that the code generated for the invocation of WithValueByMethod is slightly longer under 1.16 than 1.15 (look for the region in pale yellow near the top of the output, corresponding to line 4 on the left panel – lines 14–26 in the 1.15 output, and lines 15–31 in the 1.16 output). More meaningfully, the CALL to WithValueByMethod has disappeared, meaning that it has been inlined.

But there’s a similar improvement for the anonymous function implementation in Go 1.17. Again, the Go 1.17 release notes point out another compiler improvement to inlining, this time for “functions containing closures” – exactly what we’ve got here. And once again, the disassembly corroborates that story, with a similar increase in the generated code for the invocation of WithValueByAnonymousFunction, and the disappearance of the corresponding CALL instruction.

The takeaway

First off, let’s put this in perspective: even in the most extreme case – ~58ns vs. ~30ns – the difference is so insignificant that you should never need to care about this. Even in a hot loop, the “real work” of your function will far exceed the cost of passing options the “wrong” way.

I suspect that the AWS SDK development team made this design decision around the time of the release of Go 1.16, or at least when it was still important to support Go 1.16. I think, a year and a half on, most people are probably running Go 1.17 or higher. And if they aren’t, then, again, the performance impact is so minimal you should absolutely choose based on syntactic preference and developer ergonomics.

Do what makes you happy. And for me, that is not faffing about with obscure type definitions to gain 30ns on operations not on the hot path when running under outdated language versions.

Building Hasura v2.0.8

This is a quick update to/errata list for Building the Hasura GraphQL Engine, as I recently built GraphQL Engine v2.0.8 locally and found a couple of things have changed:

Cabal project structure change

cabal.project.freeze now lives at the top level of the Git repository, not inside server/. You’ll need to grep it for GHC/Cabal versions instead of server/cabal.project.freeze (which no longer exists).

MySQL client library build shenanigans

Something having to do with the mysql dependency has changed, and the cabal v2-build process complains that it can’t find my Homebrew-installed OpenSSL library. Thanks to haskell/cabal#2997, the workaround requires modifying the cabal.project file in the top level of the project:

diff --git a/cabal.project b/cabal.project
index c7f79ed63..61f39635c 100644
--- a/cabal.project
+++ b/cabal.project
@@ -59,6 +59,10 @@ source-repository-package
   location: https://github.com/fpco/odbc.git
   tag: 7c0cea45d0b779419eb16177407c4ee9e7ba4c6f

+package mysql
+  extra-include-dirs: /usr/local/opt/openssl/include
+  extra-lib-dirs: /usr/local/opt/openssl/lib
+
 package odbc
   ghc-options: -Wwarn
   -- Our CI compiles with -Werror, which is also applied to those packages

Thanks to this file change, the local repository is now in a dirty state, and the resulting binary will tell the browser to load console assets from https://graphql-engine-cdn.hasura.io/console/assets/channel/dirty/v2.0/main.js.gz instead of https://graphql-engine-cdn.hasura.io/console/assets/channel/stable/v2.0/main.js.gz. This, of course, 404s. In order to make the build think the repository state is clean, we need to commit the changes and move the version tag to the new commit. And just to make things ever so slightly more complicated, the repository contains a pre-commit hook, so we need to commit with --no-verify so that doesn’t run:

$ # Make the requisite changes to cabal.project:
$ $EDITOR cabal.project
$ git commit --no-verify --message "Bodging in extra MySQL dependency paths."
$ git tag --force v2.0.8

Good to go

Then you can cd server && cabal v2-update && cabal v2-build as before.

Put guards on your destructive operations

tl;dr: Sanity-checking your input should go beyond making sure the input is acceptable. Try to predict contextually-likely mistakes your user will make (even – especially! – when your user is you) and prevent them, particularly when your operation is destructive.

I’m back on a macOS machine more of the time again, which means I’m using Music.app (née iTunes) more again. It’s not my favourite, but I don’t hate it. (Actually, I normally use Plexamp, but my Plex server is offline at the moment – long story.) My biggest frustration with Music.app has always been the lack of native FLAC support, which prevents me from simply network-mounting my music library from my file server and playing it back directly. I used to use Max to batch-convert FLAC to MP4-ensconced AAC (.m4a), because CoreAudio has about the best AAC encoder going, and afconvert is too inscrutable to use directly. But these days, FFmpeg’s native encoder is good enough, so I wrote a simple shell script to:

  • find all FLAC files under the current directory and – in parallel – use FFmpeg to convert them all to AAC-in-MP4;
  • find all M4A files under the current directory and use AtomicParsley to populate their album art from an artwork file in the same directory;
  • delete all non-M4A files under the current directory;
  • add everything left under the current directory into my Music.app library (by moving it into ~/Music/Music/Media.localized/Automatically Add to Music.localized/).

If those last two steps haven’t already set off alarm bells in your head, they should’ve! That’s some pretty destructive stuff, particularly because:

  • I’m applying it to the current directory, not some fixed “ingest” directory, and
  • this script probably lives somewhere in my $PATH, so I can invoke it (maybe accidentally) from anywhere.

In fact, I have accidentally invoked it, from my Downloads directory (which contains nothing of consequence), and from my home directory (which contains approximately everything of value stored locally on my machine). But: I didn’t lose any data in either instance. Why? Because I thought ahead, and added two preconditions to the conversion script:

  1. No files are permitted in the immediate working directory. When I convert music, I do so album by album, not collections of loose files, and those albums are always in their own directories. E.g., I’m always working from a structure like this:
    .
    ./Casualties of Cool/Casualties of Cool (2014)
    ./Casualties of Cool/Casualties of Cool (2014)/CD 1
    ./Casualties of Cool/Casualties of Cool (2014)/CD 1/01 - Daddy.flac
    ./Casualties of Cool/Casualties of Cool (2014)/CD 1/02 - ...
    So a stray file at the top level of the working directory is distinctly abnormal, and might mean that I’m in the wrong place.
  2. The number of FLAC files (files with names ending in .flac) contained within the working directory must exceed the number of all other files.

These checks take more space in the script than the actual conversion work (15 lines vs. 4). And they’re totally worth it.

Building the Hasura GraphQL Engine

2021-09-01: I’ve written a follow-up post covering changes introduced in Hasura v2.0.8.

$WORKPLACE is using Hasura for a project. The core server component of Hasura – the “GraphQL Engine” – is distributed only as a Docker container. This works well enough on my Linux machine, but I have a strong aversion to running Docker on non-Linux machines. I find the idea that containers can still be considered “lightweight” when they’re running inside a full-fat VM a bit laughable. I prefer to run things on “bare metal” in my development environment where I can. And while Hasura don’t distribute first-party native binaries, there’s hypothetically nothing stopping us from building our own; it is, after all, open source (Apache 2.0). So let’s do that.

The Engine is written in Haskell, so as a prerequisite, install ghcup. If it’s been added to your PATH correctly, you should be able to run ghcup list and get a list of available Haskell compiler (GHC) and build/package manager (Cabal) versions.

Check out the GraphQL Engine Git repository:

$ git clone https://github.com/hasura/graphql-engine.git
# Make sure we're building a specific release version, not just the master branch.
$ cd graphql-engine
$ git checkout v1.3.3
# The server-side source code is all in the server/ directory.
$ cd server

The Haskell build manager, Cabal, is almost capable of building the project without intervention. The fly in the ointment is that Hasura seems to be quite picky about the GHC version it’s built with, and Cabal isn’t high enough up the dependency food chain to pick which compiler version gets used. So instead, we’ll need to pull desired GHC and Cabal versions out of the freeze file, and install/set those as the defaults with ghcup:

# Install the GHC version corresponding to the required Haskell language base version.
$ grep 'any.base ==' cabal.project.freeze
any.base ==4.14.0.0,
$ ghcup install ghc base-4.14.0.0
[ Info ] downloading: https://downloads.haskell.org/~ghc/8.10.1/ghc-8.10.1-x86_64-apple-darwin.tar.xz
...
[ Info ] GHC installation successful
$ ghcup set ghc base-4.14.0.0
[ Info ] GHC 8.10.1 successfully set as default version
$ grep 'any.Cabal ==' cabal.project.freeze
constraints: any.Cabal ==3.2.0.0,
$ ghcup install cabal 3.2.0.0
[ Info ] downloading: https://downloads.haskell.org/~cabal/cabal-install-3.2.0.0/cabal-install-3.2.0.0-x86_64-apple-darwin17.7.0.tar.xz
...
[ Info ] Cabal installation successful
$ ghcup set cabal 3.2.0.0
[ Info ] Cabal 3.2.0.0 successfully set as default version

Before you go off and run make, you’ll also need a couple of libraries installed: unixODBC and libpq are the important ones. On a Mac, you can brew install unixodbc libpq.

Then you can kick Cabal into action:

$ cabal v2-update
(git junk happens)
Downloading the latest package list from hackage.haskell.org
To revert to previous state run:
    cabal v2-update 'hackage.haskell.org,...'
$ cabal v2-build
(git junk happens)
Resolving dependencies...
Build profile: -w ghc-8.10.1 -O1
In order, the following will be built (use -v for more details):
(looong build process happens)
Building executable 'graphql-engine' for graphql-engine-1.0.0..
Linking /path/to/graphql-engine/server/dist-newstyle/build/x86_64-osx/ghc-8.10.1/graphql-engine-1.0.0/x/graphql-engine/opt/build/graphql-engine/graphql-engine ...

And that’s it. The long path given by the last line (“Linking…”) is the final executable. It’ll be about 64MB. You can run it as-is and it’ll complain about missing arguments. The documented configuration arguments and environment variables are really just passed through to the binary running in the container, so they’ll work just fine with the binary running out of the container, too. For example, to connect to the local database testdb as the user testuser, and enable the console:

/path/to/graphql-engine \
    --database-url postgres://testuser@localhost/testdb \
    serve \
    --enable-console

Then point a browser at http://localhost:8080/console.

Note that the graphql-engine executable is dynamically linked against a number of other libraries:

$ otool -L /path/to/graphql-engine
/path/to/graphql-engine:
        /usr/local/opt/postgresql/lib/libpq.5.dylib (compatibility version 5.0.0, current version 5.13.0)
        /usr/lib/libz.1.dylib (compatibility version 1.0.0, current version 1.2.11)
        /usr/lib/libiconv.2.dylib (compatibility version 7.0.0, current version 7.0.0)
        /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1252.250.1)
        /usr/lib/libcharset.1.dylib (compatibility version 2.0.0, current version 2.0.0)

Not very portable. To physically relocate these libraries and modify the linking of the graphql-engine executable for easier distribution, you can either do this yourself with install_name_tool, or use something like locallink to do it for you.

Limitations of the <meta> tag for specifying character encoding

A question came up on reddit: why is my webpage being interpreted with the incorrect character encoding? The question (which has since been removed, or else I’d link to it) involved some specifics about how the page was being served, and, paraphrased, the answer was that PHP-generated pages were served with an HTTP Content-Type header which included encoding information, and static HTML pages weren’t.

But the markup included a <meta http-equiv="Content-Type"> tag. Shouldn’t the encoding have been interpreted correctly regardless of the Content-Type header, then? I threw off a quick, cargo-cult-tinged remark about being sure to place the <meta> tag specifying the character encoding before the <title> tag, and someone else said they thought it was interpreted anywhere in the first 128 bytes of the document.

Rather than continue to perpetuate hearsay and questionably-shiny pearls of wisdom, I’m going to try to nail down some factual observations about the behaviour of character-encoding-bearing <meta> tags. I want to know how the tag interacts with real HTTP headers, how it behaves when it’s placed at different offsets in the document (before rendered content? After rendered content? A long way after the start of the document?), and whether different browsers handle it differently.

Continue reading

Silly micro-optimization: branching vs. dynamic dispatch

Once upon a time, I wrote a PHP script to export some data as CSV. If memory serves, it looked something like this:

<?php

class ExportController
{
    private $fooRepo;

    public function __construct($fooRepo)
    {
        $this->fooRepo = $fooRepo;
    }

    public function get()
    {
        $data = $this->fooRepo->getRecords();
        echo "Time,Column 1,Column 2,Column 3\n";
        foreach ($data as $record)
        {
            printf("%d,%s,%s,%s\n",
                   $record->time, $record->a, $record->b, $record->c);
        }
    }
}
Continue reading

Stupid simple command line wall timer

I was making dinner and needed to time something. My phone already had another timer going, and running two timers simultaneously is clearly beyond the capacity of a modern smartphone, so I reached for my laptop instead. And as I Googled “macos desktop timer” (because having any timer at all built into macOS is clearly an extravagance, and not one Apple has seen fit to bequeath us with), my Wi-Fi dropped. So I grabbed a terminal, and ran:

Continue reading

Running amd64 binaries on an i386 Linux system

Normally the qemu-user-static package makes it straightforward to run binaries from another platform, but I ran into a roadblock running amd64 binaries on an i386 machine. It looks like the binfmt spec for amd64 is excluded from recent i386 releases of qemu-user-static, presumably to save people from themselves after they manage to install the package on a mismatched architecture. Frustrating.

Specifically, this lack of support is accomplished by excluding the file /var/lib/binfmts/qemu-x86_64 from the installation. The actual interpreter, /usr/bin/qemu-x86_64-static, is exactly where you’d expect it to be. The file listing for the package didn’t include anything destined for /var/lib/binfmts, so I figured they must be unpacked and put there by an installation script. So I pulled the package apart:

$ apt-get download qemu-user-static
$ dpkg-deb -R qemu-user-static*.deb qemu-user-static
$ cd qemu-user-static/DEBIAN/
$ cat postinst

The script contains a variety of magic-number and mask definitions, and registers them via update-binfmts only as appropriate for the host platform.

Great, so we should be able to sneakily perform our own update:

$ sudo update-binfmts \
    --package qemu-user-static \
    --install qemu-x86_64 /usr/bin/qemu-x86_64-static \
    --magic '\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x3e\x00' \
    --mask '\xff\xff\xff\xff\xff\xfe\xfe\xfc\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff' \
    --offset 0 \
    --credential yes

Bam.

$ sudo cp /usr/bin/qemu-x86_64-static build-amd64/usr/bin
$ sudo chroot build-amd64 /debootstrap/debootstrap --second-stage
qemu: uncaught target signal 11 (Segmentation fault) - core dumped

Sigh.

Screw it, I’m reinstalling this machine with a 64-bit OS.

DIY Audio Snake, Part 1

(Alternate title: “An Exercise in Masochism”.)

At the beginning of summer, having graduated and left all the audio tech-related roles I had held in Wolfville, I was really missing monkeying around with mics and mixers. Fueled by that hole and thanks to Layman’s continual prodding encouragement, I ended up pitching in on audio tasks at myChurch. To make a long story shorter, the organizational hierarchy of the church places the Audio, Video, and Lighting teams under the broad Production (“Prod”) umbrella. As such, equipment from all three Prod teams is usually kept in the same general area in the venue. More than that, to the untrained eye not paying attention to their contents, the road cases containing video, audio, and power components (largely cabling) all look the same (assuming the label on the top of the case goes unread). Usually this doesn’t matter much, because anyone on Prod learns to distinguish them pretty quickly, but when someone from another team is lobbing cables around, which bin those cables end up in can be a bit of a tossup. This leads to XLR and HDMI being found every which way when things aren’t struck properly.

In aggressive reaction to an overabundance of HDMI being found in all the wrong places, I threatened to Brojo that I’d build an XLR-to-HDMI adapter and use his video cables to run audio the next time the cables got mixed up.

We had a good laugh. And then I started thinking about how such an adapter might actually work.

HDMI is 19 pins. (Some cheaper cables supposedly don’t include pin 14, which isn’t used by the spec, but I’m getting ahead of myself.) Divided by three (hot/cold/ground), that rounds down to six – six channels of balanced audio. The HDMI standard even defines four grounded, two-conductor channels (pins 1-3, 4-6, 7-9, and 10-12), and one grounded, three-conductor channel (pins 13 and 15-17). Well, that’s far too convenient to not abuse, and suggests that the cable composition may even be amenable to carrying at least four channels (ignoring, for the moment, the absurdly thin wire used in HDMI cables). And even though they’re maybe not strictly suitable for it, I can try to run another channel over the leftover pins (presumably 16, 18, and 19) just for kicks.

So that’s the carrier. Without actually trying it, I’ve no idea if microphone-level audio signals will actually make it through the cable – both are low-voltage, but I imagine audio carries far less current (and is therefore far more susceptible to signal loss) than digital video. But working on the assumption that they will, what do we do with the connectors? HDMI is tiny. I don’t solder tiny. I try not to even work with tiny. My fingers aren’t tiny or all that steady, so at a point it becomes something of a physical impossibility. So that means getting some breakout boards – small circuit boards with easily-accessible solder pads connected to a mounted connector (in this case, female HDMI).

And then we need to think about what connects to the breakout board. I could go directly to microphone cable and then to the XLR connectors. But how do you protect that? Just by being used whatever cable comes out of the XLR is going to have tension applied to it, and I don’t want to place tension directly onto the breakout board: my soldering is not that strong, and even if it were, you don’t want the connectors taking any more stress than can’t be avoided. Plus, the breakout is going to have to be mounted to… something so it can be handled easily. And what if this doesn’t work for audio, but I still want to reuse the idea of abusing HDMI as a carrier for analog signals? Maybe it won’t behave well for audio, but I bet you $5 I can run a whole bunch of low-baud serial lines over HDMI. (For what, I don’t know, but it’s a use-case!) Let’s try to avoid connecting the HDMI breakout directly to the XLR, then.

The most commonly-used connector I can think of with around the same pin count as HDMI but which uses a larger form-factor is DB-25, commonly used for parallel port communication. It’s six wasted pins, but DB-25 connectors have sufficiently wide pin spacing to be easy to work with directly, and connectors are plentiful and cheap, both to buy new and to pillage from old equipment. Beauty. It’s more work, because I’ll have to connect each HDMI breakout to a DB-25 connector and then build DB-25-to-XLR pigtails, but it solves both the stress and reusability issues.

I don’t want the HDMI breakouts and DB-25 connectors floating in mid-air, so I’ll have to house them in project boxes. An enclosure that opens on both ends (a rectangular tube with caps on the ends, if you will) would probably be easier to work with than a container with a lid, as I can detach the endcaps to cut HDMI and DB-25-shaped holes in them. I’ll have to solder the wiring harness between the HDMI breakout and DB-25 connector, then thread one connector through the enclosure, then attach the endcaps to the connectors, then fasten the endcaps to the enclosure. As long as I make the wiring harness long enough, it shouldn’t be a problem.

What genders do I want to use for the DB-25s? On one hand, the HDMI port on both breakout boxes is going to have to be female, as HDMI is male-to-male. But the cable is still technically going to be used directionally, so I think I’ll put a male DB-25 on one box and a female connector on the other. Then the pigtail with the female XLR connectors will have a male DB-25 connector and vice-versa, so the XLR and box genders match.

So, here we are: two HDMI-to-DB-25 breakout boxes, and two DB-25-to-XLR pigtails. Given that I’m cheap and not in a huge rush, I hit up eBay for the required doodads and came up with this shopping list:

Breakout Boxes

Item Quantity Total Price
HDMI breakout board 2 $24.57
Female DB-25 connector 10 $5.29
Male DB-25 connector 10 $5.29
Connector anchor 25 $5.29
Connector anchor nut 25 $5.29
45x45x18.5mm project box 2 $11.80
22AWG wire 10m $2.96

Pigtails

Item Quantity Total Price
XLR connectors, male/female pair 10 $20.99
Microphone cable 10ft $11.92
Female DB-25 connector with hood 5 $5.29
Male DB-25 connector with hood 5 $5.29

(Huh. That is actually starting to get really expensive. Whoops.)

I accidentally ordered too little wire for the breakout boxes (5m instead of 10m) so I’m waiting to hear back from the seller on whether I can cancel or extend that order. And, I’m dubious about the quality of the microphone cable: I ordered thin stuff to more easily fit six cables through the DB-25 hoods, but I’m not sure how easily I’m going to be able to solder it as it’s a copper-tin alloy. Fingers crossed.

As for the high quantities of DB-25 couplers and connectors, I figure having some spare around for future projects isn’t a bad thing. I also wanted to get as much as I could from the same seller in order to get a reduced shipping rate.

I’m still not totally sure how the HDMI breakouts will mount to the project box endcaps. Bolts, sure, but how long? I don’t actually know how deep the HDMI connectors on the breakout boards are and I’ll need spacers. Have to wait until the bits and pieces are in my hands, then I’ll probably get something from Home Hardware.

Speaking of dimensions, I’m unsure that the breakout boards will fit inside the project boxes. The boxes are supposedly 12.9mm tall internally. The measurements for the breakout boards are obviously wrong as they’re listed as 25.4×25.4mm when they’re visibly wider than they are tall, but I suspect 25.4mm is the accurate height and they’re close to twice that in width. I think I might have to get bigger boxes. That’s a huge and embarrassing oversight, especially as I picked that box specifically because it looked like it was probably wide enough to fit the connector. Oops. Once the breakout boards show up, I’ll get a correct measurement. And then order new boxes. Maybe something like this – that would be nice if it would work because it’s not too deep, and I don’t need any more depth than I already have.

Stuff should start showing up in about two weeks. Depending on the order in which things arrive, I can start working on the box and the pigtails independently, although both of those rely on the DB-25 connectors showing up. Fingers crossed that it all shows up before Christmas.

The Hotel Kit

Over the past four years, I’ve had numerous opportunities through the programming competition team (and one with Axe Radio) to go abroad for a night or two to attend competitions or conferences or what have you. Beyond the event proper, these sorts of things are as good an excuse as any for a bit of fun on nights after the scheduled activities for the day are finished. In order to facilitate such shenanigans, I’ve taken to dragging a collection of odds and ends along with me to whatever hotel I end up in.

A Spare Laptop

This isn’t about a backup plan in case something happens to the good one. This isn’t to lend to someone needing a quick Internet fix (although it’s handy for such purposes). This is some old beater you can connect to the TV as you’re unpacking and leave there until you leave. It’s nice to just leave it playing music as you’re booting around the room, and it’s a far superior option to passing a phone around the room when someone wants to show a YouTube video. And, by using a spare, not your daily driver, you don’t ever have to move it or unplug it until you’re ready to check out. It should be something you don’t mind losing if something gets spilled on it or it takes flight; old netbooks make great candidates. It ought to have a decent Wi-Fi adapter because hotel Wi-Fi is notoriously flaky, and both VGA and HDMI outputs because you never know what will be available on the TV in the room.

Cables

All of them. VGA and HDMI for reasons as noted above. Ethernet because you might need it at the conference. MicroUSB (with AC adapters!) because everything needs to be charged. Don’t forget laptop adapters. Don’t forget a male-to-male 3.5mm audio cable for the auxiliary input on the rental car for the drive down and to run laptop audio into the TV if you do have to use the VGA input.

Movies

If you’re there for more than one night (say, two nights), you’re almost bound to be bored on one of them, and with hotel Wi-Fi being what it is, video streaming is out (although music is usually OK). Make sure you stuff at least a couple fairly polar-opposite options onto the laptop before heading out the door.

So that’s about it. None of it is all that unobvious, but it’s easy enough to forget as you’re haphazardly throwing things into a bag the night before.