More Fun with Docker Tooling

I continue to enjoy using Docker to encapsulate developer tooling so that it doesn’t pollute my laptop with varying versions of software I don’t use regularly. (See Jonathan Bergknoff’s Run More Stuff in Docker and Andrew Welch’s Docker for all the things for further justification.)

In addition to using Andrew Welch’s vitejs-docker-dev project for JavaScript development, I converted a couple of my personal projects to use a Dockerfile. I have submitted PRs to the upstream repos to incorporate the Dockerfiles – we’ll see if they are accepted.

  • RPM Test – responsiveness test tool for network latency
  • WireGuard Vanity Address – creates a WireGuard public key with an easily recognizable prefix.
  • TrackR-Web-Bluetooth-API – reverse engineering the API for the TrackR Pixel gizmo that helps you find your lost keys
  • Unifi-Controller – my instructions for a pre-built Docker container that runs on a Raspberry Pi
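
For each of these, the Docker pattern is the same: build an image once from the project’s Dockerfile, then run the tool in a throwaway container so nothing gets installed on the laptop itself. A minimal sketch – the image name is a placeholder, and the exact invocations differ per project:

# from a checkout of a project that contains a Dockerfile
$ docker build -t some-tool .     # one-time: bake the tool and its dependencies into an image
$ docker run --rm -it some-tool   # run it in a throwaway container; --rm cleans up afterward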

TL;DR Using a Docker container for these tools doesn’t really get in the way. Startup may be slightly slower (adding a second or so), but otherwise these tools run plenty fast. Plus, Docker eliminates a whole raft of hassles getting the software installed, and maintaining it across OS upgrades (say, on my laptop). I’m content.

Migrating Snowpack to Vite (and Docker!)

About eighteen months ago I migrated a small JavaScript app to use Snowpack development tooling. (This was mostly for fun; I already had it working with Webpack.) Snowpack promised simple dev tooling with nearly instant updates, using the power of ES Modules. It worked pretty well.

About six months ago, the team that developed Snowpack realized that their efforts paralleled those of the Vite.js tooling. Vite.js also uses ES Modules, and provides a mature code base and strong community support. Since the Snowpack team wanted to work on other projects (Astro), they switched their underlying tooling to Vite.js.

So… I decided to see what it would take to migrate my Snowpack project to use Vite.js. Everything I had read said that it was easy. Here’s a field report from what was required.

A Side Project – Using Docker: Since this JavaScript app was a side project, I have also been using it as a learning environment. I had read Jonathan Bergknoff’s Run More Stuff in Docker, which made a lot of sense to me. (The magic of using Docker is that once you’ve created the instance, all the tool and dependency versions remain the same. It’s easy then to hand the Dockerfile to a colleague who can build an identical development environment in a few minutes. It also avoids cluttering my daily-driver laptop with multiple versions of Node, npm, Go, Python, Rust, and any number of little-used tools – they’re all encapsulated in the Docker container.)

So I decided to investigate whether using Docker to create the JavaScript tooling would make my life better. Googling around led to Andrew Welch’s vitejs-docker-dev project. It builds a Docker instance with full development tooling (Vite.js, pnpm, hot reloading, etc.) that watches the source files in your local directory for updates. You develop the code using your favorite editing tools, and changes are immediately reflected in your browser/test environment. This is very slick. The vitejs-docker-dev repo has good documentation, with plenty of background on how the Docker machine gets built and how to use it.

Update: Andrew Welch (who created the vitejs-docker-dev project) sent me a link to his article about using “Docker for all the things.” It’s a good adjunct to the original “Run More Stuff in Docker” post.

Back to the main story – here are the steps I followed to get my Snowpack project on the air with Vite.js:

  1. Create the Docker instance. Clone the vitejs-docker-dev repo, then run make docker to put the set of development tools into a new Docker instance. This one-time step takes a few minutes. You may see several warnings (as described in a GitHub issue), but these don’t seem to be important.
  2. Check that the default Vite.js app builds in Docker. Run make vite-pnpm run dev in one terminal window, then run make app-pnpm run dev in a second window (waiting between the commands as described in the README). Click the http://localhost:3000 URL to see if the test Vite app starts up. Edit the index.html file and check that the change is reflected in the web browser. (It should be…) The full command sequence is sketched just after this list.
  3. Customize the /app directory for your app. Copy all your app’s files into the app directory. (I renamed the original to app-old and created a new app directory.) This required a bit of jiggering to adjust between Snowpack and Vite.js, such as:
    • My index.html file for Snowpack was in the public directory; for Vite.js, I moved it to the top-level app directory
    • Copy the package.json and other important directories to the app directory
    • Adjust the paths for index.html: Snowpack bundles source files from /src into the /dist directory; Vite.js processes the files directly from /src
    • I can’t remember whether I made other tweaks; it certainly wasn’t onerous.
  4. Remove references to the Snowpack modules from the package.json file. Optionally, you could take the opportunity to update the versions of dependencies.
  5. Rebuild the Docker instance (#1 above) and restart the development process (#2 above). Click the link to http://localhost:3000 and your app should begin to run. You’ll probably need to make adjustments, but you’ll be substantially on the air.
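
Pulling steps 1 and 2 together, the startup boils down to a handful of commands. This is just the steps above assembled into one place – the vitejs-docker-dev README remains the authority:

# from inside a clone of the vitejs-docker-dev repo
$ make docker                # one-time: build the Docker image (takes a few minutes)
$ make vite-pnpm run dev     # terminal window 1: start Vite inside the container
$ make app-pnpm run dev      # terminal window 2, after waiting as the README describes
# then open http://localhost:3000 in the browser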

TL;DR The process of converting to Vite.js wasn’t very hard (at least, not on my small project). It required a little farbling around, but nothing terrible. The jury’s still out on whether vitejs-docker-dev will make my life better – but I think it just might.

Using Apple’s RPM tool

macOS Monterey ships with a command-line tool, networkQuality, that measures the responsiveness of your network connection. It saturates the network with traffic for 20 seconds, then measures the rate of short transactions to compute “Responses Per Minute.” Big numbers (above 2000) mean your network remains responsive even when it is heavily loaded. Small numbers (under 800 or so) mean your network isn’t responsive under load – a problem potentially caused by bufferbloat.

There’s an iOS version as well.

Stuart Cheshire and Vidhi Goel talked about the RPM tool at WWDC 2021. Apple also published an Internet-Draft that describes the RPM technique.

Here’s a sample run from my Mac. The RPM tool displays my download and upload speeds (nominally 25 Mbps) and the number of simultaneous flows required to saturate the link (12, in this case). It shows the responsiveness as 1995 round-trips per minute. That’s really good: the average latency – even during heavy load – only increases a bit above the baseline (idle) 21 msec.

% /usr/bin/networkQuality -v
==== SUMMARY ====
Upload capacity: 22.657 Mbps
Download capacity: 23.755 Mbps
Upload flows: 12
Download flows: 12
Responsiveness: High (1995 RPM)
Base RTT: 21
Start: 11/7/21, 7:18:37 AM
End: 11/7/21, 7:18:47 AM
OS Version: Version 12.1 (Build 21C5021h)

Here’s a video that shows the tool in operation.

NameD•Tective — mDNS over AppleTalk

[From the Archives of Amusing Technology…] Back in the ’90s, Dave Fisher and I created NameD•tective, a Macintosh control panel that gave any Mac on the Dartmouth network a static DNS name.

It used the Name Binding Protocol (NBP) to let someone create a DNS name based on their name plus their AppleTalk zone. The DNS name had the form: person-name.AppleTalk-zone… The NameD•tective server looked up the NBP name and returned the computer’s current IP address.

This screen shot shows an example: a server in my office that distributed information to other developers in the Kiewit building. NameD•tective probably didn’t get broad use at Dartmouth, but it was a neat demonstration project. It led Dave and me to develop the MacDNS software that was shipped as part of Apple’s Internet Connection Kit. Here’s the page from Dartmouth’s website, archived by the Wayback Machine.

Today, computer naming is much simpler. Modern operating systems let a computer advertise an mDNS (multicast DNS) name that can be looked up directly to find that host on the local network.
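
On a Mac today, for example, you can reach a host by its mDNS name directly, or query the name with the built-in dns-sd tool. (The hostname below is a made-up placeholder.)

$ ping my-macbook.local            # “.local” names are resolved via multicast DNS
$ dns-sd -G v4 my-macbook.local    # ask mDNS for the host's IPv4 address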

Comcast modems decrease bufferbloat

Last month, Comcast released a paper, Improving Latency with Active Queue Management (AQM) During COVID-19, that shows that their PIE AQM dramatically decreases lag/latency – by a factor of 10, from 250 msec down to 15-30 msec. From the paper (page 13):


… As explained earlier, for two variants of XB6 cable modem gateway, upstream DOCSIS-PIE AQM was enabled on the CGM4140COM (experiment) variant but was not available on the TG3482G (control) variant during the measurement period

At a high level, when a device had AQM it consistently experienced between 15-30 milliseconds of latency under load. … [The] non-AQM devices experienced in many cases 250 milliseconds or higher latency under load.

See if you can get an XB6 / CGM4140COM cable modem.

Toward a Consumer Responsiveness Metric

At a recent videoconference, I advocated strongly for a consumer-facing measurement of latency/responsiveness. I had not planned to speak, so I gave off-the-cuff comments. This is an organized explanation of my position. I offer these thoughts for consideration at the IAB Workshop “Measuring Network Quality for End-Users, 2021” – Rich Brown

I hunger for a day when vendors (router manufacturers and service providers) compete on the basis of “responsiveness” in the same way that they compete on speed – “Up to X megabits per second, and Y responsiveness!”

I have been working on the “Bufferbloat Project” [1] since 2011, trying to find layman’s terms for what was happening, and what to do about it. [2] [3] The delay goes by the name “lag”, “latency under load”, or “bufferbloat”. At first, the effects seemed mysterious and non-intuitive. Even to knowledgeable individuals, the magnitude of the delay caused by queueing was astonishing. No matter what name you use, it makes people say, “the internet is slow today”.

My router at home has solved this problem. I enjoy the fruits of the intense research from the mid-2010s that led to well-understood solutions such as fq_codel, cake, PIE, and airtime fairness. Even on 7 Mbps DSL, my network was quite usable and very responsive.

My frustration in 2021 is that this remains a problem for nearly everyone else. The market has not provided solutions. Every day, people purchase brand name equipment that happily queues hundreds of msec of traffic.

I postulate that vendors have not considered responsiveness to be an important characteristic of their offerings. Consequently, they have not prioritized the engineering resources to incorporate the well-tested solutions listed above.

My hope, from this note, and from our on-going efforts, is that we can come up with a test tool that consumers can use to raise awareness of the problem of bad responsiveness.

Characteristics of a Responsiveness Tool

I seek a “responsiveness tool” with these characteristics:

  1. Easy to use. People need an easy way to measure responsiveness so they can give feedback to their vendors.
  2. A single number, so it’s easy to report and compare.
  3. Bigger must be better. High latency means bad responsiveness. People have no intuitive feel for a millisecond: “Is 100 msec bad? Isn’t that really short…?”
  4. An approximate measure is OK. Consumers won’t mind separate runs varying 20% or 30%, especially since poor responsiveness could be an order of magnitude different from good.
  5. Resistant to cheating. Vendors sometimes optimize pings to make latency look lower. But real people’s traffic doesn’t use pings. The responsiveness test must use protocols that match actual traffic patterns.
  6. Vendor and technology independent. People should get similar results whether they test from their phone, their desktop, the web, or an app.
  7. “Good enough”. A widely implemented and promoted metric that substantially matches people’s real experience is vastly superior to a host of competing metrics that muddy the waters in consumers’ minds.

A Proposed Metric – RPM

Apple has produced an Internet-Draft, “Responsiveness under Working Conditions” [4], and an implementation. It defines a procedure for continually making short HTTPS transactions on a path to a server that has been fully loaded in both directions. The number of transactions in a fixed time is expressed as the number of “round-trips per minute”, which is given the name “RPM”, a wink to the “revolutions per minute” we use for cars.
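
Roughly speaking (the I-D spells out the exact aggregation), RPM is just the round-trip time under load converted to a per-minute rate: RPM ≈ 60,000 msec-per-minute ÷ loaded RTT in msec. The networkQuality run shown earlier reported 1995 RPM, which works out to about 60,000 / 1995 ≈ 30 msec per round trip while the link was saturated – only modestly above its 21 msec idle baseline.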

The RPM measurement satisfies all of the characteristics listed above.


It is not a requirement for the responsiveness test to provide:

  • Strict reproducibility. The wider internet has widely varying conditions, with bottlenecks moving around by time of day or adjacent traffic. It is not reasonable/feasible to expect that any measure used by consumers will be exactly reproducible.

  • Detailed statistics or distributions of measurements. This is not a diagnostic tool. A nuanced data set with medians and percentiles may excite techies, but for others, it’s hard to understand the implications.

  • Performance of any particular protocol. The responsiveness tool must measure a broad variety of typical traffic.

  • Data to be used as input for vendors to design solutions. The responsiveness measure should be used the same way we talk to our mechanic: “The car makes a funny noise when I …”. I expect the specialist to work to reproduce the symptom, using the provided equipment, and come up with an appropriate solution.


The research of the last decade has developed a wide variety of solutions. There are plenty of corner-cases where these solutions aren’t perfect. I encourage vendors and researchers to study the field and advance our knowledge further. I would be delighted if they found practices even better than the current state of the art.

But “the rest of the internet” (including my neighbors and family members, for whom I’m the support person) would all benefit from a world where off-the-shelf equipment already incorporated well-known, best practice solutions.


[1] Bufferbloat Project

[2] Bufferbloat and the Ski Shop

[3] Best Bufferbloat Analogy – Ever

[4] Responsiveness under Working Conditions – Internet-Draft. (Full disclosure: I am one of the editors of the “Responsiveness under Working Conditions” I-D.)

Best Bufferbloat Analogy – Ever

My friends frequently ask, “Why is my network so slow?” And often, the answer is “latency” or the screwy term, “Bufferbloat” – the “undesirable latency caused when a router buffers too much data.” But what the heck does that mean?

A while back, I attempted a layman’s explanation of Bufferbloat. I compared it to a ski shop. It was pretty unsuccessful: it just didn’t have any intuitive appeal.

That’s why I was delighted that someone published what I believe is the Best Bufferbloat Analogy – Ever. (I am pleased to have contributed to the final version of their description.) That page also has a well-designed web-based Bufferbloat Tester (on a par with the DSLReports Speed Test).

They asked, “Can you explain bufferbloat like I’m five?” and noted that flows of liquids are a lot like flows of packets. The analogy: when a friend dumps a bucket of water into a sink with a narrow drain, it slows other flows (like a teaspoon of oil) trying to empty out. Read the whole description…

This made me think about having a SmartSink™ to give a visual image for understanding how a well-designed router can decrease latency.

What’s a SmartSink™?

Instead of accepting a full bucket of water all at once, a SmartSink controls the bucket of water with a valve. It allows just enough water into the sink to keep the drain full. If the water gets too low, the SmartSink opens the valve; if it gets “too full”, it closes it a bit.

A SmartSink also works when lots of friends have their own buckets, pouring in colored water – pink, blue, etc. The valves on the SmartSink control each color. If the SmartSink notices too much pink water, it closes that valve a bit to bring back balance, so that each color gets its “fair share” of the drain’s capacity. And because there’s never too much water (of any color) in the sink, a small new flow always drains quickly.

Reality check: This is just an analogy. I realize that a SmartSink is a ridiculous idea. But it helps me visualize how small flows can drain quickly while big flows share the drain capacity fairly.

What does this have to do with routers?

The Smart Queue Management (SQM) algorithm in a router works like the SmartSink. When a device starts sending a lot of data (maybe a phone starts uploading photos to the cloud), SQM controls the amount of data queued for each flow (each separate upload, videoconference, voice call, gaming session, YouTube, BitTorrent, etc.) to prevent any one flow from using more than its share. Instead of operating valves to control the flow of water, SQM controls the size of each flow’s queue by:

  1. Placing packets from each flow into a separate queue.
  2. Removing a small batch of packets from each queue, round-robin style, and sending that batch “out the drain” through the (slow) bottleneck link to the ISP. When each batch has been fully sent, it retrieves another batch from the next queue, and so on.
  3. Offering backpressure to flows that are sending “more than their share” of data.

This process provides these desirable effects:

  • Most importantly, SQM provides low latency. Small flows (with just one or a few small packets) get sent right away in their next “round robin” batch.
  • Equal sharing of the bottleneck: If there are multiple senders, each can send an equal amount of data with each round-robin opportunity.
  • No waste of the bottleneck: If there’s only one sender (one queue with data), that one gets the full capacity of the link.
  • Offering backpressure to bulk senders minimizes lost packets and re-transmissions, making the network globally more efficient.

Does SQM work?

YES! Can I get a router with SQM today? YES!
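
In practice, SQM on a home router is usually one of the algorithms mentioned earlier – fq_codel or cake – applied to the WAN link. Here’s a minimal sketch for a Linux-based router, assuming an interface named eth0 and a roughly 20 Mbit/s upload link (both placeholders for your own values); OpenWrt’s SQM package does the same thing from a GUI:

# shape uploads to just under the link rate so queueing happens here, where cake can manage it per-flow
$ tc qdisc replace dev eth0 root cake bandwidth 19Mbit
$ tc -s qdisc show dev eth0        # confirm cake is installed and watch its statistics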

Got questions? Send them to me and I’ll include them in Part 2 (coming soon) of this blog. Thanks.


Astonishing Lidar View of NH

The NH Stone Wall Mapper project uses Lidar data to display small variations in ground elevation. A UNH project built this map to identify stone walls in the state.

This site can be “misused” (in a good way) to show lots of other topographic features. Here’s a “Lidar view” of the grounds of Loch Lyme Lodge, near Post Pond. The features are shaded as if the sun were shining from the northeast. (Update: 31 Dec: Thanks to the good folks at the NH Geological Survey, the link now goes directly to the desired view!)

But wait… there’s more! You can turn on and off various “layers” to see other kinds of information. To do this:

  1. At the top-left, click the Layers Icon to display various layers
  2. Check or uncheck the Hillshade box to “show or hide the trees”…
  3. Click the More… icon to enable other features, such as the “Swipe Layers” that lets you compare two layers…

So much fun – play around! Turn layers on and off, scroll to other parts of NH. If you find something interesting, send me a note and I’ll post it. Enjoy!

WireGuard Vanity Keys

A WireGuard VPN provides a fast, secure tunnel between endpoints. It uses public/private key pairs to encrypt the data.

If you have several clients, you have to enter their public keys into your server. Keeping track of those keys gets to be a hassle, since ordinarily, the keys are essentially random numbers.

I found a great project to help with this problem: WireGuard Vanity Address. It continually generates WireGuard private/public key pairs, printing keys that contain a desired string in the first 10 characters. For example, I generated this public key for my MacBook Pro (MBP): MBP/DzPRZ05vNZ0XS3P9tlokZPrLy/1lb1Zsm3du4QA=. Note the MBP/ at the start – it makes it easy to know that this is my Mac’s key.

To do it, I ran the wireguard-vanity-address program. Here is sample output:

$ ./wireguard-vanity-address MBP/
searching for 'mbp/' in pubkey[0..10], one of every 299593 keys should match
one trial takes 28.7 us, CPU cores available: 2
est yield: 4.3 seconds per key, 232.30e-3 keys/s
hit Ctrl-C to stop
private qMKPNrCMId59XTn5vgDICUh/QzIfhqZdrZ+XQBIJj2w= public zmbP/YEpC8Zl6MacYhcY1lq126tL2UudFjmrwbl2/18=
private HHtPY8IwGBxQ5OTtJY6GcuFpImXtDp9d187zvI0axFo= public qhIiSMbp/extT5irPy4EJfLRPR9jTzQZHlM15Fo/P2E=
private BEnEu1lVdcRI997nj2uPNGsyCZNPhBTCNfgJuYPPJHA= public hZzmBP/8EthWPOFp5wroEGPeJTHGxZ5KENnMiZvniGY=
private 8HRj+YZfSBnYZn38MPE09W2g03JvRJoGbjlDkHQ0Wnk= public mBP/q2dOd+m457PyKTIvI7MDTuXLCneG6MM0ir9rwRc=
private dFE8xsDDWNNNY1OjOIlxQiNVbp7Z6tZhXsaOo/5gPH0= public MBP/DzPRZ05vNZ0XS3P9tlokZPrLy/1lb1Zsm3du4QA=
# This last line contains a public key starting with "MBP/"
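
Once you have a vanity key like that last one, it goes into the server’s peer list the same way as any other public key – the prefix just makes it recognizable at a glance. A hedged sketch, assuming an interface named wg0 and a placeholder tunnel address for the laptop:

# on the server: add the MacBook Pro as a peer, using its vanity public key
$ sudo wg set wg0 peer MBP/DzPRZ05vNZ0XS3P9tlokZPrLy/1lb1Zsm3du4QA= allowed-ips 10.0.0.2/32
$ sudo wg show wg0                 # the MBP/ prefix stands out in the peer list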

For more details, read the GitHub page, and also the issue where the author addresses security concerns about decreasing the size of the key space.

Update: I created a Dockerfile to make it even easier to run wireguard-vanity-address. Check out my personal GitHub repo for details.