Stable sorting for DB Browser for SQLite

I had some data in SQLite, and wanted to sort on the columns (and on a pair of columns). I really like DB Browser for SQLite because it’s free, powerful, and has a nice GUI. It turns out that a stable sort on multiple columns is easy, but not well documented. Here’s how you do it:

  • Open a SQLite file in DB Browser for SQLite
  • In the Browse Data tab, click a column head to sort by that column
  • Click it again to reverse-sort
  • To sort by multiple columns, Ctrl-click (Cmd-click on macOS) a second column heading. You’ll see small numbers next to the field names to indicate their order.
  • Ctrl/Cmd-click a second time to reverse-sort that column
  • Click another column head to clear the previous selection(s)
  • Here’s a video that shows how it works: https://youtu.be/KMowPpzZ2Vc
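
Under the covers, this is just SQL: a multi-column sort in the Browse Data tab is equivalent to an ORDER BY over those columns, which you could also run from the Execute SQL tab. A minimal sketch (the table and column names here are hypothetical):

SELECT *
FROM my_table
ORDER BY first_column ASC,    -- the column you clicked first
         second_column DESC;  -- the Ctrl/Cmd-clicked column, reverse-sorted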

INCLUDEPICTURE in Microsoft Word Mail Merge

What a crock! How badly documented can a facility be? There are lots of tutorials on the basics of Microsoft Mail Merge, but there is a dearth of information about the INCLUDEPICTURE facility for including images, especially if the images are retrieved from a link/URL.

That said, I needed to use it for a project to create a document that plucks up variables from a spreadsheet and drops them into a pretty template for each page. I wrote this note so that the next time I need to revise the document (and stumble over this process), I’ll have these notes. I’m not claiming this is the only way/best way, just that it worked for me. I think I described everything, but please let me know in the comments if I’ve missed something.

I had the following hassles:

  • Word uses voodoo magic to accomplish this. [The documentation for how to include images in a mailmerge document is terrible (at least, my google-fu wasn’t strong enough) and it’s worse for images specified by URLs.]
  • You need to use (poorly documented) keyboard shortcuts to enter the field codes
  • Once I had images inserted into the document, I needed to figure out how to adjust their size.

I succeeded – here’s what I did…

  1. Create an Excel spreadsheet with links (URLs) to the desired images.
  2. The Excel document should have headers in the first row that describe the data. If you’re retrieving images from a website, you’ll need a field with the links (see the sketch after this list).
  3. Create a mailmerge document in Word. To do this:
    • New Document, then from the Ribbon:
    • Click the Mailings tab
    • Click Start Mail Merge and choose Letters
    • Click Select Recipients – you’ll need to choose the Excel document (and proper tab) that has the data to be merged
    • Ignore the “Filter recipients…” stuff
  4. Click “Insert Merge Field” in the Ribbon and select your fields. The field name appears within «…».
  5. These merge fields are voodoo magic, and they’re kind of cool. When you click the Preview Results button (in the Ribbon), you’ll see your mail merge results, with the live data substituted for your «variables».
  6. Click Preview Results again to un-preview results and continue editing the document.
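
For concreteness, here’s the kind of spreadsheet layout I mean; the column names and URLs are made up, but the first row must hold the field names:

Name     ImageURL
Alice    https://example.com/photos/alice.jpg
Bob      https://example.com/photos/bob.jpg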

More voodoo magic

  1. If you type Alt-F9 (Option-Fn-F9 on a Mac), the «variables» in the document turn into their underlying representation: «Field1» turns into { MERGEFIELD Field1 }, which is editable text.
  2. Don’t edit it. It’ll lead to tears. But at least you can see how these fields are formatted.

Adding an image…

  1. If you want to insert an image (say, using a URL from that Excel file), you need even more voodoo magic. The format is straightforward, but a bit gnarly: { INCLUDEPICTURE { MERGEFIELD Name-of-URL-Field } \* MERGEFORMAT \d } – there’s a completed example after this list.
  2. BUT… You can’t just type that string: remember, the { ... } are voodoo magic.
  3. Type Ctrl-F9 (Cmd-F9 on a Mac) to insert an empty set of field braces in the document, then fill in the text between them. If you need a nested set of braces, type another Ctrl-F9 (Cmd-F9) in the right place, and keep typing ’til you enter the correct command.
  4. Preview Results to see if it’s working.
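
Putting that together: assuming the URL column in the Excel file is named ImageURL (a hypothetical name), the finished field code, viewed with Alt-F9, would look like this, with every pair of braces inserted via Ctrl-F9 (Cmd-F9) rather than typed:

{ INCLUDEPICTURE { MERGEFIELD ImageURL } \* MERGEFORMAT \d }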

Resizing an image

  1. This is Microsoft Word. So there are no rules. Actually, there are, but they’re not known by mere mortals. (Sorry for the snark.)
  2. The images will be any size they want to be.
  3. To constrain the images, I found it useful to insert a 1×1 table and let the content fill the cell. Adjust the size of the cell with Table Properties:
    • I set the Preferred Width (Table tab)
    • I set “Specify Height” (Row tab)
    • I set “Automatically resize to fit contents” (Options button)
  4. I then copy/pasted the voodoo magic for the image into the cell.

Finishing up…

  1. OK – You think you’re almost done. Maybe.
  2. The Finish and Merge button (in the Ribbon) actually performs the mail merge.
    • Edit Individual Documents seems to create a new Word document containing a concatenation of all the mail-merged pages. You can save this (in Word or PDF format) or print it.
    • Print Documents will print, or let you save to a PDF.
  3. You may need to force-load the images, to replace the (single) placeholder image with the right ones. To do this, Select All, then hit F9. You may see a spinner while Word retrieves all the images, but they should all be there.

Additional Thoughts

  • I was successful with Microsoft Word for Mac 2019 using a .docx file, despite the fact that Google brings up a lot of (old) advice about using .doc format, not the (new-fangled) .docx format.
  • Keymaps: This article gave me a breakthrough, showing how to insert the voodoo magic { } in a document: https://wordmvp.com/Mac/FieldsInMacWord.html
  • It’s easiest if the images are all the same size, to minimize resizing effort.
  • I think the \d option of the INCLUDEPICTURE command causes Word not to save the images in the document, making the source file considerably smaller.
  • I could not find documentation about what the \* MERGEFORMAT string does. It works for me.
  • Let me know in the comments if you find out more. I’ll update this article.

More Fun with Docker Tooling

I continue to enjoy using Docker to encapsulate developer tooling so that it doesn’t pollute my laptop with varying versions of software I don’t use regularly. (See Jonathan Bergknoff’s Run More Stuff in Docker and Andrew Welch’s Docker for all the things for further justification.)

In addition to using Andrew Welch’s vitejs-docker-dev project for Javascript development, I converted a couple of my personal projects to use a Dockerfile. I have submitted PRs to the upstream repos to incorporate the Dockerfile – we’ll see if they are accepted.
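
To give a flavor, here’s a minimal sketch of the kind of Dockerfile I mean, assuming a Node-based tool (the base image and commands are illustrative, not the exact contents of those PRs):

# Pin the tool's runtime and dependencies inside a container
FROM node:16-alpine
WORKDIR /app
# Install dependencies first, so Docker caches this layer across rebuilds
COPY package.json package-lock.json ./
RUN npm ci
# Copy in the rest of the sources
COPY . .
CMD ["npm", "start"]

Build and run it with docker build -t mytool . and docker run --rm -it mytool – nothing lands on the laptop itself.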

TL;DR Using a Docker container for these tools doesn’t really get in the way. Startup may be slightly slower (adding a second or so), but otherwise these tools run plenty fast. Plus, Docker eliminates a whole raft of hassles in getting the software installed and maintaining it across OS upgrades (say, on my laptop). I’m content.

Migrating Snowpack to Vite (and Docker!)

About eighteen months ago I migrated a small Javascript app to use Snowpack development tooling. (This was mostly for fun; I already had it working with Webpack.) Snowpack claimed simple dev tooling with nearly instant updates, using the power of ES Modules. It worked pretty well.

About six months ago, the team that developed Snowpack realized that their efforts had paralleled those of the Vite.js tooling. Vite.js also uses ES Modules, and it provides a mature code base and strong community support. Since the Snowpack team wanted to work on other projects (Astro), they switched their underlying tooling to Vite.js.

So… I decided to see what it would take to migrate my Snowpack project to use Vite.js. Everything I had read said that it was easy. Here’s a field report from what was required.

A Side Project – Using Docker: Since this Javascript app was a side project, I have also been using it as a learning environment. I had read Jonathan Bergknoff’s Run More Stuff in Docker, which made a lot of sense to me. (The magic of using Docker is that once you’ve created the instance, all the tool and dependency versions remain the same. It’s easy then to hand the Dockerfile to a colleague, who can build an identical development environment in a few minutes. It also avoids cluttering my daily-driver laptop with multiple versions of Node, npm, Go, Python, Rust, and any number of little-used tools – they’re all encapsulated in the Docker container.)

So I decided to investigate whether using Docker to create the Javascript tooling would make my life better. Googling around led to Andrew Welch’s vitejs-docker-dev project. It builds a Docker instance with full development tooling (Vite.js, pnpm, hot reloading, etc.) that watches the source files in your local directory for updates. You develop the code using your favorite editing tools, and changes are immediately reflected in your browser/test environment. This is very slick. The vitejs-docker-dev repo has good documentation, describing how the Docker machine gets built and how to use it.

Update: Andrew Welch (who created the vitejs-docker-dev project) sent me a link to his article about using “Docker for all the things.” It’s a good adjunct to the original “Run More Stuff in Docker” post.

Back to the main story – here are the steps I followed to get my Snowpack project on the air with Vite.js:

  1. Create the Docker instance. Clone the vitejs-docker-dev repo. Then run make docker to put the set of development tools into a new Docker instance. This one-time step takes a few minutes. You may see several warnings (as described in a Github issue) but these don’t seem to be important.
  2. Check that the default Vite.js app builds in Docker. Run make vite-pnpm run dev in one terminal window, then run make app-pnpm run dev in a second window (waiting between the commands as described in the README). Click the http://localhost:3000 URL to see whether the test Vite app starts up. Edit the index.html file and check that the change is reflected in the web browser. (It should be…)
  3. Customize the /app directory for your app. Copy all your app’s files into the app directory. (I renamed the original to app-old and created a new app directory.) This required a bit of jiggering to adjust between Snowpack and Vite.js, such as:
    • My index.html file for Snowpack was in the public directory; for Vite.js, I moved it to the top-level app directory
    • Copy package.json and the other important directories to the app directory
    • Adjust the paths for index.html: Snowpack bundles source files from /src into the /dist directory; Vite.js processes the files directly from /src
    • I can’t remember whether I made other tweaks; it certainly wasn’t onerous.
  4. Remove references to the Snowpack modules from the package.json file (see the sketch after this list). Optionally, you could take the opportunity to update the versions of dependencies.
  5. Rebuild the Docker instance (#1 above) and restart the development process (#2 above). Click the link to http://localhost:3000 and your app should begin to run. You’ll probably need to make adjustments, but you’ll be substantially on the air.
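
For step 4, the change amounted to something like this (the exact package names depend on your project; check package.json for the Snowpack entries):

# Run inside the container, or wherever pnpm lives:
pnpm remove snowpack               # plus any @snowpack/plugin-* packages
pnpm add -D vite                   # Vite belongs in devDependencies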

TL;DR The process of converting to Vite.js wasn’t very hard (at least, not on my small project). It required a little farbling around, but nothing terrible. The jury’s still out on whether vitejs-docker-dev will make my life better – but I think it just might.

Using Apple’s RPM tool

macOS Monterey ships with a tool that measures the responsiveness of your network connection. It saturates the network with traffic for 20 seconds, then measures the rate of short transactions to compute round-trips per minute (RPM). Big numbers (above 2000) mean your network remains responsive even when it is heavily loaded. Small numbers (under 800 or so) mean your network isn’t responsive – potentially caused by bufferbloat.

There’s an iOS version described at https://support.apple.com/en-gb/HT212313

Stuart Cheshire and Vidhi Goel talked about the RPM tool at WWDC 2021. Apple also published an Internet-Draft that describes the RPM technique.

Here’s a sample run from my Mac. The RPM tool displays my download and upload speeds (nominally 25 Mbps) and the number of simultaneous flows required to saturate the link (12, in this case). It shows the responsiveness as 1995 round-trips per minute. That’s really good: the average latency – even during heavy load – increases only a bit above the baseline (idle) RTT of 21 msec.

% /usr/bin/networkQuality -v
==== SUMMARY ====
Upload capacity: 22.657 Mbps
Download capacity: 23.755 Mbps
Upload flows: 12
Download flows: 12
Responsiveness: High (1995 RPM)
Base RTT: 21
Start: 11/7/21, 7:18:37 AM
End: 11/7/21, 7:18:47 AM
OS Version: Version 12.1 (Build 21C5021h)
%
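
As a sanity check on those numbers: 1995 round-trips per minute works out to 60,000 msec ÷ 1995 ≈ 30 msec per round trip under full load, only about 9 msec above the idle RTT of 21 msec.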

Here’s a video that shows the tool in operation: https://youtu.be/e9DUTB9okMA

NameD•Tective — mDNS over AppleTalk

[From the Archives of Amusing Technology…] Back in the ’90s, Dave Fisher and I created NameD•tective, a Macintosh control panel that gave any Mac on the Dartmouth network a static DNS name.

It used the Name Binding Protocol (NBP) to let someone create a DNS name based on their name plus their AppleTalk zone. The DNS name had the form person-name.AppleTalk-zone.atzone.dartmouth.edu. The NameD•tective server looked up the NBP name and returned the computer’s current IP address.

This screenshot shows an example: cd-changer.kiewit.atzone.dartmouth.edu was a server in my office that distributed information to other developers in the Kiewit building. NameD•tective probably didn’t get broad use at Dartmouth, but it was a neat demonstration project. It led Dave and me to develop the MacDNS software that shipped as part of Apple’s Internet Connection Kit. Here’s the page from Dartmouth’s website, archived by the Wayback Machine: https://web.archive.org/web/19961220043011/http://www.dartmouth.edu/pages/softdev/named.html

Today, computer naming is much simpler. Modern operating systems let a computer advertise an mDNS (multicast DNS) name that can be looked up directly to find a host that provides the service.
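
To see the modern equivalent in action, here’s a quick sketch (the host name is hypothetical) that resolves an mDNS name from a Mac’s Terminal:

% ping cd-changer.local            # resolved via multicast, no DNS server involved
% dns-sd -G v4 cd-changer.local    # or query the IPv4 address record explicitly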

Comcast modems decrease bufferbloat

Last month, Comcast released a paper, Improving Latency with Active Queue Management (AQM) During COVID-19, showing that their DOCSIS-PIE AQM dramatically decreases lag/latency by a factor of 10, from 250 msec down to 15-30 msec. From the paper (page 13):

… As explained earlier, for two variants of XB6 cable modem gateway, upstream DOCSIS-PIE AQM was enabled on the CGM4140COM (experiment) variant but was not available on the TG3482G (control) variant during the measurement period

At a high level, when a device had AQM it consistently experienced between 15-30 milliseconds of latency under load. … [The] non-AQM devices experienced in many cases 250 milliseconds or higher latency under load.

See if you can get an XB6 / CGM4140COM cable modem.

Toward a Consumer Responsiveness Metric

At a recent videoconference, I advocated strongly for a consumer-facing measurement of latency/responsiveness. I had not planned to speak, so I gave off-the-cuff comments. This is an organized explanation of my position. I offer these thoughts for consideration at the IAB Workshop “Measuring Network Quality for End-Users, 2021” – Rich Brown

I hunger for a day when vendors (router manufacturers and service providers) compete on the basis of “responsiveness” in the same way that they compete on speed – “Up to X megabits per second, and Y responsiveness!”

I have been working on the “Bufferbloat Project” [1] since 2011, trying to find layman’s terms for what was happening, and what to do about it. [2] [3] The delay goes by the name “lag”, “latency under load”, or “bufferbloat”. At first, the effects seemed mysterious and non-intuitive. Even to knowledgeable individuals, the magnitude of the delay caused by queueing was astonishing. No matter what name you use, it makes people say, “the internet is slow today”.

My router at home has solved this problem. I enjoy the fruits of the intense research of the mid-2010s that led to well-understood solutions such as fq_codel, cake, PIE, and airtime fairness. Even on 7 Mbps DSL, my network was quite usable and very responsive.

My frustration in 2021 is that this remains a problem for nearly everyone else. The market has not provided solutions. Every day, people purchase brand name equipment that happily queues hundreds of msec of traffic.

I postulate that vendors have not considered responsiveness to be an important characteristic of their offerings. Consequently, they have not prioritized the engineering resources to incorporate the well-tested solutions listed above.

My hope, from this note, and from our on-going efforts, is that we can come up with a test tool that consumers can use to raise awareness of the problem of bad responsiveness.

Characteristics of a Responsiveness Tool

I seek a “responsiveness tool” with these characteristics:

  1. Easy to use. People need an easy way to measure responsiveness so they can give feedback to their vendors.
  2. A single number, so it’s easy to report and compare.
  3. Bigger must be better. High latency means bad responsiveness. People have no intuitive feel for a millisecond: “Is 100 msec bad? Isn’t that really short…?”
  4. An approximate measure is OK. Consumers won’t mind separate runs varying by 20% or 30%, especially since poor responsiveness can be an order of magnitude worse than good.
  5. Resistant to cheating. Vendors sometimes optimize pings to make latency look lower. But real people’s traffic doesn’t use pings. The responsiveness test must use protocols that match actual traffic patterns.
  6. Vendor and technology independent. People should use and get similar results from their phone, their desktop, on the web, or using an app.
  7. “Good enough”. A widely implemented and promoted metric that substantially matches people’s real experience is vastly superior to a host of competing metrics that muddy the waters in consumers’ minds.

A Proposed Metric – RPM

Apple has produced an Internet Draft “Responsiveness under Working Conditions” [4] and implementation. It defines a procedure for continually making short HTTPS transactions on a path to a server that has been fully loaded in both directions. The number of transactions in a fixed time is expressed as the number of “round-trips per minute”, which is given the name “RPM”, a wink to the “revolutions per minute” that we use for cars.
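
(The arithmetic behind the name is simple: RPM is roughly 60,000 msec divided by the average round-trip time in milliseconds under load, so a loaded RTT of 30 msec scores about 2000 RPM, while a bufferbloated RTT of 250 msec scores only about 240 RPM.)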

The RPM measurement satisfies all my concerns.

Non-requirements

It is not a requirement for the responsiveness test to provide:

  • Strict reproducibility. The wider internet has widely varying conditions, with bottlenecks moving around with time of day and adjacent traffic. It is not reasonable/feasible to expect that any measure used by consumers will be exactly reproducible.

  • Detailed statistics or distributions of measurements. This is not a diagnostic tool. A nuanced data set with medians and percentiles may excite techies, but for others, it’s hard to understand the implications.

  • Performance of any particular protocol. The responsiveness tool must measure a broad variety of typical traffic.

  • Data to be used as input for vendors to design solutions. The responsiveness measure should work the same way we talk to our mechanic: “The car makes a funny noise when I …” I expect the specialist to work to reproduce the symptom, using the provided equipment, and come up with an appropriate solution.

Summary

The research of the last decade has developed a wide variety of solutions. There are plenty of corner-cases where these solutions aren’t perfect. I encourage vendors and researchers to study the field and advance our knowledge further. I would be delighted if they found practices even better than the current state of the art.

But “the rest of the internet” (including my neighbors and family members, for whom I’m the support person) would all benefit from a world where off-the-shelf equipment already incorporated well-known, best practice solutions.

References

[1] Bufferbloat Project https://bufferbloat.net

[2] Bufferbloat and the Ski Shop https://randomneuronsfiring.com/bufferbloat-and-the-ski-shop/

[3] Best Bufferbloat Analogy – Ever https://randomneuronsfiring.com/best-bufferbloat-analogy-ever/

[4] Responsiveness under Working Conditions – Internet-Draft: https://datatracker.ietf.org/doc/draft-cpaasch-ippm-responsiveness/ (Full disclosure: I am one of the editors of the “Responsiveness under Working Conditions” I-D.)