Local by Flywheel won’t start because it’s regenerating Docker Machine TLS certificates

I have been using Local by Flywheel and really enjoying it. It does two things:

  1. You can stand up a development version of a WordPress site on your laptop and horse around with it. It’s fast, you can make experiments, and if it blows up, you can simply regenerate in a minute or two.
  2. Using the (paid) Flywheel hosting, you can transfer your local dev server to their public hosting, and you’re on the air.

I have not used this latter facility, but I’m here to tell you that the first part is pretty slick.

But… I went away from Local by Flywheel for a month or so, then came back to start working on a new site. When I tried to start it up, I saw a succession of messages stating that it was “Regenerating Machine Certificates” and that “Local detected invalid Docker Machine TLS certificates and is fixing them now.” This looped apparently forever, and Local never started. Here’s my report on their community forum.

After considerable searching, I found a procedure from one of the developers that seems to do the trick. It involves downloading a new version of the Boot2Docker ISO file and letting the system re-provision itself. The process involves (a) creating an alias (“local-docker-machine”) for the docker-machine binary bundled inside Local by Flywheel, then (b) issuing a series of commands to that alias.
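
First, the alias (this is the same command shown in the transcript below; adjust the path if your copy of the app lives elsewhere):

alias local-docker-machine="/Applications/Local\ by\ Flywheel.app/Contents/Resources/extraResources/virtual-machine/vendor/docker/osx/docker-machine"

Then the commands: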

local-docker-machine stop local-by-flywheel
rm -rf ~/.docker/machine/certs
local-docker-machine create local-cert-gen
local-docker-machine start local-by-flywheel
local-docker-machine regenerate-certs -f local-by-flywheel
local-docker-machine rm -f local-cert-gen

These steps caused Local by Flywheel to recognize that the Boot2Docker ISO was out of date. It triggered a download of the new version and gave the output below. When it completed, Local by Flywheel worked as expected. Whew!

bash-3.2$ alias local-docker-machine="/Applications/Local\ by\ Flywheel.app/Contents/Resources/extraResources/virtual-machine/vendor/docker/osx/docker-machine"
bash-3.2$
bash-3.2$ local-docker-machine stop local-by-flywheel; rm -rf ~/.docker/machine/certs; local-docker-machine create local-cert-gen; local-docker-machine start local-by-flywheel; local-docker-machine regenerate-certs -f local-by-flywheel; local-docker-machine rm -f local-cert-gen;
Stopping "local-by-flywheel"...
Machine "local-by-flywheel" is already stopped.
Creating CA: /Users/richb/.docker/machine/certs/ca.pem
Creating client certificate: /Users/richb/.docker/machine/certs/cert.pem
Running pre-create checks...
(local-cert-gen) Default Boot2Docker ISO is out-of-date, downloading the latest release...
(local-cert-gen) Latest release for github.com/boot2docker/boot2docker is v18.09.1
(local-cert-gen) Downloading /Users/richb/.docker/machine/cache/boot2docker.iso from https://github.com/boot2docker/boot2docker/releases/download/v18.09.1/boot2docker.iso...
(local-cert-gen) 0%....10%....20%....30%....40%....50%....60%....70%....80%....90%....100%
Creating machine...
(local-cert-gen) Copying /Users/richb/.docker/machine/cache/boot2docker.iso to /Users/richb/.docker/machine/machines/local-cert-gen/boot2docker.iso...
(local-cert-gen) Creating VirtualBox VM...
(local-cert-gen) Creating SSH key...
(local-cert-gen) Starting the VM...
(local-cert-gen) Check network to re-create if needed...
(local-cert-gen) Waiting for an IP...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: /Applications/Local by Flywheel.app/Contents/Resources/extraResources/virtual-machine/vendor/docker/osx/docker-machine env local-cert-gen
Starting "local-by-flywheel"...
(local-by-flywheel) Check network to re-create if needed...
(local-by-flywheel) Waiting for an IP...
Machine "local-by-flywheel" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
Regenerating TLS certificates
Waiting for SSH to be available...
Detecting the provisioner...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
About to remove local-cert-gen
WARNING: This action will delete both local reference and remote instance.
Successfully removed local-cert-gen

Internet Identity, Nationwide Bank, and the Post Office

Dave Winer wrote about “internet identity”, observing that several companies were probably thinking about solving the problem. Specifically, he said:

But because money is so central to identity, it’s surprising that there isn’t a Google or Amazon of identity. Seems there’s money to be made here. An organization with physical branches everywhere, with people in them who can help with indentity (sic) problems.

This reminded me of the proposal to have US Post Offices become banks (described, for example, here and in a zillion other places).

The advantages:

  • There are post offices everywhere. The postal system is enshrined in the Constitution, so it’s useful for it to have a valuable mission even as the volume of paper mail declines.
  • The “Bank of the US Post Office” could provide an ATM at each branch. You could withdraw cash without fees anywhere in the US.
  • They could provide low-cost (no-cost?) savings/checking accounts for the traditionally “unbanked”, instead of making people use check-cashing services, payday lenders, etc., who siphon off a percentage of every transaction.
  • Postal employees have a strong ethos of careful, accurate transactions, and already have procedures for handling cash.
  • Post Offices are accustomed to handling critical, private matters in a timely way.

Identity management seems another valuable service that the USPS might provide.

Linking Reservation Nexus and TripAdvisor

We wanted our room availability to show up in TripAdvisor and other online services. There are two basic steps, where you tell Reservation Nexus and TripAdvisor how to find each other’s information:

  • Use Reservation Nexus Availability Exchange to share your room availability
  • Use TripAdvisor TripConnect to link up your business to the Reservation Nexus listings

Note: The business name, postal address, URL, and email must be exactly the same in both ResNexus and TripAdvisor. Check them before starting this procedure.

On the Reservation Nexus site:

  1. In the ResNexus Settings choose Availability Exchange, near the bottom of the settings (first image below).
  2. In the Availability Exchange page:
    • Click the REGISTER button to register your rooms
    • Click Only share my availability… and check off the desired services. (second image)
  3. Click SAVE. The resulting page (third image below) shows:
    • Your Availability Exchange ID next to the UNREGISTER button
    • The Last full synch time

On your TripAdvisor site:

  1. Log into TripAdvisor
  2. Go to https://www.tripadvisor.com/CostPerClick and click Check your Eligibility. It will show a page naming your property to link to the Cost per Click program. (first image below)
  3. Click Get Connected. You will see a page listing the choices. (second image)
  4. Find “Reservation Nexus” and click it to select it, then click Confirm. (third image)
  5. The confirmation page (fourth image below) should show property prices for a specific night. This confirms that the connection has been established. Continue with the cost-per-click process with TripConnect.
  6. If you see an error (fifth image), ensure that your contact information in Reservation Nexus and TripAdvisor matches exactly.

Troubleshooting

  • When it works, the connection between Reservation Nexus and TripAdvisor should happen almost immediately, and you should see the confirmation page listing your property prices.
  • If you had to modify your ResNexus info, then you may need to contact ResNexus to have them re-publish your TripConnect info.
  • Contact Reservation Nexus if the connection has not completed within an hour.

Taxpayer-Funded Networks – all that bad?

I saw an article in the Texas Monitor fretting about taxpayer-funded broadband projects. It cites a “study” by the Taxpayer Protection Alliance Foundation that purports to show a wide swath of “failed taxpayer-funded networks”.

A little research on the site led me to realize that it’s not first-rate work – outdated, incorrect information – so I left the following comment on the Texas Monitor site:

I decided to check the “Broadband Boondoggles” site to see what information they provide. First off, the copyright date on the site’s footer says 2017 – are they even updating it?

More specifically, I found that they disparage the local ECFiber.net project (in VT) of which I have personal knowledge. They state that as of January 2015 ECFiber has spent $9M to connect 1,200 subscribers (“an astounding $7,500 per customer.”)

Well, that may be true – as of that date. If they had bothered to follow up with ECFiber’s progress (https://www.ecfiber.net/history/) they would have learned:

  • As of January 2018 they have connected over 2000 customers (cost per subscriber is now roughly half that reported number)
  • They’re hampered by the pole “make ready” process: the incumbent monopoly carriers are slow to respond. They could connect subscribers faster if the carriers would meet their legal make-ready obligations.
  • ECFiber is a private community effort, entirely funded with grants and private equity/loans, so I’m curious how they could even have filed a FOIA request.
  • They’ve now raised $23M in capital from the private markets to reach 20,000 subscribers.
  • That works out to a system-wide average cost of $23M ÷ 20,000 = $1,150 per subscriber – a very attractive cost.

I’m sure there are false starts and overruns for many municipal projects, but if this outdated information is typical of the remainder of the TPAF site, then I would be reluctant to accept any of its conclusions without doing my own research.

WordPress Meetup in Londonderry

I’ll be speaking next month at the WordPress Meetup about using Docker to host a development WP server on your laptop. Here’s the writeup:

Docker for WordPress

Docker enables developers to easily pack, ship, and run any application (including WordPress) as a lightweight, self-sufficient container which can run virtually anywhere.

For WordPress users, this means it’s easy to set up a lightweight development WP server on your laptop/desktop. You can make and test changes before migrating them to your client’s site. Best of all, if you screw things up, you can simply discard the container’s files and start afresh in a couple of minutes. And because it’s running on your local computer, there’s no need to worry about hosting, configuring servers, etc.

Rich will show how to install the Docker application on a laptop, then install and start a WordPress Docker container. The result is the familiar new WP install that you can customize to your heart’s (or client’s) content.
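
To give a flavor of it, here’s a minimal sketch using the official wordpress and mysql images from Docker Hub. (The container names, network name, port, and password are placeholders I chose for illustration; the environment variables are the ones those images document.)

# Create a network so the two containers can find each other by name
docker network create wpnet

# Start MySQL (the password is a placeholder – pick your own)
docker run -d --name wp-db --network wpnet \
  -e MYSQL_ROOT_PASSWORD=changeme -e MYSQL_DATABASE=wordpress mysql:5.7

# Start WordPress, reachable at http://localhost:8080
docker run -d --name wp-site --network wpnet -p 8080:80 \
  -e WORDPRESS_DB_HOST=wp-db -e WORDPRESS_DB_USER=root \
  -e WORDPRESS_DB_PASSWORD=changeme wordpress

Once the containers are up, browsing to http://localhost:8080 walks you through the familiar WP install.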

The WordPress Meetup is open to all on Tuesday, 8 May. Sign up at https://www.meetup.com/WordPressDevNH/events/249032144/

Fake News News

I went to a terrific talk at the Lyme Library earlier this week.

Randall Mikkelsen from Reuters spoke on the topic, “Fake News: What’s the Real Story?”. In it, he presented The Chart, an analysis of popular web sites that plots their bias (left, center, right) against a measure of their reliability/believability. It’s useful to check your reading habits to see if they match your expectations.

That site also has Six Flags to Identify a Conspiracy Theory Article. This is an easy way to check your reading matter to see if it’s “actual news” or just somebody writing to get you fired up. (I also included a comment – what do you think?)

How to Write Wiki Pages So People Will Read Them

So you’ve just learned something cool on a new subject, and you want to let the world know about your discovery. You go to the project’s wiki, and jot it all down. But how can you help people read what you’ve written?

When I look at pages on a wiki, I use three criteria to determine whether I want to spend the time to read a page. If I’m convinced that the page has the info I’m seeking, I’ll work hard to understand it. But if I can’t tell whether it’s any good, it’s just faster to post a query to the forum. Here are the questions I ask:

  1. Is this page for me? Does it apply to my situation?

    There are a lot of cues to whether a page “is for me”. Obviously the title/heading of the page is important. But when I’m seeking information, I’m not usually an expert in the subject. I need help to understand the topic, and I look for a description that tells what the page is about. I also look for cues to see if it’s up to date. Finally, I love a page that has an introductory section that talks about the kinds of info that I’ll find on the page.

  2. Does the author know more than I do?

    A number of factors influence this judgement. As you’re aware, there’s a huge range of knowledge levels among wiki page authors – from the expert to the newcomer who’s excited to document his first discovery. As I scan through a page, I’m looking for facts that confirm what I already know (proving the author has some skill), and then things that I don’t (showing they know more). Finally, it helps to know that the author is aware of the conventions of the wiki – does the page look like other wiki pages? If so, I get some comfort that the author knows the way other wiki pages work/look.

  3. Can I figure out what to do?

    My final question is whether I can actually use the information. If it’s a tutorial/howto, I want the steps clearly stated – “step 1, step 2, step 3, then you’re done.” If it’s a reference page, is the information organized in a comprehensible fashion? Is it really long? Can I pick out what’s important from the incidental info?

The challenge I put to every author is to organize the information in a way that presents the most frequently-sought info first, then figure out what to do with the rest. You might move sections around, move some information onto its own separate page, coalesce it into an existing/similar wiki page, or even create forum articles (instead of a wiki page) if the subject is rapidly evolving.

Typical Net Neutrality coverage – accepting untruths from ISPs

I just sent an email to the reporter from NewsPressNow who posted a typical net neutrality story. A flaw in this kind of reporting is the tacit acceptance of an ISP’s blandishments that the Internet was fine before the 2015 FCC rules, and that “… And I don’t know if you’d find anyone who said there was a problem with the internet.”

Well, someone did say there was a problem, because Comcast paid a $16 million fine to settle a lawsuit over blocking/throttling legal internet traffic – exactly the kind of behavior that would be permitted by the change of rules. As I said in my note to the reporter:

I don’t know whether he [the source at the ISP] is ignorant of history, or simply baldly saying things that are known to be false, but a quick google of “Comcast throttle bittorrent” will turn up copious evidence that some ISPs were throttling the internet in those “good old days”. See, for example, these two articles that offer technical details of the Comcast case:

Wired: https://www.wired.com/2007/11/comcast-sued-ov/ and

ArsTechnica: https://arstechnica.com/tech-policy/2009/12/comcast-throws-16-million-at-p2p-throttling-settlement/

This behavior by Comcast is the best documented, but I believe more research would turn up more ISPs who dabbled in various kinds of throttling before the Title II language went into effect.

I encouraged the reporter to update the story with a reaction to this information from his source at the ISP.

Netflow Collectors for Home Networks

Update – November 2017: Added descriptions for the other tools I had investigated.
Update – October 2018: Although it’s not based on Netflow, Al Caughey’s YAMon provides a good view of the traffic flowing through an OpenWrt or DD-WRT router. I use it myself.

Now that the LEDE Project has an official release, I hungered for a way to see what kinds of traffic are going through my network. I wanted to answer the question, “Who’s hogging the bandwidth?” To do that, I needed a Netflow Collector.

A Netflow Collector is a program that collects flow records from routers to show the kinds and volumes of traffic that passed through the router. The collector adds those flow records to its internal database, and lets you search/display the data. (You also need to configure your router to send (“export”) flow records to the collector. My experiments all employ the softflowd Netflow Exporter, a standard package you can install on your LEDE router.)
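
For a concrete sketch, here’s roughly what installing and configuring softflowd looks like on a LEDE/OpenWrt router. (The collector address 192.168.1.50:2055 is a placeholder for wherever your collector listens; check /etc/config/softflowd for the full set of options.)

# On the router: install the exporter
opkg update
opkg install softflowd

# Watch the LAN bridge and export flow records to the collector
# (the collector address below is a placeholder)
uci set softflowd.@softflowd[0].interface='br-lan'
uci set softflowd.@softflowd[0].host_port='192.168.1.50:2055'
uci set softflowd.@softflowd[0].enabled='1'
uci commit softflowd
/etc/init.d/softflowd restart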

In an earlier life, I used a slick commercial Netflow monitoring program. But it wasn’t free, so it isn’t something that I can recommend to people for their home networks.

There are many open-source Netflow collectors, with varying degrees of ease of installation, ease of use, and features. Most have install scripts that show the steps required to install them on an Ubuntu or CentOS machine, but they are fussy, and they require a freestanding computer (or VM) to run on.

Consequently, I created Docker containers that have all the essential packages/modules pre-configured. This means that you can simply install the Docker container, launch it on a computer that’s continually running, and let it collect the data.
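
In practice that boils down to something like this (the image name here is a placeholder – each post in the series gives the real invocation for its collector):

# Run a collector container: listen for Netflow records on UDP 2055,
# and serve the web GUI on port 80 (image name is a placeholder)
docker run -d --name netflow-collector \
  -p 2055:2055/udp -p 80:80 \
  example/netflow-collector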

This is the first of a series of postings about Netflow Collectors. They include:

  • Webview Netflow Reporter Netflow collector and web-based display program. Makes it easy to see fine-grained information about traffic. More…
  • NFSEN/NFDUMP Netflow collector and web-based display program. Provides attractive graphs, and automatically detects Netflow exporters (so you can skip one configuration step.) More…
  • FlowViewer Another Netflow Collector with web-based GUI. I created a Docker Container for FlowViewer
  • FlowBAT A Javascript Netflow collector and display program. This requires an old version of Meteor (0.9.1), and seems not to be currently maintained. The Github repo for FlowBAT has been updated to install using the required (old) version of Meteor.
  • DDWarden This claims to work with DD-WRT’s rflow protocol (very similar to Netflow v5). I didn’t investigate further because I wanted something that works with LEDE/OpenWrt.
  • Generating Netflow Datagrams A few ways to generate Netflow data: softflowd to run on LEDE/OpenWrt routers, and nflow-generator to send mock data in the absence of real traffic. (See the sketch below.)
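
As a taste of that last item, nflow-generator is available as a Docker image, so generating mock flow records toward a collector looks roughly like this (the collector address is a placeholder; check the project’s README for the current flags):

# Send mock Netflow v5 records to a collector at 192.168.1.50:2055
docker run -it --rm networkstatic/nflow-generator \
  -t 192.168.1.50 -p 2055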

Net Neutrality – Contacting the Congress (update)

The Battle for the Net site https://www.battleforthenet.com/ no longer seems to have the telephone form(!)

But… Boing Boing does. Go to https://boingboing.net/. You’ll see a popup window with a place to enter your phone number. Click OK, and they pop up a script on-screen.

They call you, you answer, then you supply your zip code.

Then they place calls to each of your legislators (in the House and Senate); after that, if you have time, they call the offices of Mitch McConnell, Chuck Schumer, and other leaders, so you can deliver the message.

I say my name, home town, and then ask that the FCC preserve the current Title II Net Neutrality rules. The staffer who answers is gonna be busy – you might chat them up though to see if they’re getting slammed. (Mitch McConnell’s office wasn’t even answering(!))