Friday, July 23, 2021


Inconceivable: zero hits for "Kthubernetes"?

Unacceptable. This will not stand.

Thursday, November 28, 2019

gdaldem color-relief is my BFF

When working with geospatial data and maps, an effective way to communicate in email is to send images highlighting the specific thing one wants to point out. For example, showing areas where the dominant land cover is lichens:

gdaldem, though named for its role in processing Digital Elevation Models, is a useful tool even for things which have nothing to do with elevation. It has a color-relief subcommand intended for color gradients of terrain, but which can be used for lots of purposes. It takes a simple text file mapping pixel values in the original to colors in the output. For example, my lichen image above used:

0 black
1 grey
139 grey
140 red
141 grey
209 grey
210 black
211 grey

This means:

  • pixel value of zero (NoData in the original image) should be colored black.
  • water is pixel value 210 in the original image, so make it black as well.
  • the land cover class for lichen in the original image is 140, so color it red.
  • grey is set for 1 and 139, for 141 and 209, and for 211 because by default gdaldem color-relief creates a gradient of colors between the values specified; bracketing each special value with grey keeps those areas solid grey.
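Putting the pieces together, an invocation might look like the sketch below. The filenames and the helper function are placeholders for illustration, not the exact commands used for the image above; the -of PNG option selects the output format.

```python
import subprocess

# The color file is exactly the mapping from the post: NoData (0) and
# water (210) black, the lichen class (140) red, everything else grey.
color_map = "\n".join([
    "0 black", "1 grey", "139 grey", "140 red",
    "141 grey", "209 grey", "210 black", "211 grey",
]) + "\n"

def colorize(src_tif, dst_png, colors_path="colors.txt"):
    """Write the color file and run gdaldem color-relief over the raster."""
    with open(colors_path, "w") as f:
        f.write(color_map)
    # Placeholder filenames; -of PNG chooses the output driver.
    subprocess.run(["gdaldem", "color-relief", "-of", "PNG",
                    src_tif, colors_path, dst_png], check=True)

# colorize("landcover.tif", "lichen.png")
```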

The original image was:

Monday, September 9, 2019

Mini-split Heat Pump Installation

We live in a home which was built in 1963, older than we are. The structure has some great attributes and some not-so-great attributes. Among the not-so-great is the lack of air conditioning, and a pair of ancient furnaces with very high gas bills. We set out to do something about this.

  • We wanted to add air conditioning.
  • We wanted a much more efficient heating system.
  • The ducts were 50+ years old and clearly leaky, with dirty patches in the insulation at each joint where the duct has been pulling air through for decades. They would not pass current inspections.

As essentially none of the existing HVAC would remain, we could consider options which didn't preserve any of it. We decided to go with a ductless mini-split heat pump system.

A heat pump is an idea which has been around for a while. It operates similarly to an air conditioner in that it repeatedly compresses and expands a refrigerant, circulating it in and out of the house while doing so. The difference is that where an air conditioner always compresses the refrigerant outside of the house to release heat, the heat pump can also reverse the process to release heat inside. A heat pump can either heat or cool based on where it allows the refrigerant to expand.


There are heat pumps which can replace a central furnace and hook up to the existing ducting, but as our ducts were in terrible shape we opted for a mini-split system. There is no central air handler and there are no air ducts in this system; instead, individual units in each room are connected to a compressor outside.

In each major room a head unit is mounted high on the wall, and contains refrigerant coils and fans. Air is circulated within the room, not drawn from nor exhausted to the outside.


The head unit connects to power and two refrigerant lines. This picture was taken during the installation, with the wall open and the two copper refrigerant lines not yet hooked to the head unit.

Note that there is no air duct: air is not moved through the home with a mini-split, only refrigerant. The head unit can cool or heat air drawn from the room, using the refrigerant to pump heat in or out of the house.

I emphasize the lack of ducts because it was a big mental hurdle for us. In a retrofit the heat pump units can go anywhere; placement is not constrained to where ducts currently run.


The head unit contains a filter in front of the fan, but the activated charcoal portion of the filter covers only a small portion of the area. We have no way to measure the effectiveness of this filter, but we are skeptical as it seems like air can flow around it easily.

The refrigerant connections are quite small, half inch diameter copper pipes plus insulation, so they can run between studs in the walls and under the house. They all eventually lead to an outdoor unit, which contains a fan and radiating fins like an air conditioner outdoor unit would.

The outdoor unit is available in a few capacities, rated in British Thermal Units (BTUs) like 20k - 50k. The head units inside the home are also rated in BTUs, from 9k through 24k, and one adds up the rating of the head units to determine the capacity of outdoor unit required.

Our home needed two outdoor units, a larger 50k BTU unit for the upstairs and smaller 20k unit for the lower level.
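As a quick illustration of that sizing arithmetic, here is the sum for the head units in our system (part numbers are listed at the end of this post); the final matching of head-unit totals to outdoor capacity is something the installer works out:

```python
# Head units: 3 x 15k BTU, 1 x 12k BTU, 3 x 6k BTU.
head_units_btu = [15_000] * 3 + [12_000] + [6_000] * 3
total_btu = sum(head_units_btu)
print(total_btu)  # 75000 BTU across all head units
```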

With a furnace or central air, a single thermostat controls the HVAC. That thermostat might be very sophisticated with multiple room sensors, but there is a single central point where control can be implemented.

With ductless mini-split systems, there is no single point of control. Each head unit implements its own local control and can run its own schedule. The system is supplied with a handheld remote control for each head unit. The remote appears to be infrared, and it is not strongly paired with a given head unit: if you take it into another room and point it at the head unit there, it will control that unit instead.

The remote is quite complicated. It can change the mode from heating to cooling to fan (and others). It can program weekly schedules. For some models of head unit, it can configure an occupancy sensor feature to aim the airflow directly at people in the room and turn off if nobody is present. Etc, etc.

We use the remote controls for all but one of the head units. For the last unit, we had reasons not to change how the HVAC system is operated and wanted to retain the existing thermostat on the wall exactly as it was. Mitsubishi has an interface to allow this, connecting any 5-wire thermostat to control a single head unit.

A few things we wish we'd known at the start, in case anyone reading this is planning their own heat pump installation:

  • All of the head units attached to a given outdoor unit have to be cooling or heating, not a mixture of both. We got lucky in this: we needed separate outdoor units for each level, and this matches our usage as the lower level doesn't get so warm while the upstairs needs cooling during the summer.
  • The smaller outdoor units can be attached to the side of the house on a bracket. The larger outdoor units require a concrete pad to be poured. Had we known this we might have chosen to go with three smaller outdoor units and had them all mounted to the side of the house.

At the time of this writing we've had the system for five months, through our first summer. It has been great having the option to cool the house on those days which need it. We are just heading into the cooler months, and we're hoping to see a substantial reduction in the energy bill.

We're quite happy with the system. Heat pumps are also an effective means to help with global warming by improving efficiency and reducing use of methane, and are #42 on Project Drawdown's list.

Our system consists of:

  • 3 x MSZ-FH15 15k BTU head units
  • 1 x MSZ-FH12 12k BTU head unit
  • 3 x MSZ-FH06 6k BTU head units
  • 1 x MXZ-8C48 48k BTU outdoor unit
  • 1 x MXZ-2C20 20k BTU outdoor unit
  • 1 x PAC-US444CN thermostat interface
  • electrical panel work to rearrange breakers and install new 40A and 25A circuits
  • permits and fees
  • demo and removal of old ducts and furnace equipment

The total cost was $31,665 for equipment and installation, in the SF Bay Area where the cost of living is high. Our gas bill in the winter with the old furnaces was often $400/month, which should decline substantially with an electric heat pump powered by solar panels on the roof.

The system was provided and installed by Alternative HVAC Solutions in San Carlos, CA, and we were quite pleased with their work.

Monday, August 12, 2019

LED bulbs for FLOS Fucsia light fixtures

The home we currently live in had a FLOS Fucsia 8 light fixture in the dining room when we moved in. The look of the fixture and the gentle chimes it makes when a breeze blows in from outside are quite appealing.

However we decided not to keep the light bulbs it came with, Philips Spotone NR50 25 Watt halogen bulbs. Replacing them with LEDs turned out to be considerably more difficult than we expected; this post is intended to help anyone else with one of these fixtures who is looking for options.

The base of the bulbs is one not commonly used in the United States: E14. A "candelabra" bulb is E12, a regular bulb is E26. In this nomenclature the E is for "Edison" and refers to the screw-in base, and the number is the width in millimeters. The base of the bulbs used in the FLOS Fucsia line of fixtures is thus slightly larger than a candelabra bulb's.

Though not common in the United States, E14 bulbs are quite common in Europe, which means that most of the E14 bulbs you find are designed for the European voltage of 220V and not the US voltage of 120V. Bulbs which are not dimmable will often work all the way down to 85V, but dimmable bulbs are calibrated for 220V and when powered at 120V they are fully dim or all the way off.

It took several tries to find dimmable bulbs which work at the US voltage in this fixture:

  • we first bought non-dimmable bulbs from EBD Lighting. These worked, but we missed being able to have a more intimate dinner with the lights turned low.
  • we then unintentionally bought dimmable bulbs for European voltage. These did not work at all at 120V; the light would not turn on.
  • a bit later, we found the perfect bulbs: AAMSCO is a specialty vendor which makes an LED version of the E14 NR50 spotlight which is dimmable at 120 volts. It is about 4x as expensive as most LED bulbs, but a perfect fit for this fixture, and we felt it was worth splurging. We bought a box of 10 bulbs at a small discount.

This image shows the comparison between the original Philips Spotone halogen bulbs, the non-dimmable EBD Lighting bulbs, and the AAMSCO dimmable bulbs. The EBD bulbs have a notably bluer temperature and are considerably brighter than the other two. The AAMSCO LED bulbs roughly match the temperature and light output of the original halogens.

The climate change connection: energy savings from LED lights is the #33 solution for global warming on Project Drawdown's list.

Monday, July 1, 2019

Discourse SSO with GitLab

A previous post discussed how to set up a Discourse forum to run as a service within JupyterHub. Though this makes the forum appear within the URL space of the JupyterHub server, it still runs as a completely separate service with its own notion of accounts and identities. We're tackling that in this post, describing how to make Discourse use single sign on (SSO) from GitLab, which is also how we set up JupyterHub accounts to work.

The previous post went over creating a JupyterHub service configuration for the Discourse service. We now add a second service, for the SSO server. This example is for The Littlest JupyterHub, where we create a snippet in /opt/tljh/config/jupyterhub_config.d/:

c.JupyterHub.services = [
    {
        'name': 'forum',
        'url': '',
        'api_token': 'no_token',
    },
    {
        'name': 'discourse-sso',
        'url': '',
        'command': ['/opt/tljh/user/bin/flask', 'run', '--port=10101'],
        'environment': {
            'FLASK_APP': '/opt/tljh/hub/bin/',
            'GITLAB_CLIENT_ID': '...',
            'GITLAB_CLIENT_SECRET': '...',
            'EXTERNAL_BASE_URL': '',
            'DISCOURSE_SECRET': '...',
        },
    },
]

Code for the service is at discourse-gitlab-sso. It provides a Python Flask-based service which:

  • Listens for SSO redirects from Discourse, which arrive at discourse_sso() and result in redirecting the browser to GitLab with an OAuth request.
  • GitLab redirects the browser back to gitlab_oauth_callback() with the OAuth response. Python code then sends several followup requests to get a token and fetch information about the user from GitLab.
  • gitlab_user_to_discourse() maps the information retrieved from GitLab to the format expected by Discourse, and the browser is finally redirected back to Discourse with the SSO information encoded.
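For reference, Discourse's SSO exchange uses a simple signed-payload scheme: the sso parameter is a base64-encoded query string, and sig is a hex HMAC-SHA256 over it. A minimal sketch of the signing and verification follows; the secret here is a placeholder, and the discourse-gitlab-sso code should be treated as the actual implementation:

```python
import base64
import hashlib
import hmac
from urllib.parse import parse_qs, urlencode

# Placeholder: the real value is the sso secret shared with Discourse.
DISCOURSE_SECRET = b"placeholder-secret"

def verify_sso_request(sso, sig):
    """Check the HMAC-SHA256 signature on an incoming Discourse SSO payload,
    then decode the base64-encoded query string inside it."""
    expected = hmac.new(DISCOURSE_SECRET, sso.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        raise ValueError("bad SSO signature")
    return parse_qs(base64.b64decode(sso).decode())

def sign_sso_response(params):
    """Encode user information as a signed payload to send back to Discourse."""
    payload = base64.b64encode(urlencode(params).encode()).decode()
    sig = hmac.new(DISCOURSE_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"sso": payload, "sig": sig}
```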

Sunday, June 9, 2019

ipywidgets.Text background-color

This took some time to figure out so I'll post it here in hope it helps someone else. To set the color of an ipywidgets Text box using CSS, you need to:

  • add a CSS class name to the Text widget
  • render an HTML <style> block for an <input> element descendant of that class
  • Set !important on the background-color, to prevent it being overridden by a later Jupyter-declared style.

For my case the Text widget was being included in a VBox, allowing the HTML widget containing the <style> to be included in the list.

data_input_style = "<style>.data_input input { background-color:#D0F0D0 !important; }</style>"
value_entry = ipywidgets.Text(value='')
value_entry.add_class('data_input')

children = [ipywidgets.HTML(value=data_input_style), value_entry]
ipywidgets.VBox(children=children)

Wednesday, June 5, 2019

Discourse as a JupyterHub Service

JupyterHub is a system for managing cloud-hosted Jupyter Notebooks, allowing users to log in and spawn a notebook or JupyterLab instance. JupyterHub has a notion of Services, separate processes either started by or at least managed by JupyterHub alongside the notebook instances. All JupyterHub Services appear under the hub's /services/ URL path.

Discourse is a software package for a discussion forum, and quite nice compared to a number of the alternatives. Discourse is distributed as a Docker container, and its documentation strongly recommends using that container rather than installing it any other way. When running by itself on a server, Discourse uses docker-proxy to forward HTTP and HTTPS connections from the external IP address through to the container. In order to run Discourse on the same server as JupyterHub, we need to remove the docker-proxy and let the forwarding be handled by JupyterHub's front-end Traefik reverse proxy, which is already bound to ports 80 and 443 on a hub server.

To run alongside JupyterHub we need to reconfigure Discourse to not use docker-proxy. The docker-proxy passes through SSL to be terminated within the container, while Traefik has to be able to see the URL path component in order to route the request, so we're also moving SSL termination out of Discourse and into Traefik.

Some searching turned up several articles which looked relevant, but did not turn out to be applicable. To save the trouble:

  • Running other websites on the same machine as Discourse explains how to set up NGINX as a reverse proxy and use a unix domain socket to communicate from NGINX to Discourse. JupyterHub checks the syntax of the URL configured for its services; I didn't find a way to make a Unix socket work within the JupyterHub Services mechanism.
  • Discourse behind Traefik describes how to create Docker networks via Traefik configuration. Though this might have worked, I found it much easier to use HTTP over the docker0 interface.

For bootstrapping, discourse provides a discourse-setup script to ask a few questions and create an app.yml file used to drive construction of the docker container. discourse-setup fails if there is already a webserver on port 80, and I did not find a reasonable alternative to it. In my case, I briefly shut down the JupyterHub server and ran discourse-setup. Running discourse-setup on a separate VM and copying the resulting /var/discourse would likely also work.

Starting from the /var/discourse created by discourse-setup, perform the following steps to make it run as a JupyterHub service.

  1. cd /var/discourse
  2. edit containers/app.yml to let Traefik handle the reverse-proxy function. We comment out the external port in the expose section, which will disable docker-proxy and let us handle the reverse proxy function using traefik.
    ## which TCP/IP ports should this container expose?
    ## If you want Discourse to share a port with another
    ## webserver like Apache or nginx,
    ## see for details
    expose:
    #  - "80:80"   # http
    #  - "443:443" # https
      - "80"
    in the "env:" section at the bottom:
      ## TODO: The domain name this Discourse instance will respond to
      ## Required. Discourse will not work with a bare IP number.
      # Running Discourse as a JupyterHub Service
      DISCOURSE_RELATIVE_URL_ROOT: /services/discourse
    Replace the "run:" section with the recipe to adjust the URL path for /services/discourse:
    ## Any custom commands to run after building
    run:
        - exec:
            cd: $home
            cmd:
              - mkdir -p public/services/discourse
              - cd public/services/discourse && ln -s ../uploads && ln -s ../backups
        - replace:
           global: true
           filename: /etc/nginx/conf.d/discourse.conf
           from: proxy_pass http://discourse;
           to: |
              rewrite ^/(.*)$ /services/discourse/$1 break;
              proxy_pass http://discourse;
        - replace:
           filename: /etc/nginx/conf.d/discourse.conf
           from: etag off;
           to: |
              etag off;
              location /services/discourse {
                 rewrite ^/services/discourse/?(.*)$ /$1;
              }
        - replace:
             filename: /etc/nginx/conf.d/discourse.conf
             from: $proxy_add_x_forwarded_for
             to: $http_your_original_ip_header
             global: true
  3. Run:
    ./launcher rebuild app
    to construct a new docker container.
  4. Add the configuration for a Discourse service to JupyterHub. I'm using The Littlest JupyterHub, where we create a snippet in /opt/tljh/config/jupyterhub_config.d/
    Find the IP address to use within the output of "docker inspect app"; look in NetworkSettings for IPAddress and Ports.
    c.JupyterHub.services = [
        {
            'name': 'discourse',
            'url': '',
            'api_token': 'no_token',
        },
    ]
  5. Then restart JupyterHub with the new configuration:
    tljh-config reload
    tljh-config reload proxy

Discourse should now appear at /services/discourse on the hub.

If something doesn't work, logs can be found in:

sudo journalctl --since "1 hour ago" -u jupyterhub
sudo journalctl --since "1 hour ago" -u jupyterhub-proxy
sudo journalctl --since "1 hour ago" -u traefik

Monday, June 3, 2019

JupyterHub open lab notebook at login

Instructions for JupyterHub configuration state that to start JupyterLab by default, one should use a configuration of:

c.Spawner.args = ['--NotebookApp.default_url=/lab']

To start a classic Notebook by default, use:

c.Spawner.args = ['--NotebookApp.default_url=/tree']

To start the classic Notebook and open a specific ipynb file, use:

c.Spawner.args = ['--NotebookApp.default_url=/tree/path/to/file.ipynb']

One might therefore assume that opening /lab/path/to/file.ipynb would open a specific file in JupyterLab, but this does not work and results in an error. The correct configuration is /lab/tree:

c.Spawner.args = ['--NotebookApp.default_url=/lab/tree/path/to/file.ipynb']

Saturday, June 1, 2019

JupyterHub OAuth via setting scopes

Recently I set up a JupyterHub instance, a system for cloud-hosting Jupyter notebooks. JupyterHub supports authentication by a number of different mechanisms. As the code for the notebook is hosted on GitLab, I set up OAuth to GitLab as the main authentication mechanism.

Gitlab supports a number of scopes to limit what the granted OAuth token is allowed to do:

  • api: Grants complete read/write access to the API, including all groups and projects.
  • read_user: Grants read-only access to the authenticated user's profile through the /user API endpoint, which includes username, public email, and full name. Also grants access to read-only API endpoints under /users.
  • read_repository: Grants read-only access to repositories on private projects using Git-over-HTTP (not using the API).
  • write_repository: Grants read-write access to repositories on private projects using Git-over-HTTP (not using the API).
  • read_registry: Grants read-only access to container registry images on private projects.
  • sudo: Grants permission to perform API actions as any user in the system, when authenticated as an admin user.
  • openid: Grants permission to authenticate with GitLab using OpenID Connect. Also gives read-only access to the user's profile and group memberships.
  • profile: Grants read-only access to the user's profile data using OpenID Connect.
  • email: Grants read-only access to the user's primary email address using OpenID Connect.

However I found that if I didn't grant api permissions on the GitLab side, the authentication would always fail with "The requested scope is invalid, unknown, or malformed." It appears that the JupyterHub OAuth client was not requesting any specific scope, which defaults to "api" — far too powerful a permission to grant for this purpose, as it allows read/write access to everything when all we really need to know is that the user exists.

Setting the OAuth scope for the JupyterHub client to request turns out to be quite simple to do in the configuration, albeit not documented:

  c.GitLabOAuthenticator.scope = ['read_user']

A pull request to add documentation on this for GitLabOAuthenticator has been submitted.

Saturday, May 25, 2019

Adding groupings to TopoJSON files

GeoJSON is a structured format for encoding a variety of geographic data structures like land topology, governmental boundaries, etc. It has structures for points, lines, polygons, and discontiguous collections of all of them. GeoJSON has been in use since the late 2000s.

TopoJSON is a more recent extension to GeoJSON which brought a key innovation: instead of each region redundantly encoding its own copy of a shared boundary, the definitions of arcs are separated out from the collections of those arcs, allowing adjacent regions to reference the same data describing the border between them. Using TopoJSON therefore frequently results in much smaller files. It also adds delta-encoding, where each point is encoded as an offset from the previous point, which tends to produce smaller numbers in densely sampled areas. TopoJSON was added as part of D3.js, an extremely popular data visualization JavaScript library, and has thus spread rapidly.
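The delta encoding is easy to see with a toy example. The sketch below decodes a made-up arc; real TopoJSON files also quantize coordinates, which this ignores:

```python
def delta_decode(arc):
    """Expand a TopoJSON-style delta-encoded arc into absolute points."""
    points, x, y = [], 0, 0
    for dx, dy in arc:
        x += dx
        y += dy
        points.append([x, y])
    return points

# Each point after the first is stored as an offset from its predecessor,
# so runs of nearby points become small numbers:
print(delta_decode([[100, 0], [1, 0], [0, 1]]))  # [[100, 0], [101, 0], [101, 1]]
```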

Today's topic is creation of customized TopoJSON files for various purposes. Mike Bostock, one of the creators of D3, wrote a series of articles about command line tools available for working with cartographic data. I wanted to develop a visualization of climate model results for regions of the world like Latin America or the Middle East and Africa, and found these articles immensely helpful in creating a TopoJSON file to support this. Part 3, which introduces TopoJSON and the CLI tools to work with it, was especially helpful.



The overall process we'll cover today is:

  1. Annotate existing geometries in a TopoJSON file with a new grouping name.
  2. Merge the annotated geometries to create new groupings.
  3. Remove the original geometries and supporting topology data.
  4. Profit!

We'll step through this series of commands:

cat world-countries.json |\
        python |\
        topomerge regions=countries1 -k "d.region" |\
        topomerge countries1=countries1 -f "false" | toposimplify \
        > world_topo_regions.json


(Step 1) Annotate existing regions:

We start from world-countries.json provided by David Eldersveld under an MIT license. The file defines geometry for each country by name and country code:

"countries1": {
    "type": "GeometryCollection",
    "geometries": [
            "arcs": [...etc...
            "type": "MultiPolygon",
            "properties": {
                "name": "Argentina",
                "Alpha-2": "AR"
            "id": "ARG"

In Python, we create a tool with a mapping table of country names to the regions we want to define:

region_mapping = {
    "Albania": "Eastern Europe",
    "Algeria": "Middle East and Africa",
    "Argentina": "Latin America",
    # ...etc...
}

We read in the JSON, iterate over each country, and add a field for the region it is supposed to be in:

import json
import sys

d = json.load(sys.stdin)
for country in d['objects']['countries1']['geometries']:
    name = country['properties']['name']
    region = region_mapping[name]
    country['region'] = region

json.dump(obj=d, fp=sys.stdout, indent=4)

If one were to examine the JSON at this moment, there would be a new field:

"countries1": {
    "type": "GeometryCollection",
    "geometries": [
            "arcs": [...etc...
            "type": "MultiPolygon",
            "properties": {
                "name": "Argentina",
                "Alpha-2": "AR"
            "id": "ARG",
            "region": "Latin America"


(Step 2) Merge annotated regions: topomerge

topomerge is part of the topojson-client package of tools, and exists to manipulate geometries in TopoJSON files. We invoke topomerge to create new geometries using the field we just added.

topomerge regions=countries1 -k "d.region"

The "regions=countries1" argument means to use the source object "countries1" and to target a new "regions" object. The -k argument defines a key to use in creating the target objects, where d is the name of each source object being examined. We're tell it to use the 'region' field we added in step 1.

If we were to examine the JSON at this moment, the original "countries1" collection of objects would be present as well as a new "regions" collection of objects.

"objects": {
    "countries1": {
        "type": "GeometryCollection",
        "geometries": [
                "arcs": [...etc...
                "type": "MultiPolygon",
                "properties": {
                    "name": "Argentina",
                    "Alpha-2": "AR"
                "id": "ARG",
                "region": "Latin America"
        ...etc, etc...
    "regions": {
        "type": "GeometryCollection",
        "geometries": [
                "type": "MultiPolygon",
                "arcs": [...etc...
                "id": "Latin America"


(Step 3) Remove original regions

As we don't use the individual countries in this application, only regions, we can make the file smaller and the UI more responsive by removing the unneeded geometries. We use topomerge to remove the "countries1" objects:

topomerge countries1=countries1 -f "false"

As before, the "countries1=countries1" argument means to use the source object "countries1", and to target the same "countries1" object. We're overwriting it. The -f argument is a filter, which takes a limited JavaScript syntax to examine each object to determine whether to keep it. In our case we're removing all of the objects unconditionally, so we pass in false.

If we were to examine the JSON at this moment, we would see an empty "countries1" collection followed by the "regions" collection we created earlier.

"objects": {
    "countries1": {
        "type": "GeometryCollection",
        "geometries": []
    "regions": {
        "type": "GeometryCollection",
        "geometries": [
                "type": "MultiPolygon",
                "arcs": [...etc...
                "id": "Latin America"

However we're not quite done, as the arcs which define the geometry between all of those countries are still in the file, though not referenced by any object. We use toposimplify, part of the topojson-simplify package of tools, to remove the unreferenced arcs.

topomerge countries1=countries1 -f "false" | toposimplify


(Step 4) Profit!

That's it. We have a new TopoJSON file defining our regions. Rendered to PNG:

The JSON file viewed using GitHub's gist viewer requires a bit of explanation: the country boundaries seen there are rendered by GitHub from OpenStreetMap data. The country boundaries are not present in the JSON file we created, which contains only the regional boundaries as seen in the PNG file.