Tuesday, September 9, 2025

Passport Cards as Proof of Citizenship

Passport card issued November 2024

In November 2024 we ordered Passport Cards for the first time. The cards arrived 18 days after we mailed in the forms, without paying for expedited service or even express mail.

The Passport Card is a stiff plastic card, slightly thicker than our driver's licenses. It is specifically not valid for international air travel, though it can be used to board a domestic flight and is REAL ID compliant. It cost $30, versus $130 for a Passport Book, at least as of late 2024 when we ordered.

Most importantly though: it is much more reasonable to have a Passport Card with you at all times than it would be to carry around a Passport Book, and the card is a valid proof of citizenship. It can be used as identification within the United States, can be used for land and sea travel within the Americas (Canada, Mexico, the Caribbean, and Bermuda), and will allow re-entry into the US even if you have lost the regular Passport Book.

One caution: when it comes time to renew, both the Passport Book and the Passport Card need to be turned in. Take good care of both; if one is lost, renewing the other becomes a Lost Passport event, which requires forms DS-64 and DS-11 to replace. The DS-11 requires birth certificates and other proof of citizenship, just as when getting the passport for the first time.

A Passport Card is a good way to have proof of citizenship with you at all times.

Monday, September 8, 2025

Consulate Appointments for Reisepässe

Hand holding four German Reisepässe

Upon acceptance of a Staatsangehörigkeit § 5 declaration, making one a German citizen, the next step is generally to order a German passport, called a Reisepass. This requires filling out the form and bringing the Urkunde über den Erwerb der deutschen Staatsangehörigkeit durch Erklärung (the certificate of acquisition of German citizenship by declaration) and passport photos to the responsible Consulate for a passport appointment.

It can be difficult to get a passport appointment. You keep checking the site and there are never any appointment slots available.

German Consulates around the world add new appointment slots every weekday at midnight, German time. In California, for example, that is 3pm. If you start polling the appointment site at 2:59pm on Sunday, you have the best chance of seeing new appointments appear and grabbing one before they are all gone. Note that Daylight Saving Time starts and ends on different dates in Europe and the US, so the two time zones are not the same number of hours apart all year.
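If you would rather compute the conversion than memorize it, a few lines of Go (my own illustration, nothing the Consulate provides) will do the Daylight Saving arithmetic using the IANA time zone database:

package main

import (
	"fmt"
	"time"
)

func main() {
	berlin, err := time.LoadLocation("Europe/Berlin")
	if err != nil {
		panic(err)
	}
	pacific, err := time.LoadLocation("America/Los_Angeles")
	if err != nil {
		panic(err)
	}

	// Next midnight in Berlin, computed on Berlin's own calendar.
	now := time.Now().In(berlin)
	next := time.Date(now.Year(), now.Month(), now.Day(), 0, 0, 0, 0, berlin).AddDate(0, 0, 1)

	fmt.Println("Slots open (Berlin):    ", next)
	fmt.Println("Slots open (California):", next.In(pacific))
}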

There are Honorary Consuls in a number of cities who can make copies of your documentation and forward them to the Consulate; it may be easier to get an appointment with an Honorary Consul if your Consulate is swamped.

Tuesday, September 2, 2025

LLMs to Blaze a Trail

As an engineering executive there are a few ideas and practices which I reinforce via repetition to the team, either explicitly at the start of a recurring meeting or implicitly by bringing them up whenever relevant. The first of these ideas is:

The product is not the code, not the features, not the designs.
    The product is that people can use the service for things that are important to them.
The business is not the code, not the features, not the designs.
    The business is that people can use the service for things that are valuable to them.


But the topic of this post is not that. The topic is the second thing I frequently reinforce via repetition:

Get something, anything, working end to end as quickly as you can.
Not even a minimum viable thing. Any thing.


It has been my experience that, as developers, we tend to focus on one area of a system, exploring its requirements and building it out until we feel confident we understand what else will need to be done, before moving on to the next piece. This results in a system where the understanding and the plan for development grow by accretion, each piece layered atop the previous, which is left undisturbed by later developments. We might go back and harmonize all of them later... maybe.

It has also been my experience that everything starts progressing more quickly once the system does something, anything, end to end.

  1. We gain perspective on how the whole system will work and apply it to everything we do subsequently.
  2. We can make a change and see it function all the way through; enthusiasm improves productivity.
  3. Multiple people can work on a system in parallel far more effectively when they can all see the impacts of each other's work.

Thus:

Get something, anything, working end to end as quickly as you can.
Not even a minimum viable thing. Any thing.


LLMs to Blaze a Trail

With the maturing capabilities of LLM code generation, I tried an experiment with Claude Code. At Google, one of the orientation classes was to construct a web scraper. I asked Claude Code to build a scraper, but an even simpler one: scrape a single metric.

In a new scraper directory, create a go program which will scrape a web page formatted
in prometheus metrics format, and extract a floating point value labeled "example"

Create an SQL schema for a timeseries, with columns for a timestamp and a floating point value.

Have scraper connect to a Postgres database and write each sample it collects to the database.

In a new webui/frontend directory, create a web page using React and typescript which will
poll a backend server for changes in a loop and display rows of timeseries data with timestamp,
sample name, and value.

In a new webui/backend directory, create a go program which will handle queries from
webui/frontend and fetch timeseries data from the postgres database.

A classic hacker stock photo in a darkened room sitting in front of a laptop wearing a hoodie and mask, except the person typing is a robot

It produced a small, functional implementation.

scraper: 123 lines of Go
backend: 169 lines of Go
frontend: 127 lines of TypeScript, 100 lines of CSS, 43 lines of HTML
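To give a sense of what the scraper half amounts to, here is a minimal sketch along the same lines. This is my own illustration rather than Claude Code's output, and it assumes a Postgres table samples(ts timestamptz, value double precision) plus the github.com/lib/pq driver:

package main

import (
	"bufio"
	"database/sql"
	"fmt"
	"log"
	"net/http"
	"strconv"
	"strings"
	"time"

	_ "github.com/lib/pq" // Postgres driver
)

// scrapeMetric fetches a Prometheus text-format page and returns the value
// of the (unlabeled) metric with the given name.
func scrapeMetric(url, name string) (float64, error) {
	resp, err := http.Get(url)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue // skip comments and blank lines
		}
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == name {
			return strconv.ParseFloat(fields[1], 64)
		}
	}
	return 0, fmt.Errorf("metric %q not found", name)
}

func main() {
	// Assumed connection string; the schema is the timestamp + value table from the prompt.
	db, err := sql.Open("postgres", "postgres://localhost/timeseries?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	for {
		v, err := scrapeMetric("http://localhost:9100/metrics", "example")
		if err != nil {
			log.Print(err)
		} else if _, err := db.Exec(
			"INSERT INTO samples (ts, value) VALUES ($1, $2)", time.Now(), v); err != nil {
			log.Print(err)
		}
		time.Sleep(15 * time.Second)
	}
}

Parsing the text format by hand keeps the sketch dependency-free; for anything beyond a single unlabeled metric, the Prometheus common libraries provide a proper expfmt parser.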

A few interesting tidbits:

  • It produced no unit tests for the Go code. I didn't tell it to.
  • It did produce unit tests for the TypeScript code, even though I did not tell it to. I think this speaks well of the TypeScript community: the training data is infused with testing as an expected practice.
  • I wish wish wish that Claude Code would automatically populate a .gitignore for node_modules. Not for the first time, I checked 437 megabytes of code into git and had to rewrite the history to remove it.


Unit Tests

Having no tests at all sets a bad example. I don't actually want to encourage building a large system test suite at this stage of a project, as the effort of keeping it updated while the system evolves is likely to outweigh its value. Yet I do want to set the example by ensuring there is something.

In the scraper directory, keep the main() function in main.go but move the rest of the code
to a scrape.go file. Write tests for scrape.go with a local prometheus server and in-memory
database. Check that metrics are correctly stored in the database.

Claude Code generated 377 lines of test cases, including cases for scraping a single value and for scraping several. Most of the code was there to set up an in-memory database using sqlite and to run a local Prometheus server.
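For flavor, here is a hedged sketch of what one such test might look like, using net/http/httptest to stand in for the local Prometheus server. scrapeMetric is the hypothetical helper from the earlier sketch, and the database assertions the real tests included are omitted:

package main

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestScrapeMetric(t *testing.T) {
	// A fixed metrics page served from an in-process test server.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("# HELP example An example metric\n# TYPE example gauge\nexample 42.5\n"))
	}))
	defer srv.Close()

	got, err := scrapeMetric(srv.URL, "example")
	if err != nil {
		t.Fatal(err)
	}
	if got != 42.5 {
		t.Errorf("scrapeMetric(%q) = %v, want 42.5", srv.URL, got)
	}
}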

The cost of the first prompt to generate the system and the second prompt to add unit tests: 93 cents.



Non-trivial example

That example was pretty contrived. How about an example of a more realistic system which:

  1. Implements a protocol connecting to a legacy communications system.
  2. Implements a set of modern protocols connecting to current Internet communications infrastructure, to forward messages to and from the legacy protocol.
  3. Has a management layer watching all of the connections and can stop or restart them as needed.
  4. Has a dashboard and console showing the status and configuration of the system.

Can it produce this? Well... not exactly. I kind of cheated: this is the first thing I attempted; I made up the contrived system later.

The problem was that first step. Claude Code was not much help in producing the first piece, the connection to the legacy system. The tasks there were more like engineering archaeology:

  • Trying variations on the digest hash function until the remote system suddenly returned 200 OK.
  • Figuring out which portions of the poorly documented header fields were actually implemented.
  • Diagnosing failures when the only indication we get is "Invalid" with no further information about what was invalid.

There just isn't any training data for this, and so trying to rapidly get to a functioning end-to-end system entirely via code generation didn't work. I was able to work on the management layer and the dashboard and so on while still debugging the first piece, but the system as a whole only started working once that first piece was done.

Could I have set that first piece aside with a mockup, and worked on the rest? Probably, but it was just me, not a team, and the first piece was the biggest risk. I focused on eliminating that risk.
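For what it's worth, setting it aside would have taken the usual shape: put a narrow interface in front of the legacy connection and hand the rest of the system a mock. A sketch of that shape, with hypothetical names that are not from the real system:

package legacy

// LegacyLink is the seam between the legacy connector and everything else.
// (Hypothetical interface for illustration; the real protocol is not shown.)
type LegacyLink interface {
	Send(msg []byte) error
	Receive() ([]byte, error)
}

// mockLink lets the forwarding logic, management layer, and dashboard be
// built end to end before the real protocol work is finished.
type mockLink struct {
	inbox chan []byte
}

func (m *mockLink) Send(msg []byte) error    { return nil }            // pretend the send worked
func (m *mockLink) Receive() ([]byte, error) { return <-m.inbox, nil } // replay canned traffic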

In an engineering team, I think I would approach this with a small team whose job is to sketch out the overall system. It might be composed entirely of senior engineers, or at least led by a quite senior engineer, and tasked with identifying and quantifying risks and planning out the system. That team could multiply its efforts by using LLMs to help generate the better understood portions of the system.

Monday, September 1, 2025

On the Persistence of Human Memory

Tell me this looks wrong to you, too.

A screenshot of green Save and red Cancel buttons where the Save button is quite obviously lower on the screen than Cancel

Claude Code doesn't see it. I mean, of course Claude Code doesn't see it; it has no eyes or other senses. Nonetheless I tried to get Claude Code to fix it by leading it to a solution.

In frontend/ in the Delivery page, align the Save and Cancel buttons vertically.

In frontend/ in the Delivery page, remove the height property from the Save and Cancel
buttons. Put both buttons inside a div, and set the height of the div to 40px.

Neither of these fixed it, because these were not the problem. The actual problem was:

.save-button,
.add-button {
  background-color: #48bb78;
  margin-top: 1rem;
}

This was left over from when the button was elsewhere on the page, and it was not removed when the button moved to sit next to the Cancel button. Poking around with Chrome's Developer Tools and inspecting the Elements panel identified it.


On the Persistence of Human Memory

One thing I am finding is that my memory of code generated with the help of an LLM fades much more quickly. Some portions of this system were not amenable to help from Claude Code: things which involve low-level interoperability with existing and legacy systems. There is no relevant material in the training set, and Claude Code could not help with the iterative debugging of staring at errors from the legacy system to figure out what to do next.

Those portions of the codebase, those developed with blood and sweat and tears, remain clear in my memory. Even months later I can predict how they will be impacted by other changes and what will need to be done.

That is not true of the portions which the LLM generated. Continuing the analogy of treating it as an early-career developer: I only reviewed that code, I didn't write it. As with any code review, the memory of how it works fades much more quickly than if I had actually dug into the work myself.

(This is still better than Claude Code itself, which retains no memory at all of how the code has evolved and instead rediscovers it afresh at the start of each session.)

Treating an LLM like an early career programming partner can provide large increases in productivity, but it also means that one has less personal recollection of the windy path the code took to get to its current state. One must be able to go spelunking. This isn't that much different from a codebase which one has worked on over a long period: little detailed memory of specific portions of the code remain, but an overall sense of the codebase is retained much longer.