Tuesday, October 16, 2018

Carbon Capture: BECCS

BECCS is an acronym for Bio Energy with Carbon Capture and Storage. It uses plant material in a pyrolysis process to produce electricity. As discussed in the earlier post about biochar, the pyrolysis process produces three outputs:

  • a flammable, carbon-rich gas called syngas, with about half the energy density of natural gas.
  • the solid char, a charcoal which has a much higher concentration of carbon than the original plant material.
  • a thick tar referred to as bio-oil, which is much higher in oxygen than petroleum but otherwise similar.

BECCS is a commercial operation to pyrolyze organic material at scale, usually by growing trees specifically for the purpose, and then to:

  • generate electricity by burning the syngas
  • use the char to keep the carbon it holds sequestered for a significant length of time. Though this might involve burial deep underground, char is also useful as a soil additive and takes many years to biodegrade. A substantial amount of carbon slowly returning to the environment is manageable, provided it happens over a long enough timescale.
  • set aside the bio-oil, which currently has little commercial use but great potential, as it could displace petroleum in a number of chemical processes.

Because the feedstock for BECCS is newly grown vegetative material, the fuel cycle is inherently carbon neutral. If the char keeps carbon out of the atmosphere for a lengthy period of time, BECCS becomes carbon negative: it draws down carbon from the environment while providing revenue via power generation to fund its own operation.

BECCS gets a great deal of attention because it is already operating at substantial scale, removing hundreds of kilotons of carbon dioxide from the atmosphere each year. This is a few orders of magnitude short of where we need to get, but is proof that the process works.

The existing BECCS installations capture byproducts of existing agricultural processes, like fermenting corn for ethanol production. A 2018 analysis of geographic data estimated that BECCS could draw down approximately 100 megatons of carbon dioxide per year by 2020 using available land area.
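To make the "few orders of magnitude" claim concrete, here is a quick back-of-envelope check sketched in Python. Both figures are rough assumptions for illustration, not sourced data:

```python
import math

# Illustrative, assumed figures -- not measured data:
current_tons = 3e5    # "hundreds of kilotons" of CO2 captured per year today
needed_tons = 1e10    # gigaton-scale annual drawdown often cited as the goal

scale_up = needed_tons / current_tons
orders = math.log10(scale_up)
print(f"required scale-up: {scale_up:,.0f}x, about {orders:.1f} orders of magnitude")
```

A scale-up in the tens of thousands, roughly four and a half orders of magnitude, matches the qualitative claim above.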

Thursday, October 11, 2018

Carbon Capture: Enhanced Weathering

Chemical weathering is the process by which various types of rock are broken down by exposure to water, oxygen, and/or carbon dioxide. For our purposes, the most relevant forms of weathering involve uptake of carbon dioxide. CO2 dissolved in rainwater forms carbonic acid, which is quite mild as acids go but sufficient over time to dissolve minerals from rock. Calcium and magnesium silicate rocks exposed to carbonic acid release calcium ions and silicates, and form HCO3- bicarbonate.

Occurring naturally, this chemical reaction takes place gradually over millions of years. Most of the bicarbonate thus produced eventually washes out to the ocean, where various organisms like coral pull carbon and dissolved calcium out of the water to make shells. The rest of the bicarbonate gradually settles into the deep ocean and eventually adds to the limestone at the ocean floor.

Enhanced weathering is a plan by which humans can accelerate this process, by grinding the appropriate types of rock into particles to maximize surface area and spreading them over an area to take up CO2. There are a number of options.

  • Bicarbonate, calcium, and magnesium at appropriate concentrations are beneficial to soil health, especially tropical soils which tend to be depleted in these minerals. Spreading powdered olivine over one third of tropical agricultural land could pull between 30 and 300 parts per million of carbon dioxide out of the atmosphere.
    There is a large range in that number because we just don't know enough about how these processes work at scale. Perhaps fortunately, we also don't have the capacity to quickly seed such a large fraction of the planet's land area. Over time, the results of the earliest years of effort can be measured to guide future plans.

  • Though tropical land is ideal, using olivine as a soil additive in agricultural land elsewhere would still have an effect.
    The term "electrogeochemistry" has been coined to refer to enhanced weathering driven by electrochemical processes at large scale.

  • Mine tailings are the heaps of excess rock discarded from mining operations after the valuable minerals have been extracted. The tailings generally contain large amounts of the types of rock which will absorb CO2 as they weather, and in fact do rapidly form a shell of carbonate at the surface of the pile. If mining regulations are made to require the tailings be ground more finely and appropriately distributed, they can be effective in pulling carbon dioxide from the atmosphere.
    Mine tailings also tend to contain trace amounts of substances which can be harmful, like mercury. Processes such as those developed by Advanced Materials Processing, Inc to remove harmful substances from tailings would be necessary.
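As a rough sense of the quantities involved, the stoichiometry of olivine weathering puts an upper bound on how much CO2 a ton of ground rock can take up. This sketch assumes complete carbonation of forsterite (Mg2SiO4), which real deployments would only approach slowly:

```python
# Upper bound on CO2 uptake by olivine (forsterite, Mg2SiO4), assuming
# the full weathering reaction:
#   Mg2SiO4 + 4 CO2 + 4 H2O -> 2 Mg(2+) + 4 HCO3(-) + H4SiO4
M_OLIVINE = 2 * 24.305 + 28.086 + 4 * 15.999   # g/mol forsterite
M_CO2 = 12.011 + 2 * 15.999                    # g/mol CO2

co2_per_ton_olivine = 4 * M_CO2 / M_OLIVINE    # tons CO2 per ton olivine
print(f"stoichiometric ceiling: {co2_per_ton_olivine:.2f} t CO2 per t olivine")
```

Roughly 1.25 tons of CO2 per ton of olivine is the theoretical ceiling; field uptake rates are lower and depend heavily on grain size and climate.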

Wednesday, September 19, 2018

MacOS Preview.app has a Signature Tool

When I receive a PDF file to be signed and returned I have generally printed it out to sign and scan back in... like an animal, as it turns out. On a MacOS system there is a convenient way to add a signature to a PDF file without needing to print it, using only the Preview.app which comes with the system.

In the toolbar is a squiggly icon with a drop-down menu.

Clicking it allows one to create a signature by either signing with a finger on the trackpad, or writing a signature on a piece of paper for the camera to scan in. The camera option does a good job of edge detection to extract only the writing and not shadows on the paper.

The resulting signature can then be added to the document and dragged to the right spot.

Saturday, September 15, 2018

The Arduino before the Arduino: Parallax Basic Stamp

I recently had cause to dig down through the layers of strata which have accumulated in my electronics bin. In one of the lower layers I found this bit of forgotten kit: the Parallax Basic Stamp II. This was the Arduino before there was an Arduino, a tiny microcontroller board aimed at being simple for hobbyist and low-volume commercial use.

The Basic Stamp line is still sold today, though with designs developed over a decade ago. The devices have enough of a market to remain in production, but are otherwise moribund. The past tense will be used in this blog post.

The Basic Stamp line dates back to the early 1990s. The Basic Stamp II shown here was introduced in 1995. It used a PIC microcontroller, an 8-bit microcontroller line which has been used in deeply embedded applications for decades and is still developed today. The PIC family is a product from Microchip Technology, the same company which now supplies the AVR chips used in the Arduino after acquiring Atmel in 2016.

The PIC contained several KBytes of flash, which held a BASIC interpreter called PBASIC. An external EEPROM on the BS2 board contained the user's BASIC code, compiled to bytecode. Though it may seem an odd choice now, in the early 1990s the choice of BASIC made sense: the modern Internet and the Tech industry did not exist, with the concomitant increase in the number of people comfortable with developing software. BASIC could leverage familiarity with Microsoft GW-BASIC and QBASIC on the PC, as MS-DOS and Windows computers of this time period all shipped with BASIC. Additionally, Parallax could tap into the experience of the hobbyist community from the Apple II and Atari/Commodore era.

' PBASIC code for the Basic Stamp
' {$STAMP BS2}
LED         PIN 5
Button      PIN 6    ' the BS2 had 16 pins
ButtonVal   VAR Bit  ' space is precious, 1 *bit* storage
LedDuration CON 500  ' a constant

' Init code
INPUT  Button

DO
 ButtonVal = Button                 ' Read button input pin
 FREQOUT LED,LedDuration,ButtonVal  ' PWM output to flicker LED
 PAUSE 200                          ' in milliseconds
LOOP

PBASIC supported a single thread of operation; the BASIC Stamp supported neither interrupts nor threads. Applications needing these functions would generally use a PIC chip without the BASIC interpreter on top. Later Stamp versions added a limited ability to poll pins in between each BASIC statement and take action. This seemed aimed at industrial control users of the Stamps; for example, Disney used BASIC Stamps in several theme park rides designed during this time frame.

A key piece of the Arduino and Raspberry Pi ecosystems is the variety of expansion kits, or "shields," which connect to the microcontroller to add capabilities and interface with the external world. The ecosystem of the BASIC Stamp was much more limited: suppliers like Adafruit were not in evidence, because the low-volume PCB design and contract manufacturing industry mostly didn't exist. Parallax produced some interesting kits of its own, like an early autonomous wheeled robot. For the most part, though, hobbyists of this era had to be comfortable with wire-wrapping.

Saturday, September 8, 2018

code.earth hackathon notes

Project Drawdown is a comprehensive plan proposed to reverse global warming. The project's researchers ranked scenarios according to their potential reduction in carbon levels, and analyzed their costs.

Project Drawdown will continue the analysis work, but is moving into an additional advocacy and empowerment role of showing governments, organizations, and individuals that global warming can be mitigated and providing detailed guidance on strategies which can work. The audience for the project's work is expanding.

This places new demands on the tools. The tooling needs to be more accessible to people in different roles, and provide multiple user interfaces tailored to different purposes. For example, the view provided to policymakers would be more top-level, showing costs and impacts, while the view for researchers would allow comparisons by varying the underlying data.

The code.earth hackathon in San Francisco September 5-7, 2018 implemented a first step in this, starting to move the modeling implementation from Microsoft Excel into a web-hosted Python process, with Excel providing the data source and presentation of the results. This will separate the model implementation from the user interface, making it easier to have multiple presentations tailored for different audiences. It will still be possible to get the results into Excel for further analysis, but web-based interfaces can reach much wider audiences able to act on the results.

I was at the hackathon, working on an end-to-end test for the new backend, and plan to continue working on the project for a while. Global warming is the biggest challenge of our age. We have to start treating it as such.

Sunday, September 2, 2018

Carbon Capture: Cryogenic CO2 Separation

Sublimation is a phase change directly from a solid to a gas without transitioning through an intermediate liquid state. Desublimation is the opposite, where a gas crystallizes into a solid without becoming a liquid first. The best-known example of desublimation is snow, where water vapor crystallizes into tiny bits of ice. When water vapor in a cloud first condenses into liquid and then freezes, the result is hail, not snow.

Interestingly, and quite usefully for carbon capture, carbon dioxide desublimates at -78 degrees Celsius. This is considerably higher than the temperatures at which the main components of the atmosphere, nitrogen and oxygen, condense, which means that as air gets very cold, CO2 will be among the first components to turn into ice crystals. This allows the CO2 crystals to be harvested.

Several companies have working technology in this area:

  • Alliant Techsystems (now defunct) and ACENT Laboratories developed a supersonic wind tunnel which compresses incoming air, causing it to heat up, then expands the supersonic airflow causing it to rapidly freeze. CO2 crystals can be extracted via cyclonic separation, relying on the mass of the frozen particles.

  • Sustainable Energy Solutions in Utah uses a heat exchanger process to rapidly cool air, harvest the CO2 crystals, then reclaim the energy spent on cooling before exhausting the remaining gases.
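A rough calculation shows why reclaiming the cooling energy matters so much. Using textbook approximations (assumed values, not vendor figures), the sensible heat removed to chill ambient air down to the desublimation point, with no energy recovery at all, is enormous per unit of CO2 captured:

```python
# Rough sensible-heat cost of chilling ambient air to the CO2
# desublimation point, with no energy reclaimed. Textbook values only.
CP_AIR = 1.005          # kJ/(kg*K), specific heat of air
T_AMBIENT = 25.0        # deg C
T_DESUB = -78.0         # deg C, CO2 desublimation temperature
CO2_PPMV = 410e-6       # CO2 volume fraction in air
M_CO2, M_AIR = 44.01, 28.97  # g/mol

co2_mass_fraction = CO2_PPMV * M_CO2 / M_AIR
kj_per_kg_air = CP_AIR * (T_AMBIENT - T_DESUB)
mj_per_kg_co2 = kj_per_kg_air / co2_mass_fraction / 1000
print(f"~{mj_per_kg_co2:.0f} MJ per kg of CO2 without heat recovery")
```

On the order of a hundred-plus megajoules per kilogram of CO2 is hopelessly uneconomical, which is why approaches like the Sustainable Energy Solutions heat exchanger recover most of the cooling energy from the outgoing gas stream.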

Wednesday, August 29, 2018

Google Software Engineering Levels and Ladders

Google (now Alphabet) hires a lot of engineers every year. There are articles out there about the interview process and how to prepare, and I definitely recommend spending time on preparation. Google interviews for software engineers mostly do not focus on the candidate's resume or prior experience, instead asking technical questions on various topics and coding. You'll do better if you mentally refresh topics in computer science which you have not recently worked with.

This post focuses on a different area: how to evaluate an engineering job offer from Alphabet. The financial aspects will presumably be clear enough, but the career aspects of the offer may not be. This post will attempt to explain what Google's engineering career progression looks like.

There are two concepts: ladder and level. The ladder defines the role you are expected to do, like manager or engineer or salesperson, while the level is how senior you are in that role.

Like many Tech companies, Google has parallel tracks for people who wish to primarily be individual contributors and for people who wish to primarily be managers. This takes the form of two ladders, Software Engineer (universally abbreviated as "SWE") and Software Engineering Manager. Google does allow people on the SWE ladder to manage reports, and allows people on the Manager ladder to make technical contributions. The difference is in how performance is evaluated. For those on the SWE ladder the expectation is that at least 50% of their time will be spent on individual contributing engineering work, leaving no more than 50% on management. For those on the Manager ladder the expectation is more like 80% of the time to be spent on management. People on one ladder veering too far out of the guidance for that ladder will be encouraged to switch to the other, as performance evaluations will begin to suffer.


Software Engineer Ladder

The levels are:

  • SWE-I (Level 2) is a software engineering intern, expected to be in the junior or senior year of a four year degree program.
  • SWE-II (Level 3) is an entry level full-time software engineer. An L3 SWE is generally someone who recently graduated with an undergraduate or Master's degree, or equivalent education.
  • SWE-III (Level 4) is someone with several years of experience after graduation, or someone who has just finished a PhD in a technical field.
  • Senior Software Engineer (Level 5) is the level where a software engineer is expected to be primarily autonomous: capable of taking on a task without excessive detail, figuring out what to do, and then doing it. A software engineer advances to L5 primarily by demonstrating impact on tasks of sufficient difficulty. When hiring externally, six to ten years of experience is generally expected.
  • Staff Software Engineer (Level 6) is the level where leadership increasingly becomes the primary criterion by which performance is judged. Many, though by no means all, SWEs begin managing a team of engineers by this point in their career. When hiring externally, ten or more years of experience are generally expected.
  • Senior Staff Software Engineer (Level 7) is essentially L6 with larger expectations. Guidance for years of experience begins to break down at this level, as most candidates with ten or more years experience will be hired at Level 6 unless there is a strong reason to offer a higher level. Involvement of the hiring manager or strong pushback by the candidate can sometimes push the offer to Level 7.
  • Principal Software Engineer (Level 8) is the first level which is considered an executive of the Alphabet corporation for the purposes of remuneration and corporate governance. Principal Software Engineers drive technical strategy in relatively large product areas. SWEs at level 8 or above are relatively rare: the equivalent level on the manager ladder will routinely have five or more times as many people as on the SWE ladder. By this level of seniority, most people are focussed on management and leadership.
  • Distinguished Software Engineer (Level 9) drives technical strategy in efforts spanning a large technical area.
  • Google Fellow (Level 10) is the same level as a Vice President, expected to drive technical strategy and investment in crucial areas.
  • Google Senior Fellow (Level 11) is for people like Jeff Dean and Sanjay Ghemawat.

Most external hiring for software engineers is for L4 through L6, with L7 also possible though less common. Hiring externally directly to L8 and L9 does happen, but is quite rare and demands the direct sponsorship of a high-level executive like a Senior Vice President of a Google Product Area or CEO of an Alphabet company. For example James Gosling and David Patterson both joined the company as L9 Distinguished Engineers.

Also notable is that the external hiring process and the internal promotion process are entirely separate, and at this point have diverged substantially in their calibration. It is fair to say that Alphabet substantially undervalues experience outside of the company, or perhaps overvalues experience within the company. Someone with ten years experience externally would be hired at L5 or L6, while ten years within the company can make it to L7 or L8.


Software Engineering Manager Ladder

The levels are:

  • Manager, Software Engineering I (Level 5) is the first level on the manager ladder. It is expected that people will have a few years of experience in the field before they begin managing a team, and therefore the Manager ladder starts at Level 5. A Manager I will typically lead a small team of engineers; five to ten is common.
  • Manager, Software Engineering II (Level 6) is typically a manager of a team of ten to twenty, sometimes a mixture of direct reports and managing other managers. When hiring externally, 10+ years of experience is expected.
  • Manager, Software Engineering III (Level 7) begins the transition to be primarily a manager of managers. Teams are larger than L6, typically twenty to forty.
  • Director (Level 8) is the first level which is considered an executive of the Alphabet corporation for the purposes of remuneration and corporate governance. Directors are mostly managers of managers, and typically lead organizations of forty up to several hundred people.
  • Senior Director (Level 9) is basically a secret level at Google: all of the internal tools will show only "Director," and by tradition promotions to Senior Director are not publicly announced. Senior Directors may lead slightly larger organizations than L8 Directors, though mostly it provides a way to have a larger gap between Director and VP while still allowing career progression.
  • Vice President (Level 10) typically leads organizations of hundreds to thousands of people. Their direct reports will typically be Directors and will be second to third level managers themselves.
  • Vice President II (Level 11), like Senior Director, is shown only as "VP" in internal tools and provides a way to maintain a larger gap between VP and SVP while still allowing managers to advance in their careers.
  • There are executive levels beyond L11, notably Senior Vice Presidents of Google divisions and CEOs of other Alphabet companies. This blog post is not a good guide to hiring for those levels, if you happen to be such a candidate. Sorry.

When hiring managers externally, L5 through Director is most common. Above Director is rare and generally only happens with the sponsorship of a high level executive. However where SWE hiring essentially tops out at L9, manager hires can come in at almost any level given sufficient sponsorship. Alphabet hires CEOs for its affiliated companies (John Krafcik, Andrew Conrad) and Google SVPs (Craig Barratt, Diane Greene) externally.


Other ladders equivalent to SWE

There is one other software engineering role at Alphabet which is parallel to the SWE/Software Manager ladders: Site Reliability Engineer or SRE. The individual contributor ladder is called SRE-SWE — for historical reasons, as there used to be an SRE-System Administration ladder which is no longer hired for. There is also an SRE Manager ladder. The levels on SRE-SWE and SRE Manager roughly correspond in responsibilities and years of experience to the SWE and Software Manager ladders described above, though the nature of the work differs.

SRE is equivalent to SWE in that, at any time, an SRE can choose to relinquish the SRE duties and transfer to the SWE ladder, and an SRE Manager can switch to the Software Manager ladder. Those originally hired as SREs can also generally switch back if they choose to do so in the future. Engineers hired as a SWE who wish to transfer to SRE require a bit more process, often via an internal training program to serve a rotation as an SRE.


Other ladders NOT equivalent to SWE

SETI, for Software Engineer in Tools and Infrastructure, is another engineering ladder within Google. Though recruiters will make the claim that it is just like being a SWE, transfers from SETI to SWE require interviews, acceptance by a hiring committee, and approval of the SVP who owns the SWE ladder. Though often successful, transfers from SETI to SWE are not automatic and do get rejected, at both stages of the approval process. As such, recruiter claims that it is just like being a SWE are not accurate. The recruiter just has a SETI role to fill.

Only accept a SETI role if automated testing and continuous software improvement are genuinely your passions. Projects listing SETI openings will be less numerous than SWE, though will often be more focussed on automation and quality improvement. In many cases, internal transfers to projects which list a SWE opening will accept a SETI applicant, but not always. Being on the SETI ladder will therefore be slightly limiting in choice of projects for internal mobility.

There are other ladders which also involve software development but are even further removed from the SWE ladder, notably Technical Solutions Engineer (TSE) and Web Solutions Engineer (WSE). As with SETI, transfers to the SWE ladder require interviews and approvals. Recruiter claims that TSE or WSE are "just like being a SWE" are not accurate, as people on these ladders cannot internally transfer to projects which have a SWE opening. They can only transfer to TSE/WSE openings, which limit the choice of projects.

Saturday, August 25, 2018

Carbon Capture: Soil health

Topsoil across the land areas of the planet holds substantially more carbon than the entire atmosphere, and over the past several hundred years we have released at least 50 percent of the carbon formerly held by soils into the air. This is primarily because of tilling, which disturbs the deeper soils and kills the roots and fungi which reside there. Tilling is necessary for modern agricultural practices using fertilizer and insecticides, which can improve yields substantially until the soil becomes depleted of carbon and gradually less productive. Much farmland around the world is now stuck in a local maximum: stopping tilling and allowing the soil to recover would eventually result in improved yields, but only after a few years of very poor harvests.

It is estimated that regenerating depleted land can absorb two to five tons of CO2 per acre per year, for about ten years. Done at scale, regenerative agriculture could absorb tens of gigatons of CO2 per year. For perspective, current human emissions are approximately 36 gigatons per year. Improving soil health could offset a nontrivial fraction of current emissions or, in conjunction with other methods to reduce new emission, pull previously emitted carbon from the air.
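The "tens of gigatons" figure can be sanity-checked against the per-acre range above. The global cropland area used here is an assumption for illustration, not a sourced figure:

```python
# Rough check of the "tens of gigatons" claim, using the per-acre range
# quoted above and an assumed figure for global cropland area.
ACRES_PER_HECTARE = 2.471
cropland_hectares = 1.4e9            # assumed global cropland, ~1.4 billion ha
cropland_acres = cropland_hectares * ACRES_PER_HECTARE

low, high = 2, 5                     # tons CO2 per acre per year (from the post)
low_gt = cropland_acres * low / 1e9
high_gt = cropland_acres * high / 1e9
print(f"potential drawdown: {low_gt:.0f} to {high_gt:.0f} Gt CO2/year")
```

Even the low end of the range is a meaningful fraction of the ~36 gigatons emitted annually, consistent with the claim above.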

Companies in this technology space

  • Regen Network provides tools to gather and analyze data for soil health in regenerative agriculture, silvopasturing, and other practices to improve ecological health. It also provides a trading platform to invest in and fund these developments.
  • COMET-Farm at Colorado State University tracks data entered by farms to estimate the levels of carbon stored, plus other factors relating to soil health.
  • HiveMind produces Mycelium soil enhancements which jump-start the process of sequestering substantially more carbon per acre of soil.

Saturday, August 11, 2018

Carbon Capture: Other Types of Sorbents

A previous post discussed temperature swing adsorption, wherein carbon dioxide is captured when the sorbent is at low temperature and released when the sorbent is raised to a sufficiently high temperature. Desorption temperatures of 500 to 700 degrees Celsius are typical with known sorbents, imposing a substantial energy cost to heat and cool the material.

There are other sorbent materials where the capture and release cycle is controlled not by temperature but by other factors. The two most common are:

  • pressure-swing, where adsorption is controlled by the pressure of the gases in the process. In one study, activated carbon was used as the sorbent to capture carbon dioxide.

  • moisture-swing, where the presence of water or water vapor controls the adsorption cycle. A great deal of recent work on moisture swing sorbents for carbon dioxide has been done at Arizona State University, apparently focused on a metal-organic framework material containing zirconium.

The goal with both of these technologies is a carbon dioxide removal process requiring less energy than temperature swing adsorption. The Temperature Swing Adsorption processes are much further along in development, with several commercial carbon capture systems (detailed in the earlier post). Pressure Swing Adsorption is used to scrub CO2 from high-oxygen feeds, such as those supplying hospitals, but is not currently used at scale for carbon capture from the atmosphere. So far as I can tell, Moisture Swing Adsorption has thus far only been used in the lab and in small scale trials.


Companies and organizations in this technology space

Friday, August 10, 2018

Flatulenating, Wherein We Attempt to Rectify a Dictionaric Injustice

Flatulenating: having the property of inducing flatulence.

Example: "Beans are flatulenating. I get such terrible gas every time I eat them."



At the time of this writing on August 10th, 2018, https://www.google.com/search?q="flatulenating" shows zero results. This blog post is an attempt to resolve this dictionaric injustice.

Wednesday, August 1, 2018

Career & Interviewing Help

Something I find rewarding is helping others in their careers. I am quite happy to conduct practice embedded software engineer or manager interviews, answer questions about engineering at Google or in general, advise on career planning, etc.

I keep a bookable calendar with two timeslots per week. I am in the Pacific timezone, and can set up special times more convenient for people in timezones far from my own. If the calendar doesn't work for you, you can contact me at denny@geekhold.com to make special arrangements.

Anyone is welcome; you don't need an intro or to know me in person. The sessions are conducted via Google Hangout or by phone. My only request for this is to pay it forward: we all have opportunities to help others. Every time we do so, we make the world a slightly better place.

Sunday, July 29, 2018

Carbon Capture: Ocean Farming

The ocean has absorbed approximately a third of the extra carbon released since the industrial age. A previous article focused on countering acidification of the ocean either directly by adding massive quantities of alkalines or indirectly by adding minerals to encourage phytoplankton growth. This post discusses a more purposeful effort, using the carbon in the ocean to grow plant life which can be used for other purposes.

Much discussion about ocean farming revolves around kelp, for several reasons:

  1. Kelp propagates amazingly quickly, growing up to a foot in a single day in ideal conditions.
  2. Profitable uses for kelp already exist as a food source for humans and in animal feed. Additional uses by processing kelp into biofuel or as feedstock for other chemical processes appear to be feasible.

Despite its tremendous growth rate, kelp in nature is confined to a relatively small portion of the ocean: it has to anchor itself to the sea floor and take up nutrients present in deeper waters, but must be able to reach the surface to photosynthesize. Therefore, natural kelp only grows near coastlines and islands.

Several startups aim to vastly increase the capacity of the ocean to grow kelp by providing the conditions which the plant requires:

  • The Climate Foundation proposes to build Marine Permaculture Arrays stationed about 25 meters below the surface, to provide a point of attachment for kelp. Pumps powered by solar or wave energy would draw water from the depths, providing an artificial upwelling to provide nutrients for the kelp and plankton. Nori podcast #34 features an interview with Brian Von Herzen, the founder of Climate Foundation.
  • Marine BioEnergy proposes robotic submarine platforms which would descend to depths overnight to allow the kelp to take up minerals and nutrients, then ascend close to the surface during the day to allow the plants access to sunlight. The platforms would also be mobile, periodically returning close to shore to allow harvest of the grown kelp and any needed maintenance and replenishment of the platform.
  • GreenWave has developed a training program, legal permitting assistance, and market development for ocean farmers, along with optimized layout for a kelp farm. The plans appear to be for coastal farms, not involving deep water platforms nor extensive automation like the earlier firms.

The major food crops like soybeans, wheat, corn, and rice have been tremendously modified from their original forms. As we develop uses for kelp as feedstock in the production of fuels or chemicals or other uses, it is likely that the specific kelp population can be bred to better fit the applications.

Carbon Capture: Ocean Acidification Remediation

The ocean has absorbed approximately a third of the extra carbon released since the industrial age. When carbon dioxide is absorbed by seawater it becomes carbonic acid, leading to the gradual acidification of the oceans.

There are several methods proposed by which the carbon stored in the ocean can be more rapidly sequestered, reducing carbonic acid levels (though the ocean would promptly take up more carbon from the atmosphere):

  • alkalinization: counteract the carbonic acid by adding huge quantities of alkalines to the ocean, such as bicarbonate. Quite usefully, bicarbonate is one of the byproducts of large scale enhanced weathering, which also appears to be quite promising as a mechanism to remove carbon from air.
  • fertilization: the carbon loading of the oceans could be addressed by encouraging phytoplankton to grow, which would take up carbon from the water. Different parts of the ocean contain phosphorous, nitrogen, and iron in differing amounts. There are large dead zones in the ocean where plankton and algae growth is stalled due to lack of the needed minerals, not lack of food energy to support them. By adding these three minerals in the correct ratio, phytoplankton would be enabled to consume more carbon.
  • circulation: encourage movement of acidic water from near the surface to the deeper ocean where mineralization processes can absorb it. Ocean-based Climate Solutions, Inc has a description of the mechanism to do this.

These mechanisms would fund themselves via the additional productivity of the ocean which they enable; for example, fisheries and canning would both increase substantially in these areas.

Seeking Career in Climate Change Amelioration

I have been at Google (now Alphabet) for almost 9 years. All things come to an end, and the end of my time at Google is approaching. I expect to wrap up current work and exit the company on August 28th, 2018.

I have a strong desire to work on ameliorating climate change. I’d like to do this via working on energy production, or carbon recapture from the environment, or other ideas related to climate and cleantech.

I am seeking an engineering leadership role. At a BigCo, this would be Principal Software Engineer, Director, etc., depending on the company’s level structure. At a smaller company I’d be looking for the opportunity to grow into such a role.

I have prepared a resume and a pitch deck focusing on climate change roles, and my LinkedIn profile is public.

I’d welcome referrals to companies in these areas, or pointers to opportunities which I can follow up on. I can be reached at denny@geekhold.com.


An excerpt from the resume:

Primary skills


Role/Company Must Haves

  • Blameless postmortem culture
  • Emphasis on Inclusion, and care about personnel and their development
  • Belief that engineering management should retain reasonable technical proficiency

Sunday, June 24, 2018

Carbon Capture: Reforestation

Pre-industrialization, forests covered approximately 5.9 billion hectares across the planet. Today that figure is 4 billion hectares, and still dropping. This deforestation has reduced the ability of terrestrial plants to sink carbon in their yearly growth.

The basic idea of reforestation is straightforward: plant trees and other long-lasting plants in order to take up and store carbon from the atmosphere. The difficult part is developing mechanisms to plant trees at a large enough scale, and in a short enough time frame, to be useful in ameliorating climate change. This requires automation, most obviously by use of flying drones.

Biocarbon Engineering and Droneseed are two firms building technologies for rapid planting of trees. They use largish drones loaded with seed pods. The drones do require pilots, as most jurisdictions now require licensed pilots for drones, but where possible the drones are set to fly in formation to allow a single pilot to control many at a time.

The cost efficiency of this automated seeding method is not clear from publicly available information. Each reseeding project is a unique bid, and the bids are mostly not made public. Estimates of the cost of manual planting average $4,940 per hectare. Rough estimates put the cost of a Biocarbon Engineering project to reseed Mangrove trees in Myanmar at about half of what a manual effort would be.
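Those two figures are enough for a back-of-envelope comparison. A minimal sketch, using the post's $4,940/hectare manual estimate and the rough "about half" figure for drone seeding (neither is a quoted bid):

```python
# Back-of-envelope reseeding costs, using the figures from the post:
# manual planting averages ~$4,940/hectare, and the drone-based Myanmar
# project is estimated at roughly half that. Both are rough estimates.
MANUAL_COST_PER_HA = 4940.0
DRONE_COST_PER_HA = MANUAL_COST_PER_HA / 2

def reseeding_cost(hectares: float, method: str = "drone") -> float:
    """Total project cost in dollars for the given area and method."""
    per_ha = DRONE_COST_PER_HA if method == "drone" else MANUAL_COST_PER_HA
    return hectares * per_ha

# Reforesting 1,000 hectares:
print(reseeding_cost(1000, "manual"))  # 4940000.0
print(reseeding_cost(1000, "drone"))   # 2470000.0
```

At these prices, even modest cost reductions per hectare compound quickly across the billions of hectares of lost forest.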

Companies in this technology space

  • Propagate Ventures works with farmers and landowners to implement regenerative agriculture, restoring the land while keeping it productive.

  • Dendra Systems (formerly Biocarbon Engineering) builds drones which fly in swarms, numerous drones with a single pilot, and utilizes seed pods loaded with nutrients fired from the drones toward the ground. A good percentage of the seed pods will embed into the ground, and the outer packaging will rapidly biodegrade and allow the seed to germinate.

  • Droneseed also builds drones to plant trees, though fewer details are available.


musings on plants

In real deployments the type of plant life seeded will be chosen to fit the local environment by the client, such as the choice of Mangrove trees in Myanmar. If we were only concerned with the rapidity of carbon uptake, and did not care about invasive species, I think there are two species of plants we would focus on:

  • Paulownia trees which grow extremely rapidly, up to 20 feet in one year. These are native to China, and an invasive species elsewhere.
  • Hemp: "Industrial hemp has been scientifically proven to absorb more CO2 per hectare than any forest or commercial crop and is therefore the ideal carbonsink." (source). I find it amusing that hemp may be crucial in saving humanity after all.

Saturday, June 23, 2018

Carbon Capture: Biochar

Biochar is charcoal made from biomass: agricultural waste or other plant material. If left to rot or burned, the carbon trapped in this plant material would return to the atmosphere. By turning it into charcoal, a large percentage of the carbon is fixed into a stable form for decades.

Turning plant material into charcoal is a straightforward process: heat without sufficient oxygen to burn. This process is called pyrolysis (from the Greek pyro meaning fire and lysis meaning separating). In ancient times this was accomplished by burying smoldering wood under a layer of dirt, cutting it off from air. More recently, a kiln provided a more efficient way to produce charcoal by heating wood without burning it. Modern methods generally use sealed heating chambers in order to capture all of the produced gases.

Pyrolysis produces three outputs:

  • the solid char, which has a much higher concentration of carbon than the original plant material.

  • a thick tar referred to as bio-oil, which is much higher in oxygen than petroleum but otherwise similar.

  • a carbon-rich gas called syngas. It is flammable, though it contains only about half the energy density of methane. In earlier times the gas generally just escaped, while modern processes capture and usually burn it as heat to continue the pyrolysis process.

The temperature and length of pyrolysis determine the relative quantities of char, bio-oil, and syngas. Baking for a longer time at a lower temperature emphasizes char; shorter times at higher temperatures produce more gas and oil.
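The tradeoff can be made concrete with illustrative yield figures. The percentages below are rough, commonly cited literature values for slow versus fast pyrolysis, not numbers from this post:

```python
# Illustrative (approximate) product splits for two pyrolysis modes.
# Slow pyrolysis: lower temperature, hours of residence time, favors char.
# Fast pyrolysis: higher temperature, seconds of residence time, favors bio-oil.
# These fractions are rough literature figures, for illustration only.
PYROLYSIS_YIELDS = {
    "slow": {"char": 0.35, "bio_oil": 0.30, "syngas": 0.35},
    "fast": {"char": 0.12, "bio_oil": 0.75, "syngas": 0.13},
}

def product_mass(feedstock_kg: float, mode: str) -> dict:
    """Split a dry feedstock mass (kg) into char, bio-oil, and syngas."""
    return {k: feedstock_kg * v for k, v in PYROLYSIS_YIELDS[mode].items()}

print(product_mass(1000, "slow"))
print(product_mass(1000, "fast"))
```

For carbon sequestration the slow mode is what matters, since char is the output that keeps carbon in a stable form.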

The idea of biochar for carbon capture is to intercept carbon about to return to the atmosphere, primarily agricultural waste, and turn it into a form which both sequesters carbon and improves the soil into which it is tilled. The very fine char produced from agricultural waste is quite porous and makes soil retain water more effectively. It can also improve the soil health of acidic soils, balancing the pH and making the soil more productive.

Carbon Capture: Temperature Swing Adsorption

Adsorption: the adhesion of atoms, ions or molecules from a gas, liquid or dissolved solid to a surface. This process creates a film of the adsorbate on the surface of the adsorbent.

Temperature Swing Adsorption (TSA) for carbon capture relies on a set of materials, called carbon dioxide sorbents, which attract carbon dioxide molecules at low temperature and release them at a higher temperature. Unlike the Calcium Loop described previously, there is no chemical reaction between the sorbent and the CO2. Adsorption is purely a physical process, where the CO2 sticks to the sorbent due to the slight negative charges of the oxygen atoms and positive charge of the carbon.

There are a relatively large number of materials with this sorbent property for carbon dioxide, enough to have a dedicated Wikipedia page. These materials contain porous gaps. The gaps in the most interesting materials for our purpose are the right size to hold a CO2 molecule, with a slight charge at the right spot to attract the charges of different points on the CO2. To be useful for carbon capture, the sorbent has to attract CO2 molecules but readily release them with a change in temperature. They can be cycled from cold to hot to repeatedly grab and release carbon dioxide.

Unfortunately most of the known materials have drawbacks which make them unsuitable for real-world use, such as being damaged by water vapor.

The most recent class of sorbents developed are Metal-Organic Frameworks (MOFs), which are chains of organic molecules bound up into structures with metals. MOFs are interesting because they are much more robust than the previously known sorbents: they are not easily damaged by compounds found in the air, and they can be cycled in temperature without quickly wearing out.


Companies in this technology space

  • Climeworks in Switzerland describes their process as a filter which is then heated to release the carbon dioxide. This is clearly an adsorption process, and almost certainly using Metal-Organic Frameworks as it is described as being reusable for a large number of cycles.

  • Global Thermostat in New York describes their process as an amine-based sorbent bonded to a porous honeycomb ceramic structure.

  • Inventys in Canada builds a carbon capture system using Temperature Swing Adsorption materials. Their system uses circular plates of a sorbent material, stacked vertically, and rotates the plates within a cylindrical housing. At different parts of the revolution the plates spend 30 seconds adsorbing CO2, 15 seconds being heated to 110 degrees Celsius to release the concentrated CO2, and 15 seconds cooling back down to 40 degrees to do it again.

    Inventys goes to some length to explain that their technology is in the whole system, not tied to any particular sorbent material. I suspect this is emphasized because Metal-Organic Frameworks are improving rapidly, and indeed the entire class of MOF materials was developed after Inventys was founded, so they ensure that the system can take advantage of new sorbent materials as they appear.

  • Skytree in the EU is a patent licensing firm which is fairly coy about the technologies it licenses but says they were developed as part of the Advanced Closed Loop System for the International Space Station. One of the main innovations in the ACLS is the development of a solid resin adsorbent Astrine, which means the technology is adsorption-based.

  • Soletair in Finland aims to create an end-to-end process using adsorption and electrolysis to create feedstock for fuels.

  • Carbon Clean Solutions has developed a new carbon dioxide sorbent, amine-promoted buffer salt (APBS). This sorbent is available for licensing.

  • Mosaic Materials has developed a new carbon dioxide sorbent using nitrogen diamines, which requires only half the temperature swing to capture and release CO2. This should result in considerably lower energy costs and higher-volume production.
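The cycle times Inventys quotes (30 seconds adsorbing, 15 seconds releasing, 15 seconds cooling) imply a simple duty cycle, sketched here as arithmetic:

```python
# Duty cycle of the Inventys rotating-plate TSA system, using the
# timings quoted above: 30 s adsorbing, 15 s releasing, 15 s cooling.
ADSORB_S, RELEASE_S, COOL_S = 30, 15, 15

cycle_s = ADSORB_S + RELEASE_S + COOL_S   # 60 s per full rotation
cycles_per_day = 24 * 3600 // cycle_s     # 1440 capture/release cycles per day
adsorbing_fraction = ADSORB_S / cycle_s   # plates spend half their time capturing

print(cycle_s, cycles_per_day, adsorbing_fraction)  # 60 1440 0.5
```

A sorbent in this system must therefore survive on the order of half a million temperature swings per year, which is why robustness under cycling matters as much as raw capacity.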

Tuesday, June 19, 2018

Carbon Capture: Calcium Looping

I am very interested in technologies to ameliorate climate change. The looming, self-inflicted potential extinction of the human species seems important to address.

In this post we’ll examine the steps in Carbon Engineering’s Direct Air Capture process, as published on their website, and explore what each step means. As I am an amateur at carbon capture technologies, anything and everything here may be incorrect. I’m writing this in an attempt to learn more about the space.


step 1: wet scrubber

A wet scrubber passes a gas containing pollutants, in this case atmospheric air containing excess carbon dioxide, through a liquid in order to capture the undesired elements. Scrubber designs vary greatly depending on the size of the pollutant being captured, especially whether particles or gaseous. In this case because CO2 molecules are being targeted, the scrubber is likely a tall cylindrical tower filled with finned material to maximize the surface area exposed to the air.

This process step uses hydroxide OH-, a water molecule with one of its hydrogen atoms stripped off, as the scrubbing liquid. Hydroxide bonds with carbon dioxide to form carbonic acid H2CO3. It is interesting to note that this same chemical process is occurring naturally at huge scale in the ocean, where seawater has acidified due to the absorption of carbon dioxide and formation of carbonic acid.


step 2: pellet reactor

The diluted carbonic acid is pumped through a pellet reactor, which is filled with very small pellets of calcium hydroxide Ca(OH)2. Calcium hydroxide reacts with the carbonic acid H2CO3 to form calcium carbonate CaCO3, which is the primary component of both industrial lime and antacid tablets. The small pellets in the reactor serve to both supply calcium for the reaction and to serve as a seed crystal to allow a larger calcium carbonate crystal to grow. In the process, hydrogen and oxygen atoms are liberated which turn back into water.

As the point of this system is a continuous process to remove carbon dioxide from air, I imagine the pellets are slowly cycled through the reactor as the liquid flows over them. The pellets with their load of newly grown crystal would automatically move on to the next stage of processing.

It is important to dry the pellets of calcium carbonate as they leave the pellet reactor. The next step collects purified carbon dioxide, where water vapor would be a contaminant. Removal of the remaining water could be accomplished by heating the pellets to somewhere above 100 degrees Celsius where water evaporates, but much less than 550 degrees where the calcium carbonate would begin to break down. Hot air would be sufficient to achieve this.


step 3: circulating fluid bed calcinator

A calcinator is a kiln which rotates. The wet pellets loaded with crystals of calcium carbonate CaCO3 slowly move through the kiln, where they are heated to a sufficient temperature for the calcium carbonate to decompose back into calcium oxide CaO and carbon dioxide CO2. A temperature of at least 550 degrees centigrade is needed for this, and the reaction works best somewhere around 840 degrees which is quite hot. There are catalysts which can encourage this reaction at lower temperatures, notably titanium oxide TiO2, but they are quite expensive and might not be economical compared with heating the kiln.

The carbon dioxide is released as a hot gas to be collected, while the calcium oxide is left as solid grains in the calcinator. The calcium oxide can be used over and over; this reuse is the calcium looping that gives the process its name. Energy is expended at each cycle through the loop to free the carbon dioxide from the calcium oxide.


step 4: slaker

The solid output of the calcinator is calcium oxide CaO, also called quicklime. Quicklime is not stable, and will absorb other molecules from the air which would introduce impurities if put back into the pellet reactor. Therefore the calcium oxide CaO is combined with water to form calcium hydroxide Ca(OH)2.

A slaker adds controlled amounts of water to quicklime. This reaction releases a great deal of heat, so it is controlled by a feedback loop which reduces the inflow of material when the reaction gets too hot. I imagine the waste heat from this process could provide some of the heat needed for the earlier calcinator step, though additional heating would also be needed.
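Putting the four steps together, the loop can be summarized with one balanced reaction per stage (temperatures as described above):

```latex
% The calcium loop, one reaction per process step
\begin{align*}
\text{wet scrubber:}   &\quad \mathrm{CO_2 + H_2O \rightarrow H_2CO_3}\\
\text{pellet reactor:} &\quad \mathrm{H_2CO_3 + Ca(OH)_2 \rightarrow CaCO_3 + 2\,H_2O}\\
\text{calcinator:}     &\quad \mathrm{CaCO_3 \xrightarrow{\ \sim 840^{\circ}\mathrm{C}\ } CaO + CO_2}\\
\text{slaker:}         &\quad \mathrm{CaO + H_2O \rightarrow Ca(OH)_2}
\end{align*}
```

Note that calcium and water both circulate: the only net inputs are air and heat, and the only net output is concentrated CO2.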


Companies in this technology space

  • Carbon Engineering, which builds large scale operations using the calcium loop process to capture carbon dioxide from air.
  • Calera, which captures CO2 to produce calcium carbonate and magnesium carbonate for industrial use.
  • CleanO2 builds CO2 scrubbers for HVAC systems, allowing cold air from the building to be recirculated after scrubbing carbon dioxide (and likely also scrubbing water vapor and other contaminants). As the systems produce calcium carbonate as an end-product, I'm going to assume it uses the first two steps of the calcium loop as a recovery mechanism.



At the end of the process we have a highly purified stream of carbon dioxide extracted from ambient air. The long term goal of this kind of technology would be negative carbon emissions, which would mean keeping the CO2 from immediately circulating back into the environment by utilizing it in a long-lived form like various plastics or graphene. The technology also allows carbon neutral fuels to be made for applications where energy density requirements are higher than what battery chemistries are likely to provide, such as airplanes or ocean going vessels. Using carbon which was already in the atmosphere for these applications is much better than digging more carbon out of the ground.

Friday, June 15, 2018

CPE WAN Management Protocol: transaction flow

Technical Report 69 from the Broadband Forum defines a management protocol called the CPE WAN Management Protocol (CWMP). It was first published in 2004 and has been revised a number of times since; it was originally aimed at the management of DSL modems placed in customer homes. Over time it has broadened to support more types of devices which an Internet Service Provider might operate outside of its own facilities, in the residences and businesses of its customers.

There are a few key points about CWMP:

  • It was defined during the peak popularity of the Simple Object Access Protocol (SOAP). CWMP messages are encoded as SOAP XML.
  • Like SNMP and essentially every other network management protocol, it separates definition of the protocol from definition of the variables it manages. SNMP calls them MIBs, CWMP calls them data models.
  • It recognizes that firewalls will be present between the customer premises and the ISP, and that the ISP can expect to control its own firewall but not necessarily other firewalls between it and the customer.
  • It makes a strong distinction between the Customer Premises Equipment (CPE) being managed, and the Auto Configuration Server (ACS) which does the managing. It does not attempt to be a generic protocol which can operate bidirectionally, it exists specifically to allow an ACS to control CPE devices.

A few years ago I helped write an open source tr-69 agent called catawampus. The name was chosen based mainly on its ability to contain the letters C W M P in the proper order. I’d like to write up some of the things learned from working on that project, in one or more blog posts.

Connection Lifecycle

One unusual thing about CWMP is connection management between the ACS and CPE. Connections are initiated by the CPE, but RPC commands are then sent by the ACS. Keeping with the idea that it is not a general purpose bidirectional protocol, all commands are sent by the ACS and responded to by the CPE.

tr-69 runs atop an HTTP (usually HTTPS) connection. The CPE has to know the URL of its ACS. There are mechanisms to tell a CPE device what ACS URL to use, for example via a DHCP option from the DHCP server, but honestly in almost all cases the URL of the ISP’s ACS is simply hard-coded into the firmware of devices supplied by the ISP.


  1. The CPE device in the customer premises initiates a TCP connection to the ACS, and starts the SSL/TLS handshake. Once the connection is established, the CPE sends an Inform message to the ACS using an HTTP POST. This is encoded using SOAP XML, and tells the ACS the serial number and other information about the CPE in the <DeviceId> stanza.
    <?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
        xmlns:cwmp="urn:dslforum-org:cwmp-1-2" ...>
      ...
      <cwmp:ID soap:mustUnderstand="1">catawampus.1529004153.967958</cwmp:ID>
      ...
      <Event soap-enc:arrayType="EventStruct[1]">
        <EventStruct>
          <EventCode>0 BOOTSTRAP</EventCode>
        </EventStruct>
      </Event>
      <ParameterList soap-enc:arrayType="cwmp:ParameterValueStruct[1]">
        ...
        <Value xsi:type="xsd:string">http://[redacted]:7547/ping/7fd86a7302ec5f</Value>
        ...
      </ParameterList>
      ...
    </soap:Envelope>
    Two fields deserve attention. The EventCode tells the ACS why the CPE device is connecting: it might have just booted, it might be a periodic connection at a set interval, or it might be because of an exceptional condition. The ParameterList is a list of parameters the CPE can include to tell the ACS about exceptional conditions.

  2. The ACS sends back an InformResponse in response to the POST.
        <cwmp:ID soapenv:mustUnderstand="1">catawampus.1529004153.967958</cwmp:ID>

  3. If the CPE has other conditions to communicate to the ACS, such as successful completion of a software update, it performs additional POSTs containing those messages. When it has run out of things to send, it does a POST with an empty body. At this point the ACS takes over. The CPE continues sending HTTP POST transactions with an empty body, and the ACS sends a series of RPCs to the CPE in the responses. There are RPC messages to get/set parameters, schedule a reboot or software update, etc. All RPCs are initiated by the ACS, and the CPE responds.
        <cwmp:ID soapenv:mustUnderstand="1">TestCwmpId</cwmp:ID>
            <ns2:ParameterValueStruct xmlns:ns2="urn:dslforum-org:cwmp-1-2">
              <Value xmlns:xs="http://www.w3.org/2001/XMLSchema" xsi:type="xs:string">param</Value>


The ACS can send multiple RPCs in one session with the CPE. Only one RPC can be outstanding at a time; the ACS has to wait for a response from the CPE before sending the next.

When the session ends, it is up to the CPE to re-establish it. One of the parameters in a management object is the PeriodicInformInterval, the amount of time the CPE should wait between initiating sessions with the ACS. By default it is supposed to be infinite, meaning the CPE will only check in once at boot and the ACS is expected to set the interval to whatever value it wants during that first session. In practice we found that not to work very well and set the default interval to 15 minutes. It was too easy for something to go wrong and result in a CPE which would be out of contact with the ACS until the next power cycle.

There is also a mechanism by which the ACS can connect to the CPE on port 7547 and do an HTTP GET. The CPE responds with an empty payload, but is supposed to immediately initiate an outgoing session to the ACS. In practice, this mechanism doesn't work very well because intervening firewalls, like the ISP's own residential gateway within the home, will often block the connection. This is an area where the rest of the industry has moved on: we now routinely have a billion mobile devices maintaining a persistent connection back to their notification service. CPE devices could do something similar, perhaps even using the same infrastructure.
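The CPE side of the session lifecycle above can be sketched in a few lines. This is a minimal illustration, not the catawampus implementation: the Inform body is heavily simplified relative to the full TR-069 schema, and the device fields and ACS URL are made up.

```python
# Minimal sketch of a CPE sending a CWMP Inform to its ACS.
# The SOAP body here is simplified; a real agent follows the TR-069 schema.
import http.client
import urllib.parse

INFORM_TEMPLATE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:cwmp="urn:dslforum-org:cwmp-1-2">
  <soap:Header>
    <cwmp:ID soap:mustUnderstand="1">{msg_id}</cwmp:ID>
  </soap:Header>
  <soap:Body>
    <cwmp:Inform>
      <DeviceId>
        <Manufacturer>{manufacturer}</Manufacturer>
        <SerialNumber>{serial}</SerialNumber>
      </DeviceId>
      <Event>
        <EventStruct><EventCode>{event}</EventCode></EventStruct>
      </Event>
    </cwmp:Inform>
  </soap:Body>
</soap:Envelope>"""

def build_inform(msg_id, manufacturer, serial, event="2 PERIODIC"):
    """Render a (simplified) CWMP Inform message as SOAP XML."""
    return INFORM_TEMPLATE.format(msg_id=msg_id, manufacturer=manufacturer,
                                  serial=serial, event=event)

def send_inform(acs_url, body):
    """POST the Inform to the ACS; the InformResponse comes back in the reply."""
    u = urllib.parse.urlparse(acs_url)
    conn = http.client.HTTPConnection(u.hostname, u.port or 80)
    conn.request("POST", u.path or "/", body,
                 {"Content-Type": "text/xml; charset=utf-8"})
    return conn.getresponse().read()

print(build_inform("cpe.1", "ExampleCo", "SN123", event="0 BOOTSTRAP"))
```

After the InformResponse, the sketch would continue with empty-body POSTs until the ACS runs out of RPCs to send, mirroring the session flow described above.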

Wednesday, June 6, 2018

Reading List: High Output Management

High Output Management by Andy Grove was first published in 1983, making it one of the earliest books about management in the technology industry and an influential book about management overall. I recently read the 2nd edition, revised in 2015.

Though the revisions help in updating the material, the book does still strongly resonate of the 1980s. Some of the examples concern Japanese DRAM manufacturers crowding out US firms, the rise of the PC industry, and the business climate of email beginning to replace telephone and memos. Nonetheless, management techniques change much more slowly than technology, and there is quite a bit of useful material in the book.

Some key takeaways for me:


Manager output = output of org + output of adjacent orgs under their influence

Grove’s point is that managers should be evaluated based on the performance of their own organization, plus the extent to which they influence the output of those who don’t directly report to them. This is especially important for knowledge leaders who provide technical direction for a large organization in particular areas, but without having large numbers of people reporting to them on the orgchart. The examples Grove uses are typically concerned with manufacturing and production, which was a particular strength and focus of his at Intel.

It is notable that 30+ years later, we’re still not very good at evaluating management performance in influencing adjacent organizations. Manager evaluations focus mostly on their direct reports, because that is more straightforward to judge. The incentives for managers are therefore to grow their org as large as possible, which isn’t always the best thing for the company even if it is the best thing for the manager.


Choose indicators carefully, and monitor them closely

It is important to monitor output, not just activity, or you’ll end up emphasizing busywork. An example Grove gives is a metric of the number of invoices processed by an internal team. That metric should be paired with a count of the number of errors produced. Any productivity metric needs to be paired with a quality measurement, to ensure that the team doesn’t feel incentivized to produce as much sloppy work as possible.

Even more importantly, the indicators need to be credible. If you won't act on them by taking big (and possibly expensive) steps, then all the monitoring will produce is anxiety. The business indicators need to be invested with sufficient faith to act when a new trend is clear, even if that trend has yet to percolate up in other, more visible, ways.


Management can usually be forecasted and scheduled

Though we will always deal with interruptions or emergencies or unexpected issues, a big portion of a manager’s job is predictable. You know how often you should have career discussions with team members, and when performance appraisals should be done, so put career discussions on the calendar a quarter before performance appraisals. You know when budgeting will be done, put milestone planning on the calendar two months before that.

For lull times between the scheduled activities, Grove recommends a backlog of manager tasks which need to be done but don’t have a hard deadline. This also nicely reduces the temptation to fill the lull periods by meddling in the work of subordinates.

I feel like this is something management as a profession has gotten better at since the book was initially written. Practices may vary across companies, but on the whole I feel like there is perhaps more structure for managers than the book implies from earlier times.


Now, a disagreement: technical half-life

Grove makes a point several times that technology changes quickly so the company needs to keep hiring younger workers straight out of university, where they will have learned the latest technology. As engineers become more senior they can move into leadership and management roles and leave the technology to those more recently graduated.

I find this not credible, for several reasons:

  • It assumes that technology work is 100% technical, that communications skills and leadership are entirely separate and can be supplied by those senior engineers who move into management roles.
  • There are far fewer managers than engineers. This idea takes it as given that universities should produce a large number of grads for corporations to chew through, and discard most of them in favor of fresh graduates. It seems like corporations could find a better use for their senior engineers than to discard most of them.
  • It implies that all of this new tech comes from somewhere else, perhaps from Universities themselves, and that senior engineers play no role in developing it.

Wednesday, May 2, 2018

We Edited DNA in our Kitchen. You Can Too!

When our children expressed an interest in DNA and genetic engineering, we wanted to encourage their curiosity and interest. We went looking for books we could read, videos we could watch, etc.

However as we all now live in the future, there is a much more direct way to inspire their interest in genetic engineering: we could engineer some genes, in our kitchen. Of course.

We bought a kit from The Odin, a company which aims to make biological engineering and genetic design accessible and available to everyone. The kit contains all of the supplies and chemicals needed to modify yeast DNA: Genetically Engineer Any Brewing or Baking Yeast to Fluoresce

Altogether the exercise took about a week, most of which was spent allowing the yeast time to grow and multiply. If we had an incubator we could have sped this up, but an incubator is not essential for a successful experiment.

The first step was to create a healthy colony of unmodified yeast. We mixed a yeast growth medium called YPD, rehydrated the dried yeast, and spread everything onto petri dishes. The yellowish gel on the bottom of the dish is the growth medium, the droplets are the rehydrated yeast.

After several days to grow, we could then take up a bit of yeast into a small tube. We would be modifying the DNA of the yeast in the tube, and would later be able to compare it to our unmodified yeast.

The next steps are the amazing stuff.

We used a pipette to add a tiny amount of transformation matrix. This mixture prepares the yeast cells to take in new DNA.

We then used the pipette to add the GFP Expression Plasmid. GFP is Green Fluorescent Protein, and is what makes jellyfish glow in blue light. The GFP Expression Plasmid bundles the DNA segment for the jellyfish gene together with CRISPR as the delivery mechanism.

Swirling the yeast together with the plasmid is how we edited DNA in our kitchen. Over several hours, CRISPR transferred the new gene into the yeast cells in the tube. We incubated the tube for a day, then spread it onto a fresh petri dish to spend a few more days growing.

Voila: shining a blue light on the original dish of unmodified yeast versus the dish with our genetically engineered strain, you can see the difference. Our modified yeast glows a soft green. This is the Green Fluorescent Protein which our modified yeast produces.

This wasn’t a difficult experiment to perform, every step was straightforward and the instructions were quite clear. The kids got a great deal out of it, and are enthused about learning more.

We genetically engineered yeast in our kitchen. You can too!
Genetically Engineer Any Brewing or Baking Yeast to Fluoresce

Monday, April 30, 2018

Automated Blackmail at Scale

I received a blackmail letter in the postal mail yesterday. Yes, really. It begins thusly:

Hello Denton, I’m going to cut to the chase. My name is SwiftBreak~15 and I know about the secret you are keeping from your wife and everyone else. More importantly, I have evidence of what you have been hiding. I won’t go into the specifics here in case your wife intercepts this, but you know what I am talking about.

You don’t know me personally and nobody hired me to look into you. Nor did I go out looking to burn you. It is just your bad luck that I stumbled across your misadventures while working on a job around <redacted name of town>. I then put in more time than I probably should have looking into your life. Frankly, I am ready to forget all about you and let you get on with your life. And I am going to give you two options that will accomplish that very thing. Those two options are to either ignore this letter, or simply pay me $8,600. Let’s examine those two options in more detail.

In email this wouldn't be notable. I probably wouldn't even see it as it would be classified as spam. Via postal mail though, it is unusual. Postal spam is usually less interesting than this.

The letter went on to describe the consequences should I ignore it, how going to the police would be useless because the extortionist was so very good at covering their tracks, and gave a bitcoin address to send the payment to.

There are several clues that this was an automated mass mailing:

  • It helpfully included a How To Bitcoin page, which seemed odd for an individual letter (though crucial to make the scam work).
  • It looked like a form letter, inserting my first name and street name at several points.
  • Perhaps most importantly, I don't have any kind of secret which I could be blackmailed over. I don't live that kind of life. Reading the first paragraph was fairly mystifying as I had no idea what secret they were referring to.

I haven't written about bitcoin before as, other than wishing I'd mined a bunch of coins in 2013 or so, I find it farcical. However cryptocurrency is key in enabling things like this automated blackmail at scale, by providing a mostly anonymous way to transfer money online.

I am by no means the first person to be targeted by this scam:

  • Dave Eargle received an early version of the letter, which called out infidelity specifically. The letter I received was completely vague as to the nature of the scandalous secret.
  • Joshua Bernoff received a letter earlier this month which looks very similar to mine.
  • As the scam has grown, various news outlets have covered it: CNBC, Krebs On Security. The news coverage occurred in a burst in January 2018, covering Dave Eargle.

The amount of money demanded has increased over time. The 2016 letter which Dave Eargle received demanded $2,000. The April 2018 letter which Joshua Bernoff received demanded $8,350. My letter demanded $8,600. I imagine the perpetrator(s) are fine-tuning their demand based on response rates from previous campaigns. More sophisticated demographic targeting is possible I suppose, but the simpler explanation seems more likely.

I'll include the complete text of the letter at the end of this post, to help anyone else targeted by this scam find it. I'm also trying to figure out whether there is somewhere at USPS to send the physical letter. Using the postal service to deliver extortion letters is a crime, albeit in this case one where it would be difficult to identify the perpetrator.



Hello Denton, I’m going to cut to the chase. My name is SwiftBreak~15 and I know about the secret you are keeping from your wife and everyone else. More importantly, I have evidence of what you have been hiding. I won’t go into the specifics here in case your wife intercepts this, but you know what I am talking about.

You don’t know me personally and nobody hired me to look into you. Nor did I go out looking to burn you. It is just your bad luck that I stumbled across your misadventures while working on a job around <redacted name of town>. I then put in more time than I probably should have looking into your life. Frankly, I am ready to forget all about you and let you get on with your life. And I am going to give you two options that will accomplish that very thing. Those two options are to either ignore this letter, or simply pay me $8,600. Let’s examine those two options in more detail.

Option 1 is to ignore this letter. Let me tell you what will happen if you choose this path. I will take this evidence and send it to your wife. And as insurance against you intercepting it before your wife gets it, I will also send copies to her friends, family, and your neighbors on and around <redacted name of street>. So, Denton, even if you decide to come clean with your wife, it won’t protect her from the humiliation she will feel when her friends and family find out your sordid details from me.

Option 2 is to pay me $8,600. We’ll call this my “confidentiality fee.” Now let me tell you what happens if you choose this path. Your secret remains your secret. You go on with your life as though none of this ever happened. Though you may want to do a better job at keeping your misdeeds secret in the future.

At this point you may be thinking, “I’ll just go to the cops.” Which is why I have taken steps to ensure this letter cannot be traced back to me. So that won’t help, and it won’t stop the evidence from destroying your life. I’m not looking to break your bank. I just want to be compensated for the time I put into investigating you.

Let’s assume you have decided to make all this go away and pay me the confidentiality fee. In keeping with my strategy to not go to jail, we will not meet in person and there will be no physical exchange of cash. You will pay me anonymously using bitcoin. If you want me to keep your secret, then send $8,600 in BITCOIN to the Receiving Bitcoin Address listed below. Payment MUST be received within 10 days of the post marked date on this letter’s envelope. If you are not familiar with bitcoin, attached is a “How-To” guide. You will need the below two pieces of information when referencing the guide.

Required Amount: $8,600
Receiving Bitcoin Address: <redacted>

Tell no one what you will be using the bitcoin for or they may not give it to you. The procedure to obtain bitcoin can take a day or two so do not put it off. Again, payment must be received within 10 days of this letter’s post marked date. If I don’t receive the bitcoin by the deadline, I will go ahead and release the evidence to everyone. If you go that route, then the least you could do is tell your wife so she can come up with an excuse to prepare her friends and family before they find out. The clock is ticking, Denton.

Wednesday, January 24, 2018

I Know What You Are by the Smell of Your Wi-Fi

In July 2017 I gave a talk at DEF CON 25 describing a technique to identify the type of Wi-Fi client connecting to an Access Point. It can be quite specific: it can distinguish an iPhone 5 from an iPhone 5s, a Samsung Galaxy S7 from an S8, etc. Classically in security literature this type of mechanism would have been called "fingerprinting," but in modern usage that term has evolved to mean identification of a specific individual user. Because this mechanism identifies the species of the device, not the specific individual, we refer to it as Wi-Fi Taxonomy.

The mechanism works by examining Wi-Fi management frames, called MLME frames. It extracts the options present in the client's packets into a signature string, which is quite distinctive to the combination of the Wi-Fi chipset, device driver, and client OS.
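To illustrate the idea, here is a simplified sketch (not the actual taxonomy code) of how the tagged information elements in a management frame body might be walked to build a signature. IEEE 802.11 information elements are a tag/length/value list; the set and order of tag IDs a client emits is characteristic of its chipset, driver, and OS. The `signature` format below is hypothetical, chosen only for illustration.

```python
def ie_tags(tagged_params: bytes) -> list:
    """Walk the tag/length/value list of information elements
    in a management frame body, collecting the tag numbers."""
    tags = []
    i = 0
    while i + 2 <= len(tagged_params):
        tag = tagged_params[i]        # element ID
        length = tagged_params[i + 1] # length of the value that follows
        tags.append(tag)
        i += 2 + length               # skip over the value
    return tags

def signature(tagged_params: bytes) -> str:
    """Hypothetical simplified signature: the ordered list of IE IDs."""
    return ",".join(str(t) for t in ie_tags(tagged_params))

# Example probe-request IEs: SSID (tag 0), Supported Rates (tag 1),
# HT Capabilities (tag 45, truncated here for brevity).
frame = bytes([0, 4, 0x74, 0x65, 0x73, 0x74,   # SSID "test"
               1, 2, 0x82, 0x84,                # two supported rates
               45, 2, 0x00, 0x00])              # HT caps (truncated)
print(signature(frame))  # -> "0,1,45"
```

The real signatures are richer than this, incorporating details like the contents of the HT and VHT capability fields, but the ordered presence of options is the core of what makes each implementation distinctive.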

The video of the talk has been posted by DEF CON.


  • The slides are available in PDF format from the DEFCON media server, and the speaker notes on the slides contain the complete talk.
  • The database of signatures to identify devices is available as open source code with an Apache license as a GitHub repository.
  • There is also a paper which describes the mechanism, and which goes a level of detail deeper into how it works. It is available from arXiv.