Thursday, March 14, 2019

Vega Visualization Grammar and Jupyter notebooks

In the previous post about charting options for Jupyter Notebooks, the container to run the Notebook had to include a number of extensions for JupyterLab to enable the various charting packages to work. JupyterLab, the next major version of Jupyter Notebooks, locks down JavaScript within the Notebook web environment and requires an extension for each charting package.

jupyter labextension install jupyter-matplotlib
jupyter labextension install bqplot
jupyter labextension install jupyterlab_bokeh
jupyter labextension install beakerx-jupyterlab
jupyter labextension install @pyviz/jupyterlab_pyviz
jupyter labextension install @jupyterlab/plotly-extension

All of the charting packages required an extension, that is, except one. The exception is Altair, which works not by generating JavaScript but by generating a Vega-Lite description of the desired chart. Vega is a visualization grammar, a declarative language for describing visualizations. For example, consider the first few lines of a bar chart example from the Vega site:

{
  "$schema": "https://vega.github.io/schema/vega/v5.json",
  "width": 400,
  "height": 200,
  "padding": 5,

  "data": [
    {
      "name": "table",
      "values": [
        {"category": "A", "amount": 28},
        {"category": "B", "amount": 55},
        {"category": "C", "amount": 43},
        {"category": "D", "amount": 91},
        {"category": "E", "amount": 81},
        {"category": "F", "amount": 53},
        {"category": "G", "amount": 19},
        {"category": "H", "amount": 87}
      ]
    }
  ],

JupyterLab includes a renderer for Vega-Lite and Vega. A Vega description can be passed to JupyterLab's display() routine, and will render graphically in the Notebook. This means we can write code within the Notebook to generate a Vega description, and then render it, without requiring any extension to be installed.
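
To make this concrete, here is a minimal sketch of rendering a hand-written Vega-Lite spec. The MIME type is an assumption: application/vnd.vegalite.v2+json matches the Vega-Lite renderer JupyterLab shipped in this era, and should be adjusted to whatever your JupyterLab build understands.

from IPython.display import display

# A tiny Vega-Lite bar chart, echoing the Vega example above.
spec = {
    "$schema": "https://vega.github.io/schema/vega-lite/v2.json",
    "data": {"values": [{"category": "A", "amount": 28},
                        {"category": "B", "amount": 55}]},
    "mark": "bar",
    "encoding": {
        "x": {"field": "category", "type": "ordinal"},
        "y": {"field": "amount", "type": "quantitative"},
    },
}
# raw=True tells Jupyter this dict is already a MIME bundle to render as-is.
display({"application/vnd.vegalite.v2+json": spec}, raw=True)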

As one example: consider an interactive treemap. None of the charting packages investigated earlier had a treemap implementation which I really liked, but the Vega grammar supports one. Using Python code running within the Notebook, we can generate one:

# Vega treemap documentation: https://vega.github.io/vega/examples/treemap/
# `elements` (the treemap nodes) and `sector_colormap` (sector name to color)
# are defined elsewhere in the Notebook.
def solution_treemap(width, height):
    return {
      "$schema": "https://vega.github.io/schema/vega/v4.json",
      "width": width,
      "height": height,
      "padding": 2.5,
      "autosize": "none",

      "signals": [
        {
          "name": "layout", "value": "squarify",
        },
        {
          "name": "aspectRatio", "value": 1.6,
        }
      ],

      "data": [
        {
          "name": "drawdown",
          "values": list(elements.values()),
          "transform": [
            {
              "type": "stratify",
              "key": "id",
              "parentKey": "parent"
            },
            {
              "type": "treemap",
              "field": "size",
              "sort": {"field": "value"},
              "round": True,
              "method": {"signal": "layout"},
              "ratio": {"signal": "aspectRatio"},
              "size": [{"signal": "width"}, {"signal": "height"}]
            }
          ]
        },
        {
          "name": "nodes",
          "source": "drawdown",
          "transform": [{ "type": "filter", "expr": "datum.children" }]
        },
        {
          "name": "leaves",
          "source": "drawdown",
          "transform": [{ "type": "filter", "expr": "!datum.children" }]
        }
      ],
      "scales": [
        {
          "name": "color",
          "type": "ordinal",
          "domain": list(sector_colormap.keys()),
          "range": list(sector_colormap.values())
        },
      ],

      "marks": [
        {
          "type": "rect",
          "from": {"data": "nodes"},
          "interactive": False,
          "encode": {
            "enter": {
              "fill": {"scale": "color", "field": "name"}
            },
            "update": {
              "x": {"field": "x0"},
              "y": {"field": "y0"},
              "x2": {"field": "x1"},
              "y2": {"field": "y1"}
            }
          }
        },
        {
          "type": "rect",
          "from": {"data": "leaves"},
          "encode": {
            "enter": {
              "stroke": {"value": "#fff"},
              "tooltip": {
                "signal": "{title: datum.name, 'CO2eq': datum.size + ' Gigatons'}"}
            },
            "update": {
              "x": {"field": "x0"},
              "y": {"field": "y0"},
              "x2": {"field": "x1"},
              "y2": {"field": "y1"},
              "fill": {"value": "transparent"}
            },
            "hover": {
              "fill": {"value": "gray"}
            }
          }
        },
        {
          "type": "text",
          "from": {"data": "nodes"},
          "interactive": False,
          "encode": {
            "enter": {
              "font": {"value": "Helvetica Neue, Arial"},
              "align": {"value": "center"},
              "baseline": {"value": "middle"},
              "fill": {"value": "#fff"},
              "text": {"field": "name"},
              "fontSize": {"value": 18},
              "fillOpacity": {"value": 1.0},
              "angle": {"value": -62.0}
            },
            "update": {
              "x": {"signal": "0.5 * (datum.x0 + datum.x1)"},
              "y": {"signal": "0.5 * (datum.y0 + datum.y1)"}
            }
          }
        }
      ]
    }
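
The code references two variables built elsewhere in the Notebook. As a rough sketch, with hypothetical names and values rather than the model's real data, they might look like:

# Hypothetical inputs for solution_treemap(). The "id"/"parent" fields form
# the tree consumed by the stratify transform, leaves carry "size", and the
# names of non-leaf nodes drive the color scale.
sector_colormap = {
    "Electricity": "#1f77b4",
    "Food": "#2ca02c",
}
elements = {
    "world":       {"id": 1, "name": "World"},
    "electricity": {"id": 2, "parent": 1, "name": "Electricity"},
    "food":        {"id": 3, "parent": 1, "name": "Food"},
    "solar":       {"id": 4, "parent": 2, "name": "Solar", "size": 20},
    "wind":        {"id": 5, "parent": 2, "name": "Wind", "size": 15},
    "forests":     {"id": 6, "parent": 3, "name": "Forests", "size": 10},
}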

The resulting dictionary is passed to JupyterLab's display() method, annotated with the Vega MIME type:

display(
   {'application/vnd.vega.v4+json': solution_treemap(width=400, height=800)},
   raw=True)

The notebook can be run for keepsies using mybinder.org, which creates a container to run user code. Clicking on the button below will open Jupyter in a new browser window, albeit after a perhaps lengthy pause while initializing a container to run it.


For those who have not used Jupyter before: each numbered block of code in the Notebook is a cell, and can be run using the right-pointing triangle button at the top. When the Notebook is first opened there will likely be no graphs displayed. Clicking the run button in each cell will run it and display the treemap.

Saturday, February 9, 2019

Line graphs in JupyterLab

Recently I've been doing quite a bit of work on climate solution models, using Jupyter Notebook to interact with the backend model. The Jupyter Notebook is a way to allow front-end code running in a browser to interact with backend code running as kernels managed by Jupyter. Though Jupyter evolved from IPython, there are now kernels available for many languages including Java, R, and even C++. All of my work with Jupyter is in Python, however.

I spent time investigating various graphing and charting packages which support interactive use in a JupyterLab notebook. The demo code for the different charting packages is available on github for reference. Note that github will render this file in a way which looks nice, but it is not actually running Jupyter and the graphs are just static snapshots.

The notebook can be run for keepsies using mybinder.org, which creates a container to run user code. Clicking on the button below will open Jupyter in a new browser window, albeit after a perhaps lengthy pause while initializing a container to run it.


At launch an error dialog may appear, about jupyterlab_pyviz. I think that is a false alarm, as jupyterlab_pyviz is installed in the container and the Altair-viz chart does work, but I haven't figured out how to suppress the popup.


For those who have not used Jupyter before: each numbered block of code in the Notebook is a cell, and can be run using the right-pointing triangle button at the top. When the Notebook is first opened there will likely be no graphs displayed. Clicking the run button in each cell (and twice in the matplotlib cell to get it to render) will run each one and allow the graphs to be interacted with.



Matplotlib + ipympl

matplotlib is one of the oldest plotting packages for Python still in active development. Its primary design point is static graphs, but it does provide interactive features with a suitable renderer. In addition to Jupyter notebooks matplotlib supports renderers for many GUI environments like PyTk and Qt.

Instructions on the web will generally say to include a "%matplotlib inline" statement for use in Jupyter notebooks, but inline graphs are completely static. Instead, this example uses ipympl and "%matplotlib ipympl" which allows interactive features to work. A toolbar at the bottom allows panning and zooming.
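
A minimal interactive cell might look like the following, assuming ipympl is installed in the kernel and its lab extension is present (some ipympl versions spell the magic "%matplotlib widget"):

%matplotlib ipympl
import matplotlib.pyplot as plt

# An interactive figure; the ipympl toolbar provides pan and zoom.
fig, ax = plt.subplots()
ax.plot([0, 1, 2, 3], [0, 1, 4, 9])
plt.show()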

However, matplotlib's interactive features are quite limited. Only one chart at a time can be interactive, and the user must click a power button in the upper right to deactivate it before another can be activated. The overall experience isn't great; matplotlib is much stronger at producing purely static graphs.


 


Bokeh

Bokeh is a charting and dashboarding package. Though it supports use within Jupyter, bokeh appears to be mainly aimed at creating dashboards and larger collections of visualizations. For example, it doesn't always cooperate with Jupyter widgets, preferring bokeh server's layout functions.

Bokeh's interactive features include a tooltip showing exact values when hovering over a datapoint, plus zoom and panning within the chart.
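
A minimal sketch of a line chart with a hover tooltip, using hypothetical data (the tooltips argument to figure() configures a HoverTool):

from bokeh.plotting import figure, output_notebook, show

output_notebook()  # route Bokeh output into the notebook

p = figure(tools="pan,wheel_zoom,reset",
           tooltips=[("x", "@x"), ("y", "@y")])
p.line([1, 2, 3, 4], [3, 1, 4, 1])
p.circle([1, 2, 3, 4], [3, 1, 4, 1], size=8)  # points give the hover a target
show(p)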


 


hvplot

hvplot is built atop bokeh and looks quite similar in operation, with tooltips and pan-and-zoom. hvplot's programming interface is integrated with Pandas, adding an hvplot() method to the dataframe.
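
A minimal sketch of that interface, with hypothetical data:

import pandas as pd
import hvplot.pandas  # registers the .hvplot accessor on DataFrames

df = pd.DataFrame({"year": [2015, 2016, 2017, 2018],
                   "tons": [30, 32, 34, 36]})
df.hvplot.line(x="year", y="tons")  # bokeh-backed, with tooltips and pan/zoom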


 


Beaker/X

Beaker/X is a collection of extensions for Jupyter Notebook, spanning the gamut from computation kernels to a graphing package. Its interactive features include tooltips on hover plus panning and zooming, with double-click to return to the original zoom level.

The charts in Beaker/X are implemented in Groovy. The Python APIs are a direct transfer of the Groovy APIs rather than Python bindings with a Pythonic interface; Beaker/X provides a mechanism for different language kernels to call into each other. This did not make it very easy to code for in Python: for example, I could not figure out how to set up a Tooltips object and had to leave it with the default.
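
For flavor, a sketch of the Python API as I recall it, with hypothetical data; the Plot and Line names are carried straight over from the Groovy API, which is the point being made above:

from beakerx import *

plot = Plot(title="Demo", xLabel="x", yLabel="y")
plot.add(Line(displayName="line", x=[1, 2, 3, 4], y=[3, 1, 4, 1]))
plot  # the last expression in a cell is what gets displayed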


 


bqplot

bqplot was released by Bloomberg, and is a charting package used in producing various Bloomberg media properties. It is a thin Python layer which passes through most commands to D3.js for implementation.

bqplot has ways to support both tooltips on hover and zoom-and-pan, but I was not able to get both to work at the same time. The video shows a quick edit and re-run of the cell to demonstrate first one and then the other. There may well be a way to do both at once, but I didn't figure it out.

Additionally, the tooltips for Line charts are currently not able to retrieve data values and always print NaN.
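
A minimal sketch using bqplot's pyplot-style API, with hypothetical data:

import bqplot.pyplot as plt

fig = plt.figure(title="line")
line = plt.plot([1, 2, 3, 4], [3, 1, 4, 1])
plt.show()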


 


Altair-viz

Altair is somewhat different from the other packages described here in that its API is declarative, not imperative. Instead of specifying a series of operations to produce a graph, Altair provides a way to declare which elements of the data should be graphed and how to transform the data while doing so. In that sense, Altair aspires to be more like SQL for fetching data or Prolog for transforming data than a traditional charting API.

Altair-Viz emits a Vega-Lite description of the desired graphic, which is expanded to Vega, which ultimately sits atop D3.js in the browser. JupyterLab, an early access release of the next major Jupyter version, supports Vega-Lite and Vega natively. Altair does not require installation of a Jupyter extension to work; it is the only charting package in this post which does not need one.

For interactive use, Altair charts support tooltips on hover and pan-and-zoom of the graph.
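
A minimal sketch of the declarative style, with hypothetical data:

import altair as alt
import pandas as pd

df = pd.DataFrame({"year": [2015, 2016, 2017, 2018],
                   "tons": [30, 32, 34, 36]})
# Declare the encodings; .interactive() adds pan-and-zoom.
alt.Chart(df).mark_line().encode(
    x="year:O", y="tons:Q", tooltip=["year", "tons"]
).interactive()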


 


plot.ly

Plot.ly is a plotting-as-a-service provider. The main service is hosted online, but the basic functionality of the plot.ly implementation is also available as an open source package which is run locally.

plot.ly charts support tooltips and zoom-and-pan, as well as a mode which prints the X/Y coordinates as the pointer moves over the chart.
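
A minimal sketch using the open source package in offline mode, with hypothetical data (this reflects the pre-4.0 plotly API current at the time):

import plotly.offline as py
import plotly.graph_objs as go

py.init_notebook_mode()  # render charts locally, without the hosted service
trace = go.Scatter(x=[1, 2, 3, 4], y=[3, 1, 4, 1], mode="lines+markers")
py.iplot([trace])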


 


My intention at this point is to focus on Altair-Viz, as it is working well and shows promise for future capabilities, with JupyterLab having native support for the Vega visualization grammar.

Tuesday, October 16, 2018

Carbon Capture: BECCS

BECCS is an acronym for Bio Energy with Carbon Capture and Storage. It uses plant material in a pyrolysis process to produce electricity. As discussed in the earlier post about biochar, the pyrolysis process produces three outputs:

  • a carbon-rich gas called syngas, which is flammable and has about half the energy density of natural gas.
  • the solid char, a charcoal which has a much higher concentration of carbon than the original plant material.
  • a thick tar referred to as bio-oil, which is much higher in oxygen than petroleum but otherwise similar.

BECCS is a commercial operation to pyrolyze organic material at scale, usually using trees grown specifically for the purpose, and then to:

  • generate electricity by burning the syngas
  • use the char to keep the carbon it holds sequestered for a significant length of time. Though this might involve burial deep underground, char is also useful as a soil additive and takes many years to biodegrade. We could handle a substantial amount of carbon returning to the environment at a long enough cadence.
  • make use of the bio-oil, which currently has little commercial use but great potential, as it could displace petroleum in a number of chemical processes.

Because the feedstock for BECCS is newly grown vegetative material, the process is at worst carbon neutral. If the char keeps carbon out of the atmosphere for a lengthy period of time, BECCS becomes carbon negative, drawing down carbon from the environment while providing revenue via power generation to fund its own operation.

BECCS gets a great deal of attention because it is already operating at substantial scale, removing hundreds of kilotons of carbon dioxide from the atmosphere each year. This is a few orders of magnitude short of where we need to get, but is proof that the process works.

Existing BECCS installations capture byproducts of agricultural processes already in operation, like the fermentation of corn for ethanol production. An analysis of geographic data in 2018 estimated that BECCS could draw down approximately 100 megatons of carbon dioxide per year by 2020 using available land area.

Thursday, October 11, 2018

Carbon Capture: Enhanced Weathering

Chemical weathering is the process by which various types of rock are broken down by exposure to water, oxygen, and/or carbon dioxide. For our purposes, the most relevant forms of weathering involve uptake of carbon dioxide. CO2 dissolved in rainwater forms carbonic acid, which is quite mild as acids go but sufficient over time to dissolve minerals out of rock. Calcium- and silicon-bearing rock exposed to carbonic acid forms HCO3 bicarbonate, releasing calcium and silicates.

Occurring naturally, this chemical reaction takes place gradually over millions of years. Most of the bicarbonate thus produced eventually washes out to the ocean, where various organisms like coral pull carbon and dissolved calcium out of the water to make shells. The rest of the bicarbonate gradually settles into the deep ocean and eventually adds to the limestone at the ocean floor.

Enhanced weathering is a plan by which humans can accelerate this process, by grinding the appropriate types of rock into particles to maximize surface area and spreading them over an area to take up CO2. There are a number of options.

  • Bicarbonate, calcium, and magnesium at appropriate concentrations are beneficial to soil health, especially tropical soils which tend to be depleted in these minerals. Spreading powdered olivine over one third of tropical agricultural land could pull between 30 and 300 parts per million of carbon dioxide out of the atmosphere.
     
    There is a large range in that number because we just don't know enough about how these processes work at scale. Perhaps fortunately, we also don't have the capacity to quickly seed such a large fraction of the planet's land area. Over time, the results of the earliest years of effort can be measured to guide future plans.

  • Though tropical land is ideal, using olivine as a soil additive in agricultural land elsewhere would still have an effect.
     
    The term "electrogeochemistry" has been coined to refer to enhanced weathering done at large scale.

  • Mine tailings are the heaps of excess rock discarded from mining operations after the valuable minerals have been extracted. The tailings generally contain large amounts of the types of rock which will absorb CO2 as they weather, and in fact do rapidly form a shell of carbonate at the surface of the pile. If mining regulations are made to require the tailings be ground more finely and appropriately distributed, they can be effective in pulling carbon dioxide from the atmosphere.
     
    Mine tailings also tend to contain trace amounts of substances which can be harmful, like mercury. Processes such as those developed by Advanced Materials Processing, Inc to remove harmful substances from tailings would be necessary.

Wednesday, September 19, 2018

MacOS Preview.app has a Signature Tool

When I receive a PDF file to be signed and returned I have generally printed it out to sign and scan back in... like an animal, as it turns out. On a MacOS system there is a convenient way to add a signature to a PDF file without needing to print it, using only the Preview.app which comes with the system.

In the toolbar is a squiggly icon with a drop down menu:

Clicking it allows one to create a signature by either signing with a finger on the trackpad, or writing a signature on a piece of paper for the camera to scan in. The camera option does a good job of edge detection to extract only the writing and not shadows on the paper.

The resulting signature can then be added to the document and dragged to the right spot.

Saturday, September 15, 2018

The Arduino before the Arduino: Parallax Basic Stamp

I recently had cause to dig down through the layers of strata which have accumulated in my electronics bin. In one of the lower layers I found this bit of forgotten kit: the Parallax Basic Stamp II. This was the Arduino before there was an Arduino, a tiny microprocessor aimed at being simple for hobbyist and low-volume commercial use.

The Basic Stamp line is still sold today, though with designs developed over a decade ago. The devices have enough of a market to remain in production, but are otherwise moribund. The past tense will be used in this blog post.

The Basic Stamp line dates back to the early 1990s. The Basic Stamp II shown here was introduced in 1995. It used a PIC microcontroller, an 8-bit microprocessor line which has been used in deeply embedded applications for decades and is still developed today. The PIC family is a product of Microchip Technology, the same company which now supplies the AVR chips used in the Arduino, after acquiring Atmel in 2016.

The PIC contained several KBytes of flash, which held a BASIC interpreter called PBASIC. An external EEPROM on the BS2 board contained the user's BASIC code, compiled to bytecode. Though it may seem an odd choice now, in the early 1990s the choice of BASIC made sense: the modern Internet and tech industry, with their attendant increase in the number of people comfortable developing software, did not yet exist. BASIC could leverage familiarity with Microsoft GW-BASIC and QBASIC on the PC, as MS-DOS and Windows computers of this time period all shipped with BASIC. Additionally, Parallax could tap into the experience of the hobbyist community from the Apple II and Atari/Commodore/etc.


' PBASIC code for the Basic Stamp
LED         PIN 5
Button      PIN 6    ' the BS2 had 16 pins
ButtonVal   VAR Bit  ' space is precious, 1 *bit* storage
LedDuration CON 500  ' a constant

' Init code
OUTPUT LED
INPUT  Button

DO
 ButtonVal = Button                 ' Read button input pin
 FREQOUT LED,LedDuration,ButtonVal  ' PWM output to flicker LED
 PAUSE 200                          ' in milliseconds
LOOP

PBASIC supported a single thread of operation; the BASIC Stamp supported neither interrupts nor threads. Applications needing these functions would generally use a PIC chip without the BASIC interpreter on top. Later Stamp versions added a limited ability to poll pins between BASIC statements and take action. This seemed aimed at industrial control users of the Stamps; for example, Disney used BASIC Stamps in several theme park rides designed during this time frame.

A key piece of the Arduino and Raspberry Pi ecosystems is the variety of expansion kits, or "shields," which connect to the microprocessor to add capabilities and interface with the external world. The ecosystem of the BASIC Stamp was much more limited; suppliers like Adafruit were not in evidence because the low-volume PCB design and contract manufacturing industry mostly didn't exist. Parallax produced some interesting kits of its own, like an early autonomous wheeled robot. For the most part, though, hobbyists of this era had to be comfortable with wire-wrapping.

Saturday, September 8, 2018

code.earth hackathon notes

Project Drawdown is a comprehensive plan proposed to reverse global warming. The project researchers analyzed and ranked scenarios according to the potential reduction in carbon levels, and analyzed the costs.

Project Drawdown will continue the analysis work, but is moving into an additional advocacy and empowerment role of showing governments, organizations, and individuals that global warming can be mitigated and providing detailed guidance on strategies which can work. The audience for the project's work is expanding.

This places new demands on the tools. The tooling needs to be more accessible to people in different roles, and provide multiple user interfaces tailored to different purposes. For example, the view provided to policymakers would be more top-level, showing costs and impacts, while the view for researchers would allow comparisons by varying the underlying data.

The code.earth hackathon in San Francisco September 5-7, 2018 implemented a first step in this, starting to move the modeling implementation from Microsoft Excel into a web-hosted Python process with Excel providing the data source and presentation of the results. This will separate the model implementation from user interface, making it easier to have multiple presentations tailored for different audiences. It will still be possible to get the results into Excel for further analysis, but web-based interfaces can reach much wider audiences able to act on the results.

I was at the hackathon, working on an end-to-end test for the new backend, and plan to continue working on the project for a while. Global warming is the biggest challenge of our age. We have to start treating it as such.

Sunday, September 2, 2018

Carbon Capture: Cryogenic CO2 Separation

Sublimation is a phase change directly from a solid to a gas, without transitioning through an intermediate liquid state. Desublimation is the opposite, where a gas crystallizes into a solid without becoming a liquid first. The most well-known example of desublimation is snow, where water vapor crystallizes into tiny bits of ice. When water vapor in a cloud first condenses into liquid and then freezes, the result is hail, not snow.

Interestingly, and quite usefully for carbon capture, carbon dioxide desublimates at -78 degrees Centigrade. This is a considerably higher temperature than the points at which the main components of the atmosphere, like nitrogen and oxygen, condense, which means that as air gets very cold, CO2 will be among the first components to turn into ice crystals. This allows the CO2 crystals to be harvested.

Several companies have working technology in this area:

  • Alliant Techsystems (now defunct) and ACENT Laboratories developed a supersonic wind tunnel which compresses incoming air, causing it to heat up, then expands the supersonic airflow causing it to rapidly freeze. CO2 crystals can be extracted via cyclonic separation, relying on the mass of the frozen particles.

  • Sustainable Energy Solutions in Utah uses a heat exchanger process to rapidly cool air, harvest the CO2 crystals, then reclaim the energy spent on cooling before exhausting the remaining gases.

Wednesday, August 29, 2018

Google Software Engineering Levels and Ladders

Google (now Alphabet) hires a lot of engineers every year. There are articles out there about the interview process and how to prepare, and I do definitely recommend spending time in preparation. Google interviews for software engineers mostly do not focus on the candidate's resume or prior experience, instead asking technical questions on various topics and coding. You'll do better if you mentally refresh topics in computer science which you have not recently worked with.

This post focuses on a different area: how to evaluate an engineering job offer from Alphabet. The financial aspects will presumably be clear enough, but the career aspects of the offer may not be. This post will attempt to explain what Google's engineering career progression looks like.

There are two concepts: ladder and level. The ladder defines the role you are expected to do, like manager or engineer or salesperson, while the level is how senior you are in that role.

Like many Tech companies, Google has parallel tracks for people who wish to primarily be individual contributors and for people who wish to primarily be managers. This takes the form of two ladders, Software Engineer (universally abbreviated as "SWE") and Software Engineering Manager. Google does allow people on the SWE ladder to manage reports, and allows people on the Manager ladder to make technical contributions. The difference is in how performance is evaluated. For those on the SWE ladder the expectation is that at least 50% of their time will be spent on individual contributing engineering work, leaving no more than 50% on management. For those on the Manager ladder the expectation is more like 80% of the time to be spent on management. People on one ladder veering too far out of the guidance for that ladder will be encouraged to switch to the other, as performance evaluations will begin to suffer.


 

Software Engineer Ladder

The levels are:

  • SWE-I (Level 2) is a software engineering intern, expected to be in the junior or senior year of a four year degree program.
  • SWE-II (Level 3) is an entry level full-time software engineer. An L3 SWE is generally someone who recently graduated with an undergraduate or Master's degree, or equivalent education.
  • SWE-III (Level 4) is someone with several years of experience after graduation, or someone who just finished a PhD in a technical field.
  • Senior Software Engineer (Level 5) is the level where a software engineer is expected to be primarily autonomous: capable of being given tasks without excessive detail, and being able to figure out what to do and then do it. A software engineer advances to L5 primarily by demonstrating impact on tasks of sufficient difficulty. When hiring externally, six to ten years of experience is generally expected.
  • Staff Software Engineer (Level 6) is the level where leadership increasingly becomes the primary criteria by which performance is judged. Many, though by no means all, SWEs begin managing a team of engineers by this point in their career. When hiring externally, ten or more years of experience are generally expected.
  • Senior Staff Software Engineer (Level 7) is essentially L6 with larger expectations. Guidance for years of experience begins to break down at this level, as most candidates with ten or more years experience will be hired at Level 6 unless there is a strong reason to offer a higher level. Involvement of the hiring manager or strong pushback by the candidate can sometimes push the offer to Level 7.
  • Principal Software Engineer (Level 8) is the first level which is considered an executive of the Alphabet corporation for the purposes of remuneration and corporate governance. Principal Software Engineers drive technical strategy in relatively large product areas. SWEs at level 8 or above are relatively rare: the equivalent level on the manager ladder will routinely have five or more times as many people as on the SWE ladder. By this level of seniority, most people are focussed on management and leadership.
  • Distinguished Software Engineer (Level 9) drives technical strategy in efforts spanning a large technical area.
  • Google Fellow (Level 10) is the same level as a Vice President, expected to drive technical strategy and investment in crucial areas.
  • Google Senior Fellow (Level 11) is for people like Jeff Dean and Sanjay Ghemawat.

Most external hiring for software engineers is for L4 through L6, with L7 also possible though less common. Hiring externally directly to L8 and L9 does happen, but is quite rare and demands the direct sponsorship of a high-level executive like a Senior Vice President of a Google Product Area or CEO of an Alphabet company. For example James Gosling and David Patterson both joined the company as L9 Distinguished Engineers.

Also notable is that the external hiring process and the internal promotion process are entirely separate, and at this point have diverged substantially in their calibration. It is fair to say that Alphabet substantially undervalues experience outside of the company, or perhaps overvalues experience within the company. Someone with ten years experience externally would be hired at L5 or L6, while ten years within the company can make it to L7 or L8.


 

Software Engineering Manager Ladder

The levels are:

  • Manager, Software Engineering I (Level 5) is the first level on the manager ladder. It is expected that people will have a few years experience in the field before they begin managing a team, and therefore the Manager ladder starts at level 5. Manager I will typically lead a small team of engineers, five to ten is common.
  • Manager, Software Engineering II (Level 6) is typically a manager of a team of ten to twenty, sometimes a mixture of direct reports and managing other managers. When hiring externally, 10+ years of experience is expected.
  • Manager, Software Engineering III (Level 7) begins the transition to be primarily a manager of managers. Teams are larger than L6, typically twenty to forty.
  • Director (Level 8) is the first level which is considered an executive of the Alphabet corporation for the purposes of remuneration and corporate governance. Directors are mostly managers of managers, and typically lead organizations of forty up to several hundred people.
  • Senior Director (Level 9) is basically a secret level at Google: all of the internal tools will show only "Director," and by tradition promotions to Senior Director are not publicly announced. Senior Directors may lead slightly larger organizations than L8 Directors, though mostly it provides a way to have a larger gap between Director and VP while still allowing career progression.
  • Vice President (Level 10) typically leads organizations of hundreds to thousands of people. Their direct reports will typically be Directors and will be second to third level managers themselves.
  • Vice President II (Level 11), like Senior Director, is shown only as "VP" in internal tools and provides a way to maintain a larger gap between VP and SVP while still allowing managers to advance in their careers.
  • There are executive levels beyond L11, notably Senior Vice Presidents of Google divisions and CEOs of other Alphabet companies. This blog post is not a good guide to hiring for those levels, if you happen to be such a candidate. Sorry.

When hiring managers externally, L5 through Director is most common. Above Director is rare and generally only happens with the sponsorship of a high level executive. However where SWE hiring essentially tops out at L9, manager hires can come in at almost any level given sufficient sponsorship. Alphabet hires CEOs for its affiliated companies (John Krafcik, Andrew Conrad) and Google SVPs (Craig Barratt, Diane Greene) externally.


 

Other ladders equivalent to SWE

There is one other software engineering role at Alphabet which is parallel to the SWE/Software Manager ladders: Site Reliability Engineer or SRE. The individual contributor ladder is called SRE-SWE — for historical reasons, as there used to be an SRE-System Administration ladder which is no longer hired for. There is also an SRE Manager ladder. The levels on SRE-SWE and SRE Manager roughly correspond in responsibilities and years of experience to the SWE and Software Manager ladders described above, though the nature of the work differs.

SRE is equivalent to SWE in that, at any time, an SRE can choose to relinquish the SRE duties and transfer to the SWE ladder, and an SRE Manager can switch to the Software Manager ladder. Engineers originally hired as SREs can also generally switch back if they choose to do so in the future. Engineers hired as SWEs who wish to transfer to SRE require a bit more process, often via an internal training program to serve a rotation as an SRE.


 

Other ladders NOT equivalent to SWE

SETI, for Software Engineer in Tools and Infrastructure, is another engineering ladder within Google. Though recruiters will make the claim that it is just like being a SWE, transfers from SETI to SWE require interviews, acceptance by a hiring committee, and approval of the SVP who owns the SWE ladder. Though often successful, transfers from SETI to SWE are not automatic and do get rejected, at both stages of the approval process. As such, recruiter claims that it is just like being a SWE are not accurate. The recruiter just has an SETI role to fill.

Only accept an SETI role if automated testing and continuous software improvement are real passions of yours. Projects listing SETI openings will be less numerous than those listing SWE openings, though they will often be more focussed on automation and quality improvement. In many cases, internal transfers to projects which list a SWE opening will accept an SETI applicant, but not always. Being on the SETI ladder will therefore be slightly limiting in choice of projects for internal mobility.

There are other ladders which also involve software development but are even further removed from the SWE ladder, notably Technical Solutions Engineer (TSE) and Web Solutions Engineer (WSE). As with SETI, transfers to the SWE ladder require interviews and approvals. Recruiter claims that TSE or WSE are "just like being a SWE" are not accurate, as people on these ladders cannot internally transfer to projects which have a SWE opening. They can only transfer to TSE/WSE openings, which limit the choice of projects.

Saturday, August 25, 2018

Carbon Capture: Soil health

Topsoil across the land areas of the planet holds substantially more carbon than the entire atmosphere, and over the past several hundred years we have released at least 50 percent of the carbon formerly held by soils into the air. This is primarily because of tilling, which disturbs the deeper soils and kills the roots and fungi which reside there. Tilling is necessary for modern agricultural practices using fertilizer and insecticides, which can improve yields substantially until the soil becomes depleted of carbon and gradually less productive. Much farmland around the world is now stuck in a local maximum: stopping tilling and allowing the soil to recover would eventually result in improved yields, but only after a few years of very poor harvests.

It is estimated that regenerating depleted land can absorb two to five tons of CO2 per acre per year, for about ten years. Done at scale, regenerative agriculture could absorb tens of gigatons of CO2 per year. For perspective, current human emissions are approximately 36 gigatons per year. Improving soil health could offset a nontrivial fraction of current emissions or, in conjunction with other methods to reduce new emission, pull previously emitted carbon from the air.
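
A back-of-envelope check of those numbers, using hypothetical midpoint values rather than anything from a model:

tons_per_acre_year = 3.5   # midpoint of the 2-5 tons CO2/acre/year estimate
cropland_acres = 4.0e9     # global cropland is very roughly 4 billion acres
gigatons = tons_per_acre_year * cropland_acres / 1e9
print(gigatons, "gigatons CO2/year")  # ~14, versus ~36 emitted annually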

Companies in this technology space

  • Regen Network provides tools to gather and analyze data for soil health in regenerative agriculture, silvopasturing, and other practices to improve ecological health. It also provides a trading platform to invest in and fund these developments.
  • COMET-Farm at Colorado State University tracks data entered by farms to estimate the levels of carbon stored, plus other factors relating to soil health.