Saturday, October 21, 2017

On CPE Release Processes

Datacenter software is deployed frequently. Push daily! Push hourly! Push on green whenever the tests pass! This works even at extremely large scale: new versions of facebook.com are deployed multiple times each day (much of the site functionality is packaged in a single deployable unit).

CPE device software tends to not be deployed so often, not even close. There are several reasons for this:

  • Test practices are different.

    Embedded systems development is one of the oldest niches in software, and it does not have a strong tradition even of unit testing, let alone the level of automated testing which makes concepts like push-on-green possible. One can definitely get good unit test coverage of code which the team developed, but the typical system includes a much larger amount of open source code which rarely has unit tests and which is daunting for the team to try to add tests to. Much of the code in the system will only ever be tested at the system level. With effort and effective incentives one can build up a level of automated system test coverage... but it still won’t be close to 95%. System level testing never is; the combinatorial complexity is too high.

    Additionally, with datacenter software, the build system creating the release is often somewhat similar to the production system which will run the release. It may even be the same, if the development team uses prod systems to farm out builds. A reasonable fraction of the system functionality can be run in tests on the builder.

    With CPE devices, the build system is almost always not a CPE being tasked to compile everything. The build system is an x86 server with a cross-compiler. The build system will likely lack much of the hardware which is key to the CPE device functionality, like network interfaces or DRM keystores or video decoders. Large portions of the system may not be testable on the builder.

  • The scale is different.

    Having a million servers in datacenters is a lot: that is one or more very large computing facilities, capable of serving hundreds of millions of customers.

    Having a million CPE devices is not a lot. There are typically multiple devices within the home (modem, router, maybe some set top boxes), so a million devices translates to only a couple hundred thousand customers.

    It also simply takes longer to push that amount of software to the much larger number of systems, over network connections which will generally be slower than those within the datacenter. Multiple days is typical.

  • The impact of a problem in deployment is different.

    If you have a serious latent bug which is noticed at the 3% point of a rollout within a datacenter, that is probably a survivable event. Customers may be impacted and notice, but you can generally quarantine those 3% of servers from further traffic to end the problem. The servers can be rolled back and restored to service later, even if remediation steps are required, without further impacting customers.

    If you have a serious latent bug which is noticed at the 3% point of a rollout within a CPE Fleet, you now have a crisis. 3% of the customer base is impacted by a serious bug, and will feel the impact until you finish all of the remediation steps.

    If the remediation steps in 3% of a datacenter rollout require manual intervention, that will be a significant cost. If the remediation steps in 3% of a CPE Fleet deployment require manual intervention, it will have a material impact on the business.
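A back-of-envelope sketch shows why multi-day rollouts are unsurprising; every number in it (fleet size, image size, per-device bandwidth, push concurrency) is invented purely for illustration:

```python
# Back-of-envelope rollout time for a CPE fleet.
# Every number here is an assumption for illustration, not a measurement.
fleet_size = 1_000_000        # devices in the fleet
image_mb = 64                 # firmware image size, megabytes
effective_mbps = 2            # usable downstream per device, megabits/sec
concurrent = 1_000            # devices the update servers push to at once

seconds_per_device = image_mb * 8 / effective_mbps  # 256 s per download
waves = fleet_size / concurrent                     # 1000 sequential waves
total_hours = waves * seconds_per_device / 3600

print(f"{total_hours:.0f} hours, about {total_hours / 24:.1f} days")
```

Even with these fairly generous assumptions the push takes about three days, before accounting for retries, offline devices, or a staged canary.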

We’ll jump straight to the punchline: How often should one deploy software updates to a CPE fleet?

In my opinion: exactly as often as it takes to not feel terrified at the prospect of the next release, no more and no less often than that.

  • Releasing infrequently allows requirements and new development to build up, making the release heavier, with more opportunities for accidental breakage. It also results in developer displeasure at having to wait so long for their work to reach customers, and a corresponding rush to get not-quite-baked features in to avoid missing the release.
  • Releasing too frequently can leave too little time to fully test a release. Frequent releases do have the advantage of carrying a much smaller set of changes each, but there still needs to be reasonable confidence in the testing.

In the last CPE fleet I was involved in, we tried a number of different cadences: every 6 weeks, then weekly, then quarterly. I believe the 6 week cadence worked best. The weekly cadence resulted in a number of bugs being pushed to the fleet, and subsequent rollbacks, simply due to the lack of time to test. The quarterly cadence led to developers engaging in bad behavior to avoid missing a release train, submitting features even in terrible shape. The release cadence became even slower, and the quality of the product noticeably lower. I think six weeks was a reasonable compromise, and it left enough headroom to do minor releases at the halfway point as needed, where a very small number of changes already tested for the next release could be delivered to customers early.

One other bit of advice: no matter what the release cadence is, once it has been going on long enough, developers will begin griping about it and the leadership may begin to question it (Maxim #4). Leadership interference is what led to the widely varying release processes in the last CPE fleet I was involved in. My only advice there is to manage upwards: announce every release, and copy your management, to keep it fresh in their minds that the process works and delivers updates regularly.

Wednesday, October 4, 2017

Bad Physics Jokes

Recently I posted a number of truly terrible physics jokes to Twitter, as one does. For your edification and bemusement, here they are:

  • If you accelerate toward the red light fast enough, the blue shift turns it green again.
  • Two people are walking up a frictionless hill...
    (that's it. That's the joke.)
  • Whenever a neutron asks the price of anything: "For you, no charge."
  • Whenever the proton is asked if they are sure: "I'm absolutely positive."
  • The electron isn't invited to the party in the nucleus. The other particles find it boorish, being so constantly negative all the time.
  • The Higgs Boson conveys the gravity of the situation to other particles.
  • Photons make light of EVERYTHING. They can be SO inappropriate sometimes.
  • You might assume that Gravitons would be extroverts attracted to large groups, but no. They're actually really, really shy.
  • Despite the occasional unverified sighting, experts agree that Phlogistons are the Bigfoot of the subatomic particle world.
  • Neutrinos are the tragic poets of the subatomic world. They yearn for interaction, but know that it can never be.
  • As a community, Protons realized that their diet and exercise habits needed to improve.
  • Other particles wish they could help the Pion, but don't know what to do. Even the smallest thing can make them fall to pieces.

Sunday, September 24, 2017

There is No Feminist Cabal

From the 23-Sep-2017 New York Times:

One of those who said there had been a change is James Altizer, an engineer at the chip maker Nvidia. Mr. Altizer, 52, said he had realized a few years ago that feminists in Silicon Valley had formed a cabal whose goal was to subjugate men. At the time, he said, he was one of the few with that view.

Now Mr. Altizer said he was less alone. "There’s quite a few people going through that in Silicon Valley right now," he said. "It’s exploding. It’s mostly young men, younger than me."

I want to share some experiences, as another white male in Tech for a similar number of years as Mr. Altizer.

A while ago I made a conscious effort to follow more women in Tech on Twitter, to deliberately maintain a ratio of ~50% in those I follow. I wanted to try for more perspective than that provided by my own vantage point in the industry, where the gender ratio is definitely not 50%. It has been illuminating... and often painful. Intermixed with happy and proud events in life and work is the constant level of sexism which women experience. Sometimes it is blatant and vile: intimidation, physical threats. More often it is a grinding, ever present disrespect from men. It is so commonplace that it becomes completely expected, often mentioned in an offhand way. That doesn’t mean it is a minor thing; it means that it never stops, isn’t possible to avoid, and ceases to be surprising.

You likely won’t hear this in person, from women you work with. That doesn’t mean women you work with are experiencing something different. It doesn’t mean that their career and work environment are free of sexism and discrimination. It means that talking about it in person is asking them to relive those events, sometimes extraordinarily painful events. It means it is vastly more difficult to relate horrible experiences in person, in conversation. It is understandable to not want to talk about it.


Yet one thing I haven’t seen, not even a hint of, is the existence of a powerful group of women who are organizing to oppress men. I’ve seen no evidence of any kind of backlash against men in Tech. Jokes about a Cabal started circulating after the NYT story was published. This was irony, not confirmation.

We’re hearing more about sexism in Tech, far more than we did even a year ago. I think, I hope, that is because we are in the early stages of the extinction burst. When a behavior which was formerly rewarded no longer is, that behavior will begin to decline... except for a final gasp, a final burst, in trying to turn back the clock. The process of acknowledging the disparities in Tech has been ongoing for many years, slowly. It has reached a point where the industry is starting to respond, if only a little. That the response may grow stronger will feel like a threat, like a backlash against males. It really isn't. It is about disparity, and doing something to rectify that long-present disparity.




I’m posting this because it is unfair to expect people in disadvantaged groups to carry the entire burden of correcting the disadvantage. In Tech the advantaged group is males, and as I happen to be a male in Tech, that means me. I have not been nearly enough a part of the solution. That needs to change.

Males in Tech have almost certainly witnessed aggressions: a woman being spoken over, or not being invited to a meeting she should be, or not receiving sufficient credit for her work. One thing I have learned from following women in Tech is that no matter how much we think we understand, the aggressions they endure happen orders of magnitude more frequently than we think. For every occurrence we know of there are ten more, a hundred more, a thousand more, which we don’t see and whose frequency we don't grasp because we are not female.

We, males in Tech, need to speak out. We need to speak out frequently and firmly. So I am. My voice isn’t powerful, but power can be achieved in numbers too. I’m adding my voice to the multitude. You should too, and not just as a blog post: speak out when you see the grinding aggression happening.

Thursday, September 21, 2017

Software Engineering Maxims which May or May Not Be True

This is a series of Software Engineering Maxims Which May or May Not Be True, developed over the last few years of working at Google. Your mileage may vary. Use only as directed. Past performance is not a predictor of future results. Etc.




Maxim #1: Small teams are bigger than large teams

In my mind, the ideal size for a software team is seven engineers. It does not have to be exactly seven: six is fine, eight is fine, but the further the team gets from the ideal the harder it is to get things done. Three people isn’t enough and limits impact; fourteen is too many to effectively coordinate and communicate amongst.

Organizing larger projects becomes an exercise in modularizing the system to allow teams of about seven people to own the delivery of a well-defined piece of the overall product. The largest parts of the system will end up with clusters of teams working on different aspects of it.
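The coordination cost behind these team sizes can be made concrete with a standard back-of-envelope figure: the number of pairwise communication channels among n people is n(n-1)/2. A quick sketch of the arithmetic:

```python
# Pairwise communication channels among n people: n * (n - 1) / 2.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (3, 7, 14):
    print(f"{n:>2} people: {channels(n):>2} channels")
```

Going from seven people to fourteen does not double the coordination load, it more than quadruples it (21 channels to 91), which lines up with fourteen being too many to communicate amongst.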




Maxim #2: Enthusiasm improves productivity.

By far the best way to improve engineering productivity is to have people working on something which they are genuinely enthused about. It is beneficial in many ways:

  • the quality of the product will benefit from the care and attention
  • people don’t let themselves get blocked by something else when they are eager to see the end result
  • they’ll come up with ways to make the product even better, by way of their own resourcefulness
  • people are simply happier, which has numerous benefits in morale and working environment.

There are usually way more tasks on the project wish list than can realistically be delivered. Some of those tasks will be more important than others, but it is rarely the case that there is a strict priority order in the task list. More often we have broad buckets of tasks:

  • crucial, can’t have the product without it
  • nice to have
  • won’t do yet: too much crazy, or get to it eventually, or something

The crucial tasks have to be done, even the ones which no-one particularly wants to do.

In my mind, when selecting from the (lengthy) list of nice-to-have tasks, the enthusiasm of the engineering team should be a factor in the choices. The team will deliver more if they can work on things they are genuinely interested in doing.




Maxim #3: Project plans should have top-down and bottom-up elements

It is possible for a team to work entirely from a task list, where Product Management and the Leadership add stuff and the team marks things off as they are completed. This is not a great way to work, but it is possible.

It is better if the team spends some fraction of their time on tasks which germinated from within the team - not merely 20% time; a portion of regular work should be on tasks which the team itself came up with.

  • The team tends to generate immediately practical ideas, things which build upon the product as it exists today and provide useful extensions.
  • It is good for morale.
  • It is good for careers. Showing initiative and technical leadership is good for a software engineer.



Maxim #4: Bricking the fleet is bad for business

Activities with a risk of irreparable consequences deserve more care. This sounds obvious, like something which no-one would ever disagree with, but in the day-to-day engineering work those tasks won’t look like something which requires that extra level of care. Instead they will look like something which has been running for years and never failed, something which fades into the background and can be safely ignored because it is so reliable.

Calls to add this risk will not be phrased as "be cavalier about something which can ruin us." They will be phrased as increasing velocity, or lowering cost, or not being stuck doing things the old way - all of which might be true; it just needs more care and attention in changing it.




Maxim #5: There is an ideal rate of breakage: no more, no less

Breaking things too often is a sign of trying to do too much too quickly, and either excessively dividing attention or not allowing time for proper care to be taken.

Not breaking things often enough is a sign of the opposite problem: not pushing hard enough.

I’m sure it is theoretically possible for a team to move at an absolutely optimal speed such that they maximize their results without ever breaking anything, but I’ve no idea how to achieve it. The next best thing is to strive for just the right amount of breakage: not too much, not too little.




Maxim #6: It’s a marathon, not a sprint

"Launch and iterate" is a catchy phrase, but it often turns into an excuse to launch something sub-par and then never get around to improving it.

Yet there is a real advantage to being in a market for the long term, launching a product and improving it. Customer happiness is earned over time, not all at once with a big launch.

  • This means structuring teams for sustained effort, not big product pushes.
  • It means triaging bugs: not everything will get fixed right away, but should at least be looked at to assess relative priority.
  • It means really thinking about how to support the product in the field.
  • It means not running projects in a way which burn people out.



Maxim #7: The service is the product

The product is not the code. The product is not the specific feature we launched last week, nor the big thing we’re working on launching next week.

The product is the Service. The Product which customers care about is that they can get on the Internet and do what they need to do, that they can turn on the TV and have it work, that they can make phone calls, whatever it is they set out to do.




Maxim #8: Money is not the only motivator

A monetary bonus is one tool available for managers to reward good work. It is not the only tool, and is not necessarily the best tool for all situations.

For example, to encourage SWEs to write automated system tests we created the Yakthulhu of Testing. It is a tiny Cthulhu made from the hair of shaved yaks (*). A Yakthulhu can be obtained by checking in one’s first automated test to run against the product.

(*) It really is made from yak hair. Yak hair yarn is a thing which one can buy. Disappointingly though, they do not shave the yaks. They comb the yaks.




Maxim #9: Evolve systems as a series of incremental changes

There is substantial value in code which has seen action in the field. It contains a series of small and large decisions, fixes, and responses which made the system better over time. Generally these decisions are not recorded as a list of lessons learned to be applied to a rewrite or to the next system.

Whenever possible, systems should evolve as a series of incremental changes, taking them from where they are to where we want them to be. Doing this incrementally has several advantages:

  • benefits are delivered to customers much earlier, as the earliest pieces to be completed don’t have to wait for the later pieces before deployment.
  • there is no stagnant period in the field after work on the old system is stopped but before the new system is ready.
  • once the system is close enough to where we want it to be that other stuff moves higher on the list of priorities, we can stop. We don’t have to push on to finish rewriting all of it.



Maxim #10: Risk is multiplicative

There is a school of thought that when there are multiple large projects going on with some relation between them, they should be tied together and made dependent upon each other. The arguments for doing so are often:

  • "We’re going to pay careful attention to those projects, making them one project means we’ll be able to track them more effectively."
  • "There was going to be duplication of effort, we can share implementation of the common systems."
  • "We can better manage the team if we have more people available to be redirected to the pieces which need more help."

The trouble with this is that it glosses over the fundamental decision being made: nothing can ship until all of it ships. Combining risks makes a single, bigger risk out of the multiple smaller risks.
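A toy calculation makes the multiplication explicit, assuming (purely for illustration) that each project independently has a 90% chance of shipping on schedule:

```python
# If nothing ships until everything ships, on-time probabilities multiply.
# The 90% per-project figure is invented purely for illustration.
p_single = 0.9
for n in (1, 2, 3, 4):
    p_all = p_single ** n
    print(f"{n} coupled project(s): {p_all:.0%} chance everything ships on time")
```

Three coupled projects at 90% each leave only about a 73% chance that anything at all reaches customers on time, and it gets worse with every project added to the bundle.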




Maxim #11: Don’t shoot the monitoring

There is a peculiar dynamic when systems contain a mix of modules with very good monitoring and modules with very poor monitoring: the modules with good monitoring report all of the errors.

The peculiarity becomes damaging if the result is to have all of the bugs filed against the components with good monitoring. It makes it look like those modules are full of bugs, when the reality is likely the opposite.




Maxim #12: No postmortem prior to mortem

There are going to be emergencies. It happens, despite our best efforts to anticipate risks. When it happens, we go into damage control mode to resolve it.

People not involved in handling the emergency will begin to ask about a postmortem almost immediately, even before the problem is resolved. It is important to not begin writing the postmortem until the problem has been mitigated. Starting it earlier turns a unified crisis response into a hotbed of fingerpointing and intrigue. Even in a culture of blameless postmortems, it is difficult to avoid the harmful effects of the hints of blame while writing that blameless postmortem.

It is fine, even crucial, to save information for later drafting of the postmortem. IRC logs, lists of bugs/CLs/etc, will all be needed eventually. Just don’t start a draft of a postmortem while still antemortem.




Maxim #13: Cadence trumps mechanism

We tend to focus a lot on mechanisms in software engineering as a way to increase velocity or productivity. We reduce the friction of releasing software, or we automate something, and we expect that this will result in more of the activity which we want to optimize.

Inertia is a powerful thing. A product at rest will tend to stay at rest, a product in motion will tend to stay in motion. The best way to release a bunch of software is to release a bunch of software, by setting a cadence and sticking to it. People get used to a cadence and it becomes self-reinforcing. Making something easier may or may not result in better velocity, making it more regular almost always does.




Maxim #14: Churn wastes more than time

Project plans change. It happens.

When plans change too often, or when a crucial plan is suddenly cancelled and destaffed, we burn more than just the time which was spent on the old plan. We burn confidence in the next plan. People don’t commit as readily and don’t put their best effort into it until they’re pretty sure the new plan will stick.

In the worst case, this becomes self-reinforcing. New plans fail because of the lack of effort engendered by the failure of previous plans.




Maxim #15: Sometimes the hard way is the right way

For any given task, there is often some person somewhere who has done it before and knows exactly what to do and how to do it. For things which are likely to be done once in a project and never repeated, relying on that person (either to do it or to provide exactly what to do step by step) can significantly speed things up.

For things which are likely to be repeated, or refined, or iterated upon, it can be better to not rely on that one expert. Learning by doing results in a much deeper understanding than just following directions. For an area which is core to the product or will be extended upon repeatedly, the deeper understanding is valuable, and is worth acquiring even if it takes longer.




Maxim #16: Spreading knowledge makes it thicker

Pigeonholing is rampant in software engineering: engineers who have become experts in a particular area always end up being assigned tasks in that area.

There are occasions where that is optimal, where becoming a subject matter expert takes substantial time and effort, but these situations are rare. In most cases it is not the expense of becoming an expert that keeps an engineer doing similar work over and over, it is just complacency.

Areas of the product where the team needs to continue to expend effort over a long time period should move around to different members of the team. Multiple people familiar with an area will reinforce each other. Additionally, teaching the next person is a very effective way to get a better understanding for oneself.




Maxim #17: Software Managers must code

When one first transitions from being an individual contributor software engineer to being a manager, there is a decision to be made: whether to stop doing work as an individual contributor and focus entirely on the new role in guiding the team, or to keep doing IC work as well as management.

There are definitely incentives to focus entirely on management: one can have a much bigger impact via effective use of a team than by one’s own effort alone. When a new manager makes that choice, they get a couple of really good years. They have more time to plan, more time to strategize, and the team carries it all out.

The danger in this path comes later: one forgets how hard things really are. One forgets how long things take. The plans and strategies become less effective because they no longer reflect reality.

Software managers need to continue to do engineering work, at least a little, to stay grounded in reality.




Maxim #18: Manage without a net

Managers and Tech Leads cannot depend on escalation. We sometimes believe that the layers of management above us exist in order to handle things which the lower layers are having trouble with. In reality, those upper layers have their own goals and priorities, and they generally do not include handling things bubbling up from below.

Do not rely on Deus Ex Magisterio from above; organizations do not work that way.




Maxim #19: Goodwill can be spent. Spend wisely.

Doing good work accumulates goodwill. It is helpful to have a pile of goodwill, it tends to make interactions smoother and generally makes things easier.

Nonetheless, it is possible to spend goodwill on something important: to redirect a path, to right a wrong, etc. Sometimes spending goodwill is the right thing to do. Don’t spend it frivolously.




Maxim #20: Everyone finds their own experience most compelling

"We should do A. I did that on my last project, and it was great."

"No, we should do B. I did that on my last project, and it was great."

Comparing experiences rarely builds consensus; everyone believes their own experiences to be the most convincing. Comparing experiences really only works when there is a power imbalance, when the person advocating A or B also happens to be a Decider.

In most cases, simply being aware of this phenomenon is sufficient to avoid damaging disagreements. The team needs to find other ways to pick its path forward, such as shared experiences or quantitative measurements, not just firmly held belief.

Friday, September 15, 2017

On CPE Cost

When it comes to the cost of hardware, volume matters more than anything else. To a large extent, volume matters more than everything else put together. A cost efficient hardware design produced in low volume will be considerably more expensive than an inefficient and sloppy design produced in high volume. Plus, for a high volume product, the Contract Manufacturer will have engineering teams to help tighten the design for a moderate fee.

If your own sales volume is sufficient to get deep volume discounting, you can stop reading now (more honestly, you aren't reading this in the first place). Otherwise, if you are building a product for a new market or you are building for a niche, read on.

What does this mean? It means you should work very, very hard to use hardware which is produced in high volume. The compromises you would have to make in RAM or other capabilities to get your own custom design down to a tolerable price will cost you far more, over a service lifetime of updating the software and extending its capabilities, than the custom design saved. Using an existing, high volume design may bring other compromises, but it is a good tradeoff to make.

If you want to have your branding on the box: many commercial off the shelf (COTS) devices are available in unbranded white-box versions. It is simple to add silkscreening or design flourishes, often for just a one-time design fee and a tiny line item on the Bill of Materials.

If you want to add RAM, Flash, moderately faster CPU, etc: most of those white-box products allow customization of specs which do not require changes in the board design. RAM and Flash suppliers offer different capacities in the same pinout, and CPU vendors offer multiple speed-bins of their chips. There will be a sweet spot in the market where the industry is buying the most volume, with a reasonable standard deviation such that you can moderately increase the capability without substantially increasing the cost. The converse is also true: moderate reductions in RAM/Flash/CPU don’t substantially decrease cost and may not be a good tradeoff.

If you want to have a unique industrial design: many ODMs will customize a product for you, including a new casing. It will need to fit the existing board, and will cost a few hundred thousand dollars for design, tooling, and emissions testing, but that is still cheaper than taking it all on in-house as you get the volume pricing for the board and other components.

Corollaries:

  • Mobile ate the world. You shouldn’t shy away from using mobile chipsets, even if your product will never operate on battery. Volume drives cost down, and mobile has the volume. Also, mobile chipsets with good power management are less in need of active cooling, and fanless is a huge win for consumer products.
  • RAM does cost money, but RAM is your future proofing. Greatly reducing RAM to lower cost is usually a bad tradeoff. Raspberry Pi Zero has 512 MBytes of RAM and costs US $10. Moderate amounts of RAM do not add much cost.
  • Many modern CPUs have configurable endianness, but seriously: little endian won. I hate that it won, but it did. If you’re considering a big endian toolchain, think carefully about the life choices that led you to that dark place. You’ll be taking endianness bugs onto your own plate for no benefit.
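To make that last corollary concrete, here is a byte-order mismatch in miniature, sketched in Python with the standard struct module (the 32-bit session ID field is invented for illustration):

```python
import struct

# A device writes a 32-bit session ID in big-endian ("network") byte order...
session_id = 0x12345678
wire = struct.pack(">I", session_id)    # bytes 12 34 56 78

# ...and a little-endian-minded reader unpacks it without the byte-order flag.
(misread,) = struct.unpack("<I", wire)  # 0x78563412, silently wrong
(correct,) = struct.unpack(">I", wire)  # 0x12345678

print(hex(misread), hex(correct))
```

Nothing crashes and no error is raised; the value is simply wrong, which is exactly why endianness bugs are so easy to ship and so tedious to find.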