
Monday, January 13, 2025

Rent Then Versus Now, 32 Years Later

When I first moved to California in 1992 I rented a one-bedroom apartment in Mountain View at a complex called The Shadows. I remember paying $900 per month for the 700-750 square foot apartment with its small kitchen.

The apartment complex is still there. Rent for that apartment now starts at $3295 per month.

Wednesday, August 29, 2018

Google Software Engineering Levels and Ladders

Google (now Alphabet) hires a lot of engineers every year. There are articles out there about the interview process and how to prepare, and I definitely recommend spending time on preparation. Google interviews for software engineers mostly do not focus on the candidate's resume or prior experience, instead asking technical questions on various topics plus coding. You'll do better if you mentally refresh topics in computer science which you have not worked with recently.

This post focuses on a different area: how to evaluate an engineering job offer from Alphabet. The financial aspects will presumably be clear enough, but the career aspects of the offer may not be. This post will attempt to explain what Google's engineering career progression looks like.

There are two concepts: ladder and level. The ladder defines the role you are expected to do, like manager or engineer or salesperson, while the level is how senior you are in that role.

Like many tech companies, Google has parallel tracks for people who wish to primarily be individual contributors and for people who wish to primarily be managers. This takes the form of two ladders, Software Engineer (universally abbreviated as "SWE") and Software Engineering Manager. Google does allow people on the SWE ladder to manage reports, and allows people on the Manager ladder to make technical contributions. The difference is in how performance is evaluated. For those on the SWE ladder the expectation is that at least 50% of their time will be spent on individual engineering contributions, leaving no more than 50% for management. For those on the Manager ladder the expectation is more like 80% of the time spent on management. People on one ladder who veer too far outside its guidance will be encouraged to switch to the other, as their performance evaluations will begin to suffer.


 

Software Engineer Ladder

The levels are:

  • SWE-I (Level 2) is a software engineering intern, expected to be in the junior or senior year of a four year degree program.
  • SWE-II (Level 3) is an entry level full-time software engineer. An L3 SWE is generally someone who recently graduated with an undergraduate or Master's degree, or equivalent education.
  • SWE-III (Level 4) is someone with several years of experience after graduation, or someone who has just finished a PhD in a technical field.
  • Senior Software Engineer (Level 5) is the level where a software engineer is expected to be primarily autonomous: capable of being given tasks without excessive detail, and being able to figure out what to do and then do it. A software engineer advances to L5 primarily by demonstrating impact on tasks of sufficient difficulty. When hiring externally, six to ten years of experience is generally expected.
  • Staff Software Engineer (Level 6) is the level where leadership increasingly becomes the primary criterion by which performance is judged. Many, though by no means all, SWEs begin managing a team of engineers by this point in their careers. When hiring externally, ten or more years of experience are generally expected.
  • Senior Staff Software Engineer (Level 7) is essentially L6 with larger expectations. Guidance for years of experience begins to break down at this level, as most candidates with ten or more years experience will be hired at Level 6 unless there is a strong reason to offer a higher level. Involvement of the hiring manager or strong pushback by the candidate can sometimes push the offer to Level 7.
  • Principal Software Engineer (Level 8) is the first level which is considered an executive of the Alphabet corporation for the purposes of remuneration and corporate governance. Principal Software Engineers drive technical strategy in relatively large product areas. SWEs at level 8 or above are relatively rare: the equivalent level on the manager ladder will routinely have five or more times as many people as on the SWE ladder. By this level of seniority, most people are focussed on management and leadership.
  • Distinguished Software Engineer (Level 9) drives technical strategy in efforts spanning a large technical area.
  • Google Fellow (Level 10) is the same level as a Vice President, expected to drive technical strategy and investment in crucial areas.
  • Google Senior Fellow (Level 11) is for people like Jeff Dean and Sanjay Ghemawat.

Most external hiring for software engineers is for L4 through L6, with L7 also possible though less common. Hiring externally directly to L8 and L9 does happen, but is quite rare and demands the direct sponsorship of a high-level executive like a Senior Vice President of a Google Product Area or the CEO of an Alphabet company. For example, James Gosling and David Patterson both joined the company as L9 Distinguished Engineers.

Also notable is that the external hiring process and the internal promotion process are entirely separate, and at this point have diverged substantially in their calibration. It is fair to say that Alphabet substantially undervalues experience outside the company, or perhaps overvalues experience within it. Someone with ten years of experience would be hired externally at L5 or L6, while ten years within the company can take someone to L7 or L8.


 

Software Engineering Manager Ladder

The levels are:

  • Manager, Software Engineering I (Level 5) is the first level on the manager ladder. It is expected that people will have a few years of experience in the field before they begin managing a team, and therefore the Manager ladder starts at Level 5. A Manager I will typically lead a small team of engineers; five to ten is common.
  • Manager, Software Engineering II (Level 6) is typically a manager of a team of ten to twenty, sometimes a mixture of direct reports and managing other managers. When hiring externally, 10+ years of experience is expected.
  • Manager, Software Engineering III (Level 7) begins the transition to being primarily a manager of managers. Teams are larger than at L6, typically twenty to forty.
  • Director (Level 8) is the first level which is considered an executive of the Alphabet corporation for the purposes of remuneration and corporate governance. Directors are mostly managers of managers, and typically lead organizations of forty up to several hundred people.
  • Senior Director (Level 9) is basically a secret level at Google: all of the internal tools will show only "Director," and by tradition promotions to Senior Director are not publicly announced. Senior Directors may lead slightly larger organizations than L8 Directors, though mostly it provides a way to have a larger gap between Director and VP while still allowing career progression.
  • Vice President (Level 10) typically leads organizations of hundreds to thousands of people. Their direct reports will typically be Directors and will be second to third level managers themselves.
  • Vice President II (Level 11), like Senior Director, is shown only as "VP" in internal tools and provides a way to maintain a larger gap between VP and SVP while still allowing managers to advance in their careers.
  • There are executive levels beyond Level 11, notably Senior Vice Presidents of Google divisions and CEOs of other Alphabet companies. This blog post is not a good guide to hiring for those levels, if you happen to be such a candidate. Sorry.

When hiring managers externally, L5 through Director is most common. Above Director is rare and generally happens only with the sponsorship of a high-level executive. However, where SWE hiring essentially tops out at L9, manager hires can come in at almost any level given sufficient sponsorship. Alphabet hires CEOs for its affiliated companies (John Krafcik, Andrew Conrad) and Google SVPs (Craig Barratt, Diane Greene) externally.


 

Other ladders equivalent to SWE

There is one other software engineering role at Alphabet which is parallel to the SWE/Software Manager ladders: Site Reliability Engineer, or SRE. The individual contributor ladder is called SRE-SWE — for historical reasons, as there used to be an SRE-System Administration ladder which is no longer hired into. There is also an SRE Manager ladder. The levels on SRE-SWE and SRE Manager roughly correspond in responsibilities and years of experience to the SWE and Software Manager ladders described above, though the nature of the work differs.

SRE is equivalent to SWE in that at any time an SRE can choose to relinquish the SRE duties and transfer to the SWE ladder, and an SRE Manager can switch to the Software Manager ladder. Those originally hired as SREs can generally switch back later if they choose. Engineers hired as SWEs who wish to transfer to SRE require a bit more process, often serving a rotation as an SRE via an internal training program.


 

Other ladders NOT equivalent to SWE

SETI, for Software Engineer in Tools and Infrastructure, is another engineering ladder within Google. Though recruiters will claim that it is just like being a SWE, transfers from SETI to SWE require interviews, acceptance by a hiring committee, and approval of the SVP who owns the SWE ladder. Though often successful, transfers from SETI to SWE are not automatic and do get rejected at both stages of the approval process. As such, recruiter claims that it is just like being a SWE are not accurate; the recruiter just has an SETI role to fill.

Only accept an SETI role if automated testing and continuous software improvement are genuinely your passions. Projects listing SETI openings will be less numerous than those listing SWE openings, though they will often be more focussed on automation and quality improvement. In many cases internal transfers to projects which list a SWE opening will accept an SETI applicant, but not always. Being on the SETI ladder will therefore be slightly limiting in the choice of projects for internal mobility.

There are other ladders which also involve software development but are even further removed from the SWE ladder, notably Technical Solutions Engineer (TSE) and Web Solutions Engineer (WSE). As with SETI, transfers to the SWE ladder require interviews and approvals. Recruiter claims that TSE or WSE are "just like being a SWE" are not accurate, as people on these ladders cannot internally transfer to projects which have a SWE opening. They can only transfer to TSE/WSE openings, which limit the choice of projects.

Wednesday, August 1, 2018

Career & Interviewing Help

Something I find rewarding is helping others in their careers. I am quite happy to conduct practice interviews for embedded software engineer or manager roles, answer questions about engineering at Google or in general, advise on career planning, etc.

I keep a bookable calendar with two timeslots per week. I am in the Pacific timezone, and can set up special times more convenient for people in timezones far from my own. If the calendar doesn't work for you, you can contact me at denny@geekhold.com to make special arrangements.

Anyone is welcome; you don't need an intro or to know me in person. The sessions are conducted via Google Hangout or by phone. My only request is that you pay it forward: we all have opportunities to help others. Every time we do so, we make the world a slightly better place.

Sunday, July 29, 2018

Seeking Career in Climate Change Amelioration

I have been at Google (now Alphabet) for almost 9 years. All things come to an end, and the end of my time at Google is approaching. I expect to wrap up my current work and exit the company on August 28th, 2018.

I have a strong desire to work on ameliorating climate change. I’d like to do this via working on energy production, or carbon recapture from the environment, or other ideas related to climate and cleantech.

I am seeking an engineering leadership role. At a BigCo, this would be Principal Software Engineer, Director, etc depending on the company’s level structure. At a smaller company I’d be looking for the opportunity to grow into such a role.

I have prepared a resume and a pitch deck focusing on climate change roles, and my LinkedIn profile is public.

I’d welcome referrals to companies in these areas, or pointers to opportunities which I can follow up on. I can be reached at denny@geekhold.com.


 

An excerpt from the resume:

Primary skills


 

Role/Company Must Haves

  • Blameless postmortem culture
  • Emphasis on Inclusion, and care about personnel and their development
  • Belief that engineering management should retain reasonable technical proficiency

Sunday, September 24, 2017

There is No Feminist Cabal

From the 23-Sep-2017 New York Times:

One of those who said there had been a change is James Altizer, an engineer at the chip maker Nvidia. Mr. Altizer, 52, said he had realized a few years ago that feminists in Silicon Valley had formed a cabal whose goal was to subjugate men. At the time, he said, he was one of the few with that view.

Now Mr. Altizer said he was less alone. "There’s quite a few people going through that in Silicon Valley right now," he said. "It’s exploding. It’s mostly young men, younger than me."

I want to share some experiences, as another white male in Tech for a similar number of years as Mr. Altizer.

A while ago I made a conscious effort to follow more women in Tech on Twitter, to deliberately maintain a ratio of ~50% in those I follow. I wanted to try for more perspective than that provided by my own vantage point in the industry, where the gender ratio is definitely not 50%. It has been illuminating... and often painful. Intermixed with happy and proud events in life and work is the constant level of sexism which women experience. Sometimes it is blatant and vile: intimidation, physical threats. More often it is a grinding, ever present disrespect from men. It is so commonplace that it becomes completely expected, often mentioned in an offhand way. This doesn’t mean it is a minor thing, it means that it never stops, isn’t possible to avoid, and ceases to be surprising.

You likely won’t hear this in person, from women you work with. That doesn’t mean women you work with are experiencing something different. It doesn’t mean that their career and work environment are free of sexism and discrimination. It means that talking about it in person is asking them to relive those events, sometimes extraordinarily painful events. It means it is vastly more difficult to relate horrible experiences in person, in conversation. It is understandable to not want to talk about it.


Yet one thing I haven’t seen, not even a hint of, is the existence of a powerful group of women who are organizing to oppress men. I’ve seen no evidence of any kind of backlash against men in Tech. Jokes about a Cabal started circulating after the NYT story was published. This was irony, not confirmation.

We’re hearing more about sexism in Tech, far more than we did even a year ago. I think, I hope, that is because we are in the early stages of the extinction burst. When a behavior which was formerly rewarded no longer is, that behavior will begin to decline... except for a final gasp, a final burst, in trying to turn back the clock. The process of acknowledging the disparities in Tech has been ongoing for many years, slowly. It has reached a point where the industry is starting to respond, if only a little. That the response may grow stronger will feel like a threat, like a backlash against males. It really isn't. It is about disparity, and doing something to rectify that long-present disparity.




I’m posting this because it is unfair to expect people in disadvantaged groups to carry the entire burden of correcting the disadvantage. In Tech the advantaged group is males, and as I happen to be a male in Tech, that means me. I have not been nearly enough a part of the solution. That needs to change.

Males in Tech have almost certainly witnessed aggressions: a woman being spoken over, or not being invited to a meeting she should be, or not receiving sufficient credit for her work. One thing learned from following women in tech is that no matter how much we think we understand, the aggressions they go through happen orders of magnitude more frequently than we think. For every occurrence we know of there are ten more, a hundred more, a thousand more, which we don’t see and don't grasp the frequency of because we are not female.

We, males in Tech, need to speak out. We need to speak out frequently and firmly. So I am. My voice isn’t powerful, but power can be achieved in numbers too. I’m adding my voice to the multitude. You should too; not just as a blog post, speak out when you see the grinding aggression happening.

Thursday, September 21, 2017

Software Engineering Maxims which May or May Not Be True

This is a series of Software Engineering Maxims Which May or May Not Be True, developed over the last few years of working at Google. Your mileage may vary. Use only as directed. Past performance is not a predictor of future results. Etc.




Maxim #1: Small teams are bigger than large teams

In my mind, the ideal size for a software team is seven engineers. It does not have to be exactly seven: six is fine, eight is fine, but the further the team gets from the ideal the harder it is to get things done. Three people isn’t enough and limits impact; fourteen is too many to effectively coordinate and communicate amongst.

Organizing larger projects becomes an exercise in modularizing the system so that teams of about seven people can own the delivery of a well-defined piece of the overall product. The largest parts of the system end up with clusters of teams working on different aspects of it.




Maxim #2: Enthusiasm improves productivity.

By far the best way to improve engineering productivity is to have people working on something which they are genuinely enthused about. It is beneficial in many ways:

  • the quality of the product will benefit from the care and attention
  • people don’t let themselves get blocked by something else when they are eager to see the end result
  • they’ll come up with ways to make the product even better, by way of their own resourcefulness
  • people are simply happier, which has numerous benefits in morale and working environment.

There are usually way more tasks on the project wish list than can realistically be delivered. Some of those tasks will be more important than others, but it is rarely the case that there is a strict priority order in the task list. More often we have broad buckets of tasks:

  • crucial, can’t have the product without it
  • nice to have
  • won’t do yet: too much crazy, or get to it eventually, or something

The crucial tasks have to be done, even the ones which no-one particularly wants to do.

In my mind, when selecting from the (lengthy) list of nice-to-have tasks, the enthusiasm of the engineering team should be a factor in the choices. The team will deliver more if they can work on things they are genuinely interested in doing.




Maxim #3: Project plans should have top-down and bottom-up elements

It is possible for a team to work entirely from a task list, where Product Management and the Leadership add stuff and the team marks things off as they are completed. This is not a great way to work, but it is possible.

It is better if the team spends some fraction of its time on tasks which germinated from within the team - not merely 20% time; a portion of regular work should be on tasks which the team itself came up with.

  • The team tends to generate immediately practical ideas, things which build upon the product as it exists today and provide useful extensions.
  • It is good for morale.
  • It is good for careers. Showing initiative and technical leadership is good for a software engineer.



Maxim #4: Bricking the fleet is bad for business

Activities with a risk of irreparable consequences deserve more care. This sounds obvious, like something no-one would ever disagree with, but in day-to-day engineering work those tasks won’t look like something which requires that extra level of care. Instead they will look like something which has been running for years and never failed, something which fades into the background and can be safely ignored because it is so reliable.

Calls to add this risk will not be phrased as "be cavalier about something which can ruin us." They will be phrased as increasing velocity, or lowering cost, or not being stuck doing things the old way - all of which might be true; it just needs more care and attention when changing it.




Maxim #5: There is an ideal rate of breakage: no more, no less

Breaking things too often is a sign of trying to do too much too quickly, and either excessively dividing attention or not allowing time for proper care to be taken.

Not breaking things often enough is a sign of the opposite problem: not pushing hard enough.

I’m sure it is theoretically possible for a team to move at an absolutely optimal speed such that they maximize their results without ever breaking anything, but I’ve no idea how to achieve it. The next best thing is to strive for just the right amount of breakage: not too much, not too little.




Maxim #6: It’s a marathon, not a sprint

"Launch and iterate" is a catchy phrase, but often turns into excuses to launch something sub-par and then never getting around to improving it.

Yet there is a real advantage to being in a market for the long term, launching a product and improving it. Customer happiness is earned over time, not all at once with a big launch.

  • This means structuring teams for sustained effort, not big product pushes.
  • It means triaging bugs: not everything will get fixed right away, but should at least be looked at to assess relative priority.
  • It means really thinking about how to support the product in the field.
  • It means not running projects in a way which burn people out.



Maxim #7: The service is the product

The product is not the code. The product is not the specific feature we launched last week, nor the big thing we’re working on launching next week.

The product is the Service. The Product which customers care about is that they can get on the Internet and do what they need to do, that they can turn on the TV and have it work, that they can make phone calls, whatever it is they set out to do.




Maxim #8: Money is not the only motivator

A monetary bonus is one tool available for managers to reward good work. It is not the only tool, and is not necessarily the best tool for all situations.

For example, to encourage SWEs to write automated system tests we created the Yakthulhu of Testing. It is a tiny Cthulhu made from the hair of shaved yaks (*). A Yakthulhu can be obtained by checking in one’s first automated test to run against the product.

(*) It really is made from yak hair. Yak hair yarn is a thing which one can buy. Disappointingly though, they do not shave the yaks. They comb the yaks.




Maxim #9: Evolve systems as a series of incremental changes

There is substantial value in code which has seen action in the field. It contains a series of small and large decisions, fixes, and responses which made the system better over time. Generally these decisions are not recorded as a list of lessons learned to be applied to a rewrite or to the next system.

Whenever possible, systems should evolve as a series of incremental changes that take them from where they are to where we want them to be. Doing this incrementally has several advantages:

  • benefits are delivered to customers much earlier, as the earliest pieces to be completed don’t have to wait for the later pieces before deployment.
  • there is no stagnant period in the field after work on the old system is stopped but before the new system is ready.
  • once the system is close enough to where we want it to be that other stuff moves higher on the list of priorities, we can stop. We don’t have to push on to finish rewriting all of it.



Maxim #10: Risk is multiplicative

There is a school of thought that when there are multiple large projects going on, and there is some relation between them, they should be tied together and made dependent upon each other. The arguments for doing so are often:

  • "We’re going to pay careful attention to those projects, making them one project means we’ll be able to track them more effectively."
  • "There was going to be duplication of effort, we can share implementation of the common systems."
  • "We can better manage the team if we have more people available to be redirected to the pieces which need more help."

The trouble with this is that it glosses over the fundamental decision being made: nothing can ship until all of it ships. Combining risks makes a single, bigger risk out of the multiple smaller risks. If each of three projects would have shipped on schedule with 80% probability on its own, the combined project ships on schedule with roughly 0.8 × 0.8 × 0.8 ≈ 51% probability.




Maxim #11: Don’t shoot the monitoring

There is a peculiar dynamic when systems contain a mix of modules with very good monitoring along with modules with very poor monitoring; the modules with good monitoring report all of the errors.

The peculiarity becomes damaging if the result is to have all of the bugs filed against the components with good monitoring. It makes it look like those modules are full of bugs, when the reality is likely the opposite.




Maxim #12: No postmortem prior to mortem

There are going to be emergencies. It happens, despite our best efforts to anticipate risks. When it happens, we go into damage control mode to resolve it.

People not involved in handling the emergency will begin to ask about a postmortem almost immediately, even before the problem is resolved. It is important not to begin writing the postmortem until the problem has been mitigated. Starting it earlier turns a unified crisis response into a hotbed of fingerpointing and intrigue. Even in a culture of blameless postmortems, it is difficult to avoid the harmful effects of hints of blame while that blameless postmortem is being written.

It is fine, even crucial, to save information for later drafting of the postmortem. IRC logs, lists of bugs/CLs/etc, will all be needed eventually. Just don’t start a draft of a postmortem while still antemortem.




Maxim #13: Cadence trumps mechanism

We tend to focus a lot on mechanisms in software engineering as a way to increase velocity or productivity. We reduce the friction of releasing software, or we automate something, and we expect that this will result in more of the activity which we want to optimize.

Inertia is a powerful thing. A product at rest will tend to stay at rest, a product in motion will tend to stay in motion. The best way to release a bunch of software is to release a bunch of software, by setting a cadence and sticking to it. People get used to a cadence and it becomes self-reinforcing. Making something easier may or may not result in better velocity, making it more regular almost always does.




Maxim #14: Churn wastes more than time

Project plans change. It happens.

When plans change too often, or when a crucial plan is suddenly cancelled and destaffed, we burn more than just the time which was spent on the old plan. We burn confidence in the next plan. People don’t commit as readily and don’t put their best effort into it until they’re pretty sure the new plan will stick.

In the worst case, this becomes self-reinforcing. New plans fail because of the lack of effort engendered by the failure of previous plans.




Maxim #15: Sometimes the hard way is the right way

For any given task, there is often some person somewhere who has done it before and knows exactly what to do and how to do it. For things which are likely to be done once in a project and never repeated, relying on that person (either to do it or to provide exactly what to do step by step) can significantly speed things up.

For things which are likely to be repeated, or refined, or iterated upon, it can be better to not rely on that one expert. Learning by doing results in a much deeper understanding than just following directions. For an area which is core to the product or will be extended upon repeatedly, the deeper understanding is valuable, and is worth acquiring even if it takes longer.




Maxim #16: Spreading knowledge makes it thicker

Pigeonholing is rampant in software engineering: engineers who have become experts in a particular area always end up being assigned tasks in that area.

There are occasions where that is optimal, where becoming a subject matter expert takes substantial time and effort, but these situations are rare. In most cases it is not the expense of becoming an expert that keeps an engineer doing similar work over and over, it is just complacency.

Areas of the product where the team needs to continue to expend effort over a long time period should move around to different members of the team. Multiple people familiar with an area will reinforce each other. Additionally, teaching the next person is a very effective way to get a better understanding for oneself.




Maxim #17: Software Managers must code

When one first transitions from being an individual contributor software engineer to being a manager, there is a decision to be made: whether to stop doing work as an individual contributor and focus entirely on the new role in guiding the team, or to keep doing IC work as well as management.

There are definitely incentives to focus entirely on management: one can have a much bigger impact via effective use of a team than by one’s own effort alone. When a new manager makes that choice, they get a couple of really good years. They have more time to plan, more time to strategize, and the team carries it all out.

The danger in this path comes later: one forgets how hard things really are. One forgets how long things take. The plans and strategies become less effective because they no longer reflect reality.

Software managers need to continue doing engineering work, at least a little, to stay grounded in reality.




Maxim #18: Manage without a net

Managers and Tech Leads cannot depend on escalation. We sometimes believe that the layers of management above us exist in order to handle things which the lower layers are having trouble with. In reality, those upper layers have their own goals and priorities, and they generally do not include handling things bubbling up from below.

Do not rely on Deus Ex Magisterio from above; organizations do not work that way.




Maxim #19: Goodwill can be spent. Spend wisely.

Doing good work accumulates goodwill. It is helpful to have a pile of goodwill, it tends to make interactions smoother and generally makes things easier.

Nonetheless, it is possible to spend goodwill on something important: to redirect a path, to right a wrong, etc. Sometimes spending goodwill is the right thing to do. Don’t spend it frivolously.




Maxim #20: Everyone finds their own experience most compelling

"We should do A. I did that on my last project, and it was great."

"No, we should do B. I did that on my last project, and it was great."

Comparing experiences rarely builds consensus, everyone believes their own experiences to be the most convincing. Comparing experiences really only works when there is a power imbalance, when the person advocating A or B also happens to be a Decider.

In most cases, simply being aware of this phenomenon is sufficient to avoid damaging disagreements. The team needs to find other ways to pick its path forward, such as shared experiences or quantitative measurements, not just firmly held belief.

Friday, December 16, 2011

The Ada Initiative 2012

Earlier this year I donated seed funding to the Ada Initiative, a non-profit organization dedicated to increasing participation of women in open technology and culture. One of their early efforts was development of an example anti-harassment policy for conference organizers, attempting to counter a number of high profile incidents of sexual harassment at events. Lacking any sort of plan for what to do after such an incident, conference organizers often did not respond effectively. This creates an incredibly hostile environment, and makes it even harder for women in technology to advance their careers through networking. Developing a coherent, written policy is a first step toward solving the problem.

The Ada Initiative is now raising funds for 2012 activities, including:

  • Ada’s Advice: a guide to resources for helping women in open tech/culture
  • Ada’s Careers: a career development community for women in open tech/culture
  • First Patch Week: help women write and submit a patch in a week
  • AdaCamp and AdaCon: (un)conferences for women in open tech/culture
  • Women in Open Source Survey: annual survey of women in open source

 

For me personally

There are many barriers discouraging women from participating in the technology field. Donating to the Ada Initiative is one thing I'm doing to try to change that. I'm posting this to ask other people to join me in supporting this effort.

My daughter is 6. The status quo is unacceptable. Time is short.


My daughter wearing Google hat

Wednesday, October 12, 2011

Dennis Ritchie, 1941-2011

Kernighan and Ritchie _The C Programming Language_

K&R C is the finest programming language book ever published. Its terseness is a hallmark of the work of Dennis Ritchie; it says exactly what needs to be said, and nothing more.

Rest in Peace, Dennis Ritchie.

The first generation of computer pioneers is already gone. We're beginning to lose the second generation.


Friday, October 7, 2011

Finding Ada, 2011

Ada Lovelace Day aims to raise the profile of women in science, technology, engineering and maths by encouraging people around the world to talk about the women whose work they admire. This international day of celebration helps people learn about the achievements of women in STEM, inspiring others and creating new role models for young and old alike.

For Ada Lovelace Day 2010 I analyzed a patent for a frequency hopping control system for guided torpedoes, granted to Hedy Lamarr and George Antheil. For Ada Lovelace Day this year I want to share a story from early in my career.

After graduation I worked on ASICs for a few years, mostly on Asynchronous Transfer Mode NICs for Sun workstations. In the 1990s Sun made large investments in ATM: designed its own Segmentation and Reassembly ASICs, wrote a q.2931 signaling stack, adapted NetSNMP as an ILMI stack, wrote Lan Emulation and MPOA implementations, etc.

Yet ATM wasn't a great fit for carrying data traffic. Its overhead for cell headers was very high, it had an unnatural fondness for Sonet as its physical layer, and it required a signaling protocol far more complex than the simple ARP protocol of Ethernet.

Cell loss == packet loss.

Its most pernicious problem for data networking was in dealing with congestion. There was no mechanism for flow control, because ATM evolved out of a circuit switched world with predictable traffic patterns. Congestive problems come when you try to switch packets and deal with bursty traffic. In an ATM network the loss of a single cell would render the entire packet unusable, but the network would be further congested carrying the remaining cells of that packet's corpse.

Allyn Romanow at Sun Microsystems and Sally Floyd from the Lawrence Berkeley Labs conducted a series of simulations, ultimately resulting in a paper on how to deal with congestion. If a cell had to be dropped, drop the rest of the cells in that packet. Furthermore, deliberately dropping packets early as buffering approached capacity was even better, and brought ATM links up to the same efficiency for TCP transport as native packet links. Allyn was very generous with her time in explaining the issues and how to solve them, both in ATM congestion control and in a number of other aspects of making a network stable.
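Those recommendations became known as Partial Packet Discard (once a cell of a packet has been dropped, drop the packet's remaining cells) and Early Packet Discard (as the buffer nears capacity, refuse entire new packets rather than clipping ones already in flight). Here is a minimal sketch of how a switch output queue might apply the two policies to a single virtual circuit; the names, thresholds, and structure are invented for illustration, not taken from any real switch.

/* A sketch of the two discard policies, applied to one ATM virtual
 * circuit feeding one output queue.  Zero-initialize the state; dequeue
 * on transmission is omitted.  AAL5 marks the final cell of each packet. */
#include <stdbool.h>
#include <stddef.h>

#define QUEUE_CAPACITY 1024   /* cells the output buffer can hold       */
#define EPD_THRESHOLD   768   /* start refusing whole new packets here  */

struct vc_state {
    size_t cells_queued;      /* current output queue occupancy         */
    bool   mid_packet;        /* some cells of this packet already seen */
    bool   discarding_packet; /* drop the rest of the current packet    */
};

/* Returns true if the cell is enqueued, false if it is dropped. */
bool accept_cell(struct vc_state *vc, bool last_cell_of_packet)
{
    bool accept = false;

    if (vc->discarding_packet) {
        /* Partial Packet Discard: an earlier cell of this packet was
         * already lost, so the rest would only congest the link.  (Real
         * implementations usually still forward the final cell so the
         * reassembler can find the packet boundary.) */
    } else if (!vc->mid_packet && vc->cells_queued >= EPD_THRESHOLD) {
        /* Early Packet Discard: the buffer is nearly full, so refuse
         * this entire new packet before any of it is queued. */
        vc->discarding_packet = true;
    } else if (vc->cells_queued >= QUEUE_CAPACITY) {
        /* Forced drop mid-packet: discard the remaining cells too. */
        vc->discarding_packet = true;
    } else {
        vc->cells_queued++;
        accept = true;
    }

    vc->mid_packet = !last_cell_of_packet;
    if (last_cell_of_packet)
        vc->discarding_packet = false;  /* next cell begins a new packet */
    return accept;
}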

ATM also had a very complex signaling stack for setting up connections, so complex that many ATM deployments simply gave up and permanently configured circuits everywhere they needed to go. PVCs only work up to a point; the network size is constrained by the number of available circuits. Renee Danson Sommerfeld took on the task of writing a q.2931 signaling stack for Solaris, requiring painstaking care with specifications and interoperability testing. Sun's ATM products were never reliant on PVCs to operate; they could set up switched circuits on demand and close them when no longer needed.

In this industry we tend to celebrate engineers who spend massive effort putting out fires. What I learned from Allyn, Sally, and Renee is that the truly great engineers see the fire coming, and keep it from spreading in the first place.

Update: Dan McDonald worked at Sun in the same timeframe, and posted his own recollections of working with Allyn, Sally, and Renee. As he put it on Google+, "Good choices for people, poor choice for technology." (i.e. ATM Considered Harmful).


Monday, August 15, 2011

An Awkward Segue to CPU Caching

Last week Andy Firth published The Demise of the Low Level Programmer, expressing dismay over the lack of low level systems knowledge displayed by younger engineers in the console game programming field. Andy's particular concerns deal with proper use of floating versus fixed point numbers, CPU cache behavior and branch prediction, bit manipulation, etc.

I have to admit a certain sympathy for this position. I've focussed on low level issues for much of my career. As I'm not in the games space, the specific topics I would offer differ somewhat: cache coherency with I/O, and page coloring, for example. Nonetheless, I feel a certain solidarity.

Yet I don't recall those topics being taught in school. I had classes which covered operating systems and virtual memory, but I distinctly remember being shocked at the complications the first time I encountered a system which mandated page coloring. Similarly, though I had a class on assembly programming, by the time I actually needed to work at that level I had to learn new instruction sets and many new techniques.

In my experience at least, schools never did teach such topics. This stuff is learned by doing, as part of a project or on the job. The difference now is that fewer programmers are learning it. It's not because programmers are getting worse: I interview a lot of young engineers, and their caliber is as high as I have ever experienced. It is simply that computing has grown a great deal in 20 years, there are a lot more topics available to learn, and frankly the cutting edge has moved on. Even in the gaming space which spurred Andy's original article, big chunks of the market have been completely transformed. Twenty years ago casual gaming meant the Game Boy, an environment so constrained that heroic optimization efforts were required. Now casual gaming means web-based games on social networks. The relevant skill set has changed.

I'm sure Andy Firth is aware of the changes in the industry. It's simply that we have a tendency to assume that markets where there is a lot of money being made will inevitably attract new engineers, and so there should be a steady supply of new low level programmers for consoles. Unfortunately I don't believe that is true. Markets which are making lots of money don't attract young engineers; markets which are perceived to be growing do, and other parts of the gaming market are perceived to be growing faster.


 

Page Coloring

Because I brought it up earlier, we'll conclude with a discussion of page coloring. I am not satisfied with the Wikipedia page, which seems to have paraphrased a FreeBSD writeup describing page coloring as a performance issue. In some CPUs, albeit not current mainstream CPUs, coloring isn't just a performance issue. It is essential for correct operation.


Cache Index

Least significant bits as cache line offset, next few bits as cache index

Before fetching a value from memory the CPU consults its cache. The least significant bits of the desired address are an offset into the cache line, generally 4, 5, or 6 bits for a 16/32/64 byte cache line.

The next few bits of the address are an index to select the cache line. If the cache has 1024 entries, then ten bits would be used as the index. Things get a bit more complicated here due to set associativity, which lets entries occupy several different locations to improve utilization. A two way set associative cache of 1024 entries would take 9 bits from the address and then check two possible locations. A four way set associative cache would use 8 bits. Etc.


Page tag

Least significant bits as page offset, upper bits as page tag

Separately, the CPU defines a page size for the virtual memory system. 4 and 8 Kilobytes are common. The least significant bits of the address are the offset within the page, 12 or 13 bits for 4 or 8 K respectively. The most significant bits are a page number, used by the CPU cache as a tag. The hardware fetches the tag of the selected cache lines to check against the upper bits of the desired address. If they match, it is a cache hit and no access to DRAM is needed.

To reiterate: the tag is not the remaining bits of the address above the index and offset. The bits to be used for the tag are determined by the page size, and not directly tied to the details of the CPU cache indexing.
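To make the bit slicing concrete, here is an illustrative decomposition of a 32-bit address using the example numbers above (a 64 byte line, a 1024 entry direct-mapped cache, and an 8 KB page). The constants and names are made up for the illustration, not taken from any particular CPU.

/* Illustrative only: split a 32-bit address into the cache's view of it
 * (line offset + index) and the VM system's view of it (page offset +
 * page number, which supplies the tag). */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define LINE_OFFSET_BITS  6   /* 64 byte cache line          */
#define INDEX_BITS       10   /* 1024 entries, direct mapped */
#define PAGE_OFFSET_BITS 13   /* 8 KB page                   */

int main(void)
{
    uint32_t addr = 0x12345678;  /* arbitrary example address */

    uint32_t line_offset = addr & ((1u << LINE_OFFSET_BITS) - 1);
    uint32_t cache_index = (addr >> LINE_OFFSET_BITS) & ((1u << INDEX_BITS) - 1);

    /* The tag is the page number: everything above the page offset.  Its
     * width comes from the page size, not from the cache geometry. */
    uint32_t page_offset = addr & ((1u << PAGE_OFFSET_BITS) - 1);
    uint32_t page_tag    = addr >> PAGE_OFFSET_BITS;

    printf("offset=0x%" PRIx32 " index=0x%" PRIx32
           " page_offset=0x%" PRIx32 " tag=0x%" PRIx32 "\n",
           line_offset, cache_index, page_offset, page_tag);
    return 0;
}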


Virtual versus Physical

In the initial few stages of processing the load instruction the CPU has only the virtual address of the desired memory location. It will look up the virtual address in its TLB to get the physical address, but using the virtual address to access the cache is a performance win: the cache lookup can start earlier in the CPU pipeline. It's especially advantageous to use the virtual address for the cache index, as that processing happens earlier.

The tag is almost always taken from the physical address. Virtual tagging complicates shared memory across processes: the same physical page would have to be mapped at the same virtual address in all processes. That is an essentially impossible requirement to put on a VM system. Tag comparison happens later in the CPU pipeline, when the physical address will likely be available anyway, so it is (almost) universally taken from the physical address.

This is where page coloring comes into the picture.


 

Virtually Indexed, Physically Tagged

From everything described above, the size of the page tag is independent of the size of the cache index and offset. They are separate decisions, and frankly the page size is generally mandated. It is kept the same for all CPUs in a given architectural family even as they vary their cache implementations.

Consider then, the impact of a series of design choices:

  • 32 bit CPU architecture
  • 64 byte cache line: 6 bits of cache line offset
  • 8K page size: 19 bits of page tag, 13 bits of page offset
  • 512 entries in the L1 cache, direct mapped. 9 bits of cache index.
  • virtual indexing, for a shorter CPU pipeline. Physical tagging.
  • write back
Virtually indexed, physically tagged, with 2 bits of page color

What does this mean? It means the lower 15 bits of the virtual address and the upper 19 bits of the physical address are referenced while looking up items in the cache. Two of the bits overlap between the virtual and physical addresses. Those two bits are the page color. For proper operation, this CPU requires that all processes which map in a particular page do so at the same color. Though in theory the page could be any color so long as all mappings are the same, in practice the virtual color bits are set the same as the underlying physical page.
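A sketch of what that constraint looks like, using the made-up bit widths from the list above. The helper names are hypothetical; a real kernel would enforce the rule in its page allocator and mapping code rather than with an assert.

/* 6 offset bits + 9 index bits = 15 bits of virtual address feed the
 * cache, but only 13 bits are page offset, so 2 bits of the index fall
 * inside the page number.  Those are the color bits, and every mapping
 * of a page must agree on them. */
#include <assert.h>
#include <stdint.h>

#define LINE_OFFSET_BITS   6
#define INDEX_BITS         9   /* 512 entry direct-mapped L1 */
#define PAGE_OFFSET_BITS  13   /* 8 KB pages                 */
#define COLOR_BITS (LINE_OFFSET_BITS + INDEX_BITS - PAGE_OFFSET_BITS)  /* 2 */

static uint32_t page_color(uint32_t addr)
{
    return (addr >> PAGE_OFFSET_BITS) & ((1u << COLOR_BITS) - 1);
}

/* Hypothetical check when mapping physical page 'pa' at virtual address
 * 'va': mismatched colors would let one process's write land in a
 * different cache line than another process's read of the same page. */
static void check_mapping(uint32_t va, uint32_t pa)
{
    assert(page_color(va) == page_color(pa));
}

int main(void)
{
    /* A physical page of color 1 mapped at two virtual addresses which
     * also have color 1: fine.  Mapping it at a color 0 virtual address
     * would trip the assert. */
    check_mapping(0x00002000u, 0x4000a000u);
    check_mapping(0x0000a000u, 0x4000a000u);
    return 0;
}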

The impact of not enforcing page coloring is dire. A write in one process will be stored in one cache line, while a read from another process will access a different cache line, missing the dirty data sitting in the first and seeing stale memory instead.

Page coloring like this places quite a burden on the VM system, and one which would be difficult to retrofit into an existing VM implementation. OS developers would push back against new CPUs which proposed to require coloring, and you used to see CPU designs making fairly odd tradeoffs in their L1 cache because of it. HP PA-RISC used a very small (but extremely fast) L1 cache; I think they did this to use direct mapped virtual indexing without needing page coloring. There were CPUs with really insane levels of set associativity in the L1 cache, 8 way or even 16 way. This reduced the number of index bits to the point where a virtual index wouldn't require coloring: a 1024 entry cache at 16 way associativity has only 64 sets, so its 6 index bits plus 6 bits of line offset fit entirely within a 13 bit page offset.


Wednesday, March 30, 2011

Non-blocking Programmers

It is often stated that the productivity of individual programmers varies by an order of magnitude, and there is significant research supporting the 10x claim. More subjectively, I suspect every working developer quickly realizes that the productivity of their peers varies tremendously. Having been at this for a while, I suspect there is no single factor or even small number of factors which cause this variance. Instead there is a whole collection of practices, all of which add up to determine an individual developer's productivity.

To make things more interesting, many of the practices conflict in some way. We'll discuss three of them today.


 
1. Non-Blocking Operation

We don't just write code right up until the moment the product ships. Numerous steps depend on other people: code reviews, dependent modules or APIs, a test cycle, etc. When faced with a delay to wait for someone else, a developer can choose several possible responses.

blocking: while waiting for the response do something other than produce code for the project. Codelabs, reading related documentation, and browsing the programming reddit are all examples.

non-blocking: switch to a different coding task in another workspace.

Versatility and wide ranging knowledge are a definite positive (see point 2), and people who spend time satisfying intellectual curiosity grow into better developers. The blocking developer spends time pursuing those interests. We'll ignore the less positive variations on this.

The non-blocking programmer makes progress on a different development task. This can of course be taken too far: having a dozen workspaces and context switching to every one of them each day isn't productivity, it's ADHD.

One could also label these as single-tasking versus multi-tasking, but that analogy implies more than I intend.

Sometimes developers maximize their own productivity by immediately interrupting the person they are waiting for, generally with the lead-in of "I just sent you email." This impacts point 3, the amount of time developers can spend in a productive zone, and is one of the conflicts between practices which impact overall productivity.


 
2. Versatile Techniques

Here I'm obliged to make reference to a craftsman's toolbox, with hammers and nails and planers and other woodworking tools I haven't the slightest idea what to do with. The essential point is valid without understanding the specifics of carpentry: a developer with wide ranging expertise can bring more creative solutions to bear on a problem. For example,

Sparse graph
  • Realizing that complex inputs would be better handled by a parser than an increasingly creaky collection of string processing and regex.
  • Recognizing that a collection of data would be better represented as a graph, or processed using a declarative language.
  • Recalling having read about just the right library to solve a specific problem.

Developers with a curiosity about their craft grow into better developers. This takes time away from the immediate work of pounding out code (point 1), but makes one more effective over the long run.


 
3. Typing Speed

This sounds too trivial to list, but the ability to type properly does make a difference. Steve Yegge dedicated an entire post to the topic. I concur with his assessment that the ability to touch type matters, far more than most developers think it should. I'll pay further homage to Yegge with a really long explanation as to why.

Filthy old keyboard

Developers work N hours per day, where N varies considerably, but the entire time is not spent writing code. We have interruptions, from meetings to questions from colleagues to physical necessities. The amount of time spent actually developing can be a small slice of one's day. More pressingly, we don't just sit down and immediately start pounding out program statements. There is a warm up period, to recall to mind the details of what is being worked on. Reviewing notes, re-reading code produced in the previous session, and so forth get one back into the mode of being productive. Interruptions which disrupt this productive mode have far greater impact than the few minutes it takes to answer the question.

Peopleware, the classic book on the productivity of programmers, refers to this focussed state as "flow" and devotes sections of the book to suggestions on how to maximize it. As the book was published in 1987, some of the suggestions now seem quaint like installing voice mail and allowing developers to turn off the telephone ringer. The essential point remains though: a block of time is far more useful than the same amount of time broken up by interruptions, and developers do well to maximize these blocks of time.

Once in the zone, thoughts race ahead to the next several steps in what needs to be done. Ability to type quickly and accurately maximizes the effectiveness of time spent in the flow of programming. Hunting and pecking means you only capture a fraction of what could have been done.

There are other factors relating to flow which can be optimized. For example one can block off chunks of time, or work at odd hours when interruptions are minimal. Yet control of the calendar isn't entirely up to the individual, while learning to type most definitely is.


 
Conclusion

The most effective, productive programmer I know talks very fast and types even faster. He has worked in a number of different problem spaces in his career, and stays current by reading Communications of the ACM and other publications. He handles interruptions well, getting back into the flow of programming very quickly. He also swears profusely, though I suspect that isn't really a productivity factor.

Other highly effective programmers have different habits. The most important thing is to be aware of how to maximize your own effectiveness, rather than look for a single solution or adopt someone else's techniques wholesale. Especially not the swearing.

Thursday, March 3, 2011

Home Made Cable Spaghetti

Rack of equipment entangled in a messy mass of cables

I wrote some thoughts for a colleague about home installation of a rack for computer equipment. Much of it is generally applicable for anyone considering such a thing, presented here for your edification and bemusement.

General
  • buy extra rack screws, maybe 20. The cheap ones strip easily, and nothing sucks harder than getting a new system in only to discover you've run out of rack screws.
  • Get an electric screwdriver if you don't already have one.
Physical Installation
  • Bolt the rack to the floor and the ceiling. Otherwise an earthquake which doesn't otherwise damage the house can rip the rack out of the floor. The rack itself will be shorter than ceiling height; you get an extended brace to bolt to the ceiling. You want to bolt it to joists, not just drywall.
  • There are also racks made to bolt to the wall rather than free standing floor to ceiling. These tend not to be as deep front to back, so it impacts the gear you can put in it. Also they can be an airflow problem if the gear vents front to back.
  • Perhaps obviously, when filling the rack start from the bottom and put the heaviest gear at the very bottom. A top-heavy rack is a disaster.
  • The industry never settled on whether airflow is front to back or side-to-side. You'll find equipment with both layouts. With one rack it doesn't particularly matter, and you can mix them. With multiple racks it matters a lot.
  • The industry also never settled on whether rack ears go at the very front of the equipment or the midpoint. Front is most common, to accommodate boxes of differing depths. Lots of gear has threaded screwholes at front and midpoint, just be consistent.
  • 19 inch racks are by far the most common, but be aware that 17 inch and 23 inch both exist. They will be well-labelled in catalogs as they are not common. If you buy second hand, bring a tape measure.
Cable Management
  • Spend as much time thinking about cable management as you do about how to rack the machines. Otherwise you end up with a beautiful rack covered in cable spaghetti.
  • It's customary to put the network switch at the top of the rack, because gravity makes cable management easier. However it's not essential, and you can put it anywhere you like.
  • There are cable trays with removable fronts made to bolt vertically to the side of the rack or between adjacent racks. Highly recommended.
  • Label both ends of each cable. Label them in a way which will still make sense in a few years when you replace these machines and have forgotten everything about the construction.
  • Avoid labeling cables according to their destination within the rack. That changes over time, relabeling cables is a pain.

Thursday, December 23, 2010

Signing Your Work

I recently had occasion to go digging around in the installers for MacOS System 7.0.1 and 7.6, extracting their excellent beep sounds to use on my phone. While schlepping around I found wonderful little gems where the developers signed their work. The 7.0.1 installer binary contains a plea for help from the Blue Meanies, shown here. The 7.6 installation tome contains a series of images, reproduced further down this page. As the best laid plans of mice and developers often go astray, the largest image is corrupted in the CD golden image, with a blue cast over the bottom third of the image. I'm sure that was disappointing.

00000000  4d 61 63 69 6e 74 6f 73  68 20 53 79 73 74 65 6d  |Macintosh System|
00000010  20 76 65 72 73 69 6f 6e  20 37 2e 30 2e 31 0d 0d  | version 7.0.1..|
00000020  0d a9 20 41 70 70 6c 65  20 43 6f 6d 70 75 74 65  |.. Apple Compute|
00000030  72 2c 20 49 6e 63 2e 20  31 39 38 33 2d 31 39 39  |r, Inc. 1983-199|
00000040  31 0d 41 6c 6c 20 72 69  67 68 74 73 20 72 65 73  |1.All rights res|
00000050  65 72 76 65 64 2e 20 20  20 20 20 20 20 20 20 20  |erved.          |
00000060  20 20 20 20 20 20 20 20  20 20 20 20 20 20 20 20  |                |
*
00000200  60 04 4e fa 05 22 59 4f  2f 3c 62 6f 6f 74 3f 3c  |`.N.."YO/<boot?<|
00000210  00 01 a9 a0 22 1f 67 54  4f ef ff 86 20 4f 42 a8  |....".gTO... OB.|
00000220  00 12 42 68 00 1c 42 68  00 16 a2 07 66 34 31 68  |..Bh..Bh....f41h|
00000230  00 42 00 16 67 36 31 68  00 44 00 18 22 41 22 51  |.B..g61h.D.."A"Q|
00000240  21 49 00 20 21 7c 00 00  04 00 00 24 31 7c 00 01  |!I. !|.....$1|..|
00000250  00 2c 42 a8 00 2e a0 03  66 08 20 78 02 ae 4e e8  |.,B.....f. x..N.|
00000260  00 0a 0c 40 ff d4 66 04  70 68 a9 c9 70 63 a9 c9  |...@..f.ph..pc..|
00000270  a9 20 31 39 38 33 2c 20  31 39 38 34 2c 20 31 39  |. 1983, 1984, 19|
00000280  38 35 2c 20 31 39 38 36  2c 20 31 39 38 37 2c 20  |85, 1986, 1987, |
00000290  31 39 38 38 2c 20 31 39  38 39 2c 20 31 39 39 30  |1988, 1989, 1990|
000002a0  2c 20 31 39 39 31 20 41  70 70 6c 65 20 43 6f 6d  |, 1991 Apple Com|
000002b0  70 75 74 65 72 20 49 6e  63 2e 0d 41 6c 6c 20 52  |puter Inc..All R|
000002c0  69 67 68 74 73 20 52 65  73 65 72 76 65 64 2e 0d  |ights Reserved..|
000002d0  0d 48 65 6c 70 21 20 48  65 6c 70 21 20 57 65 d5  |.Help! Help! We.|
000002e0  72 65 20 62 65 69 6e 67  20 68 65 6c 64 20 70 72  |re being held pr|
000002f0  69 73 6f 6e 65 72 20 69  6e 20 61 20 73 79 73 74  |isoner in a syst|
00000300  65 6d 20 73 6f 66 74 77  61 72 65 20 66 61 63 74  |em software fact|
00000310  6f 72 79 21 0d 0d 54 68  65 20 42 6c 75 65 20 4d  |ory!..The Blue M|
00000320  65 61 6e 69 65 73 0d 0d  44 61 72 69 6e 20 41 64  |eanies..Darin Ad|
00000330  6c 65 72 0d 53 63 6f 74  74 20 42 6f 79 64 0d 43  |ler.Scott Boyd.C|
00000340  68 72 69 73 20 44 65 72  6f 73 73 69 0d 43 79 6e  |hris Derossi.Cyn|
00000350  74 68 69 61 20 4a 61 73  70 65 72 0d 42 72 69 61  |thia Jasper.Bria|
00000360  6e 20 4d 63 47 68 69 65  0d 47 72 65 67 20 4d 61  |n McGhie.Greg Ma|
00000370  72 72 69 6f 74 74 0d 42  65 61 74 72 69 63 65 20  |rriott.Beatrice |
00000380  53 6f 63 68 6f 72 0d 44  65 61 6e 20 59 75 0d 00  |Sochor.Dean Yu..|
Help! Help! We're being held prisoner in a system software factory! The Blue Meanies: Darin Adler, Scott Boyd, Chris Derossi, Cynthia Jasper, Brian McGhie, Greg Marriott, Beatrice Sochor, Dean Yu.
MacOS 7.6 installer images

 
Do You Sign Your Work?

Every ASIC I worked on has an undocumented register hardwired to read out my initials, and many ASIC designers follow a similar practice. Some take it further by changing the actual operation of the device; for example, I've heard of a mode to insert a phrase like "We are the Knights who say Ni!" into the datastream (actual phrase omitted to protect the guilty). Functional modification like this always seemed too risky to me: a bug or manufacturing defect could conceivably enable it unexpectedly.

In ASIC design these tidbits serve a real business purpose: it is not unknown for departing employees to take a copy of a netlist or Verilog source with them, and the company can later find itself competing with its own designs. The existence of these telltale registers can provide legal proof that the design was stolen, not reverse engineered.

Amongst software developers the practice of signing one's work is far from universal. GUI applications sometimes put developers' names in the About box, though even this practice seems to be less common than it used to be. Infrastructure devices without a direct display to the user typically don't include any way of crediting their developers, in my experience. I think that is a shame. Signing one's work represents pride in craftsmanship, a desire to broadcast that "I made this."
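Even software with no About box has unobtrusive places to sign. As a minimal sketch (the module, service name, and flag below are hypothetical, not any particular project's convention): embed the credits as a plain string constant, so they survive into whatever artifact ships and show up for anyone poking around with strings, much like the installer message above, and expose them behind an out-of-the-way flag.

# credits.py - hypothetical sketch of signing an infrastructure service.
# The constant survives into the shipped artifact, visible to anyone
# who goes looking, without ever appearing in normal operation.
import sys

__credits__ = (
    "frobnicated-daemon, built with care by "
    "A. Developer, B. Engineer, and C. Maintainer"
)

def main(argv):
    # An out-of-the-way flag keeps the credit accessible without being intrusive.
    if "--credits" in argv:
        print(__credits__)
        return 0
    # ... normal startup would go here ...
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))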


 
Can You Sign Your Work?

Sometimes a company will ban the practice of listing developers' names. A common reason I've heard is that they don't want to enable recruiters to target their developers, but that is an astonishingly bad reason. Not only is it completely ineffective in this age of LinkedIn and hyper-connectedness, it's also insulting that denying credit is considered a retention policy.

A second, more acceptable reason for banning names is that given how large and loosely connected the teams working on a product can be, it's easy to mistakenly omit people who've made valuable contributions, engendering resentment. This is plausible, but I suspect it means the teams should find a way to maintain their own lists of contributors and combine them as needed in the final product.


 
You Should Sign Your Work

I encourage developers to sign their work in an accessible (though not gratuitously intrusive) way. Encouraging pride in one's craft is a net positive for the product, and for the profession. Civil engineers and architects on large projects sign their work, in the form of a plaque or cornerstone. Artists and craftspeople sign their work. Software engineers should, too.


 

Thursday, December 2, 2010

Engineering in a Small World

I currently work in a relatively large development team. As is the case with every team of that size, we are organized as one enormous group where everybody works with everybody else, every day. I've graphed out our team interactions. I'm sure it looks a lot like your team, right?

fully connected graph of 20 people

loosely connected group of 20 people

Wait: does that sound weird, based on your experience? You're right, I made it all up. We're not organized as one enormous group; we're grouped into smaller teams like everybody else. Yet to a degree, the larger group has to be able to coordinate between every single person, every day. How is this accomplished?

Even in a relatively small group of people, a certain pattern emerges. Most individuals in the group interact with a small number of others, but a few are far more highly connected and routinely interact with dramatically more. These connectors result in enormous groups, loosely coupled. This is the phenomenon which leads to the six degrees of separation theory, that on average any two people on the planet can be connected by six friends of friends. This pattern is also the basis of the six degrees of Kevin Bacon, who is one of those "highly connected" nodes in the graph of film actors.
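To make the effect concrete, here is a minimal sketch in pure Python with made-up parameters (twenty fully connected five-person teams chained together, then the same graph plus two connectors who each know one person on every team). It is an illustration of the pattern, not a model of any real organization.

# small_world.py - illustrative simulation of the connector effect.
import random
from collections import deque

def average_path_length(adj):
    """Mean shortest-path length over all reachable pairs, via BFS from each node."""
    total, pairs = 0, 0
    for start in adj:
        dist = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nbr in adj[node]:
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    queue.append(nbr)
        total += sum(d for n, d in dist.items() if n != start)
        pairs += len(dist) - 1
    return total / pairs

def build_teams(num_teams=20, team_size=5):
    """Fully connected teams, chained together by one link between neighboring teams."""
    adj = {i: set() for i in range(num_teams * team_size)}
    for t in range(num_teams):
        members = range(t * team_size, (t + 1) * team_size)
        for a in members:
            for b in members:
                if a != b:
                    adj[a].add(b)
        if t > 0:  # single link from this team's first member to the previous team's
            adj[t * team_size].add((t - 1) * team_size)
            adj[(t - 1) * team_size].add(t * team_size)
    return adj

def add_connectors(adj, count=2, team_size=5):
    """Add a few people who each know one person on every team."""
    num_teams = len(adj) // team_size
    for _ in range(count):
        c = max(adj) + 1
        adj[c] = set()
        for t in range(num_teams):
            other = random.randrange(t * team_size, (t + 1) * team_size)
            adj[c].add(other)
            adj[other].add(c)
    return adj

if __name__ == "__main__":
    random.seed(0)
    print("teams only:       ", round(average_path_length(build_teams()), 2))
    print("plus 2 connectors:", round(average_path_length(add_connectors(build_teams())), 2))

Run it and the second average comes out much smaller than the first; the exact figures depend on the parameters, but the shape of the result does not.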


 
The Small World Pattern

This phenomenon is called the Small World pattern. I first read about it in Here Comes Everybody.

Cover of Here Comes Everybody

Here Comes Everybody, chapter 9.
... the chance that you know [a highly connected person] is high. And the "knowing someone in common" link - the thing that makes you exclaim "Small World!" with your seat mate - is specifically about that kind of connection.


The Small World Pattern seems obvious, in hindsight. Of course some people are simply more social and outgoing than others. They make an effort to meet people. They form connections. They are far more connected to other people than most.

The rest of this musing will concern the Small World Pattern in engineering organizations.


 
The Small World Scoffs at Your Orgchart

Connections can be forced, organizationally: a regular meeting between tech leads from related projects, for example. Connections can also happen by happenstance, as when members of different teams work at adjacent desks. However, the strongest connections happen because some percentage of the engineering population wants to be connected. They are outgoing, and enjoy talking to people outside their immediate coworkers. These connections are far more persistent, and likely to survive past the end of any particular project or recurring meeting.


 
No Group is an Island, but Some are Peninsulas

Something which can happen in a large company: you work on an infrastructure project which should be applicable in a number of different areas, yet never seems to get the attention you think it deserves. Other groups which could leverage your work instead do their own thing, and later only grudgingly evaluate your system before pronouncing it unfit. Is it because you've misunderstood their requirements? Is it because they think your implementation is poor?

More likely, it's because you lack connections from your group to others. It takes just one person in the right place at the right time to say "we should go talk to John on Project Foo." When these suggestions are made organically and at the right time, they are far more likely to be acted upon. When such a suggestion comes as an edict way after the decision point, such as via some recurring meeting, it is far less likely to be received favorably.


 
To the Connector Go the Spoils

Being highly connected within an engineering organization reaps many rewards. People associate the connector with the good outcomes of all those serendipitous introductions.

Being highly connected within an engineering organization also has some downsides. I wish I understood the psychological reason why, but nonetheless it happens: your technical competence as an individual contributor will be questioned more often if you spend significant time interacting with other groups. It's weird.


 
Closing Thoughts

Engineers are human, though in your daily work it might not always seem so. Understanding human behavior is as important in our field as in any other. I highly recommend Shirky's Here Comes Everybody, and his subsequent Cognitive Surplus. Both are excellent.