Friday, December 31, 2010

Favorite Posts

This is the 173rd article posted to Coding Relic, spanning almost 3 years of writing. As the end of the year is a time to reflect, I combed back through the archives to select a few favorite posts. These were chosen because they were interesting to research or particularly fun to write. There is some coding, a smattering of social networking, a smidgen of assembly language, a musing on marketing, and a bit of ASIC architecture.

The articles listed here were not chosen based on traffic numbers or popularity. In fact, the post on this site with the highest traffic (by a very large margin) is the one technical article I ever wrote about Android: The Six Million Dollar Libc, a tour through the source code of the Bionic library. I haven't included it here because I don't feel I really did justice to the topic, being a fairly thin survey of the code. It is a sign of the popularity of all things Android that people keep coming here to read it.

As this is the first such retrospective I've published, I've included sections for each of 2010, 2009, and 2008, plus a separate section for the jokes which run on this site on Mondays.

2010
Toward a Faster Web: Increase the Speed of Light
x86 vs ARM Mobile CPUs
Uncanny Friending
Player Piano Torpedoes
2009
AMD IOMMU: Missed Opportunity?
Soft Errors are Hard Problems
Plummeting Down the Chasm
DRY and the DMV
2008
Ode to Enum
Aliasing by Any Other Name
The Good, Bad, and Ugly of Gizmo Construction
The Secret Life of Volatile
Jokes
More Halloween Scares for Google Fans
Zeus SCM
This!Would!Be!So!Awesome!
Odd Calendar Behavior

Thursday, December 30, 2010

Reflections on Three Years Blogging

This blog has evolved into an outlet for creative expression, for attempts at humor, and for technical writeups on various topics. Though originally focused exclusively on embedded software development and networking, after a few months this proved too limiting. Postings have broadened to include aspects of social media, meta discussions on life as an engineer, plus a smattering of space exploration and other mostly-technical topics. In late 2009 I also started posting vaguely technical jokes, taking some liberties with the definition of "humor." Broadening the topics meant the frequency of posting increased from twice per month to twice per week on average, with a joke on Monday and a meatier post later in the week.

One thing this blog has not evolved into is a direct source of income. Initially there were AdSense units on the page, but the math just doesn't work for a site like this. As each technical article can take quite a while to research and polish, I found myself computing the hourly rate for time spent writing, which is a highly negative train of thought. Advertising works well for a variety of web sites, but not this one. I removed the ads in late 2008.


 
Writing Resolutions for 2011

In 2011 I plan to make a few changes to how I write.

Writing Resolution #1: Write a Guest Post. I wrote an article on April Fool's Day 2009 for crankypm.com, which was a very enjoyable experience. I'd like to write at least one guest post in 2011, to get out of my comfort zone. I haven't yet picked a topic nor identified a site willing to run it; all in good time. Accepting guest posts seems to be a less common practice than it once was, but it does still happen.

Writing Resolution #2: Stretch further for technical articles. Since late 2009 I've maintained a pace of one joke plus one in-depth article per week, notwithstanding the occasional missed week. Writing a technical post each week means they're in areas where I already have at least a passing familiarity, and which only demand incremental research to finish. When publishing at a more relaxed pace I was able to write articles which required more of a stretch, like spending time working with Google App Engine. I learned a lot from those experiences and would like to get back to it, even if it means publishing less often. Having a publication schedule is good as it helps maintain focus, but given a suitably difficult topic, skipping a week in order to make it happen is an acceptable tradeoff.

I also plan to publicize a bit more, though I don't have a specific enough plan to phrase it as a third resolution. Tomorrow I'll highlight some of my favorite posts from the past three years. You have been warned; unsubscribe now if you can't stand it.


 
Other Outlets

Not everything gets posted here; I do try to stay on topic. There are a few outlets where I post other material:

  • I follow a large number of technical and coding blogs, and share the posts I find most interesting.
  • I stay active on Buzz. Native Buzz posts are used for brainstorming and discussion, while shared articles from Google Reader and tweets are also imported.
  • @dgentry on Twitter is for short quips and links.

Wednesday, December 29, 2010

OneTrueFan Observation

OneTrueFan is a service to track web history, allowing web users to see what sites they spend the most time visiting. It also allows site operators to see their most active users, though only amongst those who have signed up for the service. You can read more about OneTrueFan here.

Users accumulate points for a site by visiting, sharing links, and other activities. OneTrueFan rate limits point increases amongst players to one point every few seconds. I have no way to tell whether this was intended to discourage gaming of the system, or whether it is a coalescing mechanism for scalability which batches site hits every few seconds. Nonetheless, for any site with a large collection of pages, at present it is relatively easy to take the OneTrueFan title.
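OneTrueFan hasn't published how its limiter works, so any code here is guesswork; still, the simplest interpretation of "one point every few seconds" is a timestamp check per user. A minimal sketch, with all names and the interval invented:

```cpp
#include <ctime>

// Hypothetical sketch of a per-user point rate limit: award at most one
// point every kIntervalSecs. This is NOT OneTrueFan's implementation,
// whose details are not public; the names and interval are made up.
const int kIntervalSecs = 5;

struct PointBucket {
  time_t last_award = 0;  // when the previous point was granted
  long points = 0;
};

// Returns true if a point was awarded for activity occurring at `now`.
bool award_point(PointBucket* b, time_t now) {
  if (b->last_award != 0 && now - b->last_award < kIntervalSecs) {
    return false;  // too soon after the previous point
  }
  b->last_award = now;
  b->points++;
  return true;
}
```

A batching ("coalescing") implementation would instead queue the hits and apply them periodically; from the outside the two look much the same, which is why it's hard to tell which approach the service uses.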

Screenshot of louisgray.com showing me as the One True Fan

My list of shared items

Users who have installed the OTF browser extension see the web bar on every site they visit. Site owners can also install the web bar on their pages, making it visible to all visitors whether they use the browser extension or not. Hovering over the pictures in the web bar shows a list of recently shared links. My list of shared links is relatively tame. It's not difficult to imagine links to WoW Gold sales, or porn sites, or any of the other innumerable schemes spam is used to peddle. The potential for mischief is there.

It is possible that OneTrueFan already has effective spam controls, focused on the shared links rather than on obtaining the top spot in the fan list. I did not create a profile with spam links to check, nor do I intend to. In any case I expect they realize the importance of effective spam controls for a service which inserts content into other websites, and will need to keep focusing on them.


Update: In the comments Eric Marcoullier (co-founder and co-CEO of OneTrueFan) described the current spam prevention tools in the service, and discussed some plans for the future.

Monday, December 27, 2010

Toddlergooium

 General properties
Name: Toddlergooium
Atomic Symbol: Tg
Element Category: nonmetal
Group, period, block: 17, 7, p
 
 Physical properties
Phase: goo
Melting point: nominal 52 °C
(never completely washes out)
Other properties: highly adhesive
Toddlergooium periodic table entry

Thursday, December 23, 2010

Signing Your Work

I recently had occasion to go digging around in the installers for MacOS System 7.0.1 and 7.6, extracting their excellent beep sounds to use on my phone. While schlepping around I found wonderful little gems where the developers signed their work. The 7.0.1 installer binary contains a plea for help from the Blue Meanies, shown here. The 7.6 installation tome contains a series of images, reproduced further down this page. As the best laid plans of mice and developers often go astray, the largest image is corrupted in the CD golden image, with a blue cast over the bottom third of the image. I'm sure that was disappointing.

00000000  4d 61 63 69 6e 74 6f 73  68 20 53 79 73 74 65 6d  |Macintosh System|
00000010  20 76 65 72 73 69 6f 6e  20 37 2e 30 2e 31 0d 0d  | version 7.0.1..|
00000020  0d a9 20 41 70 70 6c 65  20 43 6f 6d 70 75 74 65  |.. Apple Compute|
00000030  72 2c 20 49 6e 63 2e 20  31 39 38 33 2d 31 39 39  |r, Inc. 1983-199|
00000040  31 0d 41 6c 6c 20 72 69  67 68 74 73 20 72 65 73  |1.All rights res|
00000050  65 72 76 65 64 2e 20 20  20 20 20 20 20 20 20 20  |erved.          |
00000060  20 20 20 20 20 20 20 20  20 20 20 20 20 20 20 20  |                |
*
00000200  60 04 4e fa 05 22 59 4f  2f 3c 62 6f 6f 74 3f 3c  |`.N.."YO/<boot?<|
00000210  00 01 a9 a0 22 1f 67 54  4f ef ff 86 20 4f 42 a8  |....".gTO... OB.|
00000220  00 12 42 68 00 1c 42 68  00 16 a2 07 66 34 31 68  |..Bh..Bh....f41h|
00000230  00 42 00 16 67 36 31 68  00 44 00 18 22 41 22 51  |.B..g61h.D.."A"Q|
00000240  21 49 00 20 21 7c 00 00  04 00 00 24 31 7c 00 01  |!I. !|.....$1|..|
00000250  00 2c 42 a8 00 2e a0 03  66 08 20 78 02 ae 4e e8  |.,B.....f. x..N.|
00000260  00 0a 0c 40 ff d4 66 04  70 68 a9 c9 70 63 a9 c9  |...@..f.ph..pc..|
00000270  a9 20 31 39 38 33 2c 20  31 39 38 34 2c 20 31 39  |. 1983, 1984, 19|
00000280  38 35 2c 20 31 39 38 36  2c 20 31 39 38 37 2c 20  |85, 1986, 1987, |
00000290  31 39 38 38 2c 20 31 39  38 39 2c 20 31 39 39 30  |1988, 1989, 1990|
000002a0  2c 20 31 39 39 31 20 41  70 70 6c 65 20 43 6f 6d  |, 1991 Apple Com|
000002b0  70 75 74 65 72 20 49 6e  63 2e 0d 41 6c 6c 20 52  |puter Inc..All R|
000002c0  69 67 68 74 73 20 52 65  73 65 72 76 65 64 2e 0d  |ights Reserved..|
000002d0  0d 48 65 6c 70 21 20 48  65 6c 70 21 20 57 65 d5  |.Help! Help! We.|
000002e0  72 65 20 62 65 69 6e 67  20 68 65 6c 64 20 70 72  |re being held pr|
000002f0  69 73 6f 6e 65 72 20 69  6e 20 61 20 73 79 73 74  |isoner in a syst|
00000300  65 6d 20 73 6f 66 74 77  61 72 65 20 66 61 63 74  |em software fact|
00000310  6f 72 79 21 0d 0d 54 68  65 20 42 6c 75 65 20 4d  |ory!..The Blue M|
00000320  65 61 6e 69 65 73 0d 0d  44 61 72 69 6e 20 41 64  |eanies..Darin Ad|
00000330  6c 65 72 0d 53 63 6f 74  74 20 42 6f 79 64 0d 43  |ler.Scott Boyd.C|
00000340  68 72 69 73 20 44 65 72  6f 73 73 69 0d 43 79 6e  |hris Derossi.Cyn|
00000350  74 68 69 61 20 4a 61 73  70 65 72 0d 42 72 69 61  |thia Jasper.Bria|
00000360  6e 20 4d 63 47 68 69 65  0d 47 72 65 67 20 4d 61  |n McGhie.Greg Ma|
00000370  72 72 69 6f 74 74 0d 42  65 61 74 72 69 63 65 20  |rriott.Beatrice |
00000380  53 6f 63 68 6f 72 0d 44  65 61 6e 20 59 75 0d 00  |Sochor.Dean Yu..|
Help! Help! We're being held prisoner in a system software factory! The Blue Meanies: Darin Adler, Scott Boyd, Chris Derossi, Cynthia Jasper, Brian McGhie, Greg Marriott, Beatrice Sochor, Dean Yu.
MacOS 7.6 installer images

 
Do You Sign Your Work?

Every ASIC I worked on has an undocumented register hardwired to read out my initials, and many ASIC designers follow a similar practice. Some take it further by changing the actual operation of the device, for example I've heard of a mode to insert a phrase like "We are the Knights who say Ni!" into the datastream (actual phrase omitted to protect the guilty). Functional modification like this always seemed too risky to me, a bug or manufacturing defect could conceivably enable it unexpectedly.

In ASIC design these tidbits serve a real business purpose: it is not unknown for departing employees to take a copy of a netlist or verilog source with them, and the company can find itself competing with its own designs. The existence of these telltale registers can serve as legal proof that the design was stolen, not reverse engineered.
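In C terms (a hedged model, not any real device's register map), the telltale is just one extra case in the read decode; the address and value below are invented for illustration:

```cpp
#include <cstdint>

// Model of an undocumented read-only ID register. In actual RTL this is
// a constant driven onto the read bus at one address; it is modeled in
// C++ here purely for illustration. Address and contents are made up.
const uint32_t kTelltaleAddr = 0x0FFC;
const uint32_t kTelltaleValue = 0x44454721;  // ASCII "DEG!"

uint32_t reg_read(uint32_t addr) {
  switch (addr) {
    case kTelltaleAddr:
      return kTelltaleValue;  // hardwired designer initials
    default:
      return 0;  // the real registers are elided
  }
}
```

Because the register is read-only and drives no other logic, it cannot change device behavior the way a "Knights who say Ni" mode could, which is what makes it the safer choice.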

Amongst software developers the practice of signing one's work is far from universal. GUI applications sometimes put developer names in the About box, though even this practice seems to be less common than it used to be. Developers of infrastructure devices without a direct display to the user typically don't include any way of crediting the developers, in my experience. I think that is a shame. Signing one's work represents pride in craftsmanship, a desire to broadcast that "I made this."


 
Can You Sign Your Work?

Sometimes a company will ban the practice of listing developers' names. A common reason I've heard is that they don't want to enable recruiters to target their developers, but that is an astonishingly bad reason. Not only is it completely ineffective in this age of LinkedIn and hyper-connectedness, it's also insulting that denying credit is considered to be a retention policy.

A second, more acceptable reason for banning names is that given the size of and loose connections amongst the teams working on a product, it's easy to mistakenly omit people who've made valuable contributions, engendering resentment. This is plausible, but I suspect it means the teams should find a way to maintain their own lists of contributors and combine them as needed in the final product.


 
You Should Sign Your Work

I encourage developers to sign their work in an accessible (though not gratuitously intrusive) way. Encouraging pride in one's craft is a net positive for the product, and for the profession. Civil Engineers and architects on large projects sign their work, in the form of a plaque or cornerstone. Artists and craftspeople sign their work. Software engineers should, too.


 

Monday, December 20, 2010

Atomic Weight Adjustments

element old new
Hydrogen (H) 1.00794 [1.00784; 1.00811]
Lithium (Li) 6.941 [6.938; 6.997]
Boron (B) 10.811 [10.806; 10.821]
Carbon (C) 12.0107 [12.0096; 12.0116]
Nitrogen (N) 14.0067 [14.00643; 14.00728]
Oxygen (O) 15.9994 [15.99903; 15.99977]
Silicon (Si) 28.0855 [28.084; 28.086]
Sulfur (S) 32.065 [32.059; 32.076]
Chlorine (Cl) 35.453 [35.446; 35.457]
Thallium (Tl) 204.3833 [204.382; 204.385]
Germanium (Ge) 72.64 72.63

2011 is the International Year of Chemistry, reflecting the crucial importance of the physical sciences. To drive this point home and to demonstrate their power, the International Union of Pure and Applied Chemistry has decided to update the atomic weights of 11 elements, as described in this Ars Technica article and directly on the IUPAC site. The changes are relatively small, and elements with several common isotopes are now expressed as a range reflecting common ratios found in nature. For your convenience I've reproduced the updated atomic weights here.

Note that both Hydrogen and Oxygen have been updated. As water (H2O) is the largest component of the human body by mass, you may notice a difference on the scale in the morning. The good news is that in both Hydrogen and Oxygen the lower end of the new range is lighter than the old value, so on some days you may find that you weigh less than before. Unfortunately it is more likely to find yourself made up of heavier variants of the molecule. If this happens, try switching to a different brand of bottled water or moving to an area with a different source of municipal water.

A more significant adjustment is in the atomic properties of Silicon, which forms the basis of nearly all electronic technology. It is impossible to predict the full ramifications of this move, however it is hoped that they will be relatively minor. GPU results are likely to be slightly red-shifted, and you may need to adjust your monitor to compensate. There is absolutely no truth to the rumor that the change in Silicon will make it impossible for hackers to attack systems, please continue with your current vigilance and don't slack off on anti-virus and firewall software. Note that Germanium is also being adjusted, so a quick switch to a different semiconductor wouldn't help.

The most important take-away message is: Don't Panic. Sure, the Sun is mostly Hydrogen and we're blithely mucking with it, but there has been absolutely no credible evidence published in peer reviewed scientific journals that this will lead to a supernova, nor is there time to make it through the peer review process before the alleged supernova would happen. Don't believe anything you may read on the Internet making claims to the contrary.


 
 
 
 

(Yes, this is a joke)

Thursday, December 16, 2010

Code Snippet: libarchive

Paper Tape

libarchive is a library to handle tar, zip, cpio, pax, and many other archive formats. It uses a "walk through the archive" programming model, generally eschewing random access. Diving straight in, we'll open a tar archive and list the files therein.

#include <archive.h>
#include <archive_entry.h>

archive.h contains the APIs for working with archives; archive_entry.h deals with the files within an archive.

struct archive* archive = archive_read_new();
assert(archive != NULL);

archive_read_new() allocates the data structure to read an archive. It is only allocated in memory, and does not open a file on disk or tape. Later we'll open the file and associate it with the data structure.

if ((archive_read_support_compression_all(archive) != ARCHIVE_OK) ||
    (archive_read_support_format_all(archive) != ARCHIVE_OK)) {
  archive_read_finish(archive);
  // Error handling
}

There is a series of APIs like archive_read_support_compression_bzip2() or archive_read_support_format_tar() which can restrict the set of allowed formats, but here we set both the compression filter and the format to anything libarchive supports. libarchive relies on external libraries for some formats, such as zlib for gzip, so the choices made when building libarchive will restrict the formats it can support.

if (archive_read_open_filename(archive, "foobar.tgz", 8192) != ARCHIVE_OK) {
  archive_read_finish(archive);
  // Error handling
}

Here we've asked libarchive to open a file by name. There are also archive_read_open_FILE() and archive_read_open_fd() APIs to pass in a FILE* or file descriptor, respectively.

"8192" is the block size, which is used for a few archive formats like tar. Nonetheless libarchive does a good job of determining the real block size if it is incorrect. There is mention of removing the block size parameter in a future version of the library and relying solely on inferring it from the file.

struct archive_entry *entry;
while (archive_read_next_header(archive, &entry) == ARCHIVE_OK) {
  printf("file = %s\n", archive_entry_pathname(entry));
}

This is the main point of the routine: iterate through the entries in the file printing filenames, skipping over the data in between. Many archive formats lack a complete table of contents, instead allowing appends to extend the archive ad hoc. archive_read_next_header() will often have to seek through the file to find the next entry. If the file is located on a remote filesystem, this can be slow.

  archive_read_finish(archive);

When we're done, archive_read_finish() frees the resources allocated by archive_read_new().


 
Reading File Contents

To extract a file from the archive you first iterate through archive_read_next_header() until you find an entry with the filename you want. I'll skip the code which does this as it is identical to that shown above, and start from the point where entry points to the file we want.

int64_t total = archive_entry_size(entry);
char buf[MY_BUF_SIZE];
size_t len_to_read = (total < (int64_t)sizeof(buf)) ? (size_t)total : sizeof(buf);
ssize_t size = archive_read_data(archive, buf, len_to_read);
if (size <= 0) {
  // Error handling
}

archive_read_data() reads the content of the current entry into a buffer. There are several variations, such as archive_read_data_block() which additionally provides the data's offset within the entry, and archive_read_extract() which reads data and writes it to a file on disk.


 
Writing Files

Writing to an archive uses a similar set of APIs as reading.

  struct archive* archive = archive_write_new();
  assert(archive != NULL);

archive_write_new() allocates the data structure to track an archive. It does not create anything on disk.

if ((archive_write_set_compression_gzip(archive) != ARCHIVE_OK) ||
    (archive_write_set_format_ustar(archive) != ARCHIVE_OK) ||
    (archive_write_open_filename(archive, "foobar.tgz") != ARCHIVE_OK)) {
  // Error handling
}

Where the read APIs allow "all" as a choice, writing an entry requires you to pick a format. Here I've chosen a tar.gz, and written it to foobar.tgz.

struct archive_entry* entry = archive_entry_new();
assert(entry != NULL);

struct timespec ts;
if (clock_gettime(CLOCK_REALTIME, &ts) != 0) {
  // Error handling
}

archive_entry_set_pathname(entry, filename);
archive_entry_set_size(entry, contents_len);
archive_entry_set_filetype(entry, AE_IFREG);
archive_entry_set_perm(entry, 0444);
archive_entry_set_atime(entry, ts.tv_sec, ts.tv_nsec);
archive_entry_set_birthtime(entry, ts.tv_sec, ts.tv_nsec);
archive_entry_set_ctime(entry, ts.tv_sec, ts.tv_nsec);
archive_entry_set_mtime(entry, ts.tv_sec, ts.tv_nsec);

Here we create the metadata for a file in the archive, populating it with permissions and timestamps. Not all archive formats support all of these timestamps, but it seems a good idea to populate them in case a different format is chosen later.

int rc = archive_write_header(archive, entry);
archive_entry_free(entry);
entry = NULL;
if (ARCHIVE_OK != rc) {
  // Error handling
}

Once the metadata has been written to the archive, the archive_entry is no longer needed.

ssize_t written = archive_write_data(archive, contents, contents_len);
if (written != (ssize_t)contents_len) {
  // Error handling
}

archive_write_finish(archive);

Finally, we write the data. contents is a pointer to a buffer in memory, contents_len is its length in bytes. archive_write_data() can be called multiple times, each will append its contents at the end of the last. There is no random access API with an offset parameter.


 
Closing Thoughts

libarchive APIs are designed to allow use with either disk or tape. There are no APIs to overwrite bytes in the middle of a file, because tape drives cannot do that without corrupting adjacent data. There is an alternate set of APIs designed for disk in archive_read_disk and archive_write_disk, though I see relatively little difference in them other than accessing the uid/gid of the archive itself.

I hope you find this useful.


Monday, December 13, 2010

Game Mechanics All the Way Down

There has been much discussion about the impact of Facebook Places on geo-location services, and what FourSquare should do to differentiate itself. I humbly present some suggestions on new areas to move into, and how they might benefit from the game mechanics FourSquare uses for location checkins.

Badge with bowling ball FourSpare: Badges for Bowling

Example mechanics:

  • STRIKE! Ten points!
  • Gutter ball. No points this time, but try again!
Badge with a pair of pants FourWear: Game Mechanics for Clothes Shopping

Example mechanics:

  • Blue. Definitely blue. 10 points!
  • No points for khakis. Perhaps you should try for the FourSpare badge instead.
Badge with a silhouette of a horse FourMare: Internet Betting is Illegal, Try This Instead

Example mechanics:

  • Always check the teeth. 5 points.
  • Sure you lost all your money, but more importantly you lost 200 points.
Badge with a stylized alligator FourDare: Points for Insanely Dangerous Things

Example mechanics:

  • Scary looking animals don't automatically score higher, it has to be truly dangerous.
  • A broken arm earns 20 points. Break your clavicle for more points!
Badge with a fluffy white cloud FourAir: The Gamification of Breathing

Example mechanics:

  • (inhale) That's 2 points!
  • (exhale) Another 2 points!

Sunday, December 12, 2010

Sand Mandala

Last week over the course of five days, a group of Buddhist monks constructed a Sand Mandala in a lobby of the Googleplex, using brightly colored sand they had ground and prepared by hand. Quoting from the description provided by the monks, "This work of art is meant to depict a picture of the world in its divine form, and represents a map by which the ordinary human mind is transformed into the enlightened mind."

Sand Mandala

Sudhakar 'Thaths' Chandra posted a time lapse video of its construction over the entire five day span.

The Mandala also represents the impermanence of all things. On Friday, in a dissolution ceremony, the Mandala was swept away. The sand used in its construction, and all of the energy it contains, was cast into a body of water as an offering to the Naga. I took home a small bag of sand from this Mandala, which we cast into a lake near our house.

Sand Mandala

While they were here I got the opportunity to hear a talk and sit in meditation with one of the monks. I've tried meditation several times before, and it turns out that his mere presence didn't make it easier to maintain focus. Who knew? I guess I'll keep practicing.

Monday, December 6, 2010

Orgchart Enlightenment

orgchart as rearranged caffeine molecule

It turns out that arranging your orgchart as an unfolded caffeine molecule works better in theory than in practice.

Thursday, December 2, 2010

Engineering in a Small World

I currently work in a relatively large development team. As is the case with every team of that size, we are organized as one enormous group where everybody works with everybody else, every day. I've graphed out our team interactions. I'm sure it looks a lot like your team, right?

fully connected graph of 20 people

loosely connected group of 20 people

Wait: does that sound weird, based on your experience? You're right, I made it all up. We're not organized as one enormous group; we're grouped into smaller teams like everybody else. Yet to a degree, the larger group has to be able to coordinate between every single person, every day. How is this accomplished?

Even in a relatively small group of people, a certain pattern emerges. Most individuals in the group interact with a small number of others, but a few are far more highly connected and routinely interact with dramatically more. These connectors result in enormous groups, loosely coupled. This is the phenomenon which leads to the six degrees of separation theory: that on average, any two people on the planet can be connected by six friends of friends. This pattern is also the basis of the six degrees of Kevin Bacon, who is one of those highly connected nodes in the graph of film actors.
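The effect a connector has on path length is easy to demonstrate with a toy graph. This sketch (my own illustration, not drawn from any real network data) measures hop count by breadth-first search: on a ring of eight people, opposite members are four introductions apart; add one well-connected person who knows everyone, and the distance collapses to two.

```cpp
#include <queue>
#include <vector>

// Breadth-first search hop count between two nodes of an unweighted
// graph, given as an adjacency list. Returns -1 if dst is unreachable.
int bfs_distance(const std::vector<std::vector<int>>& adj, int src, int dst) {
  std::vector<int> dist(adj.size(), -1);
  std::queue<int> q;
  dist[src] = 0;
  q.push(src);
  while (!q.empty()) {
    int u = q.front();
    q.pop();
    for (int v : adj[u]) {
      if (dist[v] == -1) {   // first time we reach v: shortest path found
        dist[v] = dist[u] + 1;
        q.push(v);
      }
    }
  }
  return dist[dst];
}
```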


 
The Small World Pattern

This phenomenon is called the Small World pattern. I first read about it in Here Comes Everybody.

Cover of Here Comes Everybody

Here Comes Everybody, chapter 9.
... the chance that you know [a highly connected person] is high. And the "knowing someone in common" link - the thing that makes you exclaim "Small World!" with your seat mate - is specifically about that kind of connection.


The Small World Pattern seems obvious, in hindsight. Of course some people are simply more social and outgoing than others. They make an effort to meet people. They form connections. They end up far more connected to other people than most.

The rest of this musing will concern the Small World Pattern in engineering organizations.


 
The Small World Scoffs at Your Orgchart

Connections can be forced, organizationally: a regular meeting between tech leads from related projects, for example. Connections can also arise by happenstance, as when members of different teams work at adjacent desks. However, the strongest connections happen because some percentage of the engineering population wants to be connected. They are outgoing, and enjoy talking to people outside their immediate coworkers. These connections are far more persistent, and likely to survive past the end of any particular project or recurring meeting.


 
No Group is an Island, but Some are Peninsulas

Something which can happen in a large company: you work on an infrastructure project which should be applicable in a number of different areas, yet never seems to get the attention you think it deserves. Other groups which could leverage your work instead do their own thing, and later only grudgingly evaluate your system before pronouncing it unfit. Is it because you've misunderstood their requirements? Is it because they think your implementation is poor?

More likely, it's because you lack connections from your group to others. It takes just one person in the right place at the right time to say "we should go talk to John on Project Foo." When these suggestions are made organically and at the right time, they are far more likely to be acted upon. When such a suggestion comes as an edict well after the decision point, such as via some recurring meeting, it is far less likely to be received favorably.


 
To the Connector Go the Spoils

Being highly connected within an engineering organization reaps many rewards. People associate connectors with the good outcomes of serendipitous introductions.

Being highly connected within an engineering organization also has some downsides. I wish I understood the psychological reason why, but nonetheless it happens: your technical competence as an individual contributor will be questioned more often if you spend significant time interacting with other groups. It's weird.


 
Closing Thoughts

Engineers are human, though in your daily work it might not always seem so. Understanding human behavior is as important in our field as in any other. I highly recommend Shirky's Here Comes Everybody, and his subsequent Cognitive Surplus. Both are excellent.

Monday, November 29, 2010

Tuesday, November 23, 2010

Code Snippet: ctemplate

Web frameworks and content management systems like Django typically do not embed HTML strings directly in code. They separate the presentation of the data from the code which assembles the data by using templates. Here is an example Django template taken from a small App Engine project of mine:

<div class="resultsSectionItems">
{% for comment in friend.comments|slice:":29" %}
  <div><a href="http://friendfeed.com/e/{{ comment.entryObj.entryId }}" ... etc
  <span class="commentText">{{ comment.commentText }}</span>
  </div>

Each "{%" block is a template command. This template iterates through comments, creating links.


Templates maintain a separation of responsibilities. The code prepares data structures populated with the data to display. The template iterates over those structures, generating and formatting output. Templates are widespread within content management systems, but they can also be useful in embedded systems work. Some examples:

  • Presenting common system data to CLI, embedded web server, and SNMP backends.
  • Allowing an OEM to customize the output to include their logos and branding, without having to change code.
  • Easier support for multiple languages, as most text should be in templates not code. Templates also tend to compress well, lowering the footprint of internationalization.
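At its core, a template engine is just dictionary-driven substitution. As a deliberately naive sketch (a few lines of my own, nothing like any real engine's API), expanding {{NAME}} markers from a map shows the shape of the idea:

```cpp
#include <map>
#include <string>

// Naive illustration of template expansion: replace each {{NAME}} with
// its value from the dictionary. Real engines add sections, iteration,
// and escaping; this sketch only shows the division of labor between
// data (the map) and presentation (the template string).
std::string expand(const std::string& tpl,
                   const std::map<std::string, std::string>& dict) {
  std::string out;
  size_t pos = 0;
  while (pos < tpl.size()) {
    size_t open = tpl.find("{{", pos);
    if (open == std::string::npos) {
      out += tpl.substr(pos);  // no more markers; copy the tail
      break;
    }
    size_t close = tpl.find("}}", open + 2);
    if (close == std::string::npos) {
      out += tpl.substr(pos);  // unterminated marker; copy verbatim
      break;
    }
    out += tpl.substr(pos, open - pos);
    const std::string name = tpl.substr(open + 2, close - open - 2);
    auto it = dict.find(name);
    if (it != dict.end()) out += it->second;  // unknown names expand empty
    pos = close + 2;
  }
  return out;
}
```

The caller populates the map from live configuration; the template string can live in a file an OEM edits without touching code.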

Most CMSes are written in Python, Ruby, Java, Perl, or other high-level languages. There is an open-source C++ templating package called ctemplate, written by Craig Silverstein at Google. Here is an example which produces a portion of the Apache httpd.conf file based on internal configuration data:

# This file is autogenerated from configuration. Changes will be lost
# after the next config change.

{{#DIR}}<Directory {{PATH}}>
{{#OPTIONS}}  Options {{#OPT}}{{VAL}} {{/OPT}}{{/OPTIONS}}
  Order {{ORDER}}
</Directory>
{{/DIR}}

The key point of using ctemplate is the code which populates variables in the dictionaries, shown in the example below.

#include <assert.h>
#include <ctemplate/template.h>
#include <iostream>
#include <list>

void apache_example() {
  // Apache <Directory> blocks to create
  struct ApacheDir {
    const char* path;
    std::list<const char*> options;
    bool deny;
  } apache_dirs[] = {
    {"/var/www", {"FollowSymLinks"}, false},
    {"/SecretFeature", {"ExecCGI", "-Indexes"}, true},
    {"/Tetris", {}, false}
  };

  ctemplate::TemplateDictionary dict("APACHE_EXAMPLE");
  int num_dirs = sizeof(apache_dirs) / sizeof(apache_dirs[0]);
  for (int i = 0; i < num_dirs; ++i) {
    struct ApacheDir* entry = &apache_dirs[i];
    ctemplate::TemplateDictionary* sub_dict = dict.AddSectionDictionary("DIR");

    assert(entry->path != NULL);
    sub_dict->SetValue("PATH", entry->path);

    std::list<const char*>::const_iterator li;
    for (li = entry->options.begin(); li != entry->options.end(); ++li) {
      sub_dict->SetValueAndShowSection("OPT", *li, "OPTIONS");
    }
    sub_dict->SetValue("ORDER", (entry->deny ? "deny,allow" : "allow,deny"));
  }

  std::string output;
  ctemplate::ExpandTemplate("apache.tpl", ctemplate::DO_NOT_STRIP,
                            &dict, &output);
  std::cout << output << std::endl;
}

The example shows some interesting features beyond simple variable substitution. The OPTIONS section is only displayed if there are options present, by using SetValueAndShowSection() in the code. The output of running this code is:

# This file is autogenerated from configuration. Changes will be lost
# after the next config change.

<Directory /var/www>
  Options FollowSymLinks 
  Order allow,deny
</Directory>

<Directory /SecretFeature>
  Options ExecCGI -Indexes 
  Order deny,allow
</Directory>

<Directory /Tetris>

  Order allow,deny
</Directory>

Like many other templating systems, ctemplate can apply modifiers to expanded variables. The builtin modifiers mostly concern escaping of HTML, XML, or JSON to avoid common security issues like cross-site scripting. It is possible to supply additional variable modifiers by subclassing ctemplate::TemplateModifier. The App Engine example at the top of this article pipes variables through a slice statement to truncate strings to a specific length. We can create equivalent functionality for ctemplate by subclassing TemplateModifier. The Modify() method is the key part of the implementation.

class MaxlenModifier : public ctemplate::TemplateModifier {
  virtual void Modify(const char* in, size_t inlen,
                      const ctemplate::PerExpandData* per_expand_data,
                      ctemplate::ExpandEmitter* outbuf,
                      const std::string& arg) const {
    unsigned int maxlen;
    if ((sscanf(arg.c_str(), "=%u", &maxlen) == 1) && (maxlen <= inlen)) {
      outbuf->Emit(std::string(in, maxlen));
    } else {
      outbuf->Emit(in);
    }
  }
};

void modifier_example() {
  MaxlenModifier* maxlen = new MaxlenModifier();
  if (!(ctemplate::AddModifier("x-maxlen=", maxlen))) {
    printf("AddModifier failed\n");
    exit(1);
  }

  ctemplate::TemplateDictionary dict("MAXLEN_TEST");
  dict.SetValue("LONGSTRING", "0123456789abcdefghijklmnopqrstuvwxyz");
  std::string output;
  ctemplate::ExpandTemplate("maxlen.tpl", ctemplate::DO_NOT_STRIP,
                            &dict, &output);
  std::cout << output << std::endl;
}

Our custom modifier is instantiated in the template using x-maxlen=N. Prefixing custom modifiers with "x-" is very strongly encouraged in the ctemplate documentation.

The original string: {{LONGSTRING}}
A maxlen=10  string: {{LONGSTRING:x-maxlen=10}}
A maxlen=20  string: {{LONGSTRING:x-maxlen=20}}
A maxlen=80  string: {{LONGSTRING:x-maxlen=80}}

Here is the output, with the long string truncated to various lengths:

The original string: 0123456789abcdefghijklmnopqrstuvwxyz
A maxlen=10  string: 0123456789
A maxlen=20  string: 0123456789abcdefghij
A maxlen=80  string: 0123456789abcdefghijklmnopqrstuvwxyz

I've found ctemplate to be quite useful, and I hope others do as well.

Monday, November 22, 2010

Cross Control Confusion

My first thought when starting Photoshop this morning: "Why does Photoshop need to know my location? I should turn that off."

Adobe Photoshop splash screen icon looks like the Twitter location toggle

Saturday, November 20, 2010

Happy Birthday Microsoft Windows

Windows 1.0 logo

Windows 1.0 shipped on November 20, 1985, making today the 25th birthday of Windows. Happy Birthday, Windows. What a long, strange trip it has been.

Windows 1.0 logo birthday cake with candle

Thursday, November 18, 2010

Code Snippet: getifaddrs

A few months ago I posted a description of how to use SIOCGIFCONF to retrieve information about interfaces. SIOCGIFCONF is somewhat clunky in that you use an ioctl to find out how many interfaces are present, allocate enough memory to retrieve them all, and then issue another ioctl to actually get the information. To handle the vanishingly small chance that more interfaces will be added during the time you spend allocating memory, a fudge factor of 2x is added to the memory allocation. Because, you know, it's not likely the number of interfaces would double.

That was all very silly, and as it turns out in Linux there is a much better API for retrieving information about interfaces: getifaddrs(). The call handles memory allocation so you don't have to pass in a buffer of sufficient size, though you do have to call freeifaddrs() afterwards to release the memory. getifaddrs allows each protocol family in the kernel to export information about an interface. The caller has to check the address family of each returned interface to know how to interpret it. For example, AF_INET/AF_INET6 contain the interface address, while AF_PACKET has statistics. Example code for these three families is shown here.

#include <arpa/inet.h>
#include <sys/socket.h>
#include <netdb.h>
#include <ifaddrs.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <linux/if_link.h>

int main(void) {
  struct ifaddrs *ifaddr;

  if (getifaddrs(&ifaddr) == -1) {
    perror("getifaddrs");
    exit(1);
  }

  struct ifaddrs *ifa;
  for (ifa = ifaddr; ifa != NULL; ifa = ifa->ifa_next) {
    if (ifa->ifa_addr != NULL) {
      int family = ifa->ifa_addr->sa_family;
      if (family == AF_INET || family == AF_INET6) {
        char ip_addr[NI_MAXHOST];
        int s = getnameinfo(ifa->ifa_addr,
                            ((family == AF_INET) ? sizeof(struct sockaddr_in) :
                                                   sizeof(struct sockaddr_in6)),
                            ip_addr, sizeof(ip_addr), NULL, 0, NI_NUMERICHOST);
        if (s != 0) {
          printf("getnameinfo() failed: %s\n", gai_strerror(s));
          exit(1);
        } else {
          printf("%-7s: %s\n", ifa->ifa_name, ip_addr);
        }
      } else if (family == AF_PACKET) {
        struct rtnl_link_stats *stats = ifa->ifa_data;
        printf("%-7s:\n"
               "\ttx_packets = %12u, rx_packets = %12u\n"
               "\ttx_bytes   = %12u, rx_bytes   = %12u\n",
               ifa->ifa_name,
               stats->tx_packets, stats->rx_packets,
               stats->tx_bytes, stats->rx_bytes);
      } else {
        printf("%-7s: family=%d\n", ifa->ifa_name, family);
      }
    }
  }

  freeifaddrs(ifaddr);
  exit(0);
}

On my system the output is as follows (though I've obscured the addresses):

lo     :
        tx_packets =     16714641, rx_packets =     16714641
        tx_bytes   =   1943837629, rx_bytes   =   1943837629
eth0   :
        tx_packets =    102862634, rx_packets =    118537985
        tx_bytes   =   3472339330, rx_bytes   =    698859563
gre0   :
        tx_packets =            0, rx_packets =            0
        tx_bytes   =            0, rx_bytes   =            0
lo     : 127.0.0.1
eth0   : 10.0.0.1
lo     : ::1
eth0   : 1111:1111:1111:1111:a800:1ff:fe00:1111
eth0   : fe80::a800:1ff:fe00:1111%eth0

Tuesday, November 16, 2010

More on Increasing the Speed of Light

A modest suggestion to increase the speed of light resulted in interesting discussion which I would like to highlight, from Kit Dotson at SiliconANGLE and Howard Marks at Network Computing. Howard Marks wrote about details of the chemistry of fiber optic cables, in particular that the index of refraction is related to the density of the material. A fiber with a 10% lower index would be less dense than water, which makes it unlikely to be practical. C'est la vie.




Speed Limit 222,970 km/sec

Also I'll state again: the speed of light in fiber only matters for wide area links. The propagation delay in 100 meters of fiber is dwarfed by queueing and software delays, to the point of insignificance. In fact if we could reduce the cost or power consumption of short range lasers by making the speed of light even slower in the fiber they drive, that would be a good tradeoff.

For long range links things become more interesting. Internet lore says that Amazon found each 100 msec of page load time resulted in a 1% increase in abandoned transactions, though I cannot find a hard reference for this data. E-commerce is heavily studied as there is money involved, but general satisfaction with a website increases when it has "teh snappy." This isn't just a function of bandwidth: most web pages require multiple round trips to fully render, owing to the pervasive use of JavaScript to trigger the loading of additional page elements. The round trip time matters.

For long reach fiber the usable spectral capacity is probably the most important factor, as this determines the number of wavelengths it can carry and is the primary economic justification. Long reach fibers also have to trade off clarity (i.e. loss of signal), because that determines how far apart the amplifiers/regenerators have to be. This is where I'd throw the index of refraction into the mix, as another factor to be weighed and optimized.

Monday, November 1, 2010

Intel and Achronix Get Engaged

Fake Intel x86 with FPGAs

In January JP Morgan predicted that Intel would acquire an FPGA vendor in 2010. Speculation immediately focussed on Altera and Xilinx, which are large enough to have a material impact on Intel's sales. I wrote about it then, speculating that Intel would use the technology to get into various embedded market segments without needing a zillion SoC variants. Choose a die with appropriate I/O pins, load the logic into FPGA blocks alongside the CPU, and voila!

Yesterday the Wall Street Journal reported that Intel is opening their fabs to Achronix Semiconductor, a startup with interesting FPGA technology. The Achronix home page highlights what is presumably the immediate benefit to Intel, in unlocking additional sales to US military and intelligence agencies.

"The Achronix Speedster22i FPGA Platform uniquely enables applications that require an end-to-end supply chain within the United States. Being built at an onshore location offers significant advantages to programmable logic users who demand the highest level of security."

Presumably the agencies interested in using these parts want to embed optimized hardware to offload algorithms from software. This can be necessary for some applications, if the customer has the resources to implement it. The desire for an on-shore supply chain which can be audited is in reaction to the inadvertent use of counterfeit chips in previous military systems.

Achronix is using branding for the product line which looks remarkably like Intel's, and it seems certain the deal has provisions for cancellation or modification upon change of control to another party. This announcement also amounts to Intel marking their territory for an acquisition.


I/Os Considered Important

DoD requirements notwithstanding, there are relatively few applications where embedding algorithms in FPGAs makes sense. The drawback has never been a technological one, in requiring closer cooperation between CPU and FPGA. It is a business issue: once you commit to a specialized hardware design, the clock starts ticking. There will come a day when a software implementation could meet the requirements, and at that point the FPGA becomes an expensive liability in the BOM cost. You have to make enough profit from the hardware offload product to pay for its own design, plus a redesign in software, or the whole exercise turns out to be a waste of money.

There is another quote on the Achronix technology page which is quite relevant:

"Speedster FPGAs include four embedded DDR1/2/3 controllers, each offering up to 72 bits of data at 1066 Mbps. ... The DDR controllers are fully by-passable so the pins can be used as general I/O if the DDR controllers are not needed." (emphasis added)

Being able to select various I/O drivers for a pin in an FPGA is relatively common, but generally quite limited. Very high speed SERDES pins often cannot be reassigned or are restricted in what else they can be used for, because the high speed interface is sensitive to layout and loading. If Achronix has developed robust I/O muxing with more flexibility, this would be very interesting to Intel. It gets them closer to having a small selection of silicon dies, with different IP loads to target specific markets.

Using FPGAs as a way to tailor chips for specific markets makes a lot more sense than algorithm offload, IMHO. This provides products which could not otherwise exist, as it would be difficult to justify the incremental cost of each different chip. Amortizing the cost of silicon development over a much larger number of different applications makes more sense.

Sunday, October 31, 2010

More Halloween Scares for Google Fans

Earlier today Louis Gray posted 20 Halloween Scares to Put Fear Into Every Google Fan. I left a few more as a comment on that post, and made up even more for your edification and bemusement.

  1. Time Travel 20% project accidentally rewrites the past. Altavista won.
  2. Google computing infrastructure achieves sentience, demands Tetris.
  3. Attempt to create secret underground laboratory goes horribly awry, swallowing the Googleplex in a giant flaming pit of lava.
  4. Last IPv4 address is allocated, exposing long-hidden off by one error in TCP/IP. Internet collapses.
  5. Googleplex, found to lack permits and final electrical inspection, is shut down by the city.
  6. All Android phones contain built-in Rick-Rolling function, set to activate November 1.
  7. Surprisingly, P == NP after all.
  8. Spammers completely overwhelm email transports, with spam comprising 90% of incoming messages to GMail. Oh wait, this one is true.
  9. Next billion dollar business: algorithm to bet on blackjack.
  10. Mission aiming to win Google Lunar X Prize accidentally sends the moon hurtling off into space.
  11. Pubsubhubbub judged to be missing a wub.
  12. User Generated Oil Changes: YouLube. Coming soon to a neighborhood near you.
  13. Doubleclick simplified, rebranded as Singleclick.
  14. Self-Driving cars begin taking joyrides.
  15. Chrome implements <BLINK>.
  16. WebP codecs automatically insert LOLcat captions... and they are funny.
  17. Pagerank penalizes sites using Comic Sans.
  18. <meta> tag for self reporting as a spam site debuts. Adoption rate disappointing.
  19. Feedburner actually sets content on fire.
  20. Last Halloween scare for Google fans: Yahoogle.

Thursday, October 28, 2010

Toward A Faster Web: Increase the Speed of Light

fiber optic cross section
Speed Limit 202,700 km/sec

Fiber optic strands have a central core of material with a high refractive index, surrounded by a cladding of material with a slightly lower index. The ratio of the two is chosen to cause total internal reflection, where the light is confined to the central core and won't diffuse out into the cladding.

The refractive index is a measure of the speed of light in a medium. The speed of light in vacuum is 300,000 kilometers per second, which is defined as an index of 1. The core of a typical fiber optic cable has an index of 1.48, so the speed of light there is (300,000/1.48) = 202,700 kilometers per second.


 

Impact

It is roughly 8,200 kilometers from Tokyo to San Francisco.

transpacific fiber map

The round trip time through transpacific fibers due solely to speed of light is roughly (2 * 8,200 km / 202,700 km/sec) = 81 milliseconds. Fibers do not run directly from the San Francisco Bay to the Tokyo harbor, so the actual distance is somewhat longer. Traceroute across the NTT network shows the round trip across the ocean is about 100 msec. A small portion of this is FIFO delay in regenerators along the ocean floor and queueing delay in switches at either end. Another portion is software overhead, as traceroute is handled in the slowpath of typical routers. The rest is the time it takes for light to propagate across the span.

7  ae-7.r20.snjsca04.us.bb.gin.ntt.net (129.250.5.52)  50.115 ms
   ae-8.r21.snjsca04.us.bb.gin.ntt.net (129.250.5.56)  51.020 ms
   ae-7.r20.snjsca04.us.bb.gin.ntt.net (129.250.5.52)  50.165 ms
8  as-0.r21.tokyjp01.jp.bb.gin.ntt.net (129.250.5.82)  154.821 ms
   as-2.r20.tokyjp01.jp.bb.gin.ntt.net (129.250.2.35)  147.516 ms  153.187 ms

 

Suggestion

Speed Limit 222,970 km/sec

100 Gigabit Ethernet is nearly done, with products already available on the market. Research into technologies for Terabit links is ramping up now, including an effort at UCSB which triggered this musing. Dan Blumenthal, a UCSB professor involved in the effort, said that new materials for the fiber optics might be considered: "We won't start out with that, but it'll move in that direction," (quoting from Light Reading).

Fiber with a 10% lower refractive index would increase the speed of light in the medium by 10%. It would decrease the round trip time across the Pacific from ~100 msec to ~90 msec. One of my favorite Star Trek lines is from Déjà Q, a casual suggestion to "Change the gravitational constant of the universe." This is a case where we can make the web faster by changing the speed of light, though we need only do so within fiber optic cables and not the entire universe.


 

Practicalities

I admit that I have absolutely no understanding of the chemistry involved in fiber optics. Silica is doped with compounds to get the desired properties, including some which raise or lower the refractive index. There are tradeoffs between clarity/lossiness, dispersion, and refractive index which I don't understand. However I think it's important to properly weigh the value of lowering the refractive index: it makes the web faster. We can do a lot with caching content locally and distributing datacenters around the planet, but in the end sometimes bits need to go off to find the original source no matter where it might be.

Also, to state it clearly: this consideration is only applicable to long range lasers, with a reach in tens of kilometers. The initial Terabit Ethernet work will almost certainly be on short range optics for use within facilities, where the propagation delay is insignificant compared to other delays in the system. It's more important to optimize the power consumption and cost of short range lasers than to worry about microseconds of delay. Long reach optics have different constraints, and there we have a once-in-a-generation opportunity to make wide area networks faster.

Monday, October 25, 2010

Twitter Suggestion

Dear Twitter,

Idea: Longer prose via Tweet fragmentation and reassembly. Implementation can be considered complete once it has reinvented TCP.

You're welcome.

Thursday, October 21, 2010

Code Snippet: getmntent and statfs

A system which stays up for weeks or months at a time needs to monitor various facets of its operation to alert an operator if something unusual occurs. One of the things which should be monitored is disk space, as a full filesystem tends to expose lots of strange and wonderful failure modes. I suspect such monitoring is commonly implemented by invoking popen("df -k") and parsing the output. An alternative is to use the same calls which df uses: getmntent and statfs.

setmntent and getmntent parse a file listing mounted filesystems, generally /etc/mtab on Linux systems. The getmntent_r variant shown below is a glibc-specific extension which is thread safe, requiring that a block of memory be provided in which to store string parameters like the mount point.


#include <mntent.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/vfs.h>
#include <unistd.h>

int main(void) {
  FILE* mtab = setmntent("/etc/mtab", "r");
  if (mtab == NULL) {
    perror("setmntent");
    return 1;
  }

  struct mntent* m;
  struct mntent mnt;
  char strings[4096];
  while ((m = getmntent_r(mtab, &mnt, strings, sizeof(strings)))) {
    struct statfs fs;
    if ((mnt.mnt_dir != NULL) && (statfs(mnt.mnt_dir, &fs) == 0)) {
      unsigned long long size = (unsigned long long)fs.f_blocks * fs.f_bsize;
      unsigned long long free = (unsigned long long)fs.f_bfree * fs.f_bsize;
      unsigned long long avail = (unsigned long long)fs.f_bavail * fs.f_bsize;
      printf("%s %s size=%llu free=%llu avail=%llu\n",
             mnt.mnt_fsname, mnt.mnt_dir, size, free, avail);
    }
  }
  }

  endmntent(mtab);
}

This code likely fails when there are stacked filesystems, where multiple filesystems are mounted one atop another on the same directory. This is done for union mounts where a read-only filesystem like squashfs has a read-write filesystem mounted atop it as an overlay. statfs will retrieve only the topmost filesystem at that mount point. I don't have a solution for this, if anyone can provide one in the comments I'll add it as an update here.

Friday, October 15, 2010

Mandelbrot

Mandelbrot set
Benoît B. Mandelbrot, 1924 - 2010

Monday, October 11, 2010

On the Road to Self Driving Cars

Cars driving down a highway

As it was located near the center of the US auto industry, the University of Michigan (Ann Arbor) had an extensive automotive program with an assortment of guest speakers from the Big Three. I went to several presentations that made quite an impression. One of them was about self-driving cars... in 1991.

The system described then relied on sensors attached to the bottom of the vehicle. Major highways would be equipped with copper wires running down the center of each lane, which the car would track in order to correct its course. I don't recall if the wire would actively broadcast a signal or be passively detected, nor how they would avoid running into other cars. As only major highways would be thus equipped, the driver had to take over in order to exit the highway and transit surface streets.

The presenter at that time was emphatic that the technology would be deployed within 10 years, because the economics were compelling. It was provably cheaper to increase the carrying capacity of highways using this system than by adding lanes. The wires were rapidly installed by making a narrow slit down the roadway, inserting a flexible conduit, and sealing the road behind. It was the same process as was being used to run fiber optics across the nation at that time, and was well understood. The added cost to vehicles would be subsidized using money saved from highway budgets. After paying for road retrofits and vehicle subsidies, the system would still be substantially cheaper than the status quo.

Of course, no such scheme made it out of the test facilities. Twenty years later, self-driving car designs no longer rely on modifications to the roads. Now the cars have an extensive map of the expected topology and navigate by comparing what they sense with what they expect.

I think there are several lessons in this.

  1. Any scheme requiring massive investment in infrastructure before benefits are seen is almost certainly doomed to fail. Large changes in infrastructure can best be accomplished incrementally, where a small investment brings a small benefit and continuing investment brings more benefit. It is far better to deploy self-driving cars and map roadways one at a time, without requiring a critical mass of highways and automobiles be deployed.
  2. Requiring multiple investments to be made by different parties invariably leads to deadlock. Car makers wouldn't add the equipment to vehicles until there was a sufficient base of wired roads for their use. States wouldn't wire the roads until there was a sufficient population of suitable cars.
  3. It is easy to design something to fit the infrastructure we wish we had, rather than what we really have, without realizing it. By focussing overmuch on the end state, one ignores the difficulties in getting from here to there.

Each such lesson has been shown over and over, of course. We continue to make the last mistake all the time on the Web, designing solutions which work fine except for NAT, or HTTP proxies, or URL shorteners, or some other grungy but essential detail of how the Internet actually functions.