Joe Ganley

I make software and sometimes other things.

 

Due to Blogger's termination of support for FTP, this blog is no longer active.

It is possible that some links from here, particularly those within the site, are now broken. If you encounter one of those, your best bet is to go to the new front page and hunt for it from there.

Most, but not all, of the blog's posts are on this page; the archives are here.

 

Especially in the software industry, a topic that gets revisited again and again is that of working from home vs. working in an office. There are those who seem to believe that working from home is a thoroughly bad idea, and at the other end of the spectrum there are companies that are entirely locationless, with all of their employees working from their own homes.

I worked for 12 years, for four different companies, from home. All four companies were headquartered in Silicon Valley, and I live in the suburbs of Washington, D.C. I worked from my home, except that I visited headquarters for a week at a time, at intervals ranging (depending on the company and on where we were in the release cycle) from once per month to once per quarter. At the first company, I had worked in the office in California for 18 months before I moved to D.C. At the second, a startup, I moved to California for the first 6 months, then returned to D.C. The remaining two companies I entered cold, working from home from the beginning.

Now, for almost two years, I've switched to the other extreme. I work for a local company, and due to the nature of the work it must be done 100% in the office, and typically only 40 hours per week.

The first thing that people always seem to talk about is productivity. From my own experience, I can report that I was at least as productive at home as I am in the office. However, that claim can be a little misleading; I was more personally productive, but I made fewer contributions to other developers' work, and now I do a lot more of that, and more generally of facilitating the work of others. All in all, I would say that the net sum is about even; however, as my career advances, my job becomes more about collaborating with, overseeing, and facilitating others and less about my own productivity, so it becomes more important to have the kind of face-to-face, high-bandwidth contact that you get in an office. Indeed, this is a big part of the reason why I finally stopped working at home in favor of working on-site. Proponents of telecommuting are quick to point out all of the ways that modern technology facilitates communication: not just phone, but instant messaging, wikis, chat clients, screen sharing and collaboration software, video conferencing, and on and on. These things definitely help - I was certainly able to be more effective in the later days of high-speed broadband than I was at the beginning with nothing but a phone and 128Kbit/sec ISDN - but they're still a pale shadow of being together in person in front of a whiteboard.

In some ways, a good telecommuter turns this handicap into a win. You do more of your design up front, and more carefully and thoroughly, because that reduces the need for high-bandwidth communication throughout the development process. You do this design work during those times when you are together face to face. As a side effect, careful design at the beginning leads to better software. On the other hand, it is impossible to nail down all of the design up front; inevitably you figure things out during development that require changes, sometimes large ones, in the architecture of the system. Here, being geographically distant becomes a real hindrance; what happens is generally that the lead architect comes up with a design, and then the other members of the team critique it, and in my experience the result is often inferior to what would have come of true collaboration through the entire design.

As for your work schedule, being at home is obviously very flexible. This is both good and bad. It is good because you can work when you are most productive. I am a lark, and I am at my sharpest first thing in the morning. At home, I could go straight to work when I woke up; now, I spend 90 minutes of what would be my most effective time of day showering, getting dressed, and commuting. Similarly, you can put that flexibility to work on a larger scale: If you are having one of those days where you just can't seem to focus, just step away; clean the house, or take a 3-hour lunch, or get a white chocolate mocha. And on those days when you are really in the zone, you can work 12 or 14 hours while still having a few hours with your family. I also found it easier to stay focused on work in general, because my hours were all mine; not only was there none of the kind of non-productive water-cooler BS'ing that you do in the office, but I also spent very little time surfing the web, reading XKCD, and the like, because it was easy to see that the time I spent doing so was mine. You can also move your schedule around to enable more time with your family; I typically got up very early and did an hour or two of work before anyone else woke up, and did another hour or two of work after the kids went to bed, thus freeing those hours during the day for family time.

The downside is that it's hard to maintain any kind of work-home separation. You feel like you are always at work, or at least like when you're not at work, you should be. Exacerbating this was the fact that the bulk of my teams were in a time zone three hours earlier, so I often fielded calls and IMs through the dinner and kids' bedtime hours. Similarly, despite my constantly telling them not to worry about it, my coworkers were loath to call me when they knew it was my evening, and so whatever issue they had got delayed until mid-day (i.e. their morning) the next day.

Some telecommuters report having trouble staying focused on work, with the constant potential distractions around: housework to be done, kids to play with, etc. I never had much of a problem with this; I'm a fairly disciplined person, and I love what I do, so except for those occasional times I mentioned where focus is elusive, the call of the laundry was rarely louder than that of the work. And anyway, a lot of household tasks, like laundry, don't require much attention. But if you're the sort who has trouble focusing on work at home, then working at home simply might not be for you. It's also notable that my wife was home during my core work hours, and she ran interference with the kids and such.

Another factor to be considered is the social interaction of being around other people. If you are the sort who needs this on a regular basis, then working at home is probably not for you. However, many software people, even in the office, spend most of their day hunkered down in their cube, talking to others only when work requires it. For such people, being at home may not make much of a difference in their social schedule.

Another important factor is that employees who want to work from home are happy doing so. The effect of this is often underestimated; a happy employee is a productive employee. For many of us, giving us this kind of flexibility and trusting us to do the work in whatever way we find most effective has an incredible ROI in terms of employee productivity.

Most of this applies to the line engineer, a leaf node in the org chart. Things get much more difficult when you need to manage others, or to lead efforts with multiple developers. Then, it becomes not just about you; those other developers need to interact with you, both for technical guidance and for the more touchy-feely managerial stuff. It can be done, especially if you're really proactive about communicating with them and giving feedback, but it's hard and arguably inherently less effective than being together in person. Again, this is why I stopped working at home; I reached a point where I felt that the level of leadership I needed to do could not be effectively accomplished remotely. At least, not by me.

Finally, note that for really great developers, you may not have a choice - they either work from home or they don't work for you. Folklore says that an outstanding developer is many times more productive than an average one, so even if such a person isn't quite as effective from home as they would be from an office, hiring them can often still be a net win. Certainly the companies I worked for were happy with my performance working from home.

The bottom line: A leaf-node developer, if they have the right discipline and temperament, can be every bit as productive as a telecommuter as they would be in the office (if not more so). As their responsibilities of leadership and mentorship grow, this will become increasingly difficult, to the point where they may not be able to do their job effectively remotely. Even for such people, though, the need to be in the office isn't constant, and while I haven't ever been in a situation where this was possible, I believe that even a manager could effectively spend a day or two a week working at home.

Update: I found this study interesting, and it squares well with all of my experience.

 

This is the story of the biggest mistake I've made in my professional career. (I say I; there were other people involved, but it was as much my fault as anyone's.) It was early in my career, and I was tasked with building a new algorithmic optimization system from the ground up. This meant we needed a database: a database of geometric objects and connectivity, not the kind people normally think of when they hear the word. This is exactly what OpenAccess is; I helped develop it some years later, and had it existed back then, it would've let me avoid this whole fiasco.

When I came on board, we needed a prototype done, like, yesterday. So I slapped together the quickest, dirtiest database that could possibly work, with the intention that eventually it would be replaced by something more production-worthy. That may have been a mistake too, but it's not the big one that the title refers to.

Fast-forward about a year. The prototype is done, and is producing fantastic results. We've written a real database, and it is time to move the system onto it. Here's the big mistake: I did that as open-heart surgery, completely ripping the system apart and replacing the old database calls with new ones. The data model had changed substantially, so this was a major effort, not just a syntactic change, and calls to the database were woven through the entire codebase; to make another surgical analogy, imagine trying to replace all of a person's nerves. There was a period of a couple of months in which the system did not work at all. Eventually we got it running, and then there were several weeks of debugging - just plain old bug fixes, many of them fixing bugs that had probably already been fixed in the old version but whose fixes were lost along with the old database calls.

Meanwhile, we weren't able to just freeze the old system in carbonite; we needed to continue improving it, competing in benchmarks, and the like. So it continued to evolve forward from the code base that had been used to begin the conversion. Had the new version ever worked properly, we would have had to make all of the same improvements to it when we were done that we had made to the original system during those months.

Because here is the worst part: The system running on top of this database was mostly Monte Carlo optimization algorithms. Such machinery is highly dependent, in unpredictable and hard-to-debug ways, on such harmless-seeming transformations as changing the units in which a size is expressed, or changing the order in which a graph vertex's edges are enumerated. There were many such differences between the old and new databases, and the new system never did produce results as good as the old one.
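To illustrate the point (a toy example, not the actual system): the sketch below runs the same seeded Monte Carlo random walk twice, changing only the order in which the candidate moves are enumerated. The RNG draws are identical in both runs, yet the trajectories diverge immediately.

```python
import random

def monte_carlo_walk(moves, seed=7, iters=60, target=17):
    """Toy Monte Carlo optimizer: a biased random walk toward `target`.

    Each iteration draws an index and an acceptance threshold, so the
    RNG stream is identical for any ordering of `moves`; any difference
    between runs comes purely from the enumeration order of the moves.
    """
    rng = random.Random(seed)
    x = 0
    trajectory = [x]
    for _ in range(iters):
        step = moves[rng.randrange(len(moves))]  # same index either way
        u = rng.random()                         # same draw either way
        # Accept improvements, and occasionally accept worse moves.
        if abs(x + step - target) < abs(x - target) or u < 0.1:
            x += step
        trajectory.append(x)
    return trajectory

# Identical seed, identical move set -- only the enumeration order differs,
# yet the two runs follow different trajectories.
run_a = monte_carlo_walk([-2, -1, 1, 2])
run_b = monte_carlo_walk([2, 1, -1, -2])
```

The same effect shows up with "harmless" changes like reordering a vertex's edge list: the optimizer visits candidates in a different order, the accept/reject decisions shift, and the final result changes in ways that are very hard to attribute to any single code change.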

After it was all over, it was clear to me that this way of making this conversion was totally wrong-headed. The right way would have been to first write a bunch of regression tests. Then write a facade over the new database that had the old database's API. Move the system onto it (which is nothing more than recompiling against the new library). Then slowly migrate the code, and the facade API, to look more like the new database's API. Run the regression tests frequently, so that if you make a change that breaks things, you know what change is to blame. Eventually the facade API looks just like the new database's API, and at this point the facade is vestigial and can be removed.
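The facade step can be sketched as follows; all of the names here (NewDB, OldAPIFacade, add_shape, and so on) are hypothetical stand-ins, not the actual APIs involved.

```python
class NewDB:
    """Stand-in for the new database, keyed by opaque object ids."""

    def __init__(self):
        self._objects = {}
        self._next_id = 0

    def create(self, props):
        oid = self._next_id
        self._next_id += 1
        self._objects[oid] = dict(props)
        return oid

    def get(self, oid):
        return self._objects[oid]


class OldAPIFacade:
    """Presents the old database's API (lookup by name) on top of NewDB,
    so the legacy system keeps working while call sites migrate."""

    def __init__(self, db):
        self._db = db
        self._ids_by_name = {}

    def add_shape(self, name, points):
        # Old callers pass a name; translate to the new id-based model.
        self._ids_by_name[name] = self._db.create(
            {"name": name, "points": list(points)})

    def get_points(self, name):
        return self._db.get(self._ids_by_name[name])["points"]


# Legacy code runs unchanged against the facade, with regression tests
# run at every step. As call sites migrate to NewDB's API directly,
# facade methods are deleted until the facade itself is vestigial.
db = NewDB()
legacy = OldAPIFacade(db)
legacy.add_shape("net1", [(0, 0), (5, 0), (5, 3)])
```

The key property is that the system compiles and passes its regression tests after every small step, so a bad change is caught while it is still one change old.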

This approach has two key features: There is just one version of the system, and it is always working throughout the process. It probably takes substantially more time than the open-heart surgery approach would if everything went smoothly, but how often does that happen?

So imagine how I felt listening to this week's Stack Overflow podcast, in which Jeff talks about facing the same problem with Stack Overflow. Evidently the schemas of his core tables turn out to be really wrong, and force a lot of baroque and complicated over-joining and such in the site's code. Joel suggested something almost identical to what I decided so long ago I should have done: Create a view that looks like what he wants the new table to look like but has the original tables underneath. Then, migrate the code a piece at a time from the underlying tables to the new view. Again, there remains just one, working system the whole time. To my horror, Jeff disagreed quite vehemently and said he planned to go the open-heart surgery route. He went on a bit about the romance and adventure of that sort of swashbuckling. Surprisingly, Joel acquiesced a little and said that might be the right approach for Jeff. I seriously doubt it, and I was disappointed that Joel didn't let Jeff have it with both barrels; after all, this is just a smaller-scale version of the exact same mistake Joel wrote about in Things You Should Never Do, Part I, for pretty much the same reasons. (By the way, that hadn't been written yet when I had my little misadventure.)
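Joel's view-based suggestion can be sketched like this (hypothetical tables, using SQLite here; the real Stack Overflow schema is of course different). The view presents the schema you wish you had, while the original tables stay put underneath; queries migrate to the view one at a time.

```python
import sqlite3

# Hypothetical stand-in for the "wrong" original schema: scores live in
# a separate table, forcing a join at every call site.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE post_scores (post_id INTEGER, score INTEGER);

    INSERT INTO posts VALUES (1, 'hello'), (2, 'world');
    INSERT INTO post_scores VALUES (1, 5), (2, 9);

    -- The view looks like the table we WISH we had. Callers migrate to
    -- it piecemeal; the system keeps working the whole time. Once every
    -- caller uses the view, the underlying tables can be restructured.
    CREATE VIEW posts_v2 AS
        SELECT p.id, p.title, s.score
        FROM posts p
        JOIN post_scores s ON s.post_id = p.id;
""")

rows = con.execute("SELECT title, score FROM posts_v2 ORDER BY id").fetchall()
```

Just as with the facade, there is only ever one version of the system, and it works throughout the migration.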

Just as Joel says quite unequivocally that you should never do a full rewrite of an application, I'll say just as unequivocally that you shouldn't perform this kind of massive surgery on a working application unless it is simply impossible to do it incrementally. Indeed, in the decade since then I've formed the habit of never letting my software stay broken for more than a few hours at a time.

 
Years ago, the first time I worked at Cadence, my username was "ganley". I left to go to a startup, where my username was "joe", and when Cadence acquired that startup I kept the username "joe". I quickly discovered that this was a mistake, since spammers send to name@domain for every common name. Only much later did I discover that "ganley" would have been better for another reason: I worked on an open source project, and my username still appears in some of the CVS tags in the source code. It would be much cooler if it was "ganley" instead of "joe" - after all, "joe" could be anyone, right?

 
A coworker and I were discussing the relative rarity of people like me, namely PhDs who are great software engineers and who are good at actually getting things done. He conjectured that there would be a strong correlation between those traits and how quickly one finished their PhD. Certainly that fits in my case; I did my PhD in 2.5 years. I'm curious as to how strong this correlation might be, and I'll be collecting data about it going forward.

 
An interesting essay about why developers leave. In particular, I found the analysis in the morale section very interesting. This is one place where I think Google really has it right: Giving your people time to work on projects of their choosing pays huge dividends in morale. (Via particletree.)

 
Maciej Ceglowski writes an excellent, half-facetious rebuttal to Paul Graham's Hackers and Painters essay. I didn't buy this analogy when I read it, and still don't, and Ceglowski does a great job of enumerating why it doesn't fly. I've always thought of programmers more like 'design and build' contractors. The best of them are both good architects and good builders, and like programmers who are good at both of these things, they are rare. Many more are either good architects and spotty builders (their buildings look nice and function well but are poorly constructed) or vice-versa (their buildings are well put-together but don't flow well or are unattractive), and of course some aren't good at either.

 
I just ran across an Eric Sink article that I'd somehow previously missed, called The Hazards of Hiring. There are many good points in this article; in particular, he does a good job on one of my favorite points. In his words: "The 'very best' people never stop learning. ... They know their own weaknesses, and they're not insecure in talking about them." Many people seem afraid to say "I don't know." I love to say that, because it means we've just identified a gap in my knowledge, and almost always, it means that we're just about to fill that gap. That is the single most fulfilling thing in life, work-wise anyway.

Another interesting point in that article is his skepticism toward people with advanced degrees. I see this a lot, and in fact I spend a lot of my time in interviews trying to convince people that despite my having a Ph.D., I am not some sort of ivory-tower computer science researcher. First and foremost, my love is for writing software. Sure, I love to sink my teeth into a really hard CS problem from time to time, but there is also fun to be found in all of the other facets of the software development process, even those that many consider mundane. I really enjoy positions that offer a lot of variety, from hard algorithmic problems to user interface design to library architecture to low-level infrastructure.

 
Somewhat in the same spirit as that last post, another great engineering story, this one from Damien Katz. This story illustrates perfectly why it makes me so crazy when job postings demand, "10 years experience with XYZ." Clearly if you can get a really great person who also has XYZ experience, that's optimal, but if you have to choose, you want a really great programmer (who doubtless can learn XYZ very quickly) over a mediocre programmer with lots of XYZ experience. (Link via Ned Batchelder.)

 
I just love this story of the Macintosh Graphing Calculator. It was part of a project that got cancelled, but its authors continued to sneak into Apple to work on it for free, and eventually got it shipped with MacOS. This is an extreme and wonderful illustration of the dedication a good engineer can have to an important project. I just wish it was available for Windows!

 
Via Ned Batchelder I learned about Darcs, which is a fully peer-to-peer source code control system. This sounds ideal for home hobby work, as it doesn't require a dedicated server machine. Also, it surprised the heck out of me that it's written in the [almost] purely functional language Haskell!

 
I agree pretty much 100% with these Hallmarks of a Great Developer. In particular, the first point ("plans before coding") resonated with me. This is a former weakness of mine, and the area where I've made the most improvement in the past couple of years, since joining my current project. The point is even better made by Michael Abrash in his Graphics Programming Black Book (chapter 10) and by Ned Batchelder in his diamond cutter analogy.

 
Red Hat's Alan Cox on writing better software. There is a lot of good stuff in there, but my favorite quote is, "If it does everything it's complicated, and if it's complicated, it's broken."

 
Some interesting observations from Brian Cantrill on The Economics of Software.
