Open Source convention wrap-up (2008)

By Andy Oram
July 25, 2008

The computer industry is certainly not recession-proof, but the Open Source convention that's just wrapping up had more attendees than last year (we were up to about 2000), and discussions about starting businesses based on open source seemed to take place everywhere. And I don't mean just free software: open source concepts apply to hardware, creative content, and other materials.

This excitement reverses the classic economic justification for open source and free software. Certainly, one can find businesses built on it over the decades, but the main advantage goes to the users. They reap economic benefits by drawing on a large community for support and staffing, by being able to fix bugs and add extensions at will, by trying out the software before committing to it, and so on. (For many non-profit users, of course, the absence of license fees is important, but many users actually pay license fees for free software!)

The old justification still reigns. It was mentioned by many keynoters at OSCon. But the sessions about making money by starting an open source business were well-attended, and I heard a lot about it in the halls.

Part of the reason can be found in the type of people who attend OSCon. The cost of the conference--of which travel is the biggest part--screens out many of the grassroots hackers coding the next great rendering engine (or whatever). But even allowing for that, I sense more excitement about making money with open source this year.

What were the big technical topics at the conference? The scope has spread out a lot over the years; almost anything fits. Along with in-depth sessions on cool stuff you can do with Ruby, Java, Perl, etc., there were sessions on virtualization and the next stage of virtualization: cloud computing.

I noticed that hardly anybody discussed multicores. This is interesting because everywhere you turn in the computer field--the trade journals, the academic journals, the professional journals--everybody says we need to exploit multicores and learn parallel programming.

Well, parallel programming has been embodied in compilers and libraries for at least twenty-five years, and most programmers have never learned to do it. It looks like IPv6: a technology that one can prove beyond a shadow of a doubt to be absolutely critical, but that has been blithely ignored by all the people in the field responsible for making it happen.
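
To put a face on that claim, here's a minimal sketch of the kind of off-the-shelf, library-level parallelism that's been sitting unused, written with Python's multiprocessing module (new to the standard library in Python 2.6); the task function is a hypothetical stand-in for any CPU-bound job:

    from multiprocessing import Pool

    def render_tile(tile_id):
        # Hypothetical stand-in for a CPU-bound task -- say, one tile
        # of that next great rendering engine.
        return sum(i * i for i in range((tile_id + 1) * 100000))

    if __name__ == '__main__':
        pool = Pool()                     # defaults to one worker per core
        results = pool.map(render_tile, range(8))  # tiles run in parallel
        pool.close()
        pool.join()
        print('rendered %d tiles' % len(results))

The point isn't this particular module: facilities like it, from thread libraries to MPI and OpenMP, have been around for many years and still see little everyday use.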

Throughout the history of the computer field, one finds examples where heavyweight solutions championed by researchers were thrown aside in favor of cheap-and-dirty implementations that people in the field thought up. (Some people would cite the Internet and the Web as examples.) The discussions of virtualization and cloud computing made me think: maybe this is the practitioner's answer to parallel programming. Use a single core, or even part of a core, for each application, and let the infrastructure figure out the rest.
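
To make that idea concrete, here's a sketch of a Xen guest configuration (all names and paths are hypothetical) that gives an application exactly one virtual CPU and leaves the mapping onto physical cores to the hypervisor:

    # One single-vCPU virtual machine per application; the hypervisor
    # decides which physical core each guest runs on.
    name   = "app-frontend"
    memory = 256                      # MB of RAM for this guest
    vcpus  = 1                        # the application sees exactly one core
    kernel = "/boot/vmlinuz-2.6-xen"
    disk   = ['file:/var/xen/app-frontend.img,xvda,w']

Run eight such guests on an eight-core box and you've "exploited multicores" without writing a single line of parallel code.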

The future looks small. One session title promised "An Open Source Startup in Three Hours" (the title was drawn from an old Bill Gates comment about starting Microsoft). And the techniques seemed reasonable: you don't even need a server; just start with something like Google App Engine.
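
For a sense of how little scaffolding that takes, here's a minimal sketch in the style of App Engine's Python SDK (the application id and greeting are hypothetical); two short files are essentially a deployable web app:

    # app.yaml -- tells App Engine to route every request to main.py
    application: three-hour-startup
    version: 1
    runtime: python
    api_version: 1

    handlers:
    - url: /.*
      script: main.py

    # main.py -- the entire "product" for now
    from google.appengine.ext import webapp
    from google.appengine.ext.webapp.util import run_wsgi_app

    class MainPage(webapp.RequestHandler):
        def get(self):
            self.response.out.write('Hello from a three-hour startup')

    def main():
        run_wsgi_app(webapp.WSGIApplication([('/', MainPage)]))

    if __name__ == '__main__':
        main()

Upload it with the SDK's appcfg.py tool and Google's infrastructure hosts, serves, and scales it; there is no server to buy or administer.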

I also talked to someone from a start-up that encourages users to submit code for others to use as soon as they write it: no need for regression or integration testing. I can't really endorse that practice, but it seems to be the way things are going.

As it gets ridiculously easy to start or fork a project, more and more projects will compete for users. It becomes increasingly important for a project to give incoming users a good experience and to educate them on whatever they need to know. So I'll sign off with my own contribution to the growth of the open source movement: better educational tools for projects that want to grow fast.

