Friday, December 21, 2007

Having cornered the search market, taken over YouTube and struck fear into the hearts of publishers, Google is licking its lips and looking for new targets. Next stop: Wikipedia.
The potential Wikipedia killer that Google is developing is called Knol, and it has of course sparked much discussion in the blogosphere, with bloggers trying to outdo each other at coming up with the wittiest pun. (My favourite: "Google Sets its Guns on the Grassy Knol".) But puns aside, should Wikipedia be worried? Should we?
Yes and no. You have to remember that Google does not always succeed. Remember Google Video? That didn't take off. Google News? I don't know any news junkie who uses it. And I don't see people rushing to pick up Google Talk.
Knol operates on a completely different model from Wikipedia. Instead of a single page per topic that anyone can edit if they think they know better than the original author, in Knol the author controls the content, and other users can only suggest changes. Also, there can be more than one "knol" on a given topic. Google assures us that more popular (and hence, presumably, more accurate) knols will float to the top, but will they really?
Of course, you have a similar problem on Wikipedia, but there it's mitigated because you can correct wrong information: with knols, it seems you can only decide whether you like the information or not.
Then there is the problem of orphaned topics: What happens if a "good" knol is abandoned by its author? Can we reuse it to start a new knol? Or is that information frozen in time forever, never to be reused or updated?
There are tons of problems that could bring Knol to its knees. Of course, there are also reasons why it could succeed: there's less risk of the kind of vandalism Wikipedia has seen, and there are more incentives for people to contribute (Google has agreed to share ad revenue with knol owners who let it place ads on their pages).
Whether or not Knol succeeds, I believe the two models are different enough that they could well exist side by side. After all, Encarta and the Encyclopaedia Britannica both sold copies, didn't they?
Thursday, December 13, 2007
Pay Per Use Bioinformatics Software
Equinox, a London-based company started by Imperial College, has begun offering access to some of its bioinformatics tools on a pay-per-use basis. Basically, they host the software on their servers, and you pay them each time you run a query. Their flagship product has to do with protein structure prediction:
The first product available will be Equinox's leading Phyre(TM) homology modelling and fold-recognition software. User research has shown that proteomics is an ideal target market with positive feedback from research, biotech and pharma audiences.

Yadda, yadda, the press release goes on to state how great this all is. Of course my knee-jerk reaction was: "Pay? Nevah!" But then I realised that you'd have to pay anyway. Previously, this product had only been available via a software licence. Now, that's fine for big companies and universities, who use it frequently. I suspect most of them will stick with the licence as well.
But for smaller companies, or an isolated researcher who may only need to use the software once in his career, the Pay Per Use model may actually have an advantage. Of course, it would be preferable if they'd offer the tool for free, but what kind of business model is that?
Unfortunately, I couldn't find out what the actual price per use was, or how it compared to the cost of a licence. How many uses before a licence would be cheaper? They need to get the balance right, otherwise people will opt for licences every time, and the PPU idea might well die a premature death, as far as bioinformatics is concerned.
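Just to make the trade-off concrete, here's a back-of-the-envelope sketch in Python. Both prices are entirely made up, since Equinox hasn't published either figure:

```python
# Back-of-the-envelope comparison of pay-per-use vs. a licence.
# Both figures below are hypothetical; the real prices aren't public.
LICENCE_COST = 5000.0  # hypothetical annual licence fee
PRICE_PER_USE = 50.0   # hypothetical fee per query

# The licence becomes the cheaper option past this many queries per year.
break_even = LICENCE_COST / PRICE_PER_USE
print(f"Break-even at {break_even:.0f} queries per year")

def cheaper_option(n_queries: int) -> str:
    """Return the cheaper plan for an expected number of queries."""
    return "licence" if n_queries * PRICE_PER_USE > LICENCE_COST else "pay per use"

print(cheaper_option(10))   # occasional user -> "pay per use"
print(cheaper_option(500))  # heavy user -> "licence"
```

With these made-up numbers, the licence wins after 100 queries a year; wherever the real break-even point sits is exactly the balance Equinox needs to get right.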
Meanwhile, if you want to play around with a bare-bones version of Phyre, you can still go here. Just put on your academic hat first.
Thursday, December 6, 2007
Open Source Genetics?
Here's an interesting premise: Wired Blog has an article on "The Open Organism: Genetic Engineering in the Open Source Era". What would happen if you applied the principles of Open Source Software to genetic engineering?
Modularity in computer science has helped unleash crazy amounts of creativity, and new business models derived from user-generated content. Take Google Maps' open API. Or even HTML itself, which allowed users to create graphically sophisticated pages with no real programming knowledge. By putting the hard stuff into a black box and just letting you access what you need to know, user/producers have been able to focus on creating interesting content quickly and easily. What if, in the next decade, the same group of elite users/coders could do the same thing with corn?

They might be a little too optimistic, in my opinion. The allure of Open Source (or indeed any kind of hacking) is that anyone can do it. You don't need much initial investment, beyond the computer which you likely already have. Install Linux, get a GNU compiler of your choice, fire up the text editor and you're in business.
Genetic engineering is not like that. Or maybe it's exactly like that, but at a far grander scale. Instead of a computer*, you need a lab: You need pipettes, petri dishes, microscopes, solutions, PCR machines, microarrays, maybe even a gene sequencer. These things don't come cheap.
But let's assume for a moment that you have all of that already. Then you'll still need the things every programmer takes for granted, the libraries or APIs containing shortcuts to all the common tasks that you don't want to design from the ground up. As a genetic engineer, you'll need promoters, restriction enzymes and specialised vectors, each different depending on what you started with.
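To stretch the analogy: a few of those "libraries" do already exist, at least in software form. Here's a toy sketch using Biopython's Restriction module (assuming you have Biopython installed; the sequence is made up):

```python
# Toy example: letting a library do the fiddly work of finding
# restriction sites, much as code libraries hand you solved sub-problems.
# Requires Biopython; the fragment below is a made-up toy sequence.
from Bio.Seq import Seq
from Bio.Restriction import EcoRI, BamHI

# A short fragment containing one EcoRI site (GAATTC) and one BamHI site (GGATCC).
fragment = Seq("ATGCGAATTCGGATCCTTAA")

# search() returns the cut positions for each enzyme on this fragment.
print("EcoRI cuts at:", EcoRI.search(fragment))
print("BamHI cuts at:", BamHI.search(fragment))
```

Of course, that only gets you as far as planning the experiment on a screen; actually doing it still requires the lab.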
It is always possible that in the future, genetics labs and components will become as ubiquitous as computers and code libraries. I'm sure that when we were putting punch-cards into basement-sized supercomputers, open source software development seemed as far away as open source genetic engineering seems today. But the transition still took thirty years. I don't think we have to worry about it just yet.
*Or rather, in addition to.