The process of innovation has been profoundly altered by the emergence of two-way communication mediated through the Internet. Large groups of people can now connect and collaborate in the innovation process. Von Hippel and Jasanoff have argued that a large number of users of a given technology will come up with innovative ideas. Discussions of “citizen science”, “open innovation” and “co-production” show that this is having a sizable impact on science.
In this context, crowdsourcing, a term introduced by Jeff Howe in a 2006 Wired article, was primarily used from a business perspective. Howe compared crowdsourcing to outsourcing in the manufacturing and service industries, which essentially meant moving costly operations to countries with low labour costs. Crowdsourcing, by analogy, can best be described as a large-scale problem-solving model in which participants replace an automated process because they may deliver better results. Crowdsourcing in a science context, however, appears to have a different quality and connotation. Alan Irwin has described this phenomenon as “citizen science”.
The notion of the scientist as a lone researcher in the lab has already been displaced by intra-academic collaboration. Crowdsourcing takes this development even further: beyond the expert community, it enables the wider public to contribute to science projects.
The most prominent examples of citizen science projects include Galaxy Zoo and Foldit (see also the TED talk on protein folding).
The basic concept behind these initiatives, as described by Yochai Benkler in his book *The Wealth of Networks*, is to split a larger task into smaller, more manageable ones so that individuals or groups can contribute their results.
Nevertheless, critics have raised concerns about the method of crowdsourcing in science. David Weinberger, for instance, argues that ‘people are not doing the work of scientists’ but are simply ‘scientific instruments’ that gather information without much comprehension.
Despite such criticism, both projects have been quite successful, with results published in acclaimed academic journals (for examples see (1) here and (2) here). Foldit takes the form of a computer game in which participants fold proteins in different ways. The participants made a significant contribution by solving the structure of a retroviral protein within three weeks, a problem scientists had struggled with for years.
Galaxy Zoo, on the other hand, enables participation and discussion through forums. Additionally, Quench, an initiative from Galaxy Zoo, allows participants to analyse the results and even contribute to a paper.
Assertions such as Weinberger’s therefore simply ignore the fact that participants have made credible and knowledgeable contributions.
It has to be pointed out that the professionalisation of science is a relatively recent development, which manifested itself in universities competing for research grants to tackle research problems. This stood in stark contrast to the naturalist researchers who were primarily self-funded.
Science has now become, in part, a guarded profession, with elite institutions and private research labs restricting interaction with and access to their research. Crowdsourcing may constitute a crucial attempt to lower these barriers to entry, thereby fostering public engagement and participation in science.
But is crowdsourcing/citizen science an acceptable form of expertise or are trained scientists the only credible source for producing knowledge?
There is no definitive answer to this question yet, because crowdsourcing/citizen science in its current form is still evolving. In an influential article, Collins and Evans proposed categorising expertise into distinct types. In view of the discussion on crowdsourcing, their distinctions between ‘no expertise’, ‘interactional’ expertise and ‘contributory’ expertise transfer readily to the different levels of participation in crowdsourcing projects.
According to von Hayek, every member of the public holds some form of ‘local’ or unique information. Bringing this local information into a crowdsourcing project can therefore lead to more than just low-level contributions (e.g. cataloguing galaxies, as in Galaxy Zoo).
In a study, Page observed that groups with diverse backgrounds can outperform groups consisting solely of experts. Yet, as outlined by Wiggins and Crowston, there are no agreed standards for managing the relationship between formally accredited scientists and the informed ‘crowd’ that seeks deeper participation in science projects.
Crowdsourced projects are often tightly structured and controlled, like the examples mentioned earlier in this blog, but there are also cases in which the crowd controls the whole project (for further reading see Wicks et al.).
In conclusion, crowdsourcing should be embraced by the academic community because it enables communication between experts and the public, and is also a first step towards creating standards to manage this relationship. As Sheila Jasanoff puts it: “people should be engaged as active parts, as well as sources of knowledge and insight.”
We cannot deny that the informed and experienced expert is needed to put all these results into context. However, the public can bring a beginner’s view to a set of problems and may counteract expert bias. In a similar vein, Barbara Prainsack has acknowledged that a better understanding of citizen science is needed in order to assess the effects it could have on scientific knowledge creation.