Crowdsourcing Innovation

  • Linus Dahlander, ESMT Berlin
  • Henning Piezunka, Entrepreneurship and Family Enterprise, INSEAD

Summary

Crowdsourcing—a form of collaboration across organizational boundaries—provides access to knowledge beyond an organization’s local knowledge base. There are four basic steps to crowdsourcing: (a) define a problem, (b) broadcast the problem to an audience of potential solvers, (c) take actions to attract solutions, and (d) select from the set of submitted ideas. To successfully innovate via crowdsourcing, organizations must complete all of these steps. Each step requires an organization to make various decisions; for example, it needs to decide whether the selection is made internally. Organizations must also take into account interdependencies among the four steps. For example, the choice between qualitative and quantitative selection mechanisms affects how widely organizations should broadcast a problem and how many solutions they should attract. In short, organizations must make many decisions at each key step and attend to the many interdependencies among them.

Subjects

  • Organization Theory
  • Organizational Behavior
  • Problem Solving and Creativity
  • Technology and Innovation Management

How do organizations innovate? Research on the antecedents of organizational innovation points to a number of possibilities, such as interorganizational collaborations (Ahuja, 2000; Powell et al., 1996; Schilling & Phelps, 2007), competitors (Greve & Taylor, 2000; Katila & Chen, 2008; von Hippel, 1986), alliance portfolios (Lavie, 2007), learning from lead users (Chatterji & Fabrizio, 2014; Franke & Shah, 2003; Katila et al., 2017; von Hippel, 1986), hiring practices (Rosenkopf & Almeida, 2003), organizational structures (Tushman & O’Reilly, 1996), explorative behaviors (Levinthal, 1997; March, 1991; Rivkin & Siggelkow, 2003), individual search processes (Dahlander et al., 2016; Kneeland et al., 2020), ecosystems (Adner & Kapoor, 2010), and cognition (Kaplan & Tripsas, 2008). The most recent practice identified by scholars is crowdsourcing, defined as a distributed form of innovation in which contributors self-select to take part.

Crowdsourcing’s potential for organizational innovation has been conceptually derived and empirically demonstrated (Afuah & Tucci, 2012; Jeppesen & Lakhani, 2010). Many different organizations, such as NASA and Netflix, have leveraged crowdsourcing to generate innovative breakthroughs; others, such as Yelp and OpenStreetMap, develop new types of knowledge goods with the help of the crowd (Nagaraj & Piezunka, 2021). What makes crowdsourcing so promising is that it allows organizations to gather new knowledge from a wide range of participants. But despite its great potential, the vast majority of organizations seeking to innovate via crowdsourcing fail in their endeavor (Dahlander & Piezunka, 2014). Given its potential, on the one hand, and the high failure rate, on the other, it is no surprise that crowdsourcing has attracted the attention of scholars from many disciplines, and it has become the subject of a large number of academic and practitioner articles, special issues, and books (Felin et al., 2017; Majchrzak & Malhotra, 2020).

This article provides a research overview of crowdsourcing. First, it defines crowdsourcing innovation—listing its different types—and then positions it relative to similar, related phenomena. The review of the literature is then structured according to the different activities that organizations (need to) take part in when engaging in crowdsourcing innovation, such as broadcasting, attracting, moderating, selecting, and rejecting. The article concludes with a general overview of the status quo of the field by illustrating how it can inform the broader field of research on management, strategy, and innovation; its limitations; and future opportunities. It is also worth paying attention to other reviews in the field of crowdsourcing, as they have emphasized other avenues (e.g., Majchrzak & Malhotra, 2013, 2020).

Crowdsourcing Innovation—Definition, Types, and Positioning

What Is Crowdsourcing?

When organizations rely on crowdsourcing, they deploy a new form of organizing (Puranam et al., 2014). The key feature is that organizations do not exert any authority over the division and allocation of tasks, as crowd members decide on those independently, self-selecting to work on tasks. In other words, participants voluntarily choose to collaborate without a central hierarchy assigning them to tasks and collaboration partners. Self-selection can be effective (or even necessary) because organizations often do not know ex ante who has the knowledge required to support them in a particular task. For example, Netflix may not know who is best at developing a new recommendation algorithm, NASA may not know who is best at predicting solar storms, and Yelp may not know who can provide a review of a particular restaurant. Instead of seeking to find someone knowledgeable, organizations crowdsource, so that someone with the right knowledge can find them. Thus, anyone can contribute to working on a task, and the set of people who can work on it is often undefined. Participants can either be individuals, as is most often the case, or teams, or even an entire organization.

Crowdsourcing also relies upon nontraditional ways of rewarding the crowd for engaging in a task. In a traditional organization, employees who complete a task are compensated through their salary. By contrast, in crowdsourcing, participants are often intrinsically motivated, strive to gain the attention of the organization (Dahlander & Piezunka, 2014; Piezunka & Dahlander, 2015), seek the support of their peers, showcase their skills, want the organization to make a change, or hope to receive some form of reward or award (Gallus, 2017; Jung et al., 2018).

Crowdsourcing is not a new concept. Governments and companies have long used crowdsourcing as a source of ideas to advance such diverse issues as industrializing land, controlling infectious diseases, and mass-producing and conserving food. One of the most famous examples of crowdsourcing occurred in 1714, when the British government came up with the Longitude Prize in order to elicit a solution to one of the most pressing scientific problems of the time: determining longitude at sea (Cattani et al., 2017). In the last two decades, an increasing number of organizations have adopted crowdsourcing to solve more contemporary problems (Brunt et al., 2012). For instance, the different X-prizes use crowdsourcing to come up with solutions to major societal problems, such as finding cures for neglected diseases. This recent widespread adoption of crowdsourcing is rooted in (a) the constant need for innovation, which prompts organizations to search for knowledge beyond their boundaries; (b) the emergence of the Internet, which has expanded the potential reach of crowdsourcing; and (c) the decline in information and computation costs, which has facilitated sophisticated problem-solving and innovation at the individual level (Baldwin & von Hippel, 2011; Faraj et al., 2016; von Hippel, 2005; von Hippel & von Krogh, 2003).

Types of Crowdsourcing Innovation

The label “crowdsourcing” has been associated with various purposes and ways to engage the crowd. Boudreau and Lakhani (2009) distinguish between competitive markets and collaborative communities as two ways to use external innovators. Competitive markets underpin a frequently studied form of crowdsourcing, namely, crowd contests (Terwiesch & Yi, 2008). In a crowd contest, an organization presents the crowd with a problem to be solved. Though often subsumed under one label, it is crucial to differentiate between crowd contests (e.g., Terwiesch & Yi, 2008) and crowd ideation (e.g., Bayus, 2013; Piezunka & Dahlander, 2015). In ideation, people provide different kinds of more or less defined ideas and problems, whereas in contests people provide comparable solutions. For example, an organization might run an ideation campaign on how it could improve its service but run a contest on the development of a better-performing algorithm. There is also work on collaborative communities, where individuals come together to solve problems or interact even more informally. A case in point is LEGO Ideas, where more than a million people interact to share ideas for a new LEGO set. These communities can spur innovation and help organizations find new talent to hire (Woolley et al., 2015). In sum, crowdsourcing innovation can be adopted using both competitive markets and collaborative communities.

Relationships with Other Literatures

Much of the research on crowdsourcing innovation has been phenomenological. The literature has expanded in recent years, and there are multiple touch points with related streams of literature. Understanding the links and differences between these streams of literature is crucial to avoiding the mistake of treating them as if they were the same or ignoring highly related phenomena. This section provides a brief overview of the links to and differences from related phenomena. Table 1 includes an overview of the literature, selected influential references, and similarities to and differences from crowdsourcing innovation. More specifically, table 1 compares the literature on crowdsourcing with the literature on (a) open innovation (OI), (b) user-based innovation, (c) wisdom of crowds, (d) crowdfunding, (e) open source, (f) hackathons, (g) platforms and ecosystems, (h) brainstorming, (i) innovation communities, and (j) online communities. Without going into the details of each of these 10 comparisons, we can provide a few illustrations. For instance, there is an intellectual linkage to the open innovation literature (Chesbrough, 2003; Dahlander & Gann, 2010; Laursen & Salter, 2006) in that crowdsourcing is often (albeit not always; some companies use it for internal employees only) used to gain ideas from outside the organization. The OI literature, though, is broader than that of crowdsourcing and does not offer strong theoretical priors for how crowdsourcing works. Although crowdsourcing can learn from the OI literature, there is also a need to develop this stream separately. There are also similarities with the literature on open source (von Hippel & von Krogh, 2003) in that people are widely distributed and can self-select tasks. However, in open source, the created knowledge accumulates, and every individual makes an incremental contribution, whereas in crowdsourcing every individual’s input stands by itself.

Table 1. The Relationship between Crowdsourcing and Related Streams of Literature

| Literature | Selected references | Similarities | Differences |
| --- | --- | --- | --- |
| Open innovation | Chesbrough (2003), Laursen and Salter (2006), Chesbrough et al. (2006), Dahlander and Gann (2010) | Idea for innovation originates outside the organization | Crowdsourcing is one specific form of open innovation |
| User-based innovation | von Hippel (1986), Shah (2006), Shah and Tripsas (2007), Agarwal and Shah (2014) | External individuals as a source of innovation | External individuals innovate on their own, whereas in crowd contests they only provide an input enabling the organization to innovate |
| Wisdom of crowds | Mollick and Nanda (2016), Csaszar (2018), Becker et al. (2018), Piezunka et al. (2021) | Individuals provide important insight | In the wisdom of crowds, individuals’ inputs are aggregated (e.g., the average is taken), whereas in crowd contests each individual’s input stands by itself |
| Crowdfunding | Mollick (2014), Kuppuswamy and Bayus (2013) | External individuals self-select into funding | In contrast to crowd contests, where external individuals produce ideas/solutions, funding constitutes a commodity |
| Open source | Lerner and Tirole (2002), von Hippel and von Krogh (2003), Lakhani and Wolf (2005), Stewart and Gosain (2006), Spaeth et al. (2015) | Individuals self-select to contribute | In open source, the created knowledge accumulates and every individual makes an incremental contribution, whereas in crowdsourcing every individual’s input stands by itself |
| Hackathon | Lifshitz-Assaf et al. (2020), Fang et al. (2021) | External individuals provide specific solutions for the organization to build upon | Participants are temporally co-located in hackathons and work in a condensed time frame; individuals innovate on their own, whereas in crowd contests they only provide an input enabling the organization to innovate |
| Platforms and ecosystems | Gawer and Cusumano (2002), Zhu and Iansiti (2010), Rietveld and Schilling (2021), Adner and Kapoor (2010), Hannah and Eisenhardt (2018) | External complementors self-select to provide products that operate on top of the platform/interact with other products in the ecosystem | Complementors provide a modular product that may be sold to customers independently; in crowdsourcing, participants only provide an input enabling the organization to innovate |
| Brainstorming | Sutton and Hargadon (1996), Girotra et al. (2010) | Generating multiple ideas to then select among these ideas | Typically used internally on a relatively small scale among employees who are co-located within a condensed time frame |
| Innovation communities | Harhoff et al. (2003), Jeppesen and Frederiksen (2006) | People share ideas and contribute in other ways | In innovation communities, people have relationships with one another; notably, organizations that crowdsource often allow for or even foster relationships among contributors, turning crowds more and more into communities (e.g., Piezunka & Dahlander, 2019) |
| Online communities | Füller et al. (2007), Ren et al. (2012), Faraj et al. (2016) | People interact with one another online | Not necessarily directly linked to an organization and not necessarily about innovation |

In sum, table 1 illustrates many linkages to broader fields of study. But there are also notable differences between these constructs and crowdsourcing, which implies that crowdsourcing can be studied independently (see also Felin et al., 2017; and Powell, 2017, for an examination of how crowdsourcing relates to theoretical frameworks and phenomena). An implication of table 1 is that crowdsourcing research would do well to explicate what is different from related streams of work and what the new insights are. The crowdsourcing literature has often been rather phenomenological, documenting interesting facets of crowdsourcing. At the same time, elaborate analyses have been developed (Afuah & Tucci, 2012) that use crowdsourcing as a setting to develop new theories. Theory building is obviously important and high in the hierarchy of scientific advancement, but more phenomenon-based work has been important, too, for understanding what is going on.

Crowdsourcing Innovation—Process, Steps, and Activities

This section discusses the different steps and activities for which organizations use crowdsourcing, building on Dahlander et al. (2020). Scholars have developed alternative and complementary frameworks on how organizations can manage their crowdsourcing initiatives (e.g., Blohm et al., 2018; Majchrzak & Malhotra, 2020; Malhotra & Majchrzak, 2014). A key challenge in organizational crowdsourcing is the irreversibility of decisions. For example, it is hard to reverse course once a task has been defined and a crowd has begun to work on it. Take the frequently cited Deepwater Horizon crisis in the Gulf of Mexico, where BP, in the aftermath of the platform catching fire, reached out to the crowd to get ideas on how to cope with the catastrophe. After receiving more than 130,000 ideas, BP used more than 100 experts to help process the ideas, but they concluded there was no silver bullet to be found. In their words, it was “a lot of effort for little result.” One challenge here was that the problem was vaguely defined, and once released, this definition was irreversible, which led to larger-than-expected costs and problems in evaluating the ideas from the crowd.

Willingness to Crowdsource

A crucial step for an organization and its members before engaging in crowdsourcing is to adopt a mindset that allows them to do so. Lifshitz-Assaf (2018) illustrates that this cannot be taken for granted. In an ethnography of NASA, she shows that organization members have often adopted an identity in which they see themselves as solution owners, not problem owners. Many employees within an organization are skeptical of using crowdsourcing because they fear being replaced or giving away their own personal knowledge to an undefined person. Lifshitz-Assaf’s work is important in highlighting internal challenges that need to be managed to work with crowds, an area that deserves further research. Fayard et al. (2016) illustrate that organizations can differ vastly in their attitudes toward crowdsourcing—and consequently in how (much and long) they rely on it. It is important to note that starting a crowdsourcing initiative is by itself not an indicator of an organization’s openness to crowdsourcing. Even organizations that crowdsource ideas are often unwilling to attend to and act on these suggestions (Piezunka & Dahlander, 2015, 2019; Piezunka et al., 2021). Taken together, work in this domain suggests that internal R&D (research and development) departments need to change their mindsets from fearing being made redundant to being willing to work with a crowd (Boudreau & Lakhani, 2009; Lifshitz-Assaf, 2018).

Willingness to crowdsource is also connected to the ability to get internal buy-in. Internal efforts and crowdsourcing are complements rather than substitutes. Organizations can, for example, leverage crowdsourcing for tasks in which internal employees lack interest. In other instances, organizations may prefer internal efforts, for example, when they do not want to reveal strategically important problems unless the call for solutions is abstracted or obfuscated. If an organization decides to crowdsource, the first step lies in defining the problem for which it is seeking help. Table 2 summarizes the key stages.

Table 2. The Four Stages of Crowdsourcing

| Stage | Short explanation |
| --- | --- |
| Define | Defining the task to be distributed to the crowd—Is it about getting a specific problem solved or geared toward finding problems? |
| Broadcast | Broadcasting refers to the channels used to find a crowd—Does the organization use its own channels or intermediaries? |
| Attract | Attracting is linked to how crowds will be incentivized to take part—What kinds of incentives are employed and how many contributors will be rewarded? |
| Select | Selecting refers to the evaluation criteria used to assess contributions—Is it possible to evaluate with predefined metrics or is it more of a judgment call? |

Define

In crowdsourcing, one needs to define a problem before turning to a crowd (Baer et al., 2013). How the problem is defined and communicated has wide-ranging implications (Foss et al., 2016). For example, it affects how many people can relate to the problem and thus the number of participants (i.e., the size of the crowd that can be attracted). Potentially even more important, it also affects the types of participants who are attracted. For example, if a diversified conglomerate defines a particular problem as an engineering problem, it will attract a different crowd than if the problem were defined as a chemical issue. Too narrow a definition can be harmful, as having a variety of perspectives increases the odds of finding an innovative solution (Jeppesen & Lakhani, 2010; Levina & Fayard, 2018). Organizations are likely to benefit if they tap into the diverse perspectives and ideas of their users and potential contributors (Franke & von Hippel, 2003).

When defining the problem, it is important to make the issue broadly appealing, thus avoiding any unnecessary restrictions (e.g., through terminologies, jargon, or framings that reflect a specific field). The problem can also be left more open-ended, with the crowd’s general task being to point out new ideas for future products or services or areas of improvement. Such minimal specification allows crowd contributors to redefine and interpret a problem through the lenses of their own expertise and local knowledge (which are likely to be distant from the organization’s) (Winter et al., 2007). Avoiding superfluous specifications can increase the number of people willing to engage in a task. Overspecifying the problem can lead to unnecessary and potentially detrimental constraints (Erat & Krishnan, 2012). Of course, organizations may also underspecify the problem and, as a result, receive too many unusable ideas. In sum, although a lower specification is likely to increase the share of unusable input, it increases the chances of sourcing valuable input (Boudreau et al., 2011).

Broadcast

Broadcasting is the mode an organization uses to reach the crowd. This can be done by using existing channels of communication and building its own crowd, or by using intermediaries (Lopez-Vega et al., 2016). The classic examples of crowdsourcing were designed and run by the organizations that created the actual tasks. The previously mentioned example of the British government’s Longitude Act of 1714 was a response to the Scilly naval disaster, in which four British warships were lost and more than 2,000 sailors were killed. The Longitude Act sought to find a simple and practical method for the precise determination of a ship’s longitude (Cattani et al., 2017).

When organizations run their own crowdsourcing, they tend to rely on crowds of individuals with whom they already have a relationship. It is difficult for an organization to attract individuals with whom it does not yet have a relationship. This is problematic, because an organization’s knowledge and the knowledge of the individuals with whom the organization already has relationships are likely to overlap. These individuals are likely to operate in the same domain as the organization. This, in turn, prevents them from achieving sufficient “distance” in their search (Afuah & Tucci, 2012; Jeppesen & Frederiksen, 2006). Thus, although their contributions are often more feasible and immediately applicable, they also tend to be less novel (Franke, Poetz, et al., 2014). As a result, the input an organization gathers from individuals with whom it already has a relationship is unlikely to lead to pathbreaking innovations (Burgelman, 2002).

When organizations broadcast a problem, it is critical that they reach a wide audience. A primary reason organizations’ crowdsourcing initiatives fail is their inability to attract a sufficient number of people who submit ideas (Dahlander & Piezunka, 2014, 2020). It is notable that organizations that are successful in crowdsourcing (and typically become the object of scientific research) are often organizations with a well-known brand and general appeal, like Dell (Bayus, 2013), NASA (Lifshitz-Assaf, 2018), or LEGO (Dahlander et al., 2020). Although organizations typically struggle to attract a large audience, if they do attract a large audience it can come with its own challenges. For example, prior research illustrates how important it is that people develop relationships with one another and know each other (Ma & Agarwal, 2007; Ren et al., 2012). As the size of the crowd grows, it may be more difficult to transform an anonymous crowd into a connected community.

One of the goals of crowdsourcing is to tap into distant knowledge. It is, however, unclear ex ante who holds relevant knowledge (Afuah & Tucci, 2012). By increasing the size of the crowd, an organization increases its chances of identifying suitable input (Boudreau et al., 2017; Lakhani et al., 2007; Terwiesch & Yi, 2008). A larger crowd increases the chance of finding an extreme solution (Baer et al., 2010; Boudreau et al., 2011). Franke, Lettl, et al. (2014) argue that the quality of ideas is largely random—and, as a result, the success of crowdsourcing depends on the number of people it attracts. However, other scholars have pointed out that the relationship between the number of searchers and the breadth of search is sublinear (Erat & Krishnan, 2012). A large crowd increases competition and decreases the incentive for any given individual to work.
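
The extreme-value logic behind these arguments can be made concrete with a short simulation. The sketch below is purely illustrative and not taken from the cited studies: it assumes that each contributor's solution quality is an independent random draw and shows that the expected best solution improves with crowd size, but with diminishing returns, consistent with the sublinear relationship noted above.

```python
import random

def expected_best_quality(crowd_size: int, trials: int = 10_000) -> float:
    """Average quality of the best submission when each of `crowd_size`
    contributors draws a solution quality uniformly from [0, 1)."""
    total = 0.0
    for _ in range(trials):
        total += max(random.random() for _ in range(crowd_size))
    return total / trials

# The expected maximum rises with crowd size, but each tenfold increase adds less:
for n in (1, 10, 100, 1_000):
    print(n, round(expected_best_quality(n), 3))  # roughly 0.5, 0.91, 0.99, 0.999
```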

A potential benefit of crowdsourcing is what is sometimes referred to as a “parallel search,” which occurs when multiple crowd members explore possible solutions simultaneously. On the upside for the organization, it typically only pays for successful attempts (Boudreau et al., 2011). One can jokingly say that with crowdsourcing, organizations pay for performance, whereas in R&D departments they pray for performance.

For such parallel search to pay off, it is important to have diversity and a variety of perspectives among those exploring possible solutions (Jeppesen & Lakhani, 2010). The challenge for the organization is to seek specific information without giving away any strategic information. Parallel search also requires internal R&D departments to give up some of their power and work with the crowd rather than being concerned about being made redundant (Boudreau & Lakhani, 2009; Lifshitz-Assaf, 2018).

Today, organizations have alternatives to building an initiative and an audience on their own. So-called intermediaries can assist organizations in connecting with potential crowds. Those intermediaries invest in building a crowd whose members are interested and skilled in completing tasks (Lakhani & Lonstein, 2011). Since these pools of individuals tend to expand as intermediaries collaborate with various organizations—and, as a result, can offer more and different tasks—they often represent a great diversity of expertise. Such diversity is crucial to finding novel solutions (Boudreau et al., 2011), but it would be difficult for an organization to access those solutions without the help of an intermediary. Intermediaries also offer advice on how to organize crowdsourcing. As the individuals contacted via an intermediary have no direct link to the organization, they are less likely to be motivated to contribute via a feeling of affiliation (compared to when organizations recruit crowds themselves).

When broadcasting, organizations also need to decide whom to crowdsource from. Organizations may reach out beyond the boundaries of the firm, but they may also engage in internal crowdsourcing where they elicit ideas from their employees (Malhotra et al., 2017). Research has begun to illustrate how organizations can engage their internal employees successfully in crowdsourcing—also hinting at how internal and external crowdsourcing may differ (Jung et al., 2018).

Attract

But simply broadcasting a task is not enough; the crowd must also engage. Contributors will only engage if they perceive the overall process and their reward to be appropriate and fair (Franke et al., 2013). There are different ways in which organizations have addressed this. Perhaps the most obvious is to offer prizes for winning solutions. Recent work has underscored the enormous effectiveness of offering awards (Deller & Sandino, 2020; Frey & Gallus, 2017; Gallus, 2017; Gallus & Frey, 2016). In the early work on crowdsourcing contests, scholars thought carefully about how to design prizes (Erat & Krishnan, 2012). But the financial incentive alone cannot explain crowdsourcing’s success. Even if the winning sum is substantial, it has to be considered relative to the number of people providing input. For instance, if the prize is €10,000 for a winning idea, and 1,000 competing individuals spend 10 hours each to come up with a solution, then the expected payoff for an individual is €10,000/1,000 = €10. Considering that many of the tasks require deep expertise, an expected hourly wage of €1 cannot fully explain why crowds engage.
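
The back-of-the-envelope calculation above can be written out explicitly. The numbers below simply restate the figures from the text; they are illustrative and not drawn from any actual contest.

```python
prize = 10_000        # winner-takes-all prize in euros
participants = 1_000  # number of competing contributors
hours_per_entry = 10  # effort each contributor invests

expected_payoff = prize / participants                   # 10.0 euros per contributor
implied_hourly_wage = expected_payoff / hours_per_entry  # 1.0 euro per hour

print(f"Expected payoff per contributor: EUR {expected_payoff:.2f}")
print(f"Implied hourly wage: EUR {implied_hourly_wage:.2f}")
```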

Prior work has pointed out other kinds of motivations. Many crowd members are motivated purely by the intellectual challenge and by applying their skills to a useful outcome (Frey et al., 2011; Lakhani & Wolf, 2005). These intrinsic sources of motivation also appear to be linked to more substantial contributions (Füller et al., 2006). Benefits can also arise from highlighting skills and enhancing career prospects (Lerner & Tirole, 2002). There are examples where creative contributors (even nonwinners) have ended up being hired by the crowdsourcing organization. This implies that organizations can design crowdsourcing initiatives in ways that attract contributions: they can offer interesting tasks, allow for showcasing skills, and compare outputs to increase learning. A crucial component of people’s motivation to engage in crowdsourcing is also their ambition to form a relationship with the organization (Piezunka & Dahlander, 2019).

Organizations may also benefit from providing opportunities to engage before contributing. Kane and Ransbotham (2016) illustrate that consumption often precedes contribution. Consistent with this, Nagaraj and Piezunka (2021) illustrate that when platforms experience a loss of consumers, their ability to recruit new contributors suffers. There is, however, an easy way for an organization to allow for participation without contribution. For example, when organizations share ideas they have generated internally, there is a positive effect, allowing contributors to discuss these ideas before contributing their own (Dahlander & Piezunka, 2014).

Moderate and Provide Feedback

Attracting a crowd to work on a particular task does not by itself imply success. And if organizations succeed in attracting a large crowd, it is crucial and surprisingly challenging to keep its members engaged on an ongoing basis (Piezunka & Dahlander, 2019; Ray et al., 2014). Contributors typically require feedback, which implies that organizations cannot simply post a task and consider their work done. Working with a crowd requires ongoing moderation and feedback (Moon & Sproull, 2008), which not all organizations fully appreciate (Dahlander & Piezunka, 2014).

People often value their own ideas very highly (Fuchs et al., 2019), which implies that they feel entitled to receive recognition or feedback. At the same time, research has shown that in the vast majority of cases—88%—organizations do not provide any type of feedback, positive or negative, to individual contributors (Piezunka & Dahlander, 2019). This is unfortunate as even contributors who receive an explicit rejection are more likely to come up with a second idea and are even more likely to have it accepted. In other words, by receiving negative feedback with the rejection, the contributors learn what the organization wants (Piezunka & Dahlander, 2019). Recent research has underscored the important and often motivating role of negative feedback (Camacho et al., 2019). Thus, by providing feedback, organizations can increase the odds of a newcomer becoming a serial ideator (Bayus, 2013) who generates a stronger idea that is more in line with the organization’s objective.

Some research has also analyzed the type of feedback that crowdsourcing organizations provide, specifically the content of rejections, along with the word choice and tone (Piezunka & Dahlander, 2019). Piezunka and Dahlander suggest that “echoing” content is less effective than matching the contributor’s style. For instance, if the contributor provides an informal idea, it makes sense for the organization to respond informally. A possible challenge is that crowds may scale more quickly than the internal capacity to moderate and provide feedback.

Beyond the feedback from the organization, contributors often also receive feedback from the crowd. Frequently, contributors are allowed to vote and comment on each other’s ideas (e.g., Bayus, 2013; Piezunka & Dahlander, 2015). Research shows that feedback from their peers is also an important source of motivation (Bayus, 2013; Chan et al., 2015; Piezunka & Dahlander, 2019). By allowing people to comment and vote on each other’s work, crowds can transform into communities, as people interact and develop relationships with one another. Such interactions are also crucial for the emergence and exchange of new knowledge (Faraj et al., 2016). More work has begun to study the dynamics of interactions among contributors and their ideas (Franke & Shah, 2003)—a domain that would be fertile ground for future empirical examination and theoretical development.

Select

An organization also needs to decide which ideas to select for implementation. There are clear differences as to how this can be done. The two most noteworthy methods are preestablished metrics and judgment calls. When using judgment calls, particular invention types are not specified in advance (i.e., the winning entry is decided ex post) (Moser & Nicholas, 2013). In these cases, judges are allowed to “know it when they see it” (Scotchmer, 2004, p. 40). When using metrics, the evaluation criteria are formalized by standards developed ex ante, which stipulate a goal against which entries are evaluated to determine a winner. For instance, many of the InnoCentive challenges have 10+ pages of descriptions of conditions that need to be fulfilled in order to win. Similarly, the data science challenges hosted by Kaggle have clear instructions on how contributions are measured. In addition, Kaggle has such clear evaluation metrics that contributions can be continuously compared, which fosters learning from others. However, determining the right metric can be difficult and costly (Terwiesch & Yi, 2008). For a metrics-based evaluation to work, the conditions need to be specified in advance. However, this is difficult when an organization is exploring unfamiliar terrain, and it can be expensive, meaning that an organization involved in a metrics-based evaluation incurs costs before it even knows whether it will attract any entries. It can also constrain the solution space if an organization does not expand its horizon on where solutions come from. For instance, Sir Isaac Newton, who was a commissioner of the Longitude Act and interacted with different people who contributed solutions to the longitude problem, was so engrossed in his own thinking around astronomy that he failed to appreciate solutions that were coming from a different area of expertise (Cattani et al., 2017).
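
As a hedged illustration of what a metrics-based evaluation can look like in practice, the sketch below scores hypothetical submissions against held-out ground truth using root-mean-squared error, the kind of predefined, automatically computable criterion used on platforms such as Kaggle. The data and solver names are invented for illustration.

```python
import math

def rmse(predictions: list[float], ground_truth: list[float]) -> float:
    """Root-mean-squared error: a criterion specified ex ante, so every entry
    can be scored automatically and compared on a leaderboard."""
    squared_errors = [(p - t) ** 2 for p, t in zip(predictions, ground_truth)]
    return math.sqrt(sum(squared_errors) / len(ground_truth))

ground_truth = [3.0, 5.0, 2.5, 7.0]  # held-out values, hidden from contributors
submissions = {                      # hypothetical entries from three solvers
    "solver_a": [2.8, 5.1, 2.4, 6.5],
    "solver_b": [3.5, 4.0, 3.0, 7.5],
    "solver_c": [3.0, 5.0, 2.0, 7.2],
}

leaderboard = sorted(submissions, key=lambda s: rmse(submissions[s], ground_truth))
for rank, solver in enumerate(leaderboard, start=1):
    print(rank, solver, round(rmse(submissions[solver], ground_truth), 3))
```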

Judgment calls increase an organization’s flexibility to choose any idea. However, relying purely on judgment calls can be challenging. Each entry must be evaluated at great length, thus increasing the organization’s selection burden and exposing the selection process to managerial bias. It can also create friction between the organization and the crowd when the crowd elevates an idea that the organization dismisses. When loosely defining selection criteria, organizations are also relying upon crowds. Crowd contributors may be or may become future customers (Mollick & Nanda, 2016; Poetz & Schreier, 2012). For example, when LEGO involves crowds in evaluating designs on its LEGO Ideas platform, the crowd evaluators’ preferences are highly relevant, since the crowd members in this case are also often customers and users of the products. This constitutes a double-selection environment (Beretta, 2019) where the crowd votes or comments on ideas they think are worth pursuing, and the organization chooses from a smaller number of ideas. Even in the absence of such a double-selection environment, a crowdsourcing organization often allows contributors to vote and comment on each other’s suggestions and may take such votes into account when making their final selection.

A large body of work has begun to examine which ideas organizations select from the crowdsourced pool. Piezunka and Dahlander (2015) find that once organizations have gathered a wide range of ideas, they tend to focus on those closely related to what they are already doing. This paradoxical phenomenon is known as “distant search, narrow attention.” The more ideas gathered, the more this tendency to focus on familiar ideas intensifies. Thus, whereas prior research has illustrated the value of collecting many ideas to increase the chance of finding a great one (Girotra et al., 2010), current research shows that success on that score may render organizations less receptive to novel distant ideas (Piezunka & Dahlander, 2015). The sheer scale of crowdsourced ideas can result in cognitive overload so that novel distant ideas—the ones that are particularly valuable—are “crowded out.” These findings are in line with research showing that organizations often filter out distant ideas with breakthrough potential (Chai & Menon, 2019). Recent work on selection has begun to use machine learning to tease out patterns in organizations’ selections (Christensen et al., 2017). Future research may explore the consequences of different selection regimes on the kinds of ideas that are crowdsourced, and may also examine how internal structure affects the selection of externally crowdsourced ideas, building on work in the domain of organizational design (Keum & See, 2017; Reitzig & Maciejovsky, 2015; Reitzig & Sorenson, 2013).
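
As one illustration of how machine learning might be used to surface patterns in which crowdsourced ideas are selected (in the spirit of, but not reproducing, Christensen et al., 2017), the sketch below fits a simple bag-of-words classifier on a handful of invented idea texts labeled as selected or not. The data, labels, and model choice are assumptions for illustration only.

```python
# Illustrative sketch: which words are associated with an idea being selected?
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

ideas = [
    "improve the loyalty program for existing customers",
    "redesign packaging to cut shipping costs",
    "launch an entirely new product category for teenagers",
    "add community events in stores",
]
selected = [1, 1, 0, 0]  # hypothetical labels: 1 = the organization selected the idea

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(ideas)
model = LogisticRegression().fit(X, selected)

# Words with the largest positive weights push ideas toward "selected" --
# a crude way to surface what kinds of ideas an organization attends to.
weights = sorted(zip(vectorizer.get_feature_names_out(), model.coef_[0]),
                 key=lambda pair: pair[1], reverse=True)
print(weights[:5])
```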

Reject

Research has often examined which ideas organizations select. Piezunka and Dahlander (2019) have found that how these decisions are communicated is also important. This is crucial for keeping participants engaged. Participation is often short-lived, seldom going beyond contributing one idea. Piezunka and Dahlander (2019) identify the primary reason for this: The typical experience for a contributor is to submit an idea, see the organization reject it, and then forego submitting again. In order to better understand what contributors are looking for, they studied those whose initial ideas were not selected, comparing those whose ideas were explicitly rejected by the organization with those who simply did not hear back. They found that explicitly rejecting ideas substantially increases contributors’ tendency to continue to submit. Explicit rejections indicate to contributors that the organization is committed to crowdsourcing and does pay attention, and that it is worth trying again. Examination of the text of the rejections revealed that when a rejection is written in the same linguistic style as the submitted idea, the contributor is more likely to submit ideas in the future. These findings show that for organizations to attract contributions, they must signal to contributors their commitment to crowdsourcing and establish a relationship with them.

Crowdsourcing Innovation—Taking Stock, Challenges, and Future Directions

Research on crowdsourcing is continually growing. There are many open questions for research on crowdsourcing innovation, even if we know a lot more today than we did when the field first emerged.

Theoretical Challenges and Future Directions

Oversampling Success

Research has pointed out how fruitful crowdsourcing can be, but it has also documented the enormous challenges that accompany it. A key question is when and for what kinds of organizations crowdsourcing is beneficial. Related fields, such as open innovation, attracted early optimism as a “new” way of organizing innovation, and so has crowdsourcing. Early work often drew on examples from organizations that had successfully implemented crowdsourcing. Today, a more sober view is emerging as the field moves beyond studies characterized by oversampling success. There are indeed a number of organizations that tried to implement crowdsourcing and failed in the attempt. These negative examples are well hidden in the bunkers of corporate bureaucracies with little interest in displaying failure. One solution is to theoretically sample unsuccessful cases, or even partially successful ones, more deliberately. Another solution is to use broader data sets that capture cases of organizations that at some point implemented crowdsourcing (Dahlander & Piezunka, 2014).

Newcomers and Serial Ideators

Research shows that most who contribute ideas are newcomers—they show up once but are likely to leave the crowd soon after (Piezunka & Dahlander, 2019). As a result, a crowd has very skewed contribution patterns, with a tiny fraction of contributors accounting for a large share of the contributions. Some become serial ideators and are likely to improve their success rate over time, which has been the subject of some research (Bayus, 2013). One aspect that has not been investigated in depth is how ideators change the content of their contributions. For instance, it is likely that serial ideators learn in the process, but research has yet to discover how and why their content changes from idea to idea. Similar to research on open source, where “joining scripts” have been identified (Spaeth et al., 2015), it would be advantageous to understand the sequences that newcomers go through to become accepted. Research on how they “lurk” and vicariously learn from others could unearth how they learn from the successes and failures of others and whether doing so trumps the effect of one’s own experiences. Piezunka and Dahlander (2019) have shown how organizations can use feedback on rejected ideas to get people to contribute a second time in the hope of then generating better ideas. There are many open questions here regarding how contributors change as a result of getting feedback. If they align with the organization by proposing ideas too close to its comfort zone, crowdsourcing may fail to stretch the organization beyond that comfort zone.

Performance Implications

A question that remains unanswered, surprisingly, is whether and when crowdsourcing is beneficial for organizations. It is surprising because prior work has examined the potential of crowdsourcing, yet we do not know at what point crowdsourcing innovation translates into higher performance. There is strong conceptual evidence of the strategic relevance of crowdsourcing innovation (and the underlying innovation communities) (Bogers & West, 2012; Fisher, 2019; Shah & Nagel, 2020), but empirical evidence that links crowdsourcing to firm-level performance outcomes is missing. This is in part because it is difficult to obtain a representative sample that could be connected to reliable company performance data. It is worth noting here that the performance of crowdsourcing often follows a skewed distribution, and that some organizations reap disproportionate benefits. This implies that scholars would be well served to consider both average treatment effects and what occurs in the tails of the distribution. A plausible proposition is that there is a marginal benefit for the “average” firm but that organizations that are very successful in managing the crowd reap disproportionate performance benefits.
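
The point about averages versus tails can be illustrated with a toy simulation. The sketch below assumes, purely for illustration, that the returns to crowdsourcing across firms follow a right-skewed lognormal distribution and contrasts the mean with the median and the upper tail.

```python
import random

random.seed(0)
# Hypothetical, right-skewed "returns to crowdsourcing" across 10,000 firms.
returns = sorted(random.lognormvariate(0.0, 1.5) for _ in range(10_000))

mean_return = sum(returns) / len(returns)
median_return = returns[len(returns) // 2]
p99_return = returns[int(0.99 * len(returns))]

print(f"Mean:            {mean_return:.2f}")
print(f"Median:          {median_return:.2f}")
print(f"99th percentile: {p99_return:.2f}")
# The mean sits above the median yet far below the extreme tail: an average
# treatment effect says little about the firms that benefit the most.
```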

Competitive Advantages

Future research is needed to link crowdsourcing to competitive advantage. In particular, it is important to recognize how crowdsourcing can be used as a vehicle to build reputation, and how competitors would have to respond. For instance, if Dell successfully implemented crowdsourcing, would competitors have to follow suit to rope in crowd members? Given that there are network effects in a crowd, it may be difficult to be a second mover when a competitor has been successful in developing a crowd. More research is thus needed on how organizations use crowds for competitive moves, which can be either offensive (becoming a “black hole” in a space that sucks in good ideas) or defensive (responding once competitors have reached such dominance). Given the literature’s internal focus on how crowds function, these questions have long been overlooked and deserve future scholarly attention.

Methodological Challenges and Future Research

Inside Crowd Studies

Research on crowdsourcing typically requires insider econometrics, where researchers have access to person-level data from inside a firm. Rather than data on internal employees, scholars of crowdsourcing have scraped data or collaborated with partners who can provide data on crowdsourcing. Scraping data obviously has the challenge of catching snapshots at a particular point in time, which would have to be done over an extended period to fully capture behaviors longitudinally. Collaboration with a partner would typically overcome these issues, but gaining such access is time-consuming and requires trust and nondisclosure agreements (NDAs) to be established. Both scraping and partnering usually center on a single organization, and as a result, it has been difficult to compare and contrast what makes organizations successful, given the focus on detailed data on a single crowd.

Experiments on Crowdsourcing

The early crowdsourcing literature was conceptual, using case studies and getting data from a single crowd. Building upon these developments, research in recent years has also moved to use more field experiments to establish causal effects on crowds (see, e.g., Boudreau, 2012). This seems to be a natural next step to test some theoretical predictions. It is also worth remembering that a combination of methods is important.

New Methods

Given the vast amount of data in the context of crowdsourcing—ideas at different stages, votes, comments, contributors, administrators, and so on—the ground is fertile for making use of machine learning for inductive theory development (Shrestha et al., 2021). Although there are many pathways to develop theory and to better understand crowdsourcing, machine learning—in combination with the ingenuity of the researchers studying crowdsourcing—seems particularly promising.
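
One concrete, and purely illustrative, way to use such data for inductive theorizing is to cluster idea texts and then interpret the resulting groups qualitatively. The sketch below uses TF-IDF features with k-means on a handful of invented idea texts; it is a starting point under those assumptions, not the approach proposed by Shrestha et al. (2021).

```python
# Illustrative sketch: group crowdsourced ideas into themes for inductive analysis.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

ideas = [
    "faster checkout in the online store",
    "simplify the checkout and payment flow",
    "offer a subscription box for loyal customers",
    "monthly subscription with curated products",
    "open a pop-up store in the city center",
    "temporary pop-up shops at festivals",
]

X = TfidfVectorizer().fit_transform(ideas)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Print the ideas grouped by cluster; the researcher then names each theme.
for cluster in sorted(set(labels)):
    print(f"Theme {cluster}:", [idea for idea, k in zip(ideas, labels) if k == cluster])
```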

References

  • Adner, R., & Kapoor, R. (2010). Value creation in innovation ecosystems: How the structure of technological interdependence affects firm performance in new technology generations. Strategic Management Journal, 31(3), 306–333.
  • Afuah, A., & Tucci, C. (2012). Crowdsourcing as solution to distant search. Academy of Management Review, 37(3), 355–375.
  • Agarwal, R., & Shah, S. K. (2014). Knowledge sources of entrepreneurship: Firm formation by academic, user and employee innovators. Research Policy, 43(7), 1109–1133.
  • Ahuja, G. (2000). Collaboration networks, structural holes, and innovation: A longitudinal study. Administrative Science Quarterly, 45(3), 425–455.
  • Baer, M., Dirks, K. T., & Nickerson, J. A. (2013). Microfoundations of strategic problem formulation. Strategic Management Journal, 34(2), 197–214.
  • Baer, M., Leenders, R. T. A., Oldham, G. R., & Vadera, A. K. (2010). Win or lose the battle for creativity: The power and perils of intergroup competition. Academy of Management Journal, 53(4), 827–845.
  • Baldwin, C., & von Hippel, E. (2011). Modeling a paradigm shift: From producer innovation to user and open collaborative innovation. Organization Science, 22(6), 1399–1417.
  • Bayus, B. L. (2013). Crowdsourcing new product ideas over time: An analysis of the Dell IdeaStorm community. Management Science, 59(1), 226–244.
  • Becker, J., Porter, E., & Centola, D. (2018). The wisdom of partisan crowds. Proceedings of the National Academy of Sciences, 116(22), 10717–10722.
  • Beretta, M. (2019). Idea selection in web-enabled ideation systems. Journal of Product Innovation Management, 36(1), 5–23.
  • Blohm, I., Zogaj, S., Bretschneider, U., & Leimeister, J. M. (2018). How to manage crowdsourcing platforms effectively? California Management Review, 60(2), 122–149.
  • Bogers, M., & West, J. (2012). Managing distributed innovation: Strategic utilization of open and user Innovation. Creativity and Innovation Management, 21(1), 61–75.
  • Boudreau, K. J. (2012). Let a thousand flowers bloom? Growing an applications software platform and the rate and direction of innovation. Organization Science, 23(5), 1409–1427.
  • Boudreau, K. J., Brady, T., Ganguli, I., Gaule, P., Guinan, E., Hollenberg, T., & Lakhani, K. R. (2017). A field experiment on search costs and the formation of scientific collaborations. Review of Economics and Statistics, 99(4), 565–576.
  • Boudreau, K. J., Lacetera, N., & Lakhani, K. R. (2011). Incentives and problem uncertainty in innovation contests: An empirical analysis. Management Science, 57(5), 843–863.
  • Boudreau, K. J., & Lakhani, K. R. (2009). How to manage outside innovation. MIT Sloan Management Review, 50(4), 69–76.
  • Brunt, L., Lerner, J., & Nicholas, T. (2012). Inducement prizes and innovation. The Journal of Industrial Economics, 60(4), 657–696.
  • Burgelman, R. A. (2002). Strategy as vector and the inertia of coevolutionary lock-in. Administrative Science Quarterly, 47(2), 325–357.
  • Camacho, N., Nam, H., Kannan, P. K., & Stremersch, S. (2019). Tournaments to crowdsource innovation: The role of moderator feedback and participation intensity. Journal of Marketing, 83(2), 138–157.
  • Cattani, G., Ferriani, S., & Lanza, A. (2017). Deconstructing the outsider puzzle: The legitimation journey of novelty. Organization Science, 28(6), 965–992.
  • Chai, S., & Menon, A. (2019). Breakthrough recognition: Bias against novelty and competition for attention. Research Policy, 48(3), 733–747.
  • Chan, K. W., Li, S. Y., & Zhu, J. J. (2015). Fostering customer ideation in crowdsourcing community: The role of peer-to-peer and peer-to-firm interactions. Journal of Interactive Marketing, 31, 42–62.
  • Chatterji, A. K., & Fabrizio, K. R. (2014). Using users: When does external knowledge enhance corporate product innovation? Strategic Management Journal, 35(10), 1427–1445.
  • Chesbrough, H. W. (2003). Open innovation: The new imperative for creating and profiting from technology. Harvard Business School Press.
  • Chesbrough, H. W., Vanhaverbeke, W., & West, J. (2006). Open innovation: Researching a new paradigm. Oxford University Press.
  • Christensen, K., Nørskov, S., Frederiksen, L., & Scholderer, J. (2017). In search of new product ideas: Identifying ideas in online communities by machine learning and text mining. Creativity and Innovation Management, 26(1), 17–30.
  • Csaszar, F. (2018). Limits to the wisdom of the crowd in idea selection. In J. Joseph, O. Baumann, R. Burton, & K. Srikanth (Eds.), Organization design (pp. 275–298). Emerald.
  • Dahlander, L., & Gann, D. M. (2010). How open is innovation? Research Policy, 39(6), 699–709.
  • Dahlander, L., Jeppesen, L. B., & Piezunka, H. (2020). How organizations manage crowds: Define, broadcast, attract, and select. In J. Sydow & H. Berends (Eds.), Managing inter-organizational collaborations: Process views (pp. 239–270). Emerald.
  • Dahlander, L., O’Mahony, S., & Gann, D. M. (2016). One foot in, one foot out: How does individuals’ external search breadth affect innovation outcomes? Strategic Management Journal, 37(2), 280–302.
  • Dahlander, L., & Piezunka, H. (2014). Open to suggestions: How organizations elicit suggestions through proactive and reactive attention. Research Policy, 43(5), 812–827.
  • Dahlander, L., & Piezunka, H. (2020). Why crowdsourcing fails. Journal of Organizational Design, 9, 24.
  • Deller, C., & Sandino, T. (2020). Effects of a tournament incentive plan incorporating managerial discretion in a geographically dispersed organization. Management Science, 66(2), 911–931.
  • Erat, S., & Krishnan, V. (2012). Managing delegated search over design spaces. Management Science, 58(3), 606–623.
  • Fang, T. P., Wu, A., & Clough, D. R. (2021). Platform diffusion at temporary gatherings: Social coordination and ecosystem emergence. Strategic Management Journal, 42(2), 233–272.
  • Faraj, S., von Krogh, G., Monteiro, E., & Lakhani, K. R. (2016). Special section introduction—Online community as space for knowledge flows. Information Systems Research, 27(4), 668–684.
  • Fayard, A.-L., Gkeredakis, E., & Levina, N. (2016). Framing innovation opportunities while staying committed to an organizational epistemic stance. Information Systems Research, 27(2), 302–323.
  • Felin, T., Lakhani, K. R., & Tushman, M. L. (2017). Firms, crowds, and innovation. Strategic Organization, 15(2), 119–140.
  • Fisher, G. (2019). Online communities and firm advantages. Academy of Management Review, 44(2), 279–298.
  • Foss, N., Frederiksen, L., & Rullani, F. (2016). Problem-formulation and problem-solving in self-organized communities: How modes of communication shape project behaviors in the free open-source software community. Strategic Management Journal, 37(13), 2589–2610.
  • Franke, N., Keinz, P., & Klausberger, K. (2013). “Does this sound like a fair deal?”: Antecedents and consequences of fairness expectations in the individual’s decision to participate in firm innovation. Organization Science, 24(5), 1495–1516.
  • Franke, N., Lettl, C., Roiser, S., & Tuertscher, P. (2014, January 1). “Does God play dice?” Randomness vs. deterministic explanations of crowdsourcing success [Paper presentation]. Academy of Management Conference, Philadelphia, PA.
  • Franke, N., Poetz, M. K., & Schreier, M. (2014). Integrating problem solvers from analogous markets in new product ideation. Management Science, 60(4), 1063–1081.
  • Franke, N., & Shah, S. (2003). How communities support innovative activities: An exploration of assistance and sharing among end-users. Research Policy, 32(1), 157–178.
  • Franke, N., & von Hippel, E. (2003). Satisfying heterogeneous user needs via innovation toolkits: The case of Apache security software. Research Policy, 32(7), 1199–1215.
  • Frey, B. S., & Gallus, J. (2017). Honours versus money: The economics of awards. Oxford University Press.
  • Frey, K., Lüthje, C., & Haag, S. (2011). Whom should firms attract to open innovation platforms? The role of knowledge diversity and motivation. Long Range Planning, 44(5–6), 397–420.
  • Fuchs, C., Sting, F., Schlickel, M., & Alexy, O. (2019). The ideator’s bias: How identity-induced self-efficacy drives overestimation in employee-driven process innovation. Academy of Management Journal, 62(5), 1498–1522.
  • Füller, J., Bartl, M., Ernst, H., & Mühlbacher, H. (2006). Community based innovation: How to integrate members of virtual communities into new product development. Electronic Commerce Research, 6(1), 57–73.
  • Füller, J., Jawecki, G., & Mühlbacher, H. (2007). Innovation creation by online basketball communities. Journal of Business Research, 60(1), 60–71.
  • Gallus, J. (2017). Fostering public good contributions with symbolic awards: A large-scale natural field experiment at Wikipedia. Management Science, 63(12), 3999–4015.
  • Gallus, J., & Frey, B. S. (2016). Awards: A strategic management perspective. Strategic Management Journal, 37(8), 1699–1714.
  • Gawer, A., & Cusumano, M. A. (2002). Platform leadership: How Intel, Microsoft, and Cisco drive industry innovation. Harvard Business School Press.
  • Girotra, K., Terwiesch, C., & Ulrich, K. T. (2010). Idea generation and the quality of the best idea. Management Science, 56(4), 591–605.
  • Greve, H. R., & Taylor, A. (2000). Innovations as catalysts for organizational change: Shifts in organizational cognition and search. Administrative Science Quarterly, 45(1), 54–80.
  • Hannah, D. P., & Eisenhardt, K. M. (2018). How firms navigate cooperation and competition in nascent ecosystems. Strategic Management Journal, 39(12), 3163–3192.
  • Harhoff, D., Henkel, J., & von Hippel, E. (2003). Profiting from voluntary information spillovers: How users benefit by freely revealing their innovations. Research Policy, 32(10), 1753–1769.
  • Jeppesen, L. B., & Frederiksen, L. (2006). Why do users contribute to firm-hosted user communities? The case of computer-controlled music instruments. Organization Science, 17(1), 45–66.
  • Jeppesen, L. B., & Lakhani, K. R. (2010). Marginality and problem-solving effectiveness in broadcast search. Organization Science, 21(5), 1016–1033.
  • Jung, O. S., Blasco, A., & Lakhani, K. R. (2018). Innovation contest: Effect of perceived support for learning on participation. Health Care Management Review, 45(3), 255–266.
  • Kane, G. C., & Ransbotham, S. (2016). Content as community regulator: The recursive relationship between consumption and contribution in open collaboration communities. Organization Science, 27(5), 1258–1274.
  • Kaplan, S., & Tripsas, M. (2008). Thinking about technology: Applying a cognitive lens to technical change. Research Policy, 37(5), 790–805.
  • Katila, R., & Chen, E. L. (2008). Effects of search timing on innovation: The value of not being in sync with rivals. Administrative Science Quarterly, 53(4), 593–625.
  • Katila, R., Thatchenkery, S., Christensen, M. Q., & Zenios, S. (2017). Is there a doctor in the house? Expert product users, organizational roles, and innovation. Academy of Management Journal, 60(6), 2415–2437.
  • Keum, D. D., & See, K. E. (2017). The influence of hierarchy on idea generation and selection in the innovation process. Organization Science, 28(4), 653–669.
  • Kneeland, M. K., Schilling, M. A., & Aharonson, B. S. (2020). Exploring uncharted territory: Knowledge search processes in the origination of outlier innovation. Organization Science, 31(3), 535–557.
  • Kuppuswamy, V., & Bayus, B. L. (2013). Crowdfunding creative ideas: The dynamics of project backers in Kickstarter. SSRN Electronic Journal, 5, 1–37.
  • Lakhani, K. R., Jeppesen, L. B., Lohse, P. A., & Panetta, J. A. (2007). The value of openness in scientific problem solving (Harvard Business School Working Paper No. 07-050). Harvard Business School.
  • Lakhani, K. R., & Lonstein, E. (2011). InnoCentive.com (A). Harvard Business School case.
  • Lakhani, K. R., & Wolf, R. (2005). Why hackers do what they do: Understanding motivation and effort in free/open source software projects. In J. Feller, B. Fitzgerald, S. Hissam, & K. R. Lakhani (Eds.), Perspectives on free and open source software (pp. 3–23). MIT Press.
  • Laursen, K., & Salter, A. (2006). Open for innovation: The role of openness in explaining innovation performance among U.K. manufacturing firms. Strategic Management Journal, 27(2), 131–150.
  • Lavie, D. (2007). Alliance portfolios and firm performance: A study of value creation and appropriation in the U.S. software industry. Strategic Management Journal, 28(12), 1187–1212.
  • Lerner, J., & Tirole, J. (2002). Some simple economics of open source. The Journal of Industrial Economics, 50(2), 197–234.
  • Levina, N., & Fayard, A.-L. (2018). Tapping into diversity through open innovation platforms: The emergence of boundary-spanning practices. In C. L. Tucci, A. Afuah, & G. Viscusi (Eds.), Creating and capturing value through crowdsourcing (pp. 204–235). Oxford University Press.
  • Levinthal, D. A. (1997). Adaptation on rugged landscapes. Management Science, 43(7), 934–950.
  • Lifshitz-Assaf, H. (2018). Dismantling knowledge boundaries at NASA: The critical role of professional identity in open innovation. Administrative Science Quarterly, 63(4), 746–782.
  • Lifshitz-Assaf, H., Lebovitz, S., & Zalmanson, L. (2020). Minimal and adaptive coordination: How hackathons’ projects accelerate innovation without killing it. Academy of Management Journal. Advance online publication.
  • Lopez-Vega, H., Tell, F., & Vanhaverbeke, W. (2016). Where and how to search? Search paths in open innovation. Research Policy, 45(1), 125–136.
  • Ma, M., & Agarwal, R. (2007). Through a glass darkly: Information technology design, identity verification, and knowledge contribution in online communities. Information Systems Research, 18(1), 42–67.
  • Majchrzak, A., & Malhotra, A. (2013). Towards an information systems perspective and research agenda on crowdsourcing for innovation. The Journal of Strategic Information Systems, 22(4), 257–268.
  • Majchrzak, A., & Malhotra, A. (2020). Unleashing the crowd: Collaborative solutions to wicked business and societal problems (1st ed.). Palgrave Macmillan.
  • Malhotra, A., & Majchrzak, A. (2014). Managing crowds in innovation challenges. California Management Review, 56(4), 103–123.
  • Malhotra, A., Majchrzak, A., Kesebi, L., & Looram, S. (2017). Developing innovative solutions through internal crowdsourcing. MIT Sloan Management Review, 58(4), 73–79.
  • March, J. G. (1991). Exploration and exploitation in organizational learning. Organization Science, 2(1), 71–87.
  • Mollick, E. (2014). The dynamics of crowdfunding: An exploratory study. Journal of Business Venturing, 29(1), 1–16.
  • Mollick, E., & Nanda, R. (2016). Wisdom or madness? Comparing crowds with expert evaluation in funding the arts. Management Science, 62(6), 1533–1553.
  • Moon, J. Y., & Sproull, L. S. (2008). The role of feedback in managing the Internet-based volunteer work force. Information Systems Research, 19(4), 494–515.
  • Moser, P., & Nicholas, T. (2013). Prizes, publicity and patents: Non-monetary awards as a mechanism to encourage innovation. The Journal of Industrial Economics, 61(3), 763–788.
  • Nagaraj, A., & Piezunka, H. (2021). How competition affects contributions to open source platforms: Evidence from OpenStreetMap and Google Maps (Harvard Business School Working Paper). Harvard Business School.
  • Piezunka, H., Aggarwal, V. A., & Posen, H. E. (2021). The aggregation–learning trade-off. Organization Science. Advance online publication.
  • Piezunka, H., & Dahlander, L. (2015). Distant search, narrow attention: How crowding alters organizations’ filtering of suggestions in crowdsourcing. Academy of Management Journal, 58(3), 856–880.
  • Piezunka, H., & Dahlander, L. (2019). Idea rejected, tie formed: Organizations’ feedback on crowdsourced ideas. Academy of Management Journal, 62(2), 503–530.
  • Poetz, M. K., & Schreier, M. (2012). The value of crowdsourcing: Can users really compete with professionals in generating new product ideas? Journal of Product Innovation Management, 29(2), 245–256.
  • Powell, W. W. (2017). A sociologist looks at crowds: Innovation or invention? Strategic Organization, 15(2), 289–297.
  • Powell, W. W., Koput, K. W., & Smith-Doerr, L. (1996). Interorganizational collaboration and the locus of innovation: Networks of learning in biotechnology. Administrative Science Quarterly, 41(1), 116–145.
  • Puranam, P., Alexy, O., & Reitzig, M. (2014). What’s “new” about new forms of organizing? Academy of Management Review, 39(2), 162–180.
  • Ray, S., Kim, S. S., & Morris, J. G. (2014). The central role of engagement in online communities. Information Systems Research, 25(3), 528–546.
  • Reitzig, M., & Maciejovsky, B. (2015). Corporate hierarchy and vertical information flow inside the firm – a behavioral view. Strategic Management Journal, 36(13), 1979–1999.
  • Reitzig, M., & Sorenson, O. (2013). Biases in the selection stage of bottom-up strategy formulation. Strategic Management Journal, 34(7), 782–799.
  • Ren, Y. Q., Harper, F. M., Drenner, S., Terveen, L., Kiesler, S., Riedl, J., & Kraut, R. E. (2012). Building member attachment in online communities: Applying theories of group identity and interpersonal bonds. MIS Quarterly, 36(3), 841–864.
  • Rietveld, J., & Schilling, M. A. (2021). Platform competition: A systematic and interdisciplinary review of the literature. Journal of Management, 47(6), 1528–1563.
  • Rivkin, J. W., & Siggelkow, N. (2003). Balancing search and stability: Interdependencies among elements of organizational design. Management Science, 49(3), 290–311.
  • Rosenkopf, L., & Almeida, P. (2003). Overcoming local search through alliances and mobility. Management Science, 49(6), 751–766.
  • Schilling, M. A., & Phelps, C. C. (2007). Interfirm collaboration networks: The impact of large-scale network structure on firm innovation. Management Science, 53(7), 1113–1126.
  • Scotchmer, S. (2004). Innovation and incentives. MIT Press.
  • Shah, S. K. (2006). Motivation, governance, and the viability of hybrid forms in open source software development. Management Science, 52(7), 1000–1014.
  • Shah, S. K., & Nagle, F. (2020). Why do user communities matter for strategy? Strategic Management Review, 1(2), 305–353.
  • Shah, S. K., & Tripsas, M. (2007). The accidental entrepreneur: The emergent and collective process of user entrepreneurship. Strategic Entrepreneurship Journal, 1(1), 123–140.
  • Shrestha, Y. R., He, V. F., Puranam, P., & von Krogh, G. (2021). Algorithm supported induction for building theory: How can we use prediction models to theorize? Organization Science, 32(3), 856–880.
  • Spaeth, S., von Krogh, G., & He, F. (2015). Research note—Perceived firm attributes and intrinsic motivation in sponsored open source software projects. Information Systems Research, 26(1), 224–237.
  • Stewart, K. J., & Gosain, S. (2006). The impact of ideology on effectiveness in open source software development teams. MIS Quarterly, 30(2), 291–314.
  • Sutton, R. I., & Hargadon, A. (1996). Brainstorming groups in context: Effectiveness in a product design firm. Administrative Science Quarterly, 41(4), 685–718.
  • Terwiesch, C., & Xu, Y. (2008). Innovation contests, open innovation, and multiagent problem solving. Management Science, 54(9), 1529–1543.
  • Tushman, M. L., & O’Reilly, C. A. (1996). Ambidextrous organizations: Managing evolutionary and revolutionary change. California Management Review, 38(4), 8–29.
  • von Hippel, E. (1986). Lead users: A source of novel product concepts. Management Science, 32(7), 791–805.
  • von Hippel, E. (2005). Democratizing innovation. MIT Press.
  • von Hippel, E., & von Krogh, G. (2003). Open source software and the “private-collective” innovation model: Issues for organization science. Organization Science, 14(2), 209–223.
  • Winter, S. G., Cattani, G., & Dorsch, A. (2007). The value of moderate obsession: Insights from a new model of organizational search. Organization Science, 18(3), 403–419.
  • Woolley, J., Madsen, T. L., & Sarangee, K. (2015, June 15–17). Crowdsourcing or expertsourcing: Building and engaging online communities for innovation? [Paper presentation]. DRUID 2015 Conference, Rome, Italy.
  • Zhu, F., & Iansiti, M. (2012). Entry into platform-based markets. Strategic Management Journal, 33(1), 88–106.