Everything community? Destructive processes in communities of crowdsourcing competitions

Rita Faullant (University of Southern Denmark, Odense, Denmark; Institute for Innovation Management and Entrepreneurship, Alpen-Adria-Universität Klagenfurt, Klagenfurt, Austria)
Guido Dolfus (Institute for Innovation Management and Entrepreneurship, Alpen-Adria-Universität Klagenfurt, Klagenfurt, Austria)

Business Process Management Journal

ISSN: 1463-7154

Article publication date: 6 November 2017


Abstract

Purpose

Virtual crowdsourcing initiatives, and in particular crowdsourcing competitions, have become a promising means of harnessing users’ creativity to help corporate innovation. To date, research has tended to focus on the outcome of the competition, i.e. on the creative solution. There is, however, a lack of understanding in such crowdsourcing environments of the creative process itself and the influence of social interaction on the platform during this process. The paper aims to discuss these issues.

Design/methodology/approach

The authors conducted a series of qualitative interviews with participants from a major European crowdsourcing platform. The platform acts as an intermediary between companies and the crowd, and has launched more than 370 idea competitions.

Findings

The results suggest that the interactions between participants are not exclusively positive. Below the surface, destructive processes also emerge, provoked by the fierce competition among contestants for prizes and for a position in the Top Innovator lists. Such destructive behavior includes bullying of successful contestants, excessive use of like-functions among befriended contestants, and mutual donation of prize money among in-group members.

Practical implications

Negative social interaction among contestants of crowdsourcing communities can potentially threaten the platform provider’s business model. Managers of crowdsourcing platforms should engage in the development of strong social norms explicitly disapproving destructive behavior.

Originality/value

This study is the first to investigate in detail the phase of idea generation on crowdsourcing platforms, and the nature and impact of social interactions among contestants.

Citation

Faullant, R. and Dolfus, G. (2017), "Everything community? Destructive processes in communities of crowdsourcing competitions", Business Process Management Journal, Vol. 23 No. 6, pp. 1108-1128. https://doi.org/10.1108/BPMJ-10-2016-0206

Publisher: Emerald Publishing Limited

Copyright © 2017, Rita Faullant, Guido Dolfus

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Introduction

Crowdsourcing is a form of open innovation that has become very popular in the past few years. Jeff Howe, a contributing editor at WIRED magazine, coined the term crowdsourcing as a combination of crowd and outsourcing to represent “[…] the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call” (Howe, 2006). For new product development, crowdsourcing has been harnessed as a way to broadcast design problems to people all over the world in the hope of receiving creative solutions and innovative ideas from motivated participants (Estellés-Arolas and Gonzàlez-Ladrón-de-Guevara, 2012; Howe, 2006). Crowdsourcing enlarges the knowledge base of a company by integrating ideas and knowledge from outside. A process like crowdsourcing that brings knowledge into a firm from outside sources is known as a form of in-bound open innovation (Chesbrough et al., 2006; Dahlander and Gann, 2010) or as an outside-in process as defined by Gassmann and colleagues (Gassmann and Enkel, 2004). As a conduit for knowledge from the outside to enter the firm and become distributed and decentralized (Jeppesen and Lakhani, 2010), crowdsourcing enables access to a multitude of heterogeneous knowledge sources. Crossing the firm’s boundaries thus allows the firm to invite and exploit knowledge residing externally, and to broaden the firm’s locus of knowledge (Chesbrough, 2003; Palacios et al., 2016).

Crowdsourcing tasks may be performed collaboratively through peer production, but may also be undertaken by individuals working alone (Afuah and Tucci, 2012; Saxton et al., 2013). In the latter case, crowdsourcing takes the form of contests or idea competitions in which participants compete to win a prize for the best solution (King and Lakhani, 2013). An idea competition is defined as an invitation issued by a private or public organizer to the general public or a targeted group to submit solutions to a challenge within a certain allowed time period (Bullinger et al., 2010; Ebner et al., 2009). Usually a prize is awarded to the winning contribution. In the literature, the term idea competition is often used synonymously with other terms such as design competition, innovation or ideas contest, or research tournament.

So far, research has mainly highlighted the positive aspects that crowdsourcing activities offer for new product development. The goal of our present research is to look deeper into what actually happens in idea competitions during the phase of idea generation on virtual platforms. In particular, we are interested in the micro-interactions that occur on such platforms and how they affect users’ involvement and engagement with idea generation. We opted for a qualitative approach and interviewed participants involved with one of the largest competition-based crowdsourcing platforms in Europe. In this paper, we first conceptualize the typical set-up of an idea competition from a participant’s perspective and divide it into three phases: the entry decision to participate in a crowdsourcing competition, the dynamic process of solution generation, and the consequences of participation in crowdsourcing competitions. Based on this structure, we review previous literature, and then use our empirical study to focus on the social interactions among participants of crowdsourcing competitions in phase 2. The findings suggest that, below the positive, shining surface, crowdsourcing communities might be much more competitive than expected, with substantial negative interaction between participants.

Theoretical background and related work

Crowdsourcing-based idea competitions

Idea competitions in general, or research tournaments, have always played an important role in the industrial development of nations. Participants in such tournaments and competitions have delivered many groundbreaking innovations. One example is the design of a chronometer suitable for accurately determining longitude at sea, produced in response to a competition launched by the British Parliament in 1714 (Jeppesen and Lakhani, 2010). Another is the development of the steam locomotive, spurred on by a research tournament sponsored by the Liverpool and Manchester Railway in 1829. More recently, research tournaments have been organized in a variety of areas such as high-tech fighter aircraft, digital television, and the first manned space mission to Mars (Fullerton et al., 1999).

Today crowdsourcing-based idea competitions are among the most popular and promising forms of open innovation. Due to powerful communication and internet technologies, idea competitions are flourishing on the web. Recent examples of businesses launching online idea competitions are OSRAM, a leading lighting manufacturer, asking participants for new and consumer-oriented ideas relating to LED light; Fujitsu-Siemens seeking ideas for “IT Services for Tomorrow’s Data Center”; and Swarovski looking for new jewelry designs.

Besides these directly launched initiatives, businesses can also opt for posting their problems on problem broadcasting platforms such as InnoCentive. Such platforms act as intermediaries that connect, translate, and facilitate the flow of knowledge between seekers and solvers (Rippa et al., 2016). Firms are able to present their problems to experts from all over the world in the hope of receiving creative solutions (Terwiesch and Xu, 2008). Pharmaceutical and chemical companies such as BASF and Eli Lilly have successfully used platforms for problem broadcasting (Lakhani and Jeppesen, 2007). The nature of participants’ contributions varies according to the challenge posed, with submissions sought ranging from raw idea formulation through design and concept elaboration to fully functional solutions (Bullinger et al., 2010). Among the various established crowdsourcing intermediaries in current use, the competition-based model can be seen as the prevailing model (Colombo et al., 2013). Solvers individually submit their solutions to the posted problem, and the winning solution is determined after all submissions have been screened. In contrast, potential solvers on competence-based intermediary platforms submit a bid to the company seeking ideas, putting forward a proposal of what they could deliver. The company then selects the most capable bidder and commissions him or her to elaborate the solution.

A process model of crowdsourcing competitions from the participant’s perspective

Crowdsourcing competitions have attracted considerable research interest, leading to significant contributions published in a broad range of journals. More recently, some scholars have begun systematically to collect and present these findings in overview articles, e.g. on definitions of crowdsourcing (Estellés-Arolas and Gonzàlez-Ladrón-de-Guevara, 2012), or on the impact of crowdsourcing on company performance (Bengtsson et al., 2015; Xu et al., 2015). Although some specific aspects, like users’ motivation to participate in crowdsourcing initiatives, are well explored, we identify a lack of research on the social interactions participants experience during their participation. There is to date no model that explicitly considers the overall process of crowdsourcing competitions from a contestant’s perspective. We argue that such a model is especially helpful for understanding why and how participants are attracted to such contests and motivated to contribute, and also for detecting where academic research still lacks knowledge. We focus on crowdsourcing-based idea competitions that also allow for social interaction among participants, and therefore entail both collaborative and competitive elements. We introduce a simple process model, which maps crowdsourcing competitions from the contestants’ perspective. We conceptualize the model with three phases and illustrate it in Figure 1.

Phase 1 – entry into crowdsourcing competitions: the process of idea generation is assumed to start before the actual idea development, i.e. the decision to participate in an idea competition has to be accounted for in order to understand how creative output develops. The first phase, therefore, is the decision phase, with the essential question of why users decide to participate in crowdsourcing competitions. The extant literature investigates this phase, and especially users’ motivation, in abundance. Numerous studies exploring motives for contribution suggest that both intrinsic and extrinsic factors provide motivating forces for participation (Füller, 2010; Malone et al., 2010). Such factors include pure enjoyment of the task itself (Lakhani and Wolf, 2005) or altruism (Hennig-Thurau et al., 2004; Lakhani and Hippel, 2003). Engaging in social interactions with like-minded peers (Füller et al., 2006; Kosonen et al., 2014) and recognition by peers (Boons et al., 2015) or by the commissioning company (Jeppesen and Frederiksen, 2006) have also been shown to be important motivators. Besides motivational forces, other psychological factors such as personality dispositions (Faullant and Holzmann, 2016) and fairness anticipations (Franke et al., 2013) have been shown to influence the decision to participate.

Phase 2 – dynamic process of solution creation: the second and main phase in our model is the time span for which the crowdsourcing competition is scheduled. Many virtual idea competitions are set up as interactive virtual platforms in Web 2.0 style, allowing for multiple possibilities of interaction between the competing participants. This phase of idea generation is therefore described as a dynamic process. The stimuli provided by the virtual platform itself are complemented by individual-psychological factors resulting from interaction with like-minded peers. In the present research, we investigate the processes that occur during this phase in more detail, and therefore devote a new section to it after introducing phase 3 of the full model.

Phase 3 – consequences of participation: the last phase in the model is characterized by the conclusion of the crowdsourcing competition. In this phase, the jury announces the winners of the competition, and thereby makes real the hopes or fears of participants. An essential aspect in this phase is that the jury decision process has the potential to create frustration among participants if it is not perceived as fair. Perceptions of unfairness can undo all the benefits a company has gained from virtual idea competitions (Faullant et al., 2017). Apart from fairness considerations, the fact that there are only a few winners (or even only one) may discourage people from engaging in further crowdsourcing contests. Longitudinal studies from the Dell innovation community report that lack of success in idea generation (i.e. suggestions that were not implemented) eventually caused contributors to become inactive (Yan et al., 2014), while successful contributors were willing to try to repeat their initial success, though often with less varied ideas (Bayus, 2013). Participants may, however, also gain personal satisfaction from having completed a task, even if they have not been selected as winners. Studies have shown that crowdsourcing initiatives, if executed well, have the potential to encourage positive post-contest behavior independently of winning or losing (Füller et al., 2011), heightened product interest (Ogawa and Piller, 2006; Schlosser, 2003), and positive loyalty intentions toward the company (Fuchs and Schreier, 2011; Nambisan and Baron, 2007).

The process of solution creation in phase 2

Crowdsourcing competitions are usually set up as web-based interactive idea platforms (Estellés-Arolas and Gonzàlez-Ladrón-de-Guevara, 2012). Depending on the design, such platforms can provoke and promote intense interaction among participants. Participants can vote for the idea or design they like best, discuss various topics by leaving comments on other users’ pin boards, and compete for prizes.

As highlighted above, individuals often engage in online co-creation activities not only to contribute content, but also to build social relationships. Often these platforms establish a sense of community among participants through the use of social software applications (Algesheimer et al., 2005; Bagozzi and Dholakia, 2006). Trust in the community and in the hosting company further supports knowledge-sharing intentions during crowdsourcing activities (Kosonen et al., 2013). For participants, being part of a crowdsourcing contest community is associated both with experiencing a sense of community and at the same time with being competitors trying to provide a convincing solution to the company in order to win a prize (Hutter et al., 2011). Comparing collaborative vs competitive crowdsourcing communities, Bretschneider et al. (2012) found that in competitive settings collaboration is less pronounced than in collaborative settings. Competition in general is seen as beneficial for innovations and for technological and societal progress (Bullinger et al., 2010). The competitive character in crowdsourcing competitions should motivate participants to try harder and therefore be beneficial for the overall outcome of the crowdsourcing initiative.

Since contestants post their ideas online for evaluation, most ideas receive peer feedback and undergo further elaboration. However, the social interactions shaping this process are largely unknown. In existing research, the process of idea development is largely a black box. If they have investigated social interactions among contestants in crowdsourcing communities at all, researchers have up to now largely fallen back on meta-data or log-file data, by analyzing the volume of incoming and outgoing interactions (e.g. Bayus, 2013; Hutter et al., 2011), and determining positions in a network. Social network analysis has been a frequent method for doing this (e.g. Wa Chan et al., 2015).

To date, however, there is no research investigating in depth the social processes individuals are involved in on the platform during the contest time. We know little about the reasons for and the content of the feedback that contestants give to others, and about the cognitive and affective consequences these messages have for the recipient.

Linking self-determination theory to crowdsourcing competitions

Findings from self-determination theory (Ryan and Deci, 2000, 2002) can help shed some light on the effects of peer feedback on creativity. Self-determination theory states that three human needs predominate in driving the natural desire to grow and develop. First is the need for competence, which refers to the wish of individuals to be effective in their interactions with the social environment and to display their capacities. This need leads people to seek challenges that are optimal for their perceived skills. Second is the need for autonomy, which refers to the basic desire of individuals to feel that they determine their own behavior, i.e. to perceive no, or little, external control over their actions. Third is the need for relatedness, which refers to the desire to feel connected to others, to feel cared for, and to experience a sense of belonging to other individuals or groups (Vansteenkiste et al., 2010; Deci and Vansteenkiste, 2004). External events contributing to meeting these needs enhance intrinsic motivation and creativity, while events thwarting these needs are found to be detrimental. Studies in other domains have shown that people who perceive their own behavior as largely self-determined are more intrinsically motivated and show longer persistence in their behavior than people with a low perception of self-determination (Vallerand and Bissonnette, 1992; Zuckerman et al., 1978). Feedback can thus have positive or negative consequences for creativity, depending upon whether it is perceived as informational or controlling (Amabile, 1996).

Social interactions during virtual idea competitions are thus likely to have both positive and negative consequences. Perceived competence is likely to be enhanced by positive feedback received from others, while negative feedback would tend to decrease perceived competence. Functions for making comments, casting votes or making evaluations might be a double-edged sword, supporting those who receive positive feedback that enhances their feeling of competence and autonomy, while discouraging those who receive no attention or negative feedback. Furthermore, meeting the need for relatedness might be an important factor in motivating additional contributions to the virtual platform. To date, knowledge on this phase 2 of virtual idea competitions is very limited. With our research, we aim to understand in more detail the social mechanisms that contribute to idea generation on crowdsourcing platforms.

Empirical study

The crowdsourcing platform analyzed is one of the major European CS platforms, having successfully launched more than 370 projects since its foundation. Approximately 25,000 active innovators compete for prizes and reputation, and, so far, more than EUR700,000 has been awarded in prizes. The CS platform acts as an intermediary between organizations and a public crowd. The basic functionality of the platform allows organizations/companies to ask the community to submit solutions to a challenge with the aim of generating new and consumer-oriented ideas or solving company-specific problems.

Design of CS competitions

Crowdsourcing competitions within this CS platform are generally set up in a standardized way. Figure 2 presents the template for each initiative launched via the platform.

The announcement of the competition provides information about the company steering the competition and thus owning the announcement. A header describes the topic of the competition, followed by a short description summarizing the problem that requires a solution. Important notes (optional) can be published by the moderator in order to guide the crowd in the right direction. At the bottom, key features such as the total reward to be distributed to the winning ideas (usually ranging from EUR1,000 to EUR2,000), accepted languages for ideas, comments related to the competition, and the duration of the competition (usually four to eight weeks) are announced. Each competition has a moderator, usually an employee of the company owning the competition. The moderator can mark some ideas as “interesting,” which acts as an indicator to the crowd of the extent to which suggested solutions are heading in an appropriate direction. Usually a company’s internal jury team rates the ideas and the moderator publishes the winning ideas. Typically, five to ten ideas are rewarded in each competition, which means that the prize money is split between the winning ideas (pro rata).
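To make the template concrete, the following is a minimal sketch, in Python, of the announcement structure described above. All field names and default values are hypothetical illustrations of the elements listed in the text, not the platform’s actual data model.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CompetitionAnnouncement:
    """Hypothetical container for the elements of a competition announcement."""
    owner: str                               # company steering (and owning) the competition
    header: str                              # topic of the competition
    description: str                         # short summary of the problem requiring a solution
    moderator: str                           # usually an employee of the owning company
    important_notes: Optional[str] = None    # optional guidance published by the moderator
    total_reward_eur: int = 1500             # usually EUR1,000 to EUR2,000
    accepted_languages: List[str] = field(default_factory=lambda: ["German", "English"])
    duration_weeks: int = 6                  # usually four to eight weeks
    interesting_idea_ids: List[str] = field(default_factory=list)  # ideas marked "interesting" by the moderator
```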

Contestants’ options for social interaction

The process of idea creation within a CS competition is supported by multiple synchronous communications. Figure 3 illustrates the forms of interaction and communication available to contestants. The platform allows users to create profiles, to send direct messages to all other participants (users, moderators, and the competition initiator), to write comments on published ideas, to comment on published comments, and to give “Likes” to published ideas or comments. Users can network by connecting their profiles to others’ profiles, by inviting other users to establish a connection; and they can donate prize money they have won (or parts of it) to other users (e.g. if the winning idea was based on another user’s idea published in advance).

The platform publishes various lists of “Top Innovators.” The ranking of users in such lists is based on a system of collecting “points” for various actions and interactions. There are three different types of “Top Innovators” listings:

  1. activity and endurance: a contestant’s total number of points collected over the entire duration of membership of the community;

  2. quality and efficiency: the ratio between the total number of ideas submitted by a contestant and the number of award-winning ideas; and

  3. climber of the week: contestants’ number of points collected during the last seven days.

These lists each prominently highlight the 20 Top Innovators in the respective category. Crowdsourcing intermediaries commonly use such lists when seeking to motivate their members to be active and to contribute repeatedly (Schenk and Guittard, 2011). Innovators climb up the lists by collecting points for various actions, as explained in detail in Table I. Points are collected by being active oneself, by posting ideas and comments, but also passively by receiving comments and likes from others. Thus, we distinguish between active and passive activities, sending and receiving communications, respectively (from a contestant’s perspective). The contestants themselves can directly regulate active communication, whereas it is others (other users or the moderator) who determine the amount of the passive communication an individual user receives.
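To make the scoring and ranking mechanics concrete, the following is a minimal sketch in Python of how the three “Top Innovators” lists could be derived from a contestant’s point history. The data structures and helper names are hypothetical; the point formulas for winning and for receiving a donation follow the examples given in Table I, and the “quality and efficiency” ratio reflects one plausible reading of the description above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Callable, List, Tuple

def win_points(prize_won: float, total_prize: float) -> float:
    # Winning an award: share of the competition's total prize money times 1,600
    # (EUR100 out of EUR1,000 -> 160 point units, as in the Table I example).
    return prize_won / total_prize * 1_600

def donation_points(amount_received: float, total_prize: float) -> float:
    # Receiving a donation: share of the total prize money times 800
    # (EUR100 out of EUR1,000 -> 80 point units).
    return amount_received / total_prize * 800

@dataclass
class Contestant:
    name: str
    # (timestamp, point units) pairs for every point-yielding action or interaction
    point_events: List[Tuple[datetime, float]] = field(default_factory=list)
    ideas_submitted: int = 0
    ideas_awarded: int = 0

    def total_points(self) -> float:
        return sum(points for _, points in self.point_events)

    def points_last_seven_days(self, now: datetime) -> float:
        cutoff = now - timedelta(days=7)
        return sum(points for ts, points in self.point_events if ts >= cutoff)

    def award_ratio(self) -> float:
        # "Quality and efficiency": award-winning ideas relative to ideas submitted
        return self.ideas_awarded / self.ideas_submitted if self.ideas_submitted else 0.0

def top_innovators(crowd: List[Contestant],
                   key: Callable[[Contestant], float],
                   n: int = 20) -> List[Contestant]:
    # Each list prominently highlights the 20 highest-ranked contestants
    return sorted(crowd, key=key, reverse=True)[:n]

# The three listings described above, for a hypothetical crowd of contestants:
# top_innovators(crowd, lambda c: c.total_points())                          # activity and endurance
# top_innovators(crowd, lambda c: c.award_ratio())                           # quality and efficiency
# top_innovators(crowd, lambda c: c.points_last_seven_days(datetime.now()))  # climber of the week
```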

Both sending and receiving communications increase the total number of points collected. The more points collected, the higher the contestant’s ranking in the various lists. To show the relative importance of social interaction during a competition, we calculated the average number of points that contestants in the “Top 20” lists collected both actively and passively. In the “Top 20” list “Activity and Endurance,” users collected two-thirds of their points in active ways, whereas in the “Top 20” list “Quality and Efficiency,” more than 80 percent of points were collected passively.
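As an illustration of this calculation, the sketch below (again Python, with a hypothetical event log) splits a contestant’s points into actively and passively collected shares, using the direction of each action as listed in Table I.

```python
# Direction of each point-yielding action, following Table I.
ACTION_DIRECTION = {
    "submit_idea": "active",
    "post_comment": "active",
    "complete_profile": "active",
    "win_competition": "passive",
    "receive_donation": "passive",
    "idea_marked_interesting": "passive",
    "receive_like_on_idea": "passive",
    "receive_like_on_comment": "passive",
}

def active_passive_share(events):
    """events: iterable of (action, point_units) tuples for one contestant."""
    totals = {"active": 0.0, "passive": 0.0}
    for action, points in events:
        totals[ACTION_DIRECTION[action]] += points
    grand_total = sum(totals.values())
    return {k: v / grand_total for k, v in totals.items()} if grand_total else totals

# Hypothetical log for one contestant: a rewarded idea plus a donation received
# far outweigh the points earned through the contestant's own activity.
log = [("submit_idea", 3.0), ("post_comment", 1.4), ("post_comment", 1.4),
       ("win_competition", 160.0), ("receive_donation", 80.0),
       ("receive_like_on_idea", 5.0)]
print(active_passive_share(log))  # roughly 2 percent active, 98 percent passive
```

Averaging such a split across the members of a “Top 20” list yields the proportions reported above.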

Research design

To investigate the social processes that occur during a crowdsourcing idea competition, and to see to what extent these processes impact on subsequent idea generation, nine personal unstructured in-depth interviews were conducted in the period from October 2014 to March 2015. In-depth interviews are used in particular to investigate aspects of behavior in their relevant context (Boyce and Neale, 2006; Djelassi and Decoopman, 2013). The methodology of unstructured interviews in a qualitative interpretive study involves allowing participants to take the lead in conversations conducted in an everyday conversational style and letting them tell their own stories. This is done in order to understand the meanings that interviewees themselves attach to their own experiences. Qualitative and interpretive research are common methods used to obtain a deeper understanding of the implications of behavioral factors in online community groups (Kozinets, 2002).

We recruited interviewees by means of personal messages using the platform’s direct mailing functionality. In addition, snowballing was used to expand the sample and identify additional relevant respondents. We selected the interviewees based on their activity index and duration of membership, including participants who had contributed a lot and others who had not (see Table II). This allows the analysis of users with different activity levels during a competition in order to identify critical incidents occurring during the process of solution creation. Because the interviewees were physically located in different regions in Germany, Switzerland, and Austria, all interviews were conducted over the phone/Lync in a more or less informal manner. All interviews were conducted through unstructured and open-ended questions and were directed by a common interview guide, which instructed interviewers to keep a focus on social interaction and its impact.

The interview guide contained three major topics: motivation to contribute, usage of social interaction channels, and behavior of the crowd. We designed the questions to address not only general usage of the crowdsourcing platform and motivation to participate, but also to focus on the impact of interaction with others as well as noticeable characteristics of communication (both positive and negative). Furthermore, we wanted to know to what extent interviewees experienced a competitive climate within the crowd. As outlined in the theoretical section, based on the self-determination literature we expected to find various social interactions that would have an impact on participants’ motivation and contribution behavior (Amabile, 1996; Ryan and Deci, 2000).

All interviews started with a general discussion about how the interviewees first connected with the platform and their key motivation for participating. Questions concerning the interaction channels used by the various users, their motivation for contributing, how they deal with both positive and negative feedback and its respective impact on further activities, as well as knowledge of potential misuse of various channels of communication were covered during each interview session. The findings obtained during an interview were checked iteratively to ensure that the ad hoc interpretations were accurate. For further analysis, we tape-recorded the interviews and transcribed them verbatim, followed by thematic coding using open coding techniques (grounded theory approach). The open coding resulted in the themes and categories described below.

The analysis, which involved exploring users’ own meanings followed by a process of reflection on, and interpretation of, those meanings, allowed the building of hypotheses regarding the impact of social interaction in crowdsourcing-based idea competitions. The initial descriptive coding resulted in a broad categorization. In a second step, patterns and themes were identified and generalized (see Figure 4), and finally elaborated into an overall picture as described in the following results.

Results

Based on the interviews, we can group social interactions occurring on crowdsourcing platforms into two major categories: collaboration and competition. Virtually all interviewees mentioned both positive and negative interactions with peers on the platform. Other researchers have already shown positive aspects of collaboration on such platforms, so here we will only briefly discuss these results and instead elaborate on competitive and heretofore undescribed destructive social interactions.

Collaboration

Participants on the platform usually submit ideas individually, at least for their initial suggestions. However, the process of further developing ideas is often performed collaboratively, where users support each other. All respondents report positive feelings resulting from receiving positive comments about their ideas and work, even if such feedback is only in the form of a few words. Participants frequently use the comment function, and there is typically a steady flow of incoming and outgoing comments between users. Our respondents report that apart from general comments or short acknowledgments they also receive concrete suggestions for improvement of ideas. Such suggestions allow contributors to elaborate more thoroughly on various aspects of an idea, and can result in redirecting the idea creator’s efforts in an unanticipated direction. Receiving such constructive suggestions for improvement is well appreciated and seems to motivate participants to further engage with the topic. Giving Likes to others’ ideas and receiving Likes from others is another frequently used function. Receiving a Like from peers on the platform is perceived as positive and gives the recipient the feeling of accomplishment, and of being on the right track. One participant said:

This is for sure: positive feedback (of an idea) is always good. It totally motivates me to stay with the thing

(Frank, October 2014).

Competition on the platform and destructive processes

Positive interactions like these are not the only types of activities that occur between participants. Although the interview results show a lot of collaboration occurring on the crowdsourcing platform, interviewees also report competitive actions among members. The interview results reveal that beneath the surface interactions there are destructive processes at play, resulting in misuse of communication channels. Examples range from destructive critiques to the formation of small, supportive yet exclusive, “in-groups.”

The competition inside

In particular, there seems to be strong rivalry among contestants for listings in the “Top Innovator” lists. We learned that a top 20 position is a major motivator for members to participate and to be active. Eight out of nine respondents mention that climbing up the “Top Innovators” list is a substantial motivation for contributing and being active in writing comments. Occupying a top 20 rank is associated with the possibility of future job opportunities in addition to winning competitions. One interviewee said:

I already heard of a guy who got hired by the company that rewarded his idea […] so I wanna get in touch with companies showing my potential by solving problems on that platform

(Silvia, October 2014).

With approximately 25,000 active contestants on the platform and only 60 positions in the “Top Innovators” lists (3×20 positions), it becomes evident that there is fierce competition among members. Further analysis of our interviews led us to the conclusion that there exists a range of practices employed by users in order to promote their own progress into the “Top 20” lists or to maintain their status in the lists. Apparently, the leading “Top 20” innovators form some sort of closed peer group and protect each other. Several respondents reported that they themselves had been subject to, or had witnessed others being subject to, bullying, i.e. members purposely trying to debar others from the community. They had seen others stealing their own or other users’ ideas or making compromising comments. One interviewee said:

I signed up on the platform and my idea was rewarded immediately, so some other guys started bashing my new ideas in the upcoming competitions […] I was totally confused and started to get angry, then disappointed […] After a while I realized that this was done on purpose […] also other users told me that there is a bunch of people helping each other stay at the top of the lists

(Patrizia, October 2014).

Respondents reported that there exist smaller sub-groups whose members make extensive use of networking, e.g. voting massively for ideas from insiders in order to exclude outsiders. All those interviewed admitted that negative and compromising feedback had a negative impact on their engagement during the competition.

Interviewees mentioned two key reasons for using the various channels of social interaction. One was winning competitions or being rewarded for their idea. The other was climbing up the list of top innovators. Those users who aim to climb the list of top innovators in order to gain opportunities seem to follow a systematic strategy such as the one described above. These users try to call attention to themselves by their high ranking in various lists, seeing this as a means to get in touch with companies and eventually to receive job offers.

In-group support

As illustrated above, there are different ways of collecting points in order to become prominent in the “Top Innovators” lists. Winning a competition gives the most points, followed by receiving donations from a peer. Respondents observed that some users systematically donate prize money to other users – usually users to whom they are connected – in order to support them in staying at the top of the lists and to receive the same service in return. (“You scratch my back and I’ll scratch yours.”) One interviewee said:

It is well known among insiders that donating money is a means of mutual support: if you’re doing great at the moment and I’m not, you donate part of your prize money to me, and I will do the same in future for you

(Silvia, October 2014).

Some insiders report that users of a hidden community support each other in staying on the list of top innovators not only by donating prize money to each other, but also by posting multiple likes to the partner’s ideas and comments (supportive Likes). Receiving likes does not result in many points, but it seems to be used as a means of influencing the community as well as the CS moderator. Having many Likes means that an idea attracts more attention, and other users as well as the moderator could be influenced to consider this idea as a highly accepted one.

Sabotage of out-group users

Besides the direct support of allies, people also mention an indirect form of support that is more cannibalizing. Interviewees report that negative feedback is given systematically to the ideas of users that are close to breaking into the “Top 20” lists. As the current “Top 20” users (usually part of a hidden community) see these users as a threat, they try to compromise the ideas published by the “rising stars” by systematically posting negative comments or even bashing ideas. As an example, we retrieved this comment:

Your Idea has nothing to do with the topic – it’s related to telecommunication but senseless for banking industry […]. Your idea is just rubbish.

Respondents expressed their concern that the aim of negative comments was to reduce the acceptance of the ideas from “out-group users” in order to stop their ideas from gaining support and ultimately to prevent changes in the current “Top 20.” One interviewee said:

Some users just don’t want to see you rising up […] so they just start knocking out your ideas

(Frank, October 2014).

The majority of respondents confirm this behavior. Two of the interviewees had been personally impacted by such “defensive action.” Both report that they were bullied and their ideas compromised in exactly the ways mentioned. This had had an impact on the quantity of ideas posted. One of those affected had subsequently reduced the effort put into idea competitions. Besides giving direct negative feedback, some of the interviewees reported that various users repeatedly copied the ideas of others as soon as those ideas received Likes from the crowd or positive attention from the moderator (i.e. a positive comment or being marked as an “interesting idea”). The following statement, posted on the platform, provides an example:

R […] again took the biscuit. She dealt with her topic to a disgusting degree […] and now she’s copying my ideas from other projects. This is just annoying!!!

Self-promotion

The platform enables users to build relationships by connecting their profiles to others. The direct result of confirming a contact request is that all the activities of contacts are highlighted in the profile entry page (in the section “Ideas from contacts”). This is therefore a legitimate way of promoting one’s own ideas: making them directly visible to connected users can result in more likes. As reported by the majority of interviewees, some users make new contacts for the sole purpose of gaining more points through receiving likes for their own ideas. One interviewee said:

It is so obvious that they just wanna highlight their profiles and activities to receive likes […] it’s not networking at all, but all about promotion

(Silvia, October 2014).

Some respondents reported that as soon as they registered on the platform, existing users who were, and still are, ranked in the “Top 20” lists swamped them with contact requests.

Consequences of social interactions

Based on the interviews, we conclude that social interactions on crowdsourcing platforms do directly affect contestants’ propensity to contribute in future. The affective valence of social interaction determines the direction of impact, whether decreasing or increasing subsequent efforts.

Positive effects resulting from social interaction

Although some CS users seem to misuse various channels of social interaction, all respondents experience positive effects from receiving positive communication. The majority report an increase in motivation when receiving positive feedback on an idea or on a comment posted. All but one respondent confirmed experiencing positive feelings from receiving likes. They feel supported by receiving likes for their ideas, and benchmark their own ideas against others’ based on the number of likes received (within a competition it is possible to sort all ideas by the number of likes received, which results in a kind of high-score list). Positive comments have a similar impact to likes as regards users’ emotions. The majority of people interviewed report a positive effect on their motivation to stay active within the particular competition. Even if comments on ideas are not exclusively positive, but reflect some doubts regarding the successful implementation of a formulated idea, respondents take that feedback as a stimulus to continue thinking about an idea, which could then result in a better (new) solution or an adaptation of the initial idea. This indicates that constructive feedback results in an increase of motivation. These findings are in line with the self-determination literature: positive social interaction and feedback elicit positive emotions. They give participants a sense of accomplishment and make them feel competent in what they are doing (Ryan and Deci, 2000). Our research also shows that constructive feedback is predominantly perceived as positive, which is in line with the creativity literature (Amabile, 1996). Such feelings of competence and creativity increase participants’ intrinsic motivation to become better and encourage them to contribute further and to stay active on the platform. A lively platform is vital to the overall system of crowdsourcing-based idea competitions (Kohler et al., 2011).

All interviewees experienced a collaborative atmosphere where users usually tried to improve published ideas by using the comment functionality. Surprisingly, even negative feedback could elicit positive effects. Two interviewees reported watching virtual battles fought by various users in the public comments space. By reading all the comments in the “virtual battle,” they stayed longer on the platform and engaged longer with the topic, which had a positive impact on their own idea generation. One interviewee said:

Well, I am just amused by the girly fights. But somehow it keeps me thinking about the ideas and my own ideas as well. For sure, it keeps me thinking – in a positive way

(Thorsten, R., January 2015).

Negative effects resulting from social interaction

Our findings demonstrate that social interaction and peer-to-peer feedback do not always encourage users to increase the quantity and quality of ideas generated. Two interviewees told of their experience of being at the receiving end of bullying. One had – as a consequence – already deleted profiles on other crowdsourcing platforms and is thinking about doing the same on the current platform as well. The other confirmed that having his posted ideas “bashed” is the key reason why his motivation and the quantity of ideas he has submitted have decreased. Most of the other respondents had also observed the bashing of ideas or the direct compromising of other users. They report that it is obvious that there is a group of people who actively post bad feedback on ideas. Four of the interviewees think that this must negatively impact the affected users’ motivation and also confirm that this impacts the whole community’s mood in a negative way. They also mention confusion felt when being passively subjected to negative feedback, but report no direct impact on their own activity index as long as they are not directly affected. Feedback tinged with negativity in users’ comments seems to directly impact participants’ motivation and subsequently their further activity on the platform. This can be explained by the fact that participants who submit a proposal to the platform are promoting a part of themselves and therefore experience a strong sense of ownership over their ideas (Pierce et al., 2001). They therefore perceive negative comments as a personal attack. The result is negative emotion and reactions such as anger, frustration, and withdrawal. Although platform-based social interaction is in general able to stimulate activity there is also the risk of reverse leverage whereby negative impulses can create adverse trends in communication and reduce users’ efforts.

Neutral social interaction

Apart from social interaction with positive or negative consequences for contestants’ activity, we also found some types of social interaction that are perceived to be neutral, with no impact on contestants’ behavior. All interviewees without exception confirmed that personal messages do not have any impact at all. This function is rarely used by any of the respondents. Usually no competition-related information is shared through personal messages. Nobody reports that messaging is used for negative interactions. Personal messages neither reduce nor increase users’ efforts; they serve exclusively as a vehicle for sharing information. Connections to other users (and requests for connection) do not impact motivation. Apart from the supposed misuse of connections to promote one’s own ideas, interviewees do not get anything out of this functionality.

Collaborative tasks involve people interacting with each other. The various channels of social interaction support both collaboration and competition. Most participants perceive the collaborative nature of the platform as more prevalent than its competitive characteristics. Nonetheless, misuse of social interaction channels affects considerably more users, including neutral third-party users who are not directly involved in the interaction; so this negative aspect of competition can have a toxic effect that threatens the overall welfare of the crowd. The platform management’s decision to decommission the Dislike functionality, which was in place from 2010 to 2012, provides an indication of the destructive and deteriorating effect such negative communication may have on the overall performance of the platform.

Discussion

We developed a process model mapping the phases of crowdsourcing competitions from a contestant’s perspective and found that the phase of actual idea generation has not yet been well understood. Idea generation on a crowdsourcing platform is a recursive process that is influenced and shaped by the contestants’ social interaction with peer contestants (Djelassi and Decoopman, 2013). Our study is the first to investigate in detail the nature and impact of social interactions in communities of crowdsourcing competitions. As a theoretical framework we discussed how social interaction in crowdsourcing competitions relates to the theory of self-determination (Ryan and Deci, 2002) and creativity (Amabile, 1996). In line with these theories, we found support for the view that social interaction can positively influence participants’ motivation. In particular, supportive feedback and constructive critiques encourage contestants to develop their ideas further. To our own surprise, and in contrast to the extant literature highlighting mostly the positive sides of crowdsourcing competitions, we found evidence that a considerable amount of the social interactions is characterized by destructive behavior between members. This results primarily from the strong competitive orientation among contestants who strive for a “Top 20” position in one of the “Top Innovators” lists. This finding contrasts with previous findings from crowdsourcing communities where intrinsic motivations such as social and learning benefits were identified as being among the most important factors in engagement in knowledge-sharing activities (Kosonen et al., 2014). Apparently, the competitive character in this community spurs users to try harder to climb up the various lists. As a negative consequence, some users seem to exploit all the interaction functionalities available either to climb up the lists or to prevent themselves from falling down or out of the lists. It seems to be appropriate therefore to discuss crowdsourcing competitions in the light of the literature on economic tournaments, where sabotage behavior between contestants has been found to destroy resources and reduce overall welfare. In this stream of literature, studies showed that able members in promotion tournaments are more often the victims of sabotage attacks (Chen, 2003). We find a similar pattern in our study in that especially the rising stars or those close to entering a “Top 20” list are subject to defensive actions. Tournament literature also suggests that sabotage not only destroys resources, but severely depresses incentives to work productively (Gürtler and Münster, 2010). Our respondents who had themselves experience of negative feedback or bashing from other contestants report exactly the same effect. In the end, sabotage behavior in crowdsourcing communities may have the same effect on individuals as lack of success – participants ultimately drop out of the platform because their ideas are not valued (Bayus, 2013).

From a theoretical point of view, researchers have not so far differentiated clearly between crowdsourcing activities of a competitive and those of a collaborative nature. Our findings suggest that the competitiveness feature, which is often seen as positive, may prevent the action of some of the mechanisms that come into play in collaborative settings. Scholars in the field of creativity and motivation have long put forward arguments against competitions, arguing that any competition is harmful for creativity (Deci et al., 1999; Reeve and Deci, 1996; Vansteenkiste and Deci, 2003). We find some of these concerns confirmed also for crowdsourcing-based idea competitions.

In the broader framework of open innovation and collaboration, our study adds to the understanding of governance of co-innovation with communities. To date, literature on collaborative innovation has predominantly highlighted the risks of undesirable knowledge spillovers to partners (Dahlander and Gann, 2010; Salge et al., 2013), and the dilemma of sharing knowledge while at the same time maximizing value (Salter et al., 2014). More recent research addresses the importance of selective governance modes for different forms of collaboration (Gesing et al., 2015). However, these findings can be transferred only to a limited extent for collaboration with crowdsourcing communities where, as in our case, IP infringement issues play a minor role. Additionally, there are considerable governance challenges related to the environmental design and set-up of crowdsourcing-based idea competitions for open collaboration. Levine and Prietula (2014) found that open collaboration for innovation also works under non-perfect conditions, e.g. if only a fraction of the population is cooperative. The system, however, collapses if needs of the population are too similar and goods at the same time are rival – a situation that seems to apply potentially to many crowdsourcing-based idea competitions where members participate for the same motives and the same prizes.

Battistella and Nonino (2012) investigated different web-based platforms for innovation and found that monetary incentives provide the most important reward for participation, and ultimately also drive the success of the platform itself. This raises a new dilemma for the set-up of collaboration with crowdsourcing communities relying on competitions as the mode of implementation: money and glory provide effective incentives for participation (Malone et al., 2010), but at the same time they also constitute a threat to the system. For future studies, we suggest further investigation of the governance mode for this competitive type of open collaboration for innovation. Potentially, it will be worthwhile to include the perspective of promotion tournaments.

Managerial implications

Our results have important implications for the managerial practice of crowdsourcing competitions. First, providers of crowdsourcing platforms or idea competitions have to be aware that “Top Innovators” lists are a double-edged sword. On the one hand, they are a perfect motivator for members to be constantly active; on the other hand, these lists create fierce competition among contestants, inducing some of them to misuse interaction channels or even to sabotage other members. Second, the rest of the community often witness such bashing actions and, reluctant to become victims themselves, may lose motivation and reduce engagement. For intermediaries launching crowdsourcing competitions this could also potentially threaten their business model. The platform provider should therefore carefully monitor the general climate within the community in order to anticipate when the whole atmosphere is in danger of becoming overly negative, affecting the users’ willingness to participate further.

There seem to be few options available to those running competitions to counteract the misuse of the donation of prize money and the automatic support of “befriended” users. One option could be to set up a reporting functionality to notify the provider about potential misuse. Apart from running the risk that this channel could itself be misused, building up this middleware would need to be funded and the subsequent service administered, which would require some investment. The adoption of rating scales with more detailed ratings would be another possible means to counteract misuse of the system. Riedl et al. (2013) demonstrated that for shorter idea submissions multi-criteria rating scales by far outperform single-item scales. We assume that a multi-criteria peer evaluation would probably also enhance the true examination of peers’ ideas, by entailing more effort not only cognitively but also in terms of time, thereby making purely defensive actions more costly in terms of thought and time expended. A third course of action and, in our view, the action most likely to be effective would be to foster true community building. This is a major task. Blohm et al. (2013) highlighted the importance of community building mechanisms in order to attract a sufficiently large number of contributors and evaluators. We support the importance of guided development of community values and the building of strong norms in order to minimize sabotage behavior. Again, studies from the tournament literature show that sabotage is significantly reduced in settings where norms lead to discreet disapproval of such behavior.

Limitations and direction for further research

The model that we introduced in the theoretical part of the paper describes the process of idea generation particularly in relation to crowdsourcing-based idea competitions that entail both collaborative and competitive elements. However, there are also platforms that allow only for closed (i.e. non-transparent) idea submissions, and that do not enable social interaction among participants. Our findings are therefore not generalizable to all types of crowdsourcing platforms. Additionally, we base our findings on a limited number of qualitative interviews. Because of the richness of content garnered from the existing interviews, we did not perceive that additional interviews would lead to further information; thus we are confident that we reached theoretical saturation (Seale, 1999). However, it would be worthwhile to further validate these initial results. This could be done through longitudinal investigation of social interactions on such platforms or through large-scale quantitative surveys in order to gain an idea of how widespread the potential problem of destructive behaviors on crowdsourcing platforms is.

Figures

Figure 1: Process model of crowdsourcing competitions

Figure 2: Set-up of a CS competition

Figure 3: Action flow of analyzed CS platform

Figure 4: Thematic coding – themes and categories

Table I: Gaining points for various actions

Use case | Description | Example | Direction | No. of point units
Competition related
Win a competition | The number of point units depends on the monetary reward: prize money won divided by the total prize money of the competition, multiplied by 1,600 | If the total prize money of the competition is EUR1,000 and the prize money for the user’s idea is EUR100, then 160 point units are awarded | Passive | See example
Idea marked as “interesting” by moderator | The moderator (owner) of a competition can mark an idea as “interesting” | The moderator (owner) of a competition marked an idea as “interesting” | Passive | 15
Submit idea | User posts an idea in a competition | Idea with pic = 2.6 point units; without pic = 3 point units | Active | 2.6-3
Interaction related
Receive donation (prize money won by donor) | The number of point units depends on the prize money: prize money won divided by the total prize money of the competition, multiplied by 800 | If the total prize money of the competition is EUR1,000 and the prize money received is EUR100, then 80 point units are awarded | Passive | See example
Post comment | Post a comment on an existing idea | Post a comment on an existing idea | Active | 1.4
Receive likes on an idea | Likes received on an existing idea | Likes received on an existing idea | Passive | 1
Complete profile detail | User completes profile information | Upload profile pic = 2 point units; enter information, e.g. profession or level of education = 1 point unit | Active | 1-2
Receive likes on a comment | Likes received on an existing comment | Likes received on an existing comment | Passive | 0.6

Interviewee profile information

References

Afuah, A. and Tucci, C.L. (2012), “Crowdsourcing as a solution to distant search”, Academy of Management Review, Vol. 37 No. 3, pp. 355-375.

Algesheimer, R., Dholakia, U.M. and Herrmann, A. (2005), “The social influence of brand community: evidence from European car clubs”, Journal of Marketing, Vol. 69 No. 3, pp. 19-34.

Amabile, T.M. (1996), Creativity in Context, Westview Press, Boulder, CO.

Bagozzi, R.P. and Dholakia, U.M. (2006), “Open source software user communities: a study of participation in Linux user groups”, Management Science, Vol. 52 No. 7, pp. 1099-1115.

Battistella, C. and Nonino, F. (2012), “Open innovation web-based platforms: the impact of different forms of motivation on collaboration”, Innovation: Management, Policy & Practice, Vol. 14 No. 4, pp. 557-575.

Bayus, B.L. (2013), “Crowdsourcing new product ideas over time: an analysis of the Dell IdeaStorm community”, Management Science, Vol. 59 No. 1, pp. 226-244.

Bengtsson, L., Lakemond, N., Lazzarotti, V., Manzini, R., Pellegrini, L. and Tell, F. (2015), “Open to a select few? Matching partners and knowledge content for open innovation performance”, Creativity and Innovation Management, Vol. 24 No. 1, pp. 72-86.

Blohm, I., Leimeister, J.M. and Krcmar, H. (2013), “Crowdsourcing: how to benefit from (too) many great ideas”, MIS Quarterly Executive, Vol. 12 No. 4, pp. 199-211.

Boons, M., Stam, D. and Barkema, H.G. (2015), “Feelings of pride and respect as drivers of ongoing member activity on crowdsourcing platforms”, Journal of Management Studies, Vol. 52 No. 6, pp. 717-741.

Boyce, C. and Neale, P. (2006), Conducting In-Depth Interviews: A Guide for Designing and Conducting In-Depth Interviews for Evaluation Input, Pathfinder International, Watertown, MA.

Bretschneider, U., Zogaj, S. and Leimeister, J.M. (2012), “Wettbewerb v. Kollaboration: Wie verhalten sich Teilnehmer in Ideenwettbewerben und Ideen Communities?” [Competition vs collaboration: how do participants behave in idea competitions and idea communities?], paper presented at the VHB Jahrestagung (VHB Annual Conference), Bozen.

Bullinger, A.C., Neyer, A.-K., Rass, M. and Moeslein, K.M. (2010), “Community-based innovation contests: where competition meets cooperation”, Creativity and Innovation Management, Vol. 19 No. 3, pp. 290-303.

Chen, K.-P. (2003), “Sabotage in promotion tournaments”, Journal of Law, Economics, and Organization, Vol. 19 No. 1, pp. 119-140.

Chesbrough, H. (2003), “The era of open innovation”, MIT Sloan Management Review, Vol. 44 No. 3, pp. 35-41.

Chesbrough, H., Vanhaverbeke, W. and West, J. (2006), Open Innovation: Researching a New Paradigm, Oxford University Press, Oxford.

Colombo, G., Buganza, T., Klanner, I.-M. and Roiser, S. (2013), “Crowdsourcing intermediaries and problem typologies: an explorative study”, International Journal of Innovation Management, Vol. 17 No. 2, pp. 1-24.

Dahlander, L. and Gann, D.M. (2010), “How open is innovation?”, Research Policy, Vol. 39 No. 6, pp. 699-709.

Deci, E.L. and Vansteenkiste, M. (2004), “Self-determination theory and basic need satisfaction: understanding human development in positive psychology”, Ricerche di Psicologia, Vol. 1 No. 27, pp. 23-40.

Deci, E.L., Ryan, R. and Koestner, R. (1999), “A meta-analytic review of experiments examining the effects of extrinsic rewards on intrinsic motivation”, Psychological Bulletin, Vol. 125 No. 6, pp. 627-668.

Djelassi, S. and Decoopman, I. (2013), “Customers’ participation in product development through crowdsourcing: issues and implications”, Industrial Marketing Management, Vol. 42 No. 5, pp. 683-692.

Ebner, W., Leimeister, J.M. and Krcmar, H. (2009), “Community engineering for innovations: the ideas competition as a method to nurture a virtual community for innovations”, R&D Management, Vol. 39 No. 4, pp. 342-356.

Estellés-Arolas, E. and Gonzàlez-Ladrón-de-Guevara, F. (2012), “Towards an integrated crowdsourcing definition”, Journal of Information Science, Vol. 38 No. 2, pp. 189-200.

Faullant, R. and Holzmann, P. (2016), “Everybody is invited but not everybody will come – the role of personality dispositions on users’ entry decision of crowdsourcing competitions”, International Journal of Innovation Management, Vol. 20, August, Article No. 1650020.

Faullant, R., Füller, J. and Hutter, K. (2017), “Fair play: perceived fairness in crowdsourcing communities and its behavioral consequences”, Management Decision (forthcoming).

Franke, N., Keinz, P. and Klausenberger, K. (2013), “Does this sound like a fair deal? Antecedents and consequences of fairness expectations in the individual’s decision to participate in firm innovation”, Organization Science, Vol. 24 No. 5, pp. 1495-1516.

Fuchs, C. and Schreier, M. (2011), “Customer empowerment in new product development”, Journal of Product Innovation Management, Vol. 28 No. 1, pp. 17-32.

Füller, J. (2010), “Refining virtual co-creation from a consumer perspective”, California Management Review, Vol. 52 No. 2, pp. 98-122.

Füller, J., Hutter, K. and Faullant, R. (2011), “Why co-creation experience matters? Creative experience and its impact on the quantity and quality of creative contributions”, R&D Management, Vol. 41 No. 3, pp. 259-273.

Füller, J., Bartl, M., Ernst, H. and Mühlbacher, H. (2006), “Community based innovation: how to integrate members of virtual communities into new product development”, Electronic Commerce Research, Vol. 6 No. 1, pp. 57-73.

Fullerton, R., Linster, B.G. and McKee, M. (1999), “An experimental investigation of research tournaments”, Economic Inquiry, Vol. 37 No. 4, pp. 624-636.

Gassmann, O. and Enkel, E. (2004), “Towards a theory of open innovation: three core process archetypes”, Proceedings of the R&D Management Conference, Lisbon, 7-9 July.

Gesing, J., Antons, D., Piening, E.P., Rese, M. and Salge, T.O. (2015), “Joining forces or going it alone? On the interplay among external collaboration partner types, interfirm governance modes, and internal R&D”, Journal of Product Innovation Management, Vol. 32 No. 3, pp. 424-440.

Gürtler, O. and Münster, J. (2010), “Sabotage in dynamic tournaments”, Journal of Mathematical Economics, Vol. 46 No. 2, pp. 179-190.

Hennig-Thurau, T., Gwinner, K.P., Walsh, G. and Gremler, D.D. (2004), “Electronic word-of-mouth via consumer-opinion platforms: What motivates consumers to articulate themselves on the Internet?”, Journal of Interactive Marketing, Vol. 18 No. 1, pp. 38-52.

Howe, J. (2006), “The rise of crowdsourcing”, WIRED Magazine, Vol. 14 No. 6, pp. 1-4.

Hutter, K., Hautz, J., Füller, J., Mueller, J. and Matzler, K. (2011), “Communitition: the tension between competition and collaboration in community-based design contests”, Creativity and Innovation Management, Vol. 20 No. 1, pp. 3-21.

Jeppesen, L.B. and Frederiksen, L. (2006), “Why do users contribute to firm-hosted user communities? The case of computer-controlled music instruments”, Organization Science, Vol. 17 No. 1, pp. 45-63.

Jeppesen, L.B. and Lakhani, K. (2010), “Marginality and problem-solving effectiveness in broadcast search”, Organization Science, Vol. 21 No. 5, pp. 1016-1033.

King, A. and Lakhani, K.R. (2013), “Using open innovation to identify the best ideas”, MIT Sloan Management Review, Vol. 55 No. 1, pp. 41-48.

Kohler, T., Fueller, J., Stieger, D. and Matzler, K. (2011), “Avatar-based innovation: consequences of the virtual co-creation experience”, Computers in Human Behavior, Vol. 27 No. 1, pp. 160-168.

Kosonen, M., Gan, C., Olander, H. and Blomqvist, K. (2013), “My idea is our idea! Supporting user-driven innovation activities in crowdsourcing communities”, International Journal of Innovation Management, Vol. 17 No. 3, pp. 10-18.

Kosonen, M., Gan, C., Vanhala, M. and Blomqvist, K. (2014), “User motivation and knowledge sharing in idea crowdsourcing”, International Journal of Innovation Management, Vol. 18 No. 5, pp. 31-54.

Kozinets, R.V. (2002), “The field behind the screen: using netnography for marketing research in online communities”, Journal of Marketing Research, Vol. 39 No. 1, pp. 61-72.

Lakhani, K. and von Hippel, E. (2003), “How open source software works: ‘free’ user-to-user assistance”, Research Policy, Vol. 32 No. 6, pp. 923-942.

Lakhani, K.R. and Jeppesen, L.B. (2007), “Getting unusual suspects to solve R&D puzzles”, Harvard Business Review, Vol. 85 No. 5, pp. 30-32.

Lakhani, K. and Wolf, R.G. (2005), “Why hackers do what they do: understanding motivation and effort in free/open source software projects”, in Feller, J., Fitzgerald, B., Hissam, S.A. and Lakhani, K. (Eds), Perspectives on Free and Open Source Software, The MIT Press, Cambridge, MA, pp. 3-22.

Levine, S.S. and Prietula, M.J. (2014), “Open collaboration for innovation: principles and performance”, Organization Science, Vol. 25 No. 5, pp. 1414-1433.

Malone, T.W., Laubacher, R. and Dellarocas, C. (2010), “The collective intelligence genome”, MIT Sloan Management Review, Vol. 51 No. 3, pp. 20-31.

Nambisan, S. and Baron, R.A. (2007), “Interactions in virtual customer environments: Implications for product support and customer relationship management”, Journal of Interactive Marketing, Vol. 21 No. 2, pp. 42-62.

Ogawa, S. and Piller, F. (2006), “Reducing the risks of new product development”, MIT Sloan Management Review, Vol. 47 No. 2, pp. 65-72.

Palacios, M., Martinez-Corral, A., Nisar, A. and Grijalvo, M. (2016), “Crowdsourcing and organizational forms: emerging trends and research implications”, Journal of Business Research, Vol. 69 No. 5, pp. 1834-1839.

Pierce, J.L., Kostova, T. and Dirks, K.T. (2001), “Toward a theory of psychological ownership in organizations”, The Academy of Management Review, Vol. 26 No. 2, pp. 298-310.

Reeve, J. and Deci, E.L. (1996), “Elements of the competitive situation that affect intrinsic motivation”, Personality and Social Psychology Bulletin, Vol. 22 No. 1, pp. 24-33.

Riedl, C., Blohm, I., Leimeister, J. and Krcmar, H. (2013), “The effect of rating scales on decision quality and user attitudes in online innovation communities”, International Journal of Electronic Commerce, Vol. 17 No. 3, pp. 7-37.

Rippa, P., Quinto, I., Lazzarotti, V., Manzini, R. and Pellegrini, L. (2016), “Does size matter? Role of intermediaries in open innovation practices”, International Journal of Business and Innovation Research, Vol. 11 No. 3, pp. 377-396.

Ryan, R. and Deci, E.L. (2000), “Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being”, American Psychologist, Vol. 55 No. 1, pp. 68-78.

Ryan, R. and Deci, E.L. (2002), “An overview of self-determination theory: an organismic-dialectical perspective”, in Deci, E.L. and Ryan, R. (Eds), Handbook of Self-Determination Research, University of Rochester Press, Rochester, NY, pp. 3-36.

Salge, T.O., Piening, E.P. and Foege, J.N. (2013), “Exploring the dark side of innovation collaboration – a resource-based perspective”, Proceedings of Academy of Management Annual Meeting, Vol. 1, November, pp. 485-490.

Salter, A., Criscuolo, P. and Ter Wal, A.L.J. (2014), “Coping with open innovation: responding to the challenges of external engagement in R&D”, California Management Review, Vol. 56 No. 2, pp. 77-94.

Saxton, G.D., Oh, O. and Kishore, R. (2013), “Rules of crowdsourcing: models, issues, and systems of control”, Information Systems Management, Vol. 30 No. 1, pp. 2-20.

Schenk, E. and Guittard, C. (2011), “Towards a characterization of crowdsourcing practices”, Journal of Innovation Economics, Vol. 1 No. 7, pp. 93-107.

Schlosser, A.E. (2003), “Experiencing products in the virtual world: the role of goal and imagery in influencing attitudes versus purchase intentions”, Journal of Consumer Research, Vol. 30 No. 2, pp. 184-198.

Seale, C. (1999), “The quality of qualitative research”, Qualitative Inquiry, Vol. 5 No. 4, pp. 465-478.

Terwiesch, C. and Xu, Y. (2008), “Innovation contests, open innovation, and multiagent problem solving”, Management Science, Vol. 54 No. 9, pp. 1529-1543.

Vallerand, R.J. and Bissonnette, R. (1992), “Intrinsic, extrinsic, and amotivational styles as predictors of behavior: a prospective study”, Journal of Personality, Vol. 60 No. 3, pp. 599-620.

Vansteenkiste, M. and Deci, E.L. (2003), “Competitively contingent rewards and intrinsic motivation: can losers remain motivated?”, Motivation and Emotion, Vol. 27 No. 4, pp. 273-299.

Vansteenkiste, M., Niemiec, C.P. and Soenens, B. (2010), “The development of the five mini-theories of self-determination theory: an historical overview, emerging trends, and future directions”, in Urdan, T.C. and Karabenick, S.A. (Eds), The Decade Ahead: Theoretical Perspectives on Motivation and Achievement (Advances in Motivation and Achievement, Volume 16 Part A), Emerald Group Publishing Limited, pp. 105-165.

Chan, K.W., Li, S.Y. and Zhu, J.J. (2015), “Fostering customer ideation in crowdsourcing community: the role of peer-to-peer and peer-to-firm interactions”, Journal of Interactive Marketing, Vol. 31, August, pp. 42-62.

Xu, Y., Ribeiro-Soriano, D.E. and Gonzalez-Garcia, J. (2015), “Crowdsourcing, innovation and firm performance”, Management Decision, Vol. 53 No. 6, pp. 1158-1169.

Yan, H., Vir Singh, P. and Srinivasan, K. (2014), “Crowdsourcing new product ideas under consumer learning”, Management Science, Vol. 60 No. 9, pp. 2138-2159.

Zuckerman, M., Porac, J., Lathin, D., Smith, R. and Deci, E.L. (1978), “On the importance of self-determination for intrinsically-motivated behavior”, Personality and Social Psychology Bulletin, Vol. 4 No. 3, pp. 443-446.

Corresponding author

Rita Faullant can be contacted at: ritaf@sam.sdu.dk
