Fair play: Perceived fairness in crowdsourcing competitions and the customer relationship-related consequences

Rita Faullant (Centre for Integrative Innovation Management, Institute for Marketing and Management, University of Southern Denmark, Odense, Denmark) (Alpen-Adria-Universität Klagenfurt, Klagenfurt, Austria)
Johann Fueller (University of Innsbruck, Innsbruck, Austria)
Katja Hutter (University of Salzburg, Salzburg, Austria)

Management Decision

ISSN: 0025-1747

Article publication date: 16 October 2017


Abstract

Purpose

Companies are discovering the power of crowdsourcing as a source of new ideas for products and services. It is assumed that the personal engagement and the continuous involvement with a company’s products or services over a period of several weeks positively affect participants’ loyalty intentions toward the host companies. The research leads the authors to challenge this assumption. In addition to mere participation in crowdsourcing initiatives, the authors argue that perceptions of fairness will explain changes in customer relationship-related consequences such as loyalty, perceived innovativeness and product interest. The paper aims to discuss these issues.

Design/methodology/approach

The authors analyzed a real-life crowdsourcing contest launched by a leading lighting manufacturer and investigated the impact of two fairness dimensions (distributive and procedural) on participants’ future behavioral and attitudinal intentions (n=121). The analysis was performed with structural equation modeling (SEM).

Findings

The results suggest that fairness perceptions are significantly related to evoked product interest, perceived innovativeness and loyalty intentions. The analysis reveals that the influence of the fairness dimensions is asymmetric: while distributive fairness can be considered a basic factor that must be fulfilled in order to avoid negative behavioral consequences, procedural fairness, by contrast, is an excitement factor that generates truly positive behavioral consequences.

Research limitations/implications

The results are particularly relevant for companies launching a crowdsourcing competition under their own brand name, and for broadcasting platforms. For companies with no relations to end-users, these findings may not be as relevant.

Practical implications

Organizers of crowdsourcing contests should be aware that such initiatives can be a double-edged sword. Fair play is a must to gain the positive effects of crowdsourcing initiatives for both new product development and the customer relationship. For companies lacking the capabilities to manage crowdsourcing initiatives professionally, it is advisable to rely on intermediary broadcasting platforms.

Originality/value

The research is the first to investigate systematically the consequences of fairness perceptions in a real-life crowdsourcing idea contest. The authors demonstrate the asymmetric nature of fairness perceptions on three different outcome variables that are important for the customer relationship.

Citation

Faullant, R., Fueller, J. and Hutter, K. (2017), "Fair play: Perceived fairness in crowdsourcing competitions and the customer relationship-related consequences", Management Decision, Vol. 55 No. 9, pp. 1924-1941. https://doi.org/10.1108/MD-02-2017-0116

Publisher

Emerald Publishing Limited

Copyright © 2017, Rita Faullant, Johann Fueller and Katja Hutter

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Introduction

Skyrocketing customer participation and engagement in crowdsourcing contests is shifting the focus of corporate marketing and innovation activities. Through connectivity and interaction, companies are using online platform technologies to “outsource” ideation efforts to a worldwide pool of talented people (Terwiesch and Xu, 2008). The landscape of crowdsourcing approaches described in the current literature is strikingly diverse. Crowd contests, collaborative communities, crowd complementors, labor markets, innovation events, integrator platforms, two-sided platforms and marketplaces are distinct forms, each suited to a specific kind of challenge or task (Boudreau and Lakhani, 2013; Bernhardt et al., 2016; Kohler, 2015). Beyond such technical distinctions, crowdsourcing initiatives generally vary in terms of compensation (monetary or non-monetary), the mode of participation and interaction (collaborative or competitive), the time frame (temporary or ongoing) and the platform host (company-owned or intermediary platforms) (Howe, 2008; Schenk and Guittard, 2011; Vukovic, 2009; Bernhardt et al., 2016; Kohler, 2015). We focus on company-owned crowdsourcing contests, in which participants self-select to work on their own solutions and compete for an offered prize for the best solution within a certain time period (Jeppesen and Lakhani, 2010; Terwiesch and Xu, 2008). Besides contributing their own ideas, participants are often allowed to comment on other contributions, exchange ideas, ask questions and give feedback to other participants. When companies launch their own crowdsourcing initiative, they can, as the hosting organization, engage directly with consumers and users and generate valuable outcomes that go beyond receiving innovative ideas or designs (Kozinets et al., 2008). For example, such company-owned crowdsourcing activities can create collective commitment toward new offerings (Nambisan and Baron, 2007), higher commercialization capabilities (Poetz and Schreier, 2012), large-scale consumer-to-consumer and consumer-to-brand interaction, dissemination of online word-of-mouth (WOM) communication (Kozinets et al., 2010; Zwass, 2010) and brand loyalty (Malthouse et al., 2013). The basic argument put forward is that crowdsourcing initiatives provide individuals with opportunities for ongoing involvement and personal engagement with a company and its products or services over several weeks. As participants work on and crack the challenge for the sponsor (the company), the crowdsourcing activity strengthens the relationship between the hosting company and the crowdsourcing participants, thereby also increasing loyalty intentions (Nambisan and Baron, 2007; Kozinets et al., 2010).

However, increased openness and collaboration also conceal risks (Gatzweiler et al., 2013, 2017) and may, in contrast to the positive effects, cause frustration and other negative feelings among loyal participants (Roehm and Tybout, 2006; Ward and Ostrom, 2006). Recent examples from well-known companies like Moleskine and Henkel illustrate such situations, as their announced crowdsourcing contests ended in embarrassing public disasters that undermined brand loyalty and corporate image. The Moleskine Facebook (FB) page received hundreds of negative comments from designers, fans and customers who expressed their disappointment and disagreement with the Moleskine incentive scheme, which awarded a cash prize only to the winner. In the case of Henkel, engaged participants disagreed with the jury decision and the selected winners of the crowdsourcing competition and felt overruled (Salt, 2011; Breithut, 2011). Henkel had to face a long-lasting PR debacle, which included negative reports in the most popular online newspapers in Germany, including www.spiegel.de (Breithut, 2011), zeit.de (Sawall, 2011) and focus.de (Frickel, 2011).

These anecdotal episodes suggest that, in addition to incentives, crowdsourcing participants also care about fairness, and that conflicts may arise from perceived unfair treatment. Perceived unfairness in crowdsourcing contests may result from unfair prize allocations, nontransparent jury decisions, an unfriendly climate and intolerable communication behavior (Franke et al., 2013; Gebauer et al., 2013). However, the crowdsourcing literature reveals little about how perceived fairness or unfairness may influence the quality of participants’ relationships with the hosting organization. In an experimental study, Franke et al. (2013) show that negative fairness anticipation may prevent people from participating in crowdsourcing events right from the outset, but their research does not address the consequences of participants’ de facto fairness judgments within a real-life crowdsourcing contest.

We investigate how participants’ perceptions of distributive and procedural fairness affect the perceived innovativeness of the hosting company, the interest evoked in its products, and subsequent changes in loyalty. This study makes two substantial contributions to the marketing and innovation literatures. First, it increases our collective understanding of the conditions under which crowdsourcing can serve as a customer relationship-building tool that fosters loyalty intentions and other favorable behavioral consequences. Second, our examination of the asymmetric effects of distributive and procedural fairness on participant opinion and loyalty-related behavioral intentions offers a differentiated picture of each dimension’s factor structure and demonstrates that the two fairness dimensions have asymmetric impacts on behavioral consequences.

Theoretical background

Crowdsourcing contests from a relationship management perspective

The primary reason for launching a crowdsourcing contest is in most cases to search for novel and outstanding solutions to new product development problems. When managed successfully, crowdsourcing contests might also have important positive side effects, described in the following.

Loyalty intentions

Loyalty intentions are a significant indicator of the quality of a customer relationship. Loyalty encompasses both an attitudinal component, reflecting what an individual thinks and feels about a firm, and a behavioral component, comprising the actions a person is disposed to undertake in response to an interaction with a company (Mittal and Kamakura, 2001). Positive behavioral responses include repurchase, positive WOM and other supporting activities that result in higher rates of return for the firm (e.g. Anderson et al., 2004; Rust and Zahorik, 1993; Srivastava et al., 1998; Zeithaml et al., 1996). The participation of users in company-sponsored crowdsourcing contests may increase their loyalty intentions through their close interaction with the firm. After enrolling in a crowdsourcing contest, participants become deeply involved with a company and its products over a period of several weeks. They devote time, skills and personal engagement to developing new ideas for the company. Users identify themselves as part of the community and form intense relationships with it (Bagozzi and Dholakia, 2006), and they often become passionate about the brand or product (McWilliam, 2000; McAlexander et al., 2002). Crowdsourcing contests therefore strengthen the community members’ relationships with the host company (Porter and Donthu, 2008) and also with the products they have helped design (Franke et al., 2010).

Evoked product interest

As users who co-create and co-design new products for certain companies become familiar with a project, they may become interested in and attached to the product at the same time (Belk, 1988). We define evoked product interest as interest in a product that evolves while co-creating or co-designing a product. Evoked product interest may be related to information about the product (e.g. the whereabouts of the virtual product, its further development and market launch) or to the product itself (e.g. how the new product and its technology work, how it can be used, and where it can be purchased). Furthermore, consumers’ interest is not restricted to finding out information about the product they have co-created; they are interested in owning it and recommending it to others. By engaging in crowdsourcing contest activities, consumers initiate a relationship with “their” new product even before it physically exists (Schlosser, 2003).

Perceived innovativeness

Consumers evaluate the firm and its products on their innovativeness, i.e. how well a company is able to supply its customers with innovative products (Brown and Dacin, 1997; Deshpande et al., 1993). Firm innovativeness relies strongly on a firm’s ability and willingness to identify, analyze, understand and answer user needs, and it is seen as a driver of economic success. Some studies emphasize the link between perceived innovativeness and participation in value co-creation activities (Schreier et al., 2012; Nambisan and Baron, 2007; Nambisan and Nambisan, 2008). The creation of virtual customer environments allows organizations to transform one-way customer interactions into mechanisms for collaborative innovation and to impact innovation outcomes such as time-to-market, new product or service quality, or innovation cost (Nambisan and Baron, 2009; Prahalad and Krishnan, 2008).

In sum, crowdsourcing contests that serve the purpose of co-innovating with external stakeholders can be considered an ideal tool for strengthening the relationship between the hosting (sponsoring) organization and its customers and users by providing them with opportunities to actively shape the firm’s future developments (Fuchs and Schreier, 2011). While crowdsourcing contests appear at first sight to create a number of positive effects for the hosting organizations in terms of new product development and customer relationship management, the examples in our introduction provide evidence that the causal relationship between crowdsourcing contests and positive outcomes is not as straightforward as it might seem. In the following section, we discuss perceived fairness in crowdsourcing contests and how fairness affects behavioral consequences.

The perceived fairness of crowdsourcing contests

Fairness, or justice, which has become an increasingly important issue over the past three decades, has its foundations in social psychology. Fairness and justice are closely related terms that are often used interchangeably (e.g. Boiney, 1995). Justice is used with reference to standards of rightness and is seen as the moral fabric that binds societies together. The term fairness, on the other hand, is used with regard to the ability to make unbiased, concrete and specific judgments about a particular case without referring to one’s own feelings. We therefore adopt the term fairness in the following, as the fair and unbiased treatment of crowdsourcing participants is the focus of our paper. Fairness is essential for the success of an online crowdsourcing contest, since fairness is important for ensuring participation (Boiney, 1995). As the examples in the introduction show, crowdsourcing participants expect a fair level of reward as well as a fair selection of the best submission and the winner of the contest. Unfair treatment can lead to reduced future participation, to no participation at all, or to migration to other contests. Scholars (e.g. Gilliland, 1993) frequently distinguish between two dimensions of fairness perceptions: distributive and procedural fairness.

Distributive fairness focuses on the equity principle, by which an individual considers an exchange fair if his or her input/output ratio from the exchange is seen as fair when compared with the input/output ratio of a referent other (Adams, 1965). Issues of distributive justice exist throughout society in all situations where individuals or groups enter into exchange (Deutsch, 1985). A lack of distributive justice may cause negative emotions (Weiss et al., 1999) and determines actual behavior, leading to reduced effort or withdrawal. In crowdsourcing contests, distributive justice is reflected in the amount and the distribution of prizes. If participants consider the compensation for their efforts unfair (e.g. because the prizes to be won are perceived as too low), this may result in poor quality of submitted ideas, lower engagement, migration to other contests, or withdrawal. This was exactly the case when the notebook brand Moleskine launched a logo design contest and hundreds of designers and fans expressed their disagreement with the incentive scheme, because the winner would receive a monetary prize but Moleskine would retain copyright privileges to all submitted logos (Moleskine, 2011b). Design freelancers and Moleskine customers discussed their anger and explained that their refusal to participate stemmed from their perception that Moleskine had imposed an unfair input/output ratio and thereby violated distributive fairness (Moleskine, 2011a). While distributive fairness primarily focuses on outcomes and the results of reward allocation, another important determinant of perceived justice is the process by which allocations are made.

Procedural fairness is defined as the perceived fairness of the procedures, policies and criteria used by decision makers in arriving at an output allocation (Lind and Tyler, 1988; Thibaut and Walker, 1975; Leventhal, 1980). Allocation procedures should be consistent across individuals and over time (the consistency rule); decision makers’ personal self-interest should be prevented (the bias suppression rule); and the allocation process should be based on good information (the accuracy rule). In addition, allocation procedures should reflect the needs, values and outlooks of all parties affected by an allocation process (representativeness); they should be compatible with fundamental moral and ethical values of the perceivers (the ethical rule); and they should allow for the possibility of unfair decisions being rectified (the correctability rule) (Leventhal, 1980). When a person perceives the process leading to a particular outcome as unfair, his or her reactions will be directed at the organization as a whole rather than at the specific tasks. This is of paramount importance, and may explain the protest activities in the following crowdsourcing example. The design contest for Pril – a well-known dishwashing detergent owned by Henkel in Germany – faced serious procedural fairness issues when the company changed the terms and conditions in the middle of the contest to prevent popular but “crazy” and unexpected designs like “chicken-flavored Pril” from becoming the winner (Breithut, 2011). Participants felt overruled and did not accept the jury decision and the selected winners as a satisfactory outcome of the crowdsourcing project. They engaged in active resistance and voiced their dissatisfaction on the corporate Pril Facebook page and on private Twitter accounts.

We establish the influence of fairness perceptions on post-contest behavior on the basis of social exchange theory (Homans, 1958; Blau, 1964), which is closely related to the study of fairness and which posits that individuals will maintain relationships as long as reciprocity is preserved, i.e. as long as they get back something in return for what they have given. The object of exchange can be monetary as well as non-monetary. Indeed, the abundant literature on the motivation of crowdsourcing participants indicates a wide variety of motives to explain why people contribute and what they “get back” for their participation: motivations range from purely extrinsic (i.e. winning the prize) to intrinsic (e.g. fun and joy in the task itself), as well as socializing with others (e.g. Füller, 2010). Baldwin and Hippel (2011, p. 1411) describe a “symbiotic relationship” between the firm and external contributors showing complementary motives and the fulfillment of their own net utility: an unfair input/output ratio in terms of perceived distributive fairness in crowdsourcing contests violates the rule of reciprocity in material respects, and violations of procedural fairness concern the ideals and intangible values of such a “symbiotic relationship.” We therefore expect that violations of both distributive and procedural fairness will lead to unfavorable attitudes toward the hosting company. As a result, participants may show reduced interest in future products, hold less favorable loyalty intentions, and perceive the firm as less innovative and customer-oriented. These considerations lead to the following hypotheses:

H1.

Perceived distributive fairness positively impacts (a) the evoked interest in the designed products and (b) the perceived innovativeness of the company, and (c) leads to a positive change in participants’ loyalty toward the company.

H2.

Perceived procedural fairness positively impacts (a) the evoked interest in the designed products and (b) the perceived innovativeness of the company, and (c) leads to a positive change in participants’ loyalty toward the company.

Asymmetric effects of fairness – tracing the individual impact of the two fairness dimensions

We argue that the fairness components may have asymmetric impacts on evoked product interest, perceived innovativeness and loyalty toward the hosting organization. Human resource management, decision science and consumer behavior researchers have discussed issues of asymmetry in the relationships between variables. For example, Herzberg et al. (1959), in their two-factor theory of job satisfaction, distinguished hygiene factors and motivators. In the customer satisfaction and consumer behavior literature, the Kano model gained considerable relevance by contributing to an understanding of how important individual product or service attributes are for overall satisfaction. In such models, product or service attributes are classified as basic, hybrid or excitement factors, depending on their effect on customer satisfaction (Kano, 1984; Matzler et al., 1996). For our two fairness dimensions, we expect a similar asymmetric factor structure, explained as follows.

Basic factors – a minimum level is needed: a fairness dimension is classified as a basic factor if participants view it as fundamental to the crowdsourcing experience. In this case there must be a minimally acceptable level; otherwise participants’ post-contest behavioral intentions will deteriorate and their loyalty toward the organization will change negatively. However, exceeding this minimum level does not necessarily lead to a higher level of perceived innovativeness, evoked product interest, or increased loyalty.

Hybrid factors – the more the better: a fairness dimension is classified as a hybrid factor if its impact on post-contest behavior is symmetric, i.e. it can entail both positive and negative consequences. In other words, the higher the fairness perception, the higher the perceived innovativeness, evoked product interest and potential for positive loyalty change; an equally strong negative effect results when the dimension is missing.

Excitement factors – positive impact if a certain level is reached: a fairness dimension is classified as an excitement factor if it increases loyalty toward the organization and other post-contest behavioral intentions once a certain level is reached, but causes no damage when absent.

Following these insights and the idea of asymmetry, we expect that distributive and procedural fairness will have different impacts on perceived innovativeness, evoked product interest and loyalty change. We expect distributive fairness to be a basic factor, i.e. this fairness dimension needs to be fulfilled in order not to cause negative future behavioral consequences, but it does not necessarily lead to positive post-contest behavior. Participants evaluate the input/output ratio before and during their participation and weigh reasons to either contribute more time and effort to generate creative ideas or to stop and migrate from the platform (Gebauer et al., 2013; Franke et al., 2013). We further expect procedural fairness to be a hybrid factor, i.e. if it is fulfilled a company can enjoy positive loyalty scores and other positive post-contest behavior, and if it is not fulfilled negative consequences have to be feared. This argument can be deduced from the first studies on procedural fairness in court procedures, where accused persons agreed more with the jury decision when they had the impression that the procedure was fair, independent of the outcome. Depending on the symmetric or asymmetric impact of the fairness components, we group them into basic and hybrid factors and make the following propositions (Figure 1):

H3a.

Distributive fairness can be interpreted as a basic factor.

H3b.

Procedural fairness can be interpreted as a hybrid factor.

Empirical study

The OSRAM design contest – LED emotionalize your light

OSRAM, one of the world’s top lighting manufacturers, invited designers and creative consumers from across the world to engage in an online crowdsourcing contest on new, creative LED lighting solutions. The contest was open to anyone with an interest in LED technology, lighting solutions and related topics. The community consisted of many ordinary users, architects, artists, engineers, professional and semi-professional designers, and design students from a variety of design disciplines. In total, 952 participants took part in the OSRAM contest to display their creativity, to submit ideas, or to provide feedback to their fellows via comments and evaluations. Overall, the contest elicited 576 ideas. The award winners were determined in a two-stage process: a pre-selection committee (comprised of experts in light design and contest management) drew up a list of the 25 finalist ideas that had received high rankings in both user community and expert ratings. These 25 ideas were then presented to the jury (OSRAM management plus light and interior design experts), which debated each idea in detail and awarded it between 1 and 5 stars. The jury then announced the three award-winning designs to the community. Although the winners received exclusively positive feedback from other community members in the form of comments and messages posted on the contest platform after the announcement, some users expressed frustration regarding the selection in emails to the contest organizing team (Figure 2).

The sample

All participants were invited to set up a personal profile immediately after registration at the beginning of the contest. At this point, we assessed their loyalty attitudes toward the launching company with a short questionnaire (loyalty t1). In order to track changes in loyalty, we assessed it with identical items at a second point in time, as part of the main questionnaire. The main questionnaire (including measures for loyalty t2) was distributed at the end of the contest, after the jury decision had been communicated to the OSRAM contest community. An email with a link to our online questionnaire was sent to all 952 participants who had initially registered for the contest. For the main questionnaire, we received data from 132 participants. Cases with excessive missing data (more than 10 percent of items, following recommendations in the literature) were removed. We also deleted cases with a completion time of less than 3 minutes. Once the data purification process was completed, 121 cases remained for further analysis, a (purified) response rate of 13 percent. The data for loyalty t1 were then matched with the data from the main questionnaire based on the logfile user identifier. To test for possible non-response bias, we compared the means of early responders and late responders (Armstrong and Overton, 1977), finding no significant differences. Thus, non-response bias did not seem to be a problem. We also compared the survey sample to the total contest population of 952 participants, for whom we had the personal demographic data supplied upon registration. As Table I shows, there is no significant difference between our sample and the total population in terms of gender, but survey participants were on average somewhat older than the average contest participant (mean age 29 years vs 27 in the basic population).
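The purification and non-response checks described above can be illustrated with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the authors' original procedure: it assumes the survey export is a pandas DataFrame, and the column names (the item columns, completion_seconds, submitted_at) are hypothetical.

```python
# Minimal sketch (not the authors' code) of the purification steps described above,
# assuming the survey export is a pandas DataFrame; all column names are hypothetical.
import pandas as pd
from scipy import stats


def purify(raw: pd.DataFrame, item_cols: list) -> pd.DataFrame:
    # Drop cases with more than 10 percent missing survey items.
    missing_share = raw[item_cols].isna().mean(axis=1)
    kept = raw[missing_share <= 0.10]
    # Drop cases completed in under 3 minutes (180 seconds).
    return kept[kept["completion_seconds"] >= 180]


def nonresponse_check(df: pd.DataFrame, item_cols: list) -> pd.DataFrame:
    # Armstrong and Overton (1977)-style check: compare early vs late responders.
    df_sorted = df.sort_values("submitted_at")
    half = len(df_sorted) // 2
    early, late = df_sorted.iloc[:half], df_sorted.iloc[half:]
    rows = []
    for col in item_cols:
        t, p = stats.ttest_ind(early[col].dropna(), late[col].dropna(), equal_var=False)
        rows.append({"item": col, "t": t, "p": p})
    return pd.DataFrame(rows)
```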

We also compared these groups according to their activity levels and found that the survey participants were on average significantly more active than the overall population (median comparison). Survey participants submitted significantly more ideas and comments on the platform (median: 2 ideas vs 1 idea, 3 comments vs 1 comment), but did not differ significantly in their voting levels (thumbs up or down). In sum, the sample contains the more active and engaged participants.
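The paper reports a median comparison without naming the specific test. As one plausible way to run such a comparison on skewed count data, the sketch below uses a Mann-Whitney U test; the activity columns (n_ideas, n_comments, n_votes) and the in_survey flag are hypothetical names, not the study's variables.

```python
# Illustrative sketch of the activity-level comparison; a Mann-Whitney U test is
# assumed here since the paper does not name the test. Column names are hypothetical.
import pandas as pd
from scipy.stats import mannwhitneyu


def compare_activity(population: pd.DataFrame, in_survey: pd.Series) -> pd.DataFrame:
    rows = []
    for col in ["n_ideas", "n_comments", "n_votes"]:
        survey, rest = population.loc[in_survey, col], population.loc[~in_survey, col]
        u, p = mannwhitneyu(survey, rest, alternative="two-sided")
        rows.append({"activity": col, "survey_median": survey.median(),
                     "rest_median": rest.median(), "U": u, "p": p})
    return pd.DataFrame(rows)
```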

Measures

We drew up and refined the questions and items used in our quantitative online survey based on the research literature. All items were assessed on a five-point rating scale whose end-points were labeled “strongly disagree” and “strongly agree.” We used a radio-button format with equal distances between the buttons, which has been found to enhance measurement reliability and the legitimacy of parametric testing of such data (Cook et al., 2001). Table II displays the wording of all questions and refers to the literature sources. As mentioned above, we assessed the loyalty items twice in an identical manner: once upon participants’ registration at the start of the contest, and a second time after the jury decision. This allowed us to calculate a loyalty change score and report the actual change in loyalty intentions over the duration of the contest. On average, we indeed see a slight decrease in loyalty scores across the sample (pre-loyalty 3.36 vs post-loyalty 3.08, measured on a five-point scale from 1 = low to 5 = high).
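The loyalty change score described above (post-contest score minus pre-contest score on identical items) can be sketched as follows. This is an illustration only; the user identifier and item column names are hypothetical.

```python
# Minimal sketch of the loyalty-change score: identical items at t1 (registration)
# and t2 (after the jury decision), matched per user, pre subtracted from post.
# Column names (user_id, loy1 ... loy5) are hypothetical.
import pandas as pd

LOYALTY_ITEMS = ["loy1", "loy2", "loy3", "loy4", "loy5"]


def loyalty_change(t1: pd.DataFrame, t2: pd.DataFrame) -> pd.Series:
    merged = t1.merge(t2, on="user_id", suffixes=("_t1", "_t2"))
    pre = merged[[f"{i}_t1" for i in LOYALTY_ITEMS]].mean(axis=1)
    post = merged[[f"{i}_t2" for i in LOYALTY_ITEMS]].mean(axis=1)
    return post - pre  # positive values indicate a loyalty gain over the contest
```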

Results

In a first step, we conducted confirmatory factor analysis and calculated the latent constructs’ psychometric properties. The results are shown in Table II and indicate an appropriate structure. All indicators have good factor loadings (>0.6), and the respective factor reliabilities all exceed 0.8, thereby surpassing the threshold of 0.6 required for reliability in structural equation modeling (Bagozzi and Yi, 1988). The average variance extracted is high (>0.5), so convergent validity is also fulfilled (Hair et al., 2006). Discriminant validity was assessed by calculating the Fornell-Larcker ratio (Fornell and Larcker, 1981), whose value must not exceed 1. All our constructs are well below this value and thus exhibit discriminant validity. We account for common method bias through both ex ante design measures and ex post statistical tests recommended in the literature (e.g. following Sea-Jin et al., 2010; Podsakoff et al., 2003, 2012). In the study design stage we positioned dependent and independent variables at different points in the questionnaire, and we assured participants of their anonymity and that there were no right or wrong answers. For the scope of our research question it was not possible to obtain the predictor and the criterion variables from different sources (it had to be the same person), so we further conducted a number of statistical ex post tests as recommended by Podsakoff et al. (2003). We ran the CFA model with an additional common method factor, which indicated that 0 percent of common variance was shared with the method factor. We further conducted the marker variable test suggested by Lindell and Whitney (2001). We used “perceived autonomy of the task” as a marker variable, and the common variance was 0.0009 percent. Finally, we also calculated the difference in the standardized regression weights between the models with and without the common method factor. If common method bias were a problem, the standardized regression weights in the main model would have shown higher values than in the model including the common method factor. The differences for all regression weights were 0, meaning that we obtained exactly the same values in both models. All these results suggest that common method bias was not a problem in our study.
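For readers who want to reproduce the reliability and validity checks from reported loadings, the sketch below implements the standard composite reliability, average variance extracted and Fornell-Larcker formulas. The example loadings come from Table II (distributive fairness); everything else is illustrative and not the authors' code.

```python
# Sketch of the standard composite-reliability, AVE and Fornell-Larcker computations
# used for the checks above; the example loadings are taken from Table II.
import numpy as np


def composite_reliability(loadings: np.ndarray) -> float:
    # (sum of standardized loadings)^2 / [(sum of loadings)^2 + sum of error variances]
    s = loadings.sum() ** 2
    return s / (s + (1 - loadings ** 2).sum())


def ave(loadings: np.ndarray) -> float:
    # average variance extracted = mean squared standardized loading
    return (loadings ** 2).mean()


def fornell_larcker_ratio(ave_value: float, corr_with_others: np.ndarray) -> float:
    # highest squared correlation with any other construct divided by AVE;
    # values below 1 indicate discriminant validity
    return (corr_with_others ** 2).max() / ave_value


lam = np.array([0.87, 0.95, 0.70])           # distributive fairness loadings (Table II)
print(composite_reliability(lam), ave(lam))  # approx. 0.88 and 0.72
```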

To test our theoretical model, we applied structural equation modeling with AMOS 22. We ran a maximum likelihood estimation, which yielded a satisfactory overall fit for the model (CMIN/DF 1.147). The non-significant p-value (0.125) indicates that our model’s implied covariance structure does not differ significantly from that of the observed data, and thus our model depicts reality well. Accordingly, all other global and incremental fit indices met the thresholds suggested in the literature (e.g. see Hu and Bentler, 1999; Kline, 1998; Browne and Cudeck, 1993).
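The original model was estimated in AMOS 22, which is proprietary software. As an illustration only, a structurally similar specification could be written in Python with the open-source semopy package, which uses lavaan-style model syntax; the indicator names below (df1, pf1, pi1, in1, loyalty_change, ...) are hypothetical placeholders rather than the study's variable names.

```python
# Illustrative re-specification of the measurement and structural model in semopy;
# not the authors' AMOS setup, and all observed-variable names are hypothetical.
import pandas as pd
import semopy

MODEL_DESC = """
DistFair =~ df1 + df2 + df3
ProcFair =~ pf1 + pf2 + pf3 + pf4
Interest =~ pi1 + pi2 + pi3
Innov =~ in1 + in2 + in3
Interest ~ DistFair + ProcFair
Innov ~ DistFair + ProcFair
loyalty_change ~ DistFair + ProcFair
"""


def fit_sem(data: pd.DataFrame):
    model = semopy.Model(MODEL_DESC)
    model.fit(data)                       # maximum likelihood estimation by default
    estimates = model.inspect()           # loadings and path coefficients
    fit_stats = semopy.calc_stats(model)  # chi-square/df, CFI, TLI, RMSEA, GFI, ...
    return estimates, fit_stats
```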

The path analysis (Table III) reveals that procedural fairness significantly influences evoked product interest (β=0.35**) and perceived innovativeness (β=0.34**), but not loyalty change (β=0.16 ns). Distributive fairness, on the other hand, has a highly significant impact on product interest (β=0.33**) but not on perceived innovativeness (β=0.01 ns) or loyalty change (β=0.12 ns). Together the two fairness dimensions explain 37 percent of the variance in evoked product interest and 12 percent in perceived innovativeness. The results of the structural equation model partly confirm our hypotheses. The main effects of the two fairness types on loyalty change are not significant; therefore, H1c and H2c must be rejected, as must H1b.

Test of asymmetric effects

In H3a and H3b we proposed that the two fairness dimensions are asymmetric in nature and thereby have different impacts on the outcome variables, depending on their level of fulfillment. To test this, we conducted a dummy-variable regression analysis with dichotomized predictors in SPSS (Matzler and Sauerwein, 2002; Mittal et al., 1998). First, we computed a single variable for each fairness dimension and for the criterion variables evoked product interest, perceived innovativeness and loyalty change, by computing the means of the variables included in the structural models (see Table II). Next, we dichotomized the fairness dimensions (procedural fairness and distributive fairness), based on a tercile split, into high (upper tercile 1/rest 0) and low (lower tercile 1/rest 0) dummy variables in order to test which variable of each dummy-variable pair would have the higher impact. If the high dummy variable of a dimension has a stronger effect than the low one, the dimension is classified as an excitement factor; vice versa, it is classified as a basic factor. If both exert an equally strong influence on the outcome variable, the dimension is classified as a hybrid factor. Table IV summarizes the results of the multiple regression analysis (SPSS) and the impact of distributive and procedural fairness on our three post-contest outcome variables. Distributive fairness is classified as a basic factor: only the low-score dummy exhibits a significant negative regression weight on evoked product interest (0.409***) and loyalty change (0.265**), while the betas for the high-score dummy are not significant. H3a is thus confirmed. Procedural fairness, on the other hand, is classified as an excitement factor: only the high-score dummy shows significant regression weights on evoked product interest (0.353**) and on loyalty change (0.355**). This contradicts our assumption that procedural fairness is a hybrid factor that would influence post-contest behavior in both positive and negative directions. Instead, it is classified as an excitement factor that leads to positive consequences if it is fulfilled, but not necessarily to damage if it is not. We therefore have to reformulate H3b. Evidently, a fair, transparent and bias-free procedure with honest feedback to the community regarding any decision delights participants and provides them with more than they would have expected. For one of the outcome variables, perceived innovativeness, we do not find any asymmetric effects, but recall that the main effect in the path model showed a significant positive relationship between procedural fairness and perceived innovativeness.
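The penalty-reward contrast logic described above can be sketched outside SPSS. The snippet below is an illustrative reimplementation with statsmodels under the same tercile-split dummy coding; the DataFrame and column names (distributive_fairness, procedural_fairness, evoked_product_interest) are hypothetical placeholders.

```python
# Sketch of the penalty-reward contrast analysis (tercile-split dummy regression),
# run here with statsmodels instead of SPSS; column names are hypothetical.
import pandas as pd
import statsmodels.api as sm


def tercile_dummies(series: pd.Series) -> pd.DataFrame:
    # upper tercile -> "high" dummy, lower tercile -> "low" dummy, middle third = 0/0
    lo_cut, hi_cut = series.quantile([1 / 3, 2 / 3])
    return pd.DataFrame({
        f"{series.name}_high": (series >= hi_cut).astype(int),
        f"{series.name}_low": (series <= lo_cut).astype(int),
    })


def penalty_reward(d: pd.DataFrame, outcome: str):
    # Regress one outcome on the high/low dummies of both fairness dimensions.
    X = pd.concat([tercile_dummies(d["distributive_fairness"]),
                   tercile_dummies(d["procedural_fairness"])], axis=1)
    X = sm.add_constant(X)
    return sm.OLS(d[outcome], X, missing="drop").fit()

# e.g. penalty_reward(d, "evoked_product_interest").summary()
```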

The analysis of asymmetric effects further explains why the main effects of distributive and procedural fairness on loyalty change in the path model were not significant: the asymmetric nature of these effects was hidden in the overall analysis. We were able to reveal that a fair distribution of the rewards (distributive fairness) is a basic requirement in order not to create frustration among participants and not to cause negative changes in participants’ loyalty intentions toward the organization. If expectations of distributive fairness are not fulfilled, a negative change in participants’ loyalty toward the company must be expected; however, if they are fulfilled, no positive effect on participants’ loyalty intentions should be expected. For procedural fairness, the situation is reversed. If the hosting organization ensures a transparent, objective and bias-free jury process, a positive loyalty change toward the organization and positive post-contest behavior can be expected.

Discussion and managerial implications

We sought to investigate the consequences of crowdsourcing contests on participants’ future behavior. Previous studies have argued that, taken together, the ongoing involvement and interaction with a company and its products over several weeks and the personal engagement that occurs when developing new ideas for the company lead to stronger and deeper relationships between the company and users (Nambisan and Baron, 2007). We challenged these assumptions by pointing out the importance of perceptions of fairness, and we integrated two distinct types of fairness – procedural and distributive fairness – that are relevant for crowdsourcing initiatives in general and for crowdsourcing contests in particular.

In the course of a real-life crowdsourcing contest, we find that not only anticipated fairness expectations, as argued by Franke et al. (2013), but also actually perceived fairness experiences during a contest significantly affect future behavioral intentions. Our findings show that procedural fairness drives customers’ interest in products and positively affects their perceptions of the innovativeness of the company and the brand. Relating these findings to the theory of value creation and in particular to the theory of trying (Xie et al., 2008), participation in crowdsourcing contests is a form of co-creation through which participants derive value. This value may reside in trying to demonstrate one’s capabilities, but also in the intrinsic value of the process of striving and simply participating. Participation in crowdsourcing contests is associated with risks and uncertainties that are not fully under the participating individual’s control. The managerial implications are straightforward: fairness is an important parameter beyond the volitional control of users, and organizations need to provide fair surrounding conditions so that participants can derive value from their co-creation activity.

The findings also highlight the distinction between procedural and distributive fairness, showing asymmetrical effects on loyalty intentions and evoked product interest. Our findings indicate that distributive fairness may be considered a basic factor, which causes frustration and negative consequences if certain thresholds are not met, but does not necessarily create value if high levels of distributive fairness are achieved. In practice, this provides interesting insights for both the design of reward structures and the size of the prizes in crowdsourcing contests. If the advertised reward is not attractive to participants, it is unlikely that they will be motivated to participate at all (Franke et al., 2013); it may also cause negative consequences among those who engage nonetheless. In light of these findings, firms should also be aware that a winner-takes-all award structure always produces many “losers” who might develop significant feelings of unfairness. Since most participants work hard over several weeks to create what they deem appropriate and promising solutions, they may feel frustrated if the award structure does not acknowledge this effort. A structured allocation of multiple awards might lead to higher participant investments, since the chances of being rewarded would be higher. Terwiesch and Xu (2008), for example, propose an optimal reward structure to minimize underinvestment by problem-solvers.

In contrast, our study shows that procedural fairness is an excitement factor that leads to positive consequences when fulfilled, but may not hurt much if not executed well. Nevertheless, a company should seize the opportunity and ensure procedural fairness by, for example, guaranteeing an unbiased, transparent and rule-consistent jury process and by dealing with the contest community in an honest and truthful manner. By demonstrating the asymmetric nature of the individual fairness dimensions, we clearly show that crowdsourcing initiatives are a double-edged sword. From the company’s perspective, fairness violations may have severe consequences, and the firm might lose more than it gains from the generated ideas and designs. Therefore, companies that decide to launch a crowdsourcing contest under their own brand must invest in fair awards and procedures along with respectful communication and treatment of participants.

Alternatively, as mentioned in the introduction, firms can also broadcast their challenges via intermediaries that specialize in launching crowdsourcing competitions for other companies. Such intermediaries, like Atizo, usually have several thousand potential problem solvers in their communities and extensive experience in managing crowdsourcing contests, including how to set the prize structure and how to design the jury decision process. For companies lacking the experience or the capacity to manage large-scale crowdsourcing contests, it might be advisable to fall back on such intermediaries and rely on their professional service. In this case, companies can still enjoy some of the relationship-building advantages by being visible to the crowd. At the same time, they can benefit from the safety of a professional contest-managing platform intermediary and avoid potential PR disasters resulting from fairness violations. The ongoing interaction with the contest community, in particular, is a process lasting several weeks that requires considerable attention, monitoring and high responsiveness to community requests. It is during this time that participants’ procedural fairness perceptions are shaped. Since procedural fairness emerged in our study as an excitement factor, we see good relationship-building potential for firms especially in active community management during the contest period. Finally, companies must weigh how important the relationship with end-consumers and crowdsourcing participants is to them, and whether secrecy concerns might speak against launching a crowdsourcing contest under the company’s own brand name. In the case of industrial products positioned very early in the value creation process of the final product (e.g. chemical compounds), the relationship-building mechanisms of crowdsourcing competitions may not be relevant at all. For those firms, an anonymized search via platform intermediaries like InnoCentive might be the better approach to benefit from the wisdom of crowds.

Limitations and future outlook

Limitations and promising areas for future research warrant discussion in this context. Our study was limited in that we investigated only one particular design contest, and therefore our findings may not generalize to other crowdsourcing competitions. The contest attracted more than 900 participants, but our data analysis was based on the subset of 121 purified responses from participants who filled out the questionnaire. It is therefore possible that we did not capture the whole picture, although our sample included participants who were more engaged than the overall contest population and who are therefore likely to be more sensitive to fairness violations. Furthermore, our results are especially interesting for companies that wish to launch a crowdsourcing contest under their own brand name. They are, however, also useful for broadcasting platforms that act as intermediaries for other firms; these platforms, too, must ensure fairness in order to maintain a sustainable business model. Future research should also investigate the optimal award structure that would be perceived as fair, as well as the possible interaction effects between the various fairness types (as highlighted above). Finally, the jury decision procedures and the decision process deserve more attention. It has been shown that idea selection by jury teams (individuals selecting the best idea at the same time and place) does not always lead to the identification of the best solution (Girotra et al., 2010). It could be helpful to investigate whether considering the community’s internal votes in the selection of winning ideas improves the perceived level of procedural fairness.

Figures

Figure 1: Hypothesis model

Figure 2: The LED emotionalize your light design contest

Table I: Sample comparison

                    Total contest population
                    Contest participants                     Survey participants
                    Min.    Max.    SD       Median          Median                p
No. of ideas        0       29      2.89     1.00            2.00                  ***
No. of votes        0       583     57.18    1.00            3.00                  ns
No. of comments     0       929     60.45    0.00            1.00                  ***
Age                         73      11.35    27.00           29.00                 **
Gender                                                                             ns

Notes: **,***Significant at p<0.01 and p<0.001, respectively

Table II: Wording and psychometric properties of items and constructs

Construct Item FL FR AVE F-LR
Distributive fairness (Grewal and Baker, 1994) How fair are the prizes in the OSRAM Design Contest? 0.87 0.88 0.71 0.53
The contest prizes for the winners in the OSRAM Design Contest are appropriate 0.95
I consider the prizes as fair 0.70
Procedural fairness (Folger and Konovsky, 1989; Colquitt, 2001) The jury decision process was free from bias 0.89 0.90 0.69 0.54
The jury decision process applied objective standards so that decisions were made in a consistent manner 0.83
The jury decision process employed procedures designed to provide useful feedback regarding any decisions 0.67
The jury team dealt with the contest community in an honest and truthful manner when making decisions 0.82
Evoked product interest (Garbarino and Johnson, 1999) My participation in the OSRAM Design Contest led me to the result that … 0.80 0.58 0.51
… I became interested in the lights designed by the contest community 0.75
… I want to be informed regarding the development progress of the lights designed by the contest community 0.79
… I can imagine buying the lights designed by the contest community as soon as they have been realized 0.75
Perceived innovativeness (Deshpande et al., 1993) Compared to other light manufacturers OSRAM … 0.90 0.76 0.21
… is more innovative 0.88
… is first-to-market with new products 0.90
… is the leader of innovation 0.84
Loyalty changea (Beatty and Kahle, 1988; Zeithaml et al., 1996) I will not use other lighting products if OSRAM lights are available (R) 0.69 0.92 0.69 0.18
I will recommend OSRAM lights to someone who seeks my advice 0.88
I encourage friends and relatives to use OSRAM lights 0.83
I will use OSRAM lights more often in the future 0.87
OSRAM lights are my first choice 0.86

Notes: FL, standardized factor loading; FR, factor reliability; AVE, average variance extracted; F-LR, Fornell-Larcker ratio (discriminant validity); aloyalty change: the same items have been assessed before and after contest participation; pre-participation scores were subtracted from post-participation scores

Table III: Results of the path analysis

Path β p
Distributive fairness → product interest 0.33 0.008
Distributive fairness → perc. innovativeness 0.01 0.939
Distributive fairness → loyalty change 0.12 0.326
Procedural fairness → product interest 0.35 0.005
Procedural fairness → perc. innovativeness 0.34 0.007
Procedural fairness → loyalty change 0.16 0.204

Notes: CMIN/DF 1.147, p=0.125, GFI=0.884, TLI=0.984, CFI=0.987, RMSEA=0.035

Table IV: Asymmetric effects of fairness

Distributive fairness (median 2.5) Procedural fairness (median 2.33)
High Low High Low
Evoked product interest
β 0.095 0.409 0.353 0.155
Sig. (0.316) (0.000) (0.001) (0.135)
Interpret Basic factor Excitement factor
Perceived innovativeness
β 0.099 0.178 0.093 0.218
Sig. (0.341) (0.087) (0.406) (0.054)
Interpret ns by trend: basic ns by trend: basic
Loyalty change
β 0.038 0.265 0.355 0.022
Sig. (0.711) (0.010) (0.002) (0.840)
Interpret Basic factor Excitement factor

References

Adams, J.S. (1965), “Inequity in social exchange”, in Berkowitz, L. (Ed.), Advances in Experimental Social Psychology, Academic Press, New York, NY, pp. 267-299.

Anderson, E.W., Fornell, C. and Mazvancheryl, S.K. (2004), “Customer satisfaction and shareholder value”, Journal of Marketing, Vol. 68, October, pp. 172-185.

Armstrong, S.J. and Overton, T.S. (1977), “Estimating nonresponse bias in mail surveys”, Journal of Marketing Research, Vol. 14, pp. 396-402.

Bagozzi, R. and Yi, Y. (1988), “On the evaluation of structural equation models”, Journal of the Academy of Marketing Science, Vol. 16 No. 1, pp. 74-94.

Bagozzi, R.P. and Dholakia, U.M. (2006), “Open source software user communities: a study of participation in linux user groups”, Management Science, Vol. 52 No. 7, pp. 1099-1115.

Baldwin, C. and Hippel, E.V. (2011), “Modeling a paradigm shift: from producer innovation to user and open collaborative innovation”, Organization Science, Vol. 22 No. 6, pp. 1399-1417.

Beatty, S.E. and Kahle, L.R. (1988), “Alternative hierarchies of the attitude-behavior relationship: the impact of brand commitment and habit”, Journal of the Academy of Marketing Science, Vol. 16 No. 2, pp. 1-10.

Belk, R. (1988), “Possessions and the extended self”, Journal of Consumer Research, Vol. 15 No. 2, pp. 139-168.

Bernhardt, J., Helander, N., Jussila, J. and Kärkkäinen, H. (2016), “Crowdsourcing in business-to-business markets: a value creation and business model perspective”, in Lee (Ed.), Encyclopedia of E-Commerce Development, Implementation, and Management, IGI Global, Hershey, PA, pp. 933-943.

Blau, P.M. (1964), Exchange and Power in Social Life, John Wiley, New York, NY.

Boiney, L.G. (1995), “When efficient is insufficient: fairness in decisions affecting a group”, Management Science, Vol. 41 No. 9, pp. 1523-1537.

Boudreau, K.J. and Lakhani, K. (2013), “Using the crowd as an innovation partner”, Harvard Business Review, Vol. 61, April, pp. 60-69.

Breithut, J. (2011), “Pril-Wettbewerb endet im PR-Debakel”, Spiegel, Hamburg, available at: www.spiegel.de/netzwelt/web/virale-werbefallen-pril-schmeckt-nach-haehnchen-a-756532.html (accessed April 12, 2011).

Brown, T. and Dacin, P. (1997), “The company and the product: corporate associations and consumer product responses”, Journal of Marketing, Vol. 61 No. 1, pp. 68-84.

Browne, M.W. and Cudeck, R. (1993), “Alternative ways of assessing model fit”, in Bollen, K.A. and Long, S.J. (Eds), Testing Structural Equation Models, Sage Publications, Newbury Park, CA, pp. 136-162.

Colquitt, J.A. (2001), “On the dimensionality of organizational justice: a construct validation on a measure”, Journal of Applied Psychology, Vol. 86 No. 3, pp. 386-400.

Cook, C., Heath, F., Thompson, R.L. and Thompson, B. (2001), “Score Reliability in web- or internet-based surveys: unnumbered graphic rating scales versus likert-type scales”, Educational & Psychological Measurement, Vol. 61, pp. 697-706.

Deshpande, R., Farley, J.U. and Webster, F.E. Jr (1993), “Corporate culture, customer orientation, and innovativeness in Japanese firms: a quadrad analysis”, Journal of Marketing, Vol. 57 No. 1, pp. 23-37.

Deutsch, M. (1985), Distributive Justice, Yale University Press, New Haven, CT.

Folger, R. and Konovsky, M.A. (1989), “Effects of procedural and distributive justice on reactions to pay raise decisions”, Academy of Management Journal, Vol. 32 No. 1, pp. 115-130.

Fornell, C. and Larcker, D.F. (1981), “Evaluating structural equation models with unobservable variables and measurement error”, Journal of Marketing Research, Vol. 18, February, pp. 39-50.

Franke, N., Keinz, P. and Klausenberger, K. (2013), “Does this sound like a fair deal? Antecedents and consequences of fairness expectations on the individual’s decision to participate in firm innovation”, Organization Science, Vol. 24 No. 5, pp. 1495-1516.

Franke, N., Schreier, M. and Kaiser, U. (2010), “The ‘I designed it myself’ effect in mass customization”, Management Science, Vol. 56 No. 1, pp. 125-140.

Frickel, C. (2011), “Facebook Aufstand gegen Pril-Wettbewerb”, FOCUS Magazin Verlag GmbH, available at: www.focus.de/digital/internet/facebook/facebook-aufstand-gegen-pril-wettbewerb_aid_628554.html (accessed May 19, 2011).

Fuchs, C. and Schreier, M. (2011), “Customer empowerment in new product development”, Journal of Product Innovation Management, Vol. 28 No. 1, pp. 17-32.

Füller, J. (2010), “Refining virtual co-creation from a consumer perspective”, California Management Review, Vol. 52 No. 2, pp. 98-122.

Garbarino, E. and Johnson, M.S. (1999), “The different roles of satisfaction, trust, and commitment in customer relationships”, Journal of Marketing, Vol. 63, April, pp. 70-87.

Gatzweiler, A., Blazevic, V. and Piller, F.T. (2017), “Dark side or bright light: destructive and constructive deviant content in consumer ideation contests”, Journal of Product Innovation Management, January, doi: 10.1111/jpim.12369.

Gebauer, J., Füller, J. and Pezzei, R. (2013), “The dark and the bright side of co-creation: triggers of member behavior in online innovation communities”, Journal of Business Research, Vol. 66 No. 9, pp. 1516-1527.

Gilliland, S.W. (1993), “The perceived fairness of selection systems: an organizational justice perspective”, The Academy of Management Review, Vol. 18 No. 4, pp. 694-734.

Girotra, K., Terwiesch, C. and Ulrich, K.T. (2010), “Idea generation and the quality of the best idea”, Management Science, Vol. 56 No. 4, pp. 591-605.

Grewal, D. and Baker, J. (1994), “Do retail store environmental factors affect consumers’ price acceptability? An empirical examination”, International Journal of Research in Marketing, Vol. 11 No. 2, pp. 107-115.

Hair, J., Black, B., Babin, B., Anderson, R. and Tatham, R. (2006), Multivariate Data Analysis, Prentice Hall, Upper Saddle River, NJ.

Herzberg, F., Mausner, B. and Snyderman, B.B. (1959), The Motivation to Work, John Wiley, New York, NY.

Homans, G.C. (1958), “Social behavior as exchange”, American Journal of Sociology, Vol. 63 No. 6, pp. 597-606.

Howe, J. (2008), Crowdsourcing: How the Power of the Crowd is Driving the Future of Business, The Crown Publishing Group, New York, NY.

Hu, L.-T. and Bentler, P.M. (1999), “Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives”, Structural Equation Modeling: A Multidisciplinary Journal, Vol. 6 No. 1, pp. 1-55.

Jeppesen, L.B. and Lakhani, K.R. (2010), “Marginality and problem-solving effectiveness in broadcast search”, Organization Science, Vol. 21 No. 5, pp. 1016-1033.

Kano, N. (1984), “Attractive quality and must-be quality”, Hinshitsu (Quality), Vol. 14 No. 2, pp. 147-156 (in Japanese).

Kline, R.B. (1998), Principles and Practice of Structural Equation Modeling, Guilford Press, New York, NY.

Kohler, T. (2015), “Crowdsourcing-based business models: how to create and capture value”, California Management Review, Vol. 57 No. 4, pp. 63-84.

Kozinets, R.V., Hemetsberger, A. and Schau, H.J. (2008), “The wisdom of consumer crowds: collective innovation in the age of networked marketing”, Journal of Macromarketing, Vol. 28 No. 4, pp. 339-354.

Kozinets, R.V., Wilner, S., Wojnicki, A. and De Valck, K. (2010), “Networks of narrativity: understanding word-of-mouth marketing in online communities”, Journal of Marketing, Vol. 74 No. 2, pp. 71-89.

Leventhal, G.S. (1980), “What should be done with equity theory? New approaches to the study of fairness in social relationships”, in Gergen, K.J., Greenberg, M.S. and Willis, R.H. (Eds), Social Exchange: Advances in Theory and Research, Plenum, New York, NY, pp. 27-55.

Lind, E.A. and Tyler, T.R. (1988), The Social Psychology of Procedural Justice, Plenum Press, New York, NY.

Lindell, M.K. and Whitney, D.J. (2001), “Accounting for common method variance in cross-sectional research designs”, Journal of Applied Psychology, Vol. 86 No. 1, pp. 114-121.

McAlexander, J., Schouten, J. and Koenig, H. (2002), “Building brand community”, Journal of Marketing, Vol. 66 No. 1, pp. 38-54.

McWilliam, G. (2000), “Building strong brands through online communities”, Sloan Management Review, Vol. 41 No. 3.

Malthouse, E.C., Haenlein, M., Skiera, B., Wege, E. and Zhang, M. (2013), “Managing customer relationships in the social media era: introducing the social CRM house”, Journal of Interactive Marketing, Vol. 27 No. 4, pp. 270-280.

Matzler, K. and Sauerwein, E. (2002), “The factor structure of customer satisfaction: an empirical test of the importance grid and the penalty-reward-contrast analysis”, International Journal of Service Industry Management, Vol. 13 No. 4, pp. 314-332.

Matzler, K., Hinterhuber, H.H., Bailom, F. and Sauerwein, E. (1996), “How to delight your customers”, Journal of Product and Brand Management, Vol. 5 No. 2, pp. 6-18.

Mittal, V. and Kamakura, W.A. (2001), “Satisfaction, repurchase intent, and repurchase behavior: investigating the moderating effect of customer characteristics”, Journal of Marketing Research, Vol. 38 No. 2, pp. 131-142.

Mittal, V., Ross, W.T. and Baldasare, P.M. (1998), “The asymmetric impact of negative and positive attribute-level performance on overall satisfaction and repurchase intentions”, Journal of Marketing, Vol. 62, January, pp. 33-47.

Moleskine (2011a), “Create the Moleskinerie logo”, available at: www.moleskine.com/gb/news/moleskinerie_logo_competition

Moleskine (2011b), “Facebook entry by Moleskine on October 21”, available at: www.facebook.com/moleskine

Nambisan, S. and Baron, R.A. (2007), “Interactions in virtual customer environments: implications for product support and customer relationship management”, Journal of Interactive Marketing, Vol. 21 No. 2, pp. 42-62.

Nambisan, S. and Nambisan, P. (2008), “How to profit from a better ‘virtual customer environment’”, MIT Sloan Management Review, Vol. 49 No. 3, pp. 53-61.

Nambisan, S. and Baron, R.A. (2009), “Virtual customer environments: testing a model of voluntary participation in value co-creation activities”, Journal of Product Innovation Management, Vol. 26 No. 4, pp. 388-406.

Podsakoff, P.M., Mackenzie, S.B. and Podsakoff, N.P. (2012), “Sources of method bias in social science research and recommendations on how to control it”, Annual Review of Psychology, Vol. 63 No. 1, pp. 539-569.

Podsakoff, P.M., Mackenzie, S.B., Jeong-Yeon, L. and Podsakoff, N.P. (2003), “Common method biases in behavioral research: a critical review of the literature and recommended remedies”, Journal of Applied Psychology, Vol. 88 No. 5, pp. 879-903.

Poetz, M. and Schreier, M. (2012), “The value of crowdsourcing: can users really compete with professionals in generating new product ideas?”, Journal of Product Innovation Management, Vol. 29 No. 2, pp. 245-256.

Porter, C.E. and Donthu, N. (2008), “Cultivating trust and harvesting value in virtual communities”, Management Science, Vol. 54 No. 1, pp. 113-128.

Prahalad, C.K. and Krishnan, M.S. (2008), The New Age of Innovation: Driving Co-Creation Value Through Global Networks, McGraw Hill, New York, NY.

Roehm, M. and Tybout, A. (2006), “When will a brand scandal spill over, and how should competitors respond?”, Journal of Marketing Research, Vol. 43 No. 3, pp. 366-373.

Rust, R.T. and Zahorik, A.J. (1993), “Customer satisfaction, customer retention and market share”, Journal of Retailing, Vol. 69 No. 2, pp. 193-215.

Salt, S. (2011), “How to lose fans & annoy advocates: the Moleskine story”, The Inc., Slingers, available at: www.theincslingers.com/2011/10/how-to-lose-fans-annoy-advocates-the-moleskine-story/ (accessed November 15, 2011).

Sawall, A. (2011), “Henkel vergrault seine Facebook-Freunde [Online]”, available at: www.zeit.de/digital/internet/2011-05/pril-facebook-pr (accessed May 21, 2011).

Schenk, E. and Guittard, C. (2011), “Towards a characterization of crowdsourcing practices”, Journal of Innovation Economics & Management, Vol. 7 No. 1, pp. 93-107, doi: 10.3917/jie.007.0093.

Schlosser, A.E. (2003), “Experiencing products in the virtual world: the role of goal and imagery in influencing attitudes versus purchase intentions”, Journal of Consumer Research, Vol. 30, September, pp. 184-198.

Schreier, M., Fuchs, C. and Dahl, D.W. (2012), “The innovation effect of user design: exploring consumers’ innovation perceptions of firms selling products designed by users”, Journal of Marketing, Vol. 76, pp. 18-32.

Sea-Jin, C., Witteloostuijn, A.V. and Eden, L. (2010), “From the editors: common method variance in international business research”, Journal of International Business Studies, Vol. 41 No. 2, pp. 178-184.

Srivastava, R., Shervani, T.A. and Fahey, L. (1998), “Market-based assets and shareholder value: a framework for analysis”, Journal of Marketing, Vol. 62, January, pp. 2-18.

Terwiesch, C. and Xu, Y. (2008), “Innovation contests, open innovation, and multiagent problem solving”, Management Science, Vol. 54 No. 9, pp. 1529-1543.

Thibaut, J.W. and Walker, L. (1975), Procedural Justice: A Psychological Analysis, Erlbaum, Hillsdale, NJ.

Vukovic, M. (2009), “Crowdsourcing for enterprises”, Proceedings of 2009 World Conference on Services, pp. 686-692, doi: 10.1109/SERVICES-I.2009.56.

Ward, J.C. and Ostrom, A.L. (2006), “Complaining to the masses: the role of protest framing in customer created complaint web sites”, Journal of Consumer Research, Vol. 33 No. 2, pp. 220-230.

Weiss, H.M., Suckow, K. and Cropanzano, R. (1999), “Effects of justice conditions on discrete emotions”, Journal of Applied Psychology, Vol. 84, October, pp. 786-794.

Xie, C., Bagozzi, R.P. and Troye, S.V. (2008), “Trying to prosume: toward a theory of consumers as co-creators of value”, Journal of the Academy of Marketing Science, Vol. 36, pp. 109-122.

Zeithaml, V.A., Berry, L.L. and Parasuraman, A. (1996), “The behavioral consequences of service quality”, Journal of Marketing, Vol. 60, April, pp. 31-46.

Zwass, V. (2010), “Co-creation: toward a taxonomy and an integrated research perspective”, International Journal of Electronic Commerce, Vol. 15 No. 1, pp. 11-48, doi: 10.2753/JEC1086-4415150101.

Corresponding author

Rita Faullant can be contacted at: rita.faullant@aau.at
