Strong reciprocity is an area of research in behavioral economics, evolutionary psychology, and evolutionary anthropology on the predisposition to cooperate even when there is no apparent benefit in doing so. This topic is particularly interesting to those studying the evolution of cooperation, as these behaviors appear to contradict predictions made by many models of cooperation.[1] In response, current work on strong reciprocity is focused on developing evolutionary models which can account for this behavior.[2][3] Critics of strong reciprocity argue that it is an artifact of lab experiments and does not reflect cooperative behavior in the real world.[4]
A variety of studies from experimental economics provide evidence for strong reciprocity, either by demonstrating people's willingness to cooperate with others, or by demonstrating their willingness to incur costs in order to punish those who do not cooperate.
One experimental game used to measure levels of cooperation is the dictator game. In the standard form of the dictator game, two anonymous, unrelated participants take part. One is assigned the role of allocator and the other the role of recipient. The allocator is given some amount of money, which they may divide in any way they choose. If the allocator is trying to maximize their payoff, the rational (Nash equilibrium) solution is to assign nothing to the recipient. In a 2011 meta-analysis of 616 dictator game studies, Engel found an average allocation of 28.3%, with 36% of participants giving nothing, 17% choosing the equal split, and 5.44% giving the recipient everything.[5] The trust game, an extension of the dictator game, provides additional evidence for strong reciprocity. The trust game extends the dictator game by multiplying the amount the allocator gives to the recipient by some value greater than one, and then allowing the recipient to return some amount to the allocator. Once again, if participants are trying to maximize their payoff, the recipient should return nothing and the allocator should therefore send nothing. A 2009 meta-analysis of 84 trust game studies found that allocators sent an average of 51% and that recipients returned an average of 37%.[6]
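The payoff logic behind these predictions can be made concrete with a short numerical sketch. The following Python snippet is purely illustrative: the 10-unit endowment and the multiplier of 3 are assumptions chosen for the example, not parameters of the cited studies.

```python
# Illustrative trust game payoffs. The 10-unit endowment and the multiplier
# of 3 are assumptions for this example, not values from the cited studies.

def trust_game_payoffs(endowment, sent, multiplier, returned):
    """Return (allocator_payoff, recipient_payoff) for one round."""
    received = sent * multiplier              # amount the recipient obtains
    allocator = endowment - sent + returned   # keeps the rest, plus any return
    recipient = received - returned
    return allocator, recipient

# Pure payoff maximization predicts sending and returning nothing:
print(trust_game_payoffs(endowment=10, sent=0, multiplier=3, returned=0))
# -> (10, 0)

# Behavior near the experimental averages (send about half, return about 37%
# of the multiplied amount) leaves both players better off than the prediction:
print(trust_game_payoffs(endowment=10, sent=5, multiplier=3, returned=0.37 * 15))
# -> roughly (10.55, 9.45)
```

As the second call shows, mutual trust and reciprocation dominate the equilibrium prediction for both players, which is why observed behavior is read as evidence of reciprocal preferences rather than simple payoff maximization.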
A third experiment commonly used to demonstrate strong reciprocity preferences is the public goods game. In a public goods game, some number of participants are placed in a group, and each is given some amount of money. Each participant may then contribute any portion of their allocation to a common pool. The common pool is multiplied by some amount greater than one and then redistributed evenly to every participant, regardless of how much they contributed. In this game, for anyone trying to maximize their payoff, the rational Nash equilibrium strategy is to contribute nothing. However, in a 2001 study, Fischbacher observed average contributions of 33.5%.[7]
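The tension between individual and group payoffs in this game can be illustrated with a small sketch. The group size of four, the 10-unit endowments, and the multiplier of 2 below are assumptions made for the example, not values from the cited study.

```python
# Illustrative public goods game with assumed parameters.

def public_goods_payoffs(endowments, contributions, multiplier):
    """Each player keeps what they did not contribute plus an equal share
    of the multiplied common pool."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    return [e - c + share for e, c in zip(endowments, contributions)]

endowments = [10, 10, 10, 10]

# Full cooperation makes everyone better off than universal non-contribution:
print(public_goods_payoffs(endowments, [10, 10, 10, 10], multiplier=2))
# -> [20.0, 20.0, 20.0, 20.0]

# But a lone free rider does better still, which is why zero contribution is
# the individually payoff-maximizing (Nash equilibrium) strategy:
print(public_goods_payoffs(endowments, [0, 10, 10, 10], multiplier=2))
# -> [25.0, 15.0, 15.0, 15.0]
```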
The second component of strong reciprocity is that people are willing to punish those who fail to cooperate, even when punishment is costly. Such punishment takes two forms: second party and third party punishment. In second party punishment, the person who was hurt by the other party's failure to cooperate has the opportunity to punish the non-cooperator. In third party punishment, an uninvolved third party has the opportunity to punish the non-cooperator.
A common game used to measure willingness to engage in second party punishment is the ultimatum game. This game is very similar to the previously described dictator game, in which the allocator divides a sum of money between themselves and a recipient. In the ultimatum game, the recipient can either accept the allocator's offer or reject it, in which case both players receive nothing. If recipients are payoff maximizers, the Nash equilibrium is for them to accept any positive offer, and it is therefore in the allocator's interest to offer as close to zero as possible.[8] However, experimental results show that allocators usually offer over 40%, and that offers are rejected by recipients 16% of the time. Recipients are more likely to reject low offers than high offers.[9] Another example of second party punishment is the public goods game described earlier, with a second stage added in which participants can pay to punish other participants. In this game, a payoff maximizer's rational strategy in Nash equilibrium is neither to punish nor to contribute. However, experimental results show that participants are willing to pay to punish those who deviate from the average level of contribution – so much so that contributing a lower amount becomes disadvantageous, which allows cooperation to be sustained.[10][11]
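A brief sketch of the ultimatum game's payoff structure shows why rejection amounts to costly punishment. The 10-unit stake and 1-unit offer below are assumptions made for the example.

```python
# Illustrative ultimatum game payoffs; the stake and offer are assumed values.

def ultimatum_payoffs(stake, offer, accepted):
    """If the recipient rejects the offer, both players receive nothing."""
    if not accepted:
        return 0, 0
    return stake - offer, offer

# The equilibrium prediction: a minimal offer that the recipient accepts.
print(ultimatum_payoffs(stake=10, offer=1, accepted=True))    # -> (9, 1)

# Rejection is costly second party punishment: the recipient forgoes the offer
# in order to leave the allocator with nothing.
print(ultimatum_payoffs(stake=10, offer=1, accepted=False))   # -> (0, 0)
```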
Modifications of the dictator game and the prisoner's dilemma provide support for the willingness to engage in costly third party punishment. The modified dictator game is exactly the same as the traditional dictator game but with a third party observing. After the allocator makes their decision, the third party has the opportunity to pay to punish the allocator. A payoff-maximizing third party would choose not to punish, and a similarly rational allocator would choose to keep the entire sum. However, experimental results show that a majority of third parties punish allocations of less than 50%.[12] In the prisoner's dilemma with third party punishment, two of the participants play a prisoner's dilemma, in which each must choose to either cooperate or defect. The game is set up such that, regardless of what the other player does, it is rational for an income maximizer to always defect, even though mutual cooperation yields a higher payoff than mutual defection. A third player observes this exchange and can then pay to punish either player. An income-maximizing third party's rational response would be not to punish, and income-maximizing players would choose to defect. A 2004 study demonstrated that a near majority of participants (46%) were willing to pay to punish when one participant defected. If both parties defected, 21% were still willing to punish.
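The incentive structure of third party punishment can be sketched as follows. The stake, the punisher's endowment, and the 3:1 effectiveness of punishment are assumptions for this example, although a fee-to-impact ratio of this kind is a common experimental design.

```python
# Illustrative dictator game with third party punishment (assumed parameters).

def third_party_payoffs(stake, given, punisher_endowment, spent, effectiveness=3):
    """Each unit the third party spends on punishment reduces the allocator's
    payoff by `effectiveness` units (payoffs cannot go below zero)."""
    allocator = max(stake - given - spent * effectiveness, 0)
    recipient = given
    punisher = punisher_endowment - spent
    return allocator, recipient, punisher

# A payoff-maximizing third party never punishes, since punishing only lowers
# their own earnings:
print(third_party_payoffs(stake=10, given=0, punisher_endowment=5, spent=0))
# -> (10, 0, 5)

# Costly third party punishment of a completely selfish allocation:
print(third_party_payoffs(stake=10, given=0, punisher_endowment=5, spent=2))
# -> (4, 0, 3)
```

The second call shows the puzzle: the punisher pays 2 units to reduce someone else's payoff without gaining anything materially, which is what makes observed third party punishment evidence for strong reciprocity.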
Other researchers have investigated the extent to which these behavioral economic lab experiments on social preferences can be generalized to behavior in the field. In a 2011 study, Fehr and Leibbrandt examined the relationship between contributions in public goods games and participation in public goods provision among a community of shrimpers in Brazil. These shrimpers cut a hole in the bottom of their fishing buckets in order to allow immature shrimp to escape, thereby investing in the public good of the shared shrimp population. The size of the hole can be seen as the degree to which a shrimper cooperates, as larger holes allow more shrimp to escape. Controlling for a number of other possible influences, Fehr and Leibbrandt demonstrated a positive relationship between hole size and contributions in the public goods game experiment.[13]
Rustagi and colleagues demonstrated a similar effect among 49 groups of Bale Oromo herders in Ethiopia who were participating in forest management. Results from public goods game experiments revealed that more than one third of the participating herders were conditional cooperators, meaning they cooperate with other cooperators. Rustagi et al. demonstrated that groups with larger proportions of conditional cooperators planted a larger number of trees.[14]
In addition to experimental results, ethnography collected by anthropologists describes strong reciprocity observed in the field.
Records of the Turkana, an acephalous African pastoral group, demonstrate strong reciprocity behavior. If someone acts cowardly in combat or engages in some other free-riding behavior, the group confers and decides whether a violation has occurred. If it decides that one has, corporal punishment is administered by the violator's age cohort. Importantly, those administering the punishment and taking the attendant risks are not necessarily those who were harmed, making this costly third party punishment.[15]
The Walibri of Australia also exhibit third party costly punishment. The local community determines whether an act of homicide, adultery, theft, etc. was an offense. The community then appoints someone to carry out the punishment, and others to protect that person against retaliation.[16] Data from the Aranda foragers of the Central Desert in Australia suggest this punishment can be very costly, as it carries with it the risk of retaliation from the family members of the punished, which can be as severe as homicide.[17]
A number of evolutionary models have been proposed in order to account for the existence of strong reciprocity. This section briefly touches on an important small subset of such models.
The first model of strong reciprocity was proposed by Herbert Gintis in 2000, and it contained a number of simplifying assumptions that were addressed in later models. In 2004, Samuel Bowles and Gintis presented a follow-up model in which they incorporated cognitive, linguistic, and other capacities unique to humans in order to demonstrate how these might be harnessed to strengthen the power of social norms in large-scale public goods games. In a 2001 model, Joe Henrich and Robert Boyd also built on Gintis' model by incorporating conformist transmission of cultural information, demonstrating that this can also stabilize cooperative group norms.[18]
Boyd, Gintis, Bowles, and Peter Richerson's 2003 model of the evolution of third party punishment demonstrates how even though the logic underlying altruistic giving and altruistic punishment may be the same, the evolutionary dynamics are not. This model is the first to employ cultural group selection in order to select for better performing groups, while using norms to stabilize behavior within groups.[19]
Though punishment in many of the previously proposed models was both costly and uncoordinated, a 2010 model by Boyd, Gintis and Bowles presents a mechanism for coordinated costly punishment. In this quorum-sensing model, each agent chooses whether or not they are willing to engage in punishment. If a sufficient number of agents are willing, the group acts collectively to administer it.[20] An important aspect of this model is that strong reciprocity is self-regarding when rare in the population, but may be altruistic when common within a group.
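The quorum idea can be caricatured with a short sketch. The threshold, total cost, and cost-sharing rule below are illustrative simplifications, not the published model's equations.

```python
# Highly simplified sketch of quorum-triggered, coordinated punishment.

def coordinated_punishment(signals, threshold, total_cost, damage):
    """signals: list of booleans, True if an agent is willing to punish.
    Returns (punishment_occurs, cost_per_volunteer, harm_to_target)."""
    volunteers = sum(signals)
    if volunteers >= threshold:
        # Punishment goes ahead and its cost is shared among the volunteers,
        # so punishing is cheap when willingness is common.
        return True, total_cost / volunteers, damage
    # Below the quorum, no punishment takes place and no cost is paid, so a
    # lone would-be punisher risks little by signaling willingness.
    return False, 0.0, 0.0

print(coordinated_punishment([True, False, False], threshold=2, total_cost=6, damage=10))
# -> (False, 0.0, 0.0)
print(coordinated_punishment([True, True, True], threshold=2, total_cost=6, damage=10))
# -> (True, 2.0, 10)
```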
Significant cross-cultural variation has been observed in strong reciprocity behavior. In 2001, ultimatum game experiments were run in 15 small-scale societies across the world. The results showed dramatic variation, with mean offers as low as 26% in some groups and as high as 58% in others. The pattern of responder behavior was also notable, with participants in some cultures rejecting offers above 50%. Henrich and colleagues determined that the best predictors of offers were the size of the group (small groups giving less) and market integration (the more involved with markets, the more participants gave).[21] This study was later repeated with a different set of 15 small-scale societies and with better measures of market integration, finding a similar pattern of results.[22] These results are consistent with the culture-gene coevolution hypothesis. A later paper by the same researchers identified religion as a third major contributor: people who participate in a world religion were more likely to exhibit strong reciprocity behavior.[23]
A particularly prominent criticism of strong reciprocity theory is that it does not correspond to behavior found in the actual environment. In particular, the existence of third party punishment in the field is called into question. Some have responded to this criticism by pointing out that, if effective, third party punishment will rarely need to be used and will therefore be difficult to observe.[24][25] Others have suggested that there is evidence of costly third party punishment in the field.[26] Critics have responded by arguing that proponents cannot treat both the presence and the absence of costly third party punishment as evidence for its existence. They also question whether the ethnographic evidence presented constitutes costly third party punishment, and call for additional analysis of the costs and benefits of the punishment.[27] Other research has shown that one type of strong reciprocity does not predict other types of strong reciprocity within the same individual.[28]
The existence of strong reciprocity implies that systems designed purely around material self-interest may miss important motivators in the marketplace. This section gives two examples of possible implications. One area of application is the design of incentive schemes. For example, standard contract theory has difficulty explaining the degree of incompleteness observed in real contracts and the limited use of performance measures, even when such measures are cheap to implement. Strong reciprocity and models based on it suggest that this can be explained by people's willingness to act fairly, even when it is against their material self-interest. Experimental results suggest that this is indeed the case, with participants preferring less complete contracts, and workers willing to contribute a fair amount beyond what would be in their own self-interest.[29]
Another application of strong reciprocity is in allocating property rights and ownership structure. Joint ownership of property can be very similar to the public goods game: owners independently contribute to a common pool, which then yields a return that is evenly distributed to all parties. This ownership structure is subject to the tragedy of the commons, in which, if all parties are self-interested, no one will invest. Alternatively, property can be allocated in an owner-employee relationship, in which the employee is hired by the owner and paid a specific wage for a specific level of investment. Experimental studies show that participants generally prefer joint ownership, and do better under joint ownership than under the owner-employee arrangement.[30]
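The parallel between joint ownership and the public goods game can be illustrated with a small numerical sketch. The two-owner setup, investment levels, 1.5 multiplier, and wage below are assumptions made for the example, not parameters from the cited experiments.

```python
# Illustrative comparison of the two ownership structures (assumed parameters).

def joint_ownership_payoffs(investments, multiplier=1.5):
    """Each owner's net payoff: an equal share of the multiplied pool minus
    their own investment."""
    pool = sum(investments) * multiplier
    share = pool / len(investments)
    return [share - inv for inv in investments]

def owner_employee_payoffs(investment, wage, multiplier=1.5):
    """The owner keeps the return on the investment and pays a fixed wage;
    the employee receives the wage regardless of the outcome."""
    output = investment * multiplier
    return output - investment - wage, wage

# If both owners invest, each comes out ahead...
print(joint_ownership_payoffs([10, 10]))   # -> [5.0, 5.0]
# ...but a free rider does better still, which is the tragedy-of-the-commons risk:
print(joint_ownership_payoffs([0, 10]))    # -> [7.5, -2.5]
# The owner-employee arrangement avoids free riding but fixes the split:
print(owner_employee_payoffs(investment=20, wage=4))  # -> (6.0, 4)
```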