
Young Swiss males with a mean (SD) age of 23.5 (3.6) years took part in our double-blind, parallel-group, placebo-controlled experiment. The study was performed in accordance with the Declaration of Helsinki and approved by the Cantonal Ethics Commission Zurich. Subjects had no significant psychiatric, medical, or neurological disorder based on structured interviews; they were included in the study after having provided written informed consent. Three subjects were excluded due to self-reported nausea, and two because they did not understand the instructions.

Genotyping

The DAT1 polymorphism was characterized using a PCR amplification procedure with the following primers: forward 5′-TGTGGTGTAGGGAACGGCCTG-3′ and reverse 5′-CTTCCTGGAGGTCACGGCTCA-3′. Genotype frequencies in the two treatment groups are shown in Table 1.

Table 1. DAT1 polymorphism allele frequencies in our sample.

Group     10/10R   9/10R   9/9R   9/11R   7/10R   10/11R   Total
Placebo     45       46      8      1       1        0      101
L-DOPA      51       42      3      2       0        1       99
Total       96       88     11      3       1        1      200

doi:10.1371/journal.pone.0067820.t

Experimental Design

In our paradigm, two players, player A and player B, begin with an endowment of 10 monetary units (MUs). First, player A has to decide how much of his endowment he wants to transfer to player B, knowing that the transfer is tripled by the experimenter. The transfer has an 80% probability of reaching player B. In this case, B can choose either to make a repayment that equalizes payoffs or to retain the entire amount. The transfer is "lost" in the remaining 20% of cases, so that player B receives nothing and cannot make a repayment. Thus, in the case of an omitted return, player A does not know with 100% certainty whether this was player B's intention. To be able to observe learning over time, we let our subjects in the role of player A play several rounds of the task. Each player A plays 20 rounds of the task, paired with the same player B in all rounds. Since an omitted return is an extremely powerful aversive social signal, we implemented the "lost transfer" possibility to avoid the problem that player As might immediately withhold positive transfers after observing a single non-repayment.

All our subjects in the main experiment are in the role of player A. They are paired with player Bs whose repayment decisions were pre-recorded, i.e., player Bs decided in how many of a total of 20 rounds they were going to make a repayment. Thus, player Bs made decisions in line with their true prosocial preferences. Player As were aware that they were paired with a player B whose decisions had been pre-recorded, and also of the possibility that their transfers might get "lost" in 20% of the cases. The use of pre-recorded player B decisions is necessary to avoid an important confound. If player A interacted simultaneously with a given player B, he could vary transfers strategically to influence player B's future behavior [28]. Specifically, by conditioning transfers on B's previous repayments, A can generate reputational incentives for B to repay [29]. Thus, in repeated simultaneous interactions in this context, a repayment is no longer a clear signal of player B's prosocial preferences, because a purely selfish player B may also repay due to reputational incentives and hide his or her true type [30].
To investigate player A's pure learning process about a partner's prosocial preferences within a reinforcement-learning framework, we eliminated these strategic elements by using pre-recorded decisions.
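The excerpt does not spell out the computational model, so the following is only a minimal Rescorla-Wagner-style sketch of how player A's belief about the partner's repayment tendency could be updated from round to round; the learning rate, prior, and the per-round repayment probability are illustrative assumptions, while the 80% arrival probability and the 20 rounds are taken from the design above.

```python
import random

def simulate_learning(p_repay_intent=0.5, p_arrive=0.8, alpha=0.3,
                      n_rounds=20, seed=0):
    """Sketch of prediction-error learning about a partner's prosociality.

    `belief` is A's running estimate of the probability that a transfer is
    followed by a repayment. A repayment yields outcome 1; an omitted return
    yields outcome 0 regardless of whether B withheld repayment or the
    transfer was lost, so the feedback is ambiguous exactly as in the paradigm.
    p_repay_intent is a hypothetical stand-in for B's pre-recorded choices.
    """
    rng = random.Random(seed)
    belief = 0.5                                   # neutral prior (assumption)
    history = []
    for _ in range(n_rounds):
        arrived = rng.random() < p_arrive          # 80% of transfers reach B
        repaid = arrived and (rng.random() < p_repay_intent)
        outcome = 1.0 if repaid else 0.0
        belief += alpha * (outcome - belief)       # Rescorla-Wagner update
        history.append(belief)
    return history

print(simulate_learning()[-1])                     # belief after 20 rounds
```

Because lost transfers and withheld repayments produce the same outcome for A, such a learner necessarily underestimates the partner's true repayment intention unless the 20% loss rate is explicitly modeled, which is one reason the pre-recorded, strategy-free design matters for isolating learning.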