For this study published in Nature in March 2022, will the main finding replicate?
Resolved NO (Nov 21)

We have submitted the study described below to a replication attempt, and we invite you to read the description and then to predict whether the main finding replicated. We consider a finding to have replicated if the original result was statistically significant and our result was statistically significant and in the same direction, OR if the original result was statistically insignificant and our result was also statistically insignificant.
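
To make the criterion concrete, here is a minimal sketch of the decision rule in Python. This is not the project's code: the $\alpha = 0.05$ threshold matches the one confirmed in the comments below, and comparing effect signs is one assumed way to operationalize "same direction."

```python
# Minimal sketch of the replication criterion described above (not the project's code).
def replicated(orig_p, orig_effect, rep_p, rep_effect, alpha=0.05):
    """Return True if the replication outcome counts as a successful replication."""
    orig_sig = orig_p < alpha
    rep_sig = rep_p < alpha
    if orig_sig:
        # Original was significant: replication must be significant and in the same direction.
        return rep_sig and (orig_effect * rep_effect > 0)
    # Original was not significant: replication must also be non-significant.
    return not rep_sig

# Hypothetical example: a significant original result, a non-significant replication.
print(replicated(0.0003, 1.0, 0.20, 0.5))  # False -> counts as a failure to replicate
```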

Replication of Study 2A from “Knowledge about others reduces one’s own sense of anonymity” in Nature

What is this Replication Project?

The project involves replications of randomly selected, newly published psychology papers in prestigious journals, with the overall aim of rewarding best practices and shifting incentives in social science toward more replicable science.


Note that we also rate papers on their transparency and on how unlikely they are to be misinterpreted by readers, but the focus here is only to describe the study in enough detail to allow people to predict the outcome.

How often have social science studies tended to replicate in the past?

In one historical project that attempted to replicate 100 experimental and correlational studies from 2008 in three important psychology journals, analysis indicated that 40% replicated successfully, 30% failed to replicate, and the remaining 30% were inconclusive. (Put another way, of the replications that were not inconclusive, 57% were successful.)


In another project, researchers attempted to replicate all experimental social science papers (that met basic inclusion criteria) published in Nature or Science (the two most prestigious general science journals) between 2010 and 2015. They found a statistically significant effect in the same direction as the original study for 62% (13 out of 21) of the studies, and the effect sizes of the replications were, on average, about 50% of the original effect sizes. Replicability varied between 57% and 67% depending on the replicability indicator used.

Summary of this study

We subjected Study 2A from “Knowledge about others reduces one’s own sense of anonymity,” published in Nature, to a replication attempt. Our replication study (N = 475) examined whether people assigned a higher probability to another person detecting their lie if they were given information about that other person than if they were not.


In the replication experiment, as in the original study:

  1. Participants were asked to write 5 statements about themselves: 4 truths and 1 lie. They were told those statements would be shared with another person, who would then guess which one was the lie. 

  2. Participants were either given 4 true statements about their ‘partner’ (Information Condition), or they were given no information about their ‘partner’ (No Information Condition). 

  3. Participants were asked to state the percentage chance that their ‘partner’ would detect their lie.*


*In our replication, before asking people to give their estimated percentage, we reminded participants that their 'partner' was shown their 5 statements and not told which were true. This was done in case participants had forgotten the conditions of the experiment. This reminder was not provided in the original study.


We collected data from 481 participants. We excluded 4 participants who were missing demographic data. We also excluded 2 participants who submitted nonsensical single-word answers to the ‘four truths and a lie’ prompt, leaving 475 participants for analysis. Participants could not proceed in the experiment if they left any of those statements blank, but there was no automated check on the content of what was submitted. The authors of the original study did not remove any subjects from their analysis, but they recommended that we perform this quality check in our replication.


In the original experiment and in our replication, participants were not actually connected to a ‘partner.’ They were informed about this fact after all participants had completed the experiment.


To test the main hypothesis, we used two-tailed independent-samples t-tests. The main analysis asked whether participants in the Information Condition assigned a different probability to their ‘partner’ detecting their lie than participants in the No Information Condition did.
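
For readers who want to see roughly what this analysis looks like in code, here is a minimal sketch using SciPy. The data are simulated and the group sizes, means, and spreads are purely illustrative; this is not the project's analysis script.

```python
# Illustrative sketch of the main analysis (simulated data, not the replication dataset).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical detection-probability estimates (0-100%) for the two conditions.
info_condition = rng.normal(loc=40, scale=20, size=237).clip(0, 100)
no_info_condition = rng.normal(loc=33, scale=20, size=238).clip(0, 100)

# Two-tailed independent-samples t-test, as described above.
t_stat, p_value = stats.ttest_ind(info_condition, no_info_condition)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```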

Summary of this study - in flowchart form

Our replication study is summarized in the diagram below. (Here’s a link to a higher-resolution version.)

If you'd like to see the above study description in the form of a Google Document instead, here's the link: https://docs.google.com/document/d/1wmXfOu17WZ0FnBGlCZay5zx9nVZ1yQaaIBpbbGPek70/edit?usp=sharing

Close date updated to 2022-08-02 11:59 pm

Close date updated to 2022-08-07 11:59 pm

Close date updated to 2022-08-14 11:59 pm

Close date updated to 2022-08-26 11:59 pm

Post-resolution update - here is a link to our report: https://replications.clearerthinking.org/replication-2022nature603/


🏅 Top traders

| # | Name | Total profit |
|---|------|--------------|
| 1 |      | Ṁ249         |
| 2 |      | Ṁ62          |
| 3 |      | Ṁ30          |
| 4 |      | Ṁ25          |
| 5 |      | Ṁ13          |
predicted YES

As a counterpoint to my earlier comment, putting the details of this study into Alvaro de Menard's "simple model" for forecasting replications [1] gives a 48% chance of replication [2].

[1] https://fantasticanachronism.com/2021/11/18/how-i-made-10k-predicting-which-papers-will-replicate/#Early-Steps-A-Simple-Model

[2] https://docs.google.com/spreadsheets/d/1XktdmFO9xRkCFfL3OS_kDv0mwRmG7_cOFycHQBioftU/edit?usp=sharing

predicted YES

## Base Rates

The two large-scale replication projects linked in the market description find that 40% and 62% of studies replicate. A literature review I did in March found that, across all large-scale replication projects in psychology, 45% of the studies attempted replicated successfully. This can be our base rate: p = 0.45.

## Updating on Study Specifics

I will first note that, in my opinion, the hypothesis is plausible. It is by no means open-and-shut, but I wouldn't be surprised if it were true. The idea that "knowledge about others reduces one's own sense of anonymity" often holds in day-to-day situations; if we put people in a laboratory experiment designed specifically so that this heuristic no longer applies, it's not clear that they would intuitively feel this. I think we are more likely to see hypotheses this plausible when the study replicates than when it doesn't. I'll put this down as a Bayes factor of 1.5.

Next, the sample size. The original study had two arms with 228 and 234 participants respectively. This is much larger than a typical pre-replication-crisis psychology study. A larger sample makes it much less likely that the original result was just statistical noise, and much more likely that the study found an effect that will replicate. How much more likely? I would ballpark it at P(large study | will replicate) = 0.4 and P(large study | won't replicate) = 0.1, giving a Bayes factor of 4.

P-values: the bane of the student and the scapegoat of psychology. In this study we have p = 0.000318, which is pretty low. I estimate P(really low p-value | will replicate) = 0.5 and P(really low p-value | won't replicate) = 0.05, so our Bayes factor is 10.

Updating our prior (the base rate) by the Bayes factors above gives posterior odds of (0.45/0.55)*1.5*4*10 = 49.1, or a posterior probability of about 98%.
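
As a sanity check on that arithmetic, here is a small Python sketch of the same odds-form update, using the base rate and Bayes factors given above:

```python
# Odds-form Bayesian update: prior probability -> posterior probability.
def posterior_probability(base_rate, bayes_factors):
    odds = base_rate / (1 - base_rate)  # convert prior probability to odds
    for bf in bayes_factors:
        odds *= bf                      # multiply in each Bayes factor
    return odds / (1 + odds)            # convert posterior odds back to probability

print(posterior_probability(0.45, [1.5, 4, 10]))  # ~0.98, matching the 98% above
```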

## Updating on market behaviour

Since so many people are betting against me, I would put my subjective probability at about 90%. If you are betting against me, why? Have I made a mistake? Is there something important I have overlooked?

predicted YES

@JonathanNankivell It has been pointed out to me that the paragraph on p-values can be improved. Perhaps it should read:

In this study we have p < 0.001. I estimate P(p < 0.001 | will replicate) = 0.1 and P(p < 0.001 | won't replicate) = 0.001 so our Bayes factor is 100.

Following this through gives posterior odds of 491, or a posterior probability of 0.998!
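
Plugging the revised Bayes factor into the same odds-form update (again, just a sketch of the arithmetic above):

```python
# Revised update with a Bayes factor of 100 for p < 0.001.
odds = (0.45 / 0.55) * 1.5 * 4 * 100   # ~491
print(odds, odds / (1 + odds))          # posterior probability ~0.998
```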

Please note that this account bought some shares in this market in error. Once the error was noticed, we sold them all. This account has a policy of not betting in its own markets.

predicted YES

Hey, why does the close date keep changing? What's going on behind the scenes?

@JonathanNankivell thanks for your interest! This study (and the other replication attempts we've posted about) was completed some time ago, so nothing is changing behind the scenes. However, we were hoping for more engagement (a larger number of bets), so we've been extending the close date in the hope that more people will find our replication markets and participate in them.

predicted NO

@TheReplicationProject Is that to say the replication attempt is already complete? If so, emphasizing that fact could spur engagement, as people want a rapid return on their investment. I assumed my funds would be locked up for about a year, so I was investing primarily to encourage replication markets, since I wasn't expecting good returns after accounting for opportunity cost.

predicted YES

@TheReplicationProject Updated close date again—do you have a number of bets you’re trying to hit before you close?

bought Ṁ150 of YES

Probably should have asked this before betting, but what is the significance threshold for the replication? $\alpha = 0.05$? $\alpha = 0.01$?

@JonathanNankivell it's $\alpha = 0.05$

predicted YES

@TheReplicationProject Phew!! Thanks for letting me know :)