Step 1: The initial sample
Go to the tab "New study" on the left.
- Decide how many participants you want to collect initially.
Pro-Tip: You increase your chances of finding a significant effect when you run many studies with few participants, instead of few studies with many participants (Bakker, van Dijk, & Wicherts, 2012)!
- Next, decide what the true effect size should be.
Pro-Tip: For proper training in p-hacking, always select "0"! Then you can practice squeezing an effect out of nothing - isn't that cool!?
- Next, decide how many potential dependent variables (DVs) you assess. (Technical detail: all DVs correlate at r = .5)
Pro-Tip: The more DVs you measure, the more you increase the chance of finding something! DV_all is an aggregate of all DVs.
Finally, click on the button "Run new experiment" to collect your sample.
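Why do these pro-tips "work"? With a true effect of zero, every extra DV is another lottery ticket for a false positive, even when the DVs correlate at r = .5. Here is a minimal stand-alone sketch in Python (not the app's own code); the n of 20 per group, the z-test with known sd, and the shared-factor construction of the correlated DVs are all simplifying assumptions for illustration:

```python
import math
import random

random.seed(1)  # arbitrary seed, for reproducibility only

def correlated_group(n, m):
    """n subjects with m DVs that pairwise correlate at r = .5:
    each DV = sqrt(.5) * shared factor + sqrt(.5) * unique noise."""
    w = math.sqrt(0.5)
    rows = []
    for _ in range(n):
        shared = random.gauss(0, 1)
        rows.append([w * shared + w * random.gauss(0, 1) for _ in range(m)])
    return rows

def p_value(col_a, col_b):
    """Two-sided z-test on one DV (true effect 0, sd known to be 1)."""
    n = len(col_a)
    z = (sum(col_a) / n - sum(col_b) / n) / math.sqrt(2 / n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def any_significant(n=20, m=5):
    """One study: test every DV, return True if any p < .05."""
    a, b = correlated_group(n, m), correlated_group(n, m)
    col = lambda g, j: [row[j] for row in g]
    return any(p_value(col(a, j), col(b, j)) < .05 for j in range(m))

sims = 2000
for m in (1, 5, 10):
    rate = sum(any_significant(m=m) for _ in range(sims)) / sims
    print(f"{m:2d} DVs: at least one p < .05 in {rate:.0%} of null studies")
```

With one DV, the rate sits at the nominal 5%; with ten correlated DVs it climbs to several times that. The same arithmetic favors many small studies over one large one: each new study is a fresh draw from the same lottery.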
Now, take a look at the p-values in the middle pane. For your convenience, significant results are already marked in green, and results that are teetering on the brink of significance (i.e., promising results!) are marked in yellow.
Is it futile, such as p > .60? Meh. Consider running another conceptual replication. Probably the manipulation did not work, or the DV was not a good one. (What luck that you didn't run too many subjects on this shitty stimulus set!)
But maybe the p-value is in a promising region, say p < .20? Great! That's a near hit. Are you ready for Step 2? Now comes the fun part!
Step 2: Polish your p-value
Go to the tab "Now: p-hack!". This tab gives you all the great tools to improve your current study. Here you can fully utilize your data-analytic skills and creativity.
Things to try:
- Have you looked at all dependent variables (DVs)? And also their aggregate?
- Have you tried to control for age? Or for gender? Or for both?
- Maybe the effect is only present in one gender - try the interaction! (You will surely find a great post-hoc explanation for why the effect shows up only in men. I count on your creativity!)
- Push your result across the 5% boundary by adding 5 or 10 new subjects! (The 5% criterion is arbitrary anyway, isn't it?)
- Remove outliers! Simply click on a data point in the plot to exclude it from (or re-include it in) your analysis. This is also very powerful when you look at the interaction with gender: sometimes a point is an outlier only when you consider the genders separately.
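Adding subjects in small batches and re-testing after each batch is known as optional stopping, and its effect on the false-positive rate is easy to simulate. This stand-alone Python sketch (not the app's code) assumes a true effect of zero, groups starting at n = 20, a peek every 5 subjects up to n = 50, and a simple z-test with known sd; all of these numbers are illustrative assumptions:

```python
import math
import random

random.seed(2)  # arbitrary seed, for reproducibility only

def p_value(a, b):
    """Two-sided z-test (true effect 0, sd known to be 1)."""
    n = len(a)
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def optional_stopping(n_start=20, n_max=50, step=5):
    """Peek at p after every batch of new subjects; stop at the
    first p < .05, or give up at n_max. Returns True on a 'hit'."""
    a = [random.gauss(0, 1) for _ in range(n_start)]
    b = [random.gauss(0, 1) for _ in range(n_start)]
    while True:
        if p_value(a, b) < .05:
            return True
        if len(a) >= n_max:
            return False
        a += [random.gauss(0, 1) for _ in range(step)]
        b += [random.gauss(0, 1) for _ in range(step)]

sims = 2000
rate = sum(optional_stopping() for _ in range(sims)) / sims
print(f"false-positive rate with peeking: {rate:.0%}")
```

Seven looks at the data roughly double the nominal 5% rate, even though every single test is "valid" on its own.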
Nothing helped to get a significant result? Well, that happens to the best of us.
Don't become desperate, and don't dwell too long on why that specific study failed.
Now it is important to show even more productivity: go for the next conceptual replication (i.e., go back to Step 1 and collect a new sample, with a new manipulation and a new DV).
Pro-Tip: Never do direct replications (a.k.a. "stupid method repetitions")!
- First, this is only for second-stringers without creative potential.
- Second, direct replications lock the "Now: p-hack" tab! Oh no! With direct replications, you are forced to use the same DV as before, and you cannot choose anymore from several DVs. If you controlled for age in the first study, you would have to control for age in the direct replication as well, etc. All this compulsive, anal-retentive stuff just limits your creative potential.
- Instead of conducting a direct replication (which, at n = 20, wouldn't take too long, right?), we suggest writing long rebuttals about the merits of conceptual replication. You can point to all the successful conceptual replications you have collected in your study stack! (See Step 3.)
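The outlier trick from the list in Step 2 can be sketched the same way: simulate a study with a true effect of zero, then repeatedly drop whichever extreme data point most opposes the "desired" direction, re-testing after each exclusion. Everything here (stand-alone Python, n = 20 per group, at most three exclusions, a z-test with known sd) is an assumption of the sketch, not of the app:

```python
import math
import random

random.seed(5)  # arbitrary seed, for reproducibility only

def p_value(a, b):
    """Two-sided z-test (true effect 0, sd known to be 1)."""
    na, nb = len(a), len(b)
    z = (sum(a) / na - sum(b) / nb) / math.sqrt(1 / na + 1 / nb)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def hack_outliers(n=20, max_drop=3):
    """Re-test after each exclusion; return True once p < .05."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    for _ in range(max_drop):
        if p_value(a, b) < .05:
            return True
        # drop whichever extreme point most opposes an a > b difference
        if abs(min(a)) > abs(max(b)):
            a.remove(min(a))   # lowest score in group a "hurts" the effect
        else:
            b.remove(max(b))   # highest score in group b "hurts" the effect
    return p_value(a, b) < .05

sims = 2000
rate = sum(hack_outliers() for _ in range(sims)) / sims
print(f"false-positive rate after 'cleaning' outliers: {rate:.0%}")
```

Each dropped point nudges the group difference in the wished-for direction, so a few clicks in the plot can move a merely "promising" p-value across the line.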
Step 3: Harvest your accomplishments
You found a significant effect? We congratulate you on your creativity and productivity.
On the right panel, you can harvest your successes. Simply click on the button next to each DV, and the current study is saved to your stack, awaiting some additional conceptual replications that show the robustness of the effect.
But the challenge continues. Many journals require multiple studies - but that should be no problem for you. Go back to Step 1, craft a new sample with a significant p-value, and when you have it, save it to your stack.
Four to six studies should make a compelling case for your subtle, counterintuitive, and shocking effects. Honor to whom honor is due: find the best outlet for your achievements!
Step 4: The nasty part
Those were the good times when we could simply stop here. But some nasty researchers have developed tools that are quite powerful at detecting p-hacking. If you click on the "Send to p-checker" button below your study stack on the right, the saved test statistics are transferred to the p-checker app. Let's see whether it can detect your p-hacking!
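What does such a detector look for? One common idea (behind p-curve-style tests) is the shape of the distribution of significant p-values: a real effect piles them up near zero, while pure noise spreads them evenly, so a stack of hacked null results leaves a suspicious share just under .05. Here is a stand-alone sketch under assumed conditions (Python, z-tests, n = 20 per group), not the p-checker's actual algorithm:

```python
import math
import random

random.seed(4)  # arbitrary seed, for reproducibility only

def p_value(d, n=20):
    """One two-group study with true effect size d (sd = 1), z-test."""
    a = [random.gauss(d, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def high_p_share(d, sims=10000):
    """Among significant results, the share in the upper half of the
    significant range (.025 < p < .05). Uniform p-values under the
    null put about half of the significant results there; a real
    effect pushes most of them toward zero instead."""
    sig = [p for p in (p_value(d) for _ in range(sims)) if p < .05]
    return sum(p > .025 for p in sig) / len(sig)

print(f"d = 0.8: {high_p_share(0.8):.2f} of significant p-values are 'high'")
print(f"d = 0.0: {high_p_share(0.0):.2f} of significant p-values are 'high'")
```

A study stack whose significant p-values cluster just below .05 therefore looks exactly like hacked noise, which is what makes these detectors so nasty.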
Bakker, M., van Dijk, A., & Wicherts, J. M. (2012). The rules of the game called psychological science. Perspectives on Psychological Science, 7, 543–554. doi:10.1177/1745691612459060