
Exposing a fake dishonesty study makes me proud to be a behavioral scientist

The story has a lot to recommend it: psychologist Dan Ariely, author of a bestselling book on the behavioral science of dishonesty, had a study retracted because the data behind it was faked. No wonder the world’s media picked it up. BuzzFeed declared it “the latest blow to the esoteric field of behavioral economics.” The psychologist Stuart Ritchie wrote about the affair under the headline “Never trust a scientist”.

Dan Ariely admitted that the data on which his study relied was fabricated and that he should have checked it.
Yale Zur, CC BY

I am concerned about these interpretations, and not just because I teach on a behavioral science master’s program. I worry because headlines like these stir up anti-science sentiment at a time when trust in experts is low, when thoughtful people parrot the claim that we live in a “post-truth” world, and when distrust of science can be deadly.

But above all, I worry about these interpretations because I draw the opposite conclusion from this story. To my mind, the lesson is that the scientific process worked remarkably well.

Casting Doubt on the Science

An important and overlooked detail is that the scientific process revealed years ago that the paper’s results did not hold up. Using data provided by an insurance company, Ariely’s study had claimed that people report more honestly if they sign a declaration of truthfulness at the beginning of a document rather than at the end. The method was adopted by the IRS, the US tax collection agency, and by at least one major insurance company.

While no one had raised concerns about deliberate fraud, several research teams had reported failed attempts to replicate the initial studies. Replication matters: because science has its roots in probability, seeing the same result on two independent occasions greatly reduces the likelihood that the result is a fluke.
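To see why, consider a back-of-the-envelope sketch. The 5% figure below is the conventional significance threshold; the rest of the scenario is my own illustration, not something from the studies discussed here:

```python
# A minimal sketch of why replication helps, assuming independent studies
# and the conventional 5% false-positive rate (illustrative numbers).
alpha = 0.05  # chance a single study "finds" an effect that isn't there

p_one_study = alpha         # one spurious significant result
p_two_studies = alpha ** 2  # the same fluke appearing in both studies

print(f"False positive in one study:        {p_one_study:.4f}")    # 0.0500
print(f"Same false positive in two studies: {p_two_studies:.4f}")  # 0.0025
```

A fluke that slips through one study has only a one-in-400 chance of slipping through an independent replication as well.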

In 2020, Ariely and his co-authors published a paper in which they themselves attempted, and failed, to replicate the initial results. At that time it had not yet emerged that the initial data was fake. The authors concluded that the original result had been a fluke and titled the follow-up paper: “Signing at the beginning versus at the end does not decrease dishonesty.”

Also notable is that the failed replications were published in one of the top general science journals. It is a recent development that scientists will devote their time to replication studies, and that top journals will devote precious column inches to publishing them. It follows a series of statistical studies that cast doubt on the rigor of published science.


The first was a provocative data simulation study which suggested that more than half of all published scientific findings are false. This conclusion follows from three features of scientific research (a simulation sketch follows the list):

1. Some results are flukes.
2. New results are being produced all the time.
3. Unexpected and compelling results are more likely to be published.
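Here is a minimal simulation of how those three features interact. The prior, power and threshold values are my own illustrative assumptions, not figures from the study:

```python
# A toy model of a scientific field where only significant results get
# published. All parameter values are illustrative assumptions.
import random

random.seed(1)

N = 100_000        # hypotheses tested across the field
prior_true = 0.10  # share of tested ideas that are actually true
power = 0.50       # chance a true effect reaches significance
alpha = 0.05       # chance a null effect reaches significance (a fluke)

published_true = published_false = 0
for _ in range(N):
    is_true = random.random() < prior_true
    significant = random.random() < (power if is_true else alpha)
    if significant:  # journals favor significant results
        if is_true:
            published_true += 1
        else:
            published_false += 1

share_false = published_false / (published_true + published_false)
print(f"Share of published findings that are false: {share_false:.0%}")
```

With these numbers, nearly half of everything published is false, even though no individual researcher did anything wrong.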

Then there was the Many Labs replication project, which found that more than half of the results published in top psychology journals could not be replicated.

Exposing False Results

Some practical contributions have come from the behavioral sciences, a family of disciplines that study human behavior and interaction and work at the intersection of statistics, economics and psychology. One of those insights was that scientists could publish false results without even knowing it.

To understand this, you first need to know that the scientific community treats a result as evidence only if it passes a threshold. That threshold is measured with a p-value, where p stands for probability. Lower p-values indicate more reliable results. A result counts as reliable evidence, or in science parlance is statistically significant, if its p-value falls below some threshold, for example p < 0.05.
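To make that concrete, here is what checking the threshold looks like in practice; the experiment and numbers below are entirely made up for illustration:

```python
# A minimal significance test on hypothetical data (my own illustration).
from scipy import stats

# Hypothetical honesty scores under two experimental conditions.
sign_first = [7.1, 6.8, 7.4, 6.9, 7.2, 7.0, 7.3, 6.7]
sign_last = [6.2, 6.5, 6.0, 6.4, 6.1, 6.6, 6.3, 5.9]

t_stat, p_value = stats.ttest_ind(sign_first, sign_last)
print(f"p = {p_value:.4f}")
print("statistically significant" if p_value < 0.05 else "not significant")
```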

Intentionally or otherwise, researchers increase their chances of obtaining statistically significant results by engaging in questionable research practices. In a survey published in 2012, most psychologists admitted to testing their theories by measuring more than one outcome and then reporting only the outcome that achieved statistical significance. They presumably admitted to this behavior because they did not recognize that it increases the likelihood of drawing incorrect conclusions, as the sketch below shows.
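The arithmetic behind that inflation is straightforward; the outcome counts below are my own illustrative choices:

```python
# How measuring several outcomes and reporting any significant one
# inflates false positives (outcome counts are illustrative assumptions).
alpha = 0.05  # the conventional significance threshold
for k in (1, 2, 5, 10):  # number of outcomes measured
    p_any_fluke = 1 - (1 - alpha) ** k  # chance at least one is a fluke
    print(f"{k:2d} outcomes -> {p_any_fluke:.0%} chance of a spurious result")
```

Measure ten outcomes and report whichever one “works”, and the nominal 5% error rate has quietly become 40%.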

Uri Simonsohn, Leif Nelson and Joe Simmons, a trio of behavioral scientists regularly described as “data detectives”, devised a test to determine whether findings are likely to have been derived from questionable research practices. The test examines whether the evidence supporting a claim is suspiciously clustered just below the threshold of statistical significance.
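The sketch below illustrates why that clustering arises, using one well-known questionable practice, optional stopping; it is my own simulation of the general idea, not the detectives’ actual test:

```python
# Simulating "optional stopping": keep adding participants and testing,
# and stop the moment p < 0.05. The data is pure noise, so every
# "significant" result is a fluke. (My illustration, not the real test.)
import math
import random

random.seed(1)

def two_sided_p(z):
    """Two-sided p-value for a z statistic, via the normal CDF."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def optional_stopping_p(max_n=200, batch=10):
    """Add participants in batches; stop as soon as p < 0.05."""
    data = []
    while len(data) < max_n:
        data += [random.gauss(0, 1) for _ in range(batch)]  # no real effect
        mean = sum(data) / len(data)
        z = mean * math.sqrt(len(data))  # z-test with known sd = 1
        p = two_sided_p(z)
        if p < 0.05:
            return p  # stopping at the first crossing bunches p near 0.05
    return None  # this run never reached significance

ps = [p for p in (optional_stopping_p() for _ in range(5_000)) if p is not None]
near = sum(0.04 < p < 0.05 for p in ps) / len(ps)
print(f"{len(ps)} 'significant' studies; {near:.0%} of p-values in (0.04, 0.05)")
```

For honestly run studies of a true null with a fixed sample size, only about 20% of significant p-values would land in that narrow band; a pile-up well above that is the telltale signature the test looks for.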

It was this test that debunked the idea of “power posing”: the widely publicized claim that you can perform better in stressful situations if you adopt an assertive body posture, such as hands on hips.

Now the three data detectives have done it again. It was on their blog that the stark and sensational facts behind Ariely’s dishonesty study were exposed. Contrary to BuzzFeed’s claim that the case is a blow to behavioral economics, it actually shows how behavioral science has equipped us to root out spurious results. Uncovering that bad apple, and the fascinating techniques employed to do it, is a genuine win for behavioral scientists.

This article is republished from The Conversation. Read the original article.
