Questionable science/validity of the Dunning-Kruger Effect: Implications for competency self-assessment in diving.


A: I find it enormously delightful that a bunch of people on the internet who (presumably) have done no research whatsoever as trained and experienced social psychologists are working together to determine the validity of a study in social psychology about people who don't know what they're talking about.

B: I remember hearing an interview with David Dunning some time ago where the interviewer asked, "How confident are you in the results you produced?"
He answered, "Well... I might be completely wrong, but there's no way for me to know it."
 
That is not the issue here
Of course it is. You are trying to discredit the data display, but you misunderstand what is being displayed. The whole purpose of the 45-degree line is to show what unbiased results would look like. The data don't look like that, therefore they are biased. The issue is why they are biased.

Much more convincing as a negative take on D-K is the tendency to assume one is average, or perhaps slightly better than average. So those with low ability estimate higher, toward the mean; those with high ability estimate lower, toward the mean.
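As a concrete illustration of the 45-degree-line point and the pull toward the mean (a toy simulation of my own, not data or code from any study): if self-assessments were unbiased, estimated percentile plotted against actual percentile would scatter around the 45-degree line, whereas a simple "everyone guesses toward the average" model produces exactly the systematic deviation being described.

```python
# Toy simulation (illustrative only, not data from any study).
import numpy as np

rng = np.random.default_rng(0)
n = 1000
actual = rng.uniform(0, 100, n)   # each person's actual percentile

# Assumed model: everyone's estimate is pulled toward the average (50),
# keeping only 40% of their true distance from it, plus personal noise.
shrink = 0.4
estimate = np.clip(50 + shrink * (actual - 50) + rng.normal(0, 10, n), 0, 100)

# Compare estimates to the 45-degree line, quartile by quartile.
quartile = np.digitize(actual, [25, 50, 75])
for q in range(4):
    m = quartile == q
    print(f"Q{q + 1}: actual mean {actual[m].mean():5.1f}, "
          f"estimate mean {estimate[m].mean():5.1f}")
# Bottom quartile lands well above the 45-degree line, top quartile below:
# the deviation-from-the-line pattern described above.
```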
 
A: I find it enormously delightful that a bunch of people on the internet who (presumably) have done no research whatsoever as trained and experienced social psychologists are working together to determine the validity of a study in social psychology about people who don't know what they're talking about.
How confident are you about your presumption?
 
B: I remember hearing an interview with David Dunning some time ago where the interviewer asked, "How confident are you in the results you produced?"
He answered, "Well... I might be completely wrong, but there's no way for me to know it."
:rofl3:
 
How confident are you about your presumption?
I'm gonna go with about a 9 on a scale of 1-10.
1 being "I know my own name." and 10 being "I don't really care."
 
As a PhD and professor of social psychology… I think I can safely say that “Dunning-Kruger has been debunked” is a vast oversimplification.

What the D-K effect is and why it “exists” has been hotly debated basically since the day Dave Dunning published the original paper. (I’m not going to touch on all the inaccurate takes in popular media that don’t accurately reflect the effect or theorized mechanisms). Nothing new here, another round of debate. Which is great - a healthy science is an evolving science.

Very short gist: The D-K effect replicates robustly. I can and do replicate it routinely in my own classroom, using exam scores, the ability to shoot paper balls into baskets, etc.

People’s judgments of their ability correlate poorly with actual ability, and due to regression to the mean, bad performers underestimate how bad they really are. Why exactly this pattern is found can be and has been debated. But the most important claim (theoretically) is that the error in self-knowledge is larger among non-experts than experts. That is, low-skill error > high-skill error. It’s not that people don’t know their own ability. It’s that they know their ability *with error*, and the error is greater among the unskilled.

That’s it. That’s the Dunning-Kruger effect.
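Here is a toy sketch of that claim as just stated (an illustrative model, not the 1999 paper's data or analysis, and the specific numbers are arbitrary): estimates track actual ability but with noise, and the noise is assumed to be larger at the low end.

```python
# Toy sketch of "known with error, larger error among the unskilled"
# (illustrative model; the specific numbers are arbitrary).
import numpy as np

rng = np.random.default_rng(1)
n = 2000
actual = rng.uniform(0, 100, n)

# Assumed noise model: estimate = actual + noise, with the noise standard
# deviation falling from ~30 percentile points at the bottom to ~10 at the top.
noise_sd = 30 - 20 * (actual / 100)
estimate = np.clip(actual + rng.normal(0, 1, n) * noise_sd, 0, 100)

quartile = np.digitize(actual, [25, 50, 75])
for q in range(4):
    m = quartile == q
    print(f"Q{q + 1}: mean |estimate - actual| = "
          f"{np.abs(estimate[m] - actual[m]).mean():4.1f}")
# The mean absolute error shrinks from the bottom quartile to the top:
# low-skill error > high-skill error, the pattern described above.
```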

This critique does not address that. The random numbers used in the linked “debunking” blog post do not duplicate that effect. For a longer rebuttal, see:
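For contrast, here is a sketch of a purely random null of the kind being pushed back on (my own illustration; whether this matches the linked blog post's exact simulation is an assumption): self-estimates drawn independently of actual skill.

```python
# Purely random null (illustration of that kind of simulation, not the
# blog post's actual code): estimates have no relationship to skill.
import numpy as np

rng = np.random.default_rng(2)
n = 2000
actual = rng.uniform(0, 100, n)
estimate = rng.uniform(0, 100, n)   # drawn independently of actual skill

quartile = np.digitize(actual, [25, 50, 75])
for q in range(4):
    m = quartile == q
    signed = (estimate[m] - actual[m]).mean()
    absolute = np.abs(estimate[m] - actual[m]).mean()
    print(f"Q{q + 1}: mean signed error {signed:+6.1f}, "
          f"mean |error| {absolute:5.1f}")
# Signed errors still show the familiar crossing pattern (bottom quartile
# overestimates, top quartile underestimates), but the absolute error is
# symmetric: Q1 and Q4 come out about the same. The asymmetry described
# above (low-skill error > high-skill error) does not appear, which is the
# sense in which random numbers fail to duplicate the effect.
```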

 

Thanks for that, @Rilelen - another interesting viewpoint to take in.
 
