I recently heard the argument from the NRA that there are “2.5 million defensive gun uses (DGU) every year.” They back up this number with a study done by Kleck and Gertz.

The number seemed quite large (remember, we have a population of about 320 million in the US – 240 million of which are adults), so I wasn’t quick to just accept it. Curiosity got the better of me and I started reading the actual paper and articles surrounding it.

(Please note that this rabbit hole goes down deeper than I had originally thought. I had to dig myself out of it after spending a little bit too much time reading on this topic. That being said, please forgive me if this post seems rushed and not well thought-out.)

The Paper

Let me try to summarize how they got to the number of 2.5 million DGUs per year.

Kleck and Gertz surveyed 4,977 people by an “anonymous random digit-dial telephone survey” (160 – 161) and found that 66 had used a gun defensively (184). The criteria (162) for a DGU is defined as follows:

  1. The incident involved defensive action against a human rather than an animal but not in connection with police, military, or security guard duties;
  2. the incident involved actual contact with a person, rather than merely investigating suspicious circumstances, etc;
  3. the defender could state a specific crime which he thought was being committed at the time of the incident;
  4. the gun was actually used in some way — at a minimum it had to be used as part of a threat against a person, either by verbally referring to the gun (e.g., “get away–I’ve got a gun”) or by pointing it to an adversary

So, of the roughly 5,000 people surveyed, about 1.3% (66/4,977) reported a DGU. The researchers then extrapolated that rate to the U.S. adult population (190 million), resulting in an estimate of 2.5 million defensive gun uses per year.
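The extrapolation itself is simple arithmetic. Here is a sketch using the figures above (sample size, reported DGUs, and the 190 million adult population the study used):

```python
# Illustrative sketch of the Kleck-Gertz extrapolation (numbers from the paper).
sample_size = 4977              # survey respondents
dgu_reports = 66                # respondents reporting a defensive gun use
adult_population = 190_000_000  # U.S. adult population figure used in the study

dgu_rate = dgu_reports / sample_size          # ~1.33%
estimated_dgus = dgu_rate * adult_population  # ~2.5 million

print(f"Reported DGU rate: {dgu_rate:.4f}")
print(f"Extrapolated annual DGUs: {estimated_dgus:,.0f}")
```

Note how the entire 2.5 million figure rests on just 66 "yes" answers; any noise in those 66 gets multiplied by a factor of roughly 38,000.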

My Response

From my perspective, I found a couple of main problems with the study.

Human Psychology

I am no expert in human psychology (though my wife is, and I’d like to think some of her expertise is rubbing off on me… “Not true,” she says). Memory is quite fickle, and surveys that rely on recall have to control for it strictly.

Here, I expect to see what is known as social desirability bias: people report behavior that makes them look good (whether to seem heroic or to justify having spent money on a firearm). This bias would push respondents toward over-reporting their DGUs. The same effect shows up in surveys where men consistently report more sexual partners than women do.

Also, we have to keep in mind the telescoping effect. This is when a person recalls an event as falling inside the question’s time frame when it actually happened outside it (e.g., reporting an incident from 8 months ago as something that happened within the last 6 months). A good method to control for this can be found in the NCVS, which properly addresses the concern by doing multiple follow-up interviews, each restricted to a six-month window, and then correcting for duplicates. Keep in mind that using this strategy, the NCVS found that telescoping alone likely produces at least a 30 percent increase in false positives.
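As a rough illustration (my own back-of-the-envelope arithmetic, not a figure from either study), if that 30 percent inflation applied to the Kleck–Gertz estimate, correcting for it alone would remove more than half a million DGUs:

```python
# Rough illustration (my arithmetic, not from any study): what the 2.5M
# estimate would look like if telescoping inflated reports by 30%.
raw_estimate = 2_500_000
telescoping_inflation = 0.30  # assumed: telescoped reports add 30% on top

corrected = raw_estimate / (1 + telescoping_inflation)
print(f"Telescoping-corrected estimate: {corrected:,.0f}")  # ~1.9 million
```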


Statistics

The second problem I see is statistical. When we take a small sample of low-probability events and extrapolate it to a much bigger population, we run into problems of overestimation. This effect follows from Bayes’ theorem and some of its consequences. This paper goes into depth by providing some solid examples.
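The core of the base-rate problem can be shown with assumed numbers (mine, purely for illustration): when the true event is rare, even a small false-positive rate among the many "no" respondents dominates the observed rate.

```python
# Sketch with assumed numbers: how a small false-positive rate swamps a rare
# true rate. Suppose the true annual DGU rate is 0.2% and just 1% of non-DGU
# respondents answer "yes" in error.
true_rate = 0.002      # assumed true prevalence of DGUs
false_positive = 0.01  # assumed misreporting rate among non-DGU respondents
false_negative = 0.0   # assume every real DGU is reported, for simplicity

observed = true_rate * (1 - false_negative) + (1 - true_rate) * false_positive
inflation = observed / true_rate
print(f"Observed rate: {observed:.4f}")    # ~1.2%, close to the survey's 1.3%
print(f"Overestimate factor: {inflation:.1f}x")
```

Under these assumed rates, the survey would observe roughly six times the true rate, and the extrapolated national estimate would be inflated by the same factor.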

We also see a bit of sampling error here. Several practices in this paper could have been handled better. First, the result is estimated over individuals, but the sample was randomized over households. Second, data were collected only from calls answered by a person, and the interviewers then specifically asked for the male head of the household (which, combined with social desirability bias, could have significant effects); tampering with the pre-selected sample like this can drastically distort the overall experiment. Third, the authors mention that they weighted the data, but I had a lot of trouble seeing how. They claim the weights adjust the sample to reflect the larger population, yet their weighted sample is only 8.9% black, while 1992 Census data indicate that 12.5% of individuals were black. There are more small statistical sampling issues here and there, but suffice it to say, this study lacks the rigor that should be found in a study cited so often.
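For the weighting point, a minimal post-stratification sketch (my own example, not the paper's actual method) shows what proper adjustment would look like: reweight respondents so the sample's composition matches the Census shares cited above.

```python
# Minimal post-stratification sketch (my own example, not the paper's method):
# reweight respondents so the sample's racial composition matches Census shares.
sample_shares = {"black": 0.089, "other": 0.911}  # shares implied by the paper
census_shares = {"black": 0.125, "other": 0.875}  # 1992 Census figures cited above

weights = {g: census_shares[g] / sample_shares[g] for g in sample_shares}
print(weights)  # black respondents weighted up (~1.40), others down (~0.96)
```

If the published weights had actually done this, the weighted sample would match the Census shares by construction; the fact that it doesn't is what makes the weighting claim hard to verify.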

The Consensus

The general consensus (from liberal and nonpartisan outlets alike) seems relatively clear: the 2.5 million number does not hold up under scrutiny.

Conservative media point to the “decades of academic work” behind the number as evidence of its infallibility. When confronted with conflicting studies, most outlets cast doubt on those studies by calling the source the “most radical gun control activist group in the country.” However, it is important to note that most (but not all) responses from conservative media do not delve into the science and can often be summarized as name-calling.

If you want to dig in more, you can read more about Hemenway’s original response, Kleck’s response to Hemenway, and then Hemenway’s response to Kleck.

