
  • How to interpret insignificant regression results?

    1. Scenario:
    (1) As regards the impact of democracy on terrorism, there are two contrasting viewpoints. One group of scholars argues that democracy invites terrorist attacks because an open society creates far more opportunities (# positive effect). The other suggests that democratic institutions release pent-up grievances, which in turn reduce the incentives for terrorists (# negative effect). (2) We conduct a panel data analysis testing the above two hypotheses. Our findings are as follows: (3) the IV coefficient (democracy, an ordered or dummy variable) is insignificant; (4) the IV coefficient is both insignificant and positive.
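    For concreteness, a minimal sketch of the kind of panel specification I have in mind is below; the variable names (attacks, democracy, gdp, population, country, year) are purely hypothetical stand-ins:
    Code:
    xtset country year
    xtreg attacks democracy gdp population, fe vce(cluster country)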

    2. Questions:
    #1: Some authors interpret (3) as follows: the positive effect and the negative effect cancel each other out, so the predictor loses its statistical significance. Is this an appropriate interpretation? A different way to ask the question is whether non-significance carries any substantive meaning.
    #2: Some other authors interpret (4) as follows: the positive effect overwhelms the negative effect. I was taught to ignore insignificant covariates, but more and more political science research is beginning to say more about insignificant control variables. In the above case, given that the predictor is insignificant, does it make sense to say which effect is larger than the other?

    Thanks a lot!
    Last edited by Raymon Lucas; 24 Jul 2021, 01:39.

  • #2
    The correct way to interpret your results is that you cannot say whether the effect is positive or negative.



    • #3
      Raymon:
      provided that the statistical procedure is the right one, results are what they are.
      In my opinion, statistically significant coefficients are as informative as their non-significant counterparts, and there is no gain in splitting the world in two using a p-value.
      Kind regards,
      Carlo
      (StataNow 18.5)



      • #4
        Without separate measures of opportunities for terror and opportunities for legitimate participation, you won't be able to tell whether either or both are significant. They might both be highly significant, but if they are correlated, the sum might be insignificant. Insignificant means that the statistical result is compatible with a positive or a negative true value. It does not mean that the true value is zero.
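        A toy simulation makes the point concrete; the variable names and effect sizes below are made up purely for illustration and are not meant to describe your data:
        Code:
        clear
        set obs 1000
        set seed 12345
        generate opportunity = rnormal()                      // channel that raises attacks
        generate participation = rnormal()                    // channel that lowers attacks
        generate democracy = opportunity + participation      // only the combined index is observed
        generate attacks = 0.5*opportunity - 0.5*participation + rnormal()
        regress attacks democracy                             // combined coefficient is near zero and insignificant
        regress attacks opportunity participation             // both channels are clearly significant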



        • #5
          Your first explanation might be true, but you don't know whether it is or not. In a perfect world, you'd be able to come up with some grouping variable and then add an interaction showing that the effect was positive in one group and negative in the other. But you'd have to have some theory for why this would be. The simplest explanation is just that, counter to what some expected, the variable does not have much effect.
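          If you did have such a grouping variable, the interaction could be sketched roughly as follows (regime_group, the controls, and country are hypothetical names, not a recommendation for your actual specification):
          Code:
          regress attacks i.regime_group##c.democracy gdp population, vce(cluster country)
          margins regime_group, dydx(democracy)    // effect of democracy within each group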

          As it stands, I might say that the variable has little or no effect. Or if it does, you can't identify it, because for some groups the effect might be positive while for others it might be negative.

          Also, what is the bivariate correlation between the variables? Suppose it is significantly positive, but becomes small and insignificant once other variables are added. This might suggest that the effects of the variable are indirect -- A affects B and B affects C.

          To put it another way -- an insignificant direct effect does not necessarily mean the variable is irrelevant. It may just mean that it has indirect rather than direct effects. In effect, you may be explaining why and how A affects C -- it does so by affecting B which in turn affects C.
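          One rough way to look for this, again with hypothetical variable names, is to compare the bivariate relationship with the relationship after the controls are added:
          Code:
          pwcorr attacks democracy, sig
          regress attacks democracy
          regress attacks democracy gdp population    // does the coefficient shrink toward zero?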

          You can offer possible explanations for your non-results -- but you can't assert that the explanations are true without actually testing them.

          Also, to follow up on what Carlo said -- how insignificant is it? If it were significant at, say, the .09 level, I wouldn't say it must have no effect.
          -------------------------------------------
          Richard Williams, Notre Dame Dept of Sociology
          Stata Version: 17.0 MP (2 processor)

          EMAIL: [email protected]
          WWW: https://www3.nd.edu/~rwilliam



          • #6
            I thank Joro and Carlo a lot for their quick replies to my question. According to Joro, it seems meaningless to make a substantive interpretation of insignificant regression results. I also buy Carlo's argument that both significant and insignificant findings are informative. However, in my discipline, people tend to run regressions in order to find significant results in support of their hypotheses. Once a result is insignificant, we tend to think that our efforts are futile and that no professional reviewer would accept such a result. This is rather unscientific, but it is indeed a common practice in some social sciences.



            • #7
              Thank you, Richard! Your answers help a lot! I totally agree that more work needs to be done, both theoretically and statistically!



              • #8
                Raymon:
                while it's true that (too) often reviewers behave the way you reported, it is also true that, in a paper, you can explain (or propose some hypotheses about) the non-significance of your results.
                For instance, if your sample size is too small, you will not get any evidence of a given effect, even though the effect exists in the population from which the sample at hand has been drawn.
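                As a rough illustration (with made-up standardized effect sizes, not ones taken from your literature), a power calculation shows how fast the required sample size grows as the effect to be detected shrinks:
                Code:
                power twomeans 0 0.5, sd(1) power(0.8)    // moderate effect
                power twomeans 0 0.1, sd(1) power(0.8)    // small effect requires a far larger sample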
                As far as this issue is concerned, I still remember one of the most famous occasional (but in fact pretty frequent) statistical teaching notes by the deeply missed Doug Altman and his friend and colleague Martin Bland: https://pubmed.ncbi.nlm.nih.gov/7647644/, which is freely downloadable from PubMed.
                Kind regards,
                Carlo
                (StataNow 18.5)



                • #9
                  I suggest to my students (and everyone else) that they offer both hypotheses and counter-hypotheses, i.e. reasons the original hypotheses may be wrong. Then, whatever happens, the results will be interesting. That is, showing that something does not have an effect when many would expect it to is an interesting and important finding.

                  The emphasis on statistically significant results is unfortunate. There may be 99 studies that found no effect and one that did, and the one that did is the only one that gets published. That one paper may have just been capitalizing on chance.
                  -------------------------------------------
                  Richard Williams, Notre Dame Dept of Sociology
                  Stata Version: 17.0 MP (2 processor)

                  EMAIL: [email protected]
                  WWW: https://www3.nd.edu/~rwilliam



                  • #10
                    There are two things, Raymon.

                    1) The point estimate, which is your best guess of what the effect is. It is your best estimate, but it is still an estimate, that is, subject to estimation error.

                    2) The interval estimate, which says, with a certain level of confidence, where the true effect could plausibly lie. The p-value and the confidence interval carry the same information but present it in alternative ways. I will talk about the confidence interval because it is more convenient here, but note that from the estimated b and Var(b) I can derive both the p-value and the confidence interval. Similarly, if you give me the confidence interval, I can get the p-value, and if you give me the p-value, I can construct the confidence interval.

                    "The statement the coefficient is insignificant at the 5% level" is equivalent to the statement that your 95% confidence interval includes possible values which are both negative and positive. Therefore my answer to your question was simple: You do not know whether the true effect is positive or negative, because at the 95% confidence, the interval estimate includes effect which are both positive and negative.

                    In my view the most unscientific of all unscientific practices is interpreting signs without talking about significance. In other words, spinning stories on the basis that your best guess happened to have a positive sign.

                    I would not have given you such a simple and straightforward answer if you had a notion of what a big effect is and what a small effect is. From your original post, it became clear to me that either you, or maybe the whole literature, have no notion of what a big and what a small effect of democracy on terrorism would be. And I kind of see why you have no notion of sizes: if your dependent variable is, say, the number of deadly terrorist attacks, what is a small number? I would say that anything that is not 0 is a big effect.

                    Anyway, if you had a notion of what a big effect of democracy on terrorism is, and you happened to estimate a big but insignificant effect, you can and should interpret this effect. Big effects are potentially interesting.







                    • #11
                      Hey Carlo, thank you for the recommendation. I have downloaded Altman & Bland (1995) and will study it.

                      There are occasions on which political scientists propose hypotheses about non-significance. It happens most often when we attempt to falsify a well-known theory. Say, Joseph Nye argues that soft power improves a state's foreign relations, but someone counter-argues that soft power does not make a state's international life easier. In this case, a non-significance hypothesis comes naturally. Nonetheless, such cases are indeed rare.



                      • #12
                        Hey Rich, you have indeed seen through the unfortunate side of quantitative social science. As a consequence, I tend to test hypotheses as soon as an idea comes to mind. I embark upon the research only if I find "solid" and "consistent" results in support of my idea. Otherwise, I give it up. Do qualitative social scientists have an easier time? The answer is absolutely NO. It is not uncommon for a researcher to spend weeks finding a piece of evidence that fills a small gap in his narrative. Nothing is easy in the human world, so it is most important that we love our job.

                        Yes, counter-intuitive findings are always delightful. However, they are not always encouraged, because such findings very often challenge some canonical work. Science is conservative.



                        • #13
                          Hey Joro, thank you for the further clarification! In my example, there can be two DVs: one is the onset of terrorist attacks, the other the (log-transformed) number of victims. In the latter case, your suggestions about big vs. small effects work best!

