
  • Storing estimates from margins

    Hi all,

    I have a rather simple question. I'm trying to generate predicted outcomes for two variables in a panel dataset.

    This is the full code of what I am doing:


    xtregar D.gdp cpi u Output dummy1 dummy2 c.indicator1##c.indicator1, fe

    // get the x where the minimum occurs
    local x = -_b[indicator1]/(2*_b[indicator1#indicator1])

    // get the y for that minimum
    qui margins, at(indicator1 = `x' cpi=0 u=2 Output= 7.54e+16 dummy1=0 dummy2=0 indicator1=-4)
    // the predictions are stored in the matrix r(b)
    // el(r(b),1,1) extracts the cell 1,1 from the matrix r(b)
    local y = el(r(b),1,1)

    // labels for the plot point (needed by `xlab' and `ylab' below)
    local ylab : display %9.0fc `y'
    local ylab = strtrim("`ylab'")
    local xlab : display %9.0fc `x'
    local xlab = strtrim("`xlab'")

    // prepare for our plot
    nlcom -_b[indicator1]/(2*_b[indicator1#indicator1])
    qui margins, at(indicator1=(-16(4)29) cpi=0 u=2 Output= 7.54e+16 dummy1=0 dummy2=0 indicator1=-4)

    // with scatteri we can add a point and a label to our graph
    marginsplot, noci plotopts(msymbol(i)) legend(off) ///
    addplot(scatteri `y' `x' (12) "(`xlab'; `ylab')") ///
    ylab(,format(%9.3fc) angle(0)) ytitle("predicted gdp") title("Predicted Margins pair")
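
    For context, the turning-point formula in the code comes from the fitted quadratic: with $\hat{y} = \beta_0 + \beta_1 x + \beta_2 x^2$, setting the derivative to zero gives

    $$\frac{d\hat{y}}{dx} = \beta_1 + 2\beta_2 x = 0 \quad\Longrightarrow\quad x^{*} = -\frac{\beta_1}{2\beta_2},$$

    which is what the local x computes (a minimum when $\beta_2 > 0$).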


    My variables of interest are gdp and indicator1.

    What I can't figure out is how to store those results for the two variables, gdp and indicator1, in the dataset itself, the way the -predict- command does.

    Is this doable? What am I not getting here?

    Thanks in advance.

    Giorgio
    Last edited by Giorgio Di Stefano; 10 Mar 2022, 13:58.

  • #2
    Code:
    help svmat
    explains how to convert a matrix to variables. So if the matrix holding the results is stored in r(b):

    Code:
    mat b= r(b)
    svmat b
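
    As a small aside, -svmat- also accepts a storage type and a names() option, so you can keep full precision and give the new variables a recognizable stub ("marg" below is just a placeholder name):

    Code:
    mat b = r(b)
    svmat double b, names(marg)   // creates marg1, marg2, ... as double-precision variables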

    Comment


    • #3
      Originally posted by Andrew Musau View Post
      Code:
      help svmat
      explains how to convert a matrix to variables. So if the matrix holding the results is stored in r(b):

      Code:
      mat b= r(b)
      svmat b

      Thanks, Andrew, but that is not what I want. If I add those lines at the end of my do-file, I get b11 up to b121 in the first row; if I add them just before the plot is displayed, I get b11 and b12.
      Yet I only get values for the y, not for the x.
      Last edited by Giorgio Di Stefano; 11 Mar 2022, 09:43.

      Comment


      • #4
        Present a reproducible example pointing out what you need to extract.

        Comment


        • #5
          Originally posted by Andrew Musau View Post
          Present a reproducible example pointing out what you need to extract.
          Andrew Musau If I run the code below on the data below, I get a pair of margins in the plot of (8; 13.9).
          That is what I would like to extract into my data.

          The code works fine. I have looked at both margins, gen() and margins, saving(), and tried several approaches, but was not able to solve this. What is the code line, and where should I add it in my do-file, in order to get both margins into the data? Note that the code uses a quadratic: I need to get the x where the minimum occurs and the y for that minimum, and extract them into my dataset.



          xtset id ts

          qui xtregar D.gdp cpi u Output dummy1 dummy2 c.indicator1##c.indicator1, fe

          // get the x where the minimum occurs
          local x = -_b[indicator1]/(2*_b[indicator1#indicator1])


          // get the y for that minimum
          qui margins, at(indicator1 = `x' cpi=0 u=2 Output= 7.54e+16 dummy1=0 dummy2=0 indicator1=-4)
          // the predictions are stored in the matrix r(b)
          // el(r(b),1,1) extracts the cell 1,1 from the matrix r(b)
          local y = el(r(b),1,1)


          // to display those values we don't want all the decimal places
          // : display allows you to control how a number is displayed
          // it adds some spaces before the number, which strtrim() removes
          local ylab : display %9.0fc `y'
          local ylab = strtrim("`ylab'")
          local xlab : display %9.0fc `x'
          local xlab = strtrim("`xlab'")


          // prepare for our plot
          nlcom -_b[indicator1]/(2*_b[indicator1#indicator1])
          qui margins, at(indicator1=(-16(4)29) cpi=0 u=2 Output= 7.54e+16 dummy1=0 dummy2=0 indicator1=-4)

          // with scatteri we can add a point and a label to our graph
          marginsplot, noci plotopts(msymbol(i)) legend(off) ///
          addplot(scatteri `y' `x' (12) "(`xlab'; `ylab')") ///
          ylab(,format(%9.3fc) angle(0)) ytitle("predicted gdp") title("Predicted Margins pair")
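
          For what it's worth, a tentative route for the "margins, sav()" idea mentioned above: -margins- has a saving() option that writes the predictions to a separate dataset ("margfile" is just a placeholder filename):

          Code:
          qui margins, at(indicator1=(-16(4)29) cpi=0 u=2 Output= 7.54e+16 dummy1=0 dummy2=0 indicator1=-4) saving(margfile, replace)
          preserve
          use margfile, clear
          describe   // inspect the saved prediction variables
          restore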

          ---------------------- copy starting from the next line -----------------------
          Code:
          * Example generated by -dataex-. To install: ssc install dataex
          clear
          input float(id ts) str1 group str97 country float dummy1 byte dummy2 float(indicator1 indicator2) long gdp float(cpi u) double Output
          1 1990 "A" "Australia"      1 1  10.999176  10.999176   571564  7.333022  6.93 1.28877e+16
          1 1991 "A" "Australia"      0 1       12.2       12.2   565844  3.176675  9.58  1.3296e+15
          1 1992 "A" "Australia"      0 1       12.2       12.2   580278 1.0122311 10.73 1.22544e+16
          1 1993 "A" "Australia"      1 1  17.553352  17.553352   602944 1.7536534 10.87 1.18076e+16
          1 1994 "A" "Australia"      0 1      19.11      19.11   632477 1.9696348  9.72 1.34421e+16
          1 2000 "A" "Australia"      0 1     28.634     28.634   798334  4.457435  6.28 1.39202e+16
          1 2001 "A" "Australia"      1 1   26.41307   26.54496   819144 4.4071355  6.74 1.30234e+16
          1 2002 "A" "Australia"      0 1   5.536341      6.908   853048 2.9815745  6.37 1.61436e+16
          1 2003 "A" "Australia"      0 1   5.536341      6.908   876715  2.732596  5.93 2.02622e+16
          1 2004 "A" "Australia"      1 1   7.350179   8.569935   912927 2.3432553  5.39 2.35621e+16
          1 2010 "A" "Australia"      1 1   6.241802   6.241802  1080050   2.91834  5.21 3.47466e+16
          1 2011 "A" "Australia"      0 0      5.068      5.068  1110726   3.30385  5.08  4.0027e+15
          1 2012 "A" "Australia"      0 0      5.068      5.068  1152963 1.7627802  5.22 4.13369e+16
          1 2013 "A" "Australia"      1 1   7.929047   8.248285  1177554  2.449889  5.66 3.74135e+16
          1 2014 "A" "Australia"      0 .  15.081665     16.199  1207855  2.487923  6.08 3.47753e+16
          2 1990 "A" "France"         0 0        1.6        1.6  2081911 3.1942835  9.36 5.03813e+16
          2 1991 "A" "France"         0 0        1.6        1.6  2103733  3.213407  9.13 4.93537e+16
          2 1992 "A" "France"         0 1        1.6        1.6  2137379 2.3637605 10.21  5.2038e+15
          2 1993 "A" "France"         1 1   6.996447  -.8161401  2123942 2.1044629 11.32 5.80229e+16
          2 1994 "A" "France"         0 0    8.69136     -1.575  2174032 1.6555153 12.59   5.071e+14
          2 2000 "A" "France"         0 0   2.364474      2.797  2564959   1.67596 10.22 7.85938e+16
          2 2001 "A" "France"         0 0   2.364474      2.797  2615840 1.6347808  8.61 7.85391e+16
          2 2002 "A" "France"         1 1  -.7668521  -.7921649  2645544 1.9234123   8.7 8.75676e+16
          2 2003 "A" "France"         0 0 -2.9592714     -3.286  2667321  2.098472  8.31 1.03639e+17
          2 2004 "A" "France"         0 0  -2.950783     -3.286  2742800 2.1420896  8.91  1.0654e+16
          2 2010 "A" "France"         0 0      3.891      3.891  2904699 1.5311227  8.87 9.63052e+16
          2 2011 "A" "France"         0 0      3.891      3.891  2968390  2.111598  8.81 1.08307e+17
          2 2012 "A" "France"         1 1 -2.5410414 -2.1623123  2977685 1.9541953   9.4 9.84001e+16
          2 2013 "A" "France"         0 0  -6.333579     -5.608  2994846  .8637155  9.92 9.87864e+16
          2 2014 "A" "France"         0 0  -6.108374     -5.608  3023483  .5077588 10.29 1.04153e+17
          3 1990 "B" "Germany"        0 1   4.255927      4.029  3090684 2.6964715  4.89           .
          3 1991 "B" "Germany"        0 1   .5641079 -1.6926868  3245558 4.0470366  5.32 1.21946e+17
          3 1992 "B" "Germany"        0 1   .3835026     -1.973  3307973  5.056979  6.32 1.29671e+17
          3 1993 "B" "Germany"        0 1   .3835026     -1.973  3275659  4.474575  7.68 1.14872e+17
          3 1994 "B" "Germany"        1 1   .8169867 -1.2526814  3354009  2.693057  8.73 1.21188e+17
          3 2000 "B" "Germany"        0 1  1.9049623      2.806  3738235  1.440268  7.92 1.17777e+17
          3 2001 "B" "Germany"        0 1  1.9049622      2.806  3801092  1.983857  7.77 1.17577e+17
          3 2002 "B" "Germany"        1 1   1.283737  2.0394616  3793567 1.4208056  8.48 1.22359e+17
          3 2003 "B" "Germany"        0 1 -1.3254085      -1.18  3767008 1.0342277  9.78 1.50381e+17
          3 2004 "B" "Germany"        0 1 -1.3254085      -1.18  3811273 1.6657335 10.73 1.75373e+17
          3 2010 "B" "Germany"        0 1  1.1606808       -.05  4071113 1.1038091  6.97  2.0819e+16
          3 2011 "B" "Germany"        0 1  1.1606808       -.05  4230912 2.0751746  5.82 2.44939e+17
          3 2012 "B" "Germany"        0 1  1.1606808       -.05  4248618  2.008491  5.38 2.25506e+17
          3 2013 "B" "Germany"        1 1  1.1165322      .0625  4267210  1.504721  5.23 2.32201e+17
          3 2014 "B" "Germany"        0 .  .01281774      2.875  4361496  .9067979  4.98 2.37452e+17
          4 1990 "B" "Italy"          0 0   3.829138       3.66  2199474  6.456609  9.79 4.78032e+16
          4 1991 "B" "Italy"          0 1   3.777037       3.66  2233312      6.25  10.1 4.77836e+16
          4 1992 "B" "Italy"          1 1   2.615249  2.9572604  2251944   5.27059  9.33 5.49646e+16
          4 1993 "B" "Italy"          0 1  1.4437795   .7736538  2232739 4.6267347 10.24 4.39227e+16
          4 1994 "B" "Italy"          1 1  4.0116615  13.063806  2280766  4.051842 11.09 5.03757e+16
          4 2000 "B" "Italy"          0 1    5.97681   1.812674  2598506 2.5376854 10.84 7.22295e+16
          4 2001 "B" "Italy"          1 1   7.402936  4.7593465  2649212  2.785165   9.6 7.13962e+16
          4 2002 "B" "Italy"          0 0      8.534      8.534  2655940  2.465323  9.21 7.52609e+16
          4 2003 "B" "Italy"          0 0      8.534      8.534  2659622 2.6725554  8.87 8.79973e+16
          4 2004 "B" "Italy"          0 0      8.534      8.534  2697484 2.2067366  7.87 1.03212e+17
          4 2010 "B" "Italy"          0 0     -16.25          0  2680599  1.525516  8.36  1.1136e+16
          4 2011 "B" "Italy"          0 0 -14.330358          0  2699559  2.780633  8.36 1.24328e+17
          4 2012 "B" "Italy"          0 1          0          0  2619088  3.041363 10.65 1.11628e+17
          4 2013 "B" "Italy"          1 1   .6806767 -1.9956785  2570869 1.2199935 12.15 1.13354e+17
          4 2014 "B" "Italy"          0 0 -1.9673892     -2.941  2570752 .24104743 12.68 1.12803e+17
          5 1990 "C" "United Kingdom" 0 1     16.809     16.809  1846210  8.063461  6.97 5.76584e+16
          5 1991 "C" "United Kingdom" 0 1     16.809     16.809  1825844  7.461783  8.55 5.51486e+16
          5 1992 "C" "United Kingdom" 1 1  13.403038  13.403038  1833167 4.5915494  9.78 5.94441e+16
          5 1993 "C" "United Kingdom" 0 1       12.1       12.1  1878822  2.558578 10.35 5.35544e+16
          5 1994 "C" "United Kingdom" 0 1       12.1       12.1  1951082 2.2190125  9.65 5.90669e+16
          5 2000 "C" "United Kingdom" 0 1      1.806      1.806  2386524 1.1829562  5.56 6.64745e+16
          5 2001 "C" "United Kingdom" 1 1   1.958802   1.958802  2451683 1.5323496   4.7 6.12017e+16
          5 2002 "C" "United Kingdom" 0 1      2.076      2.076  2505100 1.5204024  5.04 6.22673e+16
          5 2003 "C" "United Kingdom" 0 1      2.076      2.076  2588317 1.3765004  4.81 6.70562e+16
          5 2004 "C" "United Kingdom" 0 1      2.076      2.076  2647493 1.3903975  4.59 7.82735e+16
          5 2010 "C" "United Kingdom" 1 1   3.564247   3.697813  2796536  2.492655  7.79 6.99631e+16
          5 2011 "C" "United Kingdom" 0 .   5.454876   5.670001  2832212 3.8561125  8.04 7.67414e+16
          5 2012 "C" "United Kingdom" 0 .   5.454876       5.67  2872722  2.573235  7.88 7.59697e+16
          5 2013 "C" "United Kingdom" 0 .   5.454876   5.670001  2935525 2.2916667  7.52 7.64401e+16
          5 2014 "C" "United Kingdom" 0 .   5.454876   5.670001  3019561   1.45112  6.11 7.94942e+16
          6 1990 "C" "United States"  1 1       14.9       14.9 10650444  5.397956   5.6           .
          6 1991 "C" "United States"  0 1       14.9       14.9 10638913  4.234964   6.8           .
          6 1992 "C" "United States"  1 1       14.9       14.9 11013662 3.0288196   7.5           .
          6 1993 "C" "United States"  0 1   .0375137   .0375137 11316734  2.951657   6.9           .
          6 1994 "C" "United States"  1 1      -.781      -.781 11772662  2.607442  6.12           .
          6 2000 "C" "United States"  1 1      1.463      1.463 14931055  3.376857  3.99           .
          6 2001 "C" "United States"  0 1   5.542341   5.542341 15073548  2.826171  4.73           .
          6 2002 "C" "United States"  1 1      5.767      5.767 15329187 1.5860317  5.78           .
          6 2003 "C" "United States"  0 1      5.767      5.767 15757822 2.2700949  5.99           .
          6 2004 "C" "United States"  1 1      5.767      5.767 16364901  2.677237  5.53           .
          6 2010 "C" "United States"  1 1       -.36       -.36 17784695 1.6400435  9.63           .
          6 2011 "C" "United States"  0 1       -.36       -.36 18060339 3.1568415  8.95           .
          6 2012 "C" "United States"  1 1       -.36       -.36 18472238 2.0693374  8.07           .
          6 2013 "C" "United States"  0 1   1.298654   1.298654 18812474 1.4648327  7.37           .
          6 2014 "C" "United States"  0 1       1.39       1.39 19242861  1.622223  6.17           .
          end

          Comment


          • #6
            Originally posted by Giorgio Di Stefano View Post
            I need to get the x where the minimum occurs and the y for that minimum, and extract them into my dataset.
            It seems you already save these in the locals `x' and `y'. Didn't you write this code yourself?

            Code:
            qui margins, at(indicator1=(-16(4)29) cpi=0 u=2 Output= 7.54e+16 dummy1=0 dummy2=0 indicator1=-4)
            gen xmax= `x'
            gen ymax=`y'
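
            A side note: locals vanish once the do-file finishes, so if the values are needed later, scalars also work and persist for the session. A minimal sketch, assuming `x' and `y' are still in scope (the names are arbitrary):

            Code:
            scalar xmin = `x'
            scalar ymin = `y'
            display xmin, ymin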

            Comment


            • #7
              Originally posted by Andrew Musau View Post

              It seems you already save these in the locals `x' and `y'. Didn't you write this code yourself?

              Code:
              qui margins, at(indicator1=(-16(4)29) cpi=0 u=2 Output= 7.54e+16 dummy1=0 dummy2=0 indicator1=-4)
              gen xmax= `x'
              gen ymax=`y'
              Thank you very much, Andrew, for your help.

              Also, I would like to kindly ask for your help and patience on the following. I think it amounts to a few simple, quick Stata command lines for an expert user.

              I need to repeat the entire procedure above (regression, margins creation, storage, etc.) as a whole for each decade subperiod, and then for each of the three groups within each decade separately, obviously obtaining different results for each group and decade.

              While I could do that by writing separate code for each decade and each group, that would require a lot of time and many long lines. Is there a way to write a loop that does it all at once?

              Just to mention: in the sample provided I have only included four decades, but in my real data I have 8 decades (1945 to 2020), and the time series contain gaps.

              Thank you for any help you can provide.

              The code now becomes as follows:

              Code:

              xtset id ts

              qui xtregar D.gdp cpi u Output dummy1 dummy2 c.indicator1##c.indicator1, fe

              // get the x where the minimum occurs
              local x = -_b[indicator1]/(2*_b[indicator1#indicator1])

              // get the y for that minimum
              qui margins, at(indicator1 = `x' cpi=0 u=2 Output= 7.54e+16 dummy1=0 dummy2=0 indicator1=-4)
              // the predictions are stored in the matrix r(b)
              // el(r(b),1,1) extracts the cell 1,1 from the matrix r(b)
              local y = el(r(b),1,1)

              // Generate optimal values for indicator and gdp
              gen indmax= `x'
              gen gdpmax=`y'

              // to display those values we don't want all the decimal places
              // : display allows you to control how a number is displayed
              // it adds some spaces before the number, which strtrim() removes
              local ylab : display %9.0fc `y'
              local ylab = strtrim("`ylab'")
              local xlab : display %9.0fc `x'
              local xlab = strtrim("`xlab'")

              // prepare for our plot
              nlcom -_b[indicator1]/(2*_b[indicator1#indicator1])
              qui margins, at(indicator1=(-16(4)29) cpi=0 u=2 Output= 7.54e+16 dummy1=0 dummy2=0 indicator1=-4)

              // with scatteri we can add a point and a label to our graph
              marginsplot, noci plotopts(msymbol(i)) legend(off) ///
              addplot(scatteri `y' `x' (12) "(`xlab'; `ylab')") ///
              ylab(,format(%9.3fc) angle(0)) ytitle("predicted gdp") title("Predicted Margins pair")
              Last edited by Giorgio Di Stefano; 13 Mar 2022, 23:11.

              Comment


              • #8
                Create indicators for the decades/groups, e.g., using

                Code:
                tab group, g(group)
                tab decade, g(decade)
                then

                Code:
                foreach var of varlist group? decade?{
                    xtset id ts
                    qui xtregar D.gdp cpi u Output dummy1 dummy2 c.indicator1##c.indicator1 if `var', fe
                    OTHER CODE
                    gen indmax_`var'= `x'
                    gen gdpmax_`var'=`y'
                    OTHER CODE
                }
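
                One detail worth flagging as an aside: an -if- condition such as if `var' treats any nonzero, nonmissing value as true, so with 0/1 dummies it is shorthand for the explicit form:

                Code:
                xtregar D.gdp cpi u Output dummy1 dummy2 c.indicator1##c.indicator1 if `var'==1, fe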

                Comment


                • #9
                  Originally posted by Andrew Musau View Post
                  Create indicators for the decades/ groups, e.g., using

                  Code:
                  tab group, g(group)
                  tab decade, g(decade)
                  then

                  Code:
                  foreach var of varlist group? decade?{
                      xtset id ts
                      qui xtregar D.gdp cpi u Output dummy1 dummy2 c.indicator1##c.indicator1 if `var', fe
                      OTHER CODE
                      gen indmax_`var'= `x'
                      gen gdpmax_`var'=`y'
                      OTHER CODE
                  }
                  Thanks again, Andrew! I much appreciate your time and effort!
                  I have now run the code and have some remarks; I did not get exactly what I was looking for.

                  1. This may be a bit naïve. In the data I have the time variable, ts, in years. I don't have decades created yet, although I do have dummies for each decade, which I include as time-shifting effects in the regression equation (not in the sample provided). When I tried, nonetheless,

                  tab ts, group(ts)
                  the command created a single dummy for each of the years. I don't have decades already created in the data, just dummies as time-shifting effects for each decade. So how do you create this? Should I create new decade dummies, or will the ones I already have do? Should I leave ts as it is? Won't adding the same dummy variable twice affect the regression?
                  Most important:
                  2. I got a single value for each of the generated indmax and gdpmax for the entire period. Trying to solve this, I added & inrange(ts,1970,1979), for example, to the regression command, that is, if `var' & inrange(ts,1970,1979), fe, but still got the same indmax and gdpmax values as before, i.e., the entire-period values.

                  3. I also added gen indmax if inrange(ts,1970,1979), yet the outcome was the same as before, although the values were generated only for the in-range decade. That is fine if I consider the entire period, but I was looking to get indmax and gdpmax for each decade as a whole and then for each of the 3 groups within each decade. That makes two values for the entire period, two for each decade, and 6 more for the groups (2 x 3 groups), for a total of 8 (6 for the groups + 2 for the decade as a whole) per decade, data permitting. I would group them into four new variables: two for the entire period, say indmaxfull and gdpmaxfull, and two for the decades, say indmaxdecade and gdpmaxdecade, stacking the decade values together. For the latter, the values are expected to be somewhat different for each decade. I hope you get my point.

                  4. After that, going further (not previously asked), I need to generate the gaps Δindicator1, defined as indicator1 - indmax, similarly the gaps Δgdp, and then the ratio Δindicator1/Δgdp, for the entire period and for each group, and then for each decade and each group within it.

                  Then, putting some weight on the differences Δindicator1 and Δgdp, collapse them to their weighted averages over the entire period, and then for each decade and each group.

                  5. Finally, I have to calculate the 1-a value and run some postestimation correlation with a count variable (not in the sample) called duration, which expresses time duration in years, that is, the time elapsed between the events captured by the dummy1 variable. The aim is to check when a is expected to be higher, when it starts to fall, and what the rate of depreciation is, if any.


                  I understand this is a lot; I am indebted to you and cannot thank you enough!

                  Thank you wholeheartedly,
                  Giorgio!

                  Comment


                  • #10
                    I cannot answer all your questions in one go, so I will answer as time allows.

                    Originally posted by Giorgio Di Stefano View Post

                    1. This may be a bit naïve. In the data I have the time variable, ts, in years. I don't have decades created yet, although I do have dummies for each decade, which I include as time-shifting effects in the regression equation (not in the sample provided). When I tried, nonetheless,



                    the command created a single dummy for each of the years. I don't have decades already created in the data, just dummies as time-shifting effects for each decade. So how do you create this? Should I create new decade dummies, or will the ones I already have do? Should I leave ts as it is? Won't adding the same dummy variable twice affect the regression?
                    Most important:

                    You can use the floor() function to create decade indicators. A decade is 10 years, so we can define each decade using the formula


                    $$\text{decade} = 10 \times \text{floor}\left(\frac{\text{year}}{10}\right).$$


                    As the years 1970-1979 are in the 1970s decade, we see that for the years 1970, 1974, 1979, we have:

                    Code:
                    di 10*floor(1970/10)
                    di 10*floor(1974/10)
                    di 10*floor(1979/10)
                    and 1993 and 1999 are in the 1990s decade:

                    Code:
                    di 10*floor(1993/10)
                    di 10*floor(1999/10)

                    Res.:

                    Code:
                    . di 10*floor(1970/10)
                    1970
                    
                    . di 10*floor(1974/10)
                    1970
                    
                    . di 10*floor(1979/10)
                    1970
                    
                    . di 10*floor(1993/10)
                    1990
                    
                    . di 10*floor(1999/10)
                    1990

                    Therefore, you can create a decade variable from the years using

                    Code:
                    gen decade= 10*floor(ts/10)
                    and then dummies as

                    Code:
                    tab decade, g(decade)
                    Res.:

                    Code:
                    
                    . l id ts decade*, sepby(decade)
                    
                         +--------------------------------------------------+
                         | id     ts   decade   decade1   decade2   decade3 |
                         |--------------------------------------------------|
                      1. |  1   1990     1990         1         0         0 |
                      2. |  1   1991     1990         1         0         0 |
                      3. |  1   1992     1990         1         0         0 |
                      4. |  1   1993     1990         1         0         0 |
                      5. |  1   1994     1990         1         0         0 |
                         |--------------------------------------------------|
                      6. |  1   2000     2000         0         1         0 |
                      7. |  1   2001     2000         0         1         0 |
                      8. |  1   2002     2000         0         1         0 |
                      9. |  1   2003     2000         0         1         0 |
                     10. |  1   2004     2000         0         1         0 |
                         |--------------------------------------------------|
                     11. |  1   2010     2010         0         0         1 |
                     12. |  1   2011     2010         0         0         1 |
                     13. |  1   2012     2010         0         0         1 |
                     14. |  1   2013     2010         0         0         1 |
                     15. |  1   2014     2010         0         0         1 |
                         |--------------------------------------------------|
                     16. |  2   1990     1990         1         0         0 |
                     17. |  2   1991     1990         1         0         0 |
                     18. |  2   1992     1990         1         0         0 |
                     19. |  2   1993     1990         1         0         0 |
                     20. |  2   1994     1990         1         0         0 |
                         |--------------------------------------------------|
                     21. |  2   2000     2000         0         1         0 |
                     22. |  2   2001     2000         0         1         0 |
                     23. |  2   2002     2000         0         1         0 |
                     24. |  2   2003     2000         0         1         0 |
                     25. |  2   2004     2000         0         1         0 |
                         |--------------------------------------------------|
                     26. |  2   2010     2010         0         0         1 |
                     27. |  2   2011     2010         0         0         1 |
                     28. |  2   2012     2010         0         0         1 |
                     29. |  2   2013     2010         0         0         1 |
                     30. |  2   2014     2010         0         0         1 |
                         |--------------------------------------------------|
                     31. |  3   1990     1990         1         0         0 |
                     32. |  3   1991     1990         1         0         0 |
                     33. |  3   1992     1990         1         0         0 |
                     34. |  3   1993     1990         1         0         0 |
                     35. |  3   1994     1990         1         0         0 |
                         |--------------------------------------------------|
                     36. |  3   2000     2000         0         1         0 |
                     37. |  3   2001     2000         0         1         0 |
                     38. |  3   2002     2000         0         1         0 |
                     39. |  3   2003     2000         0         1         0 |
                     40. |  3   2004     2000         0         1         0 |
                         |--------------------------------------------------|
                     41. |  3   2010     2010         0         0         1 |
                     42. |  3   2011     2010         0         0         1 |
                     43. |  3   2012     2010         0         0         1 |
                     44. |  3   2013     2010         0         0         1 |
                     45. |  3   2014     2010         0         0         1 |
                         |--------------------------------------------------|
                     46. |  4   1990     1990         1         0         0 |
                     47. |  4   1991     1990         1         0         0 |
                     48. |  4   1992     1990         1         0         0 |
                     49. |  4   1993     1990         1         0         0 |
                     50. |  4   1994     1990         1         0         0 |
                         |--------------------------------------------------|
                     51. |  4   2000     2000         0         1         0 |
                     52. |  4   2001     2000         0         1         0 |
                     53. |  4   2002     2000         0         1         0 |
                     54. |  4   2003     2000         0         1         0 |
                     55. |  4   2004     2000         0         1         0 |
                         |--------------------------------------------------|
                     56. |  4   2010     2010         0         0         1 |
                     57. |  4   2011     2010         0         0         1 |
                     58. |  4   2012     2010         0         0         1 |
                     59. |  4   2013     2010         0         0         1 |
                     60. |  4   2014     2010         0         0         1 |
                         |--------------------------------------------------|
                     61. |  5   1990     1990         1         0         0 |
                     62. |  5   1991     1990         1         0         0 |
                     63. |  5   1992     1990         1         0         0 |
                     64. |  5   1993     1990         1         0         0 |
                     65. |  5   1994     1990         1         0         0 |
                         |--------------------------------------------------|
                     66. |  5   2000     2000         0         1         0 |
                     67. |  5   2001     2000         0         1         0 |
                     68. |  5   2002     2000         0         1         0 |
                     69. |  5   2003     2000         0         1         0 |
                     70. |  5   2004     2000         0         1         0 |
                         |--------------------------------------------------|
                     71. |  5   2010     2010         0         0         1 |
                     72. |  5   2011     2010         0         0         1 |
                     73. |  5   2012     2010         0         0         1 |
                     74. |  5   2013     2010         0         0         1 |
                     75. |  5   2014     2010         0         0         1 |
                         |--------------------------------------------------|
                     76. |  6   1990     1990         1         0         0 |
                     77. |  6   1991     1990         1         0         0 |
                     78. |  6   1992     1990         1         0         0 |
                     79. |  6   1993     1990         1         0         0 |
                     80. |  6   1994     1990         1         0         0 |
                         |--------------------------------------------------|
                     81. |  6   2000     2000         0         1         0 |
                     82. |  6   2001     2000         0         1         0 |
                     83. |  6   2002     2000         0         1         0 |
                     84. |  6   2003     2000         0         1         0 |
                     85. |  6   2004     2000         0         1         0 |
                         |--------------------------------------------------|
                     86. |  6   2010     2010         0         0         1 |
                     87. |  6   2011     2010         0         0         1 |
                     88. |  6   2012     2010         0         0         1 |
                     89. |  6   2013     2010         0         0         1 |
                     90. |  6   2014     2010         0         0         1 |
                         +--------------------------------------------------+
                    
                    .
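
                    As an aside, an equivalent route uses -egen, cut()- with decade breakpoints; "decade_alt" is just a placeholder name, and the at() range is assumed to cover the sample years:

                    Code:
                    egen decade_alt = cut(ts), at(1940(10)2030)   // left endpoint of each 10-year bin
                    assert decade_alt == 10*floor(ts/10)          // agrees with the floor() construction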
                    Last edited by Andrew Musau; 16 Mar 2022, 08:55.

                    Comment


                    • #11
                      Originally posted by Giorgio Di Stefano View Post

                      2. I got a single value for each of the generated indmax and gdpmax for the entire period. Trying to solve this, I added & inrange(ts,1970,1979), for example, to the regression command, that is, if `var' & inrange(ts,1970,1979), fe, but still got the same indmax and gdpmax values as before, i.e., the entire-period values.

                      3. I also added gen indmax if inrange(ts,1970,1979), yet the outcome was the same as before, although the values were generated only for the in-range decade. That is fine if I consider the entire period, but I was looking to get indmax and gdpmax for each decade as a whole and then for each of the 3 groups within each decade. That makes two values for the entire period, two for each decade, and 6 more for the groups (2 x 3 groups), for a total of 8 (6 for the groups + 2 for the decade as a whole) per decade, data permitting. I would group them into four new variables: two for the entire period, say indmaxfull and gdpmaxfull, and two for the decades, say indmaxdecade and gdpmaxdecade, stacking the decade values together. For the latter, the values are expected to be somewhat different for each decade. I hope you get my point.
                      The min and max statistics will be generated corresponding to each -if- condition. If you want them for the full sample, drop the -if- conditions. For the data in #5, the statistics are generated only for the first 2 decades due to the number of observations.

                      Code:
                      gen decade= 10*floor(ts/10)
                      tab decade, g(decade)
                      
                      foreach var of varlist decade?{
                          xtset id ts
                          xtregar D.gdp cpi u Output dummy1 dummy2 c.indicator1##c.indicator1 if `var', fe
                          // get the x where the minimum occurs
                          local x = -_b[indicator1]/(2*_b[indicator1#indicator1])
                          // get the y for that minimum
                          qui margins, at(indicator1 = `x' cpi=0 u=2 Output= 7.54e+16 dummy1=0 dummy2=0 indicator1=-4)
                          // the predictions are stored in the matrix r(b)
                          // el(r(b),1,1) extracts the cell 1,1 from the matrix r(b)
                          local y = el(r(b),1,1)
                          gen indmax_`var'= `x'
                          gen gdpmax_`var'=`y'
                          // labels for the plot point (needed by `xlab' and `ylab' below)
                          local ylab : display %9.0fc `y'
                          local ylab = strtrim("`ylab'")
                          local xlab : display %9.0fc `x'
                          local xlab = strtrim("`xlab'")
                          // prepare for our plot
                          nlcom -_b[indicator1]/(2*_b[indicator1#indicator1])
                          qui margins, at(indicator1=(-16(4)29) cpi=0 u=2 Output= 7.54e+16 dummy1=0 dummy2=0 indicator1=-4)
                          // with scatteri we can add a point and a label to our graph
                          marginsplot, noci plotopts(msymbol(i)) legend(off) ///
                          addplot(scatteri `y' `x' (12) "(`xlab'; `ylab')") ///
                          ylab(,format(%9.3fc) angle(0)) ytitle("predicted gdp") title("Predicted Margins pair")
                      }
                      Res.:

                      Code:
                      . list id ts country indmax_decade* gdpmax_decade* in 1
                      
                           +--------------------------------------------------------------------+
                           | id     ts     country   indmax~1   indmax~2   gdpmax~1   gdpmax_~2 |
                           |--------------------------------------------------------------------|
                        1. |  1   1990   Australia   15.24291   19.02719   822309.9   -916432.4 |
                           +--------------------------------------------------------------------+
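
                      As an aside, a sketch of the same loop run over the values of decade directly, without generating dummies, via -levelsof- (only the x-extraction step is shown):

                      Code:
                      levelsof decade, local(decades)
                      foreach d of local decades {
                          xtregar D.gdp cpi u Output dummy1 dummy2 c.indicator1##c.indicator1 if decade==`d', fe
                          local x = -_b[indicator1]/(2*_b[indicator1#indicator1])
                          gen indmax_`d' = `x'   // e.g., indmax_1990, indmax_2000, ...
                      }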

                      Comment


                      • #12
                        Originally posted by Andrew Musau View Post

                        The min and max statistics will be generated corresponding to each -if condition-. If you want for the full sample, drop the -if- conditions. For the data in #5, the statistics are generated only for the first 2 decades due to the number of observations.

                        Code:
                        gen decade= 10*floor(ts/10)
                        tab decade, g(decade)
                        
                        foreach var of varlist decade?{
                            xtset id ts
                            xtregar D.gdp cpi u Output dummy1 dummy2 c.indicator1##c.indicator1 if `var', fe
                            // get the x where the minimum occurs
                            local x = -_b[indicator1]/(2*_b[indicator1#indicator1])
                            // get the y for that minimum
                            qui margins, at(indicator1 = `x' cpi=0 u=2 Output= 7.54e+16 dummy1=0 dummy2=0 indicator1=-4)
                            // the predictions are stored in the matrix r(b)
                            // el(r(b),1,1) extracts the cell 1,1 from the matrix r(b)
                            local y = el(r(b),1,1)
                            gen indmax_`var'= `x'
                            gen gdpmax_`var'=`y'
                            // labels for the plot point (needed by `xlab' and `ylab' below)
                            local ylab : display %9.0fc `y'
                            local ylab = strtrim("`ylab'")
                            local xlab : display %9.0fc `x'
                            local xlab = strtrim("`xlab'")
                            // prepare for our plot
                            nlcom -_b[indicator1]/(2*_b[indicator1#indicator1])
                            qui margins, at(indicator1=(-16(4)29) cpi=0 u=2 Output= 7.54e+16 dummy1=0 dummy2=0 indicator1=-4)
                            // with scatteri we can add a point and a label to our graph
                            marginsplot, noci plotopts(msymbol(i)) legend(off) ///
                            addplot(scatteri `y' `x' (12) "(`xlab'; `ylab')") ///
                            ylab(,format(%9.3fc) angle(0)) ytitle("predicted gdp") title("Predicted Margins pair")
                        }
                        Res.:

                        Code:
                        . list id ts country indmax_decade* gdpmax_decade* in 1
                        
                             +--------------------------------------------------------------------+
                             | id     ts     country   indmax~1   indmax~2   gdpmax~1   gdpmax_~2 |
                             |--------------------------------------------------------------------|
                          1. |  1   1990   Australia   15.24291   19.02719   822309.9   -916432.4 |
                             +--------------------------------------------------------------------+
                        Thank you so much again Andrew!!!

                        I now have an idea of how to do that. I would still need the following steps.



                        1. On point 4 in #9, I would need to generate the gaps Δindicator1, defined as indicator1 - indmax, then similarly the gaps Δgdp, and then the ratio defined as Δindicator1/Δgdp, for the entire period and for each group, and then for each decade and each group.
                        Then, putting some weight on the differences Δindicator1 and Δgdp, collapse them to their weighted averages over the entire period, and then for each decade and each group. That means two types of collapse or more: one in general and one for each subperiod and subgroup, as explained above.

                        2. Also, I have to calculate the 1-a value of the intercept and run some postestimation correlation with a count variable called length in the data below, which expresses time duration in years, that is, the time elapsed between the events captured by the dummy1 variable. The aim is to check when a is expected to be higher, when it starts to fall, and what the rate of depreciation is, if any.

                        3. Lastly, if you check my data, I have included two indicators just for the example, while in reality I have 10. To avoid repeating the whole process above for each indicator, is there a loop that includes them all together as I run the program you've written, generating the variables for each, etc., as above?
                        I know I have asked too much, so I cannot thank you enough for your time.





                        ----------------------- copy starting from the next line -----------------------
                        Code:
                        * Example generated by -dataex-. To install: ssc install dataex
                        clear
                        input float(id ts) str1 group str97 country float dummy1 byte(dummy2 length) float(indicator1 indicator2) long gdp float(cpi u) double Output byte count
                        1 1990 "A" "Australia"      1 1 3  10.999176  10.999176   571564  7.333022  6.93 1.28877e+16 26
                        1 1991 "A" "Australia"      0 1 2       12.2       12.2   565844  3.176675  9.58  1.3296e+15 27
                        1 1992 "A" "Australia"      0 1 2       12.2       12.2   580278 1.0122311 10.73 1.22544e+16 27
                        1 1993 "A" "Australia"      1 1 3  17.553352  17.553352   602944 1.7536534 10.87 1.18076e+16 28
                        1 1994 "A" "Australia"      0 1 3      19.11      19.11   632477 1.9696348  9.72 1.34421e+16 28
                        1 2000 "A" "Australia"      0 1 3     28.634     28.634   798334  4.457435  6.28 1.39202e+16 30
                        1 2001 "A" "Australia"      1 1 3   26.41307   26.54496   819144 4.4071355  6.74 1.30234e+16 31
                        1 2002 "A" "Australia"      0 1 3   5.536341      6.908   853048 2.9815745  6.37 1.61436e+16 31
                        1 2003 "A" "Australia"      0 1 3   5.536341      6.908   876715  2.732596  5.93 2.02622e+16 31
                        1 2004 "A" "Australia"      1 1 3   7.350179   8.569935   912927 2.3432553  5.39 2.35621e+16 32
                        1 2010 "A" "Australia"      1 1 3   6.241802   6.241802  1080050   2.91834  5.21 3.47466e+16 35
                        1 2011 "A" "Australia"      0 0 3      5.068      5.068  1110726   3.30385  5.08  4.0027e+15 35
                        1 2012 "A" "Australia"      0 0 3      5.068      5.068  1152963 1.7627802  5.22 4.13369e+16 35
                        1 2013 "A" "Australia"      1 1 3   7.929047   8.248285  1177554  2.449889  5.66 3.74135e+16 37
                        1 2014 "A" "Australia"      0 . .  15.081665     16.199  1207855  2.487923  6.08 3.47753e+16 37
                        2 1990 "A" "France"         0 0 3        1.6        1.6  2081911 3.1942835  9.36 5.03813e+16 51
                        2 1991 "A" "France"         0 0 3        1.6        1.6  2103733  3.213407  9.13 4.93537e+16 52
                        2 1992 "A" "France"         0 1 1        1.6        1.6  2137379 2.3637605 10.21  5.2038e+15 53
                        2 1993 "A" "France"         1 1 2   6.996447  -.8161401  2123942 2.1044629 11.32 5.80229e+16 54
                        2 1994 "A" "France"         0 0 2    8.69136     -1.575  2174032 1.6555153 12.59   5.071e+14 54
                        2 2000 "A" "France"         0 0 5   2.364474      2.797  2564959   1.67596 10.22 7.85938e+16 57
                        2 2001 "A" "France"         0 0 5   2.364474      2.797  2615840 1.6347808  8.61 7.85391e+16 57
                        2 2002 "A" "France"         1 1 5  -.7668521  -.7921649  2645544 1.9234123   8.7 8.75676e+16 59
                        2 2003 "A" "France"         0 0 2 -2.9592714     -3.286  2667321  2.098472  8.31 1.03639e+17 59
                        2 2004 "A" "France"         0 0 2  -2.950783     -3.286  2742800 2.1420896  8.91  1.0654e+16 60
                        2 2010 "A" "France"         0 0 3      3.891      3.891  2904699 1.5311227  8.87 9.63052e+16 64
                        2 2011 "A" "France"         0 0 1      3.891      3.891  2968390  2.111598  8.81 1.08307e+17 65
                        2 2012 "A" "France"         1 1 2 -2.5410414 -2.1623123  2977685 1.9541953   9.4 9.84001e+16 67
                        2 2013 "A" "France"         0 0 2  -6.333579     -5.608  2994846  .8637155  9.92 9.87864e+16 67
                        2 2014 "A" "France"         0 0 2  -6.108374     -5.608  3023483  .5077588 10.29 1.04153e+17 68
                        3 1990 "B" "Germany"        0 1 3   4.255927      4.029  3090684 2.6964715  4.89           . 24
                        3 1991 "B" "Germany"        0 1 3   .5641079 -1.6926868  3245558 4.0470366  5.32 1.21946e+17 25
                        3 1992 "B" "Germany"        0 1 3   .3835026     -1.973  3307973  5.056979  6.32 1.29671e+17 25
                        3 1993 "B" "Germany"        0 1 3   .3835026     -1.973  3275659  4.474575  7.68 1.14872e+17 25
                        3 1994 "B" "Germany"        1 1 4   .8169867 -1.2526814  3354009  2.693057  8.73 1.21188e+17 26
                        3 2000 "B" "Germany"        0 1 4  1.9049623      2.806  3738235  1.440268  7.92 1.17777e+17 27
                        3 2001 "B" "Germany"        0 1 4  1.9049622      2.806  3801092  1.983857  7.77 1.17577e+17 27
                        3 2002 "B" "Germany"        1 1 4   1.283737  2.0394616  3793567 1.4208056  8.48 1.22359e+17 28
                        3 2003 "B" "Germany"        0 1 3 -1.3254085      -1.18  3767008 1.0342277  9.78 1.50381e+17 28
                        3 2004 "B" "Germany"        0 1 3 -1.3254085      -1.18  3811273 1.6657335 10.73 1.75373e+17 28
                        3 2010 "B" "Germany"        0 1 4  1.1606808       -.05  4071113 1.1038091  6.97  2.0819e+16 30
                        3 2011 "B" "Germany"        0 1 4  1.1606808       -.05  4230912 2.0751746  5.82 2.44939e+17 30
                        3 2012 "B" "Germany"        0 1 4  1.1606808       -.05  4248618  2.008491  5.38 2.25506e+17 30
                        3 2013 "B" "Germany"        1 1 4  1.1165322      .0625  4267210  1.504721  5.23 2.32201e+17 31
                        3 2014 "B" "Germany"        0 . .  .01281774      2.875  4361496  .9067979  4.98 2.37452e+17 31
                        4 1990 "B" "Italy"          0 0 2   3.829138       3.66  2199474  6.456609  9.79 4.78032e+16 49
                        4 1991 "B" "Italy"          0 1 2   3.777037       3.66  2233312      6.25  10.1 4.77836e+16 50
                        4 1992 "B" "Italy"          1 1 1   2.615249  2.9572604  2251944   5.27059  9.33 5.49646e+16 51
                        4 1993 "B" "Italy"          0 1 1  1.4437795   .7736538  2232739 4.6267347 10.24 4.39227e+16 52
                        4 1994 "B" "Italy"          1 1 1  4.0116615  13.063806  2280766  4.051842 11.09 5.03757e+16 53
                        4 2000 "B" "Italy"          0 1 1    5.97681   1.812674  2598506 2.5376854 10.84 7.22295e+16 58
                        4 2001 "B" "Italy"          1 1 4   7.402936  4.7593465  2649212  2.785165   9.6 7.13962e+16 59
                        4 2002 "B" "Italy"          0 0 4      8.534      8.534  2655940  2.465323  9.21 7.52609e+16 59
                        4 2003 "B" "Italy"          0 0 4      8.534      8.534  2659622 2.6725554  8.87 8.79973e+16 59
                        4 2004 "B" "Italy"          0 0 4      8.534      8.534  2697484 2.2067366  7.87 1.03212e+17 59
                        4 2010 "B" "Italy"          0 0 3     -16.25          0  2680599  1.525516  8.36  1.1136e+16 62
                        4 2011 "B" "Italy"          0 0 3 -14.330358          0  2699559  2.780633  8.36 1.24328e+17 63
                        4 2012 "B" "Italy"          0 1 1          0          0  2619088  3.041363 10.65 1.11628e+17 64
                        4 2013 "B" "Italy"          1 1 1   .6806767 -1.9956785  2570869 1.2199935 12.15 1.13354e+17 67
                        4 2014 "B" "Italy"          0 0 1 -1.9673892     -2.941  2570752 .24104743 12.68 1.12803e+17 68
                        5 1990 "C" "United Kingdom" 0 1 3     16.809     16.809  1846210  8.063461  6.97 5.76584e+16 18
                        5 1991 "C" "United Kingdom" 0 1 2     16.809     16.809  1825844  7.461783  8.55 5.51486e+16 18
                        5 1992 "C" "United Kingdom" 1 1 5  13.403038  13.403038  1833167 4.5915494  9.78 5.94441e+16 19
                        5 1993 "C" "United Kingdom" 0 1 5       12.1       12.1  1878822  2.558578 10.35 5.35544e+16 19
                        5 1994 "C" "United Kingdom" 0 1 5       12.1       12.1  1951082 2.2190125  9.65 5.90669e+16 19
                        5 2000 "C" "United Kingdom" 0 1 4      1.806      1.806  2386524 1.1829562  5.56 6.64745e+16 20
                        5 2001 "C" "United Kingdom" 1 1 4   1.958802   1.958802  2451683 1.5323496   4.7 6.12017e+16 21
                        5 2002 "C" "United Kingdom" 0 1 4      2.076      2.076  2505100 1.5204024  5.04 6.22673e+16 21
                        5 2003 "C" "United Kingdom" 0 1 4      2.076      2.076  2588317 1.3765004  4.81 6.70562e+16 21
                        5 2004 "C" "United Kingdom" 0 1 4      2.076      2.076  2647493 1.3903975  4.59 7.82735e+16 21
                        5 2010 "C" "United Kingdom" 1 1 3   3.564247   3.697813  2796536  2.492655  7.79 6.99631e+16 24
                        5 2011 "C" "United Kingdom" 0 . .   5.454876   5.670001  2832212 3.8561125  8.04 7.67414e+16 24
                        5 2012 "C" "United Kingdom" 0 . .   5.454876       5.67  2872722  2.573235  7.88 7.59697e+16 24
                        5 2013 "C" "United Kingdom" 0 . .   5.454876   5.670001  2935525 2.2916667  7.52 7.64401e+16 24
                        5 2014 "C" "United Kingdom" 0 . .   5.454876   5.670001  3019561   1.45112  6.11 7.94942e+16 24
                        6 1990 "C" "United States"  1 1 2       14.9       14.9 10650444  5.397956   5.6           . 24
                        6 1991 "C" "United States"  0 1 2       14.9       14.9 10638913  4.234964   6.8           . 25
                        6 1992 "C" "United States"  1 1 2       14.9       14.9 11013662 3.0288196   7.5           . 25
                        6 1993 "C" "United States"  0 1 2   .0375137   .0375137 11316734  2.951657   6.9           . 26
                        6 1994 "C" "United States"  1 1 2      -.781      -.781 11772662  2.607442  6.12           . 26
                        6 2000 "C" "United States"  1 1 2      1.463      1.463 14931055  3.376857  3.99           . 29
                        6 2001 "C" "United States"  0 1 2   5.542341   5.542341 15073548  2.826171  4.73           . 30
                        6 2002 "C" "United States"  1 1 2      5.767      5.767 15329187 1.5860317  5.78           . 30
                        6 2003 "C" "United States"  0 1 2      5.767      5.767 15757822 2.2700949  5.99           . 31
                        6 2004 "C" "United States"  1 1 2      5.767      5.767 16364901  2.677237  5.53           . 31
                        6 2010 "C" "United States"  1 1 2       -.36       -.36 17784695 1.6400435  9.63           . 34
                        6 2011 "C" "United States"  0 1 2       -.36       -.36 18060339 3.1568415  8.95           . 35
                        6 2012 "C" "United States"  1 1 2       -.36       -.36 18472238 2.0693374  8.07           . 35
                        6 2013 "C" "United States"  0 1 2   1.298654   1.298654 18812474 1.4648327  7.37           . 36
                        6 2014 "C" "United States"  0 1 .       1.39       1.39 19242861  1.622223  6.17           . 36
                        end

                        Comment


                        • #13
                          I will provide some general comments as what you are requesting seems like a full project.


                          1. On point 4 in #9, I would need to generate the gaps Δindicator1, defined as indicator1 - indmax, then similarly the gaps Δgdp, and then the ratio defined as Δindicator1/Δgdp, for the entire period and for each group, and then for each decade and each group.
                          Then, putting some weight on the differences Δindicator1 and Δgdp, collapse them to their weighted averages over the entire period, and then for each decade and each group. That means two types of collapse or more: one in general and one for each subperiod and subgroup, as explained above.
                          Look at the egen command and -rowmax()- function.

                          Code:
                          help egen
                          Here, you can use wildcards to find the maximum of variables with a certain prefix, e.g.,

                          Code:
                          egen durationmax= rowmax(duration?)
                          for variables named duration followed by one character, e.g., duration0, duration1, ..., duration9. For all variables with the prefix "duration":

                          Code:
                          egen durationmax= rowmax(duration*)
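
                          One property worth noting as an aside: -egen, rowmax()- ignores missing values within each row, returning missing only when every listed variable is missing. A quick check on the result:

                          Code:
                          count if missing(durationmax)   // only rows where every duration* variable is missing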

                          2. Also, I have to calculate the 1-a value of the intercept and run some postestimation correlation with a count variable called length in the data below, which expresses time duration in years, that is, the time elapsed between the events captured by the dummy1 variable. The aim is to check when a is expected to be higher, when it starts to fall, and what the rate of depreciation is, if any.

                          I cannot follow this. I would need to understand what you are doing. Maybe post as a new question and see if there is anyone who can follow.


                          3. Lastly, if you check my data, I have included two indicators just for the example, while in reality I have 10. To avoid repeating the whole process above for each indicator, is there a loop that includes them all together as I run the program you've written, generating the variables for each, etc., as above?
                          I know I have asked too much, so I cannot thank you enough for your time.
                          Once you have the indicators in the dataset, you just add the variables to the loop. Nothing else changes. If a categorical variable generates more than 10 indicators (i.e., has more than 10 levels), include both "indicator?" and "indicator??" in the loop, e.g., indicator3 below:

                          Code:
                          foreach var of varlist decade? indicator2? indicator3? indicator3?? indicator4?{
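
                          For reference: in a Stata varlist, ? matches exactly one character and * matches any number of characters, which is why both patterns are needed above. You can preview what a pattern matches with -ds-:

                          Code:
                          ds indicator3?    // names with exactly one character after "indicator3"
                          ds indicator3*    // all names starting with "indicator3"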
                          Again, I am sure there are efficient ways to do what you want, but I do not really understand what you need to do.
                          Last edited by Andrew Musau; 18 Mar 2022, 06:56.

                          Comment


                          • #14
                            Originally posted by Andrew Musau View Post
                            I will provide some general comments as what you are requesting seems like a full project.



                            Look at the egen command and -rowmax()- function.

                            Code:
                            help egen
                            Here, you can use wildcards to find the maximum of variables with a certain prefix, e.g.,

                            Code:
                            egen durationmax= rowmax(duration?)
for variables with duration followed by one number, e.g., duration0, duration1, ..., duration9. For all variables with the prefix "duration":

                            Code:
                            egen durationmax= rowmax(duration*)



                            I cannot follow this. I would need to understand what you are doing. Maybe post as a new question and see if there is anyone who can follow.




Once you have the indicators in the dataset, you just add the variables to the loop. Nothing else changes. If a categorical variable generates more than 10 indicators (i.e., has more than 10 levels), include both "indicator?" and "indicator??" in the loop, e.g., indicator3 below:

                            Code:
foreach var of varlist decade? indicator2? indicator3? indicator3?? indicator4? {
                            Again, I am sure there are efficient ways to do what you want, but I do not really understand what you need to do.
There is one more thing, if you have the time and wish to answer. Of course, I am already grateful for your help!

I am really indebted to you. What I meant is that a is the intercept in the regression; actually, it is a discount factor. What I would like to check is the relation of 1 − a with the count variable length in the data in #12, according to the loop: for the full sample, for decade subperiods, and for subgroups, etc.

I tried to create a new variable after the regression command,

Code:
gen discount = 1 - _cons

according to the loop, but I am getting all zeros. By visual inspection, the _cons values are all, in absolute value, far larger than zero, so I was expecting a different number.

Here is my code:

Code:
tab imf_income, g(imf_income)
gen decade = 10*floor(ts/10)
tab decade, g(decade)
xtset id ts
foreach var of varlist decade? {
    xtregar D.gdp cpi u dummy1 dummy2 c.indicator##c.indicator if `var', fe
    // get the x where the minimum occurs
    local x = -_b[indicator]/(2*_b[c.indicator#c.indicator])
    // get the y for that minimum
    qui margins, at(indicator = `x' cpi=0 u=2 dummy1=0 dummy2=0)
    // the predictions are stored in the matrix r(b)
    // el(r(b),1,1) extracts the cell 1,1 from the matrix r(b)
    local y = el(r(b),1,1)
    gen indmax2_`var' = `x'
    gen gdpmax2_`var' = `y'
    // prepare for our plot
    nlcom -_b[indicator]/(2*_b[c.indicator#c.indicator])
    gen discount_`var' = 1 - _cons
}
The same happens if I remove the -if- condition and run on the full sample. What am I doing wrong, and how do I estimate the correlation and present it in a table?

                            I am so very grateful for your time.

                            Giorgio

                            Comment


                            • #15
                              You wouldn't reference the coefficient on the constant in that way. See

                              Code:
                              help _variables

                              You want "_b[_cons]".

                              Code:
                              sysuse auto, clear
                              regress mpg weight disp, robust
                              di _cons
                              di _b[_cons]
                              Res.:

                              Code:
                              . regress mpg weight disp, robust
                              
                              Linear regression                               Number of obs     =         74
                                                                              F(2, 71)          =      54.17
                                                                              Prob > F          =     0.0000
                                                                              R-squared         =     0.6529
                                                                              Root MSE          =     3.4561
                              
                              ------------------------------------------------------------------------------
                                           |               Robust
                                       mpg |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
                              -------------+----------------------------------------------------------------
                                    weight |  -.0065671   .0009323    -7.04   0.000    -.0084261   -.0047081
                              displacement |   .0052808   .0074408     0.71   0.480    -.0095557    .0201172
                                     _cons |   40.08452   2.074126    19.33   0.000     35.94883    44.22021
                              ------------------------------------------------------------------------------
                              
                              .
                              . di _cons
                              1
                              
                              .
                              . di _b[_cons]
                              40.084522
                              
                              .
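Applied to the loop in #14, the problem line would then become something like the following; the `var' suffix is my addition so that each subsample keeps its own result:

Code:
gen discount_`var' = 1 - _b[_cons]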
Also, if you want to refer to the coefficient from nlcom, note that it is usually not named _cons. Unless you -post- the estimates, you will need to pick the coefficient from the matrix r(b).

                              Code:
                              sysuse auto, clear
                              regress mpg weight displacement, robust
                              nlcom _b[weight]*_b[displacement]
                              di r(b)[1,1]
                              Res.:


                              Code:
                               nlcom _b[weight]*_b[displacement]
                              
                                     _nl_1:  _b[weight]*_b[displacement]
                              
                              ------------------------------------------------------------------------------
                                       mpg |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
                              -------------+----------------------------------------------------------------
                                     _nl_1 |  -.0000347   .0000528    -0.66   0.511    -.0001382    .0000688
                              ------------------------------------------------------------------------------
                              
                              . 
                              . di r(b)[1,1]
                              -.00003468
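As an aside, if you do -post- the nlcom results, you can give the expression a name and refer to it directly afterwards. A small sketch with the same auto data (the name prod is arbitrary):

Code:
sysuse auto, clear
regress mpg weight displacement, robust
nlcom prod: _b[weight]*_b[displacement], post
di _b[prod]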
                              Last edited by Andrew Musau; 18 Mar 2022, 11:13.

                              Comment
