
  • Stochastic Frontier (SFA) model not converging

    Hi everyone,

    I'm trying to estimate eco-efficiency at the industry level for 9 countries, using panel data and Stochastic Frontier Analysis (SFA), by running the following model:

    sfpanel logGDPGHGratio logLabor logCapital logRenew logNonrenew Year, model(tfe) dist(tn) ort(o)

    logGDPGHGratio = the log of the ratio of value added to greenhouse gas emissions
    logLabor = the log of the number of employed people
    logCapital = the log of fixed assets
    logRenew = the log of renewable energy consumption
    logNonrenew = the log of non-renewable energy consumption
    Year = year


    However, when I run the command, the output looks like this:

    initial: Log likelihood = -<inf> (could not be evaluated)
    feasible: Log likelihood = -817.8733
    Iteration 0: Log likelihood = -817.8733 (not concave)
    Iteration 1: Log likelihood = -386.18149 (not concave)
    Iteration 2: Log likelihood = -154.2575 (not concave)
    Iteration 3: Log likelihood = 20.408539
    Iteration 4: Log likelihood = 34.537644 (not concave)
    Iteration 5: Log likelihood = 153.05269
    Iteration 6: Log likelihood = 179.65474
    Iteration 7: Log likelihood = 197.76452
    Iteration 8: Log likelihood = 209.66292 (not concave)
    Iteration 9: Log likelihood = 210.85545
    Iteration 10: Log likelihood = 211.04231
    Iteration 11: Log likelihood = 211.07419
    Iteration 12: Log likelihood = 211.15698 (not concave)
    Iteration 13: Log likelihood = 211.15855
    Iteration 14: Log likelihood = 211.18896
    Iteration 15: Log likelihood = 211.2324
    Iteration 16: Log likelihood = 211.30521
    Iteration 17: Log likelihood = 211.3262
    Iteration 18: Log likelihood = 211.34639
    Iteration 19: Log likelihood = 211.36486
    Iteration 20: Log likelihood = 211.38176 (not concave)
    Iteration 21: Log likelihood = 211.38997 (not concave)
    Iteration 22: Log likelihood = 211.39067
    Iteration 23: Log likelihood = 211.39592
    Iteration 24: Log likelihood = 211.40235
    Iteration 25: Log likelihood = 211.41253
    Iteration 26: Log likelihood = 211.41627
    Iteration 27: Log likelihood = 211.4218
    Iteration 28: Log likelihood = 211.42454
    Iteration 29: Log likelihood = 211.43018
    Iteration 30: Log likelihood = 211.43228
    Iteration 31: Log likelihood = 211.43565
    Iteration 32: Log likelihood = 211.44079
    Iteration 33: Log likelihood = 211.44176
    Iteration 34: Log likelihood = 211.4436
    Iteration 35: Log likelihood = 211.44467
    Iteration 36: Log likelihood = 211.44679
    Iteration 37: Log likelihood = 211.44735
    Iteration 38: Log likelihood = 211.44834
    Iteration 39: Log likelihood = 211.44988
    Iteration 40: Log likelihood = 211.45023
    Iteration 41: Log likelihood = 211.45065
    Iteration 42: Log likelihood = 211.45066 (backed up)
    (iterations 43-100 omitted: all identical, Log likelihood = 211.45066, all "backed up")

    True fixed-effects model (truncated-normal)          Number of obs      =      327
    Group variable: ID                                   Number of groups   =       25
    Time variable: Year                                  Obs per group: min =        5
                                                                        avg =     13.1
                                                                        max =       14

                                                         Prob > chi2        =   0.0000
    Log likelihood = 211.4507                            Wald chi2(5)       =   102.07

    ------------------------------------------------------------------------------
    logGDPGHGr~o | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
    -------------+----------------------------------------------------------------
    Frontier     |
        logLabor |   .3261576   .1169416     2.79   0.005     .0969564    .5553589
      logCapital |   .0388976   .0918791     0.42   0.672    -.1411821    .2189774
        logRenew |   .0043788    .014619     0.30   0.765     -.024274    .0330316
     logNonrenew |  -.0466693   .0126807    -3.68   0.000    -.0715231   -.0218155
            Year |     .01462    .002246     6.51   0.000     .0102179    .0190221
    -------------+----------------------------------------------------------------
    Mu           |
           _cons |  -98.94816   230.2629    -0.43   0.667    -550.2552    352.3589
    -------------+----------------------------------------------------------------
    Usigma       |
           _cons |   1.945617   2.329127     0.84   0.404    -2.619388    6.510622
    -------------+----------------------------------------------------------------
    Vsigma       |
           _cons |   -4.48299   .1443512   -31.06   0.000    -4.765913   -4.200067
    -------------+----------------------------------------------------------------
         sigma_u |   2.645363   3.080694     0.86   0.391     .2699026    25.92768
         sigma_v |   .1062995   .0076722    13.86   0.000     .0922773    .1224523
          lambda |   24.88595   3.081093     8.08   0.000     18.84712    30.92478
    ------------------------------------------------------------------------------



    Should I be worried that the model doesn't seem to converge and that the estimated coefficients may be uncertain (because of the "(not concave)" and "(backed up)" messages on the iterations)? What can be done to try and remedy this issue?

    It is worth mentioning that I get reasonable values for the estimated eco-efficiency scores, which I obtain with predict te, jlms.

    Many thanks in advance!

  • #2
    sfpanel is from SSC (FAQ Advice #12). Most commands would exit with the error "convergence not achieved" if the ML algorithm failed to converge. I am not sure whether sfpanel is programmed to stop after 100 iterations by default and report the estimates anyway. If that is the case, you will need to discard those estimates, as they do not represent the maximized likelihood. There is a simple way to check. After you get the results, run

    Code:
    mat b = e(b)
    sfpanel logGDPGHGratio logLabor logCapital logRenew logNonrenew Year, model(tfe) dist(tn) ort(o) from(b)
    If the estimation does not stop immediately and keeps on iterating, then convergence was not achieved the first time around. If this is the case, you may want to email the authors of the command and ask them to change this default.
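The logic behind this check can be sketched outside Stata as well (a minimal Python illustration of maximum-likelihood iteration, not sfpanel itself; the toy likelihood and function names are my own): if an optimizer is restarted at a genuine maximum, the gradient is already (near) zero and it stops immediately, whereas restarting at a non-converged point triggers further iterations.

```python
import numpy as np

def newton_max(grad, hess, x0, tol=1e-8, max_iter=100):
    """Newton-Raphson maximization; returns (x, number of iterations used)."""
    x = x0
    for i in range(max_iter):
        g = grad(x)
        if abs(g) < tol:          # gradient ~ 0: already at the optimum
            return x, i
        x = x - g / hess(x)       # Newton step
    return x, max_iter

# Toy log-likelihood: normal sample, single parameter mu (sigma fixed at 1).
rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.0, size=200)
grad = lambda mu: np.sum(y - mu)   # d logL / d mu
hess = lambda mu: -len(y)          # d2 logL / d mu2 (constant, concave)

mu_hat, n1 = newton_max(grad, hess, x0=0.0)   # cold start: needs iterations
_, n2 = newton_max(grad, hess, x0=mu_hat)     # warm start at the optimum

# The restart check: a converged fit restarts in 0 iterations.
print(n1 > 0, n2 == 0)
```

If sfpanel's hundred iterations had actually reached the maximum, feeding those estimates back as starting values should behave like the warm start here and terminate at once.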



    • #3
      Hi Andrew, thanks for the reply.

      When running the suggested code I get an error message saying:
      Option from() not allowed. Use sveqnname() options.
      invalid syntax


      When trying to use sveqnname(b) instead of from(b), Stata iterates a hundred times as before, giving me the same estimates.

      Is there any other way of determining if the model fails to converge? Thank you!



      • #4
        Originally posted by Fanny Oegren View Post
        When running the suggested code I get an error message saying:
        Option from() not allowed. Use sveqnname() options.
        invalid syntax
        You're correct: in sfpanel you specify starting values using the following options:


        svfrontier() specifies a 1 x k vector of starting values for the frontier. The vector must have the same length as there are parameters to estimate in the frontier equation.

        svusigma() specifies a starting value for the parameter of the variance of the inefficiency term.

        svvsigma() specifies a starting value for the parameter of the variance of the idiosyncratic term.
        Is there any other way of determining if the model fails to converge?
        It is clear that your model did not converge, so you cannot use those estimates obtained after 100 iterations. A suggestion is to start with Greene's True Random Effects (TRE) model. If it converges and you get estimates, use these for the TFE model. Here is how to do it showing indeed that the starting values are picked up by the TFE model.

        Code:
        webuse xtfrontier1, clear
        *GREENE'S TRE MODEL
        sfpanel lnwidgets lnmachines lnworkers, model(tre) dist(hnormal) usigma(lnworkers) difficult rescale nsim(50) simtype(genhalton)
        
        *COEFFICIENTS MATRIX
        mat l e(b)
        
        *EXTRACT THE 3 MATRICES
        mat b_f= e(b)[1, 1..`e(df_m)']
        mat b_u= e(b)[1, "Usigma:"]
        mat b_v= e(b)[1, "Vsigma:"]
        
        mat l b_f
        mat l b_u
        mat l b_v
        
        *RUN TFE MODEL WITH NO ITERATIONS TO SHOW STARTING VALUES PICKED UP
        sfpanel lnwidgets lnmachines lnworkers, model(tfe) dist(exp) usigma(lnworkers)  svfrontier(b_f) svusigma(b_u) svvsigma(b_v) iter(0)
        
        *NOW RUN THE MODEL WITH THE SPECIFIED STARTING VALUES
        sfpanel lnwidgets lnmachines lnworkers, model(tfe) dist(exp) usigma(lnworkers)  svfrontier(b_f) svusigma(b_u) svvsigma(b_v)
        Res.:

        Code:
        . *GREENE'S TRE MODEL
        
        .
        . sfpanel lnwidgets lnmachines lnworkers, model(tre) dist(hnormal) usigma(lnworkers) difficult rescale nsim(50) simtype
        > (genhalton)
        
        
        Initial:      Log simulated-likelihood = -1755.9634
        Rescale:      Log simulated-likelihood = -1755.9634
        Rescale eq:   Log simulated-likelihood = -1503.2923
        Iteration 0:  Log simulated-likelihood = -1503.2923  
        Iteration 1:  Log simulated-likelihood = -1485.8234  (not concave)
        Iteration 2:  Log simulated-likelihood = -1479.4207  
        Iteration 3:  Log simulated-likelihood = -1478.0449  
        Iteration 4:  Log simulated-likelihood = -1477.9694  
        Iteration 5:  Log simulated-likelihood = -1477.9678  
        Iteration 6:  Log simulated-likelihood = -1477.9678  
        
        True random-effects model (half-normal)              Number of obs =       948
        Group variable: id                                Number of groups =        91
        Time variable: t                                Obs per group: min =         6
                                                                       avg =      10.4
                                                                       max =        14
        
                                                             Prob > chi2   =    0.0000
        Log simulated-likelihood = -1477.9678                Wald chi2(2)  =    398.66
        
        Number of Randomized Halton Sequences = 50
        Base for Randomized Halton Sequences  = 7
        ------------------------------------------------------------------------------
           lnwidgets | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
        -------------+----------------------------------------------------------------
        Frontier     |
          lnmachines |   .2906754   .0165098    17.61   0.000     .2583168     .323034
           lnworkers |   .2639837   .0290145     9.10   0.000     .2071163    .3208511
               _cons |   1.722812   .2276866     7.57   0.000     1.276555     2.16907
        -------------+----------------------------------------------------------------
        Usigma       |
           lnworkers |  -.0802731   .0709493    -1.13   0.258    -.2193313     .058785
               _cons |  -.4036188   .5552325    -0.73   0.467    -1.491854    .6846168
        -------------+----------------------------------------------------------------
        Vsigma       |
               _cons |  -.3090683   .1838835    -1.68   0.093    -.6694732    .0513367
        -------------+----------------------------------------------------------------
        Theta        |
               _cons |    1.33327    .094778    14.07   0.000     1.147508    1.519031
        -------------+----------------------------------------------------------------
          E(sigma_u) |   .8670305                                 .8616404    .8724205
             sigma_v |   .8568142    .078777    10.88   0.000     .7155265    1.026001
        ------------------------------------------------------------------------------
        
        .
        .
        .
        . *COEFFICIENTS MATRIX
        
        .
        . mat l e(b)
        
        e(b)[1,7]
              Frontier:   Frontier:   Frontier:     Usigma:     Usigma:     Vsigma:      Theta:
            lnmachines   lnworkers       _cons   lnworkers       _cons       _cons       _cons
        y1    .2906754   .26398369   1.7228125  -.08027314  -.40361883  -.30906827   1.3332696
        
        .
        .
        .
        . *EXTRACT THE 3 MATRICES
        
        .
        . mat b_f= e(b)[1, 1..`e(df_m)']
        
        .
        . mat b_u= e(b)[1, "Usigma:"]
        
        .
        . mat b_v= e(b)[1, "Vsigma:"]
        
        .
        .
        .
        . mat l b_f
        
        b_f[1,2]
              Frontier:   Frontier:
            lnmachines   lnworkers
        y1    .2906754   .26398369
        
        .
        . mat l b_u
        
        b_u[1,2]
                Usigma:     Usigma:
             lnworkers       _cons
        y1  -.08027314  -.40361883
        
        .
        . mat l b_v
        
        symmetric b_v[1,1]
                Vsigma:
                 _cons
        y1  -.30906827
        
        .
        .
        .
        . *RUN TFE MODEL WITH NO ITERATIONS TO SHOW STARTING VALUES PICKED UP
        
        .
        . sfpanel lnwidgets lnmachines lnworkers, model(tfe) dist(exp) usigma(lnworkers)  svfrontier(b_f) svusigma(b_u) svvsigm
        > a(b_v) iter(0)
        
        
        Iteration 0:  Log likelihood = -3399.9534  (not concave)
        
        True fixed-effects model (exponential)               Number of obs =       948
        Group variable: id                                Number of groups =        91
        Time variable: t                                Obs per group: min =         6
                                                                       avg =      10.4
                                                                       max =        14
        
                                                             Prob > chi2   =    0.0000
        Log likelihood = -3399.9534                          Wald chi2(2)  =     60.13
        
        ------------------------------------------------------------------------------
           lnwidgets | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
        -------------+----------------------------------------------------------------
        Frontier     |
          lnmachines |   .2906754   .0406166     7.16   0.000     .2110683    .3702825
           lnworkers |   .2639837   .0532677     4.96   0.000     .1595809    .3683865
        -------------+----------------------------------------------------------------
        Usigma       |
           lnworkers |  -.0802731   .0848607    -0.95   0.344    -.2465971    .0860509
               _cons |  -.4036188   .1709012    -2.36   0.018    -.7385791   -.0686586
        -------------+----------------------------------------------------------------
        Vsigma       |
               _cons |  -.3090683   .2617274    -1.18   0.238    -.8220446    .2039081
        -------------+----------------------------------------------------------------
          E(sigma_u) |   .8670305                                 .8616404    .8724205
             sigma_v |   .8568142   .1121259     7.64   0.000     .6629721    1.107333
        ------------------------------------------------------------------------------
        
        .
        .
        .
        . *NOW RUN THE MODEL WITH THE SPECIFIED STARTING VALUES
        
        .
        . sfpanel lnwidgets lnmachines lnworkers, model(tfe) dist(exp) usigma(lnworkers)  svfrontier(b_f) svusigma(b_u) svvsigm
        > a(b_v)
        
        
        Initial:      Log likelihood = -3399.9534
        Iteration 0:  Log likelihood = -3399.9534  (not concave)
        Iteration 1:  Log likelihood = -1571.2884  (not concave)
        Iteration 2:  Log likelihood = -1483.9504  (not concave)
        Iteration 3:  Log likelihood = -1445.5861  
        Iteration 4:  Log likelihood = -1331.6921  
        Iteration 5:  Log likelihood = -1309.6402  
        Iteration 6:  Log likelihood =  -1301.774  
        Iteration 7:  Log likelihood =  -1301.092  
        Iteration 8:  Log likelihood = -1300.9274  
        Iteration 9:  Log likelihood = -1300.9124  
        Iteration 10: Log likelihood = -1300.9119  
        Iteration 11: Log likelihood = -1300.9119  
        
        True fixed-effects model (exponential)               Number of obs =       948
        Group variable: id                                Number of groups =        91
        Time variable: t                                Obs per group: min =         6
                                                                       avg =      10.4
                                                                       max =        14
        
                                                             Prob > chi2   =    0.0000
        Log likelihood = -1300.9119                          Wald chi2(2)  =    454.34
        
        ------------------------------------------------------------------------------
           lnwidgets | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
        -------------+----------------------------------------------------------------
        Frontier     |
          lnmachines |   .2923237   .0157401    18.57   0.000     .2614737    .3231736
           lnworkers |   .2720863    .027266     9.98   0.000      .218646    .3255266
        -------------+----------------------------------------------------------------
        Usigma       |
           lnworkers |  -.1365877   .1366951    -1.00   0.318    -.4045052    .1313297
               _cons |  -2.435708   1.051787    -2.32   0.021    -4.497173    -.374243
        -------------+----------------------------------------------------------------
        Vsigma       |
               _cons |  -.2196069    .122049    -1.80   0.072    -.4588185    .0196048
        -------------+----------------------------------------------------------------
          E(sigma_u) |   .3290061                                 .3252884    .3327237
             sigma_v |   .8960102   .0546786    16.39   0.000     .7950031    1.009851
        ------------------------------------------------------------------------------
        If you do not get improvements with this, go back to the drawing board with your supervisor and look at your model specification and data. There are diagnostics to check whether your data meet the assumptions of the model to be estimated. You are using a truncated-normal distribution for the inefficiency; here, there is also the possibility to model the inefficiency as a linear function of a set of covariates using the -emean()- option.
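One such pre-estimation diagnostic (a standard SFA check, though not something sfpanel reports; this Python sketch and its simulated data are purely illustrative) is the sign of the skewness of the OLS residuals: for a production frontier the composed error v - u should be negatively skewed, and a "wrong" (positive) skew often goes hand in hand with estimation trouble.

```python
import numpy as np

def residual_skewness(y, X):
    """OLS fit, then the third standardized moment of the residuals.
    For a production-frontier model the composed error v - u should be
    negatively skewed; positive skew hints at a specification problem."""
    X1 = np.column_stack([np.ones(len(y)), X])      # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    e = y - X1 @ beta
    e = e - e.mean()
    return np.mean(e**3) / np.mean(e**2) ** 1.5

# Simulated frontier data: y = 1 + 0.5*x + v - u, with u >= 0 (inefficiency)
rng = np.random.default_rng(1)
x = rng.normal(size=500)
v = rng.normal(0, 0.1, size=500)                    # idiosyncratic noise
u = np.abs(rng.normal(0, 0.5, size=500))            # half-normal inefficiency
y = 1.0 + 0.5 * x + v - u

print(residual_skewness(y, x))   # negative, as expected for a frontier
```

If the analogous check on your own data gives residuals skewed the wrong way, that is worth discussing with your supervisor before tweaking the optimizer further.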



        • #5
          Hi again Andrew,

          When trying Greene's True Random Effects (TRE) model, the model doesn't converge either; after a few iterations I get the following error message: "could not calculate numerical derivatives -- discontinuous region with missing values encountered".

          However, when I try to include the emean() option in the true fixed effects model and put the variable "Year" in it, it looks like this:

          . sfpanel logGDPGHGratio logLabor logCapital logRenew logNonrenew Year, model(tfe) dist(tn) emean(Year) ort(o)


          initial: Log likelihood = -<inf> (could not be evaluated)
          feasible: Log likelihood = -817.8733
          Iteration 0: Log likelihood = -817.8733 (not concave)
          Iteration 1: Log likelihood = -382.52866 (not concave)
          Iteration 2: Log likelihood = -216.0596 (not concave)
          Iteration 3: Log likelihood = 22.360711 (not concave)
          Iteration 4: Log likelihood = 119.88889
          Iteration 5: Log likelihood = 161.12759
          Iteration 6: Log likelihood = 202.39408 (not concave)
          Iteration 7: Log likelihood = 218.01912
          Iteration 8: Log likelihood = 221.4917
          Iteration 9: Log likelihood = 221.90934
          Iteration 10: Log likelihood = 221.9181
          Iteration 11: Log likelihood = 221.92266 (not concave)
          Iteration 12: Log likelihood = 221.92319
          Iteration 13: Log likelihood = 221.92393
          Iteration 14: Log likelihood = 221.92426
          Iteration 15: Log likelihood = 221.92427

          True fixed-effects model (truncated-normal)          Number of obs      =      327
          Group variable: ID                                   Number of groups   =       25
          Time variable: Year                                  Obs per group: min =        5
                                                                              avg =     13.1
                                                                              max =       14

                                                               Prob > chi2        =   0.0000
          Log likelihood = 221.9243                            Wald chi2(5)       =   129.91

          ------------------------------------------------------------------------------
          logGDPGHGr~o | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
          -------------+----------------------------------------------------------------
          Frontier     |
              logLabor |   .4054443   .1125838     3.60   0.000     .1847841    .6261045
            logCapital |   .0603758   .0903203     0.67   0.504    -.1166487    .2374003
              logRenew |  -.0031402   .0143083    -0.22   0.826     -.031184    .0249036
           logNonrenew |  -.0407008   .0117756    -3.46   0.001    -.0637806   -.0176211
                  Year |   .0239321   .0031359     7.63   0.000     .0177859    .0300784
          -------------+----------------------------------------------------------------
          Mu           |
                  Year |   .7161188   1.539471     0.47   0.642    -2.301189    3.733426
                 _cons |    -1447.7   3112.998    -0.47   0.642    -7549.064    4653.664
          -------------+----------------------------------------------------------------
          Usigma       |
                 _cons |  -1.857554   2.264724    -0.82   0.412    -6.296332    2.581224
          -------------+----------------------------------------------------------------
          Vsigma       |
                 _cons |  -4.391942   .0987163   -44.49   0.000    -4.585423   -4.198462
          -------------+----------------------------------------------------------------
               sigma_u |   .3950365   .4473244     0.88   0.377     .0429308     3.63501
               sigma_v |   .1112505   .0054911    20.26   0.000     .1009923    .1225506
                lambda |   3.550875   .4471721     7.94   0.000     2.674434    4.427317
          ------------------------------------------------------------------------------



          But I'm not sure whether I should use some other variable inside the emean() specification. What exactly is the interpretation of using "Year" here (a variable that only indicates which year the other variables' data points are from, ranging from 2008 to 2021)? I have seen someone in an example video use "imports shares" inside this specification; could that make sense for me to use as well?

          Thank you in advance!



          • #6
            But I'm not sure if I should use some other variable inside the emean() specification?
            This should come from theory. In other words, you have to answer the question of what factors determine inefficiency in your setting and come up with a theoretical model. There is some discussion in the Stata manual entry of frontier, where the corresponding option is -cm()-. See Example 3: The truncated-normal model. Also, look at what variables past researchers have used, but ultimately make sure that you can make a theoretical argument for your choice.
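To make concrete what modeling the mean of the inefficiency term does, here is a small Python simulation (purely illustrative, not sfpanel; the covariate z and all coefficients are invented): the pre-truncation mean of the inefficiency u is a linear function of a determinant z, so units with larger z are on average less efficient.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
z = rng.uniform(0, 1, size=n)      # candidate inefficiency determinant
mu = 0.2 + 1.5 * z                 # emean-style linear index: mean of u before truncation
sigma_u = 0.3

# Truncated-normal inefficiency: draw N(mu, sigma_u) and resample until u >= 0
u = rng.normal(mu, sigma_u)
while (u < 0).any():
    bad = u < 0
    u[bad] = rng.normal(mu[bad], sigma_u)

# Units with high z carry systematically more inefficiency:
print(np.corrcoef(z, u)[0, 1] > 0.5)
```

This is why the covariate inside emean() should be a variable you can theoretically defend as a driver of inefficiency, not merely a time index.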

