
  • is -matchit- just really slow? And can anybody see anything wrong with my reclink syntax?

    So I have one dataset with 7,503 institutions, and another with 2,768 individuals (sometimes several in the same institution).

    reclink fails to match anybody when I try:
    Code:
    reclink Institution Address1 City Zip using Allinstitutions.dta, idm(id2) idu(myID) gen(matchscore) orblock(none)
    so I try matchit, based just on the single variable Institution (the school's name, all caps).

    Code:
    . matchit id2 Institution using Allinstitutions.dta, idu(myID) txtu(Institution) override
    Matching current dataset with Allinstitutions.dta
    Loading USING file: Allinstitutions.dta
    Indexing USING file. Method: bigram
    0%...20%...40%...60%...80%...100%.
    Computing results
    0%...
    And it's sitting at 0% for two hours now. 50% CPU usage, so I know it's trying to do something, but whatever it is doing is slow... I'll be happy to run it for a couple of days if I get what I want, but if it ends up failing to match stuff anyway, it will be pretty annoying. At least reclink failed fast.
    Last edited by ben earnhart; 18 Dec 2015, 18:48.

  • #2
    P.S. When I do a regular -merge- on institution name, about two-thirds match. So -reclink- should be finding *some* matches, and -matchit- shouldn't have to work *that* hard. BTW -- this is Stata 13.1 with all updates, Windows 7. Both packages (reclink and matchit) are from SSC.
    Last edited by ben earnhart; 18 Dec 2015, 19:02.



    • #3
      Any luck, Ben -- have any more dots emerged from matchit since you wrote?

      I have no knowledge of matchit, but it occurs to me that if you can get 2/3 match using merge, you might want to remove those matches from the data and see if that helps: perhaps run time is nonlinear in the number of observations to match. I'm dubious, but if nothing else works, it might be worth a try.

      Actually, what I might do in circumstances like yours would be
      • eliminate the individuals who match using merge
      • start with, say, 20 individuals to make sure there isn't some underlying problem
      It also occurs to me that if there were some a priori way of reducing the size of the Allinstitutions file, you might speed things up.
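
      Here's an untested sketch of what I have in mind, reusing your matchit command from #1 (I'm assuming your individual-level file is called individuals.dta and that Institution uniquely identifies Allinstitutions.dta):
      Code:
      * Drop the exact matches first, then fuzzy-match only the leftovers.
      use individuals, clear                        // 2,768-person file (name assumed)
      merge m:1 Institution using Allinstitutions.dta, keep(master match)
      keep if _merge == 1                           // keep only the non-exact-matchers
      drop _merge
      matchit id2 Institution using Allinstitutions.dta, ///
          idu(myID) txtu(Institution) override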

      Let us know what happens. I can imagine myself wanting something like this someday; it will be good to know about your experience.



      • #4
        It's at 20% now; it moved to 20% sometime overnight and has been sitting at 20% all day.

        You're right, I should have removed the perfect matches beforehand to minimize the job. I think I will stop it and start over with the cases that don't match. If it's non-linear, that should help immensely.



        • #5
          Awesome! It's at 20% already. What took between five and thirteen hours just ran in about five minutes.



          • #6
            Very interesting. The author of matchit, Julio Raffo, while not active on Statalist, did announce it here and has participated in discussions. If the pace you've established holds up, it might be interesting to ask him whether having a high percentage of actual matches impedes the flow of the algorithm, or whether it would have been more effective with a different string-matching method than the default.



            • #7
              Hmm. It finished up, more slowly than it appeared at first, but much faster than doing the whole thing. Unfortunately, only about 5% of cases are worth using. The default threshold is .5, but my matches only seem good above about .82. So it's not a panacea, but it will save our brute-force human matcher (project assistant) about 500 matches or so.



              • #8
                Interesting. Have you considered trying soundex or token_soundex? I see that help matchit suggests that phonetic methods like Soundex are more efficient at matching misspellings based on similar sounds. If your data were collected by interviewers transcribing responses, that could help. Might be worth a try on a small sample of your data, something like the sketch below.
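
                A rough, untested sketch (the file name and sample size are just placeholders; check help matchit for the exact option names):
                Code:
                * Test a phonetic similarity method on a small random subsample.
                use individuals, clear               // master file name assumed
                sample 200, count                     // keep roughly 200 individuals for a quick test
                matchit id2 Institution using Allinstitutions.dta, ///
                    idu(myID) txtu(Institution) sim(soundex) override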

                Thanks again for following up with your results.



                • #9
                  (Apologies in advance, but I'm writing from my phone and without a great connection)

                  My first reaction is that I'm surprised that reclink didn't match anything. It should at least replicate your -merge- results. -matchit- will also replicate them, but you will observe this only when it's done.

                  Concerning -matchit- performance, it is always worth remembering that it attempts to produce matching candidates through a computationally heavy process. So, yes, it can take some time depending on the size of the files and the similarity algorithm chosen. Of course, your hardware also plays a role. (In theory it's worse than linear but better than exponential, but it really depends on your data.)

                  But 7k against 2k doesn't strike me as a big challenge for matchit. I've done far worse with an i5 and 32-bit Windows in a fairly decent time. Maybe there is something in the data that is artificially increasing the search space (e.g. double spaces).

                  An easy and practical test is to check different algorithms (e.g. token) and/or limit both the master and using files to smaller subsamples. If you think misspellings are not a major problem in your data, then consider token as a faster alternative to the default bigram.
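
                  For example, keeping your original call from #1 and just switching the algorithm (see help matchit for the exact option names):
                  Code:
                  * Sketch: same call as in #1, but using the token algorithm instead of bigram.
                  matchit id2 Institution using Allinstitutions.dta, ///
                      idu(myID) txtu(Institution) sim(token) override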



                  • #10
                    Sorry, a poor connection made me miss some recent posts.

                    Do your school names have many repeated terms? For example, do most of them contain "school" or something similar? If that is the case, you might not be benefiting from -matchit-'s indexation (and are then actually scoring every possible pair of the 7k x 2k).

                    On another note, once you have your -matchit- results, you can reapply -matchit- using the columns syntax as many times as you want. This can help you fine-tune the similarity score by choosing different algorithms.
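
                    A sketch of what I mean (the two text-variable names below are placeholders for whatever your results file actually contains):
                    Code:
                    * The files syntax leaves one row per candidate pair in memory, with
                    * both text variables and the score. Re-score those pairs with other
                    * algorithms via the columns syntax, renaming the default similscore
                    * each time so the previous scores are kept.
                    rename similscore score_bigram
                    matchit txt_master txt_using, sim(token)
                    rename similscore score_token
                    matchit txt_master txt_using, sim(soundex)
                    rename similscore score_soundex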

                    Best

                    J.



                    • #11
                      Sorry it took so long to get back to you; I was away from the computer for a while. Yes, there is a lot of overlap in the names -- "university of ...", "... university", "college of ...", "... college". I end up with something like 250,000 records with scores >.5, so there is a lot of redundant and false matching going on. But when I get up to .82 or so, it seems fairly clean.
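
                      In practice that means keeping only something like the following (similscore being the default score variable, if I read the help correctly):
                      Code:
                      keep if similscore >= .82     // keep only the apparently clean matches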



                      • #12
                        Your results are in line with my own experience. Usually, matches below a threshold of about .7 are mostly false positives, but it really depends on the data. -matchit- was conceived to recover those false negatives that usually fall through the cracks, but there is always additional screening to be performed. If you have additional data, like address, zip code or state, you can improve your results considerably.

                        Two tips that may be handy for you. First, if you use weights, you will reduce the impact of common terms like "college" or "university" in the similarity score; weights work with either token or bigram. The second tip is to use minsimple as the score function, which is useful when you would like to match "university of something" with "university of something, department of etcetera". Minsimple scores how much of the shorter string is found in the longer one.

                        You can apply both to either the files or the columns syntax, although the weights will theoretically take longer with the columns one.
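
                        Putting the two tips together in the files syntax, something along these lines (using the file and variable names from your earlier posts; see help matchit for the exact option names):
                        Code:
                        matchit id2 Institution using Allinstitutions.dta, ///
                            idu(myID) txtu(Institution) weights(log) score(minsimple) override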

                        I hope these can help you.



                        • #13
                          I tuned it to use trigrams and a log weight, and it finished in a matter of minutes with higher accuracy. I was happy enough with that that I didn't even try minsimple. Thanks for the advice; it made a huge difference.
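
                          For the record, that tuning amounts to something along these lines (not my verbatim command; check help matchit for the exact option spelling):
                          Code:
                          matchit id2 Institution using Allinstitutions.dta, ///
                              idu(myID) txtu(Institution) sim(ngram, 3) weights(log) override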



                          • #14
                            I'm glad to hear it.

                            Just to draw on your experience for other users: -matchit- in the files syntax produces an index (hash table or dictionary, if you prefer) which will vary in height (entries) and depth (elements inside each entry). An increase in either will take -matchit- longer, and vice versa. However, there is a trade-off between the two, and the following list increases in height but decreases in depth: 1-gram, 2-gram, 3-gram, 4-gram, soundex, token_soundex, token.

                            The ultimate result in time performance will depend on the data.



                            • #15
                              Hi,

                               My question is strongly related to Ben's.

                               I would like to merge two datasets using matchit. Dataset 1 (the master dataset) contains the full name of a company's CFO, a unique identifier of the company (Acq_ID_Compustat), and the announcement date of an acquisition (Deal_Announced) made by the CFO's company. There is also a variable called Deal_No in place. After the fuzzy match, I use Deal_No to assign the compensation data to a larger dataset, which contains a lot more variables (> 30) than dataset 1 does. I chose this strategy because I am afraid that running matchit on the overall dataset with > 30 variables would take a lot more time. The master dataset has around 13,400 observations.

                              Data sample (master):

                              Code:
                              * Example generated by -dataex-. For more info, type help dataex
                              clear
                              input str26 CFO_Name str6 Acq_ID_Compustat double Deal_No str4 Deal_Announced str38 fuzzy float fuzzy_ID_master
                              "Mike Sharp"        "001004" 2984229020 "2016" "001004;Mike Sharp;2016"         1
                              "Rick Poulton"      "001004" 1923119020 "2007" "001004;Rick Poulton;2007"       2
                              "John Fortson"      "001004" 2668083020 "2014" "001004;John Fortson;2014"       3
                              "Timothy Romenesko" "001004" 3014377020 "2016" "001004;Timothy Romenesko;2016"  4
                              "Timothy Romenesko" "001004" 1830739020 "2007" "001004;Timothy Romenesko;2007"  5
                              "Rick Poulton"      "001004" 1955618020 "2008" "001004;Rick Poulton;2008"       6
                              "Timothy Romenesko" "001004" 1855925020 "2007" "001004;Timothy Romenesko;2007"  7
                              "John Adamovich Jr" "001056" 1859210020 "2007" "001056;John Adamovich Jr;2007"  8
                              "Michael Gorin"     "001056" 1051491020 "2000" "001056;Michael Gorin;2000"      9
                              "Michael Gorin"     "001056" 1056795020 "2000" "001056;Michael Gorin;2000"     10
                              "Michael Gorin"     "001056" 1409781020 "2003" "001056;Michael Gorin;2003"     11
                              "Michael Gorin"     "001056" 1287720020 "2002" "001056;Michael Gorin;2002"     12
                              "Michael Gorin"     "001056" 1420176020 "2003" "001056;Michael Gorin;2003"     13
                              "Michael Gorin"     "001056" 1343143020 "2002" "001056;Michael Gorin;2002"     14
                              "Michael Gorin"     "001056" 1170080020 "2001" "001056;Michael Gorin;2001"     15
                              "Gil Danielson"     "001076" 2035985020 "2008" "001076;Gil Danielson;2008"     16
                              "Gil Danielson"     "001076" 1440378020 "2003" "001076;Gil Danielson;2003"     17
                              "Gil Danielson"     "001076" 1911297020 "2007" "001076;Gil Danielson;2007"     18
                              "Gil Danielson"     "001076" 1424314020 "2003" "001076;Gil Danielson;2003"     19
                              "Tom Freyman"       "001078" 2162802020 "2010" "001078;Tom Freyman;2010"       20
                              end
                              format Deal_No %20.0g

                               Dataset 2 is the using dataset. It contains information on the compensation of the directors of the acquiring firms, the company identifier (GVKEY, which equals Acq_ID_Compustat in dataset 1), and the reporting year of the compensation data. The using dataset has around 81,000 observations.

                              Data sample (using):

                              Code:
                              * Example generated by -dataex-. For more info, type help dataex
                              clear
                              input str6 GVKEY str28 CONAME double(SALARY BONUS TOTAL_CURR) str34 Director_Name float fuzzy_ID_using str4 YEAR str46 fuzzy_using
                              "001004" "AAR CORP"     661.466        0  661.466 "David Storch"        1 "2002" "001004;David Storch;2002"    
                              "001004" "AAR CORP"       261.1        0    261.1 "Howard Pulsifer"     2 "2002" "001004;Howard Pulsifer;2002"  
                              "001004" "AAR CORP"       300.6        0    300.6 "Timothy Romenesko"   3 "2002" "001004;Timothy Romenesko;2002"
                              "001004" "AAR CORP"       171.3        0    171.3 "Michael Sharp"       4 "2002" "001004;Michael Sharp;2002"    
                              "001004" "AAR CORP"       251.1        0    251.1 "James Clark"         5 "2002" "001004;James Clark;2002"      
                              "001004" "AAR CORP"       176.1      180    356.1 "Mark McDonald"       6 "2002" "001004;Mark McDonald;2002"    
                              "001004" "AAR CORP"       661.4    496.1   1157.5 "David Storch"        7 "2003" "001004;David Storch;2003"    
                              "001004" "AAR CORP"       261.1  127.289  388.389 "Howard Pulsifer"     8 "2003" "001004;Howard Pulsifer;2003"  
                              "001004" "AAR CORP"       300.6  225.651  526.251 "Timothy Romenesko"   9 "2003" "001004;Timothy Romenesko;2003"
                              "001004" "AAR CORP"       251.1       50    301.1 "James Clark"        10 "2003" "001004;James Clark;2003"      
                              "001004" "AAR CORP"       199.6  320.818  520.418 "Mark McDonald"      11 "2003" "001004;Mark McDonald;2003"    
                              "001004" "AAR CORP"       695.4  591.838 1287.238 "David Storch"       12 "2004" "001004;David Storch;2004"    
                              "001004" "AAR CORP"       268.8  148.589  417.389 "Howard Pulsifer"    13 "2004" "001004;Howard Pulsifer;2004"  
                              "001004" "AAR CORP"       309.7  240.105  549.805 "Timothy Romenesko"  14 "2004" "001004;Timothy Romenesko;2004"
                              "001004" "AAR CORP"       274.4      225    499.4 "James Clark"        15 "2004" "001004;James Clark;2004"      
                              "001004" "AAR CORP"       239.2      310    549.2 "Mark McDonald"      16 "2004" "001004;Mark McDonald;2004"    
                              "001004" "AAR CORP"       716.6 1041.051 1757.651 "David Storch"       17 "2005" "001004;David Storch;2005"    
                              "001004" "AAR CORP"       276.8  180.055  456.855 "Howard Pulsifer"    18 "2005" "001004;Howard Pulsifer;2005"  
                              "001004" "AAR CORP"       318.9   347.12   666.02 "Timothy Romenesko"  19 "2005" "001004;Timothy Romenesko;2005"
                              "001004" "AAR CORP"         283   428.01   711.01 "James Clark"        20 "2005" "001004;James Clark;2005"      
                              "001004" "AAR CORP"       281.8   298.78   580.58 "Mark McDonald"      21 "2005" "001004;Mark McDonald;2005"    
                              "001004" "AAR CORP"       741.5        0    741.5 "David Storch"       22 "2006" "001004;David Storch;2006"    
                              "001004" "AAR CORP"       286.4        0    286.4 "Howard Pulsifer"    23 "2006" "001004;Howard Pulsifer;2006"  
                              "001004" "AAR CORP"         330        0      330 "Timothy Romenesko"  24 "2006" "001004;Timothy Romenesko;2006"
                              "001004" "AAR CORP"       299.4        0    299.4 "James Clark"        25 "2006" "001004;James Clark;2006"      
                              "001004" "AAR CORP"       299.4        0    299.4 "Mark McDonald"      26 "2006" "001004;Mark McDonald;2006"    
                              "001004" "AAR CORP"     768.248        0  768.248 "David Storch"       27 "2007" "001004;David Storch;2007"    
                              "001004" "AAR CORP"     296.738        0  296.738 "Howard Pulsifer"    28 "2007" "001004;Howard Pulsifer;2007"  
                              "001004" "AAR CORP"         400        0      400 "Timothy Romenesko"  29 "2007" "001004;Timothy Romenesko;2007"
                              "001004" "AAR CORP"       310.5        0    310.5 "James Clark"        30 "2007" "001004;James Clark;2007"      
                              "001004" "AAR CORP"         300        0      300 "Richard Poulton"    31 "2007" "001004;Richard Poulton;2007"  
                              "001004" "AAR CORP"     791.295        0  791.295 "David Storch"       32 "2008" "001004;David Storch;2008"    
                              "001004" "AAR CORP"         450        0      450 "Timothy Romenesko"  33 "2008" "001004;Timothy Romenesko;2008"
                              "001004" "AAR CORP"     319.815      275  594.815 "James Clark"        34 "2008" "001004;James Clark;2008"      
                              "001004" "AAR CORP"         330        0      330 "Richard Poulton"    35 "2008" "001004;Richard Poulton;2008"  
                              "001004" "AAR CORP"         309        0      309 "Terry Stinson"      36 "2008" "001004;Terry Stinson;2008"    
                              "001004" "AAR CORP"     799.208        0  799.208 "David Storch"       37 "2009" "001004;David Storch;2009"    
                              "001004" "AAR CORP"       454.5        0    454.5 "Timothy Romenesko"  38 "2009" "001004;Timothy Romenesko;2009"
                              "001004" "AAR CORP"     323.013      125  448.013 "James Clark"        39 "2009" "001004;James Clark;2009"      
                              "001004" "AAR CORP"         360        0      360 "Richard Poulton"    40 "2009" "001004;Richard Poulton;2009"  
                              "001004" "AAR CORP"     327.897        0  327.897 "Terry Stinson"      41 "2009" "001004;Terry Stinson;2009"    
                              "001004" "AAR CORP"         850        0      850 "David Storch"       42 "2010" "001004;David Storch;2010"    
                              "001004" "AAR CORP"      468.18        0   468.18 "Timothy Romenesko"  43 "2010" "001004;Timothy Romenesko;2010"
                              "001004" "AAR CORP"       367.2        0    367.2 "Richard Poulton"    44 "2010" "001004;Richard Poulton;2010"  
                              "001004" "AAR CORP"      338.13        0   338.13 "Terry Stinson"      45 "2010" "001004;Terry Stinson;2010"    
                              "001004" "AAR CORP"       367.2        0    367.2 "Robert Regan"       46 "2010" "001004;Robert Regan;2010"    
                              "001004" "AAR CORP"         867        0      867 "David Storch"       47 "2011" "001004;David Storch;2011"    
                              "001004" "AAR CORP"     477.544        0  477.544 "Timothy Romenesko"  48 "2011" "001004;Timothy Romenesko;2011"
                              "001004" "AAR CORP"     374.544        0  374.544 "Richard Poulton"    49 "2011" "001004;Richard Poulton;2011"  
                              "001004" "AAR CORP"     344.893        0  344.893 "Terry Stinson"      50 "2011" "001004;Terry Stinson;2011"    
                              "001004" "AAR CORP"     374.544        0  374.544 "Robert Regan"       51 "2011" "001004;Robert Regan;2011"    
                              "001004" "AAR CORP"     877.838        0  877.838 "David Storch"       52 "2012" "001004;David Storch;2012"    
                              "001004" "AAR CORP"     483.513        0  483.513 "Timothy Romenesko"  53 "2012" "001004;Timothy Romenesko;2012"
                              "001004" "AAR CORP"     312.576        0  312.576 "Michael Sharp"      54 "2012" "001004;Michael Sharp;2012"    
                              "001004" "AAR CORP"     164.711        0  164.711 "Richard Poulton"    55 "2012" "001004;Richard Poulton;2012"  
                              "001004" "AAR CORP"     379.226        0  379.226 "Robert Regan"       56 "2012" "001004;Robert Regan;2012"    
                              "001004" "AAR CORP"      349.07        0   349.07 "Randy Martinez"     57 "2012" "001004;Randy Martinez;2012"  
                              "001004" "AAR CORP"     906.449        0  906.449 "David Storch"       58 "2013" "001004;David Storch;2013"    
                              "001004" "AAR CORP"     499.272        0  499.272 "Timothy Romenesko"  59 "2013" "001004;Timothy Romenesko;2013"
                              "001004" "AAR CORP"     360.353        0  360.353 "Michael Sharp"      60 "2013" "001004;Michael Sharp;2013"    
                              "001004" "AAR CORP"     391.586        0  391.586 "Robert Regan"       61 "2013" "001004;Robert Regan;2013"    
                              "001004" "AAR CORP"     360.447        0  360.447 "Randy Martinez"     62 "2013" "001004;Randy Martinez;2013"  
                              "001004" "AAR CORP"         400        0      400 "John Fortson"       63 "2013" "001004;John Fortson;2013"    
                              "001004" "AAR CORP"     906.449        0  906.449 "David Storch"       64 "2014" "001004;David Storch;2014"    
                              "001004" "AAR CORP"     499.272        0  499.272 "Timothy Romenesko"  65 "2014" "001004;Timothy Romenesko;2014"
                              "001004" "AAR CORP"     391.586        0  391.586 "Robert Regan"       66 "2014" "001004;Robert Regan;2014"    
                              "001004" "AAR CORP"     409.375        0  409.375 "John Holmes"        67 "2014" "001004;John Holmes;2014"      
                              "001004" "AAR CORP"         400        0      400 "John Fortson"       68 "2014" "001004;John Fortson;2014"    
                              "001004" "AAR CORP"      755.25        0   755.25 "David Storch"       69 "2015" "001004;David Storch;2015"    
                              "001004" "AAR CORP"      456.25    507.4   963.65 "Timothy Romenesko"  70 "2015" "001004;Timothy Romenesko;2015"
                              "001004" "AAR CORP"     382.418        0  382.418 "Michael Sharp"      71 "2015" "001004;Michael Sharp;2015"    
                              "001004" "AAR CORP"     390.396        0  390.396 "Robert Regan"       72 "2015" "001004;Robert Regan;2015"    
                              "001004" "AAR CORP"      456.25  117.551  573.801 "John Holmes"        73 "2015" "001004;John Holmes;2015"      
                              "001004" "AAR CORP"     168.109        0  168.109 "John Fortson"       74 "2015" "001004;John Fortson;2015"    
                              "001004" "AAR CORP"         835        0      835 "David Storch"       75 "2016" "001004;David Storch;2016"    
                              "001004" "AAR CORP"       463.5        0    463.5 "Timothy Romenesko"  76 "2016" "001004;Timothy Romenesko;2016"
                              "001004" "AAR CORP"     227.692        0  227.692 "Michael Sharp"      77 "2016" "001004;Michael Sharp;2016"    
                              "001004" "AAR CORP"       401.7        0    401.7 "Robert Regan"       78 "2016" "001004;Robert Regan;2016"    
                              "001004" "AAR CORP"       463.5        0    463.5 "John Holmes"        79 "2016" "001004;John Holmes;2016"      
                              "001004" "AAR CORP"     263.333        0  263.333 "Eric Pachapa"       80 "2016" "001004;Eric Pachapa;2016"    
                              "001056" "AEROFLEX INC" 216.471   212.01  428.481 "Harvey Blau"        81 "1997" "001056;Harvey Blau;1997"      
                              "001056" "AEROFLEX INC" 281.261   212.01  493.271 "Michael Gorin"      82 "1997" "001056;Michael Gorin;1997"    
                              "001056" "AEROFLEX INC" 281.261   212.01  493.271 "Leonard Borow"      83 "1997" "001056;Leonard Borow;1997"    
                              "001056" "AEROFLEX INC" 173.564       60  233.564 "Carl Caruso"        84 "1997" "001056;Carl Caruso;1997"      
                              "001056" "AEROFLEX INC" 120.875       30  150.875 "Charles Badlato"    85 "1997" "001056;Charles Badlato;1997"  
                              "001056" "AEROFLEX INC" 220.667        0  220.667 "Harvey Blau"        86 "1998" "001056;Harvey Blau;1998"      
                              "001056" "AEROFLEX INC"  300.25      420   720.25 "Michael Gorin"      87 "1998" "001056;Michael Gorin;1998"    
                              "001056" "AEROFLEX INC"  300.25      420   720.25 "Leonard Borow"      88 "1998" "001056;Leonard Borow;1998"    
                              "001056" "AEROFLEX INC" 186.399       50  236.399 "Carl Caruso"        89 "1998" "001056;Carl Caruso;1998"      
                              "001056" "AEROFLEX INC" 128.562       40  168.562 "Charles Badlato"    90 "1998" "001056;Charles Badlato;1998"  
                              "001056" "AEROFLEX INC" 278.532  473.353  751.885 "Harvey Blau"        91 "1999" "001056;Harvey Blau;1999"      
                              "001056" "AEROFLEX INC" 352.532  631.137  983.669 "Michael Gorin"      92 "1999" "001056;Michael Gorin;1999"    
                              "001056" "AEROFLEX INC" 369.466  631.137 1000.603 "Leonard Borow"      93 "1999" "001056;Leonard Borow;1999"    
                              "001056" "AEROFLEX INC" 174.881       95  269.881 "Carl Caruso"        94 "1999" "001056;Carl Caruso;1999"      
                              "001056" "AEROFLEX INC" 138.859       55  193.859 "Charles Badlato"    95 "1999" "001056;Charles Badlato;1999"  
                              "001056" "AEROFLEX INC" 290.195  637.113  927.308 "Harvey Blau"        96 "2000" "001056;Harvey Blau;2000"      
                              "001056" "AEROFLEX INC" 369.466  614.402  983.868 "Michael Gorin"      97 "2000" "001056;Michael Gorin;2000"    
                              "001056" "AEROFLEX INC" 373.853  615.999  989.852 "Leonard Borow"      98 "2000" "001056;Leonard Borow;2000"    
                              "001056" "AEROFLEX INC"  217.77      130   347.77 "Carl Caruso"        99 "2000" "001056;Carl Caruso;2000"      
                              "001056" "AEROFLEX INC" 158.068       70  228.068 "Charles Badlato"   100 "2000" "001056;Charles Badlato;2000"  
                              end

                              The goal is to assign the compensation data in dataset 2 to the correct CFO in the correct year in dataset 1.
                               In order to do so, I created fuzzy variables in both datasets using the following logic: "CompanyID;Full Name;Year"
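
                               The keys were built roughly like this (a sketch consistent with the dataex samples above; all three components are strings, so plain concatenation works):
                               Code:
                               * master dataset
                               gen fuzzy = Acq_ID_Compustat + ";" + CFO_Name + ";" + Deal_Announced
                               gen fuzzy_ID_master = _n
                               * using dataset
                               gen fuzzy_using = GVKEY + ";" + Director_Name + ";" + YEAR
                               gen fuzzy_ID_using = _n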

                               Before I run the code (which will surely take some hours to complete), I was wondering whether you think there might be some options in matchit to increase the speed/efficiency of the process (besides the obvious one of running it on a more powerful machine). Here is the current code which I am about to run:

                              Code:
                              matchit fuzzy_ID_master fuzzy using temp_fuzzy_using.dta , idusing(fuzzy_ID_using) txtusing(fuzzy_using) override weights(log)
                               System: i5, 8 GB of RAM, Windows 10 (64-bit), Stata 17

                              Thanks
                              Last edited by Marc Pelow; 22 Aug 2021, 10:10.

