I'm really enjoying watching all of the ChatGPT content coming out in the programming and medical communities. That said, I wonder how one might go about systematically evaluating the quality of the programming advice. To what extent does the model get things wrong, and are there systematic conditions under which it generates incorrect responses?
I've asked it a bunch of questions and it makes a lot of factual errors. It will provide similar language for similarly worded questions. I think these artifacts will be the "tell." I suspect this will not be unlike the migration of "regression by hand" to statistical software. It will make writing papers easier for sure, but it will never replace a clever question.
I wonder how nicely it would play with data collection. For example, what if I asked "Where might I get Spanish state GDP per capita from 1990-2010?", or asked it to put together Python code to scrape that (if I fed it a link or something)? Something like the sketch below is the kind of thing I have in mind.
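Just as a rough sketch of the sort of Python it might hand back (the URL and column names here are placeholders I made up, not a real data source):

Code:
# Hypothetical sketch: the URL and column names are placeholders, not a real source.
import pandas as pd

DATA_URL = "https://example.org/spain_gdp_per_capita.csv"  # placeholder URL

# Read a (hypothetical) CSV of region-year GDP per capita
gdp = pd.read_csv(DATA_URL)

# Keep only the 1990-2010 window and tidy the columns
gdp = gdp[(gdp["year"] >= 1990) & (gdp["year"] <= 2010)]
gdp = gdp[["region", "year", "gdp_per_capita"]].sort_values(["region", "year"])

gdp.to_csv("spain_gdp_per_capita_1990_2010.csv", index=False)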
Either way, I don't think it'll overtake ado programmers any time soon.
Yep, that's my friend's command. When I asked him, he said augsynth is pretty much entirely ridge regression rather than OLS followed by ridge. Justin also noted that R and Stata optimize somewhat differently, so he said the two might not be totally comparable. But I'll see! Maybe I'll test them side by side.
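Just to make the ridge-versus-OLS distinction concrete (a toy illustration only, nothing to do with how augsynth itself is coded): ridge adds an L2 penalty, so on the same data it shrinks coefficients relative to plain OLS, which is one reason two implementations built around different steps can land on different weights.

Code:
# Toy illustration of OLS vs. ridge shrinkage; not how augsynth is implemented.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ np.array([1.0, 0.5, 0.0, -0.5, 2.0]) + rng.normal(scale=0.5, size=50)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)  # alpha chosen arbitrarily for the demo

print("OLS coefficients:  ", ols.coef_)
print("Ridge coefficients:", ridge.coef_)  # shrunk toward zero by the L2 penalty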
But yeah, ChatGPT seems super useful, and it could be sharpened over time for programming.