• by jghn on 5/24/2024, 1:04:35 PM

    This is looking at the wrong metric. I'm not expecting it to be 100% correct when I use it. I expect it to get me in the ballpark faster than I would have on my own. And then I can take it from there.

    Sometimes that means I have a follow-on question & iterate from there. That's fine too.

  • by happypumpkin on 5/24/2024, 1:16:56 PM

    From the paper:

    "Additionally, this work has used the free version of ChatGPT (GPT-3.5)"

  • by cubefox on 5/24/2024, 1:39:12 PM

    From the paper:

    > For each of the 517 SO [Stack Overflow] questions, the first two authors manually used the SO question’s title, body, and tags to form one question prompt and fed that to the free version of ChatGPT, which is based on GPT-3.5. We chose the free version of ChatGPT because it captures the majority of the target population of this work. Since the target population of this research is not only industry developers but also programmers of all levels, including students and freelancers around the world, the free version of ChatGPT has significantly more users than the paid version, which costs a monthly rate of 20 US dollars.

    Note that GPT-4o is now also freely available, although with usage caps. Allegedly the limit is one fifth the turns of paid Plus users, who are said to be limited to 80 turns every three hours, which would mean 16 free GPT-4o turns per three hours. There is some indication, though, that the limits are currently somewhat lower in practice and overall in flux.

    In any case, GPT-4o answers should be far more competent than those by GPT-3.5, so the study is already somewhat outdated.

  • by jononomo on 5/24/2024, 1:25:07 PM

    I use ChatGPT for coding constantly and the 52% error rate seems about right to me. I manually approve every single line of code that ChatGPT generates for me. If I copy-paste 120 lines of code that ChatGPT has generated for me directly into my app, that is because I have gone over all 120 lines with a fine-toothed comb, and probably iterated 3-4 times already. I constantly ask ChatGPT to think about the same question, but this time with an additional caveat.

    I find ChatGPT more useful from a software architecture point of view and from a trivial code point of view, and least useful at the mid-range stuff.

    It can write you a great regex (make sure you double-check it) and it can explain a lot of high-level concepts in insightful ways, but it has no theory of mind -- so it never responds with "It doesn't make sense to ask me that question -- what are you really trying to achieve here?", which is the kind of thing an actually intelligent software engineer might say from time to time.
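
    For example, a quick way to double-check a generated regex is to pin it against a few known-good and known-bad inputs before trusting it. A minimal sketch; the ISO-date pattern and test strings here are made-up examples, not model output:

        // Sanity-checking a model-generated regex with a few test cases.
        // The pattern is a hypothetical example (ISO dates).
        const isoDate: RegExp = /^\d{4}-\d{2}-\d{2}$/;

        console.assert(isoDate.test("2024-05-24"));   // should match
        console.assert(!isoDate.test("24-05-2024"));  // wrong field order
        console.assert(!isoDate.test("2024-5-24"));   // missing zero-padding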

  • by cjonas on 5/24/2024, 1:16:12 PM

    I scanned the paper and it doesn't mention which model they were using within ChatGPT. If it was 3.5 Turbo, then these results are already meaningless. GPT-4 and 4o are much more accurate.

    I just used GPT-4o to refactor 50 files from React classes to React function components, and it did so almost perfectly every time. Some of these classes were as long as 500 LOC.
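
    For reference, that refactor looks roughly like the following. A simplified sketch with a hypothetical Counter component, not one of the actual files:

        import React from "react";

        // Before: a React class component with local state.
        class Counter extends React.Component<{}, { count: number }> {
          state = { count: 0 };
          render() {
            return (
              <button onClick={() => this.setState({ count: this.state.count + 1 })}>
                Clicked {this.state.count} times
              </button>
            );
          }
        }

        // After: the equivalent function component using the useState hook.
        function CounterFn() {
          const [count, setCount] = React.useState(0);
          return (
            <button onClick={() => setCount(count + 1)}>
              Clicked {count} times
            </button>
          );
        }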

  • by Foivos on 5/24/2024, 1:14:46 PM

    This is way better than I thought. A follow-up question would be: for the times that it is wrong, how wrong is it? In other words, is the wrong answer complete rubbish, or can it be a starting point towards the actual correct answer?

  • by mrweasel on 5/24/2024, 1:12:07 PM

    ChatGPT was released one and a half years ago. It basically duct-tapes code together from a probability model; the fact that 52% of its coding answers are correct is amazing.

    I'm still on the fence about LLMs for coding, but from talking to friends, they primarily use it to define a skeleton of code or to generate code that they can then study and restructure. I don't see many developers accepting the generated code without review.

  • by jrvarela56 on 5/24/2024, 1:15:00 PM

    Similar to how programmers work, the AI needs feedback from the runtime in order to iterate towards a workable program.

    My expectation isn’t that the AI generates correct code. The AI will be useful as an ‘agent in the loop’ (a rough code sketch follows the list):

    - Spec or test suite written as bullets

    - Define tests and/or types

    - Human intervenes with edits to keep it in the right direction

    - LLM generates code, runs compiler/tests

    - Output is part of new context

    - Repeat until programmer is happy
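
    A minimal sketch of that loop, assuming a hypothetical askLLM client and "npm test" as the feedback source; none of these names come from a real API:

        // Rough sketch of the agent-in-the-loop flow above. askLLM is a
        // hypothetical stand-in; swap in an actual model client.
        import { execSync } from "node:child_process";
        import { writeFileSync } from "node:fs";

        async function askLLM(context: string): Promise<string> {
          throw new Error("hypothetical LLM call; replace with a real client");
        }

        function runTests(): { passed: boolean; output: string } {
          try {
            return { passed: true, output: execSync("npm test", { encoding: "utf8" }) };
          } catch (err: any) {
            return { passed: false, output: String(err.stdout ?? err.message) };
          }
        }

        async function agentLoop(spec: string, maxRounds = 5): Promise<string> {
          let context = `Spec:\n${spec}`;  // spec/tests written as bullets
          let code = "";
          for (let round = 0; round < maxRounds; round++) {
            code = await askLLM(context);       // LLM generates code
            writeFileSync("attempt.ts", code);  // apply the attempt
            const result = runTests();          // run compiler/tests
            if (result.passed) break;           // hand back to the programmer
            // the output becomes part of the new context; repeat
            context += `\n\nLast attempt:\n${code}\nTest output:\n${result.output}`;
          }
          return code;  // the programmer still reviews before accepting
        }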

  • by tasuki on 5/24/2024, 1:14:10 PM

    Does that mean that 48% of ChatGPT answers to programming questions are correct? If so, that's amazing!

  • by ChrisArchitect on 5/24/2024, 2:22:53 PM

    Related presentation video on the CHI 2024 conference page:

    https://programs.sigchi.org/chi/2024/program/content/146667

  • by MrSkelter on 5/25/2024, 12:30:24 PM

    ChatGPT isn’t the best coding LLM. Claude Opus is.

    Also, since you can always tell empirically whether a coding response works, mistakes are much more easily spotted than in other forms of LLM output.

    Debugging with AI is more important than prompting. It requires an understanding of the intent, which lets the human prompt the model in a way that helps it recognize its oversights.

    Most code errors from LLMs can be fixed by them. The problem is an incomplete understanding of the objective which makes them commit to incorrect paths.

    Being able to run code is a huge milestone. I hope the GPT-5 generation can do this and thus only deliver working code. That will be a quantum leap.

  • by avg_dev on 5/24/2024, 1:06:06 PM

    That article links to the actual paper, the abstract of which is itself quite readable: https://dl.acm.org/doi/pdf/10.1145/3613904.3642596

    > Q&A platforms have been crucial for the online help-seeking behavior of programmers. However, the recent popularity of ChatGPT is altering this trend. Despite this popularity, no comprehensive study has been conducted to evaluate the characteristics of ChatGPT’s answers to programming questions. To bridge the gap, we conducted the first in-depth analysis of ChatGPT answers to 517 programming questions on Stack Overflow and examined the correctness, consistency, comprehensiveness, and conciseness of ChatGPT answers. Furthermore, we conducted a large-scale linguistic analysis, as well as a user study, to understand the characteristics of ChatGPT answers from linguistic and human aspects. Our analysis shows that 52% of ChatGPT answers contain incorrect information and 77% are verbose. Nonetheless, our user study participants still preferred ChatGPT answers 35% of the time due to their comprehensiveness and well-articulated language style. However, they also overlooked the misinformation in the ChatGPT answers 39% of the time. This implies the need to counter misinformation in ChatGPT answers to programming questions and raise awareness of the risks associated with seemingly correct answers.

  • by nijuashi on 5/24/2024, 2:40:42 PM

    I guess I know how to ask the right programming questions, because my feeling about it is it’s about 80-90% correct, and the rest just gets me to correct solutions much faster than a search engine.

  • by ph4 on 5/24/2024, 3:23:47 PM

    I view it as an e-bike for my mind. It doesn't do all the legwork, but it definitely gets me up certain hills (of my choosing) without as much effort.

  • by drewcoo on 5/24/2024, 1:15:15 PM

    To those who constantly claim ChatGPT is "like an intern," just how low are the standards for interns?

  • by 123yawaworht456 on 5/24/2024, 1:19:36 PM

    iirc, I saw some other study (or an experiment some random guy had run) where original GPT-4 had vastly outperformed its later incarnations for code generation.

    Current OpenAI products either use much lower-parameter models under the hood than they did originally, or maybe it's a side effect of context stretching.

  • by ggddv on 5/24/2024, 1:06:48 PM

    Can there be some sort of mechanism on HN for criticism of an unsubstantiated headline?

  • by odyssey7 on 5/24/2024, 1:15:27 PM

    Extrapolation:

    Odds of correct answer within n attempts =

    1 - (1/2)^n

    Nice, that’s exponentially good!
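
    For concreteness, evaluating that formula for small n. Note it assumes attempts are independent, with the per-answer error rate rounded to 1/2; the helper name is made up:

        // Odds of at least one correct answer within n independent attempts,
        // using the comment's rounded 1/2 per-attempt error rate.
        const oddsCorrectWithin = (n: number, pWrong = 0.5): number =>
          1 - pWrong ** n;

        for (let n = 1; n <= 5; n++) {
          console.log(n, oddsCorrectWithin(n));  // 0.5, 0.75, 0.875, 0.9375, 0.96875
        }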

  • by resource_waste on 5/24/2024, 1:12:01 PM

    Can someone email the author and explain what an LLM is?

    People asking for 'right' answers don't really get it. I'm sorry if that sounds abrasive, but these people give LLMs a bad name due to their own ignorance/malice.

    I remember having some Amazon programmer trash LLMs for 'not being 100% accurate'. It was really an iD10t error. LLMs aren't used for 100% accuracy. If you are doing that, you don't understand the technology.

    There is a learning curve with LLMs, and it seems a few people still don't get it.

  • by Last5Digits on 5/24/2024, 1:17:30 PM

    Here's hoping that the average HN commenter will actually read the paper and realize that the study was performed using GPT-3.5.

  • by f0e4c2f7 on 5/24/2024, 1:18:02 PM

    This study uses a version of ChatGPT that is either 1 or 2 versions behind depending on the part of the study.

    It cracks me up how consistent this is.

    See post criticizing LLMs. Check if they're on the latest version (which is now free to boot!!).

    Nope. Seemingly... never. To be fair, this is probably just an old study from before 4o came out. Even still, it's just not relevant anymore.

  • by ObnoxiousProxy on 5/24/2024, 1:12:16 PM

    Misleading headline and completely pointless without diving into how the benchmark was constructed and what kinds of programming questions were asked.

    On the HumanEval (https://paperswithcode.com/sota/code-generation-on-humaneval) benchmark, GPT-4 can generate code that works on the first pass 76.5% of the time.

    While on SWE-bench (https://www.swebench.com/), GPT-4 with RAG can only solve about 1% of the GitHub issues used in the benchmark.