• by johnthuss on 12/8/2022, 7:09:37 PM

    "because GPT is good enough to convince users of the site that the answer holds merit, [the] signals the community typically use to determine the legitimacy of their peers’ contributions frequently fail to detect severe issues with GPT-generated answers. As a result, information that is objectively wrong makes its way onto the site."

    Basically, it is much faster to generate plausible but incorrect answers algorithmically than it is for humans to evaluate their accuracy and flag them.

  • by newbieuser on 12/8/2022, 7:03:47 PM

    > the community trusts that users are submitting answers that reflect what they actually know to be accurate and that they and their peers have the knowledge and skill set to verify and validate those answers.

    This confidence doesn't hold up for many answers. It's quite common to come across answers that don't work at all, and these answers, oddly enough, can rack up a lot of points. It's debatable how well the community actually does on trust.

  • by m348e912 on 12/8/2022, 7:16:41 PM

    How does Stack Overflow know an answer was generated by ChatGPT? And wouldn't they be better served by addressing incorrect answers rather than who or what created them? If they are concerned about incorrect answers, they have to concede that humans provide incorrect answers as well.

  • by Alifatisk on 12/8/2022, 7:31:41 PM

    The fact that Stack Overflow has to come out with a statement like this shows how powerful ChatGPT is!

  • by crmd on 12/8/2022, 7:53:17 PM

    I love how Trust and Safety are the go-to watchwords for every tech platform's self-serving, business-model-protecting content policy decisions.