• by ggm on 9/5/2023, 5:07:19 AM

    The threat model really depends on how strongly you mean "existential" - if it's hyperbole and it's just "risk," then there's heaps of risk potential in non-AGI, misapplied LLM/GPT AI outcomes. Search "robodebt Australia" and weep. Reliance on "the machine said so" by government bodies under pressure to deliver a specific outcome is the Nuremberg defence of our times.

    Since I don't believe AGI is on the horizon, I can answer the non-hyperbolic "existential" with no, it's not. But if you want to end the species, continuing to be dumb about AGW is definitely the way to go, and it's only one letter off AGI.

    Wake up sheeple! spelling matters! AGW is an existential risk (not joking)

  • by archo on 9/5/2023, 4:05:54 AM

  • by ftxbro on 9/5/2023, 4:11:08 AM

    these kinds of articles are usually cringey, but this one was pretty good. it doesn't have the usual smug subtext of "of course AI isn't an existential threat"

  • by arisAlexis on 9/5/2023, 7:04:23 AM

    Just tell anyone non-tech-related: hey, we are building something smarter than us, and we haven't found a way to control it yet.

    Like, what do you think?