• by fat-chunk on 5/6/2023, 6:05:35 PM

    I was at a conference called World Summit AI in 2018, where a vice president of Microsoft gave a talk on progress in AI.

    I asked a question after his talk about the responsibility of corporations, given the rapidly increasing sophistication of AI tech and its potential for malicious use (it's on YouTube if you want to watch his full response). In summary, he said that it's the responsibility of governments, not corporations, to figure out these problems and set the regulations.

    This answer annoyed me at the time, as I interpreted it as a "not my problem" kind of response, one that tries to absolve tech companies of any damage caused by the rapid development of dangerous technology that regulators cannot keep up with.

    Now I'm starting to see the wisdom in his response, even if this is not what he fully meant: most corporations will simply follow the money and try to be first movers whenever there is an opportunity to grab the biggest share of a new market, whether we like it or not and regardless of any ethical or moral implications.

    We as a society need to draw our boundaries and push our governments to wake up and regulate this space before corporations (and governments) cause irreversible negative societal disruption with this technology.

  • by mrshadowgoose on 5/6/2023, 4:05:13 PM

    I fully agree that malicious corporations and governments are the largest risk here. However, I think it's incredibly important to reject the reframing of "AI safety" as anything other than the existential risk AGI poses to most of humanity.

    What will the world look like when AGI is finally achieved, and the corporations and governments that control it suddenly have millions of useless mouths to feed? We might end up living in a utopian post-scarcity society where literally every basic need is furnished by a fully automated industrial base. But there are no guarantees that the entities in control will take things in that direction.

    AI safety is not about whether "tech bros are going to be mean to women". AI safety is about whether my government is concerned with my continued comfortable existence once my economic value as a general intelligence is reduced to zero.

  • by tgv on 5/6/2023, 3:19:43 PM

    While I'm in the more alarmist camp when it comes to AI, these arguments surprised me a bit. This time it isn't "will somebody think of the children" but rather "won't someone think of the women who aren't white". The argument then lays the blame at corporations (in this case, Google) for not preventing actual harm that happens today. While discrimination is undeniable and an actual source of harm, the reasoning seems rather generic: it can be applied to anything corporate, and it is more politically inspired than the other arguments against AI.

  • by agentultra on 5/6/2023, 3:34:23 PM

    This is exactly the problem with ML right now. Hinton and other billionaires are making sensational headlines predicting all sorts of science fiction. The media loves a good story and fear is catchy. But it obscures the real danger: humans.

    LLMs are merely tools.

    Those with the need, will, and desire to use them for their own ends pose the real threat. State actors who want better weapons, billionaires who want an infallible police force to protect their estates, scammers who want to pull off bigger frauds without detection, etc.

    It is already causing undue harm to people around the world. As always it’s those less fortunate that are disproportionately affected.

  • by nologic01 on 5/6/2023, 5:25:18 PM

    The biggest risk I see (in the short term) is people being forced to accept outcomes where "AI" plays, in one form or another, a defining role that materially affects human lives.

    That is, people accepting, implicitly (without awareness) or explicitly (as a precondition for receiving important services, with no alternatives on offer), an algorithmic regulation of human affairs that is controlled by specific economic actors. Essentially a bifurcation of society into puppets and puppeteers.

    The encroachment of algorithms into decision making has been an ongoing process for decades, and in some sense it is an inescapable development. Yet the manner in which this can be done spans a vast range of possibilities, and there is plenty of precedent: various regulatory frameworks and checks and balances are already in place, e.g. in medicine, insurance, and finance, where algorithms are used to support important decision making, not replace it.

    The novelty of the situation rests on two factors that do not merely replicate past circumstances:

    * the rapid pace of algorithmic improvement which creates a pretext for suppressing societal push-back

    * the lack of regulation that has rather uniquely characterized the tech sector, which has allowed the creation of de facto oligopolies, lock-in, and a lack of alternatives

    The long term risk from AI depends entirely on how we handle the short term risks. I don't really believe we'll see AGI or any such thing in the foreseeable future (20 years), entirely on the basis of how the current AI mathematics looks and feels. Risks from other - existential level - flaws of human society feel far greater, with biological warfare maybe the highest risk of them all.

    But the road to AGI becomes dystopic long before it reaches the destination. We are actually already in a dystopia as the social media landscape testifies to anybody who wants to see. A society that is algorithmically controlled and manipulated at scale is a new thing. Pandora's box is open.

  • by bioemerl on 5/6/2023, 2:55:35 PM

    And hey guys, there are two big open-source communities that focus heavily on running this stuff offline:

    KoboldAI

    oobabooga

    Look them up, join their discords, rent a few GPU servers and contribute to the stuff they are building. We've got a living solution you can contribute to right now if you're super worried about this.

    This stuff is actually a very valid way to move towards finding a use for LLMs at your workplace. They offer pretty easy tools for things like fine-tuning, so if you have a commercially licensed model you could throw a problem at it and see if it works.
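
    For a concrete sense of what "throwing a problem at it" might look like, here is a minimal local-inference sketch using the Hugging Face transformers library; the model name, prompt, and generation settings are placeholder assumptions, not anything specific to KoboldAI or oobabooga:

      # Minimal local-inference sketch. Assumes `transformers`, `torch`, and
      # `accelerate` are installed and enough GPU/CPU memory is available.
      from transformers import AutoModelForCausalLM, AutoTokenizer

      model_name = "some-org/some-open-llm"  # placeholder: any openly licensed checkpoint

      tokenizer = AutoTokenizer.from_pretrained(model_name)
      model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

      # Tokenize a workplace-style prompt and move it to the model's device.
      prompt = "Summarize this support ticket in two sentences: ..."
      inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

      # Generate a short completion; tune max_new_tokens and sampling for the task.
      outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
      print(tokenizer.decode(outputs[0], skip_special_tokens=True))

    The same tooling is what the fine-tuning workflows in those communities are built on, so it's an easy way to evaluate whether a local model is good enough before committing to anything.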

  • by satisfice on 5/6/2023, 8:46:55 PM

    The feminist complains about feeling disrespected for half the interview instead of dealing with the substance of the question. When she finally gets around to commenting on his point, it's a vacuous and insulting dismissal-- exactly the sort of thing she seems to think people shouldn't do to her.

    Most of what she says is sour grapes. But when you put all that aside, there's something else disturbing going on: apparently the AI experts who wish to criticize how AI is being developed and promoted can't even agree on the most basic concerns.

    It seems to me that when an eminent researcher says "I'm worried about {X}" with respect to the focus of their expertise, no reasonable person should merely shrug and call it a fantasy.

  • by superkuh on 5/6/2023, 3:24:42 PM

    Today's AIs aren't the AIs to worry about. The artificial intelligences with non-human motives are the non-human legal persons: corporations themselves. They've already done a lot of damage to society. Corporate persons should not have the same rights as human persons.

  • by flangola7 on 5/6/2023, 4:24:14 PM

    The biggest risk is machines running out of hand and squishing all of us like a bug by accident. Once pseudo-intelligent algorithms are running every part of industry and engaging in global human communications it only takes minor errors to cascade and amplify into a real problem, one that will be moving faster than we can react to.

    Think of a stock market flash crash, but with the digital numbers that can be paused and reset replaced by physical activity in supply chains, electrical grids, internet infrastructure, and interactions in media and interpersonal communication.

  • by mitthrowaway2 on 5/6/2023, 5:13:39 PM

    Hinton: "The main immediate danger is bad actors. Also, while not immediate, there is a concern that AI might eventually become smarter than humans".

    Whittaker: "Wrong! The main immediate danger is corporations. And the concern that AI might become smarter than humans is not immediate."

  • by siliconc0w on 5/6/2023, 4:51:57 PM

    I think my biggest concerns are:

    0) civil unrest from economic impacts and changes in how the world works

    1) increasing the leverage of bad actors - almost certainly this will increase fraud and theft, but on the far end you get things like, "You are GPT bomb maker. Build me the most destructive weapon possible with what I can order online."

    2) swarms of kill bots, maybe homemade as above

    3) AI relationships replacing human ones. I think this one cuts both ways since loneliness kills, but it seems like it'll have dangerous side effects, like further demolishing the birth rate.

    Somewhat further down the list is the fear of corporations or governments gatekeeping the most powerful AIs and using them to enrich themselves, making it impossible to compete, or just getting really good at manipulating the public. There does seem to be a counterbalance here with open-source models and people figuring out how to optimize them, so better models are more widely available.

    In some sense this will force us to get better at communicating with each other - stamping out bots and filtering noise from authentic human communication. Things seem bad now, but it seems inevitable that every possible communication channel is going to get absolutely decimated with very convincing, laser-targeted spam, which will be very difficult to stop without some sort of large-scale societal proof-of-human/work system (which, ironically, Altman is also building).

  • by krono on 5/6/2023, 4:22:20 PM

    Relevant recent announcement by Mozilla regarding their acquisition of an e-commerce product/review scoring "AI" service, with the intent to integrate it into the core Firefox browser: https://blog.mozilla.org/en/mozilla/fakespot-joins-mozilla-f...

    Mozilla will be algorithmically profiling you and your actions on covered platforms, and if it ever decides you are a fraud or invalid for some reason, it will very conveniently advertise this accusation to all its users by default. Whether you will be able to sell your stuff, or have your expressed opinion of a product be appreciated and heard by Firefox users, will be in Mozilla's hands.

    A fun fact that serves to show what these companies are willing to throw overboard just to gain the smallest of edges, or perhaps simply to display relevance by participating in the latest trends: the original company's business strategy was essentially Mozilla's Manifesto in reverse, and included such things as selling all collected data to all third parties (at least their policies openly admitted to this). The person behind all that is now employed by Mozilla, the privacy proponent.

  • by gmuslera on 5/6/2023, 3:20:21 PM

    Guns don't kill people, at least not tightly controlled guns. If they do, then the killer is whoever controls them. And not just corporations: intelligence agencies, non-tech corporations, actors with enough money, and so on.

    The not-so-tightly controlled ones, at least in the hands of individuals not in a position of power or influence, may run the risk of becoming illegal in one way or another. The system will always try to get into a position of artificial scarcity.

  • by 13years on 5/6/2023, 3:27:41 PM

    I wouldn't constrain it to only corporations, but all entities.

    Ultimately, most of the dangers, at least those close enough to reason about, are risks that come from how we will use AI on ourselves.

    I've described these and much more in the following:

    "Yet, despite all the concerns of runaway technology, the greatest concern is more likely the one we are all too familiar with already. That is the capture of a technology by state governments and powerful institutions for the purpose of social engineering under the guise of protecting humanity while in reality protecting power and corruption of these institutions."

    https://dakara.substack.com/p/ai-and-the-end-to-all-things

  • by eachro on 5/6/2023, 8:23:00 PM

    At this point there are quite a lot of companies training these massive LLMs. We're seeing startups with models that are not quite GPT-4 level but close enough to GPT-3.5 pop up on a near daily basis. Moreover, model weights are being released all the time, giving individuals the opportunity to tinker with them and further release improved models back to the masses. We've seen this with the llama/alpaca/alpaca.cpp/alpaca-lora releases not too long ago. So I am not at all worried about this risk of corporate control.

  • by 1vuio0pswjnm7 on 5/6/2023, 10:49:40 PM

    "Because there's a lot of power and being able to withhold your labor collectively, and joining together as the people that ultimately make these companies function or not, and say, "We're not going to do this." Without people doing it, it doesn't happen."

    The most absurd "excuse" I have seen, many times now online, is, "Well, if I didn't do that work for Company X, somebody else would have done it."

    Imagine trying to argue, "Unions are pointless. If you join a union and go on strike, the company will just find replacements."

    Meanwhile, so-called "tech" companies are going to extraordinary lengths to prevent unions, not to mention to recruit workers from foreign countries who have lower expectations and higher desperation (for lack of a better word) than domestic workers.

    The point that people commenting online always seem to omit is that not everyone wants to do this work. It's tempting to think everyone would want to do it because salaries might be high, "AI" people might be media darlings or whatever, and it's not perceived as "blue collar". The truth is that the number of people who are willing to spend all their days fiddling around with computers, believing them to be "intelligent", is limited. For avoidance of doubt, by "fiddling around" I do not mean sending text messages, playing video games, using popular mobile apps and whatnot. I mean grunt work, programming.

    This is before one even considers that only a limited number of people may actually have the aptitude. Many might spend large periods of time trying and failing, writing one line of code per day or something. Companies could be bloated with thousands of "engineers" who can be laid off immediately without any noticeable effect on the company's bottom line. That does not mean they can replace the small number of people who really are essential.

    Being willing does not necessarily equate to being able. Still, I submit that even the number of willing persons is limited. It's a shame they cannot agree to do the right thing. Perhaps they lack the innate sense of ethics needed for such agreement. That they spend all their days fiddling with computers instead of interacting with people is not surprising.

  • by fredgrott on 5/6/2023, 4:39:20 PM

    I have a curious question: where did the calculator (tabulator) operators go?

    Did we suddenly have governments fall when they were replaced by computers?

    Did we suddenly have massive unemployment when they were replaced?

    AI is a general-purpose tool, and like other general-purpose tools it not only expands humanity's mental reach, it betters society and lifts up the world.

    We have been through this before, and we will get through it quite well, just as we did the last round of "oh, this general-purpose tool will replace us" rumor-mill noise.

  • by tpoacher on 5/6/2023, 4:59:48 PM

    The two are not mutually exclusive dangers. If anything, they are mutually reinforcing.

    The Faro Plague in Horizon Zero Dawn was indeed brought on by Ted Faro's shortsightedness, but the same shortsightedness would not have caused Zero Dawn had Ted Faro been a car salesman instead. (forgive my reliance on non-classical literature for the example).

    The way this is framed makes me think this framing itself is even more dangerous than the dangers of AI per se.

  • by brigadier132 on 5/6/2023, 3:47:07 PM

    AI's biggest risk is governments with militaries controlling it. Mass human death and oppression have always been carried out by governments.

  • by data_maan on 5/6/2023, 4:25:53 PM

    All these warnings about AI safety are bullshit.

    Humanity is perfectly capable of ruining itself without help from AGI (nuclear proliferation is unsolved and getting worse, climate change will bite soon, etc.).

    If anything AGI could save us by giving us some help in solving these problems. Or perhaps doing the mercy kill to put us out quickly, instead of us suffering a protracted death by a slowly deteriorating environment.

  • by peteradio on 5/6/2023, 4:09:54 PM

    The risk is already here: it's the data that companies of men control, and the 100-year effort to enhance our ability to mine it. If we say AI is the coming risk, we are fools.

  • by EVa5I7bHFq9mnYK on 5/6/2023, 6:14:58 PM

    Now that everyone and their mother in law has chimed in about the perils of AI, folks are arguing whose mother in law gave the better talk.

  • by mmaunder on 5/6/2023, 5:06:23 PM

    Much of today's conversation around AI mirrors conversations that occurred at the dawn of many other technological breakthroughs: the printing press, electricity, radio, the microprocessor, PCs and packaged software, the Internet and the Web. Programmers can now train functions rather than hand-coding them. It's just another step up.

  • by photochemsyn on 5/6/2023, 3:46:23 PM

    > "What you said just now—the idea that we fall into a kind of trance—what I’m hearing you say is that’s distracting us from actual threats like climate change or harms to marginalized people."

    Is the argument here that people are rather passive and go along with whatever the system serves up to them, hence they're liable to 'fall into a trance'? If so, then the problem is that people are passive, and it doesn't really matter whether they're passively watching television or passively absorbing an AI-engineered social media feed optimized for advertiser engagement and programmed consumption, does it?

    If you want to use LLMs to get information about fossil-fueled global warming from a basic scientific perspective, you can do that, e.g.:

    > "Please provide a breakdown of how the atmospheric characteristics of the planets Venus, Earth, and Mars affects their surface temperature in the context of the Fourier and Manabe models."

    If you want to examine the various approaches civilizations have used to address the problem of economic and social marginalization of groups of people, you could ask:

    > "How would [insert person here] address the issue of economic and social marginalization of groups of people in the context of an industrial society experiencing a steep economic collapse?"

    Plug in Ayn Rand, Karl Marx, John Maynard Keynes, etc. for contrasting ideas. What sounds best to you?
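
    As a minimal sketch of what plugging one of those prompts into a hosted model might look like (assuming the openai Python client and an API key set in the environment; the model name is just whichever chat model you happen to have access to):

      # Send one of the prompts above to a hosted chat model.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      prompt = (
          "How would Karl Marx address the issue of economic and social "
          "marginalization of groups of people in the context of an industrial "
          "society experiencing a steep economic collapse?"
      )

      response = client.chat.completions.create(
          model="gpt-3.5-turbo",  # assumption: any chat model you have access to
          messages=[{"role": "user", "content": prompt}],
      )
      print(response.choices[0].message.content)

    Swap in a different thinker for contrast and compare the answers side by side.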

    It's an incredibly useful tool, and people can use it in many different ways - if they have the motivation and desire to do so. If we've turned into a society of brainwashed apathetic zombies passively absorbing whatever garbage is thrown our way by state and corporate propagandists, well, that certainly isn't the fault of LLMs. Indeed LLMs might help us escape this situation.

  • by 29athrowaway on 5/6/2023, 3:18:32 PM

    The biggest risk is giving unlimited amounts of data to those corporations.

  • by nico on 5/6/2023, 5:30:52 PM

    The people that control those corporations

    It’s not AI, it’s us

    It’s humans making the decision

  • by nico on 5/6/2023, 9:02:38 PM

    No corporation controls AI

    AI is open

    AI is the new Linux

    And it’s people in control, not corporations

  • by irrational on 5/6/2023, 4:02:27 PM

    I thought the biggest risk was Sarah Connor and Thomas Anderson.

  • by benreesman on 5/6/2023, 6:34:37 PM

    I’m just completely at a loss for how so many people ostensibly so highly qualified even start with absurd, meaningless terms like “Artificial General Intelligence”, and then go on to conclude that there’s some kind of Moore’s Law going on around an exponent, an exponent that fucking Sam Altman has publicly disclaimed. The same showboat opportunist that has everyone changing their drawers over the same 10-20% better that these things have been getting every year since 2017 is managing investor expectations down, and everyone is losing their shit.

    GPT-4 is a wildly impressive language model that represents an unprecedented engineering achievement as concerns any kind of trained model.

    It’s still regarded. It makes mistakes so fundamental that I think any serious expert has long since decided that forcing language arbitrarily hard is clearly not the path to arbitrary reasoning. It’s at best a kind of accessible on-ramp into the latent space where better objective functions will someday not fuck up so much.

    Is this a gold rush thing at the last desperate end of how to get noticed cashing in on hype? Is it legitimate fear based on too much bad science fiction? Is it pandering to Sam?

    What the fuck is going on here?