by sublimefire on 6/23/2025, 10:10:57 AM
by pif on 6/23/2025, 10:14:21 AM
The role of the university is to show that calling AI "AI" is something only idiots do. Intelligence has nothing to do with AI.
by r0x0r007 on 6/23/2025, 10:03:26 AM
Students are like electrical current: given the path of least resistance, they will follow it (as with most people). Cutting off such a path is NP-hard to do at scale IMO. But I definitely agree with the idea of the article.
by _0ffh on 6/23/2025, 10:09:02 AM
Schools in general are doomed to complete obsolescence now, as every kid can have their own personal AI tutor. Doesn't look much better for universities. Outside of lab work and testing, there is just no further need for them. There, I said it.
by psyklic on 6/23/2025, 10:19:41 AM
The article's main point seems to be that students cheat with AI; hence, universities must "resist" AI to preserve critical thought.
Although universities are certainly against cheating, the responsibility has always been on the student not to cheat. Universities do not oppose useful technologies simply because they may be misused for cheating.
Put another way, the role of a university is to "discover and invent the future." In this light, universities will be more interested in developing AI than in so-called "resisting" it, especially since it has already yielded breakthroughs in science, e.g. a Nobel Prize in Chemistry for protein structure prediction.
by lewdwig on 6/23/2025, 10:09:11 AM
We exist in an era in which coursework as a medium of assessment has suddenly become nearly worthless. Since this is the way things have been done for centuries, it does not surprise me that universities haven't quickly rustled up easy solutions.
by TheServitor on 6/23/2025, 10:19:34 AM
No. And there are a LOT of assumptions baked into this about people's passive engagement with models, early model flaws being projected indefinitely into the future, and the effectiveness of AI.
by simianwords on 6/23/2025, 10:58:31 AM
A characteristic of "LLMs bad" articles is that they stack up small, multidimensional criticisms in the hope that at least one will stick.
LLMs curb independent thinking (links to some n=5 article published recently)
LLMs reduce wages (wrong!!)
LLMs cause environmental damage (wrong!)
Add in some vague leftist jargon about decolonisation and you have your standard "LLMs bad" article. There is, I admit, a nugget of truth in each criticism, and I hope we can explore it from an unbiased angle.
After thinking about it, I have a theory about the real reason some people have a bad opinion of LLMs.
by rob_c on 6/23/2025, 10:02:10 AM
Oh hell no.
This is like saying mathematics should avoid the calculator.
Don't be so naive: the only capability AI models have is performing more powerful contextual lookups and referencing, and, in the case of chatbots, hallucinating when this fails.
These tools have no agency; people who blindly trust them beyond this are the highest of fools and don't understand garbage in, garbage out.
The role of a university is to train in the use of tools and that includes the squishy one used for critical thinking. If the university wasn't doing that before now it wasn't doing its job.
None of this pseudo-intellectualism and political opinion posting.
by lugu on 6/23/2025, 10:32:49 AM
Saying that LLMs prevent independent thinking is like saying books prevent independent thinking. It depends on how you use them.
by amelius on 6/23/2025, 10:08:41 AM
The role of the University is to curate datasets useful for building new models, and perhaps training those models.
This post is close to pure waffle. Yes, there are parts of common sense, but just to give an example of the "spice" thrown in amongst other lines:
> Deep learning has historical and epistemological connections to eugenics through its mathematics, its metrics and through concepts like AGI, and we shouldn't be surprised if and when it gets applied in education to weed out 'useless learners'.
It might be partially correct, but this is similar to saying Germans should not be trusted because of WWII.
(sad face) This post subtracts from the valid arguments against the usage of AI tools in scenarios where those arguments are valid; it is posts like this that cause some folks to have a knee-jerk reaction and label the authors as Luddites.