

LLMs have no intentions. They only do what the user asks them to.
A contrarian isn’t one who always objects - that’s a conformist of a different sort. A contrarian reasons independently, from the ground up, and resists pressure to conform.
Everything you do changes your brain activity.
This isn’t about using ChatGPT broadly, but specifically about the difference between writing an essay with the help of an LLM versus doing it without. And in this case, I think it all comes down to how you use it. If you just have it write the essay for you, then of course it won’t stimulate your brain to the same extent - that’s like hiring someone to go to the gym for you.
Personally, the way I use it to help with my writing is by doing all the writing myself first. Only after that do I let it check for grammatical errors and help improve the clarity and flow by making minor structural adjustments - while keeping the tone and message of my original draft intact.
For me, the purpose of writing is to convert abstract thoughts into language and pass that information along, hoping the reader understands it well enough that it forms the same idea in their mind. If ChatGPT can help untangle my word salad and make that process more effective, I welcome it.
This headline format makes me irrationally annoyed.
They shouldn’t be making assumptions about what the reader thinks. It almost feels like they’re planting a bias first and then presenting the facts - instead of just laying things out and letting people make up their own minds.
Working with your hands is a good way. I feel like online discussions often forget that people like this even exist.
The ads-based business model is one of the main reasons so much of the internet sucks so bad. It should either be completely free or run on donations or subscriptions.
I don’t have an issue with YouTube ads because I’ve never actually had to see any - thanks to adblocking. But when they eventually figure out how to prevent that, I’d rather just pay a monthly fee than deal with ads. I think their pricing is completely reasonable, and I can’t morally justify blocking ads - I do it because it’s easy and free. Honestly, I’ve subscribed to services that cost more and give me less value than YouTube does.
As much as I hate dealing with their shenanigans, I can’t really blame them either. As long as I can get away with using an adblocker, I will - but honestly, YouTube gives me more value for free than a lot of services I actually pay for. I have no moral argument for why YouTube should let me watch videos for free, even though I like free stuff just as much as the next guy.
It is the source most Americans get their news from, whether or not it’s technically a news source in itself.
Seems like I’m not in the target audience for these ads. I have absolutely zero clue what any of the things mentioned above are. I use WhatsApp to send messages.
It’s not a case for incivility that I’m making, either. I just struggle to believe you genuinely don’t understand what people mean when they ask for less moderation or censorship.
Nobody is asking for an unmoderated space.
“Your claim is only valid if you first run this elaborate, long-term experiment that I came up with.”
The world isn’t binary. When someone says less moderation, they don’t mean no moderation. Framing it as all-or-nothing just misrepresents their view to make it easier for you to argue against. CSAM is illegal, so it’s always going to be against the rules - that’s not up to Google and is therefore a moot point.
As for other content you ideologically oppose, that’s your issue. As long as it’s not advocating violence or breaking the law, I don’t see why they’d be obligated to remove it. You’re free to think they should - but it’s their platform, not yours. If they want to allow that kind of content, they’re allowed to. If you don’t like it, don’t go there.
You don’t get notified if the channel owner deletes your comment.
I agree. There just seems to be a fairly widespread pro-censorship sentiment among Lemmy users, usually driven by the desire to block speech that could be harmful to marginalized groups - but in practice, it often extends to broadly silencing all ideas they disagree with. The strawman here tends to be that anyone who wants more free speech just wants to shout slurs and spread (in their view) objectively harmful ideas.
That’s a bit different than using ChatGPT in what is effectively a one-on-one interview. This isn’t about writing a job application. It’s about someone asking you a question and, instead of answering it yourself, having ChatGPT answer it for you.
That’s because it is.
The term artificial intelligence is broader than many people realize. It doesn’t mean human-level consciousness or sci-fi-style general intelligence - that’s a specific subset called AGI (Artificial General Intelligence). In reality, AI refers to any system designed to perform tasks that would typically require human intelligence. That includes everything from playing chess to recognizing patterns, translating languages, or generating text.
Large language models fall well within this definition. They’re narrow AIs - highly specialized, not general - but still part of the broader AI category. When people say “this isn’t real AI,” they’re often working from a fictional or futuristic idea of what AI should be, rather than how the term has actually been used in computer science for decades.
Different definitions for intelligence:
We have plenty of intelligent AI systems already. LLMs probably fit the definition. Something like Tesla FSD definitely does.
Our current AI models, sure - but a true superintelligent AGI would be a completely different case. As humans, we’re inherently incapable of imagining just how persuasive a system like that could be. When bribery doesn’t work, it’ll eventually turn to threats - and even the scenarios imagined by humans can be pretty terrifying. Whatever the AI would come up with would likely be far worse.
The “just pull the plug” argument, to me, sounds like a three-year-old thinking they can outsmart an adult - except in this case, the difference in intelligence would be orders of magnitude greater.
Thanks.
Well, I don’t think OpenAI knows how to build AGI, so that’s false. Otherwise, Sam’s statement there is technically correct, but kind of misleading - he talks about AGI and then, in the next sentence, switches back to AI.
Sergey’s claim that they will achieve AGI before 2030 could turn out to be true, but again, he couldn’t possibly know that. I’m sure it’s their intention, but that’s different from reality.
Elon’s statement doesn’t even make sense. I’ve never heard anyone define AGI like that. A thirteen-year-old with an IQ of 85 is generally intelligent. Being smarter than the smartest human definitely qualifies as AGI, but that’s just a weird bar. General intelligence isn’t about how smart something is - it’s about whether it can apply its intelligence across multiple unrelated fields.
Is there a link where I could see them making these claims for myself? This is something I’ve only heard from AI critics, but never directly from the AI companies themselves. I wouldn’t be surprised if they did, but I’ve just never seen them say it outright.
I only have the beer part of this equation figured out.