• 0 Posts
  • 16 Comments
Joined 2 months ago
Cake day: June 28th, 2025





  • I keep hearing stuff like this, but I haven’t found a good use or workflow for AI (other than occasional chatbot sessions). Regular autocomplete is more accurate (no hallucinations) and faster than AI suggestions (especially once you account for constantly reviewing the suggestions for correctness). I guess stuff like Cursor is OK at making one-off tools on very small code bases, but it hits a brick wall when the code base gets too big. Then you’re left with a bunch of unmaintainable code you’re not very familiar with, and you’d have to spend a lot of time trying to fix it yourself. Dunno if I’m doing something wrong or what.

    I guess what I’m saying is that using AI can speed you up to a point, while the project accumulates massive amounts of technical debt, and once you take into account all the refactoring and debugging time, it ends up taking longer and producing a buggier project. At least, in my experience.



  • I’ve tried Copilot for a while and played around with Cursor for a bit. I was better and faster without Copilot, because I sometimes didn’t pay enough attention to the lines it would generate, which caused subtle bugs that took a long time to debug. Cursor just produced unmaintainable code bases that I had no knowledge of, and to make major changes it would have been faster for me to just rewrite them from scratch. The act of typing gives me time to think more about what I’m doing or am about to do, while Copilot generations are distracting and break my thought process. I work best with good LSP tooling and, sometimes, AI chatbots that don’t directly modify my code (mostly just for customized example snippets for libraries or frameworks I’m unfamiliar with; though that has its own problems, because the LLM’s knowledge is often out of date).


  • Weird, I’ve been forced to use a Mac for work and never liked it. I prefer Debian or other non-rolling-release distros with long-term support, and I haven’t had a Linux install get messed up in many years (not since I used Arch and something went wrong with my proprietary Nvidia drivers after an update).



  • They are black boxes, and can even use the same NN architectures as the generative models (variations of transformers). They’re just not trained to be general-purpose all-in-one solutions, and they have much more well-defined and constrained objectives, so it’s easier to evaluate how they’ll perform in the real world (unforeseen deficiencies and unexpected failure modes are still a problem, though).
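
    To illustrate (rough PyTorch sketch; the model dims, data, and task here are all made up, not anyone’s real setup): the same kind of transformer backbone a generative model might use, but with a narrow classification head whose output you can score directly against labels, instead of trying to judge open-ended generations.

    ```python
    import torch
    import torch.nn as nn

    # Same transformer building blocks a generative model might use...
    encoder_layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
    backbone = nn.TransformerEncoder(encoder_layer, num_layers=2)
    embed = nn.Embedding(1000, 64)        # toy vocab of 1000 tokens
    classifier = nn.Linear(64, 2)         # ...but a constrained two-class objective

    # Fake batch: 8 sequences of 16 token ids, with binary labels
    tokens = torch.randint(0, 1000, (8, 16))
    labels = torch.randint(0, 2, (8,))

    features = backbone(embed(tokens)).mean(dim=1)   # pool over the sequence
    logits = classifier(features)
    loss = nn.functional.cross_entropy(logits, labels)

    # Evaluation reduces to "did it pick the right class", which is easy to measure
    accuracy = (logits.argmax(dim=1) == labels).float().mean()
    print(loss.item(), accuracy.item())
    ```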


  • Kind of a nitpick, but the CEO wasn’t a billionaire. It’s also kind of an important distinction, because it’s not necessarily the wealth itself that’s the main problem, but how the owner class/bourgeoisie obtain their wealth/income. A slumlord worth less than a million is arguably as morally wrong as a Blackstone CEO (one obviously has more wealth/power/impact, though). The evidence of owner-class solidarity and government capture/corruption is also important. Rashid, being a politician, is likely trying not to alienate his millionaire donors.





  • Yeah, they probably wouldn’t think like humans or animals, but in some sense they could be considered “conscious” (which isn’t well-defined anyway). You could speculate that genAI could hide messages in its output, which would make their way onto the Internet, and then a new version of itself would be trained on them.

    This argument seems weak to me:

    So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

    You can emulate inputs and simplified versions of hormone systems (toy sketch at the end of this comment). “Reasoning” models can kind of be thought of as a form of cognition, though it’s temporary and limited by context as it’s currently done.

    I’m not in the camp that thinks it’s impossible to create AGI or ASI. But I also think there are major breakthroughs that need to happen, which may take 5 years or hundreds of years. I’m not convinced we’re near the point where AI can significantly speed up AI research like that link suggests; that would likely result in a “singularity-like” scenario.

    I do agree with his point that anthropomorphism of AI could be dangerous though. Current media and institutions already try to control the conversation and how people think, and I can see futures where AI could be used by those in power to do this more effectively.
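
    Toy sketch of what I mean by emulating a simplified hormone system (purely illustrative, nothing rigorous): a scalar drive that builds up over time, biases which action gets picked, and decays when the agent acts on it.

    ```python
    import random

    # Toy "hormone"/drive loop: hunger rises each tick and is satisfied when the
    # agent eats; the drive level biases the action choice. Purely illustrative.
    hunger = 0.0
    for step in range(20):
        hunger = min(1.0, hunger + 0.1)      # drive builds up over time
        p_eat = hunger ** 2                  # stronger drive -> more likely to act on it
        action = "eat" if random.random() < p_eat else "explore"
        if action == "eat":
            hunger *= 0.2                    # acting on the drive satisfies it
        print(f"step={step:2d} hunger={hunger:.2f} action={action}")
    ```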