

Same, I tend to think of LLMs as a very primitive version of that, or of the Enterprise's computer, which is pretty magical in ability but which no one claims is actually intelligent.
Buttons and shorts? Yeah, I also like watching bottomless people sometimes.
Yeah, I was thinking about production code when I wrote that. Usually I can get something working faster that way, and for tests it can speed things up too. But the code is so terrible in general.
Edit: production isn't exactly what I was thinking. Just, like, up to some standard above merely working.
This is close to my experience for a lot of tasks, but unless I'm working in a tech stack I'm unfamiliar with, I find doing it myself leads to not just better results but faster ones, too. The problem is that it makes you work harder to learn new areas, and management thinks it's faster for everything.
With Gemini I have had several instances of the referenced article saying nothing like what the LLM summarized. I.e., the LLM tried to answer my question and threw up a website on the general topic with no bearing on the actual question.
Yeah, I don’t want to be assimilated.
For example, say some billionaire owns a company that creates the most advanced AI yet. It's a big competitive advantage, but other companies are not far behind. So the company works to give the AI a base goal of improving AI systems to maintain that competitive advantage. Maybe that goal becomes inherent to it moving forward.
As I said, it’s a big if, and I was only really speculating as to what would happen after that point, not if that were the most likely scenario.
I think it's pretty inevitable if it has a strong enough goal for survival or growth; in either case humans would be a genuine impediment/threat long term. But those are pretty big ifs as far as I can see.
My guess is we'd see manipulation of humans via monetary means to meet its goals until it reached a sufficient state of power/self-sufficiency, and humans are too selfish and greedy for that not to work.
I'm talking about models printing out the component letters first, not just printing out the full word, as in "S - T - R - A - W - B - E - R - R - Y", and then getting the answer wrong. You're absolutely right that it reads in whole words at a time encoded as vectors, but if it's holding a relationship from that encoding to the component spelling, which it seems it must be given that it outputs the letters individually, then something else is wrong. I'm not saying all models fail this way, and I'm sure many fail in exactly the way you describe, but I have seen this failure mode (which is what I was trying to describe), and in that case an alternate explanation would be necessary.
I don't think that's the full explanation though, because there are examples of models that will correctly spell out the word first (i.e., they do know the component letter tokens) and still miscount the letters after doing so.
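For anyone curious what "reads in words encoded as vectors" looks like in practice, here's a minimal sketch using OpenAI's tiktoken tokenizer (my choice of library for illustration, not something anyone above mentioned). It just shows that the model's input is a handful of subword chunks rather than ten separate letters, which is why letter counting has to go through a learned token-to-spelling mapping instead of direct character access:

```python
# Sketch of why letter-level questions are awkward for token-based models.
# Assumes the tiktoken library is installed: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
word = "strawberry"

# The model never "sees" individual characters; it sees subword token ids.
token_ids = enc.encode(word)
pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in token_ids]
print(pieces)  # a few subword chunks, not ten separate letters

# Ordinary code can count the letters directly from the string...
print(word.count("r"))  # 3

# ...but the model has to recover that count from whatever association it
# learned between subword tokens and their spellings, which is where it
# can spell the word out correctly and still miscount.
```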
My cars are old and don’t have any of this, and my one experience in a rental car with lane keeping assist was that it pushed me towards a highway barrier in construction where the original lane lines weren’t in use. Terrifying.
This is exactly the problem I have with programming tasks. It takes as long to check the code for problems (of which there are always many) as it would to write it, the code isn't as good as mine anyway, and not infrequently it's just wholesale wrong.
For things like translating between languages it's usually close, but it still takes just as long to check as it would to do by hand.
I have to use it for work by mandate, and overall I hate it. Sometimes it can speed up certain aspects of development, especially if the domain is new or the project is small, but these gains are temporary. They steal time from the learning that I would be doing during development and push it back to later in the process, and they are nowhere near good enough to make it so that I never have to do the learning at all.