

ah I see. I misunderstood - when you said “I’d rather pick what is actually true”, you meant you’d pick a story you like and call it truth. Yes that’s also an option, why not.
What’s that?
Christians worship Christ, while Catholics worship … er … Cath
I guess there isn’t a point given to you by someone/something else, but you’re free to pick one of your own if you want. Or not. Ultimately we just have our evolved desire to survive and see our loved ones do well.
Do you think that if the people who publish their work publicly didn't research things like this, those things would just never be discovered?
At least this way, we all know about the possibility, and further research can be done to see what can mitigate it.
well obviously it won’t, that’s why you need ethical output restrictions
In case you haven’t seen it, the paper is here - https://machinelearning.apple.com/research/illusion-of-thinking (PDF linked on the left).
The puzzles the researchers have chosen are spatial and logical reasoning puzzles - so certainly not the natural domain of LLMs. Unfortunately the paper doesn't give a clear definition of reasoning; I'd surmise it as "analysing a scenario and extracting rules that allow you to achieve a desired outcome".
They also don't provide the prompts they used - not even for the cases where they say they gave the algorithm in the prompt - which makes that aspect less convincing to me.
What I did find noteworthy was how the models were able to provide around 100 steps correctly for larger Tower of Hanoi problems, but only 4 or 5 correct steps for larger River Crossing problems. I think the River Crossing problem is like the one where a boatman wants to get a fox, a chicken and a bag of rice across a river, but the boat can only carry him and one item at a time? In any case, the researchers suggest this could be because there are plenty of worked examples of Tower of Hanoi with larger numbers of disks, but far fewer examples of River Crossing with many more than the typical number of items being ferried across. They take this as further evidence that the LLMs (and LRMs) are merely recalling examples they've seen, rather than genuinely working the puzzles out.
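For context, here's my own sketch of the classic recursive Tower of Hanoi solution (not code from the paper, and the function name is just mine) - presumably "providing the algorithm in the prompt" means spelling out a recursion like this. The point is that the solution is tiny, but the number of moves it produces is 2^n - 1, so larger instances need hundreds or thousands of correct steps in a row:

```python
# Classic recursive Tower of Hanoi (illustrative sketch, not the paper's code).
# Moving n disks takes 2**n - 1 moves, so the required solution length grows
# exponentially with instance size.

def hanoi(n, source, target, spare, moves):
    """Append the moves needed to shift n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # park the top n-1 disks on the spare peg
    moves.append((source, target))              # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)  # stack the n-1 disks back on top of it

moves = []
hanoi(7, "A", "C", "B", moves)
print(len(moves))  # 127 moves for 7 disks; 10 disks already need 1023
```

So a model reeling off ~100 correct Hanoi moves is still only covering fairly small instances, which fits with the recall-versus-reasoning interpretation above.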
I think it's an easy mistake to confuse sentience and intelligence. It happens in Hollywood all the time - "Skynet began learning at a geometric rate; on July 23 2004 it became self-aware", yadda yadda.
But that's not how sentience works. We don't have to be as intelligent as Skynet supposedly was in order to be sentient. We don't start our lives as unthinking robots and then one day - once we've finally got a handle on calculus, or a deep enough understanding of the causes of the fall of the Roman empire - suddenly blink into consciousness. On the contrary, even the stupidest humans are accepted as being sentient. Even a young child, not yet able to walk or do anything more than vomit on their parents' new sofa, is considered a conscious individual.
So there is no reason to think that AI - whenever it is achieved, if ever - will be any more conscious than the dumb computers that preceded it.
I'm convinced from the evidence that God doesn't care what we believe. If there even is such a thing - and I don't think that question can be meaningfully answered - there are many better ways to communicate vital information to people on Earth than choosing someone to relay your messages in a way that is indistinguishable from a mentally ill person who just hears voices in their head.