

Yeah, good point.
Ooh, good point. Well, they’re both going to ship from the factory in nonworking condition, so that’ll be tough to tell.
Absolutely gonna be “made in America” in that the application of the cheap Chinese gold decal to the cheap Chinese handset will be done in America.
It’s definitely going to be the Escobar Phone all over again. Anyone who accidentally receives one will get a foil-wrapped $150 Huawei handset with a preinstalled background image.
Easier than spotting the Cybertrucks?
So…tries and fails? 😛
Mozilla! Stop doing stupid stuff!
Well, now you know otherwise. I use it daily.
Nah, it’s completely different from bookmarks. But obviously there’s no sense trying to sell anyone on it anymore.
Honestly a lot of the issues result from null results existing only in the gaps between information (unanswered questions, questions closed as unanswerable, searches that return no results, etc.), and thus being nonexistent in training data. Models are therefore predisposed toward giving an answer of any kind, and if one doesn’t exist they’ll “make one up.”
Which is itself a misnomer, because it can’t look for an answer and then decide to make one up when it can’t find one. It just gives an answer that sounds plausible, and if the correct answer is in its training data, that’s usually the one that will seem most plausible.
“Unintentionally” is the wrong word, because it attributes the intent to the model rather than the people who designed it.
You misunderstand me. I don’t mean that the model has any intent at all. Model designers have no intent to misinform: they designed a machine that produces answers.
True answers or false answers, a neural network is designed to produce an output. Because a null result (“there is no answer to that question”) is very, very rare online, the training data barely contains any, meaning a GPT will almost invariably produce some answer; if a true answer doesn’t exist in its training data, it will simply make one up.
But the designers didn’t intend for it to reproduce misinformation. They intended it to give answers. If a model is trained with the intent to misinform, it will be very, very good at it indeed, because the only training data it will need is literally everything except the correct answer.
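To make that concrete, here’s a toy sketch in TypeScript (the vocabulary and logits are invented, and a real model has billions of parameters, but the output layer works the same way): a softmax always yields a full probability distribution over tokens, so something always gets sampled. There’s no built-in “I don’t know” unless the training data taught one.

```ts
// Toy sketch: the final layer of a language model is a softmax over
// next-token logits, and a softmax ALWAYS yields a distribution to
// sample from -- there is no built-in "no answer" output.
function softmax(logits: number[]): number[] {
  const max = Math.max(...logits); // subtract max for numerical stability
  const exps = logits.map((x) => Math.exp(x - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// Even near-uniform logits (the model "knows" nothing) still produce
// probabilities; greedy sampling picks a token regardless.
const vocab = ["yes", "no", "maybe", "42"]; // invented toy vocabulary
const logits = [0.01, 0.02, 0.0, 0.015];    // almost no signal at all
const probs = softmax(logits);
console.log(vocab[probs.indexOf(Math.max(...probs))]); // "no" -- an answer anyway
```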
Sure, but unintentionally. I heard about a guy whose small business (which is just him) recently had someone call in, furious because ChatGPT had told her about a sale she couldn’t find. The customer didn’t believe him when he said the promotion didn’t exist. Once someone decides to leverage that and make a sufficiently popular AI model start giving bad information on purpose, things will escalate.
Even now, I think Elon could put a small company out of business if he wanted to, just by making Grok claim that its owner was a pedophile or something.
Samsung actually added Knox to their Android implementation a few months before iOS added Secure Enclave. I think Qualcomm had some sort of trusted execution environment around that time, too, if I recall correctly. And Google added Trusty to the AOSP two years ago. So it’s already running on Android, and has been for ages.
But I’m not convinced a TEE would be necessary for a device that doesn’t run any third-party native code. Browser tab sandboxing is already pretty robust; I haven’t heard of an escalation exploit in any major JavaScript engine in ages, meaning that the risks of data exfiltration or bootloader compromise are extremely remote, and would be much quicker (and less risky!) to patch via browser updates than firmware/OS updates.
The only other reason I know of that you’d need a TEE is for DRM, and I’d be willing to wager most people who would want a FirefoxOS phone would actively prefer not to have that on their device.
Honestly, I think the old FirefoxOS could do well these days. Literally everything an app can do can be done by a browser with a decent caching/local storage scheme. Slap a decent camera on that and it would be amazing.
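For anyone who hasn’t written one: the caching part really is that small. Here’s a rough sketch of an offline-first service worker (the file name, cache name, and asset list are all made up, not from any actual FirefoxOS code):

```ts
// sw.ts -- minimal offline-first service worker (all names hypothetical)
/// <reference lib="webworker" />
const sw = self as unknown as ServiceWorkerGlobalScope;

const CACHE = "app-shell-v1";
const ASSETS = ["/", "/index.html", "/app.js", "/app.css"];

// On install, pre-cache the app shell so it loads with no network at all.
sw.addEventListener("install", (event) => {
  event.waitUntil(caches.open(CACHE).then((c) => c.addAll(ASSETS)));
});

// Cache-first fetch: serve the stored copy, hit the network only on a miss.
sw.addEventListener("fetch", (event) => {
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```

That plus IndexedDB for app data covers most of what a native app actually does day to day.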
Probably. “We’ve investigated! No issues here!”