
Nice that watches ranging from £16 to £200 already support USB-C. My two Garmin watches each have their own proprietary charging/data port, and the two aren't even compatible with each other.
To me, it’s too late for NIST there. China is driving the agenda in AI now, because the US took too long to get organized.
> Second is the rise of AI-powered systems that depend on fast, reliable access to edge or cloud-based intelligence.
I’m sorry… what?
Is that just word salad? I’m not seeing “AI” as being anything but an excuse there. On the cloud side, AI involves server farms with physical interconnects. Same for endpoint AI, and edge server AI.
Are they saying that accessing these systems depends on fast, reliable access? Like, faster and more reliable than using Google from your web browser over the past 20 years?
The whole point of ML systems is that all the heavy, speed-dependent compute happens somewhere with dedicated bandwidth to handle it, and the interface can be slower and lossier because the service can take more steps without guidance.
I don’t know how Duckstation does it, but Retroarch cores (Beetle/Mednafen and PCSX) support widescreen?
Well, I fail to see what makes it better than Freenet. Which ended up becoming exactly that.
Heh; I remember when Candy Crush was just one guy and an Apple Developer account.
Yeah; allies still care because of the US military-industrial complex. Compromising the US still compromises a large chunk of the world, making things even worse for everyone than the current US administration can do on its own.
The one thing I’m continually annoyed about though is battery management.
Why, in this day and age, do we not have a smartphone that can last on a single charge for a week? Instead, after a year or two of use, the devices with a glued in battery can barely last 8 hours on a charge.
Doesn’t seem all that smart.
There’s also the fact that
Now, these things could both change over time, but humans are much more efficient to train than the current state-of-the-art probability sieves we call GenAI.
I don’t miss spending hours trying to get a slot on the modem pool.
But I’m still happy to while away a few hours on mume.org or some random Diku server.
Facebook was never fine; it just wasn’t a silo effect at first—but it was still a privacy and security nightmare.
I remember cliques and a lack of online monoculture on Usenet and IRC before the World Wide Web even existed; the web exploded things even further, as did the privatization of DNS and takeover of funding by VCs and ad conglomerates. All that had happened by 1998.
When was this?
Asking as someone who’s been on the Internet since 1989.
> For decades, many computer scientists have presumed that for practical purposes, the outputs of good hash functions are generally indistinguishable from genuine randomness — an assumption they call the random oracle model.
Er, no. The falsity of this is taught in virtually all first year CS courses.
Computer programmers and other IT workers? Sure… but hash functions have never been considered a substitute for pure randomness.
That’s why each computer has a random number generator fed by thermal variance, I/O timing, and other genuinely random physical sources. And even then, we have to be careful not to hash the randomness out of the source data.
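To illustrate the distinction, here's a rough Python sketch (the input string is made up): hashing a known value produces output that looks random but is identical on every run, while the OS entropy pool is fed by those physical sources.

    # Hashing a predictable input: the output *looks* random but is fully determined.
    import hashlib
    import os

    digest = hashlib.sha256(b"user-42:session-7").hexdigest()
    print(digest)            # same on every run, on every machine

    # os.urandom() draws from the kernel's entropy pool (hardware RNG, interrupt timing, etc.).
    entropy = os.urandom(32).hex()
    print(entropy)           # different on every run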
And those status reports will be generated by AI, because that’s where the real savings is.
So you treated it like a junior developer and did a thorough review of its output.
I think the only disagreement here is on the semantics.
Yeah, I’ve added AI to my review process. Sure, things take a bit longer, but the end result has been reviewed by me AND compared against a large body of code in the training data.
It regularly catches stuff I miss on a first review, or that I wave through because of context that shouldn't matter (e.g., how reliable I think the person who wrote the code is).
I’ve had success with:
Essentially, all the stuff that I’d need to review anyway, but use of AI means that actually generating the content can be done in a consistent manner that I don’t have to think about. I don’t let it create anything, just transform things in blocks that I can quickly review for correctness and appropriateness. Kind of like getting a junior programmer to do something for me.
There’s only one way to solve all diseases.
Did they test this on Mars first?
OK, I stopped posting on Reddit but left my account and comments in place because I considered them part of the public record. If Reddit is taking that record private, it’s time for me to start removing my content from the platform.
Does anyone know if historical Reddit content will remain in IA? If not, I’m going to have to back up years of content somewhere else.
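If it comes to that, something like this rough PRAW sketch is probably where I'd start (the credentials and output file name are placeholders, and Reddit's listings only return roughly the most recent 1000 items per feed):

    # Dump my own Reddit comments to JSON (placeholder credentials).
    import json
    import praw

    reddit = praw.Reddit(
        client_id="CLIENT_ID",          # placeholder
        client_secret="CLIENT_SECRET",  # placeholder
        username="USERNAME",
        password="PASSWORD",
        user_agent="comment-backup/0.1",
    )

    backup = []
    for comment in reddit.user.me().comments.new(limit=None):
        backup.append({
            "created_utc": comment.created_utc,
            "permalink": comment.permalink,
            "body": comment.body,
        })

    with open("reddit_comments.json", "w") as f:
        json.dump(backup, f, indent=2)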