

If we are going to eschew open source projects from shitty tech companies, then there’s a pretty long list.




Quite an interesting development and here’s hoping this makes it to production.


Cool, let me know when the model leaks. 🥱


I know people who lied about having a degree, could do the job, and never got caught. I suppose speed running a degree from a degree mill yields a similar level of education, except with a piece of paper.


I should not have laughed so hard at that. I’m a horrible person.


I decided against Backblaze for server backups because they charge for certain API calls, and I exceeded the quota while testing on the free tier. I was experimenting with encrypted backups and I'm not sure how I exceeded it, but it really put me off that I could get a surprise bill from experimenting without even exceeding my storage quota. I went with iDrive e2 specifically because they don't have API fees, and it has worked fine for the last couple of years. My storage utilization has grown and I've been charged extra, which is expected, whereas API calls would be harder to predict depending on what I do in a given month. For self-hosting, I want easy, predictable pricing and no surprise bills. It's enough of a chore to manage cloud spend at work without it being a headache at home too.


I would’ve assumed it would be made from plexiglass.


Just watched the video. It’s hilarious that it breaks the glass, pauses for a few awkward moments, bats its eyes, backs up, then just sits there batting its eyes.



There’s a kid’s book called Positive Ninja where the advice is to reframe situations using the word yet. As in, I haven’t been successful in accomplishing this yet. With this kind of positive thinking going around, those robots better have a care. 😉


Yawn, we all know how this goes. So what model am I not supposed to use? I'll be sure to avoid it, though I'd much rather avoid downloading their leaked weights like I avoid other things I'm not supposed to download.


Clearly SaaS isn’t working out, so just open source all the frontier models and stop building data centers so we can all buy our own GPUs.


I can see where an AI that fucks everything up all the time might be entertaining like a good slapstick comedy, but nah, Resident Evil Requiem is sufficient entertainment for now.


I didn’t realize TSLA stock had an upward trend for most of last year and only started heading downhill this year.
Interesting, I’ve only ever had this issue in Mint as well.
My Mint laptop’s audio stopped working for a couple of months and then miraculously fixed itself this week. I made various attempts to fix it, with no luck, so it’s either a hardware issue or some obscure software issue.
In the past, I had plugged an HDMI cable in to mirror the screen and couldn’t get the audio working again afterward until I reconnected HDMI, switched the output back to the internal speakers, and then unplugged the cable. Before the audio broke this time, I had connected a USB microphone, so it’s possible that’s what did it.
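For anyone stuck in the same state, a stuck output device can sometimes be un-stuck from the command line instead of replugging cables; a rough sketch (the sink name below is an example, yours will differ, and the PipeWire restart only applies on releases that ship PipeWire):

```shell
# List the available output devices; the internal speakers usually show
# up as something like alsa_output.pci-0000_00_1f.3.analog-stereo
pactl list short sinks

# Force the default output back to the internal speakers
# (replace the sink name with whatever the list above shows)
pactl set-default-sink alsa_output.pci-0000_00_1f.3.analog-stereo

# On newer Mint releases that ship PipeWire, restarting the user-level
# audio services often clears a stuck audio state without a reboot
systemctl --user restart pipewire pipewire-pulse wireplumber
```

No guarantee it would have fixed this particular case, but it's cheaper to try than waiting a couple of months for it to fix itself.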


Reading Tesla workers shared images from car cameras, including “scenes of intimacy” was enough to put me off Tesla. The build quality being garbage and Musk also being a garbage human being make the company just a complete waste of space.


The coal miners were told to learn to code, but maybe software engineers should be learning coal mining at this point. With all the AI data centers being built, there sure is going to be a lot of demand for coal.
Just as open-weight models are getting good. Qwen 3.6 27B just dropped with claimed performance approaching Opus 4.6, but it can run on a Mac with an M-series SoC. I tested it today on an M4 Pro with Ollama and Cline and was impressed with its reasoning, but it was slow. Going to try llama.cpp tomorrow and mess around tweaking it for speed.
https://ai.rs/ai-developer/qwen-3-6-27b-local-coding-model
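If anyone wants to try the same setup, something like this is the rough shape of it (the Ollama model tag and the GGUF filename are guesses, check the model library for the actual names; the llama.cpp flags are real):

```shell
# Pull and run the model with Ollama (model tag is an assumption;
# check the Ollama library for the actual name)
ollama pull qwen3.6:27b
ollama run qwen3.6:27b "Write a binary search in Rust"

# With llama.cpp, offloading all layers to the GPU (-ngl 99) and tuning
# the thread count is usually where the speed comes from on Apple Silicon
./llama-cli -m qwen3.6-27b-q4_k_m.gguf -ngl 99 --threads 8 \
  -p "Write a binary search in Rust"
```

llama.cpp with a well-chosen quant is usually noticeably faster than Ollama's defaults on the same hardware, so it's worth the extra setup.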
AI coding agents are useful, but it’s time for the cloud-based models to chill out so we can get cheap RAM again to run our shit locally.