

Just pull the fuse for the OnStar radio. It can log all it wants locally.
https://rhizomehouse.org/mutualaid/ is a good list of people you can help.
https://nlgmn.org/mass-defense/
https://www.wfmn.org/funds/immigrant-rapid-response/
A high-level overview of how to organize a rapid response, with links to more detailed resources: https://southerncoalition.org/resources/rapid-response-101/
Good general advice on organizing, and also a good way to find aligned groups near you: https://www.fiftyfifty.one/organizer-resources
Feel free to reach out for any other resources.


squints at your username


I am not saying this will work for you, but I am leaving it here for others as it does work for me when doing mass edits.


Curious to see if another LeakBase will pop up around this. I’m already hearing rumors that a lot of it was AI training data but that’s unfounded squiddy speak on social media.


Yeah, I don’t sort or tag with Darktable, I only edit.


Why not just use Darktable?


Darktable is my go to.


“Googling” used to get you to the needed IRS documentation, but now, with the help of Gemini you’re just being lied to.
If you need tax help, call your local library; many offer free tax assistance. Also, if it seems like a tax dodge, don’t take the deduction. Don’t outsource your brain to an LLM. You’ve done your taxes before without a GPT; you can do it again.


Have fun with that audit.


The problem is message previews, not push notifications. Which is funny, because Meredith addresses that in the thread you posted.


Message preview notifications are handled similarly in iOS and Android. The issue isn’t people seeing the notification; it’s that the content of the message is passed to the phone’s launcher unencrypted.


Y u make Tux into bad guy? Tux just want people compute. Tux is friend shaped.


Always buy refurbished laptops, including MacBooks.


You scoff, but this is already being done in China: they desolder good chips from bad cards and add them to a mule card.


Almost like an LLM wrote it…


I mean, what you’re proposing was the initial push behind GPT-3. All the experts said these GPTs would only hallucinate more with more resources, and would never do anything more than repeat their training data as word salad posing as novelty. And on a very macro scale, they were correct.
The scaling problem:
https://arxiv.org/abs/2001.08361
The scaling hype:
https://gwern.net/scaling-hypothesis
Ultimately, hype won out.
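To make concrete what that scaling-laws paper actually claims, here’s a rough sketch of its parameter-count term, using the approximate fitted constants from Kaplan et al. (2020). Treat the numbers as ballpark illustrations, not authoritative predictions:

```python
# Rough sketch of the Kaplan et al. (2020) parameter scaling law:
#   L(N) ~= (N_c / N) ** alpha_N
# where N is non-embedding parameter count. The constants below are the
# approximate fitted values reported in the paper.
ALPHA_N = 0.076   # fitted exponent for model size
N_C = 8.8e13      # fitted "critical" parameter count

def loss_from_params(n_params: float) -> float:
    """Predicted cross-entropy loss from parameter count alone."""
    return (N_C / n_params) ** ALPHA_N

for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss {loss_from_params(n):.3f}")
```

Note what the curve does and doesn’t say: loss keeps falling smoothly as you add parameters, but nothing in it promises new capabilities, only a slowly shrinking power law. That gap is where the hype lived.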


will never achieve AGI or anything like it
On this we absolutely agree. I’m targeting a more efficient interactive wiki, essentially. Something you could package and run on local consumer hardware. Similar to this https://codeberg.org/BobbyLLM/llama-conductor but it would be fully transformer-native, and there would only need to be one LLM for interaction with the end user. Everything else would be done in machine code behind the scenes.
I was unclear, I guess. I was talking about injecting other models, running their prediction pipeline for the specific topic, and then dropping them out of the window to be replaced by another expert, with that swapping handled by a larger model that runs the context window. Not nested models, but interchangeable ones, selected by the vector of the tokens. So a QwQ RAG trained on Python talking to a Qwen3 quant4 RAG trained on bash, wrapped in DeepSeek-R1 as the natural-language output to answer the prompt “How do I best package a python app with uv on a linux server to run a backend for a …”
Currently this type of workflow is often handled with MCP servers from some sort of harness, and as I understand it those still pass natural language between steps, since they are all separate models. My proposal takes the stagnation in the field and turns it into interoperability.
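The dispatch idea above can be sketched in a few lines. Everything here is a toy stand-in, not a real implementation: the “experts” are plain functions standing in for RAG-tuned models, the topic vectors are hand-made rather than real token embeddings, and the registry names are hypothetical. It just shows the shape of vector-based expert swapping under one coordinator:

```python
# Toy sketch: route a prompt to an interchangeable "expert" by vector similarity.
# Experts are plain functions standing in for swappable RAG-tuned models;
# the topic vectors are hand-made, not real embeddings.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical expert registry: name -> (topic vector, handler)
EXPERTS = {
    "python-packaging": ((1.0, 0.1, 0.0), lambda p: f"[python expert] {p}"),
    "bash-ops":         ((0.1, 1.0, 0.0), lambda p: f"[bash expert] {p}"),
}

def route(prompt_vec, prompt):
    """Pick the expert whose topic vector is closest to the prompt vector
    and run it; a single output model would then render the result as
    natural language for the user."""
    name, (_, handler) = max(
        EXPERTS.items(), key=lambda kv: cosine(prompt_vec, kv[1][0])
    )
    return name, handler(prompt)

name, answer = route((0.9, 0.2, 0.0), "package a python app with uv")
print(name, "->", answer)
```

The point of the sketch is that only the final handler output would ever be turned into natural language; the routing itself runs on vectors, which is the interoperability being proposed.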


Ah, I see. You do bring up another point, though: I really think we need a true collection of experts able to communicate without natural language, plus a “translation” layer to output natural language or images to the user. A larger parameter budget would allow the injection of experts into the pipeline.
Thanks for the clarification, and also for the idea. I think one thing we can all agree on is that the field is expanding faster than any billionaire or company understands.
My Bolt EUV doesn’t transmit after I pulled the fuse.