Bio field too short. Ask me about myself/my beliefs/etc. if you want to know. Or just look at my post history.

  • 0 Posts
  • 6 Comments
Joined 2 years ago
Cake day: August 3rd, 2023

  • That new hire might eat resources, but they actually learn from their mistakes and gain experience. If you can’t hold on to them once they have experience, that’s a you problem. Be more capitalist and compete for their supply of talent; if you are not willing to pay for the real human, then you can have a shitty AI that will never grow beyond a ‘new hire.’

    The future problem, though, is that without the experience of being a junior dev, where do you think senior devs come from? You can’t fix crappy code if all you know how to do is engineer prompts for an AI that never grows past ‘new hire.’

    “For want of a nail,” no one knew how to do anything in 2030. Doctors were AI, Programmers were AI, Artists were AI, Teachers were AI, Students were AI, Politicians were AI. Humanity suffered and the world suffocated under the energy requirements of doing everything poorly.


  • I fully agree: companies and their leadership should be held accountable when they cut corners and disregard customer data security. The ideal solution would be a requirement that a company store no information beyond what is needed to provide the service, à la GDPR but with a much stricter limit. I would put “marketing” outside that boundary. As a YouTube user, the service needs literally nothing from you, maybe a username and password to retain history and inferred preferences; trying to collect info about me beyond that should be punished. If your company can’t survive without targeted content, your company should not survive.

    In bygone days, your car’s manufacturer didn’t know anything about you, and we still bought cars. Not to start a whole new thread, but this ties into right-to-repair and subscriptions for features as well. I did not buy a license to the car, I bought the fucking car; a license to use a car is called a lease.


  • I understand what you are saying, and what you want… but admitting fault publicly is a huge liability, as they have then stated it was their negligence that caused the issue. (bear with me and read this wall of text – or skip to the last paragraph)

    I’ve worked in the SecOps space, and it’s an arms race all the time. There are tools to help identify issues and breaches quickly, but the attack surface just isn’t something that can be 100% managed. Even if you know there is a problem, you probably have to file an issue for a developer team to update their dependency; then they might need to change their code as well, get a code review approved, and get a window to promote to production. A zero-day vulnerability is not something you can anticipate.

    You’ve seen the XKCD of the software stack where a tiny peg props up the whole thing? The same idea applies to security, except the tiny peg is a supply chain attack: some dependency is either vulnerable or compromised by malicious actors, who use it to gain access to your environment.

    Maybe your developers leverage the WidgetX1Z library for their app, and WidgetX1Z just updated with a change-log that looks reasonable, but the new code has a backdoor that lets an attacker compromise your developers’ machines. They now have a foothold in your environment even with rigorous controls. I’ve yet to meet a developer who didn’t need, or at least want, full admin rights on their box, so you now have an attacker with local admin inside your network. They might trip alarms, but by then the damage might be done: they were able to harvest the dev database of user accounts and send it back home. That dev database was probably a time-delayed copy of prod, so that the developer could be entirely sure there were no negative impacts from their changes.

    I’m not saying this is what happened to Plex, but the idea that modern companies even CAN fully control the data they have is crazy. Unless you are doing full, infallible code reviews of every third-party library and change, or writing everything in-house (which would be insane), you cannot fully protect against a breach. And even then I’m not sure.

    The real threat here is what data do companies collect about us? If all they have is a username, password and company-specific data, then the impact of a breach is not that big – you, as a consumer, should not re-use a password. When they collect tons of other information about us such as age, race, location, gender, sex, orientation, habits, preferences, contacts, partners, politics, etc, then those details become available for anyone willing to pay. We should use breach notifications like this to push for stronger data laws that prevent companies from collecting, storing, buying or selling personal data about their customers. It is literally impossible for a company to fully protect that information, so it should not be allowed.
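    The supply-chain scenario above can be sketched in a few lines. This is my own minimal illustration (the WidgetX1Z name and the artifact bytes are invented): pin the hash of a known-good release and refuse anything that doesn’t match. Note this only catches tampering after you pinned; a backdoored release that you pin in good faith passes the check, which is exactly why these attacks are so hard to stop.

```python
import hashlib

# Hypothetical pinned hash of a known-good WidgetX1Z release.
# (WidgetX1Z and these bytes are invented for illustration.)
GOOD_RELEASE = b"widgetx1z-1.2.3 release bytes"
PINNED_SHA256 = hashlib.sha256(GOOD_RELEASE).hexdigest()

def verify_artifact(data: bytes, pinned: str) -> bool:
    """Accept the artifact only if its SHA-256 matches the pinned value."""
    return hashlib.sha256(data).hexdigest() == pinned

# The release you pinned passes; a tampered copy does not.
print(verify_artifact(GOOD_RELEASE, PINNED_SHA256))                   # True
print(verify_artifact(GOOD_RELEASE + b" + backdoor", PINNED_SHA256))  # False
```

    Tools like pip’s `--require-hashes` mode and lockfiles do essentially this at install time.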


  • Full agree. It’s scary. These companies have collected enough data on us all – sometimes (often?) through things we didn’t directly use and thus never accepted any T&C for, such as surveillance cameras in a business or on a public street – that they can predict our actions and moods and make inferences about our lives.

    They have been doing this for YEARS, and they are constantly getting better. They don’t even need health data, but I can guarantee they want it. I remember noticing that we had a phase where my wife was being advertised baby products on her streaming service. We were not having another child, but the timing was eerily close to the interval between #1 and #2. I actually just had a hesitation about divulging that I have 2 kids, but then said fuck it, they already know.

    Add to all that the ‘for the children’ angle, which I’ve always hated. It’s such a transparent lie that anyone with a lick of common sense can see through it. For anyone even on the fence, this is the foot in the door: Allow them the ability to track you ‘for the children’ and they will track you for the corporation as well, and the government, and your ex-boyfriend who is now a cop.

    Fight this shit.


  • It’s almost like the privacy alarmists, who have been screaming for decades, were on to something.

    Some people saw the beginning of Minority Report and thought, ‘that sounds like a good idea.’

    We used to be in a world where it was unfeasible to watch everyone, and you could get away with small ‘crimes’ like complaining about the president online, because it was impossible to get a warrant for surveillance without any evidence. Now we have systems like Flock cameras, ChatGPT and others that generate alerts to law enforcement on their own, subverting the need for a warrant in the first place. And more and more frequently, you can’t opt out and are unable to avoid them.

    For now, the crime might be driving a car with a license plate flagged as stolen (or one where OCR misreads a number), but it only takes a tiny nudge further toward fascism before you can be auto-SWATted because the sentiment of your draft SMS is determined to be negative toward the Führer.

    Even now, I’m sacrificing myself to warn people. This message is on the internet and federated to multiple instances; anyone with enough resources could identify me from it. Once it’s too late, I’ll already be on the list of people to remove.


  • Like many things, a tool is only as smart as the wielder. A ton of critical thinking still needs to happen even when you do something as simple as bake bread. Using an AI tool to suggest ingredients can be useful from a creative perspective, but should not be assumed accurate at face value. Raisins and dill? Maybe ¯\_(ツ)_/¯, haven’t tried that one myself.

    I like AI for being able to add detail to things or act as a muse, but it cannot be trusted for anything important. This is why I’m ‘anti-AI’: too many people (especially in leadership roles) see this tool as a solution for replacing expensive humans with something that ‘does the thinking’; but as we’ve seen elsewhere in this thread, AI CAN’T THINK. It only suggests items that are statistically likely to be next/near based on its input.

    In the Security Operations space, we have a phrase: ‘trust, but verify.’ For anything AI, I would use ‘doubt, then verify’ instead. That all said, AI might very well give you a pointer to the right place to ask how much Motrin an infant should get. Hopefully, that’s your local pediatrician.
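    The ‘statistically likely to be next’ point above can be made concrete with a toy sketch (my own illustration; real LLMs are vastly more sophisticated, but the principle is the same): a bigram model counts which word followed which in its training text and samples continuations from those counts. Nowhere in it is there a step that checks whether the suggestion is true.

```python
import random
from collections import defaultdict

# Toy "suggest what's statistically likely to be next" model.
# The corpus is invented; counts, not understanding, drive the output.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)   # record every observed continuation

def suggest_next(word: str) -> str:
    """Sample a continuation weighted by how often it followed `word`."""
    return random.choice(follows[word])

# "the" was followed by cat (twice), mat, and fish -- so the suggestion
# is drawn from those, with no notion of which one is "correct".
print(suggest_next("the"))
```

    Scaling the counts up to billions of parameters makes the suggestions far better, but it never adds the missing verification step – hence ‘doubt, then verify.’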