Damn. That is terrifying. Somebody should be slapping these on all of them.
I wouldn’t want to be near one of these cars.
The sad thing is that you might be marginally safer inside the thing. Maybe… a little anyway. At least inside it, you don’t have to worry about it running you or your kid over. These things should not be on the road in any capacity.
It’s also an article about another article from Variety that actually has a better headline. These things are a pet peeve for me: hey, here’s a story from an actual news service, and I’ll even include a link to it, but I’m going to post my link all over so people see the ads on my page instead of theirs. Variety does some good reporting; I’d rather they get the clicks.
TIL. The ones I’ve seen were all black.
I would expect black-out curtains to make the room hotter. Black absorbs the full spectrum of sunlight shining in, and the material heats up. I would think white curtains would reflect the visible light back out through the window and keep the room cooler.
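A back-of-the-envelope sketch of that reasoning, using made-up but plausible numbers (the insolation, absorptivity, and curtain area below are illustrative assumptions, not measurements):

```python
# Rough comparison of solar power absorbed by black vs. white curtains.
# Assumed values: ~600 W/m^2 of sunlight through a window, absorptivity
# ~0.95 for black fabric and ~0.20 for white fabric.
INSOLATION = 600.0  # W/m^2 reaching the curtain (assumed)

def absorbed_watts(absorptivity: float, area_m2: float = 1.5) -> float:
    """Watts of sunlight converted to heat inside the room."""
    return absorptivity * INSOLATION * area_m2

black = absorbed_watts(0.95)  # roughly 855 W, like running a space heater
white = absorbed_watts(0.20)  # roughly 180 W
print(black, white)
```

Under these assumptions the black curtain dumps several times as much heat into the room, which is the intuition behind preferring white (or white-backed) curtains in hot climates.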
Really bad headline. The actual article is about a study showing that browser fingerprinting is being used in real time to price and target the ads served to your browser.
To investigate whether websites are using fingerprinting data to track people, the researchers had to go beyond simply scanning websites for the presence of fingerprinting code. They developed a measurement framework called FPTrace, which assesses fingerprinting-based user tracking by analyzing how ad systems respond to changes in browser fingerprints. This approach is based on the insight that if browser fingerprinting influences tracking, altering fingerprints should affect advertiser bidding — where ad space is sold in real time based on the profile of the person viewing the website — and HTTP records — records of communication between a server and a browser.
“This kind of analysis lets us go beyond the surface,” said co-author Jimmy Dani, Saxena’s doctoral student. “We were able to detect not just the presence of fingerprinting, but whether it was being used to identify and target users — which is much harder to prove.”
The researchers found that tracking occurred even when users cleared or deleted cookies. The results showed notable differences in bid values and a decrease in HTTP records and syncing events when fingerprints were changed, suggesting an impact on targeting and tracking.
Additionally, some of these sites linked fingerprinting behavior to backend bidding processes — meaning fingerprint-based profiles were being used in real time, likely to tailor responses to users or pass along identifiers to third parties.
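To make the quoted methodology concrete: a fingerprint is just a stable ID derived from browser attributes, and the FPTrace-style experiment perturbs one attribute and watches whether ad systems treat you as a different user. Here is a minimal toy sketch of that idea (the attribute names and values are hypothetical, and this is not the researchers’ actual code):

```python
import hashlib
import json

def fingerprint(attrs: dict) -> str:
    """Derive a stable ID by hashing a canonical form of browser attributes."""
    canonical = json.dumps(attrs, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical baseline browser profile.
baseline = {
    "userAgent": "Mozilla/5.0 (X11; Linux x86_64)",
    "screen": "1920x1080",
    "timezone": "America/Chicago",
    "canvasHash": "a1b2c3",
}

# Perturb a single attribute, as the fingerprint-alteration experiments do.
# A tracker relying on this ID would now see a "new" user, which is why
# changed fingerprints shift advertiser bids and reduce syncing events.
perturbed = dict(baseline, canvasHash="d4e5f6")

print(fingerprint(baseline) == fingerprint(perturbed))  # False: the ID changed
```

Cookies never enter into it, which is why clearing them doesn’t stop this kind of tracking: the ID is recomputed from the browser itself on every visit.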
Aaaah! Act now! Hurry! Change ALL your passwords! Your password was stolen by malware on your device so change it now… on your device… that still has malware… Wait a minute. Shouldn’t this article at least suggest removing the malware first?
I bet the business address is the same as the one for his Trump watches, gold sneakers, and some “male enhancement honey” that you’d have to be very hard up (well, desperate) to try.
The Texas state legislature has passed a law making it illegal for cities to pass laws more restrictive than state law, and Austin is known to be full of progressives. That makes it a perfect place for Tesla to beta-test its software. They’ll kill people likely to vote for Democrats.
Sorry, misunderstood.
…it somehow dilutes the argument against AI ads.
I didn’t think it diluted the argument. They were just disagreeing with the prior poster. At the end, they even state:
But AI ads will make me never go back.
Yes, but I don’t think that’s relevant. Whether gross or net, they are still ruining lives to achieve a pointless profit motive.
Edit: relevant, not irrelevant
You don’t need $10 billion in revenue. You could just coast along and only hit, what, $9.8 billion? And then you wouldn’t have to ruin 500 people’s lives. I’m betting the CEO has a bonus scheduled if he hits this goal.
Yikes!
In that sense, Westgate explains, the bot dialogues are not unlike talk therapy, “which we know to be quite effective at helping people reframe their stories.” Critically, though, AI, “unlike a therapist, does not have the person’s best interests in mind, or a moral grounding or compass in what a ‘good story’ looks like,” she says. “A good therapist would not encourage a client to make sense of difficulties in their life by encouraging them to believe they have supernatural powers. Instead, they try to steer clients away from unhealthy narratives, and toward healthier ones. ChatGPT has no such constraints or concerns.”
This is a rather terrifying take. Particularly when combined with the earlier passage about the man who claimed that “AI helped him recover a repressed memory of a babysitter trying to drown him as a toddler.” Therapists have to be very careful because human memory is very plastic. It’s very easy to alter a memory; in fact, every time you remember something, you alter it just a little bit. Under questioning by an authority figure, such as a therapist, or a policeman if you were a witness to a crime, these alterations can be dramatic. This was a really big problem in the '80s and '90s.
Kaitlin Luna: Can you take us back to the early 1990s and you talk about the memory wars, so what was that time like and what was happening?
Elizabeth Loftus: Oh gee, well in the 1990s and even in maybe the late 80s we began to see an altogether more extreme kind of memory problem. Some patients were going into therapy maybe they had anxiety, or maybe they had an eating disorder, maybe they were depressed, and they would end up with a therapist who said something like well many people I’ve seen with your symptoms were sexually abused as a child. And they would begin these activities that would lead these patients to start to think they remembered years of brutalization that they had allegedly banished into the unconscious until this therapy made them aware of it. And in many instances these people sued their parents or got their former neighbors or doctors or teachers whatever prosecuted based on these claims of repressed memory. So the wars were really about whether people can take years of brutalization, banish it into the unconscious, be completely unaware that these things happen and then reliably recover all this information later, and that was what was so controversial and disputed.
Kaitlin Luna: And your work essentially refuted that, that it’s not necessarily possible or maybe brought up to light that this isn’t so.
Elizabeth Loftus: My work actually provided an alternative explanation. Where could these memory reports be coming from if this didn’t happen? So my work showed that you could plant very rich, detailed false memories in the minds of people. It didn’t mean that repressed memories did not exist, and repressed memories could still exist and false memories could still exist. But there really wasn’t any strong credible scientific support for this idea of massive repression, and yet so many families were destroyed by this, what I would say unsupported, claim.
The idea that chatbots are not only capable of this, but are currently manipulating people into believing they have recovered repressed memories of brutalization, is at least as terrifying to me as their convincing people that they are holy prophets.
Edited for clarity
But the AI people that the tech bros can now create outnumber real people by ♾️:1. The opinions of real people have ceased to matter even the tiny amount that they once did. So open wide and try not to gag.