I’m a #SoftwareDeveloper from #Switzerland. My languages are #Java, #CSharp, #Javascript, German, English, and #SwissGerman. I’m in the process of #LearningJapanese.

I like to make custom #UserScripts and #UserStyles to personalize my experience on the web. In terms of #Gaming, currently I’m mainly interested in #VintageStory and #HonkaiStarRail. I’m a big fan of #Modding.
I also watch #Anime and read #Manga.

#fedi22 (for fediverse.info)

  • 0 Posts
  • 17 Comments
Joined 1 year ago
Cake day: March 11th, 2024

  • Update 7/31/25 4:10pm PT: Hours after this article was published, OpenAI said it removed the feature from ChatGPT that allowed users to make their public conversations discoverable by search engines. The company says this was a short-lived experiment that ultimately “introduced too many opportunities for folks to accidentally share things they didn’t intend to.”

    Interesting, because the checkbox is still there for me. I don’t see that anything has changed; maybe they made the fine print slightly whiter, but nothing else.

    In general, this reminds me of the incognito drama. IIRC, people were unhappy that incognito mode didn’t prevent Google’s own websites from fingerprinting you, which… the mode never claimed to do; it explicitly told you it didn’t do that.

    For chats to be discoverable through search engines, you not only have to explicitly and manually share them, you also have to then opt in to having them appear in search engines via a checkbox.

    The main criticism I’ve seen is that the checkbox’s main label only says it makes the chat “discoverable”, while the clarification about search engines is in the fine print. But I don’t really understand how that is unclear. Like, even if chats were made discoverable through ChatGPT’s website only (so no third-party data sharing), Google would still get its hands on them via its crawler. This just skips the middleman; the end result is the same. We’d still hear news about them appearing on Google.

    This just seems to me like people clicking a checkbox based on vibes rather than thinking critically about what consequences it could have and whether they want them. I don’t see what can really be done about people like that.

    I don’t think OpenAI can be blamed for the data sharing, as it’s opt-in, nor for the chats ending up on Google at all. If the latter were a valid complaint, it would be equally valid to complain to the Lemmy devs about Lemmy posts appearing on Google. And again, I don’t think the label complaint has much weight either, because if a chat is discoverable, it gets to Google one way or another.

  • Here’s a question regarding the informed consent part.

    The article gives the example of asking whether the recipient wants the AI’s answer shared.

    “I had a helpful chat with ChatGPT about this topic some time ago and can share a log with you if you want.”

    Do you (I mean generally people reading this thread, not OP specifically) think Lemmy’s spoiler formatting would count as informed consent if properly labeled as containing AI text? I mean, the user has to put in the effort to open the spoiler manually.
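
    For reference, a Lemmy spoiler looks something like this in the raw comment text, if I remember the syntax right (the label here is just my own example):

        ::: spoiler AI-generated answer (ChatGPT log)
        The shared ChatGPT text would go here, hidden until clicked.
        :::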

  • Isn’t the Atari just a game console, not a chess engine?

    Like, Wikipedia doesn’t mention anything about the Atari 2600 having a built-in chess engine.

    If they were willing to run a chess game on the Atari 2600, why didn’t they do the same for ChatGPT? There are custom GPTs that claim to use a Stockfish API or to play at a similar level.

    As it stands, the comparison is just unfair. Neither platform is designed to handle the task by itself, but one of them is given the necessary tooling and the other isn’t. No matter what you think of ChatGPT, that’s not a fair comparison.


    Edit: Given the existing replies and downvotes, I think this comment is being misunderstood. Let me try again to clarify what I meant.

    First of all, I’d like to ask whether this article is satire. That’s the only way I can make sense of the replies criticizing me on the grounds of how LLMs are marketed (a topic the article never brings up, nor did I). If this article is just some tongue-in-cheek piece about holding LLMs to the standards at which they’re advertised, I can understand both the article and the replies I’ve gotten. But the article never suggests as much, so my assumption when writing my comment was that it is serious.

    The Atari is hardware. It can’t play chess on its own. To play chess, you insert a game cartridge; the Atari then interfaces with the cartridge and runs the game.

    ChatGPT is an LLM. Guess what: it also can’t play chess on its own. It, too, needs to interface with a third-party tool that enables it to play chess.

    Neither the Atari nor ChatGPT can directly, on their own, play chess. This was my core point.

    I merely pointed out that it’s unfair that one party in this comparison is given the tool it needs (the cartridge) while the other isn’t. Unless this is satire, I don’t see how marketing plays a role here at all.

  • why don’t they program them to look up math programs and outsource chess to other programs when they’re asked for that stuff?

    They will, when it makes sense for what the AI is designed to do. For example, ChatGPT can outsource image generation to an AI dedicated to that. It also used to do math for me by running Python, but that doesn’t seem to happen anymore, probably due to the security issues of letting the AI run arbitrary Python code.

    ChatGPT, however, was not designed to play chess, so I don’t see why OpenAI should invest resources into connecting it to a chess API.

    I think that, especially since the introduction of custom GPTs, adding this kind of integration has become somewhat unnecessary for base ChatGPT. If you want a chess engine, get a GPT that implements a Stockfish API (there seem to be several that do). For math, get the Wolfram GPT, which uses Wolfram Alpha’s API, or a different powerful math GPT.
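
    To illustrate, here’s a minimal sketch of the kind of backend tool such a chess GPT could delegate to, assuming the python-chess library and a locally installed Stockfish binary (the best_move function and the engine path are my own, hypothetical choices):

        # Sketch: the kind of tool a chess GPT could call instead of
        # generating moves itself. Assumes python-chess is installed and
        # a Stockfish binary exists at the path below (an assumption).
        import chess
        import chess.engine

        STOCKFISH_PATH = "/usr/bin/stockfish"  # hypothetical install location

        def best_move(fen: str, think_time: float = 0.5) -> str:
            """Return Stockfish's best move (in SAN) for a position given as FEN."""
            board = chess.Board(fen)
            engine = chess.engine.SimpleEngine.popen_uci(STOCKFISH_PATH)
            try:
                result = engine.play(board, chess.engine.Limit(time=think_time))
                return board.san(result.move)
            finally:
                engine.quit()

        # The GPT layer would only turn the conversation into a FEN string,
        # call the tool, and report the move back; the chess itself happens
        # in the engine, not the LLM.
        print(best_move(chess.STARTING_FEN))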