• 0 Posts
  • 47 Comments
Joined 2 years ago
Cake day: June 30th, 2023




  • It is indeed due to the lack of medical knowledge. The thing is, so-called “generational wisdom” is basically a synonym for lack of knowledge, medical or otherwise. When your go-to source of knowledge is the elders of your community, it stops you from seeking more relevant, and almost certainly more correct, knowledge. A community isn’t smart: a community is dominated by whoever speaks the loudest, its consensus revolves around certainty, and the people who know the most tend to be the least certain in their language.








  • Nalivai@lemmy.world to Comic Strips@lemmy.world · Facts · 1 month ago

    calling the proud boys “gravy seals” doesn’t make me upset

    Good for you. However, there are people who were bullied for being larger than average their whole lives, and this shit reinforces the idea that being fat is something bad that should be mocked.



  • Oh yeah, you absolutely can test it.
    And then it gives you (and this is a real example, with real function names removed)

    find_something > dirpath

    rm -rf $dirpath/*
    do_something_in_the_dir(dirpath)

    And it will work, but if that first call fails, instead of failing gracefully it wipes your hard drive clean.
    You can find shit like that on the regular Internet, but the difference is, it will be downvoted and some nerd will leave a snarky comment explaining why it’s stupid. When an LLM gives you that, you have no way to distinguish working code from a slow-boiling trap.
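
    For contrast, here is a minimal sketch of what failing gracefully could look like for that snippet. It is not from the thread; find_something and do_something_in_the_dir are the same hypothetical placeholders as above, and the guards are just one reasonable way to do it:

    #!/usr/bin/env bash
    set -euo pipefail                     # abort on the first failed command instead of ploughing on

    dirpath=$(find_something)             # hypothetical command that prints a directory path

    # refuse to delete anything if the path came back empty or isn't a directory
    if [[ -z "$dirpath" || ! -d "$dirpath" ]]; then
        echo "find_something returned no usable directory; aborting" >&2
        exit 1
    fi

    rm -rf -- "${dirpath:?}"/*            # quoted and guarded, so an empty value can never expand to /*
    do_something_in_the_dir "$dirpath"    # hypothetical follow-up step

    The difference is that every failure path stops the script before the destructive step, which is exactly the part the generated version skipped.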







  • See, this is the problem I’m talking about. You think you can gauge whether the code works or not, but even for small pieces (and in some cases, especially for small pieces) there is a world of very bad, very dangerous shit that lies between “works” and “doesn’t work”.
    And it is just as dangerous when you trust it to explain something for you. It’s by definition something you don’t know, and therefore can’t check.
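
    As a hand-rolled illustration of that gap (not from the thread, same shell flavour as the example above), the function below passes a quick test with a valid path and only misbehaves later:

    # Looks reasonable, and it "works" when you try it once with a real directory…
    delete_old_logs() {
        cd $1          # unquoted, and a failed cd doesn't stop anything here
        rm -f *.log    # …so if cd failed, this deletes *.log in whatever directory you were already in
    }

    Nothing about reading it casually, or running it once, tells you that a missing directory or a path with spaces turns it into a different command entirely.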


  • It’s not that an LLM can’t know the truth; that’s obvious, but beside the point. It’s that the user can’t really tell where the lies are, not to the degree they can when getting info from a human.
    So you really need to check everything: every claim, every word, every sound. You can’t assume good intentions, because there are no intentions in the real sense of the word, and you can’t extrapolate or interpolate. Every word of the data you’re getting might be a lie with the same likelihood as any other word.
    It takes so much effort to check properly that you either skip some of it or spend more time than you would have without the layer of lies.