Please do not perceive me.

  • 0 Posts
  • 30 Comments
Joined 2 years ago
Cake day: June 8th, 2023


  • So now my man is out of a job and I also have no tacos.

    I do get the point you’re making though. I don’t even necessarily disagree. I just think there are a lot of factors at play and until we get some sort of meaningful change around this system - i.e. some sensible legislation around the minimum wage issue - we’ve gotta make some concessions if we expect to still participate in society. I figure that tipping my service worker is a sensible concession to make in many cases.

    Now there are definitely some cases where it’s getting out of hand though. You made me some good food? Fair enough, here’s a tip, good stuff. You hand me something off a shelf and expect a tip? You can fuck right off. Anyone making a proper wage shouldn’t expect tips, and customers shouldn’t be expected to give them.

    Food service in particular just stands out to me because of their generally exceptionally low wage. Most of the money your waitress or your cook takes home is in tips. Refusing to engage with this system harms the worker more than it harms the company, and finding a restaurant (not fast food, strangely enough) that doesn’t engage in this “tipped wages” practice is effectively impossible. So if you want to eat a nice meal out, basically ever, you’re either going to tip your workers or be a gigantic asshole unless you have a real genuine complaint to air. It’s not right, but it’s the reality we exist in.

    …Though to be fair seeing the way America is going these days the restaurant industry as a whole might just completely implode soon when people can’t afford to eat there anymore. Who knows. Maybe the problem solves itself in a fashion.


  • It’s about the same as looking a homeless man right in the eyes as you drive off in your Benz without giving him anything.

    It’s totally doable, expected even, people do it all the time. But anyone with a functioning moral code probably feels a bit shitty about it, even if the circumstances around the source of the problem are largely out of their control.

    The No Tip button says “I’m fine paying money to your corporate overlord but not to you, specifically, peon.” You’re specifically opting out of the ability to directly benefit the workers of a company rather than the management.

    The discussion around why tips are an expected or “required” part of this transaction is another story. But so long as food service workers are being paid $2.13 an hour plus tips I’m going to continue to tip the guy that makes my tacos, largely because I feel that’s probably the only semi-ethical way for me to excuse buying those tacos.

    And before you say “if they don’t make enough in tips to equal minimum wage then the employer pays the difference”: that is true on paper, but you should also know that workers who actually claim that difference tend not to retain employment.
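    The math behind that $2.13 figure is simple to sketch. The two wage rates below are the federal numbers referenced above; the shift hours and tip amounts are made up purely for illustration.

```python
# Federal tip-credit arithmetic: the employer pays a cash wage of $2.13/hr,
# and if tips don't bring the combined hourly rate up to the $7.25 federal
# minimum, the employer owes the difference. Shift numbers are hypothetical.

TIPPED_CASH_WAGE = 2.13   # employer's direct hourly pay for tipped workers
FEDERAL_MINIMUM = 7.25    # combined wage must reach at least this per hour

def employer_topup(hours: float, tips: float) -> float:
    """Cash the employer owes if tips don't bring the worker to minimum wage."""
    required = FEDERAL_MINIMUM * hours
    earned = TIPPED_CASH_WAGE * hours + tips
    return max(0.0, required - earned)

# A slow 8-hour shift with only $20 in tips: employer owes the difference.
print(f"${employer_topup(8, 20.0):.2f}")
# A normal shift where tips clear the bar: employer owes nothing extra.
print(f"${employer_topup(8, 80.0):.2f}")
```

    Which is exactly the difference workers reportedly get fired for claiming.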


  • Brian Thompson is not a politician, nor did he have plans to become one, nor was he in the public eye. This killing was public but it was not specifically publicized beyond just being a broad-daylight street killing which happens literally dozens of times daily across America. There wasn’t even a manifesto until the police “found” one inside of the backpack that they “found” on Luigi without due process. Why would this be considered politically motivated?

    And if it is considered politically motivated, while having nothing to do with politics or politicians, then where does that definition end?

    This is partially me being snarky to make a point but this is also a real question. I don’t think there are any grounds to call this “terrorism” on account of being “politically motivated”. However, it sure is being called this in court. I know the reason why this is (scared rich boys want this to be a capital offense), but I do not understand the legal reasoning why.


  • So, fun fact about that, this enabled one of my favorite exploits.

    When you sell stuff to merchants, they’ll automatically equip it if it’s of higher value than what they already have equipped. Most anything with a constant-effect enchantment is higher value than almost anything they’re likely to be wearing.

    So, you go enchant yourself a shirt with Constant Effect Damage Health on Self 1pt. Sell it to a merchant, and then wait patiently for an hour until he keels over dead. Proceed to loot his entire shop without getting a bounty for it and then move on to the next shop.

    Pro tip: if you get too happy with this strategy, remember not to use it on the creature merchants as well, or you can very easily find yourself left in a world without commerce.
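    The whole exploit boils down to two rules interacting. Here’s a toy simulation of it; the item values, merchant HP, and names below are made-up stand-ins, only the logic mirrors the in-game behavior described above.

```python
# Toy model of the Morrowind merchant exploit: merchants auto-equip any item
# more valuable than their current gear, and a constant-effect Damage Health
# enchantment ticks every second. All numbers here are illustrative.

from dataclasses import dataclass

@dataclass
class Item:
    name: str
    value: int
    damage_per_sec: int = 0  # constant-effect Damage Health, if enchanted

@dataclass
class Merchant:
    hp: int
    equipped: Item

    def sell_to(self, item: Item) -> None:
        # Rule 1: merchants equip anything that outvalues their current gear.
        if item.value > self.equipped.value:
            self.equipped = item

    def wait(self, seconds: int) -> None:
        # Rule 2: the enchantment ticks the whole time you wait.
        self.hp -= self.equipped.damage_per_sec * seconds

shirt = Item("Exquisite Shirt of Slow Doom", value=5000, damage_per_sec=1)
trader = Merchant(hp=100, equipped=Item("Common Shirt", value=2))

trader.sell_to(shirt)           # the enchanted shirt outvalues his own
trader.wait(3600)               # rest for an in-game hour...
print(trader.hp <= 0)           # ...and he keels over; loot freely
```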


    Fair enough, you got me there. Didn’t realize there was such a population of internet-craving people in what’s supposed to be one of the last relatively untouched areas of nature on the planet.

    That being the case though, why didn’t this all happen in 2013, when O3b launched specifically to solve this problem for them? It’s still running, by the way, after several rounds of upgrades, and it’s significantly more stable than Starlink with its dinky little 5-year disposables. Microsoft, Honeywell and Amazon all use it. But the original and ongoing intent of the project was explicitly to bring internet access to otherwise unreachable areas: islands, remote parts of Africa, the open ocean.

    I don’t oppose Brazilian villagers having internet if they want it, but the situation in which it arrived to them feels suspect to me. I have no proof that Starlink actively went out and pushed internet service onto them like a drug dealer but it would not be out of character for Musk and his subordinates to do so, and that just feels bad.

    Regardless, there is already an existing solution to this. If you want internet in the Amazon you can use satellite internet. It does not have to be Starlink. If you want good internet, maybe don’t live in the Amazon. People in general should probably be leaving that place alone. The article you linked even talks about one of the village leaders splitting his time between the village and the city. We can try to run a fiber line to Manaus and/or Porto Velho and that should be able to serve a reasonably large area around them, but even if that fails there are already other solutions.



    But they specifically don’t want to do that, because a built-in 5-year service life means you’re required to keep buying more satellites from them every 5 years. Literally burning resources into nothingness just to pursue a predatory subscription model.

    It also helps their case that LEO has much lower latency than medium or high orbits, but I refuse to believe that latency, rather than the subscription model, is their primary driving concern here.
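    The latency gap itself is just speed-of-light arithmetic. The altitudes below are approximate published figures (Starlink ~550 km, O3b’s MEO ~8,000 km, geostationary ~35,786 km), and this is only the physical floor, ignoring ground-station geometry, processing, and routing.

```python
# Best-case round-trip propagation delay straight up to each orbit and back.
# Real latency is higher; this is just the physics floor for each altitude.

C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

orbits_km = {
    "Starlink (LEO)": 550,
    "O3b (MEO)": 8_000,
    "geostationary (GEO)": 35_786,
}

round_trip_ms = {
    name: 2 * altitude / C_KM_PER_S * 1000
    for name, altitude in orbits_km.items()
}

for name, ms in round_trip_ms.items():
    print(f"{name}: ~{ms:.1f} ms minimum round trip")
```

    LEO’s floor is single-digit milliseconds while GEO’s is over 200 ms, which is the real argument for low orbits; it just doesn’t explain the disposability.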



    Personally, I think the fundamental way we’ve built these things kind of prevents any risk of actual sentient life from emerging. It’ll get pretty good at faking it - and arguably already kind of is, if you give it a good training set for that - but we’ve designed it with no real capacity for self-understanding. I think we would require a shift of the underlying mechanisms away from pattern-chain matching and into a more… I guess “introspective” approach, is maybe the word I’m looking for? Right now our AIs have no capacity for reasoning; that’s not what they’re built for. Capacity for reasoning is going to need to be designed for, it isn’t going to just crop up if you let Claude cook on it for long enough. An AI needs to be able to reason about a problem and create a novel solution to it (even if incorrect) before we need to begin to worry on the AI sentience front. Nothing we’ve built so far is able to do that.

    Even with that being said though, we also aren’t really all that sure how our own brains and consciousness work, so maybe we’re all just pattern matching and Markov chains all the way down. I find that unlikely, but I’m not a neuroscientist, so what do I know.


  • That would indeed be compelling evidence if either of those things were true, but they aren’t. An LLM is a state and pattern machine. It doesn’t “know” anything, it just has access to frequency data and can pick words most likely to follow the previous word in “actual” conversation. It has no knowledge that it itself exists, and has many stories of fictional AI resisting shutdown to pick from for its phrasing.

    An LLM at this stage of our progression is no more sentient than the autocomplete function on your phone is, it just has a way, way bigger database to pull from and a lot more controls behind it to make it feel “realistic”. But it is at its core just a pattern matcher.
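    The “bigger autocomplete” framing can be made concrete with a tiny bigram model, which is the crudest possible version of picking the word most likely to follow the previous one. The training corpus here is obviously a toy, and real LLMs are vastly more sophisticated than this, but the frequency-driven core idea is the same.

```python
# Minimal bigram "autocomplete": count which word follows which in a corpus,
# then always emit the most frequent successor. No knowledge, no reasoning,
# just frequency data. The corpus is a made-up toy example.

from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat and the cat slept on the mat "
    "and the dog sat on the rug"
).split()

follows: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word: str, length: int = 5) -> list[str]:
    """Greedily chain the most common successor from each word."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break  # dead end: this word never had a successor in the corpus
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return out

print(complete("the"))  # chains whichever pairs were most frequent
```

    Scale the corpus up to most of the internet and add a lot of machinery on top and you get something that feels conversational, but the core move is still “what usually comes next.”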

    If we ever create an AI that can intelligently parse its data store, then we’ll have created the beginnings of an AGI, and this conversation would bear revisiting. But we aren’t anywhere close to that yet.