• 0 Posts
  • 20 Comments
Joined 2 years ago
Cake day: July 7th, 2023


  • My son has doubled in size every month for the last few months. At this rate he’ll be fifty foot tall by the time he’s seven years old.

    Yeah, it’s a stupid claim to make on the face of it. It also ignores two practical realities: the first of those is training data, and the second is context windows. For an AI to successfully write a novel or code a large-scale piece of software like a video game, it would need to hold that entire thing in its context window at once. Context window size is strongly tied to hardware cost, so scaling windows to the point where they’re big enough for an entire novel may not ever be feasible (at least from a cost/benefit perspective).
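    A rough back-of-envelope sketch of the scaling problem, assuming vanilla quadratic self-attention; the novel length, tokens-per-word ratio, and baseline window size below are illustrative assumptions, not measured figures:

```python
# Back-of-envelope: vanilla self-attention cost grows quadratically with
# context length, so doubling the window roughly quadruples the cost.
# All numbers are illustrative assumptions, not real hardware figures.

WORDS_PER_NOVEL = 90_000   # assumed length of a typical novel
TOKENS_PER_WORD = 1.3      # common rule-of-thumb tokenizer ratio

novel_tokens = int(WORDS_PER_NOVEL * TOKENS_PER_WORD)

def relative_attention_cost(context_tokens: int, baseline_tokens: int = 8_192) -> float:
    """Cost of quadratic self-attention relative to an assumed baseline window."""
    return (context_tokens / baseline_tokens) ** 2

print(novel_tokens)                           # ~117,000 tokens for one novel
print(relative_attention_cost(novel_tokens))  # roughly 200x the baseline cost
```

With these toy numbers, fitting a whole novel costs around two hundred times the baseline window, which is the cost/benefit wall the comment is pointing at.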

    I think there’s also the issue of how you define “success” for the purpose of a study like this. The article claims that AI may one day write a novel, but how do you define “successfully” writing a novel? Is the goal here that one day we’ll have a machine that can produce algorithmically mediocre works of art? What’s the value in that?



  • The key difference being that AI is a much, much more expensive product to deliver than anything else on the web. Even compared to streaming video content, AI is orders of magnitude higher in terms of its cost to deliver.

    What this means is that providing AI on the model you’re describing is impossible. You simply cannot pack in enough advertising to make ChatGPT profitable. You can’t make enough from user data to be worth the operating costs.

    AI fundamentally does not work as a “free” product. Users need to be willing to pony up serious amounts of money for it. OpenAI have straight up said that even their most expensive subscriber tier operates at a loss.

    Maybe that would work, if you could sell it as a boutique product, something for only a very exclusive club of wealthy buyers. Only that model is also an immediate dead end, because the training costs to build a model are the same whether you make that model for 10 people or 10 billion, and those training costs are astronomical. To get any kind of return on investment these companies need to sell a very, very expensive product to a market that is far too narrow to support it.

    There’s no way to square this circle. Their bet was that AI would be so vital, so essential to every facet of our lives that everyone would be paying for it. They thought they had the new cellphone here; a $40/month subscription plan from almost every adult in the developed world. What they have instead is a product with zero path to profitability.
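    The fixed-cost trap described above can be sketched with a toy amortization model; every figure here is a hypothetical placeholder, not a real OpenAI number:

```python
# Toy amortization model: training cost is fixed regardless of user count,
# so per-user economics only close at massive scale.
# All figures are hypothetical placeholders, not real OpenAI numbers.

TRAINING_COST = 5_000_000_000  # assumed one-off training cost ($)
COST_TO_SERVE = 20             # assumed monthly inference cost per user ($)
SUBSCRIPTION = 40              # assumed monthly price per user ($)

def monthly_profit(users: int, amortization_months: int = 24) -> float:
    """Monthly profit after spreading the fixed training cost over N months."""
    fixed = TRAINING_COST / amortization_months
    return users * (SUBSCRIPTION - COST_TO_SERVE) - fixed

print(monthly_profit(10_000))      # boutique user base: deeply negative
print(monthly_profit(10_500_000))  # barely positive; break-even needs ~10M subscribers
```

The boutique model loses hundreds of millions a month under these assumptions; only a mass market on the order of ten million paying subscribers covers the fixed training bill.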



  • It’s not the standard because it will likely have a LOT of unintended consequences.

    How do you share evidence of police brutality if the police can use copyright to take down the video? How do newspapers print pictures of people if they have to get the rightsholder’s permission first? How do we share photos of Elon Musk doing a Nazi salute if he can just sue every site that posts it for unauthorized use of his likeness?

    Unless this has some extremely stringent and well written limitations, it has the potential to be a very bad idea.



  • There are, as I understand it, ways that you can train on AI generated material without inviting model collapse, but that’s more to do with distilling the output of a model. What Musk is describing is absolutely wholesale confabulation being fed back into the next generation of their model, which would be very bad. It’s also a total pipe dream. Getting an AI to rewrite something like the total training data set to your exact requirements, and verifying that it had done so satisfactorily would be an absolutely monumental undertaking. The compute time alone would be staggering and the human labour (to check the output) many times higher than that.

    But the whiny little piss baby is mad that his own AI keeps fact checking him, and his engineers have already explained that coding it to lie doesn’t really work because the training data tends to outweigh the initial prompt, so this is the best theory he can come up with for how he can “fix” his AI expressing reality’s well known liberal bias.




  • Thing is, there’s going to be a lot of public attention on that “Made in the USA” claim, given how central it is to Trump’s domestic and foreign policy.

    Sure, the FCC can turn a blind eye, but all it takes is for one worker at the assembly plant to call up a journalist. And let’s face it, any journalist worth their salt is going to be hanging around every bar near that place. Even trying to screen specifically for MAGA-friendly workers won’t help them much when one of those workers feels betrayed by how much of Trump’s product is actually coming from China.

    My point is, there’s no good way to keep this under wraps. If they don’t actually build this thing in the US, word is going to get around, and it’s going to be seen as a total repudiation of Trump’s entire tariff strategy.





  • The key detail is that, like rear brake lights, they extinguish when the foot is removed from the brake pedal. So it’s not so much the presence of the brake light, but the presence of an inactive brake light, that would serve as a warning that a car is about to start moving. This would be very helpful when another driver is pulling out too early from a side road or driveway. That little bit of extra warning is, in many situations, enough for you to pump the brakes, hit the horn, or both.


  • TD Cowen (which is basically the US arm of one of the largest Canadian investment banks) did an extensive report on the state of AI investment. What they found was that, despite all their big claims about the future of AI, Microsoft were quietly allowing letters of intent for billions of dollars’ worth of new compute capacity to expire. Basically, scrapping future plans for expansion, but in a way that’s not showy and doesn’t require any kind of big announcement. The equivalent of promising to be at the party and then just not showing up. Not long after that report came out, Microsoft confirmed it, and shortly afterwards it emerged that Amazon was doing the same thing.

    Ed Zitron has a really good write-up on it: https://www.wheresyoured.at/power-cut/

    Amazon isn’t the big surprise; they’ve always been the most cautious of the big players on the whole AI thing. Microsoft, on the other hand, are very much trying to play things both ways. They know AI is fucked, which is why they’re scaling back, but they’ve also invested a lot of money into their OpenAI partnership, so now they have to justify that expenditure, which means convincing investors that consumers absolutely love their AI products and are desperate for more.

    As always, follow the money. Stuff like the Three Mile Island thing is mostly just applying for permits and so on at this point. Relatively small investments. As soon as it comes to big money hitting the table, they’re pulling back. That’s how you know how they really feel.


  • Remember, this is all about OpenAI convincing investors to shovel more money into their furnace.

    They are not profitable. They have no realistic path to being profitable. Their only hope for survival is to South Sea Company their way through round after round of investor funding. And to do that they have to create the appearance of near-unlimited demand for their services, and therefore for additional capacity to run those services.

    The writing is on the wall. Microsoft and Amazon, two of the biggest players in the compute space, both of which also run their own AI projects, have both massively scaled back their plans for future compute expansion. If anyone should be building out like crazy it’s them. If anyone has a clear idea of what the actual demand is, it’s them. If Amazon and Microsoft are out, this thing is fucked.

    OpenAI is fucked. Sam Altman knows it. But if he can keep the illusion going, the money train doesn’t have to stop. Yet.


  • This is such obvious BS. Putting aside that Tesla are always making claims about self-driving that they can’t deliver on, let’s just consider the basic logistics of doing this for any customer who lives more than about 300km from a Tesla factory (of which there are two IIRC; one is in Austin and the other is in Fremont)…

    How the fuck is the car going to recharge?

    It’s not like it’s going to plug itself in, and there are no staff at Tesla supercharger stations as far as I’m aware. With the range on a typical Model S, even getting to LA might be tough if it gets stuck in traffic. Fremont to LA is just under 600km if you’re lucky.
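    A quick sanity check on that range math; the rated range and traffic penalty below are rough assumptions, not official Tesla specs:

```python
# Rough feasibility check for an unattended Fremont -> LA delivery.
# Range and consumption figures are assumptions, not Tesla specs.

TRIP_KM = 590            # Fremont to LA, per the comment above
RATED_RANGE_KM = 650     # assumed Model S rated range
TRAFFIC_PENALTY = 0.15   # assumed extra consumption from stop-and-go traffic

effective_range = RATED_RANGE_KM * (1 - TRAFFIC_PENALTY)
print(effective_range)             # 552.5 km under these assumptions
print(effective_range >= TRIP_KM)  # False: the car needs a charging stop
```

Under these assumptions the car falls short of the trip, which is exactly the problem when there is nobody to plug it in.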



  • That’s not what’s happening here. Microsoft management are well aware that AI isn’t making them any money, but the company made a multi-billion-dollar bet on the idea that it would, and now they have to convince shareholders that they didn’t epically fuck up. Shoving AI into stuff like Notepad is basically about artificially inflating “consumer uptake” numbers that they can then show to credulous investors to suggest that any day now this whole thing is going to explode into an absolute tidal wave of growth, so you’d better buy more stock right now, better not miss out.