• 0 Posts
  • 12 Comments
Joined 2 years ago
Cake day: June 16th, 2023


  • The kneejerk reaction is gonna be “Meta bad”, but it’s actually a bit more complicated.

    Whatever faults Meta has in other areas, it’s been mostly a good player in the AI space. They’re one of the major reasons we have strong open-weight AI models today. Mistral, another maker of open AI models and Europe’s only significant player in AI, has also rejected this code of conduct. By contrast, OpenAI a.k.a. ClosedAI has committed to signing it, probably because they are the incumbents and they think the increased compliance costs will help kill off competitors.

    Personally, I think the EU AI regulation efforts are a big missed opportunity. They should have been used to force a greater level of openness and interoperability in the industry. With the current framing, they’re likely to end up entrenching big proprietary AI companies like OpenAI, without doing much to make them accountable at all, while also burying upstarts and open source projects under unsustainable compliance requirements.


  • The EU AI Act is the thing that imposes the big fines, and it’s large and complicated enough that companies have complained it’s hard to know how to comply. This voluntary code of conduct was released as a sample procedure for compliance, i.e. “if you do things this way, you (probably) won’t get in trouble with regulators”.

    It’s also worth noting that not all the complaints are unreasonable. For example, the code of conduct says that model makers are supposed to impose restrictions on end-users to prevent copyright infringement, but such usage restrictions are very problematic for open source projects (in some cases, usage restrictions can even disqualify a piece of software from counting as FOSS).