• ∟⊔⊤∦∣≶@lemmy.nz · 7 months ago

    5 links later, here are the actual rules (apparently): https://ec.europa.eu/commission/presscorner/detail/en/ip_21_1682

    And I would just like to say that the term ‘AI’ is a marketing term; all the generative models are just complex digital Galton Boards. Put thing in, different thing comes out. But if you leave them alone, nothing happens. Is that really intelligence, or is that just data transformation?
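    To make the Galton-board analogy concrete, here is a minimal simulation sketch (the function name and parameters are illustrative, not from any real library): each “ball” is an input, each row of pegs is a random left/right deflection, and the output is just a transformed distribution of the inputs. Nothing runs unless you drop a ball in.

    ```python
    import random

    def galton_board(n_rows: int, n_balls: int, seed: int = 0) -> list[int]:
        """Drop n_balls through n_rows of pegs; each peg deflects the
        ball left or right with equal probability. Returns bin counts."""
        rng = random.Random(seed)
        bins = [0] * (n_rows + 1)
        for _ in range(n_balls):
            # A ball's final bin is just the number of rightward bounces:
            # a pure input-to-output transformation. Left alone, the board
            # produces nothing -- it only reacts to what is put in.
            position = sum(rng.random() < 0.5 for _ in range(n_rows))
            bins[position] += 1
        return bins

    counts = galton_board(n_rows=10, n_balls=10_000)
    ```

    The result clusters around the middle bins (a binomial distribution): structured output from random deflections, with no agency anywhere in the mechanism.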

    I’m far more concerned about the things that do things when you leave them alone… Drones, Boston Dynamics, that kind of thing.

  • AutoTL;DR@lemmings.world · 7 months ago

    This is the best summary I could come up with:


    LONDON (AP) — European Union negotiators clinched a deal Friday on the world’s first comprehensive artificial intelligence rules, paving the way for legal oversight of technology used in popular generative AI services like ChatGPT that has promised to transform everyday life and spurred warnings of existential dangers to humanity.

    Negotiators from the European Parliament and the bloc’s 27 member countries overcame big differences on controversial points including generative AI and police use of facial recognition surveillance to sign a tentative political agreement for the Artificial Intelligence Act.

    The European Parliament will still need to vote on it early next year, but with the deal done that’s a formality, Brando Benifei, an Italian lawmaker co-leading the body’s negotiating efforts, told The Associated Press late Friday.

    Generative AI systems like OpenAI’s ChatGPT have exploded into the world’s consciousness, dazzling users with the ability to produce human-like text, photos and songs but raising fears about the risks the rapidly developing technology poses to jobs, privacy and copyright protection and even human life itself.

    However, negotiators managed to reach a tentative compromise early in the talks, despite opposition led by France, which called instead for self-regulation to help homegrown European generative AI companies competing with big U.S. rivals including OpenAI’s backer Microsoft.

    Rights groups also caution that the lack of transparency about data used to train the models poses risks to daily life because they act as basic structures for software developers building AI-powered services.


    The original article contains 846 words, the summary contains 241 words. Saved 72%. I’m a bot and I’m open source!

  • sugarfree@lemmy.world · 7 months ago

    They believe they can regulate AI, but it remains to be seen whether that is true, especially since the rules don’t come into force until 2025; that’s a very long time in AI.

  • Muffi@programming.dev · 7 months ago

    Cars are destroying the world way faster than any “AI”. Can we regulate those to hell first?

    • wewbull@feddit.uk · 7 months ago

      Doing one thing does not stop you from also doing a second thing in parallel, and the best time to regulate a technology is when it’s emerging. That way you have a chance to stop it getting out of hand.