Major tech companies have signed an accord to voluntarily adopt “reasonable precautions” to prevent deceptive artificial intelligence tools from being used to disrupt democratic elections around the world.
The agreement was announced at the 2024 Munich Security Conference by executives from some of the world’s most powerful tech giants, including Adobe, Amazon, Google, Meta, Microsoft, OpenAI and TikTok. Thirteen other companies, including IBM and X (formerly Twitter), have also signed on to the accord.
According to the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”: “2024 will bring more elections to more people than any year in history, with more than 40 countries and more than four billion people choosing their leaders and representatives through the right to vote. At the same time, the rapid development of artificial intelligence, or AI, is creating new opportunities as well as challenges for the democratic process. All of society will have to lean into the opportunities afforded by AI and to take new steps together to protect elections and the electoral process during this exceptional year.”
The accord lays out seven key principles, including prevention, provenance, detection, responsive protection, evaluation and public awareness.
The main strength of this initiative is the companies’ commitment to work together to share best practices and to “explore new pathways to share best-in-class tools and/or technical signals about Deceptive AI Election Content in response to incidents”.
While this may be a positive first step, the agreement is non-binding and amounts to little more than a symbolic gesture of goodwill. It lays down no specific actions and imposes no penalties for non-compliance. Moreover, the vagueness of the commitments, which presumably helped persuade such a broad range of companies to sign the accord, is precisely what may disappoint and frustrate activists.