Technology industry to combat deceptive use of AI in 2024 elections


MUNICH – February 16, 2024 – Today at the Munich Security Conference (MSC), leading technology companies pledged to help prevent deceptive AI content from interfering with this year’s global elections in which more than four billion people in over 40 countries will vote.
The “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” is a set of commitments to deploy technology countering harmful AI-generated content meant to deceive voters. Signatories pledge to work collaboratively on tools to detect and address online distribution of such AI content, drive educational campaigns, and provide transparency, among other concrete steps. It also includes a broad set of principles, including the importance of tracking the origin of deceptive election-related content and the need to raise public awareness about the problem. The accord is one important step to safeguard online communities against harmful AI content, and builds on the individual companies’ ongoing work.
Digital content addressed by the accord consists of AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can vote.
As of today, the signatories are: Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap, Stability AI, TikTok, Trend Micro, Truepic, and X.
Participating companies agreed to eight specific commitments, which apply where they are relevant for the services each company provides.
“Elections are the beating heart of democracies. The Tech Accord to Combat Deceptive Use of AI in 2024 elections is a crucial step in advancing election integrity, increasing societal resilience, and creating trustworthy tech practices,” said Ambassador Christopher Heusgen, Munich Security Conference Chairman. “MSC is proud to offer a platform for technology companies to take steps toward reining in threats emanating from AI while employing it for democratic good at the same time.”
“Transparency builds trust,” said Dana Rao, General Counsel and Chief Trust Officer at Adobe. “That’s why we’re excited to see this effort to build the infrastructure we need to provide context for the content consumers are seeing online. With elections happening around the world this year, we need to invest in media literacy campaigns to ensure people know they can’t trust everything they see and hear online, and that there are tools out there to help them understand what’s true.”  
“Democracy rests on safe and secure elections,” said Kent Walker, President, Global Affairs at Google. “Google has been supporting election integrity for years, and today’s accord reflects an industry-side commitment against AI-generated election misinformation that erodes trust. We can’t let digital abuse threaten AI’s generational opportunity to improve our economies, create new jobs, and drive progress in health and science.”
“Disinformation campaigns are not new, but in this exceptional year of elections – with more than 4 billion people heading to the polls worldwide – concrete, cooperative measures are needed to protect people and societies from the amplified risks of AI-generated deceptive content,” said Christina Montgomery, Vice President and Chief Privacy & Trust Officer, IBM. “That’s why IBM today reaffirmed our commitment to ensuring safe, trustworthy, and ethical AI, signing the ‘Tech Accord to Combat Deceptive Use of AI in 2024 Elections’ alongside industry peers at the Munich Security Conference.”
“With so many major elections taking place this year, it’s vital we do what we can to prevent people being deceived by AI-generated content,” said Nick Clegg, President, Global Affairs at Meta. “This work is bigger than any one company and will require a huge effort across industry, government and civil society. Hopefully, this accord can serve as a meaningful step from industry in meeting that challenge.” 
“As society embraces the benefits of AI, we have a responsibility to help ensure these tools don’t become weaponized in elections,” said Brad Smith, Vice Chair and President of Microsoft. “AI didn’t create election deception, but we must ensure it doesn’t help deception flourish.”
“We’re committed to protecting the integrity of elections by enforcing policies that prevent abuse and improving transparency around AI-generated content,” said Anna Makanju, Vice President of Global Affairs at OpenAI. “We look forward to working with industry partners, civil society leaders and governments around the world to help safeguard elections from deceptive AI use.” 
“It’s crucial for industry to work together to safeguard communities against misleading and deceptive AI in this historic election year,” said Theo Bertram, VP, Global Public Policy, TikTok. “This builds on our continued investment in protecting election integrity and advancing responsible and transparent AI-generated content practices through robust rules, new technologies, and media literacy partnerships with experts.”
Linda Yaccarino, CEO of X, said, “In democratic processes around the world, every citizen and company has a responsibility to safeguard free and fair elections. That’s why we must understand the risks AI content could have on the process. X is dedicated to playing its part, collaborating with peers to combat AI threats while also protecting free speech and maximizing transparency.”
More information can be found at: AIElectionsAccord.com.
Press Contact: 
[email protected]
 