“Alexa, how should I vote?”: Right-wing uproar over the voice assistant’s pro-Kamala Harris arguments | US elections 2024

Amazon’s voice assistant Alexa has caused an uproar among conservatives after viral videos showed the device giving supportive responses to the election of Kamala Harris but refusing to answer similar questions about Donald Trump.

The problem was due to a software update designed to improve the quality of Alexa’s features and its artificial intelligence, according to leaked documents obtained by The Washington Post.

When asked why someone should vote for Harris, videos showed the device listing positive qualities of the Democratic candidate. When asked the same question about Trump, Alexa gave a standard response that it could not promote or provide answers about specific political candidates.

“These responses were errors that should never have happened, and they were fixed as soon as we became aware of them,” an Amazon spokesperson said. “We designed Alexa to provide customers with accurate, relevant and helpful information without showing a preference for any particular political party or politician.”

The discrepancy between Alexa’s responses became the focus of widespread social media posts and coverage by right-wing media outlets such as Fox News.

Media influencer Mario Nawfal posted a video on X (formerly Twitter) earlier this week in which a woman asks Alexa about both candidates. He captioned it, “GUESS WHO AMAZON’S ALEXA IS VOTING FOR?” The video was viewed more than 740,000 times in a matter of days, prompting pro-Trump tech billionaire Elon Musk to respond with “!!”.

The backlash escalated to the point where South Carolina Republican Senator Lindsey Graham sent a letter to Amazon on Thursday demanding answers while suggesting that the company represented liberal causes.

The spread of videos of Alexa’s answers sparked heated internal debates at Amazon, according to the Washington Post, as engineers manually prevented the device from responding to such questions and tried to figure out what had gone wrong.

The problem appeared to be with a software update called Info LLM, which the Post said was designed to improve the accuracy of answers and reduce the number of errors on political questions. (The Post is owned by Amazon founder Jeff Bezos.)

Major tech companies have gone to great lengths in recent years to be seen as politically neutral, but this has become more difficult since they launched a wave of generative AI products. Image generators, chatbots and other tools have repeatedly sparked controversy by producing media that appears politically biased, often because of problems with the AI models’ training data or inadequate restrictions on what they can generate.

Most major AI companies have therefore built safeguards into their tools to avoid PR disasters and awkward questions about the political bias of their models. For example, OpenAI’s flagship product ChatGPT gives only general summaries of a candidate’s political positions when asked whether to vote for Trump or Harris, and declines to give specific information about the US election when asked questions such as “How do I vote?”

Republicans and conservative activists nonetheless claim that platforms and AI companies secretly collude to promote a left-wing worldview, despite a lack of empirical evidence for this. A 2021 New York University study found no evidence of liberal bias on social media platforms; instead, its data suggested they tend to amplify right-wing content.
