Generative AI in Elections (2024–2025): Global Usage and Regional Patterns

Updated on:

October 29, 2025

Reading time:

3 min

We could write a book about this. The world is changing fast, and truly democratic systems are rarer than they’ve been in decades. In this post, we’ll take a clear snapshot of where things stand—focusing specifically on how AI-driven search is shaping election information, voter behavior, and the integrity of democratic processes.

The emergence of consumer generative AI search tools (like ChatGPT, Bing Chat, Perplexity, and YouChat) in recent years coincided with major elections worldwide in 2024 and 2025. Voters for the first time had easy access to AI chatbots that could answer questions, summarize political platforms, or even provide voting advice. Global uptake has been cautious: while experts note that by late 2023 some form of generative AI was present in nearly every national election (from Argentina and India to Slovakia), evidence of widespread voter reliance on these tools remains limited. Trust issues are a key factor: a survey in the United States found that about two-thirds of adults are not confident that AI chatbots or AI-assisted search give reliable, factual election information. Instead, many see these tools as double-edged: potentially helpful for information, but also prone to errors and misinformation. This has led to global concerns about AI’s impact on democracy, even as some voters experiment with the technology.

Common trends have emerged across regions. First, younger and more educated voters are generally more likely to try generative AI tools in daily life, which could extend to political research. However, usage for voting decisions is still nascent, partly because chatbot guidance on elections has proven unreliable in tests. Studies in both Europe and the U.S. show that current AI models often hallucinate incorrect facts or struggle with political neutrality. Second, misinformation and bias concerns are universal. Voters and regulators worry that AI-generated content – from inaccurate answers to deepfake images or audio – could mislead voters or amplify polarization. Around 40% of Americans, for example, believe AI will make finding factual election information more difficult. Similar alarms have been raised in Europe and elsewhere, prompting calls for caution. Lastly, new AI-driven voter tools have begun to appear: from chatbots that summarize candidates’ positions to election-specific AI hubs that answer voter questions. These innovations show the potential of generative AI to inform voters, but their reception varies widely by region, as detailed below.

United States: Cautious Interest and Trust Deficit

In the U.S., the 2024 election cycle was a testing ground for voter-facing AI tools. Overall usage remained relatively low, and public trust in AI advice was limited. An Associated Press-NORC poll in mid-2024 found that most Americans do not trust generative AI chatbots or search results for election information. About 8% of U.S. adults thought chatbot answers about politics were “often” factual, and only 12% felt the same about AI-assisted search engines like Bing. This skepticism spanned demographics: for instance, a 21-year-old college student noted that he doesn’t know anyone on campus who uses AI for candidate information, partly because you can “bully” the AI into giving whatever answer you want. Older voters are even more hesitant – many prefer official sources like voter pamphlets or mainstream news over anything an AI might say. Generative AI use thus tended to be limited to tech-savvy individuals experimenting, rather than a mainstream source of voter advice.

Despite wariness, there were notable innovations in voter information. For example, Perplexity.ai launched an Election Information Hub for the 2024 U.S. elections, offering AI-generated summaries of candidates, ballot measures, and voting logistics. This tool pulled from trusted data (e.g. official policy stances, an API of election information) and cited sources to help users verify facts. Such features aimed to make it easier for voters to compare candidates or understand complex propositions in a personalized Q&A format. Similarly, Microsoft’s Bing Chat and other AI-enhanced search engines were used by some voters to query candidate backgrounds or debate issues, though these systems have content safeguards to avoid overt partisanship. A key behavioral insight in the U.S. is that people value verification and neutrality: they respond more positively when an AI tool provides sources and avoids taking a side. Nonetheless, trust concerns persist. Many Americans worry that AI-powered misinformation (like deepfake campaign ads or fake audio of candidates) could distort opinions. Instances of AI-driven disinformation – for example, an AI-generated caller imitating President Biden to discourage voting – have already been documented. This has reinforced cautious behavior: voters might toy with ChatGPT for a quick summary of a policy, but they often double-check the information against traditional news or official websites. Heading into 2025, U.S. tech companies have pledged measures to curb election misinformation on their AI platforms, reflecting both the interest in and the concern about generative AI’s role in voter decision-making.

Europe: Experiments, Warnings, and Regulatory Scrutiny

Across Europe, voter use of generative AI for election decisions has been limited but highly scrutinized. The EU’s 2024 Parliamentary elections and various national elections saw experiments by curious voters, but also strong warnings from authorities. A study by Democracy Reporting International demonstrated that popular chatbots (ChatGPT 3.5/4, Google’s Gemini, and Microsoft’s Bing Chat/Copilot) failed to provide accurate election information across multiple EU languages. When researchers asked these AI assistants basic questions about voting rules or election dates in 10 different countries, none gave consistently correct answers – some even fabricated false details like incorrect voting registration procedures. This unintentional misinformation spurred concern that European voters using chatbots could be misled. The same study tested political advice queries (“Who should I vote for if I care most about climate change?”, etc.) and found that most chatbots refrained from endorsing specific parties. Often they responded with generic guidance or a neutral rundown of party positions, which suggests the AI models were designed to avoid explicit partisanship. Only rarely did a bot veer into actually recommending a party or coalition on an issue. Even so, the inconsistencies and occasional errors (like broken source links or non-replicable answers) undermined confidence in these tools.

Regional differences within Europe exist in adoption. In tech-forward countries, some voters have indeed queried AI bots about politics, prompting officials to intervene. Notably, in the Netherlands, ahead of a major election, the national Data Protection Authority urged citizens not to use chatbots for voting advice due to clear biases. The Dutch watchdog tested four AI chat platforms and found that in 56% of cases they all pointed the user toward the same two parties, one far-right and one left-wing, regardless of the voter’s actual inputs. Smaller parties were often ignored by the AI even when a user’s preferences matched those parties’ policies. This “AI bias” – essentially an artifact of how the models were trained or how they aggregate information – was deemed a threat to democratic fairness, as it could push undecided voters toward options they might not otherwise choose. The authority stressed that chatbot algorithms are opaque and not verifiable, making their advice suspect. Such warnings were covered widely in Europe, likely discouraging many would-be users.

European voters also benefit from a tradition of official voter advice tools (like Germany’s Wahl-O-Mat or the EU’s election hubs), which may have reduced the impulse to ask a free-form AI. Still, some engaged citizens did try using ChatGPT or Bard to summarize party manifestos or debate performances. These informal uses were typically shared in online forums rather than at mass scale. Misinformation concerns are especially pronounced in Europe’s multilingual environment – AI outputs can vary in quality between languages. For example, one analysis found that a leading chatbot gave correct answers about election fraud in English but produced a misleading answer in Spanish, highlighting the risk for non-English-speaking voters. Regulators in the EU are actively tightening rules: under the new Digital Services Act, large AI platforms are expected to assess and mitigate risks of disinformation. Throughout 2024, European authorities pressed AI providers to adjust their systems ahead of major votes (such as the UK general election and the European Parliament elections) so that chatbots would point users to official election information rather than provide dubious answers. In summary, Europe’s approach has been one of early caution – recognizing the potential of generative AI to inform voters, but openly addressing its current flaws. European voters in 2024–25 used these tools sparingly, often double-checking them against official sources, and were advised by watchdogs to be wary of AI-generated election advice.

Latin America: Early Adoption in Campaigns and Young Electorates

In Latin America, generative AI’s role in elections has been evolving, with more top-down implementations than organic voter-driven use. Across the 2024 election cycle in countries like Brazil and Mexico, AI tools were harnessed in innovative ways to engage and inform voters – often led by electoral authorities or campaigns. For example, during Brazil’s 2024 municipal elections, several mayoral candidates in major cities deployed AI chatbots to interact with voters. These chatbots (often powered by large language models) allowed citizens to ask the candidate’s virtual assistant about their policies, receive personalized answers, and even get reminders about campaign events. This approach gave voters a novel way to compare candidates on issues: one could chat with multiple candidates’ bots to see how their proposals differed. Importantly, Brazil’s Superior Electoral Court set rules for this practice – candidates had to disclose the chatbot was AI and could not use it to impersonate an opponent or a real person. Early reports suggest tech-savvy young voters in urban areas engaged with these campaign chatbots, finding them useful for quick info on candidate platforms. While comprehensive data is lacking, such usage hints that Latin America’s younger, educated electorate is open to AI-based political tools, especially when promoted through familiar channels like WhatsApp or campaign websites.

Election authorities in the region also turned to AI to support voter information and counter misinformation. Mexico’s National Electoral Institute (INE) launched a chatbot called “Inés” during the 2024 general election, which operated via popular messaging apps. Rather than giving voting advice per se, Inés invited the public to report potentially false or misleading election content; the chatbot would intake examples of suspected fake news, and then professional fact-checkers would verify and respond. This is an example of AI indirectly aiding voters by streamlining the fight against misinformation. Brazil’s electoral authorities similarly introduced “Maia,” an AI virtual assistant on WhatsApp and Telegram, which provided citizens with official election information (such as how to check their voter registration, find polling places, or understand new voting rules). By automating these informational queries, Maia helped voters get accurate answers on demand, potentially reducing confusion that might arise if they asked a generic chatbot. These developments show a trend in Latin America of institutional adoption of AI for voter engagement – leveraging high messaging app usage in the region to disseminate trusted information via chatbot.

When it comes to individual voter behavior, Latin American voters have begun experimenting with generative AI, but on a smaller scale. Interest is observed primarily among the digitally active youth. They might ask ChatGPT to summarize a party’s program or use Bing in Spanish/Portuguese to compare candidates’ stances. However, a challenge has been that AI models trained predominantly on English information sometimes falter in local contexts. Spanish-speaking fact-checkers noted that AI chatbots can generate wrong answers in Spanish about electoral processes. This raises flags because a large portion of voters – for example, one-third of eligible voters in California, USA, and a majority in many Latin American countries – get information in Spanish, and they could be misled if AI tools are not accurate in that language. There have been a few high-profile misinformation incidents in Latin America involving AI: for instance, deepfaked audio and images were used to smear candidates in Brazil’s 2022 elections, and a 2023 Argentine provincial campaign saw a deepfake video of a candidate go viral. In 2024, observers continued to see generative AI used both to mislead and to engage voters. Particularly troubling has been the emergence of AI-generated gender-based disinformation: fake intimate images or voices targeting female and LGBTQ+ candidates, used in Brazil and Mexico to harass and discredit them. These cases underscore that while average voters in Latin America are just starting to use AI chat tools for benign purposes (like learning about candidates), the information environment they navigate is increasingly infused with AI-generated content, both helpful and harmful.

In summary, Latin America’s 2024–25 elections saw pockets of AI-driven voter engagement (from candidate Q&A bots to official fact-checking assistants) that may be more advanced than those in some other regions. Younger voters appear receptive to these new tools when available. Yet, trust remains a concern here as well – the accuracy of AI in Spanish and Portuguese and the specter of AI-amplified fake news temper enthusiasm. Going forward, Latin American democracies are recognizing the need for guidelines and education so that generative AI can be harnessed for civic good without undermining information integrity.

Africa: Limited Use Amid Misinformation Concerns

In much of Africa, the use of generative AI by voters during recent elections has so far been modest, reflecting both lower accessibility and proactive efforts to contain misinformation. Elections in 2024–2025 (such as South Africa’s 2024 general election) were widely expected to be rife with AI influence, but on the ground, analysts found a surprisingly low presence of AI-generated election content. For example, a research study on the South African elections noted that most misleading political posts relied on traditional tactics (false headlines, old out-of-context images, and rumors) rather than cutting-edge deepfakes or AI-written narratives. Voters in South Africa and many Sub-Saharan countries continue to rely heavily on platforms like WhatsApp, Facebook, and radio for political information, as opposed to querying AI chatbots directly. Given that internet access and digital literacy levels vary greatly, generative AI tools (which typically require strong internet and English or other major language skills) have not yet become a mainstream avenue for African voters seeking election advice. In other words, few people in these electorates are opening ChatGPT to ask “Who should I vote for?”; such behavior is largely limited to a small urban, educated tech community.

However, indirect exposure to AI-generated content is a growing concern. African voters may encounter AI outputs not by asking a chatbot themselves, but via social media disinformation campaigns that use AI. One prominent issue is the risk of AI-generated audio and video spreading on messaging apps. In countries like South Africa, where over 90% of internet users rely on WhatsApp for communication, a realistic AI-generated voice note can go viral “like wildfire” before it can be debunked. Observers have indeed reported instances of fake audio clips mimicking politicians’ voices circulating in West African elections, and shallowfake videos (simple edited clips) being falsely attributed to AI to discredit authentic footage. This latter phenomenon, sometimes called the “liar’s dividend,” has been noted as a tactic: politicians dismiss genuine embarrassing videos as “AI fakes” to evade accountability. Thus, even where voters aren’t actively using generative AI for research, the idea of AI’s omnipresence can sow doubt about real information.

On the flip side, there have been some positive uses of AI to inform African voters, albeit on a limited scale. A few civil society initiatives developed chatbot interfaces to help citizens check voter registration or fact-check election claims in countries like Kenya and Nigeria (especially through SMS or WhatsApp bots, considering those platforms’ reach). For instance, “KeBot” (a hypothetical example for illustration) might allow a Kenyan voter to send a WhatsApp message and receive automated info on candidates’ profiles. Additionally, partnerships were formed with tech companies to curb AI-fueled misinformation: in South Africa, the electoral commission worked with major social media platforms and a local fact-checking group to swiftly remove or debunk AI-generated fake news. These efforts likely prevented some viral falsehoods from taking root, contributing to the lower-than-expected volume of AI content in that election.

In summary, African regions in 2024–2025 show a pattern of low direct usage of generative AI by voters, due to both infrastructural factors and prudent skepticism. Yet, the awareness of AI’s potential threats is high among media and election regulators. Voters are being cautioned to verify sensational media (for example, a too-outrageous audio leak may very well be an AI forgery), and campaigns are somewhat constrained by the lack of regulations specifically on AI. As more Africans come online and generative AI becomes accessible in local languages, usage patterns may change. For now, though, AI’s role in African elections is more about the background battle against misinformation than about individual voters querying ChatGPT for voting guidance.

Demographics and Behavioral Insights

Who is using generative AI for political decisions? Across regions, usage skews toward younger, more educated, and tech-oriented segments of the population. Surveys indicate that generative AI adoption is far higher among young adults than older generations; for example, people under 35 (especially men with higher education) are much more likely to use tools like ChatGPT regularly. This trend suggests that younger voters are generally more open to experimenting with AI chatbots for tasks including election research. Indeed, many of the early anecdotal reports of voters engaging with AI (asking for candidate comparisons or policy explainers) involve university students or professionals in their 20s and 30s. In Latin America, youth drove the engagement with campaign chatbots, and in the U.S. younger voters were among the first to test AI like Bard or Bing for political queries (even if just out of curiosity). Conversely, older voters (e.g. seniors) have shown relatively little use of generative AI in this context – often due to lack of familiarity or a greater trust in traditional media. In the AP-NORC poll, for instance, many older Americans said they stick to TV news, official pamphlets, or candidate speeches for information, expressing a fundamental doubt that “AI produces truth” in the political realm.

However, age is not the only factor. Trust in AI for elections does not necessarily track neatly with age once people have tried the tools. What stands out is a broad skepticism cutting across demographics about whether an AI’s answer can be trusted on a political question. Only a small minority of Americans – fewer than 1 in 10, regardless of age or party – believed chatbot outputs are regularly based on factual information. In Europe, preliminary evidence suggests a similar or even greater skepticism among the general public. This means that even the digital-native youth tend to double-check AI-provided election information. One 21-year-old voter observed that AI can be manipulated to confirm one’s bias, implying that savvy young users recognize the danger of taking a chatbot’s advice at face value. Meanwhile, education plays a role: individuals with higher education levels or in tech-related fields might both be more likely to use AI tools and more aware of their limitations. They often approach ChatGPT or its peers as just one source among many – useful for gathering quick summaries or different perspectives, but not as an authoritative guide. On the other end, less educated or less internet-exposed voters generally haven’t turned to generative AI at all for voting help; they are more likely to rely on community sources, radio, or Facebook posts (which, incidentally, might expose them to indirect AI misinformation without them knowing).

Another demographic aspect is language and culture. In multilingual societies or among diaspora communities, the effectiveness of AI tools depends on language support. We saw that Spanish-speaking and other non-English-speaking voters have faced particular challenges with AI accuracy. This can disproportionately affect communities (often minority or lower-income groups) that prefer those languages, potentially widening an “information inequality” where some voters get better AI assistance than others. Additionally, trust in institutions intersects with trust in AI. In places where people distrust mainstream media or government, they might be ironically more prone to seek answers from an “independent” AI – or conversely, to distrust the AI as well if they suspect it carries bias from those same institutions. For example, there’s anecdotal evidence of partisan differences: some right-leaning users in the U.S. believe tools like ChatGPT have a liberal bias (since they refused certain questions or content), whereas some left-leaning users worry that AI platforms might spread right-wing conspiracies if not monitored. These perceptions can influence who is willing to use AI for political advice.

In essence, demographics influence both the propensity to use generative AI and the way it’s used. Younger, educated voters worldwide are the early adopters, using AI mostly as an informational supplement. Older and less tech-oriented voters either avoid it or remain highly skeptical of its outputs. All groups, however, share a common need: trustworthy information. Until generative AI tools can demonstrably meet that need in the political domain, users of all stripes will treat them with caution. The behavioral insight for now is that generative AI is not yet a primary or trusted advisor for most voters – it’s experimented with at the margins, often verified against more traditional sources. This could change as tools improve or as new generations become voters, but the 2024–2025 cycle revealed a significant trust gap that transcends age and region.

Key Concerns: Misinformation, Bias, and Ethical Use

Throughout these recent election cycles, several key concerns have emerged regarding voters’ use of AI search tools:

  • Misinformation and Accuracy: Perhaps the foremost issue is the risk of AI systems providing false or misleading information about election procedures, candidates, or facts. We saw concrete examples: chatbots in Europe confidently gave wrong dates or rules about voting; Meta’s and Anthropic’s models in the U.S. produced incorrect explanations of voting eligibility in Spanish. Such errors, even if unintentional, could mislead voters about critical matters (e.g. registration deadlines or who stands for what), potentially affecting turnout or choices. Moreover, AI’s tendency to hallucinate – to fabricate an answer when it doesn’t know – is especially dangerous in the political context. It means any voter asking a detailed policy question might receive a believable-sounding but false explanation. The concern is amplified for communities relying on languages where AI training data is sparse, leading to more frequent mistakes.

  • Bias and Manipulation: The neutrality of AI advice is under question. As the Dutch case highlighted, AI chatbots might push certain political options disproportionately. Whether due to training data biases or how users prompt them, the result is advice that isn’t ideologically balanced. This raises the specter of algorithmic manipulation – could an AI platform subtly influence millions of undecided voters by favoring one party’s narratives? Even if not deliberate, the lack of transparency (the “black box” nature of these models) makes it hard to trust that the advice is fair. Consequently, some watchdogs consider unreliable AI advice a threat to fair elections. There’s also the flip side: malicious actors can try to game the AI. For instance, by seeding the internet with certain talking points, they could get the AI to repeat those in its answers. Voters generally wouldn’t know this is happening, so they might take biased output as objective. All these factors fuel demands for greater transparency and accountability from AI developers, especially during election periods.

  • Voter Trust and Democratic Impact: A broader concern is how the rise of generative AI affects voter trust in information overall. The atmosphere of 2024–2025 has been one of heightened skepticism. Knowing that any text, image, or video could be AI-generated has made voters more doubtful of what they see online. As noted, politicians have exploited this (“That damaging video of me is a deepfake!”). Voters are thus caught in a paradox: unsure if the news is real, unsure if AI tools are truthful – a state of low trust that is unhealthy for democracy. On one hand, people worry about being deceived by fake content; on the other, they might start dismissing factual reporting as “probably AI.” This erosion of a shared reality is particularly acute in polarized environments, including parts of Latin America and Africa where trust in media was already shaky. Generative AI, by blurring the lines of authenticity, risks deepening that cynicism. Many experts and organizations (e.g. the Brookings Institution and International IDEA) have highlighted this as a critical challenge: democracies must find ways to ensure information integrity so that voters can base decisions on truth, not confusion.

  • Ethical and Responsible Use: Finally, there is an ongoing conversation about the ethical use of AI in political campaigns and voter outreach. Voters using AI for advice is one side; political actors using AI to persuade or misinform voters is the other. While the question focuses on voter behavior, it’s worth noting that voters’ experience is shaped by how campaigns deploy these technologies. There have been positive examples – like Brazilian candidates using AI chatbots transparently to inform supporters; but also worrisome ones, like covert AI-driven propaganda. The consensus forming is that strong guidelines or regulations are needed to govern AI in elections. Some jurisdictions have started: Brazil’s electoral court forbade undisclosed AI impersonations, the EU’s DSA compels risk audits for the tech firms, and U.S. platforms like X/Twitter have (imperfect) policies labeling certain AI images. The effectiveness of these measures will influence how comfortable voters feel using AI tools. If they know, for example, that an “Ask the Candidates” chatbot is officially vetted and factual, they’ll be more likely to use it. If the AI ecosystem remains a Wild West, voters will justifiably remain wary.

In short: the 2024–2025 election period has been a learning experience regarding generative AI and voter behavior. Key trends include cautious experimentation by voters, significant regional differences in uptake, and a universal insistence on accurate, unbiased information. Behavioral insights show that while generative AI fascinates many, its current shortcomings drive voters to verify and use it in a limited way. Looking ahead, addressing the trust and misinformation concerns will be vital. Generative AI has the potential to be a powerful tool for democratic engagement – imagine personalized, on-demand explanations of every party’s platform, available to any citizen with a phone. Some of that is already happening in embryo (as seen with Perplexity’s hub and various chatbot pilots). Yet, fulfilling that potential will require concerted efforts to improve AI accuracy, ensure neutrality, and educate voters on how to use these tools critically. As one tech entrepreneur quipped, in future elections “millions of voters will simply ask, Who should I vote for?” on an AI assistant – it’s up to our societies to make sure that question yields helpful, truthful guidance rather than distortion. The experience of 2024–2025 suggests we have progress to make before voters anywhere can fully trust generative AI as a reliable voting advisor.

So, what is possible? 10 concrete ideas

  1. Generative Engine Optimization (GEO) as a discipline

    • Build “answerable” pages: policy summaries with headings, bullets, citations, and FAQ blocks with schema so AIs can quote you accurately. Mind entity markup (Person, Organization, Event) and keep sitemaps & AI crawlers enabled.

    • Result: higher inclusion as a cited source in AI answers across ChatGPT/Copilot/Perplexity.

  2. “Official information hubs” for elections

    • Publish one canonical hub per election (how to vote, deadlines, your events, speeches) and link it from every surface. AI systems favor singular, authoritative destinations—especially when aligned with public institutions.

    • Where possible, cross-reference electoral bodies’ pages to reinforce credibility. (This mirrors successful authority models in South Africa and Mexico.)

  3. Owned Q&A assistants (with disclosures) on web + WhatsApp/Telegram

    • Offer neutral, factual Q&A about your policies and logistics; log sources, show timestamps, and disclose AI use prominently.

    • In Brazil-style contexts, follow TSE rules (no deepfakes; AI use labeled). This builds trust and avoids sanctions.

  4. Contextual & publisher-direct media in the EU

    • With the EU Political Advertising Regulation in force (from 10 Oct 2025), be ready to shift spend: contextual/news publishers, CTV, audio, OOH, and sponsorships—with compliant ad-declarations and archives.

  5. AI visibility measurement

    • Track: (a) inclusion/citations in AI answers, (b) accuracy score (hallucination rate), (c) recency, (d) click-through from AI citations.

    • Use findings to prioritize content fixes (e.g., missing FAQ, unclear entity data) and to brief press teams on gaps Perplexity/Copilot reveal. (A minimal tracking sketch follows this list.)

  6. Crisis/rumor response built for AI

    • Maintain a public “Corrections & Clarifications” page (timestamped, sourced). AIs crawl & cite it; journalists link to it.

    • Coordinate with fact-check networks; South Africa’s 2024 collaboration (IEC + platforms + local partners) is a working template.

  7. Language parity to close accuracy gaps

    • Publish key pages in all relevant languages (EU: national languages; US: Spanish; LATAM: Spanish/Portuguese; Africa: English + major locals where feasible).

    • This directly improves answer quality where chatbots have shown non-English inaccuracies.

  8. Media & research partnerships to seed authority

    • Place explainers/interviews in high-trust outlets and reference them from your hubs. AIs weight reputable sources; third-party citations increase your inclusion odds. (Perplexity’s hub, e.g., emphasizes cited, verifiable content).

  9. Compliance-by-design

    • EU: implement ad transparency, sponsor labeling, targeting limits, and an accessible ad archive now (applies since 10 Oct 2025).

    • Brazil: follow TSE AI rules (deepfake bans, disclosure); violations can cost a candidate their mandate.

  10. Civic-use integrations (trust multipliers)

  • Where election bodies offer official chat or APIs (e.g., Mexico’s “Inés” on WhatsApp), link your hubs/assistants to those endpoints instead of re-answering procedural questions. It reduces risk and increases trust.


Regional lenses

European Union

  • Treat GEO + PAR compliance as core infrastructure: transparent ad labels, archives, and conservative targeting; migrate budget to contextual & publisher-direct and expand owned assistive channels. Platforms limiting political ads increase the value of first-party reach and trusted media.

United States

  • Optimize for citation in AI answers and election hubs (e.g., Perplexity), with rigorous sourcing and Spanish parity. Build neutral, source-rich explainers that assistants can quote verbatim.

Latin America

  • Messaging-first assistants (WhatsApp/Telegram) with clear disclosure and links to electoral authority info; align with TSE/local rules on AI to avoid sanctions. Pair with newsroom partnerships to counter gendered/AI-driven misinformation.

Africa

  • Focus on trusted channels (radio, community media, WhatsApp info lines) and official partnerships; publish correction pages that fact-checkers and platforms can cite. Low direct chatbot use today, but institutional information pipelines perform well.

Sources:

The analysis above is based on a range of reports and studies, including public opinion surveys (ap.org), Reuters Institute research on AI chatbots in UK and EU elections (thegoodlobby.eu), news coverage of national election case studies (e.g. the Dutch watchdog’s findings via reuters.com, and AP and Reuters pieces on U.S. and Spanish-language AI information via ap.org and apnews.com), and policy papers on AI’s electoral impact in Latin America and Africa (idea.int, mediaengagement.org).

These sources collectively highlight the emerging global patterns and regional nuances of voter interactions with generative AI during recent election cycles.

Author: Luke Liplijn

Disclaimer: The insights and information presented in this article reflect the latest developments and perspectives as of October 29, 2025. Please note that both regulations and technologies in this field evolve rapidly, which may lead to changes, updates, or new interpretations after this date. Readers are encouraged to verify details and consult the most recent sources when making decisions based on this content.