Discourses From the East
The 2026 Assam Assembly elections stand at the vanguard of a fundamental shift in democratic discourse, one in which the traditional boundaries between reality and fabrication have been structurally dissolved by generative artificial intelligence. In a state historically characterized by intricate ethnic, linguistic, and demographic anxieties, the integration of synthetic media into political campaigning has moved beyond mere technological assistance to become the primary architect of electoral narratives. This transformation is not occurring in a vacuum; it is the culmination of trends observed during the 2024 Indian general elections, when an estimated $16 billion was spent across a high-tech media environment that deployed deepfakes, voice cloning, and hyper-personalized outreach at an unprecedented scale.
For Assam, a region defined by the legacy of the Assam Accord, the National Register of Citizens (NRC), and ongoing debates over undocumented migration, AI-generated content provides a potent mechanism to weaponize communal fear and manufacture political consent regardless of empirical authenticity.
The Industrialization of Deception: Digital War Rooms and Infrastructure
The shift toward the 2026 election cycle is marked by the formalization and industrialization of the “media war.” On October 29, 2025, the Bharatiya Janata Party (BJP) officially inaugurated its electoral media strategy during a high-level organizational meeting at the Atal Bihari Vajpayee Bhawan in Guwahati. The scale of this operation reveals the depth of institutional integration: 16 spokespersons, 28 media panellists, and 26 state media department functionaries were mobilized alongside media conveners representing 39 organizational districts. This infrastructure is designed to bypass traditional journalistic gatekeeping, utilizing a decentralized network of social media handles and WhatsApp groups to disseminate both official achievements and synthetically generated narratives directly to the voter’s handheld device.
The technological arsenal available to these war rooms has expanded in sophistication and accessibility. While early iterations of electoral technology focused on mass broadcasting, the 2026 campaign is defined by “agentic AI” and hyper-personalization. Political consultancies now offer services that clone the voices of local candidates to deliver personalized audio messages, addressing individual voters by name. During previous cycles, over 50 million such automated calls were placed to forge emotional connections that would be impossible through traditional rally-based campaigning. In Assam, where linguistic nuance is a primary marker of identity, this technology allows for the delivery of “tailored truths”: messages adjusted to resonate with the specific ethnic or linguistic concerns of a micro-constituency, often contradicting the party’s broader state-level rhetoric.

The logistical execution of these digital campaigns often relies on the exploitation of “shadow networks.” For instance, political consultants have been documented using the phone numbers of construction labourers and other ordinary citizens to register WhatsApp accounts, making it functionally impossible for regulatory bodies or law enforcement to trace inflammatory AI content back to the political parties themselves. This creates an environment of plausible deniability where the most divisive content is spread through unofficial channels, while official party handles maintain a veneer of responsible communication.
Manufactured Dystopias: The Visual Language of Communal Fear
In the context of Assam’s socio-political fragility, AI-generated imagery has been strategically deployed to create “manufactured dystopias.” A seminal incident in late 2025 involved a series of videos released by the Assam BJP titled “Assam Without BJP”. These videos utilized hyper-realistic generative AI to depict a future state where landmarks such as the Lokpriya Gopinath Bordoloi International Airport and iconic tea estates were purportedly “taken over” by the Muslim community. The visuals were meticulously crafted to evoke existential dread: skull-capped men in positions of authority, the transformation of cultural heritage sites into Islamic centres, and the prominent display of Pakistani flags.
The power of this narrative lies in its disregard for statistical reality. The videos prominently claimed that Assam would become “90% Muslim” without the intervention of the ruling party, a figure that is starkly contradicted by the 2011 Census, which placed the Muslim population at 34.22%. Despite this empirical gap, the immersive nature of AI-generated content allows these “synthetic truths” to take root in the public consciousness. Unlike crude Photoshop edits, modern generative AI creates a “felt reality” that resonates with pre-existing notions of demographic threat. This is what communication theorists refer to as “affective relation,” where misinformation is designed not necessarily to be believed as a literal fact, but to fuel community sentiments and empower exclusionary movements.
The psychological impact of these images is profound. Even when a viewer is aware that a video is AI-generated, the vivid depiction of a “feared future” can influence voting behaviour by triggering a survivalist instinct. This tactical use of “fear-as-narrative” is particularly effective in Assam, where historical anxieties regarding the Assam Accord and the perceived erosion of indigenous identity have been the dominant themes of political life for over four decades. By weaponizing these anxieties through immersive technology, political actors can bypass rational debate and force the electorate into a binary choice between “safety” and “cultural erasure”.
The Opposition’s Digital Counter-Strategy and Hijacked Authority
Opposition forces, most notably the Indian National Congress (INC) and the All-India United Democratic Front (AIUDF), have faced a dual challenge: defending themselves against AI-driven character assassination and launching their own digital counter-narratives. The response has been a combination of legal resistance and the use of “hijacked authority.” On September 18, 2025, the Assam Pradesh Congress Committee (APCC) filed a formal complaint at the Dispur Police Station, naming high-ranking BJP officials including State President Dilip Saikia and Social Media Convener Shaktidhar Deka. The charges, which included criminal conspiracy and incitement to communal disturbances, signify the transition of AI content from a campaign tool to a central piece of forensic legal dispute.
However, the opposition has also been implicated in the use of deceptive synthetic media. A significant instance involved the circulation of a deepfake news bulletin masquerading as a broadcast from Aaj Tak. The video featured a synthetically generated news anchor reporting on a “leaked intelligence document” that predicted a major defeat for the BJP in the 2026 elections. This tactic represents a sophisticated form of “source hijacking,” where the credibility and perceived neutrality of a mainstream media institution are co-opted to lend weight to a fabricated political narrative.

The effectiveness of these hijacked narratives is often amplified by “viral lag”: the window of time between the content’s release and its formal debunking. Even when AI detection tools such as Hive or Hiya confirm that a clip is 100% digitally altered, the narrative of an “impending defeat” or a “leaked report” has already permeated the information ecosystem. In a highly polarized environment, many voters are predisposed to share content that aligns with their desired outcome, regardless of its authenticity, leading to a state of “informational choice” where truth is secondary to ideological satisfaction.
The Liar’s Dividend and the Pre-emptive Neutralization of Truth
One of the most consequential side effects of the proliferation of AI in the 2026 Assam elections is the emergence of the “Liar’s Dividend.” This occurs when the mere existence of deepfake technology allows political actors to dismiss authentic but damaging information as “AI-generated fabrications”. This strategy has been adopted pre-emptively by the highest levels of the Assam government. Chief Minister Himanta Biswa Sarma has publicly warned that his comments would be “distorted” using AI before the 2026 assembly polls to mislead the public.
While this warning highlights a genuine threat, it also functions as a strategic defence mechanism. By sensitizing the public to the possibility of AI distortion, a politician can create a blanket of scepticism around any future visual or audio evidence of misconduct. This creates a paradox where the “defence of truth” becomes a tool for the “evasion of accountability.” If a compromising video of a leader surfaces, the default public reaction, already primed by pre-emptive warnings, is to question its authenticity rather than its content. This leads to what analysts describe as “epistemic nihilism,” where the electorate, overwhelmed by the inability to distinguish real from fake, retreats into pre-existing tribal loyalties as their only reliable guide.
Furthermore, the “Liar’s Dividend” is often exploited to suppress legitimate press oversight. When investigative journalists uncover genuine evidence of corruption or electoral malpractice, political entities can now utilize AI-powered chatbots and coordinated social media campaigns to label the evidence a deepfake. This tactic not only protects the specific actor but also devalues the role of the press as a whole, contributing to a broader decline in trust in democratic institutions.
The Northeast Factor: Ethnic Tensions and the Cycle of Violence
The impact of AI-driven narratives in Assam is inextricably linked to the region’s unique socio-political sensitivities. The Northeast is a landscape where “digital contestation” frequently translates into “physical confrontation”. The 2023 ethnic clashes in Manipur serve as a grim precedent for how manipulated content can trigger a “destructive cycle” of violence. In Manipur, synthetic media and doctored visuals, such as a fabricated video showing a confrontation between the Assam Rifles and state police, were used to stoke communal fears and ignite large-scale protests and arson.
For Assam, the risk is magnified by the linguistic diversity of the state. With over 200 languages and dialects, many of which are spoken by small, regional communities, the state’s information ecosystem is highly fragmented. Traditional content moderation by platforms like Meta and Google is primarily focused on English and Hindi, leaving approximately 86% of Indian languages underserved by fact-checking and safety protocols. This “linguistic gap” allows inflammatory narratives to spread in local languages like Bodo, Karbi, or Mising without detection by automated filters or national fact-checking units.

The cycle of disinformation in the Northeast typically follows a predictable pattern: a manipulated piece of content gains traction on local social media, triggers heightened community tensions, and is followed by real-world protests or violence. This cycle is often accelerated by the involvement of young users and influencers who may not possess the “media and information literacy” (MIL) required to evaluate synthetic content critically. In a region where identity is tied to land and language, an AI-generated image of a “cultural encroachment” can be more effective at mobilizing a crowd than any political manifesto.
Regulatory Plenary Powers and the Institutional Response
In response to the escalating threat of AI misuse, the Election Commission of India (ECI) has invoked its plenary powers under Article 324 of the Constitution to establish a new framework for electoral integrity. As of October 24, 2025, the ECI has mandated a rigorous set of guidelines for all political parties and candidates contesting the 2026 elections. The centrepiece of this regulation is the “3-hour takedown rule,” which requires political parties to remove any misleading or unauthorized AI-generated content from their official handles within three hours of being notified.
The ECI’s directive also includes specific technical requirements for disclosure. Any synthetically generated image, video, or audio used for campaigning must bear a prominent label such as “AI-Generated” or “Synthetic Content”. For video content, this label must cover at least 10% of the visible display area and be positioned at the top of the screen. Furthermore, political parties are now required to maintain internal records of all AI-generated campaign material, including the details of the creators and timestamps, for verification by the Commission.
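The two quantitative requirements described above, the three-hour takedown window and the 10% label-area threshold, can be made concrete with a short sketch. This is an illustrative interpretation only: the function names are invented, and the assumption that the label is a full-width strip pinned to the top of the frame is one plausible reading of the directive, not the ECI's own specification.

```python
from datetime import datetime, timedelta

TAKEDOWN_WINDOW = timedelta(hours=3)  # the ECI "3-hour takedown rule"
MIN_LABEL_AREA = 0.10                 # label must cover at least 10% of the frame

def takedown_deadline(notified_at: datetime) -> datetime:
    """Latest time by which flagged AI content must be removed
    from an official handle after notification."""
    return notified_at + TAKEDOWN_WINDOW

def label_banner(width: int, height: int) -> tuple[int, int, int, int]:
    """Bounding box (left, top, right, bottom) for an 'AI-Generated'
    banner pinned to the top of the frame. Assumes a full-width strip,
    so the strip height alone must supply the 10% area share."""
    strip_height = -(-height // 10)  # ceil(height * 0.10), kept integral
    return (0, 0, width, strip_height)

def is_compliant(box: tuple[int, int, int, int], width: int, height: int) -> bool:
    """Check a candidate label box: anchored at the top of the screen
    and covering at least 10% of the visible display area."""
    left, top, right, bottom = box
    area = (right - left) * (bottom - top)
    return top == 0 and area >= MIN_LABEL_AREA * width * height

# For a standard 1920x1080 campaign video, the minimum compliant banner
# is a 108-pixel-tall strip across the top of the frame.
deadline = takedown_deadline(datetime(2026, 3, 1, 9, 30))
box = label_banner(1920, 1080)
```

The point of the sketch is that both rules are mechanically checkable, which is precisely what makes their enforcement gap (unofficial handles, shadow networks) a policy problem rather than a technical one.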

However, the efficacy of these regulations is hindered by two primary factors: they are “reactive” and “content-centric”. By the time the ECI or a social media platform initiates a takedown, the content has often already achieved its primary goal of setting a narrative or inciting an emotional response. Moreover, the regulatory focus on official handles ignores the vast “underground” network of parody accounts, meme pages, and personal WhatsApp messages that form the bulk of the misinformation landscape. Critics also argue that the vague definitions of “misleading” content provide a loophole for political parties to use AI for “satire” or “creative expression,” effectively circumventing the spirit of the guidelines.
The Economic Engine: Cheap Fakes vs. Deepfakes in the 2026 Cycle
The democratization of AI technology has created a lucrative market for “synthetic media creators” in India. At subscription rates working out to roughly 10 cents per video, political actors can now produce content that once required expensive studio time and post-production. This has led to the proliferation of “cheap fakes”: content created using basic editing software or low-end AI tools that, while not photorealistic, is sufficient to influence public perception.
Analysis from the Deepfakes Analysis Unit (DAU) indicates that during recent election cycles, “cheap fakes” were actually more pervasive than high-end deepfakes. These often involve the use of synthetic audio tracks overlaid on real footage to alter the meaning of a leader’s speech. Because audio-only deepfakes are harder for the average user to identify, especially when mixed with background noise or music, they have become a preferred tool for “targeting candidates with forged call recordings about arranging ‘black money’ or buying votes”.
The market for these services is not limited to domestic firms. Indian companies like Polymath Solutions, which played a major role in the 2024 elections, are now expanding their synthetic media capabilities to Western democracies, indicating that India has become a global hub for political AI innovation. This commercialization ensures that as the 2026 Assam polls approach, the supply of synthetic media will outstrip the capacity of regulatory and fact-checking bodies to monitor it.
Voter Suppression and the “Special Intensive Review” Controversy
Beyond communal narratives, AI and digital tools are being utilized to manipulate the very mechanics of the electoral process in Assam. A significant controversy emerged in late 2025 regarding “Special Intensive Reviews” (SIR) of electoral rolls. Opposition parties have alleged that the ruling BJP utilized digital records and purportedly leaked video conference instructions to orchestrate the targeted deletion of opposition supporters from the voter lists in 60 assembly constituencies.

These allegations point to a more sinister use of technology: “data-driven voter suppression”. By utilizing AI-powered context analysis tools, political parties can identify the likely political leanings of a household based on demographic data, past voting patterns, and social media activity. This information can then be used to challenge the citizenship or residential status of specific voters, particularly in sensitive regions where the NRC process remains incomplete. The result is a “digital contest” where the target is not just the voter’s mind, but their right to participate in the democratic process.
“Special Revision” (SR) processes, which are standard for updating voter lists due to migration or death, have become a point of high tension in Assam’s socio-political landscape. Opposition groups argue that without transparency and digital safeguards, these revisions become a tool for disenfranchisement. In response, cybersecurity experts and organizations like CryptoWild have called for the use of “voter data integrity tools” and blockchain-based solutions to prevent tampering with electoral databases.
Building Resilience: The Role of Fact-Checking and Media Literacy
As the 2026 elections approach, the defence of democratic integrity in Assam relies heavily on a “multi-stakeholder approach” involving fact-checkers, civil society, and academia. The “Shakti – India Election Fact Checking Collective,” supported by the Google News Initiative, represents the largest collaboration of its kind, uniting over 50 organizations to combat AI-generated misinformation in regional languages.
However, fact-checking faces a significant challenge in “amplification.” While a deepfake can go viral in minutes, a detailed fact-check report often struggles to reach the same audience. To address this, organizations are adopting a “literacy-centric” rather than “content-centric” approach. This involves integrating “media and information literacy” (MIL) into school curricula and launching awareness campaigns that teach citizens how to “spot a deepfake” by looking for anomalies in lip-syncing, shadows, or inconsistencies in audio tracks.
In Assam, local influencers and NGOs are being leveraged to reach linguistically diverse populations that are often ignored by national media. The goal is to foster “cognitive resilience”: the ability of a citizen to resist emotional manipulation and verify information before deciding to share it. Empirical research has shown that simple “behavioural nudges,” such as prompting a user to consider the accuracy of a headline before sharing, can significantly reduce the spread of misinformation.
Narrative Hegemony and the Final Appraisal of the 2026 Cycle
The 2026 Assam Assembly elections will serve as a definitive case study in how synthetic media can set a political narrative regardless of its authenticity. The evidence surveyed here demonstrates that AI is no longer a peripheral tool for campaign efficiency; it has become a “narrative force multiplier” that can manufacture existential threats and communal divisions in a fraction of the time required by traditional propaganda. In a post-truth electoral landscape, the “authenticity” of a video is less important than its “effectiveness”: the degree to which it confirms a voter’s bias and mobilizes their support.
The overarching implication for the future of democracy in Assam is the potential for a “permanent state of informational instability.” When AI can resurrect dead leaders, manufacture dystopian futures, and pre-emptively neutralize truth through the “Liar’s Dividend,” the very concept of an “informed electorate” is called into question. The 2026 cycle will determine whether the institutional and societal safeguards currently being established, from ECI regulations to media literacy programs, are sufficient to protect the “sanctity of the voter’s decision” against the onslaught of algorithmic manipulation.
Ultimately, the battle for Assam’s future is being fought not just at the polling stations, but in the digital war rooms where narratives are coded and disseminated. The intersection of communal anxieties and high-tech deception creates a volatile environment where the “fragile social fabric” of the Northeast is the primary target. As the state moves toward the 2026 polls, the challenge for journalists, regulators, and citizens alike is to navigate this algorithmic frontier without losing sight of the objective reality that lies beneath the synthetic surface.
