The subtext of the summit is that integration capacity, not just frontier leadership, may shape long‑run advantage.
Rahul Pawa | X: @imrahulpawa
In the week ahead, the India AI Impact Summit convenes at Bharat Mandapam in New Delhi, promoted by its organisers as the first global AI summit hosted in the Global South and designed to produce tangible outputs, not just declarations. India’s Ministry of External Affairs has confirmed that Emmanuel Macron and Luiz Inácio Lula da Silva will participate during their February visits. A government guide also positions the summit as a mass convening of governments, industry leaders, researchers, startups, students and citizens.

Its philosophical anchor is unusually explicit for an AI conference. In remarks preceding the summit, Prime Minister Narendra Modi set the theme as “Sarvajana Hitaya, Sarvajana Sukhaya” (welfare for all, happiness for all) and advocated for equitable access, population‑scale skilling, and responsible deployment of AI that is safe by design, transparent, and auditable in high‑impact settings.
Intriguingly, India is hosting at a moment when the global AI governance stack is fragmenting. The United States is pushing for innovation velocity and national coordination. A 2025 executive order sets “global AI dominance” as policy and directs agencies to revoke or revise prior federal AI actions seen as barriers to innovation. A later order argues for a “minimally burdensome” national standard and launches federal litigation against state AI laws that create a compliance patchwork or constrain model behaviour.

The European Union is coding caution into law. The European Commission describes the AI Act as a four‑level risk regime that bans “unacceptable risk” practices and imposes strict obligations on “high‑risk” systems, covering risk management, data quality, logging, documentation, human oversight, robustness and cybersecurity, under a phased timeline. Implementation is politically contested, with reported calls from a major tech lobbying group to pause parts of the rollout, warning that missing implementation pieces and rushed timelines could stall innovation.

China combines rapid deployment with guardrails bound to state priorities. In the generative‑AI rules (translated by the China Aerospace Studies Institute), providers must “adhere to the socialist core values,” avoid specified content categories, and submit services with public‑opinion or social‑mobilisation attributes to security assessment and algorithm filing.
However, India’s positioning diverges from all three. Rather than betting primarily on capital‑intensive frontier model races, it is trying to make deployment the moat: shared inputs and repeatable governance that let AI plug into public services and regulated industries without reinventing the stack each time. The summit’s structure reflects that engineering mindset. Official material describes “People, Planet, Progress” as pillars, with working groups tasked to present deliverables such as an “AI Commons,” trusted tools, shared compute infrastructure and sector compendiums of use cases. This is infrastructure policy framed as an implementation programme.
That focus matches what diffusion research emphasises. An OECD working paper on digital technology diffusion argues that advanced technology adoption builds on enabling systems, varies across sectors and firm sizes, and depends heavily on skills and digitisation; it calls for policy mixes that accelerate diffusion to unlock productivity. The subtext of the summit is that integration capacity, not just frontier leadership, may shape long‑run advantage.
A government explainer says that, under the IndiaAI Mission, more than 38,000 high‑end GPUs and 1,050 TPUs have been onboarded for shared access, with subsidised pricing positioned as a democratisation tool for startups, researchers and public agencies. On the data side, the same release positions AIKosh as a shared repository and reports thousands of datasets and hundreds of models aggregated across sectors. Together, compute plus data commons turn “intent” into something implementers can plan against: predictable unit costs, reusable artefacts, and a shorter path from prototype to audited deployment.
This framing also echoes the G20 concept of digital public infrastructure: modular, interoperable building blocks such as identity, payments and consented data flows that multiple actors can reuse across sectors. India’s AI story is “DPI, but for models”: reduce duplication and make safeguards portable.
The summit’s most consequential test is whether “responsible deployment” stays concrete at scale. India’s stated stance is that high‑impact AI should be auditable and human‑overseen, with explicit guardrails on misuse such as deepfakes, crime and terrorism. The summit’s own deliverables (trusted tools, sector playbooks and shared infrastructure) implicitly treat assurance as a prerequisite for diffusion, not an afterthought.
If the week produces reusable assets (an implementable AI Commons, sector compendiums that specify data and evaluation standards, and scalable access to compute), India’s thesis becomes plausible: long‑term AI advantage may belong to whoever can make systems reliable, affordable, multilingual and governable for the largest number of institutions, not only whoever trains the largest frontier model first.
(The author is an international criminal lawyer and director of research at the New Delhi-based think tank Centre for Integrated and Holistic Studies (CIHS).)
