
Kazakhstan in the GIRAI 2023 Assessment: Institutional Ambition and the Ethics Gap


The Global Index on Responsible AI (GIRAI) 1st Edition assessed 138 countries across three dimensions — Responsible AI Governance, Human Rights and AI, and National Responsible AI Capacities. Kazakhstan placed 74th globally and 2nd among the five Central Asian states. The full dimensional and thematic breakdown is available through the GIRAI data explorer. This post reads the Kazakhstan country profile — its governance architecture, the human rights picture, and the pattern that emerges when policy ambition moves faster than accountability frameworks.

Kazakhstan has the most developed institutional AI posture in Central Asia. It is also the country in the region where the gap between policy ambition and ethical governance frameworks is most visible — precisely because the ambition is high enough to make the gap measurable.

The Three-Dimensional Breakdown

GIRAI structures every country profile across three top-level dimensions evaluated through thematic areas and actor categories. Legend: ● = documented evidence approved by GIRAI headquarters · ◐ = drafted, planned, or partially documented · ○ = no documented evidence at the time of assessment. Full per-indicator scores at global-index.ai.

Dimension 1 — Responsible AI Governance

Thematic area · Evidence (Kazakhstan)
Enabling policies: Digital Kazakhstan 2018; Concept of Digital Transformation, ICT and Cybersecurity 2023–2029 (Resolution No. 269, March 28, 2023); AI Concept 2024–2029 in public discussion (~2-week window).
Rule of law: Policy instruments at presidential decree level; constitutional framework intact. Independent civil society oversight constrained by political environment.
Technical standards: National AI platform mandated by December 2024; AI development roadmap due December 2023. Technical standards specified at program level, not yet codified as standards.
Technology-specific regulation: Frameworks for algorithmic accountability, human oversight, and transparency absent or underdeveloped relative to deployment commitments.
Responsible AI Governance dimension — Kazakhstan, GIRAI 1st Edition (2023).

Dimension 2 — Human Rights and AI

Thematic area · Framework · Gov. action · Private sector · Civil society · Academia
Freedom of Expression
Public Participation
Data Protection
Cultural & Linguistic Diversity
Health & Well-Being
Children’s Rights
Indigenous Data Sovereignty
Bias & Unfair Discrimination
Gender Equality
Education
Environmental Protection
Labour Protection
Human Rights and AI dimension — Kazakhstan, GIRAI 1st Edition (2023). ◐ for Cultural & Linguistic Diversity reflects ISSAI/KazLLM research output not yet bridged to a national policy framework. ◐ for Health gov. action reflects oncology AI deployment captured under capacities rather than a rights-protective framework.

Dimension 3 — National Responsible AI Capacities

Sub-dimension · Evidence (Kazakhstan)
Institutions: 24 universities and research centers engaged in AI R&D per the Ministry of Science and Higher Education. Four anchor institutions named in the AI Concept: Eurasian National (L.N. Gumilyev), Al-Farabi Kazakh National, Satpayev, Nazarbayev University. Nazarbayev's ISSAI produces KazLLM research.
Investments: National Development Plan to 2029 targets GDP doubling to $450B; AI named as a tool for healthcare diagnostics (oncology AI radiology). National AI platform commitment by December 2024. Investment scale is real; governance has not kept pace.
Competencies: The AI Concept itself states that "many citizens lack understanding of how these technologies function across sectors." The 2-week public consultation window for the same Concept is flagged as functionally narrow.
National Responsible AI Capacities dimension — Kazakhstan, GIRAI 1st Edition (2023).

The Governance Framework: Moving Fast

The anchor of Kazakhstan’s AI governance architecture during the GIRAI research period is the Digital Kazakhstan program, launched in 2018, which set out to transform the country into a leading digital economy — integrating AI across healthcare, education, transportation, and government services. This program established the baseline from which subsequent AI-specific instruments have developed.

The Concept of Digital Transformation, Development of ICT, and Cybersecurity Industry for 2023–2029 — approved by Government Resolution No. 269 on March 28, 2023, within the GIRAI research window — marks a more specific commitment: it includes a roadmap for AI development to be completed by December 2023 and a national AI platform to be created by December 2024. This is the kind of actionable, time-bound policy instrument the index captures as evidence of government commitment in the Responsible AI Governance dimension.

Two additional documents — the Concept for the Development of Artificial Intelligence for 2024–2029 and the Draft Presidential Decree on the National Development Plan until 2029 — appeared for public comment in 2024, just outside the GIRAI research window. The National Development Plan sets a headline economic target — GDP doubling to $450 billion by 2029 — and specifically names AI as a tool for healthcare diagnostics: oncology centers are to receive equipment using AI for reading X-rays, mammograms, CT, and MRI scans. The AI Concept, meanwhile, went to public discussion for approximately two weeks before closing — a consultation window the GIRAI researcher flags as functionally narrow for a document setting AI policy parameters through 2029. Taken together, they indicate the direction of travel: Kazakhstan is iterating its AI policy framework at speed.

The GIRAI researcher’s assessment of this rapid iteration is direct: the recent documents demonstrate accelerating AI ambition, but contain a notable gap in ethical dimensions. Responsible AI governance means more than deployment roadmaps. Frameworks for algorithmic accountability, human oversight, and transparency are either absent or underdeveloped relative to the pace of implementation commitments.

Human Rights and AI: The Academic Anchor

The most substantive finding in the Human Rights and AI dimension concerns the academic sector. According to the Ministry of Science and Higher Education of Kazakhstan, 24 universities and research centers are engaged in AI research or development to varying degrees. The AI Concept identifies four institutions with advanced computing infrastructure — Eurasian National University (L.N. Gumilyev), Al-Farabi Kazakh National University, Satpayev University, and Nazarbayev University — as the current backbone of Kazakhstan’s AI research capacity. This is a significantly deeper institutional footprint than any other country in Central Asia at the time of the assessment.

The country’s demographic profile supports this: more than 6.2 million young people — approaching one third of the total population — are under 35, with 39.3% holding higher education degrees. The NEET rate (young people not in employment, education, or training) stood at 7.1% in Q3 2023, with a government target of reducing it to 3.5% by 2029 — a figure that signals both the scale of the youth workforce pipeline and the pressure on digital-sector employment to absorb it.

The limitation in this dimension is in the non-state actor space. The GIRAI researcher found limited evidence of private sector or civil society engagement on AI and human rights — not because such engagement is absent, but because documentation meeting the index’s evidentiary standard was sparse. Independent operation of non-state actors in AI governance in Kazakhstan is constrained by the country’s political environment, contributing to a narrower evidence base than the scale of AI activity might otherwise suggest.

There are also documented concerns about AI use in law enforcement. Bitter Winter, the religious freedom and human rights watchdog, documented surveillance deployments during the January 2022 civil unrest — when protests over fuel price increases were designated as terrorism by the government. Street cameras supplied by Hikvision cover Kazakhstan’s cities; Huawei controls the country’s telecommunications infrastructure, including internet routing and switching. Reports from that period describe a Chinese team deployed specifically to assist with facial recognition to identify protesters from surveillance footage. These allegations cannot be independently verified, but they are documented in the GIRAI country record and sit unresolved alongside Kazakhstan’s positive institutional commitments.

National AI Capacities: Investment Without Inclusion

On the National Responsible AI Capacities dimension, Kazakhstan’s profile reflects genuine investment — in research, in institutional infrastructure, in workforce development — combined with limited participation by private sector entities and civil society in shaping the direction of that investment.

The public consultation window for the 2024 AI Concept was approximately two weeks — the document was put to online discussion on May 21, 2024, and closed shortly after. For a strategy document setting AI development parameters through 2029, that timeframe is short enough to function as a constraint on meaningful input from anyone outside the government's immediate networks. The AI Concept itself acknowledges a knowledge gap at the baseline: "many citizens lack understanding of how these technologies function across sectors" — which makes a two-week public consultation window for the strategy meant to govern those same technologies a notable tension. The GIRAI researcher flags this explicitly as a question about genuine public engagement rather than formal compliance.

What the Profile Shows

Kazakhstan in 2023 is the Central Asian country with the most elaborated AI policy architecture, the most developed academic research base, and the most visible gap between governance ambition and ethical framework. A 74th-place global ranking — in a field of 138 countries — places Kazakhstan in the upper half of the index, but the dimensional picture is uneven: strong on policy instruments and research capacity, thinner on rights-protection and accountability structures. Those three things are connected: the more a country commits to AI at scale, the more visible the absence of accountability structures becomes. The GIRAI baseline captures this moment — when the infrastructure is moving and the rights-protection layer has not caught up.


Based on the Kazakhstan country context and research findings submitted to the Global Index on Responsible AI (GIRAI) 1st Edition, 2023. Data source: global-index.ai. Regional hub: IDFI (Georgia). Publication consent: Yes. This is an observational read by KG Labs as part of its Central Asia AI governance coverage.
