Kazakhstan’s AI Governance Landscape: A Policy Brief (CAIDP AI Policy Clinic, Spring 2023)
The Center for AI and Digital Policy (CAIDP) AI Policy Clinic is a structured programme that brings together practitioners and researchers from across the world for a semester-long curriculum on AI policy analysis. The Spring 2023 cohort covered frameworks for responsible AI governance, rights implications of algorithmic systems, and the gap between international principles and national implementation. Each participant produced a thematic country analysis as the programme’s practical output. This post presents the Kazakhstan analysis produced through that programme.
Kazakhstan is the most institutionally active country in Central Asia on AI governance — it has a policy pipeline, state investment, and a research base that no other country in the region matches. It is also, as this analysis argues, the country where the gap between governance ambition and governance accountability is most visible precisely because the ambition is high enough to make the gap measurable. The Oxford Insights Government AI Readiness Index 2022 placed Kazakhstan at 45.78 out of 100, with its lowest score in the technology sector — an indicator of where the country’s investment and institutional development have not yet produced the technical depth the ambitions require.
The Legislative Landscape
Kazakhstan’s AI governance architecture in 2023 rests on a layered set of legal and strategic instruments. The foundation is the Digital Kazakhstan programme launched in 2018, which set out to transform the country into a leading digital economy across healthcare, education, transportation, and government services — the policy baseline from which all subsequent AI-specific instruments develop. Under it fall an international technopark of IT start-ups (Astana Hub), open-platform and big-data development, smart-city components, and innovative financial technologies.
The Action Plan for the Implementation of the Concept of the Legal Policy of the Republic of Kazakhstan until 2030 (Government Resolution, April 29, 2022) establishes a specific legislative mandate for AI: a definition of “artificial intelligence” — including the procedure, scope, and range of its use, its legal status, and legal consequences — together with a separate offense for criminal use of AI, to be developed by the Ministry of Digital Development by December 2025. The same action plan commits to codifying norms governing AI, machine learning, and data processing into a single legal instrument by December 2023. These are concrete legislative targets with named responsible agencies and deadlines — the type of governance commitment that the GIRAI framework captures as evidence of government action.
The Concept of Digital Transformation, Development of ICT, and Cybersecurity Industry for 2023–2029 (Government Resolution No. 269, March 28, 2023) sets a roadmap for AI development to be completed by December 2023 and a national AI platform to be operational by December 2024. The Concept for the Development of Artificial Intelligence for 2024–2029 went to public discussion in mid-2024 with a consultation window of approximately two weeks — a timeframe the CAIDP clinic analysis flagged as functionally insufficient for a strategy document governing AI through 2029, particularly given the document’s own acknowledgement that “many citizens lack understanding of how these technologies function across sectors.”
On the statistical side: the National Statistics Committee added artificial intelligence, machine learning, and big data as recognized indicators to the enterprise ICT survey — “Report on the Use of Information and Communication Technologies in Enterprises” (August 2022). The definition embedded in the survey form — AI as a computer system’s ability to imitate cognitive functions such as learning and problem-solving, applying mathematics and logic to model human reasoning — represents Kazakhstan’s first official public definitional commitment to what AI means in an administrative context.
The Geopolitical Dimension: China–Kazakhstan AI Cooperation
A significant dimension of Kazakhstan’s AI governance landscape that does not appear in its domestic strategy documents is the China–Kazakhstan digital technology partnership. At President Xi Jinping’s February 2022 meeting with President Kassym-Jomart Tokayev, both governments committed to cooperation in fields including artificial intelligence, e-commerce, digital finance, and the “Digital Silk Road” — language that was reinforced verbatim at Xi’s September 2022 state visit to Kazakhstan. These commitments create a parallel track to the domestic legislative pipeline: AI infrastructure development channelled through Belt and Road cooperation frameworks, outside the public consultation and accountability structures of domestic policymaking.
This matters for AI governance in two directions. First, it means AI infrastructure is arriving in Kazakhstan through diplomatic and commercial channels that have their own terms — terms not necessarily aligned with the accountability, transparency, or rights-protection provisions Kazakhstan’s domestic AI strategy is supposed to establish. Second, the Hikvision and Huawei infrastructure documented in Kazakhstan’s telecommunications and surveillance sector represents the operational layer of that digital partnership: AI tools deployed in the country whose governance accountability is determined by their country of origin rather than by Kazakhstani law. Hikvision — a Chinese state-owned company placed on the US Entity List in 2019 over the use of its AI surveillance technology in human rights abuses — provided the hardware for Kazakhstan’s surveillance infrastructure. President Tokayev’s direct visit to Hikvision and discussions of future cooperation were reported in October 2019, the same year facial recognition cameras began appearing on Almaty buses.
Facial Recognition and Smart Cities: Deployment Ahead of Accountability
The deployment picture in Kazakhstan is concrete. In 2019, the small city of Akkol was declared the first “Smart City” in Kazakhstan — monitored by an AI-based facial recognition system with thermal imaging, license plate recognition, missing persons search, and weapons detection in schools, hospitals, and public places. By the same year, over 4,000 cameras had been installed across the capital, Nur-Sultan. In 2020, $23 million was allocated for facial recognition cameras in Almaty.
During the COVID-19 state of emergency in 2020, the Ministry of Internal Affairs required approximately 8,000 citizens under quarantine to use the SmartAstana tracking application for compliance monitoring. Facial recognition technology was simultaneously deployed to identify quarantine violations in Almaty. By the end of the two-month emergency, 2,424 people had been charged with quarantine violations in Almaty and 3,347 in Nur-Sultan. Experts noted that the pandemic expanded surveillance without any public oversight framework — the same absence the governance documents had not yet addressed.
In 2022, a facial recognition system was launched at Kazakhstan’s airports, religious institutions, and underground walkways in Nur-Sultan. Reports of arrests of protesters based on footage from AI-powered recognition cameras continued through 2022, following the January 2022 civil unrest. In the same year, Kazakhstan implemented biometric authentication in the eGov Mobile application — Digital ID verification for 400 public services and e-license services, used by over 2.4 million citizens across banking, transport, postal, and notary services.
The governance gap here is specific: the domestic AI strategy commits to defining AI by December 2025 and codifying governance rules by December 2023. Both commitments post-date deployments of AI facial recognition at scale, across government services, law enforcement, and public health enforcement. The accountability framework is being drafted after the system is already operational.
E-Government and Healthcare AI: Where Deployment Is Ahead of Governance
Two sectors illustrate the deployment-ahead-of-governance pattern most clearly. In e-government, the Ministry of Digital Development is implementing a pilot project for the automation of 1,414 state services using AI technologies — a programme scale that, if completed, would represent one of the largest AI-driven public service transformations in the post-Soviet space. The programme was confirmed as active and referenced by Minister Bagdat Musin in September 2022. No public accountability framework, audit mechanism, or citizen recourse process for automated government decisions is documented alongside it.
In healthcare, the Concept for Healthcare Development until 2026 includes procurement of AI-enabled radiology, X-ray, ultrasound equipment, and software for oncology centres — with specific mention of improving AI-assisted diagnostic quality in remote regions. The National Development Plan until 2029 reaffirms this: oncology centres are to receive equipment using AI for reading X-rays, mammograms, CT, and MRI scans. These are high-stakes diagnostic decisions with direct consequences for patients. The governance framework for how AI-assisted diagnostics are validated, audited, or contested is not specified in the strategy documents.
Data Protection and Algorithmic Transparency
Kazakhstan’s data protection framework has developed reactively. The Law on Personal Data and its Protection (No. 94-V, May 21, 2013) established baseline rules for data localization, collection, and processing. In 2019, the country experienced major data breaches from the databases of the Central Election Commission and the Prosecutor General’s Office — personal information of 11 million people was published online and accessible to anyone. These incidents triggered amendments: the Law on Amendments and Additions on the Regulation of Digital Technologies (2020), which entered into force in two phases (July 2020 and January 2021), established the Ministry of Digital Development as the competent enforcement authority, created rules for personal data collection and processing, and introduced a “personal data safety protection service” concept. The amendments moved Kazakhstan’s framework toward alignment with GDPR standards — but stopped short of two key protections that GDPR requires.
First, Kazakhstan’s law does not include the right not to be subject to automated decision-making — the protection GDPR’s Article 22 provides against consequential decisions taken by algorithms without human review. In a country implementing AI across 1,414 government services and AI-powered loan underwriting through its commercial financial sector, this gap is not abstract. Second, the amendments do not require data protection training for personnel with access to personal data — even though the 2019 breaches (like most data incidents globally) involved human error. Kazakhstan could ratify the Council of Europe’s Modernized Convention 108 for the Protection of Individuals with regard to the Processing of Personal Data, which provides for algorithmic transparency rights, but has not done so.
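To make the missing Article 22-style protection concrete, the sketch below shows what such a safeguard means operationally: an adverse, legally significant automated decision is routed to a human reviewer instead of taking effect automatically. This is a hypothetical illustration, not a description of any Kazakhstani system; the `Decision` type, the `route` function, and the outcome labels are all invented for the example.

```python
from dataclasses import dataclass

# Hypothetical sketch of an Article 22-style human-review gate.
# All names here are illustrative, not drawn from any real system.

@dataclass
class Decision:
    subject_id: str
    outcome: str          # e.g. "deny_loan", "approve_loan"
    consequential: bool   # does it produce legal or similarly significant effects?

def route(decision: Decision) -> str:
    """Decide how an algorithmic decision is handled under a human-review rule."""
    if decision.consequential and decision.outcome.startswith("deny"):
        # Adverse + consequential: a human must review before it takes effect.
        return "human_review"
    # Low-stakes or favourable decisions may remain fully automated.
    return "automated_ok"

# An algorithmic loan denial is held for review; an approval proceeds.
print(route(Decision("c-001", "deny_loan", True)))     # human_review
print(route(Decision("c-002", "approve_loan", True)))  # automated_ok
```

The point of the sketch is the routing rule itself: without a legal right like GDPR Article 22, nothing obliges a deployer to insert the `human_review` branch at all.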
Kazakhstan in International AI Governance Instruments
Kazakhstan has been a UNESCO member since 1992 and has endorsed the UNESCO Recommendation on the Ethics of Artificial Intelligence (November 2021). Its engagement with the Recommendation goes beyond formal endorsement: a delegation from the Institute of Smart Systems and Artificial Intelligence (ISSAI) at Nazarbayev University participated in the UNESCO session on AI Ethics held online on July 23–24, 2020, submitting the ISSAI Ethical Principles document as a sample of organizational AI ethics statements and offering specific observations on explainability, accountability, responsibility, and human rights. ISSAI operates under its own ethical principles — Societal Well-being, Human Centered Values, Transparency, Technical Resilience and Robustness, and Accountability — which predate Kazakhstan’s domestic legislative commitments on AI accountability. Despite the formal endorsement, Kazakhstan has yet to take concrete steps to implement the Recommendation domestically.
On autonomous weapons, Kazakhstan has been active in international forums. In October 2022, it was among 70 states that endorsed a joint statement at the UN General Assembly First Committee on the recognition of dangers of autonomous weapons systems, the need for human oversight, and the importance of an international framework. In February 2023, Kazakhstan participated in the Netherlands-hosted Summit on the Responsible Application of AI in the Military Domain (REAIM) and signed the joint Call for Action — committing to responsible AI use consistent with international humanitarian law. These commitments to human oversight in military AI sit alongside domestic AI deployments in law enforcement and public service automation where analogous oversight frameworks have not yet been established.
On OECD AI Principles: Kazakhstan has not endorsed them. The OECD AI Policy Observatory notes that the Digital Kazakhstan programme addresses some OECD principles — inclusive growth, investment in R&D, fostering an AI ecosystem — but the endorsement-level commitment has not been made.
The Governance Gap
The CAIDP analysis identifies the same gap the GIRAI assessment would confirm: Kazakhstan’s AI governance architecture is moving at speed, but the speed is in deployment and policy volume, not in accountability infrastructure. Frameworks for algorithmic accountability, human oversight, transparency, and access to remedy are either absent or underdeveloped relative to the pace of implementation commitments.
The Action Plan commits to defining AI by December 2025 — after the 2024–2029 AI Concept has already been in operation for a year. The consultation window for the AI Concept is two weeks. The e-government automation pilot has no documented recourse mechanism. The geopolitical AI supply chain — Hikvision cameras, Huawei infrastructure — has no domestic governance bridge. Facial recognition has been deployed at scale across public spaces, transport, law enforcement, and biometric government services without any public accountability framework in the publicly available documentation. The Personal Data Law’s amendments moved toward GDPR alignment but stopped at the points that matter most for AI: the right to contest automated decisions, and protection training for those who process data.
Freedom House rates Kazakhstan as Not Free (a score of 23 out of 100 for political rights and civil liberties), noting that the government and legislature “offer little transparency on their decision-making processes, budgetary matters, and other operations” and that “the media and civil society do not have a meaningful opportunity to provide independent commentary and input on pending laws and policies.” This is the political environment in which AI governance accountability is supposed to be developed. Kazakhstan’s endorsement of the UNESCO Recommendation on AI Ethics and its active participation in drafting it through ISSAI represent a genuine commitment at the international level. The domestic accountability infrastructure that would implement that commitment has not been built alongside the deployments that now require it.
None of this is unique to Kazakhstan — it is the pattern across Central Asia, and in many respects across the world. What makes Kazakhstan’s case analytically instructive is that the ambition is high enough, and the documents specific enough, to make the gap visible. A country with no AI strategy has no measurable gap. Kazakhstan has a gap that can be named.
Produced through the Center for AI and Digital Policy (CAIDP) AI Policy Clinic, Spring 2023. Author: Aziz Soltobaev, KG Labs. Sources: Kazakhstan government legal portal (adilet.zan.kz), Prime Minister of Kazakhstan press releases, National Statistics Committee orders, GIRAI 1st Edition Kazakhstan country profile, Oxford Insights Government AI Readiness Index 2022, Freedom House Freedom in the World 2022, ISSAI Nazarbayev University (issai.nu.edu.kz). Part of KG Labs’ Central Asia AI governance series.
