Teaching Responsible AI in Russian: Notes from the ICNL Training, July 2024

On July 3 and 4, 2024, KG Labs delivered an intensive two-day training in Bishkek on Internet Governance and Artificial Intelligence Policy for civil society representatives from across Kyrgyzstan. The programme was organised in partnership with the International Center for Not-for-Profit Law (ICNL), which works on the legal enabling environment for civil society — including digital rights and internet governance — in more than 100 countries. These are short notes from the room. Anyone wanting the full curriculum, the case readings, or the Russian-language questionnaire is welcome to ask.
The West debated algorithmic accountability for two decades before AI hype arrived. Kyrgyzstan met AI hype first — without the prerequisite vocabulary, case law, or public-policy debate that would let civil society engage with it on equal terms. Closing that gap is not a knowledge problem. It is a participation problem.
The Word Before the Machine
The opening session began with a small etymological detour. The word algorithm comes from al-Khwarizmi — the ninth-century mathematician from Khwarezm, in present-day Central Asia, whose treatises were translated into Latin in the twelfth century and reshaped European mathematics. The point of starting there was not nostalgia. It was to remind the room that the conceptual ancestry of what we now call AI runs through this region, and that participating in the global conversation on AI governance is a return to one of our own traditions, not a catching-up exercise.
The Cases That Land

Abstract AI ethics does not move people. Specific cases do. Day two opened with three from Bishkek: Yandex Taxi surge pricing (algorithmic, undisclosed inputs, no recourse for the riders most affected); facial recognition cameras deployed by the Ministry of Internal Affairs at city intersections under the public-safety banner, with no published procurement records, retention rules, or accuracy benchmarks; and the state-database leaks that have surfaced on Telegram channels and dark-web markets in recent years. The session did not dwell on attribution. It dwelt on what the absence of a working data-protection regime means when AI systems trained on, or making decisions about, that data are layered on top.
AI is already deployed in Kyrgyzstan — in ride-hailing pricing, in public-space surveillance, in financial services. AI governance is not. The policy gap is not a future risk. It is the current operating condition.
AI and Rights

A middle session worked through the rights frameworks that AI deployments intersect — privacy, freedom of expression, non-discrimination, due process, freedom of assembly — along with two specialised dimensions added at participants' request: children and AI under UN CRC General Comment No. 25 (2021), and gender bias in hiring tools, voice recognition, and image generation.
The reference instrument was the UNESCO Recommendation on the Ethics of Artificial Intelligence, adopted by the 41st General Conference in November 2021 — the first global instrument of its kind. Four operational commitments: protection of personal data through transparency and user control; explicit prohibition on social scoring and mass surveillance; environmental protection through resource-efficient AI practices; equal representation across the AI development pipeline. Kyrgyzstan, as a UNESCO member state, has endorsed the Recommendation on paper. What it has not yet done is translate that endorsement into binding domestic instruments. That gap is where civil society advocacy has the most room to move.
For the Journalists in the Room
A subset of participants were working journalists. They got their own session, organised around recurring traps in AI reporting — anthropomorphising the model, reporting demos as if they were finished products, confusing benchmark scores with real-world performance, and crediting "the algorithm" for decisions made by the people who designed and deployed it. The session used the OpenAI Kenya story — the Nairobi moderators who labelled toxic content for under two dollars an hour — as a case study in the labour conditions that the language of "automation" tends to make invisible.
Theory and Practice
The closing block was practical. Participants generated content with ChatGPT and DALL·E and read the outputs critically, identifying the choices the models had made and the alternatives they had foreclosed. A final exercise asked participants to draft a Russian-language freedom-of-information request that a Kyrgyz civil society organisation could submit to the relevant regulator about the Yandex Taxi pricing algorithm, covering inputs, audit mechanisms, complaint pathways, and disclosure expectations.
The exercises did not produce policy. They produced something more useful: a small group of practitioners who had now done the work themselves, in their own language, on cases from their own city.
The GIRAI 2023 assessment confirmed civil society engagement on AI in Kyrgyzstan across several thematic areas, making it the only Central Asian country where such engagement was recorded. The training is one input into whether that engagement deepens or stalls. The next GIRAI cycle will measure the answer.
What the Training Revealed
The gap is not primarily a knowledge gap. It is a participation gap. Civil society organisations across Central Asia are affected by AI deployments — in government services, content moderation, financial access, employment screening, public-space surveillance — without being systematically included in the conversations about how those deployments are governed. The training was one attempt to shift that, by giving organisations the framework, the vocabulary, and the international reference points to assess the AI governance landscape themselves and to advocate from that assessment.
Two-day training delivered July 3–4, 2024 in Bishkek, in partnership with the International Center for Not-for-Profit Law (ICNL). Trainer: Aziz Soltobaev, KG Labs. Materials developed in Russian. References available on request: full curriculum, the Russian-language Responsible AI questionnaire, UNESCO Recommendation on the Ethics of AI (2021), GIRAI 1st Edition, UN CRC General Comment No. 25.
