
AI Governance in Central Asia
2025-04-07
Goals and Objectives
Our program aims to ensure transparency, accountability, explainability, responsibility, interpretability, and reliability in the deployment of artificial intelligence systems. We strive to protect human rights and dignity by promoting a human rights-based approach to AI. Key objectives include:
- Raising algorithmic and AI literacy and building AI skills among members of civil society, the media, non-profit organizations, community members, and government agencies and ministries.
- Embedding and enforcing AI ethics to protect human rights, freedoms, democracy, and the rule of law.
- Regularly monitoring regional and national AI systems to ensure their reliability, integrity, and compliance with ethical norms throughout their life cycle.
- Encouraging AI actors to appoint AI ethics officers responsible for oversight, impact assessment, auditing, and ongoing monitoring of AI systems.
- Fostering cross-border cooperation among Central Asian countries and sectors to develop standards and advance responsible AI governance.
- Raising awareness of international frameworks such as the UNESCO Recommendation on the Ethics of AI, the OECD AI Principles, the EU AI Act, and the US Blueprint for an AI Bill of Rights.
Capacity-building initiatives: We offer capacity-building programs tailored to civil servants and government officials involved in AI deployment. We also intend to develop training programs for groups such as judges and members of the judiciary to deepen their understanding of AI-related laws and regulations.
Key beneficiaries: Our program primarily benefits members of civil society, academia, and government agencies and ministries.
Areas of activity:
- Algorithmic decision-making
- Human oversight
- Prevention of algorithmic harm
- Automated decision-making
- Algorithmic bias
- Digital rights
- Digital democracy
- AI for sustainable development
- AI governance
- Autoregressive large language models and generative AI
- Foundation models
Countries of focus: Our efforts are directed at Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan, and Uzbekistan.
You can reach us by email at [email protected]
#TrustworthyAI #EthicalAI #AIGovernance #CentralAsiaAI #AIforGood #AlgorithmLiteracy #DigitalRights #TechEthics
Latest posts on this topic
- The Future of AI Governance: Ensuring Global Inclusivity. Conference on “The Future of AI Governance: Ensuring Global Inclusivity”, co-organized by IRIS and the Stimson Center and supported by Microsoft’s Office of Responsible AI. This event brings together leading experts, policymakers, and innovators to explore how we can shape equitable AI regulations in a time of …
- Paris AI Summit Takeaways. The Paris AI Summit proved to be extremely packed with events, which took place in parallel across different parts of the city throughout the week. Co-chaired by France and India, the summit brought together official representatives of nearly 60 countries that collectively control more than 95% of the world’s AI computing power. The outcome was a broad consensus …
- KG Labs attend the Paris AI Summit. As one of the delegates of the joint global program of the Stimson Center and Microsoft’s Office of Responsible AI, I speak at several panel discussions and participate in multilateral meetings with donors, international partners, and civil society to exchange experience and expertise on ensuring equal access to and development of AI technologies in the countries of the Global South.
- Trainings on Internet and Artificial Intelligence Governance for civil society. Aziz Soltobaev conducted a two-day intensive training on Internet Governance and Artificial Intelligence Policy-making for civil society representatives in Kyrgyzstan on July 3-4, 2024. On the first day, the training covered the fundamental components of the Internet, the entities responsible for international policies and standards, and the …
- Central Asia on the Global Index for Responsible Artificial Intelligence 2024. We are pleased to announce the launch of the First Edition of the Global Index on Responsible AI. This groundbreaking report underscores a significant lag in global progress toward responsible AI, particularly in comparison to the rapid development and adoption of AI technologies. It highlights major gaps …
- Spotlighted in the Unlocked Platform by Microsoft. This week, a series of publications on shaping responsible artificial intelligence policy is being released on Microsoft’s Unlocked platform. I am very glad that I was given the opportunity to be in the first cohort of speakers …
About Aziz Soltobaev

Aziz Soltobaev is a multifaceted professional working at the intersection of technology, policy, and artificial intelligence. With a rich background in digital policy and research, Aziz has established himself as a prominent figure in the field of advanced technologies, contributing significantly to initiatives aimed at shaping responsible AI governance and deployment.
As a Fellow of the Stimson Center and the Microsoft Responsible AI Program (2023-2025), Aziz has delved deep into the complexities of AI ethics, governance, and regulation. The fellowship program examined AI applications and evaluated their impacts in developing countries. Together with other fellows, the program sought to understand how AI-related harms and benefits may manifest in various social, cultural, economic, and environmental contexts, and to identify technological as well as regulatory solutions that might help mitigate risks and maximize opportunities.
Aziz successfully completed the AI Policy fundamentals certification program organized by the Center for Artificial Intelligence and Digital Policy (CAIDP, USA). The certification program is a semester-long AI policy and regulation training offered to AI policy practitioners, policymakers, lawyers, academics, and civil society members. Participants learn about AI policy research and analysis and the main AI policy frameworks around the world (OECD, G20, UNESCO, EU AI Act, Blueprint for an AI Bill of Rights, Council of Europe, African Resolution on Human Rights, etc.). The program is an outgrowth of the work of the Research Group and includes requirements for research, writing, and policy analysis. Receipt of the CAIDP AI Policy Certification requires completion of a detailed multi-part test covering AI history, AI issues and institutions, AI regulation, and research methods. Aziz obtained the certificate with distinction and signed the Statement of Professional Ethics for AI Policy.
As a fellow of the Atlantic Council Artificial Intelligence Connect Program, Aziz had the privilege of learning about the opportunities and challenges of responsibly developing and deploying AI technologies across sectors, in line with the human-centric values of the OECD AI Principles. As part of regional workshops and site visits, Aziz examined case studies and best practices and strengthened connections with distinguished AI policy experts and professionals from different parts of the world.
His research endeavors extend beyond theoretical frameworks to practical applications, particularly in the context of Central Asia. Aziz contributed to the review of Kazakhstan’s National AI Policy and to the country’s representation in the Artificial Intelligence and Democratic Values Report issued by CAIDP in 2023.
In addition to his contributions to national policy frameworks, Aziz has conducted thorough assessments of Kyrgyzstan’s AI landscape, offering valuable insights and recommendations through his overview of the country’s National AI Policy for the Global Index on Responsible AI (GIRAI) in 2024. The Global Index is designed to equip governments, civil society, and stakeholders with the evidence needed to advance rights-based principles for the responsible use of AI.
Aziz’s interests extend beyond conventional AI paradigms, encompassing emerging technologies such as small language models like Phi-2 and the field of TinyML. His forward-thinking approach reflects a keen awareness of the evolving AI landscape and a commitment to exploring innovative avenues for harnessing AI’s potential for the benefit of humanity.
With a wealth of experience and a passion for leveraging AI for positive societal impact, Aziz Soltobaev continues to be a driving force in shaping the responsible and equitable deployment of artificial intelligence on both national and global scales.