Responsible AI Central Asia

Goals and objectives

Our program aims to ensure transparency, accountability, explainability, responsibility, interpretability, and reliability in the deployment of AI systems. We strive to uphold human rights and dignity by fostering a rights-based approach to AI. Key objectives include:

  • Increasing algorithm literacy, AI literacy, and AI skills among civil society members, mass media, non-profit organizations, and community members, as well as governmental agencies and ministries.
  • Implementing and enforcing AI ethics to safeguard human rights, freedoms, democracy, and the rule of law.
  • Monitoring regional and national AI systems regularly to ensure robustness, integrity, and adherence to ethical guidelines throughout their lifecycle.
  • Encouraging AI actors to appoint AI ethics officers for oversight, impact assessment, auditing, and continuous monitoring of AI systems.
  • Facilitating cross-border cooperation among Central Asian countries and sectors to develop standards and promote responsible stewardship of AI.
  • Raising awareness about international frameworks such as the UNESCO Recommendation on the Ethics of AI, OECD AI Principles, EU AI Act, and US Blueprint for an AI Bill of Rights.

Capacity Building Initiatives: We offer capacity-building programs tailored for civil servants and government personnel involved in AI deployment. We also intend to design training programs for groups such as judges and judiciary members to enhance their understanding of AI-related laws and regulations.

Key Beneficiaries: Our program primarily benefits civil society members, NGOs, journalists, governmental agencies and ministries.

Focus Areas:

  • Algorithmic and automated decision-making
  • Human oversight
  • Prevention of algorithmic harms
  • Algorithmic bias
  • Digital rights
  • Digital democracy
  • AI for sustainability
  • AI governance
  • Auto-regressive large language models and generative AI
  • Foundation models

Country Areas: Our efforts extend to Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan, and Uzbekistan.

Keywords: Trustworthy AI, responsible AI, ethical AI, automated decision-making, human-in-the-loop, algorithm literacy, generative AI.

For more information, please contact [email protected]

#TrustworthyAI #EthicalAI #AIGovernance #CentralAsiaAI #AIforGood #AlgorithmLiteracy #DigitalRights #TechEthics

About the Team

Aziz Soltobaev

About Aziz Soltobaev

As a Fellow of the Stimson Center and the Microsoft Responsible AI Program (2023–2024), Aziz has delved deep into the complexities of AI ethics, governance, and regulation. The Fellowship Program examined AI applications and evaluated their impacts in developing countries. Together with other fellows, he sought to understand how AI-related harms and benefits may manifest in various social, cultural, economic, and environmental contexts, and to identify technological as well as regulatory solutions that might help mitigate risks and maximize opportunities.

Aziz successfully completed the AI Policy fundamentals certification program organized by the Center for Artificial Intelligence and Digital Policy (CAIDP, USA). The certification program is a semester-long AI policy and regulation training provided to AI policy practitioners, policymakers, lawyers, academics, and civil society members. Participants learn AI policy research and analysis and study the main AI policy frameworks around the world (OECD, G20, UNESCO, EU AI Act, Blueprint for an AI Bill of Rights, Council of Europe, African Resolution on Human Rights, etc.). The certification program is an outgrowth of the work of the Research Group and includes requirements for research, writing, and policy analysis. Receipt of the CAIDP AI Policy Certification requires completion of a detailed multi-part test covering AI history, AI issues and institutions, AI regulation, and research methods. Aziz obtained the certificate with distinction and signed the Statement of Professional Ethics for AI Policy.

His research endeavors extend beyond theoretical frameworks to practical applications, particularly in the context of Central Asia. Aziz contributed to the review of Kazakhstan’s National AI Policy and to the country’s representation in the Artificial Intelligence and Democratic Values Report issued by CAIDP in 2023.

In addition to his contributions to national policy frameworks, Aziz has conducted thorough assessments of Kyrgyzstan’s AI landscape, offering valuable insights and recommendations through his overview of the country’s National AI Policy for the Global Index on Responsible AI (GIRAI) in 2024. The Global Index is designed to equip governments, civil society, and stakeholders with the evidence needed to advance rights-based principles for the responsible use of AI.

Aziz’s interests extend beyond conventional AI paradigms, encompassing emerging technologies such as small language models like Phi-2 and the field of TinyML. His forward-thinking approach reflects a keen awareness of the evolving AI landscape and a commitment to exploring innovative avenues for harnessing AI’s potential for the benefit of humanity.

With a wealth of experience and a passion for leveraging AI for positive societal impact, Aziz Soltobaev continues to be a driving force in shaping the responsible and equitable deployment of artificial intelligence on both national and global scales.