Last week, leaders of the G20 met in New Delhi to discuss economic cooperation across numerous themes, including sustainable development and the Green Development Pact. The meeting also addressed the importance of privacy and trust in the digital economy and the need for global AI regulation.

International cooperation for digital public infrastructure

As more public- and private-sector services around the world go digital, the G20 recognized that such technologies can support inclusive and sustainable development at a societal scale.

However, such digital public infrastructure (DPI) must be trusted by consumers and service users and must protect their privacy rights, while also supporting the free flow of data across borders within applicable legal frameworks. To achieve these aims, the G20 urged international cooperation.

A new framework for the development, deployment and governance of DPI was welcomed at the September meeting. The voluntary framework will be supported by a global repository of DPI and a proposed One Future Alliance that aims to provide funding support for low- and middle-income countries.

Security, resilience and trust for the digital future

The G20 leadership declared their commitment to “enabling an open, fair, non-discriminatory and secure digital economy.” This also means ensuring that digital connectivity is available and accessible to all.

Mitigating security threats and ensuring the privacy of information were highlighted as key concerns by the G20, which declared that members should “share their approaches and good practices to build a safe, secure, and resilient digital economy.”

Ensuring responsible AI for the public good

In their declaration, the G20 acknowledged the rapid progress of AI and its potential to enhance the global digital economy. However, they also recognized the risks posed by irresponsible AI development, which may lead to increased bias and discrimination.

Reaffirming their commitment to the G20 AI Principles (2019), the group further agreed that they “will pursue a pro-innovation regulatory/governance approach that maximizes the benefits and takes into account the risks associated with the use of AI.”

The 2019 AI Principles, which will be used to guide the development of new global regulation, state that AI must be developed based on:

  • Inclusive growth, sustainable development and well-being — AI must contribute to growth and well-being for individuals, society and the planet
  • Human-centered values and fairness — AI must respect applicable laws, human rights, privacy, diversity and democratic values, with safeguards to protect society
  • Transparency and explainability — The use of AI must be disclosed transparently and responsibly so people know when they are engaging with an AI system
  • Robustness, security and safety — AI systems must be robust, safe and secure, and risks must be continuously assessed and managed
  • Accountability — Organizations and individuals developing, deploying and operating AI systems are accountable for their function in line with these principles
