BRICS Call for AI Transparency Meets Technical Reality Check


A Global Governance Declaration Sparks Debate on AI Explainability.

Leaders from the BRICS nations recently convened for their XVII summit in Brazil and adopted a declaration addressing global artificial intelligence governance. The document outlines a collective intention to shape the future of AI, emphasizing safety and stability. However, a key element of the declaration – the call for AI transparency – has prompted a cautionary response from a technical expert, who suggests this requirement could pose a significant impediment to the industry's progress.

BRICS leaders at summit
Summit discussions included a focus on AI governance.

The section titled “Creating a Safe and Stable Digital Environment” within the BRICS declaration articulates a commitment to developing international principles for AI development and application. These principles are envisioned to be inclusive, ensuring accessibility for all nations, particularly those in the Global South, and to uphold human rights. The proposed AI governance frameworks are intended to be dynamic, transparent, and secure, aligning with national laws and the United Nations Charter.

Furthermore, the declaration acknowledges the potential complexities surrounding “Strong Artificial Intelligence” or Artificial General Intelligence (AGI) – AI capable of human-level cognition and potentially self-awareness – suggesting a need for careful consideration as these technologies advance.

While the intentions appear sound from a governance perspective, the call for AI transparency presents a fundamental technical challenge, according to Roman Dushkin, Director General of the IT company "A-Ya Expert" and head of the "Artificial Intelligence" educational program at the National Research Nuclear University MEPhI.

“I understand that the leaders of the BRICS countries are striving for everything good,” Dushkin remarked, “but now the main thing is not to create too many barriers for artificial intelligence developers, hindering the development of new mathematical algorithms and technologies.”

His primary concern lies with the mandate for AI transparency and explainability. "For example," he continued, "the declaration talks about AI transparency, about the need for artificial intelligence models to be explainable, and so on… This alone could put the industry on 'stop'!"

The technical reality, as Dushkin explains, is that many sophisticated modern AI models, particularly deep neural networks, function as opaque "black cybernetic boxes." Inputs are processed to produce outputs, but the intricate internal workings and the precise reasoning behind a specific output are extremely difficult, if not currently impossible, to fully interpret or "explain" in a human-understandable way. "If the law explicitly mandates 'clear explainability', which is currently fundamentally unattainable," Dushkin warned, "then everything developers are doing today becomes illegal."
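To make the "black box" point concrete, here is a minimal, purely illustrative sketch (a toy two-layer network with made-up random weights, not any real production system): the computation from input to output is simple arithmetic, yet no individual parameter corresponds to a human-readable "reason" for the result.

```python
import math
import random

random.seed(0)  # fixed seed so the toy example is reproducible

def tiny_mlp(x, weights):
    """Forward pass of a toy 2-layer network: numbers in, a number out."""
    # Hidden layer: weighted sums squashed through tanh.
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)))
              for row in weights["w1"]]
    # Output: a weighted sum of hidden activations.
    return sum(w * h for w, h in zip(weights["w2"], hidden))

# Randomly initialized parameters stand in for trained ones.
weights = {
    "w1": [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)],
    "w2": [random.uniform(-1, 1) for _ in range(4)],
}

score = tiny_mlp([0.2, -0.5, 0.9], weights)
print(score)
```

Even this 16-parameter toy offers no weight one can point to as "the explanation" for its score; production networks have billions of such parameters, which is the gap between a legal mandate for explainability and what developers can actually deliver today.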

This perspective highlights a potential friction between regulatory ideals and the current state of AI technology. Alternative regulatory approaches, such as Russia's system of national standards (like standard №164 for AI), focus on technical regulations and limitations based on the *application area* of the AI system (e.g., healthcare, transportation), rather than mandating universal internal transparency of the models themselves. This pragmatic approach regulates *how* AI is used in specific contexts, side-stepping the "black box" problem.

Regarding the misuse of AI technology, such as creating deepfakes or cloning voices, Dushkin views this primarily as a matter for existing legal frameworks. He posits that current criminal codes are likely sufficient to cover fraudulent activities, regardless of whether digital technologies like AI are employed. After all, while the tools evolve, the underlying intent to deceive remains a classic offense.

The BRICS declaration underscores the growing global recognition of the need to govern powerful AI technologies. However, the expert commentary reveals a critical point of tension: the aspiration for universal transparency may collide with the inherent technical characteristics of advanced AI models. Finding a balance between fostering safety and stability, promoting inclusivity, and allowing for technological innovation will be the central challenge for international AI governance moving forward. Mandating the impossible, however well-intentioned, risks stifling the very progress it seeks to guide.

Based on original reporting by Natalya Vedeneeva.

Alexander Reed

Alexander Reed brings Cambridge's medical research scene to life through his insightful reporting. With a background in biochemistry and journalism, he excels at breaking down intricate scientific concepts for readers. His recent series on genomic medicine earned him the prestigious Medical Journalism Award.
