Nigeria, alongside the United States, Britain, and 15 other nations, unveiled an international pact aimed at keeping artificial intelligence (AI) safe from malicious use. Emphasising the need for AI systems to be “secure by design,” the 20-page document urges companies that develop and deploy AI to prioritise public and customer safety against potential misuse.
Although the agreement is non-binding, it emphasises crucial measures such as monitoring AI systems for abuse, safeguarding data integrity, and vetting software providers. Jen Easterly, Director of the U.S. Cybersecurity and Infrastructure Security Agency, said the guidelines are significant because they prioritise security in AI design, a departure from focusing solely on features and market competitiveness.
“This marks an agreement that security is paramount in AI design, not just about cool features or speed to market,” Easterly told Reuters, endorsing the guidelines’ emphasis on building in security during the design phase.
Despite lacking enforcement mechanisms, the initiative is part of a broader global effort by governments to shape AI development. Alongside Nigeria, the signatories include Germany, Italy, Estonia, Poland, Australia, Chile, and Israel, among others.
The framework primarily addresses preventing AI technology from being hijacked by malicious actors, recommending steps such as security testing before AI models are released. Notably absent are discussions of how data is acquired for these models or of the ethical uses of AI, areas that continue to pose significant challenges.
The rise of AI technology has triggered concerns about potential disruptions to democracy, increased fraud, and substantial job displacement. While European lawmakers have taken strides towards regulating AI by drafting dedicated rules, the U.S. has faced hurdles in enacting effective regulation due to a polarised Congress.
The Biden administration’s efforts towards AI regulation culminated in an October executive order aimed at minimising AI risks for consumers, workers, and marginalised groups while enhancing national security.