US, Britain and other countries ink ‘secure by design’ AI guidelines

The guidelines suggest cybersecurity practices AI firms should implement when designing, developing, launching, and monitoring AI models.

The United States, United Kingdom, Australia, and 15 other countries have released global guidelines to help protect AI models from being tampered with, urging companies to make their models “secure by design.”

On Nov. 26, the 18 countries released a 20-page document outlining how AI firms should handle their cybersecurity when developing or using AI models, as they claimed “security can often be a secondary consideration” in the fast-paced industry.

The guidelines consist mostly of general recommendations, such as maintaining tight control over the AI model’s infrastructure, monitoring for any tampering with models before and after release, and training staff on cybersecurity risks.

Not mentioned were certain contentious issues in the AI space, including possible controls around the use of image-generating models and deepfakes, or data collection methods and their use in training models, an issue that has seen multiple AI firms sued over copyright infringement claims.

“We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time,” U.S. Secretary of Homeland Security Alejandro Mayorkas said in a statement. “Cybersecurity is key to building AI systems that are safe, secure, and trustworthy.”
