IMDA makes its AI toolkit available for free to help firms check their systems

Minister for Communications and Information Josephine Teo speaking at the Asia Tech x Singapore conference on June 7. PHOTO: IMDA

SINGAPORE – An artificial intelligence (AI) toolkit that lets firms check their AI systems for bias and potential leaks was made available to the public for free on Wednesday as an open-source platform after several months of pilot runs with tech companies.

Available on code-sharing platform GitHub and developed by the Infocomm Media Development Authority (IMDA), the AI Verify toolkit helps developers check that their algorithms and the datasets used to train AI systems are in line with 11 internationally recognised principles. These include security, accountability and explainability – the ability to explain how a system arrives at a decision.

“We believe that system developers, solution providers and the research community can all use and contribute to AI Verify. (Bringing in) their expertise will also promote the growth of new and better testing tools,” Minister for Communications and Information Josephine Teo said on Wednesday at the Asia Tech x Singapore conference, where she announced the toolkit and the set-up of the AI Verify Foundation.

The foundation will see more than 60 companies, including Microsoft, Meta and Google, work with the IMDA to tackle issues in AI and improve the toolkit.

Users can upload their AI model with its underlying datasets for the toolkit to generate a report on how well the code meets AI governance principles, and where improvements can be made, said the foundation. It may recommend, for instance, that data be labelled to make clear the source or that further security measures are needed to protect confidential data.
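AI Verify's own plugin interface and report format are not described in detail here, but a minimal sketch of the kind of fairness check such a report draws on might look like the following. Everything below is hypothetical: the loan-approval scenario, the column names and the data are entirely synthetic, with the sensitive attribute deliberately leaking into a proxy feature so the bias is visible.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000

# Synthetic applicants: income is a proxy for the sensitive attribute,
# since group "A" earns more on average than group "B".
group = rng.choice(["A", "B"], n)
income = np.where(group == "A",
                  rng.normal(60_000, 10_000, n),
                  rng.normal(45_000, 10_000, n))
debt_ratio = rng.uniform(0, 1, n)
approved = ((income > 50_000) & (debt_ratio < 0.7)).astype(int)

df = pd.DataFrame({"group": group, "income": income,
                   "debt_ratio": debt_ratio, "approved": approved})

# Train on the non-sensitive features only; bias can still leak in via proxies.
features = ["income", "debt_ratio"]
model = RandomForestClassifier(random_state=0).fit(df[features], df["approved"])
df["pred"] = model.predict(df[features])

# Demographic-parity style check: positive-prediction rate per group.
# A large gap between groups flags the model for review, much as a
# governance report would recommend improvements.
print(df.groupby("group")["pred"].mean())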

As the software is open source, developers can customise AI Verify to their own needs. While this means the report will no longer be considered an AI Verify report, making the toolkit open source helps to draw expertise from across the industry to grow the nascent field, said its developers.

User feedback will also help improve the toolkit, with best practices and benchmarks added over time.

Tech firms that used the toolkit said that it helped standardise how their AI models should be vetted, and spotted gaps in datasets. Many used publicly available data to run tests during the toolkit’s pilot phase.

Huawei International chief data and AI officer Ashley Fernandez said the toolkit highlighted racial bias in AI models trained on publicly available data. For example, it found that the models assumed a customer’s preferred travel destination based on his or her race.
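The article does not describe Huawei’s tests in detail, but one standard way to surface this kind of bias is to check whether a model’s predictions are statistically associated with a protected attribute. The sketch below is purely illustrative, using synthetic data and a chi-squared test of independence between race and the recommended destination.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Synthetic model outputs: a recommended destination per customer.
preds = pd.DataFrame({
    "race": ["X"] * 60 + ["Y"] * 60,
    "destination": ["beach"] * 50 + ["city"] * 10
                 + ["beach"] * 15 + ["city"] * 45,
})

# Contingency table of race vs predicted destination.
table = pd.crosstab(preds["race"], preds["destination"])
chi2, p_value, dof, _ = chi2_contingency(table)
print(table)
print(f"p-value: {p_value:.4g}")  # a tiny p-value flags race-linked predictions
```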

The software provides non-technical users with an easy way to test their AI models and receive detailed explanations, UBS bank AI analyst Helen Wang said in a presentation at the event.

The bank put AI Verify to the test in a hypothetical assessment of customers’ credit risk, using open data on income levels collected in the United States in the 1990s.

The toolkit reminds users to define “sensitive variables”, attributes of individuals such as age, language and ethnicity that could affect a model’s fairness, said Ms Wang.

Flagging such groups can help developers tune their models to produce more balanced results.
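One common way to act on flagged groups, sketched below under the same assumptions as the earlier loan-approval example (reusing its df and features), is to reweigh training samples so that each combination of group and label carries equal influence. This is a generic Kamiran-Calders style reweighing, not a technique the article attributes to UBS or to AI Verify.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def reweigh(df, group_col, label_col):
    # Reweighing: w = P(group) * P(label) / P(group, label), so every
    # (group, label) combination carries equal total weight in training.
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / len(df)
    return df.apply(
        lambda r: p_group[r[group_col]] * p_label[r[label_col]]
        / p_joint[(r[group_col], r[label_col])],
        axis=1,
    )

# Refit the earlier hypothetical model with the balancing weights.
weights = reweigh(df, "group", "approved")
balanced_model = RandomForestClassifier(random_state=0).fit(
    df[features], df["approved"], sample_weight=weights
)

# The per-group approval-rate gap should now be narrower.
print(df.assign(pred=balanced_model.predict(df[features]))
        .groupby("group")["pred"].mean())
```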

Singapore Airlines (SIA) used the toolkit to check how its human resource chatbot fared, and is using feedback from the toolkit for future developments.

SIA principal digital strategist Patrick Chua said the toolkit helped consolidate the key priorities in AI development, allowing the airline’s software developers to map the toolkit’s results onto their own grading system to improve their AI models. Because its AI systems were incompatible with AI Verify, the developers used the toolkit to check the underlying datasets instead.

Mr Chua told The Straits Times: “The greatest benefit is that the toolkit consolidates all the principles. Now that we know the full criteria, it is easier to streamline our processes.”
