Duration

2 months (2019)

Role

Lead Researcher

Tools

Paper, Google Workspace apps

<aside> ‼️

Under a Non-Disclosure Agreement

Some of the details in this case study may be vague to protect the client's intellectual property.

</aside>

This one started as a policy brief and turned into something more interesting. As AI was being built into everything, IMDA needed to understand whether the companies doing the building had actually thought about what "responsible" meant, or whether they were mostly hoping the question wouldn't come up.


Challenge

Privacy and bias in AI aren't new concerns, but in 2019 they were still being treated as edge cases rather than design considerations. IMDA wanted to understand where the gaps were - in awareness, in practice, and in the cultural and organisational factors that shaped how Singaporean companies were approaching ethics in the first place.

Talking to the people building it

We interviewed mid- and senior-level stakeholders at companies developing or deploying AI systems - the people actually making decisions about data, models, and deployment.

The gap between knowing and doing

The finding that kept coming up: most companies were aware of the concerns, but very few had frameworks to do anything about them. Privacy was being treated as a compliance checkbox. Bias was being caught after the fact, if at all. Many companies didn't realise their datasets were skewed or that their models were reinforcing existing inequalities until it was already causing real problems.

The research fed into IMDA's approach to AI governance and helped lay the groundwork for the Model AI Governance Framework. It also made clear that regulation alone wasn't enough - what the ecosystem needed most was education, and a shared language for talking about responsibility before something went wrong.