<aside> ‼️
Under a Non-Disclosure Agreement
Some details in this case study are intentionally vague to protect the client's intellectual property.
</aside>

Privacy and bias in AI aren't new concerns, but in 2019 they were still being treated as edge cases rather than design considerations. IMDA wanted to understand where the gaps were: in awareness, in practice, and in the cultural and organisational factors shaping how Singaporean companies approached ethics in the first place.
We interviewed mid-level and senior stakeholders across companies developing or deploying AI systems — the people actually making decisions about data, models, and deployment.
The finding that kept coming up: most companies were aware of the concerns, but very few had frameworks to do anything about them. Privacy was being treated as a compliance checkbox. Bias was being caught after the fact, if at all, and often not until it was causing real problems. Many companies didn't realise their datasets were skewed, or that their models were reinforcing existing inequalities, until it was too late.
The research informed IMDA's approach to AI governance and helped lay the foundation for the Model AI Governance Framework. It also made clear that regulation alone wasn't enough: what the ecosystem needed most was education, and a shared language for talking about responsibility before something went wrong.