Duration
2 months (2019)
Tools
Paper, Google Workspace apps
Role
Lead Researcher
<aside>
‼️
Under a Non-Disclosure Agreement
Some details in this case study have been kept intentionally vague to protect the client's intellectual property.
</aside>
As artificial intelligence began to reshape industries, how could we ensure that the AI systems being built were not only innovative but also ethical, unbiased, and respectful of privacy? This project set out to map the challenges and opportunities around privacy and bias in Singapore's AI ecosystem, working closely with stakeholders at the forefront of this technological shift.

Challenge
As AI systems became more integrated into everyday life, concerns about privacy and bias began to surface. How were companies in Singapore addressing these issues? Were they even aware of the potential risks? And how could IMDA support the development of AI that was not only powerful but also fair and trustworthy?
Approach
We engaged directly with the people shaping Singapore’s AI landscape:
- In-depth interviews with mid-level and senior stakeholders across companies developing or deploying AI systems.
- Mapping the ecosystem to understand where privacy and bias risks were most prevalent—whether in data collection, model training, or deployment.
- Identifying gaps in awareness, regulation, and best practices around ethical AI.
- Exploring cultural and organisational factors that influenced how companies approached these issues.
Findings
While many companies were excited about the potential of AI, few had fully grappled with its ethical implications. Privacy was often treated as a compliance issue, and bias was frequently overlooked until it caused problems.