Why Governance is Paramount in the New AI Workforce

Singapore, September 16, 2025 — To say that the arrival of Generative AI (GenAI) has ignited a transformation within businesses would be an understatement. According to KPMG’s latest survey, many of the largest companies (with over US$1 billion in revenue) are already seeing the impact of GenAI on their business. 71 percent are using it to improve decision-making, 52 percent say it is influencing their competitive position, and 47 percent say it is opening up new sales opportunities.

It’s remarkable to think that today’s AI breakthroughs emerged just a few years ago, and yet AI investments in Asia Pacific are already on track to reach US$110 billion by 2028, growing at a 24% CAGR. At the same time, it’s important to understand the dual nature of AI. Without the right checks and balances, the very technology that promises to unlock a treasure trove of innovation can quickly become a major liability.

Governing AI’s Data Needs

As businesses rushed to adopt AI, whether out of necessity or because they had succumbed to the hype, many found that the deciding factor in the success or failure of their generative AI projects was the quality of their data. As AI adoption grows, it has also become clear that not everything it generates is accurate. AI models can be prone to bias, mistakes, and even hallucinations (misleading outputs) — issues closely tied to the quality, or lack of it, of the underlying data.

What’s more, AI’s insatiable appetite for data raises considerable data privacy concerns, especially as awareness of this risk grows. Recent headlines, for example, have been dominated by criticism over DeepSeek’s data collection practices, prompting places like Taiwan, Australia, and South Korea to ban its use in government agencies.

On the bright side, these challenges are pushing businesses and governments alike to reassess their AI implementation strategies and place greater emphasis on robust data governance and management practices. In Singapore, these initiatives include the National AI Strategy and Smart Nation 2.0, which drive the development of clear and effective AI regulations that ensure transparency and accountability. The AI Verify Tool was also launched to strengthen the country’s leadership in AI trust and provide businesses with an internationally recognised method for assessing the integrity and compliance of their AI systems.

Smarter Governance, Smarter AI

Good governance will empower organisations to thrive with AI, particularly if they can see past the hype, mitigate the risks and prioritise AI solutions that deliver tangible business value and align with their strategic goals. In fact, according to Boomi’s report, A Playbook for Crafting AI Strategy, produced in collaboration with MIT Technology Review Insights, 45% of organisations consider governance, security, and privacy issues to be major obstacles to rapid AI deployment.

With AI technology itself still maturing, the next phase of its evolution likely lies in the agent-based use of AI. These “AI agents” are designed to function as virtual assistants that work alongside humans, capable of making independent decisions, learning in real time, and potentially becoming fully autonomous one day.

However, if there is one important lesson to learn from the past, it’s that effective management and governance of AI agents must start now. Otherwise, the consequences of leaving these digital workers to function independently without proper monitoring or oversight could be far more severe than any AI hallucination.

Keeping AI Agents In Check

When AI agents eventually operate at scale, organisations will need to turn to AI governance platforms that can effectively monitor the operational aspects of their AI systems centrally. This is something current tools simply cannot do. More advanced governance platforms will play a key role in establishing, managing, and enforcing policies that promote the transparent and responsible use of AI agents.

Through effective application programming interface (API) management and an AI agent catalogue, these platforms can not only monitor and track AI agents throughout their lifecycle but also provide valuable insights into key aspects such as activity logs, model construction, data usage, and the rationale behind their outputs. Just as important will be their robust security monitoring capabilities, which can detect, limit, or halt any adverse agent behaviour within complex multi-agent systems.

More Than a Technical Fix

As AI continues to evolve, regulations are expected to follow suit, albeit perhaps not quite as quickly. That said, countries in this region are likely to introduce regulations similar to those seen globally, following measures such as the stringent EU AI Act that came into force in February 2025. This will fuel further investments in technologies designed to enforce AI governance and reduce the associated risks.

Nevertheless, AI governance is far from just a technological issue. Organisations will quickly realise that they must also address certain challenges, like cultural resistance, which cannot be solved by technology alone. Gaining stakeholder support hinges on recognising and addressing cultural attitudes toward AI, with education playing a critical role in easing adoption. The ultimate goal should be to standardise AI governance practices, especially within diverse and fragmented enterprise environments.

Trust, Transparency and AI

AI holds immense potential but, as a fast-evolving field, its widespread adoption will not come without obstacles. Robust AI governance should serve as the bedrock for responsible and effective AI. Gartner believes that “by 2028, enterprises using AI governance platforms will achieve 30% higher customer trust ratings and 25% better regulatory compliance scores than their competitors, along with 40% fewer AI-related ethical incidents compared to those without such systems.”

Then, of course, comes training AI systems on high-quality, representative data to achieve fair and accurate outcomes. At its heart, this is about keeping pace with the “growing pains” of using AI, managing the risks and, above all, fostering transparency that strengthens the trust of users, customers and all stakeholders.

Attributed to: David Irecki, Chief Technology Officer for APJ at Boomi