Expert Insights On Ethics, Tech, And Law
January 9, 2025


On the fast-moving AI train, those responsible for risk management are striving to stay ahead.

As stories of bots and AI tools running amok make headlines and consumer AI tools flood the market, public trust in AI has taken a hit. A Gallup/Bentley University poll conducted in 2024 found that only 23% of American consumers trust businesses to handle AI responsibly.

For AI governance and compliance professionals, this is a reality they face every day. As 2025 brings with it new challenges, from AI agents to new regulatory developments, we spoke to industry leaders to get their views on the future of AI governance.

The regulatory labyrinth will become more complex

In 2025, AI governance will largely depend on compliance with new regulations, predicts Michael Brent, director of responsible AI at Boston Consulting Group (BCG).

The EU’s Artificial Intelligence Act, with fines of up to €35 million, is set to become a defining force in the global governance of artificial intelligence.

“The EU’s regulatory approach will serve as a pilot, with organizations and countries monitoring its impact on competitive advantage and business operations,” explains Alyssa Lefaivre Szkopak, Director of AI Trust and Safety at the Alberta Machine Intelligence Institute (Amii).

Ms. Lefaivre Szkopak predicts that soft law mechanisms, including standards, certification, collaboration between national AI safety institutes, and field-specific guidelines, will play an increasingly important role in filling regulatory gaps. “It will continue to be fragmented and will not be fully harmonized for the foreseeable future, if ever,” she admits.

Meanwhile, the situation in the US is expected to remain fragmented.

Alexandra Robinson, who leads the AI governance and cybersecurity policy groups supporting US federal government partners at Steampunk Inc., predicts that “state governments will invest in consumer-focused AI legislation, while Congress will likely prioritize reducing barriers to innovation, reflecting the consumer privacy regulatory landscape in the United States.”

Experts predict that compliance will take different forms. Ffion Lee-Madan, co-founder of Fairly AI, an AI governance software company, makes a bold prediction: “ISO/IEC 42001 certification will be the hot ticket in 2025 as organizations move from talking about AI to addressing real-life AI security and liability compliance requirements.”

Standards and certification, while voluntary, are becoming important tools for navigating a complex regulatory environment, and procurement departments are increasingly under pressure to ensure trust and compliance from AI suppliers, Ms. Lee-Madan argues.

Agentic AI will change governance priorities

While generative AI dominated the headlines in 2024, experts believe 2025 belongs to agentic AI. These systems, capable of autonomously planning and executing tasks based on user-defined goals, pose unprecedented governance challenges.

“With more research into agentic workflows, we expect a surge in AI governance activity focused on AI agents,” predicts Apoorva Kumar, CEO and co-founder of Inspeq AI, a responsible AI operations platform.

Building on this, José Belo, co-chair of the International Association of Privacy Professionals (IAPP) London chapter, warns that the decision-making capabilities of these systems raise pressing questions about autonomy and the safeguards needed to prevent harm. Likewise, experts such as Amii’s Ms. Lefaivre Szkopak expect significant research aimed at balancing the autonomy of these systems with accountability for their actions.

The implications for the workforce also look significant: “This will naturally intensify debate and research about the impact of AI on the workforce and the replacement of employees by AI agents and at what scale,” she warns.

AI Governance Will Move from Ethics to Operational Realities

“AI governance is no longer just an ethical afterthought; it is becoming standard business practice,” says Ms. Lefaivre Szkopak.

According to Giovanni Leoni, responsible AI manager and associate director at Accenture, companies are embedding responsible AI principles into their strategies, recognizing that governance involves people and processes as much as the technology itself.

Viewing governance as part of a larger transformation, Mr. Leoni notes: “AI governance is the way to manage change.” This shift reflects the growing recognition of AI governance as a critical component of strategic planning rather than an isolated initiative.

This evolution is further highlighted by Alice Thwaite, Head of Ethics at Omnicom Media Group UK, who notes that companies are starting to distinguish between the concepts of AI governance, ethics, and compliance. “Each of these areas requires a unique framework and expertise,” she notes, reflecting a maturing understanding of the field.

Meanwhile, Mr. Kumar turns his attention to the operational side of this transformation. With the advent of Responsible AI Operations (RAIops) and platforms like Inspeq AI, companies now have tools to measure, monitor and audit their AI applications, integrating governance directly into their workflows.

Environmental considerations will play a big role in AI governance

Environmental considerations are becoming a major governance issue, experts predict. The IAPP’s Mr. Belo emphasizes that reducing the environmental impact of AI is a shared responsibility between vendors and developers.

Suppliers must take the lead by designing energy efficient systems and implementing transparent carbon reporting practices. Developers, in turn, must adopt sustainable cloud practices, prioritize greener data centers, and minimize redundancies. Ethical decommissioning of AI systems will also be critical to prevent unnecessary environmental degradation.

Key drivers for progress in AI governance

What will drive progress in AI governance? Industry leaders offer key insights, each emphasizing different but interrelated factors:

Michael Brent of BCG emphasizes the role of active corporate participation: “The most important factor that will accelerate progress in AI governance is active corporate investment, including the creation of responsible AI teams.”

From a practical perspective, Inspeq AI’s Apoorva Kumar points out the real-world implications: “The loss of trust and reputation has already been costly for companies such as DPD, Snapchat and Google Gemini. Continued failures will drive further progress in AI governance.”

On the procurement side, Ms. Lefaivre Szkopak emphasizes the importance of purchasing power: “Organizations must use their purchasing power to demand higher standards from AI providers, requiring transparency, documentation, and testing results.”

Finally, as AI becomes more widespread, Mr. Belo emphasizes the need for education: “AI literacy is gaining recognition as a critical requirement across all industries.”

Each view reinforces the notion that progress in AI governance requires action on multiple fronts—corporate commitment, transparency, and a growing emphasis on literacy and accountability.

The way forward: clear objectives, comprehensive solutions

To sum up, the path to improving AI governance is unlikely to be easy. Some of the more optimistic predictions, such as increased investment in AI compliance, are tempered by the ongoing complexity of both the theoretical foundations and the operational challenges of AI governance.

Global harmonization remains an elusive goal, especially in light of recent events in the United States. Organizations continue to navigate a patchwork of soft law mechanisms—frameworks, standards, and protocols—without clear regulatory guidance for specific use cases.

At the same time, new trends in AI, such as agentic AI, are poised to unleash a new wave of challenging risks that will test the adaptability of responsible AI practitioners. A key distinction remains between a holistic, human-centered approach to responsible AI development and a narrower focus on risk management at the highest levels.

One thing is clear: no team can face these challenges alone. As Ms. Robinson of Steampunk aptly sums it up: “My motto for 2025 is to move from extractive AI compliance to effective collaboration. For those of us working on AI governance, we need to empower technologists to build and deploy safe, secure, and responsible AI. This means we have to meet people where they are: we can’t give a product owner a 500-question AI risk assessment and expect anything other than disappointment.”

While the AI governance landscape in 2025 promises to be as complex as ever, the outlines of a more structured and actionable AI governance framework are becoming visible.

