What the EU AI Act means for HR 

As artificial intelligence (AI) gains traction in HR, the European Union (EU) could shape the path forward.

In March, the EU Parliament approved the EU AI Act: a proposal to ensure that AI is developed and deployed in a way that balances empathy with economics. The Act is expected to become law in 2024 and take full effect in 2026, leaving companies limited time to prepare.

Exploring the Artificial Intelligence Act

Because AI is essentially borderless, we’ve yet to see how the global push for regulation will ultimately play out. The EU AI Act might lead the charge, but as some countries (e.g., the UK, Australia and Japan) draft their own regulations — which may not focus on the same areas as the EU’s — there’s a concern that too many guardrails will stifle innovation and blunt the EU’s competitive edge. The AI Act will, however, provide a legal framework for AI solution providers, their shareholders and investors.

In part, the EU AI Act considers the risk level of different AI applications. What qualifies as “low-risk” remains unclear. What we do know is that the EU demands a more cautious approach when AI has the potential to influence vulnerable populations, identify people through biometrics or impact their health, wealth and careers.

For EU companies that develop and use AI, these rules can bring new risks and rewards. Regulation drives trust, which in turn supports the funding, research and acceptance of responsible AI innovations. Still, the cost of compliance (roughly €300,000 by some estimates) could be a barrier to AI adoption for small- to mid-sized firms that can't afford it.

The EU Act in a global context

Many other countries and coalitions are drafting their own regulations to ensure safety and ethics in the AI space. These include the AI Safety Institute Consortium in the US, the Association of Southeast Asian Nations' (ASEAN's) Guide on AI Governance and Ethics, and other initiatives from the United Nations and G7. Some policies are voluntary and less comprehensive than the EU AI Act, and may carry varying degrees of societal risk and economic opportunity.

For global companies, however, these standards will stretch beyond EU borders and could spark, at least in the West, a shared agenda for AI governance and protection. In June 2023, Margrethe Vestager, the European Commission's Executive Vice President for A Europe Fit for the Digital Age, commented: "With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted." Only time will tell.

Best practices for AI governance and adoption

These regulations lay the groundwork for a heightened public awareness of AI and a shared language around the risks, but they're just one chapter in the AI governance saga. Balancing guidance and growth in this new era takes a sprawling network of investments, education and upskilling, and digital defense. Future innovations will augment work at the task level, optimize yesterday's workflows and power us forward to new heights of productivity.

As we prepare for the full force of these EU measures, here are some best practices for organizations to consider:

  • Set up a robust AI governance board that tracks global developments in regulations, actively guides the workforce on ethical usage and prescribes the right checks and balances.

  • Guide innovation teams to define outcomes and assess risks according to ethical guidelines. The impact of AI on end users and society can't be ignored.

  • Enhance workforce communication and training programs to foster firm-wide awareness of the changing regulatory landscape. Recognize that general-purpose AI models, like GPT-4 and other large language models, bring their own risk exposure and will require fresh thinking, education and knowledge-sharing to extend beyond product development teams and digital groups.

  • Because the EU AI Act may apply to third-party AI suppliers or deployers outside the EU, employers are advised to perform due diligence and thoroughly consider the AI impacts of their technology solutions and vendors.

  • Avoid building and using high-risk tech solutions, such as “black-box” AI tools that automate HR processes with little documentation and transparency. There is a risk that these may be banned or difficult to implement, given the EU AI Act.

  • Instead of more complex and high-risk AI tools, organizations can apply AI where it delivers efficiency gains and reduces manual labor: drafting concept work, contract documents and the like. The long-term gains from amplified intelligence (higher-level thinking, innovation and workforce development) will come only when employers learn how to leverage AI in a responsible, ethical manner that serves the business and its people.

For more on the "Art of the Possible" with AI, how AI is transforming the future of work and how you can set up appropriate governance models and training programs, contact Mercer.

The content of this article is intended to convey general information only, not to provide legal advice or opinions.
About the author(s)
Kate Bravery

Global Talent Advisory Leader, Mercer

Sebastian Karwautz

Partner, European & UK Transformation Services Leader

Kai Anderson

Transformation Lead International

Sebastian Unterreitmeier

Principal, Senior Manager Talent Strategy Consulting

Andreas Gömmel

Principal, Senior Manager Talent Strategy Consulting
