The use of AI systems to enhance worker safety and health is expanding rapidly; examples include wearables, AI-driven software, and collaborative robots (cobots). With the entry into force of the AI Act, the European Union has established its first comprehensive legal framework for artificial intelligence. The regulation aims to ensure that AI systems are used in a safe, transparent, ethical, and reliable manner. On behalf of the European Agency for Safety and Health at Work (EU-OSHA), The Hague University of Applied Sciences investigated how this legislation influences current and future AI applications in the workplace, with a specific focus on identifying potential gaps and legal ambiguities relevant to occupational safety and health (OSH).

Target audience 

Policymakers (national and European), researchers, employers, employees, and OSH professionals. 

Research methodology 

Literature review. 

Key findings 

The study resulted in a discussion paper detailing the opportunities and challenges the AI Act presents for OSH systems. The main conclusions are: 

  • Safety vs. surveillance: AI applications are permitted as long as they serve safety purposes. However, as soon as a system is used for emotion recognition or intensive monitoring of employees, strict restrictions or even outright bans apply. 
  • The grey area: In practice, the line between what is legally permitted as a safety tool and what is prohibited as a surveillance mechanism is often razor-thin. 
  • Debate: The paper closes with a series of open questions intended to stimulate further social and political debate on this topic. 

Team 

  • John Bolte, Professor of Smart Sensor Systems  
  • Stefania Marassi, Researcher in the Smart Sensor Systems research group 

Funding 

European Agency for Safety and Health at Work (EU-OSHA) 

Contact 

For more information regarding this research, please contact Stefania Marassi.