The EU’s AI Act: What It Means for Companies Working With AI

Brandon Loyd, Head of Product at Voxel

August 6, 2024

The European Union’s (EU) new AI Act is designed to ensure that AI develops safely. This post reviews the Act’s impact on US companies, ways to mitigate the risks of working with AI, and how Voxel can help partners navigate those risks.

The AI Act went into effect on Thursday, August 1st. This landmark EU regulation is the first of its kind in the world, creating guardrails for how companies can use AI. It affects many US-based tech companies, many of which do business in the EU, and we can expect similar legislation to start appearing around the world.

That trend makes sense. Voxel strongly supports the safe development and deployment of AI technologies. We also know that new tech regulations create risks for companies working with that tech. With that in mind, we want to help people understand this regulatory framework and what it means for them.

In this post, we will: 

  • Provide the major takeaways of the EU’s AI Act (4 Categories of Risk and Regulation) 
  • Describe some issues and friction this act creates for companies 
  • Offer ways to mitigate those risks (including partnering with Voxel)

The European Union’s AI Act: What’s In It

The EU calls the AI Act the world’s “first ever legal framework on AI” in its summary of the act. It separates AI technologies into 4 categories of risk, and sets out a series of regulatory benchmarks that must be met in each category. 

The EU’s 4 Levels of Risk and Regulation for AI Systems

Level 1: Minimal or No Risk

The EU places no restrictions on this level of risk; companies may freely develop these systems. Examples of systems in this category are “AI enabled video games or spam filters.”

Level 2: Limited Risk 

This category centers on transparency around AI-generated content. It covers chatbots and generated media such as videos and photos. The regulations largely amount to a disclosure requirement: companies must make clear that a chatbot, photo, video, or other piece of content is generated by AI. Basically, people should know when they’re conversing with a machine, or when the images in front of them are “not real.”
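
In practice, the disclosure requirement can be as simple as labeling machine-generated output before it reaches a user. Here is a minimal sketch in Python; the function name and label wording are our own illustration, since the Act requires disclosure but does not prescribe specific text:

```python
def with_ai_disclosure(generated_text: str) -> str:
    """Prefix AI-generated content with a disclosure notice.

    The label wording is illustrative; the AI Act requires disclosure
    but does not mandate exact text.
    """
    return f"[This response was generated by an AI system]\n{generated_text}"

# Example: a chatbot reply carries its disclosure to the user.
print(with_ai_disclosure("Your order shipped on Tuesday."))
```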

While that regulation is fairly straightforward, there can be confusion about whether an AI system falls under Level 2 or Level 3 because, according to the law firm Davis + Gilbert, “the risk level of such systems may vary depending on the application for which they are used.”

Which brings us to level 3. 

Level 3: High Risk 

Level 3 covers AI systems that affect essential services and areas that could determine a person’s livelihood, health, or financial future. Some examples include:

  • Critical infrastructure such as transportation 
  • Educational and vocational grading systems 
  • Safety systems 
  • Employment determinations such as resumé scanning 
  • Essential services 
  • Law enforcement determinations such as evidence review 
  • Border control management and other areas involving immigration 
  • Any areas involving the justice system, government, and democracy 

The EU lays out a number of regulatory benchmarks companies must clear in order to operate in any of the spaces above. AI systems in this category must have: 

  • Sufficient risk assessment and mitigation systems 
  • High-quality datasets feeding the system
  • Accurate activity logs 
  • Detailed documentation of the system and its purpose 
  • Clear information available for anyone deploying the system 
  • Adequate human oversight 
  • Robust levels of security and accuracy 
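
To make one of these benchmarks concrete, consider the activity-log requirement. The sketch below shows one way to record AI decisions as structured, timestamped events using Python’s standard logging module; the field names and schema are our own illustration, since the Act requires traceable logs but leaves the exact format to the provider:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ai_activity")

def log_ai_decision(model_version: str, input_id: str,
                    decision: str, operator: str) -> None:
    """Record one AI decision as a structured, timestamped event.

    The fields are illustrative; pick whatever schema lets an auditor
    trace a decision back to the model, input, and human reviewer.
    """
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_id": input_id,
        "decision": decision,
        "reviewing_operator": operator,  # supports the human-oversight benchmark
    }
    logger.info(json.dumps(event))

log_ai_decision("v2.3.1", "site-42/frame-9911",
                "forklift-proximity-alert", "safety-lead-01")
```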

Level 4: Unacceptable Risk

This category covers AI practices that pose a clear threat to people’s safety and fundamental rights, such as social scoring by governments. These systems are banned outright: they cannot be sold, used, or distributed in the EU. A high-risk (Level 3) system that fails to clear the benchmarks above is likewise barred from the EU market.

Potential Issues for Companies Using AI 

Adopting new technologies always comes with risk, and the AI Act amplifies that risk for companies developing and deploying AI. Here are a few issues companies should be aware of:

  • Increased Financial Risk: Developing AI technology is expensive, and making sure it conforms with new regulations adds to that cost. Furthermore, the penalties for violations are substantial. Depending on the violation, fines can reach 7.5 million or 35 million Euros (roughly $8.2 million or $38.3 million), or a percentage of worldwide annual turnover, whichever is higher (see the short calculation after this list).
  • Operational Disruption: It is unclear how long compliance certification from the EU will take. Waiting on approval can delay launch dates and erase strategic advantages when many competitors are racing to get products to market. And for many AI systems, this risk will not end with initial approval. According to the EU, “if substantial changes happen in the AI system’s lifecycle,” the system needs to be assessed for conformity again, which means companies may have to suspend service when updating their systems.
  • Reputational Risks: AI systems are under intense public scrutiny, and they should be. New transformational technology can change the world for the better, but it can also be used nefariously. The public is both excited about AI and skeptical of it. Running afoul of the EU’s AI Act can create public distrust: onlookers may see any penalty or failure to comply as a sign of bad intent or operational incompetence. That kind of reputational hit can damage business relationships, employee retention, and brand value.
  • Legal Liability: All of the risks listed here create legal exposure for companies working with AI. Stockholders, partners, public interest groups, competitors, and regulatory bodies themselves can raise a vast series of legal stumbling blocks for companies developing new technologies.
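
To see how the “whichever is higher” penalty cap works, here is a short worked calculation for the most serious tier (35 million Euros or 7% of worldwide annual turnover); the turnover figures are made up for illustration:

```python
def max_penalty_eur(worldwide_turnover_eur: int) -> int:
    """Upper bound for the most serious violations under the AI Act:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000, worldwide_turnover_eur * 7 // 100)

# EUR 200M turnover: 7% is EUR 14M, so the EUR 35M floor applies.
print(max_penalty_eur(200_000_000))    # 35000000
# EUR 1B turnover: 7% is EUR 70M, which exceeds EUR 35M.
print(max_penalty_eur(1_000_000_000))  # 70000000
```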

How to Mitigate Risks and Embrace Regulation 

The EU’s AI Act is the first wide-reaching piece of AI regulation, but it certainly will not be the last. Companies using AI need to be ready to prove to regulators and the public that their use of AI is safe. Dealing with all this risk, and with the potential for changing rules, can be time-consuming, expensive, and difficult.

One way for companies to reduce their risk is to spread it. Businesses in every sector are using AI, but that doesn’t mean they have to develop or own the technology themselves. An excellent way to limit exposure is to partner with established AI companies. A partner dedicated to keeping up with the latest regulations and providing safe AI services can be a cost-effective and efficient mitigation strategy.

As an AI services company in the workplace safety space, Voxel has developed partnerships across a wide variety of sectors. Our technology is helping keep workers safe everywhere from automotive plants to retail and grocery stores to major ports.

We are in the business of risk mitigation. We reduce safety risks for our partners, and we also work constantly to make sure our technology is safe, reliable, and compliant. Here are a few of the mitigation techniques we would suggest to any company considering incorporating AI into its business:

Risk Mitigation Techniques 

  • Comprehensive Risk Assessment: Conduct thorough risk assessments to identify potential issues before adopting new technology. This is required in the AI Act, but is also simply best practice. It is not always cheap or easy, but we make sure to stress-test any updates or new products before we offer them to our partners. 
  • Pilot Programs: Implement pilot programs to test new technology on a small scale before full deployment. This prevents overcommitment and saves money. Many of the companies we work with engage in a localized pilot program before expanding Voxel’s site-intelligence platform more widely. We encourage this, and are always excited to see if our services are right for a potential partner. 
  • Stakeholder Engagement: It is critically important to involve stakeholders in the planning and implementation process when it comes to adopting AI technologies. Voxel sees artificial intelligence as a tool that helps people make decisions and improvements. So buy-in from management, employees, and other stakeholders is vital. Many of our partners make agreements with unions and hold multiple town halls before installing our site-intelligence platform. 
  • Training and Support: Companies should provide adequate training and support for employees to ease the transition and ensure the effective use of any new technology. New tech can be useless without proper training. This is why we offer detailed onboarding and full time support to our partners. Our AI technology is widely used to help safety leaders train workers to develop safer habits, so training and support are in our nature. 
  • Robust Security Measures: Implement strong cybersecurity protocols to protect against data breaches and other security threats. This is not only required by the AI Act, but essential in today’s world. Voxel complies with the industry’s most stringent cybersecurity protocols: we have achieved SOC 2 compliance and look forward to maintaining the highest industry cybersecurity standards.
  • Communication Strategy: Develop a clear communication strategy to manage public perception and maintain transparency with all stakeholders. You want to make sure that any interested party knows how seriously you take being a transparent, trustworthy company. This is one reason we constantly update our blog and provide case studies describing our work with our partners. 

We at Voxel are excited to be here with you as both regulation and AI technology continue to evolve.

How Voxel Can Help 

Voxel’s intelligence platform connects directly to your existing security cameras and then uses computer vision and AI to give you more visibility and insight into the safety of every site, every day. You’ll have the data you need to take impactful, preventative action that keeps employees safe and strengthens your safety culture. 

Start your journey to a safer workplace with Voxel. Get a demo today.