Publication from the Development Department
Responsible AI at Orbit with the AI Act as a framework
This is how Orbit ensures responsible, transparent and secure use of AI with the AI Act as the foundation.
René Dalsgaard
AI as an integrated part of everyday life
AI started as something new and exciting, but in just a few years, it has grown rapidly and is now used by everyone from ordinary citizens to large companies. It has evolved into a tool that can optimise businesses and free up more time.
However, rapid development also means that it is essential to have control over how AI is used, both to protect data and to ensure that the technology supports the business in a responsible manner.
That is precisely why the EU has introduced the AI Act. But what does it actually mean in practice? And how does it affect companies like ours and you as customers?
What is the AI Act?
The AI Act is the EU's new artificial intelligence regulation. Its aim is to ensure that AI is used in a responsible and transparent manner. While the law applies to all companies that use, develop or offer AI, its impact varies depending on the risk level of the technology.
The EU classifies AI into four categories:
- Unacceptable risk – e.g. manipulative or surveillance AI. These systems will be banned.
- High risk – e.g. AI influencing decision-making in areas such as employment, credit, health or safety. These systems will be subject to strict documentation and control requirements.
- Low risk – e.g. chatbots and automation. Only a few requirements apply here, but you still need to understand how the AI works and how to use it responsibly.
- Minimal or no risk – e.g. spam filters. Most AI systems fall into this category, which is not affected by the AI Act.
Even if you fall into the “low risk” or “minimal or no risk” categories, you still need an overview of your AI use and a clear understanding of your responsibilities.
What are we doing?
At Orbit, we fully support the AI Act and its aim to increase credibility, transparency and accountability.
We know that data often contains sensitive information that must not be shared externally. That is why Orbit’s AI Assistant is designed to integrate seamlessly with the AI model you have approved within your organisation. This means you can realise the value of AI on your own, controlled terms.
Our solution falls under the low-risk category, but we have put a plan in place for how we will comply with the new regulation. Some of the key measures are outlined below:
- Transparency: Customers are informed when AI features are being used.
- Human oversight: All AI features are advisory in nature; users retain full control and decision-making authority.
- Risk and impact assessment: Internal risk assessment regarding data quality, fairness, explanations, and the impact on end users.
- Life cycle and change control: AI functionality is managed within our secure, compliance-focused ISAE 3402 framework.
- Security and data protection: All AI processing complies with our GDPR, ISO-based security controls and our privacy-by-design approach.
For future high-risk or advanced AI features (if relevant), we will implement conformity assessment and risk management processes in line with the requirements of the legislation.
What does this mean for our customers?
Although AI and the AI Act can feel like a large and overwhelming area to keep on top of, we see the regulation as a strength that supports more responsible use of AI. For you, this means:
- Peace of mind that Orbit’s AI solutions are built with responsibility at their core
- The ability to use AI effectively without compromising on security
- Transparency and clarity across all AI features
At Orbit, we see the legislation as a tool for navigating complex technology, enabling both our customers and us to use AI with peace of mind.
The best results are achieved when responsibility, transparency and technology go hand in hand. That is exactly what we are working towards. We therefore recommend involving all your suppliers in mapping risk levels to give you complete clarity and assurance regarding the AI Act.

Curious about Orbit's AI features?
If you would like to learn more about Orbit's AI features, you can read about them here.


