Responsible AI Statement
Last updated: 9 March 2026
Executive Summary
At ICS.AI, we believe that Artificial Intelligence (AI) will only deliver meaningful and sustainable value to the public sector if it is designed, deployed, and governed responsibly.
As a UK provider of public sector AI transformation, we are committed to ensuring that the AI systems we deliver are trustworthy, transparent, secure, and aligned with high ethical and regulatory standards.
Responsible AI is embedded throughout the SMART: Unified AI Platform and underpins our transformation methodology. From the SMART: AI Assessment through to the SMART: AI Target Operating Model (AI TOM), enterprise deployment and continuous value realisation, responsible practice is designed into our platform, operating model and governance framework.
Our solutions are deployed in customers’ own Microsoft Azure tenants and regions. ICS.AI is Cyber Essentials Plus certified and commissions regular independent penetration testing (including CREST/CHECK-accredited testing where applicable).
Our platform and methodology are designed to support alignment with the EU AI Act and the UK government’s Data and AI Ethics Framework (updated December 2025), helping customers meet their obligations under applicable laws, policies and standards.
Scope and how to use this statement
This statement summarises ICS.AI’s public commitments for responsible design, deployment and governance of AI-enabled capabilities within the SMART: platform. Formal role allocation, risk classification, and the applicable documentation set are confirmed per deployment and recorded in each customer’s compliance pack. Customer organisations remain accountable as deployers/controllers for how AI is used in their services, including lawful basis, transparency, retention and complaint handling.
Key definitions (plain English)
- Controller/Processor: Under data protection law, customers typically act as Controllers for operational data; ICS.AI typically acts as a Processor under contract.
- Provider/Deployer (EU AI Act): ICS.AI is usually the Provider of the AI system; customers are usually Deployers. If a customer substantially modifies or rebrands a system, the customer may become the Provider.
- Customer Tenant Deployment: The SMART: Platform is deployed within the customer’s Microsoft Azure tenant/region. Data, identities, logging and security configuration are governed primarily by the customer’s environment and policies.
Our Commitment
1. Public Sector Safety, Security, and Reliability
Our AI solutions are designed specifically for the needs, risks, and operational constraints of UK and EU public bodies. Our systems are:
- Secure by design, following robust cybersecurity practices and data-minimisation principles.
- Governed by strong access controls, auditability, and transparent oversight structures.
- Monitored to support reliability, stability, and operational resilience.
Platform deployments run in customers’ own Microsoft Azure tenants and regions, so identities, access policies, logging and security controls remain under the organisation’s governance. ICS.AI is Cyber Essentials Plus certified and commissions regular independent penetration testing (including CREST/CHECK-accredited testing where applicable).
2. Ethical, Explainable, and Transparent AI
We design AI systems that are understandable, accountable, and transparent in operation. Our approach includes:
- Clear explanations of AI outputs in accessible language.
- Linking responses to underlying sources where appropriate.
- Avoiding opaque deployments and supporting transparent operational governance.
- Clear ‘you are interacting with AI’ notices at first contact where AI interacts directly with residents or staff, with accessible wording and a route to a human where required.
Our SMART: Ethical Edge governance layer supports explainability, auditability and accountability through transparent interaction logging and oversight capabilities.
3. Fairness, Non-Discrimination, and Inclusivity
We work to reduce and mitigate bias and exclusion in datasets, retrieval sources, and model outputs. Our approach includes:
- Structured processes for bias identification and mitigation, proportionate to the use case.
- Fairness checks and monitoring practices, with documented remediation where needed.
- Curated knowledge sources and governance controls to reduce unsafe, biased or misleading outputs.
- Support for inclusive service delivery across channels, reading levels, and languages.
Our AI solutions are designed to improve accessibility to services rather than restrict it. We recognise that residual bias risk can remain in any AI system; we therefore emphasise monitoring, human review, and governance.
4. Privacy and Data Protection
ICS.AI complies with UK GDPR and, where relevant, EU GDPR. In most customer deployments, ICS.AI acts as a Processor, and the customer acts as the Controller.
- Customer data is not used to train generic or shared foundation models.
- Strict segregation between organisations protects resident and staff information.
- Privacy-by-design architecture supports lawful and secure processing.
- Deployment within the customer’s Azure tenant/region supports customer control over logs and telemetry.
- SMART: customer deployments are not offered as multi-tenant SaaS.
Customers remain responsible as Controllers for selecting lawful bases, providing transparency to end users, configuring retention and access policies, and meeting public accountability requirements.
5. Human Oversight and Accountability
We design AI to enhance, not replace, human expertise. Our approach includes:
- Human-in-the-loop or human-on-the-loop workflows where appropriate to the use case and risk profile.
- Role-based oversight and approval mechanisms.
- Guardrails to prevent autonomous decisions without human validation in sensitive contexts.
- Tools and reporting that support accountability and review.
Organisational accountability for decisions remains with the deploying organisation, supported by transparent tooling and evidence where required.
6. Responsible Deployment and Continuous Improvement
Responsible AI is operationalised through the SMART: AI Target Operating Model (AI TOM), defining how AI is governed, monitored, reviewed, and continuously improved.
This includes:
- Structured risk management and ethical review processes.
- Explainability standards for system behaviour and outputs.
- Comprehensive audit logging of interactions and operational events.
- Bias monitoring and corrective mechanisms.
- Human oversight structures with escalation pathways.
- Approval workflows for changes, updates, and new use cases.
Our SMART: AI TOM and AI Compliance Framework are structured to cover the core areas of the EU AI Act - risk management, data and data governance, transparency, human oversight, logging and post-market monitoring. Deployment-specific details are captured in each customer’s compliance pack, reflecting the use-case, data context, and risk classification.
7. Societal Impact and Value-Based AI
Our mission is to help public bodies achieve better outcomes for the communities they serve, not simply to deploy technology. Consistent with the UK government’s Data and AI Ethics Framework principle of societal impact, we:
- Promote AI uses that improve access, reduce inequality, and enhance service quality.
- Balance productivity and financial gains with ethical responsibilities to residents and communities.
- Assess and monitor broader societal effects of AI systems, not only technical performance.
- Support organisations to become AI-native through responsible cultural, operational, and governance change.
8. Environmental Sustainability
The December 2025 update to the UK government’s Data and AI Ethics Framework introduced environmental sustainability as a distinct ethical principle. ICS.AI takes this seriously. Our commitments include:
- Running on Microsoft Azure: in February 2026, Microsoft confirmed it had met a key milestone on its journey to carbon negative by 2030, matching 100% of its annual global electricity consumption with renewable energy by 2025.
- Deploying AI workloads within Microsoft Azure, whose data centres operate against published sustainability and net-zero targets, and selecting regions with lower carbon intensity where operationally appropriate.
- Applying retrieval-augmented generation (RAG) architectures, which are significantly more efficient than repeated full model training, to minimise compute and energy consumption.
- Not using customer data to train or fine-tune foundation models, avoiding the significant energy overhead of model training.
- Working with customers to right-size AI deployments and avoid unnecessary or redundant workloads.
We will continue to develop and publish our approach to sustainable AI as guidance and measurement standards in this area mature.
Transparency, Contestability, and Oversight
Public bodies deploying AI systems remain responsible for meeting transparency and public-accountability obligations. ICS.AI supports this by:
- Designing systems that clearly indicate AI use at first contact.
- Providing technical capabilities and guidance to help organisations meet transparency requirements, including the UK government’s Algorithmic Transparency Recording Standard (ATRS).
- Supplying documentation and audit evidence where needed.
ICS.AI supports the right of individuals to contest AI-assisted outcomes that affect them by enabling:
- Accessible explanations of how outputs are generated (where appropriate and proportionate).
- Audit logs that support review and challenge processes.
- A route to a human for sensitive topics and escalations where required.
- Evidence that helps deploying organisations respond to challenges and complaints.
Residents can raise concerns through the complaint and oversight mechanisms operated by the deploying public authority, or relevant regulatory bodies where applicable. ICS.AI does not operate a separate resident complaint channel but supports customers with logging, transparency and auditability. We provide support materials to help customers implement complaint-handling requirements under the Data (Use and Access) Act 2025 as applicable.
Our SMART: Platform - Regulatory Alignment
We design AI for public bodies and regulated organisations so it is safe, transparent and accountable by default. Solutions are deployed in the customer’s Microsoft Azure tenant and region, so data, identities, logging, backups and availability remain under customer control. We use retrieval-augmented generation (RAG) over customer-approved sources and do not use customer data to train shared foundation models.
UK alignment (supporting customer obligations)
ICS.AI’s platform and methodology are designed to support customers in meeting obligations under:
- UK GDPR and the Data (Use and Access) Act 2025 - privacy-by-design architecture, data minimisation, and support materials for complaint handling and accountability processes.
- UK government Data and AI Ethics Framework (updated December 2025) - our commitments align with the framework’s principles, including transparency, accountability, fairness, privacy, safety, societal impact and environmental sustainability.
- UK Government AI Playbook (February 2025) - our deployment methodology supports public sector teams in applying the Playbook’s principles, including lawful and ethical use, bias mitigation, and appropriate assurance.
- Algorithmic Transparency Recording Standard (ATRS) - documentation, logging, and audit capabilities to support completion of required records where applicable.
- Cyber Essentials Plus and independent penetration testing - assurance of core security controls.
EU AI Act alignment (supporting risk-based obligations)
ICS.AI has designed the SMART: platform to support alignment with the EU AI Act (Regulation (EU) 2024/1689). In most implementations, ICS.AI acts as Provider, and the customer acts as Deployer; Microsoft is typically the general-purpose AI (GPAI) model provider (for example, Azure OpenAI). Where a customer substantially modifies or rebrands the system, the customer may instead become the Provider.
For use cases that the parties classify as high-risk, we support customers by:
- Operating a risk management approach and contributing to post-market monitoring activities for the system and deployment.
- Providing logging capabilities and recommending appropriate retention periods for deployers (for example, at least six months, subject to customer policy and legal requirements).
- Supporting public-body deployers with Fundamental Rights Impact Assessment (FRIA) materials before first use where required.
- Supporting any required registration steps for the relevant system and role allocation.
If ICS.AI acts as a non-EU Provider for a high-risk system placed on the EU market, we will appoint an EU Authorised Representative where required. We consume and retain relevant GPAI transparency artefacts made available by model providers (for example, EU training-data summary templates) as part of our documentation set.
Security and Resilience
Deployments inherit the customer’s Entra ID (RBAC/MFA) policies; data is encrypted in transit (TLS/SIP TLS/SRTP) and at rest (e.g., Azure SQL TDE); and we undergo annual CREST/CHECK penetration testing.
Fairness and Inclusion
Our Ethical Edge and zero-bias safeguards, together with curated sources and human review, reduce biased or exclusionary outputs and route sensitive topics to staff.
Explainability and Human Oversight
We label AI interactions at first contact, link answers to their sources and provide a clear path to a human at any time. Staff copilots produce drafts; humans remain accountable for decisions.
Prohibited Uses
We will not build or deploy systems that fall under prohibited practices under the EU AI Act or other practices classified as unacceptable risk, including (where applicable) social scoring; manipulative or exploitative systems; and emotion recognition in the workplace or education.
Our Promise
ICS.AI is committed to delivering trusted AI that is safe, secure, inclusive, transparent, and aligned with the public good. Through the SMART: AI Target Operating Model and the SMART: Unified AI Platform, we support organisations to adopt AI confidently and responsibly.
We will develop and deploy AI in a way that supports public trust and strengthens the services that communities rely on.
Contact
For questions or concerns about Responsible AI at ICS.AI, contact: info@ics.ai.
We review this statement at least annually and whenever material changes occur to the regulatory landscape or our platform.
This statement summarises our public commitments. Formal role allocation, risk classification, and documentation are confirmed per deployment and captured in each customer’s compliance pack.
