Nomad AI
AI Policy Development
Custom AI governance frameworks, acceptable use policies, and regulatory compliance — built for how your organisation actually works.
Why AI policy matters
Most organisations adopt AI tools before establishing governance. Staff use free AI tools with sensitive data. There are no guidelines on what's appropriate. Compliance gaps emerge only when something goes wrong.
We build clear, practical AI policies tailored to your organisation — not generic templates. Every policy reflects your sector, your data, your regulatory environment, and how your team works day to day. The result is governance your people will actually follow.
What we cover
Acceptable Use Policies
Clear rules for what AI tools staff can use, what data can be input, and what requires human review. Practical enough to follow, specific enough to protect you.
Data Classification
Frameworks that define what data is safe for AI, what needs enterprise-grade tools, and what should never touch an AI system. Mapped to your actual data landscape.
EU AI Act Compliance
Assessment of where your AI use cases fall under the EU AI Act risk classification. Compliance roadmaps, documentation requirements, and transparency obligations.
GDPR Alignment
Ensuring AI systems comply with data protection law — lawful basis for processing, data minimisation, automated decision-making requirements, and data protection impact assessments (DPIAs).
Vendor & Tool Assessment
Evaluation frameworks for assessing AI tools before adoption. Data handling practices, security posture, training data usage, and contractual protections.
Risk Assessment
Identifying and mitigating AI-specific risks — hallucination, bias, data leakage, over-reliance, and reputational exposure. Practical risk registers, not theoretical frameworks.
Governance Structures
Internal accountability — who approves AI tools, who reviews outputs, escalation paths, incident response, and ongoing oversight responsibilities.
Sector-Specific Alignment
Policies that account for your industry's regulatory environment — financial services, healthcare, legal, education, engineering, and public sector requirements.
How it works
01
Audit
We assess your current AI usage, data flows, tools in use, and risk exposure across the organisation.
02
Draft
We develop policies and frameworks specific to your organisation, sector, and regulatory environment.
03
Align
We ensure compliance with the GDPR, the EU AI Act, and any sector-specific regulations that apply to you.
04
Implement
We train your team on the policies and embed them into daily operations — not just a document on a shelf.
05
Review
Ongoing policy updates as AI tools, regulations, and best practices evolve. Your governance stays current.
Common scenarios we address
Staff using free ChatGPT with client data
No clear rules on which AI tools are approved
Board asking about AI governance but no framework exists
Preparing for EU AI Act compliance deadlines
Client or partner requesting your AI policy as part of procurement
Scaling AI adoption but concerned about reputational risk
GDPR uncertainty around automated decision-making
Employees experimenting with AI but no oversight structure
