Responsible AI
Last Updated: April 15, 2026
At Smalt AI, we believe that AI should augment human capabilities, not replace human judgement. As an AI platform serving the financial services industry, we hold ourselves to the highest standards of responsibility, transparency, and ethical practice.
Our AI Principles
1. Human-in-the-Loop
AI is a powerful tool, but critical decisions require human oversight. We design our platform so that:
- AI outputs are presented as recommendations, not directives
- Users are always in control of final decisions
- The platform encourages review and verification before action
- We never fully automate decisions that have significant financial or legal consequences
2. Transparency
You deserve to understand how our AI works:
- Model Disclosure: We are transparent about which AI models power our platform (currently Anthropic's Claude and Google's Gemini)
- Limitations: We clearly communicate what AI can and cannot do. AI-generated content may contain errors, hallucinations, or outdated information
- Confidence Indicators: Where feasible, we provide context to help users assess the reliability of outputs
- No Black Boxes: We explain our approach to AI processing in our Security & Trust Center
3. Data Privacy and Protection
Your data is sacred to us:
- No Training on Customer Data: We do not use your inputs or outputs to train any AI models
- Data Isolation: Each customer's data is logically isolated from other customers
- Minimal Data Sharing: We share only what is technically necessary with AI providers, under strict data processing agreements
- Right to Deletion: You can delete your data at any time
4. Fairness and Bias Mitigation
We are committed to minimising bias in our AI outputs:
- We select foundation models from providers who invest heavily in safety and bias research
- Our prompt engineering and system design aim to produce balanced, objective outputs
- We encourage users to critically evaluate AI outputs and report any concerning patterns
- We regularly review our AI systems for potential bias in financial analysis and recommendations
5. Safety and Reliability
We build with safety as a core requirement:
- Robust input validation and output filtering
- Rate limiting and abuse prevention systems
- Clear disclaimers that AI outputs are not professional financial, legal, or tax advice
- Error handling and graceful degradation when AI services experience issues
- Regular testing and evaluation of AI output quality
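To make the graceful-degradation point above concrete, here is a minimal sketch of failing over between AI providers with retries. The provider names, retry policy, and simulated outage are illustrative assumptions, not Smalt AI's actual implementation.

```python
# Illustrative sketch of graceful degradation across AI providers.
# Provider names, retry counts, and the simulated outage are hypothetical.

import time

PROVIDERS = ["claude", "gemini"]  # assumed preference order


class ProviderError(Exception):
    """Raised when a provider call fails."""


def call_provider(name: str, prompt: str) -> str:
    """Stand-in for a real API call. Simulates the primary being down."""
    if name == "claude":
        raise ProviderError("claude unavailable")
    return f"[{name}] response to: {prompt}"


def generate(prompt: str, retries: int = 2, backoff: float = 0.5) -> str:
    """Try each provider in order, retrying with backoff before failing over."""
    last_error = None
    for provider in PROVIDERS:
        for attempt in range(retries):
            try:
                return call_provider(provider, prompt)
            except ProviderError as err:
                last_error = err
                time.sleep(backoff * (attempt + 1))
    # All providers exhausted: degrade gracefully with a clear error
    # instead of surfacing a raw stack trace to the user.
    raise RuntimeError("AI services are temporarily unavailable") from last_error
```

In this sketch, a request that fails on the primary provider falls back to the secondary rather than erroring out, and only raises a user-facing error once every provider is exhausted.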
6. Accountability
We take responsibility for our AI systems:
- Clear governance structures for AI-related decisions
- Feedback mechanisms for users to report issues with AI outputs
- Commitment to continuous improvement based on user feedback
- Cooperation with regulators and industry bodies on AI governance
How Our AI Works
Architecture Overview
Smalt AI uses a multi-model architecture, matching each task to the model best suited to it:
| Component | Description |
|---|---|
| Foundation Models | We use leading AI models (Anthropic Claude, Google Gemini) via their enterprise APIs. These models provide the core reasoning and language capabilities. |
| Intelligent Routing | Our system routes each query to the most appropriate model based on the task type, optimising for quality and efficiency. |
| Context Engineering | We use advanced context management to provide models with relevant information while minimising unnecessary data exposure. |
| Specialised Skills | Domain-specific capabilities (financial modelling, document generation, research) are built as structured skill modules that guide AI behaviour. |
| Output Validation | Generated outputs pass through validation layers to catch formatting issues, calculation errors, and policy violations. |
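The routing and validation stages in the table above can be sketched roughly as follows. The task categories, model identifiers, and validation rules are hypothetical placeholders chosen for illustration, not Smalt AI's actual routing logic.

```python
# Illustrative sketch of task-based model routing and output validation.
# Task types, model names, and validation checks are hypothetical.

from dataclasses import dataclass, field

# Hypothetical mapping of task types to foundation models.
MODEL_ROUTES = {
    "financial_modelling": "claude",   # multi-step quantitative reasoning
    "document_generation": "claude",
    "research": "gemini",              # broad retrieval-style queries
}
DEFAULT_MODEL = "claude"


@dataclass
class ValidationResult:
    ok: bool
    issues: list = field(default_factory=list)


def route(task_type: str) -> str:
    """Pick the model for a task type, falling back to a default."""
    return MODEL_ROUTES.get(task_type, DEFAULT_MODEL)


def validate_output(text: str) -> ValidationResult:
    """Minimal post-generation checks before an output reaches the user."""
    issues = []
    if not text.strip():
        issues.append("empty output")
    if len(text) > 50_000:
        issues.append("output exceeds length limit")
    return ValidationResult(ok=not issues, issues=issues)
```

A real validation layer would also cover formatting, calculation, and policy checks as the table describes; this sketch only shows where such checks would sit in the pipeline.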
What Our AI Does NOT Do
- Does not make autonomous trading or investment decisions
- Does not access external systems or execute actions without explicit user instruction
- Does not store or recall information from other customers' sessions
- Does not provide regulated financial advice (outputs should be reviewed by qualified professionals)
- Does not learn or retain information between separate conversations (unless you use conversation history features)
Regulatory Alignment
We monitor and align with emerging AI regulations globally:
| Regulation / Framework | Our Approach |
|---|---|
| EU AI Act | We classify our system as a general-purpose AI application and comply with transparency and documentation requirements. We monitor regulatory guidance for financial services-specific requirements. |
| UK AI Regulation | We follow the UK's pro-innovation framework and sector-specific guidance from the FCA and other regulators. |
| NIST AI RMF | Our risk management practices are informed by the NIST AI Risk Management Framework. |
| ISO/IEC 42001 | We are aligning our AI management practices with ISO/IEC 42001, the international standard for AI management systems. |
Known Limitations
We believe in being upfront about what AI cannot do:
- Hallucinations: AI models can generate plausible-sounding but incorrect information. Always verify critical facts.
- Knowledge Cutoff: AI models have training data cutoffs and may not have the most current information.
- Calculation Accuracy: While our financial modelling tools include calculation engines, AI-generated numerical analysis should be independently verified.
- Context Limitations: Very long or complex conversations may result in the AI losing track of earlier context.
- Bias: Despite mitigation efforts, AI outputs may reflect biases present in training data.
Feedback and Reporting
We actively welcome feedback on our AI systems:
- Report AI Issues: support@smaltai.com with subject line "AI Feedback"
- Responsible Disclosure: support@smaltai.com for security concerns
- General Feedback: Use the in-app feedback button or contact your Customer Success Manager
Our Commitment
We are committed to evolving our responsible AI practices as the technology and regulatory landscape develops. We will:
- Regularly review and update this page
- Engage with industry bodies and regulators
- Invest in AI safety research and testing
- Maintain open dialogue with our customers about AI capabilities and limitations
Contact us at support@smaltai.com or speak to your account manager.