What this means in practice
This is how AI ethics plays out in practice across the systems we have delivered:
Citations users can verify. The RAG chatbot for a leading member network serving 1,000+ HumHub members and the GDV AI Knowledge Assistant for 400+ insurance companies both return cited answers grounded in the client's own documents. Users can click through to the source and verify.
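The mechanism behind verifiable citations can be sketched in a few lines: every retrieved passage keeps a link back to its source document, and that link is returned alongside the answer. This is a minimal illustration, not the production pipeline; the names (`Chunk`, `answer_with_citations`) and the stubbed generation step are assumptions for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    source_url: str  # link back into the client's own document base

def answer_with_citations(question: str, retrieved: list[Chunk]) -> dict:
    # The generation step is stubbed out; a real system would prompt the LLM
    # with `context` plus the question. The key point is that every answer
    # carries the URLs of the passages it was grounded in.
    context = "\n\n".join(c.text for c in retrieved)
    answer = f"(LLM answer grounded in {len(retrieved)} retrieved passages)"
    return {"answer": answer, "citations": [c.source_url for c in retrieved]}
```

Because the citation list is assembled from the same chunks the model saw, a user who clicks through lands on exactly the material the answer was grounded in.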
Human-in-the-loop on consequential outputs. The AI email agent for a leading donation platform classifies and drafts replies, but every reply is reviewed by a human before it is sent. The Tannenhof Berlin-Brandenburg AI transcription pilot produces structured therapy session reports that clinicians review before submission to German Pension Insurance.
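The human-in-the-loop pattern above amounts to a hard gate between drafting and sending. The following sketch (class and method names are hypothetical, not the deployed agent's code) shows the shape: the AI can only submit drafts, and only a human call to `approve` moves a reply out the door.

```python
from dataclasses import dataclass

@dataclass
class DraftReply:
    recipient: str
    body: str
    approved: bool = False

class ReviewQueue:
    """Drafts are held here; nothing leaves without an explicit human decision."""

    def __init__(self) -> None:
        self.pending: list[DraftReply] = []
        self.sent: list[DraftReply] = []

    def submit(self, draft: DraftReply) -> None:
        # The AI agent's only capability: queue a draft for review.
        self.pending.append(draft)

    def approve(self, index: int) -> DraftReply:
        # Called by a human reviewer; marks the draft approved and releases it.
        draft = self.pending.pop(index)
        draft.approved = True
        self.sent.append(draft)
        return draft
```

The design choice is that there is no code path from `submit` to `sent`: review is structural, not a policy the agent could skip.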
Risk-tiered before code. The Tannenhof pilot started with a premortem session that mapped failure modes (legal change, data security, user acceptance) before any code was written.
EU sovereignty by default. n8n hosted in Berlin, Qdrant hosted in the EU, and Azure OpenAI under Microsoft's EU sovereignty commitments. Self-hosted open-source EU alternatives such as Mistral and Milvus are available on request.