Carsten Felix Draschner, PhD

Alan - Full-Stack Enterprise GenAI and LLM Platform

Alan is a sovereign enterprise GenAI platform by Comma Soft AG, built around secure LLM usage, company knowledge, retrieval-augmented generation (RAG), evaluation, and deployment options for European organizations.

Alan project overview


Context

Alan was developed at Comma Soft AG as an enterprise GenAI platform for organizations that want to use Large Language Models with stronger control over their data, infrastructure, and business context. The public Alan documentation describes it as a software solution for using Generative AI and especially LLMs securely in organizations, with a focus on making AI accessible without requiring every user to understand the underlying technical complexity.

The motivation was not simply to add another chat interface. In my original post about the Alan development journey, I framed the question as: do we need another GenAI solution? The technical answer was that many enterprise use cases need more than access to a strong base model. They need data-flow control, secure hosting, on-premise or air-gapped deployment options, domain adaptation, knowledge enrichment, alignment choices, and evaluation methods that the team can actually trust.

What made the project interesting

Alan sits at the intersection of research, software engineering, enterprise architecture, and responsible AI product development. Public Alan materials emphasize secure deployment, German hosting, GDPR-compliant usage, on-premise operation, RAG, APIs, team collaboration, role management, and optional fine-tuning for company-specific requirements.

From my perspective, the most interesting technical work was building an end-to-end GenAI system rather than a single model demo.

My contribution areas

During my time as Applied AI Scientist and Machine Learning Consultant at Comma Soft AG, I worked on the technical development and applied R&D of Alan and related GenAI projects. The work spanned LLM training and fine-tuning, autonomous hybrid model evaluation, model selection, RAG, response verification, image GenAI, and scalability optimization.
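To make the RAG part of this list concrete: the core pattern is retrieving the most relevant company documents for a query and grounding the LLM prompt in them. The sketch below is purely illustrative and is not Alan's implementation; the function names are hypothetical, and the bag-of-words "embedding" is a stand-in for the trained embedding models a real system would use.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words vector; production RAG uses a trained embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=2):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    # Ground the LLM prompt in the retrieved context only.
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Alan supports on-premise deployment for sensitive workloads.",
    "The cafeteria opens at noon.",
    "Role management controls which teams see which knowledge bases.",
]
prompt = build_prompt("Which deployment options exist?", docs)
```

In an enterprise setting, the interesting engineering lives around this loop: access control on the document store, evaluation of retrieval quality, and verification of the generated answer against the retrieved context.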

Several topics from my blog posts came directly out of this work.

Lessons learned

The Alan project reinforced a pragmatic view of GenAI: the model is only one part of the product. For real organizations, the surrounding system matters just as much. Data governance, user workflows, integration, deployment, cost, latency, evaluation, documentation, and change management decide whether a powerful model becomes useful in practice.

It also showed how important it is to keep research curiosity close to product reality. New models, evaluation methods, agentic patterns, and RAG techniques move quickly. But enterprise systems need stability, transparency, and a clear view of the trade-offs. That tension made Alan an especially valuable project for me: it combined hands-on LLM engineering with the practical question of how organizations can adopt GenAI responsibly.