Prof. Dr. Kay Rottmann

AI Implementation

Pragmatic AI implementation from prototype to production. With eval framework, monitoring, and clean handover to your team - based on 15+ years of AI engineering at Meta, Bosch, and Amazon.

Implementation and content: Prof. Dr. Kay Rottmann

Professor of Applied AI · HdM Stuttgart · ex-Meta, Bosch, Amazon

What does AI implementation in a company actually mean?

AI implementation is the step from a prototype or PoC to a production system that's measured, controllable, and operable by your team. It's the part most AI projects underestimate - and where most of them fail. Getting a model to work in a notebook is easy. Embedding it into your systems so it runs reliably, can be controlled, catches errors, can be updated, and meets your compliance requirements - that's the actual work.

When does an AI implementation make sense?

When you have a use case that delivers clear business value, has been proven to work in a prototype, and that you want to run continuously rather than once. Typical signals: you have a working ChatGPT prompt or script that produces good results - but has to be run manually every time. A business unit uses an AI tool ad hoc and now wants to make it available to 50 colleagues. A PoC from a workshop convinced you, and the next step is a clean implementation.

How an AI implementation with r7net works

In four phases. Architecture (1 week): how does the AI integrate with your existing system landscape? Which data sources, which APIs, which permissions? Development (3 to 8 weeks): we build the production system including error handling, logging, monitoring, and an eval framework. Test & hardening (1 to 2 weeks): realistic load tests, edge-case coverage, security review. Handover & operation: training your team, documentation, optional ongoing maintenance via r7net GmbH.

Three things every good AI implementation includes

Three things that are often forgotten in pilots but cannot be missing in production: Eval framework - automated tests that regularly check whether the system still does what it should. Models change, data drifts, requirements shift. Without evals you don't know when it breaks. Monitoring & alerting - when the AI service fails or returns absurd answers, you want to know before your customers do. Fail-safe mechanisms - what happens when the model is uncertain or a tool doesn't respond? Clear fallbacks instead of hallucinations.
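The eval idea above can be sketched in a few lines: a fixed set of test cases is run against the system on a schedule, and a drop in the pass rate triggers an alert before customers notice. This is a minimal illustration, not r7net's actual framework - all names are hypothetical, and `answer_fn` is a stub standing in for the production model call.

```python
# Hypothetical stub for the production model call. In a real setup this
# would query your deployed LLM endpoint; the "I don't know" branch is
# the fail-safe fallback instead of a hallucinated answer.
def answer_fn(question: str) -> str:
    known = {"What is the return window?": "Returns are accepted within 30 days."}
    return known.get(question, "I don't know - please contact support.")

# Illustrative eval cases: expected behavior pinned down as substring checks.
EVAL_CASES = [
    {"question": "What is the return window?", "must_contain": "30 days"},
    {"question": "What color is the CEO's car?", "must_contain": "I don't know"},
]

def run_evals(fn, cases, min_pass_rate=0.9):
    """Run every case against fn; return (pass_rate, failures).

    A scheduler (cron, CI) would call this regularly and page someone
    when pass_rate falls below min_pass_rate.
    """
    failures = []
    for case in cases:
        answer = fn(case["question"])
        if case["must_contain"] not in answer:
            failures.append((case["question"], answer))
    pass_rate = 1 - len(failures) / len(cases)
    return pass_rate, failures

pass_rate, failures = run_evals(answer_fn, EVAL_CASES)
print(f"pass rate: {pass_rate:.0%}, failures: {len(failures)}")
```

Real eval suites replace the substring check with task-appropriate graders (exact match, rubric scoring, an LLM judge), but the structure - fixed cases, scheduled runs, alert on regression - stays the same.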

Build vs. buy: when does custom development pay off?

Not every use case needs custom implementation. If a standard SaaS (Microsoft Copilot, Notion AI, Zendesk AI) covers 80% of your use case, buy it. Custom development pays off when: (a) you need the use case integrated with your own data and systems, (b) compliance/data protection rules out external services, (c) the use case is specific enough that no off-the-shelf product fits, or (d) ongoing SaaS costs at your scale make in-house development more economical. In discovery we check exactly these questions - and tell you honestly if build isn't a good idea.

Format and investment

Typical engagement: 6 to 12 weeks from architecture workshop to production system, depending on complexity and integrations. Investment typically in the five-figure euro range, broken down by phase. Includes eval framework, monitoring setup, documentation, and handover to your team. On request we provide a concrete quote.

Frequently asked questions

How much does an AI implementation cost?
Typically in the five-figure euro range for a production system, depending on complexity, integrations, and whether a prototype already exists. On request we provide a concrete, phase-by-phase quote.
How long does an AI implementation take?
Standard is 6 to 12 weeks from architecture workshop to go-live. Depends on complexity, number of integrations, and availability of your data and APIs.
Are you tied to specific technologies or cloud providers?
We're technology-neutral. In practice we often use OpenAI, Anthropic, Google, or open-source models (Qwen, Gemma, Mistral) - depending on the use case. Cloud: AWS, Azure, GCP, or on-prem. The choice depends on your existing infrastructure and compliance requirements, not on affiliate deals.
What happens if the implemented system stops working later?
That's exactly why every implementation includes an eval framework that regularly checks whether the system still works, plus monitoring and alerting. Optionally, r7net GmbH offers ongoing maintenance - alternatively your team is equipped to operate the system independently after handover.
Do we need a prototype or workshop first?
Not strictly required, but strongly recommended. Without a validated use-case hypothesis, you risk ending up with a technically clean system that nobody uses. If you don't have a prototype yet, we start with an AI workshop or strategy engagement.
How are you different from a regular software agency?
Regular software agencies can integrate API calls but usually don't know the quirks of LLM-based systems - hallucinations, evals, prompt drift, model updates, cost management. We bring 15+ years of AI-specific engineering experience from Meta, Bosch, and Amazon to every engagement.

Let's talk.

Drop me a short note about what you're working on - I'll get back to you within a few days.

Request implementation