About Recare
As one of the leading German HealthTech companies, we are reshaping discharge management – technology-driven, patient-centered, and free from bureaucracy. In addition to our market-leading SaaS platform, we develop AI solutions that radically simplify processes in hospitals and for aftercare providers, relieve healthcare professionals, and refocus attention on patients. Today, we already connect two-thirds of all German hospitals with over 650 rehabilitation clinics and 25,000 nursing and homecare providers. With currently around 100 employees, we continue to grow – and we are looking for people with character who want to help us improve the healthcare system and solve administrative complexity across care journeys in Europe.
What to expect as our Senior AI Data Platform Engineer (m/w/d):
Purposeful work – your role will have a positive impact on patients, their families, and healthcare professionals.
Company culture – we believe in flat hierarchies that promote high performance and strong team dynamics. We foster an environment characterized by mutual respect, loyalty, and recognition. Together, we strive for our goals – and expect the same from you.
Flexibility – want to pick up your child from daycare? Like to exercise during lunch? We’ll support you. We are a remote-friendly company offering flexible working hours. Workations are also possible by arrangement.
Edenred card – a benefits card you can use according to your needs.
Extra vacation day – you’ll have your birthday off so you can celebrate it with your loved ones.
In this role, you can make an impact and grow with us as Senior AI Data Platform Engineer (m/w/d):
You will own and evolve the data backbone that powers Recare’s AI products (Voice, Extract & Docs, Agent) and the shared platform primitives that agentic systems depend on. Your work ensures that data ingestion, processing, and serving are reliable, secure, and production-ready – forming the foundation for scalable AI systems in healthcare.
You own and evolve the ingestion and processing backbone, including batch and streaming pipelines for structured and unstructured data – designed with idempotency, retries, backpressure, and failure isolation from the start.
You design and operate asynchronous processing systems for heavy AI workloads, including queue/worker patterns, independent scaling, retry semantics, and poison-message handling.
You define and maintain API contracts and data schemas across producers and consumers, owning identity keys, versioning, and compatibility guarantees.
You own data lifecycle primitives, including secure storage (encryption at rest), sensitive data handling, retention and deletion semantics, right-to-erasure flows, and provenance/lineage for audit and debugging.
You design secure service-to-service authentication for cross-boundary synchronization, including short-lived credentials, rotation, least privilege, and auditability.
You build platform primitives that make AI features productizable, such as status tracking for long-running operations, feature gating, rollout controls, environment separation, multi-tenant isolation, and data minimization.
You build and maintain serving layers for ML/AI workloads – for example, unified data models that reconcile multiple sources into consistent, versioned views with cross-system identity resolution.
You make the system production-grade by driving observability (metrics, logs, traces, alerting), defining runbooks, and establishing cost and performance guardrails (throughput, latency, load testing).
You partner closely with BI and Product Data teams to expose clean, trustworthy datasets and align on KPI definitions – ensuring instrumentation is built into the backbone, not added later.
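To give a flavor of the queue/worker responsibilities above (retry semantics, independent worker scaling, poison-message handling), here is a minimal in-memory sketch in Python. The `process` handler, the "poison" payload, and the attempt limit are illustrative assumptions; a production system would sit on a durable broker such as SQS or Kafka rather than an in-process queue.

```python
import queue
import threading

MAX_ATTEMPTS = 3  # illustrative retry budget before a task is declared poison

def process(task_id: str) -> None:
    """Simulated handler: the hypothetical 'poison' payload always fails."""
    if task_id == "poison":
        raise ValueError("unprocessable payload")

def run_pipeline(task_ids, num_workers=2):
    """Drain a work queue with bounded retries; route poison messages to a DLQ."""
    work = queue.Queue()
    processed, dead_letter = [], []
    lock = threading.Lock()  # guards the shared result lists

    for tid in task_ids:
        work.put(tid)

    def worker():
        while True:
            try:
                tid = work.get_nowait()
            except queue.Empty:
                return  # queue drained; worker exits
            for attempt in range(1, MAX_ATTEMPTS + 1):
                try:
                    process(tid)
                except ValueError:
                    if attempt == MAX_ATTEMPTS:
                        # Poison message: isolate it so it cannot block the queue.
                        with lock:
                            dead_letter.append((tid, attempt))
                else:
                    with lock:
                        processed.append(tid)
                    break
            work.task_done()

    # Workers scale independently of producers: just change num_workers.
    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return processed, dead_letter
```

The key property being sketched: a failing message consumes a bounded retry budget and then moves aside, so the rest of the queue keeps flowing.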
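The API-contract and schema-evolution work above often comes down to consumers tolerating multiple contract versions while producers migrate. A minimal sketch, assuming a made-up event shape with an integer `schema_version` field and hypothetical field names (not Recare’s actual contract):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatientRecord:
    patient_key: str                      # stable identity key across systems
    name: str
    discharge_ward: Optional[str] = None  # added in v2; optional for compatibility

def parse_event(event: dict) -> PatientRecord:
    """Parse a versioned producer event into one unified consumer model."""
    # Dispatch on an explicit version field so old and new producers
    # can coexist while a migration is in flight.
    version = event.get("schema_version", 1)
    if version == 1:
        return PatientRecord(patient_key=event["id"], name=event["name"])
    if version == 2:
        # v2 renamed "id" -> "patient_key" and added an optional field;
        # consumers handle both shapes until every producer has migrated.
        return PatientRecord(
            patient_key=event["patient_key"],
            name=event["name"],
            discharge_ward=event.get("discharge_ward"),
        )
    raise ValueError(f"unsupported schema_version: {version}")
```

Treating the version dispatch as an explicit, tested code path is what makes schema evolution a core engineering discipline rather than an ad-hoc migration.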
Here’s how we picture you as our Senior AI Data Platform Engineer (m/w/d):
You bring strong software engineering fundamentals and a high bar for design, testing, code quality, and operability.
You are well versed in the current state of AI-native engineering, stay close to the frontier, and adapt how you work as it moves.
You have deep experience building production-grade data and ingestion systems under real reliability constraints.
You are hands-on with asynchronous processing systems (queues, workers) and understand how they behave under load.
You are comfortable owning service APIs and data contracts across teams, treating schema evolution as a core engineering discipline.
You have strong experience with AWS and cloud-native architectures, including IAM, encryption, and event-driven systems.
You think in systems and trade-offs and proactively design for observability, reliability, and performance.
You take ownership of ambiguous problems and drive them to resolution, aligning stakeholders and delivering outcomes.
You are comfortable working adjacent to AI/ML systems, understanding data, evaluation, and performance trade-offs without needing to be a model expert.
You communicate confidently in English and are comfortable operating in an international environment.
Nice to haves:
Experience with Go or similar backend-oriented languages.
Experience working with regulated or sensitive data environments (e.g. healthcare, fintech).
Familiarity with healthcare data standards and modeling (e.g. HL7, FHIR, coded terminologies, cross-system identity resolution).
Experience with modern BI and data tooling (e.g. Snowflake, dbt, Looker).