adjoe is redefining the future of mobile ads. Powered by advanced AI, first-party data, and world-class engineers, we've perfected the offerwall experience, helping app publishers monetize and scale with solutions like Playtime – now the fastest-growing rewarded advertising channel globally – driving incremental engagement, retention, and revenue. Together, this ecosystem connects app developers to over 770 million users worldwide for scalable growth. We are a profitable, high-growth company backed by a $100 million investment from Bertelsmann. Operating from offices in Hamburg, Boston, Singapore, and Tokyo, adjoe is defining the next stages of the app and ad experience – right now. Join us.
Meet Your Team: Playtime Data Science
Playtime is a time- and event-based ad unit that continuously rewards users with in-app currency for the time they spend and the events they complete while playing mobile games. We connect advertisers to 700+ million Playtime users and serve 2bn requests per day at low latency. We ensure that all parties involved have a positive experience: advertisers get more users for their apps, monetizers earn revenue from the users on their platforms, and users play fun games while getting rewarded.
Our data science team powers the engine that distributes our ads. They solve tasks such as developing algorithms that surface the most relevant ads for users, predicting user interests and inclinations, and dynamically adjusting pricing based on these predictions. Because the user base is highly diverse, we use deep learning models, which we have shown to serve the best ads to users.
Within the Playtime team, you will be responsible for the services and models we use to provide automated solutions for both our advertisers and publishers.
We strongly believe in self-hosted open-source technologies backed by battle-proven AWS services:
Apache Kafka as our event streaming system
Apache Flink (using Java) for stateful stream processing and real-time feature engineering (a minimal sketch follows this list)
Go as our primary backend language
Kubernetes and Terraform to manage our infrastructure
S3 and Druid for long-term data storage
DynamoDB and Redis to store per-user data and power low-latency real-time aggregations
TensorFlow and PyTorch for training ML models, TensorFlow Serving and Triton for inference
Prometheus, Grafana, the ELK stack and OpenObserve for logging and monitoring
Airflow for orchestration
… and we’re always open to trying new technologies that suit each case best.
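To make the Flink piece of this stack concrete, here is a minimal sketch of a stateful DataStream job in Java: a per-user running total of rewarded playtime, kept in keyed state and emitted as a real-time feature. The event shape, field names, and in-memory source are hypothetical stand-ins; in a production setup the source would be a Kafka topic and the sink a DynamoDB/Redis feature store rather than print().

```java
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class PlaytimeFeatureJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Hypothetical events: (userId, rewardedSeconds). A Kafka source in production;
        // fromElements keeps the sketch self-contained and runnable.
        env.fromElements(
                Tuple2.of("user-1", 30L),
                Tuple2.of("user-2", 45L),
                Tuple2.of("user-1", 15L))
            .keyBy(event -> event.f0)              // partition the stream by user
            .flatMap(new RunningPlaytimeFeature()) // stateful per-user aggregation
            .print();                              // stand-in for a feature-store sink

        env.execute("per-user playtime feature");
    }

    /** Keeps a running total of rewarded seconds per user in Flink keyed state. */
    static class RunningPlaytimeFeature
            extends RichFlatMapFunction<Tuple2<String, Long>, Tuple2<String, Long>> {

        private transient ValueState<Long> totalSeconds;

        @Override
        public void open(Configuration parameters) {
            totalSeconds = getRuntimeContext().getState(
                new ValueStateDescriptor<>("totalSeconds", Long.class));
        }

        @Override
        public void flatMap(Tuple2<String, Long> event, Collector<Tuple2<String, Long>> out)
                throws Exception {
            Long current = totalSeconds.value();
            long updated = (current == null ? 0L : current) + event.f1;
            totalSeconds.update(updated);          // persist the new per-user total
            out.collect(Tuple2.of(event.f0, updated)); // emit the refreshed feature value
        }
    }
}
```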
Scale at a glance
Thousands of ad distribution requests every second
Our ML models handle these at a p99 latency of 100 ms
100k+ ML predictions per second
2 TB of data ingested in real time every day
100+ Airflow jobs and other data pipelines
What You Will Do
Scale the Feature Store: Leverage Apache Flink to translate Data Science requirements into high-performance, real-time streams. Enable Data Scientists to contribute new features, and implement them yourself when needed.
Ensure Data Quality, Observability & Integrity: Implement data validation, monitoring, and governance processes to maintain accuracy, consistency, and reliability across all datasets and features in the Feature Store (see the sketch after this list).
Optimize Pipeline Performance: Identify and eliminate bottlenecks in complex ETL jobs, transforming long-running processes into streamlined, rapid-iteration cycles.
Bridge Data Science & Backend Teams: Act as the key link between data science and backend engineering, ensuring seamless data integration and usage across the organization.
Explore New Data Sources: Partner with Data Scientists to build custom ingestion logic for unstructured or atypical data sources, handling the heavy preprocessing needed for experimental research.
Evolve the Data Architecture: Maintain and optimize our Data Lake. You'll help us decide on the future of our storage (e.g., moving toward a Data Lakehouse model) and implement best practices.
Work in an International Environment: Join an international, English-speaking team focused on scaling our adtech platform to new heights.
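As an illustration of the data-quality responsibility above, here is one common Flink pattern (a sketch under assumptions, not our exact implementation): records that fail validation are routed to a side output so they can feed a dead-letter topic or monitoring metrics instead of silently corrupting downstream features. The event format and the validation rules are hypothetical.

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class EventValidationJob {

    // Side output for events that fail validation; the {} subclass captures type info.
    private static final OutputTag<String> INVALID = new OutputTag<String>("invalid-events") {};

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Hypothetical raw events, e.g. "userId,rewardedSeconds"; a Kafka source in production.
        DataStream<String> raw = env.fromElements("user-1,30", "broken-record", "user-2,-5");

        SingleOutputStreamOperator<String> valid =
            raw.process(new ProcessFunction<String, String>() {
                @Override
                public void processElement(String event, Context ctx, Collector<String> out) {
                    String[] fields = event.split(",");
                    // Minimal schema check: two fields, non-negative numeric duration.
                    if (fields.length == 2 && fields[1].matches("\\d+")) {
                        out.collect(event);         // clean events continue downstream
                    } else {
                        ctx.output(INVALID, event); // quarantined for inspection/alerting
                    }
                }
            });

        valid.print();                              // downstream feature pipelines
        valid.getSideOutput(INVALID).print();       // would feed a dead-letter topic or metrics

        env.execute("event validation");
    }
}
```

Keeping rejects visible in a side output, rather than dropping them, is what makes validation observable: the invalid-event stream can be counted in Prometheus and alerted on in Grafana.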
Who You Are
You have 5+ years of software development experience working on a modern data engineering stack.
You have proven experience with Apache Flink for stateful stream processing and real-time feature computation.
You have worked extensively with real-time data streaming systems such as Kafka, Kinesis, or Pub/Sub.
You have experience with systems handling several terabytes of data per day and many thousands of events per second.
You know how to identify bottlenecks in data pipelines and have experience optimizing and scaling them.
You have strong Java knowledge; knowledge of Go or Python is a plus.
You have worked closely with Data Scientists on low-latency online ML systems.
You know how to move beyond "raw data" to design robust, multi-layered data architectures. You have hands-on experience using dbt to build these layers and can guide us on the best tools and formats to manage them at scale.
You know scheduling frameworks such as Airflow or Kubeflow.
You know the concepts of data quality and how to apply them in production.
You are familiar with relational and NoSQL databases.
You are open to relocating to Hamburg, Germany.
You have strong problem-solving skills and the ability to tackle complex technical challenges.
Plus: You have hands-on experience working with AWS, Terraform, and Kubernetes.
Plus: You are familiar with the Medallion Architecture and have experience building Semantic Layers for downstream data consumption.