Key Responsibilities:
- Build and maintain scalable data pipelines across cloud and on-prem systems.
- Integrate data from diverse sources in a repeatable, stable way.
- Design and test distributed processing workflows for large datasets.
- Collaborate with data scientists to support model development and deployment.
- Evaluate performance and cost implications of different solution designs.
- Produce clear dashboards and visualisations for end users.

Requirements:
- Strong background in large-scale data engineering and distributed systems.
- Several years' experience in data or software engineering with a focus on sustainable, high-quality code.
- Proficiency in Python and SQL, plus experience with or interest in technologies such as Spark, Hadoop, Hive, Airflow, relational and NoSQL databases, Kubernetes, DevOps tooling, Java, or .NET.
- Good understanding of AWS, Azure, or GCP.
- Degree in Computer Science, Mathematics, or a related field.
- Fluency in German and English.

What We Offer:
- Based in Zurich, with 2-3 days of home office per week.
- An additional 2 weeks of fully remote work per year.
- Compensation in the range of CHF 130,000-140,000.