Data Pipelines Engineer

Posted 11 May 2023
Salary Negotiable
Location Dublin
Job type Contract
Discipline Data
Reference 86161
Contact Name Daniel Mapstone

Job description

  • Location: 100% remote - candidates must be located in the UK/Europe
  • Contract start date is ASAP
  • Contract end date is 31/12/2023 and is likely to be extended
  • Rate range is €450 - €550 per day, paid in euros
  • The contract falls outside IR35 for UK-based contractors

Key Responsibilities
  • Design and implement data pipelines that provide access to, and transformation of, large datasets across the organization
  • Write complex yet efficient code to transform curated data into business-question-oriented datasets and data visualizations
  • Work with big data and distributed systems using technologies such as Spark, AWS EMR, and Python (see the sketch after this list)
  • Actively contribute to the adoption of strong software architecture, development best practices, and new technologies. We are always improving the process of building software; we need you to help contribute
  • Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using open-source and GCP big data technologies
  • Explore and learn the latest GCP technologies to provide new capabilities and increase efficiency
  • Collaborate with business users, infrastructure engineers, and data scientists to identify and help adopt best practices in gathering and transforming big data
  • Identify, design, and develop new tools and processes to improve data storage and compute for the Data Engineering and Data Consumption teams and their users
  • Interface directly with stakeholders, gathering requirements and owning automated end-to-end data engineering solutions
  • Provide technical guidance and mentoring to other engineers on data engineering best practices
  • Work with the team to discuss technical design and development needs
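
For illustration, a minimal sketch (in PySpark, which the role names) of the kind of curated-data transformation described above; the dataset paths and column names are hypothetical:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: turn curated order data into a
# business-question-oriented dataset (daily revenue per region).
spark = SparkSession.builder.appName("daily-revenue").getOrCreate()

orders = spark.read.parquet("s3://curated/orders/")  # hypothetical input path

daily_revenue = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))  # hypothetical timestamp column
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("revenue"))
)

# Partition the output by date so downstream consumers can prune their reads
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://analytics/daily_revenue/"  # hypothetical output location
)

On EMR, a job like this would typically be packaged as a script and submitted via spark-submit as an EMR step.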

Preferred Qualifications
  • Bachelor's degree in computer science, mathematics, or a related technical field
  • 5+ years of relevant employment experience in data engineering or a related field
  • At least 3 years of Spark development experience
  • At least 1 year of experience with Airflow, NiFi, or Azkaban (see the orchestration sketch after this list)
  • Clear understanding of testing methodologies and AWS/GCP cloud best practices
  • Mastery of big data technologies (e.g. Hadoop, Hive, Spark, EMR)
  • Excellence in technical communication and experience working directly with stakeholders
  • Experience maintaining data pipelines built on big data technologies such as Hadoop, Hive, Spark, and EMR
  • Demonstrated ability to coordinate projects across functional teams, including engineering and product management
  • Knowledge of software engineering best practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations
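
As a point of reference for the orchestration tools listed above, a minimal Airflow 2.x sketch; the DAG id, schedule, and submitted script are hypothetical:

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical example: a daily DAG that submits the Spark job sketched earlier.
with DAG(
    dag_id="daily_revenue_pipeline",  # hypothetical DAG id
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_spark_job = BashOperator(
        task_id="run_spark_job",
        bash_command="spark-submit daily_revenue.py",  # hypothetical script name
    )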