Big Data Administrator

Posted 17 February 2023
Salary €500 - €650 per day + Negotiable, reach out to find out more :)
Job type Contract
Contact Name: Enrique Cabanas Rodriguez

Job description

Big Data Administrator

Start Date: March 2023, or after a four-week notice period.

Contract Length: 6 Months + Extension

Location: Zurich, Switzerland. 2-3 days on-site, 2-3 days remote.

Pay: 500 - 650 CHF per day

I am working with a leading global consultancy to find an experienced Big Data Administrator to join one of the leaders in the banking and financial services industry.

Ideally, we are looking for someone who can start this project in March 2023; however, we can accommodate a four- to six-week notice period.

Core responsibilities:

  1. Infrastructure & technical product engineering, integration and management
  2. Client multi-tenancy management and technical support
  3. Lifecycle (EoL, upgrade) management and coordination

Working experience:

  • Experience in Big Data platform management or Big Data solution architecture.
  • Minimum of one year of experience handling CDH-to-CDP migrations.
  • Expertise in big data lake design and in multi-tenancy framework setup and implementation.
  • Experience with Cloudera Hadoop components such as HDFS, HBase, Impala, Hue, Spark, Sentry, Ranger, Hive, Kafka, YARN and Zookeeper
  • Experience in multi-tenancy architecture and implementation (from both a development and an administration perspective)
  • Expert knowledge of YARN/Impala resource-management capabilities
  • Ability to take end-to-end responsibility for the Hadoop lifecycle within the organization.
  • Ability to detect, analyze and remediate performance problems.
  • Experience integrating data from multiple data sources and onboarding applications onto the Hadoop platform.
  • Experience in at least one of Python, Java, or Unix shell scripting, and eagerness to pick up new programming languages on the go
  • Ability to automate mundane manual tasks through scripting and by building automation solutions.
  • The ability to function within a multidisciplinary, global team; a self-starter with a strong curiosity for extracting knowledge from data and the ability to elicit technical requirements from a non-technical audience
  • Data Concepts (ETL, near-/real-time streaming, data structures, metadata and workflow management)
  • Deep understanding of DevOps and Agile software development methodologies

Nice to have

  • Python, Scala, Java, Kafka, CSS, performance engineering and tuning skills (Impala, Spark, Hive), Azure Cloud