Big Data Engineer (PySpark)
Posted on Oct 13, 2021 by Experis AG
Experis is the global leader in professional resourcing and project-based workforce solutions. Our suite of services ranges from interim and permanent recruitment to managed services and consulting, enabling businesses to achieve their goals. We accelerate organisational growth by attracting, assessing, and placing specialised professional talent.
We offer
- The exciting chance to become part of our highly motivated Analytics team in Zurich, focused on building and driving new, state-of-the-art technologies in various fields
- A challenging role as a Data Engineer, working closely with data owners, the business, and the big data project team to work out data sourcing requirements in a demanding, dynamic, and international environment
- Ownership of various software components, assisting the Data Delivery team in designing, building, and maintaining ETL/ELT data pipelines
- The opportunity to design and drive the implementation of data management software components that help profile, analyze, govern, and serve data for data scientists' needs
- Collaboration with data governance and operations teams to improve data quality, data discovery, and data profiling components according to their needs
- The chance to serve as a domain expert during user acceptance testing and discussions with IT partners
- Becoming part of an open-minded team with a strong team spirit in a versatile, dynamic, and flexible working environment
- A Bachelor's or Master's degree in a quantitative field (Statistics, Mathematics, Economics) or Information Technology (University or University of Applied Sciences)
- At least 3-5 years' experience in software development, ideally combined with a technology background in DWH, ETL, and Data Quality
- Experience as a Python developer (pandas, PySpark)
- Experience in Java and/or Scala software development (Spring Boot, Rest API, PKI)
- Profound big data knowledge (Hadoop, Apache Spark, Hive)
- Hands-on experience with streaming technologies (Apache Kafka, Apache Pulsar)
- A background in Data-as-a-Service, Data Lake/Mesh, and Kappa/Lambda architectures is a strong plus
- Experience with Docker, OpenShift/Kubernetes, and/or cloud technologies (AWS, Azure) is a plus
- Familiarity with Linux environments (Bash, CLI)
- Business fluency in English is a necessity; German is a plus
- Excellent organisation, coordination, and presentation skills that enable you to collaborate effectively and build your professional network at all levels, including senior leaders
- Superb communication skills and strong motivation to engage with source system owners
- High integrity, responsibility, and confidentiality, which are required for dealing with sensitive data
- A consistent track record as a persistent, self-starting colleague and independent thinker, willing to cooperate in a highly collaborative environment and contribute to the team's success
Interested in this opportunity? Please send us your CV today through the link in the advert. Should you have any questions, please contact Danny Besse.