Hadoop Data Engineer - AWS
Posted on Aug 31, 2020 by Stelfox Ltd
Hadoop Data Engineer - AWS - Remote Contract - Roles x2
Excellent roles working with a cross-functional multi-national company.
4-5 years of building and operationalizing large-scale enterprise data solutions, data lakes, and applications using one or more AWS data and analytics services in combination with third-party tools:
- EC2, EMR, S3, Kinesis, DynamoDB, Redshift, RDS, Lambda, Glue, Spark, Snowflake, etc.
- Experience with the enterprise scheduling tool Control-M
- Experience with the data integration tool QDC (Qlik Data Catalyst, a.k.a. Podium Data)
- Hadoop stack (Pig, Hive, Spark, Sqoop, Tez, Ranger, etc.)
- Minimum 3 years of designing and building production data pipelines from ingestion to consumption within a hybrid big data architecture, using Podium, Java, Python, Scala, C++, etc.
- Minimum 3 years of architecting and implementing next-generation data and analytics platforms on the AWS cloud, serving analytics and BI application integrations
- Hands-on AWS experience, with a minimum of 3 years of solution design, build, and implementation at production scale
- 4-5 years of demonstrated knowledge and application of ETL and data warehousing best practices
- 4-5 years of experience with SQL against relational databases, preferably SQL Server or Oracle (10g and above) on Linux/Unix
- 1-2 years' exposure to Logi, SAS, Tableau, R, or other dashboarding/reporting tools is a plus
* Live Roles - Apply for immediate consideration*
If you are interested in these positions, please forward your CV for immediate consideration.