Big Data Engineer
Posted on Mar 12, 2020 by Request Technology
*We are unable to sponsor as this is a permanent full time role*
A Fortune 500 company is searching for a Senior Big Data Engineer. This role will be responsible for the design, prototyping, and delivery of software solutions within the big data ecosystem. The ideal candidate has 5-7 years of experience in a data integration, ETL, and/or business intelligence/analytics-related function, along with expert-level coding skills in SQL and Python. Experience with big data and the Hadoop ecosystem (HDFS, Spark, Sqoop, Hive, Impala) is also required.
- Designing, prototyping, and delivering software solutions within the big data ecosystem
- Leading projects and/or serving as analytics SME to provide new or enhanced data to the business
- Improving data governance and quality, increasing the reliability of our data
- Influencing the creation of a single, trusted source for key Claims business data that can be shared across the Enterprise
- Designing and building new Big Data systems for turning data into actionable insights
- Training and mentoring junior team members on Big Data/Hadoop tools and technologies
- Identifying opportunities for improvement and presenting recommendations to management
- Seeking out and evaluating emerging big data technologies and open-source packages
- Participating in strategic planning discussions with technical and non-technical partners
- Using, teaching, and supporting a wide variety of Big Data and Analytics tools (e.g., Python, Hadoop, Hive, Scala, Impala) to achieve results
- Using, teaching, and supporting a wide variety of programming languages for Big Data and Analytics work (e.g., Java, Python, SQL, R)
- Undergraduate degree in Computer Science, Mathematics, Engineering (or related field) or equivalent experience preferred
- 5-7 years of experience in a data integration, ETL, and/or business intelligence/analytics-related function preferred
- Experience in developing, managing, and manipulating large, complex datasets
- Expert-level coding skills in SQL, Python, and/or other scripting languages (e.g., UNIX shell) required; Scala a plus
- Some understanding of and exposure to streaming toolsets such as Kafka, Flink, and Spark Streaming a plus
- Experience with source control and CI/CD tooling (e.g., Git, GitHub, Jenkins, Artifactory) required
- 4-5+ years of experience with big data and the Hadoop ecosystem (HDFS, Spark, Sqoop, Hive, Impala, Parquet) required
- Experience with Agile development methodologies and tools, iterating quickly on product changes, developing user stories, and working through the backlog (Continuous Integration and JIRA a plus)