Big Data Engineer
Posted on Sep 13, 2020 by Request Technology
A prestigious Fortune 500 company is searching for a Big Data Engineer. This role requires deep experience with continuous integration tools (eg Jenkins, Hudson, etc.) and infrastructure automation (VMware, Puppet, Chef, Vagrant, Docker, etc.). This engineer will support the team in writing deployment scripts and will place strong emphasis on automated deployment, infrastructure automation solutions, and the continuous delivery process. The client wants someone who has 8-10 years of experience.
Key Job Functions
- Support the team in writing deployment scripts, with strong emphasis on automated deployment, infrastructure automation solutions, and the continuous delivery process.
- Work with product owners and other development team members to determine new features and user stories needed in large/complex development projects
- Create or update documentation in support of development efforts. Documents may include detailed specifications, implementation guides, architecture diagrams or design documents.
- Participate in code reviews with peers and managers to ensure that each increment adheres to the original vision as described in the user story, and to all standard resource libraries and architecture patterns as appropriate.
- Respond to trouble/support calls for applications in production in order to make quick repairs and keep applications running.
- Serve as a technical lead for an Agile team and actively participate in all Agile ceremonies.
- Participate in all team ceremonies including planning, grooming, product demonstration and team retrospectives
- Mentor or provide technical guidance to less experienced staff; may use high-end development tools to assist or facilitate the development process.
- Leverage Company DevOps tool stack to build, inspect, deploy, test and promote new or updated features.
- May serve as technical lead, architect, project lead or principal developer in the course of a large or complex project.
- Expert proficiency in unit testing as well as coding in 1-2 languages (eg Java, etc.).
- Expert proficiency in Object Oriented Design (OOD) and analysis.
- Expert proficiency in application of analysis/design engineering functions.
- Expert proficiency in application of non-functional software qualities such as resiliency, maintainability, etc.
- Expert proficiency in advanced behavior-driven testing techniques.
- Provide expertise for teams in all matters related to deployment, building and release process.
Education Level Preferred
- Master's or Other Advanced Degree
- 8-10 years of related experience; Highly experienced with Agile practices/methodologies (eg Scrum, TDD, BDD, etc.).
Specialized Knowledge and Skills
- Highly experienced in the use of continuous integration tools (eg Jenkins, Hudson, etc.) and infrastructure automation (VMware, Puppet, Chef, Vagrant, Docker, etc.).
- Develops and maintains scalable data pipelines and builds out new API integrations to support continuing increases in data volume and complexity.
- Strong analytical skills related to working with unstructured datasets
- A successful history of manipulating, processing and extracting value from large disconnected datasets
- Build the infrastructure required to process data from a variety of data sources using SQL.
- Create data tools for analytics and data scientists to optimize data
- Experience working with either a Map Reduce or an MPP system on any size/scale
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- 5+ years of experience with object-oriented/object function Scripting languages: Python, Java, C++, Scala, etc.
- AWS cloud services: EC2, EMR, RDS, Redshift
- SQL experience required; NoSQL experience is a plus
$145k - $165k Annual