Big Data Engineer - AWS
Posted on Feb 16, 2021 by Request Technology
A prestigious company is searching for a Big Data Engineer - AWS. The role requires 2+ years of experience with big data Hadoop clusters (HDFS, YARN, Hive, MapReduce frameworks) and Spark, plus experience building and deploying applications in AWS (S3, Hive, Glue, EMR, AWS Batch, DynamoDB, Redshift, CloudWatch, RDS, Lambda, SNS, SQS, etc.). They are also looking for 4+ years of experience with Java/Python, SQL, SparkSQL, and PySpark.
- Work with product owners and other development team members to determine new features and user stories needed in new/revised applications or large/complex development projects.
- Create or update documentation in support of development efforts. Documents may include detailed specifications, implementation guides, architecture diagrams, or design documents.
- Participate in code reviews with peers and managers to ensure that each increment adheres to the original vision as described in the user story, and to all standard resource libraries and architecture patterns as appropriate.
- Respond to trouble/support calls for applications in production in order to make quick repairs and keep the application running.
- Serve as a technical lead for an Agile team and actively participate in all Agile ceremonies.
- Participate in all team ceremonies including planning, grooming, product demonstration and team retrospectives.
- Mentor less experienced technical staff; may use high-end development tools to assist or facilitate the development process.
- Leverage Company DevOps tool stack to build, inspect, deploy, test and promote new or updated features.
- Set up and configure a continuous integration environment.
- Advanced proficiency in unit testing as well as coding in 1-2 languages (eg Java).
- Advanced proficiency in Object Oriented Design (OOD) and analysis.
- Advanced proficiency in application of analysis/design engineering functions.
- Advanced proficiency in application of non-functional software qualities such as resiliency, maintainability, etc.
- Advanced proficiency in behavior-driven testing techniques.
- Bachelor's degree or equivalent
- 4-6 years of related experience; experienced with Agile practices/methodologies (eg Scrum, TDD, BDD, etc).
- 2+ years with big data Hadoop clusters (HDFS, YARN, Hive, MapReduce frameworks) and Spark
- 2+ years of recent experience building and deploying applications in AWS (S3, Hive, Glue, EMR, AWS Batch, DynamoDB, Redshift, CloudWatch, RDS, Lambda, SNS, SQS, etc.)
- 4+ years of Java/Python, SQL, SparkSQL, and PySpark
- Excellent problem-solving skills, strong verbal & written communication skills
- Ability to work independently as well as part of a team
- Familiarity with Hadoop information architecture, data modelling, machine learning, and Talend
- Knowledge of Spark streaming technologies and graph databases is a nice-to-have
- Knowledge of financial products, risk management, and portfolio management is preferred but not mandatory. Training will be provided to help you get up to speed.