This Job Vacancy has Expired!

Hadoop Data Engineer

Posted on Jan 6, 2023 by IDC Technologies Solutions Ltd

London, United Kingdom
IT
Immediate Start
Daily Salary
Contract/Project

Data Content Services Lab

We are the owners of the Enterprise Data Hub - a Hadoop implementation based on Hive and Spark, currently favouring Scala. XXX has also signed a strategic partnership with Google to utilise its Google Cloud Platform (GCP) services to create a new strategic platform for the Group, preparing our Bank of the Future service for customers. We anticipate significant opportunities to review our systems, data-processing methods and approaches, and you would be a key part of that work.

Software Engineer (Big Data)

Role Responsibilities

  • Build solutions that ingest data from source systems into our big data platform, where the data is transformed, intelligently curated and made available for consumption by downstream operational and analytical processes
  • Create high-quality code that can efficiently process large volumes of data at scale
  • Put efficiency and innovation at the heart of the design process to create design blueprints (patterns) that can be re-used by other teams delivering similar types of work
  • Use modern engineering techniques such as DevOps, automation and Agile to deliver big data applications efficiently
  • Produce code that is in line with team, industry and group best practice, using a wide array of engineering tools such as GHE (GitHub Enterprise), Jenkins, UrbanCode, Cucumber, Xray, etc.
  • Work as part of an Agile team, taking part in relevant ceremonies and always helping to drive a culture of continuous improvement
  • Work across the full software delivery life cycle, from requirements gathering and definition through to design, estimation, development, testing and deployment, ensuring solutions are of high quality and non-functional requirements are fully considered
  • Consider platform resource requirements throughout the development life cycle with a view to minimising resource consumption
  • Once cloud is proven within the bank, help to successfully transition on-prem applications and working practices to GCP
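As a rough illustration of the ingest-transform-curate flow described in the responsibilities above: the platform itself uses Spark and Scala, but the shape of the work can be sketched in a few lines of plain Python (all field names and the sample feed below are hypothetical, not taken from the role):

```python
import csv
import io

# Hypothetical raw feed from a source system (field names are illustrative only).
RAW_FEED = """account_id,balance,currency
A001,1250.50,GBP
A002,-40.00,gbp
A003,,GBP
"""

def ingest(raw: str) -> list:
    """Ingest stage: parse the raw source feed into records."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(records: list) -> list:
    """Transform stage: drop incomplete rows and normalise currency codes."""
    curated = []
    for rec in records:
        if not rec["balance"]:
            continue  # incomplete source data is filtered out
        curated.append({
            "account_id": rec["account_id"],
            "balance": float(rec["balance"]),
            "currency": rec["currency"].upper(),
        })
    return curated

def curate(records: list) -> dict:
    """Curate stage: aggregate for downstream consumers, e.g. totals per currency."""
    totals = {}
    for rec in records:
        totals[rec["currency"]] = totals.get(rec["currency"], 0.0) + rec["balance"]
    return totals

print(curate(transform(ingest(RAW_FEED))))  # {'GBP': 1210.5}
```

In a Spark/Scala implementation, the same three stages would typically map onto reading from a source into a DataFrame, applying transformations, and writing curated tables (e.g. Hive tables) for downstream operational and analytical consumers.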

About You

  • Prior experience doing technical development on big data systems (large scale Hadoop, Spark, Beam, Flume or similar data processing paradigms)
  • Associated data transformation and ETL experience is required.
  • Must have Spark and Scala experience.
  • Experience with Hive, Pig and Sqoop, and knowledge of data-transfer technologies such as Kafka, Attunity and CDC, are a bonus
  • Experience with database technologies such as Teradata, Oracle, Hadoop and DB2, and familiarity with NoSQL databases such as Cassandra and HBase, is needed
  • GCP or Cloud expertise is a plus
  • Passionate about data and technology
  • Excellent people and communication skills, able to communicate with technical and non-technical colleagues alike
  • Good team player with a strong team ethos
  • Able to change, evolve and learn new tools and techniques, and to help and encourage others to do likewise

Reference: 1846877743
