Data Engineer - Permanent - London (Remote)
Posted on Nov 17, 2023 by Databuzz Ltd
Data Engineer - London - Remote
DataBuzz are recruiting a Data Engineer to join a leading, prestigious organisation. You will build out best-in-class analytics solutions and platforms and work closely with a team of data scientists, contributing to the development and deployment of new and existing products and writing high-quality code that puts solutions into production.
Position - Data Engineer - Permanent - Remote
Location - London/Remote
Salary - £65,000 per annum
Responsibilities
- Design, build, and maintain data pipelines using PySpark and SQL
- Develop and maintain ETL processes to move data from various sources to our data warehouse on AWS, Azure, or GCP (a minimal sketch of such a pipeline follows this list).
- Collaborate with data scientists and business analysts to understand their data needs and develop solutions that meet their requirements.
- Develop and maintain data models and data dictionaries for our data warehouse.
- Develop and maintain documentation for our data pipelines and data warehouse.
- Continuously improve the performance and scalability of our data solutions.
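For a sense of the day-to-day work, here is a minimal sketch of the kind of PySpark ETL pipeline described above; the paths, column names, and session configuration are hypothetical placeholders for illustration, not details of the actual role or platform:

    from pyspark.sql import SparkSession, functions as F

    # Hypothetical job name; real pipelines would carry cluster config.
    spark = SparkSession.builder.appName("orders_etl").getOrCreate()

    # Extract: read raw source data (placeholder path).
    orders = spark.read.parquet("s3://raw-bucket/orders/")

    # Transform: filter to completed orders and build a daily revenue aggregate.
    daily_revenue = (
        orders
        .filter(F.col("status") == "completed")
        .withColumn("order_date", F.to_date("created_at"))
        .groupBy("order_date")
        .agg(F.sum("amount").alias("revenue"))
    )

    # Load: write the result to the warehouse landing zone (placeholder path).
    daily_revenue.write.mode("overwrite").parquet("s3://warehouse-bucket/daily_revenue/")

A production version of this would typically add schema validation, incremental loads, and error handling.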
Requirements
- Minimum of 5 years of experience as a data engineer, with a focus on AWS, Azure, or GCP, plus PySpark and SQL.
- Strong expertise in writing robust, maintainable, readable, and clean code in Python.
- Experience with ETL processes and data warehousing.
Reference: 2680418091
