Data Engineer (PySpark, Databricks & Azure Data Factory)

Red - The Global SAP Solutions Provider

Posted on Jun 18, 2025 by Red - The Global SAP Solutions Provider
Not Specified, Italy
IT
Immediate Start
Annual Salary
Full-Time

Role: Data Engineer (PySpark, Databricks & Azure Data Factory)
Duration: 6 months + Possible extension

Role Type: Freelance

Language: English, Italian (nice to have)

Location: 100% Remote

Capacity: 5 days a week
Seniority: minimum 5 years of experience

Job Description:

Main Competencies:
Python / Spark / Databricks.
Azure Data Factory, CI/CD and YAML complete the profile.

Key responsibilities:

  • Design and implement data pipelines in collaboration with our data users and other engineering teams (see the sketch after this list).
  • Ensure reliability, data quality and optimal performance of our data assets.
  • Translate complex business and analytics requirements into high-quality data assets.
  • Deliver high-quality code, focusing on simplicity and performance.
  • Where applicable, re-design and re-implement existing data pipelines to leverage the newest technologies and best practices.
  • Work with solution engineers, data scientists and product owners to deliver end-to-end products.
  • Support our partners with proof-of-concept initiatives and data-related technical questions.
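
For illustration only, and not part of the client's specification: a minimal sketch of the kind of pipeline this role covers, in PySpark, reading a raw table, applying a simple data-quality rule, and publishing a curated asset. The table and column names (raw_orders, orders_clean, order_id, amount) are hypothetical.

    # Minimal PySpark pipeline sketch: ingest, apply a data-quality rule, publish.
    # All table and column names below are illustrative assumptions.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

    # Ingest a raw dataset (hypothetical source table).
    raw = spark.read.table("raw_orders")

    # Data-quality rule: drop rows with a missing key or a non-positive amount.
    clean = raw.filter(F.col("order_id").isNotNull() & (F.col("amount") > 0))

    # Publish the curated asset for downstream consumers (hypothetical target table).
    clean.write.mode("overwrite").saveAsTable("orders_clean")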

Skills:

  • Excellent software/data engineering skills and proficiency in at least one programming language (Python).
  • Good knowledge of distributed computing frameworks; PySpark is a must-have, with others being an advantage.
  • Familiarity with system design, data structures, algorithms, storage systems, and cloud infrastructure.
  • Understanding of data modelling and data architecture concepts.
  • Experience with CI/CD processes as well as data testing and monitoring.
  • Knowledge of the Delta protocol and Lakehouse architectures (see the Delta sketch after this list).
  • Experience with Databricks and Azure services for data, such as Azure Data Factory, Synapse Analytics or Fabric.
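
As context for the Delta and Lakehouse requirement, a minimal sketch of working with a Delta table, assuming a Databricks runtime (or a Spark session with delta-spark configured); the path /tmp/orders_delta and the sample rows are illustrative.

    # Minimal Delta Lake sketch: append rows to a Delta table and read them back.
    # Assumes a Databricks runtime or a Spark session with Delta configured.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("delta-demo").getOrCreate()

    # Hypothetical sample data.
    df = spark.createDataFrame([(1, "created"), (2, "shipped")], ["order_id", "status"])

    # Delta adds ACID transactions and versioning on top of Parquet files.
    df.write.format("delta").mode("append").save("/tmp/orders_delta")

    # Read the current snapshot back.
    spark.read.format("delta").load("/tmp/orders_delta").show()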

Additional skills:

  • Ability to work effectively in teams with both technical and non-technical individuals.
  • Ability to communicate complex technical concepts and results in a clear and detailed manner to non-technical audiences.
  • Excellent verbal and written communication skills.


Reference: 2967006843

https://jobs.careeraddict.com/post/104566775

This Job Vacancy has Expired!
