Posted on Jan 31, 2021 by Nicoll Curtin Technology
Are you an experienced Big Data Engineer with extensive PySpark experience? Are you skilled in performing data analysis, data scouring and implementing data pipelines to enrich and complement data? Do you have excellent technical skills in Python and PySpark, and ideally Scala? Are you keen to work in a data engineering position that solves real business problems? If so, this is the perfect Big Data Engineer position for you.
You will be joining a small, centralised Data Engineering team working on a large-scale finance transformation programme, whose goal is to create high-throughput, low-latency data pipelines for their Azure Cloud PaaS. You will be developing in Python/PySpark and will ideally have some exposure to Scala. You will be responsible for working with business users, designing and implementing the technical solution for data transformation, and working with Databricks for structured streaming.
For more information on this Big Data Engineer position, or any other Data positions we have available, send your CV to (see below) or alternatively contact me directly.