Posted on Nov 26, 2021 by CV-Library
• Implementing data lake and data warehouse solutions on AWS using S3, Redshift, RDS, DMS, and other appropriate services (or the Azure/GCP equivalents).
• Developing logical and physical data models for NoSQL data stores such as DynamoDB, MongoDB, Cassandra, and Cosmos DB (or the Azure/GCP equivalents).
• Developing and orchestrating highly performant, scalable, and reusable data ingestion and transformation pipelines across AWS services such as Lambda, EMR, and Glue (or the Azure/GCP equivalents).
• Developing event-based and streaming data ingestion and processing using AWS services such as Kinesis and Amazon MSK, or Apache Kafka (or the Azure/GCP equivalents).
• Developing data governance, data cataloguing, and data lineage solutions using tools such as AWS Glue Data Catalog, Informatica Data Catalog, Talend Data Catalog, and IBM Information Governance Catalog (or the Azure/GCP equivalents).
• Developing Apache Spark applications for both batch and streaming data processing.
• Developing RESTful and GraphQL APIs for inter-service and external data exchange.
• Have worked with key cloud data platforms such as Databricks, Snowflake, or similar.
• Have worked in agile delivery teams using DevOps practices to continuously deliver iterative deployments, with experience using Git, Jira, Confluence, or similar tools.
• Have experience in data migration activities, including migration strategy and approach, source and target system discovery, analysis, mapping, development, and reconciliation.
Years of experience:
2 to 4 years
Specific Technology Knowledge:
SQL, plus Python, Scala, or Java