Cloud Big Data Solutions Architect

Gazelle Global Consulting

Posted on Apr 4, 2021 by Gazelle Global Consulting

Not Specified, Sweden
IT
Immediate Start
€120k Annual
Full-Time


My client is looking for a Cloud Big Data Solutions Architect, and I wanted to know whether this role would be of interest to you.


Position: Cloud Big Data Solutions Architect


Duration: Permanent


Location: Gothenburg, Sweden


Salary: €120k per annum


Technical Skills:



  • Experience in designing, architecting and implementing large-scale data processing, data storage and data distribution systems.

  • Ability to work with multi-technology, cross-functional teams and customer stakeholders to guide and manage the full life cycle of a Spark solution.

  • Extensive experience in data modelling and database design involving data warehousing and business intelligence systems and tools.

  • Hands-on experience in at least 3-4 data technologies such as Apache Spark, NoSQL databases, Python, Scala, Hadoop, Hive, Sqoop, HiveQL, Hadoop MapReduce, MySQL, Hadoop commands, shell scripting, etc.

  • Hands-on experience in SQL and a programming language: Python, Java or Scala.

  • Demonstrated ability to design, solution and estimate data solutions.

  • Demonstrated ability to leverage data for analytics and insight development.

  • Must have hands-on experience with relational, NoSQL, graph and document databases: PostgreSQL, MySQL, Cassandra, Neo4j, MongoDB, etc.

  • Data ingestion skills and experience with streaming data (Kafka, Spark), plus IoT data management experience.

  • Ideate, design and develop a data lake for insurance packages from legacy systems.

  • Hands-on experience with EDW, ETL and BI projects, big data architecture, and software development using object-oriented languages.

  • Must have hands-on knowledge of big data analytics and cloud technologies, databases and data warehouses, SQL, and BI tools such as Qlik, MSTR, Tableau or Power BI.

  • Developing and optimizing Hive scripts.

  • Developing and optimizing PySpark modules using RDDs, the DataFrame API and Spark SQL.

  • Developing and scheduling Oozie workflows with all the supported actions.

  • Using Hadoop commands and shell scripting to develop ad hoc jobs for data import/export.

  • Must have experience with at least 2 end-to-end implementations of an Azure/AWS/GCP cloud data warehouse, plus data modelling experience.

  • Ideate and lead solutions for complex data problems; hands-on experience with data ingestion and streaming data.


Soft skills



  • Excellent problem-solving, hands-on engineering and communication skills

  • Build key partnerships with leading tool, platform and cloud providers.

  • Devise new and innovative data assets and drive their adoption in the market.

  • Strong communication skills, good analytical and visualization skills, and programming and automation abilities


Nice to have



  • Big Data certification

  • TOGAF or similar certifications


If you are interested in this position, please send me your CV ASAP for immediate consideration.


If you are not interested in the role but know someone who may be interested or a good fit for it, please send me their details, or pass the details of this role on to them so they can get in touch with me about it.




Reference: 1152719484
