Our team is made up of customer-oriented people who are passionate about delivering the best solutions for our customers.
Our ability to address complex issues rests on a solid computer science and engineering background, but our drive comes from understanding that technology is a tool, not an end in itself.
The Big Data Engineer provides expert guidance and delivers, both personally and through others, to:
Build applications that use large volumes of data and produce outputs that enable commercial actions generating incremental value
Deliver and implement core capabilities (frameworks, platform, development infrastructure, documentation, guidelines, and support) to speed up delivery by Local Markets and tenants in the Cloud Analytics Programme, ensuring the quality, performance, and alignment to the Group technology blueprint of component releases in the platform
Support local markets, tenants, and Group functions in obtaining business value from the data using the analytics cloud capabilities.
Should have the following technical / professional qualifications:
Experience with Google Cloud Platform, its big data and analytics services, and related open-source technologies; experience with other cloud providers (AWS, Azure) is desirable.
Experience with distributed processing platforms such as the Hadoop ecosystem (Spark, Hive/Impala, HBase, YARN).
Strong software development experience in the Scala, Java, and/or Python programming languages; other functional and scripting languages desirable.
Experience with container-based infrastructure and technologies, such as Docker and Kubernetes.
Experience with data streaming technologies such as Kafka or Kinesis.
If you're interested in this opportunity, please send your updated CV to