We are looking for a motivated and hands-on Senior Data Engineer to join our Data Analytics Team. You will be responsible for designing, implementing, and maintaining our data pipelines in collaboration with diverse stakeholders. Strong coding skills in Java and Python are required for this position.
Relocation & Remote Work
Although we believe living close to our studio helps foster team spirit and stronger bonds between team members, we understand that people have different needs and expectations for their place of residence. You have a choice of three location options, allowing you to work with Crytek from anywhere you wish:
1. Come to our modern headquarters in Frankfurt, where you will receive an attractive relocation package and have access to all of our benefits.
2. If you are already living in a European Union member state, we can offer you a permanent employment contract and allow you to work remotely as an employee from there.
3. If you are interested in full-time remote work in any other country outside of the European Union, we can offer you a freelance contract arrangement.
Implement and maintain diverse data infrastructure systems, as well as their related ETL pipelines
Work closely with the NetOps team, the Data Analysis team, and stakeholders from several other departments. Through this collaboration, you will be expected to define and apply data engineering best practices and keep our systems aligned with current technologies
Independently create and run automation scripts for operational data processing
Build and manage ETL data pipelines on the AWS cloud platform, in particular: S3 (Parquet data lake), Glue, SageMaker, Airflow, Athena, Redshift, and Redshift Spectrum
Maintain our current data infrastructure, including the ELK (Elasticsearch, Logstash, and Kibana) stack, a Kafka cluster, PostgreSQL and ClickHouse databases, and ETL data sinks developed in Java
Independently manage the relevant AWS management services: IAM, VPC, Security Groups/Network ACLs, and CloudWatch
A degree in Computer Science, Information Systems or Engineering
At least 5 years' experience in data engineering, preferably in the context of online games
Excellent understanding of the ETL pipeline lifecycle
Proficient in using AWS cloud platform
Experience using Elasticsearch, PostgreSQL, ClickHouse, and Redshift databases
Good understanding of stream processing with Kafka, as well as batch processing with PySpark
Good programming and scripting skills in Java 11 (Spring Boot, Project Reactor), as well as Python 3, Ruby, and shell scripting
Good understanding of data types and handling of different data models
Experience in using Python and PySpark for data engineering
Comfortable using Docker containers
Flexibility in using different technologies and platforms
Proactive and self-motivated, able to work without direct supervision
Good English communication and writing skills
What you can expect from us
A refreshing yet highly professional atmosphere in a diverse team.
Flexible work time.
A free public transportation ticket, which lets you use public transport 24/7.
Free German lessons
Company language is English. Any additional language is a plus, but not a requirement.
Extensive assistance with obtaining visas and work permits, and with communication with local authorities.
A company apartment for your first few months and help in finding a private apartment.
...and much more!