Data is the core of our business, providing insight into the effectiveness of our decisions and products. To support this, we operate an extensive data infrastructure that processes billions of events across the company every day. But data infrastructure is useless without talented data engineers, so we are looking for software engineers who are enthusiastic about data and can bring software engineering best practices to building high-quality, business-critical data pipelines and systems. We are effectively building from the ground up and plan to leverage the latest open-source tools and technologies. In this role, you will use cutting-edge open-source tools and technologies, and develop our own where needed, to improve the scalability, durability, and resilience of the solutions we provide. You love building and working with complex data pipelines and systems, improving them through innovative solutions, and contributing to high-performing engineering teams.
BS or MS degree in a related technical field or equivalent experience (or surprise us!).
Experience with software engineering and best practices across the development lifecycle.
Proficiency in Java, Scala, Python, or Go.
Knowledge of data storage, retrieval, and management principles.
Experience working with relational and non-relational databases, using SQL and NoSQL (a plus).
Experience working with homogeneous and heterogeneous data (a plus).
Experience in designing and building scalable real-time and batch processing data pipelines and systems.
Broad knowledge of data frameworks such as Hadoop, Spark, Kafka, Druid, Presto, and Flink (a plus).
Passion for writing clean code and adherence to coding best practices.
Ability to communicate effectively and to work in a team independently, with little supervision.
Flexible working hours
Fair and on-time compensation
Talented colleagues and an engaging work environment