We are growing fast and looking for an experienced Data Engineer with a strong technical background in data infrastructure, data architecture design, and robust data pipelines to take part in building our data platform alongside a team of strong tech leads. As a key member of our expanding data engineering team within the technology group, you will help translate our vision into a solid SaaS product that performs, scales, and is easy to use.
Responsibilities:
- Participate in the exploration, design, and execution of Nemodata's evolving data platform.
- Deploy and maintain critical data pipelines in production.
- Design infrastructural data services, coordinating in an Agile process with other R&D team members, Data Scientists, and product managers to build scalable data solutions.
- Take end-to-end responsibility for developing the data crunching and manipulation processes within the product.
- Design and implement ETL processes.
- Create data tools for various teams, assisting them in building, testing, and optimizing the delivery of clients' data into our system.
- Explore and implement new data technologies to support the data infrastructure.
- Work closely with the core data science personnel to support the implementation and maintenance of ML features and tools.
- Take part in the integration of new data sources.
Requirements:
- Experience building production-grade big data environments, tools, and data models end to end.
- Strong schema design and data modeling skills.
- Deep knowledge of big data databases and of processing large volumes of time-series data.
- In-depth familiarity with the AWS ecosystem, especially relevant tools such as S3, Glue, Athena, Step Functions, Lambda, Redshift, EMR, Kinesis, and the various products for data processing and persistence.
- Understanding of data lake / data warehouse concepts.
- Experience with programming languages (preferably Python).
- Ability to analyze a system's performance and identify its bottlenecks.