As we enter a stage of rapid commercialisation and customer account growth, we have a number of exciting new offerings to launch. We're looking for an exceptional engineer to help us continuously deliver features that provide value to our customers: someone who loves to engage with interesting software problems, has an interest in data-related development, and is passionate about building and shaping the future within a collaborative, community-based environment. We operate a highly agile development approach, giving wide scope to be involved with hands-on system design, test-driven development, deployment and operations.
Our data sources and problems are many and varied. We have some simple but high-throughput data sources (e.g. over 4,000,000,000 rows a day and growing rapidly), complex unstructured and semi-structured data, and complex application data from our various microservices.
Our aim is to enable our business and our customers to answer increasingly complex questions and gain new insights, based on our own data, additional external data, and the models and learnings we can extract from it.
Responsibilities
- Build data pipelines that deliver key data and insights to the business
- Provide operational support for existing data pipelines during working hours
- Integrate new data sources into the data platform, and work on initiatives to continuously improve stability, quality and performance
- Build and maintain testing and documentation frameworks for our data sources
- Work with the business to scope and deliver new data engineering projects and requirements
- Maintain and build on our existing data infrastructure and tools
- Support and mentor teams in the APAC region in internationalisation of our data and reporting infrastructure as we continue to grow globally
- Contribute to the software engineering and data engineering culture here at KrakenFlex
- Collaborate regularly with colleagues across the world with many different professional specialities, including software engineers and data scientists, to create innovative solutions that delight our customers and colleagues
- Work as part of a team of engineers, regularly seeking feedback and growing your skills as a technical professional
Requirements
- In-depth industry experience in software development and design
- Experience in Python
- Experience with data processing and pipeline technologies, e.g. dbt, Databricks, AWS Glue, Spark, Airflow, Redshift, SQL, Parquet (don't worry: we don't expect or want you to have them all, and experience with other technologies doing the same jobs is also interesting)
- A drive to get things done in a collaborative, agile development environment but also able to work independently and proactively raise issues
- An interest in working with data, both processing and analysing it
- A proven ability to perform well in a fast-paced environment
- Excellent communication skills, especially in an asynchronous context
Nice to haves
None of the following is specifically required, but tell us if you have any of it.
- Experience working with data lakes and data at scale
- Experience with AWS or similar cloud providers, and serverless technologies (e.g. AWS Lambda, Kinesis, DynamoDB, API Gateway)
- Experience developing, securing or operating cloud-scale applications or infrastructure, ideally with Terraform or CloudFormation