- Develop a data pipeline supporting our growing data needs
- Design and extend data marts, metadata, and data models
- Help design a data lakehouse using the latest technologies and methodologies
- Help deprecate legacy code and replace it with new architecture
- Improve data quality and the scalability of our data-reliant systems
- Collaborate with other teams and departments to create value from data
- 3+ years of proven experience as a data engineer or in a similar role
- Experience in modern data warehouse design (dimensions, facts, marts, slowly changing dimensions, etc.)
- Experience troubleshooting issues in data pipeline processes
- Experience managing data flows to and from a data lake
- Advanced SQL knowledge
- Experience with Python for building data pipelines
- Experience with products such as Apache Spark, Apache Airflow, Apache Flink, Apache Beam, or related services
- Experience with GCP, including BigQuery, or with equivalent services from other cloud providers
- An eye for modern data architecture
- Previous experience in creating a large-scale data pipeline
- Ability to communicate effectively under pressure in a highly collaborative and fast-paced environment
- Ability to liaise with all members and leaders of different nationalities and backgrounds
- Continuous learning and improvement mindset
Nice to haves
While not required, tell us if you have any of the following.
- Hands-on experience with a data lakehouse
- Hands-on experience with Kubernetes, Kubeflow, CI/CD tools, and infrastructure provisioning tools such as Pulumi or Terraform
- Hands-on experience with data governance tools
- Ability to build a data pipeline from the ground up
- Strong attention to data quality and the cost of ownership of data assets
Up to 7 million JPY annually.