- Design, build, and maintain distributed batch and real-time data pipelines and data models.
- Facilitate real-life, actionable use cases that leverage our data, with a user- and product-oriented mindset.
- Be curious and eager to work across a variety of engineering specialties (e.g., Data Science and Machine Learning, to name a few).
- Support teams without dedicated data engineers in building decentralized data solutions and product integrations, for example around DynamoDB.
- Enforce privacy and security standards by design.
- Conceptualize, design, and implement improvements to ETL processes and data by communicating independently with data-savvy stakeholders.
- 3+ years of experience building complex data pipelines and working with both technical and business stakeholders.
- Experience in at least one primary programming language (e.g., Java, Scala, Python) and SQL (any variant).
- Experience with technologies such as BigQuery, Spark, Amazon Redshift, Kafka, or Kinesis.
- Experience creating and maintaining ETL processes.
- Experience designing, building, and operating a data lake or data warehouse.
- Experience with DBMS and SQL tuning.
- Strong fundamentals in big data and machine learning.
- Experience with RESTful APIs, pub/sub systems, or database clients.
- Experience with analytics and defining metrics.
- Experience with measuring data quality.
- Experience productionizing machine learning workflows (MLOps).
- Experience with one or more machine learning frameworks, including but not limited to scikit-learn, TensorFlow, PyTorch, and H2O.
- Japanese and English language ability is a plus (we have a professional translator, but it is nice to have language skills).
- Experience with AWS services.
- Experience with microservices.
- Knowledge of data security and privacy.
8 to 12 million JPY annually.