At CADDi, we have a massive collection of technical drawings, corresponding quotations, and data describing the required manufacturing processes. By transforming this data into a form that is easier to use and rapidly iterating through cycles of hypothesis testing, we aim to solve the problems faced by the manufacturing industry.
In this role, you’ll:
- Design and implement data infrastructure for:
- Manufacturing cost estimation data
- Transaction performance data
- Manufacturing control process data
- Usage data for various products
- Analyze base data to develop hypotheses for business and operational improvements
- Build data processing pipelines in collaboration with algorithm designers
Requirements
- An understanding of our mission to unleash the potential of manufacturing
- 3+ years of hands-on experience with server-side technologies
- 3+ years of experience with data pipelines or data processing frameworks
- Experience using SQL for analysis
- General knowledge of Linux-based infrastructure and public cloud architecture
- Familiarity with container technologies such as Docker
- Experience operating container-related technologies such as Kubernetes, Tekton, Argo, etc.
- Experience with data pipeline technologies such as Airflow, Apache Beam, Spark, etc.
- Hands-on experience with declarative infrastructure-as-code technologies (Terraform, Puppet, Ansible, etc.)
- Experience using version control systems such as Git
Nice to haves
These aren’t required, but be sure to mention them in your application if you have them.
- General knowledge of computer networking
- Experience with cloud providers such as AWS, GCP, or Azure
- Experience monitoring data pipelines with modern solutions such as Datadog, New Relic, Grafana, etc.
- Experience with production use cases of Elasticsearch
- Experience in application development or analysis work with Rust, Python, MATLAB, or R
5 to 12 million JPY annually.