Senior Software Engineer - Query Engines & Storage

Treasure Data · Minato-ku, Tokyo · April 3, 2026
  • 💴 No salary range given
  • 🏡 Partially remote
  • 🗾 Japan residents only
  • 💬 Business English; no Japanese required
  • 🧪 Senior level; 5+ years of experience required

About Treasure Data


Treasure Data is the only enterprise Customer Data Platform that harmonizes an organization’s data, insights, and engagement technology stacks to drive relevant, real-time customer experiences throughout the entire customer journey.

Key benefits

  • Highly Technical Founders
  • Globally distributed company
  • Open Source is in our DNA

About the position

The Plazma team at Treasure Data owns one of the essential components of our CDP solution. It is part of the Core Services group, which supports customer data ingestion and availability at a rate of 70 billion records per day. You will help the team develop the future of our Hadoop/Hive and Trino query engines and expand from there into our in-house storage solution. This includes maintaining technical excellence to address challenges that currently lack industry-wide solutions, and delivering the roadmap together with your team. The team consists of Big Data experts across Japan, Korea, and Canada who are passionate about OSS contribution, and we take pride in the quality of service we offer.

Responsibilities

  • Design and develop Hadoop/Hive & Trino solutions, providing technical expertise for modern data architecture assessment and use case development

  • Establish engineering standards for design, development, tuning, deployment, and maintenance of advanced data access frameworks and distributed systems

  • Collaborate with your team to define product roadmaps based on operational needs and customer-requested features while mentoring and training new team members

  • Own version and release management, including baseline evaluation, patch backporting, and deployment of customer-facing features

  • Coordinate with Support and Product teams on release cycles and feature delivery

  • Contribute to Hadoop/Hive & Trino OSS through bug fixes, new features, and technical documentation

  • Partner with SRE to automate cluster operations, reducing operational overhead through automated lifecycle management and load balancing workflows

  • Design and implement observability solutions, including health metrics, capacity planning tools, and automated failure detection and recovery systems

  • Provide expert customer support, including on-call responsibilities, escalation handling, and in-depth troubleshooting of performance and defect issues

  • Develop custom technical solutions, including user-defined functions (UDFs) and specialized tooling for Hadoop/Hive & Trino
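To illustrate the UDF work described above, here is a minimal, hypothetical sketch (not Treasure Data code). A real Hive UDF would extend `org.apache.hadoop.hive.ql.exec.UDF` (or implement `GenericUDF`); that base class is left as a comment so the sketch compiles standalone:

```java
// Hypothetical example of a Hive-style scalar UDF that masks email addresses.
// In a real deployment this class would extend org.apache.hadoop.hive.ql.exec.UDF
// and be packaged into a jar on Hive's classpath.
public final class MaskEmailUDF /* extends org.apache.hadoop.hive.ql.exec.UDF */ {
    // Hive invokes evaluate() once per row; a null input must yield null.
    public String evaluate(String email) {
        if (email == null) {
            return null;
        }
        int at = email.indexOf('@');
        if (at <= 1) {
            // Too short to mask meaningfully; return unchanged.
            return email;
        }
        // Keep the first character and the domain, mask the rest of the local part.
        return email.charAt(0) + "***" + email.substring(at);
    }
}
```

In Hive, such a function would be registered with `CREATE FUNCTION` after packaging; Trino exposes an analogous plugin SPI for scalar functions.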

Requirements

  • 5+ years building and operating distributed systems

  • Strong Java and deep understanding of algorithms, data structures, and distributed systems fundamentals

  • Solid understanding of cloud architecture and services in public clouds like AWS, GCP, or Microsoft Azure

  • Strong capability in implementing new and improved data solutions for multi-tenant environments

  • Experience in developing use cases, functional specs, design specs, etc.

  • Experience working with distributed, scalable Big Data stores or NoSQL systems, including HDFS, S3, Cassandra, Bigtable, etc.

  • Strong analytical and communication skills; able to influence across Product, SRE, and Support

Nice to haves

While not specifically required, tell us if you have any of the following.

  • Understanding of the capabilities of Hadoop/Hive or Trino

  • Proven experience operating production query engines at petabyte scale

  • Experience with microservices architecture, data integration patterns, and extending OSS

  • Experience with Infrastructure-as-Code, SRE practices, and advanced observability

  • Experience developing UDFs and familiarity with data visualization ecosystems

  • Security and privacy-by-design expertise

  • Experience with storage patterns and optimizations for massive parallel processing


Meet Treasure Data's Developers

David discusses how he enjoys switching hats between ML and software, and why he finds Treasure Data’s “extensive ecosystem” so much fun.


Carlo, a Staff Software Engineer, shares how Treasure Data’s latest AI initiatives are opening up unprecedented opportunities for both the company and his career.


Tyler is a software engineer at Treasure Data working on its Data Clean Room product. He talks about how Treasure Data supports his team’s learning and growth, and how it invests in the quality and performance of its services.

