We are builders; we are integrators. Tech Services creates and optimizes solutions for a rapidly growing business on a global scale. We work with distributed infrastructure, petabytes of data, and billions of transactions, with no limits on your creativity. You don’t have to wait for an architect or manager to tell you what to work on: you decide the priorities. With tech hubs in Seattle, San Francisco, Austin, Tokyo, and Hyderabad, we are improving people’s lives all around the world, one job at a time.
This position is located at Indeed’s world-class development center in the heart of Tokyo! We offer globally competitive compensation, including quarterly bonuses, a long-term incentive plan, an excellent international relocation package to get you here, and culture and Japanese-language training to help you adapt to your new home.
We have an open vacation policy that gives you time to rest and recharge, and the leisure time to experience Japanese culture. We abhor crunch time and artificial deadlines, and we promote a sustainable work pace with a rational number of hours in each day and work week, so you have time to spend with family and friends or to pursue other interests.
Our offices are conveniently located near major stations (Ebisu, Meguro, and Roppongi) on major train routes, most notably the Yamanote line. We conduct business in English and apply the same management practices globally. At the same time, our Tokyo offices are a paragon of multiculturalism, with over 35 nationalities represented, and are an expression of the value Indeed places on diversity in multiple dimensions. We have meals catered daily; gaming rooms with billiards, ping pong, video consoles, and more; and even a climbing wall in our Ebisu office. A large variety of free snacks and beverages is supplied, and there are weekly happy hours and many other social events throughout each month.
You have a cool head under pressure. When a technical fire occurs, you understand that putting it out should always avoid collateral damage. When you cause a fire (as everyone inevitably does), you take responsibility for it and work with the team to figure out the right way to put that fire out. You believe blaming is a waste of time: when something goes wrong, you figure out why it happened and how to prevent it from happening again in the future. Better yet, you look for how things went right in the first place and improve upon those.
- Handle the design, care, and feeding of our multi-petabyte Big Data environments, built on technologies in the Hadoop ecosystem
- Troubleshoot day-to-day problems and performance issues in our clusters
- Investigate and characterize non-trivial performance issues in various environments
- Work with Systems and Network Engineers to evaluate new and different types of hardware to improve performance or capacity
- Validate system configurations from the hardware layer to the Hadoop application layer, drawing on a deep understanding of system architecture
- Work with developers to evaluate their Hadoop use cases, provide feedback and design guidance
- Work simultaneously on multiple projects competing for your time, and prioritize accordingly
- Be part of the on-call rotation
- Mentor and teach the people around you
As a member of this team, you seek out feedback on your designs and ideas and provide the same to others. You constantly ask ‘What am I missing?’ and ‘How will this NOT work?’ You don’t shy away from what you don’t know; you readily admit that you don’t know everything, and use every resource available to learn what you need to know.
- Bachelor’s degree in Computer Science or equivalent
- Intimate and extensive knowledge of Linux administration and engineering
- We use CentOS/Red Hat Enterprise Linux (RHEL); you should too
- Experience designing, implementing, and administering large (200+ node), highly available Hadoop clusters secured with Kerberos, preferably using the Cloudera Hadoop distribution
- In-depth knowledge of capacity planning, management, and troubleshooting for HDFS, YARN/MapReduce, Hive, Spark, and HBase
- Experience with distributed SQL query engines for Big Data
- An advanced background with common automation tools such as Puppet
- An advanced background with a higher-level scripting language, such as Python or Ruby
- Experience with monitoring tools used in the Hadoop ecosystem, such as Nagios or Cloudera Manager
- Experience with Pepperdata a plus
- Experience with parallel and distributed systems, algorithms, and Kafka message queuing a plus