SmartNews is a news app with 20 million monthly active users in the U.S. and Japan. We are a machine-learning company deeply committed to helping users find quality news beyond the filter bubble.
Our algorithms evaluate tens of millions of articles, behaviors and social interactions to deliver only the most interesting and important stories on a global scale — a fascinating engineering challenge that has attracted a world-class data science and engineering team.
Join our international culture — smart work, smart people and a global mission to bring quality information to the world. Our perks include a free healthy lunch, delicious coffee from our in-house barista, and financial support for language learning.
We don’t require Japanese ability, and welcome people looking to relocate to Japan.
Our response to COVID-19
Our user base is growing quickly due to people's interest in news (COVID- and non-COVID-related), so we're still hiring. If you're abroad and looking to relocate to Japan, we're still interested. If you're based in China or the U.S., we can initially hire you through our local branches there and transfer you to Japan in the future. If you're based elsewhere, we'll initially hire you as a remote contractor.
For our existing employees, in response to the situation, we’re having everyone work remotely for the time being.
About the position
The data platform software engineer at SmartNews plays a key role in accelerating product and business development. We invest great effort in building highly efficient, flexible data services for analytical and operational purposes.
To serve internal users on the analytics and product-development teams, the goal and mission of data engineers is to create high-level, easy-to-use data services that simplify accessing, integrating, and consolidating diverse data sets, and to build platforms that execute processing tasks over terabytes of data per day.
Technology drives the growth of SmartNews, so we eagerly adopt cutting-edge technologies from industry, academia, and especially the open-source community.
- Design and develop new services, libraries, tools, and frameworks for data processing and management, and investigate new algorithms to increase the efficiency of data processing: ETL, data pipelines, OLAP DBMSs, real-time message and stream processing, data synchronization between systems, etc.
- Evaluate, monitor, and tune the performance of data processing procedures and platforms; gain insight into their efficiency and stability and improve them continuously, e.g. by optimizing distributed query engines, computing resource management and isolation, and multi-tier storage systems.
- Own and maintain key data processing portfolios: build and look after the environments, troubleshoot issues, and take part in the on-call rotation for incidents.
- Work closely with data architecture/modeling roles to determine how to implement data services, and interact with the Site Reliability Engineering (SRE) team to deploy environments and drive production excellence.
- Diagnose and resolve complex technical challenges in data access and processing, using elegant, systematic methods rather than ad-hoc fixes to help other teams tune performance and improve stability.
Requirements
- BS/MS degree in computer science or an equivalent science/engineering degree, with 5 or more years of experience
- Strong programming skills and a deep understanding of data structures and algorithms, required for building efficient, stable solutions
- Rich experience with one or more programming languages such as Java, Scala, C++, or Python; familiarity with agile development and testing practices
- Working knowledge of shell scripting and operating systems, especially Linux
- Good understanding of modern big data technologies and ecosystems
- Familiarity with Hadoop, Spark, Hive, Presto, Storm, or Flink, and the ability to develop batch or streaming data processing programs with them
- Familiarity with modern data stores, both RDBMS and NoSQL (such as HBase, Cassandra, or Druid); experience developing applications or extensions on such stores
- Ability to implement and tune complicated, heavy-lifting data flows (ETLs or pipelines), and familiarity with the relevant tooling
- Capability of system design with good modularity and extensibility
- Familiarity with system/module design methods and tools such as UML
- Ability to draft user-understandable blueprints as well as precise, detailed designs
- Experience building highly scalable distributed systems
- Able to design and implement distributed services with scalability and performance in mind
- Able to debug and troubleshoot performance and reliability problems