Data platform software engineers at SmartNews play a key role in accelerating product and business development. We invest heavily in building highly efficient, flexible data services for both analytical and operational purposes.
To serve internal users such as our analytics and product-development teams, the mission of data engineers is twofold: create high-level, easy-to-use data services that simplify the access, integration, and consolidation of diverse data sets, and build platforms that execute processing tasks over massive data volumes, on the order of terabytes per day.
Technology drives the growth of SmartNews, so we eagerly adopt cutting-edge technologies from industry and academia, especially the open-source community.
Responsibilities
- Design, develop, deploy, and maintain services, libraries, tools, and frameworks for data processing and management, and investigate new algorithms to improve efficiency in areas such as ETL, data pipelines, OLAP DBMSs, real-time message and stream processing, and data synchronization between systems
- Develop and operate in-house data services such as a high-performance key/value store; adopt and enhance open-source frameworks to meet SmartNews’ growing business needs
- Develop tooling for performance evaluation, monitoring, and tuning of data processing procedures and platforms; gain insight into their efficiency and stability and make continuous improvements, for example by optimizing distributed query engines, compute resource management and isolation, and multi-tier storage systems
- Own and maintain key data processing portfolios: build and care for the environments, troubleshoot issues, and take responsibility for incidents during on-call periods. Work closely with data architecture/modeling roles to determine how to implement data services, and interact with the Site Reliability Engineering (SRE) team to deploy environments and drive production excellence.
- Devise systems, tooling, and approaches for data privacy and security; establish access control and create processes for handling sensitive data
- Diagnose and resolve complex technical challenges in data access and processing, using systematic rather than ad-hoc methods to help other teams tune performance and improve stability
Requirements
- BS/MS degree in computer science or an equivalent science/engineering field, with 5+ years of experience
- Strong programming skills and a deep understanding of data structures and algorithms, required for building efficient and stable solutions
- Rich experience with one or more programming languages such as Java, Scala, C++, or Python; familiarity with agile development and testing practices
- Working knowledge of shell scripting and operating systems, especially Linux
- Good understanding of modern big-data technologies and ecosystems
- Familiarity with Hadoop, Spark, Hive, Presto, Redis, and Flink; able to use them to develop data processing programs in batch or streaming mode
- Familiarity with modern data stores, both RDBMS and NoSQL (e.g., MySQL, Cassandra, Druid); experience developing applications or functional extensions on such stores
- Able to implement and tune complex, heavy-lifting data flows (ETLs or pipelines); familiarity with the relevant tooling
- Ability to design systems with good modularity and extensibility
- Familiarity with system/module design methods and tools such as UML
- Able to draft user-understandable blueprints as well as precise, detailed designs
- Experience building highly scalable distributed systems
- Able to design and implement distributed services with scalability and performance in mind
- Able to debug and troubleshoot performance and reliability problems