As a Distributed Systems Engineer on the Data Platform team, you’ll design, build, and operate large-scale distributed systems that process petabytes of data daily and support critical business functions across the entire company. Your work will directly impact the reliability and efficiency of our data infrastructure, empowering teams across the organization to make data-driven decisions.
What you’ll do:
- Design, build, and operate high-scale distributed data systems that serve a variety of use cases across the company.
- Develop and maintain data processing pipelines, ETL jobs, and data warehousing solutions.
- Ensure the reliability, scalability, and performance of our data platform through robust testing, monitoring, and on-call support.
- Collaborate with other engineering teams, data scientists, and product managers to understand their needs and deliver innovative data solutions.
- Contribute to the overall architecture and long-term vision of the Data Platform.
What you’ll need:
- B.S. or M.S. in Computer Science or a related field, or equivalent practical experience.
- 4+ years of experience in designing, building, and operating large-scale distributed systems.
- Proficiency in Java, Scala, Python, or Go, with a strong understanding of data structures, algorithms, and software design principles.
- Experience with big data technologies such as Apache Spark, Flink, Kafka, Hadoop, or similar.
- Familiarity with cloud platforms (AWS, Azure, GCP) and containerization technologies (Docker, Kubernetes).
- Excellent problem-solving skills, ability to work independently, and strong communication skills.
- Familiarity with data warehousing concepts, SQL, and NoSQL databases.