Software Engineer - Data Infrastructure

Remote - United States

Jerry

We’d love to hear from you if you are looking for a tech company with the following:

  • Huge market ($800 billion market size): we are building the first AI-powered super app to help people own a car;
  • Silicon Valley engineering culture: open, transparent, and merit-based; no 996;
  • Best user experience: #1 ranked app in the insurance comparison category;
  • Strong leadership from Amazon, Microsoft, Facebook, Nvidia, Alibaba, and more, plus rockstar colleagues;
  • Approaching Series C with $100M+ in total financing, backed by top VCs such as Y Combinator, Goodwater, SV Angel, Funders Club, and Bow Capital.

About jerry.ai:

Jerry.ai is building the first super app to help people optimize the cost and experience of owning a car (making ownership easy and affordable). Having built the #1 ranked and fastest-growing app in the insurance comparison category, we are tackling other areas of car ownership and looking for engineering talent to join us in expanding our product offerings. Headquartered in Silicon Valley, CA, we have offices in the U.S., China, and Canada.

About the role:

We are looking for a Data Engineer who is passionate about and motivated to make an impact by building a robust and scalable data platform. In this role, you will own the company's core data pipeline, which powers our top-line metrics, and apply your data expertise to evolve the data models across the data stack. You will architect, build, and launch highly scalable and reliable data pipelines to support the company's growing data processing and analytics needs. Your work will unlock business and user-behavior insights and fuel functions such as Analytics, Data Science, Operations, and many others.

Responsibilities:

  • Own the company's core data pipeline, scaling data processing to keep pace with rapid data growth
  • Continuously evolve data models and schemas based on business and engineering needs
  • Implement systems that track data quality and consistency
  • Develop tools supporting self-service data pipeline management (ETL); a minimal sketch of this kind of pipeline step follows this list
  • Tune SQL and MapReduce jobs to improve data processing performance
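
To give a concrete (and purely hypothetical) flavor of this work, the sketch below shows a minimal PySpark ETL step with a simple data quality gate. The bucket, table, and column names (quotes, user_id, quoted_at) and the 1% null threshold are illustrative assumptions, not Jerry's actual schema or standards.

    from pyspark.sql import SparkSession, functions as F

    # Minimal illustrative ETL step: read raw events, run a data quality gate,
    # aggregate, and write a downstream metrics table. All paths, tables, and
    # columns here are hypothetical placeholders.
    spark = SparkSession.builder.appName("daily_quotes_etl").getOrCreate()

    raw = spark.read.parquet("s3://example-bucket/raw/quotes/")  # hypothetical source

    # Quality gate: fail fast if the load is empty or required keys are missing.
    total = raw.count()
    bad = raw.filter(F.col("user_id").isNull() | F.col("quoted_at").isNull()).count()
    if total == 0 or bad / total > 0.01:
        raise ValueError(f"Data quality check failed: {bad}/{total} rows missing keys")

    # Aggregate into a daily metrics table that analytics tools can query.
    daily = (
        raw.withColumn("quote_date", F.to_date("quoted_at"))
           .groupBy("quote_date")
           .agg(F.countDistinct("user_id").alias("quoting_users"),
                F.count("*").alias("quotes"))
    )

    daily.write.mode("overwrite").partitionBy("quote_date").parquet(
        "s3://example-bucket/marts/daily_quotes/"
    )

In production a step like this would typically run under a workflow orchestrator; that detail is an assumption on our part, not something the posting specifies.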

Requirements:

  • 3+ years of data engineering experience in a rigorous engineering environment
  • Proficiency in SQL, especially the Postgres dialect (a hypothetical tuning example follows this list)
  • Expertise in Python for developing and maintaining data pipeline code
  • Experience with Apache Spark and the PySpark library (experience with the AWS extensions for PySpark is a plus)
  • Experience with BI software (preferably Metabase or Tableau)
  • Experience with the Hadoop (or similar) ecosystem
  • Experience deploying and maintaining data infrastructure in the cloud (AWS preferred)
  • Comfort working directly with the data analytics team to bridge business requirements and data engineering
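
For the Postgres requirement, here is a hedged sketch of routine query tuning: inspect a slow query's plan with EXPLAIN ANALYZE, then add a supporting index if the plan reveals a sequential scan. The DSN, table, and column names are hypothetical.

    import psycopg2

    # Hypothetical sketch of everyday Postgres tuning work. The connection
    # string, table, and column names are illustrative only.
    conn = psycopg2.connect("dbname=analytics user=etl")  # hypothetical DSN
    with conn, conn.cursor() as cur:
        # Inspect the current plan and runtime for a hot query.
        cur.execute(
            "EXPLAIN ANALYZE SELECT count(*) FROM quotes WHERE user_id = %s",
            (42,),
        )
        for (line,) in cur.fetchall():
            print(line)

        # A targeted index can turn a sequential scan into an index(-only) scan.
        cur.execute(
            "CREATE INDEX IF NOT EXISTS idx_quotes_user_id ON quotes (user_id)"
        )
    conn.close()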