
Divvy - Lehi, Utah

What We’re Doing 🙌

Divvy is a startup located at Thanksgiving Point in Lehi, UT, looking to disrupt the financial technology industry. We’re on a mission to automate the spending and expense reporting process, enabling companies and individuals to have more control, greater convenience, and increased security when distributing funds. With our product, keeping receipts, asking for reimbursement, and spending wastefully are things of the past. Our web and mobile applications enable our users to take proactive control of their money.


What You’ll Be Doing 🤔

As a Data Engineer at Divvy, you will join an early-stage team that builds the data transport, collection, and storage layers and exposes the services that make data a first-class citizen at Divvy. We are looking for a Data Engineer who is passionate and motivated to make an impact by creating a robust and scalable data platform. In this role, you will architect, build, and launch highly scalable and reliable data schemas and pipelines to support Divvy’s growing data processing and analytics needs. You will also apply your data expertise to help evolve the data models across the components of the data stack. Your work will unlock business and user-behavior insights, leveraging large amounts of Divvy data to fuel reporting, machine learning, credit decisioning, and product decisions.


Key Responsibilities ☝️

  • Implement a solid, robust, extensible data warehouse design that supports key business flows
  • Continuously evolve the data model and data schema to meet business and engineering needs
  • Own the core company data model and pipeline, scaling up data processing to meet rapid data growth at Divvy
  • Create monitoring systems to track the quality of our data and the health of our systems
  • Develop tools supporting self-service data pipeline (ETL) management (see the sketch after this list)
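To give a concrete flavor of that last responsibility, here is a minimal sketch of a daily ETL pipeline expressed as an Apache Airflow DAG (Airflow is one of the workflow tools named in the qualifications below). The divvy_daily_etl DAG id and the extract/transform/load callables are illustrative assumptions, not Divvy’s actual pipeline.

    from datetime import datetime, timedelta

    from airflow import DAG  # Airflow 2.x
    from airflow.operators.python import PythonOperator


    def extract(**context):
        # Pull raw records from a source system (placeholder data).
        return [{"id": 1, "amount_cents": 1250}]


    def transform(ti, **context):
        # Reshape the extracted records for the warehouse.
        rows = ti.xcom_pull(task_ids="extract")
        return [{**r, "amount_dollars": r["amount_cents"] / 100} for r in rows]


    def load(ti, **context):
        # Write transformed rows to the warehouse (printed here as a stand-in).
        print(ti.xcom_pull(task_ids="transform"))


    with DAG(
        dag_id="divvy_daily_etl",  # hypothetical name, for illustration only
        start_date=datetime(2023, 1, 1),
        schedule_interval="@daily",
        default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)

        # Declare the dependency chain; the scheduler handles retries and backfills.
        extract_task >> transform_task >> load_task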


Qualifications 👊

  • BS/BA in a technical field such as Computer Science or Mathematics
  • 4+ years of experience in the data warehouse space
  • 4+ years of experience in custom ETL design, implementation, and maintenance
  • 4+ years of experience with schema design and dimensional data modeling
  • Proficiency in at least one SQL dialect (MySQL, PostgreSQL)
  • Strong skills in Python
  • Experience with workflow management tools (Airflow, Oozie, Azkaban, Luigi)
  • Experience designing, implementing, and maintaining star schemas (see the sketch after this list)
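For reference, the star schema pattern named above centers a fact table of measurable events on foreign keys into descriptive dimension tables. Below is a minimal sketch of that shape; the table and column names (fact_transactions, dim_date, dim_merchant) are invented for illustration, not Divvy’s schema, and sqlite3 is used only so the snippet runs standalone.

    import sqlite3

    conn = sqlite3.connect(":memory:")

    # Dimension tables hold descriptive attributes; the fact table holds
    # measures plus a foreign key to each dimension.
    conn.executescript("""
    CREATE TABLE dim_date (
        date_key  INTEGER PRIMARY KEY,  -- e.g. 20240131
        full_date TEXT NOT NULL,
        month     INTEGER NOT NULL,
        year      INTEGER NOT NULL
    );

    CREATE TABLE dim_merchant (
        merchant_key INTEGER PRIMARY KEY,
        name         TEXT NOT NULL,
        category     TEXT
    );

    CREATE TABLE fact_transactions (
        transaction_key INTEGER PRIMARY KEY,
        date_key        INTEGER NOT NULL REFERENCES dim_date (date_key),
        merchant_key    INTEGER NOT NULL REFERENCES dim_merchant (merchant_key),
        amount_cents    INTEGER NOT NULL
    );
    """)

    # A typical analytics query joins the fact table out to its dimensions.
    cursor = conn.execute("""
        SELECT d.year, d.month, m.category,
               SUM(f.amount_cents) / 100.0 AS total_dollars
        FROM fact_transactions f
        JOIN dim_date d ON d.date_key = f.date_key
        JOIN dim_merchant m ON m.merchant_key = f.merchant_key
        GROUP BY d.year, d.month, m.category
    """)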


We’re looking for great humans to join our team. Fill out this quick form and upload your résumé below.
