Gradient AI:
Gradient AI is revolutionizing Group Health and P&C insurance with AI-powered solutions that help insurers predict risk more accurately, improve profitability, and automate underwriting and claims. Our SaaS platform taps into one of the industry’s largest data lakes—tens of millions of policies and claims—to deliver deep, actionable insights. Trusted by leading carriers, MGAs, TPAs, and self-insured employers, Gradient AI has grown rapidly since our founding in 2018. Backed by $56M in Series C funding, we're scaling fast—and it's an exciting time to join the team.
About the Role:
We are looking for a Senior Data Engineer to join our Technical Implementations & Solutions Delivery team to lead the design, development, and deployment of data pipelines for new customer implementations and to support ongoing activity for existing customers.
This is an ideal role for anyone looking to dive into health data and make a real impact. It requires expertise in using Airflow to orchestrate ETL pipelines, ensuring the efficient, reliable movement of healthcare data across systems. You’ll work closely with our engineering and client teams to deliver smooth data integration and top-notch customer support. You’ll also help shape our vision and play a key role in transforming the entire industry (really!).
This is a fully remote opportunity.
How you will make an impact:
- Own the technical implementation process for new customers, from ingestion to deployment, ensuring accuracy, consistency, and performance with an eye for scalable and repeatable processes.
- Build and maintain infrastructure for the extraction, transformation, and loading (ETL) of data from a variety of sources using SQL, AWS, and healthcare-specific big data technologies and analytics platforms.
- Develop new tools to quickly extract, process, and validate client data from different sources and platforms.
- Collaborate with data scientists to transform large volumes of health-related and bioinformatics data into modeling-ready formats, prioritizing data quality, integrity, and reliability in healthcare applications.
- Apply health and bioinformatics expertise to design data pipelines that translate complex medical concepts into actionable requirements.
Skills needed to succeed:
- BS in Computer Science, Bioinformatics, or another quantitative discipline.
- 5+ years of experience implementing, managing, or optimizing data solutions in a professional setting.
- 3+ years of experience with data orchestration frameworks such as Airflow, Dagster, or Prefect.
- Strong proficiency in Python and SQL in a professional environment.
- Hands-on knowledge of big data tools such as Apache Spark (PySpark), Databricks, Snowflake, or similar platforms.
- Comfortable working in cloud computing environments, preferably AWS, as well as Linux systems.
- Experience serving as a technical lead, setting coding standards, and mentoring other engineers is strongly preferred.
- Knowledge of healthcare data standards and a solid understanding of healthcare data privacy and security regulations (such as HIPAA) are highly desirable.
- Ability to work with and visualize health and/or medical data is a plus, as is exposure to the Insurtech industry.
What We Offer:
- A fun, team-oriented startup culture.
- Generous stock options - we all get to own a piece of what we’re building.
- Unlimited vacation days.
- Flexible schedule that supports working from home.
- A full benefits package including medical, dental, vision, 401(k), paid parental leave, and more.
- Ample opportunities to learn and take on new responsibilities.
We are an equal opportunity employer.