Engineering · Full-time · Remote / Hybrid

Data Engineer

Build pipelines, data products, and platform capabilities that turn raw data into trusted, usable assets. Work across ingestion, transformation, testing, orchestration, and observability.

Role Summary

We are looking for a Data Engineer to build the pipelines, data products, and platform capabilities that turn raw data into trusted, usable assets. This role focuses on hands-on engineering across ingestion, transformation, testing, orchestration, observability, and operational reliability. The ideal candidate is a strong builder who cares about quality, understands business context, and can help move data systems from brittle scripts to scalable production services.

What You'll Do

Core Responsibilities

  • Build and maintain data pipelines for ingestion, transformation, and publishing across batch and near-real-time use cases.
  • Create reliable data models and curated datasets that support analytics, operations, and AI workloads.
  • Implement automated testing, monitoring, and alerting for pipelines and data products.
  • Optimize pipeline performance, cost, and operational resilience.
  • Support backfills, migrations, and enhancements across modern data platforms.

Strategic & Cross-Functional Responsibilities

  • Partner with architects and product stakeholders to translate requirements into practical data solutions.
  • Contribute to reusable patterns for ingestion, transformations, testing, and deployment.
  • Help define data contracts, ownership expectations, and operational standards.
  • Participate in incident response and continuous improvement for data systems.
  • Support platform modernization and domain-specific delivery efforts.

What You Bring

Required Qualifications

  • 3+ years of experience in data engineering, analytics engineering, or related roles.
  • Strong SQL and Python skills.
  • Experience building data pipelines and transformations in cloud data environments.
  • Familiarity with orchestration, testing, and monitoring for production pipelines.
  • Ability to bridge technical and business requirements with strong attention to detail.

Preferred Qualifications

  • Experience with dbt or analytics engineering patterns.
  • Experience with Snowflake, Databricks, BigQuery, Redshift, Fabric, or Synapse.
  • Familiarity with CI/CD and infrastructure-as-code concepts.
  • Experience supporting AI/ML feature pipelines or model-serving data flows.
  • Consulting or client-facing delivery experience.

Skills and Capabilities

Technical Skills

  • Data ingestion, transformation, and modeling
  • SQL performance tuning and pipeline optimization
  • Data quality checks and observability
  • Orchestration and workflow management
  • Version control, testing, and deployment automation

Domain & Business Skills

  • Translating requirements into usable data products
  • Understanding downstream impact of data design decisions
  • Building for reliability, not just initial delivery
  • Communicating tradeoffs and progress clearly

Tools, Platforms, and Languages

  • SQL, Python, Git
  • dbt, Spark, Airflow, Dagster, ADF
  • Snowflake, Databricks, BigQuery, Redshift, Fabric
  • Monitoring and alerting tools
  • CI/CD and infrastructure tooling

What Success Looks Like

  • Pipelines are reliable, testable, and observable.
  • Data products are trusted and adopted by downstream users.
  • Delivery velocity improves through reuse and automation.
  • Operational issues are detected and resolved quickly.
  • Platform and data costs stay aligned to value.

How You'll Collaborate

Internal Partners

Data Architects, AI Engineers, Product Managers, Analysts, Strategists

Client Partners

Data owners, analytics teams, engineering teams, business stakeholders

We are committed to creating an inclusive workplace and providing equal opportunity to all applicants and employees. We welcome candidates from all backgrounds and provide reasonable accommodations throughout the hiring process.