Axial Search

Senior Data Engineer

United States · Hybrid

Technology · Data Engineering · $145k – $260k


Axial Search builds long-term talent networks for AI, data, and transformation leaders across the United States. This particular position isn't tied to a specific client today; by applying, you register your interest in roles like this as your next move, and we'll be in touch when a matching opening arises with one of our clients. In the meantime, please make use of our free job-search tools, including our live job market dashboard with salary, skills, and hiring-trend data from thousands of AI transformation roles.

Job market data

Data engineering remains the quiet backbone of every serious AI program. We track several thousand senior data engineering postings in the US annually, with hiring concentrated across product-led technology firms, data-intensive enterprises, and consultancies. Mid-to-senior compensation generally lands in a $145K–$230K base band, and around one in eight roles is fully remote. Cloud warehouse work (Snowflake, Databricks, BigQuery) is near-universal, and streaming and real-time patterns are increasingly the differentiator between mid-level and senior candidates.

Key responsibilities

  • Design, build, and operate data pipelines that power analytics, ML, and GenAI products

  • Own the reliability, quality, and observability of critical data assets

  • Partner with data scientists, ML engineers, and analysts on schema, tooling, and data contracts

  • Drive architectural decisions around warehouse, lakehouse, and streaming patterns

  • Mentor junior engineers and set the team's standards for data engineering practice

  • Advocate for the right balance of speed, cost, and long-term maintainability

  • Contribute to the roadmap for internal data platform capabilities

  • Participate in on-call rotations where the team maintains critical production data services

Candidate requirements

  • 5+ years of data engineering experience with strong SQL and Python (or Scala)

  • Hands-on experience with modern warehouse or lakehouse stacks — Snowflake, Databricks, or BigQuery

  • Strong background in orchestration and transformation tooling (Airflow, Dagster, dbt) and streaming (Kafka, Flink, or Kinesis)

  • Comfort with cloud infrastructure (AWS, GCP, or Azure) and infrastructure-as-code

  • Strong debugging instincts across pipeline, infrastructure, and data-quality issues

  • Experience partnering with ML and analytics teams on shared data assets