Axial Search

MLOps Engineer

Austin, TX · Hybrid

Technology · MLOps · $140k – $240k


Axial Search builds long-term talent networks for AI, data, and transformation leaders across the United States. This position isn't tied to a specific client today; applying signals that roles like this one are of interest as your next move. We actively place people with your background, and we'll be in touch when a matching role opens with one of our clients. In the meantime, take advantage of our free job-search tools, including a live job market dashboard with salary, skills, and hiring-trend data drawn from thousands of AI transformation roles.

Job market data

MLOps and AI infrastructure hiring has picked up sharply as AI programs move past pilots into production at scale. We track over 2,000 US postings for MLOps-focused roles annually, concentrated across technology, financial services, and larger healthcare systems. Mid-to-senior base compensation generally sits in the $140K–$220K band, and the strongest MLOps engineers we place bring platform-engineering rigor to what has historically been a less disciplined space.

Key responsibilities

  • Design and operate the infrastructure that moves models from training to production at scale

  • Build and maintain CI/CD pipelines for ML — training, evaluation, registration, and deployment

  • Own observability for production models — latency, drift, data quality, and cost

  • Partner with ML engineers to codify reproducible training and evaluation workflows

  • Select, integrate, and evolve MLOps tooling (MLflow, Kubeflow, feature stores) for the team's actual needs

  • Drive cost and reliability improvements for live inference systems as usage scales

  • Collaborate with platform and security engineers on cluster, identity, and secrets management for ML workloads

  • Document standards, patterns, and runbooks so the ML organization can operate without constant hand-holding

Candidate requirements

  • 4+ years in platform, infrastructure, or DevOps with direct exposure to ML workloads

  • Strong experience with Kubernetes, Docker, and infrastructure-as-code (Terraform, Pulumi, or equivalent)

  • Hands-on experience with at least one major cloud ML stack (SageMaker, Vertex AI, or Azure ML)

  • Comfort with Python and shell scripting, plus familiarity with PyTorch or TensorFlow

  • Experience designing observability and rollout strategies for live ML systems

  • Strong debugging instincts across both infrastructure and model code