You must be authorized to work for ANY employer in the US (e.g., Green Card holders, TN visa holders, GC EAD, H4 EAD, U4U with EAD), as we are unable to sponsor or take over sponsorship of an employment visa at this time.

- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience;
- 5+ years of experience in DevOps, cloud infrastructure, or site reliability engineering;
- Expertise in cloud computing platforms such as AWS, Google Cloud, or Azure;
- Deep experience with Kubernetes, including cluster management, workload orchestration, and production operations at scale;
- Experience with ArgoCD and Helm for Kubernetes-native deployment and release management;
- Strong proficiency with Apache Airflow, including DAG development, scheduling, and pipeline reliability in a data engineering context;
- Proven ability to build and maintain CI/CD pipelines using GitHub Actions and GitHub Enterprise;
- Experience supporting cloud data warehouse platforms such as BigQuery, Snowflake, or Databricks;
- Proficiency with Infrastructure as Code tooling and cloud automation best practices;
- Familiarity with observability and incident management platforms for monitoring, alerting, and on-call response;
- Comfort working with AI-assisted developer tooling to accelerate engineering workflows;
- Strong scripting skills in Python or similar languages for automation and tooling;
- Excellent communication and collaboration skills, with the ability to partner across technical and non-technical teams;
- Upper-intermediate English level.

You will build and maintain CI/CD pipelines with GitHub Actions, operate Kubernetes workloads, design Apache Airflow workflows, and support cloud data warehouse platforms in a production environment.