AI Automation Engineer

Skills
  • Languages & querying: Python, R, SQL / NoSQL, Java / C++
  • ML & AI: Scikit-learn, Natural Language Processing (NLP), Computer Vision, Machine Learning (ML) Algorithms, Deep Learning
  • Data: Data Wrangling / Cleaning, Data Visualization (e.g., Matplotlib, Seaborn, Tableau)
  • Deployment & operations: Model Deployment (e.g., Flask, FastAPI, Docker), MLOps / CI/CD Pipelines, Cloud Platforms (AWS, Azure, GCP)
  • Automation: Robotic Process Automation (RPA) Tools (UiPath, Blue Prism, Automation Anywhere), Workflow Automation, API Integration, Business Process Modeling, Process Mining
  • Scripting & BI: Scripting (PowerShell, Bash), Power BI / Tableau

Job Description

As the Lead AI Automation Engineer, you will architect and oversee the deployment of AI-driven automation across data ingestion, transformation, model training, serving, and monitoring pipelines. You’ll ensure all processes meet high standards for data security, privacy, and regulatory compliance.

🛠 Core Responsibilities

  • Vertex AI pipeline development
    Build, manage, and scale Vertex AI Pipelines (Kubeflow / Vertex Workbench) to enable reproducible, robust ML/AI workflows; a minimal pipeline sketch follows this list.
  • Data ingestion & orchestration
    Engineer data ingestion flows from various sources into BigQuery or Cloud Storage (GCS) using Dataflow, Pub/Sub, Composer (Airflow), and Cloud Functions; a Pub/Sub sketch follows this list.
  • Secure data handling
    Implement data classification, encryption (at rest and in transit), IAM governance, and audit logging using Cloud KMS, VPC Service Controls, and Cloud DLP; a KMS sketch follows this list.
  • CI/CD for ML
    Automate model builds, testing, and deployment using the Vertex AI Model Registry, Container Registry, Cloud Build, GitOps tools, and open-source CI/CD; a model-registration sketch follows this list.
  • Infrastructure as Code (IaC)
    Use Terraform, Deployment Manager, or CDK to define data and AI infrastructure, incorporating least-privilege policies and ensuring reproducibility; a cdktf sketch follows this list.
  • Monitoring & observability
    Deploy logging and monitoring using Cloud Monitoring, Cloud Logging, APM, and Vertex AI Model Monitoring, with alerting for data drift, resource issues, and SLI/SLO breaches; a custom-metric sketch follows this list.
  • Security reviews & compliance
    Conduct threat modeling and risk assessments, and align with SOC 2, ISO 27001, HIPAA, or GDPR requirements as relevant.
  • Team leadership & collaboration
    Mentor junior engineers, define best practices, and collaborate cross-functionally with Data Engineering, MLOps, Security, and Product teams.
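
The sketches below illustrate the kind of work each responsibility involves. First, a minimal sketch of a Vertex AI pipeline using the KFP v2 SDK; the component logic, pipeline name, and output path are placeholder assumptions, not part of this posting.

```python
# Minimal KFP v2 sketch: one placeholder component compiled into a
# pipeline spec that Vertex AI Pipelines can run. All names are hypothetical.
from kfp import compiler, dsl

@dsl.component(base_image="python:3.11")
def validate_rows(row_count: int) -> int:
    # Stand-in validation step; a real component would query BigQuery.
    if row_count <= 0:
        raise ValueError("expected a non-empty training set")
    return row_count

@dsl.pipeline(name="demo-training-pipeline")
def training_pipeline(row_count: int = 1000):
    validate_rows(row_count=row_count)

# Compile to a JSON spec; submit it with google.cloud.aiplatform.PipelineJob.
compiler.Compiler().compile(training_pipeline, "pipeline.json")
```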
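Next, the Pub/Sub side of an ingestion flow, assuming a hypothetical project and topic; a Dataflow or Cloud Functions subscriber would sit downstream.

```python
# Publish one ingestion event to Pub/Sub. Project and topic names are hypothetical.
import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "raw-events")

record = {"source": "crm", "payload": {"customer_id": 42}}
future = publisher.publish(topic_path, json.dumps(record).encode("utf-8"))
print(f"published message {future.result()}")  # result() blocks until the broker acks
```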
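For secure data handling, a sketch of encrypting a small payload with Cloud KMS; the key ring and key names are hypothetical, and large objects would typically use envelope encryption instead.

```python
# Encrypt and decrypt a small payload with a Cloud KMS key.
# Project, key ring, and key names are hypothetical.
from google.cloud import kms

client = kms.KeyManagementServiceClient()
key_name = client.crypto_key_path("my-project", "us-central1", "data-ring", "data-key")

plaintext = b"sensitive-record"
ciphertext = client.encrypt(request={"name": key_name, "plaintext": plaintext}).ciphertext
restored = client.decrypt(request={"name": key_name, "ciphertext": ciphertext}).plaintext
assert restored == plaintext
```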
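For CI/CD, one step a Cloud Build job might run: registering a trained model in the Vertex AI Model Registry. The project, bucket path, and serving image tag are assumptions; check the current list of prebuilt serving containers.

```python
# Register a trained model artifact in the Vertex AI Model Registry.
# Project, bucket path, and serving image tag are hypothetical.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")
model = aiplatform.Model.upload(
    display_name="churn-model",
    artifact_uri="gs://my-bucket/models/churn/",
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
    ),
)
print(model.resource_name)  # a later pipeline stage can deploy this to an endpoint
```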
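For IaC, since this posting lists Python first, here is a sketch using CDK for Terraform (cdktf) Python bindings rather than raw HCL; the provider package path and all resource names are assumptions.

```python
# Define a Cloud Storage bucket with CDK for Terraform (cdktf) in Python.
# Provider package path and all names are hypothetical.
from constructs import Construct
from cdktf import App, TerraformStack
from cdktf_cdktf_provider_google.provider import GoogleProvider
from cdktf_cdktf_provider_google.storage_bucket import StorageBucket

class DataStack(TerraformStack):
    def __init__(self, scope: Construct, ns: str):
        super().__init__(scope, ns)
        GoogleProvider(self, "google", project="my-project", region="us-central1")
        StorageBucket(
            self,
            "raw_data",
            name="my-raw-data-bucket",
            location="US",
            uniform_bucket_level_access=True,  # enforce IAM-only access
        )

app = App()
DataStack(app, "data-infra")
app.synth()  # emits Terraform JSON for plan/apply
```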
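Finally, for monitoring, a sketch that writes a custom drift metric to Cloud Monitoring so an alerting policy can fire on it; the project ID, metric type, and drift value are hypothetical.

```python
# Write a custom drift metric point to Cloud Monitoring.
# Project ID and metric type are hypothetical.
import time
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()

series = monitoring_v3.TimeSeries()
series.metric.type = "custom.googleapis.com/model/feature_drift"
series.resource.type = "global"

now = time.time()
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": int(now), "nanos": int((now % 1) * 1e9)}}
)
series.points = [
    monitoring_v3.Point({"interval": interval, "value": {"double_value": 0.07}})
]
client.create_time_series(name="projects/my-project", time_series=[series])
```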

Job Requirements

✅ Qualifications & Skills

Must-Have:
  • 0 to 3 years in engineering or MLOps roles, with hands-on experience building production workflows in GCP.
  • Deep experience with Vertex AI, Kubeflow Pipelines, or Kubeflow on GKE.
  • Proficiency in Python, Terraform (or comparable IaC tools), SQL.
  • Strong knowledge of GCP services: BigQuery, Dataflow, Pub/Sub, Cloud Functions, Cloud Storage, Secret Manager, IAM, KMS, VPC, etc.
  • Expertise in secure data workflows: encryption, compliance frameworks, identity and access management.
  • Experience implementing CI/CD automation for AI/ML systems.
Nice-to-Have:
  • Certifications such as Google Cloud Professional Data Engineer, Professional Cloud Architect, or MLOps Engineering Specialist.
  • Familiarity with Docker, Kubernetes, and Kubernetes-native orchestration.
  • Knowledge of GitOps tooling: ArgoCD, Flux, or Jenkins X.
  • Experience with data cataloguing and validation tools such as Data Catalog, DataGov, Great Expectations, or similar.
  • Statistical understanding of model evaluation, drift detection, bias mitigation.