
Later

ML Infrastructure Engineer

Posted 5 Days Ago
Be an Early Applicant
Easy Apply
In-Office
Vancouver, BC
Mid level

Later is the world’s most intelligent influencer marketing company, built to give brands the confidence to create unforgettable campaigns. By combining real creator relationships, trusted intelligence, and expert guidance, Later removes fear and guesswork from one of marketing’s most visible investments.

Built on a native, AI-powered platform and more than a decade of proprietary data—including billions of social interactions, impressions, and $2.4B+ in verified influencer-driven purchases—Later helps teams understand what will work before they launch.

By combining trusted insight with expert guidance, Later removes guesswork from influencer marketing, enabling brands to choose the right creators, execute fully managed campaigns, and drive meaningful growth across awareness, engagement, and revenue. Trusted by leading enterprise brands including Nike, Wayfair, Unilever, and Southwest Airlines, Later bridges creativity and performance so campaigns don’t just look good—they deliver results. Learn more at later.com.

About this position:

We’re looking for a Machine Learning Infrastructure Engineer to join our growing Data & Platform team and build the foundation that powers our AI and machine learning capabilities across Later’s product portfolio. As our first dedicated ML Infrastructure Engineer, you will own the systems that support model experimentation, training, deployment, and monitoring at scale.

This role is critical to accelerating our data science initiatives and enabling future AI innovation. You’ll design and operate reliable, secure, and scalable ML infrastructure that empowers data scientists and engineers to ship high-impact models with confidence. If you’re excited about building robust ML systems in a fast-moving environment—and want to define the standard for ML Ops at Later—this is your opportunity.

What you'll be doing:

Strategy
  • Define and own the long-term ML infrastructure roadmap, ensuring it supports both current experimentation needs and future AI initiatives.
  • Establish best practices for model lifecycle management, deployment standards, monitoring, and governance. 
  • Identify infrastructure gaps and proactively design scalable solutions to enable high-velocity ML development.
  • Contribute to cross-functional technical planning, ensuring ML systems align with product and platform strategy.
Technical / Execution
  • Design, build, and maintain production-grade model deployment and inference systems using CI/CD pipelines, containerized services (Docker), and API frameworks (e.g., Flask).
  • Automate end-to-end ML lifecycle workflows including training pipelines, model validation, registry management, deployment, and rollback strategies.
  • Implement robust monitoring systems for model performance, latency, drift detection, and infrastructure health using tools such as CloudWatch, Prometheus, and Grafana.
  • Operate across AWS and GCP environments to manage training and inference workloads, including GPU-based infrastructure and BigQuery datasets.
  • Develop and maintain infrastructure-as-code (Terraform, CloudFormation) to ensure scalable, repeatable, and secure cloud environments.
  • Implement and optimize CI/CD workflows (e.g., GitHub Actions, GitLab CI, Bitbucket Pipelines) for ML and infrastructure automation.
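The drift-detection monitoring described above is often built on a statistic such as the Population Stability Index (PSI), which compares a live feature distribution against the training-time reference. The sketch below is a minimal, stdlib-only illustration of the idea, not a description of Later's actual monitoring stack; in production this signal would typically feed Prometheus or CloudWatch alerts.

```python
import math
from collections import Counter

def psi(reference, live, bins=10):
    """Population Stability Index between a reference (training-time)
    feature distribution and a live (production) one.
    PSI > 0.2 is a common rule of thumb for actionable drift."""
    lo = min(min(reference), min(live))
    hi = max(max(reference), max(live))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def bucket_pcts(values):
        counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
        total = len(values)
        # A small epsilon keeps log() defined for empty buckets.
        return [(counts.get(b, 0) / total) or 1e-6 for b in range(bins)]

    ref_pct, live_pct = bucket_pcts(reference), bucket_pcts(live)
    return sum((l - r) * math.log(l / r) for r, l in zip(ref_pct, live_pct))
```

An identical distribution scores ~0, while a shifted one scores well above the 0.2 alerting threshold, which is what makes the metric useful as a scheduled production check.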
Team / Collaboration
  • Partner closely with Data Scientists, Analysts, Platform Engineers, and Product Engineers to support end-to-end ML workflows.
  • Translate data science experimentation needs into production-ready infrastructure solutions.
  • Serve as the technical bridge between ML experimentation and productized deployment.
  • Share knowledge and best practices to elevate ML maturity across teams.
Research/Best Practices
  • Stay current on emerging ML Ops practices, tools, and frameworks to continuously improve system reliability and efficiency.
  • Evaluate and implement model-serving frameworks (e.g., TorchServe, Seldon, TensorRT) where appropriate.
  • Contribute to governance, reproducibility, and auditability standards for ML systems.
  • Experiment with new tooling and workflows to improve reproducibility, performance, and developer velocity.
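One common building block for the reproducibility and auditability standards mentioned above is a deterministic run fingerprint: hash the hyperparameters together with the input-data hashes so identical runs get identical IDs. This is a hypothetical sketch (the function name and shape are illustrative, not an existing tool's API):

```python
import hashlib
import json

def run_fingerprint(config: dict, data_files: dict) -> str:
    """Deterministic fingerprint of a training run: the same
    hyperparameters plus the same input data always hash to the same
    ID, so a registry lookup can tell whether a run is a true repeat.
    `data_files` maps file path -> content hash (computed elsewhere)."""
    manifest = {
        "config": config,
        "data": dict(sorted(data_files.items())),
    }
    # sort_keys makes the serialized JSON byte-stable across dict orderings.
    blob = json.dumps(manifest, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:16]
```

Because the serialization is key-sorted, the fingerprint is insensitive to dict ordering but changes the moment any hyperparameter or input file changes, which is exactly the audit property you want.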
What success looks like:
  • ML models move from experimentation to production quickly and reliably, with minimal manual intervention.
  • CI/CD pipelines enable safe, repeatable deployments with clear rollback strategies.
  • Model performance, drift, and infrastructure health are proactively monitored and observable.
  • Infrastructure supports scalable GPU training and real-time inference without bottlenecks.
  • Data scientists report improved velocity, reproducibility, and confidence in deploying models.
  • ML systems are secure, compliant, and aligned with evolving product and AI strategy.
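Several of the outcomes above hinge on clear rollback strategies. Real registries (MLflow Model Registry, SageMaker Model Registry) provide this; the toy class below only sketches the underlying pattern, in which promoting a version keeps its predecessor so rollback is a single pointer move rather than a redeploy from scratch:

```python
class ModelRegistry:
    """Minimal stand-in for a model registry: each promotion pushes the
    previous production version onto a history stack, so rolling back
    is one pop instead of rebuilding and redeploying an artifact."""

    def __init__(self):
        self._history = []    # earlier production versions, newest last
        self._current = None  # the version currently serving traffic

    def promote(self, version: str) -> None:
        if self._current is not None:
            self._history.append(self._current)
        self._current = version

    def rollback(self) -> str:
        if not self._history:
            raise RuntimeError("no earlier version to roll back to")
        self._current = self._history.pop()
        return self._current

    @property
    def production(self):
        return self._current
```

In practice the same idea appears as stage transitions or weighted endpoint variants, but the invariant is the one shown: the previous good version is always one operation away.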
What you bring:
  • 4+ years of experience in ML Ops, ML infrastructure, backend engineering, or related roles supporting production ML systems.
  • Experience working in cloud-native environments (AWS and/or GCP) with hands-on deployment of ML workloads.
  • Proven track record designing and implementing CI/CD pipelines for ML systems.
  • Strong experience with Amazon SageMaker, Docker, Flask-based APIs, and infrastructure automation tools.
  • Hands-on experience with ML lifecycle tooling such as MLflow, SageMaker Studio, or Weights & Biases.
  • Experience managing container orchestration platforms (Kubernetes, EKS, or GKE).
  • Strong programming experience in Python (additional experience in Go, Java, or Scala is a plus).
  • Experience working with infrastructure-as-code tools such as Terraform or CloudFormation.
  • Familiarity with observability tools such as CloudWatch, Prometheus, Grafana, Datadog, or centralized logging platforms.
  • Experience managing GPU-based workloads and scaling training/inference systems.
  • Familiarity with data infrastructure tools such as BigQuery and cloud-native data pipelines.
  • Bonus: Experience supporting LLMs or generative AI pipelines, distributed training systems, feature stores (e.g., Feast), real-time inference systems, or ML governance frameworks.
  • A mindset focused on automation, reliability, performance, and continuous improvement in fast-scaling environments.
How you work: 
  • Driven by Impact: You deliver results that matter—prioritizing high-value work, meeting deadlines, and adapting quickly while keeping outcomes clear.
  • Strategic & Customer-Centric: You anticipate risks and opportunities, connect decisions to long-term growth, and build trust through proactive insights.
  • Curious & Growth-Oriented: You seek knowledge, ask sharp questions, and apply learnings fast—challenging the status quo with a mindset of improvement.
  • Collaborative & Resilient: You thrive in change by staying resourceful, solution-focused, and positive—removing roadblocks, sharing insights, and keeping morale high.
  • Accountable & Honest: You own your work, hold yourself and others to a high bar, and use transparent feedback to drive growth.
  • Emotionally Intelligent: You build trust through empathy and collaboration, foster inclusion, and inspire others with grit, optimism, and integrity.
Our approach to compensation:

We take a market-based, data-driven approach to compensation. We leverage data from trusted third-party compensation sources to help us understand the market value of a role based on function, level, geographic location, and scope. We evaluate compensation twice a year, taking performance and market-related factors into account.

Our salaries are benchmarked against market Total Cash Compensation for the geographic location of our job posting. Compensation for some roles is structured as On Target Earnings (OTE = base + commission/variable) while for others it is structured as Salary only.

To comply with local legislation and ensure transparency, we share salary ranges on all job postings. Skills, experience, and other factors help determine the final salary we offer, which may vary from the posted range.

Additionally, all permanent team members are eligible to participate in various benefits plans as part of their overall compensation package.

Salary Range: 

$145,000 - $165,000

#LI-Hybrid 

Where we work:

We have offices in Boston, MA; Vancouver, BC; Chicago, IL; and Vancouver, WA. For select positions, we are open to hiring fully remote candidates. We post our positions in the location(s) where we are open to having the successful candidate be located. 

Diversity, inclusion, and accessibility:

At Later, we are committed to fostering a culture rooted in an inclusion-first mindset at every level of the company, embracing the importance of hiring and building teams for culture add rather than culture fit. We openly build and maintain unbiased hiring, pay, and promotion practices to create a foundation for an equitable workplace, paving the way for systemic change.

We are committed to creating a diverse environment and are proud to be an equal opportunity employer. All applications will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, national origin, disability, or age. Please let us know if you require any accommodations or support during the recruitment process.

Top Skills

AWS
CI/CD
CloudFormation
Docker
Flask
GCP
Go
Grafana
Java
Kubernetes
Prometheus
Python
SageMaker
Scala
Terraform

Later Vancouver, British Columbia, CAN Office

Vancouver, British Columbia, Canada


