ZoomInfo

Senior DevOps Engineer

Posted Yesterday
In-Office
Toronto, ON
Senior level

ZoomInfo is where careers accelerate. We move fast, think boldly, and empower you to do the best work of your life. You’ll be surrounded by teammates who care deeply, challenge each other, and celebrate wins. With tools that amplify your impact and a culture that backs your ambition, you won’t just contribute. You’ll make things happen–fast.

Job Summary:

We are seeking a highly skilled and self-motivated Senior Embedded DevOps Engineer to support our engineering teams. This role will focus on driving changes and ensuring adherence to company-established standards for data infrastructure and CI/CD pipelines. The ideal candidate will have strong experience working with AWS and/or GCP, cloud-based data streaming and processing services, containerized application deployments, infrastructure automation, and Site Reliability Engineering (SRE) best practices for performance and cost optimization.

What You'll do:

  • Drive initiatives to implement and enforce best practices for data streaming, processing, analytics, and monitoring infrastructure.
  • Deploy and manage services on Kubernetes-based platforms such as Amazon EKS and Google Kubernetes Engine (GKE).
  • Provision and manage cloud infrastructure using Terraform, ensuring best practices in security, scalability, and cost-efficiency.
  • Maintain and optimize CI/CD pipelines using Jenkins, ArgoCD, and GitHub Enterprise Actions to support automated deployments and testing.
  • Work with cloud-native data services such as AWS Kinesis, AWS Glue, Google Dataflow, Google Pub/Sub, BigQuery, and Bigtable.
  • Leverage workflow orchestration services such as Apache Airflow and Google Cloud Composer.
  • Develop and maintain automation scripts and tooling using Python to support DevOps processes.
  • Monitor system performance, troubleshoot issues, and implement proactive solutions to enhance reliability and efficiency.
  • Implement SRE practices to improve service reliability, scalability, and cost-effectiveness.
  • Analyze and optimize cloud costs, identifying areas for improvement and implementing cost-saving strategies.
  • Ensure compliance with security policies and best practices in cloud environments.
  • Drive adoption of company standards and influence data teams to align with best DevOps and SRE practices.
  • Collaborate with cross-functional teams to improve development workflows and infrastructure.
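To give candidates a flavor of the cost-optimization work above, here is a minimal, purely illustrative Python sketch of the kind of internal tooling this role might build. The billing-record shape and the 20% budget-share threshold are hypothetical examples, not ZoomInfo's actual data or process:

```python
from collections import defaultdict

def top_cost_drivers(records, share_threshold=0.2):
    """Aggregate per-service spend and flag services exceeding a share
    of total cost. `records` is a list of dicts shaped like
    {"service": "EKS", "cost_usd": 123.45} (a hypothetical billing export)."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec["service"]] += rec["cost_usd"]
    grand_total = sum(totals.values())
    flagged = {
        service: cost
        for service, cost in totals.items()
        if grand_total and cost / grand_total >= share_threshold
    }
    # Highest spenders first, so a report reads top-down.
    return sorted(flagged.items(), key=lambda kv: kv[1], reverse=True)

# Example: three services, with EKS dominating the bill.
records = [
    {"service": "EKS", "cost_usd": 700.0},
    {"service": "Kinesis", "cost_usd": 200.0},
    {"service": "Glue", "cost_usd": 100.0},
]
print(top_cost_drivers(records))  # EKS (70%) and Kinesis (20%) are flagged
```

In practice a script like this would sit behind a scheduled job fed by a real billing export, with the flagged services surfaced in monitoring dashboards.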

What you bring:

  • 7+ years of experience in a DevOps, Site Reliability Engineering, or Cloud Infrastructure role.
  • Strong experience with AWS and GCP data services, including Kinesis, Glue, Pub/Sub, and Dataflow.
  • Proficiency in deploying and managing workloads on Kubernetes (EKS/GKE) in production environments.
  • Hands-on experience with Infrastructure-as-Code (IaC) using Terraform.
  • Expertise in CI/CD pipeline management using Jenkins, ArgoCD, and GitHub Enterprise Actions.
  • Programming skills in Python for automation and scripting.
  • Experience with observability and monitoring tools (e.g., Prometheus, Grafana, Datadog, or CloudWatch).
  • Strong understanding of SRE principles, including performance monitoring, incident response, and reliability engineering.
  • Experience with cost optimization strategies for cloud infrastructure.
  • Self-motivated and driven, with a strong ability to influence and drive changes across multiple teams.
  • Ability to work collaboratively in an agile environment and support multiple teams. 

Preferred Qualifications:

  • Experience with data lake architectures and big data processing frameworks (e.g., Apache Spark, Flink, Snowflake, BigQuery).
  • Familiarity with event-driven architectures and message queues (e.g., Kafka, RabbitMQ).
  • Experience with workflow orchestration tools such as Apache Airflow and Google Cloud Composer.
  • Knowledge of service mesh technologies like Istio.
  • Experience with GitOps workflows and Kubernetes-native tooling.

#LI-SK1

#LI-Hybrid

About us: 

ZoomInfo (NASDAQ: GTM) is the Go-To-Market Intelligence Platform that empowers businesses to grow faster with AI-ready insights, trusted data, and advanced automation. Its solutions provide more than 35,000 companies worldwide with a complete view of their customers, making every seller their best seller.

ZoomInfo is committed to protecting your privacy when you apply for jobs with us. Please review our Job Applicant Privacy Notice for more details on how we handle your personal information.

ZoomInfo may use a software-based assessment as part of the recruitment process. More information about this tool, including the results of the most recent bias audit, is available here.

ZoomInfo is proud to be an equal opportunity employer, hiring based on qualifications, merit, and business needs, and does not discriminate based on protected status. We welcome all applicants and are committed to providing equal employment opportunities regardless of sex, race, age, color, national origin, sexual orientation, gender identity, marital status, disability status, religion, protected military or veteran status, medical condition, or any other characteristic protected by applicable law. We also consider qualified candidates with criminal histories in accordance with legal requirements.

For Massachusetts Applicants: It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability. ZoomInfo does not administer lie detector tests to applicants in any location.

Top Skills

Apache Airflow
ArgoCD
AWS
AWS Glue
AWS Kinesis
BigQuery
Bigtable
CloudWatch
Datadog
GCP
GitHub Enterprise Actions
Google Cloud Composer
Google Dataflow
Google Pub/Sub
Grafana
Jenkins
Kubernetes
Prometheus
Python
Terraform
