
Terawatt Infrastructure

Senior Data Engineer

Posted 16 Hours Ago
In-Office or Remote
Hiring Remotely in Toronto, ON
Senior level
About Terawatt Infrastructure

The once-in-a-century transition to autonomous and electric vehicles is underway and will require a multi-trillion-dollar investment in energy and charging infrastructure, and in the real estate to site it on. Terawatt is the leader in delivering large-scale, turnkey charging solutions for companies rapidly deploying AV and EV fleets. Whether it’s an urban mobility hub or a carefully located multi-fleet hub for semi-trucks, Terawatt brings the talent, capabilities, and capital to create reliable, cost-effective solutions for customers on the leading edge of the transition to the next generation of transport.

With a growing portfolio of sites across the US, in urban hubs and along key logistics and transportation corridors, Terawatt is building the permanent transportation and logistics infrastructure of tomorrow through a robust combination of capital, real estate, development, and site operations solutions. The company develops, finances, owns, and operates charging solutions that take the cost and complexity out of electrifying fleets.

At Terawatt, we execute humbly and with urgency to provide tailored solutions for fleets that delight our clients and support the transition of transportation.

Role Description

We are seeking a highly skilled Senior Data Engineer to join our growing team. In this role, you will design and implement scalable and efficient data architectures to support our business needs. You will collaborate closely with data scientists, analysts, and other cross-functional teams to build and optimize data pipelines, ensuring that data is accessible, secure, and well-structured for analytics and reporting.

A key part of this role involves developing and maintaining data models, databases, and data lakes, while implementing robust data governance and quality assurance practices. You will drive the development of scalable data infrastructure aligned with company architecture standards and best practices.

This role also requires curiosity and a commitment to building and maintaining production data lake pipelines that transform raw time-series data into ML-ready features, training datasets, and batch predictions. This includes ensuring data quality, reproducibility, and reliable retraining so ML outputs—such as forecasts and risk scores—can be trusted by downstream systems.

Problems You Will Solve

  • Turn messy operational data into reliable signals by building pipelines that transform noisy, incomplete, and high-volume time-series data into trusted datasets for analytics, product features, and ML workflows

  • Design a resilient lakehouse platform by architecting a scalable Databricks-based platform that supports both streaming and batch workloads while ensuring governance, observability, and reliability

  • Enable production-ready ML pipelines by creating reproducible workflows, reliable feature datasets, and batch prediction pipelines that downstream systems can depend on

  • Enable self-service analytics and ML by building infrastructure and abstractions that allow analysts, engineers, and data scientists to independently explore and use data

  • Scale a platform for product and analytics by designing systems that support operational product features, internal reporting, and ML use cases without compromising performance or data quality
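To make the first problem above concrete, here is a minimal pandas sketch of regularizing noisy, incomplete time-series readings into a trusted dataset: deduplicating, resampling onto a fixed grid, flagging gaps, and interpolating only short ones. The column names (`ts`, `power_kw`) and the 5-minute grid are illustrative assumptions, not anything specified in this posting:

```python
import pandas as pd

def clean_timeseries(raw: pd.DataFrame) -> pd.DataFrame:
    """Turn noisy, incomplete readings into a regular 5-minute series.

    Assumes hypothetical columns: 'ts' (timestamps, possibly duplicated
    or out of order) and 'power_kw' (readings, possibly missing).
    """
    df = (
        raw.dropna(subset=["ts"])
           .sort_values("ts")
           .drop_duplicates(subset="ts", keep="last")  # last write wins
           .set_index("ts")
    )
    # Resample onto a regular grid; flag every missing slot first, then
    # interpolate only short gaps (<= 2 slots) rather than silently
    # filling long outages.
    out = df.resample("5min").mean()
    out["gap"] = out["power_kw"].isna()
    out["power_kw"] = out["power_kw"].interpolate(limit=2)
    return out.reset_index()

raw = pd.DataFrame({
    "ts": pd.to_datetime([
        "2024-01-01 00:00", "2024-01-01 00:00",  # duplicate reading
        "2024-01-01 00:10", "2024-01-01 00:25",  # 00:15/00:20 missing
    ]),
    "power_kw": [10.0, 12.0, 14.0, 20.0],
})
clean = clean_timeseries(raw)  # 6 regular rows, gaps flagged
```

In a real lakehouse this logic would live in a managed pipeline with tests and data-quality checks, but the core idea is the same: record which values were observed versus imputed, so downstream analytics and ML can treat them differently.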

Core Responsibilities

  • Architect and evolve a Databricks-based data platform that serves as the scalable foundation for product features, internal reporting, and ML workflows.
  • Set technical standards for modeling raw data into clean, reliable datasets, ensuring high integrity and point-in-time accuracy for both BI and ML applications.
  • Build and maintain self-service tooling and infrastructure abstractions that improve the developer experience for data producers, analysts, and data scientists.
  • Design and optimize high-performance ETL/ELT pipelines using Delta Live Tables and Structured Streaming to handle seamless ingestion from diverse data sources.
  • Own platform observability, testing, and proactive monitoring to ensure the performance and reliability of critical data delivery and pipeline health.
  • Architect and enforce data security, compliance, and access controls by implementing Unity Catalog and IAM (Identity and Access Management) best practices across the enterprise.
  • Build and maintain production-grade pipelines that transform raw data into ML-ready features, training datasets, and reliable batch predictions.
  • Lead Infrastructure as Code (IaC) initiatives using Terraform and improve team productivity by identifying technical debt and automating complex deployment workflows.
  • Partner with Engineering, Product, and Business teams to resolve ambiguities and ensure shipped data features are impactful, reliable, and aligned with business outcomes.
  • Build and maintain a self-service data lake environment, empowering non-data engineers and stakeholders to discover, explore, and analyze data independently.
  • Promote engineering excellence through code reviews, documentation, and technical standards for orchestration and testing.
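The "point-in-time accuracy" standard mentioned above has one especially common concrete form: when building training datasets, each label may only be joined to feature values that were already known at the event's timestamp, never to values from the future. A hedged pandas sketch of that as-of join follows; the table and column names (`as_of`, `util_7d`, `event_ts`) are hypothetical examples, not part of this role's actual schema:

```python
import pandas as pd

# Feature values with the time at which each became available.
features = pd.DataFrame({
    "as_of": pd.to_datetime(["2024-01-01", "2024-01-03", "2024-01-05"]),
    "util_7d": [0.42, 0.55, 0.61],  # hypothetical rolling utilization
})

# Label events: the timestamp at which each prediction would be made.
labels = pd.DataFrame({
    "event_ts": pd.to_datetime(["2024-01-02", "2024-01-04", "2024-01-06"]),
    "label": [0, 1, 1],
})

# merge_asof takes, for each event, the latest feature row at or
# before event_ts -- never a value from the future, so the training
# set has no leakage.
train = pd.merge_asof(
    labels.sort_values("event_ts"),
    features.sort_values("as_of"),
    left_on="event_ts",
    right_on="as_of",
    direction="backward",
)
```

The same guarantee applies whether the join runs in pandas, Spark SQL, or a feature store; what matters is that the pipeline records feature availability times and enforces the backward-looking join.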

Minimum Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
  • 6+ years in data engineering, platform development, or large-scale data systems.
  • Hands-on experience with Databricks or modern lakehouse platforms and cloud platforms (AWS, GCP, or Azure).
  • Experience building scalable ETL/ELT pipelines using Spark and SQL.
  • Proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB).
  • Strong understanding of data modeling, schema design, and performance optimization.
  • Experience building reliable, production-grade data pipelines with a focus on data quality and observability.
  • Experience supporting analytics and/or ML workflows, including preparing ML-ready datasets.
  • Working knowledge of data governance, security, and access control frameworks.
  • Familiarity with Infrastructure as Code (IaC) and automated deployment workflows (e.g., Terraform).
  • Proven ability to collaborate across teams and contribute to technical direction.

Preferred Qualifications

  • Experience working with time-series, IoT, or high-volume telemetry data systems.
  • Familiarity with EV charging ecosystems, including OCPP (Open Charge Point Protocol).
  • Domain experience in electric vehicles (EV), energy systems, or distributed energy resources (DERs).
  • Experience building ML feature pipelines, training datasets, or batch inference workflows.
  • Experience designing self-service data platforms for analysts and data scientists.
  • Background in event-driven or real-time data architectures.
  • Solid software engineering experience, including writing maintainable production code, testing, and applying engineering best practices to data systems.
  • Proven ability to influence technical direction and collaborate across teams.

We are building a team that represents a variety of backgrounds, perspectives, and skills. At Terawatt, we continuously strive to foster inclusion, humility, energizing relationships, and belonging, and welcome new ideas. We're growing and want you to grow with us. We encourage people from all backgrounds to apply.
If a reasonable accommodation is required to fully participate in the job application or interview process, or to perform the essential functions of the position, please contact [email protected].

Terawatt Infrastructure is an equal-opportunity employer.

Top Skills

AWS
Azure
Cassandra
Databricks
DynamoDB
GCP
MongoDB
NoSQL
Spark
SQL
Terraform


