Role Description
We are seeking a highly skilled Senior Data Engineer to join our growing team. In this role, you will design and implement scalable and efficient data architectures to support our business needs. You will collaborate closely with data scientists, analysts, and other cross-functional teams to build and optimize data pipelines, ensuring that data is accessible, secure, and well-structured for analytics and reporting.
A key part of this role involves developing and maintaining data models, databases, and data lakes, while implementing robust data governance and quality assurance practices. You will drive the development of scalable data infrastructure aligned with company architecture standards and best practices.
This role also requires curiosity and a commitment to building and maintaining production data lake pipelines that transform raw time-series data into ML-ready features, training datasets, and batch predictions. This includes ensuring data quality, reproducibility, and reliable retraining so ML outputs—such as forecasts and risk scores—can be trusted by downstream systems.
Problems You Will Solve
- Turn messy operational data into reliable signals by building pipelines that transform noisy, incomplete, and high-volume time-series data into trusted datasets for analytics, product features, and ML workflows
- Design a resilient lakehouse by architecting a scalable Databricks-based platform that supports both streaming and batch workloads while ensuring governance, observability, and reliability
- Enable production-ready ML pipelines by creating reproducible workflows, reliable feature datasets, and batch prediction pipelines that downstream systems can depend on
- Enable self-service analytics and ML by building infrastructure and abstractions that allow analysts, engineers, and data scientists to independently explore and use data
- Scale a platform for product and analytics by designing systems that support operational product features, internal reporting, and ML use cases without compromising performance or data quality
Core Responsibilities
- Architect and evolve a Databricks-based data platform that serves as the scalable foundation for product features, internal reporting, and ML workflows.
- Set technical standards for modeling raw data into clean, reliable datasets, ensuring high integrity and point-in-time accuracy for both BI and ML applications.
- Build and maintain self-service tooling and infrastructure abstractions that improve the developer experience for data producers, analysts, and data scientists.
- Design and optimize high-performance ETL/ELT pipelines using Delta Live Tables and Structured Streaming to ingest data reliably from diverse sources.
- Own platform observability, testing, and proactive monitoring to ensure reliable data delivery and healthy pipelines.
- Architect and enforce data security, compliance, and access controls by implementing Unity Catalog and IAM (Identity and Access Management) best practices across the enterprise.
- Build and maintain production-grade pipelines that transform raw data into ML-ready features, training datasets, and reliable batch predictions.
- Lead Infrastructure as Code (IaC) initiatives using Terraform and improve team productivity by identifying technical debt and automating complex deployment workflows.
- Partner with Engineering, Product, and Business teams to resolve ambiguities and ensure shipped data features are impactful, reliable, and aligned with business outcomes.
- Build and maintain a self-service data lake environment, empowering non-data engineers and stakeholders to discover, explore, and analyze data independently.
- Promote engineering excellence through code reviews, documentation, and technical standards for orchestration and testing.
Minimum Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
- 6+ years of experience in data engineering, platform development, or large-scale data systems.
- Hands-on experience with Databricks or comparable modern lakehouse platforms, as well as a major cloud provider (AWS, GCP, or Azure).
- Experience building scalable ETL/ELT pipelines using Spark and SQL.
- Proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB).
- Strong understanding of data modeling, schema design, and performance optimization.
- Experience building reliable, production-grade data pipelines with a focus on data quality and observability.
- Experience supporting analytics and/or ML workflows, including preparing ML-ready datasets.
- Working knowledge of data governance, security, and access control frameworks.
- Familiarity with Infrastructure as Code (IaC) and automated deployment workflows (e.g., Terraform).
- Proven ability to collaborate across teams and contribute to technical direction.
Preferred Qualifications
- Experience working with time-series, IoT, or high-volume telemetry data systems.
- Familiarity with EV charging ecosystems, including OCPP (Open Charge Point Protocol).
- Domain experience in electric vehicles (EV), energy systems, or distributed energy resources (DERs).
- Experience building ML feature pipelines, training datasets, or batch inference workflows.
- Experience designing self-service data platforms for analysts and data scientists.
- Background in event-driven or real-time data architectures.
- Solid software engineering experience, including writing maintainable production code, testing, and applying engineering best practices to data systems.
- Proven ability to influence technical direction and collaborate across teams.


