HOOPP (Healthcare of Ontario Pension Plan)
Senior Data Engineer, Unified Data Platform
Why you’ll love working here:
- high-performance, people-focused culture
- our commitment that equity, diversity, and inclusion are fundamental to our work environment and business success, which helps employees feel valued and empowered to be their authentic selves
- learning and development initiatives, including workshops, Speaker Series events and access to LinkedIn Learning, that support employees’ career growth
- membership in HOOPP’s world class defined benefit pension plan, which can serve as an important part of your retirement security
- competitive, 100% company-paid extended health and dental benefits for permanent employees, including coverage supporting our team's diversity and mental health (e.g., gender affirmation, fertility and drug treatment, psychological support benefits of $2,500 per year, parental leave top-up, and a health spending account)
- optional post-retirement health and dental benefits subsidized at 50%
- yoga classes, meditation workshops, nutritional consultations, and wellness seminars
- the opportunity to make a difference and help take care of those who care for us, by providing a financially secure retirement for Ontario healthcare workers
Job Summary
The Senior Data Engineer is responsible for designing, implementing, and optimizing scalable data architectures and pipelines to support business objectives. This role requires collaboration with enterprise architecture teams, cross-functional stakeholders, and junior engineers to build robust data solutions. The engineer will manage and optimize data storage, processing, and querying in AWS S3 and Snowflake, drive automation initiatives, and ensure data quality, governance, and security compliance. Additionally, the role involves monitoring and troubleshooting data systems while contributing to release planning and backlog management.
What you will do:
- Build and maintain scalable, robust, and high-performance data architectures that handle large volumes of data efficiently and reliably.
- Collaborate with the Enterprise Architecture team to ensure data solutions align with enterprise standards and best practices.
- Collaborate with cross-functional teams to gather and define data requirements and translate them into effective technical solutions.
- Manage and optimize data storage, processing, and querying in AWS S3 and Snowflake, ensuring optimal performance and efficiency.
- Drive platform automation through Infrastructure as Code (IaC) and CI/CD capabilities.
- Design, implement, and optimize scalable data pipelines using AWS native tools, Amazon DataZone, and Snowflake.
- Implement data quality checks, cleansing, and validation processes to ensure data accuracy and reliability.
- Ensure data security, governance, and compliance in alignment with organizational standards.
- Monitor and troubleshoot data systems for optimal performance and reliability.
- Mentor junior engineers and provide leadership in data engineering best practices.
- Maintain the feature backlog and participate in release planning.
What you bring:
- Bachelor’s degree in Engineering or higher
- 8+ years of proven hands-on data engineering experience, specifically in the AWS data ecosystem (including S3 and Glue) and Snowflake; working knowledge of Amazon DataZone and SageMaker Unified Studio
- Strong expertise in the AWS platform in general and the AWS data platform in particular
- Strong expertise in building and optimizing ETL pipelines for large-scale data processing
- Strong problem-solving skills with a focus on automation (including CI/CD and IaC) and scalability
- Experience with cloud-based data warehousing solutions, specifically Snowflake
- Proficiency in SQL, Python, or other relevant programming languages
- Excellent communication and collaboration skills for working with stakeholders and teams
Technical Expertise
- Proficiency in building and maintaining scalable, robust, and high-performance data architectures.
- Expertise in AWS services (S3, Amazon DataZone) and Snowflake for data storage, processing, and querying.
- Experience with Infrastructure as Code (IaC) and CI/CD automation.
- Strong understanding of data pipelines, ETL/ELT processes, and cloud-native tools.
- Knowledge of data governance, security, and compliance best practices.
Data Engineering & Optimization
- Ability to design, implement, and optimize data pipelines for efficiency and scalability.
- Experience in data validation, cleansing, and quality checks to maintain data integrity.
- Familiarity with performance monitoring and troubleshooting of data systems.
Collaboration & Leadership
- Ability to work with Enterprise Architecture teams to ensure alignment with enterprise standards.
- Strong cross-functional collaboration skills to gather and define data requirements from different teams.
- Experience in mentoring junior engineers and advocating best practices in data engineering.
- Agile mindset with experience in feature backlog management and release planning.