
Beatdapp

Data Platform Engineer

In-Office
Vancouver, BC, CAN
Mid level

Data Platform Engineer Position

About Beatdapp

Beatdapp is a venture-backed startup delivering the most advanced streaming integrity and recommendation technology in the world. While our roots are in fighting the multi-billion dollar problem of streaming fraud, we have leveraged our "Trust & Safety Operating System" to power a new generation of discovery.

We believe that true personalization starts with verified behaviour. By filtering out noise and manipulated signals before they impact the model, we build recommendation engines on a foundation of clean, authentic data. To deliver these insights at a global scale, we require a robust, elastic, and secure infrastructure.

The Role

We are seeking a Data Platform Engineer who is passionate about building the high-availability systems and data infrastructure that power our recommendation engines at scale. In this role, you will operate at the intersection of cloud infrastructure, distributed systems, and data engineering. You will be one of the architects of a system in which trillions of data points are ingested, processed, and served as real-time recommendations.

You will take full ownership of multi-cluster Kubernetes environments and backend service layers, ensuring our API workloads scale seamlessly and our data pipelines are robust and reliable. You will bridge the gap between raw streaming data and the clean, high-quality signals our models depend on — ensuring the systems that move, store, and serve data remain fast, secure, and resilient.

Responsibilities

  • Data Engineering: Design, build, and maintain scalable data pipelines and processing workflows that move and transform high-volume streaming data across our platform. You will optimize batch and streaming workloads, manage data quality at ingestion, and ensure reliable delivery to downstream consumers including ML feature stores and serving layers.
  • Cloud Infrastructure & Orchestration: Manage and optimize multi-cluster Kubernetes (K8s) environments. You will implement sophisticated autoscaling policies and node management strategies to support high-availability ML workloads.
  • Production Deployment Excellence: Design and orchestrate live service deployments using strategies such as A/B testing and canary releases. You will ensure the system supports seamless rollbacks and API versioning.
  • Infrastructure as Code (IaC): Design and maintain our infrastructure using IaC principles to ensure environment consistency and rapid disaster recovery.
  • End-to-End Observability: Take ownership of the logging, tracing, and metrics components across backend services and data pipelines. You will work with Ops teams to define SLOs and error budgets, build dashboards, and maintain the health monitoring systems that keep our data infrastructure and RecSys engine running 24/7.
  • Security & Compliance: Partner with security teams to enforce patch management, secrets handling, and data encryption protocols to protect sensitive streaming data.
  • Systems Ownership: Automate routine operational tasks and environment provisioning. You will be a primary stakeholder for system uptime, managing outages with a critical-thinking mindset and clear communication.
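The data-quality responsibility above — gating records at ingestion before they reach downstream consumers — can be sketched as a minimal Python filter. The event fields, thresholds, and quarantine behaviour here are illustrative assumptions, not Beatdapp's actual schema or pipeline:

```python
from typing import Iterable, Iterator

# Hypothetical play-event shape; the real streaming schema is not public.
REQUIRED_FIELDS = {"user_id", "track_id", "played_ms"}

def validate(event: dict) -> bool:
    """Basic quality gate applied at ingestion time."""
    if not REQUIRED_FIELDS <= event.keys():
        return False
    # Reject impossible play durations (negative, or longer than one hour).
    return 0 <= event["played_ms"] <= 3_600_000

def clean_stream(events: Iterable[dict]) -> Iterator[dict]:
    """Yield only events that pass the quality gate.

    In a production pipeline, rejected records would typically be routed
    to a quarantine topic for inspection rather than silently dropped.
    """
    for event in events:
        if validate(event):
            yield event
```

The same gate works unchanged for batch ETL (map it over a partition) or stream processing (apply it per message before the sink).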

Successful candidates will have:

  • 3+ years of professional experience in Backend, DevOps, and/or Data Engineering, preferably supporting data-intensive or ML applications at scale.
  • Kubernetes Experience: Deep familiarity with K8s, including experience with compute instances, network configuration (VPCs/Subnets), and scaling API workloads.
  • Strong Engineering Skills: Proficiency in writing clean, scalable backend services and data processing code, primarily in Python. You are comfortable writing production-grade code that handles large-scale data with stream processing, batch ETL, and API development in cloud-native environments.
  • CI/CD Expertise: Proven track record of building automated pipelines, managing image registries (Docker/Podman), and handling complex code versioning.
  • Architectural Fluency: A strong understanding of datastores (relational, non-relational, and columnar), distributed data systems, caching strategies, and data transfer protocols. Experience with streaming platforms (e.g. Kafka, Pub/Sub) or query engines (e.g. BigQuery, Spark) is a strong asset.
  • Security-First Mindset: Experience working with sensitive data, encryption, and secure cloud networking.
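As a rough illustration of the Kubernetes autoscaling work described above, here is a minimal HorizontalPodAutoscaler sketch for an API workload. Resource names and thresholds are placeholders, not Beatdapp's actual configuration:

```yaml
# Illustrative only — deployment and HPA names are hypothetical.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: recs-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: recs-api
  minReplicas: 3          # high-availability floor
  maxReplicas: 50
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

In practice, policies like this are layered with cluster-level node autoscaling so that new pods always have capacity to land on.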

Bonus Points

  • Hands-on work experience with Google Cloud Platform (GCP) services
  • Hands-on work experience with Terraform
  • Service Mesh Experience: Hands-on work with Istio or Linkerd for Kubernetes.
  • Experience with data orchestration tools (e.g. Airflow), vector databases, or feature stores. Comfort operating in a data-intensive ML environment and familiarity with how backend systems support model serving pipelines.
  • Experience with GitHub Actions (GHA) and building highly automated, self-healing deployment workflows.
  • A knack for creating clear architecture diagrams, code comments, and technical design documents.
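To ground the GitHub Actions item above, here is a minimal sketch of an automated deployment workflow. The repository, image, and deployment names are hypothetical, and cluster authentication steps are elided:

```yaml
# Illustrative GHA workflow — names and registry are placeholders.
name: deploy
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push image
        run: |
          docker build -t ghcr.io/example/recs-api:${{ github.sha }} .
          docker push ghcr.io/example/recs-api:${{ github.sha }}
      - name: Roll out to Kubernetes
        # Assumes kubeconfig/credentials were configured in an earlier step.
        run: |
          kubectl set image deployment/recs-api \
            recs-api=ghcr.io/example/recs-api:${{ github.sha }}
```

A "self-healing" variant would add a post-rollout health check step that triggers `kubectl rollout undo` on failure.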
HQ

Beatdapp Vancouver, British Columbia, CAN Office

Vancouver, British Columbia, Canada, V6B0M6


