This role involves developing scalable data integration pipelines, collaborating with teams on solutions, and ensuring data quality using various modern technologies.
This is a remote position.
About our client:
Our client develops and supports software and data solutions across a variety of industries. They want you to get ahead of the market and stay there. They offer a combination of plug-and-play products that can be integrated with existing systems and processes, and that can also be customised to client needs. Their capabilities extend to big data engineering and bespoke software development, with solutions available as both cloud-based and hosted offerings.
What you will be doing:
- Analyze complex customer data to determine integration needs.
- Develop and test scalable data integration/transformation pipelines using PySpark, SparkSQL, and Python.
- Contribute to the codebase through coding, code reviews, validation, and complex transformation logic.
- Automate and maintain data validation and quality checks.
- Collaborate with FP&A, data engineers, and developers to align solutions with financial reporting and business objectives.
- Participate in solution architecture and technical discussions, refining user stories and acceptance criteria.
- Utilize modern data formats and platforms (Parquet, Delta Lake, S3/Blob Storage, Databricks).
- Partner with the product team to ensure customer data is reflected accurately and to provide feedback based on data insights.
What our client is looking for:
- A Data Analytics Engineer with 5+ years of experience.
- Must have strong Python, PySpark, SQL, and notebook-based coding skills, especially with Databricks and Delta Lake.
- Proven ability to build and deploy scalable ETL pipelines to cloud production environments using CI/CD.
- Experience with Agile/Scrum and data quality concepts, together with excellent communication skills, is essential.
- Cloud environment (Azure, AWS) and Infrastructure as Code (Terraform, Pulumi) experience beneficial.
- Telecoms industry or consulting experience, plus accounting knowledge, is a plus.
Job ID:
- J106998
For a more comprehensive list of opportunities that we have on offer, do visit our website - https://www.parvana.co.uk/careers
Requirements
Data Engineer, PySpark, Python, SQL, Databricks, ETL, CI/CD, Cloud, Azure, AWS