Data Engineer at Billigence, London/Remote, 3-6 Months, £Contractor Rate (Outside IR35)


Contract Description


Billigence is a boutique data consultancy with global reach and clientele, transforming the way organisations work with data. We leverage proven, cutting-edge technologies to design, tailor, and implement high-value Business Intelligence solutions across a wide range of applications, from process digitisation through to Cloud Data Warehousing, Visualisation, Data Science and Engineering, and Data Governance. Headquartered in Sydney, Australia, with offices around the world, we help clients navigate difficult business conditions, remove inefficiencies, and enable the scalable adoption of an analytics culture.

 

About the role:

We are working with a key client who is undergoing a period of growth and transformation. As part of this, Billigence is looking to add an experienced Data Engineer to their team. The customer is investing heavily in modern data platforms to drive business innovation, and this role will be instrumental in building and maintaining scalable data infrastructure to support their ongoing digital transformation.

 

This is a unique opportunity to join a growing project that requires a strong technical background in data engineering to ensure seamless data integration, efficient pipeline development, and robust data solutions. (Contract length: 3 to 6 months, outside IR35)

 

What you’ll do:

 

  • Develop and maintain data pipelines using Databricks
  • Work with CosmosDB and GraphQL to ensure efficient data management
  • Collaborate with business teams to ensure data solutions align with business objectives
  • Support the integration of cutting-edge technologies into the client’s data architecture
  • Lead initiatives to drive business transformation through data-driven solutions
  • Design and build efficient ETL processes for data extraction, transformation, and loading
  • Monitor, troubleshoot, and optimize data pipelines for performance and scalability

What you'll need:

 

  • Strong experience with Databricks, CosmosDB, and GraphQL
  • Proven ability to work with business stakeholders to drive data strategy
  • Expertise in Python and SQL for data manipulation and automation
  • Proven experience in developing, testing, and maintaining data pipelines
  • Strong communication skills to interact with both technical teams and business leaders
  • Solid understanding of ETL processes and best practices
  • Experience with cloud technologies (e.g., Azure, AWS, or GCP)
  • Familiarity with data warehousing concepts and technologies (e.g., Redshift, Snowflake, or BigQuery)
  • Experience in data modeling and database design principles
  • Ability to work in dynamic, fast-paced environments