Data Engineer – AWS | PySpark | SQL | Lambda
London / Remote
Contract | £300/day | Outside IR35
About the Role
We are seeking a talented and driven Data Engineer to join a forward-thinking organisation where data sits at the heart of decision-making. You will design and build scalable, high-performance data pipelines that power analytics and business insights across the company.
This is an exciting opportunity to work with modern cloud technologies, contribute to mission-critical systems, and make a real impact in a fast-paced, collaborative environment.
What You'll Be Doing
- Designing and implementing robust ETL pipelines to move and transform data at scale
- Building cloud-native solutions on AWS to ensure reliability and performance
- Working with PySpark, Python, and SQL to process large datasets efficiently
- Orchestrating workflows using Apache Airflow
- Leveraging AWS services such as S3, RDS, Redshift, and Lambda
- Deploying infrastructure using Terraform and Infrastructure-as-Code best practices
- Automating releases through CI/CD pipelines with GitHub Actions
- Collaborating closely with engineering and analytics teams to deliver high-quality solutions
Required Skills
- Strong experience with PySpark and AWS
- Proven background building and maintaining ETL pipelines
- Solid programming ability in Python, SQL, and Spark
- Hands-on experience with Apache Airflow
- Deep understanding of AWS data services
- Terraform for cloud deployments
- CI/CD workflow experience using GitHub Actions