About the Role
Join our mission-critical data team supporting defence intelligence workflows. You’ll use Palantir Foundry and Python to build robust, secure data pipelines that power analytics applications for senior leadership and executive-level decision-makers.
Day rate: £700/day, Outside IR35 (negotiable depending on experience)
Key Responsibilities
- Design, implement and maintain data ingestion pipelines into Foundry Ontology.
- Develop Python/PySpark transforms to clean, enrich, and aggregate high-volume datasets.
- Collaborate with InfoSec to enforce security policies: ACLs, encryption, audit trails.
- Integrate Foundry workflows with external CI/CD tools; maintain code quality via unit tests and peer reviews.
- Troubleshoot performance bottlenecks in Spark and optimise cluster configurations.
- Mentor junior engineers and drive best practices in code documentation and data governance.
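To give a flavour of the transform work described above, here is a minimal, illustrative sketch of a clean/enrich/aggregate step. It uses pandas with hypothetical column names and thresholds purely for illustration; the production pipelines run as Foundry/PySpark transforms.

```python
import pandas as pd

# Hypothetical raw feed: site readings with missing values.
raw = pd.DataFrame({
    "site": ["alpha", "alpha", "bravo", None, "bravo"],
    "reading": [10.0, None, 7.5, 3.0, 12.5],
})

# Clean: drop rows with no site, fill missing readings with 0.
clean = raw.dropna(subset=["site"]).fillna({"reading": 0.0})

# Enrich: flag readings above an illustrative threshold.
clean = clean.assign(high=clean["reading"] > 10.0)

# Aggregate: per-site totals and row counts.
summary = clean.groupby("site", as_index=False).agg(
    total=("reading", "sum"),
    n=("reading", "count"),
)
print(summary)
```

The same shape of logic (filter, derive columns, group and aggregate) carries over directly to PySpark DataFrames at Foundry scale.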
Required Qualifications
- 3+ years’ experience in a Foundry-heavy environment, including hands-on development in Foundry Code Repositories.
- Expert-level Python skills, including PySpark and pandas.
- Strong SQL proficiency; able to tune complex queries at scale.
- Hands-on familiarity with AWS data services (S3, EMR, Glue).
- Active UK DV clearance.
- Excellent written and verbal communication; comfortable briefing non-technical stakeholders.
Desirable
- Experience with Terraform-managed Foundry deployments.
- Exposure to containerization (Docker/Kubernetes) and GitOps workflows.
- Knowledge of data science toolkits (scikit-learn, TensorFlow) or MLOps patterns.
- Prior defence/intelligence sector background; familiarity with DISA STIGs and related compliance regimes.
This role is not for the faint of heart: expect to own end-to-end data solutions under tight SLAs and stringent security regimes. If large-scale Python codebases, distributed compute, and heavy governance aren’t for you, keep scrolling. But if you thrive on solving hairy data problems in a secured environment, this role puts you on one of the most powerful analytics platforms out there.