Work

Selected engagements demonstrating multi-disciplinary engineering across infrastructure, backend systems, and data platforms.

Technical Leadership →

Fan Engagement Platform Transformation

International Motorsport · Technical Leadership

Problem

Multi-million-pound fan personalisation initiative near failure. Release cadence at once every two months. Manual deployment processes creating operational risk. Multi-vendor teams lacking coordination and engineering standards. Critical fan engagement features blocked by infrastructure and delivery problems.

Approach

Set a clear technical direction that all teams — platform, backend, data, and multiple external vendors — could build towards. Gave developers the tools to ship independently without waiting on a central team. Brought engineering standards, quality gates, and accountability to an initiative that had none, and held the line on them across every workstream.

Outcome

Transformed release cadence from once every two months to 8+ deployments per day. Stabilised near-failing initiative through technical leadership and engineering standards. Accelerated team velocity with self-service infrastructure and automated pipelines. Enabled critical fan engagement initiatives through coordinated multi-disciplinary delivery.

Platform Engineering →

CI/CD Platform at Enterprise Scale

International Motorsport · Platform Engineering

Problem

Growing engineering organisation with 1,000+ repositories and millions of lines of code outgrew organic CI/CD. Pipelines had sprung up haphazardly across teams — no naming conventions, no reusability, no governance. Build tooling was inconsistent, slow, and hand-crafted per team. Developer self-service was non-existent, funnelling all CI/CD work through a small specialist team that could not scale linearly.

Approach

Built a reusable library of versioned pipeline components that developers could pick up and use without needing to understand how they worked — security scanning, build caching, and consistent outputs came included. Replaced thousands of lines of unmaintained custom scripts with a tested, documented internal tool. Established naming conventions so that anyone could look at a pipeline and immediately understand what it did, what it deployed, and where.
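The consumption model described above can be sketched in a few lines. This is purely illustrative — component names, versions, and the caret-style pin are invented, not the real library — but it shows how teams pin a major version and pick up optimisations as a single version bump:

```python
# Hypothetical component registry. Teams pin a major version; patch and
# minor releases (e.g. a caching optimisation) arrive transparently.
REGISTRY = {
    "build-python": {
        "1.2.0": "lint, test, package",
        "1.3.0": "adds build caching",
        "2.0.0": "breaking: new output layout",
    },
}

def resolve(component, pin):
    """Return the newest registered version matching a caret-style major pin."""
    major = pin.lstrip("^").split(".")[0]
    candidates = [v for v in REGISTRY[component] if v.split(".")[0] == major]
    return max(candidates, key=lambda v: tuple(int(p) for p in v.split(".")))
```

A team pinned to `^1.0.0` resolves to `1.3.0` and gets the caching improvement without touching its pipeline; moving to `2.0.0` is a deliberate one-line change.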

Outcome

Platform scaled to 7,000+ monthly builds — throughput equivalent to 363 full-time engineers per month. Build times reduced by 95% through component-level optimisation, with zero disruption to consuming teams (a single version bump). At sustained peak: one build every 20 seconds, 24/7. A level of throughput seldom seen outside hyperscalers.

AI/ML Engineering →

Agentic AI Supply Chain Intelligence

Supply Chain Consultancy · Agentic AI & Data Engineering

Problem

Supply chain data is heterogeneous at every level: suppliers send files with different column names for the same concept, different schemas for the same entity type, and no consistent classification. Manual schema mapping across dozens of clients and hundreds of file types is not a human-scalable problem. Beyond classification, operational intelligence has historically required bespoke BI tooling — there is no natural way to ask "this vessel is delayed, which orders are impacted?", "how many of my SKUs are ambient?", or "this news story just broke, how does it affect my supply lines?"

Approach

Every file uploaded to the platform is automatically analysed by an AI agent: it identifies what the data is, how it fits into the supply chain, what fields are significant, and how it connects to other datasets — before a human has looked at it. Users can then have a conversation to refine its understanding. The architecture is being extended into a fully agentic model: agents that track vessels in real time, reconstruct product journeys from images, and answer natural language questions across the entire data estate — "this ship is delayed, which of my orders are affected?", "how many of my products are ambient?", "this news story just broke, how does it impact my supply lines?"
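The heart of the classification step — mapping heterogeneous supplier columns onto one canonical schema — can be shown with a simple sketch. In production an AI agent does the matching; here a static alias table stands in for the model, and all field and alias names are illustrative:

```python
# Canonical supply-chain fields and the supplier column names that map to
# them. In the real system an agent infers these matches per file.
CANONICAL = {
    "sku": {"sku", "item_code", "product_id"},
    "quantity": {"qty", "quantity", "units"},
    "eta": {"eta", "arrival_date", "delivery_due"},
}

def infer_schema(columns):
    """Map each canonical field to the supplier column that matches it, if any."""
    lowered = {c.lower(): c for c in columns}
    return {
        field: next((lowered[a] for a in aliases if a in lowered), None)
        for field, aliases in CANONICAL.items()
    }
```

A file arriving with columns `Item_Code`, `Qty`, `Arrival_Date` maps cleanly onto `sku`, `quantity`, `eta` with no human in the loop; unmatched fields come back as `None` and feed the conversational refinement step.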

Outcome

Zero-touch metadata classification from file upload — automated schema inference replacing manual data mapping per onboarded dataset. Conversational refinement loop reduces onboarding friction for non-technical users. Agentic extension layer in delivery: targeting vessel disruption impact analysis, SKU-level supply chain querying via embeddings, and product lifecycle timeline reconstruction from image sequences.

Technical Leadership →

Multi-Tenant Supply Chain Data Platform

Supply Chain Consultancy · Data Engineering & Infrastructure

Problem

Supply chain organisations generate high volumes of data across dozens of file formats — orders, inventory snapshots, warehouse lists, logistics logs. Without a governed ingestion layer, each client ends up with siloed, unclassified data that cannot be queried, joined, or trusted. Standing up new clients required weeks of manual provisioning and every data schema required manual mapping before anything could be analysed.

Approach

Engaged as strategic engineering partner to design and build the entire platform from scratch. Client onboarding is a single config change — everything else provisions automatically. Uploaded files move through a structured pipeline: validated, transformed into a consistent format, classified by data type, then loaded into a data warehouse — all without human involvement. Each client's data is fully isolated; cross-tenant leakage is architecturally impossible.
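Two of the ideas above — onboarding as a single config change, and isolation enforced on every request — can be sketched minimally. This is not the production code; bucket, warehouse, and claim names are invented:

```python
# One entry per client is the single config change; everything else is
# derived from it during automated provisioning.
TENANTS = {
    "acme": {"warehouse": "ACME_WH", "schema": "ACME_RAW"},
}

def provision(tenant):
    """Derive every per-client resource from the single config entry."""
    cfg = TENANTS[tenant]
    return {
        "bucket": f"ingest-{tenant}",
        "warehouse": cfg["warehouse"],
        "schema": cfg["schema"],
    }

def authorised(claims, tenant):
    """A request may only touch the tenant named in its JWT claims."""
    return claims.get("tenant") == tenant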

Outcome

End-to-end file ingestion to Snowflake with zero manual intervention per upload. New client onboarding reduced from weeks to minutes. Complete tenant isolation via custom JWT claims with no cross-tenant data leakage possible. 23 engineering standards and OWASP Top 10 controls established across 12 repositories.

Platform Engineering →

Self-Service GitHub Platform as Code

Multi-Client · Platform Governance

Problem

Engineering organisations at scale routinely accumulate GitHub sprawl: repositories created ad-hoc, permissions granted directly to individuals, no consistent RBAC, no ruleset enforcement. New starters need manual access provisioning. Leavers need manual revocation. Security and compliance teams cannot audit who has access to what — and why. Across engagements in international motorsport, government, and supply-chain consulting, these problems manifested at different scales but with a common root cause: GitHub management was treated as a manual, reactive task rather than a governed, automated platform.

Approach

Designed and delivered a fully automated GitHub platform managed entirely as code. All permissions flow through teams — no one gets direct repository access — and every rule is enforced automatically and self-correcting. For enterprise clients, integrated directly with identity providers so access management happens at the HR/IT layer: a new hire is added to the right groups during onboarding and arrives on day one with exactly the right access to exactly the right repositories, with nothing extra.

Outcome

1,000+ repositories migrated from Bitbucket to GitHub at an international motorsport client with zero downtime and governance standards enforced from first commit. Multiple GitHub Enterprise organisations managed as code across identity providers. New starter access provisioning reduced to zero engineering toil. Leaver revocation automatic via SCIM. Deployed the same pattern for a government agency and a greenfield supply-chain SaaS platform, proving the architecture is environment-agnostic.

Strategic Engineering →

Mission-Critical Telemetry Migration

International Motorsport · Real-Time Data Engineering

Problem

Mission-critical embedded QNX telemetry system processing 4,000+ high-frequency CAN messages per second required cloud migration. On-premises infrastructure created single point of failure for live race broadcasts. Race-day failure risk unacceptable. Needed proof of cloud-native architecture feasibility.

Approach

Delivered a proof-of-concept cloud migration for the telemetry system, demonstrating that the full data pipeline — from car to broadcast — could run reliably on cloud infrastructure. Coordinated across embedded hardware, software, and infrastructure teams to prove the architecture held under race conditions, within a fixed window before the start of the season.

Outcome

Proved scalability with high-throughput ingestion architecture in AWS. Reduced race-day failure risk by validating cloud-native telemetry pipelines. Enabled long-term resilience by demonstrating feasibility for full migration. Proved real-time processing of 4,000+ messages per second powering fan-facing features.

Technical Leadership →

Cloud IoT SaaS Platform

Global Industrial Technology · Market Entry

Problem

Industrial technology company needed market entry for embedded analytics SaaS. Required transition from on-chip analytics to scalable cloud platform—enabling remote management for devices without physical cabling dependency. Needed proof-of-concept to secure board-level funding.

Approach

Delivered a working proof-of-concept that demonstrated the full product vision: devices reporting data to the cloud, analytics running in real time, dashboards surfacing insights to operators — all without a physical cable in sight. Presented live to the board, covering frontend, backend, and data architecture in a single coherent demo.

Outcome

Secured SaaS market entry for client by delivering project as scalable cloud delivery model. Secured post-PoC funding following successful board-level presentation. Demonstrated cohesive delivery across frontend, backend, and data architecture ensuring technical feasibility.

Platform Engineering →

Government Infrastructure Modernisation

UK Government Agency · Infrastructure & Compliance

Problem

Multi-cluster Kubernetes estate suffering deployment failures and operational risk. Critical services unreliable due to 8,000+ lines of brittle bash/Jenkins logic. Manual processes blocking release velocity. Compliance requirements for government security standards.

Approach

Replaced thousands of lines of fragile custom scripts with a clean, automated infrastructure platform. All configuration managed as code — any environment can be reproduced, audited, and proven compliant from a single source of truth. Secrets management, deployment pipelines, and infrastructure configuration all brought under consistent, automated control.

Outcome

Reduced onboarding time and compliance risk significantly. Cut release delays and stability issues. Increased reliability of critical services by modernizing multi-cluster Kubernetes estate. Reduced deployment failure and operational risk through automated, compliant infrastructure.

Strategic Engineering →

CI/CD Visibility Platform

International Motorsport · Operational Excellence

Problem

No visibility into CI/CD pipeline utilisation and performance trends. Senior leadership lacked data-driven insights for platform investment decisions. Operational improvements hampered by lack of analytics. Commercial monitoring tools costing thousands monthly while requiring only basic metrics and alerting.

Approach

Built a real-time monitoring platform that pulled pipeline data automatically and surfaced it as dashboards for engineering leadership. Added alerting so incidents were caught and flagged immediately rather than discovered after the fact. Achieved full enterprise-grade visibility without paying enterprise-grade prices for a commercial tool.

Outcome

Executive visibility into CI/CD utilisation enabling data-driven investment decisions. Operational improvements through analytics identifying bottlenecks across the platform. Increased reliability via real-time alerting reducing incident response times. Cost efficiency achieving enterprise monitoring for under $1/month versus thousands for commercial alternatives.

Platform Engineering →

Developer Platform & Automation

International Motorsport · Engineering Productivity

Problem

Inconsistent CI/CD patterns across teams reducing developer productivity. Hybrid system integration challenges slowing delivery velocity. Manual processes blocking automation of common workflows. Lack of reusable abstractions forcing teams to rebuild solutions repeatedly.

Approach

Built an internal developer tool that gave teams a consistent, tested way to automate common workflows — whether that meant talking to old systems, new systems, or CI/CD pipelines. Bridged legacy tooling with modern automation so teams weren't rebuilding the same integrations independently. Wired quality feedback directly into pull requests so developers got answers immediately, at the point of change.

Outcome

Improved developer consistency and productivity through reusable class patterns. Integrated legacy and modern tooling eliminating manual coordination overhead. Increased pipeline reliability via automated discovery and reporting. Strengthened quality gates enabling PR comment injection for immediate feedback.

Similar challenges?

If infrastructure, backend, or data engineering problems are blocking your roadmap, get in touch.

Get in touch