🧠 Lead Software Engineer (Tech Lead) — Emerging Biotech Platform

Location: New Haven, CT (On-site, full-time)

Compensation: $150–180k
The Opportunity
I’m working with a pioneering biotech company that’s building something you won’t find anywhere else: a mission-critical platform that supports postmortem human brain research to unlock new therapies for neurodegenerative disease.

This isn’t another “scale the dashboard” job. The platform underpins experiments where losing a single data record is unacceptable. Think fintech-grade reliability and compliance — applied to human brain research and AI-driven insights. The company has already secured major pharma contracts (with billions in downstream pipelines) and recently raised a $10M seed. They’re scaling rapidly, from processing 500 human brains this year to 1,000 by 2026, with a growing biobank of millions of samples.

They’re now hiring a hands-on Tech Lead / Staff Engineer to take ownership of architecture, reliability, and delivery of their core platform.

What You’ll Do
  • Architect the backbone — schemas, transactional core, immutable audits, and chain-of-custody records.
  • Stream everything — CDC → Pub/Sub → BigQuery pipelines, telemetry ingestion, reconciliation jobs.
  • Ship operator-first UIs — React/TypeScript with scanning/labeling flows, real-time updates, guardrails.
  • Reliability engineering — disaster recovery drills, tested RPO = 0, observability everywhere.
  • Technical leadership — mentor engineers, coordinate contractors, set high standards.
Tech Environment
  • Cloud: GCP (Cloud Run, Spanner, Pub/Sub, BigQuery, GCS, Terraform). AWS possible down the road.
  • Languages: Python for services; TypeScript/React for operator UIs.
  • Data/Infra: Streaming pipelines, observability (OpenTelemetry), immutable audits, CI/CD, reconciliation jobs.
Success Looks Like
  • Vertical slice demo: plan → simulate donor arrival → run steps → audited completion.
  • Operators who trust the system instead of fighting it.
  • Brain Bank Explorer scaling toward millions of samples with printable labels and lineage.
  • RPO = 0 proven in production disaster recovery drills.
Minimum Qualifications
  • 8 years building production systems where data integrity was critical (biotech, fintech, manufacturing, healthtech, etc.).
  • Designed and operated transactional cores with streaming/CDC into a warehouse (BigQuery preferred).
  • Hands-on GCP production experience (Cloud Run/GKE, SQL/Spanner, Pub/Sub, BigQuery, GCS, IAM).
  • Proficiency in Python and React/TypeScript.
  • Delivered immutable audit record lifecycle under regulated/QMS-like processes.
  • Proven track record of RPO = 0 with tested DR.
Nice-to-Have Experience
  • LIMS/ELN, MES/SCADA, or other chain-of-custody systems.
  • Barcode/RFID scanner or label printer integrations.
  • Timeseries ingestion and aggregation.
  • Omics pipeline interfaces.
  • Designing UI for lab environments (gloves, dark rooms, offline tolerance).
Why This Role Stands Out
  • 🧬 Impact: The platform enables groundbreaking brain science and fuels AI models pharma is already investing in.
  • 🚀 Growth: Scaling to 1,000 brains annually and millions of samples.
  • 💰 Backed: $10M seed funding and major pharma contracts already in place.
  • 🎯 Ownership: Lead architecture and delivery end-to-end.
  • 🌟 Mission: Zero tolerance for data failure. Operator-first UX. Real reliability.
Next Steps
If this sounds like the kind of challenge you’d like to take on, let me know. I’d be happy to share more about the company and set up an intro with the hiring team.

⚡ Please note: Sponsorship is not available for this role, and the role is 100% on-site in New Haven, CT. You’ll be working directly alongside the scientists and surgeons using the system day-to-day.