Work Experience

Qure.ai

Title History

Senior AI Scientist
Level 3
Apr 2024 - Present
1 year 9 months
AI Scientist
Level 2
Jul 2022 - Mar 2024
1 year 9 months
AI Scientist Intern
Full-time (pre-grad)
Jul 2021 - Jun 2022
1 year

Overview

Qure.ai has been my first job and the place where I really grew up as an engineer. I joined as an earnest college graduate in a fast‑moving startup and, over a few intense years, was given room to tackle problems that genuinely matter in stroke and lung‑cancer care. R&D at Qure has always run on trust and growth—exploration is encouraged, ownership is expected—and that pushed me to think beyond individual models to the whole product and patient story. I learned to turn broad goals into clear problem statements, align early with clinicians and product, and then carry work all the way from raw data and annotations to production systems and monitoring that hold up in the wild. Somewhere along the way I shifted from just “shipping tasks” to creating leverage for the team, surfacing risks while there’s still time to course‑correct, and staying with a feature until it’s deployed, observed, and trusted. Qure has been a real multiplier for my growth; it’s where I learned to blend deep 3D computer‑vision work with practical problem‑solving instincts that travel well beyond this one role.

Projects

1) Stroke and Trauma Imaging Intelligence (qER)

Summary: Progressed from an early-career AI Scientist to qER’s R&D lead by scaling supervised computer-vision programs across NCCT/CTA/trauma datasets, owning supervised fine-tuning and transfer-learning pipelines, and converting those models plus their experimentation stack into multi-region FDA Clearances/CE Marks, peer-reviewed papers, patents, and the $100K Johnson & Johnson Japan QuickFire grant. Mission: Streamline the complexity of incoming stroke/trauma imaging into rapid, data-backed triage decisions—NCCT infarct core, CTA LVO, perfusion surrogates, trauma alerts—while equipping ER and hub-spoke teams with the context they need to cut door-to-needle time.

Clinical Coverage: MRI DWI, NCCT core/penumbra, CTA LVO, trauma suite
  • Built and productionized models that span the entire acute stroke journey: NCCT acute/hyperacute infarct classification & segmentation, CTA large-vessel occlusion detection/localization, NCCT-derived perfusion surrogates (core/penumbra), ASPECTS scoring assist, gaze deviation assessment, plus trauma detections (intracranial hemorrhage classes, midline indicators, and fracture models built by teammates) to cover the same ER workflow.
  • Early projects included a DWI/ADC infarct segmentation UNet (Dice ≈ 0.70) that was later shelved but served as my ML foundation and gave me MRI domain familiarity once CT-first models took over.
  • Partnered with product & hospital innovation teams to extend the suite into coordination tools that link spoke hospitals to thrombectomy hubs, standardize hand-offs, and shorten door-to-needle times—a differentiator beyond raw model outputs.
  • Curated ~140k NCCT studies (sourced via pan-India teleradiology partners) for classification models, carved out ~10k radiologist-annotated scans for all stroke/trauma segmentations, and spun up nimble ~1k-scan datasets for utility models (cranium classifier, intracranial volume) plus research/regulatory holdouts covering new geographies.
  • Designed and published a mechanical thrombectomy likelihood model that fuses NIHSS-like pre-clinical scores, demographics (age, last-known-well), and NCCT-derived biomarkers (infarct volume, ASPECTS). It now acts as an upstream signal for hub-and-spoke coordination and is published as a full paper.
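A minimal sketch of how such a clinical-plus-imaging fusion could be wired up, assuming a simple logistic-regression formulation on synthetic data; the feature set and weighting of the published model are not reproduced here, and every name below is illustrative.

    # Hypothetical sketch: fuse pre-clinical scores with NCCT-derived biomarkers
    # into a thrombectomy-likelihood estimate (synthetic data, illustrative features).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Columns: age, hours since last-known-well, NIHSS-like score,
    # infarct volume (mL), ASPECTS (0-10)
    X = np.column_stack([
        rng.normal(68, 12, 500),
        rng.uniform(0, 24, 500),
        rng.integers(0, 30, 500),
        rng.gamma(2.0, 15.0, 500),
        rng.integers(0, 11, 500),
    ])
    y = rng.integers(0, 2, 500)  # 1 = proceeded to mechanical thrombectomy

    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X, y)

    patient = np.array([[72, 3.5, 14, 20.0, 8]])
    print(f"Thrombectomy likelihood: {model.predict_proba(patient)[0, 1]:.2f}")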
Model Portfolio & Metrics: Infarct AUC 0.85–0.92, LVO AUC 0.98, DSC 0.75+
  • NCCT acute/hyperacute infarct: ensemble of classification heads (AUC 0.85–0.92 on ~140k scans) plus segmentation models (Dice 0.30–0.75) using CNN, transformer, and hybrid encoder families (ResNet/SE-ResNet, ConvNeXt, EfficientNetV2, 3D SwinV2, 3D MaxViT) with UNet/UPerNet/ConvLSTM decoders; best variants are published and deployed.
  • CTA LVO pipeline (patented & deployed globally): multi-stage stack combining cranium isolation, ANTsPy-based tilt correction, intracranial volume extraction, vascular-territory segmentation, MCA occlusion detection via 2D CNNs on MIPs, and ICA patch classifiers—achieving AUC ≈ 0.98 and segmentation Dice ≥ 0.95.
  • Core & penumbra on NCCT: novel segmentation leveraging CT-perfusion ground truth; currently in patent filing/clinical validation with Dice > 0.30 even against noisy perfusion labels, positioning NCCT-only workflows to mimic perfusion decisions.
  • Gaze deviation estimation: eye/lens segmentation (Dice 0.88) feeding geometric gaze-angle computation aligned with NIHSS; research published even though not commercialized yet.
  • ASPECTS post-processing: added region-level smoothing and rule-based corrections atop a colleague’s model, cutting mean absolute ASPECTS error by 36% (2.5 → 1.6).
  • ICH classification assist: co-designed augmentation strategies and ensemble logic for the hemorrhage detector that shipped inside qER Trauma.
Architecture & Experimentation: 2D/3D FCNs, transformers, MONAI, ClearML
  • Ran large-scale sweeps across 2D FCNs, 3D FCNs, 2D backbones with 3D adaptors, transformer-only encoders (ViT, SwinV2), hybrid stacks like Swin-UNETR, and ConvLSTM heads to balance accuracy vs. latency for ER deployments.
  • Standardized experimentation on MONAI + ClearML plus an in-house vision_architectures library I authored to provide production-grade implementations of 3D transformer/convolutional networks that the open-source ecosystem lacked.
  • Built a stratified evaluation harness that auto-generates ROC/PR plots, per-stratum metrics, and threshold deltas between candidate models so stakeholders can see the clinical trade-offs quickly—cutting iteration cycles substantially, even if the savings were never formally measured.
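A minimal sketch of the per-stratum evaluation idea, assuming predictions are collected in a pandas DataFrame with hypothetical column names (y_true, y_score, scanner); the production harness also renders plots and threshold deltas.

    # Stratified metrics over candidate-model predictions (synthetic data).
    import numpy as np
    import pandas as pd
    from sklearn.metrics import roc_auc_score, precision_recall_curve, auc

    def stratified_report(df: pd.DataFrame, stratum_col: str) -> pd.DataFrame:
        rows = []
        for stratum, grp in df.groupby(stratum_col):
            if grp["y_true"].nunique() < 2:
                continue  # AUC is undefined for single-class strata
            prec, rec, _ = precision_recall_curve(grp["y_true"], grp["y_score"])
            rows.append({
                stratum_col: stratum,
                "n": len(grp),
                "roc_auc": roc_auc_score(grp["y_true"], grp["y_score"]),
                "pr_auc": auc(rec, prec),
            })
        return pd.DataFrame(rows)

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "y_true": rng.integers(0, 2, 1000),
        "y_score": rng.random(1000),
        "scanner": rng.choice(["GE", "Siemens", "Philips"], 1000),
    })
    print(stratified_report(df, "scanner"))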
Workflow & Stakeholder Enablement: Hub-spoke orchestration, analysis toolkit
  • Provided structured AI outputs, occlusion maps, and confidence overlays that platform teams wired into dashboards/notifications for spoke↔hub coordination, while I stayed focused on the underlying models.
  • Partnered with Johnson & Johnson Japan on deploying these outputs into their Smart Healthy Aging Initiative QuickFire pilots—winning the $100K grant and international recognition (news, press).
Regulatory & Publication Footprint: FDA/CE wins, journals, patent
  • Led the latest FDA 510(k) submission (CTA LVO) end-to-end—owning retrospective reader studies, operating-point selection (sensitivity/specificity), and documentation—which cleared in 3.5 months, our fastest turnaround ever.
  • Serve as primary owner for CE MDR/MDSAP and other regional filings; earlier FDA packages saw me in a supporting role, but now I represent R&D on the pivotal ones.
  • Co-authored peer-reviewed papers and conference talks covering infarct detection, LVO detection, perfusion surrogates, gaze deviation, and the thrombectomy-likelihood model (ICCVW, WSC, ASFNR, etc.).
  • Named inventor on the U.S./India patent for automated LVO detection on CTA; additional IP filings (e.g., core/penumbra on NCCT) are underway.
Role Evolution & Collaboration: Novice → Lead, mentoring juniors
  • Started as a novice AI Scientist with two mentors, rapidly took ownership of stroke ML deliverables, and was promoted directly to Level 2 upon graduation because I was already operating as a full-time contributor.
  • Drove cross-functional delivery with product managers, clinical advisors, data teams, and hospital partners to translate experimental models into regulated releases.
  • Continue to mentor newer scientists on experimentation hygiene, architecture choices, and deployment readiness, while transitioning some responsibilities to lung-cancer initiatives.

2) 3D Foundation Models

Summary: Incubated a CT-native foundation backbone that pretrains once on heterogeneous neuro/chest datasets and ships reusable 3D representations, so downstream tasks (stroke, lung cancer, and other products we may venture into, such as COPD, CaC, and PH) no longer need their encoders rebuilt per project. Mission: Deliver a general-purpose, attention-first 3D encoder that trims labeled-data needs, boosts transfer-learning reliability, and plugs seamlessly into multimodal stacks so new CT products hit production faster.

Scope & Architecture: SwinV2-3D, MaxViT, SimMIM + MedCLIP
  • Selected SwinV2-3D and MaxViT backbones because their hierarchical attention windows preserve global spatial context and are future-proof for cross-modality fusion (e.g., PET+CT) while remaining trainable on limited GPUs.
  • Pretrained with a dual objective: SimMIM for masked-volume reconstruction and MedCLIP-style contrastive learning against paired and unpaired reports, striking a balance between structural understanding and semantic alignment without requiring curated labels (a hedged sketch follows this list).
  • Aggregated CT cohorts spanning the entire body so the encoder internalizes anatomy beyond targeted datasets.
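A hedged sketch of the dual pretraining objective described above, pairing a SimMIM-style masked-reconstruction term with a CLIP-style contrastive term; the encoders below are tiny stand-ins, not the SwinV2-3D/MaxViT backbones or the production text encoder.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DualObjective(nn.Module):
        def __init__(self, image_encoder, text_encoder, recon_head, temperature=0.07):
            super().__init__()
            self.image_encoder = image_encoder
            self.text_encoder = text_encoder
            self.recon_head = recon_head
            self.logit_scale = nn.Parameter(torch.tensor(1.0 / temperature).log())

        def forward(self, volumes, mask, tokens):
            feats = self.image_encoder(volumes * (1 - mask))   # encode with masked voxels zeroed
            recon = self.recon_head(feats)                     # predict the full volume
            # SimMIM-style term: L1 on the masked voxels only
            recon_loss = (F.l1_loss(recon, volumes, reduction="none") * mask).sum() / mask.sum()
            # CLIP-style term: align pooled volume features with report embeddings
            img_emb = F.normalize(feats.flatten(2).mean(-1), dim=-1)
            txt_emb = F.normalize(self.text_encoder(tokens), dim=-1)
            logits = self.logit_scale.exp() * img_emb @ txt_emb.t()
            targets = torch.arange(len(volumes), device=volumes.device)
            clip_loss = 0.5 * (F.cross_entropy(logits, targets) +
                               F.cross_entropy(logits.t(), targets))
            return recon_loss + clip_loss

    # Smoke test with stand-in encoders and a random voxel mask
    model = DualObjective(
        image_encoder=nn.Conv3d(1, 32, 3, padding=1),
        text_encoder=nn.Sequential(nn.Flatten(), nn.Linear(64, 32)),
        recon_head=nn.Conv3d(32, 1, 3, padding=1),
    )
    volumes = torch.randn(4, 1, 32, 32, 32)
    mask = (torch.rand(4, 1, 32, 32, 32) > 0.6).float()
    tokens = torch.randn(4, 64)
    model(volumes, mask, tokens).backward()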
Training Strategy: Memory-aware parallelism for 512³ scans
  • Diagnosed how CT foundation-model training inverts the usual LLM/computer-vision memory profile: instead of 10B+ parameter models (~37 GB in FP32) ingesting tiny <4 MB token streams, we run “small” million-parameter (<3 GB) encoders against 512³ voxel inputs (~512 MB each), and the published fixes mostly downsample to 2D slices—destroying the 3D context we actually need.
  • Solved that imbalance by layering tensor-splitting parallelism (to keep intermediate conv buffers from exploding), activation checkpointing (to drop and recompute giant activations), and pipeline parallelism (to shard the model and its activations across GPUs, trading away the batch-size gains of vanilla DDP). Together they let the full volumes flow without resorting to lossy cropping/resizing tricks (a simplified sketch follows this list).
  • Benchmarked throughput vs. quality to ensure these tricks delivered net-positive wall-clock time compared to naive cropping/resizing approaches that would have lost 3D context.
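A simplified sketch of two of the memory levers in plain PyTorch: activation checkpointing wrapped around a heavy stage, and depth-chunked execution of a voxel-wise head. Tensor splitting for overlapping 3D convolutions and true pipeline parallelism are more involved and are not shown; all names here are illustrative.

    import torch
    import torch.nn as nn
    from torch.utils.checkpoint import checkpoint

    class CheckpointedStage(nn.Module):
        """Wrap a stage so its activations are recomputed in backward instead of stored."""
        def __init__(self, stage: nn.Module, enabled: bool = True):
            super().__init__()
            self.stage, self.enabled = stage, enabled

        def forward(self, x):
            if self.enabled and self.training:
                return checkpoint(self.stage, x, use_reentrant=False)
            return self.stage(x)

    def depth_chunked(module: nn.Module, volume: torch.Tensor, chunks: int = 4):
        """Run a voxel-wise module on depth chunks to bound peak activation memory."""
        parts = [module(part) for part in torch.chunk(volume, chunks, dim=2)]
        return torch.cat(parts, dim=2)

    stage = CheckpointedStage(nn.Sequential(
        nn.Conv3d(1, 16, 3, padding=1), nn.GELU(), nn.Conv3d(16, 16, 3, padding=1)))
    head = nn.Conv3d(16, 2, kernel_size=1)  # 1x1x1 conv: depth chunking is exact here

    x = torch.randn(1, 1, 64, 128, 128, requires_grad=True)
    out = depth_chunked(head, stage(x))
    out.mean().backward()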
Downstream Impact: Stroke AUC +0.02, DSC +0.07
  • Fine-tuning the foundation encoder on stroke tasks lifted acute/hyperacute infarct classification AUC from 0.92 → 0.94 and segmentation Dice from 0.68 → 0.75, confirming the backbone transfers to discriminative tasks even though the pretext objectives emphasize reconstruction/contrastive signals.
  • The improved convergence speed translated into materially faster experimentation cycles and lower labeled-data demand for existing problem statements.
Explorations & Hand-offs: Perceiver trials, sister-team 2D MAE
  • Investigated a perceiver-style VAE to map arbitrary CT volumes (ranging from 32×384×384 to 2000×1024×1024) into fixed-length embeddings. Despite extensive tuning, cross-attention couldn’t retain the high-frequency detail without exploding embedding sizes, so reconstructions stayed mediocre and downstream lifts were negligible.
  • Paused broader downstream benchmarking to unblock urgent lung-cancer deliverables (nodule characteristics/ranking). A sister team continued foundation work on 2D CT slices with DINOv2 + MAE ViTs, while our 3D weights remain ready for the next wave of volumetric tasks.

3) Lung Cancer AI Platform (qCT)

Summary: Joined the lung-cancer initiative in 2024 to transplant my supervised computer-vision toolkit into a multimodal CT/PET pipeline, strengthening nodule characterization, detection research, and malignancy-risk modeling while mentoring 3 junior scientists and keeping the product’s data health visible via automated observability hooks. Mission: Help radiologists, pulmonologists, and thoracic surgeons surface clinically urgent nodules early—whether discovered on LDCT screening, PET/CT follow-ups, or incidental findings—and feed them consistent rankings, visualizations, and risk scores that accelerate reporting and patient routing.

Scope & Tooling: CT-first remit, multimodal inputs
  • Focus on the CT/PET portion of the platform while sister teams specialize in X-ray-first screening; collaborate on shared annotations and cross-modality heuristics so nodules discovered on X-ray can be traced on CT follow-ups.
  • Curated ~22k labeled nodules with multiple reads for characteristic classification, ~27k LDCT nodules for detection research, and continue to expand with CT/PET-CT pairs, biopsy notes, and longitudinal CT reports for malignancy risk modeling.
Nodule Characteristics & Ranking: Calcification AUC 0.97, spiculation 0.84, Ranking NDCG@5 0.92
  • Raised calcification classification AUC from 0.93 → 0.97 (sensitivity/specificity 0.94/0.76 → 0.93/0.96) by cleaning the 22k-nodule dataset, sampling datapoints more intelligently, applying HU-value heuristics, and training with an inverse-frequency class-balanced cross-entropy loss I designed (sketched after this list). Model is live in production.
  • Boosted spiculation classification AUC from 0.80 → 0.84 (sensitivity/specificity 0.57/0.87 → 0.60/0.91) by treating it as a regression problem, introducing a context crop that better captures the abnormality and its surroundings, and reusing the same inverse-frequency class-balanced cross-entropy loss as calcification. Model is live in production.
  • Designed a ranking engine that scores nodules by clinical urgency using calcification, spiculation, texture, juxta-pleural/perifissural location, diameter, and volume; radiologists now rely on its ordered worklist during reporting sessions, noting markedly faster prioritization even without a historical baseline (average NDCG@5 of 0.921).
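A hedged sketch of an inverse-frequency class-balanced cross-entropy, assuming weights derived from observed class counts; the exact formulation used in production may differ, and the counts below are made up.

    import torch
    import torch.nn as nn

    class InverseFrequencyCE(nn.Module):
        def __init__(self, class_counts: list[int], smoothing: float = 1.0):
            super().__init__()
            counts = torch.tensor(class_counts, dtype=torch.float32) + smoothing
            weights = counts.sum() / counts          # rarer class -> larger weight
            weights = weights / weights.mean()       # normalize around 1.0
            self.register_buffer("weights", weights)

        def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
            return nn.functional.cross_entropy(logits, targets, weight=self.weights)

    # Example: heavily imbalanced calcification-style labels (illustrative counts)
    loss_fn = InverseFrequencyCE(class_counts=[18000, 3000, 1000])
    logits = torch.randn(8, 3)
    targets = torch.randint(0, 3, (8,))
    print(loss_fn(logits, targets))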
Detection Research & Evaluation: DETR + Swin/ViT3D, custom 3D mAP
  • Prototyped DETR and Deformable DETR pipelines with SwinV2-3D and ViT-3D backbones to replace the legacy RetinaNet detector, moving beyond the old precision/recall-only checks by implementing bespoke 3D mAP/mAR metrics and IoU thresholds that respect anisotropic CT voxels (an IoU sketch follows this list).
  • Demonstrated that limited (27k) noisy LDCT annotations capped DETR’s gains, documenting the data/label gaps and handing the evaluation harness to the next cycle so the team can quickly re-test when scale improves.
  • Hardened the existing RetinaNet pipeline by backporting the new evaluation suite, giving product managers clearer launch criteria for newer models even though the DETR track was paused.
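A minimal sketch of 3D IoU computed in millimetres so anisotropic voxels are respected, assuming a hypothetical (z1, y1, x1, z2, y2, x2) box format and illustrative spacing values; the full mAP/mAR harness builds on a function like this.

    import numpy as np

    def iou_3d(box_a, box_b, spacing=(2.5, 0.7, 0.7)):
        """IoU of two axis-aligned boxes given as (z1, y1, x1, z2, y2, x2) voxel coords."""
        scale = np.array(spacing * 2, dtype=float)   # convert both corners to millimetres
        a = np.asarray(box_a, dtype=float) * scale
        b = np.asarray(box_b, dtype=float) * scale
        lo = np.maximum(a[:3], b[:3])
        hi = np.minimum(a[3:], b[3:])
        inter = np.prod(np.clip(hi - lo, 0, None))
        vol_a = np.prod(a[3:] - a[:3])
        vol_b = np.prod(b[3:] - b[:3])
        return inter / (vol_a + vol_b - inter + 1e-9)

    # A 1-voxel z offset costs far more overlap than a 1-voxel x offset on thick slices
    print(iou_3d((10, 50, 50, 14, 70, 70), (11, 50, 50, 15, 70, 70)))
    print(iou_3d((10, 50, 50, 14, 70, 70), (10, 50, 51, 14, 70, 71)))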
Emerging Models & Multimodal Research: Gemini trials, malignancy score, PET/CT fusion
  • Explored Gemini Pro via Portkey/OpenAI APIs to classify nodules and filter confounders directly from CT slice stacks; documented the modality mismatch (2D/video inputs vs. 3D HU volumes) and why the approach underperformed compared to the X-ray team’s success, saving future cycles.
  • Currently architecting a proprietary lung-nodule malignancy score that fuses PET-CT uptake patterns, CT morphometrics, biopsy outcomes, and longitudinal reports; work is under wraps until patents/publications land, but the data contracts, schemas, and training scaffolds are ready.
Visualization & Monitoring: CT→X-ray projection, Grafana signals
  • Adapted an internal CT-to-X-ray projection algorithm (leveraging CUDA shared objects for speed) to denoise scans, strip patient-table artifacts, and render realistic projections that highlight detected nodules for customer demos and follow-up planning (a simplified sketch follows this list).
  • Established a weekly Grafana-driven digest with product managers that surfaces client input distributions (scanner types, slice thickness, LDCT vs. diagnostic CT ratios) so product and data teams spot drift before it hits model performance.
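A simplified NumPy illustration of the core projection idea only; the internal algorithm (CUDA shared objects, table removal, denoising) is considerably more involved, and the HU range and axis choice below are assumptions.

    import numpy as np

    def project_ct(volume_hu: np.ndarray, axis: int = 1,
                   hu_range=(-1000.0, 1500.0)) -> np.ndarray:
        """volume_hu: (depth, height, width) array in Hounsfield units."""
        clipped = np.clip(volume_hu, *hu_range)
        attenuation = (clipped - hu_range[0]) / (hu_range[1] - hu_range[0])
        projection = attenuation.mean(axis=axis)
        # Normalize to 0-255 for display alongside detected-nodule overlays
        lo, hi = projection.min(), projection.max()
        projection = (projection - lo) / (hi - lo + 1e-6)
        return (projection * 255).astype(np.uint8)

    fake_ct = np.random.uniform(-1000, 400, size=(200, 256, 256)).astype(np.float32)
    xray_like = project_ct(fake_ct, axis=1)   # axis choice depends on scan orientation
    print(xray_like.shape, xray_like.dtype)   # (200, 256) uint8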
Leadership & Collaboration: Supervisory role, CT↔X-ray alignment
  • Operate in a supervisory capacity, guiding junior scientists on experimentation hygiene, reviewing their PRs, and aligning CT-model deliverables with client commitments while X-ray peers tackle parallel milestones.
  • Partner with clinicians, product strategists, and the CT/PET-CT research pod to ensure every model spec maps to a real reporting constraint, whether that’s screening-camp throughput or early detection and management of lung cancer.

4) Generative AI

Summary: Prototyped CT-native generative pipelines—disease-aware mathematical augmentations, hierarchical VAEs, and latent diffusion planning—to stretch tiny labeled cohorts into statistically useful corpora for stroke and lung cancer research without compromising clinical realism. Mission: Pair physics-anchored heuristics with modern generative modeling so every downstream model, especially the data-starved core/penumbra stack, benefits from richer edge cases and higher-fidelity reconstructions than raw hospital submissions alone can offer.

Pathophysiologic Augmentations: Hyperacute infarcts, +80% dataset
  • Modeled hyperacute infarcts and core/penumbra surrogates on NCCT by simulating water-content loss, gray–white blurring, and density drop-offs that mirror early ischemia progression instead of simple copy-paste patches (a toy sketch follows this list).
  • Tuned lesion geometry, HU decay, and peri-lesional gradients per presentation so clinicians couldn’t visually separate the synthetic scans from real early-onset cases.
  • Augmented the scarce perfusion-aligned dataset from ~500 → ~900 scans (+80% usable volume) while keeping class balance in check, which materially stabilized the core/penumbra segmentation training described in qER.
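A toy sketch of the augmentation idea, assuming a spherical lesion with Gaussian-smoothed edges and a jittered HU drop; the tuned geometry, HU decay curves, and peri-lesional gradients used in practice are not reproduced here.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def add_synthetic_hypodensity(volume_hu, center, radius_vox=12, hu_drop=8.0,
                                  edge_sigma=3.0, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        zz, yy, xx = np.indices(volume_hu.shape)
        dist = np.sqrt((zz - center[0]) ** 2 + (yy - center[1]) ** 2 + (xx - center[2]) ** 2)
        lesion = (dist <= radius_vox).astype(np.float32)
        lesion = gaussian_filter(lesion, sigma=edge_sigma)   # soft edges / peri-lesional gradient
        jitter = rng.normal(1.0, 0.1, size=volume_hu.shape)  # avoid a perfectly uniform drop
        return volume_hu - hu_drop * lesion * jitter, lesion

    volume = np.random.normal(35, 5, size=(64, 128, 128)).astype(np.float32)  # brain-like HU
    augmented, mask = add_synthetic_hypodensity(volume, center=(32, 60, 70))
    # Mean drop in the lesion core roughly matches the configured hu_drop
    print(float(volume[mask > 0.5].mean() - augmented[mask > 0.5].mean()))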
Hierarchical NVAE Compression: SSIM 0.96, PSNR 36
  • Found that a single high-compression VAE bottleneck couldn’t capture CT micro-structure and was prohibitively expensive, so I built a hierarchical NVAE trained scale-by-scale to keep per-stage compute tractable.
  • The staged training unlocked reconstructions with SSIM ≈ 0.96 and PSNR ≈ 36 dB, close enough to source scans for both radiologist review and downstream latent modeling.
  • This NVAE became the latent workspace for prospective diffusion models, giving us deterministic, high-fidelity encodings without bloating storage or retraining costs.
Diffusion Research & Hand-off: MAISI superseded in-flight work
  • Began architecting latent diffusion models per NVAE scale, but the three-scale design meant any weak diffuser degraded final outputs, making the effort brittle.
  • When NVIDIA released MAISI with an open-source single-scale VAE + latent diffusion combo that matched our fidelity (albeit with lower compression), we retired the in-house diffusion track to avoid redundant maintenance.
  • Documented our findings and handed the baton to a sister group that focused on controllability and conditioning of MAISI-generated CT volumes for future data-augmentation programs.

5) Architecture Implementations

Summary: Authored and maintain vision_architectures, a plug-and-play 3D CT model library that reimplements research-grade encoders/decoders (DETR3D, FPN3D, MaxViT3D, SwinV2-3D, etc.) with granular building blocks so experimentation never stalls just because an architecture shipped only with 2D code. Mission: Give every CT scientist at Qure a production-ready toolkit (documented here and adopted across the R&D team) that balances academic fidelity with pragmatic controls (checkpointing, tensor-parallel tricks, custom losses) to accelerate model bring-up and keep our deployed stacks consistent.

Library Scope: Transformers, detectors, decoders
  • Implemented 3D variants of DETR/DETR3D, FPN/UPerNet heads, SwinV2, MaxViT, ViT, CaiT, etc., and supporting blocks (SE, transformer layers, codebooks, patch/voxel embeddings) so researchers can compose novel stacks without hunting for partial repos.
  • Every module exposes consistent configs (depths, window sizes, dilation, attention flavor) and ships with inference/training harnesses, letting us swap backbones mid-experiment without touching downstream code.
  • Although it started as a personal project to deep-dive into the intricacies of these architectures, the package now underpins most CT production models and is the default scaffold for new proof-of-concept work.
Modularity & Training Ergonomics: Checkpoint toggles, tensor splitting
  • Baked activation-checkpoint controls into every block: a single flag cascades through embeddings, latent layers, and heads, giving us fine-grained memory/compute trade-offs without bespoke patches per architecture (the pattern is sketched after this list).
  • Added tensor-splitting convolution kernels and safe gradient accumulation helpers so ultra-high-resolution CT volumes fit across our GPU fleet without manual surgery each time.
  • Provided reusable modules for latent-space adapters (e.g., cross-attention bridges, codebook quantizers) so multimodal and diffusion-heavy projects can still lean on the same core library.
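A hedged sketch of the cascading-toggle pattern; the class and config names here are hypothetical and do not mirror the actual vision_architectures API.

    import torch
    import torch.nn as nn
    from dataclasses import dataclass
    from torch.utils.checkpoint import checkpoint

    @dataclass
    class BlockConfig:
        channels: int = 32
        depth: int = 4
        checkpointing: bool = False   # one flag, honoured by every block below

    class ConvBlock(nn.Module):
        def __init__(self, cfg: BlockConfig):
            super().__init__()
            self.cfg = cfg
            self.body = nn.Sequential(
                nn.Conv3d(cfg.channels, cfg.channels, 3, padding=1), nn.GELU())

        def forward(self, x):
            if self.cfg.checkpointing and self.training:
                return checkpoint(self.body, x, use_reentrant=False)
            return self.body(x)

    class Encoder(nn.Module):
        def __init__(self, cfg: BlockConfig):
            super().__init__()
            self.stem = nn.Conv3d(1, cfg.channels, 3, padding=1)
            self.stages = nn.ModuleList(ConvBlock(cfg) for _ in range(cfg.depth))

        def forward(self, x):
            x = self.stem(x)
            for stage in self.stages:
                x = stage(x)
            return x

    enc = Encoder(BlockConfig(checkpointing=True)).train()
    out = enc(torch.randn(1, 1, 32, 64, 64, requires_grad=True))
    out.mean().backward()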
Utilities & Adoption: Custom losses, schedulers, EMA
  • Shipped an inverse-frequency class-balanced cross-entropy loss, Dice/IoU hybrids, sliding-window inference helpers, EMA helpers for teacher–student training, LR schedulers, and diffusion noise schedulers so experimentation stays centralized.
  • Wrote thorough docstrings plus lightweight docs (linked above) so teammates can onboard quickly; code style mirrors our production stack (type hints, tests, pre-commit) for easy vendoring.
  • Library is now standard issue for the CT R&D org—production models in qER and qCT tracks all import these modules, which keeps architectural drift low and review cycles short.

6) Data & Annotations

Summary: Own the data acquisition, curation, and annotation programs for two CT-first, multimodal products, safeguarding 30+ TB of vendor, research, and client data (arriving via S3 buckets, other cloud shares, and literal hard-drive shipments) while transforming every raw submission into a standardized, analysis-ready corpus. Built an end-to-end operating model—from ingestion and metadata modeling to annotation orchestration and QA—that keeps R&D unblocked, gives product teams instant visibility into data readiness, and sustains high annotator satisfaction even as volume exploded.

Scope & Data Governance: 30+ TB, multi-modality, compliance-ready
  • Manage 30+ TB of training, validation, and testing data spanning CT, CTA, perfusion, X-ray, PET-CT, biopsy, and hybrid patient datapoints sourced from vendors, open datasets, research collaborators, and both prospective and live clients—regardless of whether the drop arrives through S3/GCS/Azure shares, secure FTP, or encrypted hard drives.
  • Ingest unstructured deliveries, normalize them into a universal folder/schema layout, and capture every metadata field in strongly-typed BSON documents with canonical naming so downstream teams can query any cohort without spelunking raw disks.
  • Run automated integrity checks (modality compliance, DICOM completeness, scan contract specs, corruption and duplication detection) before a study is accepted, and log all results to a Postgres-driven ingestion tracker that ultimately publishes authoritative references into MongoDB.
  • Classify each DICOM series into modality / protocol buckets (non-contrast head CT, CTA, perfusion, etc.), tag their viable problem statements, and persist those tags in MongoDB for instant cohort filtering.
  • Cache frequently accessed scans as memory-efficient safetensors blobs (sketched after this list), shrinking read latency by ~80% while staying within tight on-prem / cloud storage budgets.
  • Ensure every ingestion, transformation, and storage workflow adheres to the company’s multi-framework regulatory obligations for medical data handling, with audit-ready trails baked into the process definitions.
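A minimal sketch of the safetensors caching idea, assuming the numpy backend of the safetensors library; paths, keys, and dtypes below are illustrative.

    import numpy as np
    from pathlib import Path
    from safetensors.numpy import save_file, load_file

    def cache_series(volume_hu: np.ndarray, spacing, cache_path: Path) -> None:
        save_file(
            {
                "volume": volume_hu.astype(np.int16),               # HU fits in int16
                "spacing": np.asarray(spacing, dtype=np.float32),   # keep geometry with the voxels
            },
            str(cache_path),
        )

    def load_series(cache_path: Path):
        tensors = load_file(str(cache_path))
        return tensors["volume"].astype(np.float32), tensors["spacing"]

    cache = Path("/tmp/series_0001.safetensors")   # hypothetical cache location
    cache_series(np.random.randint(-1000, 1500, size=(100, 256, 256)).astype(np.int16),
                 spacing=(1.0, 0.7, 0.7), cache_path=cache)
    volume, spacing = load_series(cache)
    print(volume.shape, spacing)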
Annotation Operations: Taxonomy, tooling, validation
  • Define per-problem taxonomies, sampling rules, and prioritization logic so every annotation batch directly advances a product hypothesis or research deliverable.
  • Authored upload/download automation for the RedBrick AI portal, including scripts that package imaging + metadata, push batches, and pull completed labels with version tracking.
  • Built validation daemons that inspect coverage immediately after each datapoint is annotated, flagging incompleteness or schema drift in near real time and driving re-annotation rates to ~0% (a toy check is sketched after this list).
  • Process downloaded annotations into lightweight, queryable stores with fast filtering (e.g., by modality, labeler, abnormality) so model training pipelines can materialize cohorts without manual wrangling.
  • Developed NLP + LLM parsing utilities powered by a stack of proprietary models, GPT, Gemini, and self-hosted Qwen instances to read radiology reports, extract hierarchical findings/attributes, and align them with structured tags to boost weak supervision and cohort triage.
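A toy sketch of a post-annotation coverage check; the taxonomy, field names, and rules below are hypothetical and only illustrate the kind of validation the daemons perform.

    REQUIRED_FIELDS = {
        "nodule_characteristics": {"calcification", "spiculation", "texture", "diameter_mm"},
        "infarct_segmentation": {"mask_uri", "laterality", "annotator_id"},
    }

    def validate_annotation(payload: dict, task: str) -> list[str]:
        """Return a list of problems; an empty list means the datapoint is accepted."""
        problems = []
        missing = REQUIRED_FIELDS[task] - payload.keys()
        if missing:
            problems.append(f"missing fields: {sorted(missing)}")
        if task == "nodule_characteristics" and payload.get("diameter_mm", 0) <= 0:
            problems.append("non-positive diameter")
        return problems

    payload = {"calcification": "absent", "spiculation": "present", "diameter_mm": 7.5}
    print(validate_annotation(payload, "nodule_characteristics"))  # flags the missing texture field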
Quality & Issue Resolution: Concordance-first workflows
  • Partnered with product POCs (who manage annotator contracts) to codify escalation paths so R&D only intervenes for nuanced clinical clarifications while still getting rapid answers.
  • Tackled ambiguous problem statements by rolling out secondary review passes, consensus templates, and label-specific heuristics that maximize usable signal despite inherent reader variability.
  • Maintained structured QC logs that correlate annotator performance, modality difficulty, and downstream model impact, ensuring noisy labels are filtered or reweighted before training.
  • Increased annotator satisfaction (surveyed by the product team) by giving them clearer instructions, tighter taxonomies, and faster tooling.
Team Leadership & Collaboration: Managed 3 Data Engineers, scaling via playbooks
  • After bootstrapping the system, recruited, trained, and now supervise three data engineers who run day-to-day ingestion and annotation ops, each capable of adapting the framework to new client quirks or modalities.
  • Provide KT packets, SOPs, and shadowing sessions so engineers can troubleshoot vendor datasets independently while still escalating blocking edge cases to me.
  • Coordinate with the broader product, BI, and engineering orgs for capacity planning, security reviews, and audit readiness.
Impact: Throughput, latency, satisfaction
  • Achieved a raw 18× increase in annotation throughput; even after normalizing for 3× higher label demand and a 2× larger annotator pool, per-annotator productivity still improved ~3×.
  • Validation scripts cut re-annotation loops to nearly 0%, freeing annotators to focus on net-new data instead of fixes.
  • Safetensor caching reduced data access time by ~80%, enabling near-immediate fulfillment of ad-hoc analysis and training-data requests.
  • Maintained annotation quality at previous baselines despite much higher volume, while overall turnaround for data/annotation analysis requests dropped sharply thanks to the trained three-person team (qualitatively observed by stakeholders).

7) Production Codebase

Summary: Architected and own a production‑grade AI pipeline framework for head CT/CTA/MRI triage that reduced turnaround time by 57%, increased automated test coverage from 22% → 91%, cut new‑model integration time from ~8 days → ~1 day, and dropped configuration errors from ~700/year → 0/year along with processing errors from ~500/year → 2/year.

Ownership & Scope: Hybrid deployments, resource-aware, 2k scans/mo
  • Lead and own the end‑to‑end AI production codebase for the neuro‑imaging triage product, covering head CT, CTA, and MRI across on‑prem, cloud, and hybrid deployments.
  • Designed the system to operate reliably under CPU/GPU constraints (e.g., production on AWS g4dn.4xlarge with limited GPU and RAM), adding failsafes so models continue to run gracefully even under resource pressure.
  • Support ~2,000 valid series per month through this pipeline while maintaining strict reliability and performance guarantees.
Architecture & Implementation: Modular TorchScript graph pipelines
  • Designed and implemented the entire Python codebase from scratch, building a modular, fault‑tolerant pipeline framework that decouples TorchScript‑serialized models per CT/CTA study, each with its own preprocessing, postprocessing, and multi‑level collation stages.
  • Introduced a custom process‑graph / conditional‑subgraph abstraction with node‑level logging, benchmarking, and multiprocessing, allowing independent steps to run in parallel, with clear failure tracebacks and automatic merging of longest‑common‑prefix graphs to deduplicate shared processing (a simplified sketch follows this list).
  • Ensured backward compatibility so existing models and legacy pipelines continue to run unchanged, while new models can be plugged into the same modular framework without disrupting production.
  • Implemented output caching and database‑backed persistence for intermediate and final results, keeping outputs serializable and memory‑efficient to respect RAM limits on large CT scans.
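A simplified sketch of the process-graph idea, assuming a plain-Python dependency-ordered runner; the production framework adds conditional subgraphs, multiprocessing, benchmarking, and longest-common-prefix merging, none of which are shown here.

    import time
    import logging
    from typing import Callable

    logging.basicConfig(level=logging.INFO)

    class Node:
        def __init__(self, name: str, fn: Callable, deps=()):
            self.name, self.fn, self.deps = name, fn, list(deps)

    class ProcessGraph:
        def __init__(self, nodes):
            self.nodes = {n.name: n for n in nodes}

        def run(self, **inputs):
            results, done, pending = dict(inputs), set(inputs), dict(self.nodes)
            while pending:
                ready = [n for n in pending.values() if set(n.deps) <= done]
                if not ready:
                    raise RuntimeError("cycle or missing dependency in graph")
                for node in ready:                       # parallelizable in principle
                    start = time.perf_counter()
                    results[node.name] = node.fn(*(results[d] for d in node.deps))
                    logging.info("%s took %.3fs", node.name, time.perf_counter() - start)
                    done.add(node.name)
                    del pending[node.name]
            return results

    graph = ProcessGraph([
        Node("preprocess", lambda scan: scan.lower(), deps=["scan"]),
        Node("model_a", lambda x: f"A({x})", deps=["preprocess"]),
        Node("model_b", lambda x: f"B({x})", deps=["preprocess"]),
        Node("collate", lambda a, b: {"a": a, "b": b}, deps=["model_a", "model_b"]),
    ])
    print(graph.run(scan="SERIES-123")["collate"])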
Configuration Management & Safety: Nested Pydantic configs, backward compatible
  • Architected a nested Pydantic‑based client‑configuration schema that strictly validates model selection, thresholds, routing, and reporting rules, making it effectively impossible to persist invalid configurations in the database (illustrated after this list).
  • Migrated all existing client configs into the new structure, validating them and enforcing a standardized, modular config format where all clients share a common base schema plus optional, explicit customizations.
  • Passed model outputs at every collation stage through Pydantic model classes as an additional safety and consistency check, even though the code path already guarantees correct types in 99.99% of cases.
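A hedged sketch of the nested-validation pattern using Pydantic v2; the field names and rules are illustrative, not the actual client schema.

    from pydantic import BaseModel, Field, field_validator

    class ModelConfig(BaseModel):
        name: str
        enabled: bool = True
        threshold: float = Field(..., ge=0.0, le=1.0)   # out-of-range values fail validation

    class RoutingConfig(BaseModel):
        notify_on: list[str] = Field(default_factory=list)
        hub_site_id: str | None = None

    class ClientConfig(BaseModel):
        client_id: str
        models: list[ModelConfig]
        routing: RoutingConfig = Field(default_factory=RoutingConfig)

        @field_validator("models")
        @classmethod
        def at_least_one_enabled(cls, models):
            if not any(m.enabled for m in models):
                raise ValueError("at least one model must be enabled")
            return models

    cfg = ClientConfig(
        client_id="site-042",
        models=[{"name": "cta_lvo", "threshold": 0.62}],
        routing={"notify_on": ["lvo_positive"], "hub_site_id": "hub-007"},
    )
    print(cfg.model_dump())
    # A threshold of 1.4 or an all-disabled model list raises ValidationError here,
    # long before anything reaches the database.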
Quality, Testing, and Tooling: 2k+ tests, enforced pre-commit + CI
  • Built >2,000 unit and integration tests using pytest (with parameterization), including end‑to‑end tests for every model pipeline on real data and collation‑level tests on real outputs, raising coverage from 22% to 91% (the test style is sketched after this list).
  • Enforced a high‑quality development workflow with Black, isort, Flake8, and pyupgrade, all wired into pre‑commit hooks so every contributor follows the same style and linting rules.
  • Integrated with GitHub Actions (for CODEOWNERS and review ownership of the R&D code) and Jenkins (for build, test, and deployment pipelines managed by engineering), and worked with QA/engineering teams who run additional regression and golden‑case tests before productizing new models.
  • Documented the codebase with Google‑style docstrings, type hints, and graph visualizations of the processing pipelines to make behavior transparent for both engineers and non‑R&D stakeholders.
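A hedged sketch of the parameterized test style; the toy pipeline and registry below are stand-ins so the example stays self-contained, not the production framework.

    import pytest

    class ToyPipeline:
        """Stand-in for a TorchScript-backed model pipeline."""
        def __init__(self, name: str):
            self.name = name

        def run(self, study: dict) -> dict:
            score = 0.5 if study.get("series") else 0.0
            return {"pipeline": self.name, "status": "completed", "score": score}

    REGISTRY = {name: ToyPipeline(name) for name in ["ncct_infarct", "cta_lvo", "ncct_ich"]}

    @pytest.fixture
    def sample_study():
        return {"study_uid": "1.2.3", "series": ["ncct", "cta"]}

    @pytest.mark.parametrize("pipeline_name", sorted(REGISTRY))
    def test_pipeline_end_to_end(pipeline_name, sample_study):
        result = REGISTRY[pipeline_name].run(sample_study)
        assert result["status"] == "completed"
        assert 0.0 <= result["score"] <= 1.0   # outputs must stay in a calibrated range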
Reliability, Monitoring, and Impact: Resource benchmarking, Slack alerts, < 2 incidents
  • Performed extensive benchmarking under multiprocessing to ensure that total CPU, GPU, and RAM usage stays within safe limits; added failsafes and graceful‑degradation paths for resource‑related failures.
  • Enabled Slack‑based alerting (implemented with the engineering and business‑integrations teams) for errors, weekly volume metrics, and other key product KPIs.
  • Reduced end‑to‑end CT/CTA turnaround time by 57%, cut new‑model integration time from ~8 days to ~1 day, and achieved zero configuration‑related failures with only two processing‑pipeline issues in two years, both resolved in ≤2 days (vs. ~14 days previously).
Collaboration & Leadership: Sole R&D owner → mentor, cross-org standards
  • Served as the sole R&D owner for the production pipeline, then transitioned into a mentorship and review role after knowledge transfer, as additional engineers began contributing to the codebase.
  • Standardized pre‑commit and testing practices for all technical contributors working on this product, driving a company‑wide uplift in code quality for this area.
  • Collaborated closely with the product’s engineering lead, core engineering team, and business‑integrations team to align interfaces, deployment strategy, monitoring, and operational processes.

Publications

Reeborn TotalHealth

Title History

Co-founder
Aug 2021 - Jun 2022
11 months

Overview

I set up Reeborn to explore whether longitudinal data from wearables, CGMs, and patient preference logs could drive AI-led nutrition and exercise prescriptions that push diabetes, hypertension, and obesity toward remission. The ambition was to pair that data with tailored diet and workout playbooks so physicians could taper medications once biomarkers stabilized. I also began sketching a lightweight digital-twin simulator to test interventions virtually before asking patients to make real lifestyle changes, helping them trust the plan. A few months in, I realized the business-building pieces (sales, ops, regulatory) needed more time, capital, and experience than I could spare while still excelling at the technical work I love. I wound down the venture quickly and redirected the learnings into my full-time work at Qure, where the ecosystem to ship regulated AI already existed.

Aditya Jyot Eye Hospital

Title History

Research Intern
May 2019 - Jul 2019
3 months

Overview

During this internship, I set out to quantify how diabetic retinopathy erodes the quality of life for both patients and caregivers so the condition could be treated as life-altering, not merely chronic. My teammate and I built a structured survey, had it vetted by clinicians, and then ran a 115-person field study across Mumbai eye camps to capture financial, emotional, and functional burden. We compiled the analysis and submitted it to India’s Ministry of Health & Family Welfare (MoHFW) to advocate for higher public funding for diabetic retinopathy screening and rehabilitation programs. A brief report can be found here.

Projects

Diabetic Retinopathy Quality-of-Life Study

Summary: Led the end-to-end research workflow—from questionnaire design through policy hand-off—to measure how diabetic retinopathy impacts daily living in low-resource settings.

My Contributions: Questionnaire, field ops, policy brief
  • Partnered with a fellow BITS researcher and Aditya Jyot ophthalmologists to craft a 40-question instrument spanning vision disability, caregiver load, and cost-of-care markers, iterating until it satisfied the hospital’s ethics checklist.
  • Led on-ground data collection for 115 participants across Mumbai clinics, ensured anonymized storage, and ran descriptive statistics plus correlation slices that highlighted how vision loss cascades into income loss and caregiver burnout.
  • Authored the final dossier and submitted it through Aditya Jyot to MoHFW, pairing the dataset with funding and screening recommendations.

TATA Power

Title History

Digitization Intern
Jun 2018 - Jul 2018
2 months

Overview

Shadowed Tata Power’s digitization pod to understand how field data feeds reliability programs, primarily labeling thermal imagery from UAV flyovers and wiring up a proof-of-concept sensor rig. The internship was lightweight—more observational and data-handling than model-building—but it gave me early exposure to how utilities scope problems before deeper AI investment.

Projects

Solar Fault Triage Pilot

Role: Thermal-image labeling, issue cataloging
  • Annotated hot-spot classes on UAV thermal passes over solar farms and kept the metadata tidy so the core team could experiment with simple classifiers.
  • Logged mentor feedback on likely failure modes (soiling, connector heat, string imbalance), which later informed how we binned images, but I did not train the downstream models myself.

Warehouse Equipment Health Prototype

Role: Arduino + ESP8266 mock-up
  • Helped assemble a rough sensor stack (Arduino Uno, ESP8266, basic vibration/temperature probes) that streamed readings to a dashboard to demonstrate what continuous monitoring could look like.
  • Documented the prototype limitations and hand-off notes so the in-house engineers could decide whether to pursue a production-grade version later.