Denspath compares plausible kinetic mechanisms, generates challenger models, and produces a reviewable evidence package before structural model decisions are locked.
Best overall fit with interpretable persistence behavior and cleaner late-phase diagnostics.
Retained because it explains early contraction differently and remains plausible on current data.
Add late-phase sampling at days 60–90 to separate persistence mechanisms before locking the structural model.
Designed to slot into incumbent MIDD workflows rather than replace them.
This isn't generic model selection. It's mechanism discrimination for CAR-T and adjacent cell-therapy workflows — where the wrong structural choice is expensive.
Expansion, contraction, persistence, exhaustion, and target effects can all matter. The hard part is often deciding which mechanism family is even plausible.
Senior modelers test a few mechanism variants manually, usually under time pressure. The result is often “best among what we tried,” not a systematic discrimination workflow.
Reviewers ask why one mechanism was selected over another. Diagnostics exist, but assumption differences, challenger models, and next-step evidence are still assembled manually.
Model-informed drug development is review-heavy by design. The harder part is upstream: deciding which mechanism family is even plausible, and explaining why one structural choice is defensible. That's where Denspath sits.
One workflow, one buyer story, one output: a defendable mechanism package.
Denspath ingests the study data and generates candidate mechanism families plus challenger models that test alternative structural assumptions. The goal is not a magical single answer. The goal is a credible comparison set.
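What a credible comparison set looks like can be sketched in plain Python: fit two candidate structural forms to synthetic post-peak data (a mono-exponential decline and a bi-exponential challenger) and rank them by AIC. This is an illustrative toy, not Denspath's internal method; the synthetic data, the rate grid, and the parameter counts are all assumptions made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic post-peak decline: fast contraction plus a slow persistence tail
t = np.linspace(0, 90, 40)                       # days after peak
true = 1000 * np.exp(-0.25 * t) + 30 * np.exp(-0.01 * t)
y = true * np.exp(rng.normal(0, 0.1, t.size))    # ~10% multiplicative noise

def fit_exponentials(n_terms):
    """Grid-search decay rates; solve amplitudes by linear least squares
    (variable projection). Returns best residual sum of squares and the
    number of free parameters."""
    rates = np.logspace(-3, 0, 25)
    if n_terms == 1:
        combos = [(r,) for r in rates]
    else:
        combos = [(r1, r2) for r1 in rates for r2 in rates if r1 < r2]
    best = np.inf
    for rs in combos:
        X = np.exp(-np.outer(t, rs))             # basis of decaying exponentials
        amps, *_ = np.linalg.lstsq(X, y, rcond=None)
        best = min(best, np.sum((y - X @ amps) ** 2))
    return best, 2 * n_terms                     # each term: one rate + one amplitude

def aic(rss, k, n):
    return n * np.log(rss / n) + 2 * k

n = t.size
rss1, k1 = fit_exponentials(1)
rss2, k2 = fit_exponentials(2)
print(f"mono-exponential AIC = {aic(rss1, k1, n):.1f}")
print(f"bi-exponential   AIC = {aic(rss2, k2, n):.1f}")
```

The point of the comparison set is the ranking plus the assumption each candidate encodes, not a single "winning" number.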
A reviewable package that fits into the workflow already used for MIDD work.
Candidate mechanism families with explicit assumptions, ranked by fit and diagnostic quality.
Alternative structural forms generated to stress the leading explanation, not just confirm it.
Side-by-side evidence showing where candidates agree, where they diverge, and where each one breaks.
A concrete recommendation for the experiment most likely to resolve the remaining mechanism ambiguity.
A reviewable document explaining what was tested, what changed across models, and why the selected mechanism is defensible.
Exportable package for the tools the team already trusts rather than a forced platform migration.
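The next-measurement recommendation can be illustrated with a toy calculation: given two candidate persistence mechanisms that agree on early data, find where their predictions diverge most relative to assay noise. The two curves, the 15% CV, and the signal-to-noise criterion are illustrative assumptions for the sketch, not Denspath's actual algorithm.

```python
import numpy as np

t = np.linspace(0.0, 90.0, 181)                  # candidate sampling days

# Two hypothetical candidates that fit early data equally well:
# same contraction phase, different persistence mechanism.
biexp   = 40 * (0.98 * np.exp(-0.3 * t) + 0.02 * np.exp(-0.02 * t))  # slow decay
plateau = 40 * (0.98 * np.exp(-0.3 * t) + 0.02)                      # stable pool

cv = 0.15                                        # assumed assay CV
noise_sd = cv * np.maximum(biexp, plateau)       # rough per-sample noise scale
separation = np.abs(biexp - plateau) / noise_sd  # signal-to-noise of one new sample
best_day = t[np.argmax(separation)]
print(f"sample near day {best_day:.0f} to separate the candidates")
```

In this toy the candidates are indistinguishable through the contraction phase and only separate late, which is why a late-phase sample is the informative one.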
A CAR-T cellular-kinetics program with ambiguous persistence behavior.
A modeling team has Phase I CAR-T kinetic data with rapid expansion, sharp contraction, and slow persistence. Standard small-molecule PK templates are not enough.
Today the senior modeler hand-tests a few structural variants, usually under time pressure. The comparison set is limited by what the team had time to propose, not by a systematic mechanism-discrimination workflow.
With Denspath, the team gets ranked candidate mechanisms, challenger models that stress the leading explanation, explicit assumption differences, and a recommendation for the next measurement that would actually separate the top two candidates.
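To make the data shape concrete: a minimal numpy sketch of the piecewise "exponential expansion, then biexponential decline" form often used as a starting point for CAR-T cellular kinetics. This is a generic published-style shape, not Denspath's model, and every parameter value here is hypothetical.

```python
import numpy as np

def car_t_kinetics(t, c0=0.1, rho=0.6, tmax=10.0, f_fast=0.98,
                   alpha=0.3, beta=0.005):
    """Piecewise form: exponential expansion to a peak at tmax, then a
    biexponential decline (fast contraction + slow persistence).
    All parameter values are invented for illustration."""
    t = np.asarray(t, dtype=float)
    cmax = c0 * np.exp(rho * tmax)               # level at the peak
    rise = c0 * np.exp(rho * t)                  # expansion phase
    fall = cmax * (f_fast * np.exp(-alpha * (t - tmax))
                   + (1 - f_fast) * np.exp(-beta * (t - tmax)))
    return np.where(t <= tmax, rise, fall)

days = np.array([0.0, 5.0, 10.0, 20.0, 60.0, 90.0])
print(np.round(car_t_kinetics(days), 3))
```

A single-compartment small-molecule template cannot produce both the rapid rise and the long shallow tail at once, which is why the structural choice, not just the fit, is the decision that matters.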
Denspath is not a replacement for existing modeling tools. It is a better upstream mechanism-comparison workflow.
| Capability | Denspath | Manual + template tools |
|---|---|---|
| Search space | Candidate mechanisms plus challenger structures | Template variants inside a predefined family |
| Model comparison | Explicit structural assumption diffs | Mostly implicit in equations and notes |
| Uncertainty handling | Can recommend what to test next | Usually manual follow-up planning |
| Review package | Diagnostics plus assumptions plus justification | Diagnostics plus manual narrative |
| Workflow fit | Exports into incumbent tools | Native to one toolchain at a time |
Denspath doesn't replace the estimation and review tools your team already trusts. It helps you lock a defensible structural model, then exports it into the workflow you use for fitting, identifiability, and simulation.
The first engagement should prove value on real data before any broader deployment discussion.
Side-by-side against the current workflow on a problem you scope with us. Success means the package changes model choice, review clarity, or experimental planning. Scope, duration, and pricing are agreed in the kickoff call.
Better mechanism comparison than the current process.
Clearer review packet for internal or regulatory discussion.
Evidence that the output changes either model choice or the next experiment.
Get a mechanism package that makes the alternatives explicit, surfaces the real disagreement, and shows what to test next.
Request a pilot