Welcome again after a few years, dear readers. Much has happened since the last posting – I went to Davis (CA) and back to Oxford, I have two children, wrote a book on statistics with Prof. David Eisner, and recently, a new model was born too: T-World (Paper 1, Paper 2). I thought I'd cover several points in this blog that complement the papers and share a bit of the background to its creation, without any need for salesmanship. Specifically, I'll address the following:
1) “You claim high generality of the model – but who cares?” I found it interesting that most people respond to the high generality of T-World with “obviously, this is what we always wanted”, but a nontrivial number of people also didn’t find it that important at first sight. Let’s go over some reasons why I think it matters deeply.
2) “Why did you guys take so long?” Yes, it took ages. I’m sorry; I’m the first to wish it had been done in a year or two. Why did we have to go back to the drawing board several times? This is introduced here and described in several posts of increasingly detail-oriented nature: 1) Overall architecture, 2) ICaL and RyR coupling, 3) ICaL model, and 4) Na-K pump voltage dependence. Concluding remarks are here.
These posts are written under the assumption that you are aware of what T-World is – they may be best read after reading the papers!
Why does generality of a computational model matter?
It is a truth universally acknowledged, that a single cellular mechanism is insufficient to drive arrhythmia in cardiac tissue, and that both an arrhythmic trigger and an arrhythmic substrate are needed. As 3D simulations become more common, we are going to want organ-scale simulations of arrhythmia that are increasingly sophisticated and increasingly less artificial. And if we want to build an organ model that can be used to investigate how early or delayed afterdepolarisations interact with alternans and steep restitution to produce arrhythmia, then we first need a cellular model that can reproduce all of those behaviours in the first place.
It matters in several other ways. First, in complex diseases such as heart failure or diabetes, the phenotype is rarely driven by a single mechanism. The diseased state usually reflects multiple interacting changes. So, some degree of generality matters not only for mechanistic accuracy, but also for interpretability and predictive power. If a model only works in one narrow context, it may still be useful, but it is much less likely to help us understand how a disease phenotype emerges from interacting processes.
Second, it matters for pharmacology, both for safety and for efficacy. For example, we clearly want models that can represent both early afterdepolarisations (EADs) and contractility. In a model, it is almost trivial to prevent drug-driven EADs: just block the L-type calcium current. The obvious problem is that this also markedly diminishes contractility, so it is not a particularly meaningful solution on its own. What we really want are models that support compound queries such as: “find drug profiles that reduce EAD risk without reducing contractility by more than 2%.” (That particular combination was already feasible in previous work using ToR-ORd-Land by Margara et al., used recently e.g. in Trovato et al., but with T-World we can now also look at drug effects on other arrhythmogenic behaviours.)
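To make the idea of a compound query more concrete, here is a minimal sketch of what such a filter over simulated drug profiles could look like. Everything in it is illustrative: simulate_profile() is a hypothetical stand-in for actually running a cell model with a given block profile, and its numbers are placeholders rather than T-World output. The structure of the query – no EADs and at most a 2% contractility drop – is the point.

```python
# Minimal sketch of a "compound query" over simulated drug profiles.
# simulate_profile() is a hypothetical stand-in for running an
# electrophysiology + contractility cell model; the numbers it returns
# are placeholders so that the script is self-contained.

def simulate_profile(block):
    """Return (ead_present, peak_active_tension) for a given block profile.

    'block' maps current names to fractional block (0 = no block, 1 = full block).
    """
    ikr_block = block.get("IKr", 0.0)
    ical_block = block.get("ICaL", 0.0)
    ead_present = ikr_block > 0.6 and ical_block < 0.2   # toy EAD criterion
    peak_tension = 50.0 * (1.0 - 0.8 * ical_block)        # toy contractility
    return ead_present, peak_tension

# Baseline (drug-free) contractility, the reference for the 2% criterion.
_, baseline_tension = simulate_profile({})

candidate_profiles = [
    {"IKr": 0.7, "ICaL": 0.0},    # hERG-blocking profile
    {"IKr": 0.7, "ICaL": 0.3},    # hERG block "rescued" by strong ICaL block
    {"IKr": 0.3, "ICaL": 0.01},   # milder multichannel profile
]

# Compound query: no EADs AND contractility reduced by no more than 2%.
acceptable = []
for profile in candidate_profiles:
    ead, tension = simulate_profile(profile)
    contractility_drop = 1.0 - tension / baseline_tension
    if not ead and contractility_drop <= 0.02:
        acceptable.append(profile)

print("Profiles passing the compound query:", acceptable)
```

In a real application, the toy criteria would of course be replaced by full simulations of the drugged cell, but the query itself stays the same: a joint condition over several emergent behaviours, which is exactly where a more general model earns its keep.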
In that context, one especially interesting point is that our paper helps to highlight the striking observation from Shattock et al. that the longer the APD, the steeper the S1S2 restitution. Much of the discussion around drug-induced QT prolongation and arrhythmia has focused on EADs. However, this result suggests that restitution steepening may be an underappreciated part of the story, which is well worth exploring further (please get in touch if you’d like to look into this together).
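For readers less familiar with the terminology, a small sketch of how S1S2 restitution steepness is usually quantified: APD is measured as a function of the diastolic interval preceding a premature (S2) beat, and the maximum slope of that curve is the classical marker of “steep” restitution. The curve below is synthetic and purely illustrative; in practice it would come from simulating or recording an S1 pacing train followed by a single S2 at each coupling interval.

```python
# Sketch of quantifying S1S2 restitution steepness.
# apd_of_di() is a synthetic restitution curve standing in for measured
# or simulated data (S1 train + one premature S2 per coupling interval).

import math

def apd_of_di(di_ms):
    """Synthetic APD (ms) as a function of the preceding diastolic interval (ms)."""
    return 280.0 - 120.0 * math.exp(-di_ms / 80.0)  # placeholder parameters

diastolic_intervals = list(range(20, 401, 10))  # ms
apds = [apd_of_di(di) for di in diastolic_intervals]

# Maximum slope of the restitution curve (finite differences); a maximum
# slope approaching or exceeding 1 is the usual criterion for "steep".
slopes = [
    (apds[i + 1] - apds[i]) / (diastolic_intervals[i + 1] - diastolic_intervals[i])
    for i in range(len(apds) - 1)
]
print(f"Maximum S1S2 restitution slope: {max(slopes):.2f}")
```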
A general problem with lack of generality – a model not reproducing a behaviour, or reproducing it through a problematic mechanism – is that you never know just how deep the problem runs. For some applications, it may be a minor detail, but for others it may be a major issue. A good example is alternans in the previous ToR-ORd model. It appears at human-like frequencies and looks reasonable at first sight. But I became aware that the underlying mechanism of alternans in that model (refractoriness of junctional SR refilling, largely linked to slow NSR-JSR diffusion) is not especially well supported by experimental data. And that matters, because if you use that model to represent SERCA reduction in heart failure, you do not get the typical increase in alternans vulnerability: reducing SERCA does not markedly extend the alternans frequency range, and often does the opposite. In other words, because the mechanism is suboptimal (unphysiological), the model is less predictive than we might want it to be. This is one reason why cellular realism matters so much for simulation work across scales. If the cell model produces a phenotype through the wrong mechanism, tissue- or organ-level simulations may still look plausible on the surface while being misleading underneath. A simulation can “look right” and still arrive at its answer for the wrong reason. That is exactly the kind of thing we want to minimise, especially if the long-term goal is to use these models for more integrative or translational questions.
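As an illustration of the kind of test I mean, here is a rough sketch of checking whether reducing SERCA widens the range of pacing rates at which APD alternans appears. The last_two_apds() function is a hypothetical stand-in for pacing a cell model to quasi-steady state and reading off the final two APDs; the toy rule inside it simply encodes the experimentally expected widening so that the script runs on its own – it is not output from ToR-ORd or T-World.

```python
# Sketch of testing whether a SERCA reduction widens the range of pacing
# rates at which APD alternans occurs. last_two_apds() is a hypothetical
# stand-in for pacing a cell model for many beats and reading the last two
# action potential durations; the rule below is a toy, not model output.

def last_two_apds(cycle_length_ms, serca_scale):
    """Return the last two APDs (ms) after prolonged pacing at one cycle length."""
    threshold = 300.0 / serca_scale   # toy: weaker SERCA -> alternans at slower rates
    if cycle_length_ms < threshold:
        return 250.0, 210.0           # alternating long/short APDs
    return 230.0, 230.0               # stable APD

def alternans_rate_range(serca_scale, min_cl=250, max_cl=600, step=10, tol_ms=5.0):
    """Cycle lengths (ms) at which beat-to-beat APD difference exceeds tol_ms."""
    vulnerable = []
    for cl in range(min_cl, max_cl + 1, step):
        apd_a, apd_b = last_two_apds(cl, serca_scale)
        if abs(apd_a - apd_b) > tol_ms:
            vulnerable.append(cl)
    return vulnerable

baseline = alternans_rate_range(serca_scale=1.0)
reduced_serca = alternans_rate_range(serca_scale=0.5)  # 50% SERCA reduction
print("Alternans-vulnerable cycle lengths (baseline):     ", baseline)
print("Alternans-vulnerable cycle lengths (reduced SERCA):", reduced_serca)
```

The point of such a protocol is precisely the one made above: if the cell model’s alternans arises through an unphysiological mechanism, this kind of intervention study will give the wrong answer even though the baseline behaviour looks fine.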
Now, you may object that every model is wrong at some level of abstraction (T-World will be too). And I agree, that is so – but for me, the take-home message is not “we’re doomed, there is no point in trying”, but rather that we should try to get as close as possible to a biologically faithful model, while accepting that we may not fully succeed, or that in places the available data are simply not yet good enough. In fact, that is often a useful outcome in itself. If model development tells you that a particular part of the biology cannot yet be constrained properly, then that is valuable information that can guide further data collection.
At the same time, I think it is important to distinguish generality from unnecessary complexity. I am not arguing that every model should be as complicated as possible, or that we should just keep adding mechanisms forever. Extra detail can absolutely become a burden. It can make models harder to analyse, harder to interpret, and harder to trust. The goal is not maximal complexity. The goal is the right kind of generality: enough mechanistic breadth to support the key emergent behaviours we care about, without turning the model into a hard-to-interpret, overcomplicated behemoth. That balance is difficult, but it is worth aiming for.
In summary, I believe that generality and more life-like behaviour are important for making the model more predictive, especially outside its initial domain of application. At the same time, those may be the most useful and interesting applications. Computer models often get criticised (partly unfairly, partly fairly) for just mirroring the assumptions we put in. That may be the case in some situations, and it can still be useful for checking that the assumptions we put in are internally consistent and can reproduce the observed phenotype; sometimes models flag that the phenotype does not follow from our assumptions, and we need to reconsider the hypothesis. But often a more interesting use of models is to go beyond the original domain of design/calibration and use them for more complex or integrative tasks. That is why independent validation matters so much. It tells us whether the model can do more than just recapitulate what it was tuned to do – and, encouragingly, often it can.
This leads into a broader question: should computational models be mainly focused tools for testing specific hypotheses, or should they be more general, foundational cell-like systems that can be used outside one narrow initial domain? This affects how we develop models and what we should invest energy and funds into. I’ve been through several debates on whether we want one or the other, and I think that, in fact, we want both – they are not mutually exclusive. For some questions, we need focused models addressing a particular question in a particular context, where extra complexity would be an unnecessary burden. But for other questions, we do need generality and the capability to predict emergent behaviours that we did not “assume into” the model.
This is especially so if we want to continue establishing computational cell models (a.k.a. virtual cardiomyocytes) as NAMs: New Approach Methodologies (or one of the several other meanings of NAM, which curiously seem to be used interchangeably). There is increasing pressure for their adoption and development, including from the EMA, FDA, and UK government, and computer models are a strong route towards this vision. I may have some reservations about the occasional hype surrounding NAMs, but I definitely think they are an important direction of development and something we want, especially once properly developed, understood, and validated. I will write a bit more on that in the last section of this series.