
T-World 1: Introduction, and why do we care about model generality?

Welcome again after a few years, dear readers. Much has happened since the last posting – I went to Davis (CA) and back to Oxford, I now have two children, I wrote a book on statistics with Prof. David Eisner, and recently a new model was born too: T-World (Paper 1, Paper 2). I thought I'd cover several points in this blog that complement the papers and share a bit of the background of the model's creation, without the need for salesmanship. Specifically, I'll address the following:

1) "You claim high generality of the model – but who cares?" I found it interesting that most people respond to the high generality of T-World with "obviously, this is what we always wanted", but a nontrivial number didn't find it that important at first sight. Let's go over some reasons why I think it matters deeply.

2) "Why did you guys take so long?" Yes, it took ages. I'm sorry – I'm the first to wish it had been done in a year or two. Why did we have to go back to the drawing board several times? This is introduced here and described in several increasingly detail-oriented posts: 1) Overall architecture, 2) ICaL and RyR coupling, 3) ICaL model, and 4) Na-K pump voltage dependence. Concluding remarks are here.

These posts are written under the assumption that you know what T-World is – they may be best read after the papers!

Why does the generality of a computational model matter?

It is a truth universally acknowledged, that a single cellular mechanism is insufficient to drive arrhythmia in cardiac tissue, and that both an arrhythmic trigger and an arrhythmic substrate are needed. As 3D simulations become more common, we are going to want organ-scale simulations of arrhythmia that are increasingly sophisticated and less artificial. And if we want to build an organ model that can be used to investigate how early or delayed afterdepolarisations interact with alternans and steep restitution to produce arrhythmia, then we first need a cellular model that can reproduce all of those behaviours.

It matters in several other ways. First, in complex diseases such as heart failure or diabetes, the phenotype is rarely driven by a single mechanism. The diseased state usually reflects multiple interacting changes. So, some degree of generality matters not only for mechanistic accuracy, but also for interpretability and predictive power. If a model only works in one narrow context, it may still be useful, but it is much less likely to help us understand how a disease phenotype emerges from interacting processes.

Second, it matters for pharmacology, both for safety and for efficacy. For example, we clearly want models that can represent both early afterdepolarisations (EADs) and contractility. In a model, it is almost trivial to prevent drug-driven EADs: just block the L-type calcium current. The obvious problem is that this also markedly diminishes contractility, so it is not a particularly meaningful solution on its own. What we really want are models that support compound queries such as: "find drug profiles that reduce EAD risk without reducing contractility by more than 2%". (That particular combination was already feasible in previous work using ToR-ORd-Land by Margara et al., used recently e.g. in Trovato et al., but with T-World we can now also look at drug effects on other arrhythmogenic behaviours.)
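To make the idea concrete, here is a minimal Python sketch of such a compound query. Everything in it is hypothetical: run_cell_model() stands in for whichever simulator you use (it is not a T-World or ToR-ORd-Land API), and its formulas are toy surrogates that merely fake the qualitative trends – IKr block promoting EADs, ICaL block suppressing them at a heavy cost to contractility, and late-INa block suppressing them more cheaply.

```python
# Hypothetical sketch of a compound drug-profile query; run_cell_model()
# is a placeholder, not a real API, and its formulas are toy surrogates.
from itertools import product

def run_cell_model(ikr_block, ical_block, inal_block):
    """Stand-in for a real simulation; returns (EAD risk, contractility)."""
    ead_risk = max(0.0, 0.5 + 0.8 * ikr_block
                        - 1.2 * ical_block - 0.6 * inal_block)
    contractility = 1.0 - 0.9 * ical_block - 0.02 * inal_block
    return ead_risk, contractility

base_ead, base_force = run_cell_model(0.0, 0.0, 0.0)

hits = []
for ikr, ical, inal in product([0.0, 0.25, 0.5], repeat=3):
    ead, force = run_cell_model(ikr, ical, inal)
    # Compound query: lower EAD risk AND < 2% loss of contractility
    if ead < base_ead and force >= 0.98 * base_force:
        hits.append((ikr, ical, inal))

print(hits)
```

With these toy surrogates, the query ends up selecting profiles that lean on late-INa rather than L-type calcium block – which is exactly the kind of trade-off such compound queries are meant to expose.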

In that context, one especially interesting point is that our paper helps to highlight the striking observation from Shattock et al. that the longer the APD, the steeper the S1S2 restitution. Much of the discussion around drug-induced QT prolongation and arrhythmia has focused on EADs. However, this result suggests that restitution steepening may be an underappreciated part of the story, which is well worth exploring further (please get in touch if you’d like to look into this together).
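If you would like to quantify that steepening, the analysis itself is simple; below is a small sketch in Python, with made-up (DI, APD) pairs standing in for an actual S1S2 protocol run on a model or preparation.

```python
# Sketch of quantifying S1S2 restitution steepness; the data points are
# illustrative, not measured values.
import numpy as np

di  = np.array([ 20,  40,  60, 100, 200, 400, 800])  # diastolic interval (ms)
apd = np.array([180, 220, 245, 265, 280, 288, 290])  # APD90 of the S2 beat (ms)

slope = np.gradient(apd, di)  # local restitution slope dAPD/dDI
print("max S1S2 restitution slope:", slope.max())

# A maximum slope > 1 is the classical criterion linking steep restitution
# to alternans and wave break; APD-prolonging drugs may push cells and
# models towards this regime.
```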

A general problem with lack of generality – a model not reproducing behaviour or reproducing it through a problematic mechanism – is that you never know just how deep the problem runs. For some applications, it may be a minor detail, but for other applications it may be a major issue. A good example is alternans in the previous ToR-ORd model. It appears at human-like frequencies and looks reasonable at first sight. But I became aware that the underlying mechanism of alternans in that model (refractoriness of junctional SR refilling, largely linked to slow NSR-JSR diffusion) is not especially well supported by experimental data. And that matters, because if you use that model to represent SERCA reduction in heart failure, you do not get the typical increase in alternans vulnerability. Reducing SERCA does not markedly extend the alternans frequency range, and often does the opposite. In other words, because the mechanism is suboptimal (unphysiological), the model is less predictive than we might want it to be.

This is one reason why cellular realism matters so much for simulation work across scales. If the cell model produces a phenotype through the wrong mechanism, tissue- or organ-level simulations may still look plausible on the surface while being misleading underneath. A simulation can "look right" and still reach the right-looking answer for the wrong reason. That is exactly the kind of thing we want to minimise, especially if the long-term goal is to use these models for more integrative or translational questions.
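For concreteness, this is roughly the kind of test I have in mind, sketched in Python. pace_model() is a toy placeholder (not any real model's API), deliberately wired to show the physiologically expected behaviour – SERCA reduction widening the alternans frequency range – which is precisely what the ToR-ORd alternans mechanism fails to reproduce.

```python
# Hedged sketch of an alternans-vulnerability scan across pacing rates;
# pace_model() is a stand-in with hard-coded toy dynamics so the script
# runs end to end.
import numpy as np

def pace_model(cycle_length_ms, serca_scale=1.0):
    """Return the last 10 APDs of a (toy) pacing run; alternans appears
    below an onset cycle length that widens as SERCA is reduced."""
    onset_cl = 320.0 / serca_scale
    amp = max(0.0, onset_cl - cycle_length_ms) * 0.3
    base = 200.0 + 0.1 * cycle_length_ms
    return np.array([base + amp * (-1) ** k for k in range(10)])

def alternans_magnitude(apds):
    # Mean beat-to-beat APD difference; more than a few ms flags alternans
    return float(np.abs(np.diff(apds)).mean())

for serca in (1.0, 0.5):  # baseline vs 50% SERCA (heart-failure-like)
    vulnerable = [cl for cl in range(250, 501, 10)
                  if alternans_magnitude(pace_model(cl, serca)) > 5.0]
    print(f"SERCA x{serca}: alternans at cycle lengths up to "
          f"{max(vulnerable) if vulnerable else 'none'} ms")
```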

Now, you may object, saying that every model is wrong at some level of abstraction (T-World will be too). And I agree – but for me, the take-home message is not "we're doomed, there is no point in trying", but rather that we should try to get as close as possible to a biologically faithful model, while accepting that we may not fully succeed, or that in some places the available data are simply not good enough yet. In fact, that is often a useful outcome in itself. If model development tells you that a particular part of the biology cannot yet be constrained properly, then that is valuable information that can guide further data collection.

At the same time, I think it is important to distinguish generality from unnecessary complexity. I am not arguing that every model should be as complicated as possible, or that we should just keep adding mechanisms forever. Extra detail can absolutely become a burden. It can make models harder to analyse, harder to interpret, and harder to trust. The goal is not maximal complexity. The goal is the right kind of generality: enough mechanistic breadth to support the key emergent behaviours we care about, without turning the model into a hard-to-interpret overcomplicated behemoth. That balance is difficult, but it is worth aiming for.

In summary, I believe that generality and more life-like behaviour are important in making a model more predictive, especially outside its initial domain of application. At the same time, those may be the most useful and interesting applications. Computer models often get criticised (partly unfairly, partly fairly) for just mirroring the assumptions we put in. That may be the case in some situations, and even then it can be useful for checking that our assumptions are internally consistent and can reproduce the observed phenotype; sometimes models flag that the phenotype does not follow from our assumptions, and we need to reconsider the hypothesis. But often a more interesting use of models is to go beyond the original domain of design/calibration and use them for more complex/integrative tasks. That is why independent validation matters so much: it tells us whether the model can do more than just recapitulate what it was tuned to do – and, encouragingly, often it can.

This leads into a broader question: should computational models be mainly focused tools for testing specific hypotheses, or should they be more general, foundational cell-like systems that can be used outside one narrow initial domain? This affects how we develop models and what we should invest energy and funds into. I've been through several debates on whether we want one or the other, and I think that in fact we want both at once – they are not mutually exclusive. For some questions, we need focused models addressing a particular question in a particular context, where extra complexity would be an unnecessary burden. But for other questions, we do need generality and the capability to predict emergent behaviours that we did not "assume into" the model.

This is especially so if we want to continue establishing computational cell models (a.k.a. virtual cardiomyocytes) as NAMs: New Approach Methodologies (or one of the several other expansions of NAM, which curiously seem to be used interchangeably). There is increasing pressure for their adoption and development, including from the EMA, the FDA, and the UK government, and computer models are a strong route towards this vision. I may have some reservations about the occasional hype surrounding NAMs, but I definitely think they are an important direction of development and something we want, especially once properly developed, understood, and validated. I will write a bit more on that in the last section of this series.
