
Making a model: Part 7 - concluding remarks


We’ve reached the end of this series – I hope it was at least somewhat interesting and useful. Despite my not-fully-serious suggestion to avoid model development if you can, I think the process is a rather unique experience that can substantially change how one perceives computer models. In my case, it definitely made me appreciate the limitations as well as the strengths of computer models much more, and it transformed the way I read and interpret modelling papers. Even though I have only a small number of observations, my impression is that model developers are far more critical of models (theirs included) than model users. This is probably nothing new, however – people who develop lab protocols also seem to me to be more aware of the caveats of a methodology than people who use it more or less as a black box.

Beyond being transformative with regard to computer modelling, making a model can be really useful for one’s physiological intuition. Given that one has to go deeper to develop a model than to use it, I think it becomes much easier to understand and internalize the many links between ionic concentrations and currents, the interplay of exchangers, etc. Some of this intuition may be model-specific, but again, as long as one is aware of this possibility, I see no problem. It comes back to the view of a computer model as a formalized literature review that you can simulate to check how consistent different studies are and what knowledge we’re missing. Developing a model really forces one to read a lot of literature actively and in depth, which is good in itself.

By the way, speaking of limitations – please let me know if you find issues with ToR-ORd. Even though the process of making it was rather exhausting and I don’t want to see it crumble, models will always have their problems, and the way to better models (and to a better research community) is not pretending they are perfect. There is no reason to feel offended [1] upon hearing criticism – I will surely be grateful for it. ToR-ORd is a snapshot of a process, of multiple streams flowing through the field of computational cardiology, taken at a point where we felt it was worth validating, packaging, and publishing; maybe there will be another version in the future, or it may inspire other groups to improve on it.

Here, I’d also like to say how important it was to work on ToR-ORd development in an excellent group led by Prof. Blanca Rodriguez. Her supervision was great, very supportive, and with minimal pressure, which worked perfectly for this project. Some things simply took a while, and a supervisor of the “I need a solution in one week” type would have made an already hard project much harder. In such a case, one would have to take shortcuts, but taking shortcuts in model development usually means going the longer way in the end anyway. The supportive style of leadership was also key to maintaining a reasonable state of mental health, which can easily suffer in projects like this. Furthermore, given the group’s status and connections, I could discuss some aspects of the work with top experts in the field and with people in regulatory bodies – especially in the later phase, seeing interest from more senior researchers helped me find the energy to overcome the last few hurdles. Another critical factor was the expertise already present in the group, which I didn’t have and which saved us loads of time and made the paper stronger all around. Dr. Alfonso Bueno-Orovio and Dr. Xin Zhou created the CellML code and ran 1D simulations via Chaste, Dr. Elisa Passini replicated her previous study on drug safety with ToR-ORd, and Dr. Ana Minchole ran 3D torso simulations to extract pseudo-ECGs [2]. Dr. Oliver Britton provided me with annotated and pre-processed data from the Szeged group of Prof. Varro, as used in his previous study. If I were to do all of this myself in a group without such a background and breadth of expertise, it would have taken me ages. I suspect some of the validations we did would have been deemed “not enough of a priority”, and the final publication would probably have been quite a lot poorer for it.
Obviously, the good spirit of the group permeates everything, from group meetings to seminars and discussions – everyone contributed to this project in some form or another, and it was a real pleasure to work there. So if you’re working on something similarly hard, make sure you’re not alone in life, both personally and academically.
Thanks for reading and let me know if you have any questions!


[1] It was quite a shock when I moved from the theoretical computer science community (where people seemed predominantly and extremely open to sound criticism, and grateful for it) to biomedical research, where one is basically dancing in a minefield. Even as a junior researcher, I have already lived through and heard of various interesting stories that would be nearly unthinkable in computer science. Talking to researchers such as Dr. Michael Colman, whose words on the importance of self-criticism and not taking offense came like healing rain after some pretty bad stories of ego and revenge, really helped me partially restore my faith in humanity (at least in academia).

[2] This was run on supercomputing resources that, again, were available only thanks to the efforts of the group – both in writing the grant applications and in writing the publications that helped persuade the grant assessors that our group should be funded.
