A machine learning based model accurately predicts cellular response to electric fields in multiple cell types | Scientific Reports – Nature.com

Recurrent neural network models for galvanotaxis

Few machine learning models are suitable for timeseries data31. A popular category of ML models capable of learning dynamical systems is the recurrent neural network (RNN). Recurrent neural networks have the property that the output of the model is fed back into the model as an input. If a dynamical system can be represented by an ODE, then an RNN can approximate the governing equations. The next question is whether galvanotaxis can be accurately modeled by an ODE given the stochastic characteristics of cell migration. RNNs are deterministic models, but they can approximate stochastic behavior by converging to a chaotic model32. In this work, we show that such a chaotic model provides a good approximation to the variability seen in cell migration.

We use a long short-term memory (LSTM) recurrent neural network (see Supplementary Fig. S1 for more details) to predict the direction of cell migration based on previously measured angles of migration and the current strength of the electric field (see Fig. 2 for details, and Table S1 for sample data). This is also referred to as a one-step-ahead prediction. It has been shown that cell movement can be completely described mathematically using the speed and the angle of migration25. Furthermore, the speed is independent of the EF. Thus, our model only needs to consider the angle of migration. Here, the angle of migration is referred to as directedness and defined as the cosine of the angle between the electric field and the straight line connecting the centroid of the cell at its initial location to its current location. We note that simulations of cell trajectories can be reconstructed using directedness values. Figure 2c shows a reconstruction of single cell trajectories from computed directedness assuming constant speed. LSTM models have feedback connections and are designed to explicitly avoid the vanishing gradient problem, meaning that they can process entire sequences of timeseries data33. LSTM networks are advantageous over other recurrent networks since they are relatively insensitive to the duration of time delays34. These advantages make LSTM models desirable for understanding complex systems, and LSTM models have had success capturing the behavior of noisy dynamical systems35,36.
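As a concrete illustration, directedness as defined above can be computed directly from tracked centroid positions. The helper below is a sketch under our own conventions (field assumed to point along +x, zero returned for a cell that has not moved), not the paper's implementation:

```python
import math

def directedness(x0, y0, x, y, field_dir=(1.0, 0.0)):
    """Cosine of the angle between the EF direction and the straight line
    from the cell's initial centroid (x0, y0) to its current centroid (x, y).
    Conventions here (field along +x, 0.0 for a stationary cell) are ours."""
    dx, dy = x - x0, y - y0
    norm = math.hypot(dx, dy)
    if norm == 0.0:
        return 0.0  # undefined for a stationary cell; convention choice
    fx, fy = field_dir
    return (dx * fx + dy * fy) / (norm * math.hypot(fx, fy))
```

A cell that has moved straight along the field has directedness 1, straight against it −1, and perpendicular to it 0.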

Figure 2

(a) Quantifying directional movement of cells by directedness. (b) Trained LSTM model takes in directedness and EF values from the past 20 time steps and outputs directedness at the current time step. (c) Reconstruction of single-cell trajectories from directedness assuming constant cell speed.

LSTM networks, like all neural networks, are trained by iteratively updating the internal weights of a network, which are usually randomly initialized, to minimize the loss function on the training set. For multilayer neural networks, including RNNs, this loss function is nonconvex in general. Thus, the weights found in the training stage are not guaranteed to represent a global minimum of the loss function, and the exact local minimum found is dependent on the initial weights. We use a fivefold cross-validation to evaluate the dependence of the model performance on the training set. To ensure that results are not dependent on any one set of initial weights, we train 50 randomly initialized models with identical architecture on the same training set. We then evaluate the performance of all 50 models so that the results reflect the overall performance of this modeling approach. All of our presented results show predictions of all models on every cell to demonstrate that these results are not dependent on any one random weight initialization.
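The fivefold split itself is routine; a minimal, generic sketch of partitioning cell indices into training and validation folds (our code, not the paper's pipeline) might look like:

```python
import random

def kfold_splits(n_cells, k=5, seed=0):
    """Partition cell indices into k folds; returns a (train, validation)
    pair of sorted index lists per fold. Generic cross-validation sketch."""
    idx = list(range(n_cells))
    random.Random(seed).shuffle(idx)          # fixed seed for reproducibility
    folds = [idx[i::k] for i in range(k)]     # k near-equal folds
    return [
        (sorted(i for g in folds if g is not f for i in g), sorted(f))
        for f in folds
    ]
```

Each of the 50 randomly initialized models would then be trained on the training portion of a fold and evaluated on the held-out portion.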

Recurrent NNs can predict the directedness of EF-induced cell migration at the single cell level

We first demonstrate the ability of the LSTM model to capture cell migration patterns under an electric field by predicting cell directedness one step ahead, given measured cell directedness at previous time points. We first train and test the model on a collection of timeseries Cranial Neural Crest Cell (CNCC) data1 capturing single cell migration under a set of EFs: 0 mV/mm, 15 mV/mm, 30 mV/mm, 50 mV/mm, 75 mV/mm, 100 mV/mm, and 200 mV/mm (see “Materials and methods”). To evaluate the model’s accuracy, we consider the distribution of root mean squared error (RMSE) values for single cell trajectories over a population of cells when comparing predicted directedness at each time step to the measured ground truth.
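A per-cell RMSE of this kind can be computed directly from the predicted and measured directedness series; the helper below is a straightforward sketch (the name is ours):

```python
import math

def trajectory_rmse(predicted, measured):
    """Root mean squared error of one cell's one-step-ahead directedness
    predictions against the measured ground truth."""
    if len(predicted) != len(measured) or not predicted:
        raise ValueError("series must be non-empty and of equal length")
    return math.sqrt(
        sum((p - m) ** 2 for p, m in zip(predicted, measured)) / len(predicted)
    )
```

Collecting this value over every cell in a population gives the RMSE distributions reported below.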

Figure 3 shows the results of predicting single-cell behavior for all EFs. In particular, the median values and their distribution across the 50 models are plotted against time. See Table S2 for the distributions of RMSE values when predicting on the training, validation, and test sets. The center and spread of the RMSE distributions for the training and test sets are comparable, implying that the model is not overfit. This is further supported by model simulations in a later section. See Table S3 for cross-validation results using different combinations of standardized features. Standardization did not improve performance and was therefore not used further.

Figure 3

(a) Average predicted directedness and distribution across 50 models compared to ground truth measurements at each EF. A timestep unit is 5 min (the interval at which images were taken). The first 19 time steps are used to initiate predictions. (b) Comparison of LSTM model to naïve predictors. Root mean square error (RMSE) values computed based on predicted directedness and measured directedness. The boxes represent the middle 50% of error values and the whiskers extend to the minimum and maximum error values.

Additionally, to demonstrate that the predictions are indeed informed, we compare the results to those of two naïve predictors (see Fig. 3). The first of these predictors, which we will call the “constant directedness” model for this discussion, makes a naïve assumption that directedness will remain constant from one timestep to the next. So, the directedness prediction made by this naïve model is just the previous directedness value. The second naïve predictor, which we refer to as the “linear” predictor, makes a linear extrapolation using the previous two directedness values. That is, the rate of change of directedness between the previous two timesteps is assumed to remain constant between the previous timestep and the next timestep. Figure 3 shows the error distributions of the naïve predictors alongside that of our base model, the LSTM. See Table S4 for the median RMSE values and corresponding IQR values. It is interesting to note that simply assuming no change in directedness leads to a better approximation than a linear extrapolation. Thus, prediction of directedness is non-trivial even under a constant EF, and it is clear that cell migration is driven by underlying dynamics.
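Both baselines are simple enough to state in code; the following sketch (our naming) implements the two one-step-ahead predictions described above:

```python
def constant_predictor(series):
    """Naive baseline: assume directedness does not change, so the
    prediction for step t is simply the observed value at step t-1."""
    return list(series[:-1])

def linear_predictor(series):
    """Naive baseline: linear extrapolation from the previous two values,
    d_hat(t) = d(t-1) + (d(t-1) - d(t-2))."""
    return [2 * series[t - 1] - series[t - 2] for t in range(2, len(series))]
```

Comparing the errors of these predictions with the LSTM's shows whether the learned model is doing more than carrying the last observation forward.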

Recurrent NNs can predict the directedness of EF-induced cell migration at unseen EFs

To understand the generalizability of the model with respect to the EF strength, we use the same modeling framework to both interpolate and extrapolate to EF strengths that were not seen in the training set. For interpolation, we remove all instances of cells in an intermediate EF, 30 mV/mm, from the training set and train a new model with identical architecture as before. For extrapolation, we follow a similar approach except we remove all instances of cells in an extreme EF, 200 mV/mm, from the training set. The model is then tested on the complete data including unseen EFs. We also highlight the performance exclusively on cell trajectories under EFs omitted during training. First, we evaluate the ability of the model to interpolate to unseen EF strengths by considering the performance of the model trained without 30 mV/mm instances on all cells in the test set (see Fig. 4), as well as exclusively on the 30 mV/mm test instances. On both the full test set and the 30 mV/mm test set instances, the median RMSE of the interpolation model is only moderately higher than the base model (~ 5%). Additionally, the performance of the interpolation model on the 30 mV/mm instances alone is comparable to the performance on the full training set (see Table S5), meaning that the model interpolates well to unseen EF strengths.
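The holdout construction amounts to filtering the training pool by EF strength; a minimal sketch, assuming cells are stored as records with an "ef" field (our representation, not the paper's):

```python
def holdout_by_ef(cells, held_out_ef):
    """Split tracked cells into a reduced training pool and the held-out
    EF condition, for interpolation/extrapolation experiments."""
    train = [c for c in cells if c["ef"] != held_out_ef]
    held = [c for c in cells if c["ef"] == held_out_ef]
    return train, held
```

Setting `held_out_ef` to 30 gives the interpolation experiment and 200 the extrapolation experiment.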

Figure 4

(a) Distributions of cell-level test set RMSE values of the base model and a model with identical architecture which was trained with a modified training set from which the 30 mV/mm instances were removed. Error distributions shown for both the complete testing set and for the 30 mV/mm test instances. (b) Distributions of cell-level test set RMSE values of the base model and a model with identical architecture which was trained with a modified training set from which the 200 mV/mm instances were removed. Error distributions shown for both the complete testing set and for the 200 mV/mm test instances. The boxes represent the middle 50% of error values and the whiskers extend to the minimum and maximum error values.

For extrapolation, we compare the base model to a model trained without any 200 mV/mm instances (see Fig. 4). The median RMSE of this extrapolation model, when evaluated on the full test set, is ~ 6.5% higher than the base model trained on the full training set. When evaluating this model specifically on the 200 mV/mm instances, the median RMSE is ~ 17.4% higher than that of the base model. See Table S5 for median RMSE and corresponding IQR distribution values. We note that the base model predicts directedness exceptionally well at 200 mV/mm when compared to its performance on the full test set. This implies that cell migration is more predictable at this higher EF, and removing this data when training the extrapolation model results in a noticeable increase in error across the full test set.

For both the interpolation and extrapolation tests, the model trained on the limited training set performs worse than our original model overall. This is to be expected given an overall smaller dataset. However, the acceptable performance exclusively on omitted EFs demonstrates the ability of our model to interpolate and extrapolate with respect to EF values, with relatively better performance on the interpolation task than on the extrapolation task.

Transfer learning allows for high prediction accuracy when minimal data is available

Transfer learning is the method of using a model’s knowledge about one learning problem (called the source domain) to improve performance on a second, related learning problem (called the target domain)37,38,39,40,41. While the traditional approach to machine learning trains a separate randomly initialized model on each domain’s training set, yielding a model specific to the task it was trained on, transfer learning first learns a model for the source domain and then uses that trained model as the starting point for learning the model for the target domain (see Fig. 5a). Transfer learning allows target domain instances to lie in a different feature space and follow a different distribution than the instances in the source domain, which enables relatively high performance even when the target domain data is too limited to reach that performance from a random initialization40,41. Because galvanotaxis experiments and manual cell tracking can be both expensive and time-consuming, galvanotaxis tracking datasets for some cell types may be limited in both the number of cells tracked and the variety of EF conditions in which experiments are conducted. Thus, transfer learning may be a pivotal tool in developing accurate models for cell types and experimental conditions for which data is limited. Here, we evaluate the effects of transfer learning on extending our constant EF CNCC model to different cell types and to a time-varying EF.
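The warm-start idea can be illustrated on a deliberately tiny toy problem. The sketch below uses gradient descent on a linear model rather than the paper's LSTM: a model fit on abundant source data initializes training on five target samples, and is compared with training from a cold start. All numbers, names, and data here are illustrative.

```python
import numpy as np

def gd_train(X, y, w0, lr=0.1, steps=300):
    """Plain gradient descent on mean squared error, starting from w0."""
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(steps):
        w -= lr * 2.0 * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(0)

# Abundant source-domain data (stand-in for the constant-EF CNCC set).
Xs = rng.normal(size=(200, 3))
w_src_true = np.array([1.0, -2.0, 0.5])
w_source = gd_train(Xs, Xs @ w_src_true, w0=np.zeros(3))

# Only five target-domain samples from a related but different task.
Xt = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.],
               [1., 1., 0.], [0., 1., 1.]])
w_tgt_true = np.array([1.2, -1.8, 0.4])
yt = Xt @ w_tgt_true

w_scratch = gd_train(Xt, yt, w0=np.zeros(3), steps=30)   # cold start
w_transfer = gd_train(Xt, yt, w0=w_source, steps=30)     # warm start
```

With these illustrative values, the warm-started weights end up far closer to the target task's true weights than the cold-start weights after the same number of updates, mirroring the benefit of transfer learning discussed above.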

Figure 5

(a) Diagram comparing the traditional machine learning training approach, which involves training a separate randomly initialized model for each learning task, with the transfer learning approach, which involves training a model for one task and then retraining that model on another dataset to perform a related task. (b) Distributions of cell-level RMSE values for the base model, a model trained only on the reversal dataset, and a model which uses transfer learning to retrain the base model for the polarity reversal task. The boxes represent the middle 50% of error values and the whiskers extend to the minimum and maximum error values. (c) Plot of average directedness over time for the polarity reversal dataset, with error bars representing standard error of the mean for each timestep to illustrate the spread of directedness values at each step.

First, we consider transfer learning methods for making predictions about cells in time-varying EFs using the model which we trained on constant EFs. We evaluate the ability of the model to capture CNCC galvanotaxis dynamics in an experiment in which the polarity of a 200 mV/mm EF is reversed halfway through the experiment (see Fig. 5). We compare the performance of a “reversal model”, trained only on the polarity reversal data, and a “transfer learning model”, which retrains the base model on the polarity reversal data. Once again, we use the base model predictions on the constant-EF test set as a performance benchmark.

The median RMSE of the reversal model is ~ 57.2% higher than the benchmark performance of the base model on the original test set. The transfer learning model provides an improvement of ~ 18.1% over the reversal model. The transfer learning model’s median test set RMSE is ~ 28.8% higher than the benchmark model’s median RMSE, which can likely be attributed to both the limited polarity reversal training data, as well as the increased complexity of dynamics in the time-varying EF setting. See Table S6 for median RMSE values. Despite the inability of the model to reach benchmark performance on this task (note that only one dataset with dynamic EF was available), we have demonstrated that transfer learning methods are effective at improving model performance for cells in time-varying EFs over models trained only in those settings.

Next, we evaluate the effectiveness of transfer learning methods for extending our method to cell types with limited galvanotaxis tracking data. We consider the application of our CNCC model to both fish keratocytes and human keratinocytes. Both of these cell types have been shown to migrate towards the cathode of an electric field2,3, while CNCC migrate towards the anode1. Thus, the model must learn to predict galvanotactic behavior which differs significantly from the behavior of CNCC. Using limited training sets for both target cell types, we compare the performance of a model that uses the same architecture as our original model but has been trained only on the target data with our CNCC model, which we have retrained using the same target data using transfer learning methods. Again, we use the performance of the CNCC model as a benchmark, as our goal is that these models, once transfer learning methods have been applied, can have target data test performance similar to the benchmark test performance on the CNCC data.

We have one keratocyte dataset and two keratinocyte datasets. The keratocyte data contains tracking timeseries for 0 mV/mm, 50 mV/mm, 100 mV/mm, 200 mV/mm, and 400 mV/mm electric fields. All keratinocyte cells are tracked in 100 mV/mm EFs. The keratocyte training set contains tracking data for two cells from each available EF strength and the two keratinocyte training sets each contain tracking data for two cells total. For keratocytes, images are taken, and cell positions are recorded every 30 s. The first keratinocyte dataset records positions at one-minute intervals, while the second keratinocyte dataset has positions recorded at ten-minute intervals. Thus, this task evaluates not only the ability of the model to transfer knowledge to other cell types, but also the ability of the model to adjust to different timescales.

Our keratocyte model, trained only on the keratocyte data, has a median RMSE of 0.0554 on the test set, which is ~ 189.7% higher than the median RMSE of the benchmark model performance on the CNCC test set. For transfer learning, we take the CNCC model and retrain the weights on the keratocyte training set, resulting in a median RMSE of 0.0260 on the keratocyte test set, which is ~ 53.1% lower than the keratocyte model which did not use transfer learning and ~ 11% lower than the benchmark performance on the CNCC dataset. So, the model trained only on our limited keratocyte data has much higher median RMSE than the benchmark, while the transfer learning model achieves lower median error than the benchmark (see Fig. 6).

Figure 6

Distributions of RMSE values for the base model on the CNCC test set (benchmark), the target cell models without transfer learning on the target cell test sets, and the target cell models which used transfer learning with the CNCC source domain on the target cell test sets. The boxes represent the middle 50% of error values and the whiskers extend to the minimum and maximum error values.

The first keratinocyte model, the model trained only on the one-minute interval keratinocyte dataset, has a median RMSE of 0.1006, ~ 244.5% higher than the benchmark median RMSE. The transfer learning model, in which the base model was retrained using the same keratinocyte training set, has a test set median RMSE of 0.0362, which is ~ 64% lower than the median error of the model trained only on the keratinocytes, and is just ~ 24% higher than the median RMSE of the benchmark CNCC model. The spread of the error distribution was also much lower for the transfer learning model than for the keratinocyte-only model, with an RMSE IQR of 0.0274 for the transfer learning model and 0.2502 for the keratinocyte-only model (see Fig. 6).

The median RMSE of the model trained only on the ten-minute interval keratinocyte dataset when predicting on the test set is 0.2176, which is ~ 645.2% higher than the benchmark median RMSE. After retraining the CNCC model on the ten-minute interval keratinocyte training set, the resulting model has a median RMSE of 0.1134, which is ~ 288.4% higher than the benchmark model, but ~ 47.9% lower than the keratinocyte model that did not use transfer learning. Once again, the spread is significantly lower in the model that used transfer learning, with an RMSE IQR of 0.1059 in the transfer learning model and 0.2383 in the keratinocyte-only model (see Fig. 6). The unusually large increase in error overall may be due to the significantly increased sampling interval, from 5 to 10 min.

While transfer learning in these cases did not always lead to performance comparable to the benchmark, both the median and IQR of RMSE distributions were much lower for the transfer learning model than for the model trained only on the target cell type in all cases. We have shown that transfer learning can be an effective approach for developing predictive models about cell types for which available data is limited, even when the source cell data differs from target cell data in significant ways, such as the time interval between observations and anodal- versus cathodal-directed migration.

NN-based models can be used for in silico studies

In recent years, the massive increase in the quantity of available data has led to much attention being paid to in silico biological studies, which are studies performed on computers using mathematical modeling and simulations42,43,44,45. The advantages of in silico studies include estimating hidden system parameters that are experimentally inaccessible46, optimizing the timeline of experimental procedures and product development47,48, reducing the need for animal and human trials48, and lowering experimental costs47,48,49. In this section, we demonstrate that the recurrent neural network-based model that we have developed can be used for in silico galvanotaxis assays with arbitrary and time-varying EFs.

We simulate cell migration experiments by designing an EF timeseries and using some initial ground truth data to begin making predictions. In this way, we can generate timeseries of synthetic galvanotaxis tracking data using arbitrary EF values, which may vary in time. We compare the distributions of synthetic directedness values with those from the ground truth data to evaluate the ability of the model to capture the long-term effects of EFs on CNCC.

The specific comparison we consider is between the distributions of the directedness values at the end of the experiments. Our simulations use 20 timesteps of initial ground truth data to begin making predictions and each CNCC is tracked in a constant EF for 37 timesteps after the initial image. Thus, we are comparing how the ground truth directedness distribution evolves in the final 17 timesteps with the evolution predicted by the simulation in the same time period. The ground truth data we consider are the 350 cells in the test set. These cells are used for the initial lookback to begin the simulations and for the comparison with ground truth final directedness values.
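The simulation loop described above is a standard free-running rollout; sketched generically below, where the `step_model` callable stands in for the trained LSTM (any function of an input window and the current EF works for illustration):

```python
def simulate_rollout(step_model, seed_history, ef_series, lookback=20):
    """Free-running in silico rollout: start from `lookback` ground-truth
    directedness values, then repeatedly feed the model's own one-step
    prediction back into its input window at each new EF value."""
    history = list(seed_history[-lookback:])
    predictions = []
    for ef in ef_series:
        d_next = step_model(history[-lookback:], ef)
        predictions.append(d_next)
        history.append(d_next)
    return predictions
```

With a 20-step seed and a 17-step EF series this reproduces the 37-timestep experiment layout; because the EF series is an arbitrary argument, the same loop also covers time-varying fields.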

To determine the ability of our model to replicate the effects of an EF on cell motility in silico, we compare the distribution of final directedness values of the in silico synthetic data against the ground truth data across all the EF values in the CNCC dataset (see Fig. 7). In Fig. 8, we show the circular distribution of the directional data for a subset of the EFs. We compare the directedness values by EF to evaluate whether the model has learned the effects of various EF values on the cells. If the distributions of EF-level predicted directedness values are similar to those of the EF-level ground truth directedness values, we can conclude that the in silico studies capture the general migration behaviors of the CNCCs.

Figure 7

Distributions of final directedness values of cells in both ground truth data and synthetic data generated by simulations. The boxes represent the middle 50% of cell directedness values and the whiskers extend to the minimum and maximum directedness values for each distribution. For simulations, these distributions are over 50 trained models to ensure that these results are not dependent on the random initialization of any one model; see “Materials and methods” subsection “Recurrent model architecture” for more details. See Table S7 for mean and median values of final directedness values for both ground truth and synthetic data.

Figure 8

Directional plots for cell migration at various EF for ground truth measurements and in silico simulations with added noise on computed directedness at each time step.

The means and medians of final directedness values computed by the simulations are closely correlated with the ground truth. The correlation coefficient between the means is R = 0.9906 and between the medians is R = 0.9721. In general, the distributions of simulated and ground truth final directedness values get closer as EF strength increases and cell behavior becomes more predictable.

Specifically, there is a significant drop in the differences between both means and medians at 30 mV/mm and higher, compared to 0 mV/mm and 15 mV/mm simulations. The threshold of response of CNCC to electric fields has been identified as being in the range between 15 mV/mm and 30 mV/mm1, so we expect that the simulations will more closely reflect the ground truth in EF strengths above that threshold due to the largely stochastic behavior of cells below the threshold.

To get a better sense of the distribution of directionality across cells, we also present cell motility in polar coordinates, where the angle represents the direction of motion (with 30 degree bin widths) and the radius represents the proportion of cells moving in a given direction. To create these rose plots using our synthetic data, we must map the directedness values generated by the model to (x, y) positions. These positions cannot be recovered exactly from the directedness values alone, so we approximate them using previous positions, an assumption of cell speed, and the previous heading of the cell (see “Materials and methods” for more details). This approximation was shown to provide fairly accurate results in Fig. 2. We note that the LSTM model is ultimately a deterministic model, and so directedness can converge to a deterministic equilibrium point over long simulations. Figure 8 shows a comparison of rose plots for experimental data and data generated by the LSTM model in silico.
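A heavily simplified version of this mapping is sketched below. It holds the sign of the migration angle fixed rather than resolving it from the previous heading, so it is only a rough stand-in for the procedure in "Materials and methods":

```python
import math

def positions_from_directedness(directedness, speed=1.0, dt=1.0):
    """Approximate (x, y) positions from a directedness series under a
    constant-speed assumption. Directedness fixes cos(theta) of the net
    displacement from the origin; the sign of theta is ambiguous and is
    simply held non-negative here (a simplification of the paper's use of
    the previous heading)."""
    xs, ys = [0.0], [0.0]
    for t, d in enumerate(directedness, start=1):
        d = max(-1.0, min(1.0, d))   # guard against numerical overshoot
        r = speed * dt * t           # assumed net distance from the origin
        theta = math.acos(d)         # angle of net displacement vs the field
        xs.append(r * math.cos(theta))
        ys.append(r * math.sin(theta))
    return xs, ys
```

A cell with directedness constantly 1 is reconstructed as moving in a straight line along the field axis, consistent with Fig. 2c.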

In silico demonstration of feedback control on cell migration

In this section we demonstrate the utility of the model to design and simulate a feedback algorithm that controls cell directedness by real-time regulation of the EF. We present an in silico study, applying a PID controller to evaluate the EF necessary to keep the average cell directedness at a certain reference value. Multiple cells are simulated using one of the 50 LSTM models. The cells’ directedness is averaged and used as the measured state for output feedback control. Figure 9a shows the details of the closed loop control system and simulation results. The reference directedness was set to − 0.8, and 5% Gaussian noise was added to the output of the model to maintain stochasticity. As seen in Fig. 9b, individual cells achieve the desired directedness by applying the appropriate voltages derived from the PID. This suggests the possibility of controlling cell directedness through feedback control in an in vitro setting. We note that the PID was carefully tuned for the particular model chosen. More work remains to be done in the development of feedback control algorithms with guaranteed convergence under varying experimental conditions affecting the measured response.
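The control loop can be sketched with a textbook discrete PID acting on a toy first-order plant that stands in for the LSTM population model. The plant gain and PID gains below are illustrative, not the tuned values used in this study, and the noise injection is omitted so the example is deterministic:

```python
class PID:
    """Textbook discrete PID controller; gains are illustrative only."""
    def __init__(self, kp, ki, kd, dt=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, reference, measured):
        error = reference - measured
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant: average directedness relaxes toward -0.004 * EF (negative
# gain, since CNCC migrate toward the anode), saturated at +/-1.
# The controller gains are negative to match the plant's negative gain.
pid = PID(kp=-50.0, ki=-5.0, kd=0.0)
reference = -0.8
d = 0.0
for _ in range(300):
    ef = pid.step(reference, d)                  # controller picks the EF
    target = max(-1.0, min(1.0, -0.004 * ef))    # saturated plant response
    d += 0.2 * (target - d)                      # first-order relaxation
```

Under this toy plant the integral term settles at the EF needed to hold the average directedness at the reference, analogous to Fig. 9b.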

Figure 9

(a) The top figure depicts the closed loop design. A reference value/trajectory is picked. The error is evaluated and fed to the PID. The PID uses this value to determine the appropriate EF in order to guide the average directedness towards the reference. This EF is applied to the model of cell migration. (b) (Bottom upper left) The average directedness of all the cells (the dashed red line indicates the reference value). (Bottom upper right) The directedness of individual cells. (Bottom lower left) The voltage being applied throughout the simulation to get the appropriate response. (Bottom lower right) The error of the average directedness in relation to the reference value.
