End-to-end learning via a convolutional neural network for cancer cell line classification

Darlington A. Akogo (minoHealth AI Labs, Accra, Ghana)
Xavier-Lewis Palmer (Department of Electrical and Computer Engineering, Old Dominion University, Norfolk, Virginia, USA) (minoHealth AI Labs, Accra, Ghana)

Journal of Industry - University Collaboration

ISSN: 2631-357X

Article publication date: 12 April 2019

Issue publication date: 12 April 2019


Abstract

Purpose

Computer vision for automated analysis of cells and tissues usually includes extracting features from images before analyzing those features via various machine learning and machine vision algorithms. The purpose of this work is to explore and demonstrate the ability of a convolutional neural network (CNN) to classify cells pictured via brightfield microscopy without the need for any feature extraction, using a minimum of images, improving workflows that involve cancer cell identification.

Design/methodology/approach

The methodology involved a quantitative measure of the performance of a convolutional neural network in distinguishing between two cancer cell lines. In their approach, the authors trained, validated and tested their six-layer CNN on 1,241 images of the MDA-MB-468 and MCF7 breast cancer cell lines in an end-to-end fashion, allowing the system to distinguish between the two cancer cell types.

Findings

They obtained a 99% accuracy, providing a foundation for more comprehensive systems.

Originality/value

Value can be found in that systems based on this design can be used to assist cell identification in a variety of contexts, while a practical implication is that these systems can be deployed to assist biomedical workflows quickly and at low cost. In conclusion, this system demonstrates the potential of end-to-end learning systems for faster and more accurate automated cell analysis.

Citation

Akogo, D.A. and Palmer, X.-L. (2019), "End-to-end learning via a convolutional neural network for cancer cell line classification", Journal of Industry - University Collaboration, Vol. 1 No. 1, pp. 17-23. https://doi.org/10.1108/JIUC-02-2019-002

Publisher

Emerald Publishing Limited

Copyright © 2019, Darlington A. Akogo and Xavier-Lewis Palmer

License

Published in Journal of Industry-University Collaboration. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

Convolutional neural networks were developed initially in the 1980s under the name Neocognitron (Fukushima, 1980; Fukushima et al., 1983; Fukushima, 1987). They are broadly part of a wide set of models called multi-stage Hubel-Wiesel architectures. In 1989, LeCun and colleagues introduced LeNet, which simplified the architecture and used the back-propagation algorithm to train the entire network in a supervised fashion (LeCun et al., 1989). The architecture was successful for tasks such as optical character recognition and handwriting recognition. Convolutional neural networks have been an important aspect of deep learning in recent years and were largely responsible for the re-emergence and popularity of neural networks. The work of Krizhevsky, Sutskever and Hinton, which won the ImageNet Large Scale Visual Recognition Competition in 2012 (ILSVRC-2012), was disruptive across the artificial intelligence, machine learning and computer vision communities (Krizhevsky et al., 2012). Since then, convolutional neural networks have been heavily applied to all sorts of problems, from object detection and image segmentation (Chen et al., 2014; Redmon et al., 2015; Ren et al., 2015) to specific domains like medical image analysis (Albarqouni et al., 2016; van Grinsven et al., 2016; Yang et al., 2017; Esteva et al., 2017).

Their effectiveness can be attributed to their ability to handle translation invariance in images by relying on shared weights, and to exploit spatial locality by enforcing a local connectivity pattern between neurons of adjacent layers. We chose them for this reason, knowing we wanted a model that could visually detect and differentiate between breast cancer cell lines such as MDA-MB-468 and MCF7 in an end-to-end fashion. According to the American Cancer Society, breast cancer is the leading diagnosed cancer for American women, not including skin cancer, with more than 250,000 new cases of invasive breast cancer and more than 40,000 deaths expected as of 2017, making it an important target to address stateside (DeSantis et al., 2017). The need is even greater in developing regions abroad, where care is less accessible and where a lack of sophisticated equipment, reagents and other resources may hinder detection. In efforts to diagnose and treat cancer, tools that can assist less equipped labs are increasingly important. To this end, we present a tool that can distinguish between images of cell lines captured via brightfield microscopy, without additional preparation, and that may assist automated detection tools and diagnoses.

2. Data

We used a collection of 1,241 grayscale images of MDA-MB-468 and MCF7 breast cancer cells for training, validating and testing our model. Sample images from the data set can be seen in Figure 1. Our data set contains 664 MDA-MB-468 breast cancer cell images and 577 MCF7 breast cancer cell images, as shown in Figure 2. MDA-MB-468 and MCF7 cells were cultured, placed into three separate six-well cell plates and imaged at 400X via brightfield microscopy. Images were separated into brightly and dimly lit categories and then tiled into 128×128 pixel images for analysis. Given the breadth of the wells and the range of cell positions imaged, lighting differed across images, adding a challenge that reflects practical research realities and the kind of variation that makes automated cell detection difficult.
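The tiling step can be reproduced with a short script. Below is a minimal sketch, assuming the raw micrographs are stored as grayscale image files and using Pillow and NumPy; the authors' actual tiling tooling is not specified in the paper.

```python
# Minimal tiling sketch (assumed tooling): cut a grayscale brightfield
# micrograph into non-overlapping 128x128 pixel tiles.
import numpy as np
from PIL import Image

def tile_image(path, tile_size=128):
    """Return a list of tile arrays cut from the image at `path`."""
    img = np.array(Image.open(path).convert("L"))  # load as grayscale
    h, w = img.shape
    tiles = []
    for y in range(0, h - tile_size + 1, tile_size):
        for x in range(0, w - tile_size + 1, tile_size):
            tiles.append(img[y:y + tile_size, x:x + tile_size])
    return tiles
```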

The data set was split into training, validation and testing sets with an 8:1:1 ratio (995, 123 and 123 images, respectively). The dimensions of all images were reshaped to 128×128 pixels. The images were then standardized so that the pixel values acting as inputs to our model would have a similar range, giving more stable gradients during training. To increase the variation in our data set and ensure that the trained model generalizes beyond its training data, we further augmented the images with random horizontal flips, 5° rotations, width shifts, height shifts and zooms.
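A minimal sketch of this preprocessing and augmentation pipeline is shown below, assuming a Keras ImageDataGenerator-style workflow; the framework, the per-image standardization and the shift and zoom magnitudes are assumptions, since the paper states only the augmentation types and the 5° rotation.

```python
# Preprocessing and augmentation sketch (Keras assumed; shift/zoom magnitudes
# are illustrative, as the paper specifies only the 5-degree rotation).
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    samplewise_center=True,             # standardize each image...
    samplewise_std_normalization=True,  # ...to zero mean and unit variance
    horizontal_flip=True,               # random horizontal flips
    rotation_range=5,                   # rotations of up to 5 degrees
    width_shift_range=0.05,             # small width shifts (assumed magnitude)
    height_shift_range=0.05,            # small height shifts (assumed magnitude)
    zoom_range=0.1,                     # small zooms (assumed magnitude)
)

# Validation and test images are only standardized, never augmented.
eval_datagen = ImageDataGenerator(
    samplewise_center=True,
    samplewise_std_normalization=True,
)
```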

2.1 Model architecture

We use a six-layer convolutional neural network trained and tested on 1,241 grayscale images of MDA-MB-468 and MCF7 breast cancer cells. Our convolutional neural network architecture is based on the ScaffoldNet architecture (Akogo and Palmer, 2018). As shown in Figure 3, our network, ScaffoldNet, starts with two two-dimensional convolutional layers, each with a 3×3 kernel and 32 output filters, the first serving as the input layer. These are followed by a single two-dimensional convolutional layer, also with a 3×3 kernel, with 64 output filters.

We then introduce a two-dimensional global average pooling layer to reduce the spatial dimensions of our tensor (Lin et al., 2013). Global average pooling performs dimensionality reduction to minimize overfitting by turning a tensor with dimensions h×w×d into 1×1×d, which is achieved by reducing each h×w feature map to a single number simply by taking the average of all hw values. To further prevent overfitting, we then add a dropout regularizer with a fraction rate of 0.5 (Srivastava et al., 2014). Then, we introduce a 32-unit densely connected neural network layer into our network architecture, followed by another dropout regularizer with a 0.5 fraction rate. Our final output layer is a single unit dense neural network layer.
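The layer stack described above can be written down directly. The following is a sketch in Keras (an assumption, as the paper does not name its framework), matching the stated kernel sizes, filter counts, pooling, dropout rates, dense widths and activations.

```python
# Sketch of the described architecture (Keras assumed): two 3x3 conv layers
# with 32 filters, one 3x3 conv layer with 64 filters, global average pooling,
# dropout, a 32-unit dense layer, dropout and a single sigmoid output unit.
from tensorflow.keras import layers, models

def build_model(input_shape=(128, 128, 1)):
    return models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.GlobalAveragePooling2D(),        # h x w x d -> d
        layers.Dropout(0.5),
        layers.Dense(32, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # binary output
    ])
```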

All convolutional and densely connected layers except the output layer use the rectified linear unit activation function:

f(x) = max(0, x),
where x is the input to a neuron (Hahnloser et al., 2000).

The final single neuron output layer uses a Sigmoid activation function:

S(x) = 1 / (1 + e^(−x)),
where x is the input to a neuron and e is the base of the natural logarithm (also known as Euler's number).

Our convolutional neural network is trained end-to-end with the first-order gradient-based optimization algorithm, Adam, using the standard parameters (β1 = 0.9 and β2 = 0.999) (Kingma and Ba, 2014). Then, we use the cross-entropy loss function for binary classification:

−(y log(p) + (1 − y) log(1 − p)),
where log is the natural logarithm, y is the binary indicator (0 or 1) of whether the class label is the correct classification for the observation, and p is the predicted probability that the observation belongs to the positive class.

We train our model using mini-batches of 32. We use a learning rate (α) of 0.001, and pick the model with the lowest validation loss.
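Continuing the sketches above, the training configuration might be written as follows (Keras assumed; the checkpoint callback is one way to keep the lowest-validation-loss model, and `train_gen` and `val_gen` are assumed generators yielding batches of 32 image-label pairs built from the data described earlier).

```python
# Training sketch: Adam with the standard betas, binary cross-entropy loss,
# learning rate 0.001, and retention of the lowest-validation-loss model.
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint

model = build_model()  # from the architecture sketch above
model.compile(
    optimizer=Adam(learning_rate=1e-3, beta_1=0.9, beta_2=0.999),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

checkpoint = ModelCheckpoint(
    "best_model.h5", monitor="val_loss", save_best_only=True
)

# `train_gen` and `val_gen` are assumed batch generators (batch size 32).
model.fit(train_gen, validation_data=val_gen, epochs=8, callbacks=[checkpoint])
```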

2.2 Model training and validation

Using the training set (995 images), our convolutional neural network was trained with the Adam optimization algorithm. The cross-entropy loss function and the accuracy classification score were used as metrics. The accuracy score formula is:

accuracy(y, ŷ) = (1 / n_samples) Σᵢ 1(ŷᵢ = yᵢ),
where ŷᵢ is the predicted output for the i-th sample, yᵢ is the (correct) target output, and the sum runs over all n_samples samples.
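For reference, this score is simply the fraction of predictions that match their targets; a minimal NumPy illustration:

```python
# Accuracy as the fraction of predictions equal to their targets.
import numpy as np

def accuracy(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_pred == y_true))

print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 3 of 4 correct -> 0.75
```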

Our model was trained in eight epochs and its hyperparameters tuned using the validation set (123 images). After just the first epoch, our model had the following performance results on the validation set:

  • Accuracy score: 94.31 percent; cross-entropy loss: 0.1801.

After the eighth epoch, our model’s final performance results on the validation set were:

  • Accuracy score: 98.37 percent; cross-entropy loss: 0.0934.

2.3 Model testing and results

After all training and validation, we finally evaluated our model on the test set (123 images). The test set provides the final evaluation of a model; no changes are made to the model after these results.

ScaffoldNet’s final performance results on the test set were:

  • Accuracy score: 99.00 percent; cross-entropy loss: 0.0926.

From the results of the final evaluation, we can tell that our model generalizes well and does not overfit: the high accuracy on the validation set is consistent with the evaluation results on the test set.

To further evaluate our convolutional neural network’s output quality, we use the receiver operating characteristic (ROC) metric and its area under the curve (AUC) score. The ROC curve plots our model’s true positive rate on the Y-axis against its false positive rate on the X-axis. Our convolutional neural network classifier has a near-perfect AUC score of 0.98, as shown in Figure 4.
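The ROC curve and AUC score can be computed from the model's predicted probabilities on the test set. The sketch below uses scikit-learn and matplotlib, which are assumptions consistent with the metrics described; `y_true` holds the binary test labels and `y_prob` the network's sigmoid outputs.

```python
# ROC curve and AUC score from predicted probabilities (assumed tooling).
import matplotlib.pyplot as plt
from sklearn.metrics import roc_auc_score, roc_curve

def plot_roc(y_true, y_prob):
    fpr, tpr, _ = roc_curve(y_true, y_prob)
    auc = roc_auc_score(y_true, y_prob)
    plt.plot(fpr, tpr, label=f"AUC = {auc:.2f}")
    plt.plot([0, 1], [0, 1], linestyle="--", label="Chance")
    plt.xlabel("False positive rate")
    plt.ylabel("True positive rate")
    plt.legend()
    plt.show()
```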

3. Related work

Some earlier works exist within the domain of computer vision for automated analysis of cells and tissues. Some of these works segment individual cells from each image and then extract features such as size and shape from those cells. The extracted features are then used to train machine learning models or are further analyzed by other machine vision algorithms. Examples include extracting features from segmented blood cells and classifying them via multilayer perceptrons (Lin et al., 1998), grading cervical intraepithelial neoplasia by extracting geometrical features that are analyzed using a combination of computerized digital image processing and Delaunay triangulation analysis (Keenan et al., 2000), and localizing sub-cellular components via threshold adjacency statistics that are then analyzed by a support vector machine (Hamilton et al., 2007). Others compare extracted features and raw pixel densities analyzed via a Bayesian classifier, K-nearest neighbors, support vector machines and random forests (Lannin et al., 2016). Unlike all these works, we used deep learning in an end-to-end fashion, training our convolutional neural network to analyze raw pixel values directly, without any need for feature extraction. This drastically simplifies the process of developing automated computer vision systems for cell and tissue analysis. By eliminating feature extraction, a computer vision system can learn the important regularities of the cells themselves rather than being limited by the rules embedded in the features we extract.

4. Conclusion and outlook

We developed a convolutional neural network that accurately classifies MDA-MB-468 and MCF7 breast cancer cells after being trained on 995 brightfield breast cancer cell images, validated with 123 images and tested on a further 123 images. The convolutional neural network performed well, with a 99 percent accuracy score and a 0.98 AUC score, indicating reliability for classification purposes. We believe that this system holds promise for expansion to other cancerous and normal cell lines and to other diseases, as may be reflected in upcoming work. More importantly, it can potentially help lower barriers to care in less equipped labs.

Figures

Figure 1. Samples of MDA-MB-468 and MCF7 breast cancer cell lines that were used in training the convolutional neural network

Figure 2. The complete data set contains 664 MDA-MB-468 breast cancer cell images and 577 MCF7 breast cancer cell images

Figure 3. The architecture of our convolutional neural network, based on the ScaffoldNet architecture

Figure 4. As seen on the curve, our convolutional neural network classifier has a high AUROC score of 0.98

References

Akogo, D.A. and Palmer, X.L. (2018), “ScaffoldNet: detecting and classifying biomedical polymer-based scaffolds via a convolutional neural network”, available at: https://arxiv.org/abs/1805.08702

Albarqouni, S., Baur, C., Achilles, F., Belagiannis, V., Demirci, S. and Navab, N. (2016), “AggNet: deep learning from crowds for mitosis detection in breast cancer histology images”, IEEE Transactions on Medical Imaging, Vol. 35 No. 5.

Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K. and Yuille, A.L. (2014), “Semantic image segmentation with deep convolutional nets and fully connected CRFs”.

DeSantis, C.E., Ma, J., Goding Sauer, A., Newman, L.A. and Jemal, A. (2017), “Breast cancer statistics, 2017, racial disparity in mortality by state”, CA: A Cancer Journal for Clinicians, Vol. 67 No. 6, pp. 439-448, doi: 10.3322/caac.21412.

Esteva, A., Kuprel, B., Novoa, R.A., Ko, J., Swetter, S.M., Blau, H.M. and Thrun, S. (2017), “Dermatologist-level classification of skin cancer with deep neural networks”, Nature, Vol. 542, February, pp. 115-118

Fukushima, K. (1980), “Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position”, Biological Cybernetics, Vol. 36 No. 4, pp. 193-202, doi: 10.1007/bf00344251.

Fukushima, K. (1987), “A hierarchical neural network model for selective attention”, in Eckmiller, R. and Von der Malsburg, C. (Eds), Neural Computers, Springer-Verlag, Tokyo, pp. 81-90.

Fukushima, K., Miyake, S. and Ito, T. (1983), “Neocognitron: a neural network model for a mechanism of visual pattern recognition”, IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-13 No. 3, pp. 826-834.

Hahnloser, R., Sarpeshkar, R., Mahowald, M.A., Douglas, R.J. and Seung, H.S. (2000), “Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit”, Nature, No. 405, pp. 947-951.

Hamilton, N.A., Pantelic, R.S., Hanson, K. and Teasdale, R.D. (2007), “Fast automated cell phenotype image classification”, BMC Bioinformatics, Vol. 8, March, p. 110.

Keenan, S.J., Diamond, J., McCluggage, W.G., Bharucha, H., Thompson, D., Bartels, P.H. and Hamilton, P.W. (2000), “An automated machine vision system for the histological grading of cervical intraepithelial neoplasia (CIN)”, Journal of Pathology, Vol. 192 No. 3, pp. 351-362.

Kingma, D.P. and Ba, J. (2014), “Adam: a method for stochastic optimization”.

Krizhevsky, A., Sutskever, I. and Hinton, G.E. (2012), “ImageNet classification with deep convolutional neural networks”, Advances in Neural Information Processing Systems, Vol. 25.

LeCun, Y., Boser, B., Denker, J.S., Henderson, D., Howard, R.E., Hubbard, W. and Jackel, L.D. (1989), “Backpropagation applied to handwritten zip code recognition”, Neural Computation, Vol. 1 No. 4.

Lin, M., Chen, Q. and Yan, S. (2013), “Network in network”.

Lin, W., Xiao, J. and Micheli-Tzanakou, E. (1998), “A computational intelligence system for cell classification”.

Redmon, J., Divvala, S., Girshick, R. and Farhadi, A. (2015), “You only look once: unified, real-time object detection”.

Ren, S., He, K., Girshick, R. and Sun, J. (2015), “Faster R-CNN: towards real-time object detection with region proposal networks”.

Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. and Salakhutdinov, R. (2014), “Dropout: a simple way to prevent neural networks from overfitting”, Journal of Machine Learning Research, Vol. 15 No. 1, pp. 1929-1958.

Lannin, T.B., Thege, F.I. and Kirby, B.J. (2016), “Comparison and optimization of machine learning methods for automated classification of circulating tumor cells”, Cytometry A, Vol. 89 No. 10, pp. 922-931, doi: 10.1002/cyto.a.22993.

van Grinsven, M.J.J.P., van Ginneken, B., Hoyng, C.B., Theelen, T. and Sánchez, C.I. (2016), “Fast convolutional neural network training using selective data sampling: application to hemorrhage detection in color fundus images”.

Yang, L., Zhang, Y., Chen, J., Zhang, S. and Chen, D.Z. (2017), “Suggestive annotation: a deep active learning framework for biomedical image segmentation”.

Further reading

Hubel, D.H. and Wiesel, T.N. (1968), “Receptive fields and functional architecture of monkey striate cortex”, The Journal of Physiology, Vol. 195 No. 1, pp. 215-243.

Corresponding author

Darlington A. Akogo can be contacted at: darlington@gudra-studio.com
