
Breast tissue stained with Ki-67 where tumor is present.
#10162
Developed for tumor detection in breast slides stained with ER, PR and Ki-67
This APP automatically identifies and clearly outlines tumor regions in breast tissue slides stained with ER, PR and Ki-67.
The APP is based on AI/deep-learning technology and has been trained to handle large variation in slide images by including data from different sites, scanners, markers, vendors, and antibody clones. The training data were created using Visiopharm's patent-protected VirtualDoubleStaining™ (VDS) method, in which the true tumor regions are identified from a tumor marker, eliminating subjectivity.
The performance of the APP was validated on 111 Ki-67 images exhibiting large variation in staining and morphological patterns. The results, listed in the table below, show good agreement.
|                    | Sensitivity | Specificity | Dice Score |
|--------------------|-------------|-------------|------------|
| Mean               | 0.69        | 0.92        | 0.70       |
| Standard Deviation | 0.21        | 0.08        | 0.18       |
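The three validation metrics above are standard measures of agreement between a predicted segmentation and the ground truth. As a minimal illustration (the masks below are hypothetical toy data, not from the APP's validation set), they can be computed from pixel-wise true/false positives and negatives:

```python
import numpy as np

# Hypothetical binary masks (1 = tumor): VDS-derived ground truth vs. prediction.
truth = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 0, 0]])
pred = np.array([[1, 0, 0, 0],
                 [1, 1, 1, 0],
                 [0, 0, 0, 0]])

tp = np.sum((truth == 1) & (pred == 1))  # tumor pixels correctly detected
fp = np.sum((truth == 0) & (pred == 1))  # non-tumor pixels marked as tumor
fn = np.sum((truth == 1) & (pred == 0))  # tumor pixels missed
tn = np.sum((truth == 0) & (pred == 0))  # non-tumor pixels correctly excluded

sensitivity = tp / (tp + fn)        # fraction of true tumor recovered
specificity = tn / (tn + fp)        # fraction of non-tumor correctly excluded
dice = 2 * tp / (2 * tp + fp + fn)  # overlap between prediction and truth
```

For these toy masks the sensitivity is 0.75, the specificity 0.875, and the Dice score 0.75; in the validation above, the same quantities were averaged over 111 images.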
The APP can be used to quantify the amount of tumor present in a tissue slide, or it can be combined with a nuclei-quantification APP run within the detected tumor regions, such as 10002 – ER, Breast Cancer; 10003 – PR, Breast Cancer; or 10004 – Ki-67, Breast Cancer. This enables a fully automatic workflow for tumor assessment.
Quantitative Output Variables
The output variable obtained from this protocol is:
Workflow
Step 1: Load and run the APP “10162 – IHC, Tumor Detection, AI” on a full slide or within a drawn region.
Methods
The APP was developed using the DeepLabv3+ neural network. The network uses a cascade of layers of nonlinear processing units for feature extraction and transformation, with each successive layer taking the output of the previous layer as input. DeepLabv3+ uses an encoder-decoder structure with atrous spatial pyramid pooling (ASPP), which encodes multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view. Instead of using step-wise upsampling blocks to incorporate features from different levels, the network needs only two upsampling steps, making it faster to train and to run than, e.g., the U-Net. The decoder module can also refine the segmentation results more precisely along object boundaries. For more information on the network architecture, see [1].
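The core idea behind ASPP is the atrous (dilated) convolution: spacing the kernel taps `rate` samples apart enlarges the field-of-view without adding weights. A minimal 1-D sketch (an illustration only, not the APP's actual implementation):

```python
import numpy as np

def atrous_conv1d(signal, kernel, rate):
    """1-D atrous (dilated) convolution: kernel taps are spaced `rate`
    samples apart, enlarging the receptive field without extra weights."""
    k = len(kernel)
    span = (k - 1) * rate + 1  # effective receptive field of the dilated kernel
    out = [
        sum(kernel[j] * signal[i + j * rate] for j in range(k))
        for i in range(len(signal) - span + 1)
    ]
    return np.array(out)

signal = np.arange(10, dtype=float)
kernel = [1.0, 1.0, 1.0]

# rate 1 behaves like an ordinary convolution; larger rates probe wider context.
narrow = atrous_conv1d(signal, kernel, rate=1)  # 3 taps span 3 samples
wide = atrous_conv1d(signal, kernel, rate=3)    # same 3 taps span 7 samples
```

ASPP applies several such convolutions in parallel at different rates and pools their outputs, which is how the network captures both fine and coarse context in one pass.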
Staining Protocol
There is no staining protocol available.
Additional information
To run the APP, an NVIDIA GPU with a minimum of 4 GB of RAM is required.
Keywords
Tumor, Tumor Detection, Breast Cancer, Immunohistochemistry, ER, PR, Ki-67, Deep Learning, AI, Image Analysis
References
1. Chen, L.-C., et al., Encoder-decoder with atrous separable convolution for semantic image segmentation. Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 801-818. arXiv:1802.02611