Pathologists examine tissue samples by first staining them. However, standard staining procedures in histopathology are time-consuming and require specialized laboratory infrastructure, chemical reagents, and skilled technicians. Staining variability across different laboratories and technicians can also lead to misdiagnosis. In addition, currently used histochemical staining techniques do not preserve the original tissue sample, since each step of the procedure alters it irreversibly.
As artificial intelligence (AI) advances, researchers are applying AI techniques to improve pathology workflows. A recent study from the University of California, Los Angeles (UCLA) used deep neural networks to virtually stain microscopic images of unlabeled tissue. The study was published in Intelligent Computing.
Deep neural networks have already been applied to virtually stain images of unlabeled tissue sections, avoiding the laborious and time-consuming chemical histological staining process. However, bottlenecks remain. “In all existing label-free virtual staining methods, obtaining in-focus images of the unlabeled tissue sections is essential,” the authors said. “In general, autofocusing is a critical but time-consuming step in optical microscopy scanning.”
The most commonly used autofocusing approach requires many focus points across the tissue area, each with high focusing precision; at each point, the best focal plane is found by an iterative search algorithm, which is time-consuming and may cause photodamage and photobleaching of the sample.
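To illustrate why this step dominates scan time, here is a minimal sketch of an iterative focal-plane search of the kind described above. It is not the authors' implementation: the `acquire(z)` stage/camera callback and the gradient-based sharpness metric are assumptions for illustration. Each candidate focal plane costs one more exposure of the sample, which is how many focus points per slide add up to long acquisition times and photodamage.

```python
import numpy as np

def sharpness(img: np.ndarray) -> float:
    """Gradient-energy focus metric: higher means sharper."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(gx**2 + gy**2))

def autofocus(acquire, z_range=(-10.0, 10.0), coarse_steps=9, refine_iters=4):
    """Iteratively search the axial position z for the sharpest image.

    `acquire(z)` is a hypothetical callback that moves the stage to z and
    returns a 2D image; every call exposes the sample again.
    """
    lo, hi = z_range
    # Coarse sweep over evenly spaced focal planes.
    zs = np.linspace(lo, hi, coarse_steps)
    best_z = max(zs, key=lambda z: sharpness(acquire(z)))
    # Iteratively halve the step and re-test around the current best plane.
    step = (hi - lo) / (coarse_steps - 1)
    for _ in range(refine_iters):
        step /= 2.0
        candidates = [best_z - step, best_z, best_z + step]
        best_z = max(candidates, key=lambda z: sharpness(acquire(z)))
    return best_z
```

With 9 coarse planes plus 3 images per refinement iteration, a single focus point already costs around 20 exposures; repeating this at many points across a whole slide is what the new framework avoids.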
To overcome these issues, the authors introduce a new deep learning-based framework for rapid virtual staining. “This framework uses an autofocusing neural network (called Deep-R) to digitally refocus defocused autofluorescence images,” they explain. “A virtual staining network is then used to transform the refocused images into virtually stained images.”
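The two-stage design described above can be sketched as one network composed after another. This is only a structural sketch, assuming PyTorch and a placeholder convolutional block; the real Deep-R and virtual staining architectures in the paper are far more elaborate, and the names `deep_r` and `virtual_stain` are illustrative.

```python
import torch
import torch.nn as nn

class ConvStage(nn.Module):
    """Tiny image-to-image network, a stand-in for each real stage."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_ch, kernel_size=3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

# Stage 1: digitally refocus a defocused autofluorescence image (1 channel).
deep_r = ConvStage(in_ch=1, out_ch=1)
# Stage 2: map the refocused image to a virtually stained RGB image.
virtual_stain = ConvStage(in_ch=1, out_ch=3)

def rapid_virtual_stain(defocused: torch.Tensor) -> torch.Tensor:
    refocused = deep_r(defocused)      # replaces mechanical fine-focusing
    return virtual_stain(refocused)    # brightfield-like stained output
```

Because refocusing happens digitally in stage 1, the microscope can capture coarsely focused images quickly and leave the fine focusing to the network.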
Compared with the standard virtual staining framework, the new framework demonstrated by the authors uses fewer focus points and relaxes the focusing precision required at each point, working from coarsely focused whole-slide autofluorescence images of the tissue.
This new virtual staining framework can greatly shorten the autofocusing time and the entire image acquisition process. According to the authors, the “deep learning-based framework reduces the total image acquisition time needed for the virtual staining of a label-free whole-slide image (WSI) by about 32%, which also results in an approximately 89% reduction in the autofocusing time per tissue slide.”
Despite some loss of image sharpness and contrast compared to the standard virtual staining framework, the new framework can still produce high-quality virtual staining that closely matches the corresponding histochemically stained ground truth images. Moreover, it can also serve as an add-on to improve the robustness of the standard virtual staining framework.
This rapid virtual staining framework holds promise for further development. “This rapid virtual staining workflow can also be extended to many other stains, such as Masson’s trichrome stain, Jones silver stain, and immunohistochemical (IHC) stains,” the authors said. “Although the rapid virtual staining approach presented here was demonstrated on label-free autofluorescence imaging of tissue sections, it can also be used to accelerate the virtual staining workflows of other label-free microscopy modalities.”
Yijie Zhang et al, Virtual staining of defocused autofluorescence images of unlabeled tissue using deep neural networks, Intelligent Computing (2022). DOI: 10.34133/2022/9818965
Provided by Intelligent Computing
Citation: AI methods may replace histochemical tissue staining (2022, October 31) retrieved 31 October 2022 from https://medicalxpress.com/news/2022-10-artustry-intelligence-methods-histochemical.html
This document is subject to copyright. Notwithstanding any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for informational purposes only.