Neural networks enlist physics-based computations for faster, clearer image restoration

RLN processing of a raw wide-view image of a human cancer cell resulted in a clearer image than one created using a popular neural network known as Thunder. Yellow arrows show details missed by the Thunder program. The RLN image was very close to the gold-standard image created with an expensive confocal microscope. The lower panel is a side view of the cell. Credit: Li et al., Nature Methods (2022). DOI: 10.1038/s41592-022-01652-7, Creative Commons Attribution 4.0 International License

Fluorescence microscopy allows researchers to study specific structures in complex biological samples. However, the image created using fluorescent probes suffers from blurring and background noise. The latest work from NIBIB researchers and their collaborators introduces several novel image restoration strategies that create sharp images with significantly reduced processing time and computing power. The research is published in Nature Methods.

The cornerstone of modern image processing is the use of artificial intelligence, most notably neural networks that use deep learning to remove the blurring and noise in an image. The basic strategy is to teach the network to predict what a blurry, noisy image would look like without the blur and noise. The network must be trained to do this with large datasets of pairs of sharp and fuzzy versions of the same image. A significant barrier to using neural networks is the time and expense needed to create these large training data sets.
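To make that strategy concrete, the sketch below shows, in rough outline, how such supervised training typically works: a network is repeatedly shown matched blurry and sharp images and nudged to turn one into the other. The toy architecture, names, and loss function are illustrative assumptions, not the model used in the study.

```python
# A minimal, hypothetical sketch of supervised training on (blurry, sharp)
# image pairs. The tiny network and loss are stand-ins, not the paper's model.
import torch
import torch.nn as nn

denoiser = nn.Sequential(                      # toy stand-in for a restoration network
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1))
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(blurry, sharp):
    """One gradient step pushing the network's output toward the sharp target."""
    optimizer.zero_grad()
    loss = loss_fn(denoiser(blurry), sharp)
    loss.backward()
    optimizer.step()
    return loss.item()

# e.g. train_step(torch.rand(8, 1, 64, 64), torch.rand(8, 1, 64, 64))
```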

3D rendering of a cleared brain tissue slab (approximately 1.4 × 2.3 × 0.5 mm³) expressing tdTomato in axons, acquired with 0.7/0.7 NA cleared-tissue diSPIM, comparing the raw single view, dual-view joint deconvolution and the RLN prediction. The RLN prediction improves image resolution and contrast relative to the raw input. The joint deconvolution output causes artifacts and shows many fewer neurites relative to the raw input and RLN prediction, likely due to failures of registration between the two raw views. Credit: Nature Methods (2022). DOI: 10.1038/s41592-022-01652-7

Before the use of neural networks, images were cleaned up—known as deconvolution—using equations. Richardson-Lucy Deconvolution (RLD) employs an equation that uses knowledge of the blurring introduced by the microscope to clear up the image. The image is processed through the equation repeatedly to further improve it. Each pass through the equation is known as an iteration and many iterations are needed to create a clear image. The resources and time that it takes to run an image through many iterations is a main drawback of the RLD approach.
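For reference, a classic Richardson-Lucy iteration can be written in a few lines. The sketch below is a generic, textbook-style implementation that assumes a known point-spread function (PSF); it is not the authors' code. Each pass multiplies the current estimate by a correction term that compares the measured image with the estimate blurred by the PSF, which is why sharpening accumulates only gradually over many iterations.

```python
# A minimal sketch of Richardson-Lucy deconvolution (illustrative example,
# not the authors' code). Assumes the microscope's blur kernel (PSF) is known.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50):
    """Iteratively sharpen `image` given the blur kernel `psf`."""
    estimate = np.full(image.shape, 0.5, dtype=float)   # flat initial guess
    psf_flipped = psf[::-1, ::-1]                        # mirrored PSF for the "transpose" step
    for _ in range(n_iter):                              # many iterations -> slow but sharper
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + 1e-12)                # compare data to current blurred guess
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
    return estimate
```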

RLD is considered to be physics-driven because it describes the physical processes that cause the blurring and noise in an image. Neural networks are said to be data-driven because they must look at a lot of images (data) to learn what constitutes a fuzzy or clear image. The NIBIB team sought to leverage the advantages, and mitigate the disadvantages, of each method by combining them. The result is a neural network that also uses RLD: a Richardson-Lucy Network (RLN).

Noise and blur were added to computer-generated images of dots, circles and spheres to create synthetic data used to train neural networks to clean up fuzzy images. Credit: Li et al., Nat Methods 19, 1427–1437 (2022), Creative Commons Attribution 4.0 International License

By design, a neural network detects characteristics in the matched pairs that will help it learn how to make a fuzzy image clear. Interestingly, the scientists who designed these networks generally do not know the specific characteristics the network is using to accomplish this feat. What is known is that the features detected by the network are based at least in part on physical properties of the microscope and so can be represented by equations.

The team developed a training regimen that incorporates RLD-like equations into the neural network, adding information about the physical properties distorting the image. The most helpful equations are recycled through the network, accelerating the learning process. Thus, the iterative equations of RLD were built into the neural network to create RLN.
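One common way to realize this kind of physics-guided design is to "unroll" a few Richardson-Lucy-style multiplicative updates as network layers and let small learned operators stand in for the blur and its transpose. The sketch below is a hypothetical illustration of that general idea in PyTorch; it is not the published RLN architecture.

```python
# A hypothetical sketch of physics-guided unrolling: a few RL-style updates
# with small convolutional blocks playing the role of the forward/backward
# blur operators. This is NOT the published RLN architecture.
import torch
import torch.nn as nn

class UnrolledRLBlock(nn.Module):
    def __init__(self, channels=1, n_steps=3):
        super().__init__()
        # learnable stand-ins for "blur" and "transpose blur"
        self.forward_ops = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(n_steps)])
        self.backward_ops = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(n_steps)])

    def forward(self, blurry):
        estimate = blurry.clone()
        for fwd, bwd in zip(self.forward_ops, self.backward_ops):
            blurred = fwd(estimate)
            ratio = blurry / (blurred.abs() + 1e-6)   # RL-style data comparison
            estimate = estimate * bwd(ratio)          # multiplicative update
        return estimate

# e.g. restored = UnrolledRLBlock()(torch.rand(1, 1, 64, 64))
```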

"We think of this approach as "guiding" the neural network's learning process," explains Yicong Wu, Ph.D., a senior author of the study. "Put simply, the guidance helps the network learn more rapidly."

Tests on images of worm embryos showed that RLN improved both training and image processing compared to other deep learning programs now in use. The number of parameters needed to train the network using RLN was dramatically reduced, from as many as several million to fewer than 20,000. The processing time to obtain clear images of the embryos was also greatly reduced, with RLN taking only a few seconds to resolve an image compared with the 20 seconds to several minutes needed by other popular neural networks.

Wide-view and close-up of raw images of human cancer cells processed with RLD and RLN. Arrows in the close-up view show better detail in images processed with RLN. Credit: Li et al., Nat Methods 19, 1427–1437 (2022), Creative Commons Attribution 4.0 International License

Although RLN accelerates the training process, the datasets of fuzzy and clear images needed to train the network are difficult to obtain or create from scratch. To address the problem, the researchers ran RLD equations in reverse to rapidly create synthetic data sets for training. Computer-generated images were created with a mixture of dots, circles, and spheres—termed mixed synthetic data. Based on measurements from out-of-focus cells, blur was added to the synthetic images. Background noise was further added to create blurry, noisy images of the computer-generated synthetic shapes. The pairs of clear and fuzzy synthetic mixed shapes were used to train a neural network to restore actual images of live cells.
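The sketch below gives a rough sense of how such synthetic training pairs can be produced. It is a simplified, hypothetical example in which a Gaussian blur and additive noise stand in for the measured out-of-focus blur and background noise described in the paper.

```python
# A minimal sketch of generating synthetic (clean, degraded) training pairs,
# loosely following the paper's description. A Gaussian blur stands in for
# the measured out-of-focus blur; additive noise mimics background noise.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def synthetic_pair(size=128, n_objects=30, blur_sigma=2.0, noise_level=0.05):
    clean = np.zeros((size, size), dtype=np.float32)
    yy, xx = np.mgrid[:size, :size]
    for _ in range(n_objects):
        cy, cx = rng.integers(0, size, 2)
        r = rng.integers(1, 6)                                  # dots and small circles
        clean[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] += rng.uniform(0.3, 1.0)
    blurry = gaussian_filter(clean, blur_sigma)                 # simulate optical blur
    blurry += noise_level * rng.standard_normal(clean.shape)    # add background noise
    return clean, np.clip(blurry, 0, None)

clean_img, noisy_img = synthetic_pair()                         # one training pair
```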

The experiment demonstrated that RLN trained on synthetic data outperformed RLD in creating clear images of the out-of-focus cells. Impressively, RLN cleared up many fine structures in the images that RLD failed to detect.

"The success we are having with using synthetic data to train neural networks is very exciting," explained Hari Shroff, Ph.D., one of the lead authors of the study. "Creating or obtaining data sets for training has been an enormous bottleneck in image processing. This combination of the findings in this work—that synthetic data really works, especially when used with RLN—has the potential to usher in a new era in image processing that we are vigorously pursuing."

The group is extremely excited about another aspect of the work. They found that synthetic data sets created to restore images of a specific subject, such as live cells, were also able to restore fuzzy images of completely different subjects, such as human brains. They describe this as the "generalizability" of synthetic training. The team is now moving full speed ahead to see how far such generalizability can be taken to accelerate the creation of high-quality images for biological research.

More information: Yue Li et al, Incorporating the image formation process into deep learning improves network performance, Nature Methods (2022). DOI: 10.1038/s41592-022-01652-7
Journal information: Nature Methods

