Improving the workhorse: Artificial intelligence, hardware innovations boost confocal microscope's performance

Multiview Confocal Super-Resolution Microscopy: mouse esophageal tissue

Image: Mouse esophageal tissue slab (XY view), immunostained for tubulin (cyan) and actin (magenta), imaged in triple-view SIM mode.

Credit: Yicong Wu and Xiaofei Han et al., Nature, 2021

WOODS HOLE, Mass. – Since artificial-intelligence pioneer Marvin Minsky patented the principle of confocal microscopy in 1957, the confocal microscope has become a workhorse in life science laboratories worldwide, thanks to its superior contrast over traditional wide-field microscopy. Yet confocal microscopes are not perfect. They boost resolution by imaging just one focused point at a time, so it can take a long time to scan an entire, delicate biological sample, exposing it to light doses that can be toxic.

To push confocal imaging to an unprecedented level of performance, a collaboration at the Marine Biological Laboratory (MBL) has invented a “kitchen sink” confocal platform that borrows solutions from other high-performance imaging systems, adds a unifying thread of “Deep Learning” artificial intelligence algorithms, and improves the volumetric resolution of the confocal more than 10-fold while reducing phototoxicity. Their report on the technology, called “Multiview Confocal Super-Resolution Microscopy,” was published online this week in Nature.

“Many laboratories have confocals, and if they can get more performance out of them by using these artificial intelligence algorithms, then they don’t have to invest in a whole new microscope. To me, that’s one of the best and most exciting reasons to apply these AI methods,” said senior author and MBL Fellow Hari Shroff of the National Institute of Biomedical Imaging and Bioengineering.

Among its innovations, the new confocal platform uses three objective lenses, making it possible to image a wide range of sample sizes, from nuclei and neurons in a C. elegans embryo to the entire adult worm. Multiple views of the sample are recorded, registered, and rapidly fused to yield reconstructions with enhanced resolution compared to a single confocal view. The platform also introduces innovative scan heads for the three objectives, making it simple to add line-scanning illumination to the microscope base.
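The paper describes its own registration and fusion pipeline; as a rough illustration of the fusion step, the sketch below uses joint Richardson-Lucy deconvolution, a standard approach for combining pre-registered multiview data. The Gaussian point-spread functions and their widths are illustrative placeholders, not the instrument's measured optics:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical PSF widths (pixels) for three views; a real system would
# use measured or optically modeled PSFs for each objective.
PSF_SIGMAS = [(1.0, 3.0), (3.0, 1.0), (2.0, 2.0)]

def blur(img, sigma):
    """Forward model for one view: convolution with a Gaussian PSF."""
    return gaussian_filter(img, sigma)

def joint_richardson_lucy(views, sigmas, n_iter=30):
    """Fuse pre-registered views by joint Richardson-Lucy deconvolution:
    each iteration multiplies the estimate by the correction ratio
    averaged over all views. Gaussian PSFs are symmetric, so the
    adjoint of the blur equals the blur itself."""
    estimate = np.mean(views, axis=0)
    for _ in range(n_iter):
        correction = np.zeros_like(estimate)
        for v, s in zip(views, sigmas):
            predicted = blur(estimate, s) + 1e-12  # avoid divide-by-zero
            correction += blur(v / predicted, s)
        estimate *= correction / len(views)
    return estimate

# Demo with synthetic data: one bright point seen through three PSFs.
truth = np.zeros((64, 64)); truth[32, 32] = 1.0
views = [blur(truth, s) for s in PSF_SIGMAS]
fused = joint_richardson_lucy(views, PSF_SIGMAS)
```

Because each view blurs the sample along a different axis, the views jointly constrain the estimate more tightly than any single view can, which is why the fused reconstruction recovers resolution lost in each individual image.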

In addition, the team added “super-resolution capability” to the platform (enhanced resolution beyond the diffraction limit of light) by adapting techniques from structured illumination microscopy.

“The hardware summit being climbed in this platform is the multiple lenses around the sample, and then the super-resolution trick, which requires a combination of hardware and computation to achieve. It’s a tour de force, but it’s a pretty phototoxic recipe. There is a lot of light being delivered to the sample,” said co-author and MBL Fellow Patrick La Rivière of the University of Chicago.

One way to address phototoxicity is to lower the light coming from the microscope’s laser. But then you start having problems with “noise” in the image – background graininess that can obscure the fine details of the object you want to image (the “signal”). This is where artificial intelligence comes in.
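The trade-off is quantitative: photon detection in fluorescence imaging follows Poisson statistics, so the signal-to-noise ratio scales roughly with the square root of the photon count. A small numpy sketch (with illustrative photon budgets, not values from the paper) makes the relationship concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.full((64, 64), 1.0)  # uniform "sample" with unit brightness

# Poisson statistics: SNR ~ sqrt(mean photon count), so cutting laser
# power directly degrades image quality.
for photons in (4, 100, 2500):
    noisy = rng.poisson(truth * photons)
    snr = noisy.mean() / noisy.std()
    print(f"{photons:5d} photons/pixel -> SNR ~ {snr:.1f}")
# Expected output (approximately):
#     4 photons/pixel -> SNR ~ 2.0
#   100 photons/pixel -> SNR ~ 10.0
#  2500 photons/pixel -> SNR ~ 50.0
```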

The team trained a Deep Learning computer model, or neural network, to distinguish between poorer-quality images with a low signal-to-noise ratio (SNR) and better images with a higher SNR. “Eventually, the network could predict the higher-SNR images, even given a fairly low-SNR input,” Shroff said.
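In outline, such a network is trained on matched pairs of low- and high-SNR acquisitions of the same field of view. The PyTorch sketch below shows that general supervised recipe; the toy architecture, layer sizes, and random placeholder data are assumptions for illustration, not the deeper network described in the paper:

```python
import torch
import torch.nn as nn

class DenoiseNet(nn.Module):
    """Toy convolutional network mapping low-SNR images to high-SNR
    ones. The residual connection means the network only has to learn
    the correction, not the whole image."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

model = DenoiseNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# In practice these would be matched image pairs acquired at low and
# high illumination; random tensors stand in here as placeholders.
low_snr, high_snr = torch.rand(8, 1, 64, 64), torch.rand(8, 1, 64, 64)
for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(low_snr), high_snr)
    loss.backward()
    optimizer.step()
```

Once trained, the network is applied to new low-light acquisitions, so the sample receives the gentle light dose while the output approaches gold-standard quality.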

“Deep Learning allows you to take this hardware summit as the gold standard for resolution and then train a neural network to achieve similar results with much lower-SNR data, many fewer acquisitions, and thus a much lower light dose to the sample,” La Rivière said.

The team demonstrated the platform’s capabilities on more than 20 different fixed and live samples, targeting structures ranging from less than 100 nanometers to one millimeter in size. These included protein distributions in single cells; nuclei and developing neurons in C. elegans embryos, larvae, and adults; myoblasts in Drosophila imaginal wing discs; and mouse kidney, esophageal, heart, and brain tissues. The researchers also see potential applications for imaging human tissue in histology and pathology laboratories.

Shroff, La Rivière, and co-author and cell biologist Daniel Colón-Ramos of the Yale School of Medicine have been collaborating at the MBL for nearly a decade to develop imaging technologies with higher speed, higher resolution, and longer imaging duration. Collaborators on this confocal platform also included Applied Scientific Instrumentation, a company they have worked with both at the MBL and at the National Institutes of Health.

Yicong Wu, first author on the paper, built the new confocal platform and implemented its Deep Learning approaches. Wu learned how to use Deep Learning at the MBL in the pilot version of a new course launched this year, DL@MBL: Deep Learning for Microscopy Image Analysis. (La Rivière is a faculty member in the course.)

“It’s a testament to the course that Yicong was able to learn Deep Learning methods in 4 days and quickly innovate with them, so we can now apply them in our lab,” Shroff said. “It’s a short feedback loop, isn’t it? It was great that the MBL catalyzed it.”


