Stem Cells and AI: Better Together

One day in the future when you need medical care, someone will examine you, diagnose the problem, remove some of your body’s healthy cells, and then use them to grow a cure for your ailment. The therapy will be personalized, attuned to your body, your genes and the microbes that live in your gut. This is the dream of modern medical science in the field of “regenerative medicine.”


Left: Transmitted light brightfield image of tissue-engineered retinal pigment epithelium (RPE). Mature RPE express melanin, which is a pigment that absorbs light to yield the darkened regions in the image. The individual cells can be seen as the small circular shapes that are about 0.01 to 0.02 mm in diameter. Right: The quantitative “absorbance” image with a calibrated absorbance scale on the bottom right. Artificial intelligence algorithms were able to detect subtle patterns in the pigmentation, not apparent to humans, that could predict the quality of an RPE specimen. Image credit: N.J. Schaub et al.

There are many obstacles standing between this dream and its implementation in real life, however. One obstacle is complexity.

Cells differ from one another in so many ways that scientists have a hard time predicting what they will do in any given therapeutic scenario. Living products involve literally millions of parameters, and that means millions of ways a medical therapy could possibly go wrong.

“It is notoriously difficult to characterize cell products,” says Carl Simon, a biologist at the National Institute of Standards and Technology (NIST). “They are not stable, and they are not homogenous, and the test methods for characterizing them have large error bars.”

Simon and his colleagues want to change that by narrowing down the possibilities and increasing the chance that the doctor will know exactly what these cells will do.

One of the keys is good measurement. Scientists need to be able to measure what happens in cells as they are manufactured into medical products, but how do you efficiently measure something that has millions of parameters?

The cell measurement question has been plaguing medical product researchers and developers for years, including eye researchers. As some people age, they begin to lose their eyesight in a process called age-related macular degeneration (AMD). Finding an effective therapy based on stem cells could mean an increased quality of life for people around the world. Personalized, regenerative medicine seems like a strong possibility for this ailment, but quality assurance measurements have been slow and halting.

To help improve that quality assurance piece of the puzzle, Simon’s team was working with Kapil Bharti, a researcher at the National Eye Institute at the National Institutes of Health, to use a new kind of microscopy to examine lab-grown eye tissues for treating blindness.

One day when Simon and Nicholas Schaub, one of the postdoctoral researchers on Simon’s team, were experimenting with computers in the lab, it struck Schaub that the free, open-source artificial intelligence (AI) program he’d used to narrow down good investment choices for personal finance projects might be useful for their research.

They took data they’d collected from their experiments with Bharti — which is normally very difficult to decipher — and applied a type of AI program called deep neural networks.

The results came back with an astounding rate of accuracy: the AI program made only one incorrect prediction about cell changes out of the 36 predictions it was asked to make, an accuracy of roughly 97%.

The AI program they used was based on a well-known model architecture, GoogLeNet. From annotated images of cells, the program learned to predict cell function in different scenarios and settings, and it could soon rapidly analyze images of the lab-grown eye tissues and classify them as good or bad. Once trained, such an AI program can classify eye tissues faster and more accurately than any human.
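
To make that concrete, here is a minimal sketch of what this kind of training can look like, assuming a PyTorch/torchvision setup, an ImageNet-pretrained GoogLeNet backbone, and a hypothetical folder of images annotated as “good” or “bad.” The article does not describe the team’s actual training pipeline, so every file path and setting below is illustrative.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import ImageFolder

# Standard ImageNet-style preprocessing for a GoogLeNet backbone.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: rpe_images/good/*.png and rpe_images/bad/*.png,
# one subfolder per label, as annotated by the biologists.
dataset = ImageFolder("rpe_images", transform=preprocess)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

# Start from an ImageNet-pretrained GoogLeNet and swap the final layer
# for a two-class head (good vs. bad tissue).
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.aux_logits = False  # train with the main classifier head only
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        logits = model(images)
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
```

Starting from pretrained weights rather than training from scratch is a common choice when the annotated dataset is small, though the article does not say which approach the team took.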

What was most novel in this case, however, was that the numbers being fed into the software came from one of the oldest pieces of technology in biology: a basic microscope set up to gather what are called “brightfield” images. This team’s effort paired one of the most modern ways of doing research with one of the oldest.

Eyeballing the Math

Brightfield microscopy goes back 400 years, to when European scientists first discovered a way to see cells using magnifying lenses, a metal tube and a light source. In this method, a sample is lit from below. As denser parts of it absorb the light, they appear dark against a bright background. It remains one of the most ubiquitous tools for biological research in use today, and technological advances have drastically improved the scale and detail that can be seen through a microscope.

But brightfield microscopy has one big limiting factor. It is hard to make precise measurements with a microscope. You can take photos of cells on your own microscope, but comparing them to photos taken on other microscopes in other labs remains a subjective activity, in part because microscopes and the data that they yield can vary widely. The lenses, mirrors, light sources and cameras in another lab may be very different from yours. Until now there’s been no good way for labs to report results to each other and ensure that those results would be reproducible and reliable in a different environment. Too many things were being measured by too many different types of tools.

The combination of those limiting factors and the huge number of parameters in cell research has made it hard to get the highly precise data that is needed to reliably biomanufacture a tissue product for human clinical trials.

Simon and Schaub had a theory, though, for improving brightfield microscopy measurements: Maybe it was all about math.

According to Simon, the process is fairly simple.

You turn on the microscope and measure the background intensity of light that shines through the lens by taking a photo.

Then, you put the lab-grown eye tissue in the light path and take another photo.

You divide the cell photo by the background photo, pixel by pixel, to get the ratio of transmitted light, and now you know how much light the tissue absorbs. This value allows you to compare your cell measurements with someone else’s.
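
In code, that arithmetic takes only a few lines. The sketch below assumes two grayscale images loaded with NumPy and scikit-image; the file names are hypothetical, and the published QBAM method adds calibration and benchmarking steps that are not shown here.

```python
import numpy as np
from skimage import io

# Hypothetical file names: a "blank" photo of the light source alone and a
# photo of the lab-grown tissue placed in the same light path.
background = io.imread("background.tif").astype(np.float64)
specimen = io.imread("rpe_tissue.tif").astype(np.float64)

# Per-pixel ratio: the fraction of the background light the tissue transmits.
transmittance = specimen / np.clip(background, 1e-6, None)

# Standard absorbance (optical density): A = -log10(transmittance).
# Darker, more pigmented regions give higher absorbance, and the values are
# comparable across microscopes because each image is normalized by its own
# background.
absorbance = -np.log10(np.clip(transmittance, 1e-6, 1.0))
```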

“The beauty of absorbance imaging is that it can be done with existing microscopes, without the need for additional hardware or expense,” says Simon. “It only entails doing relatively simple math on the captured images, which makes the data comparable across different instruments, labs and people.” Simon and his team demonstrated that the measurement error between labs is only about 4% to 5%.

AI, he further explains, allows you to quickly scale up the math and compare millions of images to one another.

Simon’s team had already been collaborating with another NIST team, headed by computer scientist Peter Bajcsy, on image processing and machine learning problems.

Bajcsy’s team had designed a system, known as Web Image Processing Pipeline (WIPP), for processing terabyte-sized images of cells. WIPP could track cell changes in size, over time, and by cell function. But it could only process images, not evaluate them.

For the brightfield microscopy project, Bajcsy and his team added machine learning and new AI-based methods to WIPP so it could handle more input and help to evaluate the data.

IT and bioscience experts at NIST worked together to create the Web Image Processing Pipeline (WIPP) to allow anyone with an auto-imaging tool to collect, view, and manage terabyte-sized images. The user experience is a lot like using a phone mapping app to view cell changes across time, space and function.

AI for Scaling Up

The complexity of the stem cell work for macular degeneration was astounding to Bajcsy. “One can imagine the amount of data generated over 155 days — it is terabytes and terabytes for each individual patient,” he says. “Analyzing terabytes of images is not trivial.”

“When this project began, the practical application of AI was just sort of heating up and getting popular,” Bajcsy says. This was the first time he and his team had a chance to conduct some basic research with AI-based methods and make them available in WIPP to cell scientists.

The development of this efficient and effective new way of doing stem cell work, which the team calls quantitative brightfield absorbance microscopy (QBAM), took time and patience. For five long years the research team of IT scientists and biologists had to meet and try to understand each other’s fields.

“We all speak different technological languages, and sometimes understanding each other was hard,” Simon says. “There were many two-hour group phone calls that seemed to lead nowhere.”

Eventually, however, the investment of time paid off, and results started to roll in that matched the accuracy rates that Simon and Schaub had initially witnessed with their tiny test on the open-source software. The results of their collaboration with the National Eye Institute were published last week in The Journal of Clinical Investigation.

For Simon, the moral of the story is about the unique value of interdisciplinary federal research. Working at NIST means being able to walk over to a lab of experts from another field and ask them questions as needed. Often, research disciplines are isolated from one another, which is not always the most effective way for science to advance.

The other lesson, Simon says, is about the value of postdoctoral participation. “Postdocs are great. They bring in fresh ideas.”

Paper:

N.J. Schaub, N.A. Hotaling, P. Manescu, S. Padi, Q. Wan, R. Sharma, A. George, J. Chalfoun, M. Simon, M. Ouladi, C.G. Simon Jr., P. Bajcsy and K. Bharti. Deep learning predicts function of live retinal pigment epithelium from quantitative microscopy. Journal of Clinical Investigation, in-press preview published online November 14, 2019. DOI: 10.1172/JCI131187.

Source: NIST