Peer-Reviewed Article

Putting the “More” Back in Morphology: Spectral Imaging and Image Analysis in the Service of Pathology

2008; American Medical Association; Volume: 132; Issue: 5; Language: English

DOI

10.5858/2008-132-748-ptmbim

ISSN

1543-2165

Authors

Richard M. Levenson

Topic(s)

Radiomics and Machine Learning in Medical Imaging

Abstract

Although the title suggests this presentation is going to be technology-heavy, it is really issue-heavy. I would like to start with 2 preliminary remarks. One is that I kept miscalling this conference "Futurescapes" rather than "Futurescape," and I really think that is a more apt title, because there are lots of different visions of how we are going to proceed, with different timelines and different mixtures. Important questions brought up at the beginning of the talk include the challenge, "Is the biopsy going to go away?" and "What does it now mean to be a pathologist?" Differing answers to such questions can lead to very different "Futurescapes."

The other theme we have already heard is the incredibly rapid pace of developments in non–tissue-based diagnostic technology, in which pathology has not really shared. At my company, we just had a chance to talk to someone at Carl Zeiss (Oberkochen, Germany), a large and well-established company, who shared with us the astonishing fact that 60% of their revenue comes from products introduced just in the last 3 years.

I think incoming developments are going to come rolling over us in waves and in ways that we do not understand. However, in this talk, I am actually going to try to blend the old with the new. The new is the advent of molecular medicine and the onslaught of high-dimensional data: information on multiple genes and multiple proteins. However, pathology is historically rooted in morphology. I am going to put the "more" back in morphology, because I think if you separate morphology from molecular indicators, you lose an essential link to the really intricate biology that we are trying to understand.

So, we currently have morphology, the backbone of traditional pathology, often linked via immunohistochemistry to the expression of, typically, a single molecular marker. However, expression levels of single markers cannot begin to capture the complex interrelationships going on within a single cell, and of course, it is the cell that is really the prime unit in a tumor. It is not the tumor that metastasizes; it is a particular cell that decides to take the leap.

The other side of the coin, where most of the excitement has been found recently, is highly multiplexed analysis: tens of thousands of genes and potentially hundreds of thousands of proteins can be interrogated, but at the expense of the morphologic perspective. You do not know where these entities came from, from one type of cell or from a mixture, and whether or how these cells may be interacting with each other. A major theme along these lines is the interactive dance between stroma and epithelium, for example.

In the middle, the area we have been trying to work on, is an attempt to merge the best of both worlds. I am going to show you that we can measure 5 or 6 or 7 different analytes while preserving the cellular context.

For some degree of perspective on where we have been and where we are going, I am going to cover the history of pathology—in 2 slides—and its somewhat tremulous status in the molecular era. Then we will look at how to fix that, or at least how to address it. Finally, I will discuss spectral imaging and machine vision, because spectral imaging or analogous methods help with collecting the molecular, multiplexed data, but if we want to automate things, we also need computational tools that can handle the morphology and the complexity of the spatial relationships.

One question to ask is how much progress has been made since the 18th century or even before.
Figure 1, A, is a photograph of an original van Leeuwenhoek microscope. People have taken modern images through a replica using a digital camera, and it is actually remarkably capable. It is a ×295-magnification instrument, which translates into about a ×29 lens on a conventional microscope. You can see bacteria and individual cells, including a white blood cell with its nucleus. I submit to you that with the hematoxylin-eosin (H&E) stain and this microscope, you could manage 90% of what pathologists do now (Figure 1, B through D).

Let us also look at the pace of technology evolution in pathology. Although the field has been around for 300 years, we will see that things have not happened that quickly (Table). So, after the development of microscopy itself, what was the next major event in pathology? Reimbursement? Actually, no: application of the hematoxylin stain and, simultaneously, elaboration of a theoretical underpinning (the cell theory of disease) that could make sense of what was going on. Those 2 things happened around 1850.

It took another 20 years to add eosin. Next, formaldehyde. After that, 50 years passed before we were able to add protein expression data, and another 30 years before we got DNA information. The last thing that I would submit to you has been a major advance is the computer. We have been hearing about all that it can enable.

However, let me show you how vulnerable this whole field is. Let us say we agree that lots of genes are good to understand. How do you get that information? From a biopsy or other tissue-based method? Not necessarily. A Stanford group took standard computed tomography images of liver cancer, simply took a picture, measured a variety of different features from the x-ray, and correlated them with DNA expression array data; just from the computed tomography picture, the investigators were able to capture 80% of the gene expression variance.1 They went on to say, "In real life this approach would avoid the pain and risk of infection and bleeding from a biopsy." The traditional biopsy is very much under siege from cost and other standpoints. So, a very good question is not only whether, in the future, we will be looking at biopsies through microscopes, but whether we will be looking at biopsies at all. Or, to put it another way, will the biopsy suffer the fate of the autopsy? In Figure 2, the blue line represents the autopsy rate in recent times (1972–1990), and it is continuing to head down. It is pretty easy to extrapolate: there may be some forces remaining to bring it up a little, but the autopsy rate appears to be asymptotically approaching zero.

Why are clinicians reluctant to request autopsies? One reason reflects the feeling that advances in modern diagnostic techniques are reducing the need for the autopsy; I think the same could be said for the biopsy.

What are the alternatives to the classical biopsy? There is, of course, functional molecular imaging. There are serum markers, which you have already heard being promoted. One can examine circulating cells for clues to cancer status, for example. Expression arrays typically require tissue anyway, such as can be acquired by a needle biopsy or the like. In vivo microscopy, with or without molecular markers, could replace the traditional biopsy if you could get an appropriate probe into or near the tissue of interest.

So those are some of the "hot" areas of diagnostic and prognostic activities. What is not hot? Tissue- and cell-based pathology.
How can we improve the status of the biopsy? One of the issues is that, at least at a molecular level, only 1 or 2 molecular markers are evaluated per section. The problems with that are, first of all, it is not a big data set, and second, by doing that, one fails to capture possibly crucial co-relationships between markers. What is happening in a slide, in a cell, simultaneously? Are estrogen receptor (ER), progesterone receptor (PR), and Ki-67 all being expressed in that particular cell or not? You cannot determine these things using single-stained serial sections. A second issue is quantitation (eg, percentages and intensities), now key to a lot of outcomes-based medicine: if performed by eye, it has been shown to be both subjective and subject to error. Digital imaging can help, but it requires sophisticated software and a high level of caution, because a computer can tell you anything and be believable.

What are the solutions? One is to extend the capabilities of microscopy itself. There are novel optical techniques that can extract information from the biopsy with or without special stains; one example shown here conveys information mostly from connective tissue (stroma). Another technique, multispectral imaging, allows for the application of many more labels per histologic section, allowing molecular multiplexing in a way that can increase sensitivity and quantitative accuracy. Finally, machine vision and machine-learning techniques can be used to teach systems to find regions and cellular compartments and to automate accurate tissue- and cell-based quantitation.

With regard to basic microscopy, one can ask what kind of information can be relayed by light. Light conveys color and intensity information, and you can use these to look at classical tissue qualities reflecting stain binding, molecular labels, quantities, shapes, and textures. These are most if not all of what pathologists appreciate when they look at slides. Going further, one can also extract information from biologically important structured molecules, without a special stain.

With some technology that we have at Cambridge Research Instrumentation, Inc (CRI, Inc) (Woburn, Mass), you can extract the collagen signal of a standard H&E-stained slide of liver, for example, all by itself (Figure 3). You do not need to do a trichrome stain. As many cancer biologists realize, the organization of the collagen (a major component of the stroma surrounding benign and malignant epithelia) can tell a lot about what is going on in or around a tumor. We have not gone a long way with this yet, but I am showing you these examples to indicate that there are still more tricks you can play with a simple H&E slide.

Let us now talk about multispectral imaging. The difference between spectral information and color information is that the former is independent of the observer, whereas the latter is intrinsic to the human experience, color being purely a psychological construct. So yellow can indicate a lot of things. It can be a single narrow wavelength distribution, formed, perhaps, by separating light with a prism, or it can be created by combining red light and green light. These 2 yellows are thus spectrally very different, but indistinguishable to the human eye or, in fact, to a color RGB (red-green-blue) camera. This explains how multispectral imaging can help to distinguish between, say, bright green tissue autofluorescence and the green fluorescein used as a label.
They are both green; neither you nor your color camera can pull them apart, but spectrally they can be very different.

CRI's multispectral imaging instrument (Figure 4) can be added onto any microscope. Of course, with a different form factor, the same capability can be folded into automated scanning platforms.

What can you do with it? One of the important tasks you can accomplish is to get more capability out of simple immunohistochemical stains, or multiple color stains, by separating the different color labels even when they overlap. The next slide shows an RGB image (Figure 5, A) of cytokeratin and CD3 stains in a lymph node, with CD3 in brown (Figure 5, D) and cytokeratin (metastatic tumor) in red (Figure 5, E), along with a hematoxylin (blue) tissue counterstain (Figure 5, F). Nothing exotic, but multispectrally, you can separate these signals. You can view the hematoxylin counterstain all by itself and actually appreciate some very nice morphology underneath the other stains. There is no evident crosstalk between the labels, making it easy to quantitate and analyze. You can also put the different labels back together using pseudocolor, and can transmute the brightfield display into a simulated fluorescence mode (Figure 5, C), which can help highlight faint signals that might otherwise be hard to appreciate.

Why is multiplexing useful? It really does make a difference when molecular events can be monitored on a cell-by-cell basis. Figure 6 shows an example of breast tissue containing a normal breast duct (in the blue dotted oval) and invasive cancer (in the red oval) (Figure 6, A), stained simultaneously for ER (Figure 6, B) and PR (Figure 6, C), using 1 brown stain and 1 red stain. It is difficult to appreciate visually where they overlap, but multispectrally, it is simple: we can detect cellular expression of ER and PR and can locate where these 2 signals are coexpressed. The normal duct shows only ER expression, whereas the cancer is expressing both signals simultaneously. If we display colocalization in yellow, only the cancer lights up (Figure 6, D). Suddenly this can reveal some new biology. You can also express these findings quantitatively, as shown in the plot of ER and PR positivity and colocalization data from 39 patients (Figure 7), showing a subgroup with high colocalized expression. We do not yet know what the clinical implications of these patterns are, because the studies to actually correlate per-cell ER-PR coexpression with clinical outcomes have not been completed. The point is that the tools are now here to enable this kind of research, using brightfield, chromogenic, traditional immunohistochemical staining methods.

Some more intricate biology can be explored. For example, one can look at 3 nuclear stains from important cell-cycling pathways, p21, p27, and Ki-67, all labeled in brightfield using conventional red, brown, and blue chromogens; these are amenable to spectral unmixing (not shown). How many labels can we handle, just with chromogens? It is not yet clear. As a challenge, a colleague decided to stain a slide with 3 different red stains: AEC, Liquid Permanent Red, and Nova Red (Figure 8, A through C). Even though they are all red, and pretty similar, as documented in their individual spectra, we were able to pull them apart; Figure 8, C, shows the image redisplayed with each label given a pseudocolor.
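As an aside on how this kind of separation is typically formulated (this is not CRI's own code, and the function name, array shapes, and variable names are illustrative assumptions): colocalized absorbing stains combine approximately linearly in optical-density space, so after a Beer-Lambert conversion, per-pixel stain amounts can be recovered by non-negative least squares against reference spectra measured from single-stain control slides. A minimal sketch under those assumptions:

```python
# Minimal sketch: brightfield (chromogen) spectral unmixing.
# Assumes reference optical-density spectra were measured from
# single-stain control slides; names and shapes are illustrative.
import numpy as np
from scipy.optimize import nnls

def unmix_brightfield(cube, blank, ref_od):
    """cube: (H, W, B) transmitted intensities at B wavelengths;
    blank: (B,) intensities with no tissue in the light path;
    ref_od: (B, K) optical-density spectra, one column per stain.
    Returns (H, W, K) per-pixel stain amounts."""
    # Beer-Lambert: absorbances of colocalized stains add roughly
    # linearly in optical density, not in transmitted intensity.
    od = -np.log10(np.clip(cube / blank, 1e-6, 1.0))
    pixels = od.reshape(-1, cube.shape[-1])
    amounts = np.empty((pixels.shape[0], ref_od.shape[1]))
    for i, px in enumerate(pixels):
        amounts[i], _ = nnls(ref_od, px)   # non-negative least squares per pixel
    return amounts.reshape(cube.shape[0], cube.shape[1], -1)
```

Once the labels are separated into planes like this, per-cell co-positivity of the sort shown for ER and PR in Figure 6, D, could be obtained by thresholding the relevant unmixed planes and asking which cells exceed both thresholds.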
So, we can handle 3 red stains simultaneously, and of course, that leaves the green and the blue channels, if you will, still open for more multispectral exploitation. There is a limit, though: the more absorbing stains you use, the fewer photons get transmitted, and the sample becomes hard to interpret; eventually you could end up with 1 photon—very informative but hard to detect.

Because there are limitations on the number of chromogens that can be reliably used simultaneously, there is an impetus toward fluorescence, even in pathology. Fluorescence does provide a much higher level of multiplexing, especially with quantum dots or similar fluorescent tags. It can use potentially easier protocols than conventional immunohistochemistry, because you can use directly conjugated entities: if you pour a mixture of labeled antibodies on a slide and then wash, you can be done in 1 step. Fluorescence does have a claim to higher sensitivity than brightfield labels, and it can also provide better linearity and dynamic range. Pathologists do not use fluorescence much at present, for a variety of reasons. One of these is that autofluorescence can be a big problem in fixed tissue; that is to say, the tissue all by itself can glow bright green or yellow, and this glow interferes with what you are trying to look at. Photobleaching is also a problem—the signals go away as you look at them—and fluorescence microscopes are complex and expensive. Also, fluorescent samples are not archival, and possibly most damning, fluorescently stained slides do not look like their H&E-stained counterparts. Some of these problems can be addressed using multispectral approaches.

Figure 9 presents an example of using multispectral imaging to overcome autofluorescence. The sample is prostate tissue (Figure 9, A) stained with 525-nm (green) quantum dots coupled to an antibody directed against prostate-specific membrane antigen (Figure 9, B). The signal is supposed to line the lumens of the glands. However, all of the visible signal consists only of green autofluorescence, and we cannot discern any specific labeling by eye. Even if we tune our spectral discrimination filter to pass 525-nm light, to peer at just the wavelength range where the quantum dots are supposed to be emitting, it is still not possible to see specific signals, because the autofluorescence is also very bright in that same spectral region (Figure 9, B). However, with multispectral imaging, you can pull out the signal, now bright against a black background, and it can be combined with the autofluorescence signal (here functioning as something akin to a counterstain); suddenly you have something that is both credible and useful (Figure 9, D).

Figure 10 presents an example of a lymph node labeled with 5 different quantum dots plus a 4′,6-diamidino-2-phenylindole (DAPI) nuclear counterstain. The sample consists mostly of a germinal center, with a little mantle region in the top-left corner (Figure 10, A). The multispectral data set was collected using a single excitation wavelength, and acquisition time was less than 20 seconds (Figure 10, B). The spectra of the 5 quantum dots (determined from this image) are shown, along with the green "tail" of the DAPI signal, which is sufficient to allow unmixing and detection of the nuclear counterstain. The bottom image shows the location of the 5 quantum dot labels: Ki-67, blue; CD4, yellow; CD20, red; immunoglobulin (Ig) D, green; and macrophages, cyan (Figure 10, C). The DAPI nuclear signal is not shown here for clarity.
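For fluorescence, the same linear-unmixing idea applies even more directly, because measured emission is, to a good approximation, linear in label abundance; the practical point illustrated in Figures 9 and 10 is that autofluorescence can be treated as simply one more spectral component. The sketch below shows one plausible way this could look (the endmember spectra would come from single-label controls or, as noted above, from the image itself); it is an illustrative assumption, not a description of the instrument's actual software:

```python
# Minimal sketch: fluorescence unmixing with autofluorescence treated
# as just another spectral component. Illustrative only.
import numpy as np

def unmix_fluorescence(cube, endmembers):
    """cube: (H, W, B) emission intensities at B wavelengths;
    endmembers: (B, K) emission spectra, eg 5 quantum dots, the DAPI
    "tail," and a measured autofluorescence spectrum as the last column.
    Returns (H, W, K) abundance images (least squares, clipped at zero)."""
    H, W, B = cube.shape
    coeffs, *_ = np.linalg.lstsq(endmembers, cube.reshape(-1, B).T, rcond=None)
    return np.clip(coeffs.T, 0, None).reshape(H, W, -1)
```

The label planes can then be displayed bright against a dark background, with the autofluorescence plane recolored to serve as a counterstain, roughly in the manner of Figure 9, D.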
Finally, pathologists in particular recoil from fluorescence because the images natively do not resemble H&E-stained specimens (which appear as nature intended). One can take fluorescence images and, where appropriate, unmix, invert, and recolor them to generate images that look very much like brightfield, as shown in Figure 11, A through C. Because the individual layers (here, autofluorescence, histone, and neuronal stains) can be turned on and off, it is easy to appreciate the spatial context of molecular co-relationships.

To turn to the third component we see as contributing to a renaissance of tissue-based explorations, we can address the morphologic context. I will give you some quick pictures of what we have been doing at CRI with machine-learning software that performs learn-by-example segmentation. To give you a simple example of how it works, let us take a look at an image of a group of hikers looking out over a nice landscape (Figure 12, A). To teach the software to distinguish between hikers and nonhikers, you simply draw a region around some hikers and some regions around nonhikers (Figure 12, B), say, in essence, "go learn," and within a second or two, it can find the hikers, with essentially no hits in the background landscape (Figure 12, C). If it works for hikers, it should work for prostate cancer.

Moving beyond hikers, Figure 13 provides an example of segmentation in prostate cancer biopsies. The machine-learning software quickly learns, from minimal training, to detect cancer and separate it from normal prostate epithelium and stroma. Why is this kind of capability useful? Because segmentation is fully half of what you need for a tissue-based automated molecular revolution: not only do you need to be able to detect the molecular events, but you also need to make sure you are collecting them from the right regions in the tissue. So, from this example (Figure 13, A and B), you can see that we would be able to distinguish molecular events in the prostate cancer regions from those measured in normal glands and stroma—without operator assistance.

Figure 14 provides an example of this applied to the analysis of an image of breast cancer stained for PR. It is important to ensure, when you measure the nuclear PR-positivity score and intensities, that you are looking only in the breast cancer components. So what is shown is the segmentation software detecting, this time at ×20, the breast cancer regions, after which additional tools are used to measure the PR signals in the cancer nuclei, separating the intensities into a 4-bin histogram (0, 1+, 2+, 3+). This maneuver did not require human intervention to outline the cancer regions.
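To make the learn-by-example idea concrete, here is a deliberately simplified sketch in which the hand-drawn example regions (as in Figure 12, B) become training labels for a generic per-pixel classifier. The class names, the random-forest choice, and the use of raw pixel values as features are all assumptions for illustration; production tools of this kind typically add texture and neighborhood features and work on whole tissue regions and cells, not just pixels.

```python
# Minimal sketch of learn-by-example segmentation: hand-drawn example
# regions become training labels for a per-pixel classifier.
# Illustrative only; real tools use richer features and object-level logic.
from sklearn.ensemble import RandomForestClassifier

def train_from_scribbles(image, scribbles):
    """image: (H, W, C) RGB or unmixed channel data;
    scribbles: (H, W) integers, 0 = unlabeled, 1 = cancer,
    2 = normal epithelium, 3 = stroma (the drawn example regions)."""
    feats = image.reshape(-1, image.shape[-1])
    labels = scribbles.ravel()
    keep = labels > 0                       # train only on labeled pixels
    clf = RandomForestClassifier(n_estimators=50, n_jobs=-1)
    clf.fit(feats[keep], labels[keep])
    return clf

def segment(image, clf):
    """Classify every pixel; returns an (H, W) class map."""
    feats = image.reshape(-1, image.shape[-1])
    return clf.predict(feats).reshape(image.shape[:2])
```

A mask produced this way could then restrict a PR measurement like the one in Figure 14 to the cancer regions, with nuclear intensities binned into the 0, 1+, 2+, 3+ categories; the bin cut points, and the segmentation itself, would still need to be validated against pathologist review, as noted below.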
These are the elements that we hope will restore the "more" to morphology. Spectral imaging provides high levels of multiplexing. It has a potential for looking at pathway molecules, for putting together a cell-by-cell snapshot of signaling and cell regulatory status. The other component is the spatial understanding. We do not need to abandon the structural, architectural, and cytologic underpinnings of pathology. This combined approach provides a link between morphology and molecular information, and the automation component also has another benefit that can be important, especially for accelerating preclinical drug development studies; it democratizes pathology insight. Very often there are studies that require pathology input, but the pathologists are overworked and may not be available for weeks—the whole study grinds to a halt. If you have some of these automated tools—validated by in-house pathologists, no doubt—you could use them to speed up preclinical investigations.

Here are my conclusions. Ingredients to make pathology hot again: (1) whole slide scanning—although not necessarily the "virtual slide" concept that implies high-resolution images of the entire specimen; (2) automated morphologic segmentation and assessment; (3) subcellularly localized, multiplexed molecular markers; (4) quantitative accuracy; (5) interinstitutional reliability—this stuff cannot be snake oil. It really has to work around the world; and (6) results that are truly tied to prognosis and therapeutics. Thank you for your attention.

Reference(s)