Article · Open access · Peer reviewed

The Antibody Challenge

2014; Future Science Ltd; Volume: 56; Issue: 3; Language: English

10.2144/000114143

ISSN

1940-9818

Authors

Jeffrey M. Perkel

Topic(s)

Biomedical and Engineering Education

Abstract

BioTechniques Vol. 56, No. 3, Tech News. Published online: 3 April 2018.

Imagine you're just starting in a lab. Your new PI decides to test your mettle at the bench with a simple project: replicate some immunohistochemistry results from a recent publication. No problem, right? Not necessarily.

Immunohistochemistry (IHC) relies on antibodies, and antibodies, says Anita Bandrowski of the University of California, San Diego, "are extremely messy." Unlike many reagents in today's molecular biology lab, antibodies are far from the turnkey solutions commercial vendors would have you believe. Antibodypedia.com lists 1.2 million antibodies in its database. Some work in Western blots but not in IHC, others can precipitate protein complexes but come up empty in flow cytometry, and some don't work at all. More than a quarter of 246 histone-modification antibodies tested in a 2011 study were found to be non-specific; of those that were specific, 22% were unsuitable for chromatin immunoprecipitation (1). Perhaps more alarmingly, some antibodies work, but recognize the wrong target. In that 2011 study, 4 antibodies "showed 100% specificity, but for the wrong [histone] peptide," the authors reported.

"We have talked to a lot of researchers who say this is actually one of the single biggest problems that they've experienced with reagents in the lab," says Elizabeth Iorns, cofounder and CEO of Science Exchange.

Just ask David Rimm, Director of Translational Pathology at Yale University School of Medicine. Rimm is an anatomic pathologist who develops quantitative immunofluorescence assays for clinical applications.
In 2009, after his team showed that the staining patterns of five antibodies (ATF2, p21(WAF1), p16(INK4A), beta-catenin, and fibronectin) could together predict melanoma survival, Rimm began planning to prospectively test the biomarker in the clinic. First, though, he needed to prepare the assay for CLIA lab testing. Two years had passed, and the reagent stocks were depleted, so he tasked a member of his lab with purchasing new batches of antibodies against the five antigens and checking to make sure everything was working as expected. It wasn't.

At first, Rimm assumed there was a "hiccup" in the lab, but subsequent tests produced the same negative findings, which were traced to non-reproducibility of the commercial ATF2 and fibronectin reagents. In the wake of these findings, the clinical trial was put on hold, but the original paper stands. After all, the data were accurate, as far as they go. "The data are correct … for that lot of antibody."

A question of validity

To get a glimpse into the challenge of antibody validation, take a look at the Human Protein Atlas (HPA) project. The HPA seeks to document the tissue distribution of at least one isoform of every human protein-coding gene. The project, with some 150 staffers, is cranking out antibodies by the thousands: 21,984 at last count, representing 16,621 human genes. Those antibody preparations are used to drive an IHC juggernaut, with each antibody applied to 144 normal tissues, 216 cancers, and a ream of cell lines: 708 samples in all.

[Image: R&D Systems' Alex Kalyuzhny penned an editorial on antibody validation in 2009 (J Histochem Cytochem 57:1099–1101). Credit: Michael Ehlen, R&D Systems.]

Clearly, the HPA knows what it means to validate an antibody.
According to Jochen Schwenk, Associate Professor at the Royal Institute of Technology in Stockholm, Sweden, and a principal investigator on the project, the HPA has actually generated more than 50,000 antibody preparations to date.

HPA antibodies are made in rabbits from bacterially expressed protein fragments called PrESTs (protein epitope signature tags). When a serum comes back, it is affinity-purified and subjected to a battery of tests, including protein microarrays and Western blotting, to assess specificity. About 40,000 antibodies validated in this manner have progressed to the next phase of testing, which uses tissue microarrays and confocal microscopy to check tissue and subcellular distribution against previously published findings. If possible, the team checks two independent antibody preparations to ensure that the patterns match. Sometimes they cross-reference transcriptomic data, too: cells flush with a particular transcript should produce a stronger signal than cells that are not.

About 50% of the antibodies make it through this heroic process, says Fredrik Ponten, the HPA's Vice Program Director, but not all are of equally high quality. For instance, if nothing is known about a particular protein besides its sequence, and the corresponding antibody produces a distinct signal in IHC, the team will approve the antibody if no evidence exists to dispute it. "That's when you want to have solid RNA sequencing data and two antibodies for each protein," he says.

Rimm's own antibody validation process is similarly comprehensive, having evolved over the years. Initially, his team relied simply on Westerns. Then they added tissue microarrays, cell lines that overexpress the protein, and siRNAs to knock expression down, increasing the sophistication of their process as they went. Rimm published his validation algorithm in 2010 (3). "That algorithm was arguably too complex, and the flow chart we put in is too hard to follow," he concedes.
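The transcriptomic cross-check described above, in which cells rich in a given transcript should show a stronger antibody signal, amounts to asking whether staining tracks transcript abundance in rank order. The sketch below illustrates that idea with a Spearman rank correlation; the function names, cutoff, and sample numbers are hypothetical illustrations, not part of the HPA's actual pipeline.

```python
from statistics import mean

def rank(values):
    # Assign 1-based ranks, averaging ranks across ties.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman's rho is the Pearson correlation of the rank vectors.
    rx, ry = rank(x), rank(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def flag_discordant(rna_levels, ihc_signal, threshold=0.5):
    # Flag an antibody whose staining fails to track transcript
    # abundance across samples (threshold is an arbitrary example).
    rho = spearman(rna_levels, ihc_signal)
    return rho, rho < threshold

# Hypothetical data: transcript abundance and IHC staining score
# measured across the same six cell lines.
rna = [2.0, 15.0, 40.0, 90.0, 5.0, 60.0]
ihc = [0.1, 0.8, 1.9, 3.5, 0.3, 2.6]
rho, discordant = flag_discordant(rna, ihc)
print(f"rho={rho:.2f}, discordant={discordant}")  # prints rho=1.00, discordant=False
```

A rank correlation is used rather than a linear one because IHC staining scores are ordinal and need not scale linearly with transcript abundance.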
But there's no one-size-fits-all solution when it comes to validation, and assessing antibodies is important because some commercial suppliers have exceptional track records, says Bandrowski, and others, not so much. Some, for instance, provide only a Western blot cropped to highlight the band of interest, while others provide unedited, application-specific data. Henrik Wernérus, Chief Scientific Officer at Atlas Antibodies, which commercializes the HPA project antibodies, says the project tested some 5,000 commercial antibodies before making its own. On average, about half the reagents worked, but the success rate varied wildly among suppliers. "It was basically from 0% to 100% success rate for the different vendors."

[Image: Elizabeth Iorns, cofounder and CEO of Science Exchange. Credit: University of Miami.]

Ultimately, says Rimm, users must take responsibility for validating their own antibodies. "If you think about it, who's responsible for QC on an antibody that you buy?" he asks rhetorically. "Is it the company or is it the user?"

Unfortunately, validation is not so straightforward, and researchers attempting to validate their own antibodies should anticipate some "muddy waters," according to Alex Kalyuzhny, Scientific Manager for Immunocytochemistry and ELISpot assays at R&D Systems. "It's not black and white—it's grey." An antibody may fail because it truly does not recognize the target antigen, and that happens quite frequently. Clifford Saper, the James Jackson Putnam Professor at Harvard Medical School and former editor of the Journal of Comparative Neurology (JCN), recalls that when he was a young researcher, his lab attempted to produce antibodies to brain natriuretic peptide. Of 29 rabbits injected, only 3 produced an appropriate staining pattern, and those varied wildly in signal. "There's a huge amount of variation, and it's quite common to have antibodies that stain irrelevant things or artifacts."
But antibodies may also fail because they don't work in a particular assay type: for instance, because they recognize conformational epitopes that are lost when the target is denatured in a Western blot. They can fail because they recognize nonspecific targets alongside the specific one. Or they may recognize everything, or nothing. In short, says Kalyuzhny, "there are many different levels to that definition [of 'bad']."

Journals take a stand

In the early 2000s, during his tenure at JCN, Saper noticed that a growing number of research articles in his journal were being retracted because of faulty antibody data. The researchers had been quite meticulous in demonstrating their reagents' properties, but apparently not meticulous enough. They began to find that their antibodies behaved no differently in knockout mice than in wild-type tissues, a sign the reagents were not as specific as they should have been. Soon, Saper decided he'd had enough.

In 2005, he published an editorial (3) laying out submission requirements for future articles in JCN. Essentially, researchers would need to demonstrate that their antibodies really did work as advertised. In particular, they had to document the source of the antibody (researcher or company, including catalog, clone, and lot numbers), the immunogen used, the nature of the preparation (polyclonal or monoclonal, and the species in which it was raised), its specificity (i.e., that it recognizes a particular band on a Western blot), and any necessary controls (such as its behavior in knockout tissue) (see also Reference 4).

"It was not a popular stand," Saper recalls. "Authors were pretty annoyed." But the impact on authors was minimal: between 2006 and 2011, Saper says, only "two, three, or four" papers could not be published as a result of the requirements, out of "close to 2000" papers total.
Particularly difficult, he says, was the immunogen requirement; many companies initially refused to provide that information, regarding it as a trade secret. "If that company goes out of business—and in the world of biotech, companies go out of business like fireflies winking out in the night—then you have no idea what that antibody was made against, and you can't replicate the experiment any more," he explains.

Saper went to considerable effort to reach out to antibody vendors and get them on board. In the end, "virtually" every company but one agreed with him.

Although Saper has stepped down as editor, JCN still maintains the policy. The journal has even compiled a database of good antibodies published in its pages as an aid to prospective authors. The current version (V.13) contains just over 7,500 entries. "If an antibody has been used in JCN, it's been vetted," he explains.

Part of a bigger challenge

Antibody validation is one facet of a larger problem in science at the moment: data irreproducibility. In one widely discussed study, C. Glenn Begley of Amgen and Lee Ellis of the University of Texas MD Anderson Cancer Center reported that, of 53 "landmark" studies in hematology and oncology, only 6 (11%) could be replicated. "Even knowing the limitations of preclinical research, this was a shocking result," the authors wrote (5).

In part, researchers may have difficulty replicating findings because they cannot unambiguously determine what reagents and conditions were used in those studies. Companies come and go, product lines change, and catalogs are renumbered. A researcher may dig into his or her freezer and pull out an antibody from a company that's long since gone under, says Bandrowski. In that case, even if they faithfully and accurately report the reagent in the literature, what is the research community to do? "Researchers can't go back.
They can't get in the way-back machine and figure out what the catalog of an out-of-business company said when these authors actually purchased that antibody. So how do you resolve that?"

In 2013, Melissa Haendel of Oregon Health & Science University and colleagues decided to document the issue by attempting to uniquely identify the antibodies, organisms, cell lines, constructs, and knockdown reagents in 238 journal articles spanning five segments of the biological literature (6). In total, 54% of resources could be uniquely identified. Among antibodies, the figure was only 44%. "It's absolutely the case that you can't have scientific reproducibility without knowing what the ingredients of the recipe were, so to speak," says Haendel.

Bandrowski and Haendel's answer to this problem is the Resource Identification Initiative, which assigns a unique identifier, like a Social Security number, to each antibody, model organism strain, and software tool used in a research study. "There is a cultural change that really needs to happen, and that cultural change is to improve the way that we talk about these things in the literature," Bandrowski explains. Funded as part of the Neuroscience Information Framework, the RII's Antibody Registry (antibodyregistry.org) currently features 2.2 million antibodies, some of them tied into the JCN database to link reagents and references.

[Image: An example badge from the Science Exchange Independent Antibody Validation Initiative.]

The Registry, Haendel says, "is like a GenBank for antibodies," not just because it assigns unique identifiers but because it can serve as an aggregator of disparate pieces of information. Ultimately, Haendel says, researchers should be able to use the resource both to identify the best reagents for a given application and to correct the literature in the event an antibody is published and later found to be less specific than originally thought.
"You can retroactively correct for that in the context of the data," she says.

Such initiatives should help bridge the reproducibility gap. Indeed, in January, NIH Director Francis Collins and Deputy Director Lawrence Tabak penned a commentary in Nature addressing the lack of reproducibility in life science research. They noted that some journals, including Nature, Science, and Science Translational Medicine, had begun implementing editorial policies to encourage the detailed reporting of experimental details (7).

Nature's new "Reporting Checklist for Life Sciences Articles," for instance, reads in part: "To show that antibodies were profiled for use in the system under study (assay and species), provide a citation, catalog number and/or clone number, supplementary information or reference to an antibody validation profile (e.g., Antibodypedia, 1DegreeBio)."

Back to the bench

For researchers at the bench, the fundamental question remains: Which antibody should they choose for their particular assay? Several resources exist to help, including Antibodypedia, Linscott's Directory, and more. But with millions of reagents listed on some of these pages, selection remains a challenge.

Recently, Iorns launched a new effort to help. In collaboration with antibody distributor antibodies-online.com, the Science Exchange Independent Validation Initiative pairs vendors with independent labs that can provide third-party validation of antibody efficacy for several hundred dollars per test.

The initiative kicked off in July 2013 but has only gained traction in the past few months, says Iorns. "We've done hundreds of tests so far, and we're planning to do 10,000 this year."

Successfully validated antibodies are awarded a green check mark "badge" for the vendor's web site. "We are trying to establish this as kind of a sign of quality," says Stefan Pellenz, who runs the validation initiative at antibodies-online.
But companies gain more from the process than just a logo, he adds; it is also a marketing tool, conveying to users that their reagent vendors take quality control seriously. Furthermore, the validating labs can work with the vendor (via Science Exchange) on troubleshooting. In one case, an ELISA kit shipped with a bad dilution buffer that was masking the target protein signal. Once the lab figured that out and reported it back to the vendor, the kit was updated and subsequently passed validation "with flying colors," Pellenz says.

Antibodies that fail validation are neither flagged nor removed from the antibodies-online catalog, though the company will inform users who ask, so Pellenz recommends investing a few minutes in a call to technical support before purchasing any new antibody.

To date, 42 antibodies and ELISA kits have been approved, though many more have been tested, and web site users have flagged an additional 1,200 antibodies they would like to see tested. Science Exchange and antibodies-online are also validating a collection of key reagents on their own, "to seed the antibodies-online catalog with validated antibodies and ELISA kits to establish the IV badge," Pellenz says.

According to Pellenz, 8 of antibodies-online's 10 biggest vendors have expressed interest in the validation program, as have about three-quarters of their vendors overall. They cannot possibly test every antibody in existence; there simply are too many of them, and in theory each lot must be tested anew, a cost-prohibitive proposition. But it's a start. And given the importance of antibodies in life science research, it's a good one.

References

1. Egelhofer, T.A., et al. 2011. An assessment of histone-modification antibody quality. Nat. Struct. Mol. Biol. 18:91–93.
2. Lukinavičius, G., et al. 2013. Commercial Cdk1 antibodies recognize the centrosomal protein Cep152. Biotechniques 55:111–114.
3.
Saper, C.B. 2005. An open letter to our readers on the use of antibodies. J. Comp. Neurol. 493:477–478.
4. Saper, C.B. 2009. A guide to the perplexed on the specificity of antibodies. J. Histochem. Cytochem. 57:1–5.
5. Begley, C.G., and L.M. Ellis. 2012. Raise standards for preclinical cancer research. Nature 483:531–533.
6. Vasilevsky, N.A., et al. 2013. On the reproducibility of science: Unique identification of research resources in the biomedical literature. PeerJ 1:e148.
7. Collins, F.S., and L.A. Tabak. 2014. NIH plans to enhance reproducibility. Nature 505:612–613.
Published in print March 2014. © 2014 Author(s).
