Genome (re‐)annotation and open‐source annotation pipelines
2010; Wiley; Volume: 3; Issue: 4; Language: English
10.1111/j.1751-7915.2010.00191.x
ISSN: 1751-7915
Authors: Roland J. Siezen, Sacha A. F. T. van Hijum
Topic(s): Genetics, Bioinformatics, and Biomedical Research
Abstract

These days, more and more scientists are diving into genome sequencing projects, urged on by fast and cheap next-generation sequencing technologies. Only to discover that they are quickly drowning in an unfathomable sea of sequence data and gasping for help from experts to make biological sense of this ensuing disaster. Bioinformaticians and genome annotators to the rescue!

Microbial genome annotation primarily involves identifying the genes (or actually the open reading frames: ORFs) encrypted in the DNA sequence and deducing the functionality of the encoded protein and RNA products (Fig. 1). First, a gene finder such as Glimmer (Delcher et al., 1999) or GeneMark (Lukashin and Borodovsky, 1998) is applied to the genome DNA sequence, producing a set of predicted protein-coding genes. These programs are quite accurate, though not perfect. The next step is to take the set of predictions and search for hits against one or more protein and/or protein domain databases using blast (Altschul et al., 1997), HMMer (Eddy, 1998) or other programs. For each gene that has a significant match, the blast output, together with the annotation of the hit, can be used to assign a name and function to the protein. The accuracy of this step depends not only on the annotation software, but also on the quality of the annotations already present in the reference database.

Fig. 1. A generalised flow chart of genome annotation. Statistical gene prediction: use of methods such as GeneMark or Glimmer to predict protein-coding genes. General database search: searching sequence databases (typically NCBI NR) for sequence similarity, usually using blast. Specialized database search: searching domain databases (such as Pfam, SMART and CDD) for conserved domains; genome-oriented databases (such as COGs) for identification of orthologous relationships and refined functional prediction; metabolic databases (such as KEGG) for metabolic pathway reconstruction; and other database searches. Prediction of structural features: prediction of signal peptides, transmembrane segments, coiled-coil domains and other features of putative proteins.

Genome sequences deposited in the NCBI/GenBank, EMBL and DDBJ databases (which mirror each other) are annotated by the submitting groups, who each use their own methods, criteria and degree of thoroughness. This leads to large diversity in annotation completeness and accuracy. Many of the first genomes published had very limited or no functional annotation, simply because there was very little genomic information in these reference databases to compare with. Most public genome annotation remains static for years, and many annotations have never been changed since their initial publication. Over the years, annotation updates may have been maintained by the submitters, but they are generally only stored in local databases such as GenProtEC/EcoGene for Escherichia coli K12 (Rudd, 2000), Genolist/Bactilist for Bacillus subtilis 168 (Lechat et al., 2008) and SGD for Saccharomyces cerevisiae (Christie et al., 2004). Since gene functional annotation relies heavily on sequence similarity searches against protein sequence databases, automatically annotated entries based on blast hits to NCBI databases can quickly become outdated. In the meantime, downstream sciences such as comparative genomics, proteomics, transcriptomics and metabolomics have rapidly increased our knowledge of many gene products.
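To make the similarity-search step described above concrete, the minimal sketch below (illustrative only, not the workflow of any particular pipeline) runs blastp on a set of predicted proteins and copies the product name of the best hit above an E-value cut-off; whatever errors are present in the reference annotation are copied along with it. NCBI BLAST+ is assumed to be installed, and the file names, database name and cut-off are placeholders chosen for the example.

```python
"""Minimal sketch of annotation transfer by similarity search.
Assumes NCBI BLAST+ is installed, that 'predicted_proteins.faa' holds
proteins from a gene finder (e.g. Glimmer or GeneMark output), and that
a formatted protein database 'refdb' already exists."""

import csv
import subprocess

EVALUE_CUTOFF = 1e-10   # illustrative threshold; real pipelines tune this


def run_blastp(query_faa: str, db: str, out_tsv: str) -> None:
    # Tabular output including the subject title, so its annotation can be reused.
    subprocess.run(
        ["blastp", "-query", query_faa, "-db", db,
         "-evalue", str(EVALUE_CUTOFF), "-max_target_seqs", "5",
         "-outfmt", "6 qseqid sseqid pident evalue stitle",
         "-out", out_tsv],
        check=True,
    )


def best_hit_products(out_tsv: str) -> dict:
    """Keep the first (best-scoring) hit per query and use its title as a
    provisional product name -- the weak point discussed in the text:
    the transferred name is only as good as the reference annotation."""
    products = {}
    with open(out_tsv) as handle:
        for qseqid, sseqid, pident, evalue, stitle in csv.reader(handle, delimiter="\t"):
            products.setdefault(qseqid, stitle)
    return products


if __name__ == "__main__":
    run_blastp("predicted_proteins.faa", "refdb", "blastp_hits.tsv")
    for gene, product in best_hit_products("blastp_hits.tsv").items():
        print(f"{gene}\t{product}")
```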
It is critical, therefore, that genome annotations are frequently updated if the information they contain is to remain accurate, relevant and useful. Re-annotation is defined as the process of updating a previously annotated genome. Automated annotation pipelines combine many different algorithms for gene calling and protein function analysis. In some cases this is followed by manual expert curation (albeit less and less these days), which involves incorporating experimental evidence and applying more sophisticated bioinformatics analyses, such as operon prediction, comparative genome analysis, regulatory motif prediction, metabolic pathway reconstruction and a lot of common (biochemical) sense. Automated methods save time and resources, but do not incorporate the full information available to expert curators, leading to incomplete or even false designations. By contrast, manual annotation is costly and time-consuming. However, manual re-annotation of genomes can significantly reduce the propagation of annotation errors and thus reduce the time spent on flawed research. Hence, there is a need for a research community-wide review and regular update of genome interpretations.

Re-annotations can be published in the literature or made available on websites. Examples of published re-annotated genomes are unfortunately rare compared with the rapidly increasing number of sequenced genomes. A first overview of re-annotated genomes was made by Ouzounis and Karp (2002). In Table 1 we list some more recently re-annotated microbial genomes. In the most recent cases, next-generation technologies have been used for re-sequencing of the original strain prior to re-annotation. Exemplary is the re-sequencing and re-annotation of B. subtilis 168 (Barbe et al., 2009), published 12 years after the original genome paper (Kunst et al., 1997). About 2000 sequence differences were revealed, mainly single nucleotide polymorphisms (SNPs), allowing correction of some frameshifts and variant amino acid residues prior to re-annotation (Table 1).

Many (re-)annotation databases exist (see Table 2 for an overview), of which a few are general: DDBJ, EMBL, Pedant and NCBI GenBank. The ERGO resource is the only commercial database. Some of these databases contain manually curated and standardized gene functions (e.g. ERGO, RefSeq and Genome Reviews). Many contain gene functions compiled from various sources (e.g. GIB, GOLD, CMR, Genome Reviews, IMG, RefSeq, the SEED and ERGO). Many also make use of annotation information from InterPro protein domains, the Gene Ontology (GO; a controlled vocabulary of cellular functions) and TIGRFAMs (also part of Manatee, used in the IGS/JCVI annotation services). The pseudogene.org database can be used to determine whether a gene in a given genome could be a pseudogene (non-functional).

Microbes adapt to their environment by modulating parts of their metabolic and gene regulatory networks. Metabolic networks consist of gene products (enzymes) that catalyse chemical reactions in which metabolic compounds are (re)used. The Enzyme Commission (EC) number is a way of classifying enzyme activity, using a hierarchically organized numerical nomenclature that indicates the catalysed chemical reaction (ExPASy). Both the KEGG and MetaCyc databases describe the relation of gene products to metabolic pathways.
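As a rough illustration of how EC numbers support such pathway-level views, the sketch below extracts EC numbers from free-text product annotations and bins genes by top-level EC class, a crude first step towards mapping gene products onto KEGG or MetaCyc pathways. The two-column (locus tag, product) input layout and the file name are assumptions made for the example, not a format prescribed by either database.

```python
"""Illustrative sketch: pull EC numbers out of free-text product
annotations and bin genes by top-level EC class. Input is assumed to be
a tab-separated file 'annotations.tsv' with locus_tag and product columns."""

import re
from collections import defaultdict

# EC numbers look like 1.2.3.4; partial assignments use '-' for unknown levels.
EC_PATTERN = re.compile(r"\b(\d+)\.(?:\d+|-)\.(?:\d+|-)\.(?:\d+|-)")

EC_CLASSES = {
    "1": "Oxidoreductases", "2": "Transferases", "3": "Hydrolases",
    "4": "Lyases", "5": "Isomerases", "6": "Ligases",
}


def genes_by_ec_class(annotation_tsv: str) -> dict:
    genes_per_class = defaultdict(list)
    with open(annotation_tsv) as handle:
        for line in handle:
            if "\t" not in line:
                continue  # skip malformed lines in this simple sketch
            locus_tag, product = line.rstrip("\n").split("\t", 1)
            match = EC_PATTERN.search(product)
            if match:
                ec_class = EC_CLASSES.get(match.group(1), "Unknown")
                genes_per_class[ec_class].append(locus_tag)
    return genes_per_class


if __name__ == "__main__":
    for ec_class, loci in sorted(genes_by_ec_class("annotations.tsv").items()):
        print(f"{ec_class}: {len(loci)} genes")
```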
In addition to (curated) annotation information, a few databases also offer bioinformatics and/or visualisation tools for comparative genomics, e.g. MOSAIC, CMR, the SEED, ERGO, GIB, xBASE, MicrobesOnline and BacMap.

Many of the afore-mentioned databases contain annotation information that is generated by gene annotation pipelines. Table 3 lists annotation pipelines that are either offered as a service or that can be downloaded and installed locally. Locally running pipelines (AGMIAL, DIYA, Restauro-G, GenVar, SABIA, MAGPIE and GenDB) have the advantage that data can be kept confidential and that the annotation process runs on local hardware, ensuring reproducible annotation times. On-line services (IMG, JCVI, IGS, RAST, xBASE, BASys) have the advantage of simplicity and little time investment. Curation of the annotation results requires constant user interaction to view the genes in the context of different annotation information. The JCVI (formerly known as TIGR) and IGS services both use the Manatee pipeline, which also uses TIGRFAMs to detect functional domains in protein sequences. They offer the user the possibility to view and alter annotations in the respective databases they use. Similar functionality is offered by MaGe (which uses the MicroScope database) (Fig. 2), IMG-ER (which uses the IMG data model as its basis) and RAST (based on the SEED). The commercially available Pedant-Pro pipeline is based on the Pedant annotation pipeline with various enhancements. The usability of the MiGAP and ATCUG annotation pipelines could not be judged by us, owing to unavailable software (ATCUG) or a website in Japanese (MiGAP). The Taverna workflow system allows different web services to be linked, and has the advantage that it can be adapted by experienced bioinformaticians. Assigning genes to metabolic pathways can be done using the KAAS service (Table 3), which annotates gene products by assigning EC numbers based on amino acid similarity to gene products with known EC numbers.

Fig. 2. Simplified prokaryotic genome database (PkGDB) relational model composed of three main components: sequence and annotation data (in green), annotation management (in blue) and functional predictions (in purple). Sequences and annotations come from public databanks, sequencing centres and specialized databases focused on model organisms. For genomes of interest, a (re-)annotation process is performed using AMIGene (Bocs et al., 2003) and leads to the creation of new 'Genomic Objects'. Each 'Genomic Object' and its associated functional prediction results are stored in the PkGDB. The database architecture supports integration of automatic and manual annotations, and management of a history of annotations and sequence updates. Reproduced from Vallenet and colleagues (2006).

Once gene annotations have been determined, they can be checked for inaccurate or missing gene annotations using MICheck. Hsiao and colleagues (2010) describe an algorithm for policing gene annotations, which looks for genes with poor genomic correlations with their network neighbours and that are therefore likely to represent annotation errors; they applied their approach to identify misannotations in B. subtilis. The generic visualisation tool Artemis can be used for manual editing of annotation (Rutherford et al., 2000). Prior to submission of a DNA sequence and its annotation to the NCBI genome database, the NCBI Sequin service (http://www.ncbi.nlm.nih.gov/projects/Sequin/) also facilitates checking gene annotations, making sure that certain standards and formats are used.
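In the same spirit as these checking tools, though far simpler than MICheck or Sequin, the sketch below scans a GenBank file for CDS features with missing product names, lengths that are not multiples of three, or internal stop codons, which often point at frameshifts or questionable gene calls. It assumes Biopython is installed; the file name 'genome.gbk' is a placeholder.

```python
"""Rough pre-submission consistency check over a GenBank file (a simple
illustration, not a reimplementation of MICheck or Sequin).
Requires Biopython; 'genome.gbk' is an assumed input file name."""

from Bio import SeqIO


def check_cds_features(genbank_path: str) -> None:
    for record in SeqIO.parse(genbank_path, "genbank"):
        for feature in record.features:
            if feature.type != "CDS":
                continue
            locus = feature.qualifiers.get("locus_tag", ["?"])[0]
            nt_seq = feature.extract(record.seq)  # strand-aware CDS sequence

            if "product" not in feature.qualifiers:
                print(f"{locus}: missing /product qualifier")
            if len(nt_seq) % 3 != 0:
                print(f"{locus}: length {len(nt_seq)} is not a multiple of 3")
                continue
            # Bacterial translation table 11; drop the terminal stop, then any
            # '*' left inside the protein suggests a frameshift or wrong call.
            protein = str(nt_seq.translate(table=11)).rstrip("*")
            if "*" in protein:
                print(f"{locus}: internal stop codon(s) -- possible pseudogene or frameshift")


if __name__ == "__main__":
    check_cds_features("genome.gbk")
```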
Genome annotations are accumulating rapidly and most genome centres depend heavily on automated annotation systems, but their output has rarely been compared systematically to determine accuracy and inherent errors. Bakke and colleagues (2009) compared the automatic genome annotation services IMG, RAST and JCVI, and found considerable differences in gene calls (Fig. 3), features and ease of use. Each service produced multiple unique start sites and gene product calls, as well as mistakes. They argue that the most efficient way to substantially decrease annotation error is to compare results from multiple annotation services. Aggregating data and displaying discrepancies between annotations should resolve many possible errors, including false positives, uncalled genes, genes without a predicted function, incorrectly predicted functions and incorrect start sites. To accomplish multi-annotation comparison, information must be interchangeable between annotation services, and software should be built to connect annotations in a manner that promotes easy human review. Tools that cross-query annotations and provide side-by-side comparisons, including genomic context and multiple functional annotations, will aid the user and decrease the amount of time required to make an accurate correction, i.e. decrease manual curation time.

Fig. 3. Venn diagram comparing gene predictions in Halorhabdus utahensis made with the RAST, IMG and JCVI automated annotation services. The diagram shows the number of predicted protein-coding genes that share start and stop sites with the other annotations; overlapping regions indicate genes with exact matches between annotations. Adapted from Bakke and colleagues (2009).

Clearly, standardization of ORF calling and annotation (and re-annotation of published genomes) is of the utmost importance. A few standard operating procedures for genome annotation have already been proposed in recent years (Angiuoli et al., 2008; Mavromatis et al., 2009). Still, we are a long way from achieving that goal, and it is unlikely we will ever be able to weed out all the incorrect gene calls and inherited annotations that are abundant in present genome databases. The contents of NCBI GenBank can only be changed by the original submitters, and that rarely happens. So be aware that a blast search against GenBank may retrieve very outdated or incorrectly inherited annotations. It is wiser to blast against curated genome databases, but there are so many to choose from (Table 2), and we clearly need tools to compare annotations from different curated databases.

Re-annotation of genomes is a never-ending process, and any current genome annotation is only a snapshot. New information emerges almost every day from re-sequencing, experimentation (e.g. transcriptomics, proteomics, phenotypic tests, gene knock-outs), comparative genomics, and so on. Salzberg (2007) has proposed that a 'genome wiki' might provide just the solution we need for genome annotation. A wiki would allow the community of experts to work out the best name for each gene, to indicate uncertainty where appropriate, to include experimental evidence, to discuss alternative annotations and to continuously update annotations. Although wikis will not (and should not) supplant well-curated model-organism databases, for the majority of species they might represent our best chance of creating accurate, up-to-date genome annotation. And if you are really serious about updating your annotations, don't forget to re-sequence your original strains using next-generation sequencing, at least if you can still find them in your freezer!
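As a closing illustration of the multi-service comparison advocated above, the sketch below treats each service's gene calls as a set of (strand, stop-coordinate) pairs, counts the calls shared between pairs of services, and reports how many of those agree on the start site, in the spirit of the comparison by Bakke and colleagues (2009). The three-column input format and the file names are assumptions for the example, not the export format of RAST, IMG or JCVI.

```python
"""Minimal sketch of cross-service gene-call comparison.
Assumes each service's calls are exported to '<service>_calls.tsv' with
three whitespace-separated columns: start, stop, strand."""

from itertools import combinations


def load_calls(path: str) -> dict:
    """Map (strand, stop) -> start. Strand plus stop coordinate identifies a
    gene call; the start site is where services most often disagree."""
    calls = {}
    with open(path) as handle:
        for line in handle:
            start, stop, strand = line.split()
            calls[(strand, int(stop))] = int(start)
    return calls


def compare(services: dict) -> None:
    for (name_a, calls_a), (name_b, calls_b) in combinations(services.items(), 2):
        shared = calls_a.keys() & calls_b.keys()
        same_start = sum(1 for key in shared if calls_a[key] == calls_b[key])
        print(f"{name_a} vs {name_b}: {len(shared)} shared gene calls, "
              f"{same_start} with identical start sites, "
              f"{len(calls_a) - len(shared)} unique to {name_a}, "
              f"{len(calls_b) - len(shared)} unique to {name_b}")


if __name__ == "__main__":
    services = {name: load_calls(f"{name}_calls.tsv") for name in ("RAST", "IMG", "JCVI")}
    compare(services)
```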
This project was carried out within the research programmes of the Kluyver Centre for Genomics of Industrial Fermentation and the Netherlands Bioinformatics Centre, which are part of the Netherlands Genomics Initiative/Netherlands Organization for Scientific Research.