Book Chapter (National Production)

Replicated Studies: Building a Body of Knowledge about Software Reading Techniques

2003; Language: English

10.1142/9789812795588_0002

ISSN

1793-0995

Authors

Forrest Shull, Jeffrey C. Carver, Guilherme Horta Travassos, José Carlos Maldonado, Reidar Conradi, Victor R. Basili

Topic(s)

Model-Driven Software Engineering Techniques

Abstract

Series on Software Engineering and Knowledge Engineering: Lecture Notes on Empirical Software Engineering, pp. 39-84 (2003)

Forrest Shull (Fraunhofer Center—Maryland, USA), Jeffrey Carver (Dept. of Computer Science, University of Maryland, College Park, USA), Guilherme H. Travassos (COPPE-Systems Engineering and Computer Science Program, Federal University of Rio de Janeiro, Brazil), José Carlos Maldonado (Dept. of Computer Science, University of São Paulo at São Carlos, Brazil), Reidar Conradi (Norwegian University of Science and Technology, Norway), and Victor R. Basili (Fraunhofer Center—Maryland and Dept. of Computer Science, University of Maryland, College Park, USA)

Cited by: 13 (Source: Crossref)

Abstract: An empirical approach to software process improvement calls for guiding process development based on empirical study of software technologies. This approach helps direct the evolution of new technologies, by studying the problems developers have applying the technology in practice, and validates mature technologies, by providing an indication of the expected benefit and the conditions under which they apply. Thus, a variety of different empirical studies are necessary for a given technology over time, with evolving goals and hypotheses. What we as a field know about a software development technology is therefore never based on a single study; rather, a "body of knowledge" must be accumulated out of many individual studies.
Multiple studies also help mitigate the weaknesses inherent in any empirical study by requiring the confirmation or refutation of the original findings by means of independent replications, which can address the original threats to validity although they will invariably suffer from threats of their own. Since formal methods for abstracting results from independent studies (such as meta-analysis) have not proven feasible, we advocate a more informal approach to building up such bodies of knowledge. In this approach, replication is used to run families of studies that are designed a priori to be related. Because new studies are based upon the designs of existing ones, it becomes easier to identify the context variables that change from one study to another. By comparing and contrasting the results of studies in the same family, researchers can reason about which context variables have changed and hypothesize what their likely effects on the outcome have been. As more studies become part of the family, hypotheses can be refined, or supported with more confidence by additional data. By using this informal approach, we can work toward producing a robust description of a technology's effects, specifying hypotheses at varying levels of confidence. In this chapter, we first present a more detailed discussion of various types of replications and why they are necessary for allowing the variation of important factors in a controlled way to study their effects on the technology. For each type of replication identified, we provide an example of this informal approach and how it has been used in the evolution of a particular software development technology, software reading techniques. We present a brief description of each set of replications, focusing on the lessons learned about the reading technology based on the results of the original study and the replication together. 
We will also discuss what we learned about the technologies from the entire series of studies, as well as what we learned about reading techniques in general. We will indicate which of these lessons were due directly to the process of replication and could not have been learned through a single study. Based on these examples, this chapter concludes with lessons learned about replicating studies.

Keywords: empirical software engineering; experiment replication; software reading techniques; perspective-based reading; object-oriented reading techniques; experimentation process

Reference(s)