4,358 results


Article · Open access · Brazil · National production · Peer reviewed

Patrícia Paiva da Silva, Rafael Silva Ferreira, Paulo Eduardo Teodoro, Francisco Eduardo Torres, Gilcelene Medeiros Arima, Nanci Cappi, Larissa Pereira Ribeiro Teodoro

Fertilizers are increasingly imported, a fact that makes the use of treated animal waste an economical and sustainable alternative for forage production. The objective of this work was to evaluate the effect of applying different doses of poultry biofertilizer on forage production of Brachiaria brizantha (Hochst.) Stapf cultivars. The experimental design was completely randomized, with four replications, in a factorial scheme. The first factor consisted of the cultivars ...

Topic(s): Agricultural and Food Sciences

2014 - Zeppelini Editorial | Arquivos do Instituto Biológico

Article · Open access · Peer reviewed

Yolanda Gil, Daniel Garijo, Deborah Khider, Craig A. Knoblock, Varun Ratnakar, Maximiliano Osorio, Hernán Vargas, Tam Minh Pham, Jay Pujara, Basel Shbita, Bình Dương Vũ, Yao‐Yi Chiang, Dan Feldman, Yijun Lin, Hae Jin Song, Vipin Kumar, Ankush Khandelwal, Michael Steinbach, Kshitij Tayal, Shaoming Xu, Suzanne A. Pierce, Lissa Pearson, Daniel Hardesty-Lewis, Ewa Deelman, Rafael Ferreira da Silva, Rajiv Mayani, Armen R. Kemanian, Yuning Shi, Lorne Leonard, S. D. Peckham, Maria Stoica, Kelly M. Cobourn, Zeya Zhang, Christopher Duffy, Lele Shu

Major societal and environmental challenges involve complex systems that have diverse multi-scale interacting processes. Consider, for example, how droughts and water reserves affect crop production and how agriculture and industrial needs affect water quality and availability. Preventive measures, such as delaying planting dates and adopting new agricultural practices in response to changing weather patterns, can reduce the damage caused by natural processes. Understanding how these natural and ...

Topic(s): Data Analysis with R

2021 - Association for Computing Machinery | ACM Transactions on Interactive Intelligent Systems

Review · Open access · Peer reviewed

Andrew W. Brown, Stella Aslibekyan, Dennis M. Bier, Rafael Ferreira da Silva, Adam Hoover, David M. Klurfeld, Eric Loken, Evan Mayo‐Wilson, Nir Menachemi, Greg Pavela, Patrick D. Quinn, Dale A. Schoeller, Carmen D. Tekwe, Danny Valdez, Colby J. Vorland, Leah D. Whigham, David B. Allison

To date, nutritional epidemiology has relied heavily on relatively weak methods including simple observational designs and substandard measurements. Despite low internal validity and other sources of bias, claims of causality are made commonly in this literature. Nutritional epidemiology investigations can be improved through greater scientific rigor and adherence to scientific reporting commensurate with research methods used. Some commentators advocate jettisoning nutritional epidemiology entirely, ...

Topic(s): Obesity and Health Practices

2021 - Taylor & Francis | Critical Reviews in Food Science and Nutrition

Article · Open access · Peer reviewed

Rafael Ferreira da Silva, Henri Casanova, Anne‐Cécile Orgerie, Ryan Tanaka, Ewa Deelman, Frédéric Suter

While distributed computing infrastructures can provide infrastructure-level techniques for managing energy consumption, application-level energy consumption models have also been developed to support energy-efficient scheduling and resource provisioning algorithms. In this work, we analyze the accuracy of a widely-used application-level model that has been developed and used in the context of scientific workflow executions. To this end, we profile two production scientific workflows on a distributed ...
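The abstract above does not spell out which application-level model is meant, but a common form of such models, shown here purely for illustration, treats node power as linear in CPU utilization and computes task energy as power integrated over runtime. The wattage constants below are assumed example values, not figures from the paper.

```python
P_IDLE = 90.0   # watts at 0% utilization (assumed example value)
P_MAX = 220.0   # watts at 100% utilization (assumed example value)

def node_power(utilization: float) -> float:
    """Linear power model: P(u) = P_idle + u * (P_max - P_idle)."""
    if not 0.0 <= utilization <= 1.0:
        raise ValueError("utilization must be in [0, 1]")
    return P_IDLE + utilization * (P_MAX - P_IDLE)

def task_energy_joules(utilization: float, runtime_s: float) -> float:
    """Energy consumed by one task under constant utilization."""
    return node_power(utilization) * runtime_s

# e.g. a task running for 120 s at 50% CPU utilization
energy = task_energy_joules(0.5, 120.0)
```

How well this kind of linear model matches measured consumption on real platforms is exactly the accuracy question the paper investigates.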

Topic(s): Scientific Computing and Data Management

2020 - Elsevier BV | Journal of Computational Science

Article · Brazil · National production · Peer reviewed

Rafael Ferreira e Silva, Thiago R. L. C. Paixão, Marcelo D. T. Torres, William R. de Araújo

Pseudomonas aeruginosa (PA) is an opportunistic pathogen responsible for several diseases in humans and it is one of the main causes of hospital-acquired infections exhibiting a high drug-resistance profile. Hence, the rapid detection of pathogenic infections caused by bacteria in biofluids from patients or screening contaminated surfaces/utensils is of the utmost importance to healthcare, especially in places with high clinical demand and in resource-limited settings. Herein, we report a portable, ...

Topic(s): Bacterial Identification and Susceptibility Testing

2020 - Elsevier BV | Sensors and Actuators B Chemical

Article · Open access · Peer reviewed

Henri Casanova, Rafael Ferreira da Silva, Ryan Tanaka, Suraj Pandey, Gautam Jethwani, William F. Koch, Spencer Albrecht, James Oeth, Frédéric Suter

Scientific workflows are used routinely in numerous scientific domains, and Workflow Management Systems (WMSs) have been developed to orchestrate and optimize workflow executions on distributed platforms. WMSs are complex software systems that interact with complex software infrastructures. Most WMS research and development activities rely on empirical experiments conducted with full-fledged software stacks on actual hardware platforms. These experiments, however, are limited to hardware and software ...

Topic(s): Advanced Data Storage Technologies

2020 - Elsevier BV | Future Generation Computer Systems

Article · Open access · Peer reviewed

Rafael Ferreira da Silva, Rosa Filgueira, Ewa Deelman, Erola Pairo‐Castineira, Ian M. Overton, Malcolm Atkinson

Scientific workflows have become mainstream for conducting large-scale scientific research. As a result, many workflow applications and Workflow Management Systems (WMSs) have been developed as part of the cyberinfrastructure to allow scientists to execute their applications seamlessly on a range of distributed platforms. Although the scientific community has addressed this challenge from both theoretical and practical approaches, failure prediction, detection, and recovery still raise many research ...

Topic(s): Advanced Data Storage Technologies

2019 - Elsevier BV | Future Generation Computer Systems

Article · Open access · Peer reviewed

Ewa Deelman, Karan Vahi, Mats Rynge, Rajiv Mayani, Rafael Ferreira da Silva, George Papadimitriou, Miron Livny

Since 2001, the Pegasus Workflow Management System has evolved into a robust and scalable system that automates the execution of a number of complex applications running on a variety of heterogeneous, distributed high-throughput, and high-performance computing environments. Pegasus was built on the principle of separation between the workflow description and workflow execution, providing the ability to port and adapt the workflow based on the target execution environment. Through its user-driven ...

Topic(s): Advanced Data Storage Technologies

2019 - AIP Publishing | Computing in Science & Engineering

Article · Open access · Peer reviewed

Tristan Glatard, Gregory Kiar, Tristan Aumentado-Armstrong, Natacha Beck, Pierre Bellec, R. Bernard, Axel Bonnet, Shawn T. Brown, Sorina Camarasu-Pop, Frédéric Cervenansky, Samir Das, Rafael Ferreira da Silva, Guillaume Flandin, P. Girard, Krzysztof J. Gorgolewski, Charles R.G. Guttmann, Valérie Hayot‐Sasson, Pierre-Olivier Quirion, Pierre Rioux, Marc-Étienne Rousseau, Alan C. Evans

We present Boutiques, a system to automatically publish, integrate, and execute command-line applications across computational platforms. Boutiques applications are installed through software containers described in a rich and flexible JSON language. A set of core tools facilitates the construction, validation, import, execution, and publishing of applications. Boutiques is currently supported by several distinct virtual research platforms, and it has been used to describe dozens of applications ...

Topic(s): Explainable Artificial Intelligence (XAI)

2018 - University of Oxford | GigaScience

Article · Open access · Peer reviewed

Benjamín Tovar, Rafael Ferreira da Silva, Gideon Juve, Ewa Deelman, William Allcock, Douglas Thain, Miron Livny

The user of a computing facility must make a critical decision when submitting jobs for execution: how many resources (such as cores, memory, and disk) should be requested for each job? If the request is too small, the job may fail due to resource exhaustion; if the request is too large, the job may succeed, but resources will be wasted. This decision is especially important when running hundreds of thousands of jobs in a high throughput workflow, which may exhibit complex, long tailed distributions ...
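The trade-off described above can be made concrete with a small sketch. The names, the percentile-based first guess, and the doubling retry policy below are illustrative assumptions, not the algorithm from the paper:

```python
def first_request(observed_peaks_mb, quantile=0.95):
    """Pick the q-th percentile of previously observed peak memory
    usages as the first resource request."""
    peaks = sorted(observed_peaks_mb)
    idx = min(int(quantile * len(peaks)), len(peaks) - 1)
    return peaks[idx]

def run_with_retries(peak_mb, request_mb, limit_mb, growth=2.0):
    """Retry with a larger request after each exhaustion failure until
    the job fits or the machine's capacity is reached.
    Returns (attempts, final_request)."""
    attempts = 1
    while request_mb < peak_mb:              # job fails: resource exhausted
        request_mb = min(request_mb * growth, limit_mb)
        attempts += 1
        if request_mb >= limit_mb:           # cannot grow any further
            break
    return attempts, request_mb
```

Each retry wastes the work of the failed attempt, while over-requesting up front wastes idle resources on every job — which is why, at the scale of hundreds of thousands of jobs, the first-request policy matters so much.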

Topic(s): Scientific Computing and Data Management

2017 - Institute of Electrical and Electronics Engineers | IEEE Transactions on Parallel and Distributed Systems

Article · Open access · Peer reviewed

Rafael Ferreira da Silva, Rosa Filgueira, Ilia Pietri, Jiang Ming, Rizos Sakellariou, Ewa Deelman

Automation of the execution of computational tasks is at the heart of improving scientific productivity. Over the last years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational ...

Topic(s): Advanced Data Storage Technologies

2017 - Elsevier BV | Future Generation Computer Systems

Article · Open access · Peer reviewed

Idafen Santana-Pérez, Rafael Ferreira da Silva, Mats Rynge, Ewa Deelman, Marı́a S. Pérez, Óscar Corcho

In the past decades, one of the most common forms of addressing reproducibility in scientific workflow-based computational science has consisted of tracking the provenance of the produced and published results. Such provenance allows inspecting intermediate and final results, improves understanding, and permits replaying a workflow execution. Nevertheless, this approach does not provide any means for capturing and sharing the very valuable knowledge about the experimental equipment of a computational ...

Topic(s): Research Data Management Practices

2016 - Elsevier BV | Future Generation Computer Systems

Article · Open access · Peer reviewed

Stephan Schlagkamp, Rafael Ferreira da Silva, Ewa Deelman, Uwe Schwiegelshohn

In this paper, we investigate the differences and similarities in user job submission behavior in High Performance Computing (HPC) and High Throughput Computing (HTC). We consider job submission behavior in terms of parallel batch-wise submissions, as well as delays and pauses in job submission. Our findings show that modeling user-based HTC job submission behavior requires knowledge of the underlying bags of tasks, which is often unavailable. Furthermore, we find evidence that subsequent job submission ...

Topic(s): Parallel Computing and Optimization Techniques

2016 - Elsevier BV | Procedia Computer Science

Article · Peer reviewed

Ewa Deelman, Karan Vahi, Mats Rynge, Gideon Juve, Rajiv Mayani, Rafael Ferreira da Silva

The Pegasus Workflow Management System maps abstract, resource-independent workflow descriptions onto distributed computing resources. As a result of this planning process, Pegasus workflows are portable across different infrastructures, optimizable for performance and efficiency, and automatically map to many different storage systems and data flows. This approach makes Pegasus a powerful solution for executing scientific workflows in the cloud.

Topic(s): Research Data Management Practices

2016 - IEEE Computer Society | IEEE Internet Computing

Article · Open access · Peer reviewed

James Howison, Ewa Deelman, Michael McLennan, Rafael Ferreira da Silva, James D. Herbsleb

Software is increasingly important to the scientific enterprise, and science-funding agencies are increasingly funding software work. Accordingly, many different participants need insight into how to understand the relationship between software, its development, its use, and its scientific impact. In this article, we draw on interviews and participant observation to describe the information needs of domain scientists, software component producers, infrastructure providers, and ecosystem stewards, ...

Topic(s): Distributed and Parallel Computing Systems

2015 - Oxford University Press | Research Evaluation

Article · Peer reviewed

Rafael Ferreira da Silva, Gideon Juve, Mats Rynge, Ewa Deelman, Miron Livny

Estimates of task runtime, disk space usage, and memory consumption, are commonly used by scheduling and resource provisioning algorithms to support efficient and reliable workflow executions. Such algorithms often assume that accurate estimates are available, but such estimates are difficult to generate in practice. In this work, we first profile five real scientific workflows, collecting fine-grained information such as process I/O, runtime, memory usage, and CPU utilization. We then propose a ...
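The method the truncated abstract proposes is not visible here, but the general idea of generating estimates from profiling data can be illustrated with a least-squares fit relating input size to observed runtime. The profile numbers below are made up for the example:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

# Profiled (input size in MB, runtime in s) pairs — illustrative numbers.
profile = [(10, 22), (20, 41), (40, 83), (80, 160)]
a, b = fit_line([p[0] for p in profile], [p[1] for p in profile])

def predict_runtime(input_mb):
    """Estimate the runtime of an unseen task from its input size."""
    return a * input_mb + b
```

A scheduler could feed such predictions into its provisioning decisions instead of assuming perfect estimates are available.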

Topic(s): Cloud Computing and Resource Management

2015 - World Scientific | Parallel Processing Letters

Article · Open access · Peer reviewed

Tristan Glatard, Lindsay B. Lewis, Rafael Ferreira da Silva, Reza Adalat, Natacha Beck, Claude Lepage, Pierre Rioux, Marc-Étienne Rousseau, Tarek Sherif, Ewa Deelman, Najmeh Khalili‐Mahani, Alan C. Evans

Neuroimaging pipelines are known to generate different results depending on the computing platform where they are compiled and executed. We quantify these differences for brain tissue classification, fMRI analysis, and cortical thickness (CT) extraction, using three of the main neuroimaging packages (FSL, Freesurfer and CIVET) and different versions of GNU/Linux. We also identify some causes of these differences using library and system call interception. We find that these packages use mathematical ...

Topic(s): Neural dynamics and brain function

2015 - Frontiers Media | Frontiers in Neuroinformatics

Article · Open access · Peer reviewed

Weiwei Chen, Rafael Ferreira da Silva, Ewa Deelman, Thomas Fahringer

Task clustering has proven to be an effective method to reduce execution overhead and to improve the computational granularity of scientific workflow tasks executing on distributed resources. However, a job composed of multiple tasks may have a higher risk of suffering from failures than a single task job. In this paper, we conduct a theoretical analysis of the impact of transient failures on the runtime performance of scientific workflow executions. We propose a general task failure modeling framework ...

Topic(s): Cloud Computing and Resource Management

2015 - Institute of Electrical and Electronics Engineers | IEEE Transactions on Cloud Computing

Article · Open access · Brazil · National production

Alexson Filgueiras Dutra, Alberto Soares de Melo, Luanna Maria Beserra Filgueiras, Állisson Rafael Ferreira da Silva, I.M. Oliveira, Marcos Eric Barbosa Brito

The objective of this study was to evaluate gas exchange and yield components of cowpea cultivars under different levels of water stress in the semiarid region of Paraíba. The experimental design was a randomized block in a 3 x 4 factorial, with three replications. Factor A comprised three cowpea cultivars (BRS Guariba, BR 17 Gurguéia, and BRS Marataoã) and factor B four irrigation levels defined as fractions of the reference evapotranspiration (40, 60, 80 and 100% of ETo). ...

Topic(s): Legume Nitrogen Fixing Symbiosis

2015 - UNIVERSIDADE FEDERAL RURAL DE PERNAMBUCO | Revista Brasileira de Ciências Agrárias - Brazilian Journal of Agricultural Sciences

Article · Open access · Peer reviewed

Ewa Deelman, C.D. Carothers, Anirban Mandal, Brian Tierney, Jeffrey S. Vetter, Ilya Baldin, Claris Castillo, Gideon Juve, Dariusz Król, V. E. Lynch, Ben Mayer, Jeremy Meredith, Thomas Proffen, Paul Ruth, Rafael Ferreira da Silva

Computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Thus, workflow management systems are absolutely necessary to ...

Topic(s): Simulation Techniques and Applications

2015 - SAGE Publishing | The International Journal of High Performance Computing Applications

Article · Open access · Peer reviewed

Ewa Deelman, Karan Vahi, Gideon Juve, Mats Rynge, S. Callaghan, P. J. Maechling, Rajiv Mayani, Weiwei Chen, Rafael Ferreira da Silva, Miron Livny, Kent Wenger

Modern science often requires the execution of large-scale, multi-stage simulation and data analysis pipelines to enable the study of complex systems. The amount of computation and data involved in these pipelines requires scalable workflow management systems that are able to reliably and efficiently coordinate and automate data movement and task execution on distributed computational resources: campus clusters, national cyberinfrastructures, and commercial and academic clouds. This paper describes ...

Topic(s): Advanced Data Storage Technologies

2014 - Elsevier BV | Future Generation Computer Systems

Article · Peer reviewed

Weiwei Chen, Rafael Ferreira da Silva, Ewa Deelman, Rizos Sakellariou

Scientific workflows can be composed of many fine computational granularity tasks. The runtime of these tasks may be shorter than the duration of system overheads, for example, when using multiple resources of a cloud infrastructure. Task clustering is a runtime optimization technique that merges multiple short running tasks into a single job such that the scheduling overhead is reduced and the overall runtime performance is improved. However, existing task clustering strategies only provide a coarse- ...
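Task clustering as described can be sketched as a greedy packing of short tasks into jobs of a target granularity (an illustrative simplification, not the clustering strategies the paper actually proposes). The per-job overhead value below is an assumed example:

```python
def cluster_tasks(task_runtimes, target_runtime):
    """Greedily group task runtimes into clusters whose total runtime
    reaches target_runtime, so per-job overhead is amortized."""
    clusters, current, total = [], [], 0.0
    for rt in task_runtimes:
        current.append(rt)
        total += rt
        if total >= target_runtime:
            clusters.append(current)
            current, total = [], 0.0
    if current:                      # leftover short tasks form a final job
        clusters.append(current)
    return clusters

def total_overhead(n_jobs, per_job_overhead=5.0):
    """Total scheduling overhead: one fixed cost per submitted job."""
    return n_jobs * per_job_overhead

tasks = [2.0] * 10                   # ten 2-second tasks
clusters = cluster_tasks(tasks, target_runtime=10.0)
# two 10-second jobs instead of ten 2-second jobs
```

With a 5-second overhead per job, the ten unclustered tasks would pay 50 s of overhead for 20 s of useful work; the two clustered jobs pay only 10 s — the granularity improvement the abstract refers to.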

Topic(s): Cloud Computing and Resource Management

2014 - Elsevier BV | Future Generation Computer Systems

Article · Peer reviewed

Rafael Ferreira da Silva, Tristan Glatard, Frédéric Desprez

Distributed computing infrastructures are commonly used through scientific gateways, but operating these gateways requires important human intervention to handle operational incidents. This paper presents a self-healing process that quantifies incident degrees of workflow activities from metrics measuring long-tail effect, application efficiency, data transfer issues, and site-specific problems. These metrics are simple enough to be computed online and they make little assumptions on the application ...

Topic(s): Cloud Computing and Resource Management

2013 - Elsevier BV | Future Generation Computer Systems

Book chapter · Peer reviewed

Rafael Ferreira da Silva, Tristan Glatard

Archives of distributed workloads acquired at the infrastructure level reputably lack information about users and application-level middleware. Science gateways provide consistent access points to the infrastructure, and therefore are an interesting information source to cope with this issue. In this paper, we describe a workload archive acquired at the science-gateway level, and we show its added value on several case studies related to user accounting, pilot jobs, fine-grained task analysis, bag ...

Topic(s): Advanced Data Storage Technologies

2013 - Springer Science+Business Media | Lecture notes in computer science

Article · Open access

Tristan Glatard, Carole Lartizien, Bernard Gibaud, Rafael Ferreira da Silva, Germain Forestier, Frédéric Cervenansky, Martino Alessandrini, H. Benoit‐Cattin, Olivier Bernard, Sorina Camarasu-Pop, Nadia Cerezo, Patrick Clarysse, Alban Gaignard, Patrick Hugonnard, Hervé Liebgott, Simon Marache, A. Marion, Johan Montagnat, Joachim Tabary, Denis Friboulet

This paper presents the Virtual Imaging Platform (VIP), a platform accessible at http://vip.creatis.insa-lyon.fr to facilitate the sharing of object models and medical image simulators, and to provide access to distributed computing and storage resources. A complete overview is presented, describing the ontologies designed to share models in a common repository, the workflow template used to integrate simulators, and the tools and strategies used to exploit computing and storage resources. Simulation ...

Topic(s): Scientific Computing and Data Management

2012 - Institute of Electrical and Electronics Engineers | IEEE Transactions on Medical Imaging

Article · Peer reviewed

Sorina Camarasu-Pop, Tristan Glatard, Rafael Ferreira da Silva, P. Gueth, David Sarrut, Hugues Benoit-Cattin

This paper introduces an end-to-end framework for efficient computing and merging of Monte Carlo simulations on heterogeneous distributed systems. Simulations are parallelized using a dynamic load-balancing approach and multiple parallel mergers. Checkpointing is used to improve reliability and to enable incremental results merging from partial results. A model is proposed to analyze the behavior of the proposed framework and help tune its parameters. Experimental results obtained on a production ...

Topic(s): Simulation Techniques and Applications

2012 - Elsevier BV | Future Generation Computer Systems

Article · Open access · Peer reviewed

Ewa Deelman, Rafael Ferreira da Silva, Karan Vahi, Mats Rynge, Rajiv Mayani, Ryan Tanaka, Wendy Whitcup, Miron Livny

Translational research (TR) has been extensively used in the health science domain, where results from laboratory research are translated to human studies and where evidence-based practices are adopted in real-world settings to reach broad communities. In computer science, much research stops at the result publication and dissemination stage without moving to the evaluation in real settings at scale and feeding the gained knowledge back to research. Additionally, there is a lack of steady funding ...

Topic(s): Genetics, Bioinformatics, and Biomedical Research

2020 - Elsevier BV | Journal of Computational Science