142,474 results


Monograph Open access

William Thomas Taylor,

Provenance: Gift of Albert Brecken.; The Burndy Library Collection at the Huntington Library. Book.

1909 - Gale Group | NCCO-STM 1of2

Article Open access Peer-reviewed

Jun Xue, Bingyi Wang, Hongchao Ji, Weihua Li,

... method to another. Results Therefore, we present RT-Transformer, a novel deep neural network model coupled with graph attention network and 1D-Transformer, which can predict retention times under any chromatographic ... obtain a pre-trained model by training RT-Transformer on the large small molecule retention time dataset ... no samples were removed. The pre-trained RT-Transformer was further transferred to 5 datasets corresponding to ... fine-tuned. According to the experimental results, RT-Transformer achieves competitive performance compared to state-of-the- ...

Topic(s): Analytical Chemistry and Chromatography

2024 - Oxford University Press | Bioinformatics

Monograph Open access

Robert Willsher Weekes,

Publisher's Advertisements on Last [16] p. Provenance: J. M. Sutherland Autograph on Half Title Page and Stamps on Title Page and Half-Title Page.; Publisher' ...

1893 - Gale Group | NCCO-STM 1of2

Article Open access Peer-reviewed

El Amine Cherrat, Iordanis Kerenidis, Natansh Mathur, Jonas Landman, Martin Strahm, Yun Yvonna Li,

In this work, quantum transformers are designed and analysed in detail by extending the state-of-the-art classical transformer neural network architectures known to be very performant ... neural layers, we introduce three types of quantum transformers for training and inference, including a quantum transformer based on compound matrices, which guarantees a theoretical ... on the spectrum between closely following the classical transformers and exhibiting more quantum characteristics. As building blocks ...

Topic(s): Quantum-Dot Cellular Automata

2024 - Verein zur Förderung des Open Access Publizierens in den Quantenwissenschaften | Quantum

Monograph Open access

Gisbert Kapp,

Publication Date from Preface; Includes Index. The Burndy Library Collection at the Huntington Library. Book.

1893 - Gale Group | NCCO SciTechMed 2 of 2

Article Peer-reviewed

Jinjie Fang, Linshan Yang, Xiaohu Wen, Haijiao Yu, Weide Li, Jan Adamowski, Rahim Barzegar,

... In this study, we introduce an MVMD-ensembled Transformer model (MVMD-Transformer). This model employs the MVMD technique, which allows ... and associated variables. During the forecasting phase, the Transformer component of the MVMD-Transformer model establishes connections among streamflow and other influencing ... mode. We tested the effectiveness of the MVMD-Transformer model on streamflow forecasting in the Shiyang River, ... 5-, and 7-day forecasting horizons. The MVMD-Transformer model harnesses MVMD for the simultaneous decomposition of ...

Topic(s): Time Series Analysis and Forecasting

2024 - Elsevier BV | Journal of Hydrology

Monograph Open access

Alfred Still,

Includes Index. The Burndy Library Collection at the Huntington Library. Book.

1898 - Gale Group | NCCO-STM 1of2

Article Peer-reviewed

S. A. Saleh, E. W. Zundel, G. Young-Morris, Julian Meng, Julián Cárdenas-Barrera, E.F. Hill, S. Brown,

... disturbances, and flow into power systems through power transformers with grounded windings. The flow of a GIC through a power transformer creates adverse impacts, including high levels of harmonic ... currents flowing through the grounded windings, overheating of transformer windings, and significant disruptions in the reactive power flow through the affected transformer. Adverse impacts of GICs on power transformers depend on various factors, among which are the ... harmonic distortion due to GIC flows in power transformers. Tests are carried out using a laboratory power ...

Topic(s): Magnetic Properties and Applications

2024 - Institute of Electrical and Electronics Engineers | IEEE Transactions on Industry Applications

Monograph Open access

Rankin Kennedy,

Provenance: Armorial Bookplate of Sir David Salomons Bart., Broomhill, Tunbridge Wells.; The Burndy Library Collection at the Huntington Library. Book.

1887 - Gale Group | NCCO-STM 1of2

Article Open access Peer-reviewed

Qingsen Yan, Shengqiang Liu, Songhua Xu, Caixia Dong, Zongfang Li, Qinfeng Shi, Yanning Zhang, Duwei Dai,

... long-range dependencies in the medical image. Recently, Transformer can benefit from global dependencies using self-attention ... representations. Some works were designed based on the Transformers, but the existing Transformers suffer from extreme computational and memories, and they ... in parallel and propose a novel network, named Transformer based High Resolution Network (TransHRNet), with an Effective Transformer (EffTrans) block, which has sufficient feature representation even ... elaborately for tokens that are fed into each Transformer stream in parallel to learn the global information ...

Topic(s): AI in cancer detection

2023 - Elsevier BV | Pattern Recognition

Monograph Open access

Robert M. Wilson,

Provenance: Richard F. And Mary L. Fagan Collection.; The Burndy Library Collection at the Huntington Library. Book.

1916 - Gale Group | NCCO SciTechMed 2 of 2

Article Open access Peer-reviewed

Peicheng Shi, Xinhe Chen, Heng Qi, Chenghui Zhang, Zhi-Qiang Liu,

... YOLOX, employ convolutional neural networks instead of a Transformer as a backbone. However, these techniques lack a ... of the most active feature detector. Recently, a Transformer with larger receptive fields showed superior performance to ... convolutional neural networks in computer vision tasks. The Transformer splits the image into patches and subsequently feeds them to the Transformer in a sequence structure similar to word embeddings. ... global understanding of images. However, simply using a Transformer with a larger receptive field raises several concerns. ...

Topic(s): Industrial Vision Systems and Defect Detection

2023 - Hindawi Publishing Corporation | Computational Intelligence and Neuroscience

Article Open access Peer-reviewed

Mohamed Barakat A. Gibril, Helmi Zulhaidi Mohd Shafri, Rami Al‐Ruzouq, Abdallah Shanableh, Faten Nahas, Saeed Al Mansoori,

... reliability and the efficiency of various deep vision transformers in extracting date palm trees from multiscale and multisource VHSR images. Numerous vision transformers, including the Segformer, the Segmenter, the UperNet-Swin transformer, and the dense prediction transformer, with various levels of model complexity, were evaluated. ... generalizability and the transferability of the deep vision transformers were evaluated and compared with various convolutional neural ... DANet). The results of the examined deep vision transformers were generally comparable to several CNN-based models. ...

Topic(s): Smart Agriculture and AI

2023 - Multidisciplinary Digital Publishing Institute | Drones

Newspapers Open access

Giles Smith, Andrew Robson, John Fryer, Portia Colwell, David Brown, Tim Albone, Greg Hurst Political Correspondent, Frances Gibb Legal Editor, John F. Hansen, James Harding, Alexandra Williams, Arthur Doughty, Richard Hobson, David Rose, Michael Evans Defence Editor, Nikki Lennox, Philip Howard, Carol Midgley, Camilla Cavendish, Hilary Finch, Christine Seib, David Chater, James Ducker, David Charter Europe Correspondent, Tim Woolford, Wendy Ide, James Hider, Jeremy Whittle, Kevin Eason, Lucy Bannerman, Joe Joseph, Robin Pagnamenta, Ben Webster Transport Correspondent, Valerie Elliott Countryside Editor, Siobhan Kennedy, Deborah Haynes, Tom Bawden, Dominic Walsh, Dan Sabbagh Media Editor, Michael Harvey, Dalya Alberge Arts Correspondent, Rodney Legg, Alan Hamilton, David Haldane, Elizabeth Colman, Jenny MacArthur, Anatole Kaletsky, Rhys Blakely, David Hands, Jonathan Richards, Martin Turner, Tim Teeman, David Charter, Sue Mallia, Hugo Rifkind, Greg Marcar, Derwent May, Edward Gorman, Dr Stuttaford, Gary Duncan Economics Editor, Oliver Kay, Catherine O'Brien, James Harding Business Editor, Tom Baldwin, Cllr Richard Kemp, Jill Sherman Whitehall Editor, Ann Treneman, Simon de Bruxelles, Rob Wright, Arsineh Ghazarian, A. R. T. Kemasang, Mark Hunter, Edward Gorman Motor Racing Correspondent, Gabriel Rozenberg, Neil Harman Tennis Correspondent, David Chater, Tony Evans, Raymond Keene, Adam Sage, Christine Buckley, David Shiels, Sean O'Neill, R. M. Edwards, Nigel Hawkes, Jeremy Page, Neville Scott, Chris Campling, Sandra Parsons, Damian Whitworth, Marcus Leroux, Richard Caborn, John Carr, Peter Dixon, Gabriel Rozenberg Economics Reporter, Nick Hasell, Kevin Maher, Robin I. M. Dunbar, Julian Muscat, Paul Simons, Michael Herman, Nigel Kendall, Anne Ashworth, Dearbáil Jordan, Philip Webster Political Editor, Robert Crampton, Barbara Young, Sandy Pratt, Steve Hawkes, Emily Ford, Dominic Maxwell, Matthew Parris, Judith Salomon, George Caulkin, Russell Jenkins, Michael Austin, Jonathan Coote, Alan Lee, John MacAllister, Peter Riddell, Sandra Parsons, Tony Dawe, Carly Chynoweth, Philip Webster, Jacqui Goddard, David Sinclair, John Naish, Lewis Stuart, Alan B. Shrank, Martin Waller, David Fulton, Chris Ayres, Anne Sebba, Mark Baldwin, Nigel Hawkes Health Editor, Ashling O'Connor, Benedict Nightingale, Angus Batey, Rick Broadbent Athletics Correspondent, Sam Coates Political Correspondent, Dr Thomas Stuttaford, Stephen Dalton, Tim Luckhurst, Leo Lewis, Adam Sherwin, Michael Evans, Alexandra Blair Education Correspondent, Olav Bjortomt, Anil Sinanan, Miles Costello, Colin Perkins, Fiona Hamilton, Fay Schopen, David Robertson, Joe Bolger, Christopher Irvine,

... heavy metal thunder The really sickening thing about Transformers is how well it has already done at ... Ide recommends Maggie Gyllenhaal's addict mother instead Transformers 12A, 146mins Sherrybaby 15,96mins Running Stumbled No ...

2007 - Gale Group | TDA

Review Open access Peer-reviewed

Chukwuemeka Clinton Atabansi, Jing Nie, Haijun Liu, Qianqian Song, Lingfeng Yan, Xichuan Zhou,

Transformers have been widely used in many computer vision challenges and have shown the capability of producing ... learning more complex relations in the image data, Transformers have been used and applied to histopathological image ... present a thorough analysis of the uses of Transformers in histopathological image analysis, covering several topics, from the newly built Transformer models to unresolved challenges. To be more precise, ... fundamental principles of the attention mechanism included in Transformer models and other key frameworks. Second, we analyze ...

Topic(s): Advanced Neural Network Applications

2023 - BioMed Central | BioMedical Engineering OnLine

Monograph Open access

William Thomas Taylor,

First Edition Published 1909 under Title: Stationary Transformers; Includes Index. Provenance: W. K. Mccord Stamp on Front and Back Pastedown.; The Burndy Library Collection at the Huntington Library. Book.

1913 - Gale Group | NCCO-STM 1of2

Article Open access Peer-reviewed

Zhu Nan, Li Ji, Lei Shao, Hongli Liu, Lei Ren, Li Zhu,

A running transformer frequently experiences interturn faults; they are typically difficult to detect in their early stages but eventually progress to interturn short circuits, which cause damage to the transformer. Therefore, finding out the fault mechanism of the ... fault process can provide a theoretical basis for transformer fault detection. In this paper, an electromagnetic-solid ... consistent with an actual oil-immersed three-phase transformer is established. The transient process of winding from ...

Topic(s): Magnetic Properties and Applications

2023 - Multidisciplinary Digital Publishing Institute | Energies

Article Open access Peer-reviewed

Shahriar Hossain, Md Tanzim Reza, Amitabha Chakrabarty, Yong Ju Jung,

... proposed study aims to analyze the effects of transformer-based approaches that aggregate different scales of attention ... from image data. Four state-of-the-art transformer-based models, namely, External Attention Transformer (EANet), Multi-Axis Vision Transformer (MaxViT), Compact Convolutional Transformers (CCT), and Pyramid Vision Transformer (PVT), are trained and tested on a multiclass ... showcases that MaxViT comfortably outperforms the other three transformer models with 97% overall accuracy, as opposed to ...

Topic(s): Greenhouse Technology and Climate Control

2023 - Multidisciplinary Digital Publishing Institute | Sensors