Generative Artificial Intelligence: Opportunities and Challenges of Large Language Models
2023; Springer International Publishing; Language: English
10.1007/978-981-99-3177-4_41
ISSN: 2367-3370
Authors: Fabian Barreto, Lalita Moharkar, Madhura Shirodkar, Vidya Sarode, Saniya Gonsalves, A. Johns,
Topic(s): Natural Language Processing Techniques
Abstract: Artificial Intelligence (AI) research in the past decade has led to the development of Generative AI, in which AI systems create new content after learning from trained models. Generative AI can produce original work, such as an article, code, a painting, a poem, or a song. Google Brain initially used Large Language Models (LLMs) for context-aware text translation, and Google went on to develop Bidirectional Encoder Representations from Transformers (BERT) and the Language Model for Dialogue Applications (LaMDA). Facebook created OPT-175B and BlenderBot, while OpenAI developed GPT-3 for text, DALL-E 2 for images, and Whisper for speech. GPT-3 was trained on around 45 terabytes of text data at an estimated cost of several million dollars. Generative models have also emerged from online communities like Midjourney and open-source platforms like Hugging Face. On November 30, 2022, OpenAI launched ChatGPT, which used natural language processing (NLP) techniques and was built on an LLM. There was excitement and caution as OpenAI's ChatGPT reached one million users in just five days, and by January 2023 it had reached 100 million users. Many marveled at its eloquence and the limited supervision with which it generated code and answered questions. More deployments followed: Microsoft's OpenAI-powered Bing on February 7, 2023, and Google's Bard on February 8, 2023. We describe the working of LLMs and their opportunities and challenges for our modern world.