Article · Open access · Peer reviewed

David J. Chalmers

2023; Cell Press; Volume: 111; Issue: 21; Language: English

10.1016/j.neuron.2023.10.018

ISSN

1097-4199

Authors

David J. Chalmers

Topic(s)

Neuroscience, Education and Cognitive Function

Abstract

David Chalmers is a philosopher who studies consciousness. After sketching his background in mathematics, science, and philosophy, he describes the problems of consciousness and his collaboration with neuroscientists. He also discusses the roles of neuroscience and philosophy in studying consciousness and other topics, as well as the future of these fields.

David Chalmers is a university professor of philosophy and neural science and co-director of the Center for Mind, Brain, and Consciousness at New York University. He studied mathematics at Adelaide and Oxford and received a PhD in philosophy and cognitive science from Indiana University. He is the author of The Conscious Mind (1996), Constructing the World (2012), and Reality+: Virtual Worlds and the Problems of Philosophy (2022). He co-founded the Association for the Scientific Study of Consciousness (ASSC). He is known for formulating the “hard problem” of consciousness and for the idea of the “extended mind,” which says that the tools we use can become parts of our minds.

Growing up, I was a math and science geek. As an undergraduate, I mainly studied math, physics, and computer science, but I also took one philosophy course, which planted a seed. I went on to graduate school in math, but I gradually became obsessed with the problem of consciousness. It seemed to me that this was the most interesting and important unsolved problem in science in the late 20th century, comparable to the problems of space and time in the 17th century or the problems of life in the 19th century. I ended up moving to Indiana University to work with Doug Hofstadter (author of Gödel, Escher, Bach). I did my PhD in philosophy and cognitive science, publishing some papers on artificial intelligence (AI), especially on artificial neural networks, and writing my dissertation on consciousness.

As I understand it, consciousness is subjective experience. It’s the experience of the world and of the mind from the first-person point of view. There’s a subjective experience of seeing red, of hearing middle C, of feeling pain, of thinking about your mother, of deciding what to do. That’s consciousness. The hard problem of consciousness is the problem of explaining subjective experience: how and why do physical processes give rise to a first-person point of view? Why doesn’t brain processing go on “in the dark,” without the illumination of consciousness? The hard problem is distinguished from the “easy” problems of explaining various objective functions such as discrimination, integration, control, and report. Those problems are highly nontrivial, but we at least have a framework for explaining them in terms of neural and computational mechanisms. Once you explain those things, however, you still need to explain why they’re accompanied by subjective experience, and we don’t really have a framework for that.

I didn’t invent the hard problem, of course. It’s been around for centuries. I just gave it a catchy name and was in the right place at the right time—the first Tucson conference on consciousness in 1994—for the name to catch on.
I’ve always said that the reason the name caught on is that everyone knew what the hard problem was all along. The name just makes it a bit harder to avoid. The science of consciousness has made a lot of progress over the three decades since then, but I think the consensus view is that we’re still some distance from solving the hard problem. We’ve made considerable progress on correlations between brain processes and consciousness, but so far, those correlations haven’t added up to an explanation.

My affiliation with NYU’s excellent Center for Neural Science is really a courtesy appointment, but neuroscience has become increasingly important for me over the years. I took a couple of courses in neuroscience as a PhD student, but my focus then was more on philosophy, psychology, and AI. After that, I was a postdoc in the new Philosophy-Neuroscience-Psychology program at Washington University in St. Louis, where I got to talk with superb neuroscientists such as Charlie Anderson, Steve Petersen, Marc Raichle, and Dave Van Essen. But, again, it was that first Tucson consciousness conference in 1994 that was the watershed moment for me. There I met a number of neuroscientists who were interested in consciousness: Walter Freeman, Ben Libet, Petra Stoerig, Giulio Tononi, and numerous others. Most saliently, I had a number of discussions with Christof Koch that turned into an extended interaction over the years. He and Francis Crick had set the agenda for finding neural correlates of consciousness back in 1990, and I would visit them both in Southern California from time to time to hash out issues from neural correlates to the hard problem. Across a couple of papers, Christof and I ended up jointly coming up with the now-standard definition of neural correlates of consciousness in terms of minimal neural mechanisms that are jointly sufficient for consciousness.
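In symbols, that minimal-sufficiency definition can be sketched roughly as follows. This is an illustrative rendering, not the exact formulation from the Chalmers-Koch papers: a neural system N counts as an NCC when its states suffice, under suitable background conditions, for corresponding states of consciousness, and no proper subsystem has the same property.

```latex
% Illustrative sketch only, not the verbatim published definition.
% N is an NCC iff some mapping f sends each state s of N to a conscious
% state f(s) that s is sufficient for (under background conditions),
% and no proper part of N has the same property.
\[
\mathrm{NCC}(N) \iff
\exists f\, \forall s\, \bigl( N\!=\!s \;\Rightarrow\; \text{conscious state} = f(s) \bigr)
\;\wedge\;
\neg\exists\, N' \subsetneq N \text{ with the same property.}
\]
```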
We also co-founded the Association for the Scientific Study of Consciousness along with a few other philosophers, psychologists, and neuroscientists. ASSC has ended up being a huge focus for serious scientific work on consciousness over the decades and also a great venue for people from all these fields to interact.

These days I am co-leading an experimental project on testing theories of the neural correlates of visual consciousness (including so-called first-order theories that focus on visual cortex and higher-order theories that focus on prefrontal cortex), along with my NYU colleagues Biyu He (neuroscience) and Ned Block (philosophy), with Jan Brascamp, Richard Brown, Rachel Denison, Victor Lamme, Hakwan Lau, and Megan Peters also playing key roles. The experiments involve fMRI and psychophysics with subjects in change blindness and subjective inflation paradigms. As a philosopher, I have found it a lot of fun to work with neuroscientists on hashing out the details of experimental design.

Yes, at the second ASSC conference in Bremen in 1998, Christof bet that we’d discover the neural correlates of consciousness within 25 years, and I bet that we wouldn’t. At the 2023 ASSC conference in New York City, we settled the bet. Unsurprisingly, I won, which was no achievement on my part. Finding the neural correlates of consciousness doesn’t require solving the hard problem, but it’s still pretty hard. We renewed the bet for a second round, so maybe Christof will have a better chance in 2048. I’m an eternal optimist, and I think the problem is solvable one way or another.

I’ve argued that neuroscience alone can’t fully explain consciousness (it can solve the easy problems but leaves a gap in solving the hard problem), but it will certainly be a huge part of the story. One style of solution that could work is a mathematical theory of the fundamental laws that bridge neuroscience and consciousness, which in my view might be akin to the fundamental laws in physics. Neuroscience has a big role to play in that project. Neuroscientists have already developed theories of the biological conditions for consciousness in humans, though I don’t think the proponents of these theories would say they’ve solved the hard problem yet. I’m also interested in other speculative approaches, from panpsychism (consciousness is everywhere) to illusionism (consciousness is an illusion). But it’s early days yet.

Those two theories (IIT and GNWT) seem to be the most widely discussed theories these days. IIT offers a precise and sweeping mathematical theory of consciousness, centering on phi, a quantity that is a measure of consciousness. It’s highly controversial, both because it makes strong claims (e.g., even very simple systems can be conscious) and because it seems to run well ahead of the empirical evidence and is very hard to test (not least because phi is hard to measure). GNWT offers a less ambitious theory that stays closer to the neuroscience, postulating a neural workspace primarily in prefrontal cortex, but it still makes some strong and surprising claims (for example, that we are conscious of only one or two things at a time). I suspect that, at the end of the day, both theories will turn out to be wrong even as theories of the neural correlates of consciousness. But for now, it’s useful to have these theories and others on the scene to help integrate experimental data and to think about how the data might fit into a big picture.
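As a concrete illustration of why a quantity like phi is hard to compute, consider a toy "whole minus parts" measure. This sketch is emphatically not IIT's actual phi, which analyzes a system's cause-effect structure and minimizes over all partitions; it only computes multi-information for a made-up two-unit joint distribution, and every name and number in it is hypothetical.

```python
# Toy "integration" measure, loosely in the spirit of IIT's phi.
# NOT the real IIT algorithm: actual phi analyzes cause-effect structure
# and minimizes over all partitions of the system, which is why it becomes
# intractable for large systems. This sketch only computes multi-information
# ("whole minus sum of parts") for a small, made-up joint distribution.
import numpy as np

def entropy_bits(p: np.ndarray) -> float:
    """Shannon entropy (in bits) of a probability vector."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def multi_information(joint: np.ndarray) -> float:
    """Sum of marginal entropies minus joint entropy: how much the
    whole carries beyond its parts taken independently."""
    n = joint.ndim
    h_whole = entropy_bits(joint.ravel())
    h_parts = sum(
        entropy_bits(joint.sum(axis=tuple(j for j in range(n) if j != i)).ravel())
        for i in range(n)
    )
    return h_parts - h_whole

# Two perfectly correlated binary units: 1 bit of "integration".
correlated = np.array([[0.5, 0.0],
                       [0.0, 0.5]])
# Two independent binary units: the parts tell the whole story.
independent = np.full((2, 2), 0.25)

print(multi_information(correlated))   # -> 1.0
print(multi_information(independent))  # -> 0.0
```

Even this crude proxy requires the system's full joint distribution; real phi adds a minimization over partitions, whose number grows faster than exponentially with the number of units, which is one reason the theory is so hard to test.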
Someone once said scientists should listen to philosophers’ questions but not to their answers. Philosophers are also good with terminology and with concepts, providing distinctions and definitions that can clarify murky issues. Ned Block’s distinction between phenomenal consciousness and access consciousness has become standard in the neuroscience of consciousness, as has the definition of neural correlates of consciousness that I mentioned. Philosophers sometimes provide useful theoretical frameworks: Jerry Fodor’s ideas about modularity have been influential in thinking about localization, and higher-order theories of consciousness that originated in philosophy are now getting uptake in neuroscience. Philosophers are also pretty good at figuring out what follows and what doesn’t follow from experimental results. These days, it’s becoming common for philosophers of mind to know a lot of neuroscience. In some areas, especially on foundational issues like consciousness, representation, computation, and agency, it has become pretty standard for philosophers and cognitive neuroscientists to collaborate, and the line between philosophy and theoretical neuroscience can become quite blurry.

Neuroscience is a huge and increasingly specialized field, and not everyone has to think about everything. But among the smallish but growing group of cognitive neuroscientists who work on consciousness (perhaps in the context of perception, emotion, or cognition), the hard problem is reasonably well understood. Within that group, many people understandably set the hard problem aside as too hard or too philosophical and work on something else, such as neural correlates of consciousness, instead. But there’s also a subpopulation of neuroscientists who try to engage with the hard problem where they can. Of course, there are many different ways of engaging with the problem, from trying to solve it to trying to dissolve it. My experience is that when neuroscientists have views on philosophical problems, those views range nearly as widely as philosophers’ views, from extreme reductionism to extreme anti-reductionism. We’d need survey data to know for sure.

I’ve co-organized two major surveys of the views of professional philosophers on many philosophical questions (the PhilPapers surveys in 2009 and 2020, conducted by David Bourget and me). We found there that a majority of philosophers accept that there is a hard problem of consciousness. Some other results are that a large majority of philosophers are atheists and a smaller majority are physicalists about the mind. In a metasurvey, we found that researchers’ guesses about the results of the survey were often systematically wrong; I’d expect that the same would be true in neuroscience. The closest analog that I know of to date is a recently published survey by Jolien Francken and colleagues of researchers at the 2018 ASSC conference, which suggested that a majority of those researchers also accept that there is a hard problem. But I would love to see a large-scale sociological survey of neuroscientists, analogous to the PhilPapers surveys, on key issues in neuroscience.

My biggest focus over the last few years has been what I call “technophilosophy”: using philosophy to shed light on technology and using technology to shed light on philosophy. The name is inspired by Patricia Churchland’s term “neurophilosophy” for a similar interaction between philosophy and neuroscience. My recent book Reality+ focuses especially on virtual reality and uses this as a lens to address philosophical problems about reality. It also uses AI as a lens to think about philosophical problems about consciousness and the mind-body problem, and smartphones and augmented reality to think about how the tools we use can extend the mind. Since the book came out, I’ve been focusing especially on AI, returning to some of the questions about artificial neural networks that I worked on as a student.

The recent explosion in AI is both exciting and concerning. One role that AI is playing is to make many questions more pressing, turning some philosophical problems into practical problems. For example, it has now become a seriously debated question whether current AI systems are conscious or whether they may be soon. Once we develop conscious AI systems, there will be many difficult moral and social issues about their role in society. I’ve been trying to get clearer about the status of both current AI systems, such as the GPT systems, and their successors in years to come. I take seriously the possibility that we could have conscious AI before too long. In one project, I’ve collaborated with a group of philosophers, AI researchers, and neuroscientists on developing potential indicators of consciousness based on our best current theories. Of course, this also makes it even more pressing that we develop good theories of consciousness.
Another role that AI might play in the longer term is to help solve some of the deepest philosophical and scientific problems. Once AI systems are more intelligent than humans, it seems entirely possible that they will be better than us at doing science and at doing philosophy. Or perhaps they may extend our minds so that augmented humans are better than before at thinking about these things. Either way, I’m hopeful that progress in AI might eventually help us solve many problems, including the problem of consciousness.

The author declares no competing interests.
