John K. Tsotsos
Current Biology, Volume 25, Issue 1, 2015. Elsevier BV. DOI: 10.1016/j.cub.2014.10.048. ISSN 1879-0445.
John Tsotsos is Distinguished Research Professor of Vision Science at York University and a Fellow of the Royal Society of Canada, with Adjunct Professorships in Computer Science and in Ophthalmology and Vision Sciences at the University of Toronto. He received his doctorate in Computer Science from the University of Toronto, developing the first computer system to interpret visual motion depicted in digital image sequences, with application to heart motion analysis. After a postdoctoral fellowship in Cardiology at Toronto General Hospital, he joined the faculty of the University of Toronto in Computer Science and in Medicine. In 1980, he founded the Computer Vision Group at the University of Toronto. He was recruited to York University in 2000 as Director of the Centre for Vision Research. He has been a Canadian Heart Foundation Research Scholar (1981–1983) and a Fellow of the Canadian Institute for Advanced Research (1985–1995), and currently holds the Canada Research Chair in Computational Vision (2003–2017). He has held visiting positions at the University of Hamburg, Germany; the Polytechnical University of Crete, Greece; the Center for Advanced Studies at IBM Canada; INRIA Sophia-Antipolis, France; and the Massachusetts Institute of Technology, USA. His current research focuses on a comprehensive theory of visual attention in humans. A practical outlet for this theory forms a second focus: embodying elements of the theory in the vision systems of mobile robots.

What turned you on to science — and vision science in particular — in the first place?

The earliest relevant memory I have is of learning about the accomplishments of Albert Einstein. He passed away when I was three years old, but I do recall telling my friends at a young age that I was going to be a scientist like Einstein, so I must have heard about him and his accomplishments, likely from my parents, and was inspired. The space race of the 1960s was also a major inspiration, and I went through most of my younger years alternating between wanting to be a physicist, an astronaut or an aerospace engineer. In my last year of high school, I was fortunate enough to be present when the first computer was wheeled into my math class. I thank my math teacher, Mr Kostyniuk, who introduced me to computing and who, after noticing how taken I was with this new device, allowed me to stay after class for hours to experiment with programming. As a result of that day, I added computer science to my short list of possible careers — and it eventually won.

Vision has fascinated me since my undergraduate days at the University of Toronto. In my first year, I subscribed to Scientific American, and in 1971 two papers there caught my fancy: Advances in Pattern Recognition, by R. Casey and G. Nagy, and Eye Movements and Visual Perception, by D. Noton and L. Stark. The first dealt in part with optical character recognition by computer, defining algorithms that might capture the process of vision and allow a computer to see. The second described the possible role of eye movements in vision, and how they might define our internal representations of what we see. There had to be a connection! I have been trying to understand vision, and what the connection between machine and biological vision might be, since about 1975.

Who were your key early influences, mentors and 'heroes'?

I have already mentioned Einstein's influence, but I must admit this was well before I had any real understanding of science, or even of what I was talking about.
Still, I do feel this pointed me in the right direction. I have also already mentioned my math teacher, who introduced me to computers. But once I was in university, among many strong influences, the one person who wound up having the greatest influence on me, by far, was John Mylopoulos. I met John when he taught me a computing theory course; he then invited me to join his research group as an undergraduate, and I remained a member of his group for six years. He was also the primary supervisor of my PhD, along with two others. John taught me how to formalize my conceptual solutions, how to organize and nurture a research group, how to adapt supervision to the needs of individual students, and how to lead by example. More generally, John introduced me to Artificial Intelligence, and especially to its sub-areas of knowledge representation, knowledge-based systems, problem solving and reasoning. Noting that I wanted to do computer vision, a topic not within his main expertise, John sent me to a NATO Advanced Study Institute in 1978, and this was a turning point for me. Over two weeks I heard lectures and tutorials, and had inspiring conversations with many of the pioneering and leading figures in computer vision. Among them was Steven W. Zucker. Steve, then at McGill University, agreed to help me with my thesis work and has been my vision guidepost ever since. Steve taught me about human and computer vision, and how to take the conceptual formulations I developed into the world of mathematics with formal rigor. In 1985, I was appointed Fellow of the Canadian Institute for Advanced Research, founded and directed by J. Fraser Mustard. Fraser was an incredible individual with infectious enthusiasm and a drive for excellence; he taught me that judgment in the scientific world is a harsh business, and then how to react to it and rise to the challenge. John, Steve and Fraser hold an importance to me greater than they can understand.

If you had not made it as a scientist, what would you have become?

My only other talent that has earned me money is music — I play the guitar, arrange music, and sing. Or at least I once did! At the end of high school, my friends and I were presented with a choice: become the house band at a well-known local club, or go to university. We all chose university. My goal on entering university was to become an aerospace engineer, but I discovered that my best ability was in computing. I guess that if I had not made it as a scientist, I would have had some kind of career in computing.

Which historical scientist would you like to meet, and what would you ask him or her?

I would love to meet any of the ancient Greek mathematicians or scientists, such as Thales, Pythagoras, Euclid, Archimedes, Democritus, Plato, Aristotle, Alcmaeon or Anaxagoras. I first learned these names and their accomplishments from my father, and they provided strong inspiration during my childhood. Later, I found the excellent volumes by Sir Thomas Heath, and by Morris Cohen and I.E. Drabkin, which give very detailed histories of all the ancient Greek mathematicians and scientists. Only recently has a different level of wonder emerged: how is it that there was this enormous outpouring of creativity, in so many domains, from so small a population, over a period of about 350 years? I would love to sit down with a group of these ancient intellects and probe them about the societal, cultural, religious, economic, and other circumstances and influences of the day. How did these factors contribute to the overall environment that led to such incredible contributions in mathematics, science, history, medicine, literature, art, and astronomy? And of the three who wrote about vision, Aristotle, Alcmaeon and Anaxagoras, I would like to hear what their intuitions were and how they arrived at their conclusions; they did not have our modern experimental tools, so their powers of observation must have been formidable.
Do you think there is too much emphasis on big data-gathering collaborations as opposed to hypothesis-driven research by small groups?

Yes, I do. I have nothing against data mining tools and their value; they are a terrific addition to the experimental repertoire. But that is all they are: an addition. They are not a replacement for traditional scientific methods. Statistical correlation now seems to be the new definition of scientific proof, with fewer and fewer people understanding the difference. Correlation is not the same as proof. A proof provides a full explanation of why some phenomenon is observed, whereas correlation simply tells us that it is observed. The scientific community is being misled into dismissing the former as irrelevant. I recall a quotation from C. Anderson's article The End of Theory: Will the Data Deluge Make the Scientific Method Obsolete? (WIRED, 2008): "This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves." The problem is that I wish to know why! And I am certain I am not alone.
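A minimal Python sketch, assuming only NumPy, makes this distinction concrete: two series generated by entirely independent processes will frequently show a strong correlation simply because both drift over time, yet neither explains the other.

```python
import numpy as np

# Two time series produced by completely independent processes:
# each is a random walk, so both tend to drift over time.
rng = np.random.default_rng(0)  # arbitrary seed
n = 500
a = np.cumsum(rng.normal(size=n))  # independent random walk 1
b = np.cumsum(rng.normal(size=n))  # independent random walk 2

# Pearson correlation between the two unrelated series.
r = np.corrcoef(a, b)[0, 1]
print(f"correlation of two unrelated series: r = {r:.2f}")
```

Across seeds, |r| is frequently sizeable (the classic spurious correlation between trending series); the coefficient reports that a pattern exists, not why it exists.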
Do you feel a push towards more applied science — and if you do, how does that affect your own work?

This is quite obvious, and I feel it most from granting agencies that get their marching orders from government. Governments are so worried about accountability that they feel each dollar spent on research must lead to a direct, measurable result in economic terms. This reveals a deep misunderstanding, if not also a mistrust, of science and scientists. I do not believe that anyone can predict the future. Even the best entrepreneurs of our day owe more than they will perhaps admit to being in the right place at the right time. The scientific world is full of examples where discoveries made in one time period see applications years, if not decades, later. I feel the best strategy is to allow those trained for business to do their job: to understand scientific discoveries well enough to make sensible choices about what may or may not be commercialized, and when this might happen. Similarly, those trained in doing research should be allowed to do exactly that, science, without also trying to become third-rate entrepreneurs. A scientist's track record of success in discovery remains the best measure of where to put resources.

What do you think of the state of Artificial Intelligence research?

AI research has always been of two minds, sometimes focusing on trying to understand human intelligence, and sometimes on trying to develop devices that display intelligence. Both have importance and value, as well as a close inter-dependence. AI has been guided, to some extent, by the goal of passing the Turing test. With all due respect to Alan Turing, I feel that the Turing test for artificial intelligence is simply not relevant. The recent defeat of the Turing test by a computer program is misleading at best. Many feel that the test is inadequate; some propose Winograd schemas — simple questions that require pronoun referent disambiguation to answer — as a replacement (for example, deciding what 'it' refers to in "The trophy does not fit in the suitcase because it is too big"). But this also misses the point. A test of artificial intelligence that does not include sensory perception, in its role of seeking, acquiring and interpreting input directed by task demands and interacting with cognition and behavior in satisfying tasks, is inadequate. The amount of human neocortex involved in some level of sensory, sensory-motor or associative processing has been estimated at perhaps 50% or more. Is it reasonable to discount perhaps half of the cortex when designing an intelligence test? Much of intelligence occurs within those discounted brain areas: it cannot be otherwise, simply because the remaining areas could not provide sufficient computing power on their own. There seems to be something wrong with this Turing-driven view, and yet it is long-standing and almost unshakeable within AI.

What have you learned about the interdisciplinary research process?

I have been immersed in interdisciplinary research since graduate school. Over the years, I have linked computer science with engineering, medicine, dentistry, psychology, neuroscience, and robotics. I have collaborated with a wide spectrum of other scientists and been funded by a variety of sources. I have learned that the willingness of people to collaborate is a poor predictor of success. I have also found all peer review and reward mechanisms inadequate in their ability to understand interdisciplinary collaborations. What does predict success is the following constellation: interpreting a willingness to collaborate as a willingness to share, not only data but sometimes control; asking a question that another discipline not only cares about but also has the tools and knowledge to answer; and finding partners who already respect one another's scientific language, background and accomplishments.

What has been your biggest mistake?

Have you ever seen the movie Mr Destiny, with Jim Belushi? In it, the protagonist goes through life lamenting his current state, regretting events he perceived as mistakes that caused him not to live his dream life. He goes into a bar on his birthday, and the bartender magically transports him to a world where those mistakes never happened. Initially, he is elated with his new job, new wife, new mansion and new status, but he quickly realizes what he lost in the process and longs to return to his original home, wife, family, and job, even though they were not as impressive. He then wakes up, fully appreciative of what he has here and now. I do not like to label events as mistakes, because I do not know where I would be if I had followed any different path, whether professionally or personally. I am happy and would not trade what I have for anything. But this doesn't mean that I haven't learned anything during my life's course that I now teach my trainees. I have on my web page a 'recipe' for a successful research career. It involves varying amounts of 10 ingredients, in descending order: passion, focus, confidence, community, maintenance, communication, opportunism, competition, luck and humanity. Take a look at http://www.cse.yorku.ca/~tsotsos/Tsotsos/Motivations.html to see what each means to me.
What do you think computer science, as a discipline, can offer to biology?

Computer science, broadly defined, is the theory and practice of representing, processing, and using information; it encompasses a body of knowledge concerning algorithms, communication, languages, software, and information systems. In a nice 2007 paper, Peter Denning claimed that it offers a powerful foundation for modeling complex phenomena such as cognition. The language of computation, he claims, is the best language we have to date for describing how information is encoded, stored, manipulated, and used by natural as well as synthetic systems. I agree. It is no longer valid to think of computer science only as the study of phenomena surrounding computers; computing is the study of natural and artificial information processes. Whereas the utility of the computer as a tool for storing, analysing, and using data is virtually ubiquitous, the conceptual foundations of all these uses remain obscure and not commonly appreciated. Nor are the theoretical aspects of computer science broadly known, or how its techniques for system design, automation and evaluation may also apply to natural systems. Computer science still has much to offer natural science, and the potential for novel collaborative science seems huge.

What do you think are the big questions to be answered next in your field?

I feel that there isn't enough work on connecting the dots. If you attend the Annual Meeting of the Society for Neuroscience, for example, you see thousands of posters, each describing a small, yet interesting, element of brain function — a dot. Over the years, many thousands of dots have been presented, and they all relate to the same brain — but how? Are they all mutually consistent? Not likely. The integration — connecting the dots — does not seem to be a common theme in research. Certainly it is riskier. Perhaps it is also perceived as insufficiently novel, and thus not of interest to top publication venues. Yet I believe it can be the most useful way to constrain science. Only by connecting the dots can one discover where there might be gaps, weed out inconsistencies, and develop new predictions at a larger scale of abstraction than the dots themselves. So it is really an issue of raising the importance of using the constraints that discoveries at one level provide to build up an explanation at a more abstract level of description. One problem that arises immediately is: what language can be used to formalize the integration? I believe, as mentioned previously, that the language of computation, broadly interpreted, is ripe for such a task. In my own research area, the next big task is to develop theories of vision that explain a broad range of human visual behavior, not just the single tasks that seem to be the current focus.