Article (Open Access)

Concepts and predication from perception to cognition

2020; Wiley; Volume: 30; Issue: 1; Language: English

10.1111/phis.12185

ISSN

1758-2237

Author

Jake Quilty‐Dunn

Topic(s)

Multisensory perception and integration

Abstract

One popular doctrine in 20th-century philosophy was conceptualism about perception. The core idea was that perceptual awareness is structured by concepts possessed by the perceiver. A primary motivation for conceptualism was epistemological: perception provides justification for belief, and this justificatory relation is only intelligible if perception, like belief, is conceptually structured (Brewer, 1999; McDowell, 1994; Sellars, 1956). We perceive that a is F, and thereby grasp perceptual evidence that justifies the belief that a is F and inferentially integrates with premises like If a is F then a is G to produce the belief that a is G. Conceptualism is less popular today (cf. Bengson, Grube, & Korman, 2011; Mandelbaum, 2018; Mandik, 2012). The a priori justification for conceptualism has crashed face-first into a wall of empirical evidence. For instance, children and non-human animals possess perceptual capacities despite lacking many hallmarks of conceptual cognition (Bermudez, 1998; Burge, 2010a; Block, ms). Meanwhile, in adults, mental imagery and related phenomena implicate iconic rather than conceptual/propositional formats (Carey, 2009; Fodor, 2007; Quilty-Dunn, 2019a). A growing contingent of theorists thus regard perception as a natural kind marked by its proprietary nonconceptual representations (Burge, 2014; Burnston, 2017a; Carey, 2009; Kulvicki, 2015a; Toribio, 2011; Block, ms; see also Evans, 1982; Hopp, 2011; Peacocke, 2001, for other nonconceptualist arguments). Though opinion has shifted strongly in favor of nonconceptualism, it may be time for the pendulum to swing back. Putting the traditional normative motivations for conceptualism aside, it makes sense even from a purely descriptive, naturalistic perspective that at least some of the vehicles of perception should be conceptual.
Many cognitive operations make use of concepts; thus many cognitive responses to perception would be facilitated if some outputs of perception came prepackaged in a conceptualized format. This point fits with modularity-based accounts of perception, and was fittingly made by Fodor in his discussion of input modules as "subsidiary systems" that must "provide the central machine with information about the world; information expressed by mental symbols in whatever format cognitive processes demand of the representations that they apply to" (Fodor, 1983, p. 40). Similarly, Mandelbaum argues that the outputs of modular perceptual systems ought to be conceptualized in order to "actually guide action by entering into other cognitive processes" (2018, p. 271). It is an underemphasized explanatory virtue of modularity that it allows for a system to be distinctly perceptual (in virtue of its modularity) while outputting representations that are immediately consumable by cognition (in virtue of their format). Modularity-based versions of conceptualism thereby avoid full-fledged versions of the "interface problem" in interactions between perception, cognition, and action (Burnston, 2017b; Butterfill & Sinigaglia, 2014; Mylopoulos & Pacherie, 2017; Shepherd, 2018; 2019). It is fully compatible with this modularity-based conceptualism that some perceptual processes output representations in nonconceptual (e.g., iconic) formats. Instead of insisting on conceptual structure as a transcendental epistemological requirement, modularity-based conceptualists can be pluralists about perceptual representation (Quilty-Dunn, 2019b). As long as some significant component of perception is conceptual and feeds immediately into cognition, there is room for other perceptual representations to have other formats with other functional advantages.
For example, perhaps iconic representations allow for richer, messier content to be encoded in perception, while sparse conceptual representations provide neatly packaged categorizations to central cognition. However, perception is older than cognition. One might object that our perceptual systems evolved in creatures who lacked cognition, and so there was no evolutionary pressure for concepts to figure in perception. In what follows, I'll sketch a version of conceptualism that posits concepts in perception independently of stimulus-independent cognitive abilities. In particular, I'll argue not only that adult humans have conceptually structured perceptual representations, but also that these conceptual outputs of perception constitute a natural representational kind found in children and animals alike. Perceptual object representations function to segment out particulars, track them, and predicate features of them, including conceptual categories. These object representations constitute an evolutionarily ancient and developmentally early source of predicate-argument propositional structure that is useful for (1) tracking individuals, (2) subsuming them under categories, and (3) distinguishing reference-guiding elements from pure attributions. These structures can function as evidential inputs to inferential processes in creatures that have the requisite inferential abilities. I will first argue against stimulus-independence as a constitutive condition on conceptuality (Prinz, 2002, p. 197; Beck, 2018; Burge, 2010b; Camp, 2009) in favor of a Cartesian view that concepts are simply representations of a certain sort that, in principle, require no particular mental abilities for their instantiation in human and animal minds (Fodor, 2004). I'll then use empirical evidence to argue that, in fact, perceptual object representations are conceptualized propositional structures that develop (and likely evolved) prior to creatures' abilities to use them in inference.
The resulting picture preserves much of the letter—if not exactly the spirit—of traditional conceptualism. It is not entirely clear how we ought to understand stimulus-dependence. Lots of mental activity might happen to be prompted by a pattern of stimulation and happen to end when stimulation ends, but this wide net might capture a messy variety of mentation rather than a natural kind. One could reasonably demand well-defined, testable characterizations of stimulation and dependence thereupon, and difficulties will surely arise in trying to provide them (see Beck (2018) for careful discussion of the details). I'll discuss two forms of stimulus-independent use: recombinability and logical inference. However, I propose to grant in general that there is some notion of stimulus (in)dependence that's coherent enough to figure in a candidate condition on concept possession. What matters for present purposes is the following claim: concepts are the sorts of mental phenomena that can only occur in creatures that have the ability to deploy them independently of what their transducers are doing at the moment. I'll also put aside a particularly strong form of the claim at issue. Beck (2018) argues that perception and cognition are distinguished by means of stimulus-independence: a state/process is perceptual iff it is stimulus-dependent, and cognitive iff stimulus-independent. This formulation runs into a counterexample: perception-based demonstrative thought, which is stimulus-dependent but cognitive (Beck, 2018, pp. 328–329). Beck's way of responding to this counterexample is to add the condition that every element of a representation must be stimulus-dependent for the representation (and the process that produces it) to be perceptual (2018, p. 330). Since demonstrative thoughts have concepts as elements—e.g., the concept red is the predicative element in the thought that is red—and concepts are redeployable elsewhere, demonstrative thoughts fail this additional criterion. 
However, this additional criterion simply rules out the possibility of deploying concepts in perceptual systems by fiat. It seems like a largely empirical question whether humans can deploy concepts perceptually (Jacobson & Putnam, 2016; Mandelbaum, 2018). Our theories shouldn't build in the impossibility of this scenario to avoid counterexamples. One available move for Beck's view would be to distinguish states and processes: concepts are stimulus-independent states, but they can be deployed via stimulus-dependent perceptual processes. On this relaxed condition, however, perception-based demonstrative thoughts again represent a counterexample. Thus Beck's version of the stimulus-dependence criterion faces a dilemma: avoid demonstrative thought as a counterexample but render perceptual deployment of concepts impossible by fiat; or allow for the perceptual deployment of concepts but succumb to the counterexample. My main target is not the thesis that perception should be analyzed in terms of stimulus-dependence, but instead the hypothesis that having concepts constitutively requires the ability to use them in a stimulus-independent way—call that hypothesis stimulus-independence. It's compatible with stimulus-independence that perception is constituted by something completely unrelated, such as proprietary representational formats or modularity. It's even compatible with this hypothesis (so stated) that concepts could be deployed via stimulus-dependent processes. As long as a creature with the concept red can deploy that concept independently of stimulation, it's entirely possible that the creature might have some stimulus-dependent means of deploying it as well. Though stimulus-independence allows for concept deployment in perception, it places significant constraints on the kinds of creatures that can deploy concepts. In particular, such creatures must possess the ability for stimulus-independent thought. 
A member of a species that has evolved perceptual systems and uses them to guide action but lacks central cognition cannot have concepts. Such creatures are "passive reactors, at the mercy of their environments" (Camp, 2009, p. 290), lacking the cognitive freedom that marks conceptual thought. Thus, for Camp, the representations they deploy in perception and action-guidance must be nonconceptual. For a state "to even be a candidate for being conceptual, it must be cognitive" (2009, p. 279). Specifically, concepts are "cognitive, representational abilities that are systematically recombinable in an actively self-generated, stimulus-independent way" (Camp, 2009, p. 302). Likewise, for Burge, a representation is conceptual iff it "can function in pure predication" (2010b, p. 45), which requires functioning "outside the scope of a context-bound identificational, referential structure" (2010b, p. 44). For example, in 'That table is brown', the predicate 'table' functions within the scope of a referential noun phrase ('That table'), while 'brown' functions purely predicatively. Burge assumes that perceptual contents exclusively have context-bound identificational, referential structures (e.g., That F). Perception therefore cannot suffice for pure predication. Since possessing concepts requires the ability to use them in pure predication, possession of concepts requires the ability to use concepts outside perceptual contexts. For Burge, a paradigm example of such use is logical inference, discussed at length below. Thus Burge's view leads naturally to a version of stimulus-independence. A large part of the intuitive appeal of stimulus-independence is that it captures what Kant called the "spontaneity" of thought (Kant, 1929, A50/B74). 
The striking creativity and freedom of human thought suggests that concepts are the sorts of mental states that can be freely recombined into novel structures, forming the finite basis of indefinitely many thinkable thoughts (e.g., Chomsky, 1986). The requirement of recombinability has roots in Evans' Generality Constraint: "if a subject can be credited with the thought that a is F, then he must have the conceptual resources for entertaining the thought that a is G, for every property of being G of which he has a conception" (1982, p. 104). The Generality Constraint is regularly taken as a constitutive condition on concept possession (Beck, 2012; Camp, 2009; Peacocke, 1992), sometimes in relaxed versions (Carruthers, 2004; 2009). Camp argues that stimulus-independence captures the sense in which concepts are not merely recombinable, but recombinable in a way that constitutes "active, genuinely rational thinking" (Camp, 2009, p. 287). A starting point for Camp's approach is that concepts are mental abilities (Camp, 2009, p. 278n3; cp. Burge, 2010a, p. 197). A primary goal of her defense of stimulus-independence is to develop a theory of concepts "that captures the core set of cognitive tasks that we expect concepts to perform" (Camp, 2009, p. 276). Camp's theory is thus a version of what Fodor calls "concept pragmatism" (2004, p. 30) and dubs "the characteristic doctrine of twentieth century philosophy of mind/language" (2004, p. 29; emphasis his). According to concept pragmatism, "concept possession is some sort of dispositional, epistemic condition" (ibid.). Camp's defense of stimulus-independence commits to a form of concept pragmatism on which the ability to form novel thoughts independently of stimulation is constitutive of concept possession. Fodor's alternative to concept pragmatism is "Cartesianism", according to which to "have the concept dog is to be able to think about dogs as such" (2004, p. 31).
Cartesianism is a thesis about what it is to possess a concept, but it fits naturally with a representationalist theory of concepts themselves. That is, concepts are mental representations (i.e., particulars rather than abilities) and the "concept dog is that mental particular the possession of which allows one to represent—to bring before one's mind—dogs as such" (Fodor, 2003, p. 19). Cartesianism is thus the natural extension of representationalism (i.e., "the representational theory of mind" (Fodor, 1998)), according to which mental states are primarily (relations to) representations, and mental processes are primarily computational operations over representations. It is crucial to representationalism that mental representations are ontologically more basic than cognitive abilities. Cognitive abilities, such as the ability to draw inferences, are analyzed in terms of mental processes, such as inferences; and mental processes are in turn analyzed as operations over mental representations, such as the operations over constituent structures that underlie deductive inference (Fodor & Pylyshyn, 1988; Quilty-Dunn & Mandelbaum, 2018). If this representationalist story is correct, then it can't also turn out that mental representations are analyzed as abstractions over cognitive abilities, on pain of circularity. If our cognitive abilities arise out of processes defined over representations, then representations must be characterizable independently of those abilities. Representationalism provides just such an independent characterization: mental representations are symbols, i.e., vehicles with representational contents. Concept pragmatism is motivated by a desire to capture a "core set of cognitive tasks" (Camp, 2009, p. 276) by building the ability to perform them into the metaphysics of concepts (cp. Prinz & Clark, 2004). 
For representationalists, cognitive abilities can be data to be explained by positing concepts, and some can even be diagnostic of concepts (rather than other sorts of representations, or nonrepresentational states). But they can't be constitutive of concepts or their possession conditions. A creature possesses cognitive abilities in virtue of having the right sort of computational machinery for computing over the right sort of concepts; possessing the concepts explains possessing the abilities rather than vice versa. Concept pragmatism, according to representationalism, conflates epistemology and metaphysics. While abilities to accomplish various cognitive tasks might be excellent evidence to justify attributions of concepts to some creature, the attributions are not made true by the creature's possessing the abilities. This representationalist critique of concept pragmatism is, to be sure, controversial. Providing a full defense of representationalism isn't possible here. However, one needn't be convinced to be interested in the upshots of a representationalist approach. One need only leave open the possibility that concepts are symbols (Camp, 2009, p. 278n3; cp. Evans, 1982, pp. 100–101). If concepts are symbols, then it's an open question whether inferential or other cognitive abilities are constitutive of concept possession. At first glance, tokening a symbol need not presuppose an ability to transform the symbol in any particular way, and thus tokening a concept need not require the ability to use it in stimulus-independent thought. Furthermore, concept pragmatists nearly always grant that cognitive abilities don't constitute absolute possession conditions on concepts. In his original formulation of the Generality Constraint, Evans added in a footnote that there must be "a proviso about the categorial appropriateness of the predicates to the subjects" (1982, p. 101n17). Camp (2004) rejects Evans' proviso, but, following Peacocke (1992, pp. 42–43), she grants that "strange chemical reactions, psychological traumas, or other external factors" (Camp, 2009, pp. 278–279) may prevent recombination. These pathological cases are not taken to undermine "the conceptual abilities themselves" (Camp, 2009, p. 279). However, barriers to the cognitive use of concepts in inference or recombination need not arise pathologically. As Peacocke notes, such barriers can arise "at the level of hardware" (1992, p. 43). There's no obvious reason why such hardware-level factors might not happen to be built into the normal functioning of some minds—that is, they might be aspects of mental architecture. Mental architecture comprises, roughly, functional properties that are invariant across changes in representational content, such as distinctions between memory stores (e.g., working memory vs. long-term memory) (Pylyshyn, 1984, pp. 30–32). There could in principle be aspects of mental architecture that prevent concepts from being deployed outside certain limited contexts. One way to make sense of this possibility is to consider memory limitations. Imagine a thinker with extremely limited working-memory capacity. Suppose they possess the complex concept horse that is smaller than the largest Clydesdale on Earth but, every time they try to compose that concept into a thought like Seabiscuit's second-youngest offspring is a horse that…, memory resources fail and the structure crashes before it's fully formed. This thinker lacks the ability freely to recombine this concept, but they possess the concept nonetheless; the limitation lies not in the concept itself, but in working-memory capacity. Thus a species might evolve that grasps and stores concepts but for completely independent, non-pathological reasons lacks the ability to deploy them in certain ways—due to working-memory limitations or other background architectural factors. A similar possibility arises regarding stimulus-independence tout court.
A creature might possess a concept that it uses for perceptual identification. However, sustaining its deployment in the absence of relevant sensory input requires working-memory resources. It thus seems metaphysically possible that a creature might have a concept and the ability to deploy it in response to stimulation while also possessing a mental architecture that precludes deploying the concept independently of stimulation. One way to see the coherence of this possibility is to consider possible changes in mental architecture. Perhaps one day we'll be able to insert chips that enhance working-memory resources into brains. Suppose the following counterfactual is true: if we were to insert a chip into the brain of the creature just described and enhance their memory resources, they would be able to sustain the deployment of the concept independently of stimulation and freely recombine it with their other concepts. Perhaps the creature tokens representations that are apt to function as premises in modus ponens inferences but lacks the memory resources required to make those inferences. However, it might be true that they would gain the ability to make them were we to insert the memory-enhancing chip. A pragmatist might balk at invoking such fanciful scenarios. But the deeper truth they illustrate is that stimulus-independent use in inference and recombination requires more than the mere possession of a concept; it also requires the right background architectural setup. And if we are willing to grant "hardware-level" restrictions on stimulus-independent deployment, then there's no clear reason why we should deny that concept possession could survive such restrictions when they arise from the architecture rather than pathology. According to Fodor's Cartesianism, possessing dog requires only the ability to think about dogs as such. But the present line of reasoning suggests that even Fodor's view is too pragmatist.
Suppose (as Fodor surely would) that concepts are symbols and that their use in thought depends on background features of mental architecture. In that case, thinking about dogs as such requires that a symbol be retrieved from memory and deployed. Memory retrieval, however, is a psychological process that can fail. Failures of retrieval are perfectly ordinary and don't entail that the relevant information fails to be stored, as the head-slaps that follow the revelation of an answer to a pub trivia question may attest. It's therefore possible that some symbol might be stored and yet not be retrievable for independent reasons (e.g., mechanisms underwriting retrieval have malfunctioned). The concept would remain stored nonetheless. While this scenario involves malfunction, it is again conceivable that the same factor could arise from non-pathological aspects of mental architecture. Suppose some creature evolved innate symbols, but over time those symbols lost their adaptiveness in the environments of that creature's descendants. It's conceivable that the vestigial symbols remain innately stored in the minds of those descendants, but subsequent evolution has rendered them irretrievable as a matter of course. Thus if concepts are symbols, then having a concept need not presuppose any cognitive abilities—not even the ability to use the concept in thought. Instead, having a concept is simply storing a certain symbol in memory. This view—call it "possession-as-storage"—takes maximally seriously the idea that the mind is a computational system and that concepts are symbols stored and computed over in that system. Possession-as-storage may strike many readers as extreme. However, it is worth clarifying the relevant notion of cognitive ability. 
A natural way of interpreting 'ability' in the concepts literature (and in this paper thus far) is in terms of an ability that a creature can exercise at a time; the question whether a concept is possessed by that creature is (for pragmatists) transformed into a question about what the creature can do at that time. In that sense of ability, possession-as-storage entails that no cognitive abilities are constitutive of concept possession. One might more permissively attribute abilities to creatures who cannot exercise them in nearby possible worlds (e.g., because of architectural limitations). One might also attribute abilities not to creatures but to concepts themselves—Camp writes that external factors limiting recombinability fail to affect "the conceptual capacities themselves" (2009, p. 279; cp. Peacocke, 1992, p. 43). Combining these ideas, we might develop the following modified pragmatist view: possessing a concept requires that the creature possesses a state that has the ability to be used in stimulus-independent recombination and inference given the right circumstances, including background mental architecture. Even a Fodorian representationalist could accept this modified pragmatist view. However, a primary motivation for pragmatism is that it ties concept possession to verifiable cognitive tasks and thus furnishes us with diagnostic tests of conceptuality emanating from a "practically useful account that captures the core set of cognitive tasks that we expect concepts to perform" (Camp, 2009, p. 276). The modified pragmatist view weakens the link between concept possession and the actual exercise of cognitive abilities in a way that vitiates the initial motivation behind concept pragmatism. It also allows—importantly for present purposes—that creatures that cannot actually exercise stimulus-independent cognitive abilities may possess concepts nonetheless. What other evidence could license concept attribution beyond stimulus-independent cognition? 
For a representationalist, what matters is what type of symbol is instantiated in the mind rather than the cognitive abilities possessed by the creature. Some representations, such as icons, aren't concepts, since they lack the right sort of representational format. Thus I suggest that investigating the representational format of perception provides an independent means of answering these questions about conceptuality. In particular, we can investigate whether some perceptual representations have a predicate-argument structure that is usable for logical inference (given the right background architecture). Such representations might be conceptual even if the creatures that possess them lack paradigmatically conceptual cognitive abilities. For both Burge and Camp, the argument for stimulus-independence relies on a more fundamental aspect of concepts: compositionality. Concepts can compose into more complex structures, as when pet and fish combine in pet fish. In particular, concepts compose into truth-evaluable propositional structures, like this is a fish. What marks the simplest propositional structures is their predicate-argument structure. A picture of a fish might represent an object and even represent it as a fish, but it doesn't do so by means of a predicate-argument structure. It's important to distinguish predication as an aspect of content (a structural feature of a proposition) and as an aspect of format. One might argue that a picture expresses predication in that its content predicates a property of some individual, but predicate-argument structure is not explicit in the structure of the vehicle (i.e., its format). Minimally, predicate-argument structure requires that "some sort of functional relation among syntactic constituents maps onto some sort of logical or metaphysical relation among the semantic values of those constituents" (Camp, 2007, p. 157).
In a sentence like 'This is a fish', "the syntactic relation of function application mirrors a metaphysical relation of instantiation" (ibid.); the constituent 'This' corresponds to the individual, 'fish' corresponds to the property fish, and the syntactic relation between them functions to express the instantiation of fish by the individual. This sort of structure is a canonical example of predicate-argument structure (where 'fish' functions as predicate and 'This' as argument). In a picture of a fish, this structure is absent. There are not two separate constituents standing for an individual and for the property of being a fish. Instead, the same part of the picture that represents the individual also represents its various properties. In this sense, iconic representations are "holistic" (Green & Quilty-Dunn, 2017; Quilty-Dunn, 2019a). They're not digital, in Camp's sense of taking "a small number (typically, a singleton or pair) of discrete elements as inputs" (Camp, 2018, p. 25). Icons have a comparatively large number of primitives (e.g., pixels—cf. Davies, 2020), and their primitives encode multiple semantic values at once. For example, a part of a picture might encode values along multiple spatial axes as well as features instantiated at the corresponding location, such as values along color dimensions, shape and size dimensions, etc. A depicted individual is represented by means of parts (primitives or regions) of the icon that encode other information as well, including parts of the individual and/or their values along spatiotemporal and featural dimensions (Haugeland, 1998, p. 192; Kulvicki, 2006, p. 125; ms). I've argued elsewhere that perceptual object representations ("PORs"), the representations we use to perceptually detect and track objects, have a discursive/digital rather than iconic/analog format (Green & Quilty-Dunn, 2017; Quilty-Dunn, 2019b). I'll briefly describe these arguments now.
First, PORs comprise separate constituents for individuals and properties. Tracking via PORs involves an index-like constituent that picks out individual objects and continues to track them even when featural information changes (Zhou et al., 2010) or is lost altogether (Bahrami, 2003; Kibbe & Leslie, 2011; Scholl, Pylyshyn, & Franconeri, 1999). The best and simplest explanation of these tracking abilities posits discrete constituents for individuals that are non-holistically bound to featural information (Pylyshyn, 2003; Scholl & Leslie, 1999), which fits a discursive model better than an iconic one. Second, PORs comprise separate constituents for distinct feature dimensions. While icons represent (e.g.) the color and orientation of a triangle by means of the same parts of the icon, PORs can successfully encode both features but lose them independently of each other in visual short-term memory (Bays, Wu, & Husain, 2011; Dowd & Golomb, 2019; Fougnie & Alvarez, 2011; Fougnie, Cormiea, & Alvarez, 2013; Markov, Tiurina, & Utochkin, 2019; Wang, Cao, Theeuwes, Olivers, & Wang, 2017; Markov et al., ms). The separability of features in PORs suggests that distinct features are represented via distinct vehicles, implicating discursive format. Third, PORs comprise separate constituents for high-level vs. low-level features. Exp
