Really out of their minds

A theme I saw dancing amidst these essays was a constant worry that seemed to dominate the motivation, the cause, the reason for the writing. There are at least two flavors of the worry, one more metaphysical, the other epistemological, but they were deeply abstract in either case. Flavor 1 is a concern about the reality of the world ‘about which’ there are things like representations, goals, knowledge, etc. Flavor 2 itself has a few flavors: one is an implicit worry about the validity of cognitive science as a scientific pursuit (not letting it “dissolve into the fog of subjectivism”), the second is a concern about the underdetermination of theory by data, and the last is our week’s theme, ‘levels of analysis’. To be fair, I use the metaphor of flavors and not tastes because these are deeply intertwined and difficult to isolate or talk about independently; but, like these authors, I will do my best. I think they are best approached as one would deal with flavors layered on top of one another, starting with the last one in.

levels

So on first pass it seems that for there to be levels of analysis we need to allow that analyses are ‘perspectival’. That is, viewing the same phenomena in different ways is the same as looking at them from another point of view: splitting them with a different set of measures and goals, generalizing using another type of abstraction, describing them with another vocabulary, etc. This is relevant because it establishes, as Pylyshyn emphasizes though not in these words, that data are measure-relative. That is, your theoretical tools (your vocabulary/ontology/representations…) are important decisions not only in determining the answer to your problem, but in determining what the problem is.

The result of this is that even if phenomena are real, the mind will only be able to understand them relative to what it can identify. Surely, part of why the color red was included in Newton’s color wheel while an ‘X-ray color’ was not is that eyes tend to have sensors that contain rhodopsin and none (to my knowledge) filled with barium platinocyanide (which Röntgen used in identifying X-rays). Similarly, when one approaches a set of phenomena from different levels of analysis, one will discover different regularities (a point most insistently made by Pylyshyn).

But if there are all these different regularities to be discovered, then why one regularity instead of another? Why aren’t we all doing physics, if we are to assume that all existence grounds out in physics? Or why isn’t there a proliferation of “special sciences”, one for each author, devoted to its author’s favorite theories and data? Or to put it another way… what makes a level a good level? Or less strongly, what makes a level better than another? Or less strongly yet, how do we identify the right question to be asking and the right answer to be given for that question? I think the last is the right question in this case, but is there an answer?

underdetermination and validity

If we need to choose a level at which to approach a problem, that presumes that there is some set of questions and some set of answers that we are in search of. The problem is, if you assume a hierarchical or even an ordered structure of these levels (see obligatory xkcd link), you are going to want physics to underpin your chemistry, chemistry to underpin your biology, and so on… or so the matryoshka doll crumbles. This raises the question of why we even need these “special” sciences when we all agree that it’ll ground out in physics eventually. Isn’t anything else just a product of the dissolution of science into the hazy stratus of confusion that Marr forewarned of, but that seems inevitable given the Bay Area climate?

To the legitimacy question, cognitive scientists offered up computationalism, which at the very least (judging by the piece of glass I’m typing this on) has some predictive and interventional efficacy. That is, computationalism generates generalizable claims about existence (a topic we’ll return to shortly). This was an excellent point agreed on by all, but if there were no more issues there would be little need for these articles, which returns us to the problem of underdetermination.

That is, the beauty and power of computation lies in its ability to be instantiated by almost any physical system with the right rules in place (xkcd #2).
That said, the horror and danger of computation lies in its ability to be instantiated by almost any physical system even with the right rules in place.
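This multiple-realizability point can be made concrete with a toy sketch of my own (not anything from the readings): the same abstract computation, here XOR, realized by two entirely different ‘substrates’, a lookup table and modular arithmetic, which are indistinguishable at the level of input/output behavior.

```python
# Two hypothetical realizations of one abstract computation (XOR).
# Any substrate that respects the input/output mapping counts as
# implementing the computation -- hence the underdetermination.

def xor_lookup(a: int, b: int) -> int:
    """Realization 1: a bare lookup table (pure structure)."""
    table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
    return table[(a, b)]

def xor_arith(a: int, b: int) -> int:
    """Realization 2: modular arithmetic (a different 'mechanism')."""
    return (a + b) % 2

# At the computational level the two are the same function:
for a in (0, 1):
    for b in (0, 1):
        assert xor_lookup(a, b) == xor_arith(a, b)
```

No observation of input/output behavior alone can tell you which implementation is inside the box, which is exactly the worry about inferring mechanism from behavior.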

Because the articles largely focus on this aspect, I’m less concerned with elaborating it here. But the underdetermination problem is a fascinating one in a number of respects that deserve some more emphasis.

First is Marr’s beginning with older Gestalt psychology and the demonstration of effects that seem to rise beyond being merely a sum of their parts. (I actually don’t see why this would be surprising, were it not for the assumptions and emphases of early work on experimental methodology, which focused on linear effects (possibly under some transformation) on continuous variables. Without those assumptions, given that in everyday life we see objects composed of a plethora if not an infinitude of interacting pieces, it would seem that additive effects deserve the ‘merely’ qualifier, not interactive effects… but I digress.) This suggests one flavor of underdetermination: the pieces could be combined in any number of ways, but when combined in these particular ways they seem to be more than when combined in a number of other ways. That is, there seem to be levels of effects, in the sense that interactions amidst lower-level parts produce phenomena at higher levels. But the higher levels are just a subset of the ways in which the lower levels could conceivably interact, and so we need a theory of why we take the perspective that these objects interact as they do in terms of our cognitive experience of them. The solution that multiple authors seem to converge on is that this honing comes from something like our goals/rationality/“why”s.
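The additive-versus-interactive distinction can be sketched numerically (a toy model of mine, not Marr’s or the Gestaltists’): in an additive model the effect of one part is the same regardless of the others, while an interaction term makes the whole depend on the parts jointly.

```python
# Hypothetical toy models of additive vs. interactive effects.

def additive(a: float, b: float) -> float:
    # The whole is exactly the sum of separate contributions.
    return 2.0 * a + 3.0 * b

def interactive(a: float, b: float) -> float:
    # The a*b term means the effect of a depends on b:
    # the whole exceeds any sum of separate contributions.
    return 2.0 * a + 3.0 * b + 4.0 * a * b

# Raising a from 0 to 1 has a context-independent effect only
# in the additive model:
print(additive(1, 1) - additive(0, 1))        # 2.0, same at any b
print(interactive(1, 1) - interactive(0, 1))  # 6.0, but 2.0 when b = 0
```

The ‘more than the sum of its parts’ phenomenon is just the second kind of model viewed through methodology built for the first.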

A second of the bunch is Anderson’s critique of the proliferation of mechanistic theories to explain cognitive phenomena, which is underdetermination by another name (and it may smell sweeter, depending on how you describe the mechanism that underlies your ability to smell words). It is a similar point, so not to belabour it too much, but even if you agree on the phenomena themselves and their appropriateness (in the sense of agreement among scientists that they are the ‘right’ things about which to be asking questions and getting answers), you still will run into underdetermination problems. That is, the problem goes beyond merely non-additive effects of known values: once one is attempting to develop theories about hidden entities and processes, there is a temptation to postulate a new something (event or process or both) whenever some anomaly arises. Mechanism becomes the asteroid belt that absorbs any incoming anomalies into its orbit, providing a protective hull around the inner theory system (to steal Lakatos’ analogy).

Anderson proposes that being precise about what it is that the core of a theory of cognition is trying to accomplish will be enough to evade these swirls of mechanisms. Rationality can act as a compass, though if the principle of rationality is not stronger than Newell’s, we may be little better off, since it gives no way to adjudicate amidst the innumerable simultaneous goals that anything with cognition likely needs to solve.

But of course, that is merely a convenient theoretical tool. We can posit that instantiation doesn’t matter, and that we need only concern ourselves with the higher-order behavior (albeit on many measures) of agreed-upon cognitive agents (humans and computers) and the constraints on the computations that those agents might face. However, even if we can validate cognitive science as an epistemic pursuit, and deal with underdetermination by glossing over mechanistic detail, we still haven’t addressed the problem that seems (to some (read: not me, but I don’t want to explain because it’s late)) to be the real problem. That is, whether psychological entities are real.

really out of our minds

I’ve already written too much, and the deadline draws nigh, so I will not spend much time; but to ground my earlier allusions more specifically, it seems that there is almost a physics envy going on amidst some of these theorists. Or perhaps a reaction to a perceived imperialism that cognitivists still feel from the attack on hidden entities by logical positivism and behaviorists. They (especially Pylyshyn) seem driven to demonstrate that things like goals and beliefs are at least “psychologically real”. That is, they want our minds and all that’s within them to exist in the real world, not merely in our minds.

And with that, I’m going to let worries about leaving without completing my thoughts on this issue fall out of my mind.

Iff or if

If the mind is like a computer how is a computer not like a mind?

I think it is fascinating that the advent of cognitive science as a discipline did not stem from many psychologists banding together to throw off the constraints of behaviorism. Psychologists, even those who eventually became part of the cognitive revolution (e.g., Miller), were generally trained in and identified themselves with behaviorism. The burgeoning cognitive sciences pulled their ranks from computer science and information theory: Newell and Simon were making positive claims by attempting to build programs that could perform complex cognitive tasks, Miller was making positive claims based on idealizations of the mind's informational capacity in terms of Shannon's (discrete) information theory, while Chomsky was making negative arguments that the analysis of language could not rest on the independence assumptions on which Shannon premised his own analysis of language. Even anthropology (Conklin, Goodenough, Lounsbury) took on a formal bent.
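The flavor of Miller's idealization can be sketched in a few lines (an illustration of Shannon's entropy measure, not a reconstruction of Miller's actual analysis): the entropy of a discrete source, in bits, bounds how much information a single judgment among alternatives can transmit.

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2(p)) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform choice among 8 equally likely alternatives carries 3 bits;
# Miller framed limits on absolute judgment in exactly such units.
print(entropy_bits([1/8] * 8))  # 3.0

# A certain outcome carries no information:
print(entropy_bits([1.0]))      # 0.0
```

Framing capacity in bits rather than raw stimulus counts is what let Miller compare performance across wildly different judgment tasks.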

These were scientists arguing not just for the study of, and reality of, mental representations, but arguing that science can be advanced by these postulations. What could be more antithetical to the behaviorist doctrine of the black box? That dogma had stemmed out of a basic dissatisfaction with the phenomenalism that Gestalt psychology had taken on, as well as a rejection of the approaches epitomized by Freud, Jung, and other theorists whose theories were seen to have little more empirical content than the theory that Poseidon raised storms in order to vent his rage. It was the fear of uninterpretability that won behaviorism the minds of the psychologists who worked so avidly to deny that those minds existed.

Perhaps, then, it is not surprising that the resurrection of mental representations and processes was accompanied by a rigor and formal complexity that surpassed anything dreamt of by the most imaginative behaviorists (who of course had neither dreams nor imagination). If behaviorism was attempting to inject rigor back into psychology, the promise offered by the new cognitive sciences may have been more like dialysis, which oddly enough also emulates complex internal processes that defy direct observation in just the ways that motivated the Skinnerians to eschew postulating anything between input and output. In addition to the immediate empirical successes embodied in the MIT symposium on information theory in 1956, the emphasis on a formal way to represent the internal workings of a computer (and the programs instantiated therein) surely helped lend credence to these early computational theories of the mind.

But then, why stop there? That is, if we are willing to use computers as a successful analogy for describing processes and representations that we suppose really exist in the mind, then why stop at description and analogy? What grounds do we have for rejecting the stronger identification claim? That is, if formal processes like those suggested by the early cognitive scientists give any credence to the possibility of mental entities and events in human minds/brains, then, short of a prejudice for meat-based computation, don't these arguments give equal support to the possibility of mental entities and events in electronic minds/brains?

Rather than dismissing the argument immediately, I merely ask that you provide some data. That is, you would say that your computer allows you to communicate with people from afar, possibly even by looking at them and letting them look at you (e.g., via Skype). If we allow the validity of arguments like those Newell and Simon put forth, what are the data that would allow you to rule out the possibility that the device you are staring at is staring back at you?