A theme I saw dancing amidst these essays was a constant worry that seemed to dominate the motivation, the cause, the reason for the writing. There are at least two flavors of the worry, one more metaphysical and the other epistemological, but they were deeply abstract in either case. Flavor 1 is a concern about the reality of the world ‘about which’ there are things like representations, goals, knowledge, etc. Flavor 2 itself has a few flavors: one is an implicit worry about the validity of cognitive science as a scientific pursuit (not letting it “dissolve into the fog of subjectivism”), the second is a concern about the underdetermination of theory by data, and the last is our week’s theme, ‘levels of analysis’. To be fair, I use the metaphor of flavors and not tastes because these are deeply intertwined and difficult to isolate or talk about independently; but, like these authors, I will do my best. I think they are best approached as one would deal with flavors layered on top of one another, starting with the last one in.
So on first pass it seems that for there to be levels of analysis we need to allow that analyses are ‘perspectival’. That is, viewing the same phenomena in different ways is the same as looking at them from another point of view: splitting them with a different set of measures and goals, generalizing using another type of abstraction, describing them with another vocabulary, etc. This is relevant because it establishes, as Pylyshyn emphasizes though not in these words, that data is measure-relative. That is, your theoretical tools (your vocabulary/ontology/representations…) are important decisions not only in determining the answer to your problem, but in determining what the problem is.
The result of this is that even if phenomena are real, the mind will only be able to understand them relative to what it can identify. Surely, part of why the color red was included in Newton’s color wheel while X-ray colored light is not is that eyes tend to have sensors that contain rhodopsin and none (to my knowledge) filled with barium platinocyanide (which Röntgen used in identifying X-rays). Similarly, when one approaches a set of phenomena from different levels of analysis one will discover different regularities (a point most insistently made by Pylyshyn).
But if there are all these different regularities to be discovered, then why one regularity instead of another? Why aren’t we all doing physics, if we are to assume that all existence grounds out in physics? Or why isn’t there a proliferation of “special sciences”, one for each author, devoted to its author’s favorite theories and data? Or to put it another way…what makes a level a good level? Or less strongly, what makes a level better than another? Or less strongly yet, how do we identify the right question to be asking and the right answer to be given for that question? I think that the last is the right question in this case, but is there an answer?
underdetermination and validity
If we need to choose a level at which to approach a problem, that presumes that there is some set of questions and some set of answers that we are in search of. The problem is, if you assume a hierarchical or even an ordered structure of these levels (see obligatory xkcd link), you are going to want physics to underpin your chemistry, chemistry to underpin your biology, and so on…or so the matryoshka doll crumbles. This raises the question of why we even need these “special” sciences when we all agree that it’ll ground out in physics eventually. Isn’t anything else just a product of the dissolution of science into the hazy stratus of confusion that Marr forewarned, but which seems inevitable given the Bay Area climate?
To the legitimacy question, cognitive scientists offered up computationalism which at the very least (judging by the piece of glass I’m typing this on) has some predictive and interventional efficacy. That is, computationalism generates generalizable claims about existence (a topic we’ll return to shortly). This was an excellent point agreed on by all, but if there were no more issues there would be little need for these articles which returns us to the problem of underdetermination.
That is, the beauty and power of computation lies in its ability to be instantiated by almost any physical system with the right rules in place (xkcd #2).
That said, the horror and danger of computation lies in its ability to be instantiated by almost any physical system even with the right rules in place.
Because the articles largely focus on this aspect, I’m less concerned with elaborating it here. But the underdetermination problem is a fascinating one in a number of respects that deserve more emphasis.
First is Marr’s beginning with older Gestalt psychology and the demonstration of effects that seem to rise beyond being merely a sum of their parts. (I actually don’t see why this would be surprising were it not for the assumptions and emphases of early work on experimental methodology, which focused on linear effects (possibly under some transformation) on continuous variables. Without those assumptions, given that in everyday life we see objects composed of a plethora if not an infinitude of interacting pieces, it would seem that additive effects deserve the ‘merely’ qualifier, not interactive effects…but I digress.) This suggests one flavor of underdetermination: the pieces could be combined in any number of ways, but when combined in these particular ways they seem to be more than when combined in a number of other ways. That is, there seem to be levels of effects in the sense that interactions amidst lower-level parts produce phenomena at higher levels. But the higher levels are just a subset of the ways in which the lower levels could conceivably interact, and so we need a theory of why we take the perspective that these objects interact as they do in terms of our cognitive experience of them. The solution that multiple authors seem to converge on is that this honing comes from something like our goals/rationality/“why”s.
A second of the bunch is Anderson’s critique of the proliferation of mechanistic theories to explain cognitive phenomena, which is underdetermination by another name (and it may smell sweeter, depending on how you describe the mechanism that underlies your ability to smell words). It is a similar point, so not to belabour it too much, but even if you agree on the phenomena themselves and their appropriateness — in terms of agreement among scientists that they are the ‘right’ things about which to be asking questions and getting answers — you still will run into underdetermination problems. That is, it goes beyond merely non-additive effects of known values: once one is attempting to develop theories about hidden entities and processes, there is a temptation to postulate a new something (entity or process or both) whenever some anomaly arises. Mechanism becomes the asteroid belt that absorbs any incoming anomalies into its orbit, providing a protective hull around the inner theory system (to steal Lakatos’ analogy).
Anderson proposes that being precise about what it is that the core of a theory of cognition is trying to accomplish will be enough to evade these swirls of mechanisms. Rationality can act as a compass, though if the principle of rationality is no stronger than Newell’s, we may be little better off, since it gives no way to adjudicate amidst the innumerable simultaneous goals that anything with cognition likely needs to pursue.
But of course, that is merely a convenient theoretical tool. We can posit that instantiation doesn’t matter, and that we need only concern ourselves with the higher-order behavior (albeit on many measures) of agreed-upon cognitive agents (humans and computers) and the constraints on the computations that those agents might face. However, even if we can validate cognitive science as an epistemic pursuit, and deal with underdetermination by glossing over mechanistic detail, we still haven’t addressed the problem that seems (to some (read: not me, but I don’t want to explain because it’s late)) to be the real problem. That is, whether psychological entities are real.
really out of our minds
I’ve already written too much, and the deadline draws nigh, so I will not spend much time; but to ground my earlier allusions more specifically, it seems that there is almost a physics envy going on amidst some of these theorists. Or perhaps a reaction to a perceived imperialism that cognitivists still feel from the attack on hidden entities by logical positivists and behaviorists. They (especially Pylyshyn) seem driven to demonstrate that things like goals and beliefs are at least “psychologically real”. That is, they want our minds and all that’s within them to exist in the real world, not merely in our minds.
And with that, I’m going to let worries about leaving without completing my thoughts on this issue fall out of my mind.