If the mind is like a computer how is a computer not like a mind?
I think it is fascinating that the advent of cognitive science as a discipline did not stem from psychologists banding together to throw off the constraints of behaviorism. Psychologists, even those who eventually became part of the cognitive revolution (e.g., Miller), were generally trained in, and identified themselves with, behaviorism. The burgeoning cognitive sciences pulled their ranks from computer science and information theory: Newell and Simon were making positive claims by attempting to build programs that could perform complex cognitive tasks, Miller was making positive claims based on idealizations of the mind's informational capacity in terms of Shannon's (discrete) information theory, while Chomsky was offering negative arguments that language could not be analyzed under the independence assumptions on which Shannon had premised his own analysis of language. Even anthropology (Conklin, Goodenough, Lounsbury) took on a formal bent.
These were scientists arguing not just for the study of, and reality of, mental representations, but arguing that science could be advanced by such postulations. What could be more antithetical to the behaviorist doctrine of the black box? That dogma had stemmed from a basic dissatisfaction with the phenomenalism that Gestalt psychology had taken on, as well as a rejection of the approaches epitomized by Freud, Jung, and other theorists whose theories were seen to have little more empirical content than the theory that Poseidon raised storms in order to vent his rage. It was the fear of uninterpretability that won behaviorism the minds of the psychologists who worked so avidly to deny that those minds existed.
Perhaps, then, it is not surprising that the resurrection of mental representations and processes was accompanied by a rigor and formal complexity that surpassed anything dreamt of by the most imaginative behaviorists (who of course had neither dreams nor imagination). If behaviorism was attempting to inject rigor back into psychology, the promise offered by the new cognitive sciences may have been more like dialysis, which, oddly enough, also emulates complex computational processes that defy direct observation in just the ways that motivated the Skinnerians to eschew postulating anything between input and output. In addition to the immediate empirical successes embodied in the MIT symposium on information theory in 1956, the emphasis on a formal way to represent the internal workings of a computer (and the programs instantiated therein) surely helped lend credibility to these early computational theories of the mind.
But then, why stop there? That is, if we are willing to use computers as a successful analogy for describing processes and representations that we suppose really exist in the mind, then why stop at description and analogy? What grounds do we have for rejecting the stronger identification claim? That is, if formal arguments like those suggested by the early cognitive scientists give any credence to the possibility of mental entities and events in human minds/brains, then, barring a prejudice for meat-based computation, don't these arguments give equal support to the possibility of mental entities and events in electronic minds/brains?
Rather than dismissing the argument immediately, I merely ask that you provide some data. That is, you would say that your computer allows you to communicate with people from afar, possibly even by looking at them and letting them look at you (e.g., via Skype). If we allow the validity of arguments like those Newell and Simon put forth, what data would allow you to rule out the possibility that the device you are staring at is staring back at you?