Abstract: When studying cognition and consciousness, there are three possible strategies: one can introspect in an armchair, one can observe natural cognition in the wild, or one can synthesise artificial cognition in the lab. Some strands of Artificial Life pursue the third strategy, and Evolutionary Robotics opens up a particular new approach. Whereas most attempts at building AI systems rely heavily on designs produced through introspection -- and therefore reflect the fads and intellectual biases of the moment -- the evolutionary approach can start from the assumption that we humans are likely to be hopeless at designing cognitive systems anything like ourselves. After all, one would not expect a nematode worm with just 300 neurons to have much insight into its own cognitive apparatus.
The evolutionary method does not need the designer, the Watchmaker with insight. But it does need clear operational tests for what will count as cognition -- goal-seeking, learning, memory, awareness (in various objective senses of that word), communicating. We can evolve systems with many such cognitive abilities; so far to a rather limited extent with proofs of concept, but with no reason to expect any barriers in principle to achieving any behaviours that can be operationally and objectively defined. Of course, there are no operational tests to distinguish a so-called zombie from its human counterpart that has feelings, so this seems to leave unresolved the question of whether an evolved robot could indeed have subjective feelings.
Harnad (2011) laid out one version of this issue in a paper entitled "Doing, Feeling, Meaning and Explaining", suggesting that the Doing (which can be verified operationally) is the Easy part, whereas the Feeling, and probably by extension the Meaning, are the ineffable and Hard parts. In contrast, I shall be focussing on the Explaining, and pointing out that different kinds of explanations are needed for different jobs. In particular, the concept of awareness, or consciousness, has a whole range of different meanings that need different kinds of explanation. Many of these meanings can indeed be operationally and objectively defined, and hence we should be able to build or evolve robots with these properties. But one crucial sense is subjective rather than objective, and cannot be treated in similar fashion. This is a linguistic issue to be dissolved rather than a technical problem to be solved.