Monday, October 26, 2015

Consciousness and The Interface Theory of Perception

Dan Luba passed along the video below. The speaker is Donald Hoffman, and he is discussing what he calls the Interface Theory of Perception. Here are my reactions to it.


I think Hoffman makes some fundamental errors when he discusses evolution and cognition. He says that evolution didn't shape us to perceive “the truth” but rather to perceive only what will help us adapt.

His first error is to treat this as some major revelation. It's not. People like Richard Dawkins and Robert Trivers have written at length about how (even though it seems counterintuitive at first) organisms can evolve to deceive themselves or otherwise have less than perfect knowledge of the external world. The most obvious example is kin recognition. It is a much more harmful error (in the sense of reproductive success) for a mother to fail to recognize her own child than to make the reverse error. Hence most females are tuned to recognize an organism as their child even when there seems to be strong evidence against that fact. Birds like the cuckoo take advantage of this bias by laying their eggs in the nests of other birds. Even though the cuckoo chick often looks nothing at all like the host's actual chicks, the mother bird will usually adopt the cuckoo as her own, even when the cuckoo is much larger than its adopted siblings and takes far more than a fair share of the food.

There are many other examples. Trivers has an excellent book on the subject, The Folly of Fools: The Logic of Human Self-Deception.

Hoffman’s second error is failing to see that while organisms indeed didn't evolve to have optimal information about the external world, the kinds of errors they make are mostly understandable and predictable. He speaks as if the rational conclusion from the fact that we don't have optimal knowledge is that all knowledge is suspect and should be discarded. That is clearly false. For the majority of possible traits, better information means better adaptation. Predators evolve better sight. Prey evolve better hearing.

Indeed, the optical illusions Hoffman starts his talk with are excellent examples showing that humans can understand and correct for the errors evolution has saddled us with. Theories such as evo-devo, as well as the standard Darwinian model of adaptation, give us good models to explain and predict where imperfect knowledge is likely to occur, due either to constraints on possible designs (the vertebrate eye example) or to the adaptive advantages of imperfect knowledge (the cuckoo example).

The proper response to these issues is not to assume that all existing information is wrong, but rather to keep trying to understand why and where errors may occur in our perception and cognition.

What I found even more puzzling was that after saying we have to throw out all of existing science and concepts such as causality, Hoffman then proceeds to talk about causality and things like Markov processes in describing his new model. If causality is totally invalid, then it's as invalid in his new model as it is in existing models. And if the external world and traditional math and physics are all an illusion, then so are the science and math that gave us Markov models and quantum physics, since they assume there is an outside world.
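For readers unfamiliar with the term: a Markov process is one in which the probability of the next state depends only on the current state, not on the history that led there. Here is a minimal toy sketch of my own (an invented two-state weather chain with made-up probabilities; it has nothing to do with Hoffman's specific model):

```python
import random

# A Markov chain: the next state's distribution depends only on the
# current state. Transition probabilities are illustrative only.
transition = {
    "sunny": [("sunny", 0.9), ("rainy", 0.1)],
    "rainy": [("sunny", 0.5), ("rainy", 0.5)],
}

def next_state(state, rng):
    states, probs = zip(*transition[state])
    return rng.choices(states, weights=probs, k=1)[0]

rng = random.Random(0)   # fixed seed for reproducibility
state = "sunny"
path = [state]
for _ in range(5):
    state = next_state(state, rng)
    path.append(state)

print(path)
```

The point of the abstraction is that nothing about the chain cares what the states "are"; any system with memoryless state transitions fits the same formalism.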

I will say that I think this topic is extremely interesting. For example, I think we can make a good case that the basic foundations of math, as well as concepts such as causality and morality, are innate cognitive mechanisms. This leaves open the question: could we even know if there are alternative ways of conceiving the world? My suspicion is that the reason these mechanisms are innate is that they correspond to some universal truths about how the universe is organized and can be understood. It is a completely unjustified leap to go from the fact that there are minor and understandable biases in our faculties of understanding to the conclusion that we should completely discount them.

Friday, October 2, 2015

What's it like to be a Computer?

I recently audited a Philosophy of Mind seminar led by John Searle at UC Berkeley. We read Thomas Nagel's What Is It Like to Be a Bat?, and it got me thinking about computer science. Nagel made me realize that we computer scientists are missing out on an essential aspect of what it means to be a computer. We know that most computers get input from the outside world through keyboards, cameras, and microphones. We know that they represent that world via objects, databases, logic, and ultimately collections of bits. Clearly this is far different from our human methods for perceiving and representing the world. As Nagel says about bats, so we must say about computers:
there is no reason to suppose that it is subjectively like anything we can experience or imagine. This appears to create difficulties for the notion of what it is like to be a [computer]. We must consider whether any method will permit us to extrapolate to the inner life of the [computer] from our own case, and if not what alternative methods there may be for understanding the notion.
As is probably obvious, I don't really think we need to do anything more to understand what it's like to be a computer. My point is that Nagel's argument for why we need to wonder what it is like to be a bat seems as insubstantial as my parallel argument for the computer.

I think what is going on here is an example of what Chomsky describes [Chomsky 1996, 2008] as trivial questions, such as "do submarines swim?" In English, submarines don't swim; in Japanese, they do. But the question is not considered a conundrum for marine biologists; it is simply a matter of language convention. So in English (at least so far) few people wonder "what it is like" to be a computer. But we do wonder what it is like to be a bat. It is common in literature for people to turn into bats and frogs, and we have a common-sense idea that identity is not necessarily tied to a human brain. But common sense and intuition are not science; at best they are a starting point for science. So that should be how we evaluate Nagel's question: are there any actual scientific issues he is getting at?

One of his primary criticisms is that consciousness can't be studied by a "materialist" or "physicalist" approach. I agree that a strictly materialist approach to studying consciousness won't work, though not for the reasons Nagel gives. As Chomsky points out [Chomsky 2012], the mind-body distinction ceased to make sense when Newton destroyed the mechanistic worldview on which it was based. This is even more true in the modern world, where the fundamental building blocks of "matter" are not sub-microscopic particles but wave functions.

Or consider fields such as computer science or computational linguistics. The concepts we deal with are grammars, languages, transformations, logic, state machines, Turing machines, ontologies, interfaces, etc. These aren't material except in the mundane sense that they can describe things and processes in the real world. They aren't materialistic concepts about electrical currents on silicon; indeed, most of these concepts can be implemented in highly diverse ways. A state machine can describe a software program or the call-response language of various mammals [Hauser 2003]. Several years ago I saw a fascinating paper presented by researchers at Stanford [Myers 2012] who showed that they could use DNA to store information exactly as one would store it on a computer. They demonstrated this by showing how the PDF of their own paper was stored and retrieved via DNA in their lab. These examples show that Nagel's view of materialism is outdated and not relevant to what many people who study computation and cognition are doing. The modern sciences of cognition are "materialistic" only in the most trivial sense.
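To make the implementation-independence point concrete, here is a minimal sketch of my own (the transition tables are invented illustrations, not taken from Hauser or any other cited work): one generic state-machine class describes both a toy software handshake and a stylized call-response exchange.

```python
# A minimal finite-state machine. The same abstraction can model a
# software process or a stylized animal call-response exchange; the
# formalism does not care how its states are physically realized.
class StateMachine:
    def __init__(self, start, transitions):
        self.state = start
        self.transitions = transitions  # {(state, input): next_state}

    def step(self, symbol):
        self.state = self.transitions[(self.state, symbol)]
        return self.state

# Instance 1: a toy network handshake (software).
handshake = StateMachine("closed", {
    ("closed", "syn"): "syn_received",
    ("syn_received", "ack"): "established",
})

# Instance 2: a toy call-response exchange (animal communication).
duet = StateMachine("listening", {
    ("listening", "call"): "calling",
    ("calling", "response"): "listening",
})

for symbol in ("syn", "ack"):
    handshake.step(symbol)
print(handshake.state)  # established

for symbol in ("call", "response"):
    duet.step(symbol)
print(duet.state)  # listening
```

The class itself is indifferent to whether the "inputs" are network packets or vocalizations, which is precisely the sense in which such concepts are not materialistic.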

Now let us consider Nagel's emphasis on "reduction". How can we possibly begin to think about reducing a scientific theory of mind to biological concepts when we don't yet have a mature scientific theory of mind? As Chomsky points out [Chomsky 2002], we can't even map the neural correlates of consciousness for animals, such as bees, whose behavior is several orders of magnitude less complex than ours. Why should we tell scientists working on the far harder problem of human cognition that if they can't perform such a reduction, their work is not worth doing?

This brings us to Nagel's general viewpoint on science and philosophy. He is, in essence, a science denier: if science leads to a conclusion that is uncomfortable, he prefers to reject the science. For example, in his book Mind and Cosmos [Nagel 2012], referring to materialistic and evolutionary theories, he says on pages 26-27: "but the explanations they propose are not re-assuring enough". A few pages later, on page 29, he says: “Everything we believe, even the most far flung cosmological theories has to be based ultimately on common sense and on what is plainly undeniable”.

The goal of science is not to reassure us or to validate our common-sense intuitions. Indeed, the history of science shows that some of the most important discoveries were resisted because they challenged the prevailing worldview and forced us to re-evaluate the place of humans in the universe. People still resist Darwin because they find it offensive to think that humans evolved from earlier primates. The "far flung cosmological" theory of quantum entanglement undeniably violates our common-sense notions of causality.

Based on the history of science, I think it would be somewhat surprising if a mature scientific theory of mind, when we ultimately have one, didn't make people somewhat uncomfortable by forcing us to rethink common-sense notions of consciousness such as free will.

Finally, I wish to close with a quote from a rather unrelated text. I'm also auditing a quite different class, on the philosophy of mathematics, and for that class today I was reading Frege's Foundations of Arithmetic. I hope this doesn't seem overly harsh; I have great regard for Nagel, who is clearly a very influential philosopher. But as I was reading the introduction to Frege, I couldn't help but think of Nagel in the following passage:
If Frege goes too far... he is certainly on the side of the angels when he espouses as a model for philosophy the defense of objective scientific truth in matters of conceptual clarification. He is surely right to oppose the supine subjectivism that seems to think we can say whatever we want merely by articulating unargued opinions in the course of creating a literary creative writing exercise. That is not philosophy for Frege... [Jacquette 2007]
Amen brother.

Bibliography

Chomsky, Noam (1996) Language and Thought: Some Reflections on Venerable Themes: Excerpted from Powers and Prospects.

Chomsky, Noam (2002) On Nature and Language. p. 56.

Chomsky, Noam (2008) Chomsky and His Critics. p. 279.

Chomsky, Noam (2012) The machine, the ghost, and the limits of understanding: Newton's contribution to the study of Mind. Lecture at the University of Oslo.

Hauser, Marc and Mark Konishi (2003) The Design of Animal Communication.

Jacquette, Dale (2007). Introduction and Critical Commentary to Foundations of Arithmetic by Gottlob Frege.

Myers, Andrew (2012) Totally RAD: Bioengineers create rewritable digital data storage in DNA. Stanford press release. Note: this is not the research I saw presented which was over ten years ago and unfortunately I can't recall that specific paper but the concept here is the same.

Nagel, Thomas (2012) Mind and Cosmos.