**What is the distinction between intelligence and consciousness? Is consciousness essential for intelligence?**

Typically in the cognitive sciences, we have a functionalist view of intelligence. In some sense, by definition, an agent behaves intelligently if it acts with certain functional properties. For example, if it has goals, it acts to accomplish these goals. If you define intelligence that way, then it is utterly distinct from consciousness, at least on the face of it. You could have intelligent, unconscious machines that act to achieve goals. That is the standard way of viewing intelligence - as a purely functional notion. Some of my colleagues think that intelligence necessarily involves consciousness, but this is an intuitive claim without rigorous research to back it up. As a scientist, until I have a theory that is mathematically precise, I don't know how to test such a claim. So it's not clear to me what, precisely, the non-functional view of conscious intelligence might be.

**When you think about some of the agents used by current AI algorithms to model the world, do you think that consciousness can be developed from unconscious agents?**

Those researchers have a functional view of intelligence. They think that consciousness is not fundamental. When they talk about intelligence, it's about agents without consciousness interacting and developing a kind of swarm intelligence. These agents can develop higher memory and intelligent behavior. Given these premises, no one has proposed a scientific theory that explains how consciousness emerges from unconscious agents.
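The purely functional definition of intelligence discussed above - an agent counts as intelligent if it acts so as to accomplish its goals - can be made concrete with a toy sketch. The example below is purely illustrative (all names such as `act` and `run_agent` are hypothetical, not from the interview): a trivial program that reliably reaches its goal through goal-directed action, with no inner experience involved at any point.

```python
# Toy illustration of the functional view of intelligence:
# an "agent" that acts to accomplish a goal, with no consciousness anywhere.

def act(position: int, goal: int) -> int:
    """Take the single step that reduces the distance to the goal."""
    if position < goal:
        return position + 1
    if position > goal:
        return position - 1
    return position  # goal already reached

def run_agent(start: int, goal: int, max_steps: int = 100) -> int:
    """Repeatedly act until the goal is reached or the step budget runs out."""
    position = start
    for _ in range(max_steps):
        if position == goal:
            break
        position = act(position, goal)
    return position

print(run_agent(0, 7))  # the agent reaches its goal: prints 7
```

On the functional definition, nothing more than this goal-accomplishing behavior is required to call the system "intelligent" - which is exactly why, on that definition, intelligence is distinct from consciousness on the face of it.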