In the Apple Store yesterday, in the corner where they keep electronic learning toys and robots, I read this on a box: “Almost human”, “Cozmo doesn’t just learn - Cozmo plots and plans”, “Cozmo doesn’t just move - Cozmo gets curious and explores”. Cozmo is a $180 robot toy with image processing capabilities, expressive LED eyes, and a set of anthropomorphic behaviors. Ascribing mental states to it seems like an exaggeration - I don’t think Cozmo gets curious, just as I don’t think my phone gets hungry as its battery runs down - but what exactly does a robot or computer have to do before we can ascribe mental states to it without exaggerating? This question will only become more pressing as AI and robotics continue to improve.
Contemporary philosophy and neuroscience offer two contradictory answers. “Functionalists” like Daniel Dennett argue that we are justified in ascribing mental states to a system whenever doing so helps us understand and predict its behavior. Proponents of more brain-based views, such as integrated information theory (IIT), argue that mental states, or at least their subjective aspect, require a degree of information integration that we currently observe only in biological brains.
IIT has the distinct advantage of recognizing the possibility of mental states in immobile systems such as unresponsive patients and simulated brains - Dennett’s behavior-based account does not. But IIT is prone to zombies: if IIT is correct, it should be possible to build robots that mimic the behavior of animals or human beings yet lack subjective states, simply because they use integrated circuits rather than neurons to process information. This could get confusing, or downright ugly: how should we treat a robot that shows every sign of need, trust, pain, or love, but (according to IIT) has no subjective experience whatsoever? Some might feel entirely justified in treating such robots terribly, Westworld-style.
Dear philosopher friends! Am I reading this right? Does Dennett accept that his intentional stance fails spectacularly when it comes to unresponsive patients? Would proponents of IIT agree that their framework may become the legal defense of the sexbot industry? What does a computer or robot have to do to deserve to be treated like a mind? Do we need to be less binary about whether a system is in a particular mental state? Perhaps my phone can get hungry after all, in its own way - or does that cheapen the concept of hunger? What do you think?