19 November 2017

What's your robot into?

In the Apple store yesterday, in the corner where they keep electronic learning toys and robots, I read this on a box: “Almost human”, “Cozmo doesn’t just learn - Cozmo plots and plans”, “Cozmo doesn’t just move - Cozmo gets curious and explores”. Cozmo is a $180 robot toy with image processing capabilities, expressive LED eyes, and a set of anthropomorphic behaviors. Ascribing mental states to it seems like an exaggeration - I don’t think Cozmo gets curious, just as I don’t think my phone gets hungry as its battery runs down - but what exactly does a robot or computer have to do before we can ascribe mental states to it without exaggerating? This question will only get more relevant as AI and robotics continue to improve.

Contemporary philosophy and neuroscience offer two contradictory answers. “Functionalists” like Daniel Dennett argue that we are justified in ascribing mental states to a system whenever doing so helps us understand and predict its behavior. Proponents of more brain-based views such as integrated information theory (IIT) argue that mental states, or at least their subjective aspect, require a degree of information integration that we currently observe only in biological brains.

IIT has the distinct advantage that it recognizes the possibility of mental states in immobile systems such as unresponsive patients and simulated brains - Dennett’s behavior-based account does not. But IIT is prone to zombies: if IIT is correct it should be possible to build robots that mimic the behavior of animals or human beings but lack subjective states simply because they use integrated circuits to process information rather than neurons. This could get confusing or downright ugly, because how should we treat a robot that expresses every sign of need, trust, pain or love, but (according to IIT) has no subjective experience whatsoever? Some might feel entirely justified in treating such robots terribly, Westworld style.

Dear philosopher friends! Am I reading this right? Does Dennett accept that his intentional stance fails spectacularly as far as unresponsive patients are concerned? Would proponents of IIT agree that their framework may become the legal defense of the sexbot industry? What does a computer or robot have to do to deserve to be treated like a mind? Do we need to be less binary about whether or not a system is in a particular mental state? Perhaps my phone can get hungry after all, in its own way, or does that cheapen the concept of hunger? What do you think?

07 November 2017

Brain Implants in the News

In his October 27 article in the Wall Street Journal, “To Keep Up With AI, We’ll Need High-Tech Brains”, Christof Koch, President and Chief Scientific Officer at the Allen Institute for Brain Science, argues for the development of high-resolution brain implants for everyone. His stated reason is that this will create a place for humans in a future where all basic tasks are performed by computers and robots, but he also hints at something grander, saying of the brain that “It is within our reach to enhance it, to reach for something immensely powerful we can barely discern”. Koch muses about implants that “could translate a vague thought into a precise and error-free piece of digital code, turning anyone into a programmer” and about how “People could set their brains to keep their focus on a task for hours on end” (now that reminds me of something…).

Enter John Horgan, foe of brain implants and writer at Scientific American, where on November 1 he published "Do We Need Brain Implants To Keep Up With Robots?". Horgan thinks the technology Koch describes is far in the future, because before we can develop effective brain implants we must first crack what he calls the “neural code”, i.e. understand how communication among millions of individual neurons gives rise to brain function. I think this is a mistake. Complete understanding is a valuable thing, but improvisation and learning by trial and error are workable courses of action too, especially in an age of AI. Many powerful brain-enhancing implants could be in widespread use today if the process for placing electrodes inside the skull could be made safe, but Horgan doesn’t mention this essential (and in my mind only) obstacle to rapid growth in the use of brain implants. Good thing Elon Musk is on the case.

PS. The neurorobot project is developing just fine, stay tuned :)

12 June 2016

Eyeball - A Minimal Neurorobot

This is my second post on personal neurorobotics. In the previous post I outlined the case for brain-based robots as consumer products, especially in education. In this post I will describe a minimal personal neurorobot that I call Eyeball. Eyeball implements a causal loop that is fundamental to animal (and neurorobot) behavior: signals travel from the brain to motors, causing behavior and change in the outside world, which is perceived by the brain as visual or other sensory feedback.

Eyeball is a web-camera attached to a servo motor. Eyeball's brain consists of two spontaneously spiking neurons that run on a USB-connected computer (eyeball.py, requires OpenCV, Windows installation instructions here). One of the neurons is a motor neuron: whenever it spikes the motor moves to a new position. This changes the field of view of the web-camera, which continuously sends video frames back to the brain. The second neuron is a sensory neuron that is maximally activated by a dark spot on a white background. The spikes of the sensory neuron inhibit the motor neuron. This means that when Eyeball encounters a dark spot on a white background it stops moving and fixates on it.
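The two-neuron loop fits in a few lines of Python. The sketch below is my minimal illustration of the idea, not the actual contents of eyeball.py: the leaky integrate-and-fire dynamics, the darkness measure, and every parameter value are simplifications of my own, and the camera and servo are left out.

```python
class LIFNeuron:
    """Leaky integrate-and-fire neuron with a constant drive current."""
    def __init__(self, drive, threshold=1.0, leak=0.9):
        self.v = 0.0            # membrane potential
        self.drive = drive      # baseline input per time step
        self.threshold = threshold
        self.leak = leak        # multiplicative leak per time step

    def step(self, extra_input=0.0):
        """Advance one time step; return True if the neuron spikes."""
        self.v = self.v * self.leak + self.drive + extra_input
        if self.v >= self.threshold:
            self.v = 0.0        # reset after a spike
            return True
        return False

def darkness(frame):
    """Fraction of dark pixels in a grayscale frame (0-255 values)."""
    flat = [p for row in frame for p in row]
    return sum(1 for p in flat if p < 64) / len(flat)

def run_step(motor, sensory, frame, inhibition=-2.0):
    """One brain update: a sensory spike inhibits the motor neuron."""
    sensory_spike = sensory.step(extra_input=darkness(frame))
    motor_spike = motor.step(extra_input=inhibition if sensory_spike else 0.0)
    return motor_spike, sensory_spike
```

With these made-up parameters, a mostly white frame leaves the sensory neuron silent, so the motor neuron spikes freely and the camera keeps moving; a frame dominated by a dark spot drives the sensory neuron, whose spikes suppress the motor neuron, and Eyeball fixates.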

Eyeball is very cheap. Web-cameras and servos can be bought for less than $5 each. To send serial commands via USB from Python to Eyeball I use an FTDI chip, which can also be purchased for around $5. I used an Arduino board to convert the serial commands to the PWM format needed to control the servo but only because I don't yet know how to send PWM commands from Python directly. So in principle the device costs less than $20. (Of course, to be successful Eyeball would also need a nice-looking plastic case.)
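For anyone wondering what the servo side involves, the relationship between a servo angle and the PWM pulse the Arduino must generate is simple. The helper below is a hypothetical illustration, not the actual Eyeball code; hobby servos typically read a pulse of about 1000 µs as 0° and about 2000 µs as 180°, repeated every 20 ms.

```python
def angle_to_pulse_us(angle):
    """Map a servo angle in degrees (0-180) to a pulse width in
    microseconds, using the typical hobby-servo convention of
    ~1000 us at 0 degrees and ~2000 us at 180 degrees."""
    if not 0 <= angle <= 180:
        raise ValueError("angle must be between 0 and 180 degrees")
    return int(1000 + (angle / 180) * 1000)

# With pyserial installed, a one-byte angle command could be sent over
# the FTDI link for the Arduino to translate into PWM. The port name
# and the one-byte protocol are assumptions for illustration:
#   import serial
#   ser = serial.Serial("COM3", 9600)
#   ser.write(bytes([90]))  # ask the Arduino to move the servo to 90 degrees
```

The Arduino's role is then just to receive the byte and call its standard servo routines, which is why the board can eventually be replaced by any chip that can produce the pulse train directly.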

Despite the low cost, Eyeball has all the components needed to emulate some interesting brain functions and behavior. Vision is perhaps the best understood of all brain functions, and brain-based models of visual object recognition such as HMAX are already used to give neurorobots vision, but only in academia - not yet for consumer-oriented educational applications. Given the ability to recognize objects, Eyeball could moreover be trained to orient towards and track some objects and avoid others, ideally using a realistic implementation of the tectum/superior colliculus and basal ganglia. A reward-button could be used to deliver a dopamine-reward, changing synaptic weights according to known learning rules and thus training the robot to show preference for some objects. Easy access to the web-camera's microphone and the computer's speakers opens the door for voice communication... etc... etc...
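The reward-button idea corresponds to what is often called reward-modulated (or three-factor) Hebbian learning: a synapse changes only when presynaptic activity, postsynaptic activity, and a reward signal coincide. A minimal sketch, with illustrative parameter values of my own:

```python
def reward_update(w, pre, post, reward, lr=0.1):
    """Three-factor learning rule: the synaptic weight w grows only
    when presynaptic activity (pre), postsynaptic activity (post),
    and a dopamine-like reward signal all coincide. The learning
    rate and the multiplicative form are illustrative choices."""
    return w + lr * reward * pre * post
```

Pressing the reward button while Eyeball fixates an object would set reward to 1, strengthening whichever sensory-to-motor synapses were just active; without reward, the weights stay put, so over many trials Eyeball would come to prefer the rewarded objects.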

While even this very simple neurorobot opens up a lot of interesting possibilities for implementing and exploring mechanistic models of the brain, what we really want is a robot that has two eyes and can move around independently. This will be the topic of the next post.

26 April 2016

Personal Neurorobotics

I've been exploring this idea for a few years now. It's time I start to document what I'm doing and ask for feedback. This is a summary, I'll go into detail in future posts.

Consider these trends:
  • Mechanistic models of brains and brain functions are getting better and better
  • Smartphones and laptops are becoming powerful enough to run such models
  • Hardware is getting cheap enough to build robots that can see, hear, make sounds, move around and communicate wirelessly for less than $200
I believe these trends open up a market for autonomous robots whose control-systems emulate biological brains. Consumer robotics is already a rapidly growing phenomenon and neurorobotics is an expanding area of research but I have yet to see the two endeavors combined. Please let me know in the comments below if there's some product or project I've missed.

A first generation personal neurorobot might emulate the brain of a fish or lamprey. Done right, it would explore its environment, avoid obstacles, escape threats, pursue desired states and objects, and learn, both from experience and explicit training. While these are behaviors that conventional robots can be programmed to perform, the defining feature of a neurorobot is its realistic brain architecture and activity, which makes it ideal for exploring and teaching neuroscience. Here, all the robot's brain processes could run on a wirelessly connected smartphone or computer, and be available for observation, explanation and modification in real-time. Using reinforcement learning to train the robot would be particularly interesting.

I see two markets for personal neurorobots:
  • Schools. Robots are already used in schools (e.g. FIRST Robotics Competition, LEGO Education). Neurorobots add the possibility of teaching neuroscience and behavior, which broadens the appeal substantially.
  • Enthusiasts. The popularity of neuroscience on the one hand and of consumer robotics on the other indicates that there would be a lot of interest in a project that combines both.
Importantly, the complexity of the brains and behavior of personal neurorobots could be increased year by year as new neuroscientific findings and models become available. Hopefully this can be an open process, with both individual enthusiasts and larger research teams working to make new brain circuits and capabilities available to the broader user-base. (The Human Brain Project is betting on a similar dynamic with their simulated robot testing environment, the Neurorobotics Platform.) The long-term aim would be to emulate the brain functions of higher vertebrates, such as complex learning, communication, attachment, planning, language and play.

While I think this project needs the attention of experienced roboticists, programmers, educators and investors, I will do my best to demonstrate feasibility and promote the idea. In my spare time I've developed a prototype neurorobot that I call a vertebot, pictured above. The hardware works, although ideally a personal neurorobot should be a light off-the-shelf product, not a five-pound beast that's prone to burst into flames. My main challenge is the code. I'm only fluent in Matlab and this project requires a non-proprietary language. So I'm learning Python and I'll ask for help with that as I move forward. Currently I can just about make a single neuron spike (single_neuron.py). I've set up a website (www.vertebot.com) where I'll put code and up-to-date information on the project. I'm also setting up a GitHub repo. All other observations and ideas I'll share here on the blog.
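A single spiking neuron really does fit in a dozen lines of Python. The sketch below uses the Izhikevich model, a common choice in neurorobotics because it is cheap to compute yet produces realistic spike patterns; it is my own minimal version, not the contents of single_neuron.py.

```python
def izhikevich(I=10.0, steps=1000, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate an Izhikevich neuron for `steps` milliseconds with
    constant input current I; return the spike times in ms. The
    default parameters give Izhikevich's regular-spiking neuron."""
    v, u = c, b * c          # membrane potential and recovery variable
    spikes = []
    for t in range(steps):
        # two 0.5 ms half-steps for numerical stability
        for _ in range(2):
            v += 0.5 * (0.04 * v * v + 5 * v + 140 - u + I)
        u += a * (b * v - u)
        if v >= 30.0:        # spike: record, then reset
            spikes.append(t)
            v, u = c, u + d
    return spikes
```

Calling izhikevich() with the defaults yields a steady train of spikes, which is exactly the behavior a first neurorobot brain needs as its source of spontaneous activity.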

It's a big, sprawling project. I hope you'll find it interesting.

24 April 2016

Best Podcasts 2016

It's a great year in politics so let's start there. Slate's Political Gabfest is still going strong. Co-host John Dickerson has moved up in the world of US politics, he now moderates presidential debates and interviews the candidates on CBS's Face the Nation (don't miss his FTN Diary). However, the Gabfest faces formidable election coverage competition from the FiveThirtyEight elections podcast, which has a great, geeky ambiance and a fresh, numbers-based approach to punditry. Vox's The Weeds, The Economist's various podcasts, and shows like The Glenn Show on Bloggingheads TV do good analysis of news and society. BBC's Newshour is still the go-to podcast for big complex stories like the Panama Papers.

Long-form interviews can be extraordinarily good when done right. WTF with Marc Maron, CNN's The Axe Files with David Axelrod, and The Ezra Klein Show frequently hit the mark.

Very Bad Wizards is the best podcast. Period. Social psychology, moral philosophy and great banter. I own the T-shirt. Repugnant. Space Time Mind, another philosophy podcast, was really picking up steam (the Transhumanist Hot Tub episode is absolutely genius) but has been silent for months.

When it comes to neuroscience, Neuroscientists Talk Shop wipes the floor with the competition. Try it (the episodes with Kreizer and Canavier are two recent highlights) and then try something like Nature's Neuropod or the Brain Science podcast and decide for yourself. (That said, yesterday I discovered Brain Matters, which looks like it could be really good. Fingers crossed.)

At the end of the week, relax with BBC's Friday Night Comedy podcast The News Quiz. Sandi Toksvig left to host QI and has been gloriously replaced by Miles Jupp.

Finally technology. Leo Laporte is still king. This Week in Google is pretty much always very good. Security Now, This Week in Law and This Week in Tech can be good if the topic and panel are. Non-TWiT podcasts worth keeping an eye on include Robohub's Robots podcast, the Gillmor Gang, and of course Seminars About Long-Term Thinking.

And don't forget In Our Time.

I use the paid-for version of Podcast Addict on Android.

Please let me know about any podcasts you think I would like.


Wernicke's Area, where podcasts do their magic.
Polygon data are from BodyParts3D, generated by the Database Center for Life Science (DBCLS), CC BY-SA 2.1 jp, https://commons.wikimedia.org/w/index.php?curid=32534031

24 March 2013

Musing on the mind-brain problem

Following a couple of recent conversations with friends and family I've written a short summary of my current views on the mind-brain problem. The dry jargon of scientific research reports is an obvious obstacle to a generally satisfying account of the conscious self, so I try to not be dry. This is just my current perspective, I'm not providing references and I reserve the right to be wrong.

I think it's important not to think of the brain as 'just a bunch of cells', but rather as a hundred billion individual identities that want to live and grow. The ancestors of the cells of the brain were free agents; swimming, creeping, crawling, swirling their way through the waters of ancient earth; feeding, resting, sensing, fighting, fleeing and multiplying. Now they're here, living together in this civilization we call brain; but they are still feral. In a very real way they rival and tussle every day to stay alive. The neurons of the brain do not grow old and die like other cells; you have almost entirely the same brain cells now as you had when you were a child. However, tens of thousands of them are wiped out every day - only the ones that form important constellations and alliances with other neurons receive 'neuromodulators' and grow; others shrivel and fade. Neuromodulators, what Gerald Edelman called 'value systems', are essential to the life and growth that brain cells seek. And here is the essential fact: neuromodulators are released in the brain in response to meaningful events of various kinds, happenings internal or external that bear on the interests of the body, the person, the brain as a whole, or one of its neural communities. The brain cells, seeking neuromodulators, seeking life and growth, are therefore in constant electrical communication and structural flux, seeking to bring about, probe and explore the meaningful, important aspects of the reality in which they find themselves.

What are these aspects of interest, of meaning, that allow neurons to survive and grow? What happenings attract the complexity and potential of a living human brain cell? To start with, every neuron is in constant electrical union with the sensing, moving body, and the neurons communicate and grow about this vital fact. The neurons share a common path through life, and so they explore and probe their shared memories constantly for nuggets of intrigue. Although their communities are often in tension as each continues the ancient will to live on and grow against the daily weeding out of the least relevant of them, they nevertheless share a common mouth, a common pair of hands and eyes, and the electrical urges of hunger, need, sleep and dreams reverberate across the neural fields endlessly. How could they not share a sense of I in this circumstance? From this seeking, seething, astronomically complex swirl of electric neural energy, membrane and will to survive and grow emerges pleasure and frustration, I and not-I, hopes, plans, dreams and distinctions. The astounding communities and constellations of living, electrified tissue that constitute each of these core features of the human experience are there for us to explore, by any method we choose - introspective, statistical, fictional, spiritual, communal - and it is our tremendous fortune and grace to be alive just as the technology to express and understand all this is finally beginning to become available.

At the heart of it all then, is a dynamic, inventive, persevering civilization of cells, seeking nourishment, excitement, love and force, in a never-ending myriad of ways. This is what it is to be alive, a conscious human being; a near-instantaneous sharing of memory, will and rich experience among the one hundred billion little lives within that one skull. It is in their nature to seek, like their cousins still independent in the sea; but the cells of the brain seek in communication and structural union with other brain cells. The subject of this electrical conversation is and feels like you.