Here are the reasons I’m excited about this news:
- I’m writing a series of novels about humanoid robots. (Scroll down for some most excellent blurbage.)
- DARPA has agreed to allow unpaid teams to enter the Grand Challenge competition. This means that hackers/makers/NewAesthetes/kitbashers can get in on the fun. You there, in your garage! This means you! Hie you hence to Active Surplus!
- DARPA projects have a way of spinning out into other projects. We might not get a fully autonomous humanoid robot, but we probably will get several different methods of saving human lives and repairing the things humans break. Basically, it looks like they want this robot to be able to find people in disaster-struck areas and rescue them, or to move debris around so that medical personnel can work safely. Either way, it’s a win. And in the meantime, we’ll get a lot more advancements in object detection, natural language processing, and replicating fine motor skills in machines.
Here are the reasons I’m not so excited about this:
- I’m writing a series of novels about humanoid robots. And in them, DARPA is not involved in their creation. Well, not directly. They probably funded a lot of the initial research, much as they’re doing here, but the final vision of the vN — and crucially, the failsafe — was funded by tithes to a Rapture-oriented mega church. I chose this because humanoid robots, while fascinating, are an inefficient use of useful technologies, and I felt like I couldn’t quite justify what I was putting down on paper. There have been a lot of stunning advances in robotics lately, from modular units to Big Dog to throwing arms, but until now very few companies or labs have focused on Frankensteining those technologies into one human-shaped product. And the ones that have — the Japanese ones — aren’t necessarily concentrating on autonomous robots, but on telepresence puppets. That’s because we can already do all the things we wish robots could do. We spent tens of thousands of years evolving the abilities we take for granted: walking, talking, hearing, grasping, thinking. It’s hard to code those abilities into another form, and it’s inefficient to build them all into a generic mass-production model when we already do so well with them ourselves. The problem is that, in certain contexts and environments, we just can’t do them without suffering. That’s one of the reasons DARPA wants a humanoid robot. To do our suffering for us.
- A lot of that suffering could be avoided if we would just stop causing it in the first place. DARPA’s robots — all of them, from the pack-bots to the drones to the ‘noids — are intended as defense units. They’re funded by the military, for the military. This isn’t to say they can’t help civilians — we could have really used some excellent robotics during Hurricane Katrina, for example, and I’d be disappointed if the National Guard didn’t get access to some of these units when they’re finished. But the thing is, we wouldn’t have to build these robots if our human forces weren’t spread so thin. And I don’t just mean in Iraq and Afghanistan (and probably Pakistan and Iran, soon). I mean all the bases the United States has in other countries. I mean the military-industrial complex in general. I mean that it’s sad that DARPA has to fund these awesome projects, and that we don’t have a similar organization with similar resources dedicated to funding awesome solutions for problems like clean energy and climate change.
In general, though, I’m pretty excited. Robots! Who doesn’t love them?