Unit 8 Panel: Robotics

Description: Panel discussion on how developments in robotics can provide insights about navigation and motor control in biological systems and how biological studies may inform robotic design for complex tasks such as those posed by DARPA grand challenges.

Instructors: Tony Prescott, Giorgio Metta, Stefanie Tellex, John Leonard, and Russ Tedrake

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PATRICK WINSTON: Well, I suppose my first question has to do with some remarks that Tony made about Rod Brooks. I remember Rod Brooks' work for the one great idea he had, which was the idea of subsumption. And the idea of subsumption was to take the notion of procedural and data abstraction from ordinary programming and elevate it to a behavior level. And the reason for doing that was that if things weren't working so well at one level, you would appeal to another level to get you out of trouble. So that sounds like a powerful idea to me. And I'm just very interested in what the panelists construe to be the great principles of robotics that have emerged since then. Are there great principles that we can talk about in a classroom without just describing how a particular robot works?

STEFANIE TELLEX: So we were talking about this ourselves a little bit, sort of asking ourselves, what makes a systems paper? And what do you write down in one of those papers as the general principles that you extract from building a system? Because it seems like there are two kinds of papers in robotics-- the systems paper, where you say, I built this thing, and here's kind of how it works. It's hard, I think, to extract general principles from that. It's like, I built this and this and this, and this is what it did. Here's a video. But it does something amazing, so it's cool.

And then there are, like, algorithm papers, which tend to get more citations, and, I don't know-- they usually work because they have this chunk of knowledge. Subsumption architectures; RRT* is one of my favorite examples. In that kind of paper, there's an algorithm, and then there's math that shows how the algorithm works, and some results that they show. And there's this nugget that transfers from the author's brain to your brain. And I think it's hard to know what that nugget is when you've built a giant system.

One of the things that I've been thinking about, that might be what that looks like, is design patterns for robots. This is a concept from software engineering. It's at a higher level of abstraction than a library or something that you share; it's about the ways that you put software together. So if you're a hacker, you've probably heard of some of these patterns, like singleton and facade and strategy. They have these sort of evocative names from this book-- Gang of Four is the nickname. And I think there's a set of design patterns for robotics that we are slowly discovering.

So when I was hanging out in Seth Teller and Nick Roy's group for my post-doc, there was one that I really got in my head, which was this idea of pub/sub-- publish and subscribe. We were talking about YARP and LCM and ROS. They all had this idea that you don't want to write a function to call to get the object detection results. You just want your detector blasting out the results as fast as it can, all the time, and you break that function-call abstraction. And you get a lot of robustness in exchange for that. I think that's a design pattern for robots. I think there are probably about 30 more of them. And I bet Russ knows a lot of them. And Albert and Ed-- there are people who know them, maybe, but they're not written down. One thing I'd like to do is write some more of them down. Sorry I haven't.
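
As a rough illustration of that pattern, here is a minimal pub/sub sketch in plain Python-- not code from YARP, LCM, or ROS; the Topic class, the message format, and the 20 Hz rate are all invented:

```python
import queue
import threading
import time

class Topic:
    """Minimal publish/subscribe channel: the publisher blasts messages
    out; each subscriber gets its own queue and reads at its own pace."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self):
        q = queue.Queue(maxsize=1)  # keep only the freshest message
        self._subscribers.append(q)
        return q

    def publish(self, msg):
        for q in self._subscribers:
            try:
                if q.full():
                    q.get_nowait()  # drop stale data rather than block
                q.put_nowait(msg)
            except queue.Empty:
                pass  # a consumer grabbed the stale item first; fine

detections = Topic()

def detector_loop():
    # The detector publishes results as fast as it can, all the time;
    # no consumer ever calls it as a function.
    frame = 0
    while True:
        detections.publish({"frame": frame, "objects": ["mug"]})
        frame += 1
        time.sleep(0.05)  # a made-up 20 Hz detection rate

threading.Thread(target=detector_loop, daemon=True).start()
inbox = detections.subscribe()
print(inbox.get())  # a consumer just reads the latest detection
```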

PATRICK WINSTON: Russ, what do you think? Your talk seemed to focus on optimization as the answer.

RUSS TEDRAKE: It's a framework that I think you can cast a lot of the problems in and get clear results. And I think, looking across the field, you can point to clear ideas that have worked very well. So for estimation, Bayes' rule works really well, and Monte Carlo estimation has worked really well. For planning in high-dimensional spaces, somehow randomization was a magic bullet, where people started doing RRT-type things as well as trajectory-optimization-type things.
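
As a hedged illustration of that randomization idea-- not the panelists' code-- here is a bare-bones RRT sketch in Python, planning in an empty unit square; the step size, goal tolerance, and iteration budget are arbitrary placeholders:

```python
import math
import random

def rrt(start, goal, step=0.1, max_iters=2000, goal_tol=0.15):
    """Bare-bones RRT: sample a random point, find the nearest tree node,
    and steer a small step toward the sample. The same loop works
    unchanged in high-dimensional configuration spaces, which is the
    surprising part."""
    tree = {start: None}  # maps each node to its parent
    for _ in range(max_iters):
        sample = (random.random(), random.random())
        near = min(tree, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        t = min(1.0, step / d)
        new = (near[0] + t * (sample[0] - near[0]),
               near[1] + t * (sample[1] - near[1]))
        tree[new] = near  # a collision check would guard this line
        if math.dist(new, goal) < goal_tol:
            path = [new]  # walk parents back to the start
            while tree[path[-1]] is not None:
                path.append(tree[path[-1]])
            return path[::-1]
    return None  # no path found within the budget

print(rrt(start=(0.1, 0.1), goal=(0.9, 0.9)))
```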

I do think that the open source movement and the ability to write software in components and modules has been a huge, huge thing. And I do think that at the low-level control layer, optimization-based controllers have been a magical thing. In any one of these sub-disciplines, you can point to a few real go-ahead ideas that have rocked our world, and everybody gets behind them.

You know, I think maybe the biggest one of all, actually, has been LIDAR. I think sensing has really come online and been so enabling in the last few years. If you look back at the last 15, 20 years of robotics, the biggest step changes, I think, have been with the sensors-- sensors upped their frame rate and resolution and gave us depth. When LIDAR and the Kinect came out, those just changed everything.

PATRICK WINSTON: Before we leave the subject of Brooks and subsumption-- Tony, you brought it up, and there was a little exchange between you and Russ about why it might be useful. I noted that Russ and John each talked about a major blunder that their machines made. Do you construe that any of Brooks's stuff might have been useful in avoiding those kinds of blunders?

TONY PRESCOTT: Potentially. But I think these are really challenging things that we're trying to do with these robots. So the biomimetic approach that I take is partly that I want to mine biology for insights about how to solve problems. And the problems that we're realizing are hard in robotics are the problems that are hard in biology as well. So I think we underestimated the problem of manipulating objects. But if you look in biology, what other species apart from us has really solved that problem to the same level of dexterity? An octopus, maybe, but-- sorry, an elephant's trunk.

So these challenges prove to be much more difficult than we might think. And then other things that intuitively seem hard are actually quite easy. So the path that we're taking in some of our biomimetic robots, like the MiRo robot toy, is to do stuff which people think looks hard but is actually relatively easy to do-- these are solvable problems. And then we can progress towards what are obviously the harder problems, the ones where the brain has dedicated a lot of processing power.

And with object manipulation, if you look in the brain, there's a massive representation for the hand. And there are all these extra motor systems in cortex that aren't there in non-mammals-- even in simpler mammals, they don't have motor cortex. We've developed all these extra motor cortical areas, direct corticospinal projections. All of this is dedicated to the manipulation problem. So I think we're finding out what the hard problems are by trying to build the robots. And then we can maybe find out what the solutions are by looking at the biology. Because in that case, particularly, you have low-level systems that can do grasp, and that are there in reptiles. And then you have these additional systems in mammals, and particularly in primates, that can override the low-level systems to do dexterous control.

PATRICK WINSTON: John, you look like you're eager to say something.

JOHN LEONARD: I just want to talk about subsumption, because it had such a huge effect on my life as a grad student. It was sort of the big thing in 1990 when I was finishing up. But I really developed a strong aversion to it, and so I tried to argue with Rod back then, but not very successfully. I would say, when will you build a robot that knows its position? And he would say, I don't know my position. But I think some of the biological evidence, like the grid cells, suggests that maybe at an autonomic, subconscious level there is sort of position information in the brain. But subsumption as a way of trying to strive for robustness, where you build on layers-- that's a great inspiration. I feel, though, that the intelligence-without-representation sort of work that Rod did-- I just don't buy it. I think we need representation.

PATRICK WINSTON: I guess there are two separable ideas.

JOHN LEONARD: Yes.

PATRICK WINSTON: The no representation idea and the layering idea.

JOHN LEONARD: Yeah. I like the layering, and I'm less keen on the no representation.

RUSS TEDRAKE: I'm curious if Giorgio-- I mean, you showed complicated system diagrams. And you're obviously doing very complicated tasks with a very complicated robot. Do you think-- do you see subsumption when you look at those--

GIORGIO METTA: Well, it's actually there. I don't have time to enter into the details. But the way it's implemented allows you to do subsumption or other things. There's a way to physically take the modules and, without modifying the modules, connect them through scripts that can insert logic on each module to sort of preprocess the messages and decide whether to subsume one of the modules or not, or activate the various behaviors. So in practice, it can be a subsumption architecture-- a pure one-- or whatever other combination you have in mind for the particular task.
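
A minimal sketch of what that module-gating might look like, in plain Python rather than YARP-- the module names, messages, and sonar threshold are all invented:

```python
def wall_follow():
    # Low-level behavior module, never modified.
    return {"v": 0.3, "w": 0.1}

def obstacle_avoid(sonar_range):
    # Higher-level module: emits a command only when triggered.
    if sonar_range < 0.5:
        return {"v": 0.0, "w": 1.0}  # stop and turn
    return None

def glue_script(sonar_range):
    """The inserted logic lives between the unmodified modules: it looks
    at the messages and decides whether the higher layer subsumes the
    lower one -- a subsumption arbiter, or any other policy you like."""
    override = obstacle_avoid(sonar_range)
    return override if override is not None else wall_follow()

print(glue_script(0.4))  # higher layer subsumes: stop and turn
print(glue_script(2.0))  # lower layer passes through: follow the wall
```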

RUSS TEDRAKE: Maybe a slightly different question is did the subsumption view of the world shape the way you designed your system?

GIORGIO METTA: It may have happened without my knowing, because that piece of software started at MIT while I was working with Rod. But we didn't try to build a subsumption architecture in that specific case. The style, maybe, of the publish-subscribe that we ended up doing was derived from subsumption, in a sense. It was in spirit, though not in the software itself.

But going back to whether there's a clear message that we can take: certainly I will subscribe to Stefanie's message of recyclable software, the fact that we can now build these modules and build large architectures. This allows doing experiments that we never dreamed of until a few years ago. So we can connect many powerful computers and run vision and control optimization very efficiently, especially by recycling software. So you don't have to implement inverse kinematics every day.

PATRICK WINSTON: Well, I think the casual observer this afternoon, one good example being me, would get the sense that, with the exception of Tony, the other four of you are interested in the behavior, but not necessarily interested in understanding the biology. Is that a misimpression or is that correct? And when you mention LIDAR, for example, that's something I don't think I use. And it's doing something-- it's enabling a robot with a mechanism that is not biological. So to what degree are any of you interested in understanding the nature of biological computation?

JOHN LEONARD: I care deeply about it. I just don't feel I have the tools or the bandwidth to really dive into it. But every time I talk to Matt Wilson I leave feeling in awe, like I wish I could clone myself and hang out across the street.

GIORGIO METTA: Well-- Yeah. Go ahead.

STEFANIE TELLEX: I get it. I kind of feel the same. I mean, I'm an engineer. I'm a hacker. I build things. And I think the way that I can make the most progress towards understanding intelligence is by trying to build things. But every time I talk to Josh, you know, I learn something new-- and Noah and Vikash.

PATRICK WINSTON: Say that again?

STEFANIE TELLEX: Every time I talk to Josh and Noah Goodman and Bertram Malle, people from the psychology and cognitive science [INAUDIBLE], I learn something and I take things away. But I don't get excited about trying to build a faithful model that incorporates everything that we know about the brain, because I just don't-- I can't put all that in my brain. And I feel that I'm better guided by my engineering intuition and the things that I've learned by trying to build systems and seeing how that plays out.

PATRICK WINSTON: On the other hand, Tony, you are interested in biology and you do build stuff.

TONY PRESCOTT: Yeah. I'm interested in it because I trained originally as a psychologist. So I came into robotics in order to build physical models.

PATRICK WINSTON: So why do you build stuff?

TONY PRESCOTT: Because I think the theory is the machine. Our theories in psychology and neuroscience are never going to be like theories in physics. We're not going to be able to express them concisely and convince people of them in a short paper. We are going to be able, though, to build them into machines like robots and show people that they produce the behavior, and hopefully convince people that way that we have a complete theory.

PATRICK WINSTON: So it's a demonstration purpose? Or a convincing purpose?

TONY PRESCOTT: It's partly to demonstrate the sufficiency of the theory. I think that's the big reason. But another motivation that has grown more important for me is to be able to ask questions of biologists that wouldn't occur to them. Because I think the engineering approach-- you're actually building something-- raises a lot of different questions. And those are then interesting questions to pursue in biological studies, questions that might not occur to you otherwise.

So I go back to Braitenberg's comment that when you try to understand a system, there's a tendency to overestimate its complexity, and that doing synthesis is a whole lot different from doing analysis. And actually, with the brain, we tend to either underestimate or overestimate its complexity, and we rarely get it right. So the things that we think are complex sometimes turn out to be easy.

So an example would be in our whisker system: it's really quite easy to measure texture with a whisker, and there are lots of different ways of doing that that work. But intuitively you might not have thought that. Getting shape out of whiskers is harder, because it's an integration problem across time, and you have to track position and all these other things. So those things turn out to be hard. So I think trying to build synthetic systems helps us understand what the real challenges are that the brain has to solve, and that's interesting for me.
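
As one hypothetical illustration of an easy texture measure-- not the Sheffield implementation-- here is a toy Python sketch that separates rough from smooth by the high-frequency power in a whisker deflection trace; the signals, sampling rate, and 100 Hz cutoff are all made up:

```python
import numpy as np

def texture_feature(deflection, fs=1000.0, cutoff_hz=100.0):
    """Fraction of signal power above the cutoff frequency: rough
    surfaces excite more high-frequency whisker vibration."""
    power = np.abs(np.fft.rfft(deflection)) ** 2
    freqs = np.fft.rfftfreq(len(deflection), d=1.0 / fs)
    return power[freqs > cutoff_hz].sum() / power.sum()

t = np.arange(0.0, 1.0, 1.0 / 1000.0)
smooth = np.sin(2 * np.pi * 20 * t)             # slow bending only
rough = smooth + 0.5 * np.random.randn(len(t))  # added micro-vibration
print(texture_feature(smooth), texture_feature(rough))  # low vs. high
```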

PATRICK WINSTON: Is there an example of a question that you didn't know was there when you started?

TONY PRESCOTT: Yeah.

PATRICK WINSTON: And you wouldn't have found if you hadn't attempted to build?

TONY PRESCOTT: So when we started trying to build artificial whiskers, the engineers that were building the robot said, well, how much power does the motor that drives the whisker have to have? I mean, what happens when the whisker touches something? Does it continue to move and bend against the surface? Or does it stop? And we said, well, we'll look that up in the literature. And of course, there wasn't an experiment that answered that question.

So at that point we said, OK, we'll get a high-speed camera and we'll start watching rats. And we found that when the whiskers touch, they stop moving very quickly. So they make a light touch. And intuitively, yeah, maybe-- because we make a light touch. Obviously, we don't bash our hands against surfaces. But it's not obvious, necessarily, that that's what you would do when you have a flexible sensor. And in some circumstances, the rats allow their whiskers to bend against objects. So understanding when you make a light touch and when you bend was really a question that became important to us after we'd started thinking about how to engineer the system-- how powerful do the motors need to be?

PATRICK WINSTON: I'm very sympathetic to that view, being an engineer myself. I always say if you can't build it, you don't really understand it.

TONY PRESCOTT: Yeah.

PATRICK WINSTON: So many of you-- all of you have talked about impressive systems today. And I wonder if any of you would like to comment on some problem you didn't know was there, and wouldn't have discovered if you hadn't been building the kinds of stuff that you have built.

RUSS TEDRAKE: It's a long list. I mean, I think we learn a lot every day. Let me be specific. With Atlas, we took a robot to a level of maturity that I'd never taken one to before.

I see videos from companies like Boston Dynamics that are extremely impressive. I think one of the things that separates a company like that from the results you get in a research lab is incredible amounts of hours-- sort of a religion of data logging and analysis, of finding corner cases, logging them, addressing them, incremental improvement. And researchers don't often do that. And actually, I think a theme in at least a couple of the talks was that maybe this is actually a central requirement. In some sense, our autonomy really should be well suited to doing that-- to automatically finding corner cases and proving robustness and all these things.

But the places that broke our theory were weird. I mean, the stiction in the joints of Atlas is just dominant. So we do torque control, but we have to send a feedforward velocity signal to get over friction. If we start at zero and have to begin moving, and we don't send a feedforward velocity signal, our model is just completely wrong.
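
A minimal sketch of that kind of friction feedforward, assuming a simple PD torque loop-- the gains and the Coulomb friction constant are placeholders, not Atlas values:

```python
def sign(x):
    # Crude signum; a real controller smooths this near zero velocity.
    return (x > 0) - (x < 0)

def torque_command(q_err, qd_des, qd_meas, kp=50.0, kd=2.0, coulomb=8.0):
    """PD torque feedback plus a friction feedforward keyed to the
    desired velocity direction. Starting from rest, feedback alone
    (about 0.9 Nm here) never exceeds the 8 Nm breakaway friction,
    so the joint sits still and the frictionless model is wrong."""
    tau_fb = kp * q_err + kd * (qd_des - qd_meas)
    tau_ff = coulomb * sign(qd_des)
    return tau_fb + tau_ff

# Small tracking error, starting from rest: the feedforward term
# supplies most of the command and breaks the joint loose.
print(torque_command(q_err=0.01, qd_des=0.2, qd_meas=0.0))  # 8.9
```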

When you're walking on cinder blocks and you go near the ankle limit in pitch, there's a strange coupling in the mechanism which causes the ankle to roll. And it'll kick your robot over just like that if you don't watch for it. And we thought about putting that into our model, addressing it with sophisticated things. It's hard and ugly and gross. It's possible, but we did things to just-- you know, Band-Aid over it.

And I think there's all this stuff, all these details, that comes in. I think the theory should address it all. I think we did pretty well-- I'd say we got 70%, 80% of the way there with our theory this time. And then we just decided that there was a deadline, and we had to cover some stuff up with band-aids. But that's the good stuff. That's the stuff we should be focused on. That's the stuff we should be devoting our research efforts to.

PATRICK WINSTON: If you were to go into a log cabin today and write a book on Atlas and your work on it, what fraction of that book would be about corner cases and which fraction would be about principles?

RUSS TEDRAKE: We really stuck to principles until last November. That was our threshold. In November, we had to send the robot back for upgrades. We said, until then, we're going to do research; the code base is going to be clean. And then when we got the robot back in January, we did everything we needed to do to make the robot compete in the challenge. So I think we got 70% or 80% of the way there. And then it was just putting hours on the robot, finding those screw cases.

And then, if I were to write a book in five years, I hope it would be--

PATRICK WINSTON: Is there a principle on that ankle roll? Or was that--

RUSS TEDRAKE: Oh, absolutely. We could have thrown that into the model. It just would have increased the dimensionality. It would have been a nonlinear term. We could have done it; we just didn't have time at that point. And it was going to be one of many things we would have had to do if we had taken the principled approach throughout. There are other things that we couldn't have put nicely into the model, that we would have needed to address. And that should be our agenda.

TONY PRESCOTT: If that was your own robot, would you have just re-engineered the ankle to make that problem less of an issue?

RUSS TEDRAKE: It wasn't about the ankle. It was about the fact that there's always going to be something unmodeled that's going to come up and get you. And with robots, I think we're data-starved. We don't have the big data problem in robotics yet, and I think you're limited by the hours you can put on your robot. We need to think about how you aggressively search for the cases that are going to get you. How do you prove robustness to unmodeled things? I think this is fundamental. It's not a theme I would have prioritized if I hadn't gotten this far with a robot.

PATRICK WINSTON: But Giorgio, what about building iCub? Were there big problems that emerged in the course of building it that you hadn't anticipated?

GIORGIO METTA: Well, first of all, there's a problem of power. I guess for Atlas it's very different, but with electric motors, you're always short of space-- short of space to put the actuators. If you have a target size, you start filling the available space very quickly, and there's no room for anything else. And then you start sticking the electronics wherever you can, and cables and everything.

Certainly the difference in design between biological actuators and artificial actuators makes life very difficult, especially when you have to build something like a hand, where you'd like to have a lot of degrees of freedom. There's no way you can actually build all of them, so you have to take shortcuts here and there. And I guess the same is true for computation. You resort to putting as many microcontrollers as you can inside the robot, because you want efficient control loops. And then you say, well, maybe run a cable out for proper image processing, because there's no way you can squeeze that into the robot itself.

It's not surprising. It's just a matter of-- when you're doing the design, you soon discover that there are limitations you have to take into account. I don't know whether it is surprising. I mean, we learned the lesson across many years of design. We designed other robots before the iCub, so I thought we knew where the limits were with the current technology.

PATRICK WINSTON: I wonder if the-- you say you learned a lot building iCub. I wonder if this knowledge is accessible. It's knowledge that you discussed in meetings and seminars, and thought about at night and fixed the next day. If I wanted to build iCub and couldn't talk to you, would I have to start from scratch? I know you've got the stuff on the web and whatnot, but--

GIORGIO METTA: Yeah. That's probably enough for--

PATRICK WINSTON: --the reasons in them.

GIORGIO METTA: Sorry?

PATRICK WINSTON: Your web material has designs, but it doesn't have reasons.

GIORGIO METTA: Yeah. Yeah, that's-- that's right. No, the thing is, we didn't document the process itself. So that information, I don't know, resides in the people who actually made the choices when we were doing the design. There's one other thing that maybe is important: the iCub is about 5,000 parts. And that's not good, because there are 5,000 parts that can break. And that may be something interesting for the design of materials for robots, or for new ways of building robots.

And at the moment, basically everything that could potentially break has, at some point over many years, failed on the iCub. Even parts that theoretically we didn't think could break-- well, they could. We estimated maximum torques, whatever, and then it happened anyway. Somebody did something silly and we broke a shoulder-- a steel part that we never thought could actually break. And it completely failed. Those types of things are maybe interesting for future designs-- for simplifying the design, figuring out ways of using fewer parts, or, let's say, different ways of actually building the mechanics of the robot.

PATRICK WINSTON: I suppose I bring it up because some of us in CSAIL-- not me, but some people in CSAIL-- are interested in how you capture design rationale, how you capture those conversations, those whiteboard sketches, and all of that sort of thing, so that the next generation can learn by some mechanism other than apprenticeship.

But let's see. Where to go from here? iCub is obviously a major undertaking, and Russ has been working like a slave for three years on the Atlas robot. I don't know quite how to phrase this without being too trumpish, but the ratio of soldering time to thinking time must be very high on projects like this. Is that your sense? Or do you think that the building of these things is actually essential to working out the ideas? Maybe that's not quite the question I want to ask. Maybe a sharper question is: given the high ratio of soldering time to thinking time, is it something that a student should do?

RUSS TEDRAKE: I'm lucky that someone else built the robot for us. Giorgio has done much more than we have in this regard.

PATRICK WINSTON: Well, by soldering time you know--

RUSS TEDRAKE: I know. Yeah, yeah, sure. We--

PATRICK WINSTON: It's a metaphor.

RUSS TEDRAKE: Yeah. But we got pretty far into it with the good graces of DARPA and Google slash Boston Dynamics. The software is where we've invested our solder time-- a huge amount of software engineering effort. I spent countless hours on setting up build servers and stuff.

Am I stronger, you know, am I better for it? I think, having invested, we can do research very fast now. So I'm in a position to be able to try really complicated ideas very quickly because of that investment. And I knew going in what I was going to be doing. I saw what John and other people got out of being in the Urban Challenge, including especially the tools, like the LCM that we've been talking about today. And I wanted that for my group. So it was a very conscious decision.

I'm at a place now where we can do fantastic research. Every one of the students involved did great research work on the project. We hired a few staff programmers to help with some of the non-research stuff. And I think the hardware is important. It's hard to balance, but I do think it's important.

PATRICK WINSTON: Just one short follow-up question there. Earlier you said that some of your students didn't want to work on it. And why was that? Was that a principled reason?

RUSS TEDRAKE: People knew how much soldering time there was going to be. Right? And the people who had their research agenda, and it was more theoretical, didn't want that soldering time. Other people said, I'm still looking for ideas; this is going to motivate me for my future work-- they jumped right in. And super strong students made different decisions on that.

PATRICK WINSTON: And they both made the right decisions.

RUSS TEDRAKE: I think so.

PATRICK WINSTON: Yeah.

RUSS TEDRAKE: Yeah.

PATRICK WINSTON: But John, you've also been involved in-- well, the self-driving car thing was a major DARPA grand challenge. Some people have been critical of these grand challenges because they say that, well, they drive the technology up the closest hill, but they don't get you onto a different hill. Do you have any feelings about these things-- whether they were a good idea in retrospect, having participated in them?

JOHN LEONARD: Let's see. I'm really torn on that one, because I see the short-term benefits to the community-- you can point to things like the Google car-- there's a clear impact. But DARPA does have a mindset that once they've done something, they declare victory and move on. So now if you work, say, on legged locomotion, which one of my junior colleagues does, DARPA won't answer his emails. It's like, OK, we did legged locomotion. And so I think the challenge is to be mindful of where we are in terms of real long-term progress. And it's not an easy conversation to have with the funding agencies, but--

PATRICK WINSTON: But what about a brand-new way of doing something that is not going to be competitive in terms of demonstration for a while? Is that a problem that's amplified by these DARPA grand challenges? I mean, take chess, for example. If you had a great idea about how humans play chess, you would never be competitive with Deep Blue, or not for a long time. So you wouldn't be in a DARPA program that was aimed at doing chess. Do you see that as a problem?

RUSS TEDRAKE: I think it's a huge problem. But I still see a role for these kinds of competitions as benchmarks. And I wouldn't do another one today. For me, it was the right time to see how far our theory had gotten, try it on a much more complicated robot, benchmark where we are, and get some new ideas going forward. It was perfect for me. But you can't set a research agenda that way.

JOHN LEONARD: And they're dangerous for students. One of our strongest students never got his PhD, because his wife was in a PhD program in biology and he did the DARPA challenge. And she finished her thesis. And he said, I don't want to live alone on the East Coast while she starts her faculty position in California, so I'm out of here. And that's the sort of thing that happens.

STEFANIE TELLEX: I've kind of made different decisions about that over my career. So when I was a post-doc at MIT, I really, really, really worked to avoid soldering time. I was fortunate. I kind of walked around-- there were all these great robotic systems. And I would bolt language on and get one paper, and bolt language on another way and get another paper. And to get a faculty position, you have to have this focused research agenda. So I was focused on that. And it worked. I think it was a very productive time for me.

But I've really valued the past two years at Brown, where there aren't as many other roboticists around. I've really been forced to broaden myself as a roboticist and spend a lot more time soldering, making this system for pick-and-place on Baxter. The first year I didn't hack at all, and the second year I started hacking on that system-- the one that was doing the grasping-- with my student. It was the best decision I ever made. I learned so much about the abstractions. Because the problems that we needed to solve at the beginning, before I started hacking, I just didn't understand. The problems I thought we needed to solve were not the problems that we actually needed to solve to make the robot do something useful. And I don't think there's any way we could have gotten to that knowledge without hacking and trying to build it.

GIORGIO METTA: In our case, we've been lucky, in a sense, that we had resources in terms of engineers who could do the soldering. At the moment we still have about 25 people who are just doing the soldering. So it's a large number.

PATRICK WINSTON: That would look like a battalion, something like that.

JOHN LEONARD: Can I say something more general? There are a lot of claims in the media and sort of hyped fears about robots that take over the world, or very strong AI. And sometimes they point to Moore's law as evidence of great progress. But I would say that in robotics we're lacking the high-performance commodity robot hardware that would let us make tremendous progress. Things like Baxter are great because they're cheap and they're safe, and they're a step in that direction. But I think we're going to look back 20 years from now and say, how did we make any progress with the robots we had at the time? We really need better robots that just get massively out there into the labs.

RUSS TEDRAKE: But--

TONY PRESCOTT: I was going to echo that, because I think robotics is massively interdisciplinary. And you've maybe got people leaning more towards control here. What we're trying to do in Sheffield Robotics is actually bring in more of the other disciplines-- in engineering, but also science and social science. Everybody has a potential contribution to make-- certainly electronic engineering, mechanical engineering. And soft robotics, I think, depends very much on new materials, on materials science.

And then these things have different control challenges. But sometimes the control problem is really simplified if you have the right material substrates. So if you can solve Giorgio's problem of having a powerful actuator, then his problem of building iCub is much simplified. So I think we have to think of robotics as this large, multi-disciplinary enterprise. And if we're going to build robots that are useful, you have to pull in all this expertise.

And we're interested in pulling in expertise from social science as well. Because I think one of the major problems that we will face in AI and in robotics is a kind of backlash, which is already happening. Do we really want these machines? And how are they going to change the world? Understanding what the impacts will be, and trying to build in safeguards against the negative impacts, is something we should work on.

PATRICK WINSTON: But Giorgio had on one of his slides that one of the reasons for doing all this was fun. And I wonder to what degree that is the motivation? Because all of you talked about how difficult the problems are, and some of them-- like the one you talked about, John, watching that policeman say, go ahead through the red light-- seem not insurmountable, but very tough, and sound like they would take five decades. So is the motivation largely that it's fun?

RUSS TEDRAKE: That's a big part of it. I mean, we've done some work on unsteady aerodynamics and the like, too. So we made robotic birds, and I tried to make robotic birds land on a perch. And then we had a small side project where we tried to show that the exact same technology could help a wind turbine be more efficient.

PATRICK WINSTON: Yeah

RUSS TEDRAKE: And that's the important problem. I could have easily started off and done some of the same work by saying I was going to make wind turbines more efficient. I was going to study pitch control. I'd be very serious about that. But I did it the other way around. I wanted to try to make a robot bird. And the win is, not only do I get excited going in, trying to make a bird fly for the first time instead of getting 2% more efficiency out of a wind turbine-- I go in more excited, but I also get to recruit the very best students in the world because of it. There are just so many good reasons to do that. Sometimes it makes me feel a little shallow, because the wind turbine's way more important than a robotic bird. But the fun is the choice.

PATRICK WINSTON: What about it, Giorgio? Do you do it for fun-- or, you have a huge group there. Somebody must be paying for all those people. Are they expecting applications in the near term?

GIORGIO METTA: Sorry.

PATRICK WINSTON: You have a huge group of people.

GIORGIO METTA: Yeah. I mean, the group is mainly funded internally by IIT, which is public funding for large groups, basically. And actually, the robotics program at IIT is even larger-- on the iCub itself there are four PIs working, plus collaborations with other people, like the IIT-MIT group. But the overall robotics program at IIT is about 250 people, I would say. That's certainly part of the reason why we've been able to go for a complicated platform. There was one group that actually participated in the DARPA Robotics Challenge, there are people doing quadrupeds, and there are people doing robotics for rehabilitation. So there are various things.

PATRICK WINSTON: So there must be princes and princesses of science back there somewhere who view this as a long-term investment that will have some--

GIORGIO METTA: It was in the scientific program of the Institute to invest in robotics. And one day they may look at the results and see whether we've done a good job, or decide to fire us all, whatever-- that might be the case. IIT started in 2006, and the scientific program included robotics. And with all the hype about robotics that started in recent years-- Google acquiring companies, this and that-- I think in hindsight it has been a good choice to be in robotics at that time. Just by sheer luck, probably.

RUSS TEDRAKE: To be clear, I think we're having fun but solving all the right problems. I think we just sort of-- yeah, we lucked out, maybe, a little bit. But we found a way to have fun and solve the right problems. So I don't feel that we're--

GIORGIO METTA: I think it's a combination of fun and the challenge, so not solving trivial things just because it's fun, but a combination of the two, seeing something as an unsolved problem.

STEFANIE TELLEX: So I try really hard to only work on things that are fun, and to spend as little time as possible on things that are not fun. And I don't think of it as a shallow thing. I think of it as a kind of resource optimization, because I'm about 1,000 times more productive when I'm having fun than when I'm not having fun. So even if something were more serious, I would get so much less done that it's just not worth it. It's better to do the fun thing and work the long hours because it's fun. So for me it's still obviously the right thing, because so much more gets done that way.

PATRICK WINSTON: Well, to put another twist on this: if you were a DARPA program manager, what would you do for the next round of progress in robotics? Do you have a sense of what ought to be next? Or maybe of what the flaws in previous programs have been?

STEFANIE TELLEX: So we've been talking to a DARPA program manager about what they should do next. And we got a seedling for a program to think about planning in really large state-action spaces-- the sort of middle part of my talk, where we were talking about the dime problem. So we wanted a planner that could find actions like picking up a dime-- small-scale actions-- but also large-scale things like unload the truck or clean up the warehouse. Because we thought that is what's needed to interpret natural language commands and interact with a person at their level of abstraction. So we have a seedling to work on that.

JOHN LEONARD: So if I could clone myself-- say I made four or five copies of myself-- one of them, if I were a DARPA program manager, I would set to doing Google for the physical world. So think about having an object-based understanding of the things and people and places in the environment, and being able to do the equivalent of internet search-- physical-world search-- combining perception with being able to go get objects. Like, wget for the physical world. That's what I would like to do.

TONY PRESCOTT: In the UK-- I don't know about DARPA-- the government made robotics and autonomous systems one of its eight great technologies a few years ago. And looking again now at the priorities, at what the disruptive technologies are, robotics, again, is coming out as one of the things that they think is important. So in terms of potential economic and societal impact, I think it's huge. And so if US funding agencies aren't doing it--

PATRICK WINSTON: What do you see those applications as being?

TONY PRESCOTT: I think-- well, the big one that interests me is assistive technology. In Europe, Japan, and I think the US, we're faced with aging-society issues. And I think assistive robotics in all sorts of ways is going to be--

PATRICK WINSTON: So you mean for home health care type of applications?

TONY PRESCOTT: Home health care-- prosthetics is already a massive growth area. But robots-- I mean, for my generation, I've looked at the statistics, and the number of people in the age group 80-plus is going to be 50% higher when I reach that age. And it's a huge burden on younger people to care for us. So, independence in my old age-- I would love to be supported by technology. You can do what you like with a computer, but a computer can't physically help somebody. And that's where robots are different. So that would be one of the things that excites me, and one of the reasons I'm interested in applications. I'm driven, I think, by the excitement of the research and building stuff. But I'm also motivated by the potential benefits of the applications we can make.

PATRICK WINSTON: I suppose if they're good enough, we won't need dishwashers, because they can do the dishes themselves. OK. So now we have a question from the audience, which, if I may paraphrase: have there been examples-- I know you think of them all the time, Tony-- where work on robotics in your respective activities has shed new light on a biological problem, or inspired a biological inquiry that wouldn't have happened without the kind of stuff you do?

RUSS TEDRAKE: I started off more as a biologist, I guess. I was in a computational neuroscience lab with Sebastian Seung. I tried to study a lot about how the brain works, how the motor system works, in the hopes that it would help me make better robots.

PATRICK WINSTON: Oh, maybe pause there. Did it?

RUSS TEDRAKE: Yeah-- it didn't. So I don't use that stuff right now. I mean, maybe one day again. But our hardware is very different-- our computational hardware is very different right now. I think there's sort of a race to understand intelligence, and maybe we'll converge again. But the things I write down for the robots today don't look anything like what I was learning back then about what the brain does. But that doesn't mean there's not tons of cross-pollination.

So we have a great project with a biologist at Harvard, Andy Biewener. Andy has been studying maneuvering flight in birds. He's instrumenting birds flying through dense obstacles. We're trying to make UAVs fly through dense obstacles. We're exchanging capabilities and ideas and going back and forth. The algorithms that we have written help him understand what birds are doing, and vice versa. So there's tons of exchange. But the code that I write to power the robots today, I think, is not quite what the brain is doing, nor should it be.

PATRICK WINSTON: Any other thoughts on that?

GIORGIO METTA: Well, we have experiments, which I meant to present today, where we've been working with neuroscientists on trying to bring some of the principles from neuroscience into the robot's construction-- let's say, not the physical robot, but the software. And I always find it difficult to find the level of abstraction that actually takes the motivation from neuroscience and manages to show something important for computation. I think I only have one example, or maybe two overall. And it always happens not by copying the brain's structure in detail, but by taking an idea of what information may be relevant for a certain task and trying to figure out solutions that use that information.

In particular, a couple of things we've done had to do with the involvement of motor control information in perception. And that's something that sort of paid off, at least in small experiments. Still, we can't compare with full-blown systems. We've done experiments in speech perception that outperformed systems that don't use motor information, but in limited settings. We don't know, if we built a full speech recognition system, whether we'd be better or worse than existing commercial systems. So it's still a long way to actually show that we managed to get something from the biological counterpart. Although maybe for the neuroscientists this explains something: where they didn't have a specific theory, at least we showed the advantages of that particular solution, the one that's being used by the brain.

TONY PRESCOTT: So I think we tend to forget in our history where our ideas came from. For instance, reinforcement learning-- Demis Hassabis explained last night how he's using it to play Atari computer games in these amazing systems he's developing. If you go back in the history of reinforcement learning, the key idea came from two psychologists, Rescorla and Wagner, developing a theory of classical conditioning. Then that got picked up in machine learning in the 1980s, where it got really developed and hugely accelerated. But then there was crossover back into neuroscience, with the dopamine theory and so on. And ideas about hierarchical reinforcement learning have been developed that are partly brain-inspired. So there is crossover, and sometimes we may lose track of how much crossover there is.
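
The Rescorla-Wagner idea is compact enough to sketch: associative strength moves toward the received reward by a fraction of the prediction error, the same error signal that TD learning and the dopamine theory later built on. A toy acquisition run in Python, with a made-up learning rate:

```python
def rescorla_wagner(trials, alpha=0.3, reward=1.0):
    """V is the predicted reward for a conditioned stimulus; each trial
    nudges it by alpha times the prediction error (reward - V)."""
    V = 0.0
    history = []
    for _ in range(trials):
        V += alpha * (reward - V)  # delta-rule update
        history.append(round(V, 3))
    return history

# Acquisition curve: the prediction climbs toward the reward of 1.0.
print(rescorla_wagner(10))
```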

PATRICK WINSTON: I have another comment from the audience, and I see we are under some pressure to not drone on for the rest of the evening. The comment is, I think, relevant to the last topic I wanted to bring up, which is the question of ethics in all of this. And the comment is: why should we make robots that are good at doing things in the household, taking care of the elderly, and so on, when the rest of AI is going hell-bent to put a lot of people out of work-- people who could perhaps use those jobs? But in any event, there's been a lot of concern, perhaps spawned by some of the films like Ex Machina and so on, that robots will take over. And I don't think they are going to take over in that sense very soon. But do you see, do you worry about, do you think about any dangers of the kinds of technology you're working on, in terms of economic dislocation or battlefield robots or anything of that sort that might come about as a consequence of what you do?

RUSS TEDRAKE: I think it's inevitable. I think we shouldn't fear it, but we have to be conscious of it. I mean, would you go back to the 1980s and avoid the invention of the personal computer because it was going to change the way people had to do work? Of course you wouldn't. But at the same time, that changed the way people had to do work. And it was painful for a big portion of the population, but ultimately it was good for society. I think robots will have the same sort of effect. It's going to raise the bar on what people are capable of doing. It's going to raise the bar on what people have to do to be successful in their jobs. And it might be painful, but I think it's super important for society to keep moving on it.

PATRICK WINSTON: Why, again, is it super important?

RUSS TEDRAKE: Because it's going to advance what we're capable of as a society. It's going to make us ultimately more productive.

PATRICK WINSTON: Other thoughts?

TONY PRESCOTT: I agree. I mean, I think the people that are worrying about jobs being taken by robots aren't the people that want to do those jobs. Because most of those jobs are ones that it's very hard to get anyone to do. They're low-paid and they're unpleasant. And we're automating the dull and dreary aspects of human existence. And that gives people the opportunity to have more fulfilling lives.

Now, the problem isn't that we're doing this great work to get robots or machines to do these things for us. It's that, as a society, we're not thinking about how we adjust to that, how we make sure people will have fulfilling lives and will be supported materially to enjoy that prosperity. So I think it's disruptive in many ways, and it's going to be disruptive politically. And we're going to have to adapt. Because if you're not working, then you have to be supported to enjoy your life. And maybe that means a change in the political system.

So those are questions perhaps not for us. But as the technologists, I think we have to be prepared to admit that what we're working on are really disruptive systems, and they are going to have these large impacts. And people are waking up to that. And if we wave our hands and say, don't worry, I think we're not going to be taken seriously.

PATRICK WINSTON: Other thoughts?

JOHN LEONARD: I see how these are really important questions. And I have mixed emotions-- I'm really torn. I came from a family that was affected by unemployment in the 1970s. So I'm very sympathetic to the potential for losing jobs.

At CSAIL we've had this wonderful discussion with some economists at MIT over the last few years-- Frank Levy, David Autor, Erik Brynjolfsson, Andy McAfee-- and I've learned a lot from them. And they vary in their views. I am more along the lines of someone like David Autor, an economist who thinks that we shouldn't fear too rapid a replacement by robots. If you look at the data, the things that are hard for robots are still hard. But on the other hand, longer term, we do have to be mindful, as a society, that, as Russ said, things like this are going to happen. For the short-term introduction, look at, for example, Kiva and how they've changed the way a warehouse works. Replacing humans completely with robots-- say, for gardening or agriculture-- is really hard to do, because the problems are so hard. But if you rethink the task to have humans and robots working together, Kiva's a good example of how you actually can change things. And so that's where I think the short term is going to come from: humans and robots working together. That's why I think HRI is such an important topic.

PATRICK WINSTON: Well, I don't know if you're running for president, but be that as it may, do any of you have a one-minute closing statement you'd like to make?

JOHN LEONARD: Well, I'll go to sort of the deep learning thing. I think in robotics we have, potentially, a coming divide between the folks who believe more in data-driven learning methods and those who believe more in models. And I'm a believer more on the model-based side-- we don't have enough data and enough systems. But I do fear that, for certain classes of problems, he or she who has the data may win-- Google or Facebook have such massive amounts of data for certain problems that academics can't compete. So I do feel there's a place for the professor and the seven grad students and a couple of post-docs. But you do have to be careful in terms of problem selection, that you're not going right up against one of these data-machine companies.

RUSS TEDRAKE: I was going to say that if I were looking at humans right now and trying to inform the robots, I wouldn't look at center-out reaching movements or nominal walking or things like this. I'd be pushing for the corner cases. I'd be trying to really understand the performance of biological intelligence in the screw cases, the cases where they didn't have a lot of prior data-- the once-in-a-lifetime experiences. How did natural intelligence respond? That's, I think, a grand challenge for us on the computational intelligence side. And maybe there's a lot to learn.

PATRICK WINSTON: And the grand challenge for me, to conclude all this, has to do with what it would really take to make a robot humanoid. And I've been thinking a lot about that recently in connection with self-awareness-- having the robot understand the story of what's going on throughout the day, having it able to use previous experiences to guide its future ones, and so on. So there's a lot to be done, that's for sure. And I'm sure we'll be working together as time goes on. Now I'd just like to thank the panelists and conclude the evening.
