Intel Dir. of Research Kicks Off Developer Forum with Opening Address

The following is a transcript of Intel Corporation Vice President and Director of Research David Tennenhouse's opening address at the Intel Developer Forum being held this week in San Jose. A transcript of a discussion between Tennenhouse and UC Berkeley professor David Culler is also included.

Intel Developer Forum, Fall 2001
Opening Address by David Tennenhouse
Monday, Aug. 27, 2001
Civic Auditorium, San Jose, Calif.

ANNOUNCER: Ladies and gentlemen, please welcome vice president and director of research, Intel Corporation, Dave Tennenhouse. (Applause.)

DAVID TENNENHOUSE: Thank you very much. It's great to be here at IDF. Now, as you just saw, it's a little bit dangerous to go about making predictions for the future. But the great thing about IDF is that this developer community has never been about just sitting back and making predictions. This group of people is about making the future happen -- creating the future. Now, six months ago at IDF, Craig Barrett was here. He had a chance to tell you all about the great things we're doing to keep Moore's Law driving forward. And he told you we had a commitment to continue to invest in research and development so we can come out of this downturn with great new products and services and take the industry forward. I'm one of the lucky people who gets to think about how to spend some of the $4 billion we'll be putting into R&D this year. As we think about spending that, something that we all have to realize is that Intel is a very different company than it was just a few years ago. With our four architectures, we now have multibillion-dollar businesses not just in PCs, but also in servers, networking, wireless networking, and a range of new products and services. And Intel's R&D organization, specifically our labs, has had to change to accommodate that. In fact, those labs have been working in areas like networking, software, et cetera, for a very long time. 
I'm willing to bet that many of the people in this room have actually worked with people from our labs on some of these enabling standards shown on this slide. We've done this together with you folks to build the ecosystems that have enabled the growth of these parts of the industry, and we're going to continue doing that. But we're also starting to think about how this is all playing out, where the future is going, where we're headed 5, 10, 15, 20 years down the road. As an industry, we really started way back in the '40s with an agenda of just crunching numbers -- numeric computing. We quickly moved past this in the '50s to crunching text -- symbolic computing. I remember punch cards, and I remember that feeling of being disconnected from the computer. You'd put the punch cards in, you'd get your output. But in the '60s the research community made a deliberate change towards a new mode of computation called interactive computing. We started with timesharing, moved on towards personal computers, and most recently to the extended PC, working towards one computer per person. This wasn't an accident. It was a very deliberate agenda set by the gentleman illustrated here, J.C.R. Licklider. He's sort of the unsung hero of interactive computing. He was the person who realized that if we wanted to empower people to use computers, we had to get away from that intermediation and put the people right in the middle. So he moved forward with that human-centered vision where people had their own computers, they had the information they wanted to put in, and they could directly interact with those machines. It seems obvious to us now, but at the time it was a huge step forward. Now, we've been working along at this agenda for about 40 years. We're sort of just getting to the one computer per person that I've been talking about. And one of the things that is happening is we're starting to feel a little bit of overload. 
Interacting with all our computers is starting to feel like being one of those chess champions who goes around playing six or ten games of chess at a time, faster and faster and faster, to the point where it's all getting just a little bit confusing. So as we think about a world where we don't just go to ten computers per person but maybe hundreds or thousands, we need to make some changes. One is getting the humans on top. Instead of putting the human beings in the middle, shuttling the information between the real world and the computers, we get the human beings above the computers, get the computers directly connected to the physical world about them, and get the computers anticipating what the users might want to do next and sometimes even taking action on their behalf. And we call that proactive computing -- getting the computers to be a little bit more proactive. So as we go back to this chart and we think about the trend line we've been on with interactive computing and where it's been headed, I think it's time to not just change the architecture but really steepen the curve and think not just about how we're going to get to ten computers per person but hundreds of computers and thousands of computers per person. We call that proactive computing, and that's what I'm going to talk about today. Before I go into it, I want to draw your attention to one really important facet. This community of developers has never really been about mainframes, servers, PCs, PDAs, or any one computing instrument. It's really been about personal computing, which has been a key to personal empowerment. That's the real issue. When I have my computer and I get to decide what's going on with my computer, or my ten computers or hundred or thousands, I get to decide what software goes on them. And that lets me drive innovation. Anybody anywhere on the Net can invent something cool and new. I can go load it on my computer and get going with it right away. 
Eventually, if enough of us do it, the IT barons start supporting it, but I can get going right away. That sense of human empowerment has really helped drive the growth of our industry, and it's key that we take that empowerment in personal computing with us as we drive forward into this proactive era. When we talk about being proactive, the big difference from interactive computing is that today, computers are basically either waiting for us or we're waiting for them. It's very deliberate lockstep. In a proactive world, the computers are out front, anticipating your needs and sometimes acting on your behalf. Now, you may wonder, why am I out here talking about hundreds and thousands of computers per person? That seems pretty far out. Why am I even talking about tens? Well, first off, I suspect that many of you have three or four right here. You've got a laptop, you've got a cell phone, you've got a PDA. Not only are we the I/O devices for these things, we're the chauffeurs. And in fact, if you think about where we're going with 3G cellular telephony, there are actually going to be four processors embedded inside each of these cell phones. As we worked towards this world of one computer per person, we created a huge industry. In fact, we can be pretty proud that the industry as a whole will probably ship on the order of 200 million parts this year. That's a pretty big number, pretty impressive -- until you realize that in the embedded computing space we're going to ship about eight and a half billion parts. Those are serious numbers. And that's a huge opportunity, because those embedded computers are just dying to be networked, and we can do that and move forward into this proactive future very quickly. As we think about the steps to really making computing proactive, we've identified three that we think are going to be key. The first we refer to as getting physical. 
We've got to get all those computers directly connected to the physical world around them so they can do the I/O -- so human beings don't have to. The next is what we're referring to as getting real, and by that I mean getting these computers running in real time or even ahead of real time, anticipating our needs. Now, there are actually some great examples of this already out there, where computers are, in fact, saving human lives by doing so. Airbags are one great example; antilock braking is another. You probably haven't been thinking about it that way, but that's sort of early proactive computing, where these systems are out there anticipating that you may need that airbag, anticipating that you may need that brake correction, and making the change. Now, try thinking about doing this many, many times over. Think about all of those embedded computers and realize that as you network them, we're going to be pushing the Internet not just into every different location, but, within each location, deep into all the embedded platforms in that space. And that's essentially a hundredfold increase in the size of the Internet that we can achieve above and beyond the growth of the Internet that we're already anticipating. Now, when we talk about getting out, what we're really talking about is getting computers out of the office environments we're in -- great environments -- and extending their role into the world around us and into a whole bunch of different application domains. We're going to talk about that a little bit later. In the next segments of this talk, what I really want you to see is that we have a clear vision of proactive computing. Being Intel, we've been very disciplined about enumerating the problems, the obstacles to getting towards that vision. And we are working on those problems with some of the very best people, both within our own laboratories and our extensive network of university connections worldwide. 
You're going to see charts flip up at a really rapid pace. And we understand that you're not actually going to be able to follow all of them. I want you to get a sense of the breadth of the agenda, the number of things we're doing, the number of people involved, and how much help we're getting with this. All of the PowerPoint* materials will be out there on the Web for you to look at in more detail. We're expecting you to do your homework afterwards. So, let's get to work on those problems. The very first of these is getting physical. As I said, that means we're going to have to connect our computers to the physical world around us, which means developing a whole new suite of sensors and actuators -- sensors that are going to be able to sense the things going on around them, actuators that will actually allow our computers to change the world around them. Our MEMS research activity is just one example of the new kinds of sensors we can develop, and in this case we're going to use Intel's precision capability to really drive MEMS to its limits. Our precision biology activity, which is about biochips, is another example. We don't want computers to just get in touch with and sense the dry, solid-state things we're used to sensing, but also things that are wet, whether it's biological materials, organic chemistry, or whatever. And this really opens up every aspect of wetware for us, everything from health and pharmaceuticals to chemicals, refineries, et cetera -- whole new ranges of places we can take our computers. Now, once we've got those computers connected to the world around them, we need to get them networked. And we're going to have to do that networking in some very different ways if we're really going to think about this kind of hundredfold increase in the size of the Internet. 
We've really been working on this already in some ways with the wireless agenda, because, clearly, one mantra of that sort of deep networking will have to be "no new wires." Intel has been really leading the charge on 802.11. We have a lot of products out there, and we're offering a lot of new software, thinking about how to make that work seamlessly. You'll hear more about that this week. But we've also been thinking about how to make that easy. And standards like Universal Plug and Play are going to be key, so that it's not just no new wires, but no new setup screens. Now, that's the work that's already been underway for quite a while. From a research perspective, we're pushing further forward with new types of networks. Ultra-wideband radio is a key example of a new type of networking technology that will give us even more density, more bandwidth, more ways of going about our business. We're enabling that with new types of antennas, and with new ways of doing radio in CMOS. And then there are new ways of thinking about avoiding those setup screens through ad hoc networks. Ad hoc networking is going to be a key element, and we're going to have a demo of that type of technology in just a few minutes. As you move up the stack, you've now got those devices connected to the world, and you've got them networked. We now have to think about how we enable macroprocessing. This is about programming and storage in the large. A key barrier is heterogeneity. This software has got to work across a large range of platforms, networks, et cetera. The Intel® Integrated Performance Primitives (IPP) are something you'll be able to hear about later this week. Those performance primitives make sure the core kernels of your software run really efficiently no matter which of our architectures you're working on, so that those core innermost loops can be ported from architecture to architecture. IPP helps you port that highly optimized code from place to place. 
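The idea behind portable performance primitives -- one stable kernel interface, with the fastest implementation chosen per platform -- can be sketched as follows. This is a hypothetical illustration in Python, not the real IPP API; the function names and dispatch table are invented for the example.

```python
# Sketch of the "optimized kernel, portable interface" idea: the
# application calls one name, and the library dispatches to the best
# implementation for the architecture it is running on.

def dot_generic(a, b):
    """Portable fallback kernel."""
    return sum(x * y for x, y in zip(a, b))

def dot_tuned(a, b):
    """Stand-in for an architecture-tuned kernel (in a real library,
    this would be hand-optimized with SIMD intrinsics or assembly)."""
    return sum(x * y for x, y in zip(a, b))

# Dispatch table mapping architecture names to kernel implementations.
KERNELS = {"generic": dot_generic, "x86-tuned": dot_tuned}

def dot(a, b, arch="generic"):
    """The stable, portable entry point the application codes against."""
    return KERNELS.get(arch, dot_generic)(a, b)

print(dot([1, 2, 3], [4, 5, 6]))  # 32 on any architecture
```

The point of the design is that the innermost loop can be re-optimized per platform without the calling code changing at all.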
If you really want to deal with heterogeneity, though, you have to think about all of the implications of XML and the Web, which is really the thing that's going to allow you to move across organizations, across software platforms, across many different types of enterprise architectures and software. Again, our labs have been very involved here in developing RosettaNet* and other XML-based systems. And I really believe XML is going to be a key enabler. It becomes a building block that we can all build on to get easy exchange across heterogeneous systems. Now, not only are you going to want to move those innermost kernels around and move your static data around, but you're going to want to move running programs around from platform to platform. So at our research lab at the University of Washington, we've been working with university researchers on a program called one.world to enable process migration, so programs, while they're running, can be moved across different types of platforms as you move from place to place. That gives you a software platform for programming in the large. Let's think about storage in the large -- and, in fact, storage all the way from the small to the large. We're finding this a really exciting space, particularly because there's a revolution going on in storage technology, and nobody's really noticed. For years, folks like myself have been wondering, when are we going to be able to go solid state and get rid of those spinning disks in all of our systems? And you know what? I've just learned to live with it. These disk people are just too good. Those mechanical engineers are phenomenal. The disks get better and better. And so I think servers and PCs are going to have those spinning disks in them for a long time. But just when you decide to live with it is when something really interesting happens. 
For every one of those PCs and servers, there are going to be ten, a hundred, a thousand PDAs and embedded computers, et cetera. And those PDAs and cell phones don't have spinning disks. They're based on new types of storage technology -- in fact, a large fraction of them on Intel's flash storage technology. And it's not just about flash. We have whole new generations of storage: we recently announced work on our Ovonics technology and our polymer storage technology. So there's a whole train of research leading to new storage technologies. And the interesting challenge, I think, is to now think about the software to go with those. Ninety-eight percent of all of the nodes are going to have these new types of technologies, and we can create whole new types of file systems to go with them. And then we can start thinking about how we knit those together with our personal servers, and how we knit all of those together, using peer-to-peer computing, into totally global, persistent, secure file stores. Our colleagues at the University of California, Berkeley, are already active and working on that problem. And I think it's an exciting revolution that really isn't getting noticed. So now that you can do the programming and storage in the large, one of the things you're going to want to do is effectively query this whole sensor network. And you're not really going to want to query all of those embedded nodes explicitly, one at a time. You'd like to think of this much more like a database query. When you do a database query, you get back the answers. You don't get back the addresses of where all the answers were. You often don't even know where they were stored. And we want to make that sort of capability happen, but with live data. So think of the Internet as a sort of global live database, with live data coming from all these sensors spread all over it, and now you can just query it much the way you would a database. 
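The "query the sensor network like a database" idea can be sketched as follows. This is a hypothetical illustration with simulated readings, not the actual research system: the caller states a predicate and an aggregate and gets back an answer, never the addresses of the nodes that supplied it.

```python
# Sketch of a database-style query over live sensor data: the
# application says WHAT it wants; it never names individual nodes.
# All node IDs and readings below are simulated.
import random

random.seed(42)

def sensor_readings():
    """Simulate live readings arriving from anonymous embedded nodes."""
    for node_id in range(100):
        yield {"node": node_id,
               "temp_c": random.uniform(15, 35),   # temperature, Celsius
               "light": random.uniform(0, 1)}      # normalized light level

def query(predicate, field):
    """Database-style aggregate: returns an answer, not node addresses."""
    values = [r[field] for r in sensor_readings() if predicate(r)]
    return sum(values) / len(values) if values else None

# "Average temperature wherever it is dark" -- no node is ever named,
# and the caller neither knows nor cares which nodes answered.
avg = query(lambda r: r["light"] < 0.2, "temp_c")
print(round(avg, 1))
```

In a real deployment the aggregation would be pushed into the network itself, so each node forwards a partial result rather than a raw reading.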
This type of research is underway at the University of California, Berkeley, in a project called Telegraph. As we build on this macroprocessing layer and move up the stack, we're going to have to learn how to deal with uncertainty. Now, at Intel, we actually learned quite a long time ago that the world is really statistical, even though we'd like it to be deterministic. The world is a little less predictable than we would like. The physics of our transistors is grounded in quantum mechanics, which is itself grounded in statistical techniques. The manufacturing we do at Intel is very heavily based on statistical techniques, and we use them to control all aspects of our processes. It's really one of our core competencies. What we're seeing today, though, is that the use of statistical methods is going to have to spread all the way through the stack. Think about those sensors. The low-order bit on those sensors isn't very predictable -- and there's always a low-order bit. That means there's always a little bit of noise in the signal. The outputs to the actuators are not going to control them precisely. We have to learn to deal with that noise and deal with that uncertainty. And if you look at work going on in the research community, you'll find the most exciting algorithmic work is built on statistical methods. And if you think about it, out on the Web, when you go to do a query, it's not all that deterministic. You go out and do the query, and you don't really care exactly what server the answer came from. And if it comes from a different server next time around, that's okay -- especially if you get better performance the next time around. So as a community, I think we're going to have to start learning how to use these statistical methods much more effectively. And once you make that observation and go look at what people are doing, you find that they're already heading there. 
The people that are actually winning, that are getting best-in-class performance in areas like computer vision, speech, and data mining -- look underneath. They're all using Hidden Markov Models, Bayesian models, and other statistical techniques. So this vision is already getting underway. Within Intel, we're trying to spread this competency throughout the company, and I encourage you to look throughout your organizations and see what you're doing about these sorts of methods. Because the prediction we're going to make here, which is a little bit dangerous, is that computer science may be on the verge of going through a transformation similar to what happened to the physicists in the '20s. In the '20s, if you were a physicist, you had a nice simple world governed by the laws of physics we all learned in high school. But very quickly it went through a revolution to quantum physics. And we might go through a similar revolution in computer science, from finite state machines and determinism to Markov models and statistical methods. As we do this, we'll gain a command of how to work with large numbers of computers. So it's one of these hurdles we have to jump, and once we do, it's going to make our lives so much easier. As we're dealing with these large numbers of computers and we have these methods, we're then going to want to really hit the meat of proactive computing. Remember, I said it was about computers anticipating your needs and, where appropriate, acting on your behalf. The first step, anticipating your needs, is another place where we have some experience. We've been doing speculative execution in our processors for quite a while, anticipating where the program is going to go with techniques like branch prediction, and our Itanium™ processor family takes that to a new level. 
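The anticipation behind branch prediction can be sketched with the classic two-bit saturating-counter predictor from the computer architecture literature. This is a textbook illustration, not a model of any particular Intel processor: the predictor guesses the likely branch outcome from recent history so the pipeline can speculate ahead.

```python
# Classic two-bit saturating-counter branch predictor: two wrong
# guesses in a row are needed to flip the prediction, so a loop
# branch that is almost always taken is mispredicted only rarely.

class TwoBitPredictor:
    # States 0, 1 predict "not taken"; states 2, 3 predict "taken".
    def __init__(self):
        self.state = 2  # start weakly "taken"

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

# A loop branch: taken nine times, then falls through once at exit.
outcomes = [True] * 9 + [False]
p = TwoBitPredictor()
correct = 0
for taken in outcomes:
    correct += (p.predict() == taken)
    p.update(taken)
print(correct, "of", len(outcomes))  # 9 of 10
```

The same anticipate-then-verify pattern generalizes: predict the likely future, act on the prediction early, and fall back cheaply when the prediction is wrong.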
If you look around the Web, some of the very best Web caching engines and search engines actually pre-fetch the next pages you might go to, anticipating your requirements by looking at the links in the pages you're currently viewing. We can take that to a whole other level, and we're going to do that. Now, in addition to doing that anticipation, we're also going to have to let the machines learn a lot more about the environment, because we can't really expect them to learn everything from our entering and configuring information into them. Machine learning is an area where, if you had asked me a few years ago, as a director of research, whether I would suggest people go look at it, I would have said not on your life. It's an area that I think got a bad rap, because people tried to do machine learning from first principles and it didn't work. What we're seeing today is a renaissance in machine learning that is exciting. People are taking the statistical methods I talked about before and using them to help the machines manage the uncertainty they're encountering in the world, and learn by managing that uncertainty. And I want to give you just a brief demonstration of how that works. This is going to be an example using robots. Essentially, think of this as a demonstration of one robot in a room. It has a map of the room but doesn't know where it is in the room. So initially, every place in the room has equal probability, and that will be illustrated by a bunch of little red dots on the screen. Every time the robot bumps into something, it learns a little more about where it is, and the degree of uncertainty reduces: a whole bunch of the dots go away. So it's still uncertain about exactly where it is, but it's reduced the range of uncertainty. And it will keep reducing that range until it knows exactly where it is. VIDEO: Dealing with uncertainty. There is a robot in a room for which it has a complete map. 
The room also contains four columns. The robot is indicated by a red dot. The robot will move around the room and try to discover its position by using its sonar unit. Probability distributions are indicated by flashing red dust particles. When it discovers further information about its location, it will change its probability distribution. Ultimately, the robot achieves its goal to exit the room. DAVID TENNENHOUSE: So what's really interesting about this work is not just that one machine can learn about its environment, but that machines can exchange information about what they've learned with each other. And our colleagues at the University of Washington and Carnegie Mellon University have really been making a lot of progress with this work. We encourage you to visit their Web site and see where this is going. Once you've mastered that uncertainty and anticipated people's needs, the next question is, how do we bridge the gap to acting on them, and do that very carefully, under human supervision and in a predictable way? Now, we think of that as closing the loop, because as soon as you make that decision, you're really closing a feedback loop that is going to impact all of your inputs. And as many of you know, positive feedback can be a very dangerous thing. So you need to understand very carefully how you're going to close the loops. We've been doing a lot of work with our colleagues at the MIT Sloan School on modeling Intel's supply chain, which, if you think about it, is a series of nested control loops that need to be monitored and managed very carefully. And we need to understand any decisions that have been made before the results are acted on. That's sort of the business of closing all the loops on one chain. Now, I want you to think about that on Internet scale, and think about using agents. 
So what you're really talking about now is software agents that are operating on all these computers and acting in a predictable manner on your behalf. And the interesting question is, what's the limit? How many agents could I have working for me at any one time? Today, people typically have perhaps one travel agent out there searching the Web for a good travel deal, and maybe another couple of agents on eBay watching for a great price on an auction they're interested in. But they really don't have very many agents. And the reason is that each agent today makes a demand on their attention, and the most precious resource is human time. But if we could relieve that distraction and relieve the pressure on that human resource, the fundamental resources these agents occupy are just a little bit of processing, a little bit of networking, and a little bit of storage. And we know where the prices on all those are going. So we can start thinking about a future world where you have billions of users, each with millions of agents outstanding and working for them at the same time. We're talking about millions of billions of software agents. And this means a world in which we're not just trying to close the loops of one supply chain, but simultaneously closing the loops of many supply chains, making sure that these agents are out there negotiating with each other and, again, acting in a way that will produce predictable results. We've been working with our colleagues at the Santa Fe Institute on these complex networks. Again, it's an area where, some number of years ago, progress was slow. However, we're now seeing a huge ramp-up in progress in this area, making this very exciting work. Finally, as we get up to the top of the stack, I talked about human empowerment -- making it personal. And that's got to be key. Just a moment ago, I talked about distraction-free computing. 
You've got to make sure that all those computers and agents aren't turning you into one of those chess champions. I'm certainly not one, and even if I knew how to be, I wouldn't want to do it all day. We also have to think about how we're going to manage the security and privacy concerns -- and not just how we're going to do it in a single homogeneous society, but how we're going to do it in a world where there are different cultures, and different people have different expectations with respect to security and privacy. And, finally, we really need to be thorough and study what people do with all of these devices, what people do with computers. Intel Architecture Labs actually has a world-leading group of ethnographers -- social scientists who spend their time studying people in small numbers, understanding very carefully what they're doing with technology, why they're doing it, how it can help them in their lives, how it can improve their quality of life. And this is a key aspect. As I said, this whole agenda of proactive computing has to be about continuing the revolution in human empowerment that we started collectively 20 years ago. And we're very committed to doing that. Earlier, I mentioned that we would talk a little bit more about ad hoc networking. To help me with that, I have invited the person who really pioneered cluster-based computing and has now moved on to work in ad hoc networking. I'd like you to join me in welcoming professor David Culler from the University of California, Berkeley. (Applause.) DAVID TENNENHOUSE: Nice to see you, David. Hope you had a great vacation. DAVID CULLER: Glad to be here. DAVID TENNENHOUSE: What have you got to show us? DAVID CULLER: I thought I'd start by showing you the kind of building block that we use for a lot of our experiments. This is about a one-square-inch device that's a complete computer system, with its storage and whatnot, and, most importantly, a low-power radio network interface. 
And we've built an operating system, TinyOS, that runs on those devices. We augment those nodes with a small sensor card for the particular application. DAVID TENNENHOUSE: Sounds great. So you can just build an ad hoc network out of these things? DAVID CULLER: That's a key objective as we go to this vision of thousands of tiny devices. To illustrate that, I brought some of the students from the lab down here with me. They're going to walk out to the stage bringing these tiny devices, and they'll just flip them on when they arrive. What you'll see on the display is that as those arrive, they'll show up on the network, and the connectivity between them will appear as well. The red lines indicate who can hear whom. And the green lines are the links that the network has decided to use for routing information back to this base station, which you can see is node 14 in the graph. You can see it just sort of builds itself as the folks come here and join us. DAVID TENNENHOUSE: Don't we have to configure the IP addresses and the DNS entries and all that? DAVID CULLER: That wouldn't be much fun for a thousand nodes or so. It really is important that the network assemble itself. DAVID TENNENHOUSE: This is great. What can you do with a network like this? DAVID CULLER: These have a fairly primitive set of sensors: they can sense light and temperature, and they can sense the network and their own battery strength. What you're seeing here in the background is the light intensity here at the stage. So this is a picture of how the network sees the stage that we're on. And we can illustrate that for you. Would you mind taking the stage lights down? We'll see that display darken in response. So there you see that the network has discovered that it's dark here. Let's go ahead and bring one or two lights up so you can see that it really is able to detect that. 
And in a real application, there are lots of different kinds of fields that you might want to sense in a mode like this, distributed over some physical environment. DAVID TENNENHOUSE: This is absolutely great. So we can turn these things on, build up these networks -- truly wonderful. DAVID CULLER: We'll go ahead and shut them off, and you'll see that -- and thanks, guys -- the network will just take itself apart. And if you turned it back on again, it would rebuild. DAVID TENNENHOUSE: That's just great. Let's have a hand for these folks from the University of California. (Applause.) DAVID TENNENHOUSE: You know, David, that's really fun, and it kind of illustrates ad hoc networks. That was, by the way, a pretty gutsy demo to do live in front of the audience. However, the thing is that it's only ten nodes. DAVID CULLER: Yeah. DAVID TENNENHOUSE: And those nodes are pretty big. And you just leave me and go away on vacation. This is Intel. We expect better. DAVID CULLER: Well, funny you should ask. We have a version we've squeezed down to just about the size of a quarter, so I'd like to show you that one. But you know, when you mentioned this opportunity to me a few weeks ago, you said there were going to be thousands of people here. And, you know, we only have so many students, so you wouldn't mind if I asked a few of your friends to help, would you? DAVID TENNENHOUSE: No. Just because they paid doesn't mean they can't do some work for us. DAVID CULLER (speaking to audience): If you'd be kind enough to reach down towards your right knee, just under the seat there, you'll grasp a piece of plastic like this -- get ahold of that, it's just under your right knee on the outside, and you can just kind of pick it up. Now, there are about 800-and-some of these nodes out there, going almost to the back of this central section. So not quite everybody has them. DAVID TENNENHOUSE: If you're in the outside ring, you probably don't, then. The cheap seats. 
DAVID CULLER: So the first thing we did is we gave those a little message to wake them all up. Now let's go ahead and build a network on the thousand or so nodes that are out there, much like you saw. And to give me a little help, what you'll see is when that message propagates through there, a light will go on. So when you see the red light go on, would you just raise your hand, and that will let us know that your node just joined the network. We'll watch to see that. Go ahead and raise them high so that we can see when that light goes.
DAVID TENNENHOUSE: I think they've all joined the network now.
DAVID CULLER: We did that a little too quickly. I'm sorry. I was going to have you bring your hand down when they go. You want to try that one more time?
DAVID TENNENHOUSE: Sure.
DAVID CULLER: There you go. Okay. So there, we just built a network of close to a thousand nodes. And of course each of those nodes is helping the ones way out there get information back up here to the stage.
DAVID TENNENHOUSE: Tremendous. Is this the biggest ad hoc network you've ever built?
DAVID CULLER: Without question, it's probably the biggest one ever built, and we've never been able to experiment with it amid people and all of the noise of an uncontrolled environment like this.
DAVID TENNENHOUSE: Something I'm kind of curious about is what happens if people stand up and sit down. Do you think maybe we could do it again and get people to stand up when their light comes on?
DAVID CULLER: Let's just give that a try. The actual propagation only takes a fraction of a second, so we've got a version that will take about five seconds per hop, so you'd have a chance to see it. If your red light is on, stand up now.
DAVID TENNENHOUSE: Go ahead and stand up. This is for the record, folks.
DAVID CULLER: And stay there until the light goes off. That's the second hop. So you're at level two in the network, the second hop there.
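The staged wake-up just demonstrated -- a message rippling outward one radio hop at a time, with each node learning its "level" from when it first hears the message -- amounts to a simple flood. The sketch below is an invented simulation of that behavior, not code from the demo:

```python
def flood(neighbors, base):
    """Simulate a flood from the base station: each node records the
    hop at which it first hears the message -- its level in the network."""
    level = {base: 0}
    frontier = [base]
    while frontier:
        nxt = []
        for node in frontier:
            for nbr in neighbors.get(node, ()):
                if nbr not in level:        # first time this node hears it
                    level[nbr] = level[node] + 1
                    nxt.append(nbr)
        frontier = nxt                      # the next "wave" of red lights
    return level

# A toy line-of-sight chain: base -> A -> B -> C.
neighbors = {"base": ["A"], "A": ["base", "B"], "B": ["A", "C"], "C": ["B"]}
levels = flood(neighbors, "base")
# levels == {"base": 0, "A": 1, "B": 2, "C": 3}
```

Each "wave" of raised hands in the auditorium corresponds to one iteration of the `while` loop: all nodes at the same hop distance light up together.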
DAVID TENNENHOUSE: Tremendous. Absolutely great. If we did this one more time, we could get a wave here, I'm convinced of it.
DAVID CULLER: It's a little more complicated.
DAVID TENNENHOUSE: It's a little more complicated than that? What's the issue, David?
DAVID CULLER: As this network forms, much like the Internet itself, it folds back on itself with lots of interesting things, and we're only beginning to understand how these low-power wireless networks really behave.
DAVID TENNENHOUSE: And really, one of the great things is we don't actually have to worry about exactly what the topology is. We don't actually have addresses set for any of these nodes or anything else, right?
DAVID CULLER: That's right. Whatever physical communication works, that's what these little devices will use. One last thing: they're a gift for the audience. We're going to switch them over now, since we can program the network, and they'll become like little Furbies -- when they notice other nodes near them, they'll start blinking -- and you're welcome to hang on to those. If you decide you don't want to take them with you, some of the students will show up and you can hand them back. We have a few more experiments we'd be happy to do.
DAVID TENNENHOUSE: Thank you very much, David. Let's have a big hand for the folks from the University of California, Berkeley. (Applause.)
DAVID TENNENHOUSE: And I'm very pleased to say that David is, in fact, both a professor at Berkeley and the director of our new Intel research lab that's up there right adjacent to the campus. Now, I talked about these three steps to getting proactive, and the key third step is getting out: getting computers out of our traditional environments and into some new, interesting spaces and some new, interesting markets. Where we've been as a community, which has been pretty successful, has been in areas like office productivity, supply chain, and e-Business.
And I don't want to discourage anybody from continuing to build for growth in those businesses. We have barely gotten started on the possibilities of e-Business, for example. So there's a lot more room left in those markets. However, I do think that some of us should start thinking about some of these new spaces we can go into that appear particularly promising and have this aspect of getting computers out of the office and into other environments. One example is home networks. The home is an area that is just dying to be networked. Think about it. You've got all these consumer electronics that are digital on the inside -- they all use digital chips, they all have software on the inside -- with analog connectors on the outside. They're just dying to be networked. And now we're getting the key enablers: broadband is coming into the home in the U.S., and we're getting across the ten percent penetration level, just at the point where that curve is getting ready to take off. As soon as people get broadband, they bring in 802.11, so they have that network with no more wires. At the last IDF, we talked to you about the extended PC concept, which really allows you to put all this together. You add Universal Plug and Play into that mix and you have an explosive mixture. This industry is ready to pop, ready to go proactive. And you should just get out there and be developing that market. As we started thinking about some other areas where we'd like to get proactive, there's perhaps a little bit more work to do. And I'd like you to hear a little bit about some of the work we're looking at in the health industry from two people who are leading the charge for us there. One is Andy Berlin, who is leading our precision biology effort. And the other is Mark Blatt, our resident M.D. (Video playing.)
ANDY BERLIN: One of the ways proactive computing would really influence people's lives is to move more and more medical care into the home.
MARK BLATT: Health care right now is a trip to the doctor's office. You get sick, you have to pack up your family and take them off somewhere. Sometimes the problem is trivial, sometimes serious. Health care sensors that can be placed in the patient's home will change the entire paradigm.
ANDY BERLIN: We're finding ways to integrate the computer and the diagnostic tests into people's daily lives in a way that doesn't disrupt those lives. All sorts of wearable sensors become available. For instance, we're already starting to see some wearable blood glucose monitors that will really affect the lives of diabetics. All sorts of new things can be done in the home. There's some really nice work along these lines at the University of Rochester Center for Future Health, which is funded in part by Intel. What they're doing is looking at all of the different ways that you can watch for subtle changes in people's skin that can be indicative of early onset of skin cancer.
MARK BLATT: They can look at acute problems, from rashes and cuts to sore throats. And the diagnostic sensors could add further clarity to what's wrong with you at that point. One thing one needs to remember: with all this data being collected, there's a privacy issue here. The network that would exist in the house would more than likely be connected to a household server. And you would control the permissions to the server and tell the server when to send out the information and with whom to communicate. You would essentially have your own personal medical record that would be built and monitored by the sensors within your house.
DAVID TENNENHOUSE: So these folks at the University of Rochester are certainly getting physical. And they're having way too much fun with it. Smart socks, smart bandages, smart mirrors, and most importantly, smart people. One that I particularly want to draw your attention to is Dr.
Selena Chan, who has recently joined our precision biology effort at Intel. As we move beyond the office, beyond the home, and beyond what we can do in the health field, we want to think about what we can do for the environment and what our opportunities are there. Something that I think is particularly exciting is that at Berkeley, they are starting a new Center for Information Technology Research in the Interest of Society (CITRIS). Let's think about that: an entire research center comprising hundreds of scientists dedicated to improving quality of life using computer technology. I'd like you to hear a little bit about CITRIS and how they're getting proactive. (Video playing.)
RICHARD NEWTON: So CITRIS is about applying information technology to solve some of the toughest quality-of-life problems that Californians and, in fact, everybody faces today. These include the environment, transportation problems, quality of water and resources, energy, biomedical monitoring, and, in fact, the educational aspects that will be required for us to deliver that information to all of the constituents involved in the project.
STEVEN GLASER: Our vision, if you will, would be smart buildings. Say we take the Transamerica tower in San Francisco and place tens of thousands of devices around important parts of the structure. When an earthquake happens, we can identify the damage to the structure locally.
JAMES DEMMEL: Terabytes of data will be collected by these sensors describing what happens in an earthquake, and only a small fraction of that data is going to be useful for any individual to respond to.
STEVEN GLASER: For instance, we can pick up the waveforms, which tell us the displacement of the surface, and that allows us to identify what the motion was at that point. This would then allow us to say how the crack is growing within the block. Our ad hoc network will find the area of damage and organize itself to zero in on that damage in an optimal way.
And the end result might be a self-tagging building, where we have a light: red, yellow, green. The fireman comes and sees that the building is damaged. Where is it damaged? We have an annunciator panel tell us where the damage might be and how much damage, and we no longer have to put the building out of commission for six, eight months until structural engineers come in and investigate the entire building.
DAVID TENNENHOUSE: So that's all well and good. We're getting into smart buildings. But what if we really want to get out into wide-open spaces? There are some phenomenal opportunities to use these technologies to monitor the environment all around us. Professor Deborah Estrin at UCLA has really been leading the charge in that space. I find it particularly interesting because Deborah is one of the Internet pioneers, one of the people who invented a lot of the routing protocols that we know and use today. She's moved into this regime of ad hoc networking and has studied these issues. She recently chaired a National Academy panel that produced a report on embedded computing and all of its opportunities. I believe that's going to be released in the next few weeks, and I want to encourage you to take a look at that report when it comes out. The folks at Berkeley are also working to understand how we use these technologies outside of our normal living areas, not just for environmental monitoring but also for public safety. Let's look at some of the opportunities there in this next video. (Video playing.)
STEVEN GLASER: A very good example of the power of a self-adaptive, self-organizing network might be in fighting rural fires. We can take our nodes with a GPS mounted on the chip, throw them out of the helicopter, and the network will now form itself according to the isotherms -- what are the constant temperatures? Each device will communicate, and they'll give us a topology of the network, which, in the end, will be the topology of temperature on the ground.
JAMES DEMMEL: They're going to be spread out in areas where no systems administrator will ever be able to get to them to repair them. These systems will need to configure themselves even if a large fraction of the parts aren't working.
STEVEN GLASER: These networks provide a tremendous amount of elegance in how we approach a problem.
JAMES DEMMEL: The idea is that the software is going to flow to where it's needed and reconfigure itself on demand, including new versions and repairs, as needed, to account for all of these changes that we cannot predict in the future.
DAVID TENNENHOUSE: Now, just dropping these nodes from a plane, helicopter, whatever -- that seems pretty far out, doesn't it? Well, until you realize that last summer some of these graduate students that we saw here a little while ago, together with Professor Kris Pister, who is one of the people who really pioneered this movement toward smart dust and ad hoc networking technology, went out into the California desert and, in an environmentally conscious way, dropped these from a model airplane. You can see a picture of the model airplane on the slides. They actually dropped the nodes onto the desert floor; the nodes self-assembled into a network and were able to detect a passing vehicle. You can imagine using that kind of technology when, say, a child is lost in the woods. You swoop in, drop the nodes, detect the footsteps, find the child. Tremendous opportunities to improve public safety, quality of life, et cetera. And this is not just some far-fetched opportunity. They did this last summer. The proof of concept is done at the research stage. It's time for you folks to engage and get moving on some of this. So I'd love to spend more time talking to you about all of these areas. Transportation is another great multi-hundred-billion-dollar space. Lots of opportunities there: convoying cars, making cars smarter. There are 80 to 100 embedded processors in many of today's cars, and they all want to be networked.
Lots of great chances to go proactive. We don't have the time to do this right now, so I'll skip by that, leave it for your homework, and move on to discuss what we can do to change the way science is done. Now, I'm an engineer. That means I really like science, but I really love building things. My wife is a scientist. That means she loves science. I love my wife, and I know that she is just not going to get a Nobel Prize sitting there as a data entry clerk, typing her experimental results into some computer. We've got to do better, and here is some video from Gaetano Borriello, professor at the University of Washington and director of our new Intel research lab adjacent to that university, to tell you about what we're doing to revolutionize the scientific process. (Video playing.)
ROBERT FRANZA: In the past, you would have a scientist occupy their time doing an experiment, and then they would occupy a significant amount of additional time simply recording their results.
GAETANO BORRIELLO: A biologist often begins their work by preparing an experiment plan. Our LabScape system allows a biologist to go about doing their work, bringing context with them from station to station by announcing their presence to the system. So imagine what it would be like if the full time of the scientist was occupied thinking and doing science, and the precision and the details of what they were doing were captured naturally. As the application follows the biologist around the laboratory, it provides the context into which the data is placed. As an example, the pipette that is used to move liquid from one vessel to another includes a sensor that records how much liquid was moved. Because LabScape knows what the experiment plan is and where the biologist is right now in the laboratory, it can also provide useful information in determining how to set up a particular piece of equipment.
So through the use of radio tags that we can attach to the physical objects we're moving around the laboratory, it also allows us to hand off things to other people in the lab. In this case, later in the day another experimenter will take a photograph of the electrophoresis gel, and the sensing technology will make sure that photograph ends up in the right experiment plan -- the one originally set by the first biologist. One of the principal advantages of LabScape is that all the information collected in the experiment is organized and made accessible.
ROBERT FRANZA: This creates an incredible opportunity: to have a precise recording for anybody who would ever want to reproduce the work. This opens up channels of collaboration and communication that just haven't been available before.
DAVID TENNENHOUSE: So you can see there are a lot of opportunities here to grow whole new markets and take advantage of this proactive computing vision. Our theme this week at IDF is the digital universe, and before I close, I just want to talk a little about getting even further out: going beyond dropping sensors from model aircraft, and hearing about one individual who actually wants to instrument an undersea tectonic plate. We're talking about instrumenting an entire tectonic plate. (Video playing.)
JOHN DELANEY: In terms of scale, Neptune is going to be vast. It's going to be the size of a tectonic plate; there will be 3,000 kilometers of cable legs. There will be 31 nodes distributed at about 100-kilometer spacing throughout the network. Every one of those nodes will have thousands of instruments associated with it. This will easily be the largest ensemble of instrument arrays and sensor arrays that has ever been attempted in the ocean. One of the very fascinating aspects of what Neptune can allow is a comprehensive response to episodic events, like an erupting volcano that might take place in a particular portion of the plate.
In order to thoroughly examine that, we would have autonomous vehicles parked throughout the net. Once an event takes place, they begin moving toward that site by jumping from node to node to node as they get closer and closer. Once they're in place, they'll form squadrons and fly through the erupting plume, collecting data and microbes that might have come from miles below the sea floor. There's no other way we can get these microbes. Another major facet of Neptune is that it will play into one of the greatest searches that mankind will engage in over the next centuries: the search for life on other planets.
DAVID TENNENHOUSE: So if we're really going to get out and into the digital universe, we have to be willing to move beyond the Earth Internet and start thinking about the Interplanetary Internet, which is exactly what our colleagues at the Jet Propulsion Laboratory are doing. Believe it or not, they recently established an entire office dedicated to designing the architecture for the Interplanetary Internet. They are planning to put a small four-node network into Mars orbit: a router in orbit and a few nodes down there on the planet with landers. A four-node network. Kind of reminds me of an earlier four-node network (the ARPANET). And not surprisingly, Vint Cerf is one of the key architects of the Interplanetary Internet. These folks are serious. They've actually got their architecture out there, and you'd better get ready for a whole new genre of vocabulary. TCP/IP stacks with bundles are the latest thing, where these bundles of information have to transit the very long delays of interplanetary distances. So we've got a lot of work to do on exploring the digital universe, and we think it's going to be a tremendous amount of fun. Now, in closing, I do want to thank again the folks from Berkeley, who did a tremendous job with an amazingly gutsy demo. And I want to thank you, the audience, for helping us set a new record for ad hoc networks.
This is, as far as we know, the largest that's ever been attempted. And I want to talk a little bit about what I think we can achieve together toward this proactive vision, if we can get the human beings back on top and get the computers anticipating our needs. As a community, we've produced tremendous human empowerment through interactive computing, and a tremendous business opportunity for all of us through our 200-million-unit-per-year market. If we now get to work on proactive computing and start networking those eight and a half billion embedded computers, we can generate a huge spurt of growth, not just in these new application domains; after we've put some oomph into these new domains, we're going to find that if we go back to the traditional markets, there will be huge opportunities to extract new productivity gains in those markets, based on all the new information and all the new learning that we're going to have. Working together, we can keep this virtuous spiral going. We see limitless opportunities for driving growth forward in the digital universe, and we hope you'll join us. So enjoy IDF, and get physical, get real, and get out! Thank you. (Applause.)
-----
Supercomputing Online wishes to thank Intel Corporation for allowing us to bring this unabridged transcript to our readers.
-----