Ban the Bomb

by J. William Bell, NCSA Senior Science Writer

Computational models of laser-aided chemical etching, completed on NCSA's SGI Origin2000, open the door for new methods of fabricating microelectromechanical machines.

For about half a century, electronics designers have been etching smaller and smaller transistors into silicon. But scientists pine for materials from which they can build smaller and better sensors, actuators, generators, and logic devices. High-bandgap materials, ranging from the exotic to the ho-hum, are among the most intriguing possible silicon successors. In high-bandgap materials, electrons are restricted to very particular energy ranges, and those ranges have unusually large gaps between them. Many high-bandgap materials, such as diamond and silicon carbide, can be doped like silicon to conduct electrons. But because they can operate at temperatures far above silicon's limits, they can be used in hot spots like engines and rockets. They are also exceptionally wear resistant.

Unlike silicon, though, these materials are difficult to work with, both because of their brittleness and because of the high energy required to remove, or ablate, the material. High-power lasers can be used to shape them, but "it's like using a nuclear bomb," according to Mark Shannon, associate professor of mechanical and industrial engineering at the University of Illinois. "It blows the material away and leaves a very jagged profile. We want precision, not blast craters."

Shannon and his team envision lower-power lasers controlling very exact chemical reactions on the surface of high-bandgap materials. Such an etching system includes three key parts: a laser source, with a mask over it to establish the pattern to be etched; a gas medium, through which the laser light travels; and the surface of the material itself. Before the laser is switched on, there is no interaction between the gas and the solid surface. Fire a beam of photons at the surface, however, and the added energy causes a chemical reaction. Reactants in the gas medium form molecules with the surface atoms, and those surface atoms are pulled away, practically one at a time. If you can finely control the location and intensity of the laser beam, you can fashion the components of micromachines from high-bandgap materials with nano- or even angstrom-scale precision.

"It's the difference between that bomb and a shovel. You're breaking individual molecules," says James Hammonds, who recently completed his PhD as part of Shannon's team and is now at the City University of New York.

Preserving the pattern

When the light hits the surface in a laser-aided chemical reaction, magic happens. But problems also start. Not only does the surface temperature begin to rise, but the process gas temperature also changes, which in turn causes the density of the gas medium to vary. The change in density alters the optical properties of the gas. With different optical properties in the gas, the light's profile changes, and the beam bends in unexpected ways. You're left with the light from a disco ball instead of a beam thrown in a regimented pattern. This wild light profile not only crushes your precision but also further affects the system's temperature and the optical properties of the gas. As the cycle continues, the beam's profile becomes more and more skewed.

Hammonds' computer code considers all aspects of this cycle -- how the laser beam impacts a given surface, how that impact influences the gas medium, and how the beam changes as a result.
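In code terms, the cycle amounts to a coupled update loop: the beam heats the system, the heating changes the gas density, the density changes the gas's optics, and the optics change the beam. The toy C sketch below shows only that loop structure; it is not the team's model, and every variable name, constant, and update rule in it is an illustrative stand-in for the full field equations the real code solves.

    /*
     * Toy sketch of the feedback cycle described above -- NOT the
     * team's model.  Every constant and update rule is a stand-in;
     * only the loop structure follows the article.
     */
    #include <stdio.h>

    int main(void)
    {
        double surface_temp = 300.0;  /* K, surface temperature               */
        double gas_density  = 1.0;    /* normalized density of the gas medium */
        double refr_index   = 1.0003; /* stand-in optical property of the gas */
        double beam_at_surf = 1.0;    /* normalized beam intensity at surface */

        for (int step = 0; step < 100; step++) {
            /* 1. The beam deposits energy; the surface and gas heat up. */
            surface_temp += 0.5 * beam_at_surf;

            /* 2. Hotter gas is less dense (toy linearized gas law).     */
            gas_density = 1.0 / (1.0 + 1.0e-4 * (surface_temp - 300.0));

            /* 3. Lower density alters the gas's optical properties.     */
            refr_index = 1.0 + 3.0e-4 * gas_density;

            /* 4. The altered optics bend the beam, changing how much
             *    light reaches the surface -- which closes the cycle.   */
            beam_at_surf = (refr_index - 1.0) / 3.0e-4;
        }

        printf("beam intensity at the surface after 100 steps: %f\n",
               beam_at_surf);
        return 0;
    }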
In a series of runs that began in August 1999, Hammonds has varied the initial density of the gas, the intensity of the laser, and the duration of the laser's pulse. Early runs focused on code development, optimization, validation, and parallelization. Results of recent runs, however, have yielded a publication in Optics Communications and articles in press at The International Journal of Heat and Mass Transfer and Parallel Computing. These articles give researchers an early feel for the time and length scales to be considered in someday manipulating laser-aided chemical etching systems. They also outline the challenges of building computational models of the systems.

The team's eventual goals are more ambitious. "Ultimately, we want a mathematical model that serves as a predictive tool, as well as a higher order theory of the whole system," says Hammonds. With such a tool, the specifications of a laser (its power and pulse duration, for example), a gas (its density), and a surface (the energy it takes to dislodge atoms from it) would go in. The impact on the laser's profile would come out. When the team's models can do that, designing laser-aided chemical etching systems will be dramatically more feasible because researchers will know how the parts are going to influence one another.

Coming to the points

Hammonds' model discretizes the space between the laser and the surface into a grid of points and solves for the laser's amplitude field, along with the temperature field, at each of them. There are three ways to describe the amplitude field at these points. With an implicit scheme, you solve for every point at the same time. This method can be very precise, but it's also hugely time-consuming. An explicit scheme, which starts at a boundary point of the modeled system and then calculates its neighbors, also has its problems: because of the fundamental nature of the equations used to model the amplitude field, small errors propagate and grow as the computation runs.

Hammonds opted for a third approach, combining the two methods in order to minimize the negative aspects of each. Instead of every point in the modeled space being solved at once, the space is split into planes, each a set of points that shares two of the space's three axes. Starting at the edge of the space -- where conditions are best known -- a plane's 6,400 or so points are solved implicitly. The data from those points are then used to solve the next plane, and the process continues until solutions are obtained for around 2,500 planes. The temperature field, meanwhile, is solved with a purely explicit model, using the same grid of points and system of planes.
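In code, that hybrid might look something like the stripped-down C sketch below. It is not Hammonds' program: it assumes a simplified model with a single transverse axis (so each "plane" is just a line of points), a real-valued amplitude field, and a generic diffusion-like propagation equation, and every name and constant in it (thomas, NX, NZ, the coupling coefficient r) is illustrative. Each plane is solved implicitly -- here, as a tridiagonal system handled by the Thomas algorithm -- from the previous plane's data, marching away from the boundary.

    /*
     * Stripped-down sketch of the plane-by-plane hybrid scheme -- not
     * Hammonds' code.  One transverse axis, a real-valued field, and a
     * generic implicit diffusion-like update stand in for the real 3D
     * amplitude equations.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define NX 6400   /* points per plane (the article's figure) */
    #define NZ 2500   /* planes along the propagation axis       */

    /* Thomas algorithm: solve a*u[i-1] + b*u[i] + c*u[i+1] = d[i]
       for a tridiagonal system with constant coefficients.        */
    static void thomas(int n, double a, double b, double c,
                       const double *d, double *u)
    {
        double *cp = malloc(n * sizeof *cp);
        double *dp = malloc(n * sizeof *dp);

        cp[0] = c / b;
        dp[0] = d[0] / b;
        for (int i = 1; i < n; i++) {
            double m = b - a * cp[i - 1];
            cp[i] = c / m;
            dp[i] = (d[i] - a * dp[i - 1]) / m;
        }
        u[n - 1] = dp[n - 1];
        for (int i = n - 2; i >= 0; i--)
            u[i] = dp[i] - cp[i] * u[i + 1];

        free(cp);
        free(dp);
    }

    int main(void)
    {
        double *prev = calloc(NX, sizeof *prev);   /* plane k           */
        double *next = calloc(NX, sizeof *next);   /* plane k + 1       */
        double r = 0.25;                           /* coupling constant */

        /* Boundary plane, where conditions are best known: a crude
           top-hat beam profile as the input.                        */
        for (int i = NX / 4; i < 3 * NX / 4; i++)
            prev[i] = 1.0;

        /* March plane by plane: each plane is found by an implicit
           (tridiagonal) solve from the plane before it.             */
        for (int k = 1; k < NZ; k++) {
            thomas(NX, -r, 1.0 + 2.0 * r, -r, prev, next);
            double *tmp = prev; prev = next; next = tmp;
        }

        printf("amplitude at beam center after %d planes: %f\n",
               NZ, prev[NX / 2]);
        free(prev);
        free(next);
        return 0;
    }

Marching implicitly one plane at a time keeps each step stable, so errors are damped rather than amplified, while only ever requiring a small linear solve per plane -- the compromise between the two pure schemes that the article describes.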
A chance meeting

Hammonds, with the help of NCSA's Faisal Saied, also combined two methods in parallelizing the code. The amplitude field portions employ MPI, a stalwart, widely used standard for writing message-passing parallel code. The temperature field calculations rely on OpenMP, another parallel programming standard, one that exploits shared memory. Though MPI scales to larger numbers of processors more efficiently than OpenMP, Hammonds found that the gains from using MPI on the temperature field would have been fractional. Since OpenMP is quicker to implement, he went with it. Performance studies determined that the OpenMP portion of the code runs most efficiently on four Origin2000 processors and that the MPI portion runs best on nine. That configuration was used for most runs. The two portions are coupled; they run simultaneously and feed one another data seamlessly (a schematic sketch of this hybrid layout appears at the end of the article).

To date, Hammonds' code has consumed about 12,000 hours on the Origin2000. "Before using the NCSA machines," he says, "I was using the U of I's engineering workstations [a shared resource, freely available to students]. I would get booted off after the code ran for about a day [because he was monopolizing so much time]."

He considered simplifying his model. But a chance meeting with NCSA's Allison Clark, while he was volunteering with the U of I's BOAST program, which aims to pique young students' interest in science and technology through hands-on outreach, sold him on another possibility. "I used the supercomputing facilities at the San Diego Supercomputer Center for my master's research [at the University of California, Berkeley], so the supercomputer idea had crossed my mind. It was either that or scale down the code. Allison's suggestion, however, swayed me to pursue supercomputing at NCSA. Without the NCSA machines and help from people like Faisal, the time required to run the code that I had developed would have been prohibitively long."

With the machines and expertise, however, the team may someday help scientists drop the bomb in favor of a shovel.
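The schematic sketch promised above: an illustrative C program showing one common way to lay out a hybrid MPI + OpenMP code, with the amplitude field split into slabs across MPI ranks and the temperature update handled by a shared-memory OpenMP loop inside each rank. It is not the structure of Hammonds' actual code; the slab decomposition, the toy heating law, and the constant N are assumptions made for the example.

    /*
     * Illustrative hybrid MPI + OpenMP layout -- not the structure of
     * Hammonds' actual code.  Compile with, for example:
     *     mpicc -fopenmp hybrid.c -o hybrid
     */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    #define N 6400  /* points per plane, the order of the article's figure */

    int main(int argc, char **argv)
    {
        int rank, size;
        static double amp[N], temp[N];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each MPI rank fills in its own contiguous slab of the
           amplitude field; everything else starts at zero.        */
        for (int i = 0; i < N; i++)
            amp[i] = 0.0;
        int lo = rank * N / size, hi = (rank + 1) * N / size;
        for (int i = lo; i < hi; i++)
            amp[i] = 1.0;  /* stand-in for the rank's amplitude solve */

        /* Combine the slabs so every rank sees the full field -- a
           stand-in for the real coupling between the two solvers.  */
        MPI_Allreduce(MPI_IN_PLACE, amp, N, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);

        /* The explicit temperature update is data-parallel, so a simple
           shared-memory OpenMP loop is enough -- the "quicker to
           implement" choice the article describes.                     */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            temp[i] = 300.0 + 0.1 * amp[i];  /* toy heating law */

        if (rank == 0)
            printf("MPI ranks: %d, OpenMP threads per rank: %d, temp[0] = %f\n",
                   size, omp_get_max_threads(), temp[0]);

        MPI_Finalize();
        return 0;
    }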