UVA Engineering computer scientists discover new vulnerability affecting computers globally

Computing experts thought they had developed adequate security patches after the major worldwide Spectre flaw of 2018, but UVA's discovery shows processors are open to hackers again.

In 2018, industry and academic researchers revealed a potentially devastating hardware flaw that made computers and other devices worldwide vulnerable to attack.

Researchers named the vulnerability Spectre because the flaw was built into modern computer processors that get their speed from a technique called "speculative execution," in which the processor predicts instructions it might end up executing and preps by following the predicted path to pull the instructions from memory. A Spectre attack tricks the processor into executing instructions along the wrong path. Even though the processor recovers and correctly completes its task, hackers can access confidential data while the processor is heading the wrong way.
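To make that pattern concrete, the original Spectre disclosure illustrated it with a small bounds-check-bypass "gadget." The C sketch below (array names and sizes are hypothetical, chosen only for illustration) shows the vulnerable shape: if the branch predictor guesses that the check will pass, the two dependent loads can run speculatively even for an out-of-range index, leaving a secret-dependent footprint in the cache.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical data, used only to illustrate the code pattern. */
uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 4096];   /* large probe array: one cache line per possible byte value */

/* Classic bounds-check-bypass gadget: if the processor predicts the branch
 * as in-bounds, the two loads below may execute speculatively even when x is
 * out of range, and which line of array2 gets cached depends on the secret
 * byte that was read. */
void victim(size_t x)
{
    if (x < array1_size) {
        uint8_t secret = array1[x];
        volatile uint8_t tmp = array2[secret * 4096];
        (void)tmp;   /* the value is discarded, but the cache footprint remains */
    }
}
```

An attacker then times accesses to array2 to learn which line was pulled into the cache, recovering the secret byte without ever reading it directly.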

Since Spectre was discovered, the world's most talented computer scientists from industry and academia have worked on software patches and hardware defenses, confident they've been able to protect the most vulnerable points in the speculative execution process without slowing down computing speeds too much.

They will have to go back to the drawing board.

A team of University of Virginia School of Engineering computer science researchers has uncovered a line of attack that breaks all Spectre defenses, meaning that billions of computers and other devices across the globe are just as vulnerable today as they were when Spectre was first announced. The team reported its discovery to international chip makers in April and will present the new challenge at a worldwide computing architecture conference in June.

The researchers, led by Ashish Venkat, William Wulf Career Enhancement Assistant Professor of Computer Science at UVA Engineering, found a whole new way for hackers to exploit something called a "micro-op cache," which speeds up computing by storing simple commands and allowing the processor to fetch them quickly and early in the speculative execution process. Micro-op caches have been built into Intel computers manufactured since 2011.

Venkat's team discovered that hackers can steal data when a processor fetches commands from the micro-op cache.

"Think about a hypothetical airport security scenario where TSA lets you in without checking your boarding pass because (1) it is fast and efficient, and (2) you will be checked for your boarding pass at the gate anyway," Venkat said. "A computer processor does something similar. It predicts that the check will pass and could let instructions into the pipeline. Ultimately, if the prediction is incorrect, it will throw those instructions out of the pipeline, but this might be too late because those instructions could leave side-effects while waiting in the pipeline that an attacker could later exploit to infer secrets such as a password."

Because all current Spectre defenses protect the processor in a later stage of speculative execution, they are useless in the face of Venkat's team's new attacks. Two variants of the attacks the team discovered can steal speculatively accessed information from Intel and AMD processors.

"Intel's suggested defense against Spectre, which is called LFENCE, places sensitive code in a waiting area until the security checks are executed, and only then is the sensitive code allowed to execute," Venkat said. "But it turns out the walls of this waiting area have ears, which our attack exploits. We show how an attacker can smuggle secrets through the micro-op cache by using it as a covert channel."

Venkat's team includes three of his computer science graduate students, Ph.D. student Xida Ren, Ph.D. student Logan Moody and master's degree recipient Matthew Jordan. The UVA team collaborated with Dean Tullsen, professor in the Department of Computer Science and Engineering at the University of California, San Diego, and his Ph.D. student Mohammadkazem Taram to reverse-engineer certain undocumented features in Intel and AMD processors.

They have detailed the findings in their paper: "I See Dead μops: Leaking Secrets via Intel/AMD Micro-Op Caches."

This newly discovered vulnerability will be much harder to fix.

"In the case of the previous Spectre attacks, developers have come up with a relatively easy way to prevent any sort of attack without a major performance penalty" for computing, Moody said. "The difference with this attack is you take a much greater performance penalty than those previous attacks."

"Patches that disable the micro-op cache or halt speculative execution on legacy hardware would effectively roll back critical performance innovations in most modern Intel and AMD processors, and this just isn't feasible," Ren, the lead student author, said.

"It is unclear how to solve this problem in a way that offers high performance to legacy hardware, but we have to make it work," Venkat said. "Securing the micro-op cache is an interesting line of research and one that we are considering."

Venkat's team has disclosed the vulnerability to the product security teams at Intel and AMD. Ren and Moody gave a tech talk at Intel Labs worldwide on April 27 to discuss the impact and potential fixes. Venkat expects computer scientists in academia and industry to work quickly together, as they did with Spectre, to find solutions.

In response to a significant amount of global media coverage about the newly discovered vulnerability, Intel released a statement on May 3 suggesting that no additional mitigation would be required if software developers write code using a method called “constant-time programming,” which is not vulnerable to side-channel attacks.

“Certainly, we agree that software needs to be more secure, and we agree as a community that constant-time programming is an effective means to writing code that is invulnerable to side-channel attacks,” Venkat said. “However, the vulnerability we uncovered is in hardware, and it is important to also design processors that are secure and resilient against these attacks.

“In addition, constant-time programming is not only hard in terms of the actual programmer effort, but also entails a high performance overhead and significant deployment challenges related to patching all sensitive software,” he said. “The percentage of code that is written using constant-time principles is in fact quite small. Relying on this would be dangerous. That is why we still need to secure the hardware.”
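Constant-time programming means writing code whose running time and memory access pattern do not depend on secret data. A minimal C illustration of the idea (not tied to any particular library): a typical early-exit comparison leaks, through timing, how long the matching prefix of a guess is, while the constant-time version always does the same amount of work.

```c
#include <stddef.h>

/* NOT constant-time: returns as soon as a mismatch is found, so the running
 * time reveals how many leading bytes of the guess were correct. */
int leaky_equal(const unsigned char *a, const unsigned char *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i])
            return 0;
    return 1;
}

/* Constant-time: every byte is examined no matter where the first mismatch
 * occurs, so the timing does not depend on the secret contents. */
int ct_equal(const unsigned char *a, const unsigned char *b, size_t n)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (unsigned char)(a[i] ^ b[i]);
    return diff == 0;
}
```

Rewriting all sensitive software in this style is exactly the programmer effort and performance overhead Venkat describes.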

The team's paper has been accepted by the highly competitive International Symposium on Computer Architecture (ISCA). The annual ISCA conference is the leading forum for new ideas and research results in computer architecture and will be held virtually in June.

Venkat is also working in close collaboration with the Processor Architecture Team at Intel Labs on other microarchitectural innovations, through the National Science Foundation/Intel Partnership on Foundational Microarchitecture Research Program.

Venkat was well prepared to lead the UVA research team into this discovery. He has forged a long-running partnership with Intel that started in 2012 when he interned with the company while he was a computer science graduate student at the University of California, San Diego.

This research, like other projects Venkat leads, is funded by the National Science Foundation and Defense Advanced Research Projects Agency.

Venkat is also one of the university researchers who co-authored a paper with collaborators Mohammadkazem Taram and Tullsen from UC San Diego that introduces a more targeted microcode-based defense against Spectre. Context-sensitive fencing, as it is called, allows the processor to patch running code with speculation fences on the fly.

Introducing one of just a handful of more targeted microcode-based defenses developed to stop Spectre in its tracks, "Context-Sensitive Fencing: Securing Speculative Execution via Microcode Customization" was published at the ACM International Conference on Architectural Support for Programming Languages and Operating Systems in April 2019. The paper was also selected as a top pick among all computer architecture, computer security, and VLSI design conference papers published in the six years between 2014 and 2019.

The new Spectre variants Venkat's team discovered even break the context-sensitive fencing mechanism outlined in Venkat's award-winning paper. But in this type of research, breaking your own defense is just another big win. Each security improvement allows researchers to dig even deeper into the hardware and uncover more flaws, which is exactly what Venkat's research group did.

Sandia's supercomputer reveals the Indigenous superhighways of ancient Australia

The best path across the desert is rarely the straightest. For the first human inhabitants of Sahul -- the super-continent that underlies modern Australia and New Guinea -- camping at the next spring, stream, or rock shelter allowed them to thrive for hundreds of generations. Those who successfully navigated from landmark to landmark spread from their landfall in the northwest to all corners of Australia and New Guinea.

By simulating the physiology and decisions of early way-finders, an international team of archaeologists, geographers, ecologists, and computer scientists has mapped the probable "superhighways" that led to the first peopling of the Australian continent some 50,000-70,000 years ago. Their study is the largest reconstruction of a network of human migration paths into a new landscape. It is also the first to apply rigorous computational analysis at the continental scale, testing 125 billion possible pathways.

(Image: Revealing the Indigenous superhighways of ancient Australia. Credit: Megan Hotchkiss Davidson/Sandia National Laboratories)

"We decided it would be really interesting to look at this question of human migration because the ways that we conceptualize a landscape should be relatively steady for a hiker in the 21st century and a person who was way-finding into a new region 70,000 years ago," says archaeologist and computational social scientist Stefani Crabtree, who led the study. Crabtree is a Complexity Fellow at the Santa Fe Institute and Assistant Professor at Utah State University. "If it's a new landscape and we don't have a map, we're going to want to know how to efficiently move throughout a space, where to find water, and where to camp -- and we'll orient ourselves based on high points around the lands."

"One of the really big unanswered questions of prehistory is how Australia was populated in the distant past. Scholars have debated it for at least a hundred and fifty years," says co-author Devin White, an archaeologist and remote sensing scientist at Sandia National Laboratories. "It is the largest and most complex project of its kind that I'd ever been asked to take on."

To re-create the migrations across Sahul, the researchers first needed to simulate the topography of the supercontinent. They "drained" the oceans that now separate mainland Australia from New Guinea and Tasmania. Then, using hydrological and paleo-geographical data, they reconstructed inland lakes, major rivers, promontory rocks, and mountain ranges that would have attracted the gaze of a wandering human.

Next, the researchers programmed in-silico stand-ins for human travelers. The team adapted an algorithm called "From Everywhere to Everywhere," created by White, to program the way-finders based on the caloric needs of a 25-year-old female carrying 10 kg of water and tools.

The researchers imbued these individuals with the realistic goal of staying alive, which could be achieved by finding water sources. Like backcountry hikers, the digital travelers were drawn to prominent landmarks like rocks and foothills, and the program exacted a caloric toll for activities such as hiking uphill within the artificial landscape.

When the researchers "landed" the way-finders at two points on the coast of the re-created continent, they began to traverse it, using landmarks to navigate in search of fresh water. The algorithms simulated a staggering 125 billion possible pathways, run on a Sandia supercomputer, and a pattern emerged: the most-frequently traveled routes carved distinct "superhighways" across the continent, forming a notable ring-shaped road around the right portion of Australia; a western road; and roads that transect the continent. A subset of these superhighways map to archaeological sites where early rock art, charcoal, shell, and quartz tools have been found.

"Australia's not only the driest, but it's also the flattest populated continent on Earth," says co-author Sean Ulm, an archaeologist and Distinguished Professor at James Cook University. Ulm is also Deputy Director of the Australian Research Council Centre of Excellence for Australian Biodiversity and Heritage (CABAH), whose researchers contributed to the project. "Our research shows that prominent landscape features and water sources were critical for people to navigate and survive on the continent. In many Aboriginal societies, landscape features are known to have been created by ancestral beings during the Dreaming. Every ridgeline, hill, river, beach, and water source is named, storied, and inscribed into the very fabric of societies, emphasizing the intimate relationship between people and place. The landscape is woven into peoples' lives and their histories. It seems that these relationships between people and Country probably date back to the earliest peopling of the continent."

The results suggest that there are fundamental rules humans follow as they move into new landscapes and that the researchers' approach could shed light on other major migrations in human history, such as the first waves of migration out of Africa at least 120,000 years ago.

Future work, Crabtree says, could inform the search for undiscovered archaeological sites, or even apply the techniques to forecast the movements of human migration in the near future, as populations flee drowning coastlines and climate disruptions.

KICT's solution for monitoring massive infrastructures

A trailblazer for developing a new paradigm for structural monitoring

The Korea Institute of Civil Engineering and Building Technology (KICT) has announced the development of an effective structural monitoring technique for massive infrastructures, such as long-span bridges. The method provides accurate and precise responses densely over the whole structural system by fusing the complementary advantages of multi-fidelity data.

Rapid advances in sensing and information technologies have led to condition-based monitoring of civil and mechanical structural systems. The structural monitoring system plays a key role in condition-based monitoring, evaluating structural safety from responses measured by sensors. In other words, it allows the health of existing structures, such as a long-span bridge, to be examined. A structural monitoring system can enable early detection of unsafe conditions and proactive maintenance, greatly reducing the inspection burden as well as maintenance costs. The prerequisite for successful condition-based monitoring is obtaining accurate responses across the whole structural system. In civil infrastructure especially, high cost and technical difficulty make this challenging.

Complementary data-fusion framework using multi-fidelity data

To solve this problem, a research team at KICT, led by Dr. Seung-Seop Jin, has developed an effective and efficient data-fusion method for condition-based monitoring. The method performs complementary data fusion of point and distributed strain sensors, combining their advantages to obtain an accurate strain distribution over the whole structure; as a result, responses can be estimated densely and with high accuracy across entire infrastructures.

For structural response, the multi-fidelity data come from point and distributed sensors of different fidelities. A point sensor provides highly accurate and reproducible responses at discrete measurement positions (high-fidelity data, HF data), while a distributed sensor uses scattering-based or scanning sensing techniques to obtain very dense, quasi-continuous responses (low-fidelity data, LF data). LF data are relatively easy to acquire, so large amounts of comparatively inaccurate data describing response trends over the whole structure can be produced. By contrast, HF data provide high accuracy but are limited by both time and technical constraints, so only a small amount is available, which can significantly impair the ability to diagnose structural conditions across whole infrastructures.

Although the two data types can complement each other, their complementary fusion had not previously been studied for structural monitoring. KICT recognized this potential and developed a complementary data-fusion framework by exploiting multi-fidelity modeling from computational statistics and geostatistics. The basic concept of the method is to transfer the trend captured by the abundant but potentially inaccurate LF data and to enhance its accuracy by fusing in information from the HF data, which is accurate at a few points.
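A deliberately simplified sketch can convey the flavor of this fusion. The C program below is a crude linear stand-in for the multi-fidelity modeling KICT actually uses, with entirely synthetic numbers: a dense but biased low-fidelity strain profile is corrected by fitting a simple relationship to a handful of accurate high-fidelity point measurements at co-located positions, then applying that correction along the whole span.

```c
#include <stdio.h>
#include <math.h>

#define N 101   /* dense LF sampling points along the span */
#define M 5     /* sparse HF sensor locations              */

int main(void)
{
    double lf[N], fused[N];
    int    hf_idx[M] = {5, 25, 50, 75, 95};   /* hypothetical HF sensor positions */
    double hf_val[M];

    /* Synthetic data: the "true" strain is a smooth bump along the span.
     * The LF (distributed) measurement sees it densely but with a scale
     * error and an offset; the HF (point) sensors see it exactly, but only
     * at five positions. */
    for (int i = 0; i < N; i++) {
        double x = (double)i / (N - 1);
        double truth = 100.0 * exp(-20.0 * (x - 0.5) * (x - 0.5));
        lf[i] = 0.8 * truth + 7.0;
    }
    for (int j = 0; j < M; j++) {
        double x = (double)hf_idx[j] / (N - 1);
        hf_val[j] = 100.0 * exp(-20.0 * (x - 0.5) * (x - 0.5));
    }

    /* Least-squares fit of hf ~= a*lf + b over the co-located points. */
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (int j = 0; j < M; j++) {
        double x = lf[hf_idx[j]], y = hf_val[j];
        sx += x; sy += y; sxx += x * x; sxy += x * y;
    }
    double a = (M * sxy - sx * sy) / (M * sxx - sx * sx);
    double b = (sy - a * sx) / M;

    /* Transfer the dense LF trend, corrected by the HF information. */
    for (int i = 0; i < N; i++)
        fused[i] = a * lf[i] + b;

    printf("fitted correction: a = %.3f, b = %.3f\n", a, b);
    printf("mid-span strain: LF = %.1f, fused = %.1f, true = %.1f\n",
           lf[N / 2], fused[N / 2], 100.0);
    return 0;
}
```

KICT's framework replaces this single global correction with multi-fidelity models drawn from computational statistics and geostatistics, but the role of each data source is the same: the LF data supply the dense trend, and the HF data anchor its accuracy.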

The newly developed method was verified in several numerical tests against existing multi-fidelity data-fusion methods. To cover situations likely to arise in real applications, it was also extensively evaluated through Monte Carlo simulations in which the number, locations, and noise levels of the multi-fidelity data were varied. The results show that the prediction performance of the developed method is consistently superior to the existing methods: in both sets of experiments, accuracy in terms of maximum absolute error improved by up to 171.3% and 192% relative to the existing methods.

The method is highly versatile for structural monitoring, especially of massive infrastructures, and is currently being improved to make it more robust and efficient. For better generalization, it needs flexible learning capability to extract information from both the HF and LF data and to fuse them. Dr. Jin describes the direction of these improvements below.

Dr. Jin said, "The rationale behind this improvement is similar to our decision-making in real life. We seek different options and combine them to make the best decision. Similar to our decision-making, we do not have high confidence in the unknown damages. In the current method, we have to utilize specific models and some parameters of these models. It should be noted that the best model and its parameters are case-dependent. Therefore, we pursue several options for flexible modeling. To deal with various conditions in infrastructures, we can adopt a flexible and self-learning framework such as optimal kernel learning for better data fusion. This idea takes a step towards autonomous and efficient monitoring system for massive infrastructures."