The Hurricane Analysis and Forecast System (HAFS) “moving nest” model. Global map showing land in green, water in black, and clouds in white, with tropical storms outlined in green boxes representing the moving nest. (Image credit: NOAA)

NOAA launches new hurricane forecast model as Atlantic season starts strong

NOAA’s National Hurricane Center — a division of the National Weather Service — has a new model to help produce hurricane forecasts this season. The Hurricane Analysis and Forecast System (HAFS) was put into operation on June 27 and will run alongside existing models for the 2023 season before replacing them as NOAA’s premier hurricane forecasting model. 

“The quick deployment of HAFS marks a milestone in NOAA’s commitment to advancing our hurricane forecasting capabilities, and ensuring continued improvement of services to the American public,” said NOAA Administrator Rick Spinrad, Ph.D. “Development, testing, and evaluations were jointly carried out between scientists at NOAA Research and the National Weather Service, marking a seamless transition from development to operations.”

In experimental runs from 2019 to 2022, HAFS showed a 10-15% improvement in track predictions compared to NOAA’s existing hurricane models. HAFS is expected to continue increasing forecast accuracy, reducing storm impacts on lives and property. 

HAFS is as good as NOAA’s existing hurricane models when forecasting storm intensity — but is better at predicting rapid intensification. HAFS was the first model last year to accurately predict that Hurricane Ian would undergo secondary rapid intensification as the storm moved off the coast of Cuba and barreled toward southwest Florida. 

Over the next four years, HAFS will undergo several major upgrades, further improving the accuracy of forecasts, warnings, and life-saving information. An objective of the NOAA Hurricane Forecast Improvement Program (HFIP) is to reduce all model forecast errors by nearly half by 2027, compared to errors seen in 2017.

HAFS provides more accurate, higher-resolution forecast information over both land and ocean, and comprises five major components: a high-resolution moving nest; high-resolution physics; multi-scale data assimilation that allows for vortex initialization and vortex cycling; 3-D ocean coupling; and improved techniques that allow for the assimilation of novel observations. The foundational component is the moving nest, which allows the model to zoom in, at a resolution of 1.2 miles, on areas of a hurricane that are key to improving wind intensity and rain forecasts.

“With the introduction of the HAFS forecast model into our suite of tropical forecasting tools, our forecasters are better equipped than ever to safeguard lives and property with enhanced accuracy and timely warnings,” said Ken Graham, director of NOAA’s National Weather Service. “HAFS is the result of strong collaborative efforts throughout the science community and marks significant progress in hurricane prediction.”

HAFS, the first regional coupled model to go into operations under the Unified Forecast System (UFS), was developed through community-based collaboration and the streamlining of the operational transition process. As HAFS uses the FV3 — the same dynamic core as the U.S. Global Forecast System — it will have a unified starting point when initiated for hurricane prediction and will also integrate with ocean and wave models as underlying inputs. The current standalone regional hurricane models, HWRF and HMON, each have their own starting point for modeling the atmosphere. Leveraging the FV3 in HAFS reduces overlapping efforts, making the NOAA modeling portfolio more consistent and efficient.

HAFS is also the first new major forecast model implementation using NOAA’s updated weather and climate supercomputers, which were installed last summer. HAFS would not be possible without the speed and power of these new supercomputers, called the Weather and Climate Operational Supercomputing System 2 (WCOSS2).

NOAA developed HAFS as a requirement of the Weather Research and Forecasting Innovation Act of 2017, which directed the agency to conduct ongoing research and development to improve hurricane prediction and warning under the Hurricane Forecast Improvement Program. Specifically, the Act called for NOAA to improve prediction capability for rapid intensification and storm track. HAFS development was also enabled by fiscal year 2018 and 2019 hurricane and disaster supplemental funding, and continued acceleration with support from the 2022 Disaster Relief Supplemental Appropriations Act.

HAFS was jointly created by NOAA's National Weather Service Environmental Modeling Center, the Atlantic Oceanographic & Meteorological Laboratory, and NOAA's Cooperative Institute for Marine & Atmospheric Studies.

MIT economist Martin Beraja is co-author of a new research paper showing that China’s increased investments in AI-driven facial-recognition technology both help the regime repress dissent and may drive the technology forward, a mutually reinforcing condition the paper’s authors call an “AI-tocracy.” (Image credit: Jose-Luis Olivares/MIT, with figures from iStock)

MIT econ prof Beraja shows how an 'AI-tocracy' emerges in China

Many scholars, analysts, and other observers have suggested that resistance to innovation is an Achilles’ heel of authoritarian regimes. Such governments can fail to keep up with technological changes that help their opponents; they may also, by stifling rights, inhibit innovative economic activity and weaken the long-term condition of the country. 

But a new study co-led by an MIT professor suggests something quite different. In China, the research finds, the government has increasingly deployed AI-driven facial-recognition technology to suppress dissent; has been successful at limiting protest; and in the process, has spurred the development of better AI-based facial-recognition tools and other forms of software.

“What we found is that in regions of China where there is more unrest, that leads to greater government procurement of facial-recognition AI, subsequently, by local government units such as municipal police departments,” says MIT economist Martin Beraja, who is co-author of a new paper detailing the findings. 

What follows, as the paper notes, is that “AI innovation entrenches the regime, and the regime’s investment in AI for political control stimulates further frontier innovation.”

The scholars call this state of affairs an “AI-tocracy,” describing the connected cycle in which increased deployment of AI-driven technology quells dissent while also boosting the country’s innovation capacity.

The open-access paper, also called “AI-tocracy,” appears in the August issue of the Quarterly Journal of Economics. An abstract of the uncorrected proof was first posted online in March. The co-authors are Beraja, who is the Pentti Kouri Career Development Associate Professor of Economics at MIT; Andrew Kao, a doctoral candidate in economics at Harvard University; David Yang, a professor of economics at Harvard; and Noam Yuchtman, a professor of management at the London School of Economics. 

To conduct the study, the scholars drew on multiple kinds of evidence spanning much of the last decade. To catalog instances of political unrest in China, they used data from the Global Database of Events, Language, and Tone (GDELT) Project, which records news feeds globally. The team turned up 9,267 incidents of unrest between 2014 and 2020. 

The researchers then examined records of almost 3 million procurement contracts issued by the Chinese government between 2013 and 2019, from a database maintained by China’s Ministry of Finance. They found that local governments’ procurement of facial-recognition AI services and complementary public security tools — high-resolution video cameras — jumped significantly in the quarter following an episode of public unrest in that area.

Given that Chinese government officials were responding to public dissent activities by ramping up facial-recognition technology, the researchers then examined a follow-up question: Did this approach work to suppress dissent?

The scholars believe that it did, although as they note in the paper, they “cannot directly estimate the effect” of the technology on political unrest. As one way of getting at that question, they studied the relationship between weather and political unrest in different areas of China. Certain weather conditions are conducive to political unrest. But in prefectures that had already invested heavily in facial-recognition technology, those same conditions were less likely to spark unrest than in prefectures that had not made the same investments. 

In so doing, the researchers also accounted for the possibility that greater relative wealth in some areas might have produced larger investments in AI-driven technologies regardless of protest patterns. The conclusion held: facial-recognition technology was being deployed in response to past protests, and was then reducing subsequent protest levels. 

“It suggests that the technology is effective in chilling unrest,” Beraja says. 

Finally, the research team studied the effects of increased AI demand on China’s technology sector and found the government’s greater use of facial-recognition tools appears to be driving the country’s tech sector forward. For instance, firms that are granted procurement contracts for facial-recognition technologies subsequently produce about 49 percent more software products in the two years after gaining the government contract than they had beforehand. 

“We examine if this leads to greater innovation by facial-recognition AI firms, and indeed it does,” Beraja says.

Such data — from China’s Ministry of Industry and Information Technology — also indicates that AI-driven tools are not necessarily “crowding out” other kinds of high-tech innovation.

Adding it all up, the case of China indicates how autocratic governments can potentially reach a near-equilibrium state in which their political power is enhanced, rather than upended, when they harness technological advances.

“In this age of AI, when the technologies not only generate growth but are also technologies of repression, they can be very useful” to authoritarian regimes, Beraja says. 

The finding also bears on larger questions about forms of government and economic growth. A significant body of scholarly research shows that rights-granting democratic institutions do generate greater economic growth over time, in part by creating better conditions for technological innovation. Beraja notes that the current study does not contradict those earlier findings, but in examining the effects of AI in use, it does identify one avenue through which authoritarian governments can generate more growth than they otherwise would have. 

“This may lead to cases where more autocratic institutions develop side by side with growth,” Beraja adds. 

Other experts in the societal applications of AI say the paper makes a valuable contribution to the field. 

“This is an excellent and important paper that improves our understanding of the interaction between technology, economic success, and political power,” says Avi Goldfarb, the Rotman Chair in Artificial Intelligence and Healthcare and a professor of marketing at the Rotman School of Management at the University of Toronto. “The paper documents a positive feedback loop between the use of AI facial-recognition technology to monitor and suppress local unrest in China and the development and training of AI models. This paper is pioneering research in AI and political economy. As AI diffuses, I expect this research area to grow in importance.”

For their part, the scholars are continuing to work on related aspects of this issue. One forthcoming paper of theirs examines the extent to which China is exporting advanced facial recognition technologies around the world — highlighting a mechanism through which government repression could grow globally.

 

uOttawa-built models put the age of the universe at 26.7 billion years

Our universe could be twice as old as current estimates, according to a new study that challenges the dominant cosmological model and sheds new light on the so-called “impossible early galaxy problem.”

“Our newly-devised model stretches the galaxy formation time by several billion years, making the universe 26.7 billion years old, and not 13.7 as previously estimated,” says author Rajendra Gupta, adjunct professor of physics in the Faculty of Science at the University of Ottawa.

For years, astronomers and physicists have calculated the age of our universe by measuring the time elapsed since the Big Bang and by studying the oldest stars based on the redshift of light coming from distant galaxies. In 2021, thanks to new techniques and advances in technology, the age of our universe was thus estimated at 13.797 billion years using the Lambda-CDM concordance model.

However, many scientists have been puzzled by the existence of stars like Methuselah that appear to be older than the estimated age of our universe, and by the James Webb Space Telescope’s discovery of early galaxies in an advanced state of evolution. These galaxies, existing a mere 300 million years after the Big Bang, appear to have a level of maturity and mass typically associated with billions of years of cosmic evolution. Furthermore, they’re surprisingly small in size, adding another layer of mystery to the equation.

Zwicky’s tired light theory proposes that the redshift of light from distant galaxies is due to the gradual loss of energy by photons over vast cosmic distances. However, it was seen to conflict with observations. Yet Gupta found that “by allowing this theory to coexist with the expanding universe, it becomes possible to reinterpret the redshift as a hybrid phenomenon, rather than purely due to expansion.”
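In such a hybrid picture, the two redshift contributions combine multiplicatively. The relation below is the standard way of composing two independent redshift mechanisms; it is a sketch of the idea, not a formula quoted from Gupta’s study:

```latex
% Composing two redshift mechanisms: the factors multiply, not add.
% z is the observed redshift of a distant galaxy.
(1 + z) = \left(1 + z_{\text{expansion}}\right)\left(1 + z_{\text{tired light}}\right)
```

Under this decomposition, only part of an observed redshift is attributed to cosmic expansion, which is what allows the inferred cosmic timeline to stretch.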

In addition to Zwicky’s tired light theory, Gupta introduces the idea of evolving “coupling constants,” as hypothesized by Paul Dirac. Coupling constants are fundamental physical constants that govern the interactions between particles. According to Dirac, these constants might have varied over time. By allowing them to evolve, the timeframe for forming early galaxies observed by the Webb telescope at high redshifts can be extended from a few hundred million years to several billion years. This provides a more feasible explanation for the advanced level of development and mass observed in these ancient galaxies.

Moreover, Gupta suggests that the traditional interpretation of the “cosmological constant,” which represents the dark energy responsible for the universe’s accelerating expansion, needs revision. Instead, he proposes a constant that accounts for the evolution of the coupling constants. This modification helps address the puzzle of the small galaxy sizes observed in the early universe, bringing the model into better agreement with observations.