Missouri S&T researcher uses visible light for Wi-Fi transmission

Turning a light on and off doesn’t require much thought – click on, click off. But modulating that light – turning it on and off faster than the human eye can perceive – and using the modulated light for Wi-Fi data transmission requires a great deal of thought, and it’s the focus of Dr. Nan Cen’s visible-light communications research.

[Image caption: Dr. Nan Cen’s research uses visible light to provide high-speed Wi-Fi data transfer. Photo by Michael Pierce, Missouri S&T.]

“An advantage of visible-light communication is the largely unregulated spectrum ranging from 375 terahertz to 750 terahertz, which would provide higher data-rate communication than current technologies,” says Cen, an assistant professor of computer science at Missouri S&T. “Another advantage is simplicity – using basic photodetectors to receive the data from standard room lights. The technology is also inherently secure because it’s directional and can be confined within a room.”
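
The core idea – switching a light on and off far faster than the eye can follow, then reading the switching back with a photodetector – amounts to simple on-off keying. The sketch below is a minimal, illustrative Python simulation of that scheme; the sample counts, noise level and function names are assumptions for illustration, not details of Cen’s system.

```python
# A minimal on-off keying sketch: data bits are encoded by switching a light
# on and off, and a photodetector recovers the bits by thresholding the
# received brightness. The noise model and parameters are illustrative
# assumptions only.

import numpy as np

rng = np.random.default_rng(42)

def modulate(bits, samples_per_bit=10):
    """Map each bit to a burst of light samples: on (1.0) or off (0.0)."""
    return np.repeat(np.asarray(bits, dtype=float), samples_per_bit)

def demodulate(signal, samples_per_bit=10, threshold=0.5):
    """Average the photodetector samples for each bit slot and threshold."""
    slots = signal.reshape(-1, samples_per_bit)
    return (slots.mean(axis=1) > threshold).astype(int)

message_bits = rng.integers(0, 2, size=64)
light = modulate(message_bits)

# Photodetector view: the light level plus a little ambient/electronic noise.
received = light + rng.normal(scale=0.1, size=light.shape)

recovered = demodulate(received)
print("bit errors:", int(np.sum(recovered != message_bits)))
```

In a real system the switching would happen millions of times per second, so the room lighting appears steady to the eye while still carrying data.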

Cen says there currently isn’t enough spectrum to handle the growth of interconnected devices within the Internet of Things. She adds that more technology is needed to process the higher data rates available with visible light communication, but she believes that devices could be equipped with light sensors in the next 10-20 years. That would be a boon in rural areas where providing broadband access presents particular challenges.

“Many rural residents use a Wi-Fi hot spot for connectivity, but data speed is slow,” says Cen. “Telephone lines also provide slow data rates, and fiber-optic installation is expensive.”

Cen is researching the possibility of using lasers between power poles in rural areas, with solar panels acting as the receiver for homes. Instead of a router, standard household lamps would transmit data. It is fascinating work, but Cen is one of only a few U.S. researchers working on visible light communication.

“Most U.S. researchers are focused on the terahertz band,” she says. “Current terahertz-band communication is radio-frequency based and visible-light communication is not, although there are some common characteristics.”

Cen says a drawback of visible light communication is its short propagation range. Using lasers, the range could be several kilometers, but with regular household light, the range is a few meters. Visible light is also easy to block. Cen says researchers may develop novel algorithms to extend the propagation range and new technologies to overcome blockages.
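
A rough sense of why the range is so short with ordinary lighting comes from the standard line-of-sight channel model for a diffuse (Lambertian) LED, in which received optical power falls off with the square of distance, whereas a collimated laser beam spreads very little. The toy calculation below assumes a 1-watt source and a 1 cm² photodiode purely for illustration.

```python
# Toy calculation: received optical power from a Lambertian LED falls off as
# 1/d^2, which is why ordinary room lighting only carries data a few meters.
# The transmit power and detector area below are assumed values.

import numpy as np

def led_received_power(p_tx, distance_m, detector_area_m2, lambertian_order=1):
    """Line-of-sight power at a detector facing a Lambertian LED head-on."""
    gain = (lambertian_order + 1) * detector_area_m2 / (2 * np.pi * distance_m**2)
    return p_tx * gain

p_tx = 1.0      # watts of optical power (assumed)
area = 1e-4     # 1 cm^2 photodiode (assumed)

for d in (1, 3, 10, 100):
    print(f"{d:>4} m: received {led_received_power(p_tx, d, area):.2e} W")
```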

Applications could range from transportation to virtual reality to medical settings where electromagnetic interference limits the use of radio-frequency communication. Eventually, drones could provide visible light communication to remote or extreme environments where infrastructure doesn’t exist. But Cen says the U.S. needs more researchers in the area, plus industry interest to produce compatible devices.

“Visible light communication has a broad range of applications,” she says. “We need to demonstrate that this is feasible technology that we definitely need.”

NC State researchers use AI to predict flood damage risk

In a new study, North Carolina State University researchers used artificial intelligence to predict where flood damage is likely to happen in the continental United States, suggesting that recent flood maps from the Federal Emergency Management Agency do not capture the full extent of flood risk.

[Image caption: Map of the United States showing predicted average flood damage risk by state or district. Credit: Elyssa Collins.]

In the study, published in Environmental Research Letters, researchers found a high probability of flood damage – including monetary damage, human injury and loss of life – for more than a million square miles of land across the United States over a 14-year period. That area exceeds the flood risk zones identified on FEMA’s maps by more than 790,000 square miles.

“We’re seeing that there’s a lot of flood damage being reported outside of the 100-year floodplain,” said the study’s lead author Elyssa Collins, a doctoral candidate in the NC State Center for Geospatial Analytics. “There are a lot of places that are susceptible to flooding, and because they’re outside the floodplain, that means they do not have to abide by insurance, building code and land-use requirements that could help protect people and property.”

It can cost FEMA as much as $11.8 billion to create national Flood Insurance Rate Maps, which show whether an area has at least a 1% chance of flooding in a year, according to a 2020 report from the Association of State Floodplain Managers. Researchers say their method of using machine learning tools to estimate flood risk offers a way of rapidly updating flood maps as conditions change or more information becomes available.
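
The “1% chance in a year” definition understates how the risk accumulates over time. A quick back-of-the-envelope calculation (assuming independent years, which is itself a simplification) shows the chance of at least one flood over a 30-year mortgage:

```python
# Back-of-the-envelope: cumulative chance of flooding for a property with a
# 1% annual flood risk (the "100-year floodplain" threshold), assuming each
# year is independent of the last.
annual_chance = 0.01
years = 30
cumulative = 1 - (1 - annual_chance) ** years
print(f"Chance of at least one flood in {years} years: {cumulative:.0%}")  # ~26%
```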

“This is the first spatially complete map of flood damage probability for the United States; wall-to-wall information that can be used to learn more about flood risk in vulnerable, underrepresented communities,” said Ross Meentemeyer, Goodnight Distinguished Professor of Geospatial Analytics at NC State.

To create their computer models, researchers used reported flood damage data for the United States, along with other information such as whether land is close to a river or stream, land cover type, soil type and precipitation. The computer was able to “learn” from actual reports of damage to predict the likelihood of flood damage for each pixel of mapped land. They created separate models for each watershed in the United States.

“Our models are not based in physics or the mechanics of how water flows; we’re using machine learning methods to create predictions,” Collins said. “We developed models that relate predictors – variables related to flood damage such as extreme precipitation, topography, the relation of your home to a river – to a data set of flood damage reports from the National Oceanic and Atmospheric Administration. It’s very fast – our models for the U.S. watersheds ran in an average of five hours.”
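
A minimal sketch of that setup is shown below: a classifier relating environmental predictors to damage labels, fit separately for each watershed, then used to score every pixel. The random-forest choice, the column names and the synthetic data are stand-ins for illustration, not the study’s actual code or dataset.

```python
# Illustrative sketch: fit one model per watershed that relates environmental
# predictors to reported flood damage, then predict a damage probability for
# every pixel. Data and feature names are synthetic placeholders.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical pixel table: predictors plus a label derived from damage reports.
pixels = pd.DataFrame({
    "watershed_id":      rng.integers(0, 3, size=3000),
    "dist_to_stream_m":  rng.exponential(500, size=3000),
    "elevation_m":       rng.normal(200, 50, size=3000),
    "extreme_precip_mm": rng.gamma(2.0, 30.0, size=3000),
    "damage_reported":   rng.integers(0, 2, size=3000),   # stand-in labels
})

features = ["dist_to_stream_m", "elevation_m", "extreme_precip_mm"]

# Fit a separate model for each watershed, as the study describes.
models = {}
for ws_id, group in pixels.groupby("watershed_id"):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(group[features], group["damage_reported"])
    models[ws_id] = clf

# Predicted probability of flood damage for every pixel in one watershed.
ws0 = pixels[pixels["watershed_id"] == 0]
damage_prob = models[0].predict_proba(ws0[features])[:, 1]
print("mean predicted damage probability:", round(float(damage_prob.mean()), 3))
```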

The actual flood damage reports they used to “train” the models were publicly available reports from NOAA made between December 2006 and May of 2020. Compared with recent FEMA maps downloaded in 2020, 84.5% of the damage reports they evaluated were not within the agency’s high-risk flood areas. The majority, at 68.3%, were located outside of the high-risk floodplain, while 16.2% were in locations unmapped by FEMA.

When they ran their computer models to determine flood damage risk, they found a high probability of flood damage for more than 1.01 million square miles across the United States, while the mapped area in FEMA’s 100-year flood plain is about 221,000 square miles. Researchers said there are factors that could help explain why the differences were so large, including that their machine-learning-based model assessed damage from floods of any frequency, while FEMA only includes flooding that would occur from storms that have a 1% chance of happening in any given year.

“Potentially, FEMA is underestimating flood damage exposure,” Collins said.

One of the biggest drivers of flood damage risk was proximity to a stream, along with elevation and the average amount of extreme precipitation per year. The three Census regions with the highest probability were in the Southeast. Louisiana, Missouri, the District of Columbia, Florida and Mississippi had the highest risk among states and districts in the continental United States. Of the 30 highest-risk counties, North Carolina had three: Dare, Hyde and Tyrrell.

In their model, researchers used historical climate data. In the future, they plan to account for climate change.

In the meantime, researchers say their findings, which will be publicly accessible, could be useful for helping policymakers involved in land-use planning. They also represent a proof-of-concept method for efficiently updating flood maps in the future.

“There is still work to be done to make this model more dynamic,” Collins said. “But it’s part of a shift in thinking about how we approach these problems in a more cost-effective and computationally efficient manner. Inevitably, with climate change, we’re going to have to update these maps and models as events occur. It would be helpful to have future estimates that we can use to prepare for whatever is to come.”

UK researchers offer radical rethink of how to improve AI in the future

Computer scientists at the University of Essex have devised a radically different approach to improving artificial intelligence (AI) in the future.

The Essex team hopes the research, published in the Journal of Machine Learning Research, will provide a backbone for the next generation of AI and machine learning breakthroughs.

This could translate into improvements in everything from driverless cars and smartphones that better understand voice commands, to enhanced automatic medical diagnoses and drug discovery.

“Artificial intelligence research ultimately has the goal of producing completely autonomous and intelligent machines which we can converse with and will do tasks for us, and this newly published work accelerates our progress towards that,” explained co-lead author Dr. Michael Fairbank, from Essex’s School of Computer Science and Electronic Engineering.

The recent impressive breakthroughs in AI around vision tasks, speech recognition and language translation have involved "deep learning", which means training multi-layered artificial neural networks to solve a task. However, training these deep neural networks is computationally expensive, requiring huge numbers of training examples and large amounts of computing time.

The Essex team, which includes Professor Luca Citi and Dr. Spyros Samothrakis, has devised a radically different approach to training these deep neural networks.

“Our new method, which we call Target Space, provides researchers with a step-change in the way they can improve and build their AI creations,” added Dr. Fairbank. “Target Space is a paradigm-changing view, which turns the training process of these deep neural networks on its head, ultimately enabling the current progress in AI developments to happen faster.”

The standard way to train a neural network is to repeatedly make tiny adjustments to the connection strengths between the neurons in the network. The Essex team has taken a different approach: instead of tweaking the connection strengths between neurons, the new “target-space” method tweaks the firing strengths of the neurons themselves.
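
The sketch below is a toy Python illustration of that idea: a tiny network is parameterized by target values for its hidden neurons on a fixed batch of inputs, and the connection weights are recovered from those targets by least squares whenever they are needed. The published method trains the targets by gradient descent; this self-contained version substitutes a crude accept-if-better random search, and every name and detail in it is our own assumption rather than the team’s implementation.

```python
# Toy "target space" illustration: optimize per-layer target values rather
# than the weights themselves; weights are recovered from targets by least
# squares. The accept-if-better search stands in for the gradient-based
# training used in the actual method.

import numpy as np

rng = np.random.default_rng(0)

# A small regression problem: learn y = sin(2x) on a fixed batch of inputs.
x = np.linspace(-2, 2, 32).reshape(-1, 1)
X = np.hstack([x, np.ones_like(x)])            # inputs with a bias column
y = np.sin(2 * x)

n_hidden = 8
targets = rng.normal(size=(len(X), n_hidden))  # the trainable parameters

def loss_from_targets(t):
    # Least-squares solve for hidden weights that best realize the targets.
    w1, *_ = np.linalg.lstsq(X, t, rcond=None)
    h = np.tanh(X @ w1)                         # realized hidden activations
    H = np.hstack([h, np.ones((len(h), 1))])
    # Output layer fitted directly to the labels (another least-squares solve).
    w2, *_ = np.linalg.lstsq(H, y, rcond=None)
    return float(np.mean((H @ w2 - y) ** 2))

best = loss_from_targets(targets)
for _ in range(500):                            # toy search over target space
    candidate = targets + 0.1 * rng.normal(size=targets.shape)
    candidate_loss = loss_from_targets(candidate)
    if candidate_loss < best:
        targets, best = candidate, candidate_loss

print(f"mean-squared error after search: {best:.4f}")
```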

Professor Citi added: “This new method stabilizes the learning process considerably, by a process which we call cascade untangling. This allows the neural networks being trained to be deeper, and therefore more capable, and at the same time potentially requiring fewer training examples and less computing resources. We hope this work will provide a backbone for the next generation of artificial intelligence and machine-learning breakthroughs.”

The next steps for the research are to apply the method to various new academic and industrial applications.