Brazil identifies flood-prone areas of cities

The study combined models that predict urban expansion and land-use changes with hydrodynamic models, and the results were validated using actual data for São Caetano do Sul, a city in metropolitan São Paulo.

Scientists affiliated with the National Space Research Institute (INPE) in Brazil have combined models that predict urban expansion and land-use changes with hydrodynamic models to create a methodology capable of supplying geographical information that identifies flood-prone areas of cities, especially those vulnerable to the impact of extremely heavy rainfall.

The groundbreaking study was based on data for São Caetano do Sul, a city in metropolitan São Paulo, but the methodology can be used by other cities to devise public policies and make decisions in addressing the impacts of these phenomena to avoid deaths of residents and destruction of buildings and infrastructure.

FAPESP funded the study via two projects (20/09215-3 and 21/11435-4). Preliminary results are reported in an article published in the journal Water. They were part of the Ph.D. research of Elton Vicente Escobar Silva, the first author of the article and a researcher at INPE.

In partnership with the Federal University of Paraíba (UFPB) and the Federal University of Rio Grande do Sul (UFRGS), and with local bodies, the researchers “tested” the modeling methodology using civil defense data for the city relating to a flood that occurred on March 10, 2019, when three people drowned and the floodwaters reached a depth of almost 2 meters in several streets. 

“I’ve worked with modeling for years, focusing on changes in land use and cover in urban areas. I wanted to combine this with flood simulation. The opportunity arose in connection with Elton’s project,” Cláudia Maria de Almeida, joint first author of the article and Silva’s thesis advisor, told Agência FAPESP. She is also a researcher at INPE, and she heads the institute’s urban remote sensing unit (CITIES Laboratory).

“The study innovated by combining hydrodynamic modeling for urban areas with the complexity of the underground runoff drainage network, and by using real data to calibrate and validate the model. We combined very high-resolution spatial imaging and deep learning. All this is linked to big data and smart cities,” she said.

Discussion of smart cities began in 2010, initially involving technological issues such as integrated traffic light control systems and bus stops with Wi-Fi. Sustainability and quality of life for residents have been included more recently.

According to the United Nations, the world population reached 8 billion in 2022, with 56% living in urban areas. The population is expected to rise to 9.7 billion by 2050, with 6.6 billion (68%) living in cities.

Cities are currently expanding at twice the rate of population growth. In the next three decades, urban areas worldwide are set to total more than 3 million square kilometers, equivalent to the territory of India.

City planning is not advancing at the same pace. For example, rampant urbanization incurs changes in land use and cover, expands impermeable surfaces, and alters hydrology. In conjunction with the higher frequency of extreme weather events due to climate change, this exposes cities to flooding and landslides in the rainy season.

Cross-tabulation

For hydrodynamic modeling, the researchers used a software package called HEC-RAS (Hydrologic Engineering Center’s River Analysis System), which simulates water flow and surface elevation, as well as sediment transport.

To identify flood-prone areas, they used two digital terrain models (DTMs) with different spatial resolutions of 0.5 m and 5 m. A DTM is a mathematical representation of the topography of the Earth’s surface, excluding all vertical objects. The model can be manipulated by computer programs and is typically visualized as a grid in which an elevation value is assigned to each pixel. Vegetation, buildings, and other characteristics are digitally removed. In this study, the researchers used four computational time intervals (1, 15, 30, and 60 seconds) in their analysis of the simulations.
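The grid structure described above can be illustrated with a minimal sketch (not the study’s code; the cell values, function name, and water level below are hypothetical): a DTM stored as a 2D array of elevations, queried for cells that would lie at or below a given water surface.

```python
# Illustrative sketch only: a DTM as a grid of elevation values,
# one per pixel, with vegetation and buildings already removed.
# Elevations and the flood level here are made-up toy values.

def flood_prone_cells(dtm, water_level):
    """Return (row, col) indices of cells at or below a water level."""
    return [
        (r, c)
        for r, row in enumerate(dtm)
        for c, elevation in enumerate(row)
        if elevation <= water_level
    ]

# A toy 4x4 DTM (elevations in meters above a local datum).
dtm = [
    [12.0, 11.5, 11.0, 10.8],
    [11.8, 10.9, 10.2,  9.9],
    [11.5, 10.4,  9.7,  9.5],
    [11.2, 10.1,  9.6,  9.4],
]

# Cells at or below 10.0 m would be inundated by a 10 m water surface.
low_cells = flood_prone_cells(dtm, 10.0)
```

A real hydrodynamic model such as HEC-RAS does far more than this static threshold query, routing flow between cells over time, but the underlying terrain input has this grid form.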

The best results were obtained from the simulations with a spatial resolution of 5m, which displayed maps with the highest coverage of flood-prone areas (278 out of 286 points, or 97.2%) in the shortest computation time. They identified the potential for flooding in areas not detected by civil defense authorities or citizens of São Caetano do Sul during actual flood events.

“We set out to create a methodology to support decision-makers. We simulated projected land-use changes several years ahead and their impact on the network of watercourses. On this basis, it’s possible to run simulations with scenarios. An example would be specifying millimeters of rain in a given timeframe to predict the impact on an area of a city in terms of flooding. Public administrators can use this capability to make decisions, avoiding economic damage as well as loss of life,” Silva said.

The researchers stressed the need for cities to update their databases for this type of analysis, as did São Caetano do Sul. “The model works with and is fed by data. It’s important for cities to have up-to-date information, including records relating to extreme cases, such as major floods and inundations,” Almeida said.

São Caetano do Sul is part of a dense conurbation that encompasses São Paulo city as well as the neighboring cities of Santo André and São Bernardo do Campo. It has had many floods and inundations – 29 between 2000 and 2022 alone, according to the researchers.

On the other hand, it ranks first among all 5,570 municipalities in Brazil for sustainability based on the Sustainable Development of Cities Index – Brazil (IDSC-BR), part of a series of reports produced by the United Nations Sustainable Development Solutions Network (SDSN) to monitor implementation of the Sustainable Development Goals (SDGs) in member countries.

With some 162,000 inhabitants, it has a comprehensive wastewater treatment system connected to 100% of homes. Almost all urban dwellings (95.4%) are located on public streets with trees, and a reasonably large proportion (37%) are on adequately urbanized streets (paved and with sidewalks, curbs, and drains), according to IBGE, Brazil’s census and statistics bureau.

Should robots be given a human conscience?

Humans have curated the best of human intelligence to inform AI, in the hope of creating flawless machines – but could the flaws we left out be the missing pieces needed to ensure robots do not go rogue?

Modern-day society relies intrinsically on automated systems and artificial intelligence. These technologies are embedded in our daily routines and show no signs of slowing; in fact, the use of robotic and automated assistance is ever-increasing.

Such pervasive use of AI presents technologists and developers with two ethical dilemmas: how do we build robots that behave in line with our values, and how do we stop them from going rogue?

One author suggests an under-explored option: coding more humanity into robots, gifting them traits such as empathy and compassion.

Is humanity the answer?

In a new book called Robot Souls, to be published in August, academic Dr. Eve Poole OBE explores the idea that the solution to society’s conundrum about how to make sure AI is ethical lies in human nature.

She argues that in the bid for perfection, humans stripped out the ‘junk code’, including emotions, free will, and a sense of purpose.

She said: “It is this ‘junk’ which is at the heart of humanity. Our junk code consists of human emotions, our propensity for mistakes, our inclination to tell stories, our uncanny sixth sense, our capacity to cope with uncertainty, an unshakeable sense of our own free will, and our ability to see meaning in the world around us.

“This junk code is in fact vital to human flourishing, because behind all of these flaky and whimsical properties lies a coordinated attempt to keep our species safe. Together they act as a range of ameliorators with a common theme: they keep us in the community so that there is safety in numbers.”

Robot souls

With AI increasingly taking up more decision-making roles in our daily lives, along with rising concerns about bias and discrimination in AI, Dr. Poole argues the answer might be in the stuff we tried to strip out of autonomous machines in the first place.

She said: “If we can decipher that code, the part that makes us all want to survive and thrive together as a species, we can share it with the machines. Giving them to all intents and purposes a ‘soul’.”

In the new book, Poole suggests a series of next steps to make this a reality, including agreement on a rigorous regulation process, an immediate ban on autonomous weapons, and a licensing regime with rules that reserve any final decision over the life or death of a human to a fellow human.

She argues we should also agree on the criteria for legal personhood and a road map for AI toward it.

The human blueprint

“Because humans are flawed we disregarded a lot of characteristics when we built AI,” Poole explains. “It was assumed that robots with features like emotions and intuition, that made mistakes and looked for meaning and purpose, would not work as well.

“But on considering why all these irrational properties are there, it seems that they emerge from the source code of the soul. Because it is actually this ‘junk’ code that makes us human and promotes the kind of reciprocal altruism that keeps humanity alive and thriving.”

Robot Souls looks at developments in AI and reviews the emergence of ideas of consciousness and the soul.

It places our ‘junk code’ in this context and argues that it is time to foreground that code and use it to look again at how we are programming AI.

New research in structured light means researchers can exploit the many patterns of light as an encoding alphabet without worrying about how noisy the channel is.

South African researchers demo noise-free communication with structured light

A new approach to optical communication that can be deployed with conventional technology.

The patterns of light hold tremendous promise for a large encoding alphabet in optical communications, but progress is hindered by their susceptibility to distortion, such as in atmospheric turbulence or bent optical fiber. Now researchers at the University of the Witwatersrand (Wits) have outlined a new optical communication protocol that exploits spatial patterns of light for multi-dimensional encoding in a manner that does not require the patterns to be recognized, overcoming the prior limitation of modal distortion in noisy channels. The result is a new state of the art in encoding: over 50 vectorial patterns of light sent virtually noise-free across a turbulent atmosphere, opening a new approach to high-bit-rate optical communication.

Published this week in Laser & Photonics Reviews, the Wits team from the Structured Light Laboratory in the Wits School of Physics used a new invariant property of vectorial light to encode information. This quantity, which the team calls “vectorness”, scales from 0 to 1 and remains unchanged when passing through a noisy channel. Unlike traditional amplitude modulation, which is 0 or 1 (only a two-letter alphabet), the team used the invariance to partition the 0-to-1 vectorness range into more than 50 parts (0, 0.02, 0.04, and so on up to 1) for a 50-letter alphabet. Because the channel over which the information is sent does not distort the vectorness, sender and receiver will always agree on the value, hence noise-free information transfer.
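The quantization scheme described above can be sketched in a few lines (an illustrative reconstruction, not the Wits team’s code; the function names and rounding choice are assumptions): each letter of the alphabet maps to one evenly spaced vectorness level, and the receiver recovers the letter by snapping its measurement to the nearest level.

```python
# Illustrative sketch of the quantized vectorness alphabet.
# Step size matches the article's example: 0, 0.02, 0.04, ... up to 1.

STEP = 0.02
LEVELS = round(1 / STEP) + 1   # 51 evenly spaced levels

def encode(symbol):
    """Map a symbol index (0..LEVELS-1) to a target vectorness."""
    if not 0 <= symbol < LEVELS:
        raise ValueError("symbol out of range")
    return symbol * STEP

def decode(measured_vectorness):
    """Snap a measured vectorness to the nearest symbol index."""
    return min(LEVELS - 1, max(0, round(measured_vectorness / STEP)))

# Because the channel leaves vectorness unchanged, the receiver
# recovers the symbol even though the spatial pattern itself may
# arrive badly distorted; here a small measurement error stands in
# for detector imperfection.
sent = 37
received = decode(encode(sent) + 0.004)
```

This also illustrates the point made later by Professor Forbes: the practical limit on alphabet size is detector precision (how finely the levels can be resolved), not channel noise.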

The critical hurdle the team overcame was to use patterns of light in a manner that does not require them to be “recognized”, so that the natural distortion of noisy channels can be ignored. Instead, the invariant quantity just “adds up” light in specialized measurements, revealing a quantity that doesn’t see distortion at all.

“This is a very exciting advance because we can finally exploit the many patterns of light as an encoding alphabet without worrying about how noisy the channel is,” says Professor Andrew Forbes, from the Wits School of Physics. “In fact, the only limit to how big the alphabet can be is how good the detectors are and not at all influenced by the noise of the channel.”

Lead author and Ph.D. candidate Keshaan Singh added: “To create and detect the vectorness modulation requires nothing more than conventional communications technology, allowing our modal (pattern) based protocol to be deployed immediately in real-world settings.”

The team has already started demonstrations in optical fiber and in fast links across free space and believes that the approach can work in other noisy channels, including underwater.