Brazil identifies flood-prone areas of cities

The study combined models that predict urban expansion and land-use changes with hydrodynamic models, and the results were validated using actual data for São Caetano do Sul, a city in metropolitan São Paulo.

Scientists affiliated with the National Space Research Institute (INPE) in Brazil have combined models that predict urban expansion and land-use changes with hydrodynamic models to create a methodology capable of supplying geographical information that identifies flood-prone areas of cities, especially those vulnerable to the impact of extremely heavy rainfall.

The groundbreaking study was based on data for São Caetano do Sul, a city in metropolitan São Paulo, but the methodology can be used by other cities to devise public policies and make decisions in addressing the impacts of these phenomena to avoid deaths of residents and destruction of buildings and infrastructure.

FAPESP funded the study via two projects (20/09215-3 and 21/11435-4). Preliminary results are reported in an article published in the journal Water. They were part of the Ph.D. research of Elton Vicente Escobar Silva, the first author of the article and a researcher at INPE.

In partnership with the Federal University of Paraíba (UFPB) and the Federal University of Rio Grande do Sul (UFRGS), and with local bodies, the researchers “tested” the modeling methodology using civil defense data for the city relating to a flood that occurred on March 10, 2019, when three people drowned and the floodwaters reached a depth of almost 2 meters in several streets. 

“I’ve worked with modeling for years, focusing on changes in land use and cover in urban areas. I wanted to combine this with flood simulation. The opportunity arose in connection with Elton’s project,” Cláudia Maria de Almeida, joint first author of the article and Silva’s thesis advisor, told Agência FAPESP. She is also a researcher at INPE, and she heads the institute’s urban remote sensing unit (CITIES Laboratory).

“The study innovated by combining hydrodynamic modeling for urban areas with the complexity of the underground runoff drainage network, and by using real data to calibrate and validate the model. We combined very high-resolution spatial imaging and deep learning. All this is linked to big data and smart cities,” she said.

Discussion of smart cities began in 2010, initially involving technological issues such as integrated traffic light control systems and bus stops with Wi-Fi. Sustainability and quality of life for residents have been included more recently.

According to the United Nations, the world population reached 8 billion in 2022, with 56% living in urban areas. The population is expected to rise to 9.7 billion by 2050, with 6.6 billion (68%) living in cities.

Cities are currently expanding at twice the rate of population growth. In the next three decades, urban areas worldwide are set to total more than 3 million square kilometers, equivalent to the territory of India.

City planning is not advancing at the same pace. For example, rampant urbanization incurs changes in land use and cover, expands impermeable surfaces, and alters hydrology. In conjunction with the higher frequency of extreme weather events due to climate change, this exposes cities to flooding and landslides in the rainy season.

Cross-tabulation

For hydrodynamic modeling, the researchers used a software package called HEC-RAS (Hydrologic Engineering Center’s River Analysis System), which simulates water flow and surface elevation, as well as sediment transport.

To identify flood-prone areas, they used two digital terrain models (DTMs) with different spatial resolutions of 0.5 m and 5 m. A DTM is a mathematical representation of the topography of the Earth’s surface, excluding all vertical objects. The model can be manipulated by computer programs and is typically visualized as a grid in which an elevation value is assigned to each pixel. Vegetation, buildings, and other features are digitally removed. In this study, the researchers used four computational time intervals (1, 15, 30, and 60 seconds) in their analysis of the simulations.
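To make the DTM concept concrete, here is a minimal, purely illustrative Python sketch (all values are made up, and this is not HEC-RAS code): the terrain is a grid of ground elevations, and flood depth at each cell follows from subtracting that elevation from a simulated water surface elevation.

```python
import numpy as np

# Hypothetical illustration: a DTM is just a grid of ground elevations (meters).
# The cell size sets the spatial resolution (e.g., 0.5 m or 5 m per pixel).
dtm_5m = np.array([
    [742.1, 741.8, 741.5],
    [741.9, 741.2, 740.8],
    [741.6, 740.9, 740.4],
])  # elevations in meters above sea level (made-up values)

# A hydrodynamic model such as HEC-RAS outputs a water surface elevation (WSE)
# per cell and time step; flood depth is simply WSE minus ground elevation.
wse = np.full_like(dtm_5m, 742.0)

depth = np.clip(wse - dtm_5m, 0.0, None)   # negative values mean dry cells
flooded = depth > 0.10                      # e.g., flag cells deeper than 10 cm

print(depth)
print("flood-prone cells:", int(flooded.sum()), "of", flooded.size)
```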

The best results were obtained from the simulations with a spatial resolution of 5 m, which produced maps with the highest coverage of flood-prone points (278 out of 286, or 97.2%) in the shortest computation time. These maps also identified the potential for flooding in areas not reported by civil defense authorities or residents of São Caetano do Sul during actual flood events.
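For illustration only, a coverage figure of this kind could be computed along the following lines, with a hypothetical flood raster and made-up observed points standing in for the civil defense records:

```python
import numpy as np

# Hypothetical sketch: compare a simulated flood extent with observed flood points.
# 'flooded' is a boolean raster from the simulation; observed points are given
# as (row, col) cell indices derived from civil defense records (made up here).
flooded = np.zeros((100, 100), dtype=bool)
flooded[40:80, 20:60] = True                      # simulated flood extent

observed_points = [(45, 30), (50, 55), (10, 10)]  # illustrative coordinates

hits = sum(bool(flooded[r, c]) for r, c in observed_points)
coverage = 100.0 * hits / len(observed_points)
print(f"coverage: {hits}/{len(observed_points)} points ({coverage:.1f}%)")
# In the study, the 5 m simulation covered 278 of 286 points (97.2%).
```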

“We set out to create a methodology to support decision-makers. We simulated projected land-use changes several years ahead and their impact on the network of watercourses. On this basis, it’s possible to run simulations with scenarios. An example would be specifying millimeters of rain in a given timeframe to predict the impact on an area of a city in terms of flooding. Public administrators can use this capability to make decisions, avoiding economic damage as well as loss of life,” Silva said.

The researchers stressed the need for cities to update their databases for this type of analysis, as did São Caetano do Sul. “The model works with and is fed by data. It’s important for cities to have up-to-date information, including records relating to extreme cases, such as major floods and inundations,” Almeida said.

São Caetano do Sul is part of a dense conurbation that encompasses São Paulo city as well as the neighboring cities of Santo André and São Bernardo do Campo. It has had many floods and inundations – 29 between 2000 and 2022 alone, according to the researchers.

On the other hand, it ranks first among all 5,570 municipalities in Brazil for sustainability based on the Sustainable Development of Cities Index – Brazil (IDSC-BR), part of a series of reports produced by the United Nations Sustainable Development Solutions Network (SDSN) to monitor implementation of the Sustainable Development Goals (SDGs) in member countries.

With some 162,000 inhabitants, it has a comprehensive wastewater treatment system connected to 100% of homes. Almost all urban dwellings (95.4%) are located on public streets with trees, and a reasonably large proportion (37%) are on adequately urbanized streets (paved and with sidewalks, curbs, and drains), according to IBGE, Brazil’s census and statistics bureau.

Should robots be given a human conscience?

Humans have curated the best of human intelligence to inform AI, with the hopes of creating flawless machines – but could the flaws we left out be the missing pieces needed to ensure robots do not go rogue?

Modern-day society relies intrinsically on automated systems and artificial intelligence. It is embedded in our daily routines and shows no signs of slowing; in fact, the use of robotic and automated assistance is ever-increasing.

Such pervasive use of AI presents technologists and developers with two ethical dilemmas: how do we build robots that behave in line with our values, and how do we stop them from going rogue?

One author suggests that an option not explored enough is to code more humanity into robots, endowing them with traits such as empathy and compassion.

Is humanity the answer?

In a new book called Robot Souls, to be published in August, academic Dr. Eve Poole OBE explores the idea that the solution to society’s conundrum about how to make sure AI is ethical lies in human nature.

She argues that in the bid for perfection, humans stripped out the ‘junk code’, including emotions, free will, and a sense of purpose.

She said: “It is this ‘junk’ which is at the heart of humanity. Our junk code consists of human emotions, our propensity for mistakes, our inclination to tell stories, our uncanny sixth sense, our capacity to cope with uncertainty, an unshakeable sense of our own free will, and our ability to see meaning in the world around us.

“This junk code is in fact vital to human flourishing, because behind all of these flaky and whimsical properties lies a coordinated attempt to keep our species safe. Together they act as a range of ameliorators with a common theme: they keep us in the community so that there is safety in numbers.”

Robot souls

With AI taking on more and more decision-making roles in our daily lives, and with rising concerns about bias and discrimination in AI, Dr. Poole argues the answer might lie in the very stuff we tried to strip out of autonomous machines in the first place.

She said: “If we can decipher that code, the part that makes us all want to survive and thrive together as a species, we can share it with the machines. Giving them to all intents and purposes a ‘soul’.”

In the new book, Poole suggests a series of next steps to make this a reality, including agreeing on a rigorous regulation process, an immediate ban on autonomous weapons, and a licensing regime whose rules reserve any final decision over the life and death of a human for a fellow human.

She argues we should also agree on the criteria for legal personhood and a road map for AI toward it.

The human blueprint

“Because humans are flawed we disregarded a lot of characteristics when we built AI,” Poole explains. “It was assumed that robots with features like emotions and intuition, that made mistakes and looked for meaning and purpose, would not work as well.

“But on considering why all these irrational properties are there, it seems that they emerge from the source code of the soul. Because it is actually this ‘junk’ code that makes us human and promotes the kind of reciprocal altruism that keeps humanity alive and thriving.”

Robot Souls looks at developments in AI and reviews the emergence of ideas of consciousness and the soul.

It places our ‘junk code’ in this context and argues that it is time to foreground that code and use it to look again at how we are programming AI.

New research in structured light means researchers can exploit the many patterns of light as an encoding alphabet without worrying about how noisy the channel is.

South African researchers demo noise-free communication with structured light

A new approach to optical communication that can be deployed with conventional technology.

The patterns of light hold tremendous promise for a large encoding alphabet in optical communications, but progress is hindered by their susceptibility to distortion, such as in atmospheric turbulence or bent optical fiber. Now researchers at the University of the Witwatersrand (Wits) have outlined a new optical communication protocol that exploits spatial patterns of light for multi-dimensional encoding in a manner that does not require the patterns to be recognized, thus overcoming the prior limitation of modal distortion in noisy channels. The result is a new state of the art in encoding: over 50 vectorial patterns of light sent virtually noise-free across a turbulent atmosphere, opening a new approach to high-bit-rate optical communication.

Published this week in Laser & Photonics Reviews, the Wits team from the Structured Light Laboratory in the Wits School of Physics used a new invariant property of vectorial light to encode information. This quantity, which the team calls “vectorness”, scales from 0 to 1 and remains unchanged when passing through a noisy channel. Unlike traditional amplitude modulation, which is simply 0 or 1 (only a two-letter alphabet), the team used the invariance to partition the 0-to-1 vectorness range into more than 50 parts (0, 0.02, 0.04, and so on up to 1) for an alphabet of more than 50 letters. Because the channel over which the information is sent does not distort the vectorness, both sender and receiver will always agree on the value, hence noise-free information transfer.
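A minimal sketch of the alphabet idea (the function names and the small detector-error value below are hypothetical, not the team's code; the real vectorness is an optical quantity measured from structured light): quantize the 0-to-1 range into evenly spaced levels, send one level per pattern, and decode by snapping the measured value back to the nearest level.

```python
# Minimal, illustrative sketch of a vectorness alphabet (values are hypothetical).
N_LEVELS = 51                      # 0.00, 0.02, ..., 1.00 -> a 51-letter alphabet
STEP = 1.0 / (N_LEVELS - 1)

def encode(symbol: int) -> float:
    """Map a symbol index (0..50) to a target vectorness value in [0, 1]."""
    assert 0 <= symbol < N_LEVELS
    return symbol * STEP

def decode(measured_vectorness: float) -> int:
    """Recover the symbol by snapping the measured value to the nearest level.
    Because vectorness is invariant through the noisy channel, sender and
    receiver agree on this value even after strong turbulence."""
    return round(measured_vectorness / STEP)

# Usage: each transmitted light pattern carries log2(51), roughly 5.7 bits.
sent = 37
received = decode(encode(sent) + 0.003)   # small detector error, not channel noise
print(sent, received)                     # 37 37
```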

The critical hurdle the team overcame was to use patterns of light in a manner that does not require them to be “recognized”, so that the natural distortion of noisy channels can be ignored. Instead, the invariant quantity just “adds up” light in specialized measurements, revealing a quantity that does not see distortion at all.

“This is a very exciting advance because we can finally exploit the many patterns of light as an encoding alphabet without worrying about how noisy the channel is,” says Professor Andrew Forbes, from the Wits School of Physics. “In fact, the only limit to how big the alphabet can be is how good the detectors are and not at all influenced by the noise of the channel.”

Lead author and Ph.D. candidate Keshaan Singh added: “To create and detect the vectorness modulation requires nothing more than conventional communications technology, allowing our modal (pattern) based protocol to be deployed immediately in real-world settings.”

The team has already started demonstrations in optical fiber and in fast links across free space and believes that the approach can work in other noisy channels, including underwater.

Dutch scientists develop artificial molecules that behave like real ones

Scientists from Radboud University in Nijmegen, the Netherlands, have developed synthetic molecules that resemble real organic molecules. A collaboration of researchers, led by Alex Khajetoorians and Daniel Wegner, can now simulate the behavior of real molecules using artificial ones. In this way, they can tweak the properties of molecules in ways that are normally difficult or unrealistic, and they can gain a much better understanding of how molecules change.

Emil Sierda, who was in charge of conducting the experiments at Radboud University, says: "A few years ago we had this crazy idea to build a quantum simulator. We wanted to create artificial molecules that resembled real molecules. So we developed a system in which we trapped electrons. Electrons surround a molecule like a cloud, and we used those trapped electrons to build an artificial molecule." The results the team found were astonishing. Sierda: "The resemblance between what we built and real molecules was uncanny."

Changing molecules

Alex Khajetoorians, head of the Scanning Probe Microscopy (SPM) department at Radboud University: "Making molecules is difficult enough. What is often harder is understanding how certain molecules react, for example how they change when they are twisted or altered." How molecules change and react is the basis of chemistry and leads to chemical reactions, like the formation of water from hydrogen and oxygen.

"We wanted to simulate molecules, so we could have the ultimate toolkit to bend them and tune them in ways that are nearly impossible with real molecules. In that way, we can say something about real molecules, without making them, or without having to deal with the challenges they present, like their constantly changing shape."

Benzene

Using this simulator, the researchers created an artificial version of one of the basic organic molecules in chemistry: benzene. Benzene is the starting component for a vast number of chemicals, like styrene, which is used to make polystyrene. Khajetoorians: "By making benzene, we simulated a textbook organic molecule, and built a molecule that is made up of elements that are not organic." On top of that, the artificial molecules are ten times bigger than their real counterparts, which makes them easier to work with.

Practical uses

The uses of this new technique are endless. Daniel Wegner, assistant professor within the SPM department: "We have only begun to imagine what we can use this for. We have so many ideas that it is hard to decide where to start."

By using the simulator, scientists can understand molecules and their reactions much better, which will help in every scientific field imaginable. Wegner: "New materials for future computer hardware are really hard to make, for instance. By making a simulated version, we can look for the novel properties and functionalities of certain molecules and evaluate whether it will be worth making the real material."

In the far future, all kinds of things may be possible: understanding chemical reactions step by step as in a slow-motion video, or making artificial single-molecule electronic devices, like shrinking the size of a transistor on a computer chip. Quantum simulators have even been suggested as a way to perform quantum computations. Sierda: "But that's a long way off. For now, we can begin to understand molecules in a way we never understood them before."

The research was conducted by a Radboud University collaboration between the groups of Malte Rösner (Theory of Condensed Matter), Mikhail Katsnelson (Theory of Condensed Matter), Gerrit Groenenboom (Theoretical Chemistry), Daniel Wegner (SPM), and Alex Khajetoorians (SPM).

Caption: Researchers can screen more than 100 million compounds in a single day — much more than any existing model. Image: iStock

MIT CSAIL modeling offers a way to speed up drug discovery

By applying a language model to protein-drug interactions, researchers can quickly screen large libraries of potential drug compounds.

Huge libraries of drug compounds may hold potential treatments for a variety of diseases, such as cancer or heart disease. Ideally, scientists would like to experimentally test each of these compounds against all possible targets, but doing that kind of screen is prohibitively time-consuming.

In recent years, researchers have begun using computational methods to screen those libraries in hopes of speeding up drug discovery. However, many of those methods also take a long time, as most of them calculate each target protein’s three-dimensional structure from its amino-acid sequence, then use those structures to predict which drug molecules it will interact with.

Researchers at MIT and Tufts University have now devised an alternative computational approach based on a type of artificial intelligence algorithm known as a large language model. These models — one well-known example is ChatGPT — can analyze huge amounts of text and figure out which words (or, in this case, amino acids) are most likely to appear together. The new model, known as ConPLex, can match target proteins with potential drug molecules without performing the computationally intensive step of calculating the molecules’ structures.

Using this method, the researchers can screen more than 100 million compounds in a single day — much more than any existing model.

“This work addresses the need for efficient and accurate in silico screening of potential drug candidates, and the scalability of the model enables large-scale screens for assessing off-target effects, drug repurposing, and determining the impact of mutations on drug binding,” says Bonnie Berger, the Simons Professor of Mathematics, head of the Computation and Biology group in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), and one of the senior authors of the new study.

Lenore Cowen, a professor of computer science at Tufts University, is also a senior author of the paper. Rohit Singh, a CSAIL research scientist, and Samuel Sledzieski, an MIT graduate student, are the lead authors of the paper, and Bryan Bryson, an associate professor of biological engineering at MIT and a member of the Ragon Institute of MGH, MIT, and Harvard, is also an author. In addition to the paper, the researchers have made their model available online for other scientists to use.

Making predictions

In recent years, computational scientists have made great advances in developing models that can predict the structures of proteins based on their amino-acid sequences. However, using these models to predict how a large library of potential drugs might interact with a cancerous protein, for example, has proven challenging, mainly because calculating the three-dimensional structures of the proteins requires a great deal of time and computing power.

An additional obstacle is that these kinds of models don’t have a good track record for eliminating compounds known as decoys, which are very similar to a successful drug but don’t interact well with the target.

“One of the longstanding challenges in the field has been that these methods are fragile, in the sense that if I gave the model a drug or a small molecule that looked almost like the true thing, but it was slightly different in some subtle way, the model might still predict that they will interact, even though it should not,” Singh says.

Researchers have designed models that can overcome this kind of fragility, but they are usually tailored to just one class of drug molecules, and they aren’t well-suited to large-scale screens because the computations take too long. 

The MIT team decided to take an alternative approach, based on a protein model they first developed in 2019. Working with a database of more than 20,000 proteins, the language model encodes this information into meaningful numerical representations of each amino-acid sequence that capture associations between sequence and structure.

“With these language models, even proteins that have very different sequences but potentially have similar structures or similar functions can be represented similarly in this language space, and we're able to take advantage of that to make our predictions,” Sledzieski says.

In their new study, the researchers applied the protein model to the task of figuring out which protein sequences will interact with specific drug molecules, both of which have numerical representations that are transformed into a common, shared space by a neural network. They trained the network on known protein-drug interactions, which allowed it to learn to associate specific features of the proteins with drug-binding ability, without having to calculate the 3D structure of any of the molecules.
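As a conceptual sketch of such a co-embedding (not the published ConPLex code; the dimensions and inputs below are placeholders), each side is projected by a small network into a shared space and the interaction is scored from their similarity:

```python
import torch
import torch.nn as nn

# Conceptual sketch only (not the published ConPLex code): protein and drug
# features are each projected into a shared embedding space, and interaction
# is scored from their similarity -- no 3D structure calculation involved.
PROT_DIM, DRUG_DIM, SHARED_DIM = 1024, 2048, 256   # illustrative sizes

protein_proj = nn.Linear(PROT_DIM, SHARED_DIM)     # maps language-model embedding
drug_proj = nn.Linear(DRUG_DIM, SHARED_DIM)        # maps molecular fingerprint

def interaction_score(protein_emb: torch.Tensor, drug_emb: torch.Tensor) -> torch.Tensor:
    """Cosine similarity in the shared space, squashed to a 0-1 'binds?' score."""
    p = protein_proj(protein_emb)
    d = drug_proj(drug_emb)
    return torch.sigmoid(nn.functional.cosine_similarity(p, d, dim=-1))

# Usage with random stand-in features; real inputs would be a protein
# language-model embedding and a drug representation such as a fingerprint.
protein = torch.randn(1, PROT_DIM)
drug = torch.randn(1, DRUG_DIM)
print(interaction_score(protein, drug))   # trained weights would make this meaningful
```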

“With this high-quality numerical representation, the model can short-circuit the atomic representation entirely, and from these numbers predict whether or not this drug will bind,” Singh says. “The advantage of this is that you avoid the need to go through an atomic representation, but the numbers still have all of the information that you need.”

Another advantage of this approach is that it takes into account the flexibility of protein structures, which can be “wiggly” and take on slightly different shapes when interacting with a drug molecule.

High affinity

To make their model less likely to be fooled by decoy drug molecules, the researchers also incorporated a training stage based on the concept of contrastive learning. Under this approach, the researchers give the model examples of “real” drugs and imposters and teach it to distinguish between them.
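An illustrative version of such a contrastive objective, using a generic triplet-margin loss rather than necessarily the exact formulation in the paper:

```python
import torch
import torch.nn as nn

# Illustrative contrastive (triplet-margin) objective, not necessarily the exact
# loss in the paper: the embedding of a true drug is pulled toward the target
# protein, while a look-alike decoy is pushed at least 'margin' further away.
triplet_loss = nn.TripletMarginLoss(margin=1.0)

protein_emb = torch.randn(8, 256)   # anchor: co-embedded target proteins
true_drug   = torch.randn(8, 256)   # positive: known binders
decoy_drug  = torch.randn(8, 256)   # negative: structurally similar non-binders

loss = triplet_loss(protein_emb, true_drug, decoy_drug)
print(float(loss))   # during training, this loss would update the projection networks
```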

The researchers then tested their model by screening a library of about 4,700 candidate drug molecules for their ability to bind to a set of 51 enzymes known as protein kinases.

From the top hits, the researchers chose 19 drug-protein pairs to test experimentally. The experiments revealed that of the 19 hits, 12 had a strong binding affinity (in the nanomolar range), whereas nearly all of the many other possible drug-protein pairs would have no affinity. Four of these pairs bound with extremely high, sub-nanomolar affinity (so strong that a tiny drug concentration, on the order of parts per billion, will inhibit the protein).
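As a rough sanity check of the "parts per billion" remark, assuming a typical small-molecule mass of about 500 g/mol (an assumption for illustration, not a figure from the paper):

```python
# Back-of-the-envelope check of the "parts per billion" remark, assuming a
# typical small-molecule mass of ~500 g/mol (an assumption, not from the paper).
molar_mass_g_per_mol = 500.0
concentration_nM = 1.0                        # 1 nanomolar = 1e-9 mol/L

grams_per_litre = concentration_nM * 1e-9 * molar_mass_g_per_mol   # 5e-7 g/L
micrograms_per_litre = grams_per_litre * 1e6                        # 0.5 ug/L
# In dilute aqueous solution, 1 ug/L is about 1 part per billion by mass,
# so 1 nM of such a drug is roughly 0.5 ppb; sub-nanomolar binders act below that.
print(f"{micrograms_per_litre:.2f} ug/L  (~{micrograms_per_litre:.2f} ppb)")
```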

While the researchers focused mainly on screening small-molecule drugs in this study, they are now working on applying this approach to other types of drugs, such as therapeutic antibodies. This kind of modeling could also prove useful for running toxicity screens of potential drug compounds, to make sure they don’t have any unwanted side effects before testing them in animal models.

“Part of the reason why drug discovery is so expensive is because it has high failure rates. If we can reduce those failure rates by saying upfront that this drug is not likely to work out, that could go a long way in lowering the cost of drug discovery,” Singh says.

This new approach “represents a significant breakthrough in drug-target interaction prediction and opens up additional opportunities for future research to further enhance its capabilities,” says Eytan Ruppin, chief of the Cancer Data Science Laboratory at the National Cancer Institute, who was not involved in the study. “For example, incorporating structural information into the latent space or exploring molecular generation methods for generating decoys could further improve predictions.”

The research was funded by the National Institutes of Health, the National Science Foundation, and the Phillip and Susan Ragon Foundation.