Cambridge researchers suggest community of ethical hackers needed to prevent AI’s looming crisis of trust

The Artificial Intelligence industry should create a global community of hackers and “threat modelers” dedicated to stress-testing the harmful potential of new AI products to earn the trust of governments and the public before it’s too late.

This is one of the recommendations made by an international team of risk and machine-learning experts, led by researchers at the University of Cambridge’s Centre for the Study of Existential Risk (CSER) in the UK, which has authored a new “call to action” published today in the journal Science.

They say that companies building intelligent technologies should harness techniques such as “red team” hacking, audit trails, and “bias bounties” – paying out rewards for revealing ethical flaws – to prove their integrity before releasing AI for use by the wider public.

Otherwise, the industry faces a “crisis of trust” in the systems that increasingly underpin our society, as public concern continues to mount over everything from driverless cars and autonomous drones to secret social media algorithms that spread misinformation and provoke political turmoil.

The novelty and “black box” nature of AI systems, and ferocious competition in the race to the marketplace, have hindered the development and adoption of auditing or third-party analysis, according to lead author Dr. Shahar Avin of CSER. 

The experts argue that incentives to increase trustworthiness should not be limited to regulation, but must also come from within an industry yet to fully comprehend that public trust is vital for its future – and trust is fraying.     

The new publication puts forward a series of “concrete” measures that they say should be adopted by AI developers.

“There are critical gaps in the processes required to create AI that has earned public trust. Some of these gaps have enabled questionable behavior that is now tarnishing the entire field,” said Avin.

“We are starting to see a public backlash against technology. This ‘tech-lash’ can be all-encompassing: either all AI is good or all AI is bad.

“Governments and the public need to be able to easily tell apart the trustworthy, the snake-oil salesmen, and the clueless,” Avin said. “Once you can do that, there is a real incentive to be trustworthy. But while you can’t tell them apart, there is a lot of pressure to cut corners.”

Co-author and CSER researcher Haydn Belfield said: “Most AI developers want to act responsibly and safely, but it’s been unclear what concrete steps they can take until now. Our report fills in some of these gaps.”

The idea of AI “red teaming” – sometimes known as white-hat hacking – takes its cue from cyber-security.

“Red teams are ethical hackers playing the role of malign external agents,” said Avin. “They would be called in to attack any new AI, or strategize on how to use it for malicious purposes, in order to reveal any weaknesses or potential for harm.”

While a few big companies have the internal capacity to “red team” – which comes with its own ethical conflicts – the report calls for a third-party community, one that can independently interrogate new AI and share any findings for the benefit of all developers.

A global resource could also offer high-quality red teaming to the small start-up companies and research labs developing AI that could become ubiquitous.  

The new report, a concise update of more detailed recommendations published by a group of 59 experts last year, also highlights the potential for bias and safety “bounties” to increase openness and public trust in AI.

This means financially rewarding any researcher who uncovers flaws in AI that have the potential to compromise public trust or safety – such as racial or socioeconomic biases in algorithms used for medical or recruitment purposes.

Earlier this year, Twitter began offering bounties to those who could identify biases in their image-cropping algorithm.
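
To make the idea concrete, here is a minimal sketch of the kind of check a bias-bounty hunter might run against a deployed model: compare a classifier’s positive-prediction rates across demographic groups and flag large gaps. The predictions, group labels, and the “four-fifths” threshold are illustrative assumptions, not part of the report’s recommendations.

```python
# Minimal sketch of a bias-bounty-style check: compare a model's
# positive-prediction rates across demographic groups. The predictions,
# group labels, and threshold are hypothetical placeholders.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive (1) predictions per demographic group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += pred
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def disparate_impact(rates):
    """Ratio of lowest to highest selection rate; below 0.8 is a common red flag."""
    return min(rates.values()) / max(rates.values())

# Toy example: a screening model's outputs for two groups.
preds  = [1, 0, 1, 1, 0, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
rates = selection_rates(preds, groups)
print(rates, disparate_impact(rates))  # {'A': 0.6, 'B': 0.4} 0.666...
```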

Companies would benefit from these discoveries, say researchers, and be given time to address them before they are publicly revealed. Avin points out that, currently, much of this “pushing and prodding” is done on a limited, ad-hoc basis by academics and investigative journalists.

The report also calls for auditing by trusted external agencies – and for open standards on how to document AI to make such auditing possible – along with platforms dedicated to sharing “incidents”: cases of undesired AI behavior that could cause harm to humans.

These, along with meaningful consequences for failing an external audit, would significantly contribute to an “ecosystem of trust,” say the researchers.

“Some may question whether our recommendations conflict with commercial interests, but other safety-critical industries, such as the automotive and pharmaceutical industries, manage it perfectly well,” said Belfield.

“Lives and livelihoods are ever more reliant on AI that is closed to scrutiny, and that is a recipe for a crisis of trust. It’s time for the industry to move beyond well-meaning ethical principles and implement real-world mechanisms to address this,” he said.

Added Avin: "We are grateful to our collaborators who have highlighted a range of initiatives aimed at tackling these challenges, but we need policy and public support to create an ecosystem of trust for AI.”

WVU engineers create software for aerobots to explore Venus

Engineers at West Virginia University are propelling exploration forward by creating control software for a group of aerial robots (aerobots) that will survey the atmosphere of Venus, the second planet from the sun.

A photo of Venus taken from a telescope. WVU engineers are developing software for aerobots that will explore Venus’ environment. Credit: WVU Photo/Yu Gu

According to researchers, Venus went through a climate change process that transformed it from an Earth-like environment to an inhospitable world. Studying Venus can help model the evolution of climate on Earth and serve as a reference for what can happen in the future.

Guilherme Pereira and Yu Gu, associate professors in the Department of Mechanical and Aerospace Engineering who are tasked with developing the software for the aerobots – balloon-based robotic vehicles – hope to play a pivotal role in these discoveries. Their study is supported by a $100,000 grant from NASA’s Established Program to Stimulate Competitive Research.

“The main goal of the project is to propose a software solution that will allow hybrid aerobots to explore the atmosphere of Venus,” Pereira said. “Although hybrid vehicles were proposed before this project, we are not aware of any software that has been created.”

One aerobot concept is the Venus Atmosphere Maneuverable Platform, a hybrid airship that uses both buoyancy and aerodynamic lift to control its altitude. The benefit of a hybrid aerobot is that it can behave like a plane during the day, collecting and using energy from the sun to drive its motors, and float like a balloon at night to save energy.

The buoyancy of the vehicle would prevent it from descending below 50 kilometers – about 31 miles – above the surface of Venus, where the temperature is high enough to damage the vehicle, according to Pereira.

“One of the ideas of our project is to extend the battery life of the vehicle by planning energy-efficient paths, thus allowing it to fly during the night as well,” Pereira said.

The aerobot lifespan at cruise altitude is several months to a year.

Pereira and Gu said their software will have three main goals. The first is to create a motion planner for the vehicles, so they can be commanded to go from their current position to a goal position specified by NASA’s science team, using minimum energy and leveraging the planet’s winds. The motion planner is software that will run on the aerobot’s onboard computer.

“The motion planner will be created by understanding the dynamics of the aerobot, the properties of its solar panels and batteries, and the properties of Venus's atmosphere,” Pereira said. “With the dynamics of the vehicle, the planner will only consider movements that are feasible given certain inputs to the aircraft, such as thrust coming from the propellers or deflections of the control surfaces.”

Pereira said that understanding the solar panels and batteries is important to account for how much charge the vehicle has to power its systems and what its recharging rate is according to the solar intensity.

“The understanding of the atmosphere provides the robots quantities like wind direction and magnitude, pressure, temperature, and solar intensity,” Pereira said.

With these models, the motion planner will calculate the best route for the aerobot.

“We are trying to come up with an optimal energy strategy,” Pereira said. “This is important since the vehicle will circle the planet in Venus’s atmosphere in around four days. It will be exposed to long periods without light on the dark side of the planet, and it needs to have enough energy to survive these periods.”

According to Pereira, the motion planner will have access to the position of the aerobot in Venus's atmosphere and the desired goal location. It will also have access to information about the atmosphere in between these two positions.

“Starting from the initial position, the planner will simulate different movements the aerobot could make and associate costs for each of them depending on the quantities mentioned before,” Pereira said.

For example, if the wind is blowing in the same direction as the movement of the aerobot, that would be less costly than moving against the wind direction.

“After that, the motion planner will keep propagating the movements of the aerobot with a smaller cost, creating a tree of possibilities until we reach our destination,” Pereira said.
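
The process Pereira describes resembles a best-first search over motion primitives. The sketch below illustrates the idea under heavy simplifications: a 2-D grid, a uniform placeholder wind field, and a made-up energy cost that discounts moves aligned with the wind. None of these stand in for the team’s actual dynamics or cost models.

```python
# Hedged sketch of an energy-aware planner: best-first search that
# expands low-cost motions first, building a tree of states until the
# goal is reached. Motions, wind, and costs are simplified placeholders.
import heapq

MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # unit motion primitives

def wind_at(x, y):
    """Placeholder wind field: steady wind blowing in +x."""
    return (1.0, 0.0)

def energy_cost(move, wind):
    """Moving with the wind is cheaper; against it, more expensive."""
    dot = move[0] * wind[0] + move[1] * wind[1]
    return max(0.1, 1.0 - 0.5 * dot)  # cost never reaches zero

def plan(start, goal):
    frontier = [(0.0, start, [start])]  # (accumulated energy, state, path)
    visited = set()
    while frontier:
        cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return cost, path
        if state in visited:
            continue
        visited.add(state)
        for move in MOVES:
            nxt = (state[0] + move[0], state[1] + move[1])
            step = energy_cost(move, wind_at(*state))
            heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None

print(plan((0, 0), (3, 1)))  # the cheapest path rides the +x wind
```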

The second goal of this project is to localize the aerobot vehicles in the atmosphere using information from other vehicles and maps of the planet. There is currently no GPS at Venus, so localization is difficult.

This cooperative localization approach will allow several robots, working as a group, to reduce their position uncertainty while they are exploring Venus.

Gu and Pereira plan on using different types of maps for localization.

“We are evaluating the possibility of using maps created before the mission, most likely a Venus topographic map to help the robots to localize themselves,” Gu said.
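
One common way to localize against a prior map without GPS – and a plausible reading of the approach Gu describes – is to weight candidate positions by how well a sensed quantity matches the map. The sketch below does this for a toy one-dimensional elevation map and a hypothetical altimetry measurement; the real system’s maps, sensors, and filter are not specified here.

```python
# Hedged sketch of map-based, GPS-denied localization: weight candidate
# positions by how well the terrain elevation measured below the vehicle
# matches a prior topographic map (one update of a histogram/particle-
# style filter). The map and sensor model are toy placeholders.
import math
import random

random.seed(0)
TERRAIN = [random.uniform(0.0, 100.0) for _ in range(200)]  # toy elevation map

def likelihood(map_height, measured, sigma=2.0):
    """Plausibility of a candidate position given the measurement."""
    return math.exp(-((map_height - measured) ** 2) / (2 * sigma ** 2))

true_x = 50
measured = TERRAIN[true_x] + random.gauss(0.0, 2.0)  # noisy altimetry reading
weights = [likelihood(h, measured) for h in TERRAIN]
estimate = max(range(len(TERRAIN)), key=lambda x: weights[x])
print(estimate)  # recovers a position at or near x = 50
```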

The third goal for this project is to coordinate the vehicles so that they have improved localization and a better estimation of atmospheric conditions.

The spatial distribution of the aerobots in the atmosphere may allow each aerobot to have a better knowledge of the 3D wind field if each vehicle shares the wind flow in its neighborhood, according to Pereira.

Pereira and Gu’s research will be based on wind models of Venus created by NASA. The researchers also propose that the aerobots carry wind sensors that can be used to estimate the local wind.
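
As a rough illustration of the wind-sharing idea, each aerobot could report the wind it senses locally, and any vehicle could then interpolate an estimate elsewhere, for example by inverse-distance weighting. The positions, wind vectors, and weighting scheme below are placeholders, not the project’s method.

```python
# Hedged sketch of cooperative wind estimation: blend the fleet's local
# wind reports into an estimate at a query point, weighting nearer
# reports more heavily. All values are illustrative placeholders.
def estimate_wind(query, reports):
    """reports: list of ((x, y), (wind_x, wind_y)) from the fleet."""
    num_x = num_y = denom = 0.0
    for (x, y), (wx, wy) in reports:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        w = 1.0 / (d2 + 1e-6)  # nearer reports count more
        num_x += w * wx
        num_y += w * wy
        denom += w
    return (num_x / denom, num_y / denom)

fleet = [((0, 0), (3.0, 0.0)), ((10, 0), (1.0, 2.0)), ((0, 10), (0.0, 4.0))]
print(estimate_wind((5, 5), fleet))  # blended estimate between the reports
```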

“The importance of the wind flow is related to the fact that it can be exploited to take the aerobot to desired locations,” Pereira said. “Just as sprinters in the Olympics get better marks when they have a tailwind, if the wind is directed towards the goal of the aircraft, the aerobot’s movement will be aided by the wind and, as a consequence, the path will be more energy-efficient.”

To test this, Pereira and Gu plan to develop a Venus atmosphere simulator, where they will evaluate the aerobots’ functionality.

“Several exploratory missions to Venus collected data of wind, temperature, pressure, and air density,” Pereira said. “This information was then used to create a simulator where, given the latitude, longitude, and altitude of the vehicle, we compute all the forces acting on the vehicle.”
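
A force computation of the kind Pereira describes might look like the sketch below: look up air density at the vehicle’s altitude, then evaluate gravity, buoyancy, and drag. The exponential density profile and all constants are rough public figures or placeholders, not mission values.

```python
# Hedged sketch of the force computation inside an atmosphere simulator:
# density from a toy altitude profile, then gravity, buoyancy, and
# wind-relative drag. Constants are approximate, not mission values.
import math

G_VENUS = 8.87           # surface gravity, m/s^2
RHO_0 = 65.0             # approximate surface air density, kg/m^3
SCALE_HEIGHT = 15_900.0  # approximate density scale height, m

def air_density(altitude_m):
    """Toy exponential density profile for Venus's atmosphere."""
    return RHO_0 * math.exp(-altitude_m / SCALE_HEIGHT)

def net_vertical_force(altitude_m, vehicle_mass, displaced_volume):
    """Gravity down, buoyancy up (Archimedes' principle)."""
    weight = vehicle_mass * G_VENUS
    buoyancy = air_density(altitude_m) * displaced_volume * G_VENUS
    return buoyancy - weight  # positive means the aerobot rises

def drag_force(altitude_m, airspeed, drag_coeff=0.5, ref_area=10.0):
    """Quadratic drag relative to the local wind."""
    return 0.5 * air_density(altitude_m) * drag_coeff * ref_area * airspeed ** 2

# Around 50-60 km the density is near Earth-like, so modest volumes float.
print(net_vertical_force(55_000, vehicle_mass=100.0, displaced_volume=120.0))
```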

Joining Pereira and Gu on the project are Bernardo Martinez Rocamora Jr. and Chizhao Yang, doctoral students in aerospace and mechanical engineering, and Anna Puigvert I Juan, a master’s student in mechanical engineering.

Texas A&M researchers develop an algorithm that shows mosquitoes can even flourish in winter

A mathematical model developed by Texas A&M researchers can predict temperatures within mosquito breeding grounds, which can be used to estimate populations and track vector-borne diseases.

Temperature is critical in mosquitoes’ life cycle and can be used to mathematically model their development, reproduction and survival. Credit: Getty Images

With an impressive capability of drinking up to three times their body weight in a single blood meal, mosquitoes are formidable parasites. But to reach adulthood, mosquitoes need to be raised in environments where the temperatures are conducive to their breeding, growth, and development.

In a new study in the journal Scientific Reports, Texas A&M University researchers have developed a mathematical model based on machine learning to precisely predict the local or microclimatic temperature within the breeding grounds of the Aedes albopictus mosquitoes, carriers of the chikungunya and dengue viruses. Their algorithm also reveals that even in winter, the temperature may be warm enough in certain breeding grounds to allow mosquitoes to grow and thrive.

“Our goal is to develop accurate and automated mathematical models for estimating microclimatic temperature, which can greatly facilitate a quick assessment of mosquito populations and consequently, vector-borne disease transmission,” said Madhav Erraguntla, associate professor of practice in the Wm Michael Barnes ’64 Department of Industrial and Systems Engineering.

Responsible for around a million deaths globally, mosquitoes continue to wreak havoc on public health in many parts of the world. In addition to water, temperature plays a critical role at different stages in mosquitoes’ life cycle. Furthermore, the mosquitoes’ development, reproduction and survival can be mathematically modeled on the basis of temperature.

Past studies have largely relied on ambient temperature, or general air temperature, to make predictions about mosquito populations. However, these calculations have not been precise, since ambient temperatures can deviate from those within mosquito breeding grounds. Recognizing this shortcoming, scientists rely on sensors, called data loggers, to continually track the temperature, light intensity and humidity within breeding grounds. Despite their advantages, these sensors are inconvenient because of their cost and the need to deploy and maintain them over long periods.

“People have realized that the microclimatic conditions are important, but right now data loggers are the only way to keep track of temperature,” Erraguntla said. “We wanted to address this gap by automating the process of estimating microclimatic temperatures so that we can model the life cycle of mosquitoes accurately.”

For their experiments, the researchers placed sensors in common mosquito breeding grounds around Houston, including storm drains, shaded areas and inside water meters. In addition, they obtained information on ambient temperatures from the National Oceanic and Atmospheric Administration repository. With this data as training input to a machine learning algorithm, the computer model could predict the microclimatic temperatures for a variety of ambient temperatures and breeding grounds to within 1.5 degrees Celsius. Further, the trained model could forecast microclimatic temperatures for any ambient temperature, precluding the need for sensors.
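
The setup described is a standard supervised-regression problem. As a hedged sketch, the snippet below trains a generic model on synthetic stand-in data – ambient temperature, hour of day, and a site-type code predicting a logger reading; the paper’s actual features and model choice are not specified here.

```python
# Hedged sketch of the supervised setup: ambient weather features ->
# microclimatic temperature at a breeding site. Features, model, and
# synthetic data are stand-ins, not the paper's.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
ambient = rng.uniform(0, 40, n)  # ambient temperature from NOAA-style data, C
hour = rng.integers(0, 24, n)    # hour of day
site = rng.integers(0, 3, n)     # 0 = storm drain, 1 = shade, 2 = water meter
# Synthetic stand-in for data-logger readings: insulated sites damp the swings.
micro = (ambient * (0.6 + 0.1 * site)
         + 5 * np.sin(2 * np.pi * hour / 24)
         + rng.normal(0, 1.0, n))

X = np.column_stack([ambient, hour, site])
X_tr, X_te, y_tr, y_te = train_test_split(X, micro, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(mean_absolute_error(y_te, model.predict(X_te)))  # target: ~1.5 C or less
```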

Next, they fed the values of the microclimatic temperatures to another mathematical model, called the population dynamic model, that tracks the life cycle of the mosquitoes. Based on the microclimatic temperature and other parameters, the population dynamic model could estimate the populations at different stages in the life cycle, including eggs, larvae, pupae and adult Aedes albopictus mosquitoes.
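
A population dynamic model of this kind can be sketched as daily transitions between life stages, with temperature-dependent development and survival rates. All rates below are illustrative placeholders, not the study’s fitted parameters.

```python
# Hedged sketch of a temperature-driven population dynamic model: daily
# transitions between egg, larva, pupa, and adult stages, with rates
# that speed up in warmer microclimates. Rates are placeholders.
def daily_step(stages, temp_c):
    eggs, larvae, pupae, adults = stages
    dev = max(0.0, min(0.3, 0.02 * (temp_c - 10)))  # development rate per day
    surv = 0.95 if temp_c > 15 else 0.85            # daily survival fraction
    new_eggs = adults * 5.0 * (temp_c > 15)         # oviposition when warm
    return (surv * (eggs * (1 - dev)) + new_eggs,
            surv * (larvae * (1 - dev) + eggs * dev),
            surv * (pupae * (1 - dev) + larvae * dev),
            surv * (adults + pupae * dev))

stages = (100.0, 0.0, 0.0, 0.0)  # start with 100 eggs
for day in range(30):
    stages = daily_step(stages, temp_c=22.0)  # warm storm-drain microclimate
print([round(s, 1) for s in stages])
```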

The model also revealed that the insulated conditions of the storm drains could result in the survival of 84% of juveniles and eggs and 96% of adults during the winter months, a time of the year when mosquitoes are assumed to be dormant.

Although their microclimatic temperature prediction model has a high degree of accuracy, the researchers noted that additional research is needed to confirm whether the model is applicable to places outside of Texas.

“Our work automates the prediction of microclimatic conditions, bypassing an otherwise expensive and time-consuming process of placing the sensors in different breeding spots, collecting the sensor data and analyzing it,” Erraguntla said. “From a public health context, this work will help epidemiologists better track mosquito-borne disease transmission and surges in mosquito abundances.”

UC Riverside prof develops wildfire dataset to help firefighters save lives, property

WildfireDB contains over 17 million data points that capture how fires have spread in the contiguous United States over the last decade. Credit: Ahmed Eldawy

A team at UC Riverside led by computer science assistant professor Ahmed Eldawy is collaborating with researchers at Stanford University and Vanderbilt University to develop a dataset that uses data science to study the spread of wildfires. The dataset can be used to simulate the spread of wildfires to help firefighters plan emergency responses and conduct evacuations. It can also help simulate how fires might spread in the near future under the effects of deforestation and climate change, and aid risk assessment and the planning of new infrastructure development.

The open-source dataset, named WildfireDB, contains over 17 million data points that capture how fires have spread in the contiguous United States over the last decade. The dataset can be used to train machine learning models to predict the spread of wildfires.

“One of the biggest challenges is to have a detailed and curated dataset that can be used by machine learning algorithms,” said Eldawy. “WildfireDB is the first comprehensive and open-source dataset that relates historical fire data with relevant covariates such as weather, vegetation, and topography.”

First responders depend on understanding and predicting how a wildfire spreads to save lives and property and to stop the fire from spreading. They need to figure out the best way to allocate limited resources across large areas. Traditionally, fire spread is modeled by tools that use physics-based modeling. This method could be improved with the addition of more variables, but until now, there was no comprehensive, open-source data source that combined fire occurrences with geospatial features such as mountains, rivers, towns, fuel levels, vegetation, and weather.

Eldawy, along with UCR doctoral student Samriddhi Singla and undergraduate researcher Vinayak Gajjewar, utilized a novel system called Raptor, which was developed at UCR to process high-resolution satellite data such as vegetation and weather. Using Raptor, they combined historical wildfires with other geospatial features, such as weather, topography, and vegetation, to build a dataset at a scale that included most of the United States.

WildfireDB maps historical fire data in the contiguous United States from 2012 to 2017 with spatial and temporal resolutions that allow researchers to home in on the daily behavior of fire in regions as small as 375-meter-square polygons. Each fire occurrence includes the type of vegetation, fuel type, and topography. The dataset does not include Alaska or Hawaii.

To use the dataset, researchers or firefighters can select information relevant to their situation from WildfireDB and train machine learning models that can model the spread of wildfires. These trained models can then be used by firefighters or researchers to predict the spread of wildfires in real-time. 
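
That workflow might look like the following sketch: assemble WildfireDB-style rows of per-cell covariates, then train a classifier to predict whether fire spreads into a cell. The column names and synthetic rows are hypothetical; the real dataset’s schema should be consulted.

```python
# Hedged sketch of training a spread model on WildfireDB-style rows.
# With the real data, the DataFrame would come from the published
# dataset; here synthetic rows with assumed column names stand in.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "wind_speed": rng.uniform(0, 15, n),        # assumed covariates
    "temperature": rng.uniform(10, 45, n),
    "humidity": rng.uniform(5, 90, n),
    "vegetation_index": rng.uniform(0, 1, n),
})
# Toy label: hot, dry, windy, well-fueled cells burn more often.
risk = (df.wind_speed / 15 + df.temperature / 45
        - df.humidity / 90 + df.vegetation_index)
df["spread_next_day"] = (risk + rng.normal(0, 0.3, n) > 1.2).astype(int)

X = df.drop(columns="spread_next_day")
y = df["spread_next_day"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```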

“Predicting the spread of wildfire in real-time will allow firefighters to allocate resources accordingly and minimize loss of life and property,” said Singla, the paper’s first author. 

The paper, “WildfireDB: an open-source dataset connecting wildfire spread with relevant determinants,” will be presented at the 35th Conference on Neural Information Processing Systems (NeurIPS 2021) Track on Datasets and Benchmarks; the paper and a visualization of the dataset are available online. Eldawy, Singla, and Gajjewar were joined in the research by Ayan Mukhopadhyay, Michael Wilbur, and Abhishek Dubey at Vanderbilt University; and Tina Diao, Mykel Kochenderfer, and Ross Shachter at Stanford University.

Washington scientists show what types of environments astronomers can expect to find on exoplanets

Exoplanets are experiencing a stratospheric rise. In the three decades since the first confirmed planet orbiting another star, scientists have cataloged more than 4,000 of them. As the list grows, so too does the desire to find Earth-like exoplanets — and to determine whether they could be life-sustaining oases like our own globe.

An artist’s depiction of Kepler-186f, an Earth-sized exoplanet, showing a hypothetical surface that includes partial ice coverage at the poles. Credit: NASA Ames/SETI Institute/JPL-Caltech

The coming decades should see the launch of new missions that can gather ever-larger amounts of data about exoplanets. Anticipating these future endeavors, a team at the University of Washington and the University of Bern has computationally simulated more than 200,000 hypothetical Earth-like worlds — planets that have the same size, mass, atmospheric composition, and geography as modern Earth — all in orbit of stars like our sun. Their goal was to model what types of environments astronomers can expect to find on real Earth-like exoplanets.

As they report in a paper accepted to the Planetary Science Journal and submitted Dec. 6 to the preprint site arXiv, on these simulated exoplanets, one common feature of present-day Earth was often lacking: partial ice coverage.

“We essentially simulated Earth’s climate on worlds around different types of stars, and we find that in 90% of cases with liquid water on the surface, there are no ice sheets, like polar caps,” said co-author Rory Barnes, a UW professor of astronomy and scientist with the UW’s Virtual Planetary Laboratory. “When ice is present, we see that ice belts — permanent ice along the equator — are actually more likely than ice caps.”

The findings shed light on the complex interplay between liquid water and ice on Earth-like worlds, according to lead author Caitlyn Wilhelm, who led the study as an undergraduate student in the UW Department of Astronomy.

A composite image of the ice cap covering Earth’s Arctic region — including the North Pole — taken 512 miles above our planet on April 12, 2018, by the NOAA-20 polar-orbiting satellite. Credit: NOAA

“Looking at ice coverage on an Earth-like planet can tell you a lot about whether it’s habitable,” said Wilhelm, who is now a research scientist with the Virtual Planetary Laboratory. “We wanted to understand all the parameters — the shape of the orbit, the axial tilt, the type of star — that affect whether you have ice on the surface, and if so, where.”

The team used a 1-D energy balance model, which computationally imitates the energy flow between a planet’s equator and poles, to simulate the climates on thousands of hypothetical exoplanets in various orbital configurations around F-, G- or K-type stars. These classes of stars, which include our G-type sun, are promising candidates for hosting life-friendly worlds in their habitable zones, also known as the “Goldilocks” zone. F-type stars are a bit hotter and larger than our sun; K-type stars are slightly cooler and smaller.
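
For readers unfamiliar with the tool, a 1-D energy balance model can be sketched in a few lines: latitude bands absorb sunlight (less if frozen, since ice reflects more), radiate to space, and exchange heat diffusively. The version below uses textbook-style illustrative parameters, not the study’s configuration.

```python
# Hedged sketch of a 1-D energy balance model (EBM): latitude bands
# absorb insolation (reduced by ice albedo), radiate to space, and
# exchange heat with their neighbors. Parameters are illustrative.
import numpy as np

nlat = 18
lats = np.deg2rad(np.linspace(-85, 85, nlat))  # latitude band centers
T = np.full(nlat, 10.0)                        # initial temperature, C

S0 = 340.0           # annual-mean insolation, W/m^2
A, B = 203.0, 2.1    # outgoing radiation OLR = A + B*T, W/m^2
D = 0.55             # heat transport coefficient, W/m^2/C
C_HEAT = 40.0        # column heat capacity, W*yr/m^2/C

def albedo(temps):
    """Ice-albedo feedback: frozen bands reflect more sunlight."""
    return np.where(temps < -10.0, 0.6, 0.3)

# Sunlight falls off toward the poles (second Legendre polynomial).
insolation = S0 * (1 - 0.48 * (1.5 * np.sin(lats) ** 2 - 0.5))

dt = 0.05  # timestep, years
for _ in range(2000):
    absorbed = insolation * (1 - albedo(T))
    olr = A + B * T
    # Crude diffusion: relax each band toward its neighbors' mean.
    neighbors = (np.roll(T, 1) + np.roll(T, -1)) / 2
    neighbors[0], neighbors[-1] = T[1], T[-2]  # no wraparound at the poles
    transport = 20 * D * (neighbors - T)
    T = T + dt * (absorbed - olr + transport) / C_HEAT

print(np.round(T, 1))  # equator-to-pole profile; bands below -10 C are iced
```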

In their simulations, the orbits of the exoplanets ranged from circular to pronounced oval. The team also considered axial tilts ranging from 0 to 90 degrees. Earth’s axial tilt is a moderate 23.5 degrees. A planet with a 90-degree tilt would “sit on its side” and experience extreme seasonal variations in climate, much like the planet Uranus.

According to the simulations, which encompassed a 1-million-year timespan on each world, Earth-like worlds showed climates ranging from planet-wide “snowball” climates — with ice present at all latitudes — to a steaming “moist greenhouse,” which is probably similar to Venus’ climate before a runaway greenhouse effect made its surface hot enough to melt lead. But even though most environments in the simulations fell somewhere between those extremes, partial surface ice was present on only about 10% of hypothetical, habitable exoplanets.

The model included natural variations over time in each world’s axial tilt and orbit, which in part explains the general lack of ice on habitable exoplanets, according to co-author Russell Deitrick, a postdoctoral scientist at the University of Bern and researcher with the Virtual Planetary Laboratory.

An artist’s depiction of ancient Earth in a snowball state. Credit: NASA

“Orbits and axial tilts are always changing,” said Deitrick. “On Earth, these variations are called Milankovitch cycles and are very small in amplitude. But for exoplanets, these changes can be quite large, which can eliminate ice or trigger ‘snowball’ states.”

When partial ice was present, its distribution varied by star type. Around F-type stars, polar ice caps — like those Earth currently sports — were found about three times more often than ice belts, whereas ice belts occurred twice as often as caps for planets around G- and K-type stars. Ice belts were also more common on worlds with extreme axial tilts, likely because seasonal extremes keep the polar climates more volatile than equatorial regions, according to Wilhelm.

The team’s findings about ice on these simulated Earth-like worlds should help in the search for potentially habitable worlds by showing astronomers what they can expect to find, especially regarding ice distribution and the types of climates.

“Surface ice is very reflective, and can shape how an exoplanet ‘looks’ through our instruments,” said Wilhelm. “Whether or not ice is present can also shape how a climate will change over the long term, whether it goes to an extreme — like a ‘snowball Earth’ or a runaway greenhouse — or something more moderate.”

Ice alone, or its absence, does not determine habitability, though.

“Habitability encompasses a lot of moving parts, not just the presence or absence of ice,” said Wilhelm.

Life on Earth has survived snowball periods, as well as hundreds of millions of ice-free years, according to Barnes.

“Our own planet has seen some of these extremes in its own history,” said Barnes. “We hope this study lays the groundwork for upcoming missions to look for habitable signatures in exoplanet atmospheres — and to even image exoplanets directly — by showing what’s possible, what’s common, and what’s rare.”

Rachel Mellman, a recent UW graduate in astronomy, is a co-author of the paper. The research was funded by NASA through grants to the Virtual Planetary Laboratory.