In September, researchers at Google’s DeepMind AI unit in London were paying unusual attention to the weather across the pond. Hurricane Lee was at least 10 days out from landfall—eons in forecasting terms—and official forecasts were still waffling between the storm hitting major Northeast cities and missing them entirely. DeepMind’s own experimental software had made a very specific prediction of landfall much farther north. “We were riveted to our seats,” says research scientist Rémi Lam.
A week and a half later, on September 16, Lee struck land right where DeepMind’s software, called GraphCast, had predicted days earlier: Long Island, Nova Scotia—far from major population centers. It added to a breakthrough season for a new generation of AI-powered weather models, including others built by Nvidia and Huawei, whose strong performance has taken the field by surprise. Veteran forecasters told WIRED earlier this hurricane season that meteorologists’ serious doubts about AI have been replaced by an expectation of big changes ahead for the field.
Today, Google shared new, peer-reviewed evidence of that promise. In a paper published in Science, DeepMind researchers report that their model bested forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF), a global giant of weather prediction, on 90 percent of more than 1,300 atmospheric variables, such as humidity and temperature. Better yet, the DeepMind model could be run on a laptop and spit out a forecast in under a minute, while conventional models require a giant supercomputer.
Fresh Air
Standard weather simulations make their predictions by attempting to replicate the physics of the atmosphere. They’ve gotten better over the years, thanks to better math and to fine-grained weather observations from growing armadas of sensors and satellites. They’re also cumbersome: forecasts at major weather centers like the ECMWF or the US National Oceanic and Atmospheric Administration can take hours to compute on powerful servers.
When Peter Battaglia, a research director at DeepMind, first started looking at weather forecasting a few years ago, it seemed like the perfect problem for his particular flavor of machine learning. DeepMind had already taken on local precipitation forecasting with a nowcasting system trained on radar data. Now his team wanted to try predicting weather on a global scale.
Battaglia was already leading a team focused on applying AI systems called graph neural networks, or GNNs, to model the behavior of fluids, a classic physics challenge that can describe the movement of liquids and gases. Given that weather prediction is at its core about modeling the flow of molecules, tapping GNNs seemed intuitive. While training these systems is heavy-duty, requiring hundreds of specialized graphics processing units, or GPUs, to crunch tremendous amounts of data, the final system is ultimately lightweight, allowing forecasts to be generated quickly with minimal computer power.
GNNs represent data as mathematical “graphs”—networks of interconnected nodes that can influence one another. In the case of DeepMind’s weather forecasts, each node represents a set of atmospheric conditions at a particular location, such as temperature, humidity, and pressure. These points are distributed around the globe and at various altitudes—a literal cloud of data. The goal is to predict how all the data at all those points will interact with their neighbors, capturing how the conditions will shift over time.
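For readers curious what “message passing” on such a graph looks like in practice, here is a minimal, hypothetical sketch in JAX, the framework GraphCast is built on. The node counts, feature set, random neighbor graph, and tiny MLPs below are illustrative assumptions rather than DeepMind’s actual architecture; the point is only the pattern: each edge combines the states of the two points it connects into a message, and each point sums its incoming messages to update its own state.

```python
import jax
import jax.numpy as jnp

# Illustrative sizes -- GraphCast's real mesh has far more nodes and features.
NUM_NODES = 1_000     # grid points spread over the globe
NUM_FEATURES = 5      # e.g. temperature, humidity, pressure, wind components
NUM_EDGES = 6_000     # neighbor connections between nearby points
HIDDEN = 32           # width of the toy MLPs

key = jax.random.PRNGKey(0)
k_nodes, k_send, k_recv, k_edge, k_node = jax.random.split(key, 5)

# Per-node atmospheric state and a random neighbor graph (stand-ins for real data).
nodes = jax.random.normal(k_nodes, (NUM_NODES, NUM_FEATURES))
senders = jax.random.randint(k_send, (NUM_EDGES,), 0, NUM_NODES)
receivers = jax.random.randint(k_recv, (NUM_EDGES,), 0, NUM_NODES)

def init_mlp(key, in_dim, out_dim):
    """Random weights for a tiny two-layer MLP."""
    k1, k2 = jax.random.split(key)
    return (jax.random.normal(k1, (in_dim, HIDDEN)) * 0.1, jnp.zeros(HIDDEN),
            jax.random.normal(k2, (HIDDEN, out_dim)) * 0.1, jnp.zeros(out_dim))

def mlp(x, w1, b1, w2, b2):
    return jnp.tanh(x @ w1 + b1) @ w2 + b2

params = {
    "edge": init_mlp(k_edge, 2 * NUM_FEATURES, NUM_FEATURES),
    "node": init_mlp(k_node, 2 * NUM_FEATURES, NUM_FEATURES),
}

def message_passing_step(nodes, params):
    """One round of message passing: edges mix the states of the points they
    connect, then each point aggregates its incoming messages and updates its
    own state -- capturing how neighboring conditions influence one another."""
    edge_inputs = jnp.concatenate([nodes[senders], nodes[receivers]], axis=-1)
    messages = mlp(edge_inputs, *params["edge"])
    # Sum the messages arriving at each receiving node.
    incoming = jax.ops.segment_sum(messages, receivers, num_segments=NUM_NODES)
    node_inputs = jnp.concatenate([nodes, incoming], axis=-1)
    # Residual update: predict how the local conditions shift.
    return nodes + mlp(node_inputs, *params["node"])

updated = message_passing_step(nodes, params)   # shape (NUM_NODES, NUM_FEATURES)
```

In a trained model, layers like this are stacked and applied repeatedly to roll the predicted atmospheric state forward in time, which is why inference stays cheap even though training the weights in the first place is compute-intensive.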