We trained an AI to predict how bad a forest fire will be. It's just as good as a coin flip!

What's that line? Your choices are half chance, so are everybody else's

Forest fires have apparently ravaged over four million acres of land across the United States so far this year, and the problem is only getting worse with global warming. Enter technology's hottest solution: Machine learning.

Scientists and engineers at the University of California, Irvine (UCI), have built a decision tree algorithm to predict how big a forest fire will grow given the time of day, weather conditions, and local vegetation. The researchers hope such forecasts will help states decide how best to allocate resources to fight the blazes.

The only catch is that the model is accurate just 50.4 per cent of the time. Since it sorts fires into three size groups (small, medium, and large), that figure is actually better than guessing at random, Shane Coffield, coauthor of the study published in the International Journal of Wildland Fire and a graduate student researcher at UCI, told The Register.

"A random model, classifying fires into three size groups, would have an accuracy of 33.3 per cent, which we outperform significantly. However, we speculate that we weren't able to achieve a higher accuracy due to the simplicity of our model, specifically the input variables. It's based on simple input variables [such as] vapor pressure deficit and [fraction of spruce trees] which are averaged for a time window or area around the ignition points and do not capture the full structure of fuels in the area."

Source information

The training data came from 1,168 fires that burned in Alaska between 2001 and 2017. The state is thick with spruce, a coniferous evergreen that's more flammable than other conifers. Satellite data on the fires helped the researchers estimate the areas each one engulfed, so the blazes could be grouped by size.

The team split that data, using 90 per cent to train the algorithm and the remaining 10 per cent to test it. It then shuffled the data so that a different 90-10 split was used for retraining and retesting. The team repeated this process ten times in all, and averaged the accuracy scores to reach the 50.4 per cent figure.
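In scikit-learn terms, that protocol looks like ten random 90/10 shuffle splits with the accuracy averaged across them. The snippet below is a sketch of that procedure on synthetic placeholder data; it mirrors the description above rather than the paper's exact code.

```python
# Sketch of the 90/10 shuffle-and-average evaluation described above,
# run on synthetic placeholder data rather than the real fire records
import numpy as np
from sklearn.model_selection import ShuffleSplit, cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (1168, 2))  # placeholder features
y = rng.integers(0, 3, 1168)          # placeholder small/medium/large labels

# Ten random 90/10 splits, retraining on each and averaging test accuracy
splitter = ShuffleSplit(n_splits=10, test_size=0.1, random_state=0)
scores = cross_val_score(DecisionTreeClassifier(random_state=0),
                         X, y, cv=splitter, scoring="accuracy")

# On random labels this hovers near the 33.3 per cent baseline;
# the study's real data averaged out to 50.4 per cent
print(f"mean accuracy over 10 splits: {scores.mean():.3f}")
```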

Coffield believed that having more training data or using a different type of machine learning algorithm wouldn't boost the results by much. Instead, it's down to the simplicity of the input variables.

"Future work should use more complex input variables that capture finer-scale variations in weather as well as the structure of vegetation and barriers around ignition, which affects fire spread. Based on our analysis, we do not believe the model is limited by the number of data points nor the choice of machine learning algorithm."

Predictive disaster modelling is tricky; it's difficult to account for all the relevant variables and their effects when trying to forecast things like earthquakes or wildfires.

"Machine learning can't use the information we're not giving it. I certainly don't see our results as a negative reflection on machine learning, which has shown huge potential in a variety of fields, especially where the data is large and the underlying dynamics are complex," said Coffield.

"We care about early identification of the biggest fires because they threaten more of the ecosystem, which is not adapted to modern fire frequencies; they usually produce the most smoke, threatening human health; and they usually release the most carbon, offering a positive feedback to exacerbate climate warming and more fires." ®
