
Studying coral reefs used to mean hours of grueling manual analysis, but artificial intelligence is changing the game.
A new neural network can process ocean sounds in real time, identifying fish activity 25 times faster than humans. This technology could revolutionize how scientists monitor reef health and protect marine ecosystems.
The Hidden Complexity of Coral Reefs
Coral reefs are among the most diverse ecosystems on the planet. Although they cover less than 1% of the ocean, they provide habitat for about 25% of all marine species at some stage of their life cycle. With so much biodiversity concentrated in one place, scientists face challenges in accurately identifying which species are present and in what numbers.
To tackle this, researchers from the Woods Hole Oceanographic Institution have developed a new approach, combining acoustic monitoring with a neural network to analyze fish activity based on sound. Their study was published today (March 11) in JASA, the journal of the Acoustical Society of America, through AIP Publishing.
The Challenges of Traditional Monitoring
For years, scientists have relied on passive acoustic monitoring to study coral reefs. This involves placing an underwater recorder at a reef for months to capture ambient sounds. While existing signal processing tools can analyze large volumes of audio data, they are not designed to detect specific sounds. Identifying individual fish calls or species-specific noises still requires researchers to manually sift through hours of recordings.
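To see why this is so labor-intensive, consider what "detecting a specific sound" actually involves: a reviewer scans spectrograms for brief bursts of energy in the frequency bands where fish calls occur. The sketch below is a minimal, hypothetical stand-in for that process (it is not the study's method): it flags time windows whose energy in an assumed 100–1000 Hz call band rises well above the background noise floor.

```python
import numpy as np

def detect_calls(audio, sr, band=(100, 1000), nfft=1024, threshold_db=6.0):
    """Flag frame times whose energy inside `band` (Hz) rises more than
    `threshold_db` above the median background level -- a crude stand-in
    for the manual spectrogram review described in the article."""
    hop = nfft // 2
    freqs = np.fft.rfftfreq(nfft, d=1 / sr)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    window = np.hanning(nfft)
    times, energy = [], []
    for start in range(0, len(audio) - nfft + 1, hop):
        frame = audio[start:start + nfft] * window
        power = np.abs(np.fft.rfft(frame)) ** 2
        energy.append(power[in_band].sum())
        times.append((start + nfft / 2) / sr)
    energy_db = 10 * np.log10(np.array(energy) + 1e-12)
    background = np.median(energy_db)  # robust noise-floor estimate
    return np.array(times)[energy_db > background + threshold_db]

# Synthetic 2-second clip: faint noise plus a brief 400 Hz "call" near 1 s
rng = np.random.default_rng(0)
sr = 8000
t = np.arange(2 * sr) / sr
audio = 0.01 * rng.standard_normal(t.size)
call = (t > 0.9) & (t < 1.1)
audio[call] += 0.5 * np.sin(2 * np.pi * 400 * t[call])

detections = detect_calls(audio, sr)
```

A fixed threshold like this is exactly what breaks down on a real reef, where snapping shrimp, boat noise, and overlapping calls make simple energy rules unreliable, which is why the detailed review still falls to humans.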
“But for the people that are doing that, it’s awful work, to be quite honest,” said author Seth McCammon. “It’s incredibly tedious work. It’s miserable.”
The Urgency of Faster Data Processing
Equally important, this type of manual analysis is too slow for practical use. With many of the world's coral reefs under threat from climate change and human activity, the ability to rapidly identify and track changes in reef populations is crucial for conservation efforts.

“It takes years to analyze data to that level with humans,” said McCammon. “The analysis of the data in this way is not useful at scale.”
AI to the Rescue: A Smarter Approach
As an alternative, the researchers trained a neural network to sort through the deluge of acoustic data automatically, analyzing audio recordings in real time. Their algorithm matches the accuracy of human experts in deciphering acoustical trends on a reef, but does so more than 25 times faster, and it could change how ocean monitoring and research are conducted.
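The study's title identifies the model as a convolutional neural network, which learns filters that slide across a spectrogram and respond wherever a call-like pattern appears. The toy sketch below illustrates only that core operation with a single hand-made filter in plain numpy; the actual network, its layers, and its training are described in the paper, not here.

```python
import numpy as np

def conv2d(spec, kernel):
    """Valid 2D cross-correlation: slide `kernel` over the spectrogram
    and record its response at every (frequency, time) offset."""
    kh, kw = kernel.shape
    h, w = spec.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(spec[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Standard CNN nonlinearity: keep positive responses, zero the rest."""
    return np.maximum(x, 0.0)

# Toy spectrogram (freq bins x time bins) with a bright 3x3 blob at
# rows 4-6, cols 7-9, standing in for a fish call; the kernel is a
# hand-made 3x3 averaging filter acting as a matched detector.
spec = np.zeros((16, 20))
spec[4:7, 7:10] = 1.0
kernel = np.ones((3, 3)) / 9.0

response = relu(conv2d(spec, kernel))
peak = np.unravel_index(response.argmax(), response.shape)
# `peak` is the top-left corner of the strongest match: (4, 7)
```

In a trained network, the filters are learned from labeled recordings rather than hand-made, and many such layers are stacked; but the speed advantage over manual review comes from exactly this kind of cheap, parallelizable arithmetic.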
“Now that we no longer need to have a human in the loop, what other sorts of devices — moving beyond just recorders — could we use?” said McCammon. “Some work that my co-author Aran Mooney is doing involves integrating this type of neural network onto a floating mooring that’s broadcasting real-time updates of fish call counts. We are also working on putting our neural network onto our autonomous underwater vehicle, CUREE, so that it can listen for fish and map out hot spots of biological activity.”
Cracking the Code of Fish Calls
This technology also has the potential to solve a long-standing problem in marine acoustic studies: matching each unique sound to a fish.
“For the vast majority of species, we haven’t gotten to the point yet where we can say with certainty that a call came from a particular species of fish,” said McCammon. “That’s, at least in my mind, the holy grail we’re looking for. By being able to do fish call detection in real time, we can start to build devices that are able to automatically hear a call and then see what fish are nearby.”
A Future of Real-Time Conservation
Eventually, McCammon hopes that this neural network will provide researchers with the ability to monitor fish populations in real time, identify species in trouble, and respond to disasters. This technology will help conservationists gain a clearer picture of the health of coral reefs, in an era where reefs need all the help they can get.
Reference: “Rapid detection of fish calls within diverse coral reef soundscapes using a convolutional neural network” by Seth McCammon, Nathan Formel, Sierra Jarriel and T. Aran Mooney, 11 March 2025, The Journal of the Acoustical Society of America.
DOI: 10.1121/10.0035829