Knowles Electronics (2013)
Product data sheet: SPU0410LR5H-QB Zero-Height SiSonic Microphone
A. Jaramillo-Legorreta, G. Cárdenas‐Hinojosa, E. Nieto‐Garcia, L. Rojas-Bracho, J. Hoef, J. Moore, N. Tregenza, J. Barlow, T. Gerrodette, L. Thomas, B. Taylor (2017)
Passive acoustic monitoring of the decline of Mexico's critically endangered vaquitaConservation Biology, 31
H. Harding, T. Gordon, Rachel Hsuan, Alex Mackaness, A. Radford, S. Simpson (2018)
Fish in habitats with higher motorboat disturbance show reduced sensitivity to motorboat noiseBiology Letters, 14
R.S. Sousa‐Lima, T.F. Norris, J.N. Oswald, D.P. Fernandes (2013)
A review and inventory of fixed autonomous recorders for passive acoustic monitoring of marine mammals, 39
N. Pieretti, R. Danovaro (2020)
Acoustic indexes for marine biodiversity trends and ecosystem healthPhilosophical Transactions of the Royal Society B, 375
Tsuyoshi Okumura, T. Akamatsu, H. Yan (2002)
ANALYSES OF SMALL TANK ACOUSTICS: EMPIRICAL AND THEORETICAL APPROACHESBioacoustics, 12
M. Sanguineti, J. Alessi, M. Brunoldi, G. Cannarile, O. Cavalleri, R. Cerruti, N. Falzoi, F. Gaberscek, C. Gili, G. Gnone, D. Grosso, C. Guidi, A. Mandich, C. Melchiorre, A. Pesce, M. Petrillo, M. Taiuti, B. Valettini, G. Viano (2021)
An automated passive acoustic monitoring system for real time sperm whale (Physeter macrocephalus) threat prevention in the Mediterranean SeaApplied Acoustics, 172
T. Renterghem, Pieter Thomas, F. Dominguez, S. Dauwe, A. Touhafi, B. Dhoedt, D. Botteldooren (2011)
On the ability of consumer electronics microphones for environmental noise monitoring.Journal of environmental monitoring : JEM, 13 3
Fueldner M. (2020)
Microphones. In: Handbook of Silicon Based MEMS Materials and Technologies
C. Bobryk, Christine Rega-Brodsky, S. Bardhan, A. Farina, Hong He, S. Jose (2016)
A rapid soundscape analysis to quantify conservation benefits of temperate agroforestry systems using low-cost technologyAgroforestry Systems, 90
B. Pijanowski, A. Farina, S. Gage, Sarah Dumyahn, B. Krause (2011)
What is soundscape ecology? An introduction and overview of an emerging new scienceLandscape Ecology, 26
Richard Beason, R. Riesch, J. Koricheva (2019)
AURITA: an affordable, autonomous recording device for acoustic monitoring of audible and ultrasonic frequenciesBioacoustics, 28
M. Baumgartner, Julianne Bonnell, S. Parijs, P. Corkeron, Cara Hotchkin, K. Ball, L. Pelletier, J. Partan, D. Peters, J. Kemp, J. Pietro, K. Newhall, A. Stokes, T. Cole, Ester Quintana, S. Kraus (2019)
Persistent near real‐time passive acoustic monitoring for baleen whales from a moored buoy: System description and evaluationMethods in Ecology and Evolution, 10
Ming Zhong, M. Castellote, R. Dodhia, J. Ferres, M. Keogh, Arial Brewer (2020)
Beluga whale acoustic signal classification using deep learning neural network models.The Journal of the Acoustical Society of America, 147 3
A. Stimpert, D. Wiley, W. Au, Mark Johnson, R. Arsenault (2007)
‘Megapclicks’: acoustic click trains and buzzes produced during night-time foraging of humpback whales (Megaptera novaeangliae)Biology Letters, 3
M. Towsey, S. Parsons, J. Sueur (2014)
Ecology and acoustics at a large scaleEcol. Informatics, 21
J. Sueur, T. Aubin, C. Simonis (2008)
Seewave, a free modular tool for sound analysis and synthesis. Bioacoustics, 18
D. Bohnenstiehl, R. Lyon, Olivia Caretti, Shannon Ricci, D. Eggleston (2018)
Investigating the utility of ecoacoustic metrics in marine soundscapes. Journal of Ecoacoustics, 2
Michel Versluis, Barbara Schmitz, Anna Heydt, Anna Heydt, Detlef Lohse (2000)
How snapping shrimp snap: through cavitating bubbles.Science, 289 5487
Emma Longden, S. Elwen, Barry McGovern, B. James, C. Embling, T. Gridley (2020)
Mark–recapture of individually distinctive calls—a case study with signature whistles of bottlenose dolphins (Tursiops truncatus)Journal of Mammalogy, 101
R. Sousa-Lima, D. Fernandes, T. Norris, J. Oswald (2013)
A review and inventory of fixed autonomous recorders for passive acoustic monitoring of marine mammals: 2013 state-of-the-industry. 2013 IEEE/OES Acoustics in Underwater Geosciences Symposium
L. Sugai, C. Desjonquères, T. Silva, Diego Llusia (2019)
A roadmap for survey designs in terrestrial acoustic monitoringRemote Sensing in Ecology and Conservation, 6
B. Erisman, T. Rowell (2017)
A sound worth saving: acoustic characteristics of a massive fish spawning aggregationBiology Letters, 13
Elena Desiderà, P. Guidetti, P. Panzalis, A. Navone, Ca Valentini-Poirrier, P. Boissery, C. Gervaise, L. Iorio (2019)
Acoustic fish communities: sound diversity of rocky habitats reflects fish species diversity. Marine Ecology Progress Series, 608
G. Hayman, S. Robinson, P. Lepper (2016)
Calibration and Characterization of Autonomous Recorders Used in the Measurement of Underwater Noise.Advances in experimental medicine and biology, 875
M. Peck, R. Tapilatu, Eveline Kurniati, Christopher Rosado (2021)
Rapid coral reef assessment using 3D modelling and acoustics: acoustic indices correlate to fish abundance, diversity and environmental indicators in West Papua, IndonesiaPeerJ, 9
H. Fox, J. Pet, R. Dahuri, R. Caldwell (2003)
Recovery in rubble fields: long-term impacts of blast fishing.Marine pollution bulletin, 46 8
Marcos Fianco, Hemanueli Preis, N. Szinwelski, H. Braun, L. Faria (2019)
On brachypterous phaneropterine katydids (Orthoptera: Tettigoniidae: Phaneropterinae) from the Iguaçu National Park, Brazil: three new species, new record and bioacoustics. Zootaxa, 4652 2
V. Lindroos, Markku Tilli, Ari Lehto, T. Motooka (2020)
Handbook of Silicon Based MEMS Materials and Technologies
Sarab Sethi, N. Jones, B. Fulcher, L. Picinali, D. Clink, H. Klinck, C. Orme, P. Wrege, R. Ewers (2020)
Characterizing soundscapes across diverse ecosystems using a universal acoustic feature setProceedings of the National Academy of Sciences, 117
K. Servick (2014)
Eavesdropping on ecosystems.Science, 343 6173
M. Legg, A. Zaknich, A. Duncan, M. Greening (2007)
Analysis of impulsive biological noise due to snapping shrimp as a point process in timeOCEANS 2007 - Europe
J. Balcombe, G. McCracken (1992)
Vocal recognition in mexican free-tailed bats: do pups recognize mothers?Animal Behaviour, 43
Jack Fearey, S. Elwen, B. James, T. Gridley, T. Gridley (2019)
Identification of potential signature whistles from free-ranging common dolphins (Delphinus delphis) in South AfricaAnimal Cognition, 22
J. Sueur, A. Farina (2015)
Ecoacoustics: the Ecological Investigation and Interpretation of Environmental SoundBiosemiotics, 8
K. Hayashi, Erwinsyah, Vita Lelyana, K. Yamamura (2020)
Acoustic dissimilarities between an oil palm plantation and surrounding forests: Analysis of index time series for beta-diversity in South Sumatra, IndonesiaEcological Indicators, 112
Stephanie King, V. Janik (2015)
Come dine with me: food-associated social signalling in wild bottlenose dolphins (Tursiops truncatus)Animal Cognition, 18
R. Whytock, J. Christie (2017)
Solo: an open source, customizable and inexpensive audio recorder for bioacoustic researchMethods in Ecology and Evolution, 8
Natalia Revilla-Martín, I. Budinski, Xavier Puig-Montserrat, Carles Flaquer, A. López‐Baucells (2020)
Monitoring cave-dwelling bats using remote passive acoustic detectors: a new approach for cave monitoringBioacoustics, 30
Susan Williams, Christine Sur, N. Janetski, Jordan Hollarsmith, Saipul Rapi, Luke Barron, S. Heatwole, A. Yusuf, S. Yusuf, J. Jompa, Frank Mars (2018)
Large‐scale coral reef rehabilitation after blast fishing in IndonesiaRestoration Ecology, 27
Kieran McCloskey, Katherine Chapman, Lucille Chapuis, M. McCormick, A. Radford, S. Simpson (2020)
Assessing and mitigating impacts of motorboat noise on nesting damselfish.Environmental pollution, 266 Pt 2
Simon Elise, A. Bailly, I. Urbina-Barreto, G. Mou-Tham, F. Chiroleu, L. Vigliola, W. Robbins, J. Bruggemann (2019)
An optimised passive acoustic sampling scheme to discriminate among coral reefs’ ecological statesEcological Indicators, 107
T. Gordon, H. Harding, K. Wong, N. Merchant, M. Meekan, M. McCormick, A. Radford, S. Simpson (2018)
Habitat degradation negatively affects auditory settlement behavior of coral reef fishesProceedings of the National Academy of Sciences of the United States of America, 115
F. Juanes (2018)
Visual and acoustic sensors for early detection of biological invasions: Current uses and future potentialJournal for Nature Conservation, 42
R. Rountree, F. Juanes (2017)
Potential of passive acoustic recording for monitoring invasive species: freshwater drum invasion of the Hudson River via the New York canal systemBiological Invasions, 19
A. Hill, Peter Prince, Evelyn Covarrubias, C. Doncaster, Jake Snaddon, A. Rogers (2018)
AudioMoth: Evaluation of a smart open acoustic device for monitoring biodiversity and the environmentMethods in Ecology and Evolution, 9
J. Podos (2010)
Acoustic discrimination of sympatric morphs in Darwin's finches: a behavioural mechanism for assortative mating?Philosophical Transactions of the Royal Society B: Biological Sciences, 365
ZSL (2021)
Conservation technology: detecting illegal fishing vessels
A. Hill, Peter Prince, Jake Snaddon, C. Doncaster, A. Rogers (2019)
AudioMoth: A low-cost acoustic device for monitoring biodiversity and the environmentHardwareX
G. Hayman, S. Robinson, T. Pangerc, J. Ablitt, P. Theobald (2017)
Calibration of marine autonomous acoustic recordersOCEANS 2017 - Aberdeen
F. Bertucci, É. Parmentier, G. Lecellier, A. Hawkins, D. Lecchini (2016)
Acoustic indices provide information on the status of coral reefs: an example from Moorea Island in the South PacificScientific Reports, 6
J. Sueur, B. Krause, A. Farina (2019)
Climate Change Is Breaking Earth's Beat.Trends in ecology & evolution
T. Mooney, L. Iorio, M. Lammers, Tzu‐Hao Lin, S. Nedelec, Miles Parsons, C. Radford, E. Urban, J. Stanley (2020)
Listening forward: approaching marine biodiversity assessments using acoustic methodsRoyal Society Open Science, 7
A. Farina, Philip James, C. Bobryk, N. Pieretti, E. Lattanzi, J. McWilliam (2014)
Low cost (audio) recording (LCR) for advancing soundscape ecology towards the conservation of sonic complexity and biodiversity in natural and urban landscapesUrban Ecosystems, 17
E. Kasten, S. Gage, J. Fox, Wooyeong Joo (2012)
The remote environmental assessment laboratory's acoustic library: An archive for studying soundscape ecologyEcol. Informatics, 12
Sarab Sethi, R. Ewers, N. Jones, C. Orme, L. Picinali (2017)
Robust, real-time and autonomous monitoring of ecosystems with an open, low-cost, networked devicebioRxiv
Lucille Chapuis, Ben Williams, T. Gordon, S. Simpson (2021)
Low-cost action cameras offer potential for widespread acoustic monitoring of marine ecosystemsEcological Indicators, 129
C. Radford, S. Ghazali, A. Jeffs, J. Montgomery (2015)
Vocalisations of the bigeye Pempheris adspersa: characteristics, source level and active spaceThe Journal of Experimental Biology, 218
S. Barber-Meyer, V. Palacios, Barbara Marti-Domken, Lori Schmidt (2020)
Testing a New Passive Acoustic Recording Unit to Monitor WolvesWildlife Society Bulletin, 44
C. Reis, L. Padovese, M. Oliveira (2019)
Automatic detection of vessel signatures in audio recordings with spectral amplitude variation signatureMethods in Ecology and Evolution, 10
M. Kaplan, T. Mooney, J. Partan, A. Solow (2015)
Coral reef species assemblages are associated with ambient soundscapesMarine Ecology Progress Series, 533
N. Merchant, K. Fristrup, Mark Johnson, P. Tyack, M. Witt, P. Blondel, S. Parks (2015)
Measuring acoustic habitatsMethods in Ecology and Evolution, 6
N. Pieretti, A. Farina, D. Morri (2011)
A new methodology to infer the singing activity of an avian community: The Acoustic Complexity Index (ACI)Ecological Indicators, 11
M. Bolgan, J. O’Brien, Emilia Chorazyczewska, I. Winfield, Peter McCullough, M. Gammell (2018)
The soundscape of Arctic Charr spawning grounds in lotic and lentic environments: can passive acoustic monitoring be used to detect spawning activities?Bioacoustics, 27
Jia-jia Jiang, Bu Lingran, F. Duan, Wang Xianquan, W. Liu, S. Zhongbo, Li Chunyue (2019)
Whistle detection and classification for whales based on convolutional neural networksApplied Acoustics
Mindje M. (2020)
Diversity assessment of anurans in the Mugesera wetland (Eastern Rwanda): impact of habitat disturbance and partial recovery
Introduction

Passive acoustic monitoring (PAM) is a powerful technique for assessing the presence and behaviour of wild animals and the health of ecosystems (Merchant et al., 2015; Servick, 2014; Sueur & Farina, 2015). Across a wide taxonomic range, the sounds made by animals can contain information about their identity, morphotype, sex, age and behaviour (Balcombe & McCracken, 1992; Fearey et al., 2019; King & Janik, 2015; Podos, 2010; Radford et al., 2015; Stimpert et al., 2007). Additionally, the combination of all sounds within an ecosystem can be considered as a single 'soundscape', which contains information about spatial and temporal variation in ecosystem health, the diversity and behaviour of soniferous communities, and instances of anthropogenic disturbance (Pijanowski et al., 2011; Sueur et al., 2019; Towsey et al., 2014). As such, the ability to listen to the sounds of nature is valuable for studying animal behaviour and monitoring individuals, populations and ecosystems.

Financial constraints can limit the capacity for people and organisations to record wildlife sounds (Farina et al., 2014). Costs of PAM include buying equipment, deploying and retrieving recorders, and downloading, storing and analysing data (Sugai et al., 2020). The relative magnitude of each of these costs depends on the demands of the project, including variables such as the number of recordings being taken, the difficulty of accessing field sites and the approach to data management and analysis. For projects operating on a low budget, equipment purchase can be a particularly challenging financial obstacle because it represents a 'hard limit' for would-be PAM practitioners. This is because financial costs associated with instrument deployment and data management, whilst high in many cases, can be mitigated by reducing operational capacity—for example, by monitoring fewer sites, using duty cycles that extend battery life (and therefore require fewer deployments), compressing files for storage and using open-source, automated analysis techniques (Mooney et al., 2020; Sousa-Lima et al., 2013). By contrast, if equipment is unaffordable, this represents a problem that cannot be easily mitigated and precludes any application of PAM—even reduced-scale programmes cannot start without a recording device. Making recording devices available at affordable prices is therefore a fundamental challenge to facilitate widespread uptake of PAM.

In the terrestrial realm, the development and adoption of a range of low-cost devices have allowed a large number of programmes to record wildlife and carry out PAM on modest budgets (Beason et al., 2019; Bobryk et al., 2016; Farina et al., 2014; Hill et al., 2018; Whytock & Christie, 2017). For example, some practitioners use handheld music or voice recorders, available for less than $500, to take soundscape recordings (Hayashi et al., 2020; Mindje et al., 2020). Furthermore, a range of open-source build-it-yourself PAM platforms have been developed in recent years, costing between $100 and $500 (Beason et al., 2019; Sethi et al., 2018; Whytock & Christie, 2017). Alternatively, the widely used SongMeter (Wildlife Acoustics, www.wildlifeacoustics.com) and Bioacoustic Audio Recorder (Frontier Labs, www.frontierlabs.com.au) cost between US$500 and US$1000—a cost that, whilst prohibitive for some low-budget and/or large-scale deployments, remains affordable for many small organisations around the world.

Low-cost recorders are typically limited in their functional capacity; devices are provided without factory calibration and produce lower quality recordings with lower power efficiency than more expensive recorders, because they use low-cost technology (e.g. Raspberry Pis, MEMS microphones), rather than components that are purpose-built for PAM. Despite these limits, low-cost recorders like these still meet the requirements of many practitioners carrying out PAM and have boosted the adoption of terrestrial PAM programmes worldwide (Bobryk et al., 2016; Farina et al., 2014; Hill et al., 2018; Sethi et al., 2018).

In contrast to the terrestrial realm, there are no autonomous recording units suitable for underwater use that cost less than $3,000. Whilst several companies sell isolated hydrophones for less than $500 (e.g. HTI www.hightechincusa.com/, LSTN2 www.lstn2.com and Aquarian www.aquarianaudio.com), these hydrophones are 'sensor only' devices, which require a recorder and power supply that must be either waterproofed or kept out of the water; they therefore do not represent the full cost of a complete recording set-up. Autonomous recording units are typically self-contained devices with an integrated power source, electronics for capturing and storing sounds and an internal clock for time-stamping data and scheduling, but are not available for less than $3,000. This means there are likely to be many potential applications of underwater PAM for which there are no affordable, ready-to-use recording devices available. As such, developing low-cost autonomous recorders for underwater use has the potential to facilitate more widespread uptake of PAM in aquatic habitats, as it already has done in many terrestrial biomes.

In the manufacture of any device, there are inescapable trade-offs between quality and cost. Part of the reason for the high price of existing underwater autonomous recording units ($3,000–$10,000) is that they have advanced technical features and high levels of precision. These devices often include factory calibration, exceedingly low self-noise, extreme depth ratings, extensive memory and extended battery life (Sousa-Lima et al., 2013). Whilst undoubtedly valuable in some settings, there are many useful applications of PAM in aquatic habitats that do not require such expensive features, but can be completed with recordings that are uncalibrated, short in duration and/or from shallow habitats (Chapuis et al., 2021; Desiderà et al., 2019; Peck et al., 2021; Reis et al., 2019). Expensive high-specification recorders are surplus to the requirements of aquatic PAM programmes like these, meaning that such programmes might benefit substantially from the development of recording systems that offer reduced capabilities at a lower price. For example, expensive hydrophones are currently used to detect whales and issue real-time signals to ships to prevent collisions (Baumgartner et al., 2019; Sanguineti et al., 2021) as well as to detect illegal fishing vessels and relay signals to law-enforcement teams (ZSL, 2021), but analogous programmes in terrestrial ecology (detecting cicadas and gunshots in forests) suggest that these goals might be achievable at lower cost with simpler recording devices (Hill et al., 2018).
Additionally, low-cost aquatic recording devices might accelerate development of new uses of PAM that until now have not been possible due to financial constraints.

Here, we first present the results of a systematic literature review of recent studies that involved field recordings of wild animals and natural soundscapes. We use this review to demonstrate and quantify the lack of low-cost aquatic recording equipment relative to the terrestrial realm. We then test a prototype low-cost aquatic recorder, the 'HydroMoth', which was developed by Open Acoustic Devices. The HydroMoth consists of an AudioMoth 1.2.0 (US $79; one of the cheapest commercially available terrestrial recorders), with a custom-made waterproof case and variable recording gain levels. Finally, we test the performance of this 'HydroMoth' against the SoundTrap 300 STD and SoundTrap 300 HF (Ocean Instruments NZ, New Zealand), which are the least expensive aquatic autonomous recorders in our systematic review (US $3000 and US $4100, respectively). Performance tests were carried out by recording loudspeaker playback of artificial sounds, individual vocalisations of marine mammals and fishes, and coral reef soundscapes.

Materials and Methods

Systematic literature review

To compare the instruments used for acoustic monitoring in terrestrial and aquatic ecosystems, we carried out a systematic literature review of studies published in 2020 that conducted acoustic monitoring of wild animals and ecosystems. The search was restricted to the single most recent year (2020) in order to ensure that all studies had equal access to the most recent developments in recording equipment. In January 2021, we performed a topic search (which includes title, abstract and keywords) in Web of Science using the query: (soundscape* OR "passive acoustic monitoring" OR "passive acoustics monitoring" OR eco$acoustic* OR bio$acoustic*). We restricted our search to English-language articles published in 2020 in peer-reviewed journals, in the Web of Science subject categories biology, ecology, zoology, biodiversity conservation, environmental sciences, environmental studies, marine freshwater biology, fisheries, biophysics, remote sensing and multidisciplinary sciences. This search generated 337 papers published in 2020, which we screened to include only studies that performed recordings of animals or ecosystems in the wild: this generated a final list of 204 papers. The excluded papers included reviews, meta-analyses and opinion pieces that did not include original recordings; papers that recorded animals in captivity; and papers that recorded exclusively anthropogenic noise. Whilst all reviews of this nature inevitably miss a small number of relevant published papers, this list of papers was adequate to describe general trends in the prices of recording devices used by the majority of studies in this field.

In each of the included 204 papers, we identified the make and model of the recording instrument(s) used, and classified them by type as either autonomous recording units (microphone/hydrophone and recorder integrated into a single machine); sensor and recorder combinations (microphone/hydrophone connected to a separate device made specifically for sound recording); or sensor only (microphone/hydrophone connected to a multi-purpose device such as a computer, smartphone or autonomous underwater vehicle).
Where recorders had the capacity to use either an internal microphone/hydrophone or an external sensor, we classified them based on their use in each paper on a case-by-case basis. Where one paper reported the use of multiple different instruments, we created separate records for each instrument used. We sourced the price of each instrument online in February 2021; where prices were unavailable online, we obtained quotes directly from suppliers. When instruments were discontinued models, we replaced prices with those of currently available model updates (e.g. SongMeter 1, 2 and 3 were replaced with the price of SongMeter 4), or removed them from the analysis if the model had been discontinued and had no replacement. This analysis considered only the initial cost of purchasing each recording device; we did not consider any costs associated with deployment or retrieval of instruments, data management and analysis, or technological support.

Across the 204 papers, there were 238 independent reports of recording instruments used. Of these 238 instrument records, 175 were commercially available instruments, and these 175 records comprised 67 unique devices or recording systems. Full details of all 238 instrument records are available in Dataset S1.

Testing HydroMoth, a low-cost aquatic recording unit

Open Acoustic Devices has recently developed a prototype low-cost underwater recorder called the 'HydroMoth'. HydroMoth is a modified version of the AudioMoth 1.2.0: a low-cost, open-source recording device (Hill et al., 2018, 2019; www.openacousticdevices.info). AudioMoth is already widely used by terrestrial biologists in a range of biomes (e.g. Barber-Meyer et al., 2020; Revilla-Martín et al., 2021; Sethi et al., 2020); this new prototype HydroMoth is the AudioMoth 1.2.0 device but with a modified case and firmware to make it suitable for long-term underwater use.

The existing AudioMoth IPX7 case has some waterproof features to protect it for terrestrial deployments in rain (github.com/OpenAcousticDevices/Application-Notes/blob/master/An_Injection_Moulded_Case_for_AudioMoth.pdf), but these are not suitable for extended periods of underwater deployment. To make the case fully waterproof for underwater deployment, Open Acoustic Devices removed the Porelle acoustic vent and used synthetic polymer-based hot melt glue (RS Components, Northants, UK) to fill the whole of the concave cone, from the vent hole to the edge of the rain hood; this seal also effectively created a contact-type hydrophone. No further adjustments to the case were necessary to ensure effective waterproofing when devices were fully submerged, and these modified devices have been successfully deployed without leaking at a maximum depth of 30 metres (for 9 days) and a maximum time of 2 months (at 20 metres) (T.A.C.L. & L.C., pers. comm. 2021).

To increase the range of gain settings available for underwater use, Open Acoustic Devices created a customised version of the recording firmware, which included gain settings lower than those available in the standard AudioMoth firmware. This was necessary because early pilot tests of HydroMoths found that the gain settings in standard AudioMoth firmware were too high for several underwater applications, resulting in clipping.

Both the modified case and the customised firmware are available upon request from Open Acoustic Devices.

It should be noted that as well as functioning as an autonomous recording device, AudioMoth 1.2.0 can be combined with external microphones or hydrophones to function as a 'sensor + recorder' system (see Dataset S1 for other examples of this type of system). However, here we limit data to recordings taken by HydroMoth functioning as an autonomous recording unit using its integrated MEMS microphone.

Overview of methods for testing HydroMoth

We tested HydroMoths in a range of recording scenarios, comparing them with the SoundTrap 300 STD and SoundTrap 300 HF (the cheapest available autonomous recording units in our literature review: Fig. 2; Dataset S1). First, we used aquarium-based recordings of white noise to calculate each device's signal-to-noise ratio (SNR). Second, we recorded artificial tones played from loudspeakers in open water conditions and compared power spectral density (PSD) plots and spectrograms of each recording. Third, we recorded vocalisations of a range of individual animals (marine mammals and fishes), in both captive and field conditions, to test whether HydroMoths could record at adequate quality to distinguish between sound types and species. Finally, we recorded coral reef soundscapes from sites with different habitat quality and calculated their ecoacoustic indices, to test whether HydroMoths could distinguish ecologically meaningful differences between complex soundscapes in a real-world monitoring situation. We used a sampling rate of 48 kHz for all the recordings, except for recordings of ultrasonic marine mammal vocalisations, which were taken at 96 and 384 kHz as appropriate. Ultrasonic HydroMoth recordings were compared with the SoundTrap 300 HF and/or HTI-96-Min hydrophones, because the SoundTrap 300 STD cannot record in these frequency bandwidths. We adjusted the gain levels through systematic trial and error, based on the requirements demanded by each sound source and habitat; levels were chosen such that the loudest sounds did not cause clipping by saturating the system (Merchant et al., 2015). For full details of the firmware, gain levels and sampling rate used in each recording, see Table S1.
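Gain selection by trial and error can be supported by a quick check of how close a trial recording comes to clipping. The following R sketch is not part of the authors' workflow; it simply reports the peak level and the proportion of samples at or near full scale. The file name and the 0.99 threshold are hypothetical illustrative choices, and a 16-bit PCM WAV file is assumed.

```r
# Sketch: screen a trial recording for clipping before committing to a gain setting.
# Assumes a 16-bit PCM WAV file; "trial_recording.wav" is a hypothetical file name.
library(tuneR)

check_clipping <- function(path, threshold = 0.99) {
  wav <- readWave(path)
  full_scale <- 2^(wav@bit - 1) - 1             # e.g. 32767 for 16-bit audio
  amp <- abs(as.numeric(wav@left)) / full_scale # normalised absolute amplitude
  list(
    peak_level = max(amp),                      # 1.0 means the recording hit full scale
    prop_near_clipping = mean(amp >= threshold) # fraction of samples at/near full scale
  )
}

check_clipping("trial_recording.wav")
```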
Signal-to-noise ratio

To calculate SNRs of HydroMoths and SoundTraps, we recorded white noise playback and the absence of playback in a quiet aquarium at the CRIOBE research facility in Mo'orea, French Polynesia. We used a 1000 L circular tank (1.6 m diameter × 0.5 m height) filled with seawater and suspended 2.5 cm above the ground by polyethylene foam spacers to decouple it from any ground vibrations, in order that the 'quiet' periods of our recordings might be as close to complete silence as possible. We suspended an underwater loudspeaker (University Sound UW-30; max output 156 dB re 1 μPa at 1 m, frequency response 0.1–10 kHz; Lubell Labs, USA) in the centre of the tank, broadcasting repeated 3-s segments of white noise, generated in Audacity (v.3.0.0, audacityteam.org). The loudspeaker was powered by an amplifier (M033N, 18 W, frequency response 0.04–20 kHz; Kemo Electronic GmbH) and a battery (12 V 12 Ah sealed lead acid), and connected to an MP3 player (Clip Jam; SanDisk, Milpitas, CA, USA). We suspended two SoundTrap 300 STD models and five HydroMoths in turn on a rope at the same position, 70 cm from the loudspeaker, 20 cm above the bottom of the tank. In each recording, care was taken to ensure that the instruments were facing in the same orientation and remained stationary for the duration of the recording. Each instrument recorded the white noise (3 s) followed by a 5-s period of silence. We checked all of the recordings to ensure there was no acoustic interference before analysis. SNRs were calculated using a custom-made MATLAB script (R2020b, The MathWorks Inc., Natick, MA, USA), by computing the ratio of the summed squared magnitude of the 'signal' (0.5 s of white noise playback) to that of the 'noise' (0.5 s of silent playback). The results were then imported into R (R Core Team, 2019) for plotting and statistical comparison.
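The SNR calculation described above was performed with a custom MATLAB script that is not reproduced here. The following is a minimal R sketch of the same quantity under the description above: the ratio of the summed squared magnitude of a 0.5-s signal window to that of a 0.5-s noise window, expressed in decibels. The file name and window start times are hypothetical.

```r
# Sketch of the SNR calculation described above (not the authors' MATLAB script).
# SNR (dB) = 10 * log10( sum(signal^2) / sum(noise^2) ) over 0.5-s windows.
library(tuneR)

snr_db <- function(path, signal_start_s, noise_start_s, window_s = 0.5) {
  wav <- readWave(path)
  fs  <- wav@samp.rate
  cut <- function(start_s) {
    idx <- seq(round(start_s * fs) + 1, length.out = round(window_s * fs))
    as.numeric(wav@left[idx])
  }
  s <- cut(signal_start_s)   # 0.5 s of white-noise playback
  n <- cut(noise_start_s)    # 0.5 s of 'silence' (no playback)
  10 * log10(sum(s^2) / sum(n^2))
}

# Hypothetical usage: playback starts at 1.0 s, silence at 5.0 s into the file.
snr_db("tank_recording.wav", signal_start_s = 1.0, noise_start_s = 5.0)
```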
Recordings of artificial sounds

We compared HydroMoth and SoundTrap 300 STD recordings of three artificial sounds, played through the same loudspeaker set-up but deployed and recorded in open water conditions rather than in a tank, in order to benefit from predictable open water acoustic conditions rather than distortions caused by the tank environment (Okumura et al., 2002). The sounds were: nine pure tones (1 to 17 kHz in 2 kHz intervals) played simultaneously for 1 s ('multitone'); a sine-wave frequency sweep that increased linearly from 0 to 20 kHz over a 9-s duration ('sweep'); and a 5-s period of white noise. All sounds were generated in Audacity. We took recordings at 3 m depth on a sand flat, 100 m from the shore of Tema'e beach (Mo'orea, French Polynesia). Weather conditions were benign (smooth sea state, wind <5 mph, no rain); we only took recordings when no motorboats were present, and we checked all recordings for acoustic disturbance before analysis. We placed the loudspeaker on a solid platform 20 cm above the seabed and fastened the recording instruments to a metal stake 1 m from the loudspeaker, 65 cm above the seabed. We repeated these recordings with the same five HydroMoths and two SoundTraps, in turn, each fastened to the same position in the same orientation on the metal stake. Spectrograms and PSDs were generated with custom-made MATLAB scripts for each signal (multitone, sweep and white noise). We normalised all sounds prior to building spectrograms and used a Hamming window of 1024 samples with 75% overlap. We used Welch's periodogram method over the full signals (9 s for sweep, 1 s for multitone, 5 s for white noise) to build PSDs, with a Hamming window of 2046 samples and 50% overlap.

Recordings of animal vocalisations

We compared simultaneous HydroMoth and SoundTrap recordings of vocalisations from a range of captive and wild animals. Unless otherwise specified, we took recordings using one HydroMoth and one SoundTrap attached above and below one another to the same stake or rope and recorded simultaneously and continuously with both instruments. In all instances where we used a single HydroMoth and SoundTrap, we chose the devices at random, because our recordings of artificial sounds had established that there was little variation between devices of the same make and model (see Results section).

We recorded individual coral reef fishes (multiple unidentified species) on fringing reefs 500 m off Tema'e beach, Mo'orea, French Polynesia (17.5011°S, 149.7576°W). We attached the instruments to a metal stake 65 cm off the seabed in a water depth of 2–3 m and recorded a reef soundscape. A trained expert (T.A.C.L.) then identified individual fish vocalisations from within the soundscape recording and cropped sections of the soundscape recording that contained a single fish vocalisation.

We recorded a wild orca (Orcinus orca) from a fixed mooring in Fish Hoek Bay, South Africa (34.1389°S, 18.4408°E). A lone orca was repeatedly sighted during January 2021 by the research team (S.D., T.G., G.F., J.F.) and citizen scientists. We attached the instruments to a rope suspended 2 m above the seabed by a sub-surface buoy, and identified orca vocalisations from within the resulting soundscape recording.

We recorded dusky dolphins (Lagenorhynchus obscurus) from a 7 m rigid-hull inflatable boat in Fish Hoek Bay, South Africa (34.1389°S, 18.4408°E). We attached the instruments to a weighted rope, suspended 3 m below the water surface, and took recordings in the vicinity (2–50 m) of dusky dolphins with the vessel engines and fish finder turned off.

We recorded captive bottlenose dolphins (Tursiops spp.) at uShaka Marine World, Durban, South Africa. Recordings were made in the presence of nine dolphins housed in a network of interlinked pools (recording pool dimensions 9.1 × 6.3 m, 2 m depth). We had no access to a SoundTrap for these recordings, so instead compared HydroMoth recordings to simultaneous recordings taken by an HTI-96-Min hydrophone (Hi-Tech, Inc.) coupled to an H1N digital recorder (Zoom Corporation, Tokyo, Japan)—this is the same hydrophone component used in the SoundTrap 300 STD device.

We recorded unidentified dolphin species off the coast of Kleinbaai, South Africa (34.6191°S, 19.3601°E), comparing the outputs of HydroMoths facing in opposite directions (rather than comparing a HydroMoth to a SoundTrap). Two HydroMoths were fastened back-to-back to examine directionality in detection performance; this was done in the field to benefit from predictable open water acoustic conditions, rather than reverberations or distortion caused by playback in a tank (Okumura et al., 2002). The instruments were attached to a rope suspended 2 m above the seabed by a sub-surface buoy.

For all recordings, we computed comparative time-matched waveforms and spectrograms with custom MATLAB scripts, using a 1024-sample Hamming window and 99% overlap.
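The time-matched waveforms and spectrograms in this study were generated with custom MATLAB scripts. For orientation only, the R sketch below produces an approximately equivalent comparison with the seewave package, using the window settings reported above (1024-sample Hamming window, 99% overlap). The file names are hypothetical, and the 99% overlap is computationally heavy, so this is only practical for short cropped clips.

```r
# Sketch: waveform and spectrogram for time-matched clips from two recorders,
# using the settings reported above. Not the authors' MATLAB scripts;
# file names are hypothetical.
library(tuneR)
library(seewave)

hydromoth <- readWave("hydromoth_clip.wav")
soundtrap <- readWave("soundtrap_clip.wav")

# Waveform (oscillogram) and spectrogram for each clip, plotted in turn.
oscillo(hydromoth, f = hydromoth@samp.rate)
spectro(hydromoth, f = hydromoth@samp.rate, wl = 1024, ovlp = 99, wn = "hamming")

oscillo(soundtrap, f = soundtrap@samp.rate)
spectro(soundtrap, f = soundtrap@samp.rate, wl = 1024, ovlp = 99, wn = "hamming")
```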
Recordings of coral reef soundscapes

We recorded coral reef soundscapes using HydroMoths in two different contexts. First, to quantify the between-HydroMoth error of different individual HydroMoth devices recording the same reef soundscape, we used five different HydroMoths and one SoundTrap 300 STD to record simultaneously the same reef soundscape in French Polynesia. We attached all five HydroMoths and the SoundTrap to the same metal stake, placed immediately above each other, facing in the same direction. This stake was deployed sequentially on five different coral reef sites, each 30 m apart and 500 m off Tema'e beach (17.5011°S, 149.7576°W). The water depth at all sites was 2–3 m, and the stake was positioned such that the lowest device was 50 cm above the seabed. We took five sequential 1-min recordings at each site, all between 12:30 and 14:00 on the same day; each of the six devices therefore took 25 recordings (5 × 1 min at each of five reef sites). We used the crop tool in Audacity to remove sections of recordings that contained anthropogenic noise and instances of interference that affected some devices but not others (for example, knocks from fish colliding with one of the devices). Having 'cleaned' recordings in this manner, we then equalised their lengths by cropping from the end of the track such that all segments were 30 s long.

Second, to test whether HydroMoth recordings could differentiate between different ecological states, we took recordings of a pair of coral reefs of dramatically different health, as part of the monitoring programme for the Mars Coral Reef Restoration Project (www.buildingcoral.com) at Pulau Bontosua, Indonesia (4.9288°S, 119.3192°E). A healthy reef that featured abundant live coral cover (90%–100%), high structural complexity and a diverse fish community was compared with a degraded reef where extreme levels of dynamite fishing have resulted in a flattened rubble field with very low live coral cover (0–10%) and very few fish. Representative photos of each habitat type are shown in Figure 1; for more information on the impacts of dynamite fishing on reefs in the region, see Fox et al. (2003) and Williams et al. (2019). These two paired reefs were 800 m apart and at the same depth (2.5 m; total tidal range at the site is approximately 1.2 m), and were recorded simultaneously in benign weather and sea state conditions. We deployed one randomly selected HydroMoth on each reef for 48 hours (from 24 to 26 November 2020), attached to a metal stake 0.5 m off the seabed. We subsampled recordings, taking 30 time-matched 1-min samples from each 48-hour recording. These 30 subsamples comprised three from each of five time points throughout the day, on both days of the recording. The five time points were morning (9–11 am), afternoon (2–4 pm), sunset (one hour either side of sunset), night (11 pm–1 am) and sunrise (one hour either side of sunrise), and subsamples from the same time point were always separated by at least 15 min. We checked each subsample for anthropogenic noise or equipment knocking, replacing it with a resampled section from the same time point if any acoustic disturbance was found.

Figure 1. Representative photographs and photoquadrats of the healthy (A, B) and degraded (C, D) coral reef ecosystems present at the recording locations at Pulau Bontosua, Indonesia (E). Photos taken by The Ocean Agency (A, C) and T.A.C.L. (B, D); satellite image (E) obtained from Google Maps, available at https://goo.gl/maps/shewdq3deNDT3Ui38 (Map data: Google, CNES/Airbus, Maxar Technologies).

We analysed the coral reef soundscape recordings using the two most commonly used calibration-independent ecoacoustic metrics in marine soundscape studies (Pieretti & Danovaro, 2020): the Acoustic Complexity Index (ACI) and the rate of invertebrate snaps (snap rate). The ACI is a measure of soundscape variability, first designed for use in terrestrial forests (Pieretti et al., 2011) and since applied with mixed success to marine ecosystems (Bertucci et al., 2016; Bohnenstiehl et al., 2018; Mooney et al., 2020). We computed the ACI using the seewave package in R (Sueur et al., 2008), with a frequency bandwidth of 2–7 kHz and an FFT window of 1024 samples; these settings are optimal for distinguishing between coral reef health states (Elise et al., 2019).
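The ACI settings reported above map directly onto the ACI function in the seewave package. A minimal sketch, assuming a 1-min subsample saved as a hypothetical WAV file:

```r
# Sketch: Acoustic Complexity Index for one 1-min subsample, using the settings
# reported above (2-7 kHz band, 1024-sample FFT window). File name is hypothetical.
library(tuneR)
library(seewave)

subsample <- readWave("reef_subsample_01.wav")
aci_value <- ACI(subsample, f = subsample@samp.rate, wl = 1024, flim = c(2, 7))
aci_value
```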
Snap rate is a count per minute of the number of high-amplitude snapping sounds associated with shrimp in the family Alpheidae; these snaps are the dominant contribution to tropical reef soundscapes (Legg et al., 2007; Versluis et al., 2000). We computed snap rate using a custom-designed algorithm in MATLAB that counts the number of acoustic events exceeding 1000 times the median amplitude value, excluding a buffer zone of 1 ms after each event to avoid double-counting (following Gordon et al., 2018). For both indices, we calculated individual values for each subsample and carried out paired t-tests (paired by recordings that were taken at the same time) to determine whether values from healthy and degraded habitats were significantly different. We then compared the magnitude of these between-habitat differences with that of the between-HydroMoth variation recorded on the same reef at Tema'e. This tested whether ecologically relevant effect sizes in natural soundscapes exceeded the precision limits of HydroMoths.
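The snap-counting algorithm was implemented by the authors in MATLAB and is not reproduced here. The R sketch below follows the description above (events exceeding 1000 times the median absolute amplitude, with a 1-ms buffer after each counted event) and then applies the paired t-test described; all file names are hypothetical.

```r
# Sketch of the snap-rate algorithm described above (not the authors' MATLAB code):
# count events exceeding 1000x the median absolute amplitude, ignoring further
# threshold crossings within 1 ms of each counted event.
library(tuneR)

snap_rate_per_min <- function(path, thresh_mult = 1000, buffer_ms = 1) {
  wav <- readWave(path)
  amp <- abs(as.numeric(wav@left))
  hits <- which(amp > thresh_mult * median(amp))
  buffer_n <- wav@samp.rate * buffer_ms / 1000
  n_events <- 0
  last <- -Inf
  for (h in hits) {                # count a hit only if it falls outside the buffer
    if (h - last > buffer_n) {
      n_events <- n_events + 1
      last <- h
    }
  }
  n_events / (length(amp) / wav@samp.rate / 60)   # events per minute
}

# Paired comparison of time-matched subsamples from the two reefs (hypothetical files).
healthy  <- sapply(sprintf("healthy_%02d.wav", 1:30), snap_rate_per_min)
degraded <- sapply(sprintf("degraded_%02d.wav", 1:30), snap_rate_per_min)
t.test(healthy, degraded, paired = TRUE)
```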
Results

Systematic literature review

Our systematic literature review of acoustic monitoring studies published in 2020 revealed that there were nearly twice as many terrestrial studies (65% of papers: 132 of 204) as aquatic studies (35% of papers: 72 of 204). Further, the recording devices commonly used in terrestrial studies were far more widely available than those used in aquatic studies: 130 of 156 terrestrial records used commercially available devices (83%), compared to 45 of 82 records in the aquatic literature (55%). In both terrestrial and aquatic studies, autonomous recording units were the most commonly used type of commercially available device (70% of records: 122 of 175).

When comparing prices of instruments used, terrestrial and aquatic instruments were represented by similar price ranges in the 'sensor only' and 'sensor and recorder' categories of recording instrument. However, autonomous recording units used in aquatic studies were markedly more expensive than those used in terrestrial studies (Fig. 2). The median price of aquatic autonomous recording units ($4,000) was close to five times higher than that of terrestrial autonomous recording units ($849). The cheapest aquatic unit (SoundTrap 300 STD, Ocean Instruments NZ, Auckland, New Zealand: $3,000) was more expensive than all but one of the 24 different terrestrial units and more than 40 times more expensive than the cheapest terrestrial unit (ICD-PX240, Sony Corporation, Tokyo, Japan: $69).

Figure 2. The prices of recording instruments used in terrestrial (green) and aquatic (blue) acoustic monitoring studies published in 2020. Each point represents one model of instrument; sizes of points are proportional to the number of studies that model was used in. Highlighted are the SoundTrap 300 STD and the AudioMoth 1.2.0; these are the two instruments compared in this study.

Recordings of signal-to-noise ratio and artificial sounds

The signal-to-noise ratio of the HydroMoths was almost half that of the SoundTraps (mean ± SE: HydroMoths 18.5 ± 1.45 dB; SoundTraps 35.7 ± 1.8 dB; Fig. 3A). Despite this, the PSDs (Fig. 3B–D) and spectrograms (Fig. S1) generated with SoundTrap and HydroMoth recordings showed a very similar spectral representation of the sine-wave sweep, the multitone of 9 frequencies, and the broadband white noise. The sensitivity of the five different HydroMoths was largely consistent, with all five instruments exhibiting slightly lower sensitivity than the SoundTrap in the lower frequencies (0.02–12 kHz), and slightly higher sensitivity than the SoundTrap in the higher frequencies (12–24 kHz).

Figure 3. (A) The signal-to-noise ratio (SNR) of two SoundTrap 300 STDs and five HydroMoths, where boxplots represent medians (thick lines), interquartile (boxes) and full (whiskers) ranges alongside colour-coded data points for each individual device. (B–D) Power spectral densities (PSDs) of recordings of loudspeaker playback of (B) sine-wave sweep, (C) multitone and (D) white noise, taken with one SoundTrap 300 STD and five HydroMoths. Shown are the SoundTrap data in black and the mean of the HydroMoth data in blue, with a ribbon representing the standard error (n = 5). The residuals for each relationship (SoundTrap – HydroMoth) are plotted in red.

Recordings of animal vocalisations

For the recordings of individual animal vocalisations, the clarity of the signal recorded by the HydroMoth, relative to the commercially available hydrophones, was frequency-dependent. The low-frequency pulses created by fishes were well represented by both devices (Fig. 4A–D). The lower-frequency portions of the marine mammal sounds, such as the contours of frequency-modulated whistles used for dolphin communication, were also well represented by both devices (Fig. 4E–G). However, the higher-frequency sections of the marine mammal recordings, such as echolocation clicks, disappeared into the background noise at frequencies above 20 kHz; at these high frequencies, the non-HydroMoth recordings (SoundTrap and HTI-96-Min hydrophone) all showed a clearer signal than the HydroMoth recordings, with less 'background noise' and more defined clicks and whistles (Fig. 4E–G).

Figure 4. Waveforms and spectrograms created from recordings of animal vocalisations taken simultaneously with a HydroMoth and a SoundTrap 300 STD (A–D), a HydroMoth and a SoundTrap 300 HF (E and F) or a HydroMoth and an HTI-96-Min hydrophone (G). (A–D) Fish vocalisations from unidentified species recorded on a coral reef in French Polynesia (silhouettes represent taxonomic families known to produce similar sounds); (E) orca (Orcinus orca) whistle recorded from a fixed mooring in South Africa; (F) dusky dolphin (Lagenorhynchus obscurus) clicks recorded from a boat in South Africa; (G) bottlenose dolphin (Tursiops sp.) whistles and clicks recorded in captivity at uShaka Marine World, South Africa. Note the different y-axis (frequency) values in each panel.

In the test of directionality in HydroMoth's ability to detect dolphin whistles, instruments facing in opposite directions had different detection capacities. In simultaneous recordings taken by back-to-back HydroMoths, one failed to record the whistle and clicks of an unknown odontocete species that were detected by the other (Fig. 5). Although our study does not formally quantify the directional bias in HydroMoth recordings, this anecdotal evidence raises the important issue that sounds from certain angles may not be picked up by these devices.

Figure 5. Waveforms and spectrograms of two simultaneous recordings taken with two HydroMoths deployed back-to-back in the same location.
Whilst the first HydroMoth (A) recorded clicks and a whistle of an unknown odontocete species, the second HydroMoth (B) did not detect any animals.

Recordings of coral reef soundscapes

Simultaneous recordings of the same coral reefs, taken by five different HydroMoths and one SoundTrap 300 STD, demonstrated that the between-HydroMoth error when recording the same reefs was relatively small compared to the between-habitat variation when recording different reefs (Fig. 6A,B). When comparing between different types of instrument, the HydroMoths and SoundTrap gave very different results for the ACI on the same reef (Fig. 6A), but similar results for snap rate (Fig. 6B). As such, the between-HydroMoth error (i.e. HydroMoth vs HydroMoth) was consistently low for both ACI and snap rate, but the error between different types of recorder (i.e. HydroMoth vs SoundTrap) was high for ACI and low for snap rate. The differences between healthy and degraded reef soundscapes recorded by HydroMoths were statistically significant for both ACI (Fig. 6C; t29 = 24.48; p < 0.001) and snap rate (Fig. 6D; t29 = 22.08, p < 0.001); in both cases, the magnitude of these differences was more than double the between-HydroMoth error recorded for multiple HydroMoths on the same reef.

Figure 6. (A and B) The differences in Acoustic Complexity Index values (A) and snap rates (B) caused by recording with different HydroMoths (top row) compared to recording with a HydroMoth and a SoundTrap 300 STD (middle row) and recording different habitats with two HydroMoths (bottom row). Each point represents the difference in values between one pair of simultaneous recordings. (C and D) Acoustic Complexity Index (C) and snap rate (D) values from degraded and healthy reef soundscapes recorded by HydroMoths. Each point represents one of thirty 1-min subsamples, coloured by time of day; lines link points that were recorded at the same time. p-values correspond to the results from paired t-tests. In all plots, boxplots show medians (thick lines) and interquartile (boxes) and full (whiskers) ranges.

Discussion

This study demonstrates that despite an abundance of available low-cost ($50–500) devices used to record terrestrial wildlife, there are no commercially available low-cost autonomous recording units suitable for underwater applications (Fig. 2). This disparity may be due to differences in the functional quality of instruments between the two realms; existing aquatic recorders are generally high-performance instruments with a range of advanced features that are not shared by all terrestrial recorders (Sousa-Lima et al., 2013). Alternatively, aquatic recording units may be more expensive because their housings must withstand greater environmental stress than many terrestrial recorders; it is more expensive to build a case that can stay waterproof at depth underwater than a case that can stay waterproof in rain. Further, this disparity may be due to differences in market demand; the telecommunications and computing industry has driven substantial recent advances in low-cost microphones (Fueldner, 2020). These low-cost microphones are readily translatable into terrestrial PAM devices (Van Renterghem et al., 2011), but may not work as well for aquatic applications because they are designed to work in air rather than underwater.
Whatever the driving mechanism, it is clear that equipment costs represent a significant financial hurdle to carrying out passive acoustic monitoring underwater compared to in terrestrial environments.

We tested a prototype low-cost underwater sound recorder (the HydroMoth), created by modifying an existing terrestrial device (AudioMoth) for a total cost of less than $140 (AudioMoth [$79] + case [$40] + SD card [$19] + batteries [$1]); this is more than 20 times cheaper than the cheapest commercially available underwater autonomous recording unit (SoundTrap 300 STD, $3,000). The two devices are similar in their deployment requirements (small devices requiring attachment to a weight or rope) and have comparable maximum recording times (10 days continuous for HydroMoth; 13 days continuous for SoundTrap; capacity to substantially increase battery life using a duty cycle in both instruments). We compared the underwater recordings of HydroMoths to those of SoundTraps in a range of scenarios, finding that whilst the HydroMoth generally exhibited lower performance than the SoundTrap, its quality of recording would likely be adequate for many purposes in underwater wildlife monitoring, conservation and research.

Limitations of HydroMoth

Our tests revealed several important limitations in the performance capacity of HydroMoths (summarised in Box 1). Many of these limitations may be mitigated by using an external microphone or hydrophone in combination with the AudioMoth 1.2.0 or AudioMoth Dev recorders; this would require extra waterproofing to be fully submersible, in a similar set-up to other 'sensor + recorder' systems in Dataset S1. However, this study is limited to tests of the capacity of HydroMoth working as an autonomous recording unit using its integrated MEMS microphone.

Box 1. Limitations to be aware of when using HydroMoth underwater

- Low signal-to-noise ratio. HydroMoth has higher 'self-noise' than some specialist recorders (Fig. 3), meaning that it may not be adequate for recording very quiet sounds.
- Directionality bias. HydroMoth is asymmetrical, meaning that it is more likely to pick up sounds from certain directions than others (Fig. 5); this is true, to some extent, for any recording device.
- Uncalibrated recordings. HydroMoth has not been factory calibrated underwater as some other instruments have. This means that quantifying how loud or how close a sound source is cannot be done with HydroMoths that have not been independently calibrated.
- Instrument error. Like all recording devices, HydroMoth has an associated error, meaning that identical devices recording the same sound may give slightly different outputs. Results must be analysed while considering the magnitude of this error (e.g. Fig. 6).
- Lack of comparability with other instruments. HydroMoth does not have the same frequency response as other types of hydrophone, so recordings should not be quantitatively compared between different instruments.
- Other costs of PAM. Expensive recorders are not the only cost associated with PAM; even though HydroMoth reduces the cost of buying equipment, there are still expenses associated with deploying and collecting recorders, storing data and analysing results.

First, the signal-to-noise ratio of HydroMoths was substantially lower than that of SoundTraps; this indicates that HydroMoths may be less likely to detect very quiet signals. This ability of HydroMoths to detect signals was frequency-dependent.
HydroMoth matched the frequency response of the SoundTrap 300 STD (specified by the SoundTrap manufacturers as flat with ±3 dB error) relatively closely at frequencies below 15 kHz (Fig. 3), and accurately recorded a range of animal vocalisations in this frequency bandwidth (Fig. 4). However, HydroMoth performed differently from the SoundTrap 300 STD at higher frequencies. In the range 15–25 kHz, HydroMoth recorded a stronger signal than the SoundTrap 300 STD for artificial tones (Fig. 3), and recorded background 'noise' that was not present in simultaneous HTI-96-Min hydrophone recordings (Fig. 4G). This suggests that HydroMoth may be recording some self-noise or resonance between these frequencies. By contrast, HydroMoth failed to detect the sounds of some marine mammals above 40 kHz (Fig. 4F); this corresponds to a relatively low sensitivity at these frequencies in the microphone used by HydroMoth (Knowles Electronics, 2013). It may be possible to compensate for this loss of detection ability at high frequencies by increasing the gain level, albeit at the expense of likely clipping during loud or nearby low-frequency sounds. This approach would require prior knowledge of the likely range of target frequencies and amplitudes; as such, this device is expected to perform best when the targeted sounds are known and the range of amplitudes at targeted frequencies is not too wide.

Second, HydroMoths have a directional bias because their case is asymmetrical and the microphone is situated in one corner of the device. This direction-dependent sensitivity to sound was evidenced by the discrepancies in detection of dolphins by back-to-back HydroMoths (Fig. 5). This directional bias may not be unique to HydroMoth; it is likely that most hydrophones have some degree of directional bias, because the instruments themselves are not spherical. However, most commercially available hydrophones use an omnidirectional sensor that protrudes from the rest of the device, to minimise this variation in directional sensitivity. In contexts where the directional bias exhibited by HydroMoth is problematic, it may be possible to compensate by deploying multiple HydroMoths oriented in different directions, to increase the effective 'recording arc'.

Third, HydroMoth has not been through the same calibration process as most purpose-built hydrophones, meaning that there is considerable uncertainty surrounding its sensitivity and frequency response. Users of HydroMoths could calibrate the device themselves, but this process is best performed in large tanks and/or with high-precision instruments (Hayman et al., 2016, 2017), making it expensive and difficult to achieve for programmes on a low budget. An uncalibrated device precludes some common applications of underwater recording for which quantification of the sound-pressure level is required (Merchant et al., 2015); for example, studies that compare the absolute sound-pressure level of different ecosystems (e.g. Bertucci et al., 2016; Gordon et al., 2018), or that quantify the level of noise pollution that animals are exposed to (e.g. Harding et al., 2018; McCloskey et al., 2020), could not be carried out using an uncalibrated device.

Fourth, as with all instruments, HydroMoth has instrument error; simultaneous recordings from different HydroMoths on the same reef generated small differences in ACI and snap rate (Fig. 6A,B).
Fourth, as with all instruments, HydroMoth has instrument error; simultaneous recordings from different HydroMoths on the same reef generated small differences in ACI and snap rate (Fig. 6A,B). Some of this error may have come from unavoidable differences in instrument positioning in this test: the five HydroMoths were positioned directly on top of each other, so they recorded the same soundscape from positions several centimetres apart. However, other studies have also demonstrated small differences in ecoacoustic metrics when multiple hydrophones simultaneously recorded the same reef (Kaplan et al., 2015). When comparing outputs from any hydrophone, care must be taken to analyse results in the light of the recording device's likely instrument error.

Fifth, recordings from HydroMoths are not directly comparable with those from other types of device; for instance, HydroMoth recordings gave substantially different ACI values than SoundTrap recordings of the same reef (Fig. 6A). This is likely due to differences between the frequency responses of the two types of device (Fig. 3): the ACI is based on differences in amplitude between adjacent time samples within multiple frequency bands (Pieretti et al., 2011; a minimal sketch of this calculation is given below), so instruments with different frequency response curves are likely to produce different ACI values. It is accepted good practice that recordings from uncalibrated instruments cannot be quantitatively compared with each other (Merchant et al., 2015); as such, quantitative metrics like the ACI should not be used to compare HydroMoth recordings with those taken on other devices.

Finally, it is important to remember that the cost of recording instruments represents only one part of the costs associated with PAM programmes. Whilst HydroMoth may be less expensive than other commercially available options, substantial financial hurdles associated with deployment, retrieval and data management may still remain. In contexts where these operational costs are much higher than the cost of buying recording devices, HydroMoth may not substantially reduce the cost of PAM. Further, the technical support model adopted by Open Acoustic Devices is centred on an open-source support forum where users advise and help each other (www.openacousticdevices.info/support); this model assumes that users have the capacity and expertise to engage in assisted troubleshooting, which may not be appropriate in all cases. Where users require product warranties and professional technical support, HydroMoth may not prove to be a cost-effective solution.

Potential applications for HydroMoth

Despite these limitations, there are still many applications for which HydroMoths are likely to be suitable. First, we demonstrate HydroMoth's ability to clearly identify and discriminate between animal vocalisations (Fig. 4). These devices could therefore be used for presence/absence assays of soniferous animals. For example, acoustic methods are already used in underwater environments to monitor invasive, cryptic and rare species (Jaramillo-Legorreta et al., 2017; Juanes, 2018; Rountree & Juanes, 2017), individual animals (Longden et al., 2020) and acoustic behaviours such as spawning aggregations (Bolgan et al., 2018; Erisman & Rowell, 2017). The low cost of HydroMoth increases the feasibility of deploying many instruments over a large area, substantially increasing the potential operational scale of such acoustic monitoring programmes.
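As an indication of how such presence/absence screening is often implemented in practice (a generic illustration, not the analysis used in this study), the sketch below flags one-second windows whose energy in a hypothetical target frequency band rises well above the median background level; flagged windows could then be reviewed manually or passed to a more specific detector or classifier. The band limits, window length and threshold are placeholders that would need tuning to the target species, site and instrument.

```python
# Minimal sketch of band-limited energy screening for presence/absence of a
# target sound. Band limits, window length and threshold are illustrative only.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

def flag_candidate_windows(wav_path, band=(400.0, 2000.0), win_s=1.0, thresh_db=10.0):
    rate, samples = wavfile.read(wav_path)
    if samples.ndim > 1:                          # keep the first channel if multichannel
        samples = samples[:, 0]
    samples = samples.astype(float)
    sos = butter(4, band, btype="bandpass", fs=rate, output="sos")
    filtered = sosfiltfilt(sos, samples)          # band-pass around the target band
    win = int(win_s * rate)
    n = len(filtered) // win
    energy = np.array([np.sum(filtered[i * win:(i + 1) * win] ** 2) for i in range(n)])
    energy_db = 10.0 * np.log10(energy + np.finfo(float).tiny)
    background = np.median(energy_db)             # crude estimate of the background level
    flagged = np.where(energy_db > background + thresh_db)[0]
    return flagged * win_s                        # start times (s) of candidate windows

# Example: print(flag_candidate_windows("deployment_0001.wav"))
```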
As well as monitoring of soniferous animals, HydroMoths could capture data on anthropogenic events such as noise pollution, illegal shipping activity or blast fishing, in a similar fashion to existing terrestrial applications in which AudioMoths detect gunshots and chainsaws in tropical forests (Hill et al., 2018; Sethi et al., 2020). The clarity of recordings (especially those below 20 kHz) was generally high enough to identify signature whistles (Fig. 4), mirroring existing applications for AudioMoths in the terrestrial realm, where they have been used to provide qualitative descriptions of species' calls (Fianco et al., 2019).

Second, this study demonstrates the capacity for HydroMoth recordings to be used in calculating ecoacoustic indices in a real-world monitoring context. The HydroMoth recordings of coral reef soundscapes were adequate for clearly discriminating between a healthy and a degraded coral reef, with the between-HydroMoth error from a single location low enough to be useful in this application (Fig. 6). We deliberately chose two habitats at extreme opposite ends of an ecological health spectrum for this proof-of-concept study; further work might now valuably test the ability of HydroMoth recordings to discriminate between more subtly different soundscapes. Access to a large number of cheap acoustic sensors could significantly increase the spatial scale and replication of ecoacoustic studies across a range of habitats, increasing their power to assess the ecological status of ecosystems (Sueur & Farina, 2015). Further, HydroMoth can calculate acoustic indices in real time and save these summary data instead of large WAV files (Hill et al., 2018). This process could bypass the time-consuming post-processing of raw audio data that can hamper large-scale PAM studies, saving valuable time for users and increasing the capacity for additional research in this area.
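To make the index concrete, the sketch below (Python; window settings illustrative) computes the ACI of Pieretti et al. (2011) offline from a WAV file, keeping only the summary value rather than the raw audio. This is a minimal, generic illustration rather than the processing pipeline used in this study or the on-device implementation described by Hill et al. (2018); published implementations (e.g. in the seewave package) differ in windowing and temporal-clustering choices, so values are only comparable between identical settings.

```python
# Minimal sketch of the Acoustic Complexity Index (ACI): for each frequency bin
# of an amplitude spectrogram, sum the absolute differences between adjacent
# time frames, normalise by the total amplitude in that bin, then sum over bins.
# Window settings are illustrative; temporal clustering is omitted for brevity.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def acoustic_complexity_index(wav_path, nperseg=512):
    rate, samples = wavfile.read(wav_path)
    if samples.ndim > 1:                      # keep the first channel if multichannel
        samples = samples[:, 0]
    freqs, times, sxx = spectrogram(samples.astype(float), fs=rate,
                                    nperseg=nperseg, noverlap=0)
    amplitude = np.sqrt(sxx)                  # amplitude spectrogram
    diffs = np.abs(np.diff(amplitude, axis=1)).sum(axis=1)   # change per frequency bin
    totals = amplitude.sum(axis=1)
    per_bin = np.divide(diffs, totals, out=np.zeros_like(diffs), where=totals > 0)
    return per_bin.sum()

# Example: print(acoustic_complexity_index("reef_recording.wav"))
```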
As well as expanding existing PAM applications like detection of vocalisations and calculation of ecoacoustic metrics, low-cost recording devices might facilitate new developments within the field. Increasing the amount of audio data available would likely increase opportunities for machine learning-based source separation, detection, identification and classification. These emerging techniques are already being applied successfully in areas of bioacoustics where global participation is high, such as the detection of cetaceans (Jiang et al., 2019; Zhong et al., 2020) and the classification of rainforest soundscapes (Sethi et al., 2020). However, a paucity of data prevents such approaches from being used and developed in many other biomes; introducing affordable recording methods might lead to data collection on the scales necessary to expand the adoption of these novel techniques (Kasten et al., 2012; Mooney et al., 2020).

In conclusion, low-cost devices such as HydroMoth mean that aquatic bioacoustics no longer needs to be the preserve of a well-funded minority. Rather, low-cost instruments will dramatically expand opportunities for inclusive PAM programmes that could benefit research, monitoring and conservation worldwide.

Acknowledgments

We thank Lily Damayanti, Saipul Rapi, Philippa Mansell and Jos van Oostrum (Mars Sustainable Solutions); Rachel Probert (uShaka Marine World); Simon Elwen (Sea Search Research and Conservation); Isla Hely (University of Exeter); Emma Weschke (University of Bristol); and Suzanne Mills (CRIOBE) for logistical support. Prototype HydroMoths were created and supplied by Open Acoustic Devices. Recordings in Indonesia were taken as part of the monitoring programme for the Mars Coral Reef Restoration Project (www.buildingcoral.com), in collaboration with Hasanuddin University; we thank the Department of Marine Affairs and Fisheries of the Province of South Sulawesi, the Government Offices of the Kabupaten of Pangkep, Pulau Bontosua and Pulau Badi, and the communities of Pulau Bontosua and Pulau Badi for their support. This work was supported by a Natural Environment Research Council–Australian Institute of Marine Science CASE GW4+ Studentship NE/L002434/1 (to T.A.C.L.), a Swiss National Science Foundation Early Postdoc Mobility fellowship P2SKP3-181384 (to L.C.), a NRF Marine and Coastal Research grant (to T.G.), a Natural Environment Research Council Research Grant NE/P001572/1 (to S.D.S.), and Mars Sustainable Solutions, a subsidiary of Mars, Incorporated (to T.A.C.L. and D.J.S.).

Authors' Contributions

TACL, LC, BW and SDS conceived the ideas and designed methodology; BW, SD, TG, GF, JF, PBM, MEP, JJ and DJS collected the data; TACL, LC and BW analysed the data; TACL and LC led the writing of the manuscript. All authors contributed critically to the drafts and gave final approval for publication. Our study brings together authors from a number of different countries, including scientists based in the country where the study was carried out. All authors were engaged with the research and study design to ensure that the diverse sets of perspectives they represent were considered.

Data Availability Statement

Raw data are available in Dataset S1. All recordings and scripts used for analysis are available upon request.

References

Balcombe, J.P. & McCracken, G.F. (1992) Vocal recognition in Mexican free-tailed bats: do pups recognize mothers? Animal Behaviour, 43, 79–87.
Barber-Meyer, S.M., Palacios, V., Marti-Domken, B. & Schmidt, L.J. (2020) Testing a new passive acoustic recording unit to monitor wolves. Wildlife Society Bulletin, 44, 590–598.
Baumgartner, M.F., Bonnell, J., Van Parijs, S.M., Corkeron, P.J., Hotchkin, C., Ball, K. et al. (2019) Persistent near real-time passive acoustic monitoring for baleen whales from a moored buoy: system description and evaluation. Methods in Ecology and Evolution, 10, 1476–1489.
Beason, R.D., Riesch, R. & Koricheva, J. (2019) AURITA: an affordable, autonomous recording device for acoustic monitoring of audible and ultrasonic frequencies. Bioacoustics, 28, 381–396.
Bertucci, F., Parmentier, E., Lecellier, G., Hawkins, A.D. & Lecchini, D. (2016) Acoustic indices provide information on the status of coral reefs: an example from Moorea Island in the South Pacific. Scientific Reports, 6, 33326.
Bobryk, C.W., Rega-Brodsky, C.C., Bardhan, S., Farina, A., He, H.S. & Jose, S. (2016) A rapid soundscape analysis to quantify conservation benefits of temperate agroforestry systems using low-cost technology. Agroforestry Systems, 90, 997–1008.
Bohnenstiehl, D.R., Lyon, R.P., Caretti, O.N., Ricci, S.W. & Eggleston, D.B. (2018) Investigating the utility of ecoacoustic metrics in marine soundscapes. Journal of Ecoacoustics, 2, R1156L.
Bolgan, M., O'Brien, J., Chorazyczewska, E., Winfield, I.J., McCullough, P. & Gammell, M. (2018) The soundscape of Arctic Charr spawning grounds in lotic and lentic environments: can passive acoustic monitoring be used to detect spawning activities? Bioacoustics, 27, 57–85.
Chapuis, L., Williams, B., Gordon, T.A.C. & Simpson, S.D. (2021) Low-cost action cameras offer potential for widespread acoustic monitoring of marine ecosystems. Ecological Indicators, 129, 107957.
Desiderà, E., Guidetti, P., Panzalis, P., Navone, A., Valentini-Poirrier, C.A., Boissery, P. et al. (2019) Acoustic fish communities: sound diversity of rocky habitats reflects fish species diversity. Marine Ecology Progress Series, 608, 183–197.
Knowles Electronics (2013) Product data sheet: SPU0410LR5H-QB Zero-Height SiSonic Microphone. Available at: https://media.digikey.com/pdf/Data%20Sheets/Knowles%20Acoustics%20PDFs/SPU0410LR5H-QB_RevH_3-27-13.pdf
Elise, S., Bailly, A., Urbina-Barreto, I., Mou-Tham, G., Chiroleu, F., Vigliola, L. et al. (2019) An optimised passive acoustic sampling scheme to discriminate among coral reefs' ecological states. Ecological Indicators, 107, 105627.
Erisman, B.E. & Rowell, T.J. (2017) A sound worth saving: acoustic characteristics of a massive fish spawning aggregation. Biology Letters, 13, 20170656.
Farina, A., James, P., Bobryk, C., Pieretti, N., Lattanzi, E. & McWilliam, J. (2014) Low cost (audio) recording (LCR) for advancing soundscape ecology towards the conservation of sonic complexity and biodiversity in natural and urban landscapes. Urban Ecosystems, 17, 923–944.
Fearey, J., Elwen, S.H., James, B.S. & Gridley, T. (2019) Identification of potential signature whistles from free-ranging common dolphins (Delphinus delphis) in South Africa. Animal Cognition, 22, 777–789.
Fianco, M., Preis, H., Szinwelski, N., Braun, H. & Faria, L.R.R. (2019) On brachypterous phaneropterine katydids (Orthoptera: Tettigoniidae: Phaneropterinae) from the Iguaçu National Park, Brazil: three new species, new record and bioacoustics. Zootaxa, 4652, 240–264.
Fox, H.E., Pet, J.S., Dahuri, R. & Caldwell, R.L. (2003) Recovery in rubble fields: long-term impacts of blast fishing. Marine Pollution Bulletin, 46, 1024–1031.
Fueldner, M. (2020) Chapter 48 - Microphones. In: Tilli, M., Petzold, M., Motooka, T., Paulasto-Krockel, M., Theuss, H. & Lindross, V. (Eds.) Handbook of silicon based MEMS materials and technologies. Elsevier Inc, pp. 937–948.
Gordon, T.A.C., Harding, H.R., Wong, K.E., Merchant, N.D., Meekan, M.G., McCormick, M.I. et al. (2018) Habitat degradation negatively affects auditory settlement behavior of coral reef fishes. Proceedings of the National Academy of Sciences of the United States of America, 115, 5193–5198.
Harding, H.R., Gordon, T.A.C., Hsuan, R.E., Mackaness, A.C.E., Radford, A.N. & Simpson, S.D. (2018) Fish in habitats with higher motorboat disturbance show reduced sensitivity to motorboat noise. Biology Letters, 14, 20180441.
Hayashi, K., Erwinsyah, Lelyana, V.D. & Yamamura, K. (2020) Acoustic dissimilarities between an oil palm plantation and surrounding forests: analysis of index time series for beta-diversity in South Sumatra, Indonesia. Ecological Indicators, 112, 106086.
Hayman, G., Robinson, S. & Lepper, P. (2016) Calibration and characterization of autonomous recorders used in the measurement of underwater noise. In: Popper, A.N. & Hawkins, A. (Eds.) The effects of noise on aquatic life II. Advances in Experimental Medicine and Biology, pp. 441–445.
Hayman, G., Robinson, S., Pangerc, T., Ablitt, K. & Theobald, P. (2017) Calibration of marine autonomous acoustic recorders. In: Oceans 2017 - Aberdeen. IEEE.
Hill, A.P., Prince, P., Covarrubias, E.P., Doncaster, C.P., Snaddon, J.L. & Rogers, A. (2018) AudioMoth: evaluation of a smart open acoustic device for monitoring biodiversity and the environment. Methods in Ecology and Evolution, 9, 1199–1211.
Hill, A.P., Prince, P., Snaddon, J.L., Doncaster, C.P. & Rogers, A. (2019) AudioMoth: a low-cost acoustic device for monitoring biodiversity and the environment. HardwareX, 6, e00073.
Jaramillo-Legorreta, A., Cardenas-Hinojosa, G., Nieto-Garcia, E., Rojas-Bracho, L., Ver Hoef, J., Moore, J. et al. (2017) Passive acoustic monitoring of the decline of Mexico's critically endangered vaquita. Conservation Biology, 31, 183–191.
Jiang, J.-J., Bu, L.-R., Duan, F.-J., Wang, X.-Q., Liu, W., Sun, Z.-B. et al. (2019) Whistle detection and classification for whales based on convolutional neural networks. Applied Acoustics, 150, 169–178.
Juanes, F. (2018) Visual and acoustic sensors for early detection of biological invasions: current uses and future potential. Journal for Nature Conservation, 42, 7–11.
Kaplan, M.B., Mooney, T.A., Partan, J. & Solow, A.R. (2015) Coral reef species assemblages are associated with ambient soundscapes. Marine Ecology Progress Series, 533, 93–107.
Kasten, E.P., Gage, S.H., Fox, J. & Joo, W. (2012) The remote environmental assessment laboratory's acoustic library: an archive for studying soundscape ecology. Ecological Informatics, 12, 50–67.
King, S.L. & Janik, V.M. (2015) Come dine with me: food-associated social signalling in wild bottlenose dolphins (Tursiops truncatus). Animal Cognition, 18, 969–974.
Legg, M.W., Duncan, A.J., Zaknich, A. & Greening, M.V. (2007) Analysis of impulsive biological noise due to snapping shrimp as a point process in time. In: Oceans 2007 - Europe. IEEE, pp. 1–6.
Longden, E.G., Elwen, S.H., McGovern, B., James, B.S., Embling, C.B. & Gridley, T. (2020) Mark-recapture of individually distinctive calls - a case study with signature whistles of bottlenose dolphins (Tursiops truncatus). Journal of Mammalogy, 101, 1289–1301.
McCloskey, K.P., Chapman, K.E., Chapuis, L., McCormick, M.I., Radford, A.N. & Simpson, S.D. (2020) Assessing and mitigating impacts of motorboat noise on nesting damselfish. Environmental Pollution, 266, 115376.
Merchant, N.D., Fristrup, K.M., Johnson, M.P., Tyack, P.L., Witt, M.J., Blondel, P. et al. (2015) Measuring acoustic habitats. Methods in Ecology and Evolution, 6, 257–265.
Mindje, M., Tumushimire, L. & Sinsch, U. (2020) Diversity assessment of anurans in the Mugesera wetland (Eastern Rwanda): impact of habitat disturbance and partial recovery. Salamandra, 56, 27–33.
Mooney, T.A., Di Iorio, L., Lammers, M., Lin, T.-H., Nedelec, S.L., Parsons, M. et al. (2020) Listening forward: approaching marine biodiversity assessments using acoustic methods. Royal Society Open Science, 7, 201287.
Okumura, T., Akamatsu, T. & Yan, H.Y. (2002) Analyses of small tank acoustics: empirical and theoretical approaches. Bioacoustics, 12, 330–332.
Peck, M., Tapilatu, R.F., Kurniati, E. & Rosado, C. (2021) Rapid coral reef assessment using 3D modelling and acoustics: acoustic indices correlate to fish abundance, diversity and environmental indicators in West Papua, Indonesia. PeerJ, 9, e10761.
Pieretti, N. & Danovaro, R. (2020) Acoustic indexes for marine biodiversity trends and ecosystem health. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 375, 20190447.
Pieretti, N., Farina, A. & Morri, D. (2011) A new methodology to infer the singing activity of an avian community: the acoustic complexity index (ACI). Ecological Indicators, 11, 868–873.
Pijanowski, B.C., Farina, A., Gage, S.H., Dumyahn, S.L. & Krause, B.L. (2011) What is soundscape ecology? An introduction and overview of an emerging new science. Landscape Ecology, 26, 1213–1232.
Podos, J. (2010) Acoustic discrimination of sympatric morphs in Darwin's finches: a behavioural mechanism for assortative mating? Philosophical Transactions of the Royal Society B: Biological Sciences, 365, 1031–1039.
Radford, C.A., Ghazali, S., Jeffs, A.G. & Montgomery, J.C. (2015) Vocalisations of the bigeye Pempheris adspersa: characteristics, source level and active space. Journal of Experimental Biology, 218, 940–948.
Reis, C.D.G., Padovese, L.R. & de Oliveira, M.C.F. (2019) Automatic detection of vessel signatures in audio recordings with spectral amplitude variation signature. Methods in Ecology and Evolution, 10, 1501–1516.
Revilla-Martín, N., Budinski, I., Puig-Montserrat, X., Flaquer, C. & López-Baucells, A. (2021) Monitoring cave-dwelling bats using remote passive acoustic detectors: a new approach for cave monitoring. Bioacoustics, 30, 1–16.
Rountree, R.A. & Juanes, F. (2017) Potential of passive acoustic recording for monitoring invasive species: freshwater drum invasion of the Hudson River via the New York canal system. Biological Invasions, 19, 2075–2088.
Sanguineti, M., Alessi, J., Brunoldi, M., Cannarile, G., Cavalleri, O., Cerruti, R. et al. (2021) An automated passive acoustic monitoring system for real time sperm whale (Physeter macrocephalus) threat prevention in the Mediterranean Sea. Applied Acoustics, 172, 107650.
Servick, K. (2014) Eavesdropping on ecosystems. Science, 343, 834–837.
Sethi, S.S., Ewers, R.M., Jones, N.S., Orme, C.D.L. & Picinali, L. (2018) Robust, real-time and autonomous monitoring of ecosystems with an open, low-cost, networked device. Methods in Ecology and Evolution, 9, 2383–2387.
Sethi, S.S., Jones, N.S., Fulcher, B.D., Picinali, L., Clink, D.J., Klinck, H. et al. (2020) Characterizing soundscapes across diverse ecosystems using a universal acoustic feature set. Proceedings of the National Academy of Sciences of the United States of America, 117, 17049–17055.
Sousa-Lima, R.S., Norris, T.F., Oswald, J.N. & Fernandes, D.P. (2013) A review and inventory of fixed autonomous recorders for passive acoustic monitoring of marine mammals. Aquatic Mammals, 39, 23–53.
Stimpert, A.K., Wiley, D.N., Au, W.W.L., Johnson, M.P. & Arsenault, R. (2007) "Megapclicks": acoustic click trains and buzzes produced during night-time foraging of humpback whales (Megaptera novaeangliae). Biology Letters, 3, 467–470.
Sueur, J., Aubin, T. & Simonis, C. (2008) Seewave, a free modular tool for sound analysis and synthesis. Bioacoustics, 18, 213–226.
Sueur, J. & Farina, A. (2015) Ecoacoustics: the ecological investigation and interpretation of environmental sound. Biosemiotics, 8, 493–502.
Sueur, J., Krause, B. & Farina, A. (2019) Climate change is breaking earth's beat. Trends in Ecology and Evolution, 34, 971–973.
Sugai, L., Desjonqueres, C., Silva, T. & Llusia, D. (2020) A roadmap for survey designs in terrestrial acoustic monitoring. Remote Sensing in Ecology and Conservation, 6, 220–235.
Towsey, M., Parsons, S. & Sueur, J. (2014) Ecology and acoustics at a large scale. Ecological Informatics, 21, 1–3.
Van Renterghem, T., Thomas, P., Dominguez, F., Dauwe, S., Touhafi, A., Dhoedt, B. et al. (2011) On the ability of consumer electronics microphones for environmental noise monitoring. Journal of Environmental Monitoring, 13, 544–552.
Versluis, M., Schmitz, B., von der Heydt, A. & Lohse, D. (2000) How snapping shrimp snap: through cavitating bubbles. Science, 289, 2114–2117.
Whytock, R.C. & Christie, J. (2017) Solo: an open source, customizable and inexpensive audio recorder for bioacoustic research. Methods in Ecology and Evolution, 8, 308–312.
Williams, S.L., Sur, C., Janetski, N., Hollarsmith, J.A., Rapi, S., Barron, L. et al. (2019) Large-scale coral reef rehabilitation after blast fishing in Indonesia. Restoration Ecology, 27, 447–456.
Zhong, M., Castellote, M., Dodhia, R., Ferres, J., Keogh, M. & Brewer, A. (2020) Beluga whale acoustic signal classification using deep learning neural network models. Journal of the Acoustical Society of America, 147, 1834.
ZSL (2021) Conservation technology: detecting illegal fishing vessels. Available at: https://www.zsl.org/conservation/conservation-initiatives/conservation-technology/detecting-illegal-fishing-vessels. Last accessed 6 October 2021.
Remote Sensing in Ecology and Conservation – Wiley
Published: Jun 1, 2022
Keywords: aquatic bioacoustics; aquatic ecosystems monitoring; bioacoustics; ecoacoustics; hydrophone; passive acoustic monitoring