General
WildTrax is an online platform for managing, storing, processing, and sharing biological data collected by environmental sensors. WildTrax provides tools for managing large data sets and creates opportunities to address broad-scale questions using novel approaches. Supported environmental sensors currently include autonomous recording units (ARUs) and wildlife cameras, while a third sensor type, point counts, supports the management of avian count data collected by human observers.
WildTrax is available to organizations and individuals using cameras and/or ARUs; simply create an online account to begin. Contact info@wildtrax.ca or see The Guide to learn how to get started.
WildTrax offers several benefits for users of environmental sensors, including:
- Faster data processing
- More accurate, higher-quality data through species verification
- Unlimited, online access to your data
- Seamless and flexible data sharing options with teams and collaborators
- Standardized approaches to data collection across networks of organizations and individuals
- Opportunities to discover data in your region of interest, coordinate with other groups, and address broad-scale ecological questions
- A centralized repository for long-term data archiving
Sensors can be deployed in many ways, depending on the monitoring objective. Examples of methods and protocols are found here.
The set-up methods used by the ABMI are summarized below:

For cameras, choose a view that is not blocked by vegetation or other impediments for at least 10 m (try to anticipate vegetation growth). Set the camera (lens) height at 1 m and focus the camera view on the reference stake at 80 cm above the ground. Your target detection zone should be approximately 3–5 m from the camera. Face the camera north (ideally) or south, if possible, to avoid visibility issues from direct sunlight.

The autonomous recording unit (ARU) should be at a height of 1.5 m above ground, facing north with the microphones unobstructed by leaves, branches, or (if applicable) the trunk of the tree to which it’s affixed. Choose a sturdy tree or support, such as a stake, so that the unit won’t topple over in high winds or if disturbed by a large mammal.
WildTrax now offers multiple data storage options designed to meet the needs of users.
Each Organization must select a storage location for media files (e.g., audio recordings, camera trap images) at the time of upload. This ensures fast, on-demand access to your files when you need them most. WildTrax will begin operating under a cost-recovery model for data storage starting April 1, 2026. Storage fees will apply annually, charged on a rolling monthly basis, for Organizations that exceed their free-tier allotment. Current storage options include:
WildTrax Live Servers
Location: University of Alberta, Edmonton, Alberta, Canada
This is the default storage option for users, offering a cost-effective and sustainable solution. Simultaneous off-site backups are kept in Amazon S3 Glacier Deep Archive, ensuring long-term data durability and security.
- Free Storage Tier: 250 GB
- Annual Cost: $75.00 CAD / TB
Amazon Web Services (AWS S3)
Locations: Oregon (USA) or Montreal (Canada).
Ideal for users requiring international accessibility and high-performance throughput, AWS S3 supports a wide range of use cases on Amazon's global infrastructure.
- Annual Cost: $360.00 CAD / TB
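As a quick budgeting aid, the fee arithmetic works out as in the sketch below. The volumes are hypothetical, and the 250 GB free tier is assumed to apply only to the Live Servers option, as listed above; confirm current rates with the WildTrax team before budgeting.

```r
# Worked example (hypothetical volumes) of annual storage cost per option.
live_cost <- function(tb) max(tb - 0.25, 0) * 75  # $75 CAD/TB beyond 250 GB free
aws_cost  <- function(tb) tb * 360                # $360 CAD/TB, no free tier listed

live_cost(1.5)  # (1.5 - 0.25) * 75 = 93.75 CAD per year
aws_cost(1.5)   # 1.5 * 360 = 540.00 CAD per year
```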
Your upload and download speeds are limiting factors when you interface with WildTrax. The platform mitigates these issues through asynchronous uploads and caching of large data sets, but be mindful of the volume of data you are uploading or downloading, and of your internet connectivity, before proceeding.
Remote Cameras
Currently, WildTrax has not encountered a make and model of camera that it does not support. However, it can only populate data from fields that exist in an image's Exif metadata, and the fields stored as Exif data (e.g., "Sequence" or "TriggerMode") vary by camera make and model. If you seem to be having issues due to the make or model of your camera, please contact support@wildtrax.ca. You can also refer to Phil Harvey's ExifTool Tag Names to determine which tags are available for your camera make and model.
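If you want to check in advance which fields your images carry, a quick way to inspect them is sketched below. It assumes ExifTool is installed and on your PATH; the image file name is a placeholder.

```r
# List the Exif tag names one of your images actually contains.
tags <- system2("exiftool", args = c("-s", "IMG_0001.JPG"), stdout = TRUE)
head(tags)                        # tag names, one per line, value after ":"
any(grepl("^TriggerMode", tags))  # does this camera write TriggerMode?
any(grepl("^Sequence", tags))     # ...or Sequence?
```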
If you are purchasing remote cameras for the first time, Reconyx cameras (e.g., HF2, HF2X, PC800, PC900, and HC600 models) are a good choice for first-time users as they are user-friendly and intuitive.
High-quality SanDisk SD cards or Kingston Class 4 and 10 SD cards are frequently used. We would not recommend anything below a Class 4 write speed.
For more information on camera brands, please consult the Remote Camera Survey Guidelines.
WildTrax integrates the following:
- Microsoft's MegaDetector v5, which automatically tags images as containing vehicles, animals, or humans, or as NONE.
- Microsoft's MegaClassifier v0.1.
- A "Staff/setup" tagger designed to filter out images of humans taken during camera deployment and retrieval.

When enabled in your project's settings, these tools automatically filter and tag your images, resulting in less time spent sifting through false fires and more time spent focusing on the species you want to tag.
Cameras can sometimes capture images that do not contain wildlife ('false fires') due to movement in vegetation or changes in sunlight. These false fires can increase processing cost and time. To aid in processing these images, WildTrax includes a model that automatically identifies false fires, allowing them to be removed before further processing. The model uses training data from 1,325 camera deployments and a trained network, CaffeNet, specifically modified for WildTrax. This tool reduces the human time spent sifting through images of vegetation movement. The model was validated with an additional 121 camera deployments containing 79,451 false-fire images, of which it identified 34,456 (43.4%) with a 0.2% error (false positive) rate. That is, more than 40% of false fires can be reliably (0.2% error) removed before processing. Results may vary depending on the camera unit used, image quality, and habitat type.
Images cannot be deleted individually in WildTrax (including images of humans); instead, WildTrax allows you to filter out images of humans using the MegaDetector results (if enabled in project settings) and to opt in to human blurring (enabled in organization settings).
Autonomous Recording Units (ARUs)
Wildlife bioacoustics is the study of animals using the vocalizations that they produce. Sounds are identified to the species or even individual level using unique patterns known as spectral signatures. These data are used to answer research and monitoring questions about individual species or groups of species.
The Bioacoustic Unit (BU) is a collaboration between the Bayne Lab at the University of Alberta and the Alberta Biodiversity Monitoring Institute (ABMI). Our research group develops tools, protocols, and recommendations for acoustic monitoring programs across the country.
To learn more about the Bioacoustic Unit, please click here.
The BU uses robust environmental sensors, called Autonomous Recording Units (ARUs)—essentially sophisticated battery-operated microphones—to record sounds produced by vocalizing animals. There are recommended settings that can be used to optimize recordings of birds, mammals, and other taxa.
The Bioacoustic Unit uses Song Meter Autonomous Recording Units made by Wildlife Acoustics. Most of our Song Meters are the SM2+ and SM4 models. Other, less frequently used models include the SM3, the SM2 with GPS, and the SM2+BAT. The GPS-enabled units permit more precise localization of animals in space. For memory cards, we often use high-quality SanDisk SD cards. We also occasionally use Kingston Class 4 and 10 SD cards. WildTrax can take data from any type of digital sound recorder.
Cumulatively, more species are observed by going to new stations within a study area than by listening to more recordings from the same locations; however, the difference is not large. If sufficient funding exists to visit more locations, that will provide a better estimate of total species. However, when constrained by field costs, leaving ARUs in the same location and repeatedly sub-sampling is recommended, particularly if you are interested in multiple taxa (e.g., owls and songbirds).
For songbirds, leaving an ARU out for several days will yield higher occupancy rates and probability of detection than repeatedly sampling in a single day. The additional benefit of leaving an ARU out for a month is relatively small for songbirds. However, there is evidence that more species will be detected with more sampling effort, and owls, amphibians, and mammals have very different calling behaviours from songbirds.
The minimum sampling effort recommended by the BU to maximize detection of most acoustic species is 3–7 days. Each sampling event should be at least 3 minutes long, at dawn or dusk, with events at least one day apart.
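To make the recommendation concrete, the sketch below builds one compliant schedule. The dates, time zone, and fixed dawn time are placeholders; derive dawn from local sunrise for real deployments.

```r
# Four 3-minute dawn recordings, one every second day, within a week.
days <- seq(as.Date("2025-06-01"), by = "2 days", length.out = 4)
schedule <- data.frame(
  start      = as.POSIXct(paste(days, "05:00:00"), tz = "America/Edmonton"),
  duration_s = 180  # at least 3 minutes per recording
)
schedule
```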
The question here is whether you could achieve the same results by listening to the same total number of recordings from a single day vs. a week vs. a month. Sampling for approximately a week results in higher estimates of species richness at a station than sampling for a day. In our tests, there was no significant difference between leaving an ARU out for a week vs. a month, but that result applied only to songbirds.
This is entirely dependent on the frequency with which a species sings. However, the Bioacoustic Unit and the Boreal Avian Modelling Project have estimates for all species, so you can assess the effort required to ensure you detect a species if it is present.
Calling rate has the greatest effect on detection rate, explaining 49% of the variance in detection rate. Calling rate coupled with the abundance of a species, time period, and a species’ log body weight explained 69% of the variance in detection rate. When the abundance of a species is high, there is higher detectability. Species that call at night have lower detection rates than those that call during the day. Also, larger species generally have lower calling rates. In general, species that are less abundant, have a large body weight, and vocalize infrequently and/or more often during the night have a lower detection rate and will require more extensive sampling.
There are consistent benefits to repeatedly sampling at the same station when estimating trends for a species, as you become more certain whether the species is present or absent. However, the statistical power of trend estimates is driven by the number of stations and the number of years observed.
BirdNET is a multi-species bird classifier developed by the Cornell Lab of Ornithology. WildTrax uses the BirdNET API to allow users to obtain results from the classifier for their projects.

When using BirdNET as part of species verification, you can check whether a tag achieved a high BirdNET confidence level by hovering over the brain icon. This is most useful when the first detection of a tag is of low quality; if the individual is close to the recorder, with high signal amplitude and unobstructed calls, BirdNET's confidence will be more informative.
You can also use BirdNET as a guide that automatically returns the species it thinks it found. BirdNET provides confidence values in 3-second windows for each recording in a project. You can find BirdNET's output in the _birdnet_report.csv in Data Downloads.
Evaluating the performance of BirdNET on a data set is also possible. In a binary classification task (BirdNET = predicted, human = observed), you can distinguish false positives (incorrect detections) and false negatives (missed detections) and then compute performance metrics. This is useful for questions like species presence, where the highest BirdNET confidence value across many recordings can yield a positive result with minimal effort.
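A minimal sketch of that binary evaluation for one species is below, assuming the BirdNET report and the human-tag report share a recording identifier. The file names and column names (recording_id, species_code, confidence) are assumptions; check your own downloads for the actual layout.

```r
library(dplyr)

birdnet <- read.csv("project_birdnet_report.csv")
human   <- read.csv("project_main_report.csv")
sp      <- "OVEN"  # species code of interest (e.g., Ovenbird)

pred <- birdnet |>
  filter(species_code == sp, confidence >= 0.5) |>
  distinct(recording_id) |> mutate(predicted = TRUE)
obs <- human |>
  filter(species_code == sp) |>
  distinct(recording_id) |> mutate(observed = TRUE)

eval_df <- full_join(pred, obs, by = "recording_id") |>
  mutate(predicted = !is.na(predicted), observed = !is.na(observed))

tp <- sum(eval_df$predicted & eval_df$observed)    # correct detections
fp <- sum(eval_df$predicted & !eval_df$observed)   # false positives
fn <- sum(!eval_df$predicted & eval_df$observed)   # missed detections
c(precision = tp / (tp + fp), recall = tp / (tp + fn))
```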
If you listen to only a small subset of the available recordings, there is strong evidence that afternoon sampling can be avoided altogether if you are relying on human listening.
Recognizers can be used when you are targeting a specific species, and manually scanning spectrograms can be very effective when vocalizations are visually distinctive and recognizable. In short, the training data are used to create a template ("recognizer"), which is then matched against segments of the test recordings. More information can be found here.
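The core template-matching idea can be sketched in a few lines: slide a template along a signal and score each offset with correlation. Real recognizers work on spectrograms; a 1-D vector keeps the mechanics visible.

```r
match_template <- function(signal, template) {
  n <- length(template)
  sapply(seq_len(length(signal) - n + 1), function(i) {
    cor(signal[i:(i + n - 1)], template)  # 1 = perfect match at this offset
  })
}

template <- sin(seq(0, 4 * pi, length.out = 50))  # the trained "recognizer"
signal   <- c(rnorm(200, sd = 0.3), template, rnorm(200, sd = 0.3))
scores   <- match_template(signal, template)
which.max(scores)  # offset of the best hit (about 201)
```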
Point Counts
Within the first minute of a 10-minute point count, 49.8% of all vocalizing species are detected. Within the first five minutes, 79.2% of all vocalizing species are detected. However, given the choice between ten 1-minute samples taken at different times of day or year and a single 10-minute count, you will detect far more species using the ten 1-minute samples.
Using more point counts of shorter duration detects a larger proportion of all species than using fewer point counts of longer duration.
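To see why spreading effort wins, here is a toy calculation under a simple availability assumption: each species is "available" (vocalizing) during any given visit with some probability, and is detected whenever a visit falls within an available period. The numbers are illustrative only, not results from the studies summarized above.

```r
set.seed(1)
avail <- runif(40, 0.05, 0.6)  # per-visit availability for 40 species

sum(1 - (1 - avail)^10)  # expected richness: ten 1-minute visits, spread out
sum(avail)               # expected richness: one 10-minute visit
```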
The inclusion of point count data as a sensor on WildTrax allows users to seamlessly harmonize and work with point count and ARU datasets and is supported by the Boreal Avian Modelling Centre. Their expertise in harmonizing point count data complements WildTrax’s strengths in managing, storing, and sharing big data. This partnership not only ensures point count data benefits from WildTrax’s value-added tools and infrastructure but also broadens WildTrax’s user base through this integration.
Data Management and Privacy
You must first create an Organization to house your data and metadata. The WildTrax team will review your request and confirm your identity, after which you will be able to create projects under your newly created Organization and upload media into those projects.
Each Project belongs to exactly one Organization. You must choose who owns the data, and that owner must also be the organization that collects and uploads the data. The data owner can then add collaborating users to their project or organization, depending on the level of collaboration.
In your Organization settings, set all locations to the specific buffer you are comfortable with. For each project you wish to share, choose "Published – Map + Report Only"; this way, non-authorized users will not be able to see the media or the un-buffered locations.
WildTrax uses various permission and privacy levels throughout the system to ensure your data is secured to the level you choose.
WildTrax has two basic membership levels: Admin and Read-Only. See below for more details on what each membership level can do.
Organizations are the framework of your data on WildTrax. They contain your locations, visits, equipment, and summaries of the acoustic recordings and camera data uploaded to your Projects.
An organization usually comprises a collection of people who collect data, publish projects, and manage equipment and teams. Administrators can read and write un-buffered coordinates for all organizational locations; add users to Organizations and Projects; read and write location, visit, and equipment metadata; and inherit administrator privileges on all organization projects. Read-Only members can read un-buffered coordinates for all organizational locations, read location, visit, and equipment metadata, and read all organization projects.
A Project is a targeted study or objective intended to accomplish or learn something using environmental sensor data. Projects are populated using locations and visits, and contain a collection of tasks to achieve this work. Project administrators manage projects, assign tasks and users, and publish projects. Read-Only members can view and download data depending on their level of access via the Project privacy levels: Active; Test Only – Hidden; Published – Private; Published – Map + Report Only; Published – Public.
Choose Published – Private if you don’t want to receive access requests or Active if you are still working on completing your project.
A Location is a physical place associated with the deployment of an environmental sensor. Locations contain the visit and equipment metadata collected on behalf of the organization. Locations can be hidden, buffered, true location + buffer, or true location. This allows for flexibility where specific locations require a higher level of privacy.
Tasks are recordings or deployments assigned to a specific user and tagging protocol. Read-Only members can be granted write privileges at the task level in order to perform tagging. Organization or project administrators can also change the tags in tasks; these changes are tracked in the audit table.
Recordings (ARUs) / deployments (cameras) are audio files or a series of images corresponding to an ARU / camera deployment.

For each project, make sure you choose Published – Private when it is complete. To give someone access to a project, add the user to the Project as a member (this gives access to just that project, with buffered locations) or add the user to the Organization as a member (this gives access to all projects, with un-buffered locations). You can also remove members from the project or Organization, or reduce their access, at any time.
Administrators can control permissions at three different levels within WildTrax: 1) Organization, 2) Project, and 3) Location. Details on these different privacy settings and how they work can be found here. Organization and project membership is either read (Read-Only) or read + write (Admin); locations are hidden or visible with various buffering options.
Release of data occurs primarily through the Project permission settings, which include the following levels:
| Status | Description |
| --- | --- |
| Private | All components of the project remain fully private and are only accessible to the organization and project members. |
| Map Only | Project locations can be viewed in Data Discover, but species data are not available for download. |
| Map + Report Only | The project is included in Data Discover and the data can be downloaded by the public; however, the public cannot access media associated with the project. |
| Public | All components of the data, including media, are publicly available in Data Downloads and Data Discover and can be accessed on the project dashboard. |
Buffer your locations in organization settings. This tells WildTrax that the coordinates are already buffered; no one can learn the true location, because WildTrax was never given that information.
You must add yourself as a project member for projects to appear in the default view. You can use the "View Only My Projects" toggle in the top right to see all projects you are eligible to see. WildTrax does this to simplify what a user sees at once; a full list would be overwhelming, and most of the time people are only interested in their own projects.
Organization investigators are the ones who receive all access requests. Requests go to the administrators of the project/organization only if no investigator has been assigned.
Make sure your Organization has an investigator set to handle these requests. You can do this in organization settings.
All data uploaded to WildTrax is by default private and only viewable by the project members depending on their level of membership (Admin or Read-Only). Project and organization administrators maintain ownership and privacy rights over uploaded data, regardless of whether the data is private or publicly available. When you upload data to WildTrax, you have the option of releasing your data publicly or not.
Point count and acoustic data, including media files, can be accessed through WildTrax. Once projects are published in WildTrax, the data flows seamlessly to NatureCounts in regular intervals. If you need raw, published data, NatureCounts is your go-to. For media access or unpublished data, WildTrax is the better option. While each portal specializes in specific insights, all datasets are designed to be easily combined for comprehensive analysis, providing flexibility without adding complexity to your workflow.
Automated Recognizers
Biologists are increasingly using autonomous recording units (ARUs) in the field to determine the presence/absence and the abundance of bird species. Unlike humans, these recorders can be left in the field for extensive periods of time, allowing data to be collected over much greater spatiotemporal scales. However, the trade-off in this approach is the labour-intensive nature of processing such vast datasets. Here, automated species detection provides a path forward by shifting the burden of sifting through hours of audio recordings from the technician to the computer.
Put simply, automated (acoustic) species recognition is the process of training a computer to recognize, detect, and evaluate the acoustic signature of a target species' vocalization. For example, a computer model can be trained to recognize the distinctive "who-cooks-for-you" vocalization of the Barred Owl (Strix varia).
Such a model, commonly referred to as a “recognizer”, can then be applied to acoustic datasets to detect signals that resemble the trained model. All types of sounds can be modelled into recognizers, from the chucks and whines of a frog to the drumming of a woodpecker.
Recognizers can make processing acoustic datasets more efficient
As mentioned above, by automating the species detection process, datasets can be processed more efficiently. This is especially true for rare or uncommon species because the amount of effort required to identify those species manually can be substantial.
Recognizers provide many different kinds of data
The most basic information that can be obtained from automated recognition is presence/absence or occurrence data. When coupled with estimates of detection probability, occupancy may also be modeled. Recent methods are exploring the possibility of using clustered recording units to localize an individual bird in time and space. In the future, this could lead to estimates of density, particularly for rare or uncommon birds. Automated species recognizers can also provide information on vocalization phenology, calling rates, and intra-specific variation in calls.
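As a minimal sketch of the occupancy idea, the example below fits a single-season occupancy model to presence/absence data with the unmarked R package. The detection history is simulated; substitute a stations-by-repeat-visits matrix of 0/1 detections derived from your recognizer output.

```r
library(unmarked)

set.seed(42)
y   <- matrix(rbinom(60 * 4, 1, 0.35), nrow = 60)  # 60 stations, 4 visits
umf <- unmarkedFrameOccu(y = y)
fit <- occu(~ 1 ~ 1, data = umf)    # detection ~ 1, occupancy ~ 1
backTransform(fit, type = "state")  # occupancy probability (psi)
backTransform(fit, type = "det")    # detection probability (p)
```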
There are many approaches to automated acoustic species recognition (summarized briefly in Knight et al., 2017). Generally, the BU has implemented two major approaches to building recognizers: (i) supervised learning algorithms and (ii) neural networks. The former will be discussed below.
By supervised learning algorithm, we mean that the user monitors the computer during the training stage, where the computer is 'learning' what a particular species' vocalization sounds like. This is typically done using software called Song Scope, which is fed example annotations of a species' vocalization to train on. Where high-quality training data are available, recognizers can be very accurate in their ability to discriminate signal from noise.
During the training stage, a number of parameters have their values informed by the user (e.g., number of syllables, range of permitted frequencies). This is where the 'supervised' part of the algorithm comes into play. By setting these parameters using biologically informed priors (e.g., we know that the vocalization usually has 7 syllables, or we can measure the extreme frequency ranges recorded for that vocalization), the user guides the computer to the parameter space it will use to search through real datasets. Once preliminary assessments deem the recognizer model satisfactory, the quality and score thresholds can be set to balance false positive and false negative rates.
“All models are wrong, but some are useful.” This classic scientific adage applies to recognizers. While recognizers can be very accurate, there will always be false positives (the recognizer says a vocalization is species X when it is really species Y) and false negatives (the recognizer fails to find species X even though it was vocalizing).
With large amounts of acoustic data, the number of hits a recognizer returns means you cannot be 100% sure the computer is correct without checking the vocalizations yourself. This is called validation or verification. There are many ways to validate data, and how much you need to validate depends on your question. If you are primarily interested in whether a species is present or absent over an entire season of recordings, much less validation is needed than if you want to count every song given by a species. The BU has a number of papers that discuss ways to reduce validation time, which can be found here. Using tools like species verification in WildTrax can save time and help you manage and share your data outputs as well.
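One simple validation design is to verify a random subset of hits and bound the false positive rate for a species, as sketched below. The `hits` data frame stands in for your real recognizer output, and the verification count is made up for illustration.

```r
set.seed(7)
hits     <- data.frame(hit_id = 1:5000)              # placeholder output
to_check <- hits[sample(nrow(hits), 200), , drop = FALSE]

# ...listen to each sampled hit, then record how many were correct:
n_correct <- 183
binom.test(n_correct, 200)$conf.int  # 95% CI on precision
1 - n_correct / 200                  # estimated false positive rate (~8.5%)
```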
The flow chart below shows the process the BU uses when developing and using recognizers, with links to papers that provide more detail. A key element of this flow chart is the quality of the recordings used to build a recognizer. There are trade-offs in the creation and interpretation of recognizers when you use only high-quality clips recorded very close to the species of interest versus recordings of varying quality from individuals at different distances. When trained with high-quality, close-range clips, a recognizer not only identifies the species but also "estimates" distance, in that it is more likely to find vocalizations of individuals close to the ARU and to miss those farther away. Training the recognizer using vocalizations recorded farther from the ARU can improve (but not always) the ability to find a species in a recording, because the recognizer is trained to detect the weaker signal-to-noise ratios that come from more distant animals. We prefer recognizers built from high-quality recordings made near the recording device because of the statistical benefits of knowing distance. Recognizers built using vocalizations of varying quality and distance often have more false positives.
As long as the recognizer output you generated contains the appropriate metadata, you can upload the media, create tasks and upload the hits as tags. Using species verification, you can then quickly verify the hits. See the chapters of the Guide on ARU projects and Species verification to learn more.
Classifiers
BirdNET and HawkEars are computer algorithms, commonly referred to as classifiers or recognizers, that have been trained to interpret audio recording spectrograms to classify the species of bird (and a few other acoustic taxa) making sounds in those audio recordings.
BirdNET is a worldwide bioacoustic classifier that has been trained to classify thousands of species, while HawkEars is a regional classifier trained to classify hundreds of species, mostly Canadian and northern US species. There are also differences in their algorithms that lead to more or less accurate results for different species and recording conditions. Think of them like two different hammers: both are meant to strike nails, but the differing size and shape of each hammer changes the result of striking different nails. HawkEars will often be the preferred choice if you are working with acoustic communities in the region it was trained for (Canada, northern US). A comparison of the two classifiers for the HawkEars region is available here (https://www.sciencedirect.com/science/article/pii/S1574954125001311).
BirdNET and HawkEars scan every 3-second window of a spectrogram and give a confidence score for every species in their model for each of these 3-second windows. The classifier scores are not probabilities, but they are reported on a scale of 0-1 for readability. So a score of 0.5 for an American Robin should not be considered to have a 50% chance of being correct, but it is generally more likely to be correct than a score of 0.2 for an American Robin. Note that each model is different and each species’ calls are more or less complex than those of other species, so scores for different species or for the same species but from a different model should not be considered equivalent. More information on how to interpret score thresholds can be found in Wood and Kahl 2024 (https://link.springer.com/article/10.1007/s10336-024-02144-5).
For each 3-second window, the classifiers are generating a score for every species in their dataset. Most of those scores, however, will be very low, indicating that the species is not actually present within the 3-second window, and so a minimum threshold is set for which scores are reported.
WildTrax is set to only report scores that are above a minimum score threshold of 0.2 for BirdNET and 0.3 for HawkEars (although these can be set higher by the user), since most instances of a detection score below those thresholds are incorrect and including all false positives would overwhelm the true positives. Even scores between 0.2 and 0.6 are likely to be false positives for many species. Every species is a little different and sometimes different recording conditions may generate different detection scores for the same species, so WildTrax maintains a relatively low minimum threshold for any users that have a need to explore low-scoring detections for their specific purposes.
The threshold for a classifier is a trade-off between the number of false positive detections and the number of missed detections (i.e., false negatives). That balance, and the threshold subsequently selected, will depend on the desired application for the data. Users should validate a subset of their data to understand the precision and recall of the models for their species of interest. More information on the evaluation process can be found in Wood and Kahl 2024 (https://link.springer.com/article/10.1007/s10336-024-02144-5), and there are tools within the wildRtrax R package to support evaluation and threshold selection (https://rdrr.io/github/ABbiodiversity/wildRtrax/f/vignettes/classifiers-tutorial.Rmd).
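The idea behind threshold selection can be sketched as follows: from a set of validated detections (a score plus a human verdict), compute precision and recall at a range of thresholds and choose the trade-off that suits your application. The data below are simulated; the wildRtrax functions linked above wrap this kind of calculation for real WildTrax outputs.

```r
library(dplyr)

set.seed(3)
score      <- runif(500, 0.1, 1)
validated  <- data.frame(score = score, correct = runif(500) < score)
total_true <- sum(validated$correct)

curve <- bind_rows(lapply(seq(0.2, 0.9, by = 0.1), function(t) {
  kept <- validated[validated$score >= t, ]
  data.frame(threshold = t,
             precision = mean(kept$correct),              # share of kept hits that are right
             recall    = sum(kept$correct) / total_true)  # share of true hits kept
}))
curve
```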
The classifier scores of BirdNET and HawkEars are scaled differently for each model, with more HawkEars detections generated at intermediate (0.1–0.6) scores than BirdNET. Since HawkEars generates more intermediate scores, some of these are ignored by using a higher minimum threshold in WildTrax for HawkEars than for BirdNET. The user interface for both classifiers can get quite busy with false positives at scores below 0.5, so the recommended minimum threshold when looking at a spectrogram is 0.5. Users who want lower thresholds can go as low as 0.3 for HawkEars or 0.2 for BirdNET. The classifier reports output a CSV with all classifier detections down to those lower thresholds, for users interested in exploring recall and precision curves for particular species and classifiers. Users who want thresholds lower than those in WildTrax should explore running the classifiers on their data directly, outside of WildTrax, to test very low thresholds.
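Applying those floors to a downloaded classifier report might look like the sketch below. The file name and the `classifier` and `confidence` columns are assumptions; check the CSV in Data Downloads for the actual layout.

```r
library(dplyr)

report <- read.csv("project_classifier_report.csv")
floors <- c(BirdNET = 0.2, HawkEars = 0.3)  # WildTrax minimum thresholds

review_set  <- report |> filter(confidence >= 0.5)                 # for on-screen tagging
explore_set <- report |> filter(confidence >= floors[classifier])  # for precision/recall work
```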
BirdNET and HawkEars both have a few filters that can be used to adjust the results generated by the classifiers beyond the minimum confidence threshold. WildTrax uses only the location filters in BirdNET and HawkEars, in order to limit results to those that are geographically relevant. These location filters use eBird data to determine whether detections of a species are expected in the area where the recording was taken. For example, classifier detections of a species such as California Quail (a west coast species) will not appear in a recording from the Maritimes: there are no records of the species there, so any detections generated by the classifiers are likely false positives and can be ignored. Users who want unfiltered results should explore running the classifiers on their recordings themselves, so they can adjust the use of any filters for their specific purposes.
The BirdNET and HawkEars classifiers are by no means perfect. They are extensively trained tools, but they are not the equivalent of a trained human, who can still recognize patterns and differences in sounds much better than the most advanced computer algorithms. Overlapping calls, abnormal calls, and quiet calls are particularly difficult for the classifiers to detect and identify, because the patterns in the spectrogram may not match the patterns they have been trained on. In addition, species with higher variation in their calls are more likely to be missed or confused, because highly variable calls are more difficult to identify correctly.
Just as the classifiers sometimes miss a species that a human can detect, they can also report species that are not present within a recording. The classifiers produce a score for every species they are trained on, and the purpose of the score threshold is to eliminate species that score low because they are not actually present in the recording. However, because the classification process is not perfect, the classifier output often contains false positives. How you deal with those false positives will depend on the intended application of your data. If you are looking for a common species and detecting dozens of vocalizations in every recording, and one or two of them are wrong, it probably isn't a big deal. If you would like to remove the need for manual verification of your detections, you can verify a small proportion to obtain a false positive rate and incorporate that rate into a variety of statistical models, such as occupancy models. On the other hand, if you are looking for a rare species that only calls occasionally and the classifier regularly gives false positives, you may want to manually verify all your classifier detections to ensure your dataset is interpreted accurately.
WildTrax and the ABMI do not own or maintain these classifiers; they were developed by programmers and biologists who made them freely available for others to use. WildTrax runs the classifiers and makes their results available via the user interface so that users can perform all of their acoustic data processing in one platform, without needing external programs to use the classifiers. The developers of these models continue to update them to improve accuracy and add new species; WildTrax cannot keep up with every update, but it will periodically update the version of a model when the newest version offers significant upgrades for our users. The code repository for BirdNET can be found here (https://github.com/birdnet-team/BirdNET-Analyzer) and for HawkEars here (https://github.com/jhuus/HawkEars).