Abstract
Sound classification is a multifaceted task that requires gathering and processing large quantities of data, as well as building machine learning models that can accurately distinguish between sounds. In this project, we implemented a methodology for classifying both musical instruments and environmental sounds using convolutional and recurrent neural networks. We extracted audio features with Mel Frequency Cepstral Coefficients (MFCCs), which emulate the human auditory system and produce highly distinctive features. Because data processing is critical to model quality, we applied a range of data augmentation and cleaning techniques to reach an optimized solution. The outcomes were noteworthy: both the convolutional and recurrent neural network models achieved a commendable level of accuracy. As machine learning and deep learning continue to transform image classification, adaptable models for audio classification deserve similar attention. Despite the challenges of a small dataset, we successfully built our models using convolutional and recurrent neural networks. Our approach to sound classification has implications for diverse domains, including speech recognition, music production, and healthcare. We believe that, with further research and development, this work can pave the way for advances in audio data classification and analysis.
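The MFCC pipeline mentioned above can be sketched in a few standard steps: frame the waveform, window each frame, take the power spectrum, pool it through a mel-scaled filterbank, take logs, and decorrelate with a DCT. The sketch below is a minimal NumPy illustration of those steps, not the authors' implementation; all parameter values (`n_fft`, `hop`, filter counts) are illustrative defaults.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    """Minimal MFCC sketch: framing -> power spectrum -> mel filterbank -> log -> DCT-II."""
    # 1. Frame the signal and apply a Hann window to each frame.
    frames = np.array([
        signal[start:start + n_fft] * np.hanning(n_fft)
        for start in range(0, len(signal) - n_fft + 1, hop)
    ])
    # 2. Power spectrum of each frame (real FFT, n_fft//2 + 1 bins).
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # 3. Triangular mel filterbank, equally spaced on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    # 4. Log mel energies (small epsilon avoids log(0) on silent frames).
    log_mel = np.log(power @ fbank.T + 1e-10)
    # 5. DCT-II keeps the first n_ceps coefficients -> MFCC feature matrix.
    dct = np.cos(np.pi / n_mels
                 * (np.arange(n_mels) + 0.5)[None, :]
                 * np.arange(n_ceps)[:, None])
    return log_mel @ dct.T  # shape: (num_frames, n_ceps)
```

Each row of the returned matrix is one frame's feature vector; stacking the rows over time yields the 2D input typically fed to a CNN or, frame by frame, to an RNN.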
More Information
| Field | Value |
|---|---|
| Divisions | School of Built Environment, Engineering and Computing |
| Identification Number | https://doi.org/10.14569/ijacsa.2024.0150704 |
| Status | Published |
| Refereed | Yes |
| Publisher | The Science and Information Organization |
| Uncontrolled Keywords | 0803 Computer Software; 1005 Communications Technologies; 46 Information and computing sciences |
| SWORD Depositor | Symplectic |
| Depositing User | Deposited by Mann, Elizabeth |
| Date Deposited | 26 Sep 2024 15:24 |
| Last Modified | 26 Sep 2024 17:30 |
| Item Type | Article |