Abstract
Modern Machine Learning (ML) models have a significant number of hyper-parameters that must be tuned to balance performance and energy efficiency for a given model configuration during training. This becomes a considerable design challenge as increasing problem complexity demands larger models. This paper explores the Tsetlin Machine (TM), a new logic-based ML approach with only four hyper-parameters regardless of the problem space. Two of these hyper-parameters influence the TM architecture, while the remaining two affect the learning efficacy. This work focuses on the systematic search for optimal TM hyper-parameters and aims to understand how hyper-parameter values affect performance and prediction accuracy, using the MNIST dataset as a case study.
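To make the systematic search concrete, the sketch below illustrates a simple grid search over TM hyper-parameters on booleanized MNIST. It is a minimal sketch, assuming the interface of the open-source pyTsetlinMachine package (`MultiClassTsetlinMachine(clauses, T, s)` with `fit`/`predict`); the candidate grids, binarization threshold, and epoch budget are illustrative choices rather than the paper's settings, and the fourth hyper-parameter (the number of automaton states) is left at the implementation default.

```python
# Minimal grid-search sketch for TM hyper-parameters on MNIST.
# Assumes a pyTsetlinMachine-style interface; adapt names to your TM implementation.
import numpy as np
from itertools import product
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from pyTsetlinMachine.tm import MultiClassTsetlinMachine

# Load MNIST and booleanize the grey-scale pixels (threshold chosen for illustration).
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X_bin = np.where(X > 75, 1, 0).astype(np.uint32)
y = y.astype(np.uint32)
X_train, X_test, y_train, y_test = train_test_split(
    X_bin, y, test_size=10000, random_state=0
)

# Candidate values: the clause count shapes the architecture,
# while T and s steer the learning dynamics (illustrative grids only).
grid = {
    "clauses": [500, 1000, 2000],
    "T": [10, 25, 50],
    "s": [5.0, 10.0],
}

best_acc, best_cfg = 0.0, None
for clauses, T, s in product(*grid.values()):
    tm = MultiClassTsetlinMachine(clauses, T, s)
    tm.fit(X_train, y_train, epochs=10)          # short training budget per configuration
    acc = (tm.predict(X_test) == y_test).mean()  # held-out accuracy
    print(f"clauses={clauses} T={T} s={s} acc={acc:.4f}")
    if acc > best_acc:
        best_acc, best_cfg = acc, (clauses, T, s)

print(f"best accuracy {best_acc:.4f} with (clauses, T, s) = {best_cfg}")
```

In practice, each configuration would be trained for more epochs and averaged over repeated runs before comparing accuracies; the exhaustive loop is kept here only to show how the hyper-parameter space is enumerated.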
More Information
| Field | Value |
| --- | --- |
| Identification Number | https://doi.org/10.1109/ISTM58889.2023.10454969 |
| Status | Published |
| Refereed | Yes |
| Publisher | IEEE |
| Depositing User | Gorbenko, Anatoliy |
| Date Deposited | 06 Jan 2025 15:15 |
| Last Modified | 06 Jan 2025 16:17 |
| Event Title | International Symposium on the Tsetlin Machine (ISTM) |
| Event Dates | 29-30 Aug 2023 |
| Item Type | Conference or Workshop Item (Paper) |
Download
Due to copyright restrictions, this file is not available for public download. For more information please email openaccess@leedsbeckett.ac.uk.