Text classification is the task of assigning a class to a document. Machine learning enables the automation of text classification tasks, among others. Recent advances in the field, such as Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs), have greatly improved classification results. These types of networks maintain internal memory states that exhibit dynamic temporal behaviour. In the LSTM cell, this temporal behaviour is supported by two distinct states: the current (cell) state and the hidden state. We introduce a modification layer within the LSTM cell, in which we can perform extra alterations to one or both states. We experiment with 17 single-state alterations, 12 for the current state and 5 for the hidden state. We evaluate these alterations on seven datasets covering hate speech detection, document classification, human-to-robot interaction and sentiment analysis. Our results show an average F1 improvement of 0.5% for the best-performing current-state alteration and 0.3% for the best-performing hidden-state alteration.
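The abstract does not specify the form of the modification layer, so the following is only a minimal sketch: a standard LSTM step in NumPy with optional hooks applied to the current (cell) and hidden states after they are computed. The names `lstm_step`, `alter_c` and `alter_h` are hypothetical illustrations, not the paper's actual API.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b, alter_c=None, alter_h=None):
    """One LSTM step with optional state-alteration hooks.

    W, U, b hold the stacked input/forget/cell/output gate parameters.
    alter_c and alter_h are hypothetical callables standing in for the
    paper's modification layer; the step is a standard LSTM when omitted.
    """
    z = W @ x + U @ h_prev + b
    i, f, g, o = np.split(z, 4)          # gate pre-activations
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    g = np.tanh(g)
    c = f * c_prev + i * g               # current (cell) state
    if alter_c is not None:              # extra alteration of the current state
        c = alter_c(c)
    h = o * np.tanh(c)                   # hidden state
    if alter_h is not None:              # extra alteration of the hidden state
        h = alter_h(h)
    return h, c

# Example: a toy alteration that squashes the current state.
rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
W = rng.normal(size=(4 * n_hid, n_in))
U = rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = lstm_step(rng.normal(size=n_in), np.zeros(n_hid), np.zeros(n_hid),
                 W, U, b, alter_c=np.tanh)
```

The hook placement (after the cell-state update and after the output gate) is an assumption for illustration; the paper's 17 alterations presumably substitute different transforms at these points.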