Misinformation is a significant societal threat amplified by advances in technology. Its severe consequences have made misinformation detection a widely researched area in recent years. While many studies have focused on creating datasets and effective machine learning models to combat it, detecting misinformation with deep learning in nuanced settings where data is scarce remains a challenge. This study aims to identify deep learning techniques that assist in classifying misinformation when only small datasets are available. It compares the effectiveness of first applying transfer learning on past, related data and then performing few-shot and zero-shot learning on the smaller dataset against training language models directly on the small dataset. It also aims to uncover the driving factors that affect model performance when detecting misinformation in a small dataset using deep learning. Our findings suggest that training language models on smaller datasets, while accounting for key indicators of performance such as model architecture and the transfer of learned representations, is more beneficial than pre-training the models on past, related data.