From BERT to Mamba: Evaluating Deep Learning for Efficient QA Systems
This project compares the performance of multiple deep learning models on a question-answering (QA) NLP task, focusing on the balance between accuracy and computational efficiency. Effective question answering requires models to interpret queries precisely within a given context while remaining resource-efficient for real-world applications. We fine-tune and evaluate deep learning models including BERT, T5, and Mamba, assessing their Exact Match scores and resource usage. Through this comparative analysis, we explore the trade-offs that shape each model's performance, providing insights into optimizing QA systems for practical deployment.

PPT · Code · Report
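As a concrete reference for the Exact Match metric mentioned above, here is a minimal sketch of a SQuAD-style EM computation, assuming standard answer normalization (lowercasing, punctuation and article removal, whitespace collapsing); the sample predictions and gold answers are purely illustrative.

```python
import re
import string

def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation, drop articles, collapse whitespace (SQuAD-style)."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold: str) -> int:
    """Return 1 if the normalized prediction equals the normalized gold answer, else 0."""
    return int(normalize_answer(prediction) == normalize_answer(gold))

# Hypothetical predictions and gold answers for illustration only.
preds = ["The Eiffel Tower", "1969", "Paris, France"]
golds = ["Eiffel Tower", "in 1969", "Paris"]

# Average EM over the set, reported as a percentage.
em = 100.0 * sum(exact_match(p, g) for p, g in zip(preds, golds)) / len(preds)
print(f"Exact Match: {em:.2f}")
```

Because EM is an all-or-nothing string comparison, the normalization step matters: without it, trivial differences such as a leading article or trailing punctuation would count as errors.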