Optimizing Large Language Models from a Data Systems Perspective
Author(s)
Chen, Peter Baile
Advisor
Cafarella, Michael J.
Abstract
Strong retrieval and reasoning capabilities are essential for large language models (LLMs) to effectively handle a broad spectrum of downstream tasks, such as open-domain question answering and solving math or science problems. While current LLM-based frameworks achieve strong performance on complex retrieval and reasoning tasks, they do so at a high computational cost. Additionally, they often lack structured, systematic problem-solving strategies, leading to unexpected failures. In particular, these models typically operate in an iterative, online, and isolated fashion: they fail to exploit relationships across data sources, opportunities for offline computation, and the benefits of reusability, resulting in suboptimal outcomes. In contrast, traditional data management systems are engineered for both efficiency and accuracy, with careful coordination across all stages of the query pipeline. Inspired by these principles, this work proposes novel approaches to improve LLM-based retrieval and reasoning by incorporating optimization techniques from data systems. Our evaluation across a range of knowledge- and reasoning-intensive datasets demonstrates significant gains in both accuracy and computational efficiency.
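To make the abstract's appeal to reusability and offline computation concrete, the sketch below shows one classic data-systems optimization, result caching, applied to an LLM retrieval step. It is a minimal illustration and not the thesis's actual method; every name here (CachedRetriever, expensive_retrieve) is a hypothetical stand-in introduced for this example.

```python
import hashlib

def expensive_retrieve(query: str, k: int) -> list[str]:
    """Stand-in for an online retrieval call (e.g., dense search + rerank).
    In a real pipeline this would hit an index or an API."""
    return [f"doc-{i} for {query!r}" for i in range(k)]

class CachedRetriever:
    """Reuses results across repeated or shared sub-queries, echoing the
    data-systems principle of avoiding redundant work in a query pipeline."""

    def __init__(self, retrieve_fn):
        self._retrieve_fn = retrieve_fn  # the expensive online call
        self._cache: dict = {}           # results reusable across queries

    def retrieve(self, query: str, k: int = 5) -> list[str]:
        # Hash the query so the cache key stays small and uniform.
        key = (hashlib.sha256(query.encode()).hexdigest(), k)
        if key not in self._cache:       # pay the online cost only once
            self._cache[key] = self._retrieve_fn(query, k)
        return self._cache[key]

retriever = CachedRetriever(expensive_retrieve)
first = retriever.retrieve("open-domain QA example")   # computed online
second = retriever.retrieve("open-domain QA example")  # served from cache
assert first == second
```

The same caching idea extends to precomputing results offline for anticipated queries, so that iterative LLM agents do not repeat identical retrieval work at inference time.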
Date issued
2025-09
Department
Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Publisher
Massachusetts Institute of Technology