| DC Field | Value | Language |
| --- | --- | --- |
| dc.contributor.advisor | Freeman, William T. | |
| dc.contributor.advisor | Durand, Frédo | |
| dc.contributor.author | Yin, Tianwei | |
| dc.date.accessioned | 2025-11-25T19:37:24Z | |
| dc.date.available | 2025-11-25T19:37:24Z | |
| dc.date.issued | 2025-05 | |
| dc.date.submitted | 2025-08-14T19:46:48.699Z | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/164029 | |
| dc.description.abstract | While current visual generative models produce high-quality outputs, they suffer from significant computational costs and latency, limiting their applicability in interactive settings. In this dissertation, we introduce a suite of techniques designed to enhance the efficiency of generative models for image and video synthesis. First, we propose distribution matching distillation, a method that enables the training of one- or few-step visual generators by distilling knowledge from computationally expensive yet highly capable diffusion models. Next, we develop improved distillation techniques that enhance robustness and scalability, culminating in a production-grade few-step image generator. This system is now deployed in widely used software, generating hundreds of millions of images annually. Finally, we extend our approach to video generation by adopting an autoregressive paradigm, significantly reducing latency and enabling fast interactive video generation and world simulation. | |
| dc.publisher | Massachusetts Institute of Technology | |
| dc.rights | In Copyright - Educational Use Permitted | |
| dc.rights | Copyright retained by author(s) | |
| dc.rights.uri | https://rightsstatements.org/page/InC-EDU/1.0/ | |
| dc.title | Efficient Generative Models for Visual Synthesis | |
| dc.type | Thesis | |
| dc.description.degree | Ph.D. | |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | |
| mit.thesis.degree | Doctoral | |
| thesis.degree.name | Doctor of Philosophy | |