dc.contributor.advisor: Freeman, William T.
dc.contributor.advisor: Durand, Frédo
dc.contributor.author: Yin, Tianwei
dc.date.accessioned: 2025-11-25T19:37:24Z
dc.date.available: 2025-11-25T19:37:24Z
dc.date.issued: 2025-05
dc.date.submitted: 2025-08-14T19:46:48.699Z
dc.identifier.uri: https://hdl.handle.net/1721.1/164029
dc.description.abstract: While current visual generative models produce high-quality outputs, they suffer from significant computational costs and latency, limiting their applicability in interactive settings. In this dissertation, we introduce a suite of techniques designed to enhance the efficiency of generative models for image and video synthesis. First, we propose distribution matching distillation, a method that enables the training of one- or few-step visual generators by distilling knowledge from computationally expensive yet highly capable diffusion models. Next, we develop improved distillation techniques that enhance robustness and scalability, culminating in a production-grade few-step image generator. This system is now deployed in widely used software, generating hundreds of millions of images annually. Finally, we extend our approach to video generation by adopting an autoregressive paradigm, significantly reducing latency and enabling fast interactive video generation and world simulation.
dc.publisher: Massachusetts Institute of Technology
dc.rights: In Copyright - Educational Use Permitted
dc.rights: Copyright retained by author(s)
dc.rights.uri: https://rightsstatements.org/page/InC-EDU/1.0/
dc.title: Efficient Generative Models for Visual Synthesis
dc.type: Thesis
dc.description.degree: Ph.D.
dc.contributor.department: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
mit.thesis.degree: Doctoral
thesis.degree.name: Doctor of Philosophy

