Show simple item record

dc.contributor.advisor  Mansinghka, Vikash
dc.contributor.author  Ghavami, Matin
dc.date.accessioned  2025-11-17T19:07:26Z
dc.date.available  2025-11-17T19:07:26Z
dc.date.issued  2025-05
dc.date.submitted  2025-08-14T19:31:49.209Z
dc.identifier.uri  https://hdl.handle.net/1721.1/163689
dc.description.abstract  This thesis presents a comprehensive approach to GPU-accelerated inference for discrete probabilistic programs. We make two key contributions: (1) a factor graph IR implemented in JAX that supports variable elimination and Gibbs sampling, and (2) a modeling DSL with a compiler that lowers programs to the factor graph IR. Our system enables significant performance optimizations through static analysis of the factor graph structure. Variable elimination is optimized by reduction to tensor contraction with optimized contraction paths, while Gibbs sampling is automatically parallelized through graph coloring techniques. Empirical evaluations on standard benchmarks demonstrate orders-of-magnitude performance improvements over existing systems, with the parallelized Gibbs sampler showing speed-ups of up to 144x on Bayesian networks and even greater improvements for models with regular graph topologies such as Ising models and hidden Markov models.
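The abstract's reduction of variable elimination to tensor contraction can be illustrated with a minimal sketch. This is not the thesis's code: the chain factor graph, factor tables, and variable names below are assumptions chosen for illustration. The idea is that summing out variables from a product of discrete factors is exactly an `einsum`, and passing an `optimize` argument lets the backend pick an efficient contraction path.

```python
# Hedged sketch: variable elimination on a hypothetical 3-variable chain
# A - B - C with binary variables, reduced to a single tensor contraction.
import jax.numpy as jnp

# Arbitrary nonnegative factor tables (illustrative, not from the thesis).
phi_ab = jnp.array([[0.9, 0.1],
                    [0.2, 0.8]])  # factor over (A, B)
phi_bc = jnp.array([[0.7, 0.3],
                    [0.4, 0.2]])  # factor over (B, C)

# Eliminating B and C to obtain the unnormalized marginal over A is one
# einsum: sum_{b,c} phi_ab[a,b] * phi_bc[b,c]. The `optimize` flag asks
# for an optimized contraction order, which matters for larger graphs.
marginal_a = jnp.einsum('ab,bc->a', phi_ab, phi_bc, optimize=True)
p_a = marginal_a / marginal_a.sum()  # normalized marginal P(A)
```

For graphs with many factors, the same pattern generalizes: each factor becomes an operand with one subscript letter per variable, and the output subscripts are the variables kept rather than eliminated.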
dc.publisher  Massachusetts Institute of Technology
dc.rights  Attribution 4.0 International (CC BY 4.0)
dc.rights  Copyright retained by author(s)
dc.rights.uri  https://creativecommons.org/licenses/by/4.0/
dc.title  GPU-accelerated Inference for Discrete Probabilistic Programs
dc.type  Thesis
dc.description.degree  S.M.
dc.contributor.department  Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
dc.identifier.orcid  0000-0003-3052-7412
mit.thesis.degree  Master
thesis.degree.name  Master of Science in Electrical Engineering and Computer Science

