It's a good post, but it's missing the connection to continuous normalizing flows. Diffusion models, flow matching, and consistency models are biased approximations of continuous normalizing flows (which themselves have some slight biases, but less so). Adversarial losses (e.g. RL, GANs) can somewhat help with the bias, but training those has its own issues.
The most common approach to modeling continuous distributions is to train a reversible model f that maps the data distribution to another continuous distribution P that is already known. Because f is reversible, the original image can be recovered from its mapped value by running the map backwards, and the log-likelihood of a data point x follows from the change of variables formula, giving the loss:
−log P(f(x)) − log|det ∂f/∂x (x)|
This technique is known as normalizing flows, as usually a normal distribution is chosen for the known distribution. The second term can be a little hard to compute, so diffusion models approximate it by using a stochastic PDE for the mapping. When f is a solution to an ordinary differential equation dx/dt = v(x, t), the log-det term evolves by the instantaneous change of variables formula:

d/dt log|det ∂f/∂x| = tr(∂v/∂x)

Switching to a stochastic PDE and tracking the difference δx = x′ − x between two nearby trajectories, the mean-squared error approximately satisfies

d/dt E[|δx|²] ≈ 2 E[δxᵀ (∂v/∂x) δx]

which is close to Hutchinson's estimator tr(A) = E[εᵀ A ε], but weighted a little strangely.
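To make the loss above concrete, here's a minimal numpy sketch (the function names are mine, not from the thread): an affine map f(x) = a·x + b with a closed-form log-det Jacobian, plus a Hutchinson-style trace estimate of the kind the ODE/SDE view relies on:

    import numpy as np

    def affine_flow_nll(x, a, b):
        # Normalizing-flow loss for f(x) = a*x + b with a standard-normal prior:
        # -log P(f(x)) - log|det df/dx|, where df/dx = diag(a).
        z = a * x + b
        log_p_z = -0.5 * np.sum(z ** 2) - 0.5 * x.size * np.log(2 * np.pi)
        log_det = np.sum(np.log(np.abs(a)))
        return -log_p_z - log_det

    def hutchinson_trace(A, n_samples=10_000, seed=0):
        # tr(A) ~ E[eps^T A eps] for eps with identity covariance.
        eps = np.random.default_rng(seed).standard_normal((n_samples, A.shape[0]))
        return np.einsum('ni,ij,nj->n', eps, A, eps).mean()

    x = np.array([0.5, -1.2, 2.0])
    a = np.array([1.5, 0.7, 2.0])
    print(affine_flow_nll(x, a, np.zeros(3)))
    print(np.trace(np.diag(a)), hutchinson_trace(np.diag(a)))  # exact vs. estimate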
Diffusion and flow matching models generate samples by iterative denoising: you pass an input to the neural network, run a forward pass, feed the output back in as the next input, and repeat. Often you do this 100 times, which is slow and expensive.
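In code, that loop is roughly this (a sketch assuming a trained velocity network; the velocity_net(x, t) interface is hypothetical):

    import torch

    @torch.no_grad()
    def euler_sample(velocity_net, dim, n_steps=100):
        # Integrate dx/dt = v(x, t) from t=0 (noise) to t=1 (data):
        # one network forward pass per step, n_steps passes in total.
        x = torch.randn(1, dim)              # draw from the known distribution
        dt = 1.0 / n_steps
        for i in range(n_steps):
            t = torch.full((1, 1), i * dt)
            x = x + dt * velocity_net(x, t)  # Euler step
        return x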
Flow maps / consistency models / shortcut models instead try to learn to compress this iterative work into a single forward pass. This makes inference ~100x faster, since you only need to run the neural net forward pass once. Beyond speeding up inference, there are other benefits, such as an improved ability to perform inference-time steering.
Mathematically, learning a flow map corresponds to learning to solve an ordinary differential equation, i.e., learning the time integral of the velocity field. This foundation provides the basis for various training objectives for learning flow maps, built either on self-referential identities (composing two short jumps must match one long jump) or on identities such as the transport equation, both of which are discussed in the blog post.
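A toy version of one such self-referential objective, using a hypothetical flow_map(x, t, s) interface (real methods like consistency distillation or shortcut models differ in details, e.g. EMA teachers and where the stop-gradient goes):

    import torch

    def flow_map_consistency_loss(flow_map, x, t, u, s):
        # Semigroup identity of the ODE solution map: jumping t->u then
        # u->s should match jumping t->s in one shot (t < u < s).
        with torch.no_grad():
            target = flow_map(flow_map(x, t, u), u, s)  # two short hops (teacher)
        pred = flow_map(x, t, s)                        # one long hop (student)
        return torch.mean((pred - target) ** 2)

Once trained, sampling is just flow_map(noise, 0.0, 1.0), a single forward pass.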
Hope that helps! I'm an ML researcher currently researching flow maps.
Very helpful! Naïve question (I haven't had a chance to read TFA at all and diffusion/flow models are not my area of expertise): doesn't learning the integral/solution of the diffusion process in a single pass just take us back to, like, the OG generative CNNs we had before diffusion models took over? Surely the answer is "no", but I'd love to hear your framing as to why.
This post gives a high-level overview of diffusion models, you know, the models behind Stable Diffusion, Gemini's Nano Banana, etc.
I haven't read it carefully, but I think it's pretty comprehensive: from the SDE to the flow matching formulation, and different perspectives on constructing the flow maps, i.e. the x-formulation vs. the v-formulation. It also covers distillation and consistency, which are used for fast sampling.
Overall, it's a good read if you are new to the field.
Why not put it into an AI yourself? :) I'd rather we avoided a precedent of asking for it and N people replying with their own favorite AI version. The comments section would end up a ghost town.
Extreme TL;DR: Diffusion models are like getting f(x) by calculating and summing f'(0), f'(1), ..., f'(x). Flow maps are like just calculating f(x) directly.
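A toy numeric version of that analogy, with f(x) = x² and f(0) = 0 (the step size is mine, chosen so the sum converges):

    f = lambda x: x ** 2   # what we want
    df = lambda t: 2 * t   # its derivative

    x, n_steps = 5.0, 1000
    dt = x / n_steps
    diffusion_style = sum(df(i * dt) * dt for i in range(n_steps))  # sum derivative steps
    print(diffusion_style, f(x))  # ~24.98 vs 25.0: a flow map just evaluates f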
We've all seen that AI can give you plausible but incorrect answers. Having an expert read it, or run it through an AI and then interpret and validate the output before posting, would be most welcome IMO.
HN is a place where it's legitimate to ask those kinds of questions. The site has a high concentration of advanced practitioners -- in my experience it is not uncommon for the creator of a technology or deep expert to reply. John Carmack has an account on the site for instance. :)