Flux.2 is the latest series of image generation models from Black Forest Labs, preceded by the Flux.1 series. It is an entirely new model, with a new architecture and pre-training done from scratch!
Original model checkpoints for Flux.2 can be found here. Original inference code can be found here.
Flux.2 can be quite expensive to run on consumer hardware. However, you can perform a suite of optimizations to run it faster and in a more memory-friendly manner. Check out this section for more details. Additionally, Flux.2 can benefit from quantization for memory efficiency, with a trade-off in inference latency. Refer to this blog post to learn more.
Caching may also speed up inference by storing and reusing intermediate outputs.
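As a minimal sketch of one such memory optimization, assuming the bitsandbytes package is installed and your diffusers release exports BitsAndBytesConfig, the transformer (the largest component) can be quantized to 4-bit and combined with CPU offloading:

>>> import torch
>>> from diffusers import BitsAndBytesConfig, Flux2Pipeline, Flux2Transformer2DModel
>>> # Assumption: bitsandbytes is installed and supported on your hardware.
>>> quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
>>> transformer = Flux2Transformer2DModel.from_pretrained(
...     "black-forest-labs/FLUX.2-dev",
...     subfolder="transformer",
...     quantization_config=quant_config,
...     torch_dtype=torch.bfloat16,
... )
>>> pipe = Flux2Pipeline.from_pretrained(
...     "black-forest-labs/FLUX.2-dev", transformer=transformer, torch_dtype=torch.bfloat16
... )
>>> pipe.enable_model_cpu_offload()  # move submodules to the GPU only when they are needed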
Flux.2 can potentially generate better outputs with better prompts. We can “upsample”
an input prompt by setting the caption_upsample_temperature argument in the pipeline call arguments.
The official implementation recommends setting this value to 0.15.
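For instance, assuming a pipeline loaded as in the example at the end of this page, prompt upsampling is enabled directly in the call:

>>> image = pipe(
...     prompt="A cat holding a sign that says hello world",
...     caption_upsample_temperature=0.15,  # value recommended by the official implementation
... ).images[0]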
( scheduler: FlowMatchEulerDiscreteScheduler, vae: AutoencoderKLFlux2, text_encoder: Mistral3ForConditionalGeneration, tokenizer: AutoProcessor, transformer: Flux2Transformer2DModel )
Parameters

- transformer (Flux2Transformer2DModel) — Transformer model to denoise the encoded image latents.
- scheduler (FlowMatchEulerDiscreteScheduler) — A scheduler to be used in combination with transformer to denoise the encoded image latents.
- vae (AutoencoderKLFlux2) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- text_encoder (Mistral3ForConditionalGeneration) — Text encoder of class Mistral3ForConditionalGeneration.
- tokenizer (AutoProcessor) — Tokenizer of class PixtralProcessor.

The Flux2 pipeline for text-to-image generation.
Reference: https://bfl.ai/blog/flux-2
( image: typing.Union[typing.List[PIL.Image.Image], PIL.Image.Image, NoneType] = None,
prompt: typing.Union[str, typing.List[str]] = None,
height: typing.Optional[int] = None,
width: typing.Optional[int] = None,
num_inference_steps: int = 50,
sigmas: typing.Optional[typing.List[float]] = None,
guidance_scale: typing.Optional[float] = 4.0,
num_images_per_prompt: int = 1,
generator: typing.Union[torch.Generator, typing.List[torch.Generator], NoneType] = None,
latents: typing.Optional[torch.Tensor] = None,
prompt_embeds: typing.Optional[torch.Tensor] = None,
output_type: typing.Optional[str] = 'pil',
return_dict: bool = True,
attention_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None,
callback_on_step_end: typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None,
callback_on_step_end_tensor_inputs: typing.List[str] = ['latents'],
max_sequence_length: int = 512,
text_encoder_out_layers: typing.Tuple[int] = (10, 20, 30),
caption_upsample_temperature: float = None ) → ~pipelines.flux2.Flux2PipelineOutput or tuple
Parameters

- image (torch.Tensor, PIL.Image.Image, np.ndarray, List[torch.Tensor], List[PIL.Image.Image], or List[np.ndarray], optional) — Image, numpy array or tensor representing an image batch to be used as the starting point. For both numpy array and pytorch tensor, the expected value range is between [0, 1]. If it's a tensor or a list of tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a list of arrays, the expected shape should be (B, H, W, C) or (H, W, C). It can also accept image latents as image, but if passing latents directly they are not encoded again.
- prompt (str or List[str], optional) — The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead.
- guidance_scale (float, optional, defaults to 4.0) — Embedded guidance scale is enabled by setting guidance_scale > 1. A higher guidance_scale encourages the model to generate images more closely aligned with the prompt, at the expense of lower image quality. Guidance-distilled models approximate true classifier-free guidance for guidance_scale > 1. Refer to the paper to learn more.
- height (int, optional, defaults to 1024) — The height in pixels of the generated image. This is set to 1024 by default for the best results.
- width (int, optional, defaults to 1024) — The width in pixels of the generated image. This is set to 1024 by default for the best results.
- num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
- sigmas (List[float], optional) — Custom sigmas to use for the denoising process with schedulers which support a sigmas argument in their set_timesteps method. If not defined, the default behavior when num_inference_steps is passed will be used.
- num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
- generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic.
- latents (torch.Tensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.
- prompt_embeds (torch.Tensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
- output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL.Image.Image and np.array.
- return_dict (bool, optional, defaults to True) — Whether or not to return a ~pipelines.flux2.Flux2PipelineOutput instead of a plain tuple.
- attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
- callback_on_step_end (Callable, optional) — A function that is called at the end of each denoising step during inference, with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs.
- callback_on_step_end_tensor_inputs (List, optional) — The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as the callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.
- max_sequence_length (int, defaults to 512) — Maximum sequence length to use with the prompt.
- text_encoder_out_layers (Tuple[int], defaults to (10, 20, 30)) — Layer indices to use in the text_encoder to derive the final prompt embeddings.
- caption_upsample_temperature (float, optional) — When specified, caption upsampling is attempted for potentially improved outputs. We recommend setting it to 0.15 if caption upsampling is to be performed.

Returns

~pipelines.flux2.Flux2PipelineOutput or tuple

~pipelines.flux2.Flux2PipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.
Function invoked when calling the pipeline for generation.
Examples:
>>> import torch
>>> from diffusers import Flux2Pipeline
>>> pipe = Flux2Pipeline.from_pretrained("black-forest-labs/FLUX.2-dev", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")
>>> prompt = "A cat holding a sign that says hello world"
>>> # Depending on the variant being used, the pipeline call will slightly vary.
>>> # Refer to the pipeline documentation for more details.
>>> image = pipe(prompt, num_inference_steps=50, guidance_scale=2.5).images[0]
>>> image.save("flux.png")
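The callback arguments described above can be used to inspect intermediate tensors during denoising. A minimal sketch, where the callback body is illustrative rather than part of the official example:

>>> def log_step(pipe, step, timestep, callback_kwargs):
...     latents = callback_kwargs["latents"]
...     print(f"step {step}: latents mean = {latents.mean().item():.4f}")
...     return callback_kwargs  # must return the (possibly modified) kwargs dict
>>> image = pipe(
...     prompt,
...     num_inference_steps=50,
...     guidance_scale=2.5,
...     callback_on_step_end=log_step,
...     callback_on_step_end_tensor_inputs=["latents"],
... ).images[0]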