Generative AI with Diffusion Models
Thanks to improvements in computing power and scientific theory, Generative AI is more accessible than ever before. It is set to play a significant role across industries, with applications ranging from creative content generation, data augmentation, and simulation and planning to anomaly detection, drug discovery, and personalized recommendations. In this course, we take a deeper dive into denoising diffusion models, a popular choice for text-to-image pipelines that is disrupting several industries.
- Build a U-Net to generate images from pure noise
- Improve the quality of generated images with the Denoising Diffusion process
- Compare Denoising Diffusion Probabilistic Models (DDPMs) with Denoising Diffusion Implicit Models (DDIMs)
- Control the image output with context embeddings
- Generate images from English text prompts using CLIP
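To make the objectives above concrete, here is a minimal sketch of the forward (noising) side of the denoising diffusion process: a clean image is blended with Gaussian noise according to a schedule, and the U-Net is later trained to undo this. The schedule values (`T`, the beta range) are illustrative defaults, not the course's exact settings, and NumPy stands in for the PyTorch code you will write in the labs.

```python
import numpy as np

# Illustrative linear beta schedule (values are common defaults, not
# necessarily the ones used in the course labs).
T = 1000
betas = np.linspace(1e-4, 0.02, T)   # noise variance added at each step
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)       # cumulative fraction of signal kept

def q_sample(x0, t, rng=np.random.default_rng(0)):
    """Sample x_t ~ q(x_t | x_0) in closed form: scale the clean image
    down and mix in Gaussian noise according to the schedule."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = np.ones((8, 8))                 # stand-in for a clean image
x_early = q_sample(x0, 10)           # mostly signal
x_late = q_sample(x0, T - 1)         # nearly pure noise
```

Because `alpha_bar` shrinks toward zero, late timesteps are almost pure noise; generation runs this process in reverse, which is what the U-Net you build will learn to do.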
Audience: Developers
- Good understanding of PyTorch
- Good understanding of deep learning
- Overview of objectives and course flow
- Evolution of architectures leading to diffusion models
- Conditioning diffusion models with context
- Using CLIP for text-to-image generation
- Exploration of the latest advancements in diffusion-based models
- Recap of core concepts and discussion of next steps
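The conditioning topics in the outline above are commonly implemented with classifier-free guidance: the model's noise prediction with a context embedding (for example, a CLIP text embedding) is blended with its unconditional prediction. The sketch below uses a toy stand-in for the trained denoiser; `predict_noise`, its behavior, and the guidance weight are illustrative assumptions, not the course's actual code.

```python
import numpy as np

def predict_noise(x_t, context=None):
    # Toy stand-in for a trained U-Net: it predicts the noise in x_t,
    # shifting its prediction when a context embedding is supplied.
    base = 0.1 * x_t
    return base + (0.5 if context is not None else 0.0)

def guided_noise(x_t, context, w=7.5):
    """Classifier-free guidance: extrapolate from the unconditional
    prediction toward the conditional one; larger w pushes the sample
    harder toward the context."""
    eps_uncond = predict_noise(x_t, context=None)
    eps_cond = predict_noise(x_t, context=context)
    return eps_uncond + w * (eps_cond - eps_uncond)

x_t = np.zeros((4, 4))
eps = guided_noise(x_t, context="a photo of a cat", w=7.5)
print(float(eps.mean()))  # → 3.75 with this toy denoiser
```

At `w = 1` this reduces to the plain conditional prediction; the course's CLIP section shows how real text embeddings take the place of the toy context here.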