Retraining Stable Diffusion from scratch is not feasible for most developers. However, in this talk I will show you how to “hack” the retraining with no budget or special resources, using a LoRA model.
Basic Python
Large generative models such as Stable Diffusion take enormous resources to train, making retraining and fine-tuning them less and less feasible. Fortunately, a technique called LoRA lets us bypass this obstacle by training roughly 10,000 times fewer parameters, creating a lightweight model dedicated to our use case.
In this talk I will walk you through how to retrain with LoRA so you can have your own dedicated Stable Diffusion model for generating anything from raccoon memes to boba tea commercials.
Using LoRA is free, does not require a deep understanding of ML or expensive GPUs, and I will show how to run it at no cost using tools such as Google Colab.
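To give a concrete picture of the idea before the talk, here is a minimal sketch of a LoRA layer in PyTorch (the class name, rank, and scaling below are illustrative assumptions, not code from any specific library): instead of updating a layer's large frozen weight matrix, we train two small low-rank matrices whose product is added to the layer's output.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA wrapper: freeze the base layer, train a low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)          # original weights stay frozen
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Only rank * (in_features + out_features) parameters are trained
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen path plus the small trainable low-rank correction
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale
```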
Oded Golden is a senior software engineer with extensive experience bringing products to life. He has an M.Sc. in computer science, specialising in deep learning. He works as an algorithm software engineer at Cujo AI and loves developing deep learning applications in his spare time.