This Patreon is dedicated to helping creatives harness the power of the latest AI tools to enhance their artistic process, streamline workflows, and expand creative possibilities. Whether you're a designer, filmmaker, writer, or 3D artist, you'll find practical tutorials, in-depth breakdowns, and hands-on projects that show how AI can become a true creative partner.
Originally I wanted to make one big tutorial covering both training and generating, but I'm splitting it into two parts, and the second one should come "shortly" after. I know I'm kinda late to the party with it, but perhaps someone would like to use my flow :) (there is an attachment archive with the settings that I cover in this article).

Since I train on an RTX 3090 with 24 GB of VRAM, I am not using any memory optimizations. If you wish to try my settings but have less VRAM, you could try applying the known options that bring memory requirements down, but I do not guarantee the quality of the training results in that scenario.

Setup

I am using kohya_ss from the sd3-flux.1 branch -> https://github.com/bmaltais/kohya_ss If you already have a kohya setup but are training different models and you are not on that branch, I suggest duplicating the environment so that you do not ruin your current one (the requirements are different, and switching back and forth between branches might not be a good idea).
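If you want a quick sanity check that the clone you are about to train with really is on the sd3-flux.1 branch, a tiny script along these lines does the trick (the kohya_ss path is just a placeholder for wherever your duplicated environment lives):

```python
# Quick pre-flight check: confirm the duplicated kohya_ss clone is on the
# sd3-flux.1 branch before starting a long training run.
# KOHYA_DIR is a placeholder; point it at your own clone.
import subprocess
from pathlib import Path

KOHYA_DIR = Path.home() / "kohya_ss-flux"

branch = subprocess.run(
    ["git", "-C", str(KOHYA_DIR), "rev-parse", "--abbrev-ref", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout.strip()

if branch != "sd3-flux.1":
    raise SystemExit(f"Expected branch sd3-flux.1 but found {branch!r} - "
                     "switch branches or use a separate clone before training.")
print(f"OK: {KOHYA_DIR} is on {branch}")
```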
Learn advanced techniques for managing context in AI-assisted development workflows. We build custom skills, configure MCP servers, and create reusable prompt templates for consistent results.
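As a taste of the reusable-template idea, a prompt template can be as simple as a parameterised string; the fields and wording below are purely illustrative, not the actual templates built in the session.

```python
# Illustrative reusable prompt template (fields and wording are examples only).
from string import Template

BRIEF = Template(
    "You are assisting on a $project_type project.\n"
    "Context: $context\n"
    "Task: $task\n"
    "Constraints: keep the answer under $max_words words and list assumptions explicitly."
)

prompt = BRIEF.substitute(
    project_type="short film",
    context="moodboard and shot list are already locked",
    task="suggest three alternative lighting setups for scene 4",
    max_words=150,
)
print(prompt)
```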
Create production-ready video generation pipelines by connecting ComfyUI to external APIs like Runway and Kling. Learn authentication setup, request handling, and automated video processing workflows.
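The request/polling pattern behind that kind of pipeline looks roughly like the sketch below; the endpoint, payload fields, and response shape are placeholders rather than the real Runway or Kling API, so check each provider's documentation for the actual calls.

```python
# Sketch of the submit-then-poll pattern for an external video generation API.
# The base URL, payload fields, and response keys are placeholders, not a real provider API.
import os
import time
import requests

API_BASE = "https://api.example-video-provider.com/v1"  # placeholder endpoint
API_KEY = os.environ["VIDEO_API_KEY"]                    # keep keys out of your workflows
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def submit_job(prompt: str, image_url: str) -> str:
    """Submit a generation job and return its id."""
    resp = requests.post(f"{API_BASE}/generations", headers=HEADERS,
                         json={"prompt": prompt, "image_url": image_url}, timeout=30)
    resp.raise_for_status()
    return resp.json()["id"]

def wait_for_result(job_id: str, poll_seconds: float = 5.0) -> str:
    """Poll until the job finishes and return the video URL."""
    while True:
        resp = requests.get(f"{API_BASE}/generations/{job_id}", headers=HEADERS, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        if data["status"] == "succeeded":
            return data["video_url"]
        if data["status"] == "failed":
            raise RuntimeError(f"Generation failed: {data}")
        time.sleep(poll_seconds)
```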
After a brief hiatus, ComfyUI returns stronger than ever with major performance improvements and new node types. This deep dive covers the latest updates and shows you how to build efficient generation workflows.
Discover the philosophy behind vibe coding and how to effectively collaborate with AI tools like Claude and Cursor. We cover prompt engineering, context management, and iterative development strategies for creative projects.
Join our weekly exploration series where we push Flux to its limits with experimental prompts, unconventional workflows, and creative challenges. Each episode features community submissions and real-time problem solving.
Transform any 2D image into a textured 3D model using our custom ComfyUI workflow. We integrate TripoSR for instant mesh generation and show you how to export directly to Cinema 4D or Blender.
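As a rough illustration of the hand-off step only: once the mesh has been generated, you can convert and rescale it with a library such as trimesh before bringing it into Blender or Cinema 4D (file names and scale factor below are placeholders).

```python
# Placeholder post-processing step: convert a generated mesh to a DCC-friendly
# format and rescale it before import into Blender or Cinema 4D.
import trimesh

mesh = trimesh.load("triposr_output.obj", force="mesh")  # mesh written by the generation step
mesh.apply_scale(100.0)                                   # e.g. metres -> centimetres
mesh.export("triposr_output.glb")                         # .glb/.obj import cleanly into Blender/C4D
```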
Step-by-step guide to training your own LoRA models using Kohya SS. Covers dataset preparation, optimal settings for different styles, and how to integrate trained models into ComfyUI workflows.
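For context on the dataset layout: Kohya SS expects training images in sub-folders named <repeats>_<name>. A small helper like the sketch below (paths, trigger word, and repeat count are placeholders) copies a flat folder of images into that layout and creates starter caption files to edit by hand.

```python
# Sketch: arrange images into the <repeats>_<trigger> folder layout Kohya SS expects.
# All paths, the trigger word, and the repeat count are placeholders for your own setup.
import shutil
from pathlib import Path

SOURCE = Path("raw_images")      # flat folder with your prepared images
DATASET = Path("dataset/img")    # select this as the image folder in kohya_ss
REPEATS = 10                     # repeats per image per epoch
TRIGGER = "myconcept"            # trigger word / concept name

target = DATASET / f"{REPEATS}_{TRIGGER}"
target.mkdir(parents=True, exist_ok=True)

copied = 0
for image in sorted(SOURCE.iterdir()):
    if image.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
        continue
    shutil.copy2(image, target / image.name)
    caption = (target / image.name).with_suffix(".txt")  # one caption file per image
    if not caption.exists():
        caption.write_text(f"{TRIGGER}, ")               # starter caption, edit by hand
    copied += 1
print(f"Copied {copied} images into {target}")
```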
This introductory session covers the essential setup for Cinema 4D and Redshift rendering. We explore the interface, set up a basic scene, and configure Redshift for optimal performance on your hardware.