From Ruff to Fluff - Generate, Upscale, Inpaint!
How I went from a very fast SD1.5 generation to a high-res image using SD upscale and Flux inpainting techniques... 🐕

Why is this useful? For brainstorming ideas, this workflow works pretty well, since the first generation takes only a few seconds. You can generate batches of images, or even use XY Plot (see above) to test a few setups. After picking the best starting point, you can then dedicate some time to upscaling and refinement. It’s all about saving time while getting the best result you can.
- It takes a few seconds to load the base model the first time it runs, but after that initial load, subsequent generations move faster. With this in mind, the first image you see below was generated super fast: just 1.5 seconds after loading the SD1.5 base model. The image popped out at 512x512 pixels, which is the sweet spot where SD1.5 really shines.
- Then I moved to another very simple workflow where I only load an SDXL base model, load the first-gen image to serve as a reference, and pass it through Ultimate SD Upscale, which upscales the image 2x or 4x using an upscale model of my choice. The process divides the final image into tiles, so it can process parts of it separately and compose everything together at the end. This keeps the process fast while giving more attention to detail.
- The last step was passing the upscaled image through a Flux model, inpainting the eyes to get a better result there, since they needed some fixing.
- EXTRA: After Flux did its thing, I imported the result into Photoshop and cleaned it up a bit, removing some dirt I noticed in the sand and on the dog’s face. It was a very superficial refinement, and just because I love Photoshop too much and wanted to spend more time in it, I also improved sharpness and added a warmer color correction.
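The tiling idea behind Ultimate SD Upscale can be sketched in plain Python. This is just an illustration of the concept, not the extension's actual code, and the tile size and overlap values here are assumptions:

```python
def tile_grid(width, height, tile=512, overlap=64):
    """Compute (x, y, w, h) boxes that cover an image with overlapping tiles.

    Each tile is diffused independently at a size the model handles well,
    then the tiles are blended back together using the overlap regions.
    """
    step = tile - overlap
    boxes = []
    for y in range(0, height, step):
        for x in range(0, width, step):
            w = min(tile, width - x)
            h = min(tile, height - y)
            boxes.append((x, y, w, h))
            if x + tile >= width:
                break
        if y + tile >= height:
            break
    return boxes

# A 512x512 image upscaled 2x becomes 1024x1024; with 512px tiles and
# 64px overlap, that is a 3x3 grid of tiles to process.
boxes = tile_grid(1024, 1024)
print(len(boxes))  # → 9
```

This is why the upscaler stays fast at high resolutions: the model only ever sees one tile-sized crop at a time.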

About the order of the steps in this process: the upscale-then-inpaint approach works better for fixing small details like eyes. Why? When you upscale first, you’re giving the AI more pixels to work with before asking it to fix specific areas.
So, what do you think of the final result? Pretty cool, huh? Especially considering that we went from a very rough idea to a high-resolution, refined image in a few minutes.