A Practical Use of Flux Kontext
Testing Context Sensitivity in AI-Generated Imagery 📺

The Experiment
The weekend before last, we visited Munich’s Pinakothek der Moderne in search of mid-century design inspiration for our living room, an aesthetic we’re very much addicted to right now. The museum has an impressive collection of iconic pieces by legendary designers, including Dieter Rams and Herbert Hirche. It would be a dream to have a vintage Braun piece as decoration, but until that day comes, I decided to conduct a practical experiment with a Braun TV using Flux Kontext in ComfyUI. 😁
The Setup
- Original Context: The Pinakothek der Moderne museum.
- Target Context: Bring one of the pieces to life in a completely different environment.
- Challenge: Maintain the piece’s original characteristics while making the environmental change.
The Process
I photographed the TV with my iPhone, straightforward “point and shoot”. Back home, I fed the image into ComfyUI, loaded the Flux Kontext model, and wrote this prompt: “Place this vintage Braun TV in the middle of the desert, keeping the product unchanged.”
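If you’d rather script this step than click through the UI, ComfyUI also exposes a small HTTP API. Below is a minimal sketch of how that could look; it assumes a local ComfyUI instance on the default port, a workflow exported via “Save (API Format)” as kontext_workflow.json, and placeholder node IDs (“6” for the prompt, “10” for the input image) that you’d swap for the ones in your own export. The photo also needs to be in ComfyUI’s input folder already.

```python
# Minimal sketch: drive the same Kontext edit through ComfyUI's HTTP API.
# Assumes ComfyUI is running locally on its default port (8188) and that
# "kontext_workflow.json" was exported from the editor via "Save (API Format)".
# Node IDs "6" (CLIPTextEncode) and "10" (LoadImage) are placeholders —
# check your own export for the real ones.
import json
import requests

COMFY_URL = "http://127.0.0.1:8188"

def queue_kontext_edit(image_name: str, prompt_text: str) -> str:
    # Load the workflow graph exported from the ComfyUI editor.
    with open("kontext_workflow.json") as f:
        workflow = json.load(f)

    # Patch the prompt node and the input-image node in place.
    workflow["6"]["inputs"]["text"] = prompt_text    # CLIPTextEncode node
    workflow["10"]["inputs"]["image"] = image_name   # LoadImage node

    # Queue the graph for execution; ComfyUI returns a prompt_id.
    resp = requests.post(f"{COMFY_URL}/prompt", json={"prompt": workflow})
    resp.raise_for_status()
    return resp.json()["prompt_id"]

prompt_id = queue_kontext_edit(
    "braun_tv_museum.jpg",
    "Place this vintage Braun TV in the middle of the desert, "
    "keeping the product unchanged.",
)
print("queued:", prompt_id)
```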
What Made This Test Interesting
This wasn’t just about image manipulation; it was about context intelligence. The AI had to:
- Product integrity: Keep the TV’s proportions and design details intact. Don’t mess with the original look.
- Environmental adaptation: Adjust lighting and screen reflections to match the desert conditions.
- Atmospheric consistency: Add proper shadows, dust effects, and ambient lighting.
- Compositional logic: Make the placement look intentional.
Feeding the Result Back to the Workflow
The generation above came out convincing: the TV appeared well integrated into the desert landscape. But the body color didn’t match the original product; it generated a dark grey TV where I expected a pastel, light olive green. So I saved this generation and fed it back into the workflow, changing the prompt slightly. Since the TV was already placed in the desert, I only needed to request the color change. The prompt was: “Change the color of the body of the TV to pastel light olive green, move it further from the viewer, keep the product unchanged, do not change anything else.”
With that attempt, the result was much better: the TV had the color I wanted, and the environment was still convincing.
Final Result
To make the overall composition more interesting, I saved the latest result and fed it back into the workflow once more to add clouds. The prompt was: “Add clouds to the sky, keep the product unchanged, do not change anything else.” That gave me a result I can work with.
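Scripted, the whole chain is just a loop where each step edits the previous step’s output. The sketch below assumes the queue_kontext_edit() helper from the earlier snippet; wait_for_output() polls ComfyUI’s real /history endpoint, though the response parsing is simplified, and in practice you’d also copy each saved image from the output folder back into the input folder before the next pass.

```python
# Sketch of the manual feed-the-result-back loop as a script. Assumes a
# local ComfyUI server and the queue_kontext_edit() helper sketched above.
import time
import requests

COMFY_URL = "http://127.0.0.1:8188"

def wait_for_output(prompt_id: str) -> str:
    # Poll the history endpoint until the queued prompt has finished.
    while True:
        hist = requests.get(f"{COMFY_URL}/history/{prompt_id}").json()
        if prompt_id in hist:
            # Grab the first image any output node (e.g. SaveImage) produced.
            for node in hist[prompt_id]["outputs"].values():
                if "images" in node:
                    return node["images"][0]["filename"]
        time.sleep(1.0)

EDIT_CHAIN = [
    "Place this vintage Braun TV in the middle of the desert, "
    "keeping the product unchanged.",
    "Change the color of the body of the TV to pastel light olive green, "
    "move it further from the viewer, keep the product unchanged, "
    "do not change anything else.",
    "Add clouds to the sky, keep the product unchanged, "
    "do not change anything else.",
]

image = "braun_tv_museum.jpg"
for step, prompt in enumerate(EDIT_CHAIN, start=1):
    # Each pass edits the previous pass's output, just like saving a
    # generation in the UI and loading it back in. NOTE: SaveImage writes
    # to ComfyUI's output folder; copy the file into the input folder
    # (or adjust LoadImage) before the next pass.
    image = wait_for_output(queue_kontext_edit(image, prompt))
    print(f"step {step}: {image}")
```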
Why This Matters for Designers
This experiment revealed Flux Kontext’s potential for rapid prototyping and concept visualization:
- Product placement testing: See how designs work in different environments without expensive photo shoots.
- Marketing visualization: Generate lifestyle shots without location costs. Honestly a game-changer for small studios and freelancers!
- Design iteration: Test product concepts in various contexts quickly. See what works instantly.
- Client presentations: Show products in aspirational settings. Way better than traditional mockups.
The Takeaway
Flux Kontext doesn’t just change backgrounds; it understands context. The tool demonstrated very good awareness of how lighting, atmosphere, and environment interact with objects. For designers, this means faster iteration cycles and more dynamic presentation possibilities.
Quality Considerations
The TV didn’t turn out 100% identical to the original; if you’re very nitpicky, you’ll notice variances in the proportions, for example. Also, the results seemed to lose quality with each iteration: the speaker vents in the first generation (the dark grey TV) look more accurate than in the final one. I could do some post-processing to improve the quality of the final result, but that would also lengthen the whole process. ⏳
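If you want to put a rough number on that drift, one option is to compare each generation against the first with a structural similarity score. A quick sketch, assuming pillow and scikit-image are installed and that the filenames are placeholders; it’s a blunt instrument, since intentional edits (the added clouds, the repositioned TV) also lower the score, so cropping to the product region first gives a fairer comparison.

```python
# Hedged sketch: quantify iteration-to-iteration drift with SSIM.
# Filenames are placeholders for the saved generations.
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim

def load_gray(path: str, size: tuple[int, int] = (1024, 1024)) -> np.ndarray:
    # Normalize size and drop color so the images are directly comparable.
    return np.asarray(Image.open(path).convert("L").resize(size))

reference = load_gray("gen1_desert_grey.png")
for path in ["gen2_olive_green.png", "gen3_clouds.png"]:
    # Lower scores suggest more drift — but intentional changes count too,
    # so crop to the product region for a fairer measure of detail loss.
    score = ssim(reference, load_gray(path), data_range=255)
    print(f"{path}: SSIM vs. first generation = {score:.3f}")
```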
I posted the final result on my Instagram; you can check it out here. On that account I post experiments with different models and LoRAs in a more compositional way, bringing a blend of realism and graphism to the images.