Online
Alejandro Cartagena
The Latent Space
Nov 1, 2025 - Apr 19, 2026
Overview
"The project is as much about building and understanding the system as it is about the resulting visuals and how both create an opportunity to address the ideals of homeownership in contemporary Mexico."
- Hugues Bruyère and Alejandro Cartagena
An interactive live performance artwork designed and developed by Hugues Bruyère in collaboration with Alejandro Cartagena
What does it look like to dream of a home? Developed as part of an ongoing exploration titled Hands-on Hallucination, this project explores real-time image generation using a diffusion-based pipeline driven directly by physical interaction. Rather than generating images from static inputs, the system operates continuously, transforming live camera input into evolving imagery as it responds to each gesture.
At its core, the project uses a real-time image-to-image diffusion pipeline built on Stable Diffusion (SDXL Turbo) and the Hugging Face Diffusers library. A camera captures physical materials and gestures placed and manipulated in front of the lens, which are immediately processed by the system. Each incoming frame is reinterpreted according to established visual universes, including style, mood, formal constraints on composition and spatial structure, and guiding tendencies that bias how ambiguity is resolved.
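The text names SDXL Turbo and the Hugging Face Diffusers library; a minimal sketch of one image-to-image step along these lines might look as follows. The model ID is the one the text implies, but the prompt, strength, and step count are illustrative assumptions, not the artists' actual settings:

```python
# Sketch of a single img2img step with SDXL Turbo via Hugging Face Diffusers.
# The prompt, strength, and step values below are illustrative assumptions.
from PIL import Image


def prepare_frame(frame: Image.Image, size: int = 512) -> Image.Image:
    """Center-crop a camera frame to a square and resize it for the pipeline."""
    w, h = frame.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    return frame.crop((left, top, left + side, top + side)).resize((size, size))


if __name__ == "__main__":
    import torch
    from diffusers import AutoPipelineForImage2Image

    pipe = AutoPipelineForImage2Image.from_pretrained(
        "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
    ).to("cuda")

    frame = prepare_frame(Image.open("camera_frame.jpg"))  # hypothetical input
    out = pipe(
        prompt="suburban housing development, aerial view",  # hypothetical
        image=frame,
        strength=0.5,            # how far to drift from the input frame
        guidance_scale=0.0,      # Turbo models run without classifier-free guidance
        num_inference_steps=2,   # strength * steps must be >= 1
    ).images[0]
    out.save("generated.png")
```

In a live setup, `prepare_frame` would run on every captured frame before it enters the pipeline.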
Real-time performance is essential. The system is designed so that changes in the physical world produce immediate visual feedback, creating a tight loop between action and result. This latency-sensitive setup lets the AI function as a responsive medium that can be explored, tested, and played with in the moment, rather than as a tool for producing finalized images.
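Latency in such a loop is governed largely by how many denoising steps run per frame. In the Diffusers img2img pipeline, the effective step count is scaled by `strength`, so `strength * num_inference_steps` must be at least 1 for any denoising to happen at all; a small helper makes that tradeoff explicit (the capture loop shown is an assumed sketch, not the artists' implementation):

```python
import math


def min_steps_for_strength(strength: float) -> int:
    """Minimum num_inference_steps so that strength * steps >= 1, the
    condition Diffusers img2img needs to run at least one denoising step.
    Lower strength stays closer to the camera frame but needs more steps,
    which costs latency."""
    if not 0.0 < strength <= 1.0:
        raise ValueError("strength must be in (0, 1]")
    return max(1, math.ceil(1.0 / strength))


if __name__ == "__main__":
    import cv2  # assumed capture backend; any camera API would do

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # ... hand the frame to the diffusion pipeline here,
        # using min_steps_for_strength(strength) as the step count ...
```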
A key component of this collaboration, The Latent Space, is the use of custom-trained LoRAs, developed from selected datasets derived from Alejandro Cartagena’s work. These LoRAs function as behavioural biases embedded within the generative pipeline, shaping how ambiguity is resolved and how latent representations are navigated during inference. They act like lenses or grammars, influencing which structural patterns and spatial cues the system tends to reinforce or suppress. Through this process, training becomes an authorial gesture: defining an aesthetic logic through careful selection, editing, and exclusion rather than dataset scale.
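In Diffusers, such LoRAs are attached to a pipeline with `load_lora_weights` and blended with `set_adapters`; the sketch below shows the mechanism, with all file paths, adapter names, and scales as hypothetical stand-ins for the project's actual, unpublished weights:

```python
# Sketch of attaching custom-trained LoRAs to a Diffusers pipeline.
# Adapter names, paths, and scales here are hypothetical examples.


def adapter_args(scales: dict) -> tuple:
    """Split a {adapter_name: scale} mapping into the parallel name and
    weight lists that pipe.set_adapters() expects."""
    names = list(scales)
    return names, [scales[n] for n in names]


if __name__ == "__main__":
    import torch
    from diffusers import AutoPipelineForImage2Image

    pipe = AutoPipelineForImage2Image.from_pretrained(
        "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
    ).to("cuda")

    # Hypothetical LoRA checkpoints trained on curated datasets.
    pipe.load_lora_weights("loras/suburbia.safetensors", adapter_name="suburbia")
    pipe.load_lora_weights("loras/facades.safetensors", adapter_name="facades")

    # Blend the two "lenses": each scale biases how strongly that LoRA
    # steers the denoising process.
    names, weights = adapter_args({"suburbia": 0.8, "facades": 0.4})
    pipe.set_adapters(names, adapter_weights=weights)
```

Varying the adapter weights between frames is one way the system's "guiding tendencies" could be steered during a performance.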
In this context, the work can exist at multiple levels simultaneously: the LoRA itself as an encoded aesthetic bias or behavioural prior; the training dataset as a conceptual decision space; and the generated images as evidence of how that bias behaves when confronted with live, unpredictable input.
Physical interfaces remain central to the interaction. Instead of engaging through text prompts or traditional UI controls, the system is driven by tangible gestures and materials. Users do not direct the generation frame by frame, but instead shape the conditions under which images emerge, steering the system through continuous, embodied interaction.
Physical interaction is used not only as an interface, but also as a method of inquiry and documentary practice: an active way to probe, reveal, and better understand the behaviour, limits, and expressive potential of a real-time diffusion system through hands-on experimentation. Ultimately, the project is as much about building and understanding the system as it is about the resulting visuals and how both create an opportunity to address the ideals of homeownership in contemporary Mexico.