01
Exp → Synthetic Smile with a High Confidence Score (ongoing)
The work aims to direct critical focus toward the problem of discriminatory reductionism of emotions in generative moving images, and toward their relation to ‘truth’: their claim to reality.
Recent software developments in computer-generated image making have made it possible for a wider audience to turn any still image into something animated, something that is ‘alive’.
Very often, these are simply human faces – turned into computationally predicted expressions. This seemingly small step or modification raises a number of questions that are technologically (and socio-politically) important:
Who can control, and how, what exactly will (computationally) happen in these moving sequences?
What kind of truth is being created in them?
What is the technological foundation for predicting that a neutral face will ‘behave’ in one way and not another?
Strictly speaking, whose gesture is it, exactly? These questions become pressing when we investigate the training datasets and the very operation of these tools.
An artistic experiment that uses a neutral self-portrait to test freely available online image-to-video converters quickly confirms something that is already becoming commonplace: a young, white, female face – like my own – is more likely to be generated smiling, cheering or even flirting, according to the model’s prediction, than the faces of individuals with differing features.
And this generative smile often carries a subtle arrogance – a statistical likelihood that the model, based on its training dataset, projects onto me.
The predictive logic of likelihood has already been shown to be deeply problematic in many cases: rigorously argued by Hito Steyerl in her article Mean Images (2023), which addresses the harms that statistical renderings can (and often do) cause through their normative and discriminatory logic; by Joy Buolamwini, whose research on facial analysis revealed biases in training datasets that fail to read non-white individuals’ faces and their gender; in the practice of Mario Klingemann; or in Trevor Paglen’s groundbreaking research practice on hallucinations in machine vision, the Adversarially Evolved Hallucinations (2024), to name a few.
Generative Gambling (what does the model think without a prompt?): image-to-video outputs from free, online generative video converters. Details from the promo video: Mind Maps / Mirror for ÚJ BÁLA (BE/HU), 2024
Generative Gambling (what does the model think without a prompt?): three different image-to-video outputs from free, online generative video converters – NEUTRAL SELF-PORTRAIT with a subtle arrogance, 2025