An Encounter with the Simulacrum

It began, as these stories often do, with an ordinary digital interaction: a message, a video link, and a friend’s casual inquiry, “Is this you?”

What followed was not merely confusion. It was a moment of epistemic rupture.

The face in the video bore a striking resemblance to my own. The inflection of speech, the rhythm of gestures, the eyes…so familiar, yet entirely alien. I had never recorded that message, never spoken those words, and never stood before the camera that had captured this image. And yet, the video seemed to insist otherwise.

This was not a case of mistaken identity. It was, in every sense, an encounter with a machine-generated likeness, a synthetic construct that had appropriated the semiotics of my personhood.

In that moment, a question emerged with unsettling clarity: Whose face is it, when the machine believes it is yours?


The Epistemology of the Fake: Between Data and Selfhood

Recent advances in generative AI, particularly in large multimodal models capable of rendering high-fidelity video avatars from mere textual prompts, have profoundly complicated traditional notions of identity and representation. Technologies such as text-to-video diffusion models, generative adversarial networks (GANs), and facial reenactment algorithms can now construct remarkably realistic human figures, ones that resemble not only idealized composites but sometimes specific individuals who were never asked to participate.

This phenomenon is not coincidental. It is structural.

These systems are trained on massive corpora scraped indiscriminately from public data repositories, social media platforms, and online video archives. And although the creators of these models often claim that outputs are “synthetic” and “non-specific,” the boundary between generalization and appropriation is permeable and unstable. Your likeness, your voice, your affect: these can be inferred, interpolated, and rendered by a model that has never directly “seen” you in full.

In other words, you can become part of the dataset without ever being asked.
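To make the mechanism concrete, consider the sketch below: a minimal illustration of latent-space interpolation, one common way generative models synthesize faces “between” the faces they were trained on. It is hypothetical throughout; the 512-dimensional codes and the commented-out generator stand in for whatever latent representation and decoder a real system uses, and no particular model or API is implied.

    # Hypothetical sketch: spherical interpolation (slerp) in a generator's
    # latent space. Every point along the path decodes to a plausible face,
    # which is how a model can render a likeness it never saw as any single
    # training image. `generator` below is a stand-in, not a real library.
    import numpy as np

    def slerp(z_a, z_b, t):
        """Spherically interpolate between two latent codes z_a and z_b."""
        a = z_a / np.linalg.norm(z_a)
        b = z_b / np.linalg.norm(z_b)
        omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
        if np.isclose(omega, 0.0):
            return z_a  # the two codes already coincide
        return (np.sin((1.0 - t) * omega) * z_a
                + np.sin(t * omega) * z_b) / np.sin(omega)

    rng = np.random.default_rng(0)
    z_face_a = rng.standard_normal(512)   # latent code of one training face
    z_face_b = rng.standard_normal(512)   # latent code of another

    # Halfway along the path: a statistically plausible "new" face that may
    # nevertheless land close to a specific, real person's likeness.
    z_composite = slerp(z_face_a, z_face_b, t=0.5)
    # image = generator.decode(z_composite)  # hypothetical pretrained decoder

The structural point is that the output is not retrieved from any single stored image; it is synthesized from a learned distribution, which is precisely why “we never stored your photo” is no guarantee that your face cannot appear.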


The Rise of Synthetic Personhood

We must now reckon with a new ontological category: the synthetic self. Unlike the classic doppelgänger, which is a metaphysical or narrative trope, the synthetic self is computational: a statistically plausible you, generated by algorithms, validated by training loss, and deployed without your knowledge.

What is ethically troubling here is not merely the act of imitation, but the erasure of consent, authorship, and control. One’s identity is no longer tethered to one’s agency. It becomes a latent vector in a model, a potential output awaiting activation by prompt engineers, advertisers, or malicious actors.

This reconfiguration of identity as something both extractable and reproducible raises profound concerns:

  • How do we adjudicate harm when the harm is not material, but symbolic?
  • What rights does one have over their computational double?
  • And who is held responsible when synthetic representations are used inappropriately, deceptively, or violently?

The Limits of Existing Protections

In most jurisdictions, particularly across the African continent, current legal and regulatory infrastructures are woefully inadequate to address these challenges. Traditional legal concepts such as copyright infringement, defamation, or impersonation are ill-equipped to handle the ontological ambiguity of synthetic identities.

Furthermore, existing data protection laws (where they exist) typically apply to “personal data” in the classical sense (names, birthdates, biometric scans), not to statistically derived likenesses or inferred representations.

This creates a juridical vacuum, wherein generative AI developers operate with impunity, and the individuals whose identities are implicated are left without recourse.

Even in more developed regulatory regimes such as the EU or California, deepfake legislation remains reactive, not preventative; fragmented, not holistic. The law struggles to keep pace with a technology that does not replicate reality; it manufactures believability.


Toward an Ethics of Digital Personhood

To navigate this evolving terrain, we must conceptualize identity not merely as a legal category, but as a relational and contextual construct, one that exists across embodied, social, and digital domains.

Drawing from African philosophies of Ubuntu and personhood-as-becoming, we argue that identity is not a static possession, but an emergent quality of interaction grounded in recognition, accountability, and relational ethics. From this perspective, unauthorized synthetic reproduction is not only an individual harm; it is a communal breach, a violation of the networks of trust through which personhood is enacted.

We propose four foundational principles for ethical engagement in this space:

  1. Consent Must Be Contextual and Ongoing
    The mere public availability of data does not constitute ethical license for generative use. Consent must be informed, specific, and revocable.
  2. Synthetic Likeness ≠ Ethical Neutrality
Just because a face is “not real” does not mean it is ethically benign. What determines the ethical weight of a synthetic likeness is its effect, not the intent behind it.
  3. Transparency Is a Moral Imperative
    Developers must disclose not only the data sources used in training, but the epistemic assumptions that guide model construction—particularly around realism, humanism, and representation.
  4. Digital Personhood Must Be Legally Recognized
    Policy frameworks must evolve to protect individuals from algorithmic appropriation—enshrining the right to one’s own likeness, even in inferred or synthetic form.

Reclaiming the Face

In the age of generative AI, our faces, long considered the most intimate and unique signifiers of self, are now part of a larger computational grammar. They can be summoned, replicated, and weaponized by systems that neither know us nor recognize our humanity.

To see your face used without consent is not simply unsettling. It is a rupture in the social fabric of recognition. It is to be made visible in ways you did not choose, by systems you did not authorize, for purposes you may never know.

We must meet this moment not with paranoia, but with rigor, imagination, and justice.

If AI is to become part of the social world, it must learn to respect its inhabitants, not merely as sources of data, but as persons whose identities are shaped by memory, relation, and dignity.

Because in the end, the face is not just a surface.

It is a story, and that story belongs to the person it represents.