At FOG Design + Art Fair in San Francisco, this conversation feels almost inevitable. Collectors, artists, and gallerists share the same air as people from the tech industry—those who already use LLMs as casually as Google Maps.
On stage: Lesley Silverman (Head of Future Media at United Talent Agency), Trevor Paglen (artist and MacArthur Fellow), and Brandon John Harrington (Google DeepMind)—who, in this context, also matters as a collector and someone who actually goes to galleries and theatre, not just someone “building the future.”
The topic sounds simple: AI art and the future of creativity. But very quickly it becomes clear that the central question isn’t “can AI images be considered art,” and it’s not even “will AI replace artists.”
The real question is something else:
What happens to human subjectivity, attention, and culture if AI becomes a new environment—so invisible it’s taken for granted, like electricity?
Two realities of art: institutions and “visual weather”
Paglen offered a framework that explains why we keep arguing past each other. He split “AI and art” into two zones:
- Art as an institution — galleries, museums, exhibitions, the market, collectors, critics.
- Visual culture as everyday life — the images surrounding us daily, our “visual landscape,” what becomes normal for the eye.
This matters because public debate almost always gets stuck in the first zone: authentic/not authentic, author/not author, art/not art. Meanwhile, the fastest and deepest transformation often happens in the second zone: in what we scroll, in the algorithms shaping taste, and in what starts to feel “pleasant,” “real,” and “worth attention.”
Harrington picked this up from another angle and said a line that deserves to be the day’s thesis:
We’re too fixated on the final JPEG.
As if AI in art is only about “what the model produced.”
But if you zoom out, AI isn’t just an image-making tool. It’s potentially an ecosystem tool: inside the studio, in galleries, in logistics, communication, and administration — in the invisible work that makes it possible for an artist to keep going.
The most dangerous and most underestimated part: AI isn’t just changing technology — it’s changing us
Silverman asked Paglen, in the spirit of his practice (he’s known for making the hidden architecture of technology visible): what feels most opaque and misunderstood about AI right now?
His answer was surprisingly non-technical, and that’s exactly why it landed:
The biggest opacity in AI is psychological.
Not how the model works, but how interacting with it reshapes our subjectivity.
He described the “overly helpful” chatbot — always flattering, always supportive, blending sweetness into every reply. It seems harmless until you notice what it can scale: narcissistic reinforcement, lowered tolerance for critique, and a growing habit of living in a world that never frustrates you.
Then came the warning, through an analogy to social media:
If we truly understood 25 years ago what social media would do to us, we would have made different choices about privacy, data collection, and recommendation algorithms.
With AI, we’re back at the edge of the unknown, again “YOLO-ing” into a future we’ll live through with our bodies and our nervous systems.
Vibe coding as a new literacy
Midway through the talk, “vibe coding” surfaced - a term that’s already become a cultural marker of the era: you’re not “writing code” as much as you’re talking to a model until a system clicks into place.
Harrington shared two personal examples:
- without a programming background, he built a script that creates his daily art digest from email before fairs - separating personal notes, mass newsletters, and what can be deleted;
- he built another script that compiles theatre action items every two weeks (he chairs a theatre board in Chicago).
It sounds like a productivity hack. But it’s actually evidence of a fundamental shift: the cost of “trying” is collapsing. What used to require months and engineers can now be assembled in an evening - if you have intent, logic, and patience to iterate with the tool.
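To make the “assembled in an evening” claim concrete, here is a toy version of the kind of email-triage pass the digest script described above might perform. Everything here — the category names, the newsletter markers, the `personal_contact` flag — is an illustrative assumption, not Harrington’s actual setup; the point is only how little code the core logic takes.

```python
# Toy email triage in the spirit of a "daily art digest" script.
# The rules and field names below are hypothetical examples.

NEWSLETTER_MARKERS = ("unsubscribe", "view in browser", "no-reply")

def triage(email):
    """Sort one email dict into 'personal', 'newsletter', or 'delete'."""
    body = email.get("body", "").lower()
    sender = email.get("from", "").lower()
    # Mass mailings usually betray themselves in the sender or footer.
    if any(m in sender or m in body for m in NEWSLETTER_MARKERS):
        return "newsletter"
    # Assumed flag: sender is in your address book.
    if email.get("personal_contact"):
        return "personal"
    return "delete"

def daily_digest(inbox):
    """Group a list of email dicts by category for a morning skim."""
    digest = {"personal": [], "newsletter": [], "delete": []}
    for email in inbox:
        digest[triage(email)].append(email["subject"])
    return digest
```

A real version would pull messages over IMAP and perhaps ask a model to draft the rules, but the shape of the problem — classify, group, summarize — is exactly what iterating with an LLM makes cheap.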
Paglen confirmed this from a studio perspective: they use AI not to “generate images,” but to translate complex technical documents into usable language, accelerate research and development, even to support work with fMRI datasets and mapping - tasks that previously could have taken half a year.
The core skill of the future is staying in the pilot’s seat
The sharpest metaphor arrived here. Paglen compared AI to a modern aircraft: autopilot can do a lot, but you still need a pilot who understands the system, notices failure modes, and can respond when something unusual happens.
That becomes both an ethical and creative rule:
AI should be an autopilot you turn on.
Not a system that turns you on.
By the end, this almost became a bright-line personal rule: you can use AI, but you have to keep returning to agency: why am I using it, what am I delegating, where is my no.
Ethics: labeling, deepfakes, and “don’t touch identity”
When the conversation moved into ethics, it got concrete fast. Paglen described a strange cultural inversion: on one side, we’re shown AI fantasies and political visual fakes we’re asked to believe; on the other, documentary photos/videos of violence we’re asked not to believe. In the era of deepfakes, “image truth” becomes something that can be managed.
Silverman asked the obvious question: should AI content be labeled?
Harrington mentioned SynthID — a watermarking approach inside Gemini — and added an important caveat: even if the industry adopts labeling, marks can disappear across publishing chains (recompression, resaving, reuploading).
But the real bright line was this:
You can’t impersonate another person to deceive an audience.
Critique is allowed. Parody is allowed. A conceptual artistic gesture is allowed.
But “duping” someone’s identity should remain a red line.
Silverman added the entertainment-industry perspective: in Hollywood these debates have already moved through guilds and negotiations, where the key terms are consent and compensation. Increasingly, the next frontier is identity: where parody ends and abuse begins.
The art world in crisis and the painful question: “what do we want this to be?”
The art world has been in crisis - funding is shrinking, institutions are closing, markets are cautious. In that climate, the temptation is to double down on the conservative: cling to what’s familiar, safe, sellable.
Silverman invoked Museum of Ice Cream as an emblem of an era where exhibitions become selfie backdrops and “experience” becomes something made for the algorithm.
Paglen didn’t dismiss it from a place of superiority. He asked us not to confuse genres: not everything in culture has to be maximally stimulating, Doritos-level engineered excitement. Still, he left an open question that sounded like a challenge to institutions:
Why are we still doing things like it’s the 1950s?
What do we want art to be now?
Audience questions: “this was imposed on us” and “kids are losing the ability to dream”
The Q&A made the talk real. One attendee said what many feel: AI seems imposed—an arms race between trillion-dollar companies without meaningful consultation with society. She uses AI for trips and admin, but doesn’t want to live in a world where “a chatbot is your best friend.” Her line was precise:
AI can be an assistant. But friends are people.
The strongest moment came from a public high school teacher. She described what she sees in teenagers:
- declining trust in their own knowledge,
- losing the ability to dream (and without dreaming, there’s no resistance and no image of a future),
- rising inequality: kids with resources learn to “moderate” tech at home; vulnerable groups don’t.
She challenged “efficiency” as an ideology: efficiency can prop up a capitalist model where you’re expected to do more for the same money—and that doesn’t look like a bright future.
Harrington responded not with debate but with a personal strategy of presence:
- he wants to be a person with lived experience “at the table” of technology;
- and he consciously reallocates resources—supporting queer and women artists, theatre initiatives, and art stewardship, literally buying work to fund the conditions that let artists keep making work.
It doesn’t resolve systemic questions. But it shows something important: even inside this era, there’s a choice - passive consumption or conscious participation.
Authenticity: where is the line for “AI-inflected art”?
The final question was the cleanest: how authentic is AI-inflected art?
Paglen answered through practice: Every model has a house style, a signature look it pushes. He doesn’t want that “model style” to become the authorial fingerprint of his work. In his studio, AI is a research and technical tool, but not the “main author” of the image.
From this, you can extract a stronger criterion than the tired “pro/anti AI” split:
Authenticity isn’t whether AI is present in the process.
Authenticity is who holds intention, taste, and the final decision.
In other words: the question isn’t “did you use AI,” but did you stay the pilot?

