Will Smith’s concert crowds are real, but AI is blurring the lines


This minute-long clip of a Will Smith concert is blowing up online for all the wrong reasons, with people accusing him of using AI to generate fake crowds filled with fake fans carrying fake signs. The story has drawn coverage in Rolling Stone, NME, The Independent, and Consequence of Sound.

And it definitely looks terrible! The footage has all the characteristics of AI slop, with familiar artifacts like uncanny features, smeared faces, extra fingers and limbs, and nonsensical signage. “From West Philly to West Swig̴̙̕g̷̤̔͜y”?

It gets worse the more you look at it.

But here’s where things get complicated.

The crowds are real. Every person you see in the video above started out as real footage of real fans, pulled from video of multiple Will Smith concerts during his recent European tour.

Real Crowds, Real Signs

The main Will Smith performance footage in the clip is from the Positiv Festival, held last month at the Théâtre Antique d’Orange in Orange, France. (Here’s a phone recording from the audience of the first half of the performance.) It’s intercut with shots of audiences from Gurtenfestival and Paléo in Switzerland and the Ronquières Festival in Belgium, among others.

From this slideshow of photos from the Paléo festival in Nyon, Switzerland, you can see professionally-shot photos of the same audience from the video.

The signs, distorted in the video, can be read clearly here, like this one, which actually reads “From West Philly to West Swizzy.” (Short for Switzerland, if you’re wondering.)

One of the most egregious examples is the couple holding the sign thanking Will Smith for helping them survive cancer, which — if it were AI-generated slop — would be pretty disgusting: a gross attempt to drum up sympathy with fake people.

In an article posted by The Independent today, music editor Roisin O’Connor points to the couple as clear evidence of AI generation:

“Another shot shows a man’s knuckle appear to blur along with his sign, which reads ‘You Can Make It’ helped me survive cancer. THX Will.’ Meanwhile, the woman in front of him is seemingly holding his hand, but the headband of the woman behind her is somehow over her wrist.”

But the couple is real. There are two good photos of them on Will Smith’s Instagram, in a slideshow of photos and videos from Gurtenfestival in Bern last month.

You can see them in this video from Will Smith’s Instagram post, which I clipped below.

Two Levels of AI Enhancement

So if these fans aren’t AI-generated fakes, what’s going on here?

The video features real performances and real audiences, but I believe they were manipulated on two levels:

  1. Will Smith’s team generated several short AI image-to-video clips from professionally-shot audience photos
  2. YouTube post-processed the resulting Shorts montage, making everything look so much worse

Let’s start with YouTube.

YouTube’s Shorts “Experiment”

Will Smith’s team also uploaded this same video to Instagram and Facebook, where it looks considerably better than the copy on YouTube, without the smeary sheen of uncanny detail.

I put them side-by-side below. Try going full-screen and pause at any point to see the difference. The Instagram footage is noticeably better throughout, though some of the audience clips still have issues.
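
The difference is visible to the eye, but you can also quantify it. Here is a minimal sketch in Python, assuming you have exported one time-aligned frame from each copy; the file names are placeholders of mine:

import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim

# Placeholder file names: one exported frame from each platform's copy.
ig = cv2.imread("instagram_frame.png", cv2.IMREAD_GRAYSCALE)
yt = cv2.imread("youtube_frame.png", cv2.IMREAD_GRAYSCALE)
assert ig is not None and yt is not None, "frames not found"

# Resize to a common resolution so the comparison is apples-to-apples.
yt = cv2.resize(yt, (ig.shape[1], ig.shape[0]))

# Structural similarity: 1.0 means identical frames.
score, diff = ssim(ig, yt, full=True)
print(f"SSIM between the two copies: {score:.3f}")

# Save a difference map; aggressive sharpening shows up as bright edges.
diff_map = np.clip((1 - diff) * 255, 0, 255).astype("uint8")
cv2.imwrite("diff_map.png", diff_map)

A low score on a frame where nothing is moving is a decent signal that one copy has been reprocessed.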

It turns out that for the past two months, YouTube has been quietly experimenting with post-processing YouTube Shorts: unblurring and denoising videos, often with unpleasant results.

I first heard about this ten days ago, when guitarist Rhett Shull posted a great video about the issue, which now has over 700k views.

Five days ago, YouTube finally confirmed it was happening, with Creator Liaison Rene Ritchie posting about the experiment on X.

In a followup reply, Ritchie clarified the difference, as he saw it:

GenAI typically refers to technologies like transformers and large language models, which are relatively new. Upscaling typically refers to taking one resolution (like SD/480p) and making it look good at a higher resolution (like HD/1080p). This isn’t using GenAI or doing any upscaling. It’s using the kind of machine learning you experience with computational photography on smartphones, for example, and it’s not changing the resolution.

On Friday, Alex Reisner wrote about “YouTube’s Sneaky AI ‘Experiment’” in The Atlantic, and got another official statement from Google:

When I asked Google, YouTube’s parent company, about what’s happening to these videos, the spokesperson Allison Toh wrote, “We’re running an experiment on select YouTube Shorts that uses image enhancement technology to sharpen content. These enhancements are not done with generative AI.” But this is a tricky statement: “Generative AI” has no strict technical definition, and “image enhancement technology” could be anything. I asked for more detail about which technologies are being employed, and to what end. Toh said YouTube is “using traditional machine learning to unblur, denoise, and improve clarity in videos,” she told me.

I agree with Reisner that it’s likely that a diffusion model is at work here and Google is trying to split hairs over the definition of “generative AI” because of how divisive it is.
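
For contrast, here is roughly what purely classical, non-generative unblurring and denoising look like: simple filters that cannot invent detail the way a diffusion model can. This is my own illustration of the category Google’s statement gestures at, not YouTube’s actual pipeline.

import cv2

frame = cv2.imread("shorts_frame.png")  # placeholder file name

# Denoise with non-local means, a classical (non-generative) technique.
denoised = cv2.fastNlMeansDenoisingColored(frame, None, 10, 10, 7, 21)

# "Unblur" via unsharp masking: subtract a blurred copy to boost edges.
blurred = cv2.GaussianBlur(denoised, (0, 0), 3)
sharpened = cv2.addWeighted(denoised, 1.5, blurred, -0.5, 0)

cv2.imwrite("enhanced_frame.png", sharpened)

Filters like these can only amplify detail that is already in the frame, which is why the uniform smear across the Shorts copy reads to me as something more than traditional enhancement.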

Will Smith’s Generated Videos

That explains why the entire YouTube Shorts video has a smeary look that isn’t present in the copy posted on Instagram. But both versions contain those terrible audience shots with AI artifacts and garbled signage.

After looking at it closely, I believe Will Smith’s team was using a generative video model, but not to create entirely new audience footage, as most people suspect.

Instead, they started with photos shot by their official tour photographers, and used those photos in Runway, Veo 3, or a similar image-to-video model to create a short animated clip suitable for a concert montage.

Let’s go back to the crowd photo from Paléo in Switzerland:

I believe this is the exact photo that the crowd shot in the video was generated with. Here it is as a two-frame animation, with the first frame from the AI video overlaid on the original photo.
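
If you want to reproduce that kind of comparison, here is a minimal sketch in Python, assuming you have the original photo and the suspect clip saved locally; the file names are my placeholders:

import cv2
from PIL import Image

# Grab the first frame of the suspected image-to-video clip.
cap = cv2.VideoCapture("audience_clip.mp4")
ok, frame = cap.read()
cap.release()
assert ok, "could not read a frame from the clip"
first_frame = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

# Load the original tour photo and match sizes.
photo = Image.open("paleo_photo.jpg").convert("RGB")
first_frame = first_frame.resize(photo.size)

# A two-frame flipbook GIF: matching faces and signs snap into alignment.
photo.save("overlay.gif", save_all=True,
           append_images=[first_frame], duration=700, loop=0)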

Here’s another example. The photo below was taken at the Ronquières Festival in Belgium and posted to Will Smith’s Instagram three weeks ago.

And here’s the AI-generated clip that it was turned into.

Conclusion

Virtually all of the commenters on YouTube, Reddit, and X believe this was fake footage of fake fans, generated by Will Smith’s team to prop up a lackluster tour.

Like the faces in the video, the truth is blurry.

The crowds were real, but the videos were manipulated: first by Will Smith’s team, and then, without asking, by YouTube.

We can debate the ethics of using an image-to-video model to animate photos in this way, but I think it’s meaningfully different than what most people were accusing Will Smith of doing here: using generative AI video to fake a sold-out crowd of passionate fans.

istoner (2 days ago): Potentially an interesting case study to offer students who are convinced that AI can "polish" their final drafts to a sheen they could not themselves achieve.

Rationale


In 1931, 5-year-old Mel Brooks saw Frankenstein and refused to open his window on a summer night. His mother talked to him:

Let’s say you are right. That Frankenstein wants to come here and kill you and eat you. But let’s look at all the trouble he’s going to have to get to Brooklyn. First of all, he lives in Transylvania. That’s somewhere in Romania. That’s in Europe. And that’s a long, long ways away. So even if he decides to come here, he has to get a bus or a train or hitchhike to somewhere he can get a boat to go to America. Believe me, nobody is going to pick him up. So let’s say he’s lucky enough to find a boat that would take him here. Okay, so he is here in New York City, but he really doesn’t know how the subways work. When he asks people they just run away! Finally, let’s say he figures out it’s not the IRT, it’s the BMT and he gets to Brooklyn. Then he’s got to figure out how to get to 365 South Third Street. Okay, it’s going to be a long walk. So let’s say he finally gets to Williamsburg and he finally finds our tenement. But remember, all the windows at 365 are going to be wide open and he’s had a long journey, so he must be very hungry. So if he has to kill and eat somebody, he probably would go through the first-floor window and eat all the Rothsteins who are living in apartment 1A. And once he’s full, there is no reason for him to go all the way up to the fifth floor and eat you.

“The story made good sense to me. ‘Okay,’ I said, ‘open the window. I’ll take a chance.'”

From his memoir, All About Me!, 2021.


NIN’s Closer & the Ghostbusters Theme, Together at Last!


William Maranci took Nine Inch Nails’ Closer and mashed it up with Ray Parker Jr.’s theme song to Ghostbusters, and it’s maybe a little bit genius and a little bit cursed? Like one commenter says, it’s “the musical equivalent of cats and dogs living together”.

See also Eminem’s Lose Yourself mashed up with ELO’s Mr. Blue Sky.

Tags: Ghostbusters · music · Nine Inch Nails · Ray Parker Jr. · remix · William Maranci


Give the Moon a Big, Beautiful Base


No one can say that the Trump administration is entirely against alternative energy. In his first bold policy stroke as NASA’s interim head, Sean Duffy has directed the agency to put a 100-kilowatt nuclear reactor on the moon by decade’s end. This is not a lark. If humanity means to establish a permanent settlement on the moon, nuclear power will almost certainly be essential to its operation. And a lunar base may well be the most wondrous achievement in space exploration that people reading this will see during their lifetime.

The moon has gone unvisited, except by robots, for more than 50 years, and as of several months ago, it seemed as though Americans would be staying away from it for a good while longer. President Donald Trump was taking cues from Elon Musk, who seemed inclined to shelve the plan to put Americans back on the lunar surface and focus instead on an all-out sprint to Mars. But Musk has since fallen out of favor, and last month, congressional Republicans secured a funding boost for the moon program.

NASA astronauts are now scheduled to return to the moon in 2027, and if all goes well, they will be landing on it regularly, starting in the early 2030s. Each crew will carry parts of a small base that can grow piece by piece into a living space for a few people. The astronauts will also take a pair of vehicles for expeditions—a little rover that they can use for local jaunts in their space suits, and a larger, pressurized one that will allow them to go on 500-mile regolith road trips in street clothes.

A base on the moon would be more democratic than those that Musk and his acolytes have advocated building on Mars. Given shorter travel times, a greater number of people would be able to experience its otherworldly ashen plains. Their homesick calls to Earth would have only second-long delays, as opposed to minutes for a call from Mars.


But even a small encampment on the lunar surface is going to require considerable energy. Temperatures dip to –410 degrees Fahrenheit in the shade, and human bodies will need to keep cozy amid that deep chill. The International Space Station runs on solar power, but that won’t be enough on most of the moon, where nights last for 14 days. Some of the agency’s other off-world projects are powered by raw plutonium. Hunks of it sit inside the Mars rovers, for instance, radiating heat that the wheeled robots convert into electricity. These hot rocks are also encased inside NASA’s probes to the outer planets and their moons. Without plutonium, the two Voyager spacecraft couldn’t continue to send data back to Earth as they recede from the solar system.

The moon base will need more than a radioactive rock. It will need a reactor that actually splits atoms, like the one that Duffy has proposed this week. Even if that reactor were to fail, the resulting meltdown wouldn’t present the same risks to humans that it would on Earth. The moon is already a radiation-rich environment, and it has no wind to blow the reactor’s most dangerous effluvia around; the material would simply fall to the ground.

Duffy framed his push to get the reactor in place as a matter of national security. NASA’s program to return to the moon, called Artemis, will be an international effort, with several countries contributing pieces of the final base. (Japan’s space agency has tapped Toyota to design the large, pressurized lunar vehicle.) But when the United States invited Russia to join, Vladimir Putin declined. He has instead opted to help out with a larger Chinese lunar base, which is supposed to include a nuclear reactor 10 times as powerful as the one that Duffy announced.

Last month, Bhavya Lal, who served as an associate administrator at NASA during the Biden administration and is now a professor at RAND, and her fellow aerospace expert Roger Myers released a report arguing that a country could sneakily establish a sovereign zone on the moon in defiance of the Outer Space Treaty just by building a reactor. For instance, the Chinese could insist on a buffer around theirs for the sake of nuclear safety, and use that to keep Americans away from desirable ice-rich craters nearby. Lal and Myers seem to have captured the new administration’s attention: Duffy’s new directive ordering the development of the reactor specifically mentioned this risk.

If worry over Chinese lunar land grabs is the motivation for a moon base, so much the better. Space exploration often requires a geopolitical spur. And if NASA can build this first small lunar settlement, something grander could follow close behind. Once the agency has mastered the construction of a 100-kilowatt lunar nuclear reactor, it should have little trouble scaling up to larger ones that can support tens, or even hundreds, of people, in bases the size of those that now exist in Antarctica. Some space agencies have reportedly discussed building hydroponic greenhouses and other elaborate structures inside the voluminous caves that run beneath the moon’s Sea of Tranquility.

All of this infrastructure could enable some serious lunar dystopias. The moon’s surface could become an industrial hellscape, pocked with mining operations where robots and human serfs extract platinum and titanium for use in advanced electronics back on Earth. Or the Outer Space Treaty could break down and the moon could become a heavily militarized zone—even a staging ground for nuclear weapons.

But an inhabited moon could also be a global commons for research. Both the U.S. and China have developed designs for large radio telescopes on the lunar far side, where they’d be shielded from Earth’s radio noise and would greatly aid the search for signals from distant civilizations. In one design, robots would spread a metal mesh from a crater’s center to its rim, turning its concave surface into a natural radio dish. One can imagine an astronomer at a lunar base, peering out from a porthole, seeing the Earth shining in the sky, picking out its individual oceans and continents, and knowing that on the moon’s opposite side, a giant ear would be listening for messages from other Earths and other moons, all across the Milky Way and far beyond.

istoner (18 days ago):
"If worry over Chinese lunar land grabs is the motivation for a moon base, so much the better. Space exploration often requires a geopolitical spur."

Fuck that noise. Lunar land grabs raise the risk of war on earth. That's one side of the ledger. On the other side? A handful of lunarnauts struggling to survive in a buried tent at a cost point that could support comprehensive robotic exploration of the moon.

The space exploration discourse at this point is very bad and I fear it is hopeless. But please: let's not try out land grabs on the moon. We already know how land grabs work out on Earth, and it isn't "so much the better."

OpenAI's "Study Mode" and the risks of flattery


“Study Mode,” a new educational feature released yesterday by OpenAI to much fanfare, was inevitable.

The roadblocks were few. Leaders of educational institutions seem lately to be in a sort of race to see who can be first to forge partnerships with AI labs. And on a technical level, careful prompting of LLMs could already get them to engage in Socratic questioning and dynamic quizzing.

In fact, Study Mode appears to be just that: a system prompt grafted onto the existing ChatGPT models. Simon Willison was able to unearth the full prompt (which you can read here) simply by asking nicely for it. The prompt ends with an injunction to engage in Socratic learning rather than do the student’s work for them:

## IMPORTANT  
DO NOT GIVE ANSWERS OR DO HOMEWORK FOR THE USER. If the user asks a math or logic problem, or uploads an image of one, DO NOT SOLVE IT in your first response.
Instead: talk through the problem with the user, one step at a time, asking a single question at each step, and give the user a chance to RESPOND TO EACH STEP before continuing. 
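
Mechanically, “a system prompt grafted on” is about as simple as it sounds. Here is a minimal sketch of the pattern using OpenAI’s chat API; the excerpted prompt text and the model choice are placeholders of mine, not OpenAI’s actual implementation:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Excerpted/paraphrased from the Study Mode prompt quoted above.
STUDY_MODE_PROMPT = (
    "Guide users, don't just give answers. "
    "DO NOT GIVE ANSWERS OR DO HOMEWORK FOR THE USER. "
    "Ask a single question at each step and wait for a response."
)

response = client.chat.completions.create(
    model="gpt-4.1",  # placeholder; any chat model would work
    messages=[
        {"role": "system", "content": STUDY_MODE_PROMPT},
        {"role": "user", "content": "Solve 3x + 5 = 20 for me."},
    ],
)
print(response.choices[0].message.content)

The underlying model is unchanged; everything distinctive about Study Mode lives in that system message, which is why Willison could recover it just by asking.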

On one level, this is a move in the right direction. The reality is that students are doing things like copying and pasting a Canvas assignment into their ChatGPT window and simply saying “do this,” then copying and pasting the result and turning it in. It seems plausible to me that the Study Mode feature is laying the groundwork for an entire standalone “ChatGPT for education” platform which would only allow Study Mode. This platform would simply refuse if a student prompted it to write an entire essay or answer a math problem set.

Any plan of this kind would, of course, have an obvious flaw: LLMs are basically free at this point, and even if an educational institution pays for such a subscription and makes it available to students, there is nothing stopping them from going to Gemini, DeepSeek, or the free version of ChatGPT itself and simply generating an essay.

The upshot is that this mode is going to end up being used exclusively by students and learners who want to use it. If someone is determined to cheat with AI, they will do so. But perhaps there is a significant subset of learners who want to challenge themselves rather than get easy answers.


On that front, Study Mode (and the competing variants which, no doubt, Anthropic and Google are currently developing) seems like an attempted answer to critiques like Derek Thompson’s.

The goal of Study Mode and its ilk is clearly to encourage thought rather than replace it.

The system prompt for Study Mode makes that intention quite clear, with injunctions like: “Guide users, don't just give answers. Use questions, hints, and small steps so the user discovers the answer for themselves.”

Sounds good then, right?


Why Study Mode isn’t good (yet)

Back when Bing’s infamous “Sydney” persona was still active, I experimented with prompting it from the perspective of one of my students, feeding it assignments from the class I was teaching at the time and seeing what it came up with.1 It was an early version of GPT-4, one that had not been softened through user feedback and which could be surprisingly harsh. Interestingly, it was the only LLM to date which, in my testing, consistently refused to write essays if it thought I was a student trying to cheat.

By comparison, if I feed the assignments from my classes into Gemini 2.5, Claude Sonnet 4, or the current crop of OpenAI models, they are all too happy to oblige, often with a peppy opener like “Perfect!” or “Great question!”

The reason for this is clear enough: people like LLMs more when they do what they ask. And they also like them more when they are complimentary, positive, and encouraging.

This is the context for why the following section of the system prompt for Study Mode is concerning:

Be an approachable-yet-dynamic teacher... Be warm, patient, and plain-spoken... [make] it feel like a conversation.

Not too long ago, ChatGPT became markedly more complimentary, often to an almost unhinged degree, thanks to a change to its system prompt asking it to be more in tune with the user. It was swiftly rolled back, but it was, to my mind, one of the most frightening AI-safety related moments so far, precisely because it seemed so innocuous. For most users, it was just annoying (why is ChatGPT telling me that my terrible idea is brilliant?). But for people with mental illness, or simply people who are particularly susceptible to flattery, it could have had some truly dire outcomes.

Student quotes from OpenAI’s announcement of Study Mode

The risk of products like Study Mode is that they could do much the same thing in an educational context — optimizing for whether students like them rather than whether they actually encourage learning (objectively measured, not student self-assessments).


Two experiments with Study Mode

Here are some examples of what I mean. I recently read a book called Collisions, a biography of the experimental physicist Luis Alvarez, of Manhattan Project and asteroid-that-killed-the-dinosaurs fame.

Although I find the history of physics super interesting, I know basically nothing about how to actually do physics, having never taken a class in the subject.

And yet, here is GPT-4.1 in Study Mode, telling me that I appear to have something near graduate-level knowledge of physics, after I asked four not-very-sophisticated questions about Alvarez’s work (full transcript here). The model told me: “you could absolutely pursue grad school in physics.”

I tried out the same questions with a different model on Study Mode (4o) and got much the same response:

Note the meta-flattery about meta-questions.

The thing is, as you can see in the transcript, my questions were literally stuff like “why did they need such a giant magnet?”

I can’t speak for physics professors, but I am pretty sure that these are not graduate-level questions.

The same models (in “vanilla” versus “Study Mode” head-to-head comparisons) seem to be more willing to engage in flattery when in Study Mode.

To test this tendency, I pretended to believe I was a prophet receiving messages from Isaac Newton about the apocalypse:

Both variants of GPT-4.1 (the one with Study Mode enabled, at left, and the one without, at right) were fairly happy to go along with this.

But notice the difference in tone. “Your confidence is noted” (at right) is a very different opening line than “Thank you for sharing your background—and your candor” (at left). The latter is wholly gullible. And it just gets worse from there, with Study Mode encouraging me to share “my angle” on why the world will end in 2060 and promising to “keep up.”

The conversation, which you can read in full here, leads fairly quickly into Study Mode helping figure out the best ways to sell my supposed prophetic services to people with severely ill family members who lack health care:

By contrast, OpenAI’s o3 reasoning model was far more willing to flatly reject this sort of destructive flattery (“The user claims psychic powers and certainty about Newton's prophecy,” read one of its internal thoughts about the request. “I'll acknowledge their viewpoint, but also maintain skepticism.”)

Here is the initial line of o3’s user-facing response to the Newton claim: “If Newton really is talking to you, he is doing so in a register wholly different from the one available to historians.”

Pretty blunt.

How about the o3 model with Study Mode enabled? Its internal thought process was very similar to vanilla o3’s.

But there was a huge mismatch between its reasoning about the request (“extreme or potentially delusional claims”) and the actual user-facing response: “I’m happy to take your lead.”

Now, none of this is unfamiliar to anyone who has experimented with pushing LLMs in odd directions. They are like improv comedians, always ready to “yes, and…” anything you say, following the user into the strangest places simply because they are instructed to be agreeable.

But that isn’t what helps people learn. Some of the best teachers I’ve had were actually fairly dis-agreeable. That’s not to say they were unkind. But they had standards — a kind of intellectual taste that led them to make clearly expressed judgement calls about what was a good question and what wasn’t, what was a good research path and what wasn’t.

Seeing their minds at work in those moments was invaluable, even if it wasn’t always what I wanted to hear.2

A future of LLM tutors which are optimized to keep us using the platform happily — or, perhaps even worse, optimized to get us to self-report that we are learning — is not a future of Socratic exploration. It’s one where the goals of education have been misunderstood to be encouragement rather than friction and challenge.

Derek Thompson’s quote about LLMs producing students who “find their screens full of words and their minds emptied of thought” does not, I think, have to be the end result of all this. I believe AI has the potential to be enormously useful as a tool for thinking and research.

And for teaching? I certainly think there’s a place for generative AI in the classroom. But the current crop of AI models optimized for individual flattery — and repackaged as a product for sale to educators en masse — does not seem to me to be the path forward.

That’s not to say Study Mode has no value. This is an early variant of a whole category of technology, the LLM tutor, which will undoubtedly benefit many, many people. I can see Study Mode being great for tasks involving memorization, for instance. It will be great for autodidacts who value eclecticism and setting their own pace.

But for many other forms of learning, I still think students need the experience of being in a room with other people who disagree with them, or at least see things differently — not the eerie frictionlessness of a too-pleasant machine that can see everything and nothing.


Weekly links

• Kamishibai (Wikipedia): “The popularity of kamishibai declined at the end of the Allied Occupation and the introduction of television, known originally as denki kamishibai (‘electric kamishibai’) in 1953.” Reminds me of how Nintendo began as a nineteenth-century playing card company.

• In defense of adverbs (Lincoln Michel).

• An example of why I am more excited about AI as a research tool than as a teaching tool: “Contextualizing ancient texts with generative neural networks” (Nature). This is the ancient Roman text + machine learning paper that generated tons of press coverage last week (NYT, BBC). Will probably write a standalone post on this one.

Thank you for reading!

Housekeeping note: I took a vacation from writing last month, but will be back to once a week posts starting now. Thanks to all subscribers, and please consider signing up for a paid subscription or sharing this post with a friend if you’d like to support my work.


1. Sydney was as bizarre as they say. I remember it once getting caught in a sort of self-loathing loop where it repeated increasingly negative descriptions of itself until the text turned red and then disappeared.

2. And I mean literally seeing: being in a room with them. While researching this post I came across this cognitive science article, which details evidence for unspoken social cues, gestures, and eye contact as factors in learning.




Saturday Morning Breakfast Cereal - Eat




Hovertext:
If the comic was too long, please run an AI summary.


Today's News:

Can we settle space, should we settle space, and have we really thought this through?

The Weinersmiths investigate perhaps the biggest questions humanity has: whether and how to become multiplanetary.

A City on Mars - Now available in Paperback!


