Final Fantasy Legend Nobuo Uematsu Rejects AI — The Hard Way Is the Reward

Perfection doesn’t make the song; human flaws do. Micro-timing wobbles, breathy cracks and gritty edges are what make music feel so satisfying.
Another week, another artist saying: cool tech, but let humans make the music. This time it is Final Fantasy legend Nobuo Uematsu laying out where he stands on AI and where he thinks game audio actually has room to grow.
Uematsu on AI: thanks, but no
In a new JASRAC Magazine interview (translated by Automaton), Uematsu says he does not plan to bring generative AI into his process. For him, the appeal of music is the person behind it, the scars and stories you can hear in the notes.
"I have never used AI and probably never will. It is more rewarding to go through the hardships and make something myself."
His bigger point: audiences connect to the human fingerprint. An algorithm does not have a life lived, and it cannot replicate the small, imperfect swings that make a performance feel alive.
Where game music is right now
Uematsu thinks graphics keep leveling up, but game music already hit a kind of finish line once studios could drop full-blown studio recordings straight into games. Once you can reach players with real musicians recorded in real rooms, you are already playing in the big leagues.
What could actually improve
He is not anti-tech. He calls out binaural audio as one path forward — Square Enix has already experimented with it in titles like Final Fantasy 10 — but he is skeptical about how much players will really demand that kind of hyper-3D sound.
The more practical frontier, in his view, is how smoothly game scores shift from one sound to another. If you have ever heard an awkward musical cut when gameplay changes, you know the pain. Better transitions, smarter crossfades, more seamless adaptive layers — that is the sort of plumbing he could see AI assisting with behind the scenes. But the core composing? He wants that to remain human.
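To make the "smarter crossfades" idea concrete, here is a minimal sketch of an equal-power crossfade between two pre-rendered music layers, written in Python with NumPy. The function name, the raw sample arrays, and the cosine/sine gain curves are illustrative assumptions, not anything from the interview; real game audio middleware layers beat-synced transition logic on top of this kind of blend.

```python
import numpy as np

def equal_power_crossfade(outgoing: np.ndarray, incoming: np.ndarray,
                          fade_samples: int) -> np.ndarray:
    """Blend the tail of `outgoing` into the head of `incoming` with an
    equal-power curve, so perceived loudness stays roughly constant."""
    # Equal-power gains: cos/sin ramps keep the summed power near 1.0,
    # which avoids the dip you get from a plain linear crossfade.
    ramp = np.linspace(0.0, np.pi / 2, fade_samples)
    fade_out = np.cos(ramp)
    fade_in = np.sin(ramp)

    blended = outgoing[-fade_samples:] * fade_out + incoming[:fade_samples] * fade_in
    return np.concatenate([outgoing[:-fade_samples], blended, incoming[fade_samples:]])

# Example: crossfade two synthetic stand-ins for "explore" and "battle" stems.
sr = 44_100
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
explore = 0.3 * np.sin(2 * np.pi * 220 * t)   # placeholder for a calm layer
battle = 0.3 * np.sin(2 * np.pi * 440 * t)    # placeholder for an intense layer
mixed = equal_power_crossfade(explore, battle, fade_samples=sr // 2)  # 0.5 s fade
```

The point of the sketch is only that the blend itself is engineering, not composition: the kind of behind-the-scenes plumbing Uematsu could imagine AI helping with, while the music on either side of the fade stays human-made.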
Why this matters beyond games
Whether you are scoring a game, a movie, or a series, the same truth applies: live players are messy, unique, and unpredictable, and that mess is the magic. Uematsu argues those tiny fluctuations and imperfections are exactly what make music satisfying, and they are not something generative systems can manufacture on purpose without losing the point.
Quick hits
- Interview: JASRAC Magazine, translated by Automaton.
- State of play: graphics keep advancing; music hit its stride once studio recordings became standard in games.
- Tech note: binaural audio is on the table (Square Enix has used it in Final Fantasy 10), but Uematsu questions audience demand.
- Practical upgrade: smoother transitions between cues and stems; AI might help with the engineering, not the composing.
- Bottom line: he has not used AI for music and does not plan to; he values the human background and imperfections you can hear in a performance.