LLMs & the Screenwriting Process
Insights from the AI on the Lot panel about enhancing human creativity with LLMs.
The following is a transcript of the Enhancing Human Creativity with LLMs Panel from AI on the Lot 2024 all about how LLMs are being used and can be used in the screenwriting process. You can watch the full video of the panel discussion on the AI on the Lot YouTube page here.
This panel was a practical application of LLMs in creative processes. What can the underlying next-token prediction of LLMs do well? What can it not do well? What parts of story structure can be codified and what parts can never be parsed by algorithms? Where should creatives draw the line? And how can artists most effectively use these tools in a way that enhances their process?
Speakers:
Matt Nix, Writer & Showrunner
Mark Goffman, Writer & Showrunner
Joe Penna, Filmmaker
Momo Wang, Animator & Director
Moderator: Dana Harris-Bridson, Editor-in-chief at Indiewire
Panel Transcript
Dana Harris-Bridson: I would like to ask the panelists to introduce themselves by name and what your interests are around AI and LLMs.
Joe Penna: I'm Joe Penna, I am a recovering YouTuber from back in the day. Now a filmmaker. I've done a few features, outside of the commercials, music videos. And yeah, for a while I was heading up the Applied Machine Learning team over at Stability AI. I'd say I'm very well entrenched in both the roles we're talking about.
Momo Wang: Hi, I'm Momo Wang. I'm a director of animation at Illumination Entertainment. We produce animated films like Minions, Despicable Me, and The Super Mario Bros. Movie. For my personal projects, I use AI a lot for filmmaking.
Mark Goffman: Hi, my name is Mark Goffman. I'm currently an executive producer of a TV series on NBC and Peacock called “The Irrational”.
I've mostly worked in television my whole career, from The West Wing to The Umbrella Academy and a bunch of stuff in between. And I just love playing around with generative AI for the joy of creation, and seeing where it can enhance and fill in some of the gaps in my own skill set.
Matt Nix: Hey, I'm Matt Nix. Showrunner. Probably best known for a show I created called “Burn Notice”. Um, thank you. One person knows it. Yes! You are responsible for my residuals. I've done a bunch of shows. Most recently I did a show called True Lies on CBS, which none of you saw. And right now I'm doing a bunch of television pilots and some movie stuff.
I guess I'd say it feels like we're on the verge of the world changing utterly. And I want to be in there seeing it happen rather than just waking up one day and realizing oh, it changed and I wasn't paying attention.
Dana Harris-Bridson: Well, let's start with you. Tell me a little bit about what you're able to do with LLMs that wasn't possible for you before.
Matt Nix: I think my last show was already out of production like a month before ChatGPT hit the scene. I haven't done anything in terms of things that hit the air that have used it. But I've done a lot of experimenting with my own past work. Could I write a Burn Notice? What can I do here? How would it have been useful for this? How would it have been useful for this other show that I did?
And the biggest thing that I've found is that I tend, as a writer, to think very structurally. I think a lot about the math of a story and how it works, and what the meta-structure of anything I'm doing is. Not every writer thinks that way, but I do. And it happens to be really well suited to working with LLMs.
And I had an interesting experience on my last show. I had been writing all of these documents for the writers, where I was explaining, 'This is how you do the show. This is the math of the show. It starts with something like this. It needs to do this. This is how the A stories and the B stories will rhyme with each other. This is how all of the story elements will be related.' And what I realized after ChatGPT came out was that I had essentially written a prompt, right? I was writing it for a writers' room. I was writing it for humans. And then I fed it into ChatGPT and I was like, oh, that's actually quite useful.
And so I think that was a big thing. If I just asked it to write a story for me, it was terrible. It was totally predictable. But once I gave it a very specific set of instructions, and a very specific way to relate all of the elements to each other, I found that played very well to the strengths of an LLM, and it was able to generate some really interesting things.
Which then required a lot of human interaction, a lot of editing, a lot of curating, a lot of rewriting, but in terms of what it could actually generate. It was fascinatingly useful. I, again, haven't had the opportunity to use it for something that's going, but in experimenting with things that I used to do it's been really fascinating to see what it would have been capable of.
Mark Goffman: Like Matt, I think there's a lot of opportunity to actually create a GPT, particularly on ChatGPT, with a set of instructions and rules. Every show has some, whether it's the way the characters behave, the type of worldview the show has, the general shape of an episode, and how the act breaks go.
How the main two characters interact with each other. And you can feed all of this in. And then, again, we don't use it on “The Irrational”, but just playing around. You can say something like, okay, our main character is a behavioral psychologist. Tell me a psychological experiment that's fairly obscure that has a surprising result to it.
And it'll spit out a bunch of them. Oh, the Sapir-Whorf hypothesis. That's really interesting. That's the idea that people have a different worldview based on the language they learned growing up. That'd be really interesting for story. Let's dig down more, and maybe two iterations in, I can relate that specific hypothesis to our main character, in very often the same way that I might with somebody on staff.
Now, a big difference is that you can do the research a lot faster. But the other thing is, it might be three in the morning and my wife's gonna punch me in the face if I wake her to talk about any of this. If I try calling anyone, they're asleep, and same thing, I'll get punched the next morning. A GPT is always awake. It's endlessly positive and patient with me. So those are some of the real benefits, and that's purely in the writing and the creativity part. But the other thing is I'm a pretty poor artist, and anyone who's seen me draw my stick-figure storyboards knows that.
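A custom GPT of the kind Mark describes is, at bottom, a reusable instruction block layered over a chat model. A minimal sketch of assembling one from show rules (the show name and every rule below are invented for illustration, not drawn from any real show bible):

```python
# Sketch: turn a show's "bible" rules into one reusable instruction block,
# analogous to the instructions field of a custom GPT.
# All rule text here is hypothetical.

SHOW_RULES = [
    "The main character is a behavioral psychologist.",
    "Every episode opens on a case that seems irrational at first glance.",
    "Act breaks land on a reversal of what the audience just concluded.",
    "The two leads disagree about method, never about goals.",
]

def build_instructions(show_name: str, rules: list[str]) -> str:
    """Join show rules into a single instruction block for a chat model."""
    numbered = "\n".join(f"{i}. {r}" for i, r in enumerate(rules, 1))
    return (
        f"You are a writers' room assistant for the show '{show_name}'.\n"
        f"Always follow these rules of the show:\n{numbered}"
    )

prompt = build_instructions("Hypothetical Show", SHOW_RULES)
print(prompt)
```

The same block can then be pasted into a GPT's instructions field or reused as a system prompt, so the rules travel with every request instead of being retyped.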
But this is now an incredible gift for me to be able to storyboard out a pretty complex scene and give it to a director or show it to our set designers and say, this is a 1.0 version of what I know you can make significantly better. And that's something that's really useful.
Momo Wang: Yes, mainly an LLM to me is more like a translator and assistant. I use LLMs to do research for proposals. Also, because my native language is Chinese, I first write a story in Chinese and then translate it to English with an LLM, which is easier for me and better than any translation software.
Also, I can ask questions to level up the story, and I can get inspiration from different LLMs. The models are made by different human beings, and those people come from different countries, different backgrounds.
Different people train different models, and there are different types. It's like they are LLMs from different schools, so they have different points of view. Sometimes I test the same ideas, the same questions, and I get a different point of view and different resources. Sometimes I feel like, oh, I got an interesting idea just from testing that.
Joe Penna: Right now a lot of these AI tools are a little bit like getting blood from a stone. Is that a thing Americans say? I think so. Yes, it is. Okay, good. I'd ask the LLM, but I don't have it with me right now. It's quite difficult because you're trying to go through ChatGPT, or Claude, or perhaps you're not even doing that and you're going through the API, where you can change things like the temperature and the system prompt. But still, people call these models nerfed; at least right now, they're designed in a way where getting what you want out of inference isn't the easiest thing to do.
And yeah, you can do things like loading in huge amounts of your writing and saying, hey, try as best as you can and do this. And now that Llama 3 or whatever is out, you can fine-tune on a giant amount of your own scripts. I've done this with my emails and my things. It gets my spelling mistakes. It starts to sound a little bit like me. So right now it's a little bit like having a very eager assistant that is incredibly clever but sometimes misses really obvious things, just like humans. And eventually it'll learn. So right now it's like, hey, I have a 1970s location that I need to be like this and that. Give me ten options. And the first three are great, and the other seven are, well, you didn't even follow my prompt, but hey, the first three are great. So it's the beginning of ideas that you can then say, oh, that works. And eventually we'll get to a point where, with enough thumbs up and thumbs down, it starts getting a little bit closer to what you need.
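The "temperature" Joe mentions adjusting through the API scales the model's next-token scores before sampling: low temperature concentrates probability on the likeliest token (safer, more predictable text), high temperature flattens the distribution (riskier, more surprising text). A self-contained sketch with made-up logits, no real model involved:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw next-token scores into sampling probabilities.

    temperature < 1 sharpens the distribution; temperature > 1 flattens it.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for four candidate next words.
logits = [4.0, 2.0, 1.0, 0.5]

cautious = softmax_with_temperature(logits, temperature=0.5)
adventurous = softmax_with_temperature(logits, temperature=2.0)

# The top candidate dominates at low temperature
# and loses ground at high temperature.
print(round(cautious[0], 3), round(adventurous[0], 3))
```

This is why the same prompt yields tame output at temperature 0.2 and loose, occasionally off-prompt output at 1.5 or above.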
Dana Harris-Bridson: One of the things I want to drill down into, and this kind of falls into the explain-it-to-me-like-I'm-five department, is this: you talk about feeding it your emails, feeding it your scripts, creating a baseline from which it understands you.
You're basically creating a custom machine. What is that process, literally? When you say feeding it scripts, we presume it's not literally feeding it scripts. What are you doing? Are you just uploading a bunch of scripts into something? We've all used ChatGPT, but I personally don't know how I would create a personalized Dana Harris-Bridson version of that.
Mark Goffman: For a continuing model, there's a setting in ChatGPT to create a GPT, and you can name it whatever you want, my version of Martin Scorsese, and feed into it as much information as you want. There are two ways to do it. There's the version of yourself: I'm going to simply feed it and upload the last ten scripts I've written, a bunch of pitch documents, my emails, everything else, so that it gets to know me, my style, my writing. And we'll call that the alt version of me.
But then there's what I think is probably more useful to me, which is more of a collaborator or co-pilot, somebody that doesn't have my exact voice and would have to be trained on something else, on other material.
Joe Penna: One of the issues right now is that these tools are all over the place in terms of how easy they are to use, how much you can change them.
It can be as simple as putting a bunch of your scripts into something like Claude, or Gemini, or ChatGPT, and saying, hey, read this and come up with a few paragraphs about how I write, and then write a system prompt for me. So then it does, and you can tweak it, have it come up with some example sentences that I would say.
And then that can be your system prompt, and it can be as easy as that. Or it can be as crazy as getting a bunch of H100s on cloud compute and training your own custom version of it, which may or may not be better. And it depends on how you're going to do the chat, because it still has this chat-based kind of understanding.
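The lightweight version of the workflow Joe describes ends up as an ordinary chat request: the derived style description rides in the system message, and the actual ask goes in the user message. A sketch of the payload shape most chat-completion APIs expect (the model name and style text below are placeholders, and nothing here is sent to any service):

```python
def build_chat_request(style_summary: str, ask: str,
                       model: str = "some-chat-model") -> dict:
    """Assemble the messages payload used by typical chat-completion APIs."""
    return {
        "model": model,  # placeholder name, not a real model ID
        "messages": [
            # The system message carries the persona/style distilled
            # from the writer's own scripts.
            {"role": "system", "content": style_summary},
            # The user message carries the actual request.
            {"role": "user", "content": ask},
        ],
        "temperature": 0.9,  # a little looser for creative work
    }

request = build_chat_request(
    style_summary="You write terse, dry dialogue with frequent ellipses...",
    ask="Give me ten options for a 1970s location description.",
)
print(request["messages"][0]["role"])
```

The point of the split is reuse: the system message stays fixed across a whole project, while only the user message changes from request to request.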
So yeah, it can be really technical, or it can be ChatGPT or Claude. They're all good at certain things. I found that something like Gemini is really good at, 'Hey, I have all of the dialogue that one character says. In that voice, I want to say something like this in the story. And she's trying to be duplicitous; use her voice and give me a few options of what she would say.' Gemini is the best at that. And then Claude is the best at this other thing, and ChatGPT is the best at something else. And then OpenAI just released GPT-4o, which is better at this new thing. You always have to be checking different things out. Which is not great for a production, because a production has deadlines, times where it's, okay, this is due, it needs to be done. So instead of doing a tech freeze, the first person who works on an AI-only movie is going to have times during the production where they're like, okay, we'll unfreeze everything and then try everything again.
Otherwise, it just ends up being dated or ends up being boring, right?
Matt Nix: I'd also just point out one of the challenges when you think about the data set of what you've done, right? For example, I did a show called The Gifted, which was an X-Men show on Fox, right? Say I'd been feeding those scripts in. It would be able to discern certain aspects of the show. It would be able to understand, okay, this person talks this way. Matt, when he writes, overuses ellipses. I use a lot of ellipses, and writers make fun of me for it, but I'm sure ChatGPT or any LLM would do a great job of imitating my use of ellipses and dashes, right? Or even knowing basically how long my scenes are, which, that's a thing.
But the question really is, how useful is that given the fact that if you're doing a creative project that has any legitimate creative life, it's evolving as you go, right? And it's also, in the case of what Mark and I are doing, it's evolving a lot in response to notes and priorities from a network and stuff like that.
So in the first season of “The Gifted” there was a whole set of things that they were trying, that the network wanted from the show in terms of what the superheroes do and stuff like that. In the second season, This Is Us was a very popular show and Fox really wanted everything to be This Is Us. Like literally everything should be This Is Us now. And so I was like, I don't really know that it's a good idea to do a superhero show where the goal is to make everybody cry.
And they were like, shut up, it's This Is Us. And I was like, okay, right? If you want to go back and watch the second season of The Gifted, you will see a lot of things that This Is Us did. Now, when I think about how would I have used LLMs in that time, I remember what I actually did.
I sat down with multiple episodes of This Is Us. I abstracted them using Microsoft Excel. I figured out the math of a This Is Us episode. I called the creator of This Is Us and said, Hey, can I take you out to breakfast, and talk you through how I think you write a This Is Us episode? He said, sure, that sounds like fun. And I talked him through the math of a This Is Us episode, and he was like, yeah but you missed this. I was like, thank you very much. Can I use all of this on my show? It's a superhero show, no one will ever know. I just told all of you, but he was like, sure, no, who cares? And he was like, pitch me your story. And he had some notes. And then I went in to the network and they were like, that's so great. It's just This Is Us. I was like, I know.
Now, I would probably still do the Excel part of that. I'd probably still break down This Is Us that way. But it would be a lot simpler, and I could probably get more granular in terms of how a This Is Us episode works. So that would be a really useful analytical tool.
But my point is, now let's say I feed in all of the information, right? All of the scripts for the first two seasons. The show got cancelled after two seasons, but let's say we were doing a third season, right? I would have to give the LLM a lot of information about the studio notes. And oh, by the way, that one actor? Total pain in the ass, won't do scenes on Fridays, so we can't put them in anything. Like, this guy has a drug problem! If I just trained it on the scripts, it wouldn't know not to write scenes for those two actors who were really close when they were sleeping together, and then they broke up, and now you can't put them in scenes together.
It would just start writing scenes for them and I'd be like no, don't do that, don't do that. A lot of calls from agents.
So the point is, the creative project is evolving. The show that it was in the first episode is not the show that it is in the 22nd episode. And an LLM you train on it is naturally gonna spit out some sort of average of everything you've done. So it's not that you couldn't find a way to make it useful, but only the most formulaic of shows would yield easily to a formula that always works.
Dana Harris-Bridson: Is it fair to say that LLMs are conservative creatures? I'm calling them creatures, but in terms of the way they think, they're not likely to come out with outside-the-box concepts.
Matt Nix: Well, you can tell it, you can say, give me concepts that are outside the box, and it will do its level best. But once you get out of the box, there are a lot of directions you can go. It's up to you to define what you mean by out of the box. Which kind of gets back to the idea that, as writers, we have formulas in our heads, right?
Yeah. We have our own prompts that we're giving our own brains. And understanding that prompt is, I think, more important than feeding a bunch of data into a machine so that it can spit out what you already did. It's understanding your own evolving creative process. What formula are you using? You think about someone like Woody Allen, right?
His jokes have a structure. He says something and then he undercuts it. He says a self-deprecating thing after and he says it in a certain voice. You could master that formula but also that formula evolves. And I think that yes, it is a conservative thing. To the extent that you're just giving it data and asking it to spit that data back out at you, all it can do is create a formula based on what it was given.
And that might be less useful in terms of coming up with something new and interesting.
Dana Harris-Bridson: Obviously there's a lot of anxiety around AI in general. Studios are very keen to see what AI can do for them in terms of saving money, because that's what studios always want to do. And of course that translates to anxiety around people losing their jobs, because that's how studios save money. But as our keynote speaker noted, AI can save you time; it can't save you money. What you do with that time is then up to you. What I'm hearing, though, in terms of what you're describing with LLMs, is that there are things they can short-circuit, but there's still a ton of thinking that has to go into crafting these ideas. What do you think their actual capacity is for saving time and money? Or is it just a different path to creating your scripts?
Joe Penna: Yeah, it's interesting. We can talk about what is available and useful now versus shortly, right? You keep hearing that this is the worst it'll ever be, right?
It can only get better from here. Right now, what can it replace? People say, oh, the film industry is totally going bonkers right now because of AI. And I'm like, name a few jobs that have been replaced by AI, other than maybe some readers, which is an issue, right?
Because being a reader is how you make great writers; a lot of people start in Hollywood by being somebody's reader, right? So that's an issue. Doing coverage and things like that, yeah. Maybe some early concept art, because people are using Midjourney now, sure, or Stable Diffusion, something like that. But other than that, not really, right? Eventually, yes, it'll replace writers' assistants and more than that. And you have to think about the reinforcement learning from human feedback, which is going to be so prevalent here. Yes, right now the model is averaging out what everyone has always created. If you make a map of everything that's been created, the craziest things are going to be on the outside of the map; all the shows that were one of a kind are going to be over here, and all the shows from the 90s are going to be over there.
But out of those, you can segment them into little areas and figure out which were the best of those shows and which were the worst, the ones that didn't take the guy out for breakfast, tried to replicate it anyway, and failed, right? That's a mathematical function.
The AI's gonna be really great at that. Yeah, if you're making images, right now it's creating an average. But what happens when you tell the model, by the way, these images are the ones that won the Oscar for best cinematography; make those instead, create a general vector in those directions, right?
And those won the Oscar because they were impactful and meaningful and because it's really freaking hard to do those kinds of things. But when you teach the AI model these, this is the general direction that humans are moving, that stories are evolving, this is what people love. Then it's going to know what the next This Is Us is.
It can guess what that is, right? With some human intervention in there, for sure. But when that starts happening is when we have to be cautious about who's telling our stories. Eventually, I do think that computers will be better than humans at telling our stories. I don't think that in ten years I'm going to want to be driven around by anybody in a taxi; I'm going to want to get into a self-driving car, right? Maybe if I have to go into surgery, I want the surgical robot, as opposed to some human who was drinking last night and is tired today and bored, or whatever. So am I not going to want to watch a movie that was made by a human, because it's flawed, right?
That's a conversation to be had. Or, do I want to watch AI racers drive around? Personally, I like to pretend I'm Max Verstappen back there, to know that there's a human doing that.
Momo Wang: I would like to share two parts. One part: in my studio, AI is not really a part of the production. In our experience, the best quality of work has always come from working with the best artists.
We're working on long-term production projects; feature films take two to five years. So we work with software and tools we can trust, designed for professional production. But people are really interested to learn what AI is and whether it can really help the production workflow, the pipeline.
But on the other side, in my personal experience, for my personal projects, which cross China, Asia, and the U.S., all the clients and partnerships are really interested in working with AI, and there's more flexibility to work with AI on that side. So I feel like there's still a lot going on. The technology is evolving so quickly, but as a filmmaker, like you just said, the story, the creativity, is most important. That's why right now people are always saying, oh, look at this picture, that's an AI artist, that's AI art.
That's because they can feel the AI art style. That happens when you work with AI and you didn't think really clearly about what your original idea is. When you work on it, you want to get something here, but AI gives you something there, and you feel, oh, not really a big difference, okay, I can go with that.
And then AI is leading your idea. That's not the right direction. When I'm working on a project right now, I usually think really clearly first; I put AI to one side and just think about what my idea will be, what it will look like. And then I start using different tools to visualize it.
That's why right now the 100% AI-generated picture-to-video or image-to-video gives results that aren't the best quality. The best quality is usually live action plus AI, with AI involved at maybe 10 to 30 percent. That's the best quality for right now.
Mark Goffman: Yeah, I think that right now it's an evolutionary step, a major one, a leap in the creator economy. And look, creation, self-expression, is built into human nature. It's what we want to do. We'd like to express what makes us who we are.
And I think that self-expression has a desire to get out. Being able to create shows and have them on the air, that's the NFL. There are very few people who can do that well and consistently, and I don't think that's going to change, because the people who use LLMs at that level will just get that much better.
But there's a democratization of art and artistic expression that I think is now available, so that everyone can feel like they can create much better, more joyous art more quickly. Maybe those works won't have several seasons or hundred-million-dollar budgets, but that's fine, because those people have a different level of both sophistication and expectation for art.
I started playing around with Udio, which is this amazing song generation app. And suddenly, I'm creating a song for my wife's anniversary. I don't think any of you guys are going to be hearing it anytime soon, but maybe it'll take off on TikTok. I don't know. June 24th, look out for it.
That's not what I'm trying to do with it, but there's the Drake song that's out there that was created on Udio. People who already have a skill set are going to be able to level up that much better.
Matt Nix: In terms of job losses, and obviously it's a huge concern, I think we have to remember how much money people are spending and what the stakes are in an arms race between a bunch of people who are competing for eyeballs. When the machine gun was invented, they didn't say, well, this machine gun can shoot as many bullets as 50 guys, so let's just send one guy out with one machine gun and send everybody else home. Everybody got a machine gun.
They all got machine guns, right? And, did wars get less expensive when you could just have the one guy and the one machine gun? No, they got much more expensive. It was just, everybody got more ambitious, everybody wanted to do more. And when I think about what is this going to do to Hollywood, I look back to the 90s when the first digital instruments were coming out, right? And before that, if you wanted a score, go back and watch an episode of Starsky and Hutch, right? Very simple score, right? Just super simple, right? A guy with a guitar and some drums.
And there were studio musicians who were playing that. It was composed, and there was somebody who transcribed it so there was sheet music for all the musicians. And then along come artificial instruments, and now one person can play everything. Okay? Now, in that transition, all those studio musicians lost their jobs, and the guy who transcribed the sheet music, he lost his job, right?
It was the 70s, so it was probably a guy. But when you look at how much money goes into the production of scores, that's gone up, right? Now, if they reboot a show like Mighty Morphin Power Rangers, it's going to have a full orchestra. It's going to have an enormous amount of John Williams-esque music, right?
Because that's now possible. So in the arms race of Hollywood, I think studios are very excited about saving money and being able to, like, fire a bunch of people. But what they're forgetting is that has literally never worked in the history of mankind. Not one time has a new technology come out and everybody said, okay, we'll just try not to do anything new so that we can all save money, right?
And similarly, once upon a time, before everybody could have visual effects, there were no superheroes on television shooting lasers out of their hands and stuff, like on my show The Gifted, right?
That was all technology, right? And it wasn't as if, when the technology came out, everybody decided, okay, let's leave the superhero things in the cinema, they can do the superhero things there, and in television we'll still just do car chases and all the stuff that television used to be good at.
No, now we do superhero things on television too. And the arms race continues.
Dana Harris-Bridson: I would love to hear from all of your perspectives. Obviously we've got four really smart people here who are in the thick of using AI tools and LLMs. It's not just part of the way you work; it's part of the way you think.
How does that compare to your peers? Do you feel like outliers in that regard, or are you seeing more people go down this path? There's a generally held belief that a lot of people are using AI and only a few are admitting it. But I'm curious what you're seeing among your friends and peers, and what their perspective is.
Mark Goffman: I think, to some extent, at the professional level there is a very hush tone about using it. First of all, the new contract requires you to ask permission of your studio before you can actually use it, and a lot of studio policies are simply that it's not allowed. And I don't even know; that sounds very difficult to police.
But also, Google's now integrating Gemini into Google Search, and so much of LLMs is essentially morphing into search and ways of doing research that I don't know how you can make that distinction. But right now, at least, I know a number of studio policies literally say you cannot touch it.
If you do, it's because there might be a chain-of-title issue, and you have to sign a certificate of authenticity that you wrote this by yourself. So there's a whole opening of liability in anything that you write. So people just aren't talking about it. And I think that's a problem that needs to be fixed.
Joe Penna: Yeah, you're right. The thing is, you can't stop it, right? Coders using any of the Copilot kind of things ran through the same thing, where companies were like, you can't use Copilot because it might copy somebody else's code, or because it might be trained on our code.
So then our code ends up somewhere else. And yeah, the data provenance is definitely something that needs to be talked about, needs to be fixed, needs to be realized. But eventually it will be fine. Eventually synthetic data will be usable, and eventually these models will be great without any of the ethical issues.
You can't stop people from using this. It's gonna end up in WriterDuet in two weeks; the guy's right there. It's gonna end up in Google; it's already in Google Docs, right? And then there's also the question of the court cases: some are gonna come out saying certain things and some are gonna say other things, and it's likely gonna end up in the Supreme Court, about whether or not you can train on the data and who owns the other side of it. But you tell me: when GPT-2 or GPT-3 first started coming out, I just said, let's see what happens. I tried to fill up the entire context window with the majority of my screenplay, and I just wanted one last final sentence, right? The last thing. And it had to be, like, this amazing last thing that the person says, and then cut to black, directed by Joe Penna.
People crying, and I'm like, it needs to be great! And so I hit enter, and I'm waiting for the model to load. So I'm like, all right, I'll let that go. Then I go and take a shower, and my wife is like, we're in a drought. And I'm like, just a minute, I'm thinking about what this last thing is.
And then I go on a walk, and when it clicks, I run back. I open up my laptop again and I'm like, okay, I'm going to write this down. And the thing the AI came up with was a little worse, but not by much, and I would have gotten there way faster. Do I own that now? I would have gotten there anyway, right?
Can I not copyright my script because just the last little bit of it was done with AI? I think that's ridiculous, right? Now, conversely, I then got way too confident that GPT-3 was gonna be great, so I tried to have it write this other scene that I was having trouble with. And it started going off, with the two characters, the man being like, hey, my gosh, I'm so glad this happened, and I wish we had met in Tokyo much earlier.
And I do want to say that I love you. And then the woman's saying, oh, I have a hard time saying that, but I do love you too. And I'm like, stop, that's her dad! What are you doing, man? And yeah, it'll have issues like that. But within very specific contexts, it's very helpful right now.
Yeah.
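An aside for technically curious readers: the trick Joe describes, cramming as much of the screenplay as fits into the context window and asking for a single closing line, amounts to truncating the script to a budget and appending an instruction. A minimal sketch, assuming a word-based budget (real models count tokens, not words) and leaving the actual model call out, since that varies by provider:

```python
# Sketch of the "fill the context window, ask for one final line" trick.
# The word-based budget is an illustrative stand-in for token counting,
# and no real model API is called here.

def truncate_to_fit(script: str, budget_words: int) -> str:
    """Keep only the most recent `budget_words` words of the script."""
    words = script.split()
    return " ".join(words[-budget_words:])

def build_prompt(script: str, budget_words: int = 3000) -> str:
    """Tail of the screenplay plus an instruction for one closing line."""
    tail = truncate_to_fit(script, budget_words)
    return tail + "\n\nWrite the single final line of dialogue, then CUT TO BLACK."

demo_script = "INT. SHIP - NIGHT. " + "And then everything changed. " * 10
prompt = build_prompt(demo_script, budget_words=20)
print(prompt)
```

In practice you would send `prompt` to whatever completion API you are using; a proper implementation would count tokens with the provider's tokenizer rather than splitting on whitespace.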
Momo Wang: Yeah, I definitely feel the same. But most people I'm working with are basically already interested in AI. They'd like to learn and know more about it. I think that's important, because usually people are scared of things they're not familiar with, not used to dealing with.
So I think educating people about the technology is a really important thing for every AI filmmaker to work on. I was also thinking about how to see AI artists versus artists. There's no clear boundary there, because Adobe has built AI tools into everything: Firefly, and Photoshop has Generative Fill.
So once anyone who has a computer installs Adobe software, Photoshop, you're basically an AI artist. So there's no clear boundary there. And it's different in different countries, too. It's interesting: in the past month, a lot of friends from China, especially studios, came here to look for AI production houses, production teams.
Yeah, the new technology is definitely really appealing there. But there's some difficulty using it in a foreign country. For example, in China, so many software tools and websites are banned. I was in China for a vacation last year, and it took me two minutes just to download a picture from my trip.
So there's even more desire: they want to learn, they want to know about it. So they come here to learn. I think there's definitely a lot of possibility there. And that's also why we're thinking we need to just keep learning, experimenting, and sharing all the knowledge with each other.
Matt Nix: There is an enormous amount of fear and anxiety and anger in the Hollywood community about AI. One of the strange things about it is that the rhetoric around AI almost always follows a specific pattern, which is: "AI is utterly useless garbage that's going to take all of our jobs." And I'm like, you have to choose one or the other.
And then the other thing is people are constantly like, now you can write a script in seconds, and it's mediocre, and it's not very good, but the studios are going to want that. And I say to people, guys, if shitty scripts are that valuable, we're sitting on a gold mine.
We always have been, right? And the truth is, studios could always have gone to India and said, hey, can you guys write me a script? It doesn't have to be very good, but I need it in three days. And they're gonna be like, yep, we'll do that, $500. And now you can have it for less money in a day. But again, people are always like, but it can do it so fast, and I'm like, have you met a studio executive?
They take like months to read anything. Speed is not what they're good at, and it's not what they need. But all of that said, there is an enormous amount of fear, and I've been saying to people for a long time: no one in Hollywood writers' rooms is going to use AI, because it is wrong and evil. But a lot of people are going to come back from the bathroom with very good ideas.
Dana Harris-Bridson: Okay, I'm going to open it up. We've only got about 10 minutes left, and I have a strong feeling there will be questions. We've got a microphone here. Does anybody want to be first? We'll go down here, then up there.
Audience Member: Awesome stuff, guys. Thank you. The biggest question I have is that large language models right now are good at doing mediocre work at scale. So if you're doing mediocre work, like you said, AI is going to eat your lunch. But what happens when it's not doing such mediocre work at scale anymore?
Matt Nix: Just to really dive in on your premise, let's say you're correct, and LLMs are now able to produce excellent work at scale, right? The models have just improved radically. The question then for me is: is Citizen Kane an excellent movie?
I think we can all agree Citizen Kane is an excellent movie, right? Now, an LLM writes Citizen Kane and releases it today. How well is Citizen Kane going to do? It's going to do exceedingly badly, right? The world does not want Citizen Kane right now. Okay, so even if we posit that AI is going to improve radically, there's still going to be this curation question of what the culture wants now. What's going to be really funny or interesting? Baby Reindeer, huge on Netflix right now. Okay, let's say Baby Reindeer came out 10 years ago, and let's just stipulate that Baby Reindeer is excellent. I loved it, I thought it was terrific. But if Baby Reindeer came out 10 years ago, everyone would be like, wait, he's dating a trans woman? What? No. Everything has evolved; now we're in a different place culturally, where we are ready for that show. We are ready for that particular brand of excellence.
And so I think that, yes, the LLMs can get technically better at doing certain things. But I don't think we can say there's an absolute quality called mediocrity that exists and is timeless, right? Not to pick on Starsky and Hutch again, but if you go back and look at an episode of Starsky and Hutch now, nothing happens. Nothing ever happens in a Starsky and Hutch. They just ride around, and then they spend ten minutes getting into a warehouse, and then they punch each other into some boxes. That's every episode of Starsky and Hutch. But we loved it in the 70s. It was an excellent show. So my point is, there's no abstract quality called goodness that is timeless that an LLM can capture.
It's always going to take human interaction to understand what is timeless, what works now, why this is the moment for this. And that, I think, is an irreducible human element.
Joe Penna: Why can't an LLM, why can't an AI system learn to be timely? Why do you think that's reserved only for human beings, right?
I think an LLM can watch everything that has ever been watched, right? Read everything that has ever been read. And an LLM can be fed all the scripts that they said "no" to. Back in the 2000s, this would have hit; back in the 2010s, this would have hit; alright, now it's 2020, this. For 2025, here's the graph of where stories are going, how stories are being told, the pace of editing, and so on, right? I think it can learn. I might be wrong, right?
Matt Nix: Can it learn after I've paid for my house renovations? Because I just need like five years.
Joe Penna: Five years? I think you might have it. And I think you're right that if somebody is amazing at writing stories, they'll be that much better, that much faster, with LLMs, because tools are going to start coming out that writers can use, and then artists, in a certain way. We're already seeing that with Photoshop and all the visual stuff, right? But my concern is that eventually there will be no human who can be as good as a model. Content can literally be generated on the fly. If you're wearing an Apple Watch and you're watching a scary movie, it knows you're more afraid of snakes than bears, so the big monster it makes is a snake monster. And it knows when you're getting bored, because it knows your heart rate, knows you're flopping around and pulling your phone out, and all of a sudden it catches your attention again, right?
For certain things, that can be okay for human beings right now. Like if Netflix wanted to make a version of my film where one of the scenes is set in Paris versus Brazil versus the US. I grew up in Brazil, and there was one scene in a horror movie, a destruction movie, that showed the Christ the Redeemer statue crumbling. My country's being destroyed, and yet my entire family would cheer, you know, because that's the one scene where I connected with it best, because the filmmaker just wanted to hit the whole world.
But the AI models will eventually be able to do that with every single story. I do have that concern.
Matt Nix: What about like a combination of a bear and a snake?
Joe Penna: That's the scariest one, because that sounds scary. That's the, like, Snake-Bear. Yeah.
Dana Harris-Bridson: Someone is selling that in Cannes right now. Yes, up here.
Audience Member: Hi, you actually walked into my thought process for the question. As I've been tracking AI's development, watching, like, I've literally managed your friends and seen how it's been progressing, my macro concern about how it destroys a lot of what we're working on is my question, which is: what if one of these AI systems scrapes the Black List?
In my opinion, they could catch current cultural tonality, and then AI could take away the ability to write creatively for today's market. Like, you're reverting to the past, but when we track writers, a lot of it goes to the Black List. And the Black List is released publicly every year, so they can't even stop it from getting scraped once it's out there.
And now with Sora, could they just take Sora to the script the moment the Black List drops and make a visual version of it, put it on YouTube, before the writer could even sell it? The tech's moving so fast. Could AI steal your ideas right now? Like if I said, I want Matt Nix's idea, make me a version of his show.
Matt Nix: Snake-Bear is mine, dude.
That'd be the threat, because when I deal with writers, no one wants to take it there, because that's the scariest thing. Say you're at Illumination and you're working on a Pixar, or an idea that could rip off Pixar, and the animation companies are competing, but now it's AI going to war against companies.
Like, where does it go if AI can be your friend but also your enemy? It can be the attacker for your company, and it can be the intel for your company. I can use that intel to compete against other management companies. I can use AI to go after other management companies. I can use AI to pull up Stanley Kubrick as a writer for my idea.
I can use AI to write me the bible off my baseline logline. So it's gonna allow a college student to be as good as the studios, but the studios are never gonna pay professional rates if they can pay a college student. And my mind kinda goes back to the money. At the end of the day, we can create everything in the world, but if they don't pay us, we don't have careers anymore.
So then it's like the YouTube model on top of Netflix. It's a lot. But I was just wondering, where are you guys gonna sit within all that?
Joe Penna: They said the same thing about YouTube, says the YouTuber, right? That movies are gonna go away, TV shows are gonna go away. Sorry, Matt, you don't get to do Snake-Bear.
But it became a different medium. YouTube is its own medium, and TikTok is a different medium than YouTube. It does democratize storytelling, but not in the way people thought it would. They said the same thing about digital cameras. They said the same thing about radio, and sure, before, when everybody wanted to see acting, they went to a play. Then, eventually, they started listening to the radio, and therefore not as many people went to plays. I think a new medium can arise from this, some sort of animation that is generated on the fly and very personalized to your tastes and whatnot.
It's like Bandersnatch times a million, right? And on the question of ideas being stolen: whatever, we all have the same ideas, right? In Hollywood, we all see two movies come out with the same idea, and you can bet there were like ten other scripts, if not more, that we all read in Hollywood--
Matt Nix: It feels like you're trying to take Snake-Bear from me with this.
Joe Penna: When Snake-Bear, directed by Joe Penna, comes out--
Mark Goffman: There's Bear-Snake and then there's Snake-Bear.
Joe Penna: There's two movies that are gonna come out, right? And one of them is gonna have AI, and that might be the better movie because it got made earlier, and then when Matt Nix's Snake-Bear comes out, because he didn't use--
Mark Goffman: The script is only one part of it. People may change the way they screenwrite, just as people wrote silent films in different ways once talkies came out.
You may be writing in much more specific prompts. A lot of writers say you write for directors sometimes, when you write in specific camera angles and things you don't always do. But as we write in more of those, an AI is going to be able to... Like right now, you can ask an LLM to create a shot list from your script, or create a look book. It's still several steps away from being able to create a film, and there are four million steps and creative decisions that go into every frame, and hundreds of people involved.
I don't know; the script is just the first blueprint.
Momo Wang: Yeah, I definitely agree with you. Basically, about ideas: is the original idea still important? That's why, I know you guys upload your scripts to ChatGPT, but I've never done that, because I'm scared of that.
Also, no matter whether I'm reading a script or working on visual designs, I always control the last step by my own hand. So with AI, with Midjourney, if I want to finish a design, I generate each part individually in Midjourney, but in the end I put them together by hand.
So I control the last, most creative part. That's one way. On the other side, I'm thinking more about how we can use it to help us. For example, to become a director in Hollywood, usually you go through many pitches, face a lot of rejections, and then finally, after a long time, you'll probably get a chance at a directing opportunity. But now, using AI tools, you can make a short film in a few hours.
You can direct a film every day. Basically, you can find another way to train yourself. I see it as a low-cost self-training and learning system. So instead of always worrying about how it's going to attack us or take something from us, maybe we can think more about how to better use it to help us, to help the creativity. So those are my thoughts.
Matt Nix: Two things. One thing you said was: if you scrape the Black List, can the LLM anticipate the next cultural development or conversation?
And I don't actually think that's a solvable math problem. Think of the two most exciting pieces of entertainment coming out either now or in the near future, which would clearly be the average of Snake-Bear and Baby Reindeer, right?
What is the average of Snake-Bear and Baby Reindeer? Those are both exciting new cultural developments, but they're not in the same genre. Essentially, the data set is not large enough, right? What makes the new movie or the new show the most amazing new thing?
Well, it's not actually the average of scripts. It's everything that's going on in the culture, right? The reason Civil War is a hit movie right now is not because a bunch of other Civil War-ish scripts came out before. It's because our nation is in crisis, right? That's why. It happened to hit at the right cultural moment.
I don't think that would be the concern so much. In terms of stealing ideas, yes, it is true you could look at a script on the Black List, have someone reproduce it with AI, and make it very quickly. I guess my response is: if I'm a studio, if I'm in the business of making something, at the moment, and this could change, it wouldn't be worth it to me to reproduce your idea with AI, with all of the attendant legal issues that would go along with that.
So I'm not too worried about that in the near future. But in terms of LLMs generating the next interesting thing, one of us will be right and one of us will be wrong. We do have to remember, again, as the keynote speaker said: it's all math, right?
And it can be easy for us as humans to think that once we know it's all math, that means all problems are solvable. But if you talk to a mathematician, that's not true. There are chaos problems. There are mathematical problems that may have a solution, but it might take more than the life of the universe to find it.
And so I think some of these questions of creativity and what the next thing is, yeah, they're theoretically solvable, but practically, Nvidia will need to pave the earth with chips before we can actually generate that script.
Dana Harris-Bridson: Something else to think about here, too: we all wonder how far AI can take us with this.
And who knows? Maybe it's going to make that leap. But I've been to Sundance for the last, I don't know, 20 iterations of it. And in that time we have seen the rise of things like digital filmmaking. We have seen the rise of mumblecore. We have seen so many things come up that further democratize filmmaking, making it accessible to everybody.
And presumably AI is going to continue doing that. But I'll tell you this: over the course of that time, some great films have been made using those technologies, but we have not seen an enormous swell of fantastic filmmaking that wasn't there before. If you look at what Sundance puts out every year, the amount rises and falls a little. Some years are good, some years are great, some years are meh. And it's not so much about the technology; it's the execution and being tied into what is happening in the moment. Now, listen, maybe AI is smarter than all of it and is going to make that different. But it has always been the case that no matter how much filmmaking has been democratized, we still wind up with about the same amount of really great work. Because if that weren't the case, we'd be seeing a ton of other great work popping up at other festivals, and there's good work that shows up there, but there hasn't been a flood of it.
Joe Penna: Right now, the way models are trained is next-token prediction with gigantic batch sizes, which means you're likely to just get the average, right?
Hey, I take these two pieces of bread, I put some salami in the middle, and I'm gonna make a delicious... There can only be so many next words that come out. It's probably gonna be "sandwich," or something like that, right? Eventually there will be better architectures that are a little bit more like the stock market, right?
In the stock market, you're not trying to predict the average. I don't care about the average of the past few days; I care about what's coming next. And now, AI models are way better than any quant, better than anyone saying, oh, you know what, it's about to hit this or it's about to hit that.
You still need a human looking at it to make sure you're not trading a quadrillion dollars away, but the majority of it is that: trying to predict what's next. And if you look at what humans are into, there's a 30-year fashion cycle. All of us olds are looking at what the kids are wearing nowadays and being like, you're gonna regret it.
I don't know why crew socks are okay now. I totally got made fun of; I remember having to fold my sock over my shoe to pretend it was a no-show sock. But anyway, I can talk about that with my therapist. In the meantime, I also have to talk to them about AI models replacing me. Because eventually, I think it can just try enough, throw enough up against the wall, and if you have another model that predicts what's going to be popular next, you can guide the LLM with that model, and it will make some sort of guess about what's next. And sure, the things humans make can be completely out of the way, totally different, and just happen to hit a nerve we were all talking about, that someone guessed. But I do think that eventually AI models will be able to get better at doing that than a significant portion of humans.
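The two mechanisms Joe sketches here, next-token training pulling output toward the most probable "average" continuation, and steering generation with a second trend-predicting model, can be shown in miniature. Everything below (the probability table, the hot-keyword scorer, the canned candidate loglines) is invented purely for illustration; no production system works like this toy:

```python
# 1) Next-token prediction trained on huge batches tends toward the most
#    probable continuation: greedy decoding just takes the argmax.
def greedy_pick(next_word_probs: dict) -> str:
    """Return the single most probable next word."""
    return max(next_word_probs, key=next_word_probs.get)

# Hypothetical distribution after "...salami between two pieces of bread
# makes a delicious"
probs = {"sandwich": 0.82, "snack": 0.10, "surprise": 0.03}
print(greedy_pick(probs))

# 2) Joe's "guide the LLM with another model": sample several candidates,
#    score each with a separate popularity predictor, keep the best
#    (best-of-N reranking). The scorer here is a made-up keyword model.
def trend_score(logline: str) -> int:
    """Hypothetical trend predictor: rewards currently 'hot' keywords."""
    hot_keywords = {"true story": 3, "monster": 2, "heist": 1}
    return sum(w for kw, w in hot_keywords.items() if kw in logline.lower())

candidates = [
    "A quiet drama about two retired accountants.",
    "A heist thriller based on a true story.",
    "A monster movie about a half snake, half bear creature.",
]
print(max(candidates, key=trend_score))
```

The second half is essentially best-of-N reranking: one model proposes, a second model scores, and the highest-scoring candidate wins, which is one concrete way a "what's popular next" predictor could steer a generator.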
Dana Harris-Bridson: Thank you guys so much, this was totally fascinating. Thank you.