This interview was conducted as a part of the Docubase Tools Workshop series, in which members of the Open Documentary Lab community demoed the RunwayML tool. Research assistant Andrea Kim interviewed Cristóbal Valenzuela about the emerging artist community using machine learning in their work and the utility of such tools. This interview has been edited and condensed for space and clarity.
Andrea: When you were designing RunwayML, what were some of the key challenges that you were designing around?
Cris: With RunwayML, we’re taking existing technologies that are very common in the data science and engineering world and putting them at the center of a more creative environment. That means understanding how creators and designers and artists work, understanding all their different formats, and taking that as the base to build a project. The challenge there is to acknowledge how creators create, and use that together with artificial intelligence to propose a collaboration rather than a replacement or a substitution.
Andrea: What is your background?
Cris: I have a weird background. I have a background in economics and then business and then design. And then I went to art school. That has helped me understand that people come from many different contexts. One thing we've always tried to have with Runway is a very open mind in terms of how people might be using it.
Andrea: Did you expect all the different applications that people would make using the tool? Can you tell me a bit about that?
Cris: When we created Runway, we were kind of amazed by the novelty and the outcomes of research in computer vision and machine learning. We thought, if you put that in the hands of visual artists, videomakers, filmmakers and musicians, they will naturally understand those technologies in a totally different way than researchers do. So our goal was not to think about how a filmmaker may use Runway. Our role has always been to put that tool in the hands of filmmakers so they can figure out how to use it properly, and where it fits most naturally for them. That's the distinction.
Andrea: So the first utility of RunwayML is as an educational tool?
Cris: Yes, there's an educational aspect to it. Everything around AI is still very new to a lot of people, and there's still a need for people to understand and acknowledge and learn how to work with algorithms, how to train a model, and what a model even means. That's the technical literacy that you build over time. We're reaching a point where we're teaching people how to collaborate with artificial intelligence. I'm sure in a few more years that's going to be just natural for people.
Andrea: Often with technologically mediated art, how the viewer perceives the final art piece is dependent on their understanding of how the technical components function, their technical literacy. How do you think that this tool would change the way that we perceive art as well?
Cris: Right now, artists are questioning the technology itself and they're trying to create through that questioning. So you see a lot of artists working with bias in machine learning, or with algorithms that were supposed to work one way and were repurposed in a different context, to question the nature of the knowledge itself. I think artists looking to question the technologies that are available should be encouraged. But moving away from that specific area, in general the technology will reach a point where it's just going to become a tool in your toolbox of options, and it's not going to be so much about what you use. In the same way that happened with image editing at some point: I don't think photographers are saying, "I made this with Photoshop, I made this with Lightroom." It's just a tool you use to colorize or change or manipulate your images and your content. The same will happen with AI. There's a tendency to overemphasize the use of algorithms or AI in your work, because there's also a critique and you want to highlight it. But we're going to reach a point where how you made it is not going to be really relevant.
Andrea: Where would you say artists are now in our cultural understanding of AI?
Cris: We'll continue to see critical art about technology, to understand how we're using it. That will always continue. But now more and more artists are also not just working with technology for the sake of technology; they're acknowledging the fundamental advantage of using algorithms in their process: moving faster by using datasets, understanding segments of their artworks visually, or having a collaborator that helps them do something in less time. So understanding the pragmatic uses of algorithms has become central, and I think we'll be seeing this more.
Andrea: What would you say is the cause of resistance that artists might have in adopting AI as a part of their toolkit?
Cris: I think many artists don't realize it can be very helpful for them. That's what we're trying to do at Runway. We're trying to demonstrate that the nature of these algorithms is very useful for a lot of different use cases. And once you acknowledge that they're useful, it's just a matter of becoming used to them and being surrounded by other people using them. Another point is that a lot of media coverage downplays the role of the artist working with AI, so people might still think artists will get replaced. I feel that's still very naive and misses the critical components of the technology. It's just another technology that will help you do things in a better way and express yourself better.
Andrea: So in AI discourse, we could say that on one side we have the critiques, and the other side asks how we can collaborate with AI, emphasizing its non-human agency. How would you position RunwayML as a tool that aids an artist's vision?
Cris: I think it really depends on where you want to position yourself as an artist and where you want to put the focus. When you take a photo with your iPhone, I don't think anyone is saying, "Hey, I collaborated with an AI." But technically they did, because the iPhone is running something like seven different neural networks to optimize how you take the image, and your image looks great because of those algorithms.
Then there is another kind of algorithm that is more like a collaborator: algorithms that can be retrained. You can show them some data and then have them create some data back for you. And that's not optimizing, it's not for an automation process; it's more for a conceptual aesthetic. You can feed an algorithm that will generate images or sound compositions back, or generate realistic text based on your writings. That algorithm is a reflection of what you feed it. The relationship is different, it's more theoretical. I'm not sure how that might evolve over time, but I think it's super interesting to have that kind of collaborative partner, where you can have an algorithm, feed it data, have it respond back, then use it however you want.
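[Editor's note: as a rough sketch of the feed-and-respond loop Cris describes, the following retrains the open-source GPT-2 text model on a file of your own writings and asks it to generate text back. It uses the Hugging Face transformers library rather than anything RunwayML-specific; the file path, hyperparameters, and prompt are hypothetical placeholders.]

```python
# Minimal sketch: retrain GPT-2 on your own writings, then have it respond back.
# Uses Hugging Face `transformers` (not a RunwayML API); paths and settings are illustrative.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, TextDataset,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Your own writings, gathered into one plain-text file (hypothetical path).
train_data = TextDataset(tokenizer=tokenizer,
                         file_path="my_writings.txt",
                         block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-my-voice",
                           num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=train_data,
    data_collator=collator,
)
trainer.train()  # the retrained model becomes a reflection of what you fed it

# Ask the retrained model to "respond back" in something like your voice.
prompt = tokenizer("The documentary opens with", return_tensors="pt")
prompt = {k: v.to(model.device) for k, v in prompt.items()}
output = model.generate(**prompt, max_length=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```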
Andrea: Is there a threshold at which patterns start to appear in an algorithm's outputs?
Cris: It really depends on the kind of model. Models are the building blocks. You can think of a model as a container that performs one specific task. We have 142 different models. The key is that each model performs a very narrow task: it can classify an image, or generate a sound, or generate another image. A model that generates an image might not really understand what the content of the image is. But there could be another model that really understands images but isn't able to generate any text from it. So the threshold really depends on the underlying model you're using. In the end, machine learning is just probability. It's just a matter of statistics. You can tell the model or the algorithm to be less certain or more certain about what it's predicting.
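[Editor's note: the "less certain or more certain" knob Cris mentions amounts to a confidence threshold on the probabilities a model outputs. A minimal sketch, with an entirely hypothetical classifier and made-up scores:]

```python
# Toy illustration of thresholding a model's predicted probabilities.
# The labels and scores are invented; a real classifier would produce them.
from typing import Dict

def classify(probabilities: Dict[str, float], threshold: float) -> str:
    """Return the most likely label, or 'uncertain' if the model's top
    confidence falls below the chosen threshold."""
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    return label if confidence >= threshold else "uncertain"

# Hypothetical output of a narrow image-classification model.
scores = {"portrait": 0.55, "landscape": 0.30, "abstract": 0.15}

print(classify(scores, threshold=0.5))  # "portrait"  -- more permissive
print(classify(scores, threshold=0.8))  # "uncertain" -- more demanding
```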
Andrea: How would you characterize the machine-learning art community?
Cris: A key thing that has happened in machine-learning research over the last 5 or 10 years is an openness among researchers to publish their work. There's a very big community that has allowed researchers to build on top of each other's research. There is now an overwhelming amount of research coming out; conferences are overflowing with papers and it's really hard to review all the submissions. This idea of openness has created a lot of interesting knowledge, but at the same time it's hard to tell what's relevant and what's not. So it's a matter of finding ways of filtering through all this research that's coming out.
Another one of Runway's goals is to bring together and contextualize the relevance of this massive amount of research coming out of all these interesting labs and companies and people and educators and institutions. It's a bit overwhelming even for researchers themselves. So we want to help find the most interesting use cases from inside the community itself and then bring those into the platform. That's why we think this approach might help navigate the sea of research that's happening. We've seen really interesting things so far.
Andrea: What would you say makes an interesting use case?
Cris: I think we're reaching a point where it's not so much about the models or the algorithms that you end up using underneath. Those are becoming commodities. The tasks they perform depend very much on which data you used to train them. As a filmmaker, as a designer, as an artist, as a visual person or as a writer, you want control over those components, which means that you need to train or retrain those algorithms to better suit your needs.
Andrea: What are your next steps?
Cris: The big message for us is to try to open up the community more. We have people from all over the world, from all kinds of disciplines, from all kinds of backgrounds, but I think we haven't done as much as we could to get even more people on board. So we're putting a lot of effort into making it even more accessible.