
Runway Aleph: A Guide With Demo Project

Learn how to use Runway Aleph to generate a video from start to finish, covering its capabilities and limitations through a practical example of creating a birthday video.
Aug 21, 2025  · 12 min read

Runway ML specializes in generative AI models for video and images. They recently released Runway Aleph, which promises to empower users to transform their visual projects with remarkable ease and precision.

I used Aleph to create the video below for my friend's 50th birthday. In this article, I walk through that process step by step and explore whether Runway Aleph delivers on its promise.

What Is Runway Aleph?

Runway Aleph is a versatile AI video generation model that takes video creation a step further by accepting an existing video as input and editing it with a text prompt.

Key features include:

  • Add, remove, and alter objects within a scene.
  • Generate different perspectives or angles of a video.
  • Adjust the overall style and lighting to match any desired aesthetic.
  • Extend a video by generating the next scene.

You can see many examples in Runway’s official announcement.

How to Access Runway Aleph?

Runway Aleph is available starting with their Standard plan for $15 a month. Despite being a monthly plan, it doesn't grant an unlimited number of uses.

The Standard plan comes with 625 credits, and each second of Aleph costs 15 credits, meaning about 41 seconds of video can be edited each month.

There is also an Unlimited plan that includes 2,250 credits and provides unlimited usage in Explore Mode, where generations are queued and processed when they reach the front of the queue, so videos take longer to generate.
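As a quick sanity check on the credit math, here's the arithmetic for both plans (prices and credit amounts as listed above; 15 credits per second was Aleph's rate at the time of writing):

```python
ALEPH_CREDITS_PER_SECOND = 15  # Aleph's rate at the time of writing

plans = {"Standard": 625, "Unlimited": 2250}  # monthly credits per plan

for name, credits in plans.items():
    seconds = credits // ALEPH_CREDITS_PER_SECOND  # whole seconds of editing
    print(f"{name}: {credits} credits -> ~{seconds} s of Aleph edits per month")

# Standard: 625 credits -> ~41 s of Aleph edits per month
# Unlimited: 2250 credits -> ~150 s of Aleph edits per month
```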

Almost all videos I generated for this article were made in Explore Mode, and I found that wait times varied a lot depending on the time of day. Most of the time, there was no wait at all.

How to Use Runway Aleph?

To use Runway Aleph, we need to have a video as input. Once a video is selected, we can provide a text prompt to edit or transform the video.

Video editing interface showing a paused video clip of zebras near a reflective water body in a grassy landscape, with playback controls and timeline visible.

We can, for example, ask Aleph to make it night:

A video editing interface showing two zebras walking near a water body in a nighttime scene, transformed by Runway Aleph using the text prompt "Make it night."

Note that if the video is longer than 5 seconds, a 5-second section of the video will be used instead. This section can be selected by dragging the window under the video input.
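Everything in this article was done through Runway's web interface, but edits like the one above can also be scripted. Here's a minimal sketch using Runway's Python SDK, assuming Aleph is exposed through the API's video-to-video endpoint as `gen4_aleph`; the model name, input URL, aspect ratio, and polling interval are assumptions, so check Runway's current API docs:

```python
import time

from runwayml import RunwayML  # pip install runwayml

client = RunwayML()  # reads the RUNWAYML_API_SECRET environment variable

# Ask Aleph to re-light the zebra clip as a night scene.
task_id = client.video_to_video.create(
    model="gen4_aleph",                          # assumed Aleph model name
    video_uri="https://example.com/zebras.mp4",  # placeholder input video
    prompt_text="Make it night.",
    ratio="1280:720",
).id

# Generation is asynchronous: poll the task until it finishes.
while True:
    task = client.tasks.retrieve(task_id)
    if task.status in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(5)

print(task.status, task.output)  # on success, output lists the video URL(s)
```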

Planning the Video

When I got access to Runway Aleph, I had just been asked to record a video wishing my friend a happy birthday. So, I thought I'd see if Aleph could be used to make a video from start to finish. I felt that the process taught me a lot about its capabilities and especially about its limitations.

This experiment uses not only Aleph's video editing capabilities but also Runway's Gen-4 video generation capabilities.

For the sounds, I used free sound effects from Pixabay, while the voices were generated using text-to-speech models from OpenAI.
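For the narration, OpenAI's text-to-speech endpoint can be called in a few lines. This is a sketch only; the model, voice, and narration line below are illustrative, not the exact ones I used:

```python
from pathlib import Path

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Stream the synthesized narration straight to an MP3 file.
with client.audio.speech.with_streaming_response.create(
    model="tts-1",  # OpenAI text-to-speech model
    voice="onyx",   # a deep voice suits a trailer narrator
    input="In a city of neon and rain, one man turns fifty.",
) as response:
    response.stream_to_file(Path("narration_01.mp3"))
```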

My idea was to do something with references to John Wick, one of his favorite movies. Since I didn't have any footage, I had to create everything based on photos of us and freely available videos on the internet.

I wanted to keep it simple, so I went for a trailer-like video with a narrator. Not having characters speak is much simpler when dealing with AI-generated videos.

My script was something along the lines of:

  • Set the scene in Taipei (where we live)
  • Introduce the two characters
  • The two characters meet at "The Continental" hotel (a movie reference)
  • I gift him a box containing a marker (another movie reference)

Creating the characters

To generate my character, I took a selfie and used Runway chat mode to generate a version of me wearing a suit and holding a wooden box.

Man in a suit holding a wooden box, standing at night in front of an ornate building with bright lights, resembling a cinematic scene in Taipei.

For my friend, I couldn't get Runway to generate what I wanted, so I ended up doing it semi-manually using Adobe Firefly in Photoshop.

While generating these, I also ran into Runway refusing to generate the images several times. I think that with AI image generation in general, it's hard to generate specific people for safety reasons. That's something to keep in mind if you want to create something with specific people as characters.

Error message showing a usage policy violation notice.

Scene 1: Taipei 101 with the moon

The opening scene with the moon timelapse isn't AI-generated. It's a free-to-use video from Timo Volz that I found here on Pexels.com.

I tried modifying it using Runway Aleph, but it failed. I thought making it a drone shot with the drone flying above the building could be a cool edit, but Aleph gave me this instead:

"Make this video into a drone shot overflying the building."

Scene 2: Close-up of my friend

For the second scene, I provided the photo I had generated using Photoshop and asked for a close-up shot to introduce the character.

Close-up shot of a man in a suit with a vivid background, highlighting facial features and style.

I liked the result, but I wanted to introduce some mystery before revealing the character. So I asked it to generate another shot with the prompt: "Close-up shot to introduce the character. Make it start facing back and then slowly turn to the camera."

Here are the results:

In the second example, Runway Aleph didn't comply with the prompt. However, that shot is the one I ended up using. I cut the first half in video editing software so that the character's face starts hidden by the hat and only then gets revealed.

Scene 3: Walking down a dark alley

This scene was generated in two steps (a scripted sketch of step 2 follows the list):

1. Based on the photo of the character, I generated a still image of him facing away in a dark alley.

2. I turned that image into a video with the text prompt: "The man with the hat walks down the alley, away from the camera until he disappears in the distance. Add mist to make the scene more mysterious."
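For those who prefer scripting, this image-to-video step can also be done through Runway's API. Here's a minimal sketch, assuming Gen-4 is available as `gen4_turbo` with these parameters (the image URL is a placeholder); poll `client.tasks.retrieve(task_id)` as in the earlier Aleph sketch to fetch the result:

```python
from runwayml import RunwayML  # pip install runwayml

client = RunwayML()  # reads the RUNWAYML_API_SECRET environment variable

# Animate the still image of the character in the alley.
task_id = client.image_to_video.create(
    model="gen4_turbo",                                  # assumed model name
    prompt_image="https://example.com/alley-still.png",  # placeholder URL
    prompt_text=(
        "The man with the hat walks down the alley, away from the camera "
        "until he disappears in the distance. Add mist to make the scene "
        "more mysterious."
    ),
    ratio="1280:720",
    duration=5,  # seconds
).id
```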

Scene 4: Me walking with the box

For this scene, I used the image of myself I created above. I had to insist quite a bit to get Aleph to generate a video of me walking. The prompt was simple: "The person walks to the building in the background."

None of the four videos did what I asked. I ended up using the fourth one as it was the only one that was usable.

I did want to add some mist to the video to make it look more mysterious, so I used Aleph's video editing capabilities to make it misty. This, on the other hand, worked quite well.

For the second part, the close-up shot of the box, I first tried to have Aleph edit the video using the prompt: "Make a close-up shot focusing on the hand holding the box." However, that didn't work at all.

As a workaround, I took a screenshot of the frame I wanted to animate and used it, along with a prompt, to generate the close-up.
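If you want a cleaner still than a screenshot, you can extract the exact frame with ffmpeg; the timestamp and file names here are illustrative:

```python
import subprocess

# Extract a single frame at the 3-second mark as a PNG still.
subprocess.run([
    "ffmpeg",
    "-ss", "00:00:03",          # seek to the frame you want
    "-i", "walking_scene.mp4",  # placeholder input file
    "-frames:v", "1",           # output exactly one frame
    "box_closeup_frame.png",
], check=True)
```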

Prompt for creating the scene focused on the box.

Finally, for the shot of me walking away, Runway originally generated an alley with neon lights that looked similar to the one from a previous scene. To make it look different, I asked Aleph to edit the video and remove the neon lights, which worked very well.

Scene 5: Driving the car

This scene was generated in the same way as the third scene, by generating a still image and then transforming it into a video.

Prompt for creating the car scene.

However, the video didn't quite turn out as I wanted:

When using Runway Aleph, I often found that only a couple of seconds of a video were usable before things got weird. I still managed to make something out of this clip by doing the following (both steps are sketched in the code after the list):

  1. Reversing the video.
  2. Keeping only the part of the video that focuses on the driver.
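Both fixes are straightforward with ffmpeg; here's a sketch with illustrative file names and cut point:

```python
import subprocess

# Step 1: reverse the clip (drop "-af", "areverse" if it has no audio track).
subprocess.run([
    "ffmpeg", "-i", "car_scene.mp4",
    "-vf", "reverse", "-af", "areverse",
    "car_reversed.mp4",
], check=True)

# Step 2: keep only the first two seconds, which focus on the driver.
subprocess.run([
    "ffmpeg", "-i", "car_reversed.mp4",
    "-t", "2",  # duration to keep, in seconds
    "car_final.mp4",
], check=True)
```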

Scene 6: Traffic timelapse

This scene is a free-to-use video from Timo Volz taken from Pexels. I tried modifying the style but with little success.

Here's the prompt I used:

"Change the style of this video to a neo-noir action aesthetic inspired by John Wick—cinematic, high-contrast lighting, rain-soaked neon city streets in deep blues, purples, and reds; elegant yet dangerous atmosphere; sleek modern interiors; refined but gritty mood."

However, the result was unusable:

I did other styling experiments on other videos, and none of them worked out.

Ending Scenes

The final three scenes were also generated by turning images into videos.

  • This image is of a real building in Taipei, and I added the neon sign using Photoshop.
  • The image of me walking down the hallway with the box was generated using Runway by asking it to place me in a fancy hotel with a carpet featuring a "C". The image didn't turn out great, as the box looks more like a suitcase, but I didn't manage to fix it.
  • The final image was taken from a John Wick gift box, and the note was added using Photoshop.

Ending text

The ending text was fully generated using Runway. I used chat mode to ask it to generate the text "Chapter 50" as the title of a movie in the style of John Wick. It generated this image:

Neon blue text "CHAPTER 50" glowing against a dark background with misty smoke effect.

Runway's chat mode then asked whether I wanted the text to be animated, and I said yes. Here's the result:

I wanted the letters to also be animated, so I used Aleph to edit the video. The result isn't what I asked for, but I still liked it and managed to use it.

Here's what I asked for: "The camera flies through the zero in the '50' text, fading into a black screen."

Here's the final result:

Further Experiments With Runway Aleph

Next scene generation

I tried to extend portions of the video by using Aleph to generate the next scene for some of my scenes. At first, I didn't specify what I wanted the next scene to be; I just asked it to generate one.

In terms of action, the generated scene is consistent with the previous scene, but there is a significant loss in the consistency of the character. It doesn't look like the same person.

To test this further, I tried to specify that I wanted the next scene to have the character hop on a motorcycle. Then I used the result and asked for yet another scene of the character riding away on the motorcycle.

Here's the result:

Aleph struggled with the fact that the character was carrying a box. We can also see some strange artifacts in the video.

Altering a video

To make the video, I successfully used Aleph's video editing capabilities to remove street neon signs and add mist to a scene. However, when I tried to change the aesthetic of one of the videos, it failed.

Out of curiosity, I tested making it rain in one of the scenes to see how it would cope with that, and it worked quite well on the first attempt.

In the following example, I asked to add fire to the back wheel of a motorcycle. It did successfully add fire, but not where it was supposed to:

Here's a final example where I asked it to add a giant wave in an existing video:

Conclusion

Creating videos with Runway Aleph is certainly within reach, but the process isn't as simple as typing a text prompt and getting a seamless video. It takes a blend of creativity and patience: the technology, while impressive, still needs guidance and adjustments to bring a vision to life.

More often than not, I found myself adapting my initial ideas to the results produced by Runway Aleph. Rather than directing each scene purely through prompts, it became a collaborative effort, where the tool's unique interpretations often influenced the direction I ultimately took.

The examples showcased in Runway Aleph's announcement likely represent its best outcomes. While the model can certainly produce impressive results, achieving those desired effects often necessitates multiple attempts and a willingness to refine and iterate on the output.

Overall, Runway Aleph is an exciting and enjoyable tool for hobbyists, enabling video creations that might otherwise remain just a concept. I enjoyed using it to create the birthday video. However, the process also shows that there's still a way to go before the tool can flawlessly translate every idea into a polished final video.

