I Recreated Apple’s Titanium Ad with AI - Runway ML Tutorial

 
I loved watching the latest iPhone Titanium launch video. It's full of special effects and beautiful cinematography that make your jaw drop.
 
Big brands like Apple have mastered those types of ads.
 
But as a marketer or content creator, you probably don't have the budget or resources to produce something quite as polished.
 
That's where AI video creation tools like Runway come in handy. They let you create anything you want with a few words and some prompting tricks.
 
So I wanted to test how far these tools could go at recreating the ad. Whether you're putting together a proof of concept or looking to speed up your creation process, you'll be amazed at where these tools can take you.
 
In this step-by-step tutorial, you'll learn how I used Runway to recreate the ad.
 
Let's get started.
 

What is RunwayML?

Runway is an AI platform that makes video creation accessible to anyone. With Runway, you can turn static images or text into video clips.
 
Some key things Runway can do:
  • Generate video from images and text prompts
  • Add camera movement like pans, zooms, and tracking shots
  • Extend short video clips into longer videos
  • Add special effects and slow motion, and remove backgrounds and other objects
This makes Runway perfect for rapidly generating video ads and trailers.
 
 

Step 1: Create a Shot List

Before jumping into prompts and video generation, we need to break the ad down into the individual scenes we'll create.
 
This will help us stay organized and keep track of each scene we're generating. Once we start generating different scenes, it gets messy super fast.
 
This can be as simple as generating a table or spreadsheet with scene numbers, angles, and a visual description.
 
Here's a quick breakdown for the first few seconds of the iPhone Titanium video ad:
 
Scene Number | Description
1 | A big explosion
2 | Close-up of a rock flying past the camera
3 | Rock moving away from the camera
4 | An extreme wide shot of a rock entering a fire nebula

This 32-second trailer contains around 20 distinct scenes we need to recreate.
 
Breaking it down helps us stay organized.
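 
If you'd rather script this step, here's a minimal Python sketch that writes the shot list to a CSV. The rows are just the example scenes from the table above, and the angle column is illustrative:

    import csv

    # Scene number, camera angle, and visual description for each shot.
    # Angles here are illustrative examples, not from the original ad.
    scenes = [
        (1, "wide", "A big explosion"),
        (2, "close-up", "Rock flying past the camera"),
        (3, "medium", "Rock moving away from the camera"),
        (4, "extreme wide", "Rock entering a fire nebula"),
    ]

    with open("shot_list.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["scene", "angle", "description"])
        writer.writerows(scenes)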
 

Step 2: Create Key Scene Images with Midjourney

Once we know what scenes we need to create, we'll use Midjourney to generate key images for each scene.
 
While we could go directly to Runway, the truth is that creating images is way faster than creating videos. 
 
Therefore, if we can nail the visuals with images, it'll be a huge time saver when we move on to creating the videos.
 
Midjourney is an AI image generation tool that creates incredibly photorealistic images from text prompts.
 
It's perfect for creating the individual frames we'll feed into Runway.
 
Here's my approach to make the most out of Midjourney:
  • Start with simple prompts based on the scene descriptions we created. Then add complexity, like the camera or film stock used, focal length, and other technical details.

Prompt: a fire nebula explosion centered, low key lighting, shot on arri alexa mini --ar 16:9

  • Use Bing's AI (or ChatGPT or any other tool) to optimize your prompts and fill them with details. Super useful when you're struggling to get a specific shot. 

Prompt: A titanium rock falling towards Saturn’s rings, from a top view. The rock is shiny and metallic, with some scratches and dents. The rings are composed of ice and dust particles, with some gaps and variations in thickness. Saturn’s atmosphere is visible in the background, with swirling clouds and storms. The image shows the rock’s trajectory and speed as it approaches the rings. Saturn is slightly off-center in the image, and the rock is closer to the camera than Saturn. The image is rendered in high resolution and has realistic shadows and reflections --ar 16:9 --chaos 10

Pro tip: Ensure Visual Consistency

As you start generating your images, you'll quickly see that some results may look different in terms of style, lighting, composition, and other aspects.

This lack of consistency is not ideal for video. In most cases, it's important for the scenes to have a cohesive visual look, allowing them to flow seamlessly and showcase a progression of action.

Here's how you can ensure visual consistency in Midjourney:

  • Use the same "seed" number for similarly styled images. The seed is the starting point from which Midjourney generates an image. If you use the same seed for two generations, there's a higher chance they'll look similar to each other.
    •   How: After you generate an image you like, react to it with the envelope emoji in Discord. The Midjourney bot will reply with the seed for that image. Then use that number in your prompt by adding --seed YourSeedNumber at the end of the prompt.
  • Keep the same aesthetic descriptions. If you find an aesthetic description or style that you like, say "Kodak Film 2383", "Unreal Engine", "Futuristic", make sure to keep it consistent for the different images within the same sequence.
  • Use images as input. You can use reference images, or previous images you've generated, as input for your new generations. This maximizes the likelihood that your new image looks similar to the reference one.
    • How: Upload an image to Discord. Then right-click on the image and click "Copy image address". Use that link as the first thing in your prompt, right after /imagine.

    Prompt: https://s.mj.run/sb0QvZbNl3s side of a titanium iphone, extreme close up shot :: Sparks in the background. Cinematic product shot --ar 16:9
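
Once you settle on a seed and an aesthetic, you can even template your prompts so every image in a sequence shares them. A small sketch (the seed and style strings are placeholders you'd swap for your own):

    # Build Midjourney prompts that share one seed and one aesthetic suffix,
    # so images in the same sequence stay visually consistent.
    SEED = 1234567890  # placeholder: use the seed from a generation you like
    STYLE = "shot on arri alexa mini, low key lighting"  # keep fixed per sequence

    def build_prompt(description, image_url=None):
        parts = []
        if image_url:  # an optional reference image goes first in the prompt
            parts.append(image_url)
        parts.append(f"{description}, {STYLE}")
        parts.append(f"--ar 16:9 --seed {SEED}")
        return " ".join(parts)

    print(build_prompt("a fire nebula explosion centered"))
    print(build_prompt("side of a titanium iphone, extreme close up shot",
                       image_url="https://s.mj.run/sb0QvZbNl3s"))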
Spend time iterating to get images as close as possible to the angles and perspectives you want.
 
Like an image you've generated? Download it and rename it to match the corresponding scene from the table. That way you'll stay organized!
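 
If you've already downloaded a batch, a few lines of Python can handle the renaming in bulk. The original file names below are hypothetical examples of Midjourney's export naming:

    from pathlib import Path

    # Map each downloaded Midjourney image to its scene number from the shot list.
    # The source file names here are hypothetical examples.
    mapping = {
        "oestrada_a_big_explosion_v1.png": "scene_01.png",
        "oestrada_rock_flying_past_camera_v2.png": "scene_02.png",
    }

    downloads = Path("downloads")
    for old_name, new_name in mapping.items():
        src = downloads / old_name
        if src.exists():  # skip anything not downloaded yet
            src.rename(downloads / new_name)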
 

Step 3: Animate Scenes with RunwayML

Once you have the key images for each of your scenes, it's time to animate them with Runway.
 
Import the images into Runway's Gen-2 tool and add short text descriptions of the motion that you want to see.
 
This gives Runway guidance to understand what the image is about and generate a video that's in line with what you're looking for.
 
Generating a video takes much longer than generating an image, so expect this step to take a while. You'll also get a single video per generation, whereas Midjourney gave you four options right off the bat.
 
Some tips to make the most out of Runway:
  • Specify camera movements like pans and zooms to make scenes more dynamic. You can easily do this from the "Advanced Camera Control" setting.
  • Keep your prompts simple, and adjust the "speed" slider to add more energy.
  • You can extend your clips up to 16 seconds if needed.
  • Download your best clips as you generate them to keep proper track of them.
Runway usually creates very slow, cinematic movement, so you may need to adjust the motion controls to make scenes more energetic.
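 
If a clip still feels too slow after tweaking the controls, one workaround is to speed it up in post. Here's a minimal sketch using ffmpeg via Python (assuming ffmpeg is installed; the file names are placeholders):

    import subprocess

    # Play the clip at 2x speed by halving each frame's presentation timestamp.
    subprocess.run([
        "ffmpeg", "-i", "scene_02.mp4",
        "-filter:v", "setpts=0.5*PTS",
        "-an",  # Gen-2 clips have no audio, so the audio stream can be dropped
        "scene_02_fast.mp4",
    ], check=True)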

 

Step 4: Edit Clips into a Full Video

After generating your clips with Runway, compile them into a complete video using editing software like CapCut.
 
Having the original ad video as a reference will help with timing the cuts properly.
 
Some editing tips:
  • Sequence clips to match the shot list.
  • Add the reference video or soundtrack to help you time the cuts.
  • Take advantage of CapCut's library of transitions to add more energy to the video.
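 
If you'd rather assemble a rough cut programmatically before polishing it in CapCut, ffmpeg's concat demuxer can stitch the clips together. A minimal sketch (file names are placeholders; the clips must share the same codec and resolution for a lossless copy):

    import subprocess
    from pathlib import Path

    # List the clips in shot-list order in a concat manifest...
    clips = ["scene_01.mp4", "scene_02.mp4", "scene_03.mp4"]
    Path("clips.txt").write_text("".join(f"file '{c}'\n" for c in clips))

    # ...then stitch them together without re-encoding.
    subprocess.run([
        "ffmpeg", "-f", "concat", "-safe", "0",
        "-i", "clips.txt", "-c", "copy", "rough_cut.mp4",
    ], check=True)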
And that's it! The video is done!
 
With a little planning and AI assistance, we were able to create a video that closely resembles the original one.
 
Want to see the final video ad we created with Runway in action? Check it out here:
 
 
We hope this tutorial gave you ideas and motivation to start using Runway to take your video marketing to the next level! Let us know if you have any other questions.
 

Key Things We Learned Using Runway

While creating our Apple iPhone ad replica, we learned a few key things about using Runway that are good to know:
 
  • Runway creates slow, cinematic movement
    • The AI tends to generate very slow and subtle motion in scenes. This works great for more epic or emotional style videos. But it can be difficult to achieve more energetic, fast-paced movement even with the motion controls.
  • It's hard to create some specific camera angles
    • It was hard to recreate specific camera perspectives like the POV of the rock flying through space. You may need to be open to new angles if you can't get the exact framing you want.
  • Objects that are uncommon, or underrepresented in the AI's training data, are harder to generate
    • For some reason, Runway struggled to generate accurate Saturn rings across multiple images. This suggests certain elements fall outside the AI's current training and knowledge. I'd recommend using reference images as inputs rather than just specifying "Saturn".
  • While not fully production-ready, it can speed up processes like previz
    • The quality isn't quite ready to fully replace professional video production, especially high-quality ads. But the fact that we got a pretty decent video after a few hours of typing on a computer is pretty amazing.

Alternatives to Runway

While Runway is one of the most full-featured AI video creation tools out there, here are some other options worth checking out (and that we're excited to test in the future):
  • Wombo - Generates short video clips from audio files and prompts. More music video focused.
  • Simon - Converts text to speech and generates basic animated videos. Better for explainers.
  • D-ID - Creates talking head videos from images. Good for testimonial-style videos.
  • Synthesia - Converts text and images into lifelike synthetic videos. More manual customization.
Each platform has strengths for different use cases. Try a few to see which fits best for your video needs and budget.
 

Next Steps with Runway

After getting familiar with Runway's basics, here are some next steps to take your video skills up a notch:
  • Practice creating a wide range of video styles - from epic trailers to product explainers and more.
  • Combine Runway with live video templates to add talking heads or screenshots to your AI footage.
  • Use Runway clips in more complex video compositing programs like After Effects.
AI video creation opens up so many possibilities! I hope this inspires you to start exploring some of the tools out there. Let me know if you have any other questions as you embark on your video creation journey.
 
 

Oscar Estrada
