motion design + illustration
Google Creative Fellowship 2026: Digital Artifact



This artifact builds on my Travel Diary project, exploring how AI can extend my interest in placing the character I designed into imagined environments.

Rather than precisely constructing each scene, I began working through prompts, allowing the system to interpret ideas into visuals. 

This process shifted my role from constructing to guiding, shaping the outcomes through iteration and selection. The final artifact reflects a collaboration between my character design and intent, and the AI’s interpretation. It highlights my interest in storytelling, turning imagination into visual form, and designing workflows that support my creative process.
Overall, this artifact explores how the same environment can be interpreted in different ways through AI.

For each scene, I developed two versions: one that follows a more natural and soft animation, and another that introduces unexpected and slightly absurd transformations. Both are generated from the same visual starting point, but diverge through different prompt directions. This shows the two sides of AI: one is a controlled tool that follows instructions; the other is more unpredictable, more like a collaborator than a tool.


Process

I feed Gemini the character I designed for “Travel Diary” while writing prompts for the new environments. I then ask the AI to generate the character and the environment separately; once I’m happy with the results, I combine the prompts and put them together. I make slight adjustments in Photoshop to fix some problems with the images, then use them to generate videos with Veo 3.1. I wanted to fix the eyes, but sadly they still get lost during video generation :(


Afterthought & Extension

This line of exploration also points toward a more real-time application. I’m interested in how this approach could evolve into a tool-like experience, similar to a camera filter, where a character can be placed into and interact with the environment as it is captured.

The idea is partly inspired by “nui-katsu,” a popular practice in which people bring their favorite plushies or characters to different places and photograph them. Without a physical object for my own character, I began wondering how this experience could exist digitally. What if people could upload their own character designs and use a camera-based tool to place them into real-world environments, allowing the characters to interact with the spaces people are in?

This extends the current workflow into a more immediate and interactive system, shifting from generating images to designing how they exist in the world.




Say hi!

©2026 All Rights Reserved