Google's new mind-blowing image-editing model: Nano Banana
While OpenAI is going backwards, Google is just killing it; Nano Banana and Veo are insane tools.
I asked nano banana to take me across Middle-earth
images: nano banana
edits: photoshop
upscaler: magnific
animation: kling 2.1
video edit: davinci resolve 20
music: producer AI
LinkedIn influencers already pumping nano banana selfies, we're fucked
6 hours of work, $0 spent. Sora 2 is mind-blowing.
Edit: here is a more detailed description:
This video was created with the preview of Sora 2. Only the first two frames came from Nano Banana + Kling image-to-video. At this stage, Sora 2 mainly supports text-to-video; image-to-video allows a single reference image per generation and currently works more like a visual guideline for the scene. For this test I used a quick text-to-video approach. The main drawback is the 480p limit with watermarks, though this may improve in the future. Still, its physics understanding and ability to generate multiple consistent angles in one scene set a new state of the art. The full process took about five to six hours, with music mixing and editing done in After Effects, while most sounds such as engines, tires, and crashes came directly from Sora 2.
I believe this marks a new step for filmmaking. If we set aside the obvious flaws, like inconsistent car details and low resolution, a video of this type would have cost tens of thousands of dollars not long ago. Just 8 months ago, a similar video I made took me nearly 80 hours to complete. Of course, that one was more polished, but I think that realistically 10-20 hours for a polished version of something like this will be possible in the very near future.
If you are interested in me or my work, feel free to also visit my website stefan-aberer.at or my Vimeo: https://vimeo.com/1123454612?fl=pl&fe=sh
My compact train unloading design
It's a four-blue-belt unloading station featuring a 1 + 7 train waiting bay.
Using stacked inserters for lazy unloading on a single side.
Max throughput is 720 items/s per station.
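As a quick sanity check on that number, here is the arithmetic under assumptions not stated in the post: express (blue) belts moving 45 items/s each, and stacked inserters building 4-item stacks (Space Age).

```python
# Rough sanity check of the quoted throughput (assumed figures, not from the post):
belts = 4          # four blue belts, per the post
belt_speed = 45    # items/s per express belt (assumed vanilla value)
stack_size = 4     # items per stack with stacked inserters (assumed)

print(belts * belt_speed * stack_size)  # 720 items/s, matching the post
```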
Edit:
The first picture was generated by AI, specifically Google's Nano Banana model.
blueprint: https://factorioprints.com/view/-OZQqRSnciqVawbsbaOy
I Asked Nano Banana To Make Her Smile
Cute..
Used nano banana to "clean up" visuals for a document
Fall is here...
7-month-old GSD. Thanks to nano banana for the color grading


Paintings coming to life with Nano Banana and Veo 3
Wan 2.2 Realism, Motion and Emotion.
The main idea for this video was to get as realistic and crisp visuals as possible without the need to disguise the smeared, bland textures and imperfections with heavy film grain, as is usually done after heavy upscaling. Therefore, there is zero film grain here. The second idea was to make it different from the usual high-quality robotic girl looking in the mirror holding a smartphone. I intended to get as much emotion as I could, with things like subtle mouth movement, eye rolls, brow movement and focus shifts. And Wan can do this nicely; I'm surprised that most people ignore it.
Now some info and tips:
The starting images were made using LOTS of steps, up to 60, upscaled to 4K using SEEDVR2, and fine-tuned if needed.
All consistency was achieved only through LoRAs and prompting, so there are some inconsistencies like jewelry or watches; the character also changed a little due to a character LoRA change midway through the clip generations.
Not a single Nano Banana was hurt making this; I insisted on sticking to pure Wan 2.2 to keep it 100% locally generated, despite knowing many artifacts could be corrected by edits.
I'm just stubborn.
I found myself held back by the quality of my LoRAs; they were just not good enough and needed to be remade. Then I felt held back again, a little bit less, because I'm not that good at making LoRAs :) Still, I left some of the old footage, so the quality difference in the output can be seen here and there.
Most of the dynamic motion generations were incredibly high-noise heavy (65-75% of compute on the high-noise model), with 6-8 low-noise steps using a speed-up LoRA. I used a dozen workflows with various schedulers, sigma curves (0.9 boundary for i2v) and eta, depending on the scene's needs. It's all basically BongMath with implicit steps/substeps, depending on the sampler used. All starting images and clips were given verbose prompts, with most things prompted, down to dirty windows and crumpled clothes, leaving not much for the model to hallucinate. I generated at 1536x864 resolution.
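To make that compute split concrete, here is a minimal sketch of how a step budget would divide between Wan 2.2's high-noise and low-noise experts, assuming roughly equal cost per step and an illustrative total of 24 steps (neither assumption comes from the post):

```python
def step_split(total_steps: int, high_noise_fraction: float):
    """Split a step budget between the high- and low-noise experts,
    assuming each step costs roughly the same (an assumption)."""
    high = round(total_steps * high_noise_fraction)
    return high, total_steps - high

for frac in (0.65, 0.75):
    high, low = step_split(24, frac)
    print(f"{frac:.0%} high noise -> {high} high-noise / {low} low-noise steps")
```

With a 24-step budget, 65-75% on the high-noise side leaves 6-8 low-noise steps, consistent with the numbers quoted above.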
The whole thing took roughly two weekends to make, with LoRA training and a clip or two every other day because I didn't have time for it on weekdays. Then I decided to remake half of it this weekend, because it turned out to be far too dark to show to the general public. Therefore, I gutted the sex and most of the gore/violence scenes. In the end it turned out more wholesome, less psychokiller-ish, diverging from the original Bonnie & Clyde idea.
Apart from some artifacts and inconsistencies, you can see a flickering of the background in some scenes, caused by the SEEDVR2 upscaler, happening more or less every 2.5 s. This is caused by my inability to upscale the whole clip in one batch, so the moment of joining the batches is visible. Using a card like the RTX 6000 with 96 GB of VRAM would probably solve this. Moreover, I'm conflicted about going with 2K resolution here; now I think 1080p would be enough, and the Reddit player only allows 1080p anyway.
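One generic way to hide those batch joins (not what was done in this video) is to upscale overlapping chunks and cross-fade the overlap. A minimal sketch, where `upscale` is a placeholder for the real SEEDVR2 call and the chunk/overlap sizes are illustrative:

```python
import numpy as np

def upscale(frames: np.ndarray) -> np.ndarray:
    """Placeholder for the real SEEDVR2 upscaling call."""
    return frames.astype(np.float32)

def chunked_upscale(frames: np.ndarray, chunk: int = 40, overlap: int = 8):
    """Upscale a (T, H, W, C) clip in overlapping chunks and cross-fade the
    overlapping frames so the join between batches is less visible."""
    out = upscale(frames[:chunk])
    start = chunk - overlap
    while start + overlap < len(frames):
        piece = upscale(frames[start:start + chunk])
        # Linear cross-fade across the overlapping frames.
        fade = np.linspace(0.0, 1.0, overlap)[:, None, None, None]
        out[-overlap:] = (1 - fade) * out[-overlap:] + fade * piece[:overlap]
        out = np.concatenate([out, piece[overlap:]])
        start += chunk - overlap
    return out
```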
Higher quality 2k resolution on YT:
https://www.youtube.com/watch?v=DVy23Raqz2k
Nano banana is so incredibly useful.
My parents wanted to repaint their living room, but they couldn't settle on a colour to use.
So I decided to generate some random variations and send them over to see what they think.
They were amazed by the results.
Word of warning: it can't do exact Pantone colours or anything, but it's pretty good for getting a general look at what some basic colours would feel like.
(Don't mind the mess)
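For anyone wanting to try something similar programmatically, a minimal sketch assuming the google-genai Python SDK and the gemini-2.5-flash-image-preview model name; the file names and colour list are made up for illustration:

```python
from google import genai
from PIL import Image

client = genai.Client()  # expects GEMINI_API_KEY in the environment
room = Image.open("living_room.jpg")  # hypothetical photo of the room

for colour in ["sage green", "warm terracotta", "pale grey-blue"]:
    response = client.models.generate_content(
        model="gemini-2.5-flash-image-preview",  # "Nano Banana"; name may change
        contents=[f"Repaint the walls of this room {colour}. "
                  "Keep the furniture, lighting and everything else unchanged.", room],
    )
    # Save whichever parts came back as image data.
    for part in response.candidates[0].content.parts:
        if part.inline_data is not None:
            with open(f"living_room_{colour.replace(' ', '_')}.png", "wb") as f:
                f.write(part.inline_data.data)
```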
Just learned that if you annotate an image you get super good and precise results
Was playing around with Nano Banana and realized that instead of making iterative changes and constantly changing the prompts, you can make several precise edits on one pass.
For example, bring the original photo into an image editor (anything works: Paint, Preview, Photoshop, etc.), put a red box around the area you want to change, then describe what you want in red text and set your prompt as follows:
Read the red text in the image and make the modifications. Remove the red text and boxes.
Then 9 times out of 10 it gets everything right!
Significantly easier than iteratively altering or downloading/uploading the same image, or describing what it is you want to change, especially in group photos.
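A minimal sketch of doing that annotation step in code, assuming the google-genai Python SDK and the gemini-2.5-flash-image-preview model name (the file name, box coordinates and red instruction text are made up for illustration):

```python
from google import genai
from PIL import Image, ImageDraw

client = genai.Client()  # expects GEMINI_API_KEY in the environment

# 1) Annotate the photo: red box around the target area plus red instruction text.
img = Image.open("group_photo.jpg").convert("RGB")
draw = ImageDraw.Draw(img)
draw.rectangle((400, 250, 700, 520), outline="red", width=6)
draw.text((405, 210), "remove the lamp in this box", fill="red")

# 2) Send the annotated photo along with the meta-prompt from the post.
prompt = ("Read the red text in the image and make the modifications. "
          "Remove the red text and boxes.")
response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",  # "Nano Banana"; name may change
    contents=[prompt, img],
)

# 3) Save whichever parts came back as image data.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("edited.png", "wb") as f:
            f.write(part.inline_data.data)
```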
I asked nano banana to get me into my favorite arcade
first pic was real
image editing: nano banana
animations: kling 2.1 start/end frame
music: producer AI
edit: davinci resolve
step by step tutorial:
https://x.com/techhalla/status/1963333488217919668