LIVE TRENDING TOPIC

Gemini Nano Banana

Google's new mind-blowing image-editing model: Nano Banana


Want to monitor your own topics?

Join thousands of brands using Pluggo's AI to track conversations, find leads, and stay ahead of trends across social media platforms.

Real-time social listening across all platforms
AI-powered lead generation and insights
Direct Slack integration for your team
Start Free Monitoring

No credit card required • Setup in 30 seconds

Trusted by 1000+ brands
5.8k upvotes · r/ChatGPT · Posted by u/Ilovekittens345

While OpenAI is going backwards, Google is just killing it, Nano Banana and Veo are just insane tools.


View on Reddit
4.2k upvotes · r/aivideo · Posted by u/TechHalla

I asked nano banana to take me across Middle-earth


images: nano banana
edits: photoshop
upscaler: magnific
animation: kling 2.1
video edit: davinci resolve 20
music: producer AI

View on Reddit
3.2k upvotes · r/singularity · Posted by u/darkally

Linkedin influencers already pumping nano banana selfies, we're fucked


View on Reddit
3.2k upvotes · r/ChatGPT · Posted by u/No-Researcher3893

6 hours of work, $0 spent. Sora 2 is mind-blowing.


Edit: here is a more detailed description:

This video was created with the preview of Sora 2. Only the first two frames came from Nano Banana + Kling image-to-video. At this stage, Sora 2 mainly supports text-to-video; image-to-video allows a single reference image per generation and for now works more like a visual guideline for the scene. For this test I used a quick text-to-video approach. The main drawback is the 480p limit with watermarks, though this may improve in the future. Still, its physics understanding and ability to generate multiple consistent angles in one scene set a new state of the art. The full process took about five to six hours, with music mixing and editing done in After Effects, while most sounds, such as engines, tires, and crashes, came directly from Sora 2.

I believe this marks a new step for filmmaking. If we set aside the obvious flaws, like inconsistent car details and low resolution, a video of this type would have cost tens of thousands of dollars not long ago. Just 8 months ago, a similar video I made took me nearly 80 hours to complete. Of course, that one was more polished, but I think that realistically 10-20 hours for a polished version of something like this will be possible in the very near future.

If you are interested in my work, feel free to also visit my website stefan-aberer.at or my Vimeo: https://vimeo.com/1123454612?fl=pl&fe=sh

View on Reddit
3.0k upvotes · r/factorio · Posted by u/theunluckyguy1124

My compact train unloading design


It's a four-blue-belt unloading station featuring a 1 + 7 train waiting bay.
It uses stacked inserters for lazy unloading on a single side.
Max throughput is 720 items/s per station.
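The 720 items/s figure checks out if we assume Space Age numbers, where blue (express) belts move 45 items/s and stack inserters can pile items four high on a belt; a quick sanity check:

```python
# Sanity check of the station's quoted max throughput.
# Assumptions: Space Age blue belts at 45 items/s, stack inserters
# stacking items 4 high on the belt, 4 output belts per station.
BLUE_BELT_ITEMS_PER_S = 45
STACK_HEIGHT = 4
BELTS = 4

throughput = BELTS * BLUE_BELT_ITEMS_PER_S * STACK_HEIGHT
print(throughput, "items/s")  # 720 items/s
```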

Edit:
The first picture was generated by AI, specifically Google's Nano Banana model.
blueprint: https://factorioprints.com/view/-OZQqRSnciqVawbsbaOy

https://pastebin.com/raw/heAjsKdE

View on Reddit
2.0k upvotes · r/PhrolovaMains · Posted by u/Far_Advisor4745

I Asked Nano Banana To Make Her Smile


Cute..

View on Reddit
1.6k upvotes · r/singularity · Posted by u/SealDraws

Used nano banana to "clean up" visuals for a document


View on Reddit
1.6k upvotes · r/germanshepherds · Posted by u/vysakh_pillai

Fall is here...


7-month-old GSD. Thanks to Nano Banana for the color grading.

View on Reddit
1.6k upvotes · r/aivideo · Posted by u/Ilovekittens345

Paintings coming to life with Nano Banana and Veo 3


View on Reddit
1.6k upvotes · r/StableDiffusion · Posted by u/Ashamed-Variety-8264

Wan 2.2 Realism, Motion and Emotion.


The main idea for this video was to get visuals as realistic and crisp as possible without needing to disguise smeared, bland textures and imperfections with heavy film grain, as is usually done after heavy upscaling. Therefore, there is zero film grain here. The second idea was to make it different from the usual high-quality robotic girl looking in the mirror holding a smartphone. I intended to get as much emotion as I could, with things like subtle mouth movement, eye rolls, brow movement, and focus shifts. Wan can do this nicely; I'm surprised that most people ignore it.

Now some info and tips:

The starting images were made using LOTS of steps, up to 60, upscaled to 4K using SeedVR2, and fine-tuned if needed.

All consistency was achieved only through LoRAs and prompting, so there are some inconsistencies, like jewelry or watches; the character also changed a little, due to a character LoRA change mid-way through generating the clips.

Not a single Nano Banana was hurt making this. I insisted on sticking to pure Wan 2.2 to keep it 100% locally generated, despite knowing many artifacts could be corrected with edits.

I'm just stubborn.

I found myself held back by the quality of my LoRAs; they were just not good enough and needed to be remade. Then I felt held back again, a little bit less, because I'm not that good at making LoRAs :) Still, I left some of the old footage, so the quality difference in the output can be seen here and there.

Most of the dynamic motion generations were incredibly high-noise heavy (65-75% of compute on high noise), with 6-8 low-noise steps using a speed-up LoRA. I used a dozen workflows with various schedulers, sigma curves (0.9 for i2v), and eta values, depending on the scene's needs. It's all basically bongmath with implicit steps/substeps, depending on the sampler used. All starting images and clips were given verbose prompts, with most details prompted explicitly, down to dirty windows and crumpled clothes, leaving not much for the model to hallucinate. I generated at 1536x864 resolution.

The whole thing took roughly two weekends to make, with LoRA training and a clip or two every other day because I didn't have time on weekdays. Then I decided to remake half of it this weekend, because it turned out far too dark to show to the general public. Therefore, I gutted the sex and most of the gore/violence scenes. In the end it turned out more wholesome, less psycho-killer-ish, diverting from the original Bonnie & Clyde idea.

Apart from some artifacts and inconsistencies, you can see background flickering in some scenes, caused by the SeedVR2 upscaler, happening roughly every 2.5 s. This comes from my inability to upscale a whole clip in one batch, so the joins between batches are visible. A card like an RTX 6000 with 96 GB would probably solve this. Moreover, I'm conflicted about going 2K resolution here; now I think 1080p would be enough, and the Reddit player only allows 1080p anyway.
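The ~2.5 s flicker interval implies a fixed batch size in frames; a minimal sketch of where the seams land, assuming a 24 fps output (the frame rate is my assumption, not stated in the post):

```python
# Why batch joins show up at a regular interval: the upscaler processes
# the clip in fixed-size frame batches, and each batch boundary can
# produce a visible seam. 24 fps is an assumed frame rate.
FPS = 24
SEAM_INTERVAL_S = 2.5

frames_per_batch = int(FPS * SEAM_INTERVAL_S)

def seam_frames(total_frames: int) -> list[int]:
    """Frame indices where batch joins (potential flicker) occur."""
    return list(range(frames_per_batch, total_frames, frames_per_batch))

print(frames_per_batch)   # 60 frames per batch
print(seam_frames(240))   # [60, 120, 180]
```

Upscaling the whole clip in one batch (or overlapping the batches and blending the joins) would remove these seams, at the cost of far more VRAM.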

Higher quality 2k resolution on YT:
https://www.youtube.com/watch?v=DVy23Raqz2k

View on Reddit
1.3k upvotes · r/singularity · Posted by u/roshan231

Nano banana is so incredibly useful.


My parents wanted to repaint their living room, but they couldn't settle on a colour to use.

So I decided to generate some random variations and send them over to see what they thought.

They were amazed by the results.

Word of warning: it can't match exact Pantone colours or anything, but it's pretty good for getting a general sense of what some basic colours would feel like.

(Don't mind the mess)

View on Reddit
1.1k upvotes · r/GeminiAI · Posted by u/promptingpixels

Just learned that if you annotate an image you get super good and precise results


Was playing around with Nano Banana and realized that instead of making iterative changes and constantly changing the prompts, you can make several precise edits on one pass.

For example, bring the original photo into an image editor (anything works - Paint, Preview, Photoshop, etc.), put a red box around the area you want to change, describe the change in red text, and then set your prompt as follows:

Read the red text in the image and make the modifications. Remove the red text and boxes.

Then 9 times out of 10 it gets everything right!

Significantly easier than iteratively altering, downloading/uploading the same image, or describing what you want to change, especially in group photos.
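The same annotate-then-edit trick works over the Gemini REST API; a stdlib-only sketch that builds the request for an annotated screenshot (the filename and the `gemini-2.5-flash-image` model name are assumptions, and the request only fires if an API key is set):

```python
import base64
import json
import os
import urllib.request

# Hypothetical filename; any photo annotated with red boxes/text works.
IMAGE_PATH = "photo_with_red_boxes.png"
PROMPT = ("Read the red text in the image and make the modifications. "
          "Remove the red text and boxes.")

def build_payload(image_bytes: bytes) -> dict:
    """One-pass edit request: the annotated image plus the fixed prompt."""
    return {
        "contents": [{
            "parts": [
                {"inline_data": {
                    "mime_type": "image/png",
                    "data": base64.b64encode(image_bytes).decode("ascii"),
                }},
                {"text": PROMPT},
            ]
        }]
    }

# Only send the request when a key and the image actually exist.
if os.environ.get("GEMINI_API_KEY") and os.path.exists(IMAGE_PATH):
    url = ("https://generativelanguage.googleapis.com/v1beta/models/"
           "gemini-2.5-flash-image:generateContent"  # model name is an assumption
           f"?key={os.environ['GEMINI_API_KEY']}")
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(open(IMAGE_PATH, "rb").read())).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))  # edited image returns as inline_data parts
```

Since all the instructions live inside the image, the text prompt stays constant across edits, which is what makes the one-pass approach convenient.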

View on Reddit
1.1k upvotes · r/aivideo · Posted by u/TechHalla

I asked nano banana to get me into my favorite arcade


The first pic was real.

image editing: nano banana
animations: kling 2.1 start/end frame

music: producer AI

edit: davinci resolve

step by step tutorial:
https://x.com/techhalla/status/1963333488217919668

View on Reddit
Pricing

Choose Your Plan

Find the perfect tier to automatically uncover high-intent customer opportunities across social platforms.

Free

Start your journey

$0
Forever free
Features
  • Keywords
    Number of keywords you can track
  • Smart Monitoring
    AI-powered filtering of relevant mentions
  • Monthly Mentions
    Number of mentions processed per month
  • Data Refresh Rate
    How frequently new opportunities are checked
  • Smart Community Search
    Find relevant communities across platforms
  • Smart Replies
    AI-generated engagement suggestions
  • Slack Integration
    Send opportunities directly to Slack channels
  • AI Discovery
    AI agents autonomously search beyond your keywords
  • Copilot
    Interactive AI agent to help you with specific research tasks
  • LinkedIn Monitoring
    Professional network monitoring and engagement
  • MCP
    Connect Pluggo to AI assistants like Claude and Cursor
Most Popular

Starter

For individuals & small teams

$29/mo (regularly $59)
Billed yearly
Features
  • Keywords
    3 keywords
  • Smart Monitoring
    Reddit, Twitter, Facebook, HackerNews & Bluesky
  • Monthly Mentions
    5,000 mentions
  • Data Refresh Rate
    Every day
  • Smart Community Search
    Find relevant communities across platforms
  • Smart Replies
    AI-generated engagement suggestions
  • Slack Integration
    Send opportunities directly to Slack channels
  • AI Discovery
    AI agents autonomously search beyond your keywords
  • Copilot
    Interactive AI agent to help you with specific research tasks
  • LinkedIn Monitoring
    Professional network monitoring and engagement
  • MCP
    Connect Pluggo to AI assistants like Claude and Cursor

Growth

Scale your outreach

$69/mo (regularly $129)
Billed yearly
Features
  • Keywords
    10 keywords
  • Smart Monitoring
    Reddit, Twitter, Facebook, HackerNews, Bluesky & LinkedIn
  • Monthly Mentions
    15,000 mentions
  • Data Refresh Rate
    Every 12 hours
  • Smart Community Search
    Find relevant communities across platforms
  • Smart Replies
    AI-generated engagement suggestions
  • Slack Integration
    Send opportunities directly to Slack channels
  • AI Discovery
    AI agents autonomously search beyond your keywords
  • Copilot
    Interactive AI agent to help you with specific research tasks
  • LinkedIn Monitoring
    Professional network monitoring and engagement
  • MCP
    Connect Pluggo to AI assistants like Claude and Cursor

Frequently Asked Questions

Everything you need to know about finding your next customer on social media.

Still have questions? We'd love to help!
Reach out at hello@pluggo.ai

Stay ahead of trends

Discover what's trending now

Get real-time insights from trending conversations across social media. Understand what your audience is talking about and discover emerging trends before they become mainstream.

Free 7-day trial • No credit card required