Tutorial · 2026-05-03 · 10 min read

How to Automate Your Social Media Content Pipeline with AI

One creative brief in, three platform-sized social images out. Build a flow that generates a hero image and resizes it for Instagram, Twitter, and LinkedIn.

PlugNode Team

One creative brief goes in. Three platform-ready images come out, each sized correctly for Instagram, Twitter, and LinkedIn. I built this flow on PlugNode's canvas in about ten minutes, and it runs in under 40 seconds per execution.

This tutorial walks through every node, every wire, and every config field. You'll finish with a working flow that takes a text brief, generates a campaign hero image, and fans it out into three platform-specific sizes, ready to post or schedule.

What you'll build

An 8-node flow that chains two AI models and a resize utility:

  1. Gemini refines your brief into a visual direction prompt
  2. Nano Banana Pro generates the hero image from that prompt
  3. Image Resize fans the hero into three platform sizes: Instagram (1080x1080), Twitter (1200x628), LinkedIn (1200x627)

The output: three image files you can download and drop into your scheduler.

| Node | Type | Model / Provider | Purpose |
| --- | --- | --- | --- |
| manual-trigger | Trigger | None | Starts the flow |
| text-input | Input | None | Your creative brief |
| textGeneration | Text | Gemini 2.5 Flash | Refines brief into visual direction |
| imageGeneration | Image | Nano Banana Pro | Generates the hero image |
| image-resize-ig | Utility | None | Crops to 1080x1080 (Instagram) |
| image-resize-tw | Utility | None | Crops to 1200x628 (Twitter) |
| image-resize-li | Utility | None | Crops to 1200x627 (LinkedIn) |
| output | Output | None | Collects all three variants |

The resizing problem

Every social media manager knows this routine. You get one hero image from design. Then you open Canva, Figma, or Photoshop and manually export three, four, sometimes six different crops. Instagram wants a square. Twitter wants a wide rectangle. LinkedIn wants almost the same wide rectangle but one pixel shorter. Pinterest wants a tall portrait.

That export step takes 10-15 minutes per image. Multiply by 20 posts per week and you're spending half a day on resizing. Not creating. Resizing.

This flow eliminates that step. The Image Resize node handles the crop and scale. You write one brief, generate one hero, and get every platform variant in a single run.

Prerequisites

  • A PlugNode account (free tier works)
  • A Gemini API key added in Settings
  • A creative brief (even one sentence works)

Open a blank canvas from your dashboard. Everything below happens on that canvas.

Step 1: Add the trigger and input

Drag a Manual Trigger node onto the canvas. Add a Text Input node and label it "Creative Brief."

Your brief should include three things: the topic, the visual style, and any brand constraints. Example:

Summer sale campaign for a DTC sunglasses brand.
Style: bright, tropical, lifestyle photography feel.
Colors: coral, turquoise, white. No text overlays on the image.
Product: oversized square-frame sunglasses on a beach setting.

Wire the Manual Trigger to the Text Input. Wire the Text Input to the next node.

Step 2: Refine the brief with Gemini

Add a Text node. Open its config panel and select Gemini 2.5 Flash as the model.

Set the system prompt:

You are a visual art director. Given a creative brief, write a detailed
image generation prompt optimized for AI image models.
 
Rules:
- Describe the scene composition, lighting, color palette, and mood
- Specify camera angle and framing
- Include style references (editorial, lifestyle, product, flat lay, etc.)
- Do not include text, logos, or watermarks in the description
- Keep the prompt under 200 words
- Output only the image prompt, no commentary

Wire the Text Input's output to the Text node's prompt port.

Why add this step instead of sending the brief directly to the image model? Two reasons. First, Gemini expands vague briefs into specific visual language that image models respond to better. "Bright and tropical" becomes "golden hour lighting on white sand, coral and turquoise color palette, shallow depth of field, editorial lifestyle photography." Second, it catches mismatches early. If your brief says "no text overlays," Gemini reinforces that in the image prompt.

I tested this with the sunglasses brief above. Gemini returned a prompt in 0.8 seconds:

Overhead-angled lifestyle photograph of oversized square-frame sunglasses
resting on white sand. Golden hour sunlight casts long warm shadows.
Background: turquoise ocean water, soft bokeh. Color palette: coral
accents on the frame, turquoise water, warm white sand. Style: editorial
lifestyle photography, shallow depth of field. No text, no logos,
no watermarks. Composition: rule of thirds, product centered in lower
third. Aspect ratio: square.
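PlugNode makes the Gemini call for you, but if you want to reproduce this refinement step outside the canvas, the request body for Gemini's REST `generateContent` endpoint looks roughly like this. This is a sketch against Google's public API docs, with the system prompt abridged; it is not PlugNode code:

```python
import json

# Abridged version of the art-director system prompt from the node config.
SYSTEM_PROMPT = (
    "You are a visual art director. Given a creative brief, write a detailed "
    "image generation prompt optimized for AI image models. "
    "Output only the image prompt, no commentary."
)

def build_payload(brief: str) -> dict:
    """Build a generateContent request body for the Gemini REST API (v1beta).

    Field names follow Google's public REST documentation: a
    systemInstruction block plus a user turn carrying the brief.
    """
    return {
        "systemInstruction": {"parts": [{"text": SYSTEM_PROMPT}]},
        "contents": [{"role": "user", "parts": [{"text": brief}]}],
    }

# POST json.dumps(build_payload(brief)) to
# https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent
# with your API key in the x-goog-api-key header.
```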

Step 3: Generate the hero image

Add an Image node. Select Nano Banana Pro as the model.

Wire the Text node's output to the Image node's prompt port. In the config, set the aspect ratio to 1:1 (this gives us the most flexibility for downstream cropping).

Generation takes 8-15 seconds. The node shows a progress indicator while it works.

I ran this three times with the same prompt to check consistency. All three outputs maintained the coral/turquoise palette and beach composition. The framing varied slightly, which is expected. Pick the variant you like best, or re-run until one clicks.

Step 4: Fan out to platform sizes

This is where the flow pays for itself. Add three Image Resize nodes to the canvas.

Node 1: Instagram

  • Label: "Instagram 1080x1080"
  • Width: 1080
  • Height: 1080
  • Mode: Cover (crops to fill, no letterboxing)

Node 2: Twitter

  • Label: "Twitter 1200x628"
  • Width: 1200
  • Height: 628
  • Mode: Cover

Node 3: LinkedIn

  • Label: "LinkedIn 1200x627"
  • Width: 1200
  • Height: 627
  • Mode: Cover

Wire the Image node's output to all three Image Resize nodes. They run in parallel because none depends on the others.

Cover mode is important here. It crops from the center to fill the target dimensions without distortion. If your hero image has the subject off-center, you may want to adjust the source composition in Step 3 by updating the Gemini prompt to specify center framing.

Processing time: under 1 second per resize. All three finish almost instantly.
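Under the hood, a Cover-mode resize is a centered crop to the target aspect ratio followed by a scale. A minimal sketch of the crop geometry in plain Python (PlugNode's actual implementation isn't published; this just shows the math):

```python
def cover_crop_box(src_w: int, src_h: int, dst_w: int, dst_h: int) -> tuple:
    """Return the centered (left, top, right, bottom) crop box, in source
    pixels, matching the target aspect ratio (Cover mode: fill the frame,
    no letterboxing). Scale the cropped region to dst_w x dst_h afterward."""
    if src_w * dst_h > src_h * dst_w:
        # Source is wider than the target ratio: trim the sides.
        new_w = round(src_h * dst_w / dst_h)
        new_h = src_h
    else:
        # Source is taller (or the ratios match): trim top and bottom.
        new_w = src_w
        new_h = round(src_w * dst_h / dst_w)
    left = (src_w - new_w) // 2
    top = (src_h - new_h) // 2
    return (left, top, left + new_w, top + new_h)
```

Feeding a 2048x2048 hero and the Twitter dimensions returns `(0, 488, 2048, 1560)`: the full width survives, with 488 pixels trimmed off the top and bottom. That is also why a centered subject matters for wide crops.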

Step 5: Collect outputs

Add an Output node. Wire all three Image Resize nodes into it:

  1. Instagram resize output to the Output node
  2. Twitter resize output to the Output node
  3. LinkedIn resize output to the Output node

Also wire the original Image node output to the Output node. This gives you four files: the full-resolution hero plus three platform crops.

Step 6: Run the flow

Click Run in the toolbar. Execution order:

  1. Trigger fires
  2. Text Input resolves (your brief)
  3. Gemini refines the visual direction (~0.8s)
  4. Nano Banana generates the hero image (~12s)
  5. Three Image Resize nodes run in parallel (~0.5s each)
  6. Output collects all four files

Total wall-clock time on my test run: 14 seconds. The image generation is the bottleneck. Everything else is fast.

Open the Execution Log to see per-node timing. Download all four images from the execution panel.

Adding more platforms

Need Pinterest (1000x1500)? Add another Image Resize node with those dimensions and wire it from the same Image node output. Facebook cover (820x312)? Same approach.

Each new resize node adds less than a second of processing time. The image generation step only runs once regardless of how many sizes you need.

Common platform dimensions for reference:

| Platform | Dimensions | Aspect ratio |
| --- | --- | --- |
| Instagram post | 1080x1080 | 1:1 |
| Instagram story | 1080x1920 | 9:16 |
| Twitter post | 1200x628 | ~1.91:1 |
| LinkedIn post | 1200x627 | ~1.91:1 |
| Pinterest pin | 1000x1500 | 2:3 |
| YouTube thumbnail | 1280x720 | 16:9 |
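If you script against the flow, the table above maps directly to a small lookup. A quick sketch (the dict and helper names are my own, not a PlugNode API):

```python
# Common platform dimensions as (width, height) in pixels.
PLATFORM_DIMS = {
    "instagram_post": (1080, 1080),
    "instagram_story": (1080, 1920),
    "twitter_post": (1200, 628),
    "linkedin_post": (1200, 627),
    "pinterest_pin": (1000, 1500),
    "youtube_thumbnail": (1280, 720),
}

def aspect_ratio(width: int, height: int) -> float:
    """Width-to-height ratio, rounded to two decimals for comparison."""
    return round(width / height, 2)
```

Note that Twitter and LinkedIn round to the same 1.91 ratio even though they differ by one pixel, which is why their crops look identical.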

Connecting to Buffer, Later, or Hootsuite

Once the flow works manually, you can wire it into your scheduling tool.

Option 1: Manual download. Run the flow, download the sized images, upload them to your scheduler. Still saves 10-15 minutes per post compared to manual resizing.

Option 2: Publish as an API. Replace the Manual Trigger with an HTTP Trigger node. Add a Respond to Webhook node. Publish the flow. Your scheduling tool (or a Zapier/Make connector) calls the endpoint with a brief and receives the sized images in the response.

Option 3: Webhook chain. If your scheduler supports incoming webhooks, POST the generated images directly from a downstream HTTP node. Brief in, platform-ready images out, no manual handoff.

The API version looks like this:

POST https://plugnode.ai/api/trigger/{secret}/{nodeId}
Content-Type: application/json
 
{
  "brief": "Summer sale campaign for DTC sunglasses brand. Bright, tropical, lifestyle feel."
}

Response includes all sized image files. Append ?wait=true for synchronous delivery.
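A minimal Python client for that endpoint, using only the standard library. The secret and node ID here are placeholders; copy the real values from your flow's publish panel:

```python
import json
import urllib.request

# Hypothetical values -- substitute the secret and node ID from your
# published flow's trigger URL.
SECRET = "your-trigger-secret"
NODE_ID = "your-http-trigger-node-id"

def build_request(brief: str, wait: bool = True) -> urllib.request.Request:
    """Build the POST request for the published flow's trigger endpoint."""
    url = f"https://plugnode.ai/api/trigger/{SECRET}/{NODE_ID}"
    if wait:
        url += "?wait=true"  # block until the run finishes, return the images
    body = json.dumps({"brief": brief}).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

# To actually fire the flow:
# with urllib.request.urlopen(build_request("Summer sale campaign...")) as resp:
#     result = json.load(resp)
```

For batch work, call this once per brief from a loop or a Make/Zapier iterator; each call is one flow run.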

Cost breakdown

Here is what one run costs at standard provider rates:

| Node | Cost |
| --- | --- |
| Gemini 2.5 Flash (text) | ~$0.001 |
| Nano Banana Pro (image) | $0.01-0.03 |
| Image Resize (x3) | Free (runs locally) |
| Total | ~$0.01-0.03 |

No PlugNode markup. You pay the AI providers directly through your own API keys.

For context: a freelance designer charges $25-50 per social media graphic set. Running this flow 100 times costs roughly $2. At 20 posts per week (roughly 87 runs a month), that is under $3 per month in AI costs to generate every hero image and all platform variants.

Troubleshooting

Image node produces a blank or irrelevant output. The Gemini prompt may be too abstract. Add grounding details in Step 2: specific objects, colors, and composition. Concrete prompts produce better images.

Resize crops out the subject. The hero image has the subject off-center. Go back to Step 2 and add "center the product in the frame" to the Gemini system prompt. Cover mode crops from the center, so a centered subject survives all crops.

Colors look different after resize. Image Resize preserves the original color profile. If colors shift, the issue is likely your display or the platform's compression. Export at the highest quality setting and let the platform handle its own compression.

Gemini ignores style instructions. Move critical constraints (like "no text overlays") to the beginning of your system prompt. Models weight instructions near the top more heavily.

What's next

This flow handles single-image campaigns. For higher-volume production, consider these extensions:

  • Add a second Image node with a different style prompt to generate A/B visual variants from the same brief
  • Chain a Text node after the first to generate platform-specific captions (Instagram hashtags, Twitter copy, LinkedIn professional tone)
  • Wire the API version into your content calendar. Each Monday, post a batch of briefs. The flow generates every image for the week.

The full use case page is at /use-cases/social-media-content-pipeline.

FAQ

Can I swap Nano Banana for a different image model?

Yes. The Image node supports multiple providers. Open config, switch the model. GPT Image and other supported models work with the same wiring. The rest of the flow stays connected.

What if I need text on the image (like a sale percentage)?

Generate the image without text first, then add text overlays in your design tool or scheduling platform. AI image models are inconsistent with text rendering. Keeping text separate gives you reliable typography.

Can I process multiple briefs in one run?

Not in a single flow run. Each run processes one brief. For batch processing, publish the flow as an API and call the endpoint once per brief from your backend or a Make/Zapier loop.

Does this work for video content too?

This specific flow generates static images. For video, check out the product video ads tutorial, which covers the Video node with Veo. You could combine both flows: generate the hero image here, then feed it into a video flow for animated social content.

What happens if I need a custom aspect ratio not listed above?

The Image Resize node accepts any width and height values. Enter your custom dimensions and set the mode to Cover or Contain. No preset list limits you.
