n8n is the best self-hosted automation platform for people who actually want to own their workflows. No vendor lock-in, no per-task pricing that scales into oblivion, no waiting for Zapier to add the integration you need. You host it, you control it, and since v1.0 the AI nodes have gotten genuinely good.

The problem isn't the tool. It's the blank canvas. You open n8n, stare at an empty workflow editor, and think: what should I actually build? You know AI can do things. You know n8n can connect things. But turning that into a workflow that saves you real hours every week is where most people stall.

We built 12 workflow templates that solve this. Each one is a complete, importable JSON file, not a screenshot of someone's workflow with "figure out the rest" energy. Real nodes, real connections, real prompts. Import, add your API keys, activate.

This post walks through 6 of the 12 in detail. You'll get the full architecture, the node-by-node breakdown, and enough context to build them yourself if you want. The remaining 6 are in the full kit.

01 AI Lead Researcher

This workflow turns a messy Google Sheet of names and emails into a scored, enriched pipeline. Automatically, on a schedule, with zero manual research.

The architecture

Schedule Trigger (daily, 8am)
  → Google Sheets (read unprocessed leads)
  → HTTP Request (enrich via Apollo/Clearbit/your provider)
  → AI Agent — OpenAI (score lead against your ICP)
  → Switch (hot / warm / cold)
    → [hot] Slack notification + Google Sheets (write score)
    → [warm/cold] Google Sheets (write score, skip alert)

The Schedule Trigger fires daily. It pulls every row from your "Lead Sources" sheet where the processed column is empty. Each lead passes through an HTTP Request node that hits your enrichment API — Apollo.io's free tier works fine here, or skip this node entirely if you already have company data.

The interesting part is the AI scoring node. It's an OpenAI Chat Model node with a system prompt that describes your ideal customer profile:

You are a lead scoring assistant. Score this lead 1-100 based on:
- Company size (10-500 employees = ideal)
- Industry match (SaaS, ecommerce, agencies)
- Role seniority (founder, VP, director = high)
- Tech stack signals (uses automation tools = high)

Return JSON only:
{
  "score": 85,
  "tier": "hot",
  "reason": "SaaS founder, 50 employees, already uses Zapier"
}

A Code node parses the AI response, and a Switch node routes based on the tier field. Hot leads trigger a Slack message with the lead's name, score, and the AI's reasoning. Every lead gets written back to a "Scored Leads" sheet regardless of tier.
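The parsing step is worth sketching, because models occasionally wrap their JSON in markdown fences despite the "JSON only" instruction. A minimal version of that Code node logic, with illustrative field names rather than the template's exact code:

```javascript
// Sketch of the Code node's parsing logic (names are illustrative).
// Strips markdown fences the model sometimes adds, then parses the JSON;
// a malformed reply falls back to "cold" so one bad row doesn't halt the run.
function parseAiReply(raw) {
  const fences = new RegExp('\u0060{3}(json)?', 'g'); // \u0060 = backtick
  const cleaned = raw.replace(fences, '').trim();
  try {
    return JSON.parse(cleaned);
  } catch (e) {
    return { score: 0, tier: 'cold', reason: 'unparseable AI response' };
  }
}

const scored = parseAiReply('{"score": 85, "tier": "hot", "reason": "SaaS founder"}');
// scored.tier === "hot", scored.score === 85
```

The fallback tier matters more than it looks: without it, a single garbled reply stops the batch and leaves the rest of the sheet unscored.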

Setup in 5 minutes

  1. Import the JSON into n8n
  2. Connect your Google Sheets credential and create two sheets: "Lead Sources" (name, email, company, source, processed) and "Scored Leads"
  3. Add your OpenAI API key to the AI node
  4. Connect Slack and pick a channel for hot lead alerts
  5. Edit the scoring prompt to match your actual ICP
  6. Activate

The workflow handles batching automatically. Fifty leads in the sheet? It processes them sequentially, respects rate limits, and marks each row as processed so it never double-scores.

02 Content Repurposer

You record one video (a podcast episode, a webinar, a Loom walkthrough) and this workflow turns it into 6 pieces of content. Blog post, Twitter thread, LinkedIn post, newsletter section, short-form video script, and key quotes. All written in your voice, all saved to a content library.

The architecture

Webhook Trigger (receives video URL)
  → HTTP Request (submit to transcription API)
  → Wait (poll for transcript completion)
  → HTTP Request (fetch transcript)
  → Split into 6 parallel branches:
    → OpenAI: "Write blog post from transcript"
    → OpenAI: "Write Twitter thread from transcript"
    → OpenAI: "Write LinkedIn post from transcript"
    → OpenAI: "Write newsletter section from transcript"
    → OpenAI: "Write 3 short-form video scripts from transcript"
    → OpenAI: "Extract 5 quotable moments from transcript"
  → Merge (combine all outputs)
  → Google Sheets (save to content library)

The trigger is a Webhook node. You POST a JSON body with the video URL and it kicks off the whole pipeline:

curl -X POST https://your-n8n.com/webhook/content-repurpose \
  -H "Content-Type: application/json" \
  -d '{"videoUrl": "https://youtube.com/watch?v=..."}'

Transcription uses AssemblyAI (there's a generous free tier), but you can swap in Deepgram or OpenAI Whisper. The Wait node polls until the transcript is ready, usually 2-5 minutes for a 30-minute video.
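The poll check itself reduces to a one-field comparison. Assuming an AssemblyAI-style response (its transcript endpoint reports a status of queued, processing, completed, or error), the condition the Wait loop evaluates looks like this; in n8n it would live in an IF or Code node:

```javascript
// Sketch of the poll decision, assuming an AssemblyAI-style status field.
// Returns true when the transcript is ready; surfaces API-side failures
// instead of looping forever on an errored job.
function transcriptReady(response) {
  if (response.status === 'error') {
    throw new Error('Transcription failed: ' + (response.error || 'unknown'));
  }
  return response.status === 'completed';
}

transcriptReady({ status: 'processing' }); // false — Wait node loops again
transcriptReady({ status: 'completed' });  // true — proceed to fetch transcript
```

The error branch is the detail people skip: without it, a failed transcription job just spins in the Wait loop until the workflow times out.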

Once the transcript lands, the workflow splits into 6 parallel OpenAI nodes. Each has a purpose-built prompt. The blog post prompt asks for headers, structure, and a minimum word count. The Twitter thread prompt asks for a hook tweet, 5-8 value tweets, and a CTA. They all run simultaneously. Total AI processing time is whatever the slowest branch takes, usually 15-20 seconds.

A Merge node combines all 6 outputs into a single object, and a final Google Sheets node saves them as columns in your content library. One row per video, six columns of ready-to-publish content.

Why this matters

A 30-minute video contains enough raw material for a week of content. Without this workflow, repurposing takes 3-4 hours of manual work. With it, you fire a webhook and check your sheet 5 minutes later. The AI drafts aren't perfect (you'll spend 10-15 minutes editing), but you're editing, not writing from scratch.

03 Smart Email Sequence

Cold outreach that doesn't sound cold. This workflow takes a lead's name, email, company, and a bit of context, then writes a personalized first email, sends it, waits, checks for replies, and follows up if they haven't responded.

The architecture

Webhook Trigger (new lead data)
  → OpenAI (write personalized email #1)
  → Gmail (send email #1)
  → Google Sheets (log: email1_sent = true)
  → Wait (2 days)
  → Gmail (check for reply)
  → IF (replied?)
    → [yes] Slack notification ("Lead replied!")
    → [no] OpenAI (write follow-up email #2)
      → Gmail (send email #2)
      → Google Sheets (log: email2_sent = true)

The webhook expects a payload like this:

{
  "name": "Sarah Chen",
  "email": "[email protected]",
  "company": "Acme Corp",
  "context": "They just raised a Series A, hiring for ops roles"
}

The first OpenAI node writes email #1. The prompt includes your value prop, your tone guidelines, and the lead's context. The context field is the secret — it gives the AI something specific to reference, so the email doesn't read like a template.

After sending via Gmail, the workflow enters a Wait node set to 2 days. When it resumes, a Gmail node searches for replies from that email address. An IF node checks the result: if they replied, you get a Slack ping. If not, a second OpenAI node writes a follow-up that references the first email and takes a different angle.
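The reply check is just a Gmail search query built from the lead's address. A sketch of how that expression comes together (`from:` and `newer_than:` are standard Gmail search operators; keep the day window in sync with the Wait node's duration):

```javascript
// Build the Gmail search string for the reply-check node.
// waitDays should match the Wait node so old replies aren't re-counted.
function replyQuery(email, waitDays) {
  return 'from:' + email + ' newer_than:' + waitDays + 'd';
}

console.log(replyQuery('[email protected]', 2));
// "from:[email protected] newer_than:2d"
```

If you later stretch the Wait node to 3 or 4 days, change the query window in the same edit; a mismatch either misses replies or matches stale ones.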

Key detail: the Wait node

n8n's Wait node pauses execution and resumes later. This means the workflow stays "running" in your executions list for 2 days. That's normal. The workflow must stay active (not just saved) for Wait nodes to fire. If you deactivate and reactivate, queued waits are lost.

Tip: Gmail free accounts allow ~500 sends per day. If you're doing volume outreach, use Google Workspace or swap the Gmail node for an SMTP node pointing at your transactional email provider.

Want all 12 workflows as importable JSON?

Every workflow in this post (plus 6 more) is available as a ready-to-import n8n template. Each includes the JSON file, a setup guide, and the AI prompts already written. Import, add your API keys, activate.

Get the AI Automation Starter Kit — $39

04 Client Onboarding Autopilot

A new client pays through Stripe. Within 60 seconds, before you've even seen the notification, a Google Drive folder exists with their name on it, a welcome email is in their inbox, a Notion project is created with your standard task template, they're added to your CRM sheet, and your team gets a Slack message.

The architecture

Webhook Trigger (Stripe: checkout.session.completed)
  → Set node (extract client name, email, plan, Stripe ID)
  → Google Drive (create client folder)
  → 3 parallel branches:
    → Gmail (send welcome email with onboarding steps)
    → Notion (create project from template)
    → Google Sheets (add row to CRM)
  → Slack (notify team: "New client: Sarah Chen — Pro Plan")

The trigger is a Webhook node that receives Stripe's checkout.session.completed event. You register the webhook URL in Stripe → Developers → Webhooks. A Set node extracts the fields you need from Stripe's (frankly bloated) payload:

// Set node expressions:
clientName: {{ $json.data.object.customer_details.name }}
clientEmail: {{ $json.data.object.customer_details.email }}
plan: {{ $json.data.object.metadata.plan }}
stripeId: {{ $json.data.object.customer }}
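For reference, those expressions read from an event body shaped roughly like this. Heavily trimmed sketch; Stripe's real payload carries far more fields, and metadata.plan only exists if your Checkout integration sets it:

```javascript
// Trimmed sketch of a checkout.session.completed event body.
// Only the fields the Set node reads are shown.
const event = {
  type: 'checkout.session.completed',
  data: {
    object: {
      customer: 'cus_ABC123', // → stripeId
      customer_details: { name: 'Sarah Chen', email: '[email protected]' },
      metadata: { plan: 'Pro' }, // only present if set by your Checkout code
    },
  },
};

const clientName = event.data.object.customer_details.name; // "Sarah Chen"
const plan = event.data.object.metadata.plan;               // "Pro"
```

Tracing the paths once like this makes it obvious why the Set node exists: everything downstream gets four flat fields instead of Stripe's nesting.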

The Google Drive node creates a folder named {{ clientName }} — {{ plan }} inside your Clients parent folder. The Gmail node sends a templated welcome email — you write this once in the node, with placeholders for the client's name and a link to your onboarding doc or Calendly.

Notion integration uses n8n's native Notion node. You point it at your projects database, map the fields (client name, email, start date, plan), and it creates a new page. If you use Notion templates, the new page inherits your standard project structure.

Everything runs in parallel where possible. The Drive folder, email, Notion project, and CRM entry all fire simultaneously. The Slack notification fires last, after all branches complete, so by the time your team sees "New client," everything is already set up.

Why Stripe webhooks beat polling

You could poll Stripe's API every 5 minutes to check for new payments. Don't. Webhooks are instant, use zero resources when nothing happens, and are the pattern Stripe explicitly recommends. The webhook payload contains everything you need. No follow-up API calls required.

05 Web Scraper Pipeline

This one runs on a schedule, scrapes a list of URLs, uses AI to extract structured data from raw HTML, detects changes since the last run, and alerts you when something new appears. Competitor pricing pages, job boards, industry directories. Anything you'd otherwise check manually.

The architecture

Schedule Trigger (every 6 hours)
  → Google Sheets (read URL list)
  → Loop: for each URL:
    → HTTP Request (fetch page HTML)
    → OpenAI (extract structured data from HTML)
    → Code node (compare with previous data, detect changes)
    → IF (changed?)
      → [yes] Google Sheets (write new data) + Slack alert
      → [no] skip
  → Google Sheets (update last_scraped timestamps)

Your URL list lives in a Google Sheet with columns: url, selector (optional CSS selector to narrow the HTML), and last_scraped. The workflow iterates through each URL using n8n's Loop Over Items node.

For each URL, an HTTP Request node fetches the page. If you specified a CSS selector, a Code node extracts just that section. The raw HTML then goes to an OpenAI node with a prompt like:

Extract the following fields from this HTML:
- Product name
- Price (number only, no currency symbol)
- Availability (in stock / out of stock)
- Last updated date

Return as JSON array. One object per product.
Ignore navigation, headers, footers.

The AI is surprisingly good at this. It handles messy HTML, inconsistent formatting, and pages that would break a traditional CSS-selector scraper. A Code node then compares the extracted data against what was stored from the last run, flagging rows where values changed.
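The comparison step is a field-by-field diff keyed on something stable. A minimal sketch, keying on product name (the template's actual key and field list may differ):

```javascript
// Compare newly extracted rows against the previous run, keyed by name.
// Returns new products plus any row whose price or availability changed.
function detectChanges(previous, current) {
  const prevByName = new Map(previous.map(p => [p.name, p]));
  return current.filter(item => {
    const old = prevByName.get(item.name);
    if (!old) return true; // product not seen last run
    return old.price !== item.price || old.availability !== item.availability;
  });
}

const changed = detectChanges(
  [{ name: 'Widget', price: 19, availability: 'in stock' }],
  [{ name: 'Widget', price: 24, availability: 'in stock' }],
);
// changed contains the Widget row: the price moved from 19 to 24
```

Only the changed rows flow on to the alert branch, which is what keeps the Slack channel quiet on the runs where nothing moved.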

Dealing with anti-scraping

Some sites block automated requests. The HTTP Request node supports custom headers — set a realistic User-Agent at minimum. For tougher targets, route through a proxy by configuring the node's proxy settings. The workflow template includes a headers column in the URL sheet so you can set per-URL overrides.

06 Team AI Assistant (Telegram Bot)

A Telegram bot that your team can message with questions about your company (processes, policies, how-tos) and get instant answers sourced from your actual documentation. Every Q&A gets logged so you can see what people are asking and improve your docs.

The architecture

Telegram Trigger (new message in group)
  → IF (starts with /ask?)
    → [yes] HTTP Request (search company docs API)
      → OpenAI (generate answer from search results + question)
      → Telegram (reply with answer)
      → Google Sheets (log: timestamp, user, question, answer, source)
    → [no] ignore

The Telegram Trigger node listens for messages in your team group. An IF node filters for messages starting with /ask — this prevents the bot from firing on every message in the group.

When someone types /ask How do I submit an expense report?, the workflow strips the command prefix and sends the question to your docs search endpoint. This can be a Notion API search, a Confluence query, or even a simple JSON file hosted somewhere. The search results — ideally 2-3 relevant doc snippets — get passed to an OpenAI node along with the original question.
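Stripping the command prefix is a one-liner in a Code node. A sketch (the /ask literal mirrors the IF filter; field wiring to the Telegram payload is up to you):

```javascript
// Pull the question text out of a "/ask ..." Telegram message.
// Returns null for anything else, mirroring the IF node's filter.
function extractQuestion(messageText) {
  const match = /^\/ask\s+(.+)/s.exec(messageText.trim());
  return match ? match[1].trim() : null;
}

console.log(extractQuestion('/ask How do I submit an expense report?'));
// "How do I submit an expense report?"
```

Returning null for non-commands (including a bare "/ask" with no question) gives the downstream nodes one clean field to check instead of re-parsing the raw message.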

The AI prompt is critical here:

You are a helpful assistant for the team at [Your Company].
Answer the question using ONLY the provided documentation excerpts.
If the docs don't contain the answer, say "I couldn't find this
in our docs — try asking [person/channel]."

Do not make up information. Cite which document the answer came from.

The answer goes back via a Telegram Send Message node as a reply to the original message. A final Google Sheets node logs the entire exchange: timestamp, who asked, what they asked, what the bot answered, and which docs it cited.

The logging is the real value

After a month of logs, you'll see patterns. The same 10 questions get asked over and over. Those are your documentation gaps. Fix them, update the docs, and the bot's answers improve automatically. It's a self-improving knowledge base powered by your team's actual questions.

What Else Is in the Kit

The 6 workflows above are the ones we could cover in depth here. The full AI Automation Starter Kit includes 6 more templates on top of these.

Every template includes the importable JSON file, a step-by-step setup guide, and the AI prompts already written and tested. Most use Google Sheets as the database (free, familiar, no setup), OpenAI for the AI nodes (swappable for Anthropic or Groq), and Slack for notifications.

Requirements

You need an n8n instance (self-hosted or cloud, v1.0+), an OpenAI API key, and a Google account. Individual workflows may use Stripe, Telegram, or other services. Each guide lists exactly what's needed. Total API costs for running all 12 workflows: roughly $5-15/month in OpenAI credits depending on volume.

Get All 12 Workflow Templates

12 n8n workflow JSON files. 12 setup guides. All the AI prompts written and tested. Import into your n8n instance, add API keys, and activate. Stop building from scratch.

Download the AI Automation Starter Kit — $39

Getting Started

If you're new to n8n, don't try to set up all 12 at once. Pick the one that maps to your biggest time sink right now. Spending 2 hours a week manually scoring leads? Start with the Lead Researcher. Dreading cold outreach? Start with the Email Sequence. Already have a content backlog? The Repurposer will change your week.

Import the JSON, follow the guide, get one workflow running reliably. Then add the next one. Each workflow is independent. They don't depend on each other, and you can mix and match based on what your business actually needs.

The goal isn't to automate everything. It's to automate the things that eat your time without requiring your brain. AI handles the pattern matching and first drafts. n8n handles the plumbing. You handle the decisions that actually matter.