Creating a n8n Workflow for Automated Daily Tech News
I've been wanting to dive deeper into n8n, a powerful workflow automation tool that allows you to connect anything to everything. As a student, I also try to stay updated with tech news, but scrolling through multiple sites is time-consuming and often filled with clickbait.
I decided to combine these two interests by building an automated pipeline that fetches news from my favorite outlets, summarizes them using AI, and delivers them to me directly. But instead of paying for API credits to huge AI corporations, I wanted to see if I could put my old hardware to use. I dusted off my GTX 970 that was lying around in my homelab and decided to self-host the LLM.
Tools
For this project I decided to go mostly local in terms of tools:
- Automation Engine: n8n (running locally)
- LLM Runtime: Ollama (running locally)
- Model: Gemma 3 4B (Q4_K_M quantization)
- Hardware: NVIDIA GTX 970 (4GB VRAM)
- Notification: Telegram Bot API (for output purposes only)

Understanding the Architecture
The workflow is triggered every morning at 8:00 AM. Here is how the data flows through the pipeline:
- Ingestion: Fetching the Feeds
The workflow starts with a Schedule Trigger set for the morning. It immediately fires off three parallel RSS Read nodes to fetch the latest headlines from:
TechCrunch
Slashdot
The Verge
I realized quickly that I didn't want to be flooded, so I added Limit nodes to cap the intake to the top 4 stories from each source.
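Conceptually, the Limit and Merge nodes boil down to a few lines of JavaScript. Here is an illustrative sketch (the `capAndMerge` helper is my own naming for this write-up, not an actual n8n node):

```javascript
// Take the top N items from each feed, then flatten everything
// into a single list — the equivalent of three Limit nodes
// feeding into one Merge node.
function capAndMerge(feeds, maxPerFeed = 4) {
  return feeds.flatMap(items => items.slice(0, maxPerFeed));
}

// Example: three already-fetched feeds of varying length
const techcrunch = [{ id: 'tc1' }, { id: 'tc2' }, { id: 'tc3' }, { id: 'tc4' }, { id: 'tc5' }];
const slashdot = [{ id: 'sd1' }, { id: 'sd2' }];
const theverge = [{ id: 'tv1' }, { id: 'tv2' }, { id: 'tv3' }];

const merged = capAndMerge([techcrunch, slashdot, theverge]);
// merged now holds at most 4 stories per source, in feed order
```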
- Processing: Normalization
RSS feeds are messy; different sites use different schema keys (some use guid, others link; some use content, others summary).
To fix this, I used a Merge node to bring everything together, followed by a Code Node. This script maps the disparate incoming JSON data into a standardized structure containing just the essentials: Source, Title, Link, and Content. This ensures the LLM receives clean, consistent input every time.
```javascript
return items.map(item => {
  const j = item.json;
  return {
    json: {
      // Identify the outlet from the item's guid/link URL
      source:
        j.guid?.includes('techcrunch') ? 'TechCrunch' :
        j.link?.includes('theverge') ? 'The Verge' :
        j.link?.includes('slashdot') ? 'Slashdot' :
        'Unknown',
      title: j.title || '',
      url: j.link || j.guid || '',
      // Different feeds expose the author under different keys
      author: j.creator || j['dc:creator'] || j.author || '',
      publishedAt: j.isoDate || j.pubDate || j.date || null,
      // Prefer the short snippet, falling back to full content
      summary: j.contentSnippet || j.summary || j.content || '',
      link: j.link || '',
      content: j.content || '',
    }
  };
});
```
- The Brain: Local LLM Summarization
This is where the magic happens, and where my GTX 970 earns its place.
I pass the normalized data into the Ollama node within n8n. I am using the gemma3:4b model. Since my GPU only has 4GB of VRAM, a 4-billion parameter model with Q4 quantization was the sweet spot. It fits (barely) into memory and runs moderately well.
I crafted a specific system prompt with few-shot examples to handle the summarization:
"You are a technology news summarizer agent. Produce a concise, factual summary in 2–4 sentences. Strip marketing tone, hype, and filler. Provide the link to the article at the end of your response.
Few-shot Examples:
User Prompt: {
"source": "Slashdot",
"title": "Linux Hit a New All-Time High for Steam Market Share in December",
"link": "https://linux.slashdot.org/...",
"author": "EditorDavid",
"publishedAt": "2026-01-12T12:34:00Z",
"content": "Steam survey shows Linux market share rose to 3.58% after revision..."
}
Response: Valve revised its December Steam Survey... Here is the link: https://linux.slashdot.org/...
"
The few-shot example constrains the model to a strict output format and noticeably improves the quality of its answers.
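For anyone curious what the Ollama node is doing under the hood, here is a rough sketch of the equivalent direct call to Ollama's `/api/generate` HTTP endpoint. The payload fields (`model`, `system`, `prompt`, `stream`) follow Ollama's API; the `buildOllamaRequest` helper is just illustrative naming for this post:

```javascript
// System prompt from the workflow (few-shot examples omitted for brevity)
const SYSTEM_PROMPT =
  'You are a technology news summarizer agent. Produce a concise, factual ' +
  'summary in 2-4 sentences. Strip marketing tone, hype, and filler. ' +
  'Provide the link to the article at the end of your response.';

// Build the JSON body for Ollama's /api/generate endpoint.
// The article is serialized into the prompt, matching the few-shot format.
function buildOllamaRequest(article) {
  return {
    model: 'gemma3:4b',
    system: SYSTEM_PROMPT,
    prompt: JSON.stringify({
      source: article.source,
      title: article.title,
      link: article.url,
      content: article.content,
    }),
    stream: false, // return one complete response instead of a token stream
  };
}

// Example call (assumes Ollama is listening on its default port 11434):
// const res = await fetch('http://localhost:11434/api/generate', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(buildOllamaRequest(article)),
// });
// const { response } = await res.json();
```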
- Delivery
Finally, the summarized text is passed to a Telegram node, which sends the clean summary and the source link directly to my chat.
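The Telegram node ultimately reduces to a single Bot API `sendMessage` call. A minimal sketch, assuming a bot token and chat ID are already configured (`BOT_TOKEN`, `CHAT_ID`, and `buildTelegramMessage` are placeholders, not n8n internals):

```javascript
// Build the JSON body for the Telegram Bot API's sendMessage method.
function buildTelegramMessage(summary, chatId) {
  return {
    chat_id: chatId,
    text: summary, // the LLM summary, which already ends with the article link
  };
}

// Example call (BOT_TOKEN and CHAT_ID come from the bot setup):
// const url = `https://api.telegram.org/bot${BOT_TOKEN}/sendMessage`;
// await fetch(url, {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(buildTelegramMessage(summary, CHAT_ID)),
// });
```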

What I Have Learnt
This project was a great exercise in resource-constrained AI. I learned that you don't always need massive H100 GPUs to do useful work; a dusty GTX 970 is perfectly capable of running competent models like Gemma 3 4B for specific tasks.
I also gained a much better understanding of n8n's data structure. Handling the flow from multiple asynchronous RSS sources, normalizing that data with JavaScript, and managing the iterative loop for the LLM has made me much more comfortable with complex automation workflows.
And of course, the main benefit is that I now have an awesome daily news summarizer agent that runs entirely locally.