
How I Automated My Real Estate Video Content with AI (From News to Instagram)

I made over six figures in my first year as a real estate agent. Not from cold calling, not from referrals, not from door knocking. From content. I figured out that if you show up consistently on video — useful, specific video — organic leads come in, and those leads convert.

The problem was the work. Filming, scripting, editing. For every finished video on my channel, I’d spent an hour or more on production. That’s fine when you’re building the habit, but it doesn’t scale.

So I built a workflow that handles the entire thing automatically. Real estate news from Zillow comes in, ChatGPT writes a script, my AI avatar films it, AI editing adds captions and handles the cuts, and the finished video posts to Instagram. Start to finish, without me touching it.

Here’s exactly how I built it.

The Workflow at a Glance

The automation runs in Make.com and chains together five distinct steps:

  1. RSS feed from Zillow News — pulls the latest real estate article
  2. ChatGPT / OpenAI — reads the article and writes a one-minute script with a viral hook
  3. Heygen — takes the script and generates a talking-head video using my AI avatar
  4. Submagic — AI video editor that adds captions and handles post-production
  5. Instagram — receives the finished video and posts it automatically

Each step feeds into the next via Make.com. When the trigger fires, the whole pipeline runs without any manual intervention.
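The five steps can be sketched as plain Python to show the data flow. Make.com builds this visually, so every function name below is an illustrative stand-in, not real code from the scenario:

```python
# Sketch of the five-step pipeline. Each function is a stand-in for one
# Make.com module; the bodies just fake plausible outputs.

def fetch_latest_article():
    # Stand-in for the Zillow RSS trigger.
    return {"title": "Inventory Hits 5-Year High",
            "body": "For-sale listings jumped 22% year over year."}

def write_script(article):
    # Stand-in for the ChatGPT step: article in, 60-second script out.
    return f"Hook: {article['title']}. {article['body']}"

def generate_avatar_video(script):
    # Stand-in for Heygen: script in, hosted video URL out.
    return "https://resource.heygen.example/video.mp4"

def edit_video(video_url):
    # Stand-in for Submagic: raw video in, captioned video URL out.
    return video_url.replace("heygen", "submagic")

def post_to_instagram(video_url, caption):
    # Stand-in for the Instagram publishing module.
    return {"posted": True, "url": video_url, "caption": caption}

def run_pipeline():
    article = fetch_latest_article()
    script = write_script(article)
    raw = generate_avatar_video(script)
    edited = edit_video(raw)
    return post_to_instagram(edited, caption=article["title"])
```

The point of the sketch is the shape: each module's output is the next module's input, with no branching and no human in the loop.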

Step 1: Pulling Real Estate News from Zillow

The content source for these videos is Zillow’s RSS feed. Zillow publishes research articles and housing market updates regularly, which gives the automation a steady stream of relevant, credible material to work with.

The RSS module in Make.com monitors the Zillow News RSS feed and fires whenever a new article is published. When it triggers, it passes the article headline and body to the next step.
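In code terms, the trigger boils down to reading the newest `<item>` from the feed. A minimal sketch with Python's standard library (the inline sample feed is abbreviated, not Zillow's actual XML):

```python
import xml.etree.ElementTree as ET

# Abbreviated stand-in for the Zillow News RSS feed.
SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Zillow News</title>
  <item>
    <title>For Sale Signs Multiply: Inventory Hits 5-Year High, Price Cuts Surge</title>
    <description>For-sale listings jumped 22% year over year.</description>
  </item>
</channel></rss>"""

def latest_item(rss_xml):
    """Return the headline and body of the newest feed item."""
    root = ET.fromstring(rss_xml)
    item = root.find("./channel/item")  # first item = newest in most feeds
    return {
        "title": item.findtext("title"),
        "body": item.findtext("description"),
    }
```

Make.com's RSS module does the equivalent of this on a schedule and only fires when the newest item is one it hasn't seen before.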

The test article that ran through my workflow was: “For Sale Signs Multiply: Inventory Hits 5-Year High, Price Cuts Surge.” According to Zillow, for-sale listings jumped 22% compared to the previous year. In cities like Phoenix and Austin, inventory was up 30%. In June, 24.6% of listings had a price cut — the highest share since before the pandemic.

That’s genuinely useful market information. When ChatGPT gets this as context, it writes a script that’s informative and specific rather than generic.

Step 2: ChatGPT Writes the Script

The OpenAI module receives the news article and generates a 60-second script formatted for short-form video.

The prompt instructs ChatGPT to write in a format designed for real estate agents talking to buyers and sellers — conversational, direct, specific to the data. The script should include a strong opening hook, the key facts from the article, what it means for buyers or sellers, and a call to action.
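A sketch of what the request to OpenAI can look like. The prompt wording below is a paraphrase of the instructions described above, not the exact production prompt, and the model name is an assumption:

```python
import json

def build_script_request(article_title, article_body, model="gpt-4o"):
    # Assembles a Chat Completions request body. The system prompt is a
    # paraphrase of the scripting instructions, not the production prompt.
    system = (
        "You write 60-second scripts for a real estate agent's short-form "
        "videos, aimed at buyers and sellers. Conversational, direct, "
        "specific to the data. Include a strong opening hook, the key "
        "facts from the article, what they mean for buyers or sellers, "
        "and a call to action."
    )
    user = f"Article: {article_title}\n\n{article_body}"
    return json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    })
```

Feeding the article body in as the user message is what keeps the output specific: the model quotes real numbers instead of inventing filler.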

The output from the Zillow inventory article looked like this:

“Inventory just hit a 5-year high. That means more homes on the market than we’ve seen since 2019. According to Zillow, for-sale listings jumped 22% compared to last year, giving buyers way more options and negotiating power. At the same time, a whopping 24.6% of listings had a price cut in June — the highest share since before the pandemic. Sellers are adjusting expectations and motivated buyers are jumping in. Translation: we’re shifting into a buyer-friendly market. DM me to see how this impacts your local home value or buying strategy.”

That’s a solid 45-second script. It leads with a number, explains what it means, and ends with a reason for someone to contact me. ChatGPT generated that in under 10 seconds from a news article.

Step 3: Heygen Generates the AI Avatar Video

This is the part that changes everything about video content for busy agents.

Heygen is an AI video generation platform. You film yourself once — about an hour of footage in different outfits, talking naturally, using your normal hand gestures, in your normal settings — and Heygen builds an AI model of you from that footage. The model learns your appearance, your gestures, and most importantly your voice.

After setup, you send Heygen a text script and it generates a video of you delivering that script. The lip sync is accurate. The hand gestures are natural. The voice sounds like you.

I have multiple avatar looks set up: a white shirt vertical format (for Instagram Reels and TikTok), a denim horizontal format (for wider placements), and others. Different looks for different platforms and contexts.

The Make.com integration works like this: after ChatGPT outputs the script, Make.com sends the text to Heygen via API, specifying which avatar to use, the voice ID, and the output dimensions. Heygen returns a video file URL when the generation is complete.
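A sketch of the request body that goes to Heygen. The field names follow Heygen's public v2 generate-video API, but treat the exact shape as illustrative and check the current docs before building against it:

```python
import json

def build_heygen_request(script_text, avatar_id, voice_id,
                         width=720, height=1280):
    # Approximate shape of a Heygen generate-video request body.
    # avatar_id and voice_id come from your Heygen account after the
    # initial filming and setup session.
    return json.dumps({
        "video_inputs": [{
            "character": {"type": "avatar", "avatar_id": avatar_id},
            "voice": {"type": "text", "input_text": script_text,
                      "voice_id": voice_id},
        }],
        "dimension": {"width": width, "height": height},  # vertical by default
    })
```

The defaults bake in the vertical Reels format so a forgotten parameter fails safe rather than producing a horizontal video.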

A few technical notes from building this:

The Heygen API has output resolution constraints. At certain API access levels, you can export up to 720p. For vertical video (Instagram Reels format), that means 720 pixels wide by 1280 pixels tall. Make sure your dimensions match — a common mistake is having the width and height swapped, which gives you a horizontal video when you needed vertical.
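A small guard catches the swapped-dimension mistake before any render time is spent. This is a hypothetical helper, not part of the Make.com scenario:

```python
def check_vertical(width, height, max_width=720):
    """Validate dimensions for vertical (Reels) output before rendering."""
    if width > height:
        raise ValueError(f"{width}x{height} is horizontal; "
                         "swap width and height for vertical video")
    if width > max_width:
        raise ValueError(f"width {width} exceeds the {max_width}p export cap")
    return width, height
```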

The video generation takes a few minutes. Make.com waits for the Heygen webhook or polls the status endpoint before passing the result downstream.
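The wait step amounts to a poll loop. In this sketch, `get_status` is a stand-in for an HTTP GET against Heygen's status endpoint, and the status strings are assumptions:

```python
import time

def wait_for_video(get_status, poll_seconds=15, timeout_seconds=900):
    """Poll a status callable until the render finishes or times out."""
    waited = 0
    while waited < timeout_seconds:
        status = get_status()
        if status.get("status") == "completed":
            return status["video_url"]
        if status.get("status") == "failed":
            raise RuntimeError("video generation failed")
        time.sleep(poll_seconds)
        waited += poll_seconds
    raise TimeoutError("gave up waiting for the render")
```

Make.com offers the same two options natively: a sleep-and-retry loop, or a webhook module that lets Heygen push the result when it's ready.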

Step 4: Submagic Handles the AI Editing

A raw talking-head video is better than nothing, but it’s not enough for modern short-form content. Viewers expect captions. They’re often watching without sound.

Submagic is an AI video editing tool that handles this. It takes a video URL, auto-generates captions, and returns a polished edited version.

The integration in Make.com is an HTTP module (not a native Make.com app — you make a direct API call). You send a POST request to Submagic’s API endpoint with your API key, the video URL, and any editing parameters you want. Submagic processes the video and sends back an edited version via webhook.
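A sketch of that HTTP call using Python's standard library. The endpoint path and field names are assumptions based on the description above; confirm them against Submagic's API docs:

```python
import json
import urllib.request

def build_submagic_request(api_key, video_url, title,
                           endpoint="https://api.submagic.co/v1/projects"):
    # Endpoint and field names are illustrative, not confirmed.
    body = json.dumps({
        "title": title,
        "videoUrl": video_url,   # must be a publicly accessible URL
        "language": "en",
    }).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "x-api-key": api_key,                 # API key from the dashboard
            "Content-Type": "application/json",   # omit this and expect a 400
        },
        method="POST",
    )

# To actually send it:
# response = urllib.request.urlopen(build_submagic_request(key, url, title))
```

Note that `json.dumps` on a dict produces exactly the brace-wrapped JSON object the API expects, which is a good reason to build the body in code rather than concatenating strings in a Make.com field.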

There are two things to get right when building this step:

First, the video URL you send Submagic must be publicly accessible. The URL Heygen returns is hosted on Heygen’s infrastructure and is already public, so it works as-is. If you’ve moved the video to a private storage location first, the URL won’t work.

Second, the request body must be valid JSON in the format the API expects. I ran into a 400 validation error initially; the fix was wrapping the body in curly braces so it parsed as a single JSON object, and setting the Content-Type header to “application/json.” After those corrections, the API accepted the request and returned a project ID.

The finished output from Submagic shows captions synced to the speech, with clean typography and timing. The video looks edited, not raw.

Step 5: Posting to Instagram

After Submagic finishes editing, it sends the edited video URL via webhook to a listening Make.com scenario.

That second scenario receives the URL, creates an Instagram Reel post using Make.com’s Instagram module, and publishes it to my account.
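The receiving scenario just has to pull the edited-video URL out of the webhook body. A minimal sketch, where the field name `downloadUrl` is an assumption; inspect a real Submagic payload before relying on it:

```python
import json

def parse_submagic_webhook(raw_body):
    # "downloadUrl" is a guess at the field name; check an actual
    # webhook payload from Submagic for the real one.
    payload = json.loads(raw_body)
    url = payload.get("downloadUrl")
    if not url:
        raise ValueError("webhook payload had no video URL")
    return url
```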

The Make.com Instagram integration requires a Facebook Business account with a connected Instagram Professional account. Once that’s set up, posting a video is straightforward: you pass the video URL and a caption, and specify “share to feed.”
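Under the hood, Make.com's module wraps the Instagram Graph API's two-step publish flow: create a media container, then publish it. A sketch of the two calls (the Graph API version number is an assumption; check Meta's docs for the current one):

```python
# Two-step Instagram Graph API publish flow for a Reel.
# The v19.0 version tag is an assumption; use whatever Meta currently supports.
GRAPH = "https://graph.facebook.com/v19.0"

def build_reel_container(ig_user_id, video_url, caption):
    """Step 1: create a media container for the Reel."""
    return (f"{GRAPH}/{ig_user_id}/media",
            {"media_type": "REELS",
             "video_url": video_url,   # must be publicly accessible
             "caption": caption})

def build_publish_call(ig_user_id, creation_id):
    """Step 2: publish the container once Instagram finishes processing it."""
    return (f"{GRAPH}/{ig_user_id}/media_publish",
            {"creation_id": creation_id})
```

The container needs time to process before step 2 succeeds, which is another wait-then-act pattern Make.com's module handles for you.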

I tested this live: the video posted to my Instagram account automatically. Clean captions, proper vertical format, the finished edited version — not the raw Heygen output.

The total pipeline runs in the background. When a new Zillow article publishes, the automation fires, and somewhere between 15 and 30 minutes later (depending on Heygen generation time), a new video appears on my Instagram.

The Practical Result

The finished video from the Zillow inventory article came out well. My avatar is talking about real data, looking natural, making hand gestures, in front of a setting that looks like my normal environment. The captions are accurate. The hook is strong.

Is it identical to me filming it in person? No. But for a viewer scrolling Instagram, it’s credible, specific, and useful. That’s what drives engagement and leads.

From my own experience, organic social media content converts at five to ten times the rate of paid advertising. A daily video presence on Instagram, even if it’s AI-generated, builds brand recognition and trust over time in a way that no paid ad can replicate.

The old version of this — me scripting, filming, editing, and posting a video — took two to three hours per video. The new version takes zero minutes of my active time per video. The automation runs while I’m doing other things.

What You Need to Set This Up

  • Make.com — the automation backbone
  • OpenAI API — for the ChatGPT scripting step
  • Heygen — AI avatar platform (requires initial filming session and setup)
  • Submagic — AI video editing (API key required, available from their dashboard)
  • Facebook / Instagram Business accounts — for the publishing step

If you want access to the Make.com scenario template I built for this workflow, it’s available in the Real Estate AI Society community. Join for free through the newsletter page.

For a full rundown of the AI tools in my content stack, see the tools page.

Liked this article? Get more like it.

AI tools, prompts, and workflows that close deals — delivered in 5 minutes a week. Free, unsubscribe anytime.