The Contenda Blog

Live, Laugh, LLMs

Using Netlify On-Demand Builders + Astro + Edge Functions to make a blog-sharing generator

How we built a blog-sharing generator using Netlify On-Demand Builders, Astro, and Edge Functions.
By Cassidy Williams

Today at Contenda we launched public sharing of your blog drafts! We’re super excited about the feature, and you can read more about it in the announcement blog post.

This blog post, however, is for the dweebs: building this feature included a fun pile of interesting technologies and problem-solving that we’d love to tell you about!

What’s the big deal?

When you make content on Contenda, you are authenticated on the platform, and all of the content is tied to your account.

In our API (docs here), you have the ability to get the final markdown of your blog from an endpoint, but you also have to be authenticated there to be able to get that content. If you wanted to share your working draft with your team, that involved exporting your blog and saving it elsewhere, which is not the best user experience!

So, we wanted:

  • The ability to see a blog draft
  • No need to pass access tokens to unknown users just so they can see a generated blog
  • Public URLs that share the content as read-only
  • No back-end API changes, if possible, because we’re a small team with lots to do

And thus, Share Pear was born!

Okay before I go deeper, yes, you can just look at the code. We’ve open sourced our repository for this feature here, and hope it might be useful for you to check out!

But let’s talk about how it’s built! The overall architecture and user flow boil down to a few steps, as seen in this diagram here, and I’ll explain each part:

Share Pear Architecture and User Flow

Distributed Persistent Rendering and Netlify’s On-Demand Builders

There’s a concept called Distributed Persistent Rendering (RFC) that allows you to create web pages in an interesting way. With this concept (similar to Incremental Static Regeneration or Deferred Static Generation, if you’ve seen it in the front-end world), what happens is:

  • You ping a server to create the page
  • Once that page is created, it just statically exists for whatever cache length we specify

We wanted to follow this approach because server-side rendered pages are vulnerable to DDoSing and other issues where heavy traffic to one page could take down another, and we don’t want someone sharing a post so widely that sharing becomes slow for everyone else. Plus, it’s cost-saving: once a page is statically built, we never have to query a server for it again!

The functions that create the new pages are called On-Demand Builders (ODB), or builder functions. We can query that builder function with the blog content, and it will generate a page based on that content!
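To make the build-once-then-cache pattern concrete, here’s a rough sketch of what a builder function does. The `builder` stand-in below mimics Netlify’s wrapper from `@netlify/functions`, with the CDN cache simplified to an in-memory map purely for illustration (the real cache lives on Netlify’s edge, not in your function’s memory):

```javascript
// Stand-in for Netlify's builder() wrapper from @netlify/functions,
// with the CDN cache simplified to an in-memory map for illustration
const cache = new Map();

function builder(handler) {
	return async (event) => {
		// The first request for a path runs the handler (the "build");
		// every later request for that path is served from the cache
		if (!cache.has(event.path)) {
			cache.set(event.path, await handler(event));
		}
		return cache.get(event.path);
	};
}

// A page handler: runs once per path, then its output is reused
const renderPage = builder(async (event) => ({
	statusCode: 200,
	body: `<h1>Built once for ${event.path}</h1>`,
}));
```

With the real wrapper, the response can also carry a `ttl` (in seconds), which is how a cached page eventually expires instead of living forever.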

Astro and markdown rendering

The framework Astro is newer on the scene of web frameworks, but it is a powerful one. Out of the box, it’s designed for speed and can use any of your favorite UI libraries with very minimal configuration.

In our case, what was appealing to us was that Astro natively supports rendering markdown as HTML, and Netlify On-Demand Builders, right out of the box!

The configuration for setting up builders was wildly simple:

// astro.config.mjs
import { defineConfig } from "astro/config";
import netlify from "@astrojs/netlify/functions";

export default defineConfig({
	output: "server",
	adapter: netlify({
		builders: true, // this is it
	}),
});

And for rendering markdown in an Astro page component, you import a Markdown component in the frontmatter (we use the one from astro-remote) and use it in your template:

---
// [slug].astro
import { Markdown } from "astro-remote";
// ...
---

<!-- ... -->
<Markdown
	content={markdown}
	sanitize={{ allowComments: true }}
/>
<!-- ... -->

Smashing these technologies together with Edge Functions

Now, the way this works is that in our authenticated platform, there would be some kind of button that does something to this effect:

generateMarkdown().then((md) => {
	// this creates a public-facing URL that
	// anyone can access, with the md content
	// rendered as HTML
	callOnDemandBuilder(md, randomKey());
});
On the Share Pear side, we made a dynamic route that looks for a markdown key in the headers of the API request with which to populate the blog. The randomKey is just whatever URL slug we want to give the page. Right now we use a UUID for each URL, but we might change that up later on to something more human-language friendly.
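For reference, callOnDemandBuilder can be as simple as a fetch to the would-be public URL with the markdown tucked into that header. A sketch of how it could look — the domain is a placeholder, and `buildShareRequest` is a hypothetical helper, not code from our repo:

```javascript
// Hypothetical helper: assemble the builder request for a draft.
// The domain here is a placeholder, not the real Share Pear URL.
function buildShareRequest(md, key) {
	return {
		url: `https://sharepear.example.com/${key}`,
		options: { headers: { markdown: md } },
	};
}

// The first hit builds and caches the page; the returned URL is
// then public and read-only for anyone who has it
async function callOnDemandBuilder(md, key) {
	const { url, options } = buildShareRequest(md, key);
	await fetch(url, options);
	return url;
}
```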

Now, to reiterate: when you first hit a URL on a website that uses Distributed Persistent Rendering, it generates that page, and then that page exists forever (…until the cache expires). So you could go to one of these URLs and it’ll be there, but the page won’t be populated with real content unless you make a proper API request with what it expects.

The Astro page parses the markdown header like so!

let markdown = Astro.request.headers.get("markdown");

So clean, right?
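One edge worth handling: if the page is requested without that header (say, a stray direct visit before the builder has been pinged), `get` returns null. A small guard keeps the page from breaking — the fallback copy below is illustrative, not our actual text:

```javascript
// Fall back to placeholder markdown when the header is missing
// (the fallback message is illustrative, not Contenda's actual copy)
function getMarkdown(headers) {
	return headers.get("markdown") ?? "# This draft hasn't been shared yet.";
}
```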

Now, if you pinged the page with something like Postman with the proper headers, it’ll just work. But, CORS.

It’s always CORS.

Though you can typically set custom headers with a Netlify redirect proxy, or directly in Astro, in this case we can’t, because our website isn’t “truly” static. It is later, after the pages are generated, but until then, each URL is pinging a function to create its page — so we needed to intercept the request before the page was made.

To fix this, we bring in Edge Functions! An edge function is kind of like a serverless function, but it can modify a request and response at the edge (I’m not going to get into what the edge is because this word is reused too often and you can look this up and see examples on your own). Long story short, it’ll act as a middleware to add the custom headers to our requests coming in from the platform.
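For context on why the preflight matters: any browser fetch that sets a non-standard header (like our markdown header) makes the browser send an OPTIONS request first and wait for permission. A toy check of that rule — this is a simplified illustration of the CORS “simple request” header list, not code from our repo:

```javascript
// Simplified check: does a request with these headers trigger a
// CORS preflight? Only a handful of header names are "simple" and
// skip preflight. (In the real spec, Content-Type is only simple
// for a few values; that nuance is glossed over here.)
function needsPreflight(headers) {
	const simple = new Set([
		"accept",
		"accept-language",
		"content-language",
		"content-type",
	]);
	return Object.keys(headers).some((name) => !simple.has(name.toLowerCase()));
}
```

Our custom markdown header is not on that list, which is exactly why the edge function below has to answer the OPTIONS request.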

Here’s a high level look at how our edge function implementation looks:

// /netlify/edge-functions/headers.js
export default async (request, context) => {
	// We need to handle this OPTIONS request for CORS, because
	// requests from inside the browser trigger a preflight
	if (request.method === "OPTIONS") {
		return new Response("ok", {
			headers: {
				"Access-Control-Allow-Origin": "*",
				"Access-Control-Allow-Headers": "Content-Type, markdown",
			},
		});
	}

	// Let the request through to the page (or the builder), then
	// decorate the response on the way out
	const response = await context.next();

	// This currently allows requests from anywhere, but in the
	// Share Pear repo, it limits the requests to come from
	// Contenda domains
	response.headers.set("Access-Control-Allow-Origin", "*");

	// This allows our custom markdown header that Astro consumes
	response.headers.set(
		"Access-Control-Allow-Headers",
		"Content-Type, markdown"
	);

	// This sets a max-age for the cache of 30 days
	response.headers.set("Cache-Control", "public, max-age=2592000, immutable");

	return response;
};
It’s done!

Now, when a user wants to share a blog from Contenda with their team, they can create a public URL (that expires eventually, by design), get their feedback, and publish ‘til the cows come home.

The fact that we were able to put this together without having to include any back-end API work was really exciting for us, and it was a really cool bundle of technologies to work with!

Once again, we open sourced our repository if you would like to look more at the implementation. It’s very much a work in progress as we consider more features we’d like to include, but we’re happy with the results so far!

If you would like to try the workflow out for yourself, you can sign up for an early access account on Contenda here.

If you’d like to learn more, please feel free to reach out to our team on Discord or sign up for our email list.

We can’t wait to see what you make!