Just Another WordPress Programmatic SEO Build – Part 3

Need help with your enterprise or programmatic SEO strategy? Let’s chat.

Let’s continue the WordPress programmatic SEO build!

If you haven’t already, check out part 1 of the WordPress programmatic build here, and then part 2 here.

A little pre-note here: the next couple of posts will take a little side-step, purely because I don’t have my own content and want to get something out ASAP.

You can do this infinitely better than I will, if you’re putting in the time or money to build out your dataset.

We’ll be back on track afterwards.

Time to crack on!

Fixing the design

I couldn’t bear it.

Haven’t played with too much WordPress lately so wasn’t sure what design to go with.

Have just installed the free version of Astra and am tweaking it a little bit. Much cleaner. Might upgrade to paid to get a bit more customisation too, but it’s already infinitely better than before.

Managed to clean up some of the more “bloggy” stuff, and have also got breadcrumbs going!

Just need to remove that pesky home button but I can sort that one later.

Also removed comments, cleaned up a couple of things and added a pretty header menu.

And by pretty I mean non-default. It’ll do for now.

 

Will be getting rid of that ‘previous post’, along with the recent post links at some point too. They’ll kill some of the topical groupings by linking to who knows where.

Happy to also say that the change to Astra fixed the main category design problem!

 

Need to work out how to remove all the posts listed below though, as they’re already linked to from within this content.

Unfortunately, the theme killed the subheadings for the page, but that’s liveable for now. Can add some custom CSS to address this later… I hope.

 

Upgrading the content

On top of the continuous data expansion via my VA, I need to update these keyword templates.

Let’s be real, they’re shit.

I still might get a writer involved later, but let’s have a play with an AI tool.

 

Generating some generic template text with an AI tool

My go-to AI content generation tool is WriteSonic – for now.

I’m mostly using it to generate product descriptions with AI in bulk at the moment.

So let’s take a look at what we can do with this content.

I’m going to use the ‘sentence expander’ option, and just work through the content to see what it can spit out.

Starting with the opener, let’s see what we can extract from a generic point of view.

The goal will be to pad out the dynamic text templates a little bit, and get some more content on the pages.

 

That first option will work perfectly with some tweaks, and I’ll replace animals with the <animal> variable so that it inserts our animal name.
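The variable insertion can be sketched in a few lines (a hypothetical helper for illustration; the actual build uses the dynamic text setup from the earlier parts):

```python
# Minimal sketch of the dynamic variable insertion used by the templates.
# fill_template() is illustrative, not the plugin's real implementation.

def fill_template(template: str, animal: str, food: str) -> str:
    """Swap the <animal> and <food> placeholders for real values."""
    return template.replace("<animal>", animal).replace("<food>", food)

template = "Can a <animal> eat <food>?"
print(fill_template(template, "dog", "bananas"))  # Can a dog eat bananas?
```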

Added a few extra bits of text to some of the templates.

 

Couldn’t find too much generic stuff, so need to work out a good solution for this, and want to avoid providing too many incorrect answers.

It’s all very specific to each animal & food combo, not exactly template-able without just being garbage.

Wonder if there’s a better way we can do this.

 

Working directly with an AI model

AI tools are great for helping you write a piece of content.

They spit out a suggestion, then you can edit it and tweak it, and away you go.

Perfect when you’re handling small quantities of content.

How can this be scaled though?

How can I generate content that actually relates directly to the current page at scale?

Well, you can go directly to the source.

The source that pretty much every AI tool leverages in some way or another.

OpenAI.

Their GPT-3 setup to be more specific.

You can ask it pretty much anything at all, and it could return a full response, or just a simple answer to a question.

Maybe we can just ask it, in bulk, whether an <animal> can eat <food>.

Once registered, you can jump straight into their playground.

I haven’t spent as much time working directly with OpenAI &amp; their API, so this project is changing that.
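For reference, hitting GPT-3 directly looks roughly like this (a sketch using the openai Python package of the time; the API key is a placeholder and the parameters are just sensible defaults, not the exact ones used here):

```python
# Rough sketch of a direct GPT-3 completions request.
# "YOUR_API_KEY" is a placeholder; engine/max_tokens/temperature are
# illustrative values, not the exact settings used in this build.

def build_question(animal: str, food: str) -> str:
    return f"Can {animal} eat {food}?"

def ask_gpt3(prompt: str) -> str:
    import openai  # pip install openai
    openai.api_key = "YOUR_API_KEY"
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=150,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()

# ask_gpt3(build_question("dogs", "bananas"))
```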

To start, we can just straight up ask it the question.

Yes, a robot in the computer wrote that.

Guess that means the project’s done now, doesn’t it?

Well, I could stop there, and if I was happy to spend the extra money for a VA to build out a quality data set and have a writer build out quality templates, this is definitely where I’d stop.

Alas, I’m not. So let’s take this further.

The initial step obviously didn’t give us enough content, so let’s try to get that expanded out.

So that’s a great start, and it can generate content for each section we created.

Completely related content.

You can then kindly request two paragraphs of content, instead of a single paragraph by using the “write two paragraphs” prompt.

It’s a bit hit and miss, but I seemed to get a higher success rate by throwing a ‘separate’ in there.

The ‘please’ is just to make sure the robots don’t come for me when they take over.

That’s 89 words though, from a simple question.

I’m no expert here, and have only just started learning the basic prompts, so feel free to throw anything in the comments that could help out here.

This is using the DaVinci model, which costs $0.06 per 1,000 tokens.

OpenAI says it’s ‘approximately’ 4 characters per token, including both the prompt and the output.

The above prompt was 77 characters and the response was 495 characters, giving us a total of 572 characters.

572/4 = 143 tokens

143 tokens = 143/1000 x 0.06 = $0.00858

That 89 words cost us less than 1 cent.

Let’s say we want to generate 8 sections of similar content for a page, it’ll be $0.069 per page.

To then do this for 1,000 pages, we’re looking at $68.64

1,000 pages of ‘content’, for under $70.
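The maths above can be wrapped into a quick calculator, using the same approximations as the post (~4 characters per token, $0.06 per 1,000 tokens for DaVinci):

```python
# Back-of-envelope GPT-3 cost calculator, matching the figures above.
CHARS_PER_TOKEN = 4          # OpenAI's 'approximately' 4 chars per token
PRICE_PER_1K_TOKENS = 0.06   # DaVinci pricing at the time

def cost_for_chars(total_chars: int) -> float:
    """Cost in dollars for prompt + response characters combined."""
    tokens = total_chars / CHARS_PER_TOKEN
    return tokens / 1000 * PRICE_PER_1K_TOKENS

per_section = cost_for_chars(77 + 495)  # 77-char prompt + 495-char response
per_page = per_section * 8              # 8 sections per page
print(round(per_section, 5))            # 0.00858
print(round(per_page * 1000, 2))        # 68.64 for 1,000 pages
```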

Would I recommend this done for a client build?

Never. I’d stick with the dynamic variable insertion techniques.

Would I do this for a test project where the emphasis is more on the process &amp; the overall build, and I really just need a tonne of filler content to build out the pages and prove out some concepts, whilst still having some sort of shot at being indexed and possibly ranked for longer-tail keywords?

You bet. Game on!

 

Bulk generating the AI text

Let me preface this by saying I have never done this before. Legit.

This is my first time doing any sort of bulk generation of text directly with an AI model, and not using a tool with an interface and a VA to enter the data.

This. Changes. Today.

Ideally, we get this working directly in Google Sheets.

It would mean we don’t need to worry about jumping in and out of sheets, or doing any bulk imports. As new data &amp; combinations are added, we could just run a formula and boom, we’d have some magic words.

Found a great little GPT script by Andrew Charlton here that’s for titles &amp; meta descriptions, and has the core request.

Input a keyword, and it’ll spit out a title and description. It’ll also cost a little bit due to the rather large amount of sample data hardcoded into the prompt, so don’t overdo it!

Rather than trying to modify it, a mate shared a mob called Riku.ai.

They’ve been working on a plug-and-play sheets addon, and I’ll be going with that.

I’ve started by building out a little Google sheet to test things, along with estimating the costs of the outputs;

Something I might look at cleaning up later, but this was more of just a way for me to learn how to best write prompts.

I was manually copying and pasting these prompts into the playground to test, and then just pasting the output here.

Riku just released the Google Sheets add-on though, and if you’re a paying member (they’ve got a discounted option via app sumo for July) you can find the Google Sheet in their FB group.

Everything runs through your Riku account, and you set up prompts in there, which is rather annoying as you need to work in two systems to make updates. To get this working how I wanted, I had to create ‘blank’ prompts in Riku, and then just feed in this prompt. So I am essentially just using it for the Google Sheet at the moment.

Loaded it all up, threw in a heap of prompts and this is what happens;

This is magical.

Following this format, I’ve built out a Google Sheet containing all the different prompts I might want to use;

Throwing some of these into the Riku sheet and running the AI over it, we get;

At top level, it looks pretty cool.

I get the exact question format I am after, along with a chonky paragraph.

Problem is, I started to see some issues when looking at the generation in bulk.

Like this one.

As far as my Googling tells me, no, they can, in fact, not.

I know I’ll get a heap of false info for this, but that is why the prompt was tweaked further to become;

So it all comes down to the prompts and some examples of certain expected outputs from the prompt.

I also realised that the prompts I was using around the paragraphs, just weren’t generating anything clean enough.

I figure it’s best to take it back to basics, and just use the dynamic variable system I have already built.

The one that the VA started filling in for me, where we just place some simple text and bulk it out with the generic text.

This is where the true intelligence of OpenAI shines.

The current text system to generate our combo pages accepts inputs of text like the below examples;

So let’s see what happens when I try to request something similar from OpenAI.

Nice summary, but it doesn’t follow the format.

Well, we can feed the AI some examples of the formats we want it to follow by using these magical stop thingies ##.

Google tells me that’s pretty good.

Turns out the ‘comma-separated’ request might have been a bit confusing, particularly for some of the other prompts.

So instead, I modified it to be ‘a summary’ and the output slightly improved;

This time, it also followed the format in a couple of the examples where there is a full stop mid-sentence.

It followed it PERFECTLY.

Honestly, was not expecting this. Pretty neat.
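The few-shot structure, roughly: a couple of worked examples in the target format, each terminated by the ## stop sequence, then the real request at the end. A sketch (the example answers below are placeholders, not real generations):

```python
# Sketch of a few-shot prompt with "##" doubling as the stop sequence.
# EXAMPLES are made-up placeholders showing the format only.

EXAMPLES = [
    ("dogs", "bananas", "Bananas are high in potassium, fibre and vitamin B6."),
    ("cats", "carrots", "Carrots are a good source of vitamins when cooked."),
]

def build_few_shot_prompt(animal: str, food: str) -> str:
    parts = []
    for ex_animal, ex_food, ex_summary in EXAMPLES:
        parts.append(
            f"Write a summary of the benefits of {ex_food} for {ex_animal}:\n"
            f"{ex_summary}\n##"
        )
    # The real request goes last, with no answer, so the model completes it
    parts.append(f"Write a summary of the benefits of {food} for {animal}:")
    return "\n".join(parts)

prompt = build_few_shot_prompt("rabbits", "apples")
# Send with stop=["##"] so the model halts after a single answer.
```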

I’ve fleshed a similar thing out for the other main sections, so that we can run this and get the content.

Unfortunately, I ran into an issue with the Riku Google Sheets.

It wouldn’t allow line breaks sent via Google Sheets. I stripped them out and thought I had gotten it to work, but unfortunately it only worked for a few. After that, the Google Sheet kept acting like the stops didn’t exist, and I had to drop it. It was too time-consuming to patch the broken ones.
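The stripping amounts to something like this little escape/restore pair (a hypothetical sketch; I don’t know the add-on’s internals, so this just illustrates the idea of keeping the breaks recoverable):

```python
# Sketch: escape line breaks before a prompt goes into a sheet cell,
# then restore them right before the API call. Hypothetical helpers.

def escape_newlines(prompt: str) -> str:
    return prompt.replace("\n", "\\n")

def restore_newlines(cell_value: str) -> str:
    return cell_value.replace("\\n", "\n")

original = "Can dogs eat grapes?\nNo, grapes are toxic to dogs.\n##"
round_tripped = restore_newlines(escape_newlines(original))
print(round_tripped == original)  # True
```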

However, I was sent a direct Google Sheets script that I made work perfectly.

My prompt with examples, line breaks and stop sequences on the left, with the output on the right;

Bingoooooo.

I added a little tweak for some of them: if the animal can’t eat the food, then don’t run the request for that prompt.

In this case, it was for the benefit of food.

I don’t think it’s much of a benefit if the animal dies after eating the food, and the robots can return some rather interesting stuff, so it was best to just ‘ignore’ them for some sections.
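The guard itself is dead simple, something like this (a sketch; the can-eat answer is assumed to come from the earlier yes/no generation step):

```python
# Sketch: only run the 'benefits' prompt when the yes/no step said yes.
# The rows data is made up for illustration.

def should_generate_benefits(can_eat_answer: str) -> bool:
    """True when the yes/no generation starts with a yes."""
    return can_eat_answer.strip().lower().startswith("yes")

rows = [
    ("dogs", "chocolate", "No"),
    ("dogs", "bananas", "Yes, in moderation"),
]
to_generate = [(a, f) for a, f, ans in rows if should_generate_benefits(ans)]
print(to_generate)  # [('dogs', 'bananas')]
```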

It will need some more examples in the future though to account for different possibilities.

I ran this across 150 combo pages to kick things off, and it cost approximately $1.

In total.

$1 for a data set that’s roughly 80%+ correct across all of those responses.

Waheeeeee.

I further tweaked the model to include the text template element that will sit before the text. This way, the robot has something to kick it off, and they all nicely fit the template and read cleanly.

One of the new templates is;

Which, as you can see, outputs exactly what I need. A nice and direct answer.

Yeah, a lot of them are ‘upset stomach’, but hey, that’s what happens with food.

It now works a treat and I’ve added content for them all, for 150 pages.

You’ll probably spot some stuff that might not be correct.

The robots get it wrong sometimes though.

There’s just so much trashy content out there these days, and these little guys have sucked up way too much of that junk in their training.

Yes, that food is okay
No, that food is poison
Oh, I feed it to her as a treat
Yeah you should be okay
Only a little bit though

There’s just no solid answer for it sometimes, so it confuses the AI when being asked yes or no.

Sometimes, the two different AI models will give two different answers. The same thing happens just using two different prompts at times.

My initial generations are mostly accurate, but I’d say it’s 80%. I’d like to get that closer to 95%.

For certain yes/no questions, I feel like if it’s a no, there’s a higher chance of the no being correct than a yes.

My plan later will be to run both models, and maybe an extra prompt, and then try and pick the average to give it more chances to be correct.
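That majority-vote idea would look something like this (a sketch, assuming each model’s answer gets normalised to a plain yes/no first):

```python
# Sketch: run the same yes/no question through multiple models/prompts
# and take the majority answer to improve the odds of being correct.

from collections import Counter

def majority_answer(answers: list) -> str:
    normalised = [
        "yes" if a.strip().lower().startswith("yes") else "no"
        for a in answers
    ]
    return Counter(normalised).most_common(1)[0][0]

print(majority_answer(["Yes, it's fine", "No", "Yes, in moderation"]))  # yes
```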

I’ll also extend out the samples, to try and be a little more accurate. It will cost more, but accuracy will be required, particularly in the yes/no question.

Something I’ll test out later, but I’ve got enough for now. I’ll probably just add a caveat or something as a warning, just in case it actually ranks.

 

Adding an ‘about us’ page with AI

We need to now throw up a basic ‘about us’ page that gives it a little bit of a… err.. real feel.

Since we’ve just used a tonne of it already, why not throw a bit more AI in the mix.

Don’t need anything special for now.

Threw in a rather detailed prompt and the playground threw back the below.

It randomly spat out a domain name that is completely open for registration… at the time of posting. Go on. I dare you.

No clue what it did there, but pretty cool!

 

Just some text and an icon I whipped up in 2 seconds.

It needed it.

 

Re-upload the data

And just re-uploading all our data.

Updating the old pages, and adding in all our shiny new data.

150 posts, plus the 10 categories (imported separately), done.

 

Where we’re at now

After the initial build creation and internal linking setups from the previous posts, we’ve now significantly scaled page creation and expanded the available content for each page.

The pages are looking pretty text-heavy now though…

Maybe this guy can help us out.

So what does this banana face-hugger cross doggo have to do with the build?

Continue with part 4 here.

 

Current Page Count: 160
