Just Another WordPress Programmatic SEO Build – Part 4

Need help with your enterprise or programmatic SEO strategy? Let’s chat.

Time for part 4. If you haven’t already, make sure to check out part 1, part 2, and part 3 of this WordPress programmatic build.

Today, we’re spending the entire part on a single topic.

Imagery.

Like the AI-generated text portion of the last part, this probably won’t actually apply to you.

You’ll probably have your own imagery to use, or will at least acquire proper images to use.

However, this one is fun. Promise.

Stick with it, and join me as we tinker on the edge of the dark side.

Text-to-Image Generation

Not to be confused with AI Image Generation, which I’ll get into shortly, text-to-image generation is exactly that.

Generating an image from text.

Yes, I know. It’s how AI image generation works.

But this is literally taking the text and putting it in an image.

You build out a template, throw it some text and any images you want included, and an API will spit out a shiny new image.

Normally used for social media banners, this kind of generation has started being leveraged by programmatic SEO builds to create unique imagery related to their content in a less-than-greyhat way.

However, it’s not something I have actually leveraged.

Been on my to-do list for quite a while!

Time to get it going.

 

Picking a text to image generator

Yes, you can either code this yourself, modify one of the available code snippets out there, or even just buy one! There’s probably a plugin available.

But I’d rather find a good SaaS that does this well for the project.

After doing a bit of research, I’ve decided upon Bannerbear, and their image generation API.

Figured it also suited the whole animal theme.

They’ve got a nice, pre-formatted demo on the homepage that gives you some options.

These options then spit out;

A pretty neat way of generating a custom featured image for a post.

They’ve actually got a cool tutorial here on connecting WordPress with Bannerbear to get something like this going for OpenGraph imagery.

The query string method has since been updated to an apparently ‘simpler’ method, but let’s take a look at the whole process.

An alternative, and a bit cheaper, could be using Placid. Seems they have some good integration methods too.

 

Creating a template

Once you’re signed up, you can start by creating a new template. The template will be the shell of what you’re generating, and allow you to hot-swap the different items out.

They offer quite a few templates, so I picked one that suited me and modified it as required.

I’ve included 4 images on the banner, along with the logo and a couple of text elements.

The 4 images will be swapped out, along with the title on the graphic.

I’m half jumping forward a bit with what the 4 images will be, but you’ll find out in the rest of the post.

As for the title, that will be the combo name.

 

Generating an image via URL

Following the tutorial here, to change the title and generate an image we get a URL of;

https://ondemand.bannerbear.com/simpleurl/<base>/image/title/text/Can+Monkeys+eat+Bananas?/

After about 5 seconds it spits out;

Lost my question mark though… I’ll work out how to get that back a bit later.

Now, the best part about this?

You can bulk generate the images with Excel!

Well, you can bulk generate them a tonne of different ways but I already have this bulk image downloader & renamer that I use.

This works perfectly. I wasn’t expecting that at all.

Make sure you follow the tutorial in the post above on how to set it up, as you just need to edit a single bit in the code to tell it where to save images.

After that, just fill in the image name (no extension), the generation URL, along with what’s being replaced.

So for my example above, it’s just going to replace the question text.

Unfortunately, you’re rate-limited at 10 requests per 10 seconds, which yes, you could hit.

We have to modify the Excel script by adding some lines. When you’re in the editor modifying the file name (see the guide linked earlier) you need to look for;

Next i
End Sub

Insert 2 new lines directly above that, and add;

'~~> Pause for 2 seconds between requests
Application.Wait (Now + TimeValue("0:00:02"))

The code will look like this;

It will add a 2-second pause between requests.
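For reference, if you’d rather script the whole run in Python instead of Excel, a rough sketch of the same throttled loop might look like the below. The base value and titles are placeholders, and I’m assuming the simple URL endpoint returns the finished image directly; percent-encoding the text should also bring that question mark back;

import time
from urllib.parse import quote_plus

import requests

BASE = "YOUR_SIMPLEURL_BASE"  # placeholder - the <base> value from your template
titles = [
    "Can Monkeys eat Bananas?",
    "Can Dogs eat Grapes?",
]

for title in titles:
    # quote_plus swaps spaces for + and encodes the ? as %3F,
    # so it isn't read as the start of a query string
    url = (
        "https://ondemand.bannerbear.com/simpleurl/"
        f"{BASE}/image/title/text/{quote_plus(title)}/"
    )
    response = requests.get(url, timeout=60)
    response.raise_for_status()

    # assuming a PNG comes back; adjust the extension if not
    filename = title.rstrip("?").lower().replace(" ", "-") + ".png"
    with open(filename, "wb") as f:
        f.write(response.content)

    time.sleep(2)  # stay well under the 10-requests-per-10-seconds limit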

However, I just realised that Bannerbear requires the darn $150/month plan to do this simple URL generation. You get 10,000 images rather than the 1,000 I was going to play with, but I’m not upgrading for that.

New method time.

 

Generating an image via Airtable

They offer an Airtable generation set-up too (tutorial here) where you just throw all the fields in an Airtable and then it will build everything for you… proper no code!

You just map the table headings to your field names, insert your data, and then add your Airtable API key to Bannerbear.

Then when setting up the import, you insert your Base ID and table name, and then Bannerbear gets full access.

Here’s my setup before import:

Bannerbear will then take it all, run through every row, and spit out the image URLs in the final column!

Pretty cool.

I’ll use the Excel image downloader on these URLs now so that I can properly rename the images, but generating them this way was faster and easier anyway!

I also got to keep my question mark.
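If you wanted to skip the Excel downloader entirely, you could also pull the finished URLs straight out of Airtable with a small script and rename as you download. A quick sketch, with made-up API key, base ID, table, and field names;

import requests

AIRTABLE_API_KEY = "keyXXXXXXXXXXXXXX"  # made-up key
BASE_ID = "appXXXXXXXXXXXXXX"           # made-up base ID
TABLE_NAME = "Combos"                   # made-up table name

url = f"https://api.airtable.com/v0/{BASE_ID}/{TABLE_NAME}"
headers = {"Authorization": f"Bearer {AIRTABLE_API_KEY}"}

# page through every record in the table
records, offset = [], None
while True:
    params = {"offset": offset} if offset else {}
    data = requests.get(url, headers=headers, params=params).json()
    records.extend(data["records"])
    offset = data.get("offset")
    if not offset:
        break

for record in records:
    fields = record["fields"]
    image_url = fields.get("bannerbear_image_url")  # assumed name of Bannerbear's output column
    filename = fields.get("filename")               # assumed name of your file-name column
    if image_url and filename:
        image = requests.get(image_url, timeout=60)
        with open(f"{filename}.png", "wb") as f:
            f.write(image.content)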

I’ll sort all of that after I sort the images that will actually go into these, and the pages.

Now the fun bit.

 

AI Image Generation

DALL-E 2

Have you heard of DALL-E? And in particular, DALL-E 2?

DALL-E takes a text prompt and turns it into an image.

Their homepage demo gave me a prompt option of;

Teddy bears working on new AI research on the moon in the 1980s

Which outputs an image of;

 

Pretty cool.

Can you see where I’m going with this?

Unfortunately, DALL-E 2 is in super limited beta access.

Even if you get access, you can ‘only’ do 50 images a day.

 

Midjourney

Another ‘top tier’ option is Midjourney.

They’ve recently entered beta, and work via Discord where you make your request and it’ll build the image in front of you. You can join the beta here.

When you run a generation, you get an initial set of 4 images.

You can then pick one, and get 4 more, or refine a specific one.

I’m going to try and get 3 new versions of the second image.


Let’s now upscale the first one and see what it spits out.

After it spends a minute generating the image, we get this thing of pure beauty.


It’s definitely more ‘artsy’ than DALL-E 2, but it could certainly work in the context of a ‘dogs eating grapes’ page.

 

DALL-E Mini

There’s an alternative called DALL-E Mini, which is basically just a ‘simpler’ version of DALL-E 2.

A very very simple version.

Let’s throw in the same prompt that DALL-E 2 gave me and check the result;

I mean, it’s so close haha.

However, it’s free and easily accessible.

So it’s a massive step up from anything else we can access in bulk at the moment.

Let’s take a look at an example that directly relates to our topic.

Just for comparison, here’s the exact same prompt generated from a DALL-E 2 beta account…

You can heavily tweak the prompts though.

There is some great info starting to pop up about throwing different elements on the end.

dog eating a banana cartoon

 

dog eating a banana sketch

 

dog eating a banana, studio lighting

 

I ran through a heap of options, trying to work out what would work best before finding something I liked.

Yeah, it’s not amazing, but it’ll do.

 

Bulk generating imagery

Now, in case the plan didn’t click, I’m going to attempt to bulk generate images for all the combo pages.

DALL-E Mini will throw out 9 images per request, so we’re gonna hook into that.

There are mentions of a DALL-E Mega trained model, which would be a bit more ‘refined’ than the images we’re seeing above.

However, that takes a bit more work to get going. So for the initial build, we’re going to aim for just grabbing this output and throwing imagery in an S3 bucket.

I’ve got one of the developers I’ve used previously to build something for me that’ll do this. Might look at sharing it later, won’t for now though.
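I won’t share the dev’s script for now, but the S3 half of something like that is only a few lines with boto3. Here’s a rough sketch, with a placeholder bucket name and assuming the 9 images per combo have already been saved into local folders;

from pathlib import Path

import boto3

s3 = boto3.client("s3")
BUCKET = "my-animal-food-images"  # placeholder bucket name

# assumed local layout: generated/<animal>-<food>/0.png ... 8.png
for combo_dir in sorted(Path("generated").iterdir()):
    if not combo_dir.is_dir():
        continue
    for image_path in sorted(combo_dir.glob("*.png")):
        key = f"{combo_dir.name}/{image_path.name}"
        s3.upload_file(
            str(image_path),
            BUCKET,
            key,
            ExtraArgs={"ContentType": "image/png"},
        )
        print(f"uploaded s3://{BUCKET}/{key}")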

I know I said all in Google Sheets… but… you obviously would never, ever, ever, need to do this sort of thing.

You’d have completely original imagery and would never use AI to generate them.

So it doesn’t count. This is just me filling in the gaps to better show you how to leverage your own content.

Safe to say this is my new favourite thing.

This is essentially a folder full of folders. Each folder is for a specific animal & food combo.

Each of these combo folders then contains the 9 images that were generated by my prompt.

HOW COOL.

I mean, look at these little fellas just enjoying life chilling with some strawberries.

Each request takes 2-3 minutes, so let’s go with 3 minutes. If we have 1,000 combos to generate imagery for, we’re looking at 50 hours to generate 9 images for each combo.

That’s not too shabby for generating 9,000 somewhat-usable images.

Probably won’t need all 9 for each post, and I certainly don’t want to go through 1,000 folders of imagery to manually select the best.

We need to prioritise the imagery somehow.

 

Google Cloud Vision AI

Google used to have something where you could put an image up, and it would tell you what it thought it was.

Well, I eventually managed to find it again.

Google Cloud Vision AI

You can throw the images in there, and it’ll output what it thinks it is;

Both the dog and the banana are clearly recognised.

Unfortunately, however, if we click over to the ‘safe search’ tab, we can see that Google is determining it as ‘very likely’ of being ‘spoof’.

Now I can’t find much quick info as to what this exactly means, but it seems to just be their fake-image detection.

Google knows it’s faked.

Massive sad face.

But then we look at this outputted image…

This one is marked as ‘likely’ now.

If we generate some others, we can get it lower still;

So now it’s only flagged as ‘possible’.

Maybe if we generate enough, it can get even lower.

After testing a heap more, I couldn’t seem to get dogs & bananas any lower.

Something about bananas doesn’t seem to work properly with AI haha.

However, this potato & cat image managed to get this;

So it’s clear you can fool the vision AI detection API with a potato butt cat.

Obviously, there’s a chance Google is using something completely separate in their algo.

There’s also a chance they’re not.

If people swear by the NLP API to optimise text, why can’t we leverage the Vision API to choose our imagery?

 

Image prioritisation with Vision AI

How can we use all this to pick the best x images?

Knowing that we can get at least a portion of images past the auto-detection, we could filter our imagery and give the images a score based on whether the animal & food are detected.

Another score could be based on the level of spoof the image has come back with.

Mocked up a quick Google sheet template, and will have to also throw this over to the dev to handle.

I’ve then thrown together a quick quality scoring system.

If the animal is detected, the image gets a +2.

If the food is detected, the image gets a +2.

Then a score depending on the spoof level.

The scores are just all added up, and I get a sortable score.
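The dev will handle the mass scan, but for the curious, the Vision API side of that scoring only takes a handful of lines. A sketch of it below, noting the spoof-level values are just my own guess at a sensible mapping;

from google.cloud import vision

client = vision.ImageAnnotatorClient()

# my own guess at turning the spoof likelihood into a score
SPOOF_SCORES = {
    vision.Likelihood.VERY_UNLIKELY: 2,
    vision.Likelihood.UNLIKELY: 1,
    vision.Likelihood.POSSIBLE: 0,
    vision.Likelihood.LIKELY: -1,
    vision.Likelihood.VERY_LIKELY: -2,
}

def score_image(path, animal, food):
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())

    # what does Google think is in the image?
    labels = {
        label.description.lower()
        for label in client.label_detection(image=image).label_annotations
    }
    # how spoofed does it look?
    spoof = client.safe_search_detection(image=image).safe_search_annotation.spoof

    score = 0
    if animal.lower() in labels:  # animal detected: +2
        score += 2
    if food.lower() in labels:    # food detected: +2
        score += 2
    return score + SPOOF_SCORES.get(spoof, 0)

print(score_image("dog-banana/image3.png", "dog", "banana"))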

Once sorted, there is a ‘sortID’ column which is the following formula;

=IF(D2=D1,F1+1,1)

All it does is count up from 1 while the combo is the same.

Then there’s a formula in useImage which just tells me if I should use the image or not.

=IF(F2<5,"Image"&F2,)

If the sortId is less than 5, then put Image<sortId>. So, Image1.

When it’s time to use these images, I know there will be exactly 4 images flagged, and their names will always be Image1-4 rather than the source image number, to avoid confusion.

The dev will just do a mass scan with the cloud API and then throw me back a CSV.

Another thing that I’ll have to rely on a dev for, but another super quick thing to do. You could do it yourself too if you want, but I’d rather just hand it off for a few $.

While that happens though, the image scraper has been working in the background and we’ve got images for 500 pages ready to rock and roll.

 

Embedding imagery in the posts

Now that we’ve got our images, we need to get them into our post data and embed them.

I’ve added the image folder to the Image column, which contains ‘Image1’ up to 4.

This way, we can now VLOOKUP from the main datasheet 4 separate times, once for each image.

Created 4 different columns on the datasheet.

You’ll see some blank spaces where the images errored out.

So basically, if there’s no image there will be no image link.

Exactly what we need it to do.

Now we need to take these, and build out the HTML code.

I could build the HTML code in the actual content for the page, but that’ll be a bit dirty and won’t help with troubleshooting.

Rather, I’ll add yet another 4 columns, one for each image’s HTML.

This way, I can just slap the cell reference into the content, and do any tweaks separately.

We can’t do this all as one, because the images won’t be sitting together as one big slab of imagery. Well, that’s not the aim anyway.

We will want to insert the imagery separately, so one at the top, one at the bottom, and then two in the guts.

Let’s start with a formula that checks if the link exists, uses the link if it does, and keeps the cell blank if it doesn’t.

=IF(len(T2)>0,T2,)

We can then modify it with a standard HTML image tag to dynamically insert the image URL from the image1 cell, along with creating alt text that combines the animal and the food.

=IF(len(T2)>0,"<img src="""&T2&""" alt="""&proper(D2)&" with "&proper(E2)&""">",)

Be sure to use 3 quotation marks so one survives in the outputted text. Sheets treats a quotation mark as the start or end of a text string, so you double it up to output a literal quote, then add a third to close the text portion and continue with the formula portion.

Duplicated it across the other 3 columns, and mixed up the alt text a little bit in each of them too.

Now we gotta stick them in the content.

I’m going to leverage the content templates we built out for this one, and just insert <image1> wherever I want to put the first image.

It can go after the first paragraph, but before the question answer.

Image 2 can then go after the first answer.

It didn’t end up quite as clean as I was hoping, but I have inserted the 4 variables into the text templates.

I’ve then done substitutions on the content sheet to swap out the variables for the HTML from the data sheet.

Probably should have done that HTML code on the content builder to save the vlookup, but oh well, it’s done now.

 

Uploading the new data with images

I’m not exactly sure whether this will work with the HTML code out of the box, so rather than toying around I’m just gonna give it a crack.

Going through the import there’s a step in WP Import/Export that allows for the importing of imagery;

Not exactly sure how this works though, so I am just going to continue the import of the HTML with the S3 references, and maybe we can update it so that it actually imports new imagery a bit later.

That’d be cool though.

The upload took a little longer than before…

Almost 4 minutes compared to the normal 30 seconds. Bit more to process now I guess.

But, IT WORKS!

Beautiful.

Except due to the image quality they’re pretty small, and left aligned.

Let’s update the HTML code to centre them, and make them a bit bigger.

I’ll make them 400×400, about double the size. The new formula is;

=IF(len(T2)>0,"<img class=""aligncenter"" src="""&T2&""" alt="""&proper(D2)&" with "&proper(E2)&""" width=""400"" height=""400"" />",)

Remember those double quotes to ensure they’re included in the output.

Now to reupload it all and check it.

Looks much better now.

Oh, and bonus!

It actually imported the imagery from the HTML, didn’t have to do anything else. Thought I was going to need to specify the URLs separately or something.

Bloody ripper!

 

 

Including the text-to-image featured image

Now that we’ve sorted out images for the pages, we need to sort the featured graphic we built out in Bannerbear.

I loaded up the 4 images per page into the Airtable we made, then jumped into Bannerbear to run the generation.

500 images were generated in about 5 minutes.

Well, there goes half my quota!

Completely worth it though.

How about those crazy cat eyes though!

Demons.

Now we gotta get them thrown into our spreadsheet, so I’ll just add yet another tab that can keep track of all of them.

I was thinking that maybe we could just import from the Bannerbear URL.

Unfortunately, though, all the images will have a jibber jabber name of random characters. More like an ID than an image name.

I could be lazy and just use that, but let’s do this right and give them a proper page name.

I formatted the titles to turn them into file names (without extensions) so we can use the Excel image downloader.
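If you’d rather script that title-to-filename step than do it in the sheet, a tiny helper like this would do the same thing (purely an illustration);

import re

def title_to_filename(title):
    # "Can Monkeys Eat Bananas?" -> "can-monkeys-eat-bananas"
    name = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return name.strip("-")

print(title_to_filename("Can Monkeys Eat Bananas?"))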

We can throw these names into the downloader with the Bannerbear URL.

And then just run it… and wait. It’ll be a while.

The images all come out at like 600KB though, so I need to try and compress and/or resize them.

I’ve just come across this image optimisation tool.

Free, and you can just upload a tonne of images, it’ll optimise them, and then you download them.

I resized all the images to 800px wide, rather than 1200, which halved their file size.

The tool downloaded them as PNGs though, so I then converted them to JPGs, which dropped them by 70%, and they’re all pretty much under 100KB now.

That’ll work perfectly.

Not amazing quality, but definitely plenty for what we need at the moment.
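If you’d rather script the resize and JPG conversion than run everything through a web tool, a quick Pillow sketch could do the same job. The folder names are assumptions, and the quality value is just a guess at what keeps files around that size;

from pathlib import Path

from PIL import Image

SRC = Path("bannerbear-originals")   # assumed folder of the downloaded PNGs
OUT = Path("bannerbear-compressed")
OUT.mkdir(exist_ok=True)

for png_path in SRC.glob("*.png"):
    img = Image.open(png_path).convert("RGB")  # JPG can't hold transparency
    # scale down to 800px wide, keeping the aspect ratio
    new_height = round(img.height * 800 / img.width)
    img = img.resize((800, new_height))
    img.save(OUT / f"{png_path.stem}.jpg", "JPEG", quality=75, optimize=True)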

Threw them all in S3, and now we just need to reference them in our data.

Knowing the folder where the images are, along with their names, we can piece a URL together by adding the folder path and a .jpg extension on the end.

Now just need to vlookup this from the main data sheet and we’re set.

Time to actually use these images somehow.

Figured I’d try put it as the featured image to test on one of the posts.

Works perfectly right out of the box, so we will just go with that.

The WP Import/Export plugin doesn’t have a separate field for the featured image, but there’s a box you can tick that sets the first image as the featured image.

Sounds dangerous if there isn’t a Bannerbear image available, but it seems like it’ll sort it for us provided we enter the featured image URL in the image URLs box.

Continuing with the import we just select it all and wait for the site to download from S3.

Worked like a charm.

Image quality is pretty shoddy though, so I might try to step the compression back a bit if we can.

Only 62KB now, so it’s pretty zoomy!

 

Meet Caroline

Caroline loves animals!

Leveraging Fake Name Generator for the name, and This Person Does Not Exist for the face, we made an author!

That’ll suit nicely for now. Definitely some solid credibility right there.

More so to look real to users than to Google, for this one though.

 

That’s a wrap for now

I’ll call it there for the end of part 4. We’ve done a lot!

No new data was added as Google is slow to kick the crawling off, so the 160 pages will be enough.

We’ll probably do another batch of pages in about a week.

Those 160 pages now have 4 in-content images, along with a featured image set!

Well, most of them. A few image generation issues mean there’s a handful (~10) missing them.

I’ve also released this dynamic content generation Google Sheets template, which is a basic version of what I am using in this project.

What’s next? Find out in part 5 here!

 

Current Page Count: 160
