AI content wasn’t good enough. Now it is.

It is a widely held belief that AI-generated blog posts are inherently low quality and inferior to their human-generated equivalents.

Companies that scale AI-generated content are assumed to be making a trade-off: choosing speed and scalability at the expense of quality. Sure, AI is faster than any human and can produce a passable first draft, but (so the belief goes) using AI still means sacrificing something important.

I now consider this belief to be outdated. I think we’ve reached the point where generative AI can create content that is indistinguishable from the vast amount of human-written content that content marketers like me have created over the past few years.

AI has become a more thorough researcher, more compliant with brand and language guidelines, more flexible in responding to feedback, faster and more efficient. Using AI for content creation no longer comes with a trade-off.

This isn’t to say that all AI content is good by default, but the barriers that prevented good AI content generation have fallen. Access to world-class writing through LLMs is still uneven, but it won’t stay that way for long. Functionally “perfect” AI content is just around the corner for all of us, and it is in our best interest to recognize this.

Here’s why.

Many people believe that there is an inherent quality in human writing that AI can never match, a creative spark forever out of reach of our silicon counterparts.

I’m not suggesting that AI will ever reach the depth of Shakespeare, but I am suggesting that “great writing” is simpler and more mechanical than most people realize. Most of the ingredients of “great writing” are things that LLMs do very, very well.

I’ve spent my entire career trying to become a better writer by studying the writing process and asking why some things work and others don’t. I’m not an expert in the field, but I have developed an effective worldview of the mechanics of writing and a set of writing principles that I consistently follow.

A quick snapshot of my editorial checklists for training new writers.

For example, here are some random excerpts from my editing checklist:

  • Have we addressed the most obvious objections to this idea?
  • Did we use dense words wherever possible? (“novel” instead of “something new”, “worldwide” instead of “on a global scale”, etc.)
  • Have we replaced weasel words with concrete examples? (“business outcomes”, “experts believe”, “analyze the data”, “make a decision”, etc.)
  • Do we avoid making difficult things sound easy?
  • Do we lead with the most important information? (in the introduction, at the start of each paragraph)
  • etc.

These principles are how I write, how I revise, and how I teach writing. They are extremely simple, but applied in unison, they result in something that quite often ends up being genuinely good writing.

In fact, these principles are so simple that an LLM can implement them perfectly – and usually better than me. I often apply these principles unevenly, due to fatigue, boredom, or laziness. An LLM, however, can have these principles established once and follow them indefinitely. They can be applied evenly across hundreds, thousands, even millions of outputs, encoded in system prompts and SKILL files (more on this in the next section).
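As a minimal sketch of what “establish once, follow indefinitely” looks like in practice: a handful of editorial heuristics collected into a reusable system prompt. The principle wording and the helper function here are illustrative, not an actual production prompt.

```python
# Illustrative sketch: encoding editorial principles once, so they can be
# applied evenly to every generated output via a system prompt.
# The principles below paraphrase the checklist excerpts above.

EDITORIAL_PRINCIPLES = [
    "Address the most obvious objections to the idea.",
    "Prefer dense words ('novel' instead of 'something new').",
    "Replace weasel words with concrete examples.",
    "Do not make difficult things sound easy.",
    "Lead with the most important information.",
]

def build_system_prompt(principles: list[str]) -> str:
    """Assemble a reusable system prompt from a checklist of principles."""
    rules = "\n".join(f"- {p}" for p in principles)
    return f"You are an editor. Apply every rule below to all output:\n{rules}"

prompt = build_system_prompt(EDITORIAL_PRINCIPLES)
```

The same list could just as easily live in a SKILL file or project instructions; the point is that the checklist is written down once and enforced on every output, rather than remembered (or forgotten) by a tired human.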

If you accept that there is a basic recipe for great writing (and I believe there is), then LLMs can follow it very well. Chain enough of these heuristics together reliably and you can build an exceptional AI writing process.

And finally we have the technology to make this possible.

For many people, their world view of AI is still anchored in the chat experience. But LLMs – and more importantly, the infrastructure that surrounds them – have made tremendous progress in recent months.

Even in their early days, large language models showed sparks of superhuman brilliance in small areas. But like a precocious child imitating its parents without truly understanding them, it was hard to imagine those sparks developing into a flame of real writing ability.

A handful of coherent sentences seemed a million miles away from reliably generating thousands of words of accurate, helpful, concise, on-brand copy: identifying and closing topic gaps, understanding the prevailing search intent, differentiating from competing articles, and so on.

When I wrote about my previous AI writing process (using custom GPTs based on my editorial principles), I saw many sparks of brilliance in the output, but the final product still relied on human intervention.

An early version of our AI content pipeline, built with ChatGPT projects and custom GPTs.

But that is no longer the case. Just seven months later, the limitations of this process have disappeared. Today, my $20 a month Claude subscription provides access to abilities that seem almost science fiction. I can:

  • Chain multiple LLM processes into a continuous workflow (Claude Code, OpenAI Codex and other agent models).
  • Provide guardrails to avoid much of the probabilistic “wobble” we see when LLMs typically attempt to follow processes (SKILLs), and encourage them to recursively evaluate their performance and self-improve.
  • Integrate AI into existing workflows across other tools (MCP).
  • Ground content in research documents, examples of existing writing, tone of voice, and brand guidelines (RAG, memory, context).

(And this ignores the significant improvements that the flagship models themselves have shown in recent years.)

Some of the custom SKILL files we created for Claude Code.

The entire vibe-coding infrastructure developed over the past year has had a transformative impact on the usefulness of LLMs in general. LLMs are still “just” sophisticated autocomplete – and we certainly haven’t reached AGI – but companies like Anthropic and OpenAI have managed to harness this behavior in ways that feel much, much more useful than the sum of their parts.

And more importantly, the task in front of them – content marketing – is not particularly complicated.

Most content marketers spend most of their time creating informative, keyword-focused content: helpful “how-to” articles or comparison lists. These are the tried and tested archetypes of content marketing and are generally pretty easy to create.

I still believe that there is a basic recipe for effective search content. Here are some of the core principles we try to follow in our search content:

  • Address primary search intent
  • Build on the consensus in existing search results
  • Fill in any obvious topic gaps between your articles and those of your competitors
  • Add new, novel information that goes beyond existing results
  • Reference any relevant existing content you have already created on this topic
  • Point to relevant external content that might help the reader explore further
  • Prioritize topics that allow you to naturally reference your product
  • Make sure the article structure is mutually exclusive and collectively exhaustive
  • Make sure the article structure actually does what the title promises
  • Pique the reader’s interest with the title and introduction
  • Insert keywords and keyword variations naturally into important parts of the article
  • etc.
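Several of these principles are mechanical enough to verify programmatically. As an illustrative sketch (the checks and thresholds are assumptions, not our actual editorial pipeline), here are two of them as simple functions: keyword placement in important positions, and the article structure delivering what the title promises.

```python
# Illustrative sketch: mechanically checking a draft against two of the
# simpler principles above. The draft structure and checks are assumptions,
# not a real editorial pipeline.

def keyword_in_key_positions(draft: dict, keyword: str) -> bool:
    """Principle: keywords should appear naturally in important places."""
    kw = keyword.lower()
    return kw in draft["title"].lower() and kw in draft["intro"].lower()

def structure_matches_title(draft: dict) -> bool:
    """Principle: a title like '7 Keyword Research Tips' should deliver 7 sections."""
    first_word = draft["title"].split()[0]
    if first_word.isdigit():
        return len(draft["sections"]) == int(first_word)
    return True  # Non-numbered titles pass this check by default.

draft = {
    "title": "7 Keyword Research Tips",
    "intro": "Keyword research is the foundation of search content...",
    "sections": ["Tip 1", "Tip 2", "Tip 3", "Tip 4", "Tip 5", "Tip 6", "Tip 7"],
}
ok = keyword_in_key_positions(draft, "keyword research") and structure_matches_title(draft)
```

In practice you wouldn’t hand-code every principle like this; the point is that checks this mechanical can just as easily be handed to an LLM as evaluation criteria.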

These are similarly simple concepts that lead to effective search content. If a person can follow these processes, their search content will generally perform well. The same applies to an LLM: if Opus 4.6 or GPT 5.4 can follow these processes, its output will perform well too.

Even the most obscure of these processes is fairly trivial for an LLM, whether by providing explicit steps to follow (“Use WebFetch to run a site: search for ahrefs.com/blog and return the first three articles…”), examples of the desired output (like a reference file of your favorite article introductions), or access to trusted data sources (like the Ahrefs MCP).

A preview of a SKILL file for retrieving existing Ahrefs content for a specific keyword.

As much as we might wish otherwise, effective search content is extremely formulaic (hence the success of the skyscraper method). There is no need for great complexity or novelty, no need for poetry or for disagreeing with the SERP.

There is room for innovation and experimentation, but less than you might imagine: moving too far outside the Overton window will usually worsen performance rather than improve it (I say this after many failed attempts to create “smart” search content).

If Claude can refactor a 100,000-line codebase, it seems arrogant to assume that large language models can’t write great search-optimized content. AI can’t write Shakespeare, but it doesn’t have to.

Final thoughts

Whether I have convinced you or not, at the time of writing I have already outsourced significant parts of my role to generative AI. I use Claude Code, the Ahrefs MCP, and a set of ~15 custom SKILLs chained together in sequence to update old articles and create helpful, high-quality content.

Claude begins updating the content.

These articles sound like me. They do the same job. They include my experience and perspective. They are as good as anything I could have written myself; better, in fact, because otherwise I wouldn’t have had time to create them at all. There is no compromise.

There is still a big gap in the quality possible between an experienced writer using generative AI to its full potential and the average layman asking ChatGPT to “write a blog post.”

But this gap is much smaller than it used to be, and in the long term it will close as AI platforms continue to democratize access to all of these capabilities. “Content engineer” skills will become just another workflow on every major LLM platform. Functionally “perfect” AI content is just around the corner for all of us.

I can make this argument comfortably because there are many parts of my work that I still can’t outsource to AI, and others that I wouldn’t outsource even if I could (like this article).

The path forward requires honesty about where AI can and should be used. Until recently, AI content wasn’t good enough. Now it is. The sooner we admit this, the more time we have to focus on the areas of marketing where people will have longer, happier tenures.

(And I don’t miss writing skyscraper content.)

