The Internet Trained Us for This: Signal, Hypothesis, and Testing in the Age of AI
Before AI, There Was SEO

There have been a lot of very cool AI marketing posts lately. Some brilliant. Some unhinged. Some confidently predicting the end of marketing as we know it (again).
They’ve made me realize something pretty simple.
The internet trained us for this moment. We trained AI for this moment (schema markup, anyone?).
Early in my career, I worked at KeywordDiscovery. I got to see raw search volume data across tiny permutations of phrases. Singular versus plural. Word order shifts. One modifier added. Sometimes those small changes meant dramatically different search volume.
It permanently changed how I think.
Language isn’t just creative expression. It’s structured signal. Systems interpret it. Rank it. Amplify it.
Around that same time, I was experimenting with black hat SEO. Keyword stuffing. Backlink schemes. Blog commenting. It was chaotic and not exactly a moral high point, but it reinforced the same lesson: digital systems respond to structure.
Small changes in how something is framed can create disproportionate shifts in visibility.
For more than twenty years, marketers have been working inside algorithmic systems. We’ve been optimizing titles, headers, metadata, link structures, and semantic clusters. We learned how to think like the machine.
Anecdotally, do you remember that super-polite grandma who said "thank you" with each query?
AI doesn’t feel foreign to me for that reason. It feels like the next layer.
The big difference now is speed.
When I’m working on positioning for something complex, like sustainability AI or infrastructure optimization, the hard part isn’t writing the copy. It’s testing whether the narrative holds up under pressure.
- Is this actually legible to a CFO?
- Would a Chief Sustainability Officer see this as defensible, or as marketing gloss (or AI slop)?
- Does a VP of Infrastructure hear this and immediately think about migration risk, or do they think "tell me something I don't know"?
Instead of debating those questions internally for weeks, I now build structured persona agents and test the story against them.
Not generic personas. Detailed ones. A CFO with capital allocation pressure and skepticism around projected savings. A CSO worried about audit trails and regulatory exposure. An infrastructure leader who cares about uptime and integration complexity.
I run multiple positioning narratives through them. Efficiency-first. Compliance-first. Innovation-first.
The CFO agent immediately challenges vague ROI claims. The CSO agent flags anything that sounds like greenwashing. The infrastructure lead pushes on technical assumptions.
Within hours, I can see where the story breaks.
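The persona-agent loop described above can be sketched in code. This is a minimal, hypothetical structure, not a real implementation: the `Persona` class, the persona details, and the sample narratives are all illustrative, and the actual LLM call is left as a comment since it depends on whichever client you use.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """A structured buyer persona used to stress-test a positioning narrative."""
    role: str
    pressures: list[str]   # what keeps them up at night
    objections: list[str]  # the pushback they reliably raise

    def critique_prompt(self, narrative: str) -> str:
        """Build the instruction a persona agent would receive."""
        return (
            f"You are a {self.role}. "
            f"Your pressures: {'; '.join(self.pressures)}. "
            f"Your standing objections: {'; '.join(self.objections)}. "
            f"Critique this positioning narrative and say exactly where it breaks:\n\n"
            f"{narrative}"
        )

# Illustrative personas, mirroring the CFO / CSO / infrastructure leads above
personas = [
    Persona("CFO", ["capital allocation"], ["vague ROI claims", "projected savings"]),
    Persona("Chief Sustainability Officer", ["regulatory exposure"],
            ["greenwashing", "missing audit trails"]),
    Persona("VP of Infrastructure", ["uptime"],
            ["migration risk", "integration complexity"]),
]

# Competing narratives to run through every persona
narratives = {
    "efficiency-first": "Cut compute spend with AI-driven workload placement.",
    "compliance-first": "Audit-ready sustainability reporting out of the box.",
}

for name, pitch in narratives.items():
    for p in personas:
        prompt = p.critique_prompt(pitch)
        # response = llm_client.complete(prompt)  # hypothetical LLM call
```

The point of the structure is the cross-product: every narrative meets every persona's encoded objections, so weak framing surfaces quickly instead of surviving on internal consensus.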
That’s the part that excites me.
Because this isn’t about generating more content. It’s about faster hypothesis testing.
And when I zoom out, it feels very familiar.
SEO trained us to change one variable and watch the shift. Search volume data trained us to respect micro-permutations. Metadata trained us to think about structure.
AI just makes that discipline more explicit.
If your inputs are shallow, the output is shallow. If your differentiation is weak, the model defaults to generic category language. If you only feed it your own materials, it reinforces your bias.
The internet trained us to think in signal.
AI is forcing us to think in structured hypotheses.
The marketers who benefit most from this shift won’t be the ones automating the fastest. They’ll be the ones who understand how to design experiments, encode real tension, and test assumptions honestly.
We’ve been practicing for this for two decades.
We just didn’t realize what we were practicing for.
H/T to the LinkedIn post that got me thinking
