In my post from last week, I opined that domains are evolving from simple addresses into identity assets, trust signals, and strategic business infrastructure. That sounds promising until you consider the other side of the story: the same AI that can spin up legitimate domains, websites, and marketing strategies can do the same for criminals just as easily.
If a bad actor can generate a lookalike domain, cloned site, and digital marketing funnel in minutes, then “professional-looking website plus domain” stops being a reliable trust signal. The question shifts from “does this look real?” to “what verifiable signals prove this is real?”
The abuse problem is accelerating
AI is already amplifying phishing and impersonation at scale. Three patterns stand out:
Attack volume is climbing. Generative tools remove language, design, and coding barriers, making it easier for attackers to launch campaigns that would have required significant skill just a few years ago.
Fake sites look more convincing. Cloned websites and on-brand login pages are now trivial to generate, making them harder to distinguish from legitimate domains at a glance.
Infrastructure is more disposable. Attackers can register, host, and abandon domains quickly, churning through lookalikes faster than traditional takedown processes can respond.
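To make the lookalike problem concrete, here is a minimal sketch of how a defender might flag suspicious registrations: normalize common character substitutions, then compare against a protected brand name with edit distance. The homoglyph map and the distance threshold are simplified assumptions for illustration, not a production confusables table.

```python
# Illustrative sketch: flag domains that resemble a protected brand name.
# The substitution map and threshold below are assumptions for the example.

HOMOGLYPHS = {"0": "o", "1": "l", "3": "e", "5": "s", "rn": "m", "vv": "w"}

def normalize(label: str) -> str:
    """Collapse common look-alike substitutions to a canonical form."""
    label = label.lower()
    for fake, real in HOMOGLYPHS.items():
        label = label.replace(fake, real)
    return label

def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def looks_like(candidate: str, brand: str, max_dist: int = 1) -> bool:
    """True if the candidate's first label is within max_dist edits
    of the brand after homoglyph normalization."""
    label = candidate.split(".")[0]  # ignore the TLD in this sketch
    return edit_distance(normalize(label), normalize(brand)) <= max_dist
```

So `looks_like("examp1e.com", "example")` catches the digit-for-letter swap, while an unrelated name passes. Real detection systems layer in IDN confusables, keyboard-adjacency typos, and combo-squatting, but the shape of the check is the same.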
By 2030, it is reasonable to expect waves of AI-generated lookalike domains, each with a convincing site and a full marketing funnel, spinning up and shutting down within days.
The domain industry is starting to respond
The question of how to verify the identity of AI agents is no longer theoretical. Initiatives are emerging to build that layer of accountability on top of the DNS foundation.
GoDaddy has been developing Agent Name Service (ANS), an open standard that uses DNS and SSL to give AI agents verified, portable identities across the web. And just today, Identity Digital announced a new Innovation Labs division and introduced DNSid, which they describe as a “birth certificate for AI agents” that binds DNS ownership, PKI-based proof, and blockchain-backed audit receipts into a single identifier.
These are welcome developments, but they are also early. The industry now faces a different kind of challenge: how do these competing approaches converge into interoperable standards that work across platforms, registries, and jurisdictions? There will likely be more approaches from unexpected quarters, including smaller players and independent builders. The question is whether the industry can coalesce around something universal before fragmentation of agent identity becomes its own problem.
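The details of ANS and DNSid are still emerging, but the underlying pattern in both is familiar from DANE-style pinning: the domain owner publishes a cryptographic fingerprint in DNS, and a verifier checks that any agent claiming that identity presents key material matching the published fingerprint. A minimal sketch follows; the record format (`v=agentid1; fp=...`) and field names are entirely hypothetical, invented for illustration, and are not the actual ANS or DNSid wire format.

```python
# Minimal sketch of DNS-bound agent identity, in the spirit of
# DANE-style key pinning. The TXT-record format here is hypothetical.
import hashlib

def parse_identity_record(txt: str) -> dict:
    """Parse a hypothetical 'k=v; k=v' identity TXT record."""
    fields = {}
    for part in txt.split(";"):
        if "=" in part:
            key, _, value = part.strip().partition("=")
            fields[key] = value
    return fields

def fingerprint(public_key_material: bytes) -> str:
    """SHA-256 fingerprint of the agent's public key material."""
    return hashlib.sha256(public_key_material).hexdigest()

def agent_matches_domain(txt_record: str, presented_key: bytes) -> bool:
    """The key an agent presents must hash to the fingerprint the
    domain owner published in DNS - that is the ownership binding."""
    fields = parse_identity_record(txt_record)
    return (fields.get("v") == "agentid1"
            and fields.get("fp") == fingerprint(presented_key))
```

The point of the binding is that only someone who controls the zone can publish the record, so DNS control vouches for the key. A full design would add signature verification, certificate chains, and revocation, which is where the PKI and audit-receipt layers in the announced systems come in.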
Legitimate operators face the same tools
The irony is that legitimate operators will use the same capabilities. AI website builders and marketing platforms already promise “full site plus copy plus SEO plus ad strategy” for small businesses. A local business that wants to modernize but lacks the time or expertise to build a site can now get a ready-made, AI-generated digital presence.
The result is a strange symmetry. Both the attacker and the honest local business can show up with a polished, machine-generated web presence. The difference is intent and identity, not appearance. That is why identity binding and verification matter.
The trust stack has to get thicker
If anyone can look the part, trust has to come from deeper layers that are harder to fake at scale.
Several developments are already pointing in that direction:
Stronger identity binding. Policy discussions are increasingly focused on tying domains to verified entity data in higher-risk use cases.
Multi-signal verification. Security providers are leaning on combinations of DNS data, certificate status, email authentication, and behavioral signals to score risk.
Reputation over time. Domain reputation systems that track how a name behaves over months or years are becoming more important inputs to security controls and automated agents.
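The multi-signal idea can be sketched as a simple weighted score over the kinds of signals listed above. The signal names, weights, and thresholds here are assumptions made for the example, not any vendor's actual scoring model.

```python
# Illustrative sketch: combine verification signals into a risk score.
# Weights and thresholds are invented for this example.
from dataclasses import dataclass

@dataclass
class DomainSignals:
    domain_age_days: int         # reputation over time
    has_valid_cert: bool         # certificate status
    has_dmarc: bool              # email authentication posture
    resembles_known_brand: bool  # lookalike / homoglyph check

def risk_score(s: DomainSignals) -> float:
    """Higher is riskier; result is clamped to [0, 1]."""
    score = 0.0
    if s.domain_age_days < 30:
        score += 0.4             # brand-new, disposable infrastructure
    if not s.has_valid_cert:
        score += 0.2
    if not s.has_dmarc:
        score += 0.1
    if s.resembles_known_brand:
        score += 0.5             # strongest phishing indicator here
    return min(score, 1.0)
```

A three-day-old lookalike with no certificate and no DMARC maxes out the score, while a long-lived, well-configured domain scores zero. Production systems use far richer models, but the principle is the same: no single signal decides, and age and behavior weigh as heavily as appearance.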
From the user’s point of view, this may surface as more visible trust indicators or warnings across browsers, inboxes, and AI assistants. From the industry’s perspective, this means DNS operators and intermediaries will be expected to do more, faster, and with greater transparency.
Where does this leave the domain industry?
I have said many times that every year, something happens in this industry that would have sounded crazy if you had predicted it on January 1. AI-driven impersonation is already giving us a preview of that dynamic. By 2030, I suspect the more interesting story will not be how bad the attacks became, but how the industry responded. Did we build a trust layer that scales? Did we find common ground on standards? Or did we end up with fragmented identity systems that create new problems of their own?
The conversation has started. The next few years will tell us whether it leads somewhere useful.
By the way, if you would like to chat about trends, or any high-stakes decisions and challenges you may be facing around DNS abuse, business development, partner and channel strategy, overall growth, compliance, market entry, and operations, just reach out! And if you would like to meet in person, I will be attending NDD 2026 in Stockholm, May 24-26.