AI, DNS Abuse, and the Trust Problem We Cannot Ignore

In my post from last week, I opined that domains are evolving from simple addresses into identity assets, trust signals, and strategic business infrastructure. That sounds promising until you consider the other side of the story: the same AI that can spin up legitimate domains, websites, and marketing strategies can just as easily spin them up for criminals.

If a bad actor can generate a lookalike domain, cloned site, and digital marketing funnel in minutes, then “professional-looking website plus domain” stops being a reliable trust signal. The question shifts from “does this look real?” to “what verifiable signals prove this is real?”

The abuse problem is accelerating

AI is already amplifying phishing and impersonation at scale. Three patterns stand out:

Attack volume is climbing. Generative tools remove language, design, and coding barriers, making it easier for attackers to launch campaigns that would have required significant skill just a few years ago.

Fake sites look more convincing. Cloned websites and on-brand login pages are now trivial to generate, making them harder to distinguish from legitimate domains at a glance.

Infrastructure is more disposable. Attackers can register, host, and abandon domains quickly, churning through lookalikes faster than traditional takedown processes can respond.

By 2030, it is reasonable to expect waves of AI-generated lookalike domains, each with a convincing site and a full marketing funnel, spinning up and shutting down within days.

The domain industry is starting to respond

The question of how to verify the identity of AI agents is no longer theoretical. Initiatives are emerging to build that layer of accountability on top of the DNS foundation.

GoDaddy has been developing Agent Name Service (ANS), an open standard that uses DNS and SSL to give AI agents verified, portable identities across the web. And just today, Identity Digital announced a new Innovation Labs division and introduced DNSid, which they describe as a “birth certificate for AI agents” that binds DNS ownership, PKI-based proof, and blockchain-backed audit receipts into a single identifier.

These are welcome developments, but they are also early. The industry now faces a different kind of challenge: converging competing approaches into interoperable standards that work across platforms, registries, and jurisdictions. There will likely be more proposals from unexpected quarters, including smaller players and independent builders. The question is whether the industry can coalesce around something universal before fragmentation of agent identity becomes its own problem.

Legitimate operators face the same tools

The irony is that legitimate operators will use the same capabilities. AI website builders and marketing platforms already promise “full site plus copy plus SEO plus ad strategy” for small businesses. A local business that wants to modernize but lacks the time or expertise to build a site can now get a ready-made, AI-generated digital presence.

The result is a strange symmetry. Both the attacker and the honest local business can show up with a polished, machine-generated web presence. The difference is intent and identity, not appearance. That is why identity binding and verification matter.

The trust stack has to get thicker

If anyone can look the part, trust has to come from deeper layers that are harder to fake at scale.

Several developments are already pointing in that direction:

Stronger identity binding. Policy discussions are increasingly focused on tying domains to verified entity data in higher-risk use cases.

Multi-signal verification. Security providers are leaning on combinations of DNS data, certificate status, email authentication, and behavioral signals to score risk.

Reputation over time. Domain reputation systems that track how a name behaves over months or years are becoming more important inputs to security controls and automated agents.

From the user’s point of view, this may surface as more visible trust indicators or warnings across browsers, inboxes, and AI assistants. From the industry’s perspective, this means DNS operators and intermediaries will be expected to do more, faster, and with greater transparency.
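To make the “multi-signal verification” idea above a little more concrete, here is a minimal sketch of how a risk scorer might combine a few domain signals into a single number. The signal names, weights, and the neutral starting score are illustrative assumptions on my part, not any vendor’s actual model.

```python
from dataclasses import dataclass

@dataclass
class DomainSignals:
    # Illustrative inputs a security provider might already collect.
    domain_age_days: int             # from the WHOIS/RDAP creation date
    has_valid_tls_cert: bool         # certificate present, valid, not expired
    dmarc_policy: str                # "none", "quarantine", "reject", or "" if absent
    lookalike_of_known_brand: bool   # e.g. flagged by a homoglyph/typo detector
    prior_abuse_reports: int         # historical reports against this domain

def risk_score(s: DomainSignals) -> float:
    """Combine signals into a 0-100 risk score; the weights are invented for illustration."""
    score = 50.0
    score -= min(s.domain_age_days / 365, 5) * 5        # a long history lowers risk
    score -= 10 if s.has_valid_tls_cert else 0
    score -= {"reject": 10, "quarantine": 5}.get(s.dmarc_policy, 0)
    score += 25 if s.lookalike_of_known_brand else 0     # lookalikes raise risk sharply
    score += min(s.prior_abuse_reports, 5) * 5
    return max(0.0, min(100.0, score))

# A three-day-old lookalike with no email policy scores well above the neutral 50.
suspect = DomainSignals(domain_age_days=3, has_valid_tls_cert=True,
                        dmarc_policy="", lookalike_of_known_brand=True,
                        prior_abuse_reports=0)
print(round(risk_score(suspect)))
```

The point is not the specific weights but the shape of the approach: no single signal decides anything, and a polished site alone moves the score very little.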

Where does this leave the domain industry?

I have said many times that every year, something happens in this industry that would have sounded crazy if you had predicted it on January 1. AI-driven impersonation is already giving us a preview of that dynamic. By 2030, I suspect the more interesting story will not be how bad the attacks became, but how the industry responded. Did we build a trust layer that scales? Did we find common ground on standards? Or did we end up with fragmented identity systems that create new problems of their own?

The conversation has started. The next few years will tell us whether it leads somewhere useful.

By the way, if you would like to chat about trends, or any high-stakes decisions and challenges you may be facing around DNS abuse, business development, partner and channel strategy, overall growth, compliance, market entry, and operations, just reach out! And if you would like to meet in person, I will be attending NDD 2026 in Stockholm, May 24-26.

Three Forces Steering the Future of the Domain Name Industry

I have been in this industry long enough to know that every year brings at least one surprise that would have sounded absurd if you had predicted it on January 1. Yet despite any surprises, the larger direction keeps becoming clearer: domains are evolving from simple addresses into identity assets, trust signals, and strategic business infrastructure.

In this post, I am sharing three long‑term forces that I believe will shape the next several years of the domain name industry, and then adding a bit of speculation about how business models might evolve around them.

1) Domains become trust infrastructure

Domains are becoming less of a destination and more of a trust signal. As AI agents, automated systems, and machine‑to‑machine interactions grow, the domain will matter increasingly as a way to prove legitimacy, establish identity, and support secure communication.

In that world, DNS, authentication, and brand protection stop being back‑office hygiene and start to look more like core infrastructure. Stronger identity binding, certificate management, DMARC, and other signals around the domain will matter just as much as the label to the left of the dot.
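As a small illustration of what “signals around the domain” can look like in practice, a DMARC policy is simply a TXT record published at _dmarc.<domain>, and any system deciding whether to trust mail from that domain can look it up. The sketch below is a minimal check using the dnspython library; example.com is a placeholder, and the parsing is deliberately simplistic.

```python
import dns.resolver  # pip install dnspython

def dmarc_policy(domain: str) -> str | None:
    """Return the published DMARC policy (the p= tag) for a domain, or None if absent."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.lower().startswith("v=dmarc1"):
            for tag in record.split(";"):
                key, _, value = tag.strip().partition("=")
                if key.lower() == "p":
                    return value.strip().lower()  # "none", "quarantine", or "reject"
    return None

print(dmarc_policy("example.com"))
```

A domain that publishes p=reject is telling the world exactly what to do with mail that fails authentication, which is precisely the kind of machine-readable trust signal this section is about.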

If you think about domains less as “web addresses” and more as durable identifiers in a machine‑mediated ecosystem, it becomes easier to see why this trend feels foundational.

2) The market keeps fragmenting

For years, the default assumption in many markets was that a .com meant a serious online presence, with everything else treated as secondary. That one‑size‑fits‑all mindset is steadily breaking down. .com will remain the anchor, but ccTLDs are strengthening where local trust and regulation matter, and selected new gTLDs are finding real use where branding clarity or category fit beats tradition.

The upcoming new gTLD round adds even more room for experimentation, giving brands, communities, and geographic registry operators greater naming flexibility than ever before. You can already see signs of this fragmentation in registration data and in the way younger companies are more willing to adopt non‑.com identities when the semantic fit is better.

Fragmentation can be messy, but it also reflects a simple reality: different use cases call for different types of trust, and no single TLD can serve all of them equally well.

3) AI changes how names are chosen

AI is reshaping how people search for, evaluate, and buy domains. Naming decisions that once required long brainstorms and manual availability checks are increasingly guided by AI‑assisted tools that balance brandability, search visibility, memorability, and availability.

On the supply side, AI‑driven recommendation engines can surface candidate names across many TLDs, incorporate SEO and competitive intelligence, and even suggest secondary‑market options when the ideal string is already taken. On the demand side, small businesses and startups are already seeing “just give me a name, site, and basic marketing plan” experiences from AI website builders and hosting providers.

Over time, this should reduce some of the more random speculative noise and increase demand for names that are short, clear, and genuinely useful across markets and channels.

4) New business models emerge

If you follow these three forces forward, it is not hard to imagine new registry and registrar models emerging.

I can imagine a registrar built specifically for AI agents rather than humans, with naming, provisioning, and policy logic designed for machine customers that need identity, reputation, and revocation controls at scale.

A marketplace could also emerge for turnkey websites tied to domain names, where a hoster, registrar, or platform curates ready‑to‑launch businesses for specific local needs. Think of a plumber in Austin who wants to target a specific neighborhood, but does not have the time or expertise to build the site, write the copy, or plan the marketing. In that model, AI could assemble the business plan, budget, site structure, and lead‑generation strategy, and the customer would buy a ready‑made digital business instead of starting from scratch.

Pieces of this already exist in today’s market, from AI website builders to managed small‑business website packages to marketplaces for existing online businesses, but they have not yet fully converged into a mainstream, domain‑centric “turnkey business” offering. It is not hard to see how they could.

What I would watch

None of this happens in a vacuum. DNS abuse, phishing, impersonation, and trademark infringement will continue to shape policy, contracts, and risk management, but that topic is large enough that I am saving it for a separate post.

In the meantime, if you work in or around the domain name ecosystem, here are a few things I would keep an eye on over the next couple of years:

  • How AI agent identity and verification requirements evolve at the DNS and certificate layers.
  • The relative strength of ccTLDs versus gTLDs in local markets and regulated sectors.
  • The performance of new gTLDs from the upcoming round, especially those tied to clear communities or use cases.
  • Early experiments in turnkey website and domain marketplaces aimed at micro‑businesses and local services.

Summing up, I am reasonably confident about the direction, even if the surprises along the way are impossible to predict. Every year, something happens that would have sounded crazy in January, but the long‑term arc still looks clear: the domain industry is moving toward more identity, more specialization, and more inventive ways to package digital presence for both humans and machines.

By the way, if you would like to chat about trends, or any high-stakes decisions and challenges you may be facing around business development, partner and channel strategy, overall growth, compliance, market entry, DNS abuse, and operations, just reach out! And if you would like to meet in person, I will be attending NDD 2026 in Stockholm, May 24-26.

New gTLDs, Next Round: Who Really Wins This Time?

With ICANN’s next new gTLD application window expected to open on April 30th, I’m less interested in which strings are worth applying for in isolation (that’s not to say I haven’t formed my own list of personal favorites!) and more interested in whether the business models behind them still make sense, because the strength of a string means very little without a sound strategy underneath it. The 2012 round produced a handful of clear winners, a long tail of underperforming registries, and some real lessons about DNS abuse and policy that will weigh even more heavily this time around.

The industry now has more than a decade of real-world data on what seems to work. Some TLDs have quietly compounded value. Others have turned into abuse magnets, policy headaches, or write-offs that no marketing budget can fix. Abuse statistics, tighter safeguards, the ban on private contention deals, and clearer limits on things like closed generics all point in the same direction: “launch it and hope” is a weaker bet than ever. As I noted in my December post on the final Applicant Guidebook, the applicants who will regret this round are those who see the Guidebook as paperwork that their vendors handle while they focus on the string and the logo.

Likely winners: models with substance

I see three business-model profiles with the better odds of coming out ahead.

Brands that treat their TLD as infrastructure

In 2012, .brands were an experiment of sorts, and perhaps a defensive play or “FOMO” exercise for those with the cash at the time. Today, a good number of them are actually being used for secure customer journeys, internal namespaces, and identity-driven projects. The brands that will win in this next round already know exactly where a TLD fits into their security, customer experience, and branding strategies. They are unlikely to apply simply because they think a competitor will. I think more brands will apply because the TLD will perform specific, measurable work and deliver real ROI. As I outlined in my four under-discussed dotBrand checklist questions, a .brand discussion that focuses only on trust, security, and marketing upside misses some of the most challenging strategic questions, including governance, exit planning, and data sovereignty. Those who ask the harder questions up front will be better prepared, better resourced, and more likely to actually use and sustain what they apply for.

Focused open generics with a real go-to-market plan

The worst abuse in the last round clustered in cheap and, in some cases, loosely policed namespaces. TLDs that ended up in better situations tended to have a clear audience, sustainable pricing, and a genuine abuse enforcement posture. For this next round, I suggest that the open generics that do well will pick a defined segment, design their policy and pricing around that market, and treat abuse mitigation as part of the core product. They will also think carefully about demand generation and distribution. Make no mistake: the registrar channel is extremely important for success, but as I wrote in my December post on launching without begging registrars for shelf space, treating it as the only path to discovery is outsourcing your fate. The registries that do well will own their demand funnel, not just their backend.

Geos and IDNs with genuine local depth

Geographic and IDN TLDs work when they are rooted in genuine local engagement. From my own experience working with IDN TLDs, governments and community stakeholders need to be involved from the start, not brought in at the end. The strongest of these will pair cultural and linguistic relevance with credible governance and a serious approach to abuse handling. Local trust takes years to build and seconds to lose the moment users decide your namespace is an unsafe digital neighborhood. 

Universal Acceptance (UA) is not a checkbox item for IDN applicants; it is an ongoing operational reality. Getting email validation, system compatibility, and end-user awareness right across a diverse global ecosystem takes sustained effort, and any IDN registry applicant going into this round should have a clear-eyed plan for it. To their credit, the Universal Acceptance Steering Group (UASG) has moved this conversation forward in meaningful ways, and IDN applicants today are working from a much better starting point than their counterparts were in 2012.

Likely losers: wishful thinking and familiar mistakes

On the other hand, some well-worn approaches look increasingly fragile.

“2012, but again” open generics

High-volume, low-price models that plan to deal with abuse later now face a very different environment. Abuse patterns from the last round are well documented and widely quoted. Safeguards have tightened. Policy discussions have moved on. A business case that quietly depends on running a noisy, low-trust namespace is far more likely to encounter friction from regulators, ICANN, and the broader ecosystem than it was a decade ago.

Under-capitalized, “just add marketing” registries

Unless you have plenty of discretionary capital to tie up and risk, applying for a new gTLD is not a domain speculation play. It is a long-term commitment to run critical internet infrastructure with real technical, financial, and compliance obligations. The $227,000 application fee is just the starting line. Legal, RSP fees, policy staffing, and a multi-year operating runway all follow. As I pointed out in my RSP selection post, your choice of registry service provider is one of the most consequential decisions you will make before you ever click submit. If your model only pencils out after a rapid surge of registrations, you are looking at a pitch, not a sustainable registry business.

Registry Operators who treat DNS abuse and policy as externalities

DNS abuse is not evenly distributed. It clusters in particular TLDs and has typically been linked to decisions about price, eligibility, and enforcement. Those patterns now feed directly into discussions about safeguards, oversight, and contractual obligations. 

As I explored in my January post on AI agents and DNS abuse, this is no longer just an operational nuisance. It is increasingly an evidentiary and accountability matter, one that, from what I am reading, will connect DNS infrastructure to legal frameworks well beyond ICANN’s reach. The industry and applicants need to be ready for what is coming, or they will find themselves facing obligations and costs they never budgeted for.

The question I would want every TLD applicant to answer: does the business model behind your string still make sense without wishful thinking?

For those who know they want to apply but need help pressure-testing the strategy, evaluating RSP candidates, or getting the policy and abuse posture right, iQ’s gTLD consultancy team is worth a conversation. They cover the full journey from string analysis and strategic assessment through application development, technical infrastructure, contention strategy, and launch planning. The window opens on April 30th and closes on August 12th, spanning 15 weeks. There is still time to get it right, but not a lot of it.


Four under-discussed questions worth adding to a 2026 dotBrand TLD decision checklist

First off, I am not trying to talk anyone out of applying for a dotBrand TLD. Most consultants and providers in the domain name infrastructure world did a solid job helping brands that successfully obtained and operated their chosen TLDs in the last round. For RSPs and others in the domain industry ecosystem, I suggest that applicants who ask harder questions are usually better customers: they budget properly, launch with intent, and are more likely to use and sustain their dotBrand over time, rather than letting it quietly wither. My aim here is simply to surface a few under-discussed questions for teams that are seriously considering an application in this next round.

Most 2026 dotBrand conversations have hit the usual notes: trust, security, signalling, future-proofing, use cases, and “don’t miss the window.” All of that matters. But I suggest there are a few quieter questions that can materially change whether “apply” or “do not apply” is actually the right move for your organisation.

Below are four such questions worth adding to your internal decision checklist before anyone signs off on a dotBrand application.

1. What is our governance and exit plan, really?

A dotBrand is often framed as a forever asset. Strategy rarely is. Over a 10 to 20-plus-year horizon, few brands stay static in terms of structure, portfolio, or narrative.

  • If you sell or spin off a business that leans heavily on the dotBrand, what happens to those domains when the TLD itself cannot move with the deal?
  • If you rebrand, restructure, or simply decide the dotBrand is no longer core, what is the plan to avoid customer confusion, large-scale link rot, and uncomfortable questions about who retains which DNS and registry data?

If you cannot outline credible answers, you may not yet have a complete picture of the governance cost of owning a piece of the DNS, even if the business case and use cases look compelling on day one.

2. Have we thought through EBERO and transition scenarios, not just picked a “good” back end?

Most business cases stop at “pick a reputable registry service provider (RSP) and move on.” To date, no major RSP has failed or gone dark, and the industry’s track record is solid. That said, sound governance means planning for low probability, high-impact scenarios, just as you would for any critical infrastructure dependency.

ICANN’s Emergency Back-End Registry Operator (EBERO) framework exists to ensure that, if a gTLD registry operator ever gets into serious trouble, an approved provider can step in to keep critical functions running and protect registrants. That gives comfort that your TLD will not simply go dark if the contracted operator fails to sustain key services or ends up in a serious compliance problem.

However, ICANN states that “EBERO providers are limited to providing critical functions as defined in gTLD registry agreements. For example, EBERO providers will not provide any additional services that a gTLD operator may have offered its customers, such as web hosting or network analytics.” So best to get a handle on how a potential successor may be able or willing to mirror your brand-specific features, integrations, reporting, or service model that you built around your chosen RSP.

The practical governance question for dotBrand owners is: if our TLD ever had to move into EBERO or through an ICANN-managed transition, what would that mean for our customer experience, internal tooling, and roadmap while things are being stabilised, and are we prepared to explain that scenario to leadership, even if it remains an unlikely event?

3. Are we ready for the unglamorous UX and data ops work?

dotBrand discussions rightly highlight clean URLs and trust signals, and some materials touch on SEO or customer experience impacts. What gets less airtime is the grind of making a dotBrand behave like a first-class citizen across all your systems and your partners’ systems.

  • How many legacy B2B customers, suppliers, or regulators sit behind filters, gateways, or applications that still mistrust or mishandle unusual TLDs, causing email deliverability issues, blocked links, or noisy fraud alerts?
  • Have you budgeted and staffed for the analytics and data ops work: updating tracking, search engine optimisation (SEO) and answer engine optimisation (AEO, for assistants and AI results), data cleaning pipelines, and your security monitoring and automated response rules so that dotBrand domains are treated as trusted assets in your own environment, while lookalike abuse of your brand in other TLDs is still detected and blocked? (A toy illustration follows at the end of this section.)

If the working assumption is “our current tools will just handle it,” there’s a good chance you are underestimating the friction, support overhead, and time-to-value curve of the dotBrand.
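To make the second bullet above a bit more concrete, here is a toy version of the kind of rule I have in mind: treat names under your own dotBrand as trusted assets while still flagging your brand string when it shows up under other TLDs. Real monitoring stacks do far more (homoglyphs, keyword variants, certificate transparency feeds); this only shows the shape of the logic, and the brand and TLD strings are placeholders.

```python
BRAND = "examplebrand"        # placeholder brand string
BRAND_TLD = "examplebrand"    # placeholder dotBrand TLD

def classify(domain: str) -> str:
    """Toy classifier: 'trusted' under the dotBrand, 'review' for lookalikes elsewhere."""
    labels = domain.lower().rstrip(".").split(".")
    if labels[-1] == BRAND_TLD:
        return "trusted"                           # names under the dotBrand are yours
    if any(BRAND in label for label in labels[:-1]):
        return "review"                            # brand string under a foreign TLD
    return "neutral"

print(classify("shop.examplebrand"))        # trusted
print(classify("examplebrand-login.com"))   # review
print(classify("unrelated-site.org"))       # neutral
```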

4. What footprint does this create in regulation, ESG, and data sovereignty?

Once you operate a TLD with your name on it, you are no longer just another domain holder. You are running visible internet infrastructure tied to your brand and, in some cases, to regulated activities or a stock ticker.

  • In key markets, financial, telecom, or sector-specific regulators may reasonably expect additional clarity around resilience, operational control, and incident handling because you now run a piece of the DNS, even if it serves only your own domains.
  • Where will DNS logs and related telemetry reside, who can access them, and how does this align with your data sovereignty posture, sanctions exposure, and ESG reporting requirements regarding digital operations and governance?

These are not arguments against a dotBrand. They are reasons to treat the decision as more than an up-sized domain registration or a one-off brand campaign.

If your 2026 dotBrand discussion focuses solely on trust, security, and marketing upside, you may be missing some of the most challenging and strategic questions. Adding these four to your decision checklist will make whatever answer you land on, “apply” or “do not apply,” a lot more defensible when the next leadership team inherits it.


Choosing Your Registry Service Provider: One of the Most Consequential Decisions Before April

Note: This is an update to my original 7 January 2026 post on this matter.

In my 17 December post, I walked through whether you should still apply for a new gTLD in the 2026 round, given the opportunities and risks in the final Applicant Guidebook. Today, I want to focus on one of the most consequential infrastructure and operational decisions you’ll make before you ever click “submit” on your application: choosing your Registry Service Provider (RSP).

Unlike 2012, when applicants could theoretically build and defend their own technical infrastructure, ICANN’s 2026 round requires all applicants to use an RSP that has passed the Registry Service Provider Evaluation Program for Main Registry, DNS, and DNSSEC services. That means your back-end partner is no longer optional, and picking the wrong one can quietly sabotage everything from your technical evaluation to your long-term operational viability.

ICANN’s RSP List Is Here (And Will Keep Evolving)

ICANN originally planned to publish the list of successfully evaluated RSPs in December 2025. Due to technical issues, the date was pushed to 30 January 2026.

The list can be viewed at ICANN’s RSP Evaluation Program page and will be continuously updated as more RSP applicants complete evaluation; the page is effectively a point‑in‑time snapshot that will evolve as evaluations conclude. A second RSP evaluation window will open in April 2026, coinciding with the gTLD application period, allowing additional providers to still enter the program.

What that means for you: if you’re planning to apply in the 2026 round, you have roughly three months from the publication of the RSP list until the application window opens to evaluate providers, have substantive conversations, and lock in your back-end partner. That timeline is tight, especially if your string has specific policy, compliance, or go-to-market requirements that not every RSP can or will support.

Why RSP Selection Is Not Just “Picking From a List”

The RSP Evaluation Program was designed to streamline technical assessment by pre-vetting back-end operators against ICANN’s technical, security, and operational requirements. An RSP that passes evaluation can support as many gTLD applications as it wants without being re-evaluated each time.

That’s good news for applicants. It lowers cost and complexity on the technical side. But it also creates a false sense of security. Just because an RSP is on ICANN’s approved list does not mean they are the right fit for your registry business. ICANN is explicit that the evaluated RSP list is informational, not a recommendation; it confirms technical qualification for specific services, not strategic fit for your business.

The RSP evaluation you need to do goes far beyond ICANN’s technical checklist. Experienced industry advisors look at factors like:

  • Policy and governance alignment with your intended registry model
  • Operational track record in abuse handling and compliance
  • Registrar connectivity and commercial support capabilities
  • Organizational fit and the level of access you’ll get as a client
  • Long-term financial structure and hidden cost exposure

These aren’t yes/no questions. They require comparative analysis, reference checking with existing clients, and honest assessment of whether your registry model and the RSP’s platform and culture are genuinely aligned, not just “close enough.”

What to Do Right Now

If you’re serious about the 2026 round, here’s what I suggest you start doing:

Build your evaluation framework now. Figure out what criteria matter most to your string and business model. Map out what you’re actually evaluating against: policy flexibility, abuse capabilities, registrar relationships, pricing transparency, and cultural fit.

Identify likely first-tier candidates. Based on industry track record, current gTLD portfolios, and known participation in the RSP evaluation window, you can make educated guesses about which operators will be on ICANN’s list. Start building intelligence on their strengths, weaknesses, and fit for your use case.

Pressure-test your technical and policy assumptions. If your TLD has unusual eligibility rules, compliance requirements, or go-to-market mechanics, validate now whether your chosen RSP’s existing platform and operational setup can actually support your model, or whether you’re asking them to build custom functionality they’ve never deployed before. Custom work doesn’t just cost more. It introduces timeline risk, testing risk, and long-term support complexity that can derail your registry years after launch.

Engage experienced advisors early. People who have worked with multiple RSPs across different rounds and registry models can help you surface blind spots and ask the right questions, questions you may not even know to ask yet. That kind of guidance is far more valuable than generic marketing decks.

The RSP you choose will shape everything from your application’s technical evaluation to your day-to-day operations, your relationship with registrars, and your ability to respond to abuse complaints and compliance audits. It’s not a vendor transaction. It’s a foundational strategic partnership that you’ll live with for years.

The clock is ticking. The applicants who will make the smartest RSP choices are the ones who have already done the hard work of defining what they actually need, rather than scrambling to pick a name that “looks good” or “had a nice booth at a conference.”

If you’re reviewing the RSP list and thinking, “I’m not sure how to evaluate these providers or what questions I should even be asking,” that’s exactly the signal to bring in someone who can. Better to invest in that clarity now than to lock yourself into a multi-year relationship with a back-end partner that can’t actually support what your registry needs.


AI Agents and DNS Abuse: The 2026 Conversation the Industry Needs to Have

By 2030, autonomous AI agents could outnumber human domain registrants 3-to-1. The domain industry is building for legitimate use cases, but the real challenge is abuse at machine speed and a new international legal framework that treats DNS operators as evidence holders for the gravest international crimes.



Over the holidays, I read Joanna Kulesza’s blog post on CircleID about the ICC’s new Policy on Cyber-Enabled Crimes. It got me thinking about DNS abuse in a way that goes well beyond the usual ICANN regulatory framework we all live with day-to-day.

Then this morning, I listened to Andrew Allemann’s Domain Name Wire podcast interview with James Bladel from GoDaddy, and suddenly several pieces clicked together for me. Bladel was discussing GoDaddy’s Agent Name Service (ANS) proposal, and I realized: this isn’t a future scenario anymore. The industry is already actively building for it.

Kulesza’s analysis made me realize that by the end of 2026, the domain industry may find itself dealing with a very different kind of challenge, one that connects autonomous AI agents, DNS abuse at machine speed, and international criminal accountability in ways almost no one is talking about.

When your biggest customer isn’t human

Most of the current predictions for 2026 I read about or listened to over the holidays focused on familiar themes: strong demand for premium .com domains, the next new gTLD round (applications opening April 30th, by the way!), post-AFD monetization challenges, and rising pressure around trust, security, and regulation. All valid, all important.

But here’s what I suggest could be the real disruptor:

What happens when autonomous AI agents, not people, become the primary operators and, inevitably, abusers of domain infrastructure?

I’m not talking about increased domain registrations in the .AI TLD (though that’s been impressive). I’m talking about software agents that can autonomously compromise domains, spin up subdomain-based abuse infrastructure, configure hosting and DNS, run campaigns and cold outreach at an industrial scale, and tear it all down just as fast.

Industry analysts project there could be billions of autonomous AI agents operating on the internet by 2030, a projection GoDaddy is taking seriously enough to build infrastructure for. To put that in perspective, there are roughly 350 million active domain names today. We’re potentially looking at a world where autonomous agents are the dominant users of domain infrastructure, whether as legitimate customers or as sophisticated abusers.

At that point, the traditional mental model (“a founder registers a domain and builds a site”) stops being the default. And critically, the traditional abuse model (“bad actors manually register throwaway domains”) also becomes obsolete.

The industry is already responding (to the legitimate use case)

GoDaddy’s Agent Name Service proposal is essentially an attempt to leverage the proven infrastructure of DNS and SSL certificates to solve the identity, discoverability, and trust problems that autonomous agents will create. Each ANS-verified agent would have its own fully qualified domain name and SSL certificate, the same trust mechanisms we already use for e-commerce sites.

This makes sense for legitimate use cases: corporate AI assistants, business process agents, customer service bots, and other autonomous systems that need verified identity and discoverability. For these legitimate agents, we likely will see significant growth in domain registrations, because each agent genuinely needs its own FQDN and certificate for authentication purposes.
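To illustrate the general idea (not the ANS specification itself, which I have not implemented), verifying that an agent really controls the FQDN it presents can be as simple as connecting to that name over TLS and confirming that a valid certificate is served for it. The sketch below uses only Python’s standard library, and the agent hostname is a made-up placeholder.

```python
import socket
import ssl

def agent_cert_subject(agent_fqdn: str, port: int = 443, timeout: float = 5.0) -> dict:
    """Connect to the agent's FQDN over TLS and return its validated certificate subject.

    Raises ssl.SSLCertVerificationError if the certificate does not match the name
    or does not chain to a trusted CA, which is exactly the check we care about.
    """
    context = ssl.create_default_context()  # verifies the chain and hostname by default
    with socket.create_connection((agent_fqdn, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=agent_fqdn) as tls:
            cert = tls.getpeercert()
    return dict(item[0] for item in cert["subject"])

# Hypothetical agent name, for illustration only:
# print(agent_cert_subject("assistant.agents.example.com"))
```

None of this is new machinery, which is really the point of proposals like ANS: the DNS and certificate infrastructure the web already trusts can be reused to anchor agent identity.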

But here’s where it gets more complicated: the abuse side won’t look like explosive domain registrations. It will look like what we already see, only faster and more sophisticated.

Bad actors won’t pay to register thousands of domains when they can:

  • Register one domain and spin up unlimited subdomains at zero marginal cost
  • Compromise existing legitimate domains and weaponize their subdomain infrastructure
  • Exploit URL shorteners, free hosting services, and other existing platforms
  • Use automation to identify and exploit vulnerabilities at machine speed

The real shift is that autonomous agents make it trivially easy to operate abuse infrastructure at a scale and speed that human-driven detection and mitigation simply can’t match.

The actual threat model

Here’s what I predict could keep folks in the industry up at night. Autonomous agents will:

  • Compromise and weaponize existing domains at an industrial scale through automated credential stuffing, vulnerability scanning, and supply chain attacks. They’ll turn legitimate infrastructure into abuse platforms faster than registrars and hosting providers can detect patterns.
  • Abuse subdomain structures on both compromised and intentionally malicious domains. The economics favor subdomains over new registrations, and agents make this even more efficient.
  • Operate mixed legitimate/malicious portfolios where some domains or agents from an account are legitimate, and others are abusive, making account-level detection far more critical (this is exactly why ICANN is preparing to launch a major policy initiative on this in 2026).
  • Generate, adapt, and evolve abuse campaigns faster than human analysts can respond. Think phishing campaigns that automatically adjust based on detection, malware that rotates infrastructure on the fly, and fraud operations that learn from takedown patterns.
  • Blur the line between “legitimate automation” and “abuse” in ways that will be legally and technically challenging. (For instance, is an agent that sends 10,000 cold emails per hour a legitimate sales tool or spam? The answer may depend on consent, content, and jurisdiction, and agents won’t care about any of those nuances unless explicitly programmed to.)

The stress test on abuse systems will not necessarily be due to registration volume. It will likely be from the speed, scale, and sophistication of abuse operations that autonomous agents enable, most of which will exploit existing domain infrastructure rather than creating expensive new registrations.

The accountability layer no one’s talking about

This is where Kulesza’s analysis really hit home for me. The ICC’s new policy explicitly treats Internet infrastructure (including DNS) as part of the operational environment for the gravest international crimes, not just routine cybercrime or fraud.

From what I read, and I’m no legal scholar, it seems when cyber means are used to identify victims for physical violence, incite genocide, or disrupt civilian infrastructure in conflict zones, the technical evidence held by DNS operators, registrars, and security providers becomes directly relevant to international prosecutions under the Rome Statute.

That means three things:

  1. Your logs and metadata can become international criminal evidence, not just compliance records for contractual disputes or ICANN audits.
  2. Intentional destruction or obstruction of such evidence can itself be an international offense against the administration of justice, regardless of your motive or local legal advice.
  3. The ICC openly anticipates relying on industry cooperation and technical expertise to build cases, which pulls infrastructure operators into legal scenarios that have nothing to do with traditional domain policy.

Now imagine that world colliding with autonomous-agent-driven abuse. When agents can compromise domains, generate abusive content, and rotate infrastructure at machine speed, and when some subset of that abuse crosses the line into facilitating international crimes, the evidentiary burden on DNS infrastructure operators becomes both technically complex and legally critical.

This isn’t some distant theoretical concern. It’s a structural shift in how DNS sits within global accountability frameworks. And it makes the move toward standardized, evidence-based abuse reporting not just operationally smart, but legally essential.

The good news: the building blocks already exist

Here’s what gives me confidence that the industry can get ahead of this challenge: the shift toward evidence-based, structured DNS abuse reporting is already happening.

Organizations like iQ Global are building platforms that transform messy, unstructured abuse complaints into clean, actionable evidence packages. Their KARA system uses AI to extract and validate evidence from natural-language reports, ensuring that what reaches abuse teams is complete, categorized, and ready to act on, not noise.

Combined with smart automation (rule-based prioritization, advanced filtering, custom data fields for forensic financial services and trusted third-party intelligence), these systems ensure that abuse decisions are fast, consistent, and traceable. That’s exactly the infrastructure the industry will need when autonomous agents can generate and adapt abuse faster than human analysts can keep pace.
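I am not privy to KARA’s internal schema, but the underlying idea of an “evidence package” is easy to picture: a structured record that contains everything an abuse analyst needs to act, plus an integrity hash for the audit trail. A hypothetical minimal version might look like this; every field name here is invented for illustration.

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class EvidencePackage:
    # Hypothetical fields; real platforms will capture far more.
    reported_domain: str
    abuse_type: str                  # e.g. "phishing", "malware", "spam"
    reporter: str
    evidence_urls: list[str]
    description: str
    received_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        """Stable hash of the package contents, usable in an audit trail."""
        canonical = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(canonical.encode()).hexdigest()

pkg = EvidencePackage(
    reported_domain="login-examplebrand-support.com",   # made-up lookalike
    abuse_type="phishing",
    reporter="soc@brandowner.example",
    evidence_urls=["https://login-examplebrand-support.com/account"],
    description="Credential-harvesting page imitating the brand's login flow.",
)
print(pkg.fingerprint())
```

The difference between this and a free-form email is the difference between something an automated workflow can triage, deduplicate, and later reproduce on demand, and something a human has to re-interpret every time.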

Just as importantly, efforts like Reputable Domains are creating proactive verification layers to prevent false positives before they cause harm. In a world where agents can compromise legitimate domains and use their infrastructure for abuse, having verified “this is a known legitimate brand” data becomes critical for separating signal from noise.

Meanwhile, ICANN is preparing to tackle this challenge through policy work. In December 2025, the GNSO Council approved moving forward with a Policy Development Process on DNS abuse mitigation, with formal initiation expected in early 2026. According to the Final Issue Report published last month, the first phase will focus on “associated domain checks,” requiring registries and registrars to investigate entire accounts and portfolios when abuse is detected on a single domain, rather than treating each domain in isolation. The report also addresses more stringent security controls for API tools used in bulk registrations.

As Bladel noted in Andrew Allemann’s podcast, major players like GoDaddy already follow associated domain checks as a best practice, but codifying it into policy requirements will ensure industry-wide adoption. That kind of account-level, pattern-based abuse detection is exactly what you need when facing autonomous agents that might operate mixed legitimate/malicious portfolios or when a single compromised account could be weaponized across hundreds of domains and thousands of subdomains.
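As a toy illustration of what an “associated domain check” could mean operationally, consider pulling every other domain in an account into review the moment one of its domains is confirmed abusive. The data structure and the sample domains below are invented for the example; real registrar systems work from much richer account, payment, and behavioral data.

```python
from collections import defaultdict

# Hypothetical registrations: domain -> account id
registrations = {
    "good-bakery.example": "acct-100",
    "paypa1-secure-login.example": "acct-200",
    "micros0ft-billing.example": "acct-200",
    "family-blog.example": "acct-300",
}

def associated_domains(confirmed_abusive: str) -> list[str]:
    """Return the other domains registered under the same account, queued for review."""
    by_account: dict[str, list[str]] = defaultdict(list)
    for domain, account in registrations.items():
        by_account[account].append(domain)
    account = registrations[confirmed_abusive]
    return [d for d in by_account[account] if d != confirmed_abusive]

# One confirmed phishing domain pulls the rest of the account into review.
print(associated_domains("paypa1-secure-login.example"))
# -> ['micros0ft-billing.example']
```

Trivial as it looks, this is the shift the PDP is pointing at: the unit of investigation becomes the account and its patterns, not the single reported domain.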

What this means for 2026

If the industry waits until autonomous-agent-driven abuse overwhelms existing detection and mitigation systems, it will already be behind. I suggest the smarter play is to recognize that:

  • Autonomous AI agents will be both customers and threat actors at a scale that dwarfs human-driven activity. GoDaddy is building for the legitimate use case; we need equally sophisticated defenses for the abuse case.
  • Speed and sophistication matter more than volume. The challenge isn’t a billion new malicious domain registrations. It’s agents that can compromise, weaponize, adapt, and rotate infrastructure faster than human-driven processes can respond.
  • Evidentiary readiness matters beyond ICANN. The systems you build today to handle routine DNS abuse may need to support investigations tomorrow that involve international crimes, not just domain suspensions or registry sanctions.
  • Account-level and pattern-based detection become essential. When agents can operate at scale, investigating individual domains is insufficient. You need to understand entire account behaviors, relationships between domains, and patterns across portfolios.
  • Prevention beats cleanup, but detection must be faster. Proactive verification helps protect legitimate brands, but structured reporting with reproducible evidence packages, intelligent automation, and audit trails becomes critical when abuse can be generated and evolved autonomously.
  • Open standards matter. If the domain industry doesn’t rally around proposals like Agent Name Service for the legitimate use case, we risk ceding control to proprietary, walled-garden alternatives that fragment the internet and concentrate power in a few platforms. 

By the time 2026 wraps, the domain industry may be judged not only on how it enables legitimate autonomous agents, but on how it generates, preserves, and shares evidence about agent-mediated abuse when the stakes include international accountability.

The building blocks (evidence-based reporting standards, AI-assisted validation, automated workflows with audit trails, proactive verification, account-level pattern detection, and open protocols for agent identity) are already in place. The question is whether the industry moves fast enough to connect them before autonomous agents operate at billion scale and the accountability expectations shift permanently.

That’s the conversation I suggest we have in 2026. And based on what I’m seeing from GoDaddy, iQ Global, ICANN’s recent policy developments, and others, at least some of us are finally having it.


What My Feed Didn’t Show In 2025

2025 asked a lot of me. 

A string of “wins on paper,” threaded through personal and business turmoil and a steep rise in caregiving responsibilities, was harder to talk about in public than overseas travel, new content, or product and promo videos.

Professionally, 2025 looked steady from the outset: interviews, webinars, integrations, attending ICANN Seattle, and generating thought pieces about registry strategy, DNS abuse, and brand protection that kept me active in the usual corners of the industry.

Shadowing that as the year progressed was a quieter reshaping of my work identity: I eventually stepped out of my role at NameBlock in October after three years and set about building a consulting pipeline while navigating the emotional drag that comes with instability.

The business side of 2025 for me was less about big deals and more about endurance: staying visible, staying useful, and staying just persistent enough not to disappear, even though I was unable to attend any industry events between late March and early October.

What did not show up in my social posts, until this year-end reflection, is how dramatically family-health and caregiving responsibilities increased this year. As my father’s health declined, my personal calendar quietly filled with appointments, logistics, and difficult conversations, all layered on top of work that, on social media, still looked relatively normal.

In late August, that caregiving chapter came to an end with my father’s passing. Grief does not respect project timelines, business challenges, or content schedules, and the weeks around that time were less about any sense of “balance” and more about simply holding things together for the people who needed it most. If 2025 had a single defining thread, it was learning how thin a person can be stretched between professional obligations and family duties while still trying to show up with some semblance of grace.

If 2025 teaches anything, it is how wide the gap can be between what shows up in a feed and what life actually feels like. 

There is a temptation, at the end of a hard year like this one, to wrap everything in optimism and declare 2026 will be “the big reset.” 

2025 does not lend itself to that narrative for me. 

Many of this year’s realities will follow into early 2026: a consulting practice that still needs tending, a domain industry that is as noisy and competitive as ever, and a family life that now includes the quieter work of healing and making sure my surviving parent is comfortable. For now, looking forward feels less like a bold prediction and more like a modest intention: to keep showing up, to keep contributing where my experience is useful and impactful, and to be a bit more honest about the parts of my story that rarely make it into the social conversation.

I’ve said this so many times over the years: If you told me on January 1st that such and such would happen before the year’s end, I’d tell you that you are nuts.  Well, every year, such and such happens. Sometimes really good, and sometimes really bad. This year definitely had a mix of both.

I did not intend for this to be a “Debbie Downer” review of 2025. It’s not my true nature. However, this isn’t the neat, upbeat year-end recap I thought I would write. 2025 was a mix of progress, loss, and a few too many hard lessons. 

For now, it’s enough to say: I’m still here, I’m still standing, and hope springs eternal.


The 2026 New gTLD Guidebook Is Final: Should You Still Apply?

The new gTLD program 2026 Round Applicant Guidebook (“AGB”) is now final and has just been published by ICANN. The next round is no longer hypothetical; it’s a dated, documented, and rule-dense reality. For some organizations, this is a once‑in‑a‑generation opportunity; for others, it may quietly become an expensive, multi‑year distraction.

Should you still apply?

Yes, if you are clear-eyed about the trade-offs.

The 2026 AGB gives far more predictability than 2012, but also much less room to improvise. It locks in the 227,000 USD evaluation fee per application, defines how and when refunds might apply, and walks you through a long journey of evaluation, objections, potential contention, contracting, and years of compliance. 

For serious applicants, there are real opportunities in this round:

  • The revamped Applicant Support Program can lower entry costs and provide training and pro bono help if you genuinely qualify and can operate what you are asking for. (Note that the application submission period for this closes on 19 December 2025)
  • The RSP Evaluation Program means you can plug into pre‑evaluated back‑end providers rather than building and defending your own registry stack from scratch. 
  • Replacement strings and a clearer contention framework give you more structured strategic options than in 2012, if you plan them early and understand the limits and timelines.
  • The expanded GAC, community input, objections, and appeals machinery can become a moat for well‑designed public‑interest or community models, not just a source of friction. 

There are also very real risks, especially for non‑portfolio players:

  • Under‑estimating total cost: the 227k fee is just the start, once you add legal, RSP, potential objections/appeals costs, and three to five years of operating runway under the Base Registry Agreement.
  • Misreading contention and replacement rules: Treating replacement strings as a late‑stage escape hatch, or under‑estimating the cost and duration of ICANN‑run contention resolution, could lock you into a multi‑year, capital‑intensive deadlock. 
  • Treating GAC and community processes as an afterthought: It might only take one Early Warning, poorly drafted RVC, or misaligned policy to derail all that internal work and force painful application changes.

Just as important: it is no longer enough for you and/or potential investors to “like the string.” The AGB bakes financial and operational scrutiny into the process, and the market will add its own verdict later! (There are more than a few applicants from the last round that can probably say, “I’ve been there, done that, and I still have the launch swag t-shirts!”)

So, I suggest pressure‑testing your business case against conservative revenue projections, not optimistic registration curves from launch press releases.

By all means, validate your demand. Model the downside scenarios (slow adoption, serious objections, drawn‑out contention) and ask whether your organization can stomach that level of cash burn and distraction.

The applicants who will regret this round are the ones who see the Guidebook as paperwork their vendors “handle” while they focus on the string and the logo. The ones who will be glad they applied are those who treat the AGB as it now reads: a dense but navigable manual for building a durable registry business in a more scrutinized, less forgiving ecosystem. 

If you are looking at the new Guidebook and thinking, “I am not even sure what I don’t know yet,” that is exactly the right instinct to explore before you commit. A trusted, independent guide and/or organization that has lived through past rounds can help you surface the blind spots on policy, contention, and, most importantly, the business model, so you can decide whether this is truly your round, or whether it is smarter to sit this one out. 


Why Reliable Whitelists Matter as Much as Blocklists: Introducing Reputable Domains

Over the years, a recurring theme in conversations with registries, registrars, and brand owners has been the growing friction between essential security controls and the everyday need to keep legitimate domains reachable. False positives in blocklists can disrupt email, ecommerce, and customer trust just as surely as abuse can, which is why approaches that improve signal quality for security teams are so important.

The solution is not better cleanup; it is prevention.

That is why I am allocating some of my consulting time to support Reputable Domains, a human‑verified whitelist platform focused on reducing the likelihood that good domains are incorrectly flagged. The goal is to provide curated, verified brand data that helps cybersecurity teams distinguish legitimate domains from bad actors before problems escalate, which aligns well with the kinds of practical, data‑driven solutions the industry needs.

The Reputable Domains team published an informative and detailed announcement about the service yesterday. No matter where your organization fits in the abuse reporting ecosystem (producer, receiver, or brand), I suggest you give it a read. Then, if you want to learn more, feel free to reach out to me.


3 Ways to Launch a New gTLD Without Begging Registrars for Shelf Space (Lessons from 30+ Years in Domains)

The next round is coming! The next round is coming! The. Next. Round. Is. Coming.

OK then. Are you one of the non–“portfolio player” hopefuls thinking of sending ICANN your application funds for a single TLD (or two) and hoping to survive past a possible auction? Do you believe running your own registry is the best way to make money in the domain name industry other than investing in generic, one‑word .com names?

Congratulations.

Are you sure you want to do this?

Have you run a completely independent string and financial modeling analysis outside of your own bubble? I can’t imagine you haven’t. But if not, reach out. I’m here. Full disclosure: I’m not applying for a TLD, nor am I investing in one with anyone else.

That’s the commercial. Now, on to the meat.

Let’s say you’ve jumped through all the hoops and your ultra‑fantastic TLD string is going to be delegated by ICANN. Colleagues with deep, long‑term operational and management experience in the domain industry will tell you there is no way you will be successful unless you get wide distribution through the registrar channel.

Is that true?

That has certainly been the prevailing wisdom. And if that’s the route you plan to take, then my number one piece of advice is simple: be prepared to pay for placement, and pay a lot.

Don’t agree? Then go ahead and approach some major registrar players and see how much it will cost just to get on their online shelf, above the fold in search results, etc., for a few weeks, months, or quarters—or for a one‑year commitment. See how things go when you are up against dozens of other ultra‑fantastic new TLD strings trying to get in the door at MegaGiantRegistrar Inc. or SmallRegistrarWithNoResources LLC. Will you be able to compete against a well‑capitalized operator selling domains for 99 cents—or even for free?

Money talks—marketing, placement, and dev‑support money. But even then, any registrar you work with will look at the opportunity cost of placing your TLD in their registration flow versus what your competitors will offer them. They will also consider which existing product or TLD has to come out of the flow and be replaced with yours, without confusing arriving potential customers so much that they abandon the cart and buy nothing.

If you don’t think you can secure a lot of capital until you hit certain milestones (like delegation), then I suggest you and your investors need to be open to new ways of being successful without depending on registrars as the primary means of discovery and use for your TLD.

Treat “registrar-only” distribution as just one tool—not the whole strategy. The operators who survive the next wave will be the ones who own their demand funnel, not just their registry back end.

Here are three high‑level suggestions:

  • Build a vertical, app‑first use case where the domain is invisible plumbing.
  • Bridge DNS and Web3‑style identity/liquidity so your TLD rides on entirely different distribution rails.
  • Create a multi‑stakeholder commercial alliance (platforms, SaaS, telcos, devices) that bakes your TLD into existing high‑traffic ecosystems.

Let’s unpack those a bit more.


1) Build a vertical, app‑first use case where the domain is invisible plumbing

The shift here is to stop thinking “sell domains” and start thinking “sell outcomes where the domain is bundled and invisible.”

  • Build or partner on a vertical SaaS/app (e.g., booking, creator pages, professional identity, SMB storefronts) where every new account automatically gets a domain on your TLD, with DNS and basic content preconfigured. The user never sees EPP, auth codes, or traditional registrar UX.
  • Technically, you still use registrars for compliance, but commercially they become back‑end pipes. You control UX, branding, and lifecycle in your own app and use one or a few wholesale registrars purely as infrastructure, similar to how some ccTLDs or larger SaaS‑led TLDs operate today.
  • Differentiate your TLD around that outcome (e.g., “instant verified practitioner pages,” “trusted supplier catalogs,” “auto‑local business identity”), not around the string itself. Measure success in activated live sites or connected profiles, not just DUMs.

This model breaks the crowding problem: users discover your TLD inside a solution they already need, not from a registrar search page listing hundreds of new strings side by side.
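A minimal sketch of what that bundled provisioning flow might look like, assuming a generic wholesale registrar, DNS, and hosting API behind the scenes. The three helper functions are placeholders that just print, standing in for whichever providers the platform actually integrates.

```python
# Sketch of an app-first signup flow where the domain is "invisible plumbing".

def register_domain(domain: str) -> None:
    print(f"[registrar] registered {domain}")                     # placeholder for a real wholesale API call

def set_dns_records(domain: str, records: list[dict]) -> None:
    print(f"[dns] {domain}: {len(records)} records configured")   # placeholder

def publish_site(domain: str, template: str) -> None:
    print(f"[hosting] '{template}' site published at {domain}")   # placeholder

def onboard_customer(business_name: str, tld: str = "example") -> str:
    """Turn a signup into a live site; the customer never sees EPP, auth codes, or renewals."""
    label = "".join(c for c in business_name.lower() if c.isalnum() or c == "-")
    domain = f"{label}.{tld}"
    register_domain(domain)
    set_dns_records(domain, [
        {"type": "A", "name": "@", "value": "203.0.113.10"},
        {"type": "TXT", "name": "_dmarc", "value": "v=DMARC1; p=quarantine"},
    ])
    publish_site(domain, template="local-services")
    return f"https://{domain}"

print(onboard_customer("Austin Neighborhood Plumbing"))
```

The success metric in this model is the number of live, activated sites, which is why measuring “activated live sites or connected profiles, not just DUMs” matters so much in the bullets above.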


2) Bridge DNS and Web3‑style identity and liquidity

A second direction is to deliberately design the TLD as a hybrid Web2/Web3 naming and identity layer, plugging into crypto wallets, token markets, and ENS‑style resolution rather than fighting for the same registrar shelf space.

  • Work with ENS‑style resolvers and tokenization protocols so each domain corresponds to a transferable on‑chain token, enabling wallet‑based ownership, secondary markets, and programmable permissions while still resolving in the “normal” DNS.
  • Position the TLD as a canonical namespace for a specific high‑value sector (e.g., blockchain infrastructure, credentials, AI agents, IoT endpoints), where on‑chain features like verifiable ownership, signing, or payments actually matter. That can give you distribution via wallets, dApps, and exchanges instead of classic registrars.
  • Structure pricing and rights so that ecosystem partners (protocols, marketplaces, dev tools) share upside from primary and secondary activity, aligning them to promote your namespace as their default identity primitive.

If executed cleanly, the registrar channel becomes optional: power users acquire, manage, and trade names entirely through Web3 interfaces while the DNS side gives them universal reachability on today’s internet.


3) Create a multi‑stakeholder commercial alliance

A third path is to skip the fragmented retail registrar channel altogether by making your TLD the default naming layer inside a few large‑scale platforms.

  • Target high‑volume “account‑creating” platforms (hosting/CDN, low‑code site builders, CRM/marketing SaaS, POS vendors, telcos/ISPs, device makers) and create a commercial and technical package where every new tenant gets a subdomain that can be frictionlessly upgraded to a second‑level domain on your TLD.
  • Offer these partners a radically simpler economic and operational model than the traditional registry–registrar stack: predictable flat pricing, revenue share or SaaS‑like bundles, APIs tuned for bulk lifecycle operations, and possibly co‑branding of the TLD as part of their product story.
  • Combine this with selective participation in the ICANN channel (one or two “strategic” registrars for compliance and niche retail) while treating the alliance distribution as the primary business engine rather than an add‑on.

None of these paths are easy, cheap, or risk‑free. But if your entire business case depends on registrars discovering, prioritizing, and evangelizing your new string for you, you are effectively outsourcing your fate.

If you’re seriously considering an application in the next round and want a brutally honest second opinion on your model, reach out and share a draft of your thinking. I’m happy to pressure‑test your assumptions—before ICANN, investors, and the market do it for you.
