AI Companion Apps Face Scrutiny as Financial Times Questions “Artificial Intimacy” Business Model

The Financial Times has published a critical examination of AI companion applications, raising questions about the sustainability and ethics of what it terms "the delusion machine"—a growing category of consumer AI products designed to simulate intimate human relationships.

The piece arrives as finance leaders increasingly grapple with how to evaluate AI investments beyond pure productivity metrics. While enterprise AI tools promise measurable efficiency gains, consumer-facing "artificial intimacy" products occupy murkier territory: they generate revenue through emotional engagement rather than functional utility, creating valuation challenges for investors and acquirers.

The FT's "delusion machine" framing suggests skepticism about the long-term viability of business models built on simulated relationships. For CFOs at companies developing or acquiring AI products, this represents an emerging risk category that doesn't fit neatly into traditional financial frameworks. How do you model customer lifetime value when the "value" is an artificial relationship? What is the regulatory exposure when your product's core function is emotional dependency?

The timing is notable. AI companion apps have attracted significant venture funding over the past eighteen months, with several startups reaching nine-figure valuations despite limited revenue disclosure. The sector operates in a regulatory gray zone—these products aren't quite social media (no user-generated network effects), aren't quite gaming (no clear win state), and aren't quite healthcare (though they often market mental health benefits without clinical validation).

From a finance perspective, the "artificial intimacy" category presents unusual unit economics. Unlike productivity software with clear ROI calculations, or entertainment with established engagement metrics, AI companions monetize through subscription models that depend on sustained emotional attachment. Churn analysis becomes complicated when users aren't leaving because the product "doesn't work"—it works exactly as designed, which may itself become the problem.

The FT's characterization also signals potential reputational risk for financial backers. As AI regulation tightens globally, products that explicitly engineer emotional dependency may face the kind of scrutiny that reshaped social media business models. For corporate development teams evaluating AI acquisitions, "artificial intimacy" capabilities may soon require the same careful diligence as data privacy or content moderation.

The broader question for finance leaders: as AI capabilities expand beyond task automation into emotional and social domains, how do you value, and risk-assess, products designed to meet needs that other humans traditionally met? The FT's framing suggests the market may be heading toward a reckoning on that question sooner than current valuations imply.

WRITTEN BY

Sam Adler

Finance and technology correspondent covering the intersection of AI and corporate finance.
