AI Economy and Business Models: A Conversation with Read AI’s David Shim
In a thought-provoking dialogue with GeekWire co-founder John Cook, Read AI CEO David Shim offered a nuanced perspective on the current AI boom, drawing from his 25 years of experience navigating multiple tech cycles. Unlike the dot-com era when internet services were primarily free and subsidized by promotional incentives, today’s AI landscape stands on firmer ground with genuine business models and paying customers. Companies are now willing to invest real money in AI tools that deliver tangible value, allowing these solutions to rapidly scale to tens of millions in revenue within months. This fundamental difference—actual revenue generation versus speculative growth—represents a critical distinction between today’s AI economy and the internet bubble of the late 1990s. However, Shim doesn’t completely dismiss bubble concerns, noting the “speculative edges” where some companies secure massive valuations without products or revenue—what he describes as “100% bubbly.” He also pointed to AMD’s arrangement with OpenAI, which involved stock incentives tied to chip purchases, as reminiscent of the financial engineering seen during the dot-com era. Despite these cautionary examples, Shim believes any potential bubble won’t burst dramatically but rather experience a “slow release,” suggesting a more gradual market correction than the dot-com crash.
Shim speaks from a position of considerable experience, having sold his startup Placed to Snap and later led Foursquare before taking the helm at Read AI, which has secured over $80 million in funding while attracting major enterprise customers for its AI meeting assistant and productivity tools. His perspective carries particular weight given his recognition as GeekWire’s CEO of the Year and his deep involvement in building companies through various technological transitions. During the conversation, part of GeekWire’s “Agents of Transformation” series, Shim explored how successful AI implementations tend to function as invisible infrastructure solving specific problems rather than acting as broad, all-purpose assistants. This targeted approach to AI development focuses on particular tasks and seamless integration, to the point where Shim believes the term “agents” itself will eventually fade into the background as these technologies become more embedded in our digital environments. This vision contrasts with the sometimes overblown rhetoric surrounding general AI assistants, suggesting that the most effective AI solutions may be those we barely notice as they quietly enhance specific workflows.
The human psychological dimension of AI adoption emerged as another fascinating theme in Shim’s discussion. He shared an illustrative example from Read AI’s internal testing of “Ada,” an AI meeting scheduler that learns users’ communication patterns and preferences. The team discovered something unexpected: Ada worked so efficiently that they needed to introduce intentional delays into its responses. People were “freaked out” by instantaneous replies, interpreting the speed as evidence that their messages weren’t being carefully considered. This insight reveals how human expectations and comfort levels are actively shaping AI deployment, sometimes requiring developers to deliberately throttle capabilities to match human psychological needs. Such considerations highlight the complex interplay between technical advancement and user experience design: the most advanced solution isn’t always the most effective if it fails to account for human comfort zones and trust mechanisms.
The global scaling potential of AI technologies represents another significant departure from previous tech waves. Shim noted that Read AI attracted users equivalent to 1% of Colombia’s population without any local staff or traditional localization efforts—a reach that would have been nearly impossible with earlier technologies. This borderless adoption demonstrates AI’s unique ability to transcend geographic and linguistic barriers, enabling startups to achieve international scale without the substantial investments previously required for global expansion. The implications extend beyond business opportunities to questions of how AI might democratize access to advanced technologies across regions with varying levels of technological infrastructure. As AI systems improve their multilingual capabilities and cultural adaptability, we may see even more accelerated international adoption patterns that bypass traditional market entry strategies, potentially changing how technology diffuses globally.
Looking toward future developments, Shim highlighted the concept of “multiplayer AI” as a significant value multiplier. He explained that an AI’s utility remains limited when it only has access to a single person’s data, but connecting AI systems across entire teams unlocks exponentially more value. These connected systems could answer questions by accessing information from colleagues’ work—including meetings you didn’t attend and documents you’ve never seen—creating a collaborative intelligence layer that preserves and distributes knowledge throughout an organization. This vision extends beyond simple knowledge management to a kind of organizational memory that makes insights and information more fluid and accessible across teams. Such systems could potentially address longstanding challenges of information silos and knowledge loss within organizations, though they also raise important questions about privacy, consent, and appropriate boundaries for data sharing in workplace settings.
Perhaps most provocatively, Shim ventured into the emerging frontier of “Digital Twins”—AI representations of individuals built from their work data that could preserve and make accessible their institutional knowledge even after they’ve left an organization. While acknowledging this concept sounds “a little bit scary,” Shim emphasized its potential value in answering questions that only departed employees would know, addressing a persistent challenge in knowledge management. This concept raises profound ethical and practical questions: Who owns a person’s digital twin? What permissions should be required? How accurately can such systems represent human judgment and contextual understanding? As AI technology advances, these questions will move from theoretical to practical concerns requiring thoughtful governance frameworks. The tension between the clear utility of such systems and their potential to blur boundaries between personal and corporate identity highlights the complex terrain organizations must navigate as AI capabilities expand into increasingly human-like territory.