I tried replacing my morning news routine with 5 AI tools. Here’s what actually worked.
I spent a week replacing my Twitter and Google News habit with Perplexity, ChatGPT, and other AIs. Here’s my honest take on what works—and what doesn’t.

This opinion piece was drafted with AI assistance under the editorial direction of Rohan Mehta and reviewed before publication. Views expressed are the author's own.
My old morning routine was a mess, and if you're a builder, an investor, or just anyone trying to keep up, yours probably is too. It started pre-coffee, with a panicked scroll through X, trying to piece together the overnight discourse from the US. Then came the triage: a dozen tabs of Google News, a glance at Hacker News, maybe a quick dive into Techmeme. The goal was to absorb the signal—the new model release, the crucial funding round, the subtle platform shift—but most of it was just noise. It was a defensive, reactive process that left me feeling more cluttered than informed. So I decided to run an experiment: for one week, I would replace my entire ad-hoc system with five of the most promising AI-powered news and search tools. The dream was simple: a concise, intelligent briefing that would save me time and deliver pure signal. The reality was far more complicated.
First up was Perplexity. I have a lot of respect for the Perplexity team and have used it for months as a supercharged search engine. For this experiment, I tried using its daily discovery feeds and asking it broad questions like, "What are the most important developments in AI for founders from the last 24 hours?" As a research tool, it remains undefeated. When I asked it to summarize the latest performance benchmarks for custom silicon in data centers, it gave me a crisp summary with beautifully cited sources. But that’s the thing—it felt like a research assistant, not a news anchor. It presented facts, but it didn't tell a story. The process was still me *pulling* information by asking the right questions. It didn't replace the ambient awareness I was looking for, that feeling of having a pulse on the industry. It's an incredible tool for going deep on a known topic, but it wasn't the morning briefing replacement I had hoped for.
Next, I leaned on the tool most people have access to: ChatGPT with its browsing capabilities. My prompt was specific: "Act as a tech editor for an AI publication in India. Give me my morning briefing. Focus on new model releases, significant policy changes affecting Asia, and major startup news from the last day." The results were, frankly, alarming. ChatGPT delivered a confident, well-structured, and completely untrustworthy wall of text. It mixed a genuine report about a new funding round with analysis from a six-month-old blog post, presenting it all as current. When I pressed it for the latest updates on the implementation of India's Digital Personal Data Protection Act, it conflated recent parliamentary discussions with outdated commentary from last year. This is the danger of these tools: they generate false confidence. The output *looks* correct, and if you're moving fast, you might just take it at face value. It wasn't just misleading; it was a potential minefield of misinformation, delivered with the soothing tone of a trusted aide.
Then I turned to Google's AI Overviews, the most mainstream offering of the bunch. This felt less like a conversation and more like a smarter encyclopedia. Searching for "analyst reaction to latest Nvidia earnings" gave me a clean, synthesized block of text at the top of the SERP, summarizing the key takeaways from several financial news outlets. It was efficient. But it was also sterile. The problem with AI Overviews is that they seem to optimize for the most generic, SEO-friendly consensus. I searched for the "fallout from the Anthropic constitution debate," and the overview gave me a watered-down summary of official blog posts. It completely missed the fiery, nuanced, and frankly more important debate happening among researchers on X and in private forums. It gave me the press release version of the news, stripping away the point of view, the dissent, and the human drama that often *is* the real story.
Halfway through the week, I was feeling frustrated. The tools were either research assistants, confident liars, or boring summarizers. Then I spent more time with Arc Search's "Browse for Me" feature, mostly on my phone. This felt different. Instead of a text-based answer, it creates a custom, single-page website summarizing a topic on the fly. When I wanted to understand the practical impact of a new European Union AI Act provision on Indian startups exporting to the EU, Arc created a scannable, well-designed page with clear sections: "Key Provisions," "Impact on Small Businesses," "Expert Opinions." It felt more transparent because it was clearly assembling and citing its sources in a more visual way. But it still suffered from the same core limitation as Perplexity: it's a "pull" tool. It's a phenomenally better way to get up to speed on a single topic, but it can't tell you what topics you should even be learning about. It doesn't solve the discovery problem.
For my final attempt, I went full-builder mode and set up a custom system in NotebookLM. This was the most involved but also the most insightful. I took the RSS feeds from a curated list of twenty sources I actually trust—specific newsletters like Stratechery, a few academic blogs, key reporters on Substack, and even transcripts of some podcasts. I fed all this into a NotebookLM project. Now, my morning query changed. Instead of asking the open internet, I was asking my own private, curated brain. "Based on my sources, what is the consensus on Reka's new model and how does it compare to the latest from Mistral?" The answer it gave was nuanced, contextual, and deeply relevant because it was synthesized only from voices I had pre-approved. It helped me connect dots between a new paper on retrieval techniques and a product launch from a portfolio company, a connection I would have missed otherwise. This was it. This was the signal.
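For readers who want to try the same "bring your own sources" setup, the pipeline above can be sketched in a few lines of Python. This is a minimal sketch, not my actual setup: it assumes each source exposes a standard RSS 2.0 feed, sticks to the standard library (a real version would fetch feeds over the network with `urllib.request` and parse them more robustly with `feedparser`), and, since NotebookLM has no public ingestion API, the flattened text would be saved as `.txt` files and uploaded to the project manually.

```python
import xml.etree.ElementTree as ET
from html import unescape

def rss_items_to_text(rss_xml: str) -> str:
    """Flatten an RSS 2.0 feed into plain text suitable for upload
    as a NotebookLM source: one 'title + description' block per item."""
    root = ET.fromstring(rss_xml)
    blocks = []
    for item in root.iter("item"):
        title = (item.findtext("title") or "").strip()
        desc = unescape((item.findtext("description") or "").strip())
        blocks.append(f"{title}\n{desc}")
    # Separate items so the model sees clear document boundaries.
    return "\n\n---\n\n".join(blocks)

# Stand-in feed for illustration; in practice this XML would come from
# each of the ~20 trusted sources (newsletters, blogs, podcast transcripts).
SAMPLE = """<rss version="2.0"><channel><title>Demo feed</title>
<item><title>New model released</title><description>A lab ships a new LLM.</description></item>
<item><title>Funding round</title><description>Startup raises Series B.</description></item>
</channel></rss>"""

text = rss_items_to_text(SAMPLE)
# Write one file per source, then add each file to the NotebookLM
# project by hand: with open("demo_feed.txt", "w") as f: f.write(text)
```

Once the sources are in the project, the querying itself is just NotebookLM's normal chat interface; the only ongoing work is re-exporting the feeds as they update.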
But here's the catch: it takes work. It's not a plug-and-play solution. You have to be a diligent curator of your own sources. And by the end of the week, I realized the perfect AI news tool doesn't exist yet. Each tool I tried solved one part of the problem. Perplexity for deep research. Arc for fast, topic-specific briefings. NotebookLM for trusted, curated synthesis. ChatGPT, for now, remains benched for this task. I haven't gone back to my frantic, multi-tab routine, but I haven't fully entrusted my awareness to AI either. My new routine is a hybrid. I use Arc Search when I see a topic bubbling up that I need to understand in five minutes. I spend time in my NotebookLM project once or twice a week to find deeper patterns.
And yes, I still scan X. I have to. Because all these AI tools, in their current form, are excellent at summarizing *what happened*. They can tell you about the paper, the product, or the policy. But they are constitutionally incapable of capturing *the vibe*. They don't understand the sarcastic reply from a prominent researcher, the collective groan of developers about a new API, or the subtle shift in tone from a CEO on a podcast. They see the information, but they miss the culture, the community, and the context. For anyone building in this space, that context isn't just color; it's often the most important signal of all. Until an AI can read the room, the human-in-the-loop will remain the most critical part of the new morning routine.
Why it matters
- AI news tools are excellent at summarizing facts but largely fail to capture the informal, social context ("the vibe") where critical industry insights often emerge.
- The most valuable results came from a "bring your own sources" model using NotebookLM, which requires manual setup but delivers trusted summaries drawn from your curated information.
- No single AI tool can yet replace a traditional news routine; the best approach is a hybrid that uses different tools for specific tasks while retaining human curation and judgment.