I’ve been using Deep Research for all sorts of things since the feature became available to ChatGPT Plus users. The longer, deeper reports can be useful for exploring topics more in-depth than the usual chats you might have with the AI. They’re great for asking questions about products you might want to buy or places you want to visit.
For example, I had Deep Research put together a report on a destination I want to visit, and I’ll repeat that process for other destinations. With that particularly lengthy report, I didn’t need to read it all at once, as I kept coming back to sections of it. But when I asked ChatGPT for lists of “step-on” snowboarding boots that met specific requirements, that was a Deep Research report I had to read in full before deciding which models to pursue.
Since it was a lengthy report, and I’m always training for my next marathon, I decided to listen to it. After all, ChatGPT can read its responses aloud. Only, guess what? It didn’t quite work.
Maybe it just takes longer for the AI to read aloud a response of that size, and I didn’t have time to wait. So I did the next best thing: I copied the report into the Notes app on my iPhone and enabled an accessibility feature so Siri could read it to me while I ran, though the iPhone’s display had to stay on the whole time.
Meanwhile, Google keeps improving the Gemini feature I wish ChatGPT had. It’s called Audio Overviews; it debuted in NotebookLM last September before Google brought it to the Gemini app, and it’s getting even better while OpenAI is still sleeping on this huge opportunity. Audio Overviews is a feature that lets you turn any sort of AI chat into a podcast featuring AI hosts who discuss whatever you’ve told Gemini to do.
You might upload a bunch of long documents and ask Gemini to summarize them or answer questions. Rather than reading a detailed report, you’d be better off listening to the AI host a podcast just for you. I know I would, but I’m a ChatGPT user, and ChatGPT doesn’t have the feature.
Back to my example above: I had to listen to Siri’s boring voice read that long ChatGPT Deep Research report to me while I was running, and it didn’t quite work. Siri did read the entire thing, but my mind wandered as I ran, so I didn’t completely follow “the story.” That Deep Research report is a great starting point for finding snowboard boots that are easier to hook up to bindings, but I can’t remember a thing from it, and I have to “read” it again.
Surprisingly, Siri worked. It didn’t stop or stumble. But the whole thing felt impersonal.
Siri’s steady tone and the lack of interruptions by a different character made it more difficult to follow, especially on a day when my mind probably needed to wander elsewhere. A podcast featuring a couple of AI guests would have worked differently. They’d switch back and forth while discussing the topic, and that’s what makes Audio Overviews so exciting.
Yes, my mind wanders during podcasts, too, but they feel more entertaining. They have a personality, which beats Siri reading a long text. Audio Overviews are now available in over 50 languages, which is great news for Gemini users who don’t speak English or who speak multiple languages.
Also, users can upload content to Gemini in different languages. Rather than asking for translations, you can turn that content into a podcast in your preferred language. As you can see in the short video at the end of the post, the Gemini AI podcast “hosts” will maintain their personalities and tone across languages.
They will banter back and forth and interrupt each other while they present the information, regardless of language. I’m incredibly envious of this feature, especially after trying to turn my ChatGPT Deep Research reports into Siri “podcasts.” I used the Siri trick I described above more than once, each time with the same result.
I’d have preferred to do it inside the ChatGPT app. If this were one of those Audio Overviews, a second character would interrupt right about now to point out that the crazy part about OpenAI’s ChatGPT tech is that it already has all the pieces in place to deliver Audio Overviews. First, ChatGPT can handle all sorts of inputs, including files, images, and text.
It can produce large Deep Research reports about any topic. Second, ChatGPT can read aloud its responses, and, best of all, it has an Advanced Voice Mode that supports multiple personalities. Finally, ChatGPT already speaks multiple languages, including the Advanced Voice Mode personalities.
Combine all these separate ChatGPT features into an Audio Overviews-style tool, and you could end up with a ChatGPT button that turns the chat at hand into a podcast featuring two virtual hosts. Yes, I could have gone to Gemini with my request and gotten my podcast AI entertainment from there. The thing is that while I appreciate the Audio Overviews feature, especially now that it’s built into Gemini, I am a ChatGPT user before anything else.
It’s my main AI chatbot, and I’m not about to switch. But since all these AI firms are trying to match what the competition offers, I hope OpenAI comes up with its own version of the AI podcasts Gemini supports sooner rather than later.