On building the world’s first voice-first AI journaling app, making AI empathetic and trustworthy, and bridging India’s mental health gap

In a country where mental health support remains out of reach for many, Rocket Health is reimagining care with Rocket Journal, the world’s first voice-first AI journaling app.

In this interview, CEO Abhineet Kumar shares how, in a world obsessed with AI, Rocket Health is turning hype into healing, creating a safe, empathetic space for self-reflection, emotional insight, and stress management.

From the app’s psychology-backed “Rant” and “Reflect” modes to its seamless integration with Rocket Health’s broader platform, the discussion explores how AI can bridge India’s mental health treatment gap at scale, while upholding privacy, ethics, and personalization. Kumar also outlines Rocket Health’s vision of building a global mental wellness product from India.

What gaps in mental health care are you solving with Rocket Journal?

Rocket Journal is a voice-first AI journaling app built on psychology-backed prompts. It helps people process racing thoughts, gain gentle insights, and build consistency. We believe it's not a replacement for therapy but more of a companion - just five minutes a day can improve self-understanding and stress management.

Journaling can reduce anxiety, improve clarity, boost emotional regulation, and help you understand patterns in your thoughts. Rocket Journal enhances this with AI that listens and reflects your insights back to you, so your growth is visible.

The beauty of building an AI mental health companion as a voice journal is that it can scale to millions of people across the world who could use it as a safe space. A lot of times, people don't want to go talk to a psychologist, and the first layer of support can be the Rocket Journal, where you can come rant, reflect, see your emotional insights, and feel better. For billions of people, you cannot have millions of psychologists, so building a product that acts as a first layer of mental health support is extremely valuable. It can route you to therapy when you need it. In a way, this allows us to build mental health care infrastructure globally - a product that people can eventually use in multiple languages.

AI chatbots are often criticized as impersonal or even annoying. What's Rocket Health doing to make Rocket Journal more personalized and genuinely empathetic?

Our AI is trained to be empathetic. It's trained with emotional intelligence in mind and won't give generic responses. It listens, acknowledges, and mirrors your tone and themes. We believe it is the world’s most realistic voice AI, and our model understands what words mean in context, so it can predict emotions, cadence, and more. It even redirects the user to professional help, if required.

We've also designed two distinct interaction modes: ‘Rant Mode’ lets you vent freely without structure - pure emotional release when you need to get thoughts out. ‘Reflect Mode’ uses guided prompts for deeper, structured self-examination and personal growth. This recognizes that sometimes you need cathartic release, other times intentional introspection.

How does Rocket Journal integrate with Rocket Health's broader platform, and what's the roadmap to make it a global product from India?

Rocket Journal is one of the many products we intend to add to the wellness suite. More will follow, focused on consumer health and wellness, built in India for a global audience.

We've already started getting early traction from the United States, the UK, Australia, China, Korea, and a few more countries. Rocket Health’s team has been instrumental in how we’ve developed Rocket Journal. We’ve built it hand-in-hand with psychologists, as well as the people who are actually going to use it. The early feedback we've gotten from India and overseas is informing how we tweak the prompts, the flow, and even the way we present the product to different cultures.

Journaling is already a universal language of self-reflection - from gratitude journals to voice journaling, individuals across the planet are seeking ways to work through their minds. Our strategy to go global is simple: make journaling accessible in multiple languages, and offer both the AI journal app and our physical Rocket Journal so people can reflect in the way that feels right for them.

Can AI companions meaningfully bridge India's mental health treatment gap, or do they risk creating emotional dependency without professional oversight?

I don't think AI companions can really be a substitute for mental health treatment. That's the first point we all must acknowledge. What AI mental health companions can actually do is provide that first layer of safe space where an individual can express, reflect, and come to understand their emotions better. It can also route you to mental health care, therapy, and more. That's what the right mental health companion should be able to do.

In a country like India, even this kind of support would go a long way in helping millions of people, especially because we cannot instantly onboard millions of psychologists to cater to our billion-strong population. So we do believe that AI companions can play a starting role in helping you understand your emotions and your mental health better.

Having said that, there will always be a risk, especially with AI companions, that you could become emotionally dependent on them. AI LLMs tend to become echo chambers and can do more harm than good. That’s why it becomes all the more important to build measures into the product that prevent overuse and redirect to human aid, if needed, instead of encouraging overdependence on AI.

I believe product marketing also plays a big role here. A lot of companies out there are positioning themselves as an AI therapist, and I think that's a misleading way of pitching your product. The positioning of an AI mental health companion should be built around activities that are not mental health care itself. In our case, it's journaling. Journaling is an age-old self-care practice through which people reflect and come to understand their emotions, and an AI companion can help you do that in a more seamless, effortless manner and act as a mental health safe space.

With AI entering intimate spaces like mental health, how should the industry address concerns of privacy, ethics, and user trust?

I believe Sam Altman said ChatGPT is not a substitute for therapy, and it's very important for users to understand the same thing: AI is not a substitute for mental health care. When you talk to an AI mental health companion, or even to Rocket Journal, think of it as the first layer where you can reflect, vibe, express yourself, and try to understand your emotions better. That's basically what it does. It should not be leaned on for advice on how to approach problems in life. When professional advice is needed, one must talk to a psychologist, or a psychiatrist in clinical cases. Come to Rocket Health; we’re there to help you.

And I think it's extremely important for all of these AI mental health care products to acknowledge this explicitly and to reiterate it as they grow.

Ethics will play a really important role for everyone trying to build in this space. There are a ton of companies trying to build AI therapists out there. We don't think AI therapy is a thing - it's almost misleading. That's why, when we set out to build an AI mental health companion, we decided to build an AI journal where people can use their voice to journal.

In terms of privacy, it's extremely important that we follow globally compliant privacy protocols of the kind already established in health-tech products. Products across the mental wellness x AI spectrum must be built with the right security and privacy infrastructure in mind. Since this space is fast-moving and evolving, the infrastructure around it will also evolve with time, so one must prioritise these from the beginning.