ChatGPT Will No Longer Answer Certain Questions

OpenAI has implemented significant changes to its generative AI chatbot ChatGPT, particularly following the release of its GPT-5 model on 7 August 2025. The company now restricts the chatbot from directly answering questions involving emotional distress, mental health, or high-stakes personal decisions. This adjustment marks a major ethical shift and affects millions of users who previously sought AI-driven advice on complex personal issues such as relationships and emotional decision-making.

OpenAI Shifts Toward Safer and More Ethical AI Interactions

Rather than offering direct guidance, ChatGPT now prompts users to reflect on their concerns. The system encourages users to consider multiple perspectives and take time before making personal choices. OpenAI underscores that ChatGPT is not a replacement for therapists, counselors, or human support systems; the goal is to avoid fostering emotional dependence on AI.

Moreover, to further promote healthy usage, ChatGPT now includes automated break reminders. The system may suggest that users take a pause after prolonged sessions. This change is intended to reduce overuse and improve mental well-being, particularly among vulnerable users who might otherwise rely excessively on the AI for emotional reassurance and companionship.

These policy updates follow months of internal discussions and collaboration with more than 90 medical professionals. OpenAI has also established an advisory panel to guide the ethical deployment of AI technologies. The company emphasizes the need to build responsible systems that support users without misleading them into believing AI can replace human judgment or care.

GPT-5 brings technical improvements as well. New features include safe completions, which help ChatGPT stay within defined ethical boundaries, and automatic model routing, which lets the chatbot select an appropriate model based on the complexity of the task (a rough sketch of the routing idea appears below). These additions aim to balance usability with safety and reliability.
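
OpenAI has not published the details of its router, so the following is only a minimal illustrative sketch of the general idea: requests estimated to be simple are sent to a lighter model, while more demanding ones go to a heavier reasoning model. The model names, thresholds, and complexity heuristic here are hypothetical and are not taken from OpenAI's implementation.

```python
# Hypothetical sketch of complexity-based model routing.
# Model names, thresholds, and the scoring heuristic are illustrative
# assumptions, not OpenAI's actual routing logic.

REASONING_KEYWORDS = {"prove", "debug", "analyze", "derive", "plan", "compare"}

def estimate_complexity(prompt: str) -> float:
    """Crude complexity score: longer prompts and reasoning-style verbs score higher."""
    words = prompt.lower().split()
    length_score = min(len(words) / 200, 1.0)  # saturate at roughly 200 words
    keyword_score = sum(w.strip(".,?!") in REASONING_KEYWORDS for w in words) * 0.25
    return min(length_score + keyword_score, 1.0)

def route_model(prompt: str) -> str:
    """Pick a (hypothetical) model tier based on the estimated complexity."""
    score = estimate_complexity(prompt)
    if score < 0.3:
        return "fast-lightweight-model"   # cheap tier for simple queries
    if score < 0.7:
        return "standard-model"           # default tier
    return "deep-reasoning-model"         # heavier tier for complex tasks

if __name__ == "__main__":
    examples = [
        "What's the capital of France?",
        "Analyze and debug this multithreaded scheduler and prove it is deadlock-free.",
    ]
    for q in examples:
        print(f"{route_model(q):<24} <- {q}")
```

In practice, a production router would likely rely on a learned classifier and signals such as conversation context rather than a keyword heuristic, but the routing pattern, scoring a request and dispatching it to a model tier, is the same.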

The new model offers enhanced reasoning, writing, and coding capabilities, with fewer hallucinations and sycophantic responses. In health-related queries, it now provides more accurate and cautious outputs. OpenAI claims that this version represents a step forward in producing AI that is both helpful and aware of its limitations, especially in high-stakes scenarios.

OpenAI has made it clear that ChatGPT will not replace professionals in health, law, or crisis management. The company encourages users to consult qualified individuals for important life decisions. These changes reflect a broader effort to align AI tools with human values and reduce the risks of misuse or overreliance in emotionally delicate contexts.

The new policies are part of OpenAI's commitment to building safer, more transparent, and accountable artificial intelligence tools. As ChatGPT continues to evolve, users are reminded that thoughtful interaction with AI must include awareness of its boundaries. The responsibility now falls on both developers and the public to use such tools wisely.