Technology
OpenAI introduces new ‘Trusted Contact’ safeguard for cases of possible self-harm
3 min read
OpenAI has introduced a new safeguard for ChatGPT users called Trusted Contact, a feature designed to protect users when conversations may turn toward self-harm. More than 100,000 users gained access to the feature in its first week of testing. The safeguard extends the company's broader push to prioritize user safety: if a conversation shows warning signs of self-harm, the model can now detect them and offer the user resources and support.
The feature matters because it speaks to growing concern about the risks of AI models, with 75% of users reporting at least one conversation that made them uncomfortable or concerned about their well-being. The new safeguard is a meaningful step toward addressing those concerns and ensuring users have a safe, supportive experience when interacting with AI models.
For background, OpenAI has been working to improve its safety features for several months, with a team of more than 50 experts developing and testing new features and protocols. The company has also collaborated with mental health professionals and organizations to make sure its features are effective and supportive, including a partnership with the National Alliance on Mental Illness that gives users access to resources and support.
Looking ahead, OpenAI plans to keep expanding and improving its safety features, with new features and updates expected in the coming months. The company is also developing new protocols for detecting and responding to potential self-harm, with the goal of reducing the risk of self-harm by 30% over the next year.
The Future of AI Safety
The development of safety features like Trusted Contact is a step in the right direction: 90% of users report feeling more comfortable and supported when using AI models with robust safety features.
The Role of AI in Mental Health
AI models like ChatGPT have the potential to play a significant role in supporting mental health, with 80% of users reporting that they have used AI models to talk about their mental health concerns.
The Importance of User Safety
Prioritizing user safety is critical for companies like OpenAI: 95% of users say they would stop using an AI model they felt was unsafe or unsupportive. The Trusted Contact feature puts that priority into practice, and other companies are likely to follow suit in the coming months. Amid growing concern around AI safety, companies must take a proactive approach to protecting their users. The clear takeaway is that user safety must be the top priority for any company developing AI models.