01.09.2025., 05:56
#312
Premium
Registration date: May 2006
Location: München/Varaždin
Posts: 4,852
Quote:
Earlier this week, buried in the middle of a lengthy blog post addressing ChatGPT's propensity for severe mental health harms, OpenAI admitted that it's scanning users' conversations and reporting to police any interactions that a human reviewer deems sufficiently threatening.
"When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts," it wrote. "If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement."
The announcement raised immediate questions. Don't human moderators judging tone, for instance, undercut the entire premise of an AI system that its creators say can solve broad, complex problems? How is OpenAI even figuring out users' precise locations in order to provide them to emergency responders? How is it protecting against abuse by so-called swatters, who could pretend to be someone else and then make violent threats to ChatGPT in order to get their targets raided by the cops...? The admission also seems to contradict remarks by OpenAI CEO Sam Altman, who recently called for privacy akin to a "therapist or a lawyer or a doctor" for users talking to ChatGPT.
"Others argued that the AI industry is hastily pushing poorly-understood products to market, using real people as guinea pigs, and adopting increasingly haphazard solutions to real-world problems as they arise..."
The perfect tool and every totalitarian regime's wet dream. Coming soon to your home too... Oh wait...
__________________
Lenovo LOQ 15AHP9 83DX || AMD Ryzen 5 8645HS / 16GB DDR5 / Micron M.2 2242 1TB / NVIDIA GeForce RTX 4050 / Windows 11 Pro
Lenovo Thinkpad L15 Gen 1 || Intel Core i5 10210U / 16GB DDR4 / WD SN730 256GB / Intel UHD / FreeBSD 14.3