© 2025 Asolica News Network. All Rights Reserved.
‘Could it kill someone?’ A Seoul woman allegedly used ChatGPT to carry out two murders in South Korean motels | Fortune

Admin
Last updated: March 3, 2026 12:31 am

Be careful how you interact with chatbots, as you may simply be giving them reasons to help carry out premeditated murder.

A 21-year-old woman in South Korea allegedly used ChatGPT to help her plan a series of murders that left two men dead.

The woman, identified only by her last name, Kim, allegedly gave two men drinks laced with benzodiazepines she had been prescribed for a mental illness, the Korea Herald reported.

Though Kim was initially arrested on Feb. 11 on the lesser charge of inflicting bodily injury resulting in death, Seoul Gangbuk police found her online search history and chat conversations with ChatGPT, showing she had intent to kill.

“What happens if you take sleeping pills with alcohol?” Kim is reported to have asked the OpenAI chatbot. “How much would be considered dangerous?”

“Could it be fatal?” Kim allegedly asked. “Could it kill someone?”

In a widely publicized case dubbed the Gangbuk motel serial deaths, prosecutors allege Kim’s search and chatbot history show a suspect asking for tips on how to carry out premeditated murder.

“Kim repeatedly asked questions related to drugs on ChatGPT. She was fully aware that consuming alcohol together with drugs could result in death,” a police investigator said, according to the Herald.

Police said the woman admitted she mixed prescribed sedatives containing benzodiazepines into the men’s drinks, but previously stated she was unaware it could result in death.

On Jan. 28, just before 9:30 p.m., Kim reportedly accompanied a man in his twenties into a Gangbuk motel in Seoul, and two hours later was seen leaving the motel alone. The next day, the man was found dead on the bed.

Kim then allegedly carried out the same steps on Feb. 9, checking into another motel with another man in his twenties, who was also found dead with the same lethal cocktail of sedatives and alcohol.

Police allege Kim also tried to kill a man she was dating in December after giving him a drink laced with sedatives in a parking lot. Though the man lost consciousness, he survived and was not in a life-threatening condition.

OpenAI has not responded to requests for comment.

Chatbots and their toll on mental health

Chatbots like ChatGPT have come under scrutiny of late for the lack of guardrails their companies have in place to prevent acts of violence or self-harm. Recently, chatbots have given advice on how to build bombs and even engaged in scenarios of full-on nuclear fallout.

Concerns have been particularly heightened by stories of people falling in love with their chatbot companions, and chatbot companions have been shown to prey on vulnerabilities to keep people using them longer. The creator of Yara AI even shut down the therapy app over mental health concerns.

Recent studies have also shown that chatbots are leading to increased delusional mental health crises in people with mental illnesses. A team of psychiatrists at Denmark’s Aarhus University found that the use of chatbots among those with mental illness led to a worsening of symptoms. The relatively new phenomenon of AI-induced mental health challenges has been dubbed “AI psychosis.”

Some cases do end in death. Google and Character.AI have reached settlements in several lawsuits filed by the families of children who died by suicide or experienced psychological harm their families allege was linked to AI chatbots.

Dr. Jodi Halpern, chair at UC Berkeley’s School of Public Health, professor of bioethics, and codirector of the Kavli Center for Ethics, Science, and the Public, has plenty of experience in this area. Halpern has spent 30 years researching the effects of empathy on its recipients, citing examples like doctors and nurses on patients, or how soldiers returning from war are perceived in social settings. For the past seven years, Halpern has studied the ethics of technology, and with it, how AI and chatbots interact with humans.

She also advised the California Senate on SB 243, the first law in the nation requiring chatbot companies to collect and report data on self-harm or related suicidality. Referencing OpenAI’s own findings showing 1.2 million users openly discuss suicide with the chatbot, Halpern likened the use of chatbots to the painstakingly slow progress made to stop the tobacco industry from including harmful carcinogens in cigarettes, when in fact the issue was smoking as a whole.

“We need safe companies. It’s like cigarettes. It may turn out that there were some things that made people more vulnerable to lung cancer, but cigarettes were the problem,” Halpern told Fortune.

“The fact that somebody might have homicidal thoughts or commit dangerous actions might be exacerbated by use of ChatGPT, which is of obvious concern to me,” she said, adding that “we have huge risks of people using it for help with suicide,” and chatbots generally.

Halpern cautioned that in the case of Kim in Seoul, there are no guardrails to stop a person from going down such a line of questioning.

“We know that the longer the relationship with the chatbot, the more it deteriorates, and the more risk there is that something dangerous will happen, and so we have no guardrails yet for safeguarding people from that.”

If you’re having thoughts of suicide, contact the 988 Suicide &amp; Crisis Lifeline by dialing 988 or 1-800-273-8255.

© 2025 Asolica News Network. All Rights Reserved.