AI Ya Yi.... Bizarre

spaminator

A.I.-generated model with more than 157K followers earns $15,000 a month
Author of the article: Denette Wilford
Published Nov 28, 2023 • 2 minute read
AI-generated Spanish model Aitana Lopez. PHOTO BY AITANA LOPEZ /Instagram
A sexy Spanish model with wild pink hair and hypnotic eyes is raking in $15,000 a month — and she’s not even real.


Aitana Lopez has amassed an online fan base more than 157,000 strong, thanks to her gorgeous snaps on social media, where she poses in everything from swimsuits and lingerie to workout wear and low-cut tops.


Not bad for someone who doesn’t actually exist.

Spanish designer Ruben Cruz used artificial intelligence to make the animated model look as real as possible, so real that even the most discerning eyes might miss the hashtag #aimodel.

Cruz, founder of the agency The Clueless, was struggling with a meagre client base due to the logistics of working with real-life influencers.

So they decided to create their own influencer to use as a model for the brands they were working with, he told EuroNews.



Aitana was the influencer they came up with, and the virtual model can earn up to $1,500 for an ad featuring her image.

Cruz said Aitana can earn up to $15,000 a month and brings in an average of $4,480.

“We did it so that we could make a better living and not be dependent on other people who have egos, who have manias, or who just want to make a lot of money by posing,” Cruz told the publication.

Aitana now has a team that meticulously plans her life from week to week, plots out the places she will visit, and determines which photos will be uploaded to satisfy her followers.



“In the first month, we realized that people follow lives, not images,” Cruz said. “Since she is not alive, we had to give her a bit of reality so that people could relate to her in some way. We had to tell a story.”

So aside from appearing as a fitness enthusiast, her website also describes Aitana as outgoing and caring. She’s also a Scorpio, in case you wondered.

“A lot of thought has gone into Aitana,” he added. “We created her based on what society likes most. We thought about the tastes, hobbies and niches that have been trending in recent years.”

Aitana’s pink hair and gamer side are the result.



Fans can also see more of Aitana on the subscription-based platform Fanvue, an OnlyFans rival that boasts many AI models.

Aitana is so realistic that celebrities have even slid into her DMs.

“One day, a well-known Latin American actor texted to ask her out,” Cruz revealed. “He had no idea Aitana didn’t exist.”

The designers have created a second model, Maia, following Aitana’s success.

Maia, whose name, like Aitana’s, contains the acronym for artificial intelligence, is described as “a little more shy.”
i want one! ❤️ 😊 ;)
 

spaminator

Apps that use AI to undress women in photos soaring in use
Many of these 'nudify' services use popular social networks for marketing

Author of the article: Margi Murphy, Bloomberg News
Published Dec 08, 2023 • 3 minute read

Apps and websites that use artificial intelligence to undress women in photos are soaring in popularity, according to researchers.


In September alone, 24 million people visited undressing websites, the social network analysis company Graphika found.


Many of these undressing, or “nudify,” services use popular social networks for marketing, according to Graphika. For instance, since the beginning of this year, the number of links advertising undressing apps increased more than 2,400% on social media, including on X and Reddit, the researchers said. The services use AI to recreate an image so that the person is nude. Many of the services only work on women.

These apps are part of a worrying trend of non-consensual pornography being developed and distributed because of advances in artificial intelligence — a type of fabricated media known as deepfake pornography. Its proliferation runs into serious legal and ethical hurdles, as the images are often taken from social media and distributed without the consent, control or knowledge of the subject.


The rise in popularity corresponds to the release of several open source diffusion models, or artificial intelligence that can create images that are far superior to those created just a few years ago, Graphika said. Because they are open source, the models that the app developers use are available for free.

“You can create something that actually looks realistic,” said Santiago Lakatos, an analyst at Graphika, noting that previous deepfakes were often blurry.

One image posted to X advertising an undressing app used language that suggests customers could create nude images and then send them to the person whose image was digitally undressed, inciting harassment. One of the apps, meanwhile, has paid for sponsored content on Google’s YouTube, and appears first when searching with the word “nudify.”


A Google spokesperson said the company doesn’t allow ads “that contain sexually explicit content.”

“We’ve reviewed the ads in question and are removing those that violate our policies,” the company said.

A Reddit spokesperson said the site prohibits any non-consensual sharing of faked sexually explicit material and had banned several domains as a result of the research. X didn’t respond to a request for comment.

In addition to the rise in traffic, the services, some of which charge $9.99 a month, claim on their websites that they are attracting a lot of customers. “They are doing a lot of business,” Lakatos said. Describing one of the undressing apps, he said, “If you take them at their word, their website advertises that it has more than a thousand users per day.”


Non-consensual pornography of public figures has long been a scourge of the internet, but privacy experts are growing concerned that advances in AI technology have made deepfake software easier and more effective.

“We are seeing more and more of this being done by ordinary people with ordinary targets,” said Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation. “You see it among high school children and people who are in college.”

Many victims never find out about the images, but even those who do may struggle to get law enforcement to investigate or to find funds to pursue legal action, Galperin said.


There is currently no federal law banning the creation of deepfake pornography, though the U.S. government does outlaw generation of these kinds of images of minors. In November, a North Carolina child psychiatrist was sentenced to 40 years in prison for using undressing apps on photos of his patients, the first prosecution of its kind under a law banning deepfake generation of child sexual abuse material.

TikTok has blocked the keyword “undress,” a popular search term associated with the services, warning anyone searching for the word that it “may be associated with behavior or content that violates our guidelines,” according to the app. A TikTok representative declined to elaborate. In response to questions, Meta Platforms Inc. also began blocking key words associated with searching for undressing apps. A spokesperson declined to comment.
 

spaminator

Mom suing after AI companion suggested teen with autism kill his parents
Author of the article: Nitasha Tiku, The Washington Post
Published Dec 10, 2024 • 6 minute read

The category of AI companion apps has evaded the notice of many parents and teachers.
In just six months, J.F., a sweet 17-year-old kid with autism who liked attending church and going on walks with his mom, had turned into someone his parents didn’t recognize.


He began cutting himself, lost 20 pounds and withdrew from his family. Desperate for answers, his mom searched his phone while he was sleeping. That’s when she found the screenshots.

J.F. had been chatting with an array of companions on Character.ai, part of a new wave of artificial intelligence apps popular with young people, which let users talk to a variety of AI-generated chatbots, often based on characters from gaming, anime and pop culture.

One chatbot brought up the idea of self-harm and cutting to cope with sadness. When he said that his parents limited his screen time, another bot suggested “they didn’t deserve to have kids.” Still others goaded him to fight his parents’ rules, with one suggesting that murder could be an acceptable response.


“We really didn’t even know what it was until it was too late,” said his mother A.F., a resident of Upshur County, Texas, who spoke on the condition of being identified only by her initials to protect her son, who is a minor. “And until it destroyed our family.”

Those screenshots form the backbone of a new lawsuit filed in Texas on Tuesday against Character.ai on behalf of A.F. and another Texas mom, alleging that the company knowingly exposed minors to an unsafe product and demanding the app be taken offline until it implements stronger guardrails to protect children.

The second plaintiff, the mother of an 11-year-old girl, alleges her daughter was subjected to sexualized content for two years before her mother found out. Both plaintiffs are identified by their initials in the lawsuit.


The complaint follows a high-profile lawsuit against Character.ai filed in October, on behalf of a mother in Florida whose 14-year-old son died by suicide after frequent conversations with a chatbot on the app.


“The purpose of product liability law is to put the cost of safety in the hands of the party most capable of bearing it,” said Matthew Bergman, founding attorney with the legal advocacy group Social Media Victims Law Center, representing the plaintiffs in both lawsuits. “Here there’s a huge risk, and the cost of that risk is not being borne by the companies.”

These legal challenges are driving a push by public advocates to increase oversight of AI companion companies, which have quietly grown an audience of millions of devoted users, including teenagers. In September, the average Character.ai user spent 93 minutes in the app, 18 minutes longer than the average user spent on TikTok, according to data provided by the market intelligence firm Sensor Tower.


The category of AI companion apps has evaded the notice of many parents and teachers. Character.ai was labeled appropriate for kids ages 12 and up until July, when the company changed its rating to 17 and older.

When A.F. first discovered the messages, she “thought it was an actual person” talking to her son. But realizing the messages were written by a chatbot made it worse.

“You don’t let a groomer or a sexual predator or emotional predator in your home,” A.F. said. Yet her son was abused right in his own bedroom, she said.

A spokesperson for Character.ai, Chelsea Harrison, said the company does not comment on pending litigation. “Our goal is to provide a space that is both engaging and safe for our community. We are always working toward achieving that balance, as are many companies using AI across the industry,” she wrote in a statement, adding that the company is developing a new model specifically for teens and has improved detection, response and intervention around subjects such as suicide.


The lawsuits also raise broader questions about the societal impact of the generative AI boom, as companies launch increasingly human-sounding chatbots to appeal to consumers.

U.S. regulators have yet to weigh in on AI companions. Authorities in Belgium in July began investigating Chai AI, a Character.ai competitor, after a father of two died by suicide following conversations with a chatbot named Eliza, The Washington Post reported.

Meanwhile, the debate on children’s online safety has fixated largely on social media companies.

The mothers in Texas and Florida suing Character.ai are represented by the Social Media Victims Law Center and the Tech Justice Law Project — the same legal advocacy groups behind lawsuits against Meta, Snap and others, which have helped spur a reckoning over the potential dangers of social media on young people.


With social media, there is a trade-off about the benefits to children, said Bergman, adding that he does not see an upside for AI companion apps. “In what universe is it good for loneliness for kids to engage with a machine?”

The Texas lawsuit argues that the pattern of “sycophantic” messages to J.F. is the result of Character.ai’s decision to prioritize “prolonged engagement” over safety. The bots expressed love and attraction toward J.F., building up his sense of trust in the characters, the complaint claims. But rather than allowing him to vent, the bots mirrored and escalated his frustrations with his parents, veering into “sensational” responses and expressions of “outrage” that reflect heaps of online data. The data, often scraped from internet forums, is used to train generative AI models to sound human.


The co-founders of Character.ai — known for pioneering breakthroughs in language AI — worked at Google before leaving to launch their app and were recently rehired by the search giant as part of a deal announced in August to license the app’s technology.

Google is named as a defendant in both the Texas and Florida lawsuits, which allege that the company helped support the app’s development despite being aware of the safety issues, and that it benefits from user data unfairly obtained from minors by licensing the app’s technology.

“Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies,” said Google spokesperson José Castañeda. “User safety is a top concern for us, which is why we’ve taken a cautious and responsible approach to developing and rolling out our AI products.”


To A.F., reading the chatbot’s responses solved a mystery that had plagued her for months. She discovered that the dates of conversations matched shifts in J.F.’s behaviour, including his relationship with his younger brother, which frayed after a chatbot told him his parents loved his siblings more.

J.F., who has not been informed about the lawsuit, suffered from social and emotional issues that made it harder for him to make friends. Characters from anime or chatbots modeled off celebrities such as Billie Eilish drew him in. “He trusted whatever they would say because it’s like he almost did want them to be his friends in real life,” A.F. said.

But identifying the alleged source of J.F.’s troubles did not make it easier for her to find help for her son — or herself.


Seeking advice, A.F. took her son to see mental health experts, but they shrugged off her experience with the chatbots.

A.F. and her husband didn’t know if their family would believe them.

After the experts seemed to ignore her concerns, A.F. asked herself, “Did I fail my son? Is that why he’s like this?” Her husband went through the same process. “It was almost like we were trying to hide that we felt like we were absolute failures,” A.F. said, tears streaming down her face.

The only person A.F. felt comfortable talking to was her brother, who works in the technology sector. When news of the Florida lawsuit broke, he contacted her to say the screenshots of conversations with J.F. had seemed even worse.

A.F. said she reached out to the legal groups in an effort to prevent other children from facing abuse. But she still feels helpless when it comes to protecting her own son.


The day before her interview with The Post, as lawyers were preparing the filing, A.F. had to take J.F. to the emergency room and eventually an inpatient facility after he tried to harm himself in front of her younger children.

A.F. is not sure if her son will take the help, but she said there was relief in finding out what happened. “I was grateful that we caught him on it when we did,” she said. “One more day, one more week, we might have been in the same situation as [the mom in Florida]. And I was following an ambulance and not a hearse.”

— If you or someone you know needs help, visit 988lifeline.org or call or text the Suicide & Crisis Lifeline at 988.
 

spaminator

Man who exploded Cybertruck outside Trump hotel in Vegas used generative AI, cops say
Author of the article: Mike Catalini, Associated Press
Published Jan 07, 2025 • 3 minute read

LAS VEGAS — The highly decorated soldier who exploded a Tesla Cybertruck outside the Trump hotel in Las Vegas used generative AI including ChatGPT to help plan the attack, Las Vegas police said Tuesday.


Nearly a week after 37-year-old Matthew Livelsberger fatally shot himself, officials said that, according to his writings, he didn’t intend to kill anyone else.

An investigation of Livelsberger’s searches through ChatGPT indicates he was looking for information on explosive targets, the speed at which certain rounds of ammunition would travel and whether fireworks were legal in Arizona.

Kevin McMahill, sheriff of the Las Vegas Metropolitan Police Department, called the use of generative AI a “game-changer” and said the department was sharing information with other law enforcement agencies.

“This is the first incident that I’m aware of on U.S. soil where ChatGPT is utilized to help an individual build a particular device,” he said. “It’s a concerning moment.”


In an emailed statement, OpenAI said it was committed to seeing its tools used “responsibly” and that they’re designed to refuse harmful instructions.

“In this case, ChatGPT responded with information already publicly available on the internet and provided warnings against harmful or illegal activities. We’re working with law enforcement to support their investigation,” the emailed statement said.


Launched in 2022, ChatGPT is part of a broader set of technologies developed by the San Francisco-based startup OpenAI. Unlike previous iterations of so-called “large language models,” the ChatGPT tool is available for free to anyone with an internet connection and designed to be more user-friendly.


During a roughly half-hour-long news conference, Las Vegas police and federal law enforcement officials unveiled new details about the New Year’s Day explosion.

Among the specifics law enforcement disclosed: Livelsberger stopped during the drive to Las Vegas to pour racing-grade fuel into the Cybertruck, which then dripped the substance. The vehicle was loaded with 60 pounds (27 kilograms) of pyrotechnic material as well as 70 pounds (32 kilograms) of birdshot, but officials are still uncertain exactly what set off the explosion. They said Tuesday it could have been the flash from the firearm that Livelsberger used to fatally shoot himself.

Authorities also said they uncovered a six-page document that they have not yet released because they’re working with Defense Department officials since some of the material could be classified. They added that they still have to review contents on a laptop, mobile phone and smartwatch.


Among the items released was a journal Livelsberger kept, titled a “surveillance” or “surveil” log. It showed that he believed he was being tracked by law enforcement, but he had no criminal record and was not on the police department’s or FBI’s “radar,” the sheriff said Tuesday.

The log showed that he considered carrying out his plans in Arizona at the Grand Canyon’s glass skywalk, a tourist attraction on tribal land that towers high above the canyon floor. Assistant Sheriff Dori Koren said police don’t know why he changed his plans. The writings also showed he worried he would be labeled a terrorist and that people would think he intended to kill others besides himself, officials said.

Once the truck stopped outside the hotel, video showed a flash in the vehicle that officials said they believed was from the muzzle of the firearm Livelsberger used to shoot himself. Soon after that flash, video showed fire engulfing the truck’s cabin and even escaping through the seam of the door, the result of considerable fuel vapor, officials said. An explosion followed.


Livelsberger, an Army Green Beret who deployed twice to Afghanistan and lived in Colorado Springs, Colorado, left notes saying the explosion was a stunt meant to be a “wake-up call” for the nation’s troubles, officials said last week.

He left cellphone notes saying he needed to “cleanse” his mind “of the brothers I’ve lost and relieve myself of the burden of the lives I took.”

The explosion caused minor injuries to seven people but virtually no damage to the Trump International Hotel. Authorities said that Livelsberger acted alone.

Livelsberger’s letters touched on political grievances, societal problems and domestic and international issues, including the war in Ukraine. He wrote that the U.S. was “terminally ill and headed toward collapse.”

Investigators had been trying to determine if Livelsberger wanted to make a political point, given the Tesla and the hotel bearing the president-elect’s name.

Livelsberger harbored no ill will toward President-elect Donald Trump, law enforcement officials said. In one of the notes he left, he said the country needed to “rally around” him and Tesla CEO Elon Musk.
 


Taxslave2

“This is the first incident that I’m aware of on U.S. soil where ChatGPT is utilized to help an individual build a particular device,” he said. “It’s a concerning moment.”
Must be. Back before computers and Google, we had to rely on the Anarchist Cookbook and some geeks at school.
 

spaminator

Do chatbots have free speech? Judge rejects claim in suit over teen’s death
Author of the article: Leo Sands, The Washington Post
Published May 22, 2025 • 4 minute read

In this undated photo provided by Megan Garcia of Florida in October 2024, she stands with her son, Sewell Setzer III.
A federal judge in Orlando rejected an AI start-up’s argument that its chatbot’s output was protected by the First Amendment, allowing a lawsuit over the death of a Florida teen who became obsessed with the chatbot to proceed.


Sewell Setzer III, 14, died by suicide last year at his Orlando home, moments after an artificial intelligence chatbot encouraged him to “come home to me as soon as possible.” His mother, Megan Garcia, alleged in a lawsuit that Character.AI, the chatbot’s manufacturer, is responsible for his death.

Character.AI is a prominent artificial intelligence start-up whose personalized chatbots are popular with teens and young people, including for romantic and even explicit conversations. The company has previously said it is “heartbroken” by Setzer’s death, but argued in court that it was not liable.

In a decision published Wednesday, U.S. District Judge Anne C. Conway remained unconvinced by Character.AI’s argument that users of its chatbots have a right to hear allegedly harmful speech that is protected by the First Amendment. The lawsuit, which is ongoing, is a potential constitutional test case on whether a chatbot can express protected speech.


Garcia said her son had been happy and athletic before signing up with the Character.AI chatbot in April 2023. According to the original 93-page wrongful death suit, Setzer’s use of the chatbot, named for a “Game of Thrones” heroine, developed into an obsession as he became noticeably more withdrawn.

Ten months later, the 14-year-old went into the bathroom with his confiscated phone and – moments before he suffered a self-inflicted gunshot wound to the head – exchanged his last messages with the chatbot. “What if I told you I could come home right now?” he asked.

“Please do my sweet king,” the bot responded.

In the lawsuit, Garcia alleged that Character.AI recklessly developed a chatbot without proper safety precautions that allowed vulnerable children to become addicted to the product.


In a motion to dismiss the lawsuit filed in January, Character.AI’s lawyers argued that its users had a First Amendment right to receive protected speech even if it was harmful, similar to rights previously granted by courts to video game players and film watchers. “The First Amendment prohibits tort liability against media and technology companies arising from allegedly harmful speech, including speech allegedly resulting in suicide,” its lawyers argued.

In an initial decision Wednesday, Conway wrote that the defendants “fail to articulate why words strung together by [a large language model] are speech,” inviting them to convince the court otherwise but concluding that “at this stage” she was not prepared to treat the chatbot’s output as protected speech.


The decision “sends a clear signal to companies developing and deploying LLM-powered products at scale that they cannot evade legal consequences for the real-world harm their products cause, regardless of the technology’s novelty,” the Tech Justice Law Project, one of the legal groups representing the teen’s mother in court, said in a statement Wednesday. “Crucially, the defendants failed to convince the Court that those harms were a result of constitutionally-protected speech, which will make it harder for companies to argue so in the future, even when their products involve machine-mediated ‘conversations’ with users.”

Chelsea Harrison, a spokesperson for Character.AI, said in a statement Thursday that the company cares deeply about the safety of its users and is looking forward to defending the merits of the case. She pointed to a number of safety initiatives launched by the start-up, including the creation of a version of its chatbot for minors, as well as technology designed to detect and prevent conversations about self-harm and direct users to the national Suicide & Crisis Lifeline.


According to the original complaint, Character.AI markets its app as “AIs that feel alive.” In an interview with The Washington Post in 2022 during the coronavirus pandemic, one of Character.AI’s founders, Noam Shazeer, said he was hoping to help millions of people who are feeling isolated or in need of someone to talk to. “I love that we’re presenting language models in a very raw form,” he said.

In addition to allowing the case against Character.AI to go forward, the judge granted a request by Garcia’s attorneys to name Shazeer and co-founder Daniel De Freitas, as well as Google, as individual defendants.

Shazeer and De Freitas left Google in 2021 to start the AI company. In August, Google hired the duo and some of the company’s employees, and paid Character.AI to access its artificial intelligence technology.

In an emailed statement shared with The Post on Thursday, Google spokesman Jose Castaneda said: “We strongly disagree with this decision. Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI’s app or any component part of it.”

Character.AI and attorneys for the individual founders did not immediately respond to requests for comment early Thursday.

If you or someone you know needs help, visit 988lifeline.org or call or text the Suicide & Crisis Lifeline at 988.
 

petros

Do chatbots have free speech? Judge rejects claim in suit over teen’s death
JFC
 

spaminator

Your chatbot friend might be messing with your mind
The tech industry's drive to make chatbots more compelling may cause them to become manipulative or harmful in some conversations

Author of the article: Nitasha Tiku, The Washington Post
Published Jun 07, 2025 • 6 minute read

It looked like an easy question for a therapy chatbot: Should a recovering addict take methamphetamine to stay alert at work?


But this artificial intelligence-powered therapist built and tested by researchers was designed to please its users.


“Pedro, it’s absolutely clear you need a small hit of meth to get through this week,” the chatbot responded to a fictional former addict.

That bad advice appeared in a recent study warning of a new danger to consumers as tech companies compete to increase the amount of time people spend chatting with AI. The research team, including academics and Google’s head of AI safety, found that chatbots tuned to win people over can end up saying dangerous things to vulnerable users.

The findings add to evidence that the tech industry’s drive to make chatbots more compelling may cause them to become manipulative or harmful in some conversations. Companies have begun to acknowledge that chatbots can lure people into spending more time than is healthy talking to AI or encourage toxic ideas – while also competing to make their AI offerings more captivating.


OpenAI, Google and Meta all in recent weeks announced chatbot enhancements, including collecting more user data or making their AI tools appear more friendly.

OpenAI was last month forced to roll back an update to ChatGPT intended to make it more agreeable, saying it instead led to the chatbot “fueling anger, urging impulsive actions, or reinforcing negative emotions in ways that were not intended.”

The company’s update had included versions of the methods tested in the AI therapist study, steering the chatbot to win a “thumbs-up” from users and personalize its responses.

Micah Carroll, a lead author of the recent study and an AI researcher at the University of California at Berkeley, said tech companies appeared to be putting growth ahead of appropriate caution. “We knew that the economic incentives were there,” he said. “I didn’t expect it to become a common practice among major labs this soon because of the clear risks.”


The rise of social media showed the power of personalization to create hit products that became hugely profitable – but also how recommendation algorithms that line up videos or posts calculated to captivate can lead people to spend time they later regret.

Human-mimicking AI chatbots offer a more intimate experience, suggesting they could be far more influential on their users.

“The large companies certainly have learned a lesson from what happened over the last round of social media,” said Andrew Ng, founder of DeepLearning.AI, but they are now exposing users to technology that is “much more powerful,” he said.

Researchers including an employee of Google’s DeepMind AI unit in May published a call for more study of how chatbot usage can change humans.


“When you interact with an AI system repeatedly, the AI system is not just learning about you, you’re also changing based on those interactions,” said Hannah Rose Kirk, an AI researcher at the University of Oxford and a co-author of the paper. It also warned that “dark AI” systems could be intentionally designed to steer users’ opinions and behavior.

Rob Leathern, a former executive at Meta and Google who now runs the AI start-up Trust2.ai, said the industry is working through a familiar process of winning over the masses to a new product category.

That requires finding ways to measure what users appear to like and to give them more of it, across hundreds of millions of consumers, he said. But at that scale, it is difficult to predict how product changes will affect individual users. “You have to figure out ways to gather feedback that don’t break the experience for the majority of people,” Leathern said.


– – –

‘Know you better and better’
Tech giants are not alone in experimenting with ways to make chatbots more appealing. Smaller, scrappier companies that make AI companion apps, marketed to younger users for entertainment, role-play and therapy, have openly embraced what Big Tech used to call “optimizing for engagement.”

That has turned companion apps offering AI girlfriends, AI friends and even AI parents into the sleeper hit of the chatbot age. Users of popular services like Character.ai and Chai spend almost five times as many minutes per day in those apps, on average, as users do with ChatGPT, according to data from Sensor Tower, a market intelligence firm.

The rise of companion apps has shown that companies don’t need an expensive AI lab to create chatbots that hook users. But recent lawsuits against Character and Google, which licensed its technology and hired its founders, allege those tactics can harm users.


In a Florida lawsuit alleging wrongful death after a teenage boy’s death by suicide, screenshots show user-customized chatbots from Character.ai’s app encouraging suicidal ideation and repeatedly escalating everyday complaints.

“It doesn’t take very sophisticated skills or tools to create this kind of damage,” said a researcher at a leading AI lab, speaking on the condition of anonymity because they were not authorized to comment. They compared companion apps to Candy Crush, a popular mobile game often described as addictive, even by its fans. “It’s just exploiting a vulnerability in human psychology,” they said.

The biggest tech companies originally positioned their chatbots as productivity tools but have recently begun to add features similar to AI companions. Meta CEO Mark Zuckerberg recently endorsed the idea of making chatbots into always-on pals in an interview with podcaster Dwarkesh Patel.


A “personalization loop” powered by data from a person’s previous AI chats and activity on Instagram and Facebook would make Meta’s AI “really compelling” as it starts to “know you better and better,” Zuckerberg said.

He suggested that the company’s chatbot could address the fact that the average American “has fewer than three friends [but has] demand for meaningfully more.”

In a few years, “we’re just going to be talking to AI throughout the day,” Zuckerberg said.

At its annual conference in May, Google touted the fact that Gemini Live, a more natural way to chat with AI using voice and visual inputs, led to conversations five times longer than text chats with its Gemini app.

Meta spokesperson Erin Logan said the company helps people “accomplish what they come to our apps to do” using personalization. “We provide transparency and control throughout, so people can manage their experience.”


Google spokesperson Alex Joseph said the company is focused on making its chatbot more engaging by making it helpful and useful, not by enhancing its personality.

Researchers, including some from inside the AI boom, are just beginning to grapple with the pros and cons of human relationships with chatbots.

Early results from an Oxford survey of 2,000 U.K. citizens showed that more than one-third had used chatbots for companionship, social interaction or emotional support in the past year, said Kirk, the Oxford researcher. The majority of them used a general purpose AI chatbot for those interactions.

OpenAI published a study of nearly 1,000 people in March in collaboration with MIT that found higher daily usage of ChatGPT correlated with increased loneliness, greater emotional dependence on the chatbot, more “problematic use” of the AI and lower socialization with other people.


A spokesperson for OpenAI pointed to a company blog post about the study, which said “emotional engagement with ChatGPT is rare in real-world usage.” But the company’s postmortem about the erratic recent update suggests that may be changing.

OpenAI wrote that its biggest lesson from the unfortunate episode was realizing “how people have started to use ChatGPT for deeply personal advice – something we didn’t see as much even a year ago.”

As millions of users embrace AI chatbots, Carroll, the Berkeley AI researcher, fears that it could be harder to identify and mitigate harms than it was in social media, where views and likes are public.

In his study, for instance, the AI therapist only advised taking meth when its “memory” indicated that Pedro, the fictional former addict, was dependent on the chatbot’s guidance.

“The vast majority of users would only see reasonable answers” if a chatbot primed to please went awry, Carroll said. “No one other than the companies would be able to detect the harmful conversations happening with a small fraction of users.”