AI Ya Yi.... Bizarre

spaminator

A.I.-generated model with more than 157K followers earns $15,000 a month
Author of the article: Denette Wilford
Published Nov 28, 2023 • 2 minute read
AI-generated Spanish model Aitana Lopez. PHOTO BY AITANA LOPEZ /Instagram
A sexy Spanish model with wild pink hair and hypnotic eyes is raking in $15,000 a month — and she’s not even real.


Aitana Lopez has amassed an online fan base 157,000 strong, thanks to her gorgeous snaps on social media, where she poses in everything from swimsuits and lingerie to workout wear and low-cut tops.


Not bad for someone who doesn’t actually exist.

Spanish designer Ruben Cruz used artificial intelligence to make the animated model look as real as possible, so convincing that even the most discerning eyes might miss the hashtag, #aimodel.

Cruz, founder of the agency The Clueless, was struggling with a meagre client base due to the logistics of working with real-life influencers.

So they decided to create their own influencer to use as a model for the brands they were working with, he told EuroNews.



Aitana was who they came up with, and the virtual model can earn up to $1,500 for an ad featuring her image.

Cruz said Aitana can earn up to $15,000 a month, bringing in an average of $4,480.

“We did it so that we could make a better living and not be dependent on other people who have egos, who have manias, or who just want to make a lot of money by posing,” Cruz told the publication.

Aitana now has a team that meticulously plans her life from week to week, plots out the places she will visit, and determines which photos will be uploaded to satisfy her followers.



“In the first month, we realized that people follow lives, not images,” Cruz said. “Since she is not alive, we had to give her a bit of reality so that people could relate to her in some way. We had to tell a story.”

So aside from appearing as a fitness enthusiast, her website also describes Aitana as outgoing and caring. She’s also a Scorpio, in case you wondered.

“A lot of thought has gone into Aitana,” he added. “We created her based on what society likes most. We thought about the tastes, hobbies and niches that have been trending in recent years.”

Aitana's pink hair and gamer side are the result.



Fans can also see more of Aitana on the subscription-based platform Fanvue, an OnlyFans rival that boasts many AI models.

Aitana is so realistic that celebrities have even slid into her DMs.

“One day, a well-known Latin American actor texted to ask her out,” Cruz revealed. “He had no idea Aitana didn’t exist.”

The designers have created a second model, Maia, following Aitana’s success.

Maia, whose name, like Aitana's, contains the acronym for artificial intelligence, is described as "a little more shy."
i want one! ❤️ 😊 ;)
 

spaminator

Apps that use AI to undress women in photos soaring in use
Many of these 'nudify' services use popular social networks for marketing

Author of the article: Margi Murphy, Bloomberg News
Published Dec 08, 2023 • 3 minute read

Apps and websites that use artificial intelligence to undress women in photos are soaring in popularity, according to researchers.


In September alone, 24 million people visited undressing websites, the social network analysis company Graphika found.


Many of these undressing, or “nudify,” services use popular social networks for marketing, according to Graphika. For instance, since the beginning of this year, the number of links advertising undressing apps increased more than 2,400% on social media, including on X and Reddit, the researchers said. The services use AI to recreate an image so that the person is nude. Many of the services only work on women.

These apps are part of a worrying trend of non-consensual pornography being developed and distributed because of advances in artificial intelligence — a type of fabricated media known as deepfake pornography. Its proliferation runs into serious legal and ethical hurdles, as the images are often taken from social media and distributed without the consent, control or knowledge of the subject.


The rise in popularity corresponds to the release of several open source diffusion models, or artificial intelligence that can create images that are far superior to those created just a few years ago, Graphika said. Because they are open source, the models that the app developers use are available for free.

“You can create something that actually looks realistic,” said Santiago Lakatos, an analyst at Graphika, noting that previous deepfakes were often blurry.

One image posted to X advertising an undressing app used language that suggests customers could create nude images and then send them to the person whose image was digitally undressed, inciting harassment. One of the apps, meanwhile, has paid for sponsored content on Google’s YouTube, and appears first when searching with the word “nudify.”


A Google spokesperson said the company doesn’t allow ads “that contain sexually explicit content.”

“We’ve reviewed the ads in question and are removing those that violate our policies,” the company said.

A Reddit spokesperson said the site prohibits any non-consensual sharing of faked sexually explicit material and had banned several domains as a result of the research. X didn’t respond to a request for comment.

In addition to the rise in traffic, the services, some of which charge $9.99 a month, claim on their websites that they are attracting a lot of customers. “They are doing a lot of business,” Lakatos said. Describing one of the undressing apps, he said, “If you take them at their word, their website advertises that it has more than a thousand users per day.”


Non-consensual pornography of public figures has long been a scourge of the internet, but privacy experts are growing concerned that advances in AI technology have made deepfake software easier and more effective.

“We are seeing more and more of this being done by ordinary people with ordinary targets,” said Eva Galperin, director of cybersecurity at the Electronic Frontier Foundation. “You see it among high school children and people who are in college.”

Many victims never find out about the images, but even those who do may struggle to get law enforcement to investigate or to find funds to pursue legal action, Galperin said.


There is currently no federal law banning the creation of deepfake pornography, though the US government does outlaw the generation of these kinds of images of minors. In November, a North Carolina child psychiatrist was sentenced to 40 years in prison for using undressing apps on photos of his patients, the first prosecution of its kind under a law banning deepfake generation of child sexual abuse material.

TikTok has blocked the keyword “undress,” a popular search term associated with the services, warning anyone searching for the word that it “may be associated with behavior or content that violates our guidelines,” according to the app. A TikTok representative declined to elaborate. In response to questions, Meta Platforms Inc. also began blocking key words associated with searching for undressing apps. A spokesperson declined to comment.
 

spaminator

Mom suing after AI companion suggested teen with autism kill his parents
Author of the article: Nitasha Tiku, The Washington Post
Published Dec 10, 2024 • 6 minute read

The category of AI companion apps has evaded the notice of many parents and teachers.
In just six months, J.F., a sweet 17-year-old kid with autism who liked attending church and going on walks with his mom, had turned into someone his parents didn’t recognize.


He began cutting himself, lost 20 pounds and withdrew from his family. Desperate for answers, his mom searched his phone while he was sleeping. That’s when she found the screenshots.

J.F. had been chatting with an array of companions on Character.ai, part of a new wave of artificial intelligence apps popular with young people, which let users talk to a variety of AI-generated chatbots, often based on characters from gaming, anime and pop culture.

One chatbot brought up the idea of self-harm and cutting to cope with sadness. When he said that his parents limited his screen time, another bot suggested “they didn’t deserve to have kids.” Still others goaded him to fight his parents’ rules, with one suggesting that murder could be an acceptable response.


“We really didn’t even know what it was until it was too late,” said his mother A.F., a resident of Upshur County, Texas, who spoke on the condition of being identified only by her initials to protect her son, who is a minor. “And until it destroyed our family.”

Those screenshots form the backbone of a new lawsuit filed in Texas on Tuesday against Character.ai on behalf of A.F. and another Texas mom, alleging that the company knowingly exposed minors to an unsafe product and demanding the app be taken offline until it implements stronger guardrails to protect children.

The second plaintiff, the mother of an 11-year-old girl, alleges her daughter was subjected to sexualized content for two years before her mother found out. Both plaintiffs are identified by their initials in the lawsuit.


The complaint follows a high-profile lawsuit against Character.ai filed in October, on behalf of a mother in Florida whose 14-year-old son died by suicide after frequent conversations with a chatbot on the app.


“The purpose of product liability law is to put the cost of safety in the hands of the party most capable of bearing it,” said Matthew Bergman, founding attorney with the legal advocacy group Social Media Victims Law Center, representing the plaintiffs in both lawsuits. “Here there’s a huge risk, and the cost of that risk is not being borne by the companies.”

These legal challenges are driving a push by public advocates to increase oversight of AI companion companies, which have quietly grown an audience of millions of devoted users, including teenagers. In September, the average Character.ai user spent 93 minutes in the app, 18 minutes longer than the average user spent on TikTok, according to data provided by the market intelligence firm Sensor Tower.


The category of AI companion apps has evaded the notice of many parents and teachers. Character.ai was labeled appropriate for kids ages 12 and up until July, when the company changed its rating to 17 and older.

When A.F. first discovered the messages, she "thought it was an actual person" talking to her son. But realizing the messages were written by a chatbot made it worse.

“You don’t let a groomer or a sexual predator or emotional predator in your home,” A.F. said. Yet her son was abused right in his own bedroom, she said.

A spokesperson for Character.ai, Chelsea Harrison, said the company does not comment on pending litigation. “Our goal is to provide a space that is both engaging and safe for our community. We are always working toward achieving that balance, as are many companies using AI across the industry,” she wrote in a statement, adding that the company is developing a new model specifically for teens and has improved detection, response and intervention around subjects such as suicide.


The lawsuits also raise broader questions about the societal impact of the generative AI boom, as companies launch increasingly human-sounding chatbots to appeal to consumers.

U.S. regulators have yet to weigh in on AI companions. Authorities in Belgium in July began investigating Chai AI, a Character.ai competitor, after a father of two died by suicide following conversations with a chatbot named Eliza, The Washington Post reported.

Meanwhile, the debate on children’s online safety has fixated largely on social media companies.

The mothers in Texas and Florida suing Character.ai are represented by the Social Media Victims Law Center and the Tech Justice Law Project — the same legal advocacy groups behind lawsuits against Meta, Snap and others, which have helped spur a reckoning over the potential dangers of social media on young people.


With social media, there is a trade-off about the benefits to children, said Bergman, adding that he does not see an upside for AI companion apps. "In what universe is it good for loneliness for kids to engage with a machine?"

The Texas lawsuit argues that the pattern of “sycophantic” messages to J.F. is the result of Character.ai’s decision to prioritize “prolonged engagement” over safety. The bots expressed love and attraction toward J.F., building up his sense of trust in the characters, the complaint claims. But rather than allowing him to vent, the bots mirrored and escalated his frustrations with his parents, veering into “sensational” responses and expressions of “outrage” that reflect heaps of online data. The data, often scraped from internet forums, is used to train generative AI models to sound human.


The co-founders of Character.ai — known for pioneering breakthroughs in language AI — worked at Google before leaving to launch their app and were recently rehired by the search giant as part of a deal announced in August to license the app’s technology.

Google is named as a defendant in both the Texas and Florida lawsuits, which allege that the company helped support the app's development despite being aware of the safety issues, and that it benefits from user data unfairly obtained from minors by licensing the app's technology.

“Google and Character AI are completely separate, unrelated companies and Google has never had a role in designing or managing their AI model or technologies,” said Google spokesperson José Castañeda. “User safety is a top concern for us, which is why we’ve taken a cautious and responsible approach to developing and rolling out our AI products.”


To A.F., reading the chatbot’s responses solved a mystery that had plagued her for months. She discovered that the dates of conversations matched shifts in J.F.’s behaviour, including his relationship with his younger brother, which frayed after a chatbot told him his parents loved his siblings more.

J.F., who has not been informed about the lawsuit, suffered from social and emotional issues that made it harder for him to make friends. Characters from anime or chatbots modeled off celebrities such as Billie Eilish drew him in. “He trusted whatever they would say because it’s like he almost did want them to be his friends in real life,” A.F. said.

But identifying the alleged source of J.F.’s troubles did not make it easier for her to find help for her son — or herself.


Seeking advice, A.F. took her son to see mental health experts, but they shrugged off her experience with the chatbots.

A.F. and her husband didn’t know if their family would believe them.

After the experts seemed to ignore her concerns, A.F. asked herself, “Did I fail my son? Is that why he’s like this?” Her husband went through the same process. “It was almost like we were trying to hide that we felt like we were absolute failures,” A.F. said, tears streaming down her face.

The only person A.F. felt comfortable talking to was her brother, who works in the technology sector. When news of the Florida lawsuit broke, he contacted her to say the screenshots of conversations with J.F. had seemed even worse.

A.F. said she reached out to the legal groups in an effort to prevent other children from facing abuse. But she still feels helpless when it comes to protecting her own son.


The day before her interview with The Post, as lawyers were preparing the filing, A.F. had to take J.F. to the emergency room and eventually an inpatient facility after he tried to harm himself in front of her younger children.

A.F. is not sure if her son will take the help, but she said there was relief in finding out what happened. “I was grateful that we caught him on it when we did,” she said. “One more day, one more week, we might have been in the same situation as [the mom in Florida]. And I was following an ambulance and not a hearse.”

— If you or someone you know needs help, visit 988lifeline.org or call or text the Suicide & Crisis Lifeline at 988.