State Prosecutors Unite To Tackle The Issue Of AI-Driven Child Exploitation

Reports by the BBC have exposed a wave of AI-driven child exploitation in which pedophiles use artificial intelligence to produce and market realistic-looking child sexual abuse material, pushing child abuse cases to new and more alarming levels. Child exploiters access such images through subscription accounts on mainstream content-sharing sites such as Patreon. The degenerates who produce these images use Stable Diffusion, an AI tool originally designed to generate art and graphic-design images. The software is simple to use: users describe their desired image with word prompts, and the tool brings it to life. Offenders, however, have been found misusing the application to create images of child sexual abuse instead.

Prosecutors from all 50 states and four U.S. territories signed a letter to Congress on September 5th, urging lawmakers to act against AI-generated child sexual abuse material (CSAM) and to tighten restrictions on child exploitation. The letter outlined their concern that AI is creating loopholes for child exploitation, making it harder to prosecute offenders.

RELATED: Survey Discovers That ChatGPT Is Used (or Feared) By Only A Small Number Of Americans

AI And Child Exploitation

Using AI, degenerates create deepfake images that depict people in fabricated settings. Sometimes these fakes are harmless and even funny (as when the Pope was depicted wearing a Balenciaga coat), but in the worst cases they are used to promote abuse. The abuse depicted does not have to be real; the act of creating and circulating sexualized images of children is itself a crime, because it threatens the psychological, physical, and emotional well-being of those children and their parents or guardians. With AI, child exploiters can place real children who have never been abused into abusive scenarios, generate new abuse imagery of children who have already been victimized, or depict entirely fictional children in abusive situations.

The signatories to the letter are demanding that Congress establish a committee to propose solutions addressing the risks of AI-generated CSAM. They also want the committee to study the many ways AI can be used to exploit children and to weigh restrictions on AI-generated material, especially content depicting the sexual abuse of children. Republican Attorney General Alan Wilson spearheaded the effort to gather signatories from all 50 states and four U.S. territories.

The South Carolina attorney general, now serving his fourth term, said he hopes federal lawmakers will turn the issue into legislative action. According to The Associated Press, the prosecutors stressed that the country is in a race against time to keep American children safe from the dangers of artificial intelligence.

RELATED: OpenAI-Backed Language Learning App ‘Speak’ Raises $16M To Expand To The U.S.

While non-consensual AI deepfakes already abound online, few laws exist to protect the victims. States such as New York, Virginia, Georgia, and California have laws against sharing sexually exploitative AI deepfakes, and in 2019 Texas became the first state to ban the use of AI deepfakes in political elections.

Some of this banned content still slips through the cracks, despite being prohibited on most social media platforms. In March, for example, an app ran more than 230 ads on Facebook, Instagram, and Messenger offering to let people swap any face into suggestive videos. Meta removed the ads once it was notified.

Leaders at both the state and federal levels must put protections around these sophisticated technologies to ensure that children have access to a digitally safe world where they can learn and create.

Current Happenings Concerning AI Child Exploitation

The Senate has held numerous hearings this year on AI-related threats. In May, OpenAI CEO Sam Altman told lawmakers that the government would have to step in to reduce the risks posed by increasingly powerful AI technology. He suggested forming a U.S. or global agency that would license AI systems, sanction them when needed, and ensure they comply with safety standards. While Congress has yet to pass new AI rules and regulations, U.S. agencies have agreed to go after harmful AI products that flout consumer protection and civil rights laws.

In the tech world, large brands such as Meta and adult sites such as OnlyFans and Pornhub have begun participating in Take It Down, an online tool that helps teenagers remove explicit images and videos of themselves, including AI-generated content, from the internet.

AI is a wonderfully innovative technology, but we have to guard it against exploitation by morally corrupt degenerates. Congress is expected to rise to the occasion to protect American children, just as European lawmakers, working with other countries, move to ratify an AI code of conduct.

NEXT: Are AI Models Doomed To Always Hallucinate?
