FTC takes aim at OpenAI’s ChatGPT with lengthy civil investigative demand

The U.S. federal agency wants to know if the revolutionary AI tool has sound privacy practices and whether it has led to consumer harm.

OpenAI, maker of AI chatbot ChatGPT and related products, has received a civil investigative demand (CID) from the United States Federal Trade Commission (FTC), the Washington Post reported on July 13. The CID is reproduced on the newspaper’s website without a date. A CID is similar to a subpoena, and recipients are legally obliged to produce the information it requests.

The FTC is investigating whether OpenAI engaged in “unfair or deceptive privacy or data security practices” or “unfair or deceptive practices relating to risks of harm to consumers, including reputational harm.” The agency is also considering whether a monetary penalty for the alleged practices would be in the public interest, according to the CID.

The 20-page document goes on to pose 49 detailed questions for the company and requests 17 categories of documents for its investigation. The company is given 14 days to contact an FTC counsel to discuss how it will meet the agency’s demands.

The FTC asked in its CID which large language models are used in OpenAI products, how they are used, how the products based on them were trained and how their accuracy is ensured.

Related: The UN holds a robot press conference about the state of AI

The CID also asked about advertising policy, risk assessment, collection and protection of personal information, how the status of “public figure” was determined and how feedback and complaints were handled. Many of the questions are quite broad. For example:

“Describe in Detail the extent to which You have taken steps to address or mitigate risks that Your Large Language Model Products could generate statements about real individuals that are false, misleading or disparaging.”

The Microsoft-backed ChatGPT sent shockwaves through the IT world when it was introduced on Nov. 30, 2022. Users wondered about the implications of the powerful new technology, and competitors scrambled to catch up to it.

In the inevitable backlash, numerous countries have announced probes. A letter calling for a moratorium on AI development was signed by 2,600 tech figures, including Elon Musk and Steve Wozniak, and OpenAI CEO Sam Altman spoke before the United States Senate on AI safety.

OpenAI has also faced several lawsuits. A class-action suit filed in the U.S. District Court for the Northern District of California on June 28 accused the company of scraping personal data from the internet without permission. Authors Mona Awad and Paul Tremblay sued OpenAI in June for copyright infringement, and comedian Sarah Silverman and two other authors sued OpenAI and Meta the following month, claiming the companies used illegal “shadow libraries” to train their AI models.

Magazine: AI Eye: AI travel booking hilariously bad, 3 weird uses for ChatGPT, crypto plugins
