
It Might Soon Be Legal to Dissect AI to Learn How It Works

A proposed copyright exemption would let researchers legally bypass AI companies' terms of service.

Image credit: Stock-Asso/Shutterstock

The U.S. government is considering allowing researchers to explore how AI systems work without facing copyright lawsuits. If the proposal is accepted, researchers would be able to break AI companies' terms of service to expose biases, inaccuracies, and training data.

According to 404 Media, researchers would be able to break these agreements without consequences to conduct "good faith research": to see how AI systems work, "probe them for bias, discrimination, harmful and inaccurate outputs, and to learn more about the data they are trained on."

The Department of Justice supports the proposal, saying it can "help reveal unintended or undisclosed collection or exposure of sensitive personal data, or identify systems whose operations or outputs are unsafe, inaccurate, or ineffective."

One of AI's most discussed problems is that models are trained on other people's work without permission. Users regularly try to get AI systems to admit what material they were trained on, but they often struggle to get real answers.

OpenAI's terms of service, for instance, say that users can't "attempt to or assist anyone to reverse engineer, decompile or discover the source code or underlying components of our Services, including our models, algorithms, or systems (except to the extent this restriction is prohibited by applicable law)" and mustn't "circumvent any rate limits or restrictions or bypass any protective measures or safety mitigations we put on our Services" (via 404 Media).

Moreover, AI models are known to sometimes output false or biased information, for example associating certain professions with particular races or genders, or even leaning toward particular opinions.

This copyright exemption, which would be granted under Section 1201 of the Digital Millennium Copyright Act, would help get to the root of the problem and could aid in protecting human artists' rights.

AI companies are, of course, not very happy about it. At a recent hearing, the App Association, a global trade organization that represents several AI developers, argued that researchers should get consent from AI companies first. "If you don't contact the company in advance to tell them that you're red teaming, you are essentially a potentially malicious hacker," said the group's Morgan Reed. "So a researcher does the action. They break in. The company whose LLM it was that they went after is unhappy in some way or form or another and goes after them for a copyright breach. So what they really want, what they're really asking for is post-fact liability protection." Quite ironic.

This exemption is still only a proposal, so stay tuned, find the original article here, and join our 80 Level Talent platform and our Telegram channel, follow us on Instagram, Twitter, LinkedIn, TikTok, and Reddit, where we share breakdowns, the latest news, awesome artworks, and more.
