Since the launch of ChatGPT, AI chatbots have proliferated. But their spread has also brought growing problems. OpenAI surprised the world with ChatGPT's capabilities, and its use has drawn many objections, some of which have escalated into legal battles. In the U.S., ChatGPT's home country, the law may yet protect it: Section 230 of the Communications Decency Act of 1996 states that a company does not bear legal responsibility for content published by third parties or users on its platform.
However, the U.S. Supreme Court will decide in the coming months whether to weaken this protection, which could also affect AI chatbots such as ChatGPT. The justices are expected to rule by the end of June on whether Alphabet-owned YouTube can be sued over the videos it recommends to users. Platforms are exempt from liability for user-uploaded videos, but do Section 230 protections still apply when they use algorithms to recommend content to users?
Technology and legal experts say the case has implications beyond social media platforms. They believe the verdict could spark new debate over whether companies such as OpenAI and Google, which develop AI chatbots, can be immune from legal claims such as defamation or invasion of privacy. Experts point out that the algorithms ChatGPT and its peers use to generate responses resemble those YouTube uses to recommend videos to users.
What do AI chatbots do?
AI chatbots are trained on vast amounts of data to generate original content, and Section 230 generally applies only to third-party content. Courts have yet to consider whether AI chatbot responses would be protected. A Democratic senator has argued that the immunity cannot apply to generative AI tools because those tools "create content." "Section 230 is about protecting users and websites hosting and organizing user speech. It should not insulate companies from the consequences of their own actions and products," he said.
The tech industry has been pushing to keep Section 230 intact. Some argue that tools like ChatGPT are akin to search engines, serving existing content to users based on their queries. "AI doesn't really create anything. It just presents existing content in a different way or in a different format," said Carl Szabo, vice president and general counsel at the tech industry trade group NetChoice.
Szabo said that weakening Section 230 would create an impossible task for AI developers and expose them to a flood of lawsuits that could stifle innovation. Some experts speculate that courts may take a middle path, examining the context in which an AI model generated a potentially harmful response.
The exemption may still apply in cases where an AI model merely summarizes or points to existing sources. But chatbots like ChatGPT are known to generate false answers, which experts say may not be protected.
Hany Farid, a technologist and professor at the University of California, Berkeley, said it is far-fetched to argue that AI developers should be immune from lawsuits over the models they "program, train and deploy."
“If companies can be held liable in civil suits, they make safer products; if they are exempt, the products tend to be less safe,” Farid said.
Issues with AI tools like ChatGPT
AI chatbots are now very popular because they offer an efficient and convenient way for people to get information and support. They can process natural language and return relevant responses to queries, which makes them a useful tool for customer service, sales, and support. However, concerns have been raised about the safety of using AI chatbots, mostly in terms of privacy and security.
One of the main concerns with AI chatbots is the potential for data breaches and cyber attacks. Because chatbots often require users to provide personal information such as their name, address, and payment details, there is a risk that this data could be stolen. To reduce this risk, companies must secure their chatbots and use encryption to protect user data.
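One common data-protection measure in this vein is to tokenize personal details before chat transcripts are stored or logged. The sketch below is a minimal illustration using Python's standard library; the record layout, field names, and the `pseudonymize`/`scrub_record` helpers are hypothetical, and a real deployment would manage the key in a secrets store rather than in source code.

```python
import hmac
import hashlib

# Hypothetical server-side key; in practice this lives in a secrets manager.
SECRET_KEY = b"server-side-secret"

def pseudonymize(value: str, key: bytes = SECRET_KEY) -> str:
    """Replace a piece of personal info with a keyed, irreversible token."""
    digest = hmac.new(key, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

def scrub_record(record: dict, pii_fields=("name", "address", "payment")) -> dict:
    """Return a copy of a chat-log record with PII fields tokenized."""
    return {k: pseudonymize(v) if k in pii_fields else v
            for k, v in record.items()}

record = {"name": "Jane Doe", "query": "track my order"}
clean = scrub_record(record)
```

Because the token is an HMAC rather than a plain hash, the same user maps to the same token across records (so analytics still work), but an attacker who steals the logs cannot reverse the tokens without the key.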
Another issue with AI chatbots is bias. Because these chatbots are programmed by humans and trained on human-generated data, they can absorb the biases and stereotypes that exist in society. For example, a chatbot designed to screen job seekers may discriminate against certain groups based on their race, gender, or ethnicity. To avoid this, chatbot developers must test their models for bias and correct it where it appears.
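One simple way to test a screening chatbot for the kind of bias described above is to compare selection rates across groups, as in the "four-fifths rule" used in U.S. employment-selection guidance: if the lowest group's selection rate falls below 80% of the highest group's, the system warrants review. The sketch below is a minimal illustration; the data, group labels, and helper names are hypothetical.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of the lowest to the highest group selection rate.

    Under the four-fifths rule, a ratio below 0.8 flags possible bias.
    """
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: group A selected 8/10, group B 4/10.
decisions = ([("A", True)] * 8 + [("A", False)] * 2 +
             [("B", True)] * 4 + [("B", False)] * 6)
ratio = disparate_impact(decisions)  # 0.4 / 0.8 = 0.5, below the 0.8 threshold
```

A check like this does not prove or disprove discrimination on its own, but running it routinely over a chatbot's decisions gives developers an early warning before biased behavior reaches users.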
There is also a risk that AI chatbots cannot handle sensitive or complex queries, such as those involving mental health or domestic abuse. In such cases, users may not receive the support they need. This is one of the areas where people want chatbots to be held accountable: a wrong response in such a scenario could have very bad outcomes.
Despite these concerns, AI chatbots can be safe when put to proper use. Some people want to use them to replace doctors and other experts; these are not ideal uses of the tool. For their part, developers must avoid bias at all costs and ensure the system is inclusive. Chatbots should also not be left to run without human oversight; they should be checked regularly and upgraded where needed. If developers address the concerns around security, privacy, bias, and the limits of chatbots, these tools can make searching for information much easier and serve as a decent assistant to people who need help finding it.