I find it interesting that there are five main ethical guidelines that tech companies are aware of and claim to follow, yet these same companies still use copyrighted material to train LLMs, even while walking a very thin ethical tightrope. I do not agree that material produced by writers, artists, and musicians, for example, should be used without the permission of the creator. The New York Times has blocked scraping of its content and is suing OpenAI for "billions" of dollars for using that content without the Times' permission. I feel that the government needs to step in and enforce strong guidelines so tech companies will not abuse their position with AI.

We are all aware of the mess the World Wide Web has become because of the lack of "guardrails" and rules needed to prevent that chaos; honestly, anyone can post anything on the web and convince a large number of people that it is factual or real. 2024 is a crucial year because it is an election year in this country. AI needs to be watched, and the companies held accountable for misinformation and deepfake videos, news, or any outlet that produces information people read and trust to be real news, articles, or videos. We have seen what a mess social media is and the damage it has done. People have found platforms to spew hate and misinformation, and the powers that be allow it, classifying it as "freedom of speech." Freedom of speech is important, but it was meant for political speech, which our Founding Fathers wanted us to have without fear of punishment, and along the way it has become hate speech. In closing, the government needs to step up, take control, and get AI regulated before someone uses it in a very unethical and dangerous way.