AI – Is it Safe?


Hello, and welcome to LegalEase. Today's topic is AI and its safety. Is AI safe?

Several months ago, Congress invited prominent AI companies, including OpenAI, Google, Meta, and Microsoft, to help develop a framework for ensuring the safety of AI. The result of those congressional hearings is the SAFE Innovation Framework. This framework offers guidelines that the AI companies agreed to follow until Congress passes meaningful legislation on the issue.

Let’s talk about some of the guidelines.

  • Congress asked AI companies to ensure their products are safe before they are offered to the public. Examples include red-teaming their models and sharing safety information.
  • Congress asked AI companies to put safety first, which means taking steps to guard against insider threats and attacks. Suggestions included allowing third parties to discover and report weaknesses in AI technology.
  • The companies were asked to develop a way for the public to know whether the visual content they are viewing was generated by AI or created by a person. I think that's so important. When I look at an image, can I tell whether it's AI or not?
  • Finally, companies were encouraged to be helpful: to use AI to promote and foster education, medical research, and efforts to address climate change, doing things that help rather than harm.

These are some of the issues Congress is currently working on to ensure the safety of AI. It will be interesting to see what kind of legislation is ultimately passed on this issue.

I hope that was helpful. If you have any other questions, feel free to give us a call. Our link, as always, is in our bio. Thank you for joining LegalEase.