AI’s growth comes hand in hand with questions about how best to use this powerful tool, including questions about policy guidelines for AI in your organization.
Our organization is exploring a few AI tools. What are the risks of using them? Do we need to have a policy for using AI? If so, what should we include in the policy?
We’ve previously discussed the use cases and barriers to AI for associations. Now we’re addressing questions around policy to help associations move forward and leverage this tool.
AI is everywhere
As the prevalence of Artificial Intelligence (AI) increases, so do the risks. Many people rely on Grammarly, for example, to fix grammatical errors in their documents. Grammarly is an AI tool, and it can capture and reuse your text. Similarly, search engines such as Google and Bing use AI not only to answer search queries but also to scrape websites for text and data.
Risks of using AI
Fixing your grammar or searching for answers on the internet seems easy and relatively innocuous, yet the ramifications can be costly. Once you expose your text or data to an AI tool for “improvement,” your ownership of the material may be lost, meaning potentially no more intellectual property protection, limited opportunity to sell or monetize your information, and lost confidentiality that exposes your organization to breach-of-confidentiality lawsuits. Thus, the last thing you want is for you, an employee, or an intern to expose your text or data to AI.
AI is embedded throughout the digital world, and organizations need to consider these ramifications now, whether or not they are exploring specific AI tools. Simply creating an AI use policy with a long and growing list of AI tools to avoid is insufficient to mitigate the risks. Rather, in much the same way that organizations provide guidance on using social media (rather than trying to limit or prohibit it altogether), so too should they provide guidance on using AI.
Creating a path forward for AI use
Banning or limiting the use of specific AI tools is futile: there will always be a different or new tool, and you can’t realistically keep up. Rather, create holistic guidance that does the following:
- Affirms the organization’s commitment to protecting confidentiality and not sharing or exposing confidential information;
- Explains the risks that AI presents in terms of potentially losing confidentiality, losing ownership, and exposing the organization to litigation;
- Provides guidance, with examples, on when, where, and how using AI may be appropriate for your organization; and
- Identifies a resource that staff can call or reference if they have questions or need specific guidance.