Addressing the Risks of Using Prompt-Based AI Tools: A Proactive Strategy
Key Takeaways:
- ChatGPT and other prompt-based AI tools promise increased productivity
- These AI tools pose the risks of leaking sensitive data and of providing inaccurate information
- Mitigate risks by: 1) Exposing employees to the risks, 2) Enforcing policies, and 3) Educating users at the point of click
Employees are Capitalizing on Prompt-Based AI Tools
Across industries, businesses are flocking to integrate AI into their technology stacks, with the ultimate goal of increasing efficiency, efficacy, or both. Alongside this trend, though, employees have been integrating prompt-based AI tools, such as OpenAI’s well-known ChatGPT, into their individual workflows.
These employees could be:
- Marketers using AI to draft copy;
- Software Developers using AI to fix bugs in code;
- Network Administrators using AI to troubleshoot network errors;
- Cybersecurity Analysts using AI to help triage alerts; or,
- Business leaders using AI to perform market research or to refine their business strategy.
In addition to potentially increasing an employee’s or a business’s productivity, this trend of employees using ChatGPT-like tools poses risks to businesses: specifically, the risks of acting on factually incorrect outputs and, most importantly, of leaking sensitive data.
ChatGPT-like Tools Pose Noteworthy Risks
Some large companies, such as Apple and Verizon, recognize the risks of prompt-based AI tools and have restricted employee access to ChatGPT.
The most prominent risk associated with prompt-based AI technology stems from the fact that users input data into these tools. When using this technology, employees might inadvertently input sensitive data, or input it without realizing the risks of doing so.
Sensitive data that employees might input into ChatGPT-like tools includes:
- Financial data (e.g., bank or card data)
- Customer or employee data (e.g., an SSN)
- Intellectual property (e.g., proprietary code)
- Infrastructure or security data (e.g., security incident information)
- Business-critical data (e.g., business strategy)
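To make these categories concrete, here is a minimal sketch, in TypeScript, of what pattern-based detection of a few of these data types might look like. The patterns and category names are illustrative assumptions only, not a production-ready DLP rule set:

```typescript
// Illustrative patterns only; a real DLP policy would be far more
// comprehensive and tuned to the organization's own data.
const SENSITIVE_PATTERNS: Record<string, RegExp> = {
  ssn: /\b\d{3}-\d{2}-\d{4}\b/,         // e.g., 123-45-6789
  creditCard: /\b(?:\d[ -]?){13,16}\b/, // loose 13-16 digit card match
  awsAccessKey: /\bAKIA[0-9A-Z]{16}\b/, // AWS access key ID format
};

// Returns the names of any sensitive-data categories found in a prompt.
function detectSensitiveData(prompt: string): string[] {
  return Object.entries(SENSITIVE_PATTERNS)
    .filter(([, pattern]) => pattern.test(prompt))
    .map(([name]) => name);
}

// Example: detectSensitiveData("My SSN is 123-45-6789") returns ["ssn"]
```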
Inputting sensitive data into these tools is a risk because that data is:
- Now available to a third party (i.e., the owners of the AI tool, any middleware provider, or the owners of the extension or website that uses the AI tool in the background);
- At further risk of even wider leakage (if, for example, one of those third parties is compromised); and
- At risk of leaking to other users of the tool.
Additionally, AI tools are known to “hallucinate” and occasionally provide inaccurate information, which could lead to poor business decisions. Because these tools are new technologies, the likelihood and impact of employees acting on inaccurate AI output are currently unknown; still, this potential threat is worth noting, and we recommend exposing employees to this possibility. One way to do so is to use Keep Aware, for example, to provide users with acceptable use notices for sensitive applications like ChatGPT.
Although prompt-based AI tools pose the risks of sensitive data leakage and of acting on inaccurate information, they also offer the real possibility of increased productivity. So, whether you are a business leader or a security/IT practitioner, we recommend not asking how you could stop your employees from using technology that could help them; instead, ask, “How can we enable our employees to use tools that make them more efficient and the business more profitable while simultaneously decreasing the risk associated with those tools?”
Gain Visibility; Assess Current State; Mitigate Risk
Before tackling almost any security initiative, we recommend gaining visibility and assessing the current situation.
- Are employees using prompt-based AI tools?
- Which tools are they using?
- How many employees are using them?
- In which departments?
- How frequently?
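As one illustration of how this visibility could be gathered, the hypothetical sketch below uses the standard WebExtensions webNavigation API to tally visits to a few well-known AI tool domains; the domain list and local logging are assumptions for demonstration, as real tooling would report centrally:

```typescript
// background.ts: a hypothetical visibility sketch. Requires the
// "webNavigation" permission and, for TypeScript, the @types/chrome package.
const AI_TOOL_DOMAINS = new Set([
  "chat.openai.com",
  "gemini.google.com",
  "claude.ai",
]);

const visitCounts = new Map<string, number>();

chrome.webNavigation.onCompleted.addListener((details) => {
  if (details.frameId !== 0) return; // count top-level page loads only
  const host = new URL(details.url).hostname;
  if (AI_TOOL_DOMAINS.has(host)) {
    const count = (visitCounts.get(host) ?? 0) + 1;
    visitCounts.set(host, count);
    // A real deployment would report this centrally instead of logging it.
    console.log(`AI tool visit: ${host} (total: ${count})`);
  }
});
```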
Answering these questions helps you understand the current state and decide whether, and how, mitigating these risks should be prioritized in your organization’s security roadmap.
Mitigate Risk by Educating Users and Enforcing Policies
Your company has a unique risk appetite; your organization might therefore choose to completely restrict employees from accessing prompt-based AI tools. For companies whose risk appetite can support their employees’ drive to work more efficiently with these tools, but that still want to mitigate the associated risks, we propose the following three risk-reduction tactics:
- Expose employees to the risks of using prompt-based AI tools;
- Enforce the business’s policies around sensitive data;
- Educate employees in real-time.
1. Expose Employees to Risks
Expose employees to the benefits and, especially, the risks of using prompt-based AI technology. This exposure primes employees to better identify and mitigate the risks of accidentally leaking sensitive data or of assuming the tool’s outputs are factually correct.
To expose users to the risks of using this technology, feel free to distribute the infographic below, or use it as a template to create your own.
Exposing employees to the risks of these tools primes users to identify and mitigate those risks; however, we shouldn’t expect any user to be in perfect compliance 100% of the time.
2. Enforce Policies
To further mitigate the risk of leaking sensitive data, your organization can deploy technology that enforces, in real-time, the business’s acceptable use policy (AUP) or data loss prevention (DLP) policies when an employee is inputting data into these tools.
Since employees spend most of their time in the browser, and because this is likely where they’ll be using prompt-based AI tools, we recommend using a browser-level technology, such as our browser extension, to restrict the type of data an employee can input.
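As a rough sketch of what such browser-level enforcement could look like, the hypothetical content script below blocks a form submission on an AI tool’s page when the detection sketch from earlier flags sensitive data. The selectors and event handling are illustrative assumptions (many chat UIs don’t use plain form submits), not Keep Aware’s actual implementation:

```typescript
// content-script.ts: an illustrative enforcement sketch.
document.addEventListener(
  "submit",
  (event) => {
    if (!(event.target instanceof HTMLFormElement)) return;
    const promptField = event.target.querySelector("textarea");
    if (!promptField) return;

    // detectSensitiveData() comes from the earlier detection sketch.
    const findings = detectSensitiveData(promptField.value);
    if (findings.length > 0) {
      // Stop the submission before the data leaves the browser.
      event.preventDefault();
      event.stopImmediatePropagation();
      console.warn(`Blocked prompt containing: ${findings.join(", ")}`);
    }
  },
  true // capture phase, so the policy check runs before the page's handlers
);
```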
3. Educate Employees in Real-Time
As a third risk-mitigation tactic, the same browser-level technology that enforces AUP and DLP policies could also provide real-time education to the user. For this tactic, we propose giving real-time feedback to users who accidentally input sensitive data into a tool like ChatGPT; this feedback could include a short notice that sensitive data was detected as input to an AI tool and an explanation of why this activity poses a risk to them and to the organization.
When real-time feedback is implemented, employees receive just-in-time education, which reinforces their security training and likely helps them better identify and mitigate threats moving forward.
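Continuing the earlier sketches, the hypothetical snippet below shows one way such just-in-time feedback could be rendered in the page; the wording and styling are placeholders an organization would tailor to its own policies:

```typescript
// Renders an in-page banner explaining why the prompt was blocked.
// Wording and styling are placeholders, not Keep Aware's actual UI.
function showFeedbackNotice(findings: string[]): void {
  const notice = document.createElement("div");
  notice.textContent =
    `Sensitive data detected (${findings.join(", ")}). ` +
    "Submitting it to an external AI tool could expose it to third parties " +
    "and may violate our acceptable use policy.";
  notice.style.cssText =
    "position:fixed;top:0;left:0;right:0;padding:12px;z-index:99999;" +
    "background:#b91c1c;color:#fff;font:14px sans-serif;text-align:center;";
  document.body.prepend(notice);
  setTimeout(() => notice.remove(), 10_000); // auto-dismiss after 10 seconds
}
```

In the enforcement sketch above, showFeedbackNotice(findings) could be called alongside blocking the submission, so the user learns why the prompt was stopped.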
Expose, Enforce, Educate: The Overall Experience
Together, a user’s experience of these three risk-mitigation tactics would look something like the following diagram:
Conclusion
ChatGPT and other prompt-based AI tools can provide real benefits to employees’ workflows and to an organization’s productivity, but this technology is not without risks, particularly those of making decisions based on incorrect information and of leaking sensitive data.
If your organization’s risk appetite allows employees to adopt these tools into their workflows, we recommend decreasing the risks they pose by exposing users to those risks, enforcing policies, and educating users in real-time.
Interested in knowing how Keep Aware’s security browser extension can offer visibility into employees’ AI tool usage, enforce sensitive data policies, and educate users in real-time? Meet with our team!