Since launching ChatGPT in 2022, OpenAI has made a steady stream of product announcements and enhancements. One such announcement came on May 16, 2024, and would have seemed innocuous to most consumers: “Improved data analysis for ChatGPT.” The post outlines how users can add files directly from Google Drive and Microsoft OneDrive. It’s also worth mentioning that other genAI tools, such as Google AI Studio and Claude Enterprise, have recently added similar functionality. Pretty awesome, right? Probably.
Connecting your organization’s Google Drive or OneDrive account to ChatGPT (or any other genAI tool) gives the tool broad permissions over resources across shared drives, not just your own files. As you can imagine, the benefits of such broad integration come with a variety of cybersecurity challenges.
So how can you tell if your employees have enabled ChatGPT’s Google Drive integration and monitor which files have been accessed? In this post, we’ll show you how to do this natively in Google Workspace and how to use Nudge Security to discover all genAI apps in use and what other apps they’re integrated with.
Where to find ChatGPT activity in Google Workspace
Google Workspace provides several ways to identify and investigate activity related to ChatGPT connections.
From your Google Workspace admin console, navigate to Reports > Audit and Investigations > Drive Log Events. Here you’ll see a list of Google Drive resources that were accessed.
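If you prefer to pull these events programmatically, the same Drive log events are exposed through the Admin SDK Reports API. Below is a minimal sketch in Python using a service account with domain-wide delegation; the credentials file name and admin email are placeholders, and the exact parameters attached to each event (such as doc_title) vary by event type, so treat this as a starting point rather than a finished report.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.reports.audit.readonly"]

# Placeholder credentials: a service account with domain-wide delegation,
# impersonating a Workspace admin account.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
).with_subject("admin@example.com")

reports = build("admin", "reports_v1", credentials=creds)

# List recent Drive log events across all users in the domain.
response = reports.activities().list(
    userKey="all",
    applicationName="drive",
    maxResults=100,
).execute()

for activity in response.get("items", []):
    actor = activity.get("actor", {}).get("email", "unknown")
    for event in activity.get("events", []):
        params = {
            p["name"]: p.get("value") or p.get("multiValue")
            for p in event.get("parameters", [])
        }
        # "doc_title" is a common parameter on Drive audit events; other
        # parameters identify the accessing application where available.
        print(actor, event.get("name"), params.get("doc_title"))
```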
You can also investigate activity from API calls under Reports > Audit and Investigation > OAuth Log Events.
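These OAuth log events have a programmatic counterpart too: token audit events under the “token” application in the same Reports API. The sketch below, assuming the same placeholder service-account setup as above, filters authorize events down to ChatGPT; the app_name and scope parameter names follow the documented token audit events, but verify them against what actually appears in your tenant’s logs.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.reports.audit.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
).with_subject("admin@example.com")  # placeholder admin to impersonate
reports = build("admin", "reports_v1", credentials=creds)

# OAuth token audit events live under applicationName="token".
# The "authorize" event records which app a user granted access to.
response = reports.activities().list(
    userKey="all",
    applicationName="token",
    eventName="authorize",
    maxResults=100,
).execute()

for activity in response.get("items", []):
    actor = activity.get("actor", {}).get("email", "unknown")
    for event in activity.get("events", []):
        params = {
            p["name"]: p.get("value") or p.get("multiValue")
            for p in event.get("parameters", [])
        }
        # Keep only grants whose app name mentions ChatGPT.
        if "chatgpt" in str(params.get("app_name", "")).lower():
            print(actor, params.get("app_name"), params.get("scope"))
```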
So while regular checks of the Google Workspace admin console can help you understand which resources ChatGPT is accessing, it’s far more valuable to be alerted as soon as a new ChatGPT integration is created than to review the activity after it has already happened. This is where Nudge Security comes in.
How to view all genAI integrations with Nudge Security
Nudge Security discovers accounts for all SaaS applications created by anyone in your organization, including ChatGPT and a rapidly growing list of new genAI tools, without requiring prior knowledge that a tool exists. A built-in AI dashboard helps customers keep pace with AI adoption and proactively mitigate AI security risks.
Additionally, Nudge Security displays OAuth grants across your organization, such as those granted to ChatGPT, in a filterable OAuth dashboard that shows the grant type (sign-in or integration) along with activity and risk analysis information. Filtering by category shows all grants associated with AI tools.
Clicking on a grant opens a details screen where you can view the risk profile, who created the grant and when, access details, and the scope granted.
You can then send a “nudge” to the creator of the grant via Slack or email, asking them to take a specific action such as limiting the scope of the grant, or you can revoke the grant immediately from within the Nudge Security user interface.
Finally, you can set up custom rules that notify you when users in your organization create OAuth grants for ChatGPT or other genAI apps. You can also create rules that notify you immediately when a new genAI account is created, prompting new genAI users to review and accept your organization’s genAI acceptable use policy.
Balancing productivity and security
ChatGPT’s integration with Google Drive and Microsoft OneDrive has the potential to meaningfully improve productivity, but it also introduces real security risks. Organizations should approach these integrations with a clear understanding of those risks and implement appropriate governance and security measures to mitigate them.
Nudge Security provides not only visibility but also context and automation to help companies adopt genAI tools without compromising data security.
Start your 14-day free trial. Sign up now to instantly discover all genAI apps deployed in your organization and all integrations to other applications.