Prevent data leaks from Chat messages & attachments

Use DLP for Chat to protect data in Chat messages and attachments

Supported editions for this feature: Enterprise; Education Standard and Education Plus.

DLP for Chat is also available to Cloud Identity Premium users who are also licensed for Workspace editions that include Google Chat and audit and investigation.

Using DLP for Chat, you can create data protection rules to prevent data leaks from Chat messages and attachments (uploaded files and images). 

DLP for Chat features

DLP for Chat gives you control over the sharing of sensitive data in chat conversations. Using DLP for Chat, you can:

  • Create data protection rules specifically for Chat, or for Chat plus other apps (such as Drive or Chrome).
  • Create data protection rules that block Chat messages and attachments, warn users from sending them, or log them for future audit.
  • Define data sensitivity conditions using text strings, predefined and custom detectors (which include word lists and regular expressions).
  • Enforce data protection rules for a specific organizational unit or group, or for your entire organization.
  • Investigate Chat DLP violations using the Security investigation tool (including viewing end user messages that violate such rules).

Known limitations

Chat and latency limitations

Chat is a latency-sensitive application, and we have designed Chat DLP to not degrade the end user experience.

  • For messages, DLP is given a fixed amount of time to perform scans. Depending on the complexity and number of detectors you have, some detectors may not complete in time, and won’t be enforced. DLP scan status is included in the Google Chat audit log for messages sent and attachments uploaded.
  • The following predefined detectors might require more time to scan—using them in Chat DLP rules increases the risk of scan timeouts:
    • Date of birth
    • Person name
  • Attachments are given more time for scans.
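The time-budgeted scanning described above can be sketched as follows. This is a conceptual illustration only, not the actual implementation; the budget value, detector names, and patterns are all assumptions for the example.

```python
import re
import time

SCAN_BUDGET_S = 0.2  # illustrative; the real per-message deadline isn't published

def scan_with_deadline(text, detectors, budget=SCAN_BUDGET_S):
    """Run detectors in order until the time budget is spent.

    Detectors that never get a chance to run are reported as skipped,
    mirroring how slow detectors may not be enforced on a Chat message.
    """
    deadline = time.monotonic() + budget
    findings, skipped = [], []
    for name, pattern in detectors:
        if time.monotonic() >= deadline:
            skipped.append(name)  # out of time: this detector isn't enforced
            continue
        if pattern.search(text):
            findings.append(name)
    return findings, skipped

detectors = [
    ("ssn", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),
    ("credit_card", re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b")),
]
findings, skipped = scan_with_deadline("my SSN is 123-45-6789", detectors)
```

With a generous budget both detectors run and only the SSN pattern matches; with a tight budget, later detectors land in `skipped` rather than being enforced.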

Tabular data in .csv files treated as plain text

Comma-separated values (.csv) files are treated as plain text. As a result, DLP might not find violations in columns that are apparent when you review the data in the file.
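The gap between a plain-text scan and a column-aware review can be sketched as follows. The anchored pattern is an illustrative assumption: it stands in for any detection logic that depends on seeing a value as a complete field rather than as a fragment of a longer line.

```python
import csv
import io
import re

# A full-cell pattern: matches only when the SSN is the entire field.
CELL_SSN = re.compile(r"^\d{3}-\d{2}-\d{4}$")

raw = "name,ssn\nAlice,123-45-6789\n"

# Plain-text view: each line is "name,value", so a pattern anchored
# to a whole field never matches a whole line.
plain_hits = [line for line in raw.splitlines() if CELL_SSN.match(line)]

# Column-aware view: parse cells first, then match each cell.
cell_hits = [cell for row in csv.reader(io.StringIO(raw))
             for cell in row if CELL_SSN.match(cell)]
```

Here `plain_hits` is empty while `cell_hits` finds the SSN, which is why violations that are apparent when reviewing the file's columns can be missed by a plain-text scan.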

Users need latest versions of Gmail and Chat

Ensure that your users' Gmail and Google Chat applications are up to date so they receive complete messaging for blocked Chat conversations. On older versions of Gmail and Chat, content that should only trigger a warning will be blocked instead.

DLP rules created without conditions

If you create a DLP rule with no condition, the rule applies the specified action to every Chat message and/or all uploaded files (depending on whether you select messages, file attachments, or both when creating the rule).

How does DLP for Chat work?

When a user sends a Chat message, DLP scans the message and any attachments for sensitive content. If the content violates a block or warn rule, the action is applied when the message is sent.

What is scanned?

DLP rules are applied to sent messages, not the messages a user or space can receive. 

  • Messages and attachments are scanned. Attachments include files and images. The attachment filename itself is also scanned (for supported file types). Attachments that violate security policies are blocked from being sent.
  • Messages in 1:1 chats, group chats, and spaces are scanned, even if Chat history is turned off. Refer to Turn history on or off in Google Chat for details.
  • Chat DLP incidents are logged in the Rule audit log; in some cases, message content may be available in the log. How long the message content is visible in the log depends on your Chat history settings and your configured message retention period for Chat.
    • When Chat history is turned on, Admins can view the message up to your configured message retention period.
    • When Chat history is turned off, the message will be accessible for 24 hours.

Scanned file types

File types scanned for content include:

  • Document file types: .txt, .doc, .docx, .rtf, .html, .xhtml, .xml, .pdf, .ppt, .pptx, .odp, .ods, .odt, .xls, .xlsx, .ps, .css, .csv, .json, .sh 
  • Image file types: .eps

    Note: If OCR is enabled, the following image types are also scanned: .bmp, .gif, .jpeg, .png, and images within PDF files.

  • Compressed file types: .zip
  • Custom file types: .hwp, .kml, .kmz, .sdc, .sdd, .sdw, .sxc, .sxi, .sxw, .wml, .xps

Attachment size limitation for scans

Attachments over 50 MB in size are uploaded and sent without being scanned by DLP.
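The size gate above amounts to a simple threshold check, sketched here. Whether an attachment of exactly 50 MB is scanned is an assumption; the article only states that attachments over 50 MB skip scanning.

```python
MAX_SCAN_BYTES = 50 * 1024 * 1024  # 50 MB threshold from this article

def will_be_scanned(attachment_size_bytes):
    # Attachments over the limit are uploaded and sent without a DLP scan.
    return attachment_size_bytes <= MAX_SCAN_BYTES
```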

When is the message scanned?

Chat messages are scanned when a user sends them, whether or not they include attachments. Attachments are scanned when they are uploaded.

DLP scans Chat messages and attachments

Summary of DLP for Chat flow:

  1. You define DLP rules. These rules define which content is sensitive and should be protected. You can apply DLP rules to both messages and attachments.
  2. A user sends a Chat message. DLP scans the contents for DLP rule violations. Attachments are scanned upon upload, and those that violate rules are blocked.
  3. DLP enforces the rules you defined, and violations trigger any actions you've configured for the rules.
  4. You're alerted of DLP rule violations in the Rule audit log.
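The four-step flow above can be sketched as a minimal rule-evaluation loop. The `Rule` shape and action names here are assumptions for illustration, not the product's internal data model.

```python
import re
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    pattern: re.Pattern
    action: str  # "block", "warn", or "audit"

def evaluate(message, rules):
    """Scan a sent message against each rule and collect the triggered
    (rule, action) pairs, as if logging them to the Rule audit log."""
    return [(r.name, r.action) for r in rules if r.pattern.search(message)]

rules = [Rule("ssn", re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "block")]
violations = evaluate("my ssn is 123-45-6789", rules)  # [("ssn", "block")]
```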

What happens when a user's message is blocked or triggers a warning?

Before you implement DLP for Chat rules, tell your end users what to expect. Explain that policies govern what information can be shared, and that messages violating those policies are blocked or trigger a warning. Tell users what information is restricted, so they aren't surprised when content is blocked or they're warned about sensitive content.

User experience for blocked messages

Here are some messages users can receive when a Chat message or attachment is blocked:

  • Your message couldn’t be sent
  • Your message couldn't be updated

    Your message may contain sensitive content (like credit card numbers) that shouldn't be shared based on your organization's policies. Edit as needed, or check with your admin if this doesn’t seem right. 

When a message is blocked, the user can dismiss the dialog or click Edit message and edit the message text or remove the violating attachment.  

User experience for messages that trigger a warning

When a Chat message or attachment triggers a warning, users receive the following message. Note that the message is initially blocked, and is only sent if the user chooses to send the message anyway: 

  • Check your message

    Your message may contain sensitive content (like credit card numbers) that shouldn't be shared based on your organization's policies. Edit as needed, or check with your admin if this doesn’t seem right.

After getting a warning, the user can click Edit message and edit the message text, click Send anyway to send the text as is, or dismiss the dialog.
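The warn behavior described above, where the message is held until the user decides, can be sketched as follows. The return values are illustrative labels, not product terminology.

```python
def deliver(violates_warn_rule, user_sends_anyway=False):
    """Warn semantics from the text: the message is initially held and
    only delivered if the user clicks Send anyway."""
    if violates_warn_rule and not user_sends_anyway:
        return "held"  # the user sees the warning dialog instead
    return "delivered"
```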

How do I control what messages are blocked? What if I want to block messages to spaces or groups?

After choosing a DLP rule action (such as Block message), select the conversation type you want to cover: internal or external (for example an externally-owned space, or a conversation with guest access enabled). You can also choose whether to apply the rule to spaces, group chats, and 1:1 chats:

"Select when the Block message action in DLP should be applied for Google Chat."

DLP for Chat - rule examples


Here are some examples of how to create DLP rules that block Chat messages or attachments, warn about sensitive content, or log details about Chat messages in the Rule audit log.

For general steps on creating DLP rules, go to Create new DLP for Drive rules and custom content detectors.

Block a Chat message that contains a Social Security Number - external and internal messages
This rule blocks a conversation (internal or external) when a message or attachment contains a Social Security Number.
  1. Sign in to your Google Admin console.

    Sign in using an account with super administrator privileges (does not end in @gmail.com).

  2. In the Admin console, go to Menu > Rules.
  3. Under Protect your sensitive content, click Create rule.
  4. Add the name and description for the rule, such as Block when sharing SSN in chat.
  5. In the Scope section, choose Apply to all <domain.name> or choose to search for and include or exclude organizational units or groups the rule applies to.
  6. Click Continue.
  7. For Google Chat select Message sent and File uploaded (for attachments).
  8. Click Continue.
  9. In the Conditions section, click Add Condition and select the following values:
    1. Content type to scan—All content (note that All content is the only content type available if you select Google Chat, no matter what other apps are selected)
    2. What to scan for—Matches predefined data type (recommended)
    3. Select data type—United States - Social Security Number.
    4. Likelihood Threshold—High. The confidence threshold for the condition. This is an extra measure used to determine whether messages trigger the rule action.
    5. Minimum unique matches—1. The minimum number of times a unique match must occur in a message or attachment to trigger the action.
    6. Minimum match count—1. The number of times the content must appear in a message or attachment to trigger the action. For example, if you select 2, content must appear at least twice in a message to trigger the action. 
  10. Click Continue. In the Actions section, under Chat, select Block message. Also select when the action should apply. For this example, select External conversations and Internal conversations. Leave Spaces, Group chats, and 1:1 chats selected.
  11. (Optional) In the Alerting section:
    • Choose a severity level (Low, Medium, or High) for how an event triggered by this rule is reported in the security dashboard. 
    • Choose whether an event triggered by this rule should also send an alert to the alert center. Also choose whether to email alert notifications to all super administrators or to other recipients.
  12. Click Continue to review the rule details. The action for Chat is to block message for external and internal conversations.
  13. Choose a status for the rule:
    • Active—Your rule runs immediately
    • Inactive—Your rule exists, but does not run immediately. This gives you time to review the rule and share it with team members before implementing. Activate the rule later by going to Security > Access and data control > Data protection > Manage Rules. Click the Inactive status for the rule and select Active. The rule runs after you activate it, and DLP scans for sensitive content.
  14. Click Create.

    Note: It can take up to 24 hours for the rule to apply to all user accounts in the selected organizational units and groups.
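The difference between Minimum unique matches and Minimum match count in the Conditions step can be sketched as follows. This is an illustrative model of the two thresholds, assuming a simple regex stands in for the predefined SSN detector.

```python
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def triggers(text, min_unique=1, min_count=1):
    """min_count counts every occurrence (duplicates included);
    min_unique counts distinct matched values only."""
    matches = SSN.findall(text)
    return len(set(matches)) >= min_unique and len(matches) >= min_count

msg = "123-45-6789 ... resending 123-45-6789"
# Two total occurrences, but only one unique value:
triggers(msg, min_count=2)   # True: the same SSN appears twice
triggers(msg, min_unique=2)  # False: only one distinct SSN
```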

Block Drive external sharing and a Chat message attachment that contains a passport number - external sharing only
This combination rule blocks external sharing of passport information through either a Chat attachment or a Drive file. 
  1. Sign in to your Google Admin console.

    Sign in using an account with super administrator privileges (does not end in @gmail.com).

  2. In the Admin console, go to Menu > Rules.
  3. Under Protect your sensitive content, click Create rule.
  4. Add the name and description for the rule, such as Block when sharing a passport number in Chat and Drive.
  5. In the Scope section, choose Apply to all <domain.name> or choose to search for and include or exclude organizational units or groups the rule applies to.
  6. Click Continue.
  7. For Google Drive select File created, modified, uploaded, or shared. For Google Chat select File uploaded only.
  8. Click Continue.
  9. In the Conditions section, click Add Condition and select the following values:
    1. Content type to scan—All content (note that All content is the only content type available if you select Google Chat, no matter what other apps are selected).
    2. What to scan for—Matches predefined data type (recommended)
    3. Select data type—United States Passport
    4. Likelihood Threshold—High.  The confidence threshold for the condition. This is an extra measure used to determine whether messages trigger the action.
    5. Minimum unique matches—1. The minimum number of times a unique match must occur in a document to trigger the action.
    6. Minimum match count—1. The number of times the content must appear in a message to trigger the action. For example, if you select 2, content must appear at least twice in a message to trigger the action.
  10. Click Continue. In the Actions section:
    1. Under Google Chat, select Block message. Also, select when the action should apply. For this example, deselect Internal conversations, and leave External conversations selected. You can also select which types of chats to apply the rule to.
    2. Under Google Drive, select Block external sharing.
  11. (Optional) In the Alerting section:
    • Choose a severity level (Low, Medium, or High) for how an event triggered by this rule is reported in the security dashboard. 
    • Choose whether an event triggered by this rule should also send an alert to the alert center. Also choose whether to email alert notifications to all super administrators or to other recipients.
  12. Click Continue to review the rule details. The action for Chat is to block content for external conversations only. The action for Drive is to block external sharing.
  13. Choose a status for the rule:
    • Active—Your rule runs immediately
    • Inactive—Your rule exists, but does not run immediately. This gives you time to review the rule and share it with team members before implementing. Activate the rule later by going to Security > Access and data control > Data protection > Manage Rules. Click the Inactive status for the rule and select Active. The rule runs after you activate it, and DLP scans for sensitive content.
  14. Click Create.

    Note: It can take up to 24 hours for the rule to apply to all user accounts in the selected organizational units and groups.

Log the mention of a project's codename or acronym in documents uploaded to Chat or Chrome
In this example, you log in the Rule audit log when a project's code name (here, SpiderWeb) or its acronym (here, SpdW) appears in documents uploaded to Chat (as an attachment) or through Chrome, but take no other action.
  1. Sign in to your Google Admin console.

    Sign in using an account with super administrator privileges (does not end in @gmail.com).

  2. In the Admin console, go to Menu > Rules.
  3. Under Protect your sensitive content, click Create rule.
  4. Add the name and description for the rule, such as Log when sharing names in chat or Chrome.
  5. In the Scope section, choose Apply to all <domain.name> or choose to search for and include or exclude organizational units or groups the rule applies to.
  6. Click Continue.
  7. For Chrome, select File uploaded only. For Google Chat select File uploaded only.
  8. Click Continue.
  9. In the Conditions section, click Add Condition and select the following values:
    1. Content type to scan—All content (note that All content is the only content type available if you select Google Chat, no matter what other apps are selected).
    2. What to scan for—Contains text string
    3. Enter contents to match—SpiderWeb
  10. Click Add condition to add an OR condition, and select the following values:
    1. Content type to scan—All content 
    2. What to scan for—Contains text string
    3. Enter contents to match—SpdW 
  11. Click Continue. In the Actions section, under Chrome and Chat, select Audit only. Also, for Chat, select when the action should apply. For this example, select both External conversations and Internal conversations.
  12. (Optional) In the Alerting section:
    • Choose a severity level (Low, Medium, or High) for how an event triggered by this rule is reported in the security dashboard. 
    • Choose whether an event triggered by this rule should also send an alert to the alert center. Also choose whether to email alert notifications to all super administrators or to other recipients.
  13. Click Continue to review the rule details. Under Action, note that the action for Chrome is audit only, and the action for Chat is also audit only, and mentions that the action occurs for external and internal conversations.
  14. Choose a status for the rule:
    • Active—Your rule runs immediately
    • Inactive—Your rule exists, but does not run immediately. This gives you time to review the rule and share it with team members before implementing. Activate the rule later by going to Security > Access and data control > Data protection > Manage Rules. Click the Inactive status for the rule and select Active. The rule runs after you activate it, and DLP scans for sensitive content.
  15. Click Create.

    Note: It can take up to 24 hours for the rule to apply to all user accounts in the selected organizational units and groups.

Create a custom detector and use it in a rule to warn users if they share project-sensitive terms

In this example, you create a custom detector that lists project-sensitive terms. Then, you’ll use this custom detector as a condition in a DLP rule.

Create the detector

  1. Sign in to your Google Admin console.

    Sign in using an account with super administrator privileges (does not end in @gmail.com).

  2. In the Admin console, go to Menu > Security > Access and data control > Data protection.
  3. Click Manage detectors.
  4. Click Add detector, then Word list.
  5. In the Add word list window:
    1. Add the name (such as Sensitive terms) and a description. 
    2. Add a comma-separated list of your sensitive terms. Note that capitalization and symbols are ignored, and only complete words are matched. Words in word list detectors must contain at least 2 characters that are letters or digits. 
  6. Click Create. Now, you can use the custom detector in a rule condition.
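The word-list matching rules in step 5 (case and symbols ignored, only complete words matched) can be sketched as follows. The tokenization here is an assumption about the exact semantics, used only to illustrate the documented behavior.

```python
import re

def word_list_matches(text, word_list):
    """Illustrative word-list detector: lowercase the text, split it into
    alphanumeric tokens (symbols ignored), then match whole words only."""
    tokens = set(re.findall(r"[a-z0-9]+", text.lower()))
    return [w for w in word_list if w.lower() in tokens]

word_list_matches("Project SPIDERWEB: kickoff!", ["SpiderWeb", "SpdW"])
# Capitalization and punctuation are ignored, so "SpiderWeb" matches.
word_list_matches("spiderwebs everywhere", ["SpiderWeb"])
# Only complete words match, so "spiderwebs" does not trigger.
```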

Use the custom detector in a rule

  1. Sign in to your Google Admin console.

    Sign in using an account with super administrator privileges (does not end in @gmail.com).

  2. In the Admin console, go to Menu > Rules.
  3. Under Protect your sensitive content, click Create rule.
  4. Add the name (such as Sensitive terms to warn users about) and a description for the rule. 
  5. In the Scope section, choose Apply to all <domain.name> or choose to search for and include or exclude organizational units or groups the rule applies to.
  6. Click Continue.
  7. For Google Chat select Message sent and File uploaded.
  8. Click Continue.
  9. In the Conditions section, click Add Condition and select the following values:
    • Content type to scan—All content (note that All content is the only content type available if you select Google Chat, no matter what other apps are selected).
    • What to scan for—Matches words from a word list
    • Word list name—Sensitive terms
    • Match mode—Match any word
    • Minimum total times any word detected—1
  10. Click Continue. In the Actions section, under Chat, select Warn users. Also, for Chat, select when the action should apply. For this example, select External conversations and Internal conversations.
  11. (Optional) In the Alerting section:
    • Choose a severity level (Low, Medium, or High) for how an event triggered by this rule is reported in the security dashboard. 
    • Choose whether an event triggered by this rule should also send an alert to the alert center. Also choose whether to email alert notifications to all super administrators or to other recipients.
  12. Click Continue to review the rule details. Under Action, note that the action for Chat is Warn users, and mentions that the action occurs for External and Internal conversations.
  13. Choose a status for the rule:
    • Active—Your rule runs immediately
    • Inactive—Your rule exists, but does not run immediately. This gives you time to review the rule and share it with team members before implementing. Activate the rule later by going to Security > Access and data control > Data protection > Manage Rules. Click the Inactive status for the rule and select Active. The rule runs after you activate it, and DLP scans for sensitive content.
  14. Click Create.

    Note: It can take up to 24 hours for the rule to apply to all user accounts in the selected organizational units and groups.


Protect personally identifiable information using a rule template

A rule template provides a set of conditions that cover many typical data protection scenarios. Use a rule template to set up policies for common data protection situations.

This example uses a rule template to block sending a chat message, uploading a file to a chat, or sharing a Drive file, if the message or file contains US personally identifiable information (PII). 

Before you begin, sign in to your super administrator account or a delegated admin account with these privileges:

  • Organizational unit administrator privileges. 
  • Groups administrator privileges.
  • View DLP rule and Manage DLP rule privileges. Note that you must enable both View and Manage permissions to have complete access for creating and editing rules. We recommend you create a custom role that has both privileges. 
  • View Metadata and Attributes privileges (required only for use of the investigation tool): Security Center > Investigation Tool > Rule > View Metadata and Attributes.

Learn more about administrator privileges and creating custom administrator roles.

  1. Sign in to your Google Admin console.

    Sign in using your administrator account (does not end in @gmail.com).

  2. In the Admin console, go to Menu > Rules.
  3. Click Templates.
  4. On the Templates page, click Prevent PII information sharing (US).
  5. In the Name section, accept the default name and description of the rule or enter new values.
  6. In the Scope section, search for and select the organizational units or groups the rule applies to.
  7. Click Continue. Under Apps, the following options are preselected:
    • for Google Chat, the Message sent and File uploaded boxes are checked.
    • For Google Drive, File created, modified, uploaded, or shared is selected. 
  8. Click Continue.
  9. Review the default preselected conditions for the PII rule template:
    • Content type to scan—All content (note that All content is the only content type available if you select Google Chat, no matter what other apps are selected).
    • What to scan for—Matches predefined data type (recommended)
    • Select data type—Several data types, including Social Security Number, Driver's License Number, and United States Passport number.
    • Likelihood Threshold—Very high. The confidence threshold for the condition. This is an extra measure used to determine whether messages trigger the action.
    • Minimum unique matches—1. The minimum number of times a unique match must occur in a document to trigger the action.
    • Minimum match count—1. The number of times the content must appear in a message to trigger the action. For example, if you select 2, content must appear at least twice in a message to trigger the action.
  10. Click Continue to review the default Actions selected for the PII rule template (for Chat, Block message; for Drive, Block external sharing). 
  11. Click Continue to review the rule details.
  12. Choose a status for the rule:
    • Active—Your rule runs immediately
    • Inactive—Your rule exists, but does not run immediately. This gives you time to review the rule and share it with team members before implementing. Activate the rule later by going to Security > Access and data control > Data protection > Manage Rules. Click the Inactive status for the rule and select Active. The rule runs after you activate it, and DLP scans for sensitive content.
  13. Click Create.

    Note: It can take up to 24 hours for the rule to apply to all user accounts in the selected organizational units and groups.

Scan images for sensitive content

Using optical character recognition (OCR), DLP for Chat scans the text in images uploaded as attachments to Chat messages for sensitive content.

Note that OCR is only available for attachments with images uploaded to Google Chat messages, and it can cause delays for messages containing images.

See Scanned file types above for a complete list of supported image formats.

Turn on OCR
  1. Sign in to your Google Admin console.

    Sign in using an account with super administrator privileges (does not end in @gmail.com).

  2. On the Admin console Home page, go to Security > Access and data control > Data protection.
  3. Under Data protection settings, click Optical character recognition (OCR). The default state is On. If OCR is Off, slide the toggle to On.
  4. Click Save.  This turns on OCR for data protection rules that apply to Google Chat.

Note: Once turned on, the OCR setting will apply to all DLP for Chat rules. It can’t be applied selectively to specific rules.

Verify that OCR is on when creating a rule

You can verify that OCR is turned on when creating a data protection rule.

  1. Sign in to your Google Admin console.

    Sign in using an account with super administrator privileges (does not end in @gmail.com).

  2. On the Admin console Home page, go to Rules.
  3. Under Protect your sensitive content, click Create rule.
  4. Enter a name and description for the rule. 
  5. In the Scope section, choose Apply to all <domain.name> or choose to search for and include or exclude organizational units or groups the rule applies to.
  6. Click Continue.
  7. Under Apps, for Google Chat, check File uploaded.
  8. Check whether OCR is turned on. If OCR is Off, slide the toggle to On.
  9. Click Continue to finish creating the rule. For help on creating rules, see DLP for Chat rule examples, above. 

Investigate Chat DLP violations using the Security investigation tool

After you’ve set up Chat DLP rules, rule violations are logged in the Rule log. You can use the Security investigation tool to search the Rule log and get specific information on the violating chat message or attachment, including:

  • Name of the DLP rule that was triggered
  • Message sender
  • Date the message was sent
  • Type of conversation—for example, 1:1 chat, or space.
  • Message content (depending on your message retention settings).

For complete steps, see Investigate Chat messages to protect your organization's data.

Investigation tool limitations

You can’t view the original violating message or attachment if:

  • It was not sent (was blocked). Only content that is sent and violates an audit-only rule can be viewed. 
  • It was sent in a conversation owned by another organization.
  • The message is past the retention period.
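The visibility rules above, combined with the retention behavior described earlier (24 hours when Chat history is off, the configured retention period when it's on), can be sketched as follows. This is an illustrative model of the documented behavior, not an actual API.

```python
from datetime import datetime, timedelta, timezone

def message_visible(sent_at, now, history_on, retention, was_sent):
    """Audit-log visibility per the limitations above:
    - blocked (never sent) content isn't viewable
    - history on: visible up to the configured retention period
    - history off: visible for 24 hours
    """
    if not was_sent:
        return False
    window = retention if history_on else timedelta(hours=24)
    return now - sent_at <= window

now = datetime(2024, 1, 2, 12, 0, tzinfo=timezone.utc)
sent = now - timedelta(hours=30)  # sent 30 hours ago
```

With a 30-hour-old message, the content is still viewable when history is on (inside a 30-day retention period) but not when history is off (past the 24-hour window), and never if the message was blocked.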