Service Accounts, IAM, API Scopes

A service account is an identity used to make GCP API calls to services. It is granted AuthZ roles; by default the service account gets Project Editor (an expansive AuthZ role).

User managed service accounts grant full control over the permissions attached to the account.

Applications running on instances associated with the service account can then make authenticated requests to other Google APIs using the service account identity.

VM and API Scopes

  • IAM Project Editor role can create and delete most GCP services and resources: Dangerous to use as a Service Account.
  • Scopes are used to limit permissions when using the default Service Accounts.
    • Before IAM Roles existed, Access Scopes were the only method of granting permissions to SAs.
    • Access scopes apply on a per-instance basis and last for the lifetime of the instance.

VM Instance Scopes

  • Default Access Scope
    • Very limited API scope
    • RO access to storage
    • Access to stackdriver logging and monitoring
  • Full Access to Cloud APIs Scope
    • Enables access(rw) to data services like BigQuery, DataStore, PubSub, etc…
    • Not best practice because it breaks Least Privilege principle.
  • Set access for each API with Scopes
    • Choose only scopes required by your application.
    • Better practice than granting full access if you cannot configure a Service Account to fill the role.
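The scope options above are set per instance at creation time. A minimal sketch with gcloud; the instance name and zone are assumptions:

```shell
# Create a VM with only the scopes the app needs (names are hypothetical):
# read-only storage plus logging and monitoring writes.
gcloud compute instances create scoped-vm \
  --zone=us-central1-a \
  --scopes=storage-ro,logging-write,monitoring-write
```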

Connecting to VMs

  • Linux instances connect via SSH by default

    • Username + SSH Key authentication
    • Password authentication disabled by default
    • UI SSH: GCP creates a new SSH connection from the browser using auto-generated SSH keys
      • VM must have a public IP, and a firewall rule must allow TCP 22 from GCP servers
    • gcloud SDK option
      • gcloud compute ssh [instance name] --zone [zone]
      • VM has public IP address and TCP 22 open
      • Stores SSH keys in ~/.ssh
    • Any valid SSH client will work if authentication credentials are valid (PuTTY, etc.)
    • Adding SSH Keys to Projects via Project Metadata
      • Provide only the public key.
      • Automatically added to all VMs in the project by default.
        • Can configure individual VMs to not include project wide keys.
        • Can be added to instance specific metadata as well.
    • Bastion Host
      • Edge VM with Public IP address to forward SSH connections to internal VMs
      • Internal VMs open to Bastion on tcp:22
      • Only use as last resort for maintenance
    • VPN/Cloud Interconnect preferred access internal network method
      • Provides access directly to the instance Internal IP
      • Better practice than Bastion hosts
  • Windows connected via Remote Desktop over RDP

    • Username + Password AuthN
      • Set via Console or gcloud
      • Can download RDP files
      • Once connected, change password to custom password
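Adding a project-wide SSH key (described above) can be sketched with gcloud; the username and key path are assumptions:

```shell
# Add a public key to project metadata so it propagates to all VMs.
# Format is USERNAME:KEY; names here are hypothetical. Note: this
# replaces any existing ssh-keys metadata value, so merge first in practice.
echo "alice:$(cat ~/.ssh/id_rsa.pub)" > keys.txt
gcloud compute project-info add-metadata \
  --metadata-from-file ssh-keys=keys.txt
```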

Organization Policy Service

  • Focuses on what can be accessed by IAM users.
  • Configure restrictions on how your organization's resources can be used.
    • Define and establish guardrails for development teams.
  • Assist product owners and teams with limits to stay in compliance boundaries

Define an organization policy by choosing a constraint with the desired restrictions. Descendants inherit the organization policy, and it is applied to all users in the hierarchy.

  • Allows you to set constraints that apply to all resources in hierarchy.
  • All descendants inherit the policy constraints.

Organization Policy Constraints

Blueprint for what behaviors are controlled.

  • List Constraint
    • Allows or denies a specific list of values
    • eg compute.vmExternalIpAccess
  • Boolean Constraint
    • Turn policies On or Off
    • eg compute.disableSerialPortAccess

Example is Compute’s Trusted Image Projects constraint. Used to enforce which images can be used in your organization. This allows you to host organization-approved, hardened images in your GCP environment.
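A boolean constraint like the serial-port example above can be enforced from the CLI; the project ID is an assumption:

```shell
# Enforce the boolean constraint that disables serial port access
# for all VMs in the project (project ID is hypothetical).
gcloud resource-manager org-policies enable-enforce \
  compute.disableSerialPortAccess --project=my-project
```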

GCE Best Practices

  • Control access to resources with projects and IAM.
    • Projects form the basis for creating, enabling, and using all GCP services, including GCP management permissions.
  • Isolate machines using multiple networks.
    • If resources are unrelated, host them in separate VPC networks
  • Secure connect to GCP networks using VPNs or cloud interconnect.
    • Use Cloud Interconnect or Cloud VPN to connect projects to datacenters.
  • Monitor and Audit logs regularly
    • Use Cloud Audit Logs to monitor API operations performed on GCP.
    • Audit logs record who did what, where, and when.
      • How resources were modified and accessed in GCP projects.
  • Only allow VMs to be created from approved Images.
    • Restrict from default of allowing any image.
  • Use Trusted Image Policy to enforce use of approved images.
  • Harden Custom OS images to help reduce the surface of vulnerability for the instance.
    • Requires maintenance for security patches to stay up to date
  • Subscribe to gce-image-notifications for GCP Image Update notices
  • Keep deployed Compute Engine instances updated.
    • Patch when necessary, but prefer to patch the image, and replace instance with new copy.
  • Run VMs using custom service accounts with appropriate roles.
    • Each instance that needs to call a Google API should run as a service account with the minimum permissions necessary for that instance to do its job.
    • Configure instance to run as that SA.
  • Avoid using default service account.
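The last two practices above can be sketched with gcloud; all names and the example role are assumptions:

```shell
# Create a least-privilege service account instead of using the default one.
gcloud iam service-accounts create app-sa --display-name="App service account"

# Grant only the role the workload needs (role shown is an example).
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:app-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/pubsub.publisher"

# Run the instance as that service account; the cloud-platform scope
# defers authorization entirely to IAM roles.
gcloud compute instances create app-vm --zone=us-central1-a \
  --service-account=app-sa@my-project.iam.gserviceaccount.com \
  --scopes=cloud-platform
```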

Encrypting disks with CSEK

Protecting data at rest in GCP

  • GCP encrypts all data at rest by Default
  • Uses Keyczar to implement encryption consistently across all GCP products
    • Storage, PDs, Cloud SQL, Disk snapshots +custom images, etc

Encryption at Rest

All data stored in GCP is encrypted with a unique data encryption key or DEK

  • Data is broken into sub-file chunks for storage.
    • Each chunk can be several gigabytes in size and is encrypted with a unique key.
    • Two chunks will not have the same encryption key, even if they are part of the same GCS object, owned by the same customer, or stored on the same machine.
    • Encrypted chunks are distributed across GCS's infrastructure.
    • The blast radius of a potential encryption key compromise is limited to a single data chunk.
  • Data encryption keys are encrypted with or wrapped by key encryption keys or KEKs.
    • Wrapped data encryption keys (DEKs) are then stored with the data. Key encryption keys (KEKs) are stored and used exclusively inside Google's central Key Management Service: KMS.
  • KMS keys are backed up for disaster recovery indefinitely.
    • Decryption requires unwrapping DEK per data chunk.
  • By default, entire process is enabled and fully managed by Google. Nothing to enable or configure.
  • Google also manages key rotation schedule.
    • Standard rotation service for KEKs is 90 days(varies per service)
    • Google stores up to 20 versions
    • Re-encryption of data is required at least once every 5 years.

Customer supplied and managed keys

  • Customer Managed Keys

    • Allows customer management of KEKs
      • Generate Keys
      • Rotation Periods
      • Expire Keys
    • KEKs still stored on Google KMS
  • Creating keys with KMS

    • KMS uses an object hierarchy: keys belong to a key ring, and key rings reside in a location (global, region, etc.)
      • Location eg: global/dougs-key-ring/dougs-managed-key
    • Create a key ring
    • Add a Key
    • Specify type of key(symmetric, asymmetric, etc)
  • Using Customer-Managed Keys

    • Choose your managed key when creating VMs, disks, images, storage buckets, etc
    • Grant permissions to the Service Account to use your Key.
  • Customer Supplied Keys

    • Create Keys on premises
      • User is responsible for all key management and rotation
    • Google will not store your keys. You are responsible for all management!
    • The key must be provided with every interaction with encrypted resources in order to decrypt them.
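The CMEK flow above, reusing the key-ring and key names from the example path, might look like this; the project and service-account names are assumptions:

```shell
# Create a key ring and a symmetric key in it.
gcloud kms keyrings create dougs-key-ring --location=global
gcloud kms keys create dougs-managed-key \
  --location=global --keyring=dougs-key-ring --purpose=encryption

# Grant the service account permission to use the key (member is hypothetical).
gcloud kms keys add-iam-policy-binding dougs-managed-key \
  --location=global --keyring=dougs-key-ring \
  --member="serviceAccount:app-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/cloudkms.cryptoKeyEncrypterDecrypter"
```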


  • Default Service Accounts are how projects communicate within GCP - but need to be properly configured.
    • Access scopes are one way to lock down Service Accounts
    • Set Access Scopes, or grant access to APIs individually, when creating an instance
  • Remote access to VMs in GCE
    • Linux VMs accessed via SSH or Cloud SDK
      • Default SSH username + SSH Key for authN
      • Password AuthN disabled by default
    • Windows instances via RDP or gcloud commands
      • Username + Password to AuthN
        • Configured via GCP console or UI
    • Private Keys allow SSH to any VM or terminal applications
      • Putty or other clients to connect
  • Organization Policy Service allows centralized management and control over an organization’s cloud resources.
    • Configure restrictions across entire resource hierarchy.
  • GCP best Practices can help you create more secure instances as well as keep them secure.
    • Control Access, isolation of machines, connecting securely, regular monitoring, and auditing of logs.

Securing Cloud Data: Techniques and best practices

Cloud Storage IAM permissions and ACLs

Control who, and what they have access to.

  • Members can be granted access to Cloud Storage at the organization, folder, project, or bucket levels.
    • Permissions flow down from higher levels.
    • Cannot remove a permission at a lower level that was granted at a higher level.
      • There are no explicit DENY rules.
      • Always use principle of Least Privilege
  • Use IAM roles to grant permissions to Storage buckets.
    • Permissions are inherited from higher levels.
    • Broad control over projects and buckets, but not individual objects.
  • Access Control Lists (ACLs) can be used to grant access to objects in buckets.
    • Applied to individual buckets or objects.

IAM and ACLs can work in tandem to grant access to your buckets and objects.

In most cases, you should use IAM permissions instead of ACLs.

  • To make a bucket public, grant allUsers the Storage Object Viewer role.
  • To make an object public, grant allUsers Reader access on that object.

Be very careful about which objects or buckets you make public; use this option with extreme caution.
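The two public-access grants above map to gsutil commands; the bucket and object names are assumptions:

```shell
# Make every object in the bucket publicly readable (use with extreme caution).
gsutil iam ch allUsers:objectViewer gs://example-bucket

# Or make a single object public via its ACL.
gsutil acl ch -u AllUsers:R gs://example-bucket/report.pdf
```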

Auditing Cloud Data

  • GCP Cloud Storage bucket administrative activity is logged automatically (by default).
    • Tracks all operations that modify the configuration or metadata of a bucket or object.
    • Data access logging is not enabled by default; it must be turned on.
      • Data access logs are turned on at the bucket level.
  • Export logs to BigQuery for analysis
    • Create a BigQuery dataset
    • Use a load job to copy log data into BQ tables
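The export step above can be sketched as a bq load job; the dataset, table, and log path are all assumptions:

```shell
# Copy exported JSON audit logs from a bucket into a BigQuery table
# (all names here are hypothetical).
bq load --source_format=NEWLINE_DELIMITED_JSON --autodetect \
  logs_dataset.audit_logs gs://my-log-bucket/log-*.json
```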

Signed URLs and policy documents

Signed URLs and signed policy documents enable detailed access control for users without GCP accounts or for public visitors, so they can download from or upload to a bucket: unauthenticated user access.

  • Signed URLs
    • Allow access to GCS w/o adding a user to an ACL or IAM
    • Temporary access with a timeout. Read or Write access.
    • Anyone with signed URL has access
    • Can be created with gsutil or from Application.
    • gsutil
      • Create Service Account with rights to Storage
      • Create Service Account Key
      • Use signurl command, returns a URL that allows access to resource using SA’s key
        • eg: gsutil signurl -d 10m ~/key.json gs://super-secure-bucket/noir.jpg
        • -d <duration of access>
  • Signed Policy Documents
    • Specify what can be uploaded to a bucket with a form POST request
    • Enable control over size, content type, and other upload characteristics that signed URLs cannot enforce.
    • Defined as JSON
    • Requirements of SPDs
      • Policy document must be UTF-8 encoded.
      • Encode the policy document in base64
      • Sign the base64-encoded policy document with RSA SHA-256 using the secret key provided by GCP: the message digest
      • Base64-encode the message digest to produce the auth message
      • Add the auth message and the SPD to the HTML form
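The SPD signing steps above can be reproduced locally with openssl. This is a sketch only: it generates a throwaway key for illustration, whereas in practice the signing key comes from a GCP service account.

```shell
# Throwaway RSA key for demo purposes only (real key comes from GCP).
openssl genrsa -out demo-key.pem 2048

# A minimal UTF-8 policy document restricting upload path and size.
cat > policy.json <<'EOF'
{
  "expiration": "2030-01-01T00:00:00Z",
  "conditions": [
    ["starts-with", "$key", "uploads/"],
    ["content-length-range", 0, 1048576]
  ]
}
EOF

# Base64-encode the policy, sign it with RSA SHA-256, then base64 the
# signature: this is the auth message added to the HTML form.
openssl base64 -A -in policy.json -out policy.b64
openssl dgst -sha256 -sign demo-key.pem policy.b64 | openssl base64 -A > signature.b64
```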

Encrypting with CMEK and CSEK

  • All data in GCP is encrypted with a unique data encryption key(DEK)
  • DEKs are encrypted with Key Encryption Keys(KEKs) and stored with the data.
  • By default KEKs are stored and used inside Key Management Service KMS
  • Decrypting data requires the unwrapped data encryption key (DEK) for the data chunk.
  • Customer-Managed Keys
    • Allow you to manage the KEKs
      • Generate keys
      • Rotation periods
      • Expire keys
    • KEKs stored on Google KMS
  • Customer-Supplied Keys
    • Create keys on premises
    • You are responsible for all key management and rotation.

BigQuery IAM roles and authorized Views

Managing access to datasets and tables.

  • Permissions are granted to BigQuery Datasets
    • Cannot grant permissions at the table, row, or column level
    • Use IAM roles to control access to data.
    • All tables in dataset need to share the same permissions.

Roles are mapped to functional roles.

  • Assign groups(or users) to BigQuery IAM roles.

    • Generally considered best practice to manage users in groups.
    • Lower management overhead of users.
  • Provide groups or users with access to datasets

    • Can be viewer, editor, or owner
  • The user who created a dataset is the owner

    • Can add additional users and other owners.
  • Authorized Views

    • Enables Admins to see all data, but other users to only see subset of data.
    • Views provide row or column level permissions to datasets.
    • Create a second dataset with different permissions from the first.
    • Add a view to the second dataset that selects the subset of data you want to expose from the first dataset.
    • In the first dataset you have to give the view access to the data:
      • Known as an authorized view.
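The authorized-view steps above can be sketched with bq; the dataset, view, and project names are assumptions:

```shell
# Second dataset with different permissions, holding the view.
bq mk --dataset shared_views

# A view exposing only a subset of columns from the private dataset.
bq mk --use_legacy_sql=false \
  --view='SELECT name, city FROM `my-project.private_data.customers`' \
  shared_views.customers_public

# Finally, authorize the view on the source dataset, e.g. by editing
# the dataset ACL JSON:
#   bq show --format=prettyjson private_data > ds.json
#   (add the view under "access", then)  bq update --source ds.json private_data
```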

Storage Best Practices

GCS Best Practices

  • Don’t use Personally Identifiable Information(PII) in bucket names.
    • Cloud storage bucket names must be globally unique.
    • Use GUIDs for bucket names, plus retry logic in case of collisions, and keep track of all created buckets centrally.
    • Use buckets based on a domain name, manage bucket object names as subdomains.
  • Don’t use PII in object names, because object names appear in URLs
  • Set default object ACLs on buckets.
    • Check if the ACLs meet requirements before adding objects.
  • Use Signed URLs to provide access for users with no account.
  • Don’t allow buckets to be publicly writable.
  • Use lifecycle rules to remove sensitive data that is no longer needed.
    • eg delete objects older than 3 years, downgrade storage class of objects
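The lifecycle example above (delete after 3 years, downgrade storage class) as a JSON rule set; the bucket name and 1-year downgrade are assumptions:

```shell
# Downgrade objects to Coldline after 1 year, delete after 3 years.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
     "condition": {"age": 365}},
    {"action": {"type": "Delete"}, "condition": {"age": 1095}}
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://example-bucket
```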

BigQuery Best Practices

  • Use IAM roles to separate who can create and manage Datasets versus who can process the data.
    • Be careful when giving all authenticated users access to data.
  • Use authorized views to restrict access to sensitive data:
    • Principle of least privilege.
    • Limit to subsets of data.
  • Use Expiration settings to remove unneeded tables and partitions.
    • If an expiration period is added after the BQ dataset is created, the expiration applies only to new tables.
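A default table expiration can be set on an existing dataset with bq; the dataset name and 30-day value are assumptions:

```shell
# New tables in the dataset expire 30 days (2592000 s) after creation;
# existing tables keep their current expiration.
bq update --default_table_expiration 2592000 mydataset
```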


  • Cloud Storage buckets can be given access permissions at the organization, folder, project, or bucket level
    • Many predefined roles exist for storage access, e.g. the Storage Object Admin, Storage Object Creator, and Storage Object Viewer roles.
  • ACLs can define who has access to individual buckets and objects as well as the level of access they have.
  • Administrative operations on buckets are logged automatically
    • Data operations are not logged by default, and have to be configured.
  • Signed URLs and Signed Policy Documents give limited time permissions to read, write, and upload to users who do not have Google Accounts.
  • BigQuery datasets can have permissions granted at the dataset level; authorized views provide row- or column-level control.
  • Do not use PII as bucket or object names because it will appear in URLs.

Protecting against Distributed Denial of Service Attacks(DDoS)

How DDoS attacks work

Malicious attack attempts to make your online application unavailable by overwhelming it with traffic from multiple sources.

Typically botnets are used to produce traffic from infected computers; the infected machines direct traffic at a victim's endpoint.

  • Attack loads
    • 2017 DDoS: 1Tb/s
    • Whole internet bandwidth: 200Tb/s
    • GCP Datacenter bandwidth: 1300Tb/s

GCP Mitigations

  • Leverage Google’s Load Balancer
    • Provides built in defense against Infrastructure DDoS attacks.
    • No additional configuration required to activate defense.
    • Layer 4 LBs protect against UDP floods and TCP SYN floods.
    • Layer 7: additional protections from connection-based attacks like Slowloris.
    • The global load balancer automatically drops or throttles traffic when it detects attacks.
  • Reduce attack surface in VPCs
    • Isolate machines within VPCs
    • Don't expose them to the external web unless necessary (use NAT gateways)
    • Isolate resources in VPCs if they don’t require cross channel communication.
    • Create a separate subnet within a (VPC)network for each tier of application.
      • Convenient way to implement firewall restrictions.
    • Use firewall rules to block unwanted sources/ports
    • Use firewall tags and service accounts to control targets.(Best Practice)
  • Isolate internal traffic
    • Restrict public access to internal traffic.
      • Don’t give machines public IPs unnecessarily.
      • Use Bastion hosts to limit machines exposed to the internet.
      • Use internal load balancers for internal services.
  • Use Cloud CDN
    • Caches content between your users and your servers.
    • Requests for cached content are routed to POPs.
    • Google’s massive infrastructure can absorb attacks.
  • Use API management and monitoring
    • Create an API gateway to manage your backend services.
      • Throttle requests to limit requests from clients.
      • Control access to APIs from a single location.
      • Monitor API usage.
    • Can use Cloud Endpoints or Apigee to create API Gateways.
  • Leverage Cloud Armor
    • DDoS and Application defense service. Uses Googles global infrastructure.
    • Delivered at the edge of Google’s network to provide DDoS Protection.
    • Enables IP blacklist/whitelist security policies.
      • IPv4, IPv6, CIDR address.
    • Customized via security policy rules.
      • Condition: what traffic to match (matching type)
      • Action: Allow/Deny
      • Target: where to apply the rule
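A Cloud Armor policy with one deny rule, following the rule structure above; the policy name, backend service, and IP range are assumptions:

```shell
# Create a security policy and add a rule denying a source range.
gcloud compute security-policies create deny-bad-ips \
  --description="Block known-bad source range"
gcloud compute security-policies rules create 1000 \
  --security-policy=deny-bad-ips \
  --src-ip-ranges="198.51.100.0/24" \
  --action=deny-403

# Attach the policy to a backend service (the target).
gcloud compute backend-services update web-backend \
  --security-policy=deny-bad-ips --global
```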

Types of complementary partner products

GCP Partners with security centric firms.

  • Infrastructure protection: DDoS protection, network and application firewalls, intrusion detection and prevention, and container security.
  • Scanning, logging, and monitoring: vulnerability scanners and security information and event management tools.
  • Identity and user protection: single sign-on, identity and access management, anti-malware, mobile device and application management, and cloud access security brokers.
  • Configuration, vulnerability, risk, and compliance: spans all of the areas in your infrastructure.

Infrastructure Protection Partners

Provide: next-generation firewalls, web application firewalls, web proxies and cloud gateways, server endpoint protection, distributed denial-of-service, and container security.

Data Protection Partners

Protect your data from unauthorized access, as well as internal and external threats, through encryption, key management, and policy-driven data loss prevention controls.

Logging and Monitoring Partners

Enable visibility and auditability of user and system activities in your infrastructure while providing policy-driven alerting and reporting.

Configuration, Vulnerability, Risk, and Compliance Partners

Facilitate the visualization and inspection of your network and application deployments for vulnerabilities, security, and compliance risks, and assist with remediation.


What is a DDoS attack? A Distributed Denial of Service or DDoS attack is a malicious attempt to disrupt the normal traffic of a targeted service, server, or network. They work by overwhelming the target or its surrounding infrastructure with a flood of Internet traffic from multiple sources, most often using zombie computers that have been compromised. Armies of infected computers, known as botnets, can be huge, millions of computers in size. DDoS mitigation requires a multi-layered set of strategies. These include load balancing, reducing attack surface, isolating internal traffic, monitoring APIs to detect and throttle malicious traffic, using CDNs to isolate and serve content, and finally, the use of third-party resources for specific needs. GCP provides tools to execute these strategies, including the GCP load balancer, VPCs for isolation, Cloud Endpoints or Apigee to create API management gateways, Cloud CDN for serving content from the edge, and Cloud Armor for at-scale defense of your applications and operations. Third-party options are also available that complement GCP's products. Different categories of third-party security in the ecosystem include infrastructure protection partners, data protection partners, monitoring and logging partners, and vulnerability, risk, and compliance partners. A more detailed, current list of partners is available from Google. In our next module we will discuss another important security topic, application security.

GCP Application Security

Types of application security vulnerabilities

  • Developer’s main objective is often features and functionality
    • Security is often neglected.
  • Applications have become the most common target of attackers.
  • Commonly found vulnerabilities will be discussed here.
  • Injection Flaws: malicious content is injected into an application by an attacker, and the application then accepts and interprets that content.
    • SQL Injection, LDAP Injection, & HTML Injection
  • Cross Site Scripting: XSS
    • Inject malicious JavaScript into an application; the script originates from a different site.
    • Executes scripts in the victim's browser to hijack sessions, deface websites, or redirect users to malicious sites.
  • Authentication, access control, session management are application logic functions often implemented poorly.
    • Can compromise passwords, keys, or session tokens, enabling exploitation of other users' identities.
  • Protecting sensitive data
    • Attackers may steal or modify weakly protected data to facilitate credit card fraud, identity theft, or other data and identity crimes.
    • Transferring data without extra protection can be compromised.
    • Securing data requires encryption at rest or in transit.
      • Special precautions must be taken when storing in a browser.
  • Security Misconfiguration is common
    • Result of default configurations, incomplete or ad-hoc configurations, open cloud storage, misconfigured HTTP headers, verbose error messages with sensitive information.
    • Operating systems, frameworks, libraries, applications must be securely configured, patched, and upgraded in timely fashion in order to keep them secure.
  • Component insecurity
    • Libraries or components with known vulnerabilities undermine application defenses and enable attacks and impacts.
    • Libraries, frameworks, and other software modules run with same privileges as the application. Giving the application a vulnerable attack surface.

Cloud Security Scanner

Scanners can identify weaknesses in your applications.

Cloud Security Scanner checks your applications for common vulnerabilities.

  • XSS
  • Flash Injection
  • Clear text passwords
  • Use of insecure JavaScript libraries
  • Mixed content: HTTP vs HTTPS

Security scans are free for Google Cloud Platform users.

  • Navigates every link it finds(except those excluded)
  • Activates every control and input.
  • Logs in with specific credentials.
    • Then scans resources that are only visible to AuthN users.
  • User-agent and maximum QPS can be configured.
  • Scanner is optimized to avoid False positives.
  • Scans can be scheduled(for downtime), or manually initiated.
    • Scans can be set to a schedule.
    • Scan duration scales with application size. Large apps can take hours to complete.
  • Scanner generates real load against your application.
  • Scanner can change state data in your application.
    • Use with caution.
    • eg: Scanner might produce comments on a blog

Avoiding unwanted impact

  • Run scans in a test environment
  • Use test accounts when providing AuthN to scanner
    • If there is a different workflow for first-time users, it will change scan results. Best to give the scanner an established/normal test account.
  • Block specific UI Elements
    • Use event handlers to block interaction. CSS: class: inc-no-click
  • Block specific URLs
  • Make backup of all data prior to scan.

Threat: Identity and OAuth Phishing

  • Types of Phishing Attacks

  • A phishing attack is a fraudulent attempt to obtain sensitive information such as usernames, passwords, and credit card numbers.

    • PII is always sensitive, even if seemingly innocuous; age, email address, or phone number can be used by an attacker to impersonate someone.
  • Phishing attacks pose a constant threat to businesses.

  • Identity Phishing

    • Stealing someone’s identity or credentials.
    • Hide as many parts of the identity puzzle as possible.
    • Hackers create a look-alike website to trick users into giving up personal or sensitive information.
      • DNS and UI look as legitimate as possible.

For example, a hacker may create a duplicate Facebook website on their own servers designed to lure the unwary, then send an email or pop-up notice that convinces a visitor they must log in to Facebook for some reason. However, once the user has typed their email and password into the site, that information is given to the hacker, and they have been phished.

  • OAuth Phishing
    • Technique that takes advantage of the OAuth open authorization standard to gain backend access to user accounts.
    • Can gain access to accounts without needing the password.
    • Trick users to grant persistent access to their accounts(data).
    • More difficult to detect by users of varying experience levels.
    • Deception can be very well designed(and trick Google Users…)

Cloud Identity-Aware Proxy

Super dope: allows AuthN/Z users access to internal resources. Cloud IAP provides a central authentication and authorization layer for your applications over HTTPS.

Removes the need for VPN tunnels or for applying AuthN/Z in front of internal applications. Users can access the web app from anywhere.

  • IAP
    • Controls access to your cloud applications running on GCP
    • Verifies a user’s identity
    • Determines whether that user should be allowed to access the application.
    • Manage access to App Engine, GCE, K8s clusters.
  • Central authentication and authorization for your application provided over HTTPS.
  • Simpler administration
  • Deploys in minutes
    • No VPN to implement & maintain
    • No VPN clients to install
  • Saves end user time
    • Faster to sign in to than VPN
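Enabling IAP and granting a user access might look like this for an App Engine app; the member address is an assumption:

```shell
# Turn on IAP for the project's App Engine app.
gcloud iap web enable --resource-type=app-engine

# Grant a user the role IAP checks when authorizing requests.
gcloud iap web add-iam-policy-binding --resource-type=app-engine \
  --member="user:alice@example.com" \
  --role="roles/iap.httpsResourceAccessor"
```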


  • App security is often ill-defined and untested in software development projects.
    • Hackers target applications more than infrastructure or networks.
  • Focusing attention on common application vulnerabilities during development and deployment helps.
    • eg: XSS
  • Cloud Security Scanner can automatically scan and detect common application vulnerabilities.
    • Can be run scheduled or manually
  • Identity or OAuth phishing are other common attacks
  • IAP replaces end user VPN tunnels
    • Google uses it to secure its applications
    • Works with GCE and K8s instances

Threat: Ransomware

  • Prominent threat infecting enterprise networks is ransomware.
  • Hackers threaten to publish the victim’s data or perpetually block access to data unless a ransom is paid.
  • Commonly uses crypto-viral extortion to make data inaccessible.
    • Thanks bitcoin…

Ransomware Mitigations

  • GCP provides multiple layers of protection against ransomware, enabled by default.
  • Google has global visibility into malicious sites and content.
    • This makes the detection of incoming attacks very effective.(citation needed)
      • Google claim: ‘every day find and label malicious sites and warn incoming users of suspected malware.’
  • End user protection
    • Gmail automatically prevents many malicious attacks from reaching inboxes
    • Google Safe Browsing identifies dangerous links
    • Google Drive scans files for malware(unless big file…)
  • Data related mitigations
    • Reduce vulnerabilities and their ramifications
    • Make regular backups
    • Use IAM best practices
    • Use the Data Loss Prevention API
  • Data related mitigations: Backups
    • Ransomware often targets destroying backups to prevent data recovery.
    • Conduct regular/frequent data backups -> Store backups in multiple secure isolated locations.
    • A backup that is not accessible by the main systems cannot be destroyed along with them.
    • Having durable, secure backups can mitigate effects of ransomware.
  • Restrict administrative access
    • Principle of least privilege
    • Some strains of ransomware operate from system administrator account to perform operations. Decreasing number of user accounts limits attack surface.
  • Restrict code execution
    • Use service accounts with appropriate roles.
    • If ransomware executes from temporary data folders but cannot access data folders due to access controls on service accounts, ransom encryption of the data is roadblocked.
  • Ensure that sensitive data is not accidentally exposed.
    • In the past, sensitive data has been accidentally exposed to the public by organizations via screenshots and other published documents. This data is often used by attackers to help gain the initial access to your systems.
  • Leverage Google Data Loss Prevention API
    • Scan documents for sensitive data before publication.

Threats: Data Misuse, privacy violations, sensitive content

Serious business implications: high monetary costs of data recovery, possible litigation, criminal prosecution, and fines.

  • Data Misuse
    • Inappropriate use of any type of data. Usually PII.
    • Can be a legal/regulatory violation.
    • Can also be use of the data in a way that was not intended when it was collected.
    • Exposing sensitive content.
      • Credit card numbers, screenshots made public
    • Allowing access to restricted content through bad permissions or inadequate security.
    • Inadvertently including unacceptable content.
      • “users” adding data to your application; reviews, comments etc
  • Privacy violations
    • Accidentally exposing sensitive data can have additional ramifications
      • Credibility loss
      • Identity theft
      • Legal/regulatory risk
  • Classifying Content
    • Ensure security controls align with the value of the data
    • GCP Natural Language API
      • Classifies content into categories along with confidence score
  • Scanning content and redacting sensitive data
    • Block documents, screenshots, images or other content which contains sensitive data.
    • Leverage Google Data Loss Prevention API
      • Scan all documents for sensitive data before publication
      • Redact any sensitive data.
  • Detecting unacceptable content and blocking it before publishing (YouTube comments, explicit images…)
    • Moderating and detecting inappropriate content manually is expensive.
    • The Google Vision API easily detects different types of inappropriate content, from adult to violent content.
    • Use the Video Intelligence API for video, and the Data Loss Prevention API to detect sensitive data before exposure.

The Google Data Loss Prevention (DLP) API is a machine-learning-based service that can scan various document formats and images for many types of sensitive data and then redact it, even in images.
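A DLP content:inspect request can be built as JSON and sent to the REST endpoint. The project ID is a placeholder, and the call itself (shown commented out) requires authentication:

```shell
# Build an inspect request for two common infoTypes.
cat > inspect-request.json <<'EOF'
{
  "item": {"value": "Contact me at test@example.com or (555) 555-0100"},
  "inspectConfig": {
    "infoTypes": [{"name": "EMAIL_ADDRESS"}, {"name": "PHONE_NUMBER"}]
  }
}
EOF

# Requires gcloud auth; PROJECT_ID is a placeholder.
# curl -s -X POST \
#   -H "Authorization: Bearer $(gcloud auth print-access-token)" \
#   -H "Content-Type: application/json" -d @inspect-request.json \
#   "https://dlp.googleapis.com/v2/projects/PROJECT_ID/content:inspect"
```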


  • Ransomware is a type of malicious software exploit that threatens to publish the victim’s data or perpetually block access.
  • GCP provides multiple layers of protection against ransomware by default.
  • Leverage regular backups, use IAM best practices, and use the Data Loss Prevention API
  • Data misuse is the inappropriate use of any type of data (especially user data)
  • GCP APIs make mitigation of data misuse easier
    • Natural Language API
    • Vision API
    • Video Intelligence API
    • Data Loss Prevention API

Security in Google Cloud Platform achieved!