/r/googlecloud
The go-to subreddit for Google Cloud Platform developers and enthusiasts.
We do not allow advertising of your job posting, product, or software without active discussion and/or an attempt to solve a problem. We are fine with soliciting feedback for something you're working on or something you've written, but drive-by advertising will get your post removed. :)
I'm trying to isolate the part of my system that requires Gmail API restricted scope to streamline the CASA Tier 2 assessment. My goal is to handle the Gmail integration in a dedicated microservice to keep the compliance process as smooth as possible, rather than subjecting my entire backend to the audit. Would this approach effectively limit the scope of the CASA Tier 2 review?
Hey guys,
Hoping for some advice on what you consider the bare minimum for a production server in terms of security, monitoring, etc.
Some context:
I'm the first hire as a developer for an old Ruby on Rails app that was bought with little to no knowledge transfer, and to say it's a bit of a mess would be an understatement...
The app itself is running on Compute Engine VM instances for both its production and staging envs; both VMs have their respective MySQL database running alongside the app on the server (fighting to get prod moved out ASAP).
There are no automated backups for the DB in place and no snapshots of the VMs, and I found a few simple security issues like permitting password login for root, etc.
I'm very new to Google Cloud and don't want to run up a bill on unnecessary things, so any and all suggestions are welcome. Thanks!
Hello, obligatory forgive me if this is a noob question.
I'm trying to set up OAuth in a native Tauri desktop app, and I'd prefer to do this without relying on a web server to handle redirects (that would be all it would do; the rest of the app doesn't need one at all).
I've found that the authorized redirect URI field doesn't support custom URL schemes, so I can't use a protocol handler, and it doesn't support ports, so I can't use a loopback address without relying on port 80 access.
Is what I'm trying to do here even supported? I'm finding this unreasonably challenging, not sure if I'm missing something.
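For what it's worth, Google's native-app flow (RFC 8252) treats loopback redirects specially: with an OAuth client of type "Desktop app" you don't register redirect URIs at all, and http://127.0.0.1 is accepted on any ephemeral port, so no port 80 is needed. A minimal sketch of the redirect catcher (written in Node for brevity, though the same flow works from Rust; the token exchange itself is assumed to happen elsewhere):

// Catch the OAuth redirect on an ephemeral loopback port.
const http = require('http');

function waitForAuthCode() {
  return new Promise((resolve) => {
    const server = http.createServer((req, res) => {
      const code = new URL(req.url, 'http://127.0.0.1').searchParams.get('code');
      res.end('You can close this window now.');
      server.close();
      resolve(code);
    });
    server.listen(0, '127.0.0.1', () => {
      const {port} = server.address(); // OS-assigned port, no port 80 needed
      const redirectUri = `http://127.0.0.1:${port}`;
      console.log(`Launch the consent URL with redirect_uri=${redirectUri}`);
      // ...open the system browser to the auth URL here, then exchange the
      // resolved `code` (plus a PKCE verifier) for tokens.
    });
  });
}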
I would like to preface this by saying that I have very minimal experience with this sort of stuff so I have a lot of questions and need a lot of help.
I am trying to fix a website for my company (approximately 30 employees) that has been very buggy for the last 2 years. When it was first created, we used Google Domains with S3 buckets to host the static website. It was fine for a few years and had minimal problems. I think it got complicated after Squarespace purchased Google Domains. I'm not entirely sure, as I mostly worked on the design of the website using HTML/CSS. I basically had no part in the DNS/domain setup. The coworker who did recently passed away, so the responsibility for the website falls on me now.
No one in my company has access to those S3 buckets anymore, so I want a fresh start with new S3 buckets. I was initially planning to move the domain from Squarespace to Route 53, but then I saw that our Google Workspace is connected to this domain. I panicked, since we have 27 email addresses on this domain and rely heavily on Google Workspace to run the company. Will someone be able to help me point the Squarespace domain to the new S3 buckets? I also keep seeing CloudFront mentioned, so I don't know if I need to enable that.
I spoke to support from both Squarespace and AWS and neither were able to help me come up with a solution that won’t disrupt the entire company’s email usage and Google workspace usage. AWS Support isn’t familiar with Squarespace’s setup and Squarespace Support isn’t familiar with AWS’s setup. I’m hoping someone here might have experience with both and can help me. AWS support said I should try Google Cloud because it might be more compatible with Squarespace. Any help in this matter would be greatly appreciated!
Thank you.
I am using the following to insert an array of records into a table. For simplicity, let's just say the array is size=1. I am trying to get an idea of how much this would cost but can't find it anywhere on GCP.
The estimate I get from the "BigQuery > Queries" part of the studio bugs out for me when I try to manually insert a document this large. If I get it to work, would that show me? Otherwise, I've looked at "BigQuery > Jobs explorer" and have only found my recent SELECT queries. I also looked all over "Billing", and it seems like "Billing > Reports" gives me daily costs, but I'm not sure how often this is refreshed.
const insertResponse = await table.insert(batch);
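For context, table.insert() in the Node client goes through the streaming tabledata.insertAll API rather than a query job, which is why nothing shows up in the Jobs explorer besides the SELECTs and why the query cost estimator can't price it. Streaming inserts are billed per byte ingested (with a minimum billed size per row; check the current BigQuery pricing page for the exact rate) and appear in Billing > Reports under the streaming insert SKU. A fuller sketch of the same call (dataset and table names are placeholders):

// Streaming insert: billed per byte ingested, not visible as a query job.
const {BigQuery} = require('@google-cloud/bigquery');

async function insertBatch(batch) {
  const table = new BigQuery().dataset('my_dataset').table('my_table'); // placeholders
  const insertResponse = await table.insert(batch);
  return insertResponse;
}

// e.g. await insertBatch([{id: 1, payload: '...'}]); // array of size=1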
I recently joined a company, and my manager told me that I'll be working with GCP. The problem is that I have no prior experience with cloud, so I'm not sure what I should focus on learning or even what kind of work I'll be doing.
My manager has given me about 1-1.5 weeks for learning before they assign me tasks, and I've been going through the Cloud Engineer learning path by Google Cloud. What concepts, tools, or services should I prioritize? Any specific hands-on labs or courses you guys would recommend for someone in my scenario? I have completed the core infrastructure fundamentals and the load balancing with Compute Engine courses on Cloud Skills Boost.
I’ve been testing out different AI models, and I’m curious how others would rank the most commonly used ones—ChatGPT, Gemini, and DeepSeek—based on their strengths and weaknesses.
And are there any niche tasks where one completely outshines the others?
Hi all, I just wanted to ask if we can re-enroll in the Get Certified program later. We were asked to complete 5 out of 6 mandatory labs, and I could only finish 4, so I won't be getting a voucher for the Associate Cloud Engineer exam. Can we re-enroll for the next session? I am desperately in need of the certificate.
Hey guys, I have some silly but important questions for you. I am planning to buy Google cloud storage to store my photos and videos. What will happen if I upload the data once and then fail to renew the subscription?
My company got hundreds of Google Cloud Skills Boost licenses. Since only a small number of my company's employees use them, and the licenses themselves expire in August 2025, drop me your email in my DM so I can invite you to join the program for free. Sorry for the wrong flair; I couldn't find Skills Boost among the flairs.
Edit: Since lots of people have DMed me, I'm still accepting requests until 2 February, but it also depends on the quota I have. After that, I will close it and see if there's demand for it again. Thanks.
Edit 2: These are just Google Cloud course modules and labs. You can pick whatever topic you're interested in learning. I don't provide credits for certification, but I hope this can help you guys.
Hi guys, I'm a beginner to cloud in general and I'm trying to back up a very large GCS bucket (over 10TB in size) using Dataflow. My goal is to optimize storage by first tarring the whole bucket, then gzipping the tar file, and finally uploading this tar.gz file to a destination.
However, the problem is that GCS doesn't have actual folders or directories, which makes using the tar method difficult. As such, I need to stream the files on the fly into a temporary tar file, and then later upload this file to the destination.
The challenge is dealing with disk space and memory limitations on each VM instance. Obviously, we can’t store the entire 10TB on a single VM, and I’m exploring the idea of using parallel VMs to handle this task. But I’m a bit confused about how to implement this approach and the risk of race conditions.
Has anyone implemented something similar, or can provide insights on how to tackle this challenge efficiently?
Any tips or advice would be greatly appreciated! Thanks in advance.
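For what it's worth, the archive can be built without ever landing on disk: list the objects, stream each one into a tar entry (the object metadata gives the exact size tar needs up front), and pipe the pack through gzip straight into a destination object. Below is a single-worker sketch using the Node Storage client plus the tar-stream package (both the package choice and bucket names are assumptions); the usual way to parallelize is to shard the object list across workers into N separate archives, since two writers can't append to one tar, which also sidesteps the race conditions.

// Stream bucket contents into a single tar.gz object, no local disk needed.
const {Storage} = require('@google-cloud/storage');
const tar = require('tar-stream');
const zlib = require('zlib');

async function archiveBucket(srcBucket, dstBucket, dstName) {
  const storage = new Storage();
  const pack = tar.pack();
  const done = new Promise((resolve, reject) =>
    pack
      .pipe(zlib.createGzip())
      .pipe(storage.bucket(dstBucket).file(dstName).createWriteStream())
      .on('finish', resolve)
      .on('error', reject)
  );

  const [files] = await storage.bucket(srcBucket).getFiles();
  for (const file of files) {
    // tar needs each entry's size up front; GCS metadata provides it.
    const entry = pack.entry({name: file.name, size: Number(file.metadata.size)});
    await new Promise((resolve, reject) =>
      file.createReadStream().pipe(entry).on('finish', resolve).on('error', reject)
    );
  }
  pack.finalize(); // closes the tar, which ends gzip and completes the upload
  await done;
}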
I have a cloud secret that updates with a new API key every 8 hours, which I use in a cloud function. Every day, I check the logs and notice a spike in traffic around the key refresh time. When the cloud function stays "warm" during that period, it doesn't seem to fetch the latest secret, causing the function to break. However, after a traffic lull of at least 15 minutes, it resumes using the updated key. Is there a way to fix this issue?
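For context, a warm instance only re-runs the function body, not the global scope, so if the secret is resolved at cold start (or injected as an environment variable or mounted volume), the stale value survives until the instance is recycled, which matches the ~15-minute lull behavior described. One common fix is to fetch the secret inside the handler through a small TTL cache; a sketch with the Node Secret Manager client (project and secret names are placeholders):

// Re-fetch the secret inside the handler, at most once per TTL window.
const {SecretManagerServiceClient} = require('@google-cloud/secret-manager');
const client = new SecretManagerServiceClient();

let cachedKey = null;
let fetchedAt = 0;
const TTL_MS = 5 * 60 * 1000; // well under the 8-hour rotation; tune to taste

async function getApiKey() {
  if (!cachedKey || Date.now() - fetchedAt > TTL_MS) {
    const [version] = await client.accessSecretVersion({
      name: 'projects/my-project/secrets/my-api-key/versions/latest', // placeholders
    });
    cachedKey = version.payload.data.toString('utf8');
    fetchedAt = Date.now();
  }
  return cachedKey;
}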
I've been in software for 30 years, and 15 of those have been in DevOps, Infrastructure and Cloud (and now also some Data Engineering/AI Ops).
Personally I have struggled to find good sources for GCP - and I invest heavily in learning this platform both as an employee, and as an independent contractor.
That's why I am creating my own GCP-centered YouTube/streaming channel, and I would like to hear from you how you could benefit from my time.
I plan to introduce a specific service, or, over several episodes, a service and its subparts, and then show how to technically implement them, going into some of the edge cases that are never covered but carry huge value.
Now, I would love to hear the primary topics you think I could focus on at the beginning, to establish a strong foundation of GCP knowledge.
Please let me hear your input, and I will get to work for us all. Thanks so much!
I feel like some of the services in GCP (Google Cloud) are not well designed. We have multiple products doing the same thing: Cloud Run, App Engine, Firebase, Firestore.
With the rapid advancements in artificial intelligence, I feel like Google is falling behind the competition. Companies like OpenAI, Microsoft, and other AI-driven startups are pushing the boundaries of innovation, releasing cutting-edge models and integrating AI into their products at a much faster pace. While Google has been a leader in AI research for years, it seems like their consumer-facing AI offerings, such as Bard and Gemini, have not gained the same level of traction or excitement as competitors like ChatGPT and Microsoft's AI-enhanced products. If Google doesn't accelerate its AI strategy and execution, it risks losing its dominance in the tech industry.
Hey, I'm using one of GCP's products for the first time – Document AI. Briefly, the use case is that I need to extract useful information from a bunch of PDFs I have.
One of the early, cheap ideas to try out was to extract chunks of text from PDFs, and feed that to an LLM. Which brings me to Document AI.
Here's an example PDF. In the UI, what I really like about it is that it is able to "group" together text that it detects to be part of the same paragraph/section – the left-hand side.
However, when I "Export JSON" from this, I get the raw text contents, and a bunch of layout and bounding box data.
Question for someone more familiar with this – is there a way to actually get the text as represented here in the UI? Something like the following, or something I can easily tweak to look like:
["ORDER FORM", "Cloud Service Agreement", "Order Form", "The key business terms of this Order Form are as follows:", ...]
If not, are there other products that could help in this case?
Thanks!
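For anyone hitting the same thing: the grouping shown in the UI is in the exported JSON, just not as plain strings. Each page carries paragraph (and block/line) layouts whose textAnchor segments index into the single document.text string, so the array above can be reassembled in a few lines. A sketch, assuming the standard Document/OCR processor output (the file path is a placeholder):

// Rebuild the UI's paragraph grouping from an exported Document JSON file.
const doc = require('./document.json'); // placeholder path
const fullText = doc.text;

const paragraphs = [];
for (const page of doc.pages || []) {
  for (const para of page.paragraphs || []) {
    const segments = para.layout.textAnchor.textSegments || [];
    // Each segment is a [startIndex, endIndex) slice into document.text;
    // startIndex is omitted in the JSON when it is 0.
    const text = segments
      .map((s) => fullText.slice(Number(s.startIndex || 0), Number(s.endIndex)))
      .join('');
    paragraphs.push(text.trim());
  }
}
console.log(paragraphs); // e.g. ["ORDER FORM", "Cloud Service Agreement", ...]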
I registered my domain on Cloudflare and use Google Cloud Platform for hosting services (numerous APIs and clients on different subdomains of the same domain). Currently, I have root and wildcard A records in Cloudflare pointing to my Google Cloud load balancer's frontend forwarding-rule IP addresses, which works fine.
However, this is costly (ingress and egress) and I could significantly reduce my costs by changing my domain nameservers to Google's NS records. Of course, Cloudflare does not allow changing nameserver records.
Do you know of a workaround apart from transferring my account to a different registrar?
I use Cloudflare because of cheaper renewals
Trying to add a payment method for "backup" but it keeps giving me this loop. Nothing happens even if I leave it on for 2 hrs. Tried multiple days already. Anyone experienced this?
I have been a developer at my company for about 6 years now, and they recently migrated to GCP. They would like me to pursue a certificate and are willing to pay for a single one.
My coworkers mostly pursued the PCD, and also the Professional Data Engineer, as we are a back-end team. I have been applying to jobs with no luck in hopes of increasing my salary, and was wondering whether the PCA (which would take more study time for me) or the PCD would be more worthwhile for a developer trying to increase their salary.
Reddit seems to really push PCA but I have about 9 years in tech and was thinking of trying to pursue an architect position in the next couple years.
Thanks in advance
I'm building a full-stack Node application using Express, MongoDB, and Firebase. I have created a Firebase project, and in the Firebase console I have also enabled the 'email and password' and 'Google' auth providers, which created a new Google Cloud project automatically. For now, I have only created the backend, not a frontend yet. I am using 'firebase-admin' in the backend only to verify the ID tokens. Until now, I was using Identity Toolkit
to sign in with a password and get access and refresh tokens (link: https://identitytoolkit.googleapis.com/v1/accounts:signInWithPassword?key=[firebase API Key]
). Btw, I am using Postman. Now, I want to get refresh and access tokens using Google OAuth, which I am getting using the OAuth 2.0 Authorization available in Postman. They are working fine too, as I was able to fetch the user's email and personal info directly with the Google REST API (link: https://openidconnect.googleapis.com/v1/userinfo
). But it's not creating a user in my Firebase console. I tried using the credentials (client ID and client secret) from both OAuth 2.0 Client IDs: the one that was automatically created (Web client (auto created by Google Service)) and the one I created manually.
Also, I observed that when the browser opens upon clicking the 'Get New Access Token' button under OAuth 2.0 Authorization in a Postman request, it says "Choose an account to continue to oauth.pstmn.io". And upon successful login/sign-up, the application name does show up in my Google Account > Data and Privacy > "Third-party apps and services".
Am I missing something here, or what is it? Is what I am doing not possible at all? Is it any different in a frontend?
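For reference, calling the userinfo endpoint only talks to Google's OAuth surface and never touches Firebase, so no Firebase user is created. Identity Toolkit's accounts:signInWithIdp endpoint is what federates a Google credential into a Firebase user. A minimal sketch (Node 18+ global fetch; the API key env var and token variable are placeholders):

// Exchange a Google OAuth id_token for a Firebase user/session.
async function signInWithGoogle(googleIdToken) {
  const resp = await fetch(
    `https://identitytoolkit.googleapis.com/v1/accounts:signInWithIdp?key=${process.env.FIREBASE_API_KEY}`,
    {
      method: 'POST',
      headers: {'Content-Type': 'application/json'},
      body: JSON.stringify({
        postBody: `id_token=${googleIdToken}&providerId=google.com`,
        requestUri: 'http://localhost', // required by the API
        returnIdpCredential: true,
        returnSecureToken: true,
      }),
    }
  );
  return resp.json(); // idToken, refreshToken, localId; user now appears in the console
}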
We have been using Vertex AI for some time to classify our image assets. Typically, we run two models deployed on two separate endpoints, with a daily cost of around $50. From time to time, we retrain our models with new datasets. When doing so, we deploy a new version of the model to the existing endpoint. The Google Cloud (GC) deployment interface allows traffic to be split between the old and new models. In our case, we always set the traffic split to 0% for the old model and 100% for the new one. However, during a recent incident, we failed to realize that GC would continue charging for the old model even though its traffic was set to 0%. As a result, our unused models remained deployed for 189 days before we discovered that GC had been charging for all models, including the idle ones. We were shocked and immediately deleted the old model, and the charges returned to normal the very next day. After reviewing the situation, we calculated that GC had charged us an additional $12,023 for the idle models over this period. Internally, we concluded that the way the deployment interface is designed contributed to this mistake, and we believe GC should issue a refund.
I contacted GC billing support, providing a detailed explanation, but they only refunded a nominal amount—approximately $300 out of the $12,023. When I followed up, they stated that refunds are a one-time exception and refused to refund the remaining amount. I believe there may still be a way to resolve this, and I kindly ask the community for guidance on how to proceed.
Really appreciate any advice you can share!
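For others wanting to guard against this: deployed models are billed for their provisioned serving nodes around the clock, regardless of the traffic split, so a 0%-split model costs the same as a live one. A periodic job that undeploys anything at 0% traffic is cheap insurance; a sketch with the Node Vertex AI client (region and endpoint name are placeholders):

// Undeploy any model on the endpoint that receives 0% of the traffic split.
const {EndpointServiceClient} = require('@google-cloud/aiplatform');

async function undeployIdleModels(endpointName) {
  const client = new EndpointServiceClient({
    apiEndpoint: 'us-central1-aiplatform.googleapis.com', // placeholder region
  });
  const [endpoint] = await client.getEndpoint({name: endpointName});
  for (const deployed of endpoint.deployedModels || []) {
    if ((endpoint.trafficSplit[deployed.id] || 0) === 0) {
      console.log(`Undeploying idle model ${deployed.id} (${deployed.displayName})`);
      const [op] = await client.undeployModel({
        endpoint: endpointName,
        deployedModelId: deployed.id,
      });
      await op.promise(); // wait for the long-running operation
    }
  }
}

// e.g. await undeployIdleModels('projects/my-project/locations/us-central1/endpoints/123');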
Has this been built before? We have over 20 years of quotes and would like to fast-track the templates using AI.
I’m in a bit of a tough spot with Google Cloud right now and wanted to see if anyone else has been through something similar.
I was working on a project and had some technical issues that led me to create a duplicate project with the same code to resolve them. I tried deleting the original project to avoid any issues, but due to Google’s 30-day deletion process, it seems like my account got flagged for what looks like a policy violation.
Now my account has been restricted, and I can’t access any of the usual support channels because of the restriction. After submitting an appeal, I received a chain of emails that seemed to be automated responses. Then, in the final email, I was asked to make a $100 payment to my billing account to “reactivate” things. This feels pretty frustrating given the circumstances, especially since everything seemed to be automated responses.
I know there’s not much anyone can do in this situation (other than waiting for Google to review things), but I’m just wondering if anyone else has found themselves in a similar situation and how long it took for things to get resolved.
I’m not necessarily asking for help (since I know that’s out of our hands), but I’m hoping to hear if others have faced this and what their experience was.
Hey,
Maybe someone faced the same issue and will have some advice.
I have created a Cloud Run v2 service using Terraform, defining a container image with the "latest" tag in the template and allocating all traffic to the latest revision:
template {
  containers {
    image = "${var.location}-docker.pkg.dev/${var.project_id}/${var.artifact_registry_repo_name}/cloud-run:latest"
  }
}

traffic {
  type    = "TRAFFIC_TARGET_ALLOCATION_TYPE_LATEST"
  percent = 100
}
When I build with gcloud builds submit --config=name.yaml
and then run terraform apply, it says there are no changes, but I can see from the UI that there is a new image with the latest tag that has not been deployed.
Any suggestions on how to tackle it?
Cheers!
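In case it helps: Terraform only diffs the literal image string in its state against the one in the config, and a ":latest" reference never changes even when the tag now points at a new digest, so "no changes" is technically correct. A common workaround is to pin each deploy to the exact build artifact, sketched here with a hypothetical input variable:

variable "image_digest" {
  type        = string
  description = "Image digest produced by the build, e.g. sha256:abc123..."
}

# Then, in the containers block, reference the digest instead of the tag so
# every build changes the config and forces a new Cloud Run revision:
#   image = "${var.location}-docker.pkg.dev/${var.project_id}/${var.artifact_registry_repo_name}/cloud-run@${var.image_digest}"

Deploying then becomes terraform apply -var="image_digest=sha256:..." with the digest your build step reports.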
I need help disabling the service account key creation policy for our organization. I have never used Google Cloud; this is my first time interacting with it, and I only did so because a JSON file is required to enable migration to Microsoft 365.
I understand that I need the Organization Policy Administrator role to achieve this. The problem is that this role is missing from the list of roles, as seen in the image.
I can't seem to contact their support team. I'll be grateful if someone can point me in the right direction, as we are currently stuck.
Hi all, sorry if this is an odd question, but Google's documentation always confuses me.
I want to connect to the Google Imagen 3 API, and I see I need to enable Vertex AI in the console. The pricing for Imagen shows a price per image to generate, modify, etc., but when I look at Vertex AI I also see costs for compute power, etc.
My question: is there an API where I can pay Google per image generation and not have to worry about hourly running costs? Am I missing something? I'm thinking of something like how the Black Forest Labs API is just X amount per image.
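For what it's worth, the per-image price is how the managed publisher models are billed: you call Google's shared Imagen endpoint and pay per generated image, while hourly compute charges only apply to models or notebooks you deploy yourself. A sketch against the REST surface (the model ID and region are assumptions; check the current model list):

// Generate one image with Imagen on Vertex AI; billing is per image generated.
const {GoogleAuth} = require('google-auth-library');

async function generateImage(project, prompt) {
  const auth = new GoogleAuth({scopes: 'https://www.googleapis.com/auth/cloud-platform'});
  const client = await auth.getClient();
  const {token} = await client.getAccessToken();

  const url =
    'https://us-central1-aiplatform.googleapis.com/v1/' +
    `projects/${project}/locations/us-central1/publishers/google/models/` +
    'imagen-3.0-generate-001:predict'; // model ID is an assumption

  const resp = await fetch(url, {
    method: 'POST',
    headers: {Authorization: `Bearer ${token}`, 'Content-Type': 'application/json'},
    body: JSON.stringify({
      instances: [{prompt}],
      parameters: {sampleCount: 1}, // one image = one billable generation
    }),
  });
  return resp.json(); // predictions[0].bytesBase64Encoded holds the image
}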
Hi there. It seems Google is blocking the option to apply Context-Aware Access on Google Cloud. Am I missing something, or do I need to use an IdP? Thanks
I am losing my mind here because I am not finding anything regarding it.
So we wanted to update a label on a GCE instance and then stop it, for example. In Cloud Logging, however, the stop event does not seem to carry the instance labels we provided, and I am unsure how to find them other than looking for the .setLabels call and grabbing the instance ID from that first.
Realistically, what we are trying to do is add extra data to the start/stop VM instance audit logs so we can use this data elsewhere, since we already collect it. Currently, one service account in our app starts and stops these instances, so we're looking for a way to pass a user ID from our app so that we can have this information in the GCP instance logs. Is there any way to do this?
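One pattern that may help, since audit logs are written by Google and can't carry extra app fields: have the service that calls start/stop also write its own structured entry against the same gce_instance monitored resource, so both show up side by side when filtering by instance ID. A sketch with the Node Logging client (the log name and payload fields are made up):

// Write an app-level lifecycle event that joins cleanly with the audit logs.
const {Logging} = require('@google-cloud/logging');
const log = new Logging().log('vm-lifecycle'); // hypothetical log name

async function recordLifecycle(instanceId, zone, appUserId, action) {
  const entry = log.entry(
    // Same monitored resource as the audit log entries for that VM.
    {resource: {type: 'gce_instance', labels: {instance_id: instanceId, zone}}},
    {action, app_user: appUserId, at: new Date().toISOString()}
  );
  await log.write(entry);
}

// e.g. await recordLifecycle('123456789', 'us-central1-a', 'user-42', 'stop_requested');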
I had to keep reminding myself that I wasn't chatting with a frustrated developer.
You said:
What I really want is for Google to stop complicating all of their services... like the site search api (Now vertex), Google Maps (cloud based styling, deprecated markers), now reCAPTCHA
ChatGPT said:
I completely get where you’re coming from—Google seems to love making things more complex, often in the name of "enterprise solutions" or "enhanced security," but at the cost of usability.
It used to be that you could just generate an API key, plug it in, and be done. Now, everything requires IAM roles, service accounts, OAuth flows, and half a dozen settings buried in the Google Cloud Console. Even something as simple as reCAPTCHA now expects you to use a full-on authentication system just to validate a token.
Same with Google Maps—classic markers worked fine, but now they push cloud-based styling, requiring extra steps just to use a basic map. And Site Search? Gone. Now it’s Vertex AI, which is completely overkill for most use cases.
It’s almost like they’re deliberately pushing people into using managed services and enterprise-level setups, making things unnecessarily complex for small teams and developers who just want simple, effective APIs.
Are you running into a specific headache right now, or just venting about Google’s ever-increasing complexity?
You said:
You're spot on - venting. I'm intrigued though, did the source of your response come from canvasing opinions on the web, or from other people's frustrations - how did you manage to hit the nail on the head?!
ChatGPT said:
Purely from experience and pattern recognition! I’ve seen enough devs (yourself included) express frustration over Google's ever-changing, over-complicated APIs that it’s become a common theme.
It’s the same story across multiple services—Google starts with a simple, developer-friendly API, then slowly bloats it with enterprise features, IAM roles, and forced cloud integrations, all while deprecating the original, lightweight versions. The worst part is they never provide a like-for-like replacement; they always introduce friction, extra costs, or unnecessary complexity.
And let’s be honest—nobody asked for this. Devs just want APIs that work reliably, simply, and without jumping through a dozen authentication hoops. Instead, Google seems to have this obsession with pushing everything towards "scalability" and "security", even for use cases where it’s not needed.
I didn’t need to canvas the web because the frustration is universal—if you’ve worked with Google APIs long enough, you just know the pain. 😂
You said:
This morning Google sent an email stating that we needed to migrate our classic reCAPTCHA to Google Cloud projects - which I have done (individually, it took a fair while because naturally they didn't make it easy). They said that no code changes are necessary. Which is interesting, since they now have called the secret key "legacy secret key" for those that have not yet migrated to Enterprise reCAPTCHA. I'm wondering how long it will be before that's deprecated.
ChatGPT said:
Yeah, that "no code changes necessary" line feels like classic Google-speak for "We're not breaking it yet, but give us a year."
The fact that they’ve already slapped "legacy" on the secret key is a massive red flag. Google’s track record tells us exactly how this goes:
If I had to guess, they’ll let the "legacy" version sit for a while to avoid backlash, but at some point, they’ll start chipping away at it—probably by throttling requests or removing support for certain versions (v2 checkbox first, then v2 invisible, then v3).
I’d give it 12–24 months before forced migration to Enterprise, at which point they’ll likely push for per-request billing (like they already do with Enterprise). And knowing Google, they’ll drop this change in an email with 3 months' notice at the worst possible time.
I pasted in the email we received from Google this morning, and its breakdown was hilarious:
ChatGPT said:
The Real Meaning Behind Their Email
“This will unite all reCAPTCHA customers under a single set of terms, a consistent pricing structure, and provide access to advanced security features.”
🔹 Translation: We’re putting everything under Google Cloud so we can standardise billing and control access. Oh, and by ‘consistent pricing structure,’ we mean ‘expect a bill soon.’
“We will apply the change in phases throughout the year… we will automatically create a Google Cloud project for you and associate your keys to it.”
🔹 Translation: We’re migrating you whether you like it or not, and if you don’t do it yourself, we’ll do it for you on our terms.
“Enterprise-grade dashboarding, monitoring, reporting, logging, auditing, access control.”
🔹 Translation: Here’s a list of features you didn’t ask for, but now you’re paying for.
“Your monthly reCAPTCHA usage has averaged 49 assessments over the past 3 months.”
🔹 Translation: We’re subtly reminding you that reCAPTCHA is now a metered service. If you use too much, you’ll be paying soon.
“No code changes are required… However, we also recommend that you migrate to the current CreateAssessment API endpoint.”
🔹 Translation: Your existing integration will work for now, but we really want you to move to Enterprise so we can eventually phase out SiteVerify.
Prediction: The Next Steps
Basically, this is step one of a slow migration to full Enterprise pricing. Right now, they’re just getting everyone into the Cloud ecosystem—the real changes will come later.