/r/bigquery
All about Google BigQuery
Moderator: @felipehoffa
I'm a BigQuery beginner trying to understand how to track my usage.
I'm using BigQuery for some querying, but I need to be careful not to go over 10 GB of storage or 1 TB of processing, because I do not want to be charged and I wish to remain on the free tier.
I am uploading multiple CSV files to BigQuery, but I cannot find the page that shows the total storage of everything I've uploaded. I need to be able to see it so that I don't go over the limit as I upload.
Where exactly can I see how much BigQuery storage I've filled, and how much processing I've used this month? There should be something that lets me track those things in the UI, right? No matter how I search online, I cannot find an answer to what seems like a pretty simple question.
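As far as I know there isn't a single free-tier meter in the console, but both numbers can be pulled with SQL. A minimal sketch, assuming your data lives in the US multi-region (adjust the region qualifier otherwise):

-- Total logical storage across all datasets in the project
-- (compare against the 10 GB storage allowance).
SELECT SUM(total_logical_bytes) / POW(1024, 3) AS logical_gib
FROM `region-us`.INFORMATION_SCHEMA.TABLE_STORAGE;

-- Query bytes billed so far this calendar month
-- (compare against the 1 TB per-month query allowance).
SELECT SUM(total_bytes_billed) / POW(1024, 4) AS tib_billed_this_month
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE job_type = 'QUERY'
  AND creation_time >= TIMESTAMP_TRUNC(CURRENT_TIMESTAMP(), MONTH);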
How hard is it to write a custom Cloud Function that downloads Google Analytics 4 API data? Are there any examples? I tried to find some, but it seems like nothing is out there on the internet.
The reason for a Cloud Function is that the GA4 BigQuery export is such a mess that it's hard to match the UI numbers.
Thanks a lot!
Hello, I linked Google Analytics to BigQuery, but I want to save the data in a more structured and organized way, so I decided to create a data warehouse schema for it, both to keep things organized and to make it easier to work with in Power BI.
My question is about the schema itself: I've created several versions, but I feel I need a better solution.
Has anyone built something like this before, or does someone have a better idea than mine?
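For what it's worth, a common starting point is to flatten the raw events_* export into one tidy table and model on top of that. A rough sketch, assuming the standard GA4 export schema; the target dataset, source table, and extracted parameters are placeholders to adapt:

CREATE OR REPLACE TABLE reporting.events_flat AS
SELECT
  PARSE_DATE('%Y%m%d', event_date) AS event_date,
  TIMESTAMP_MICROS(event_timestamp) AS event_ts,
  event_name,
  user_pseudo_id,
  (SELECT ep.value.int_value
     FROM UNNEST(event_params) ep
     WHERE ep.key = 'ga_session_id') AS ga_session_id,
  (SELECT ep.value.string_value
     FROM UNNEST(event_params) ep
     WHERE ep.key = 'page_location') AS page_location
FROM `project.analytics_123456.events_*`;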
Sometimes I really feel like GA4 is a sales strategy to push us toward GCP—kind of like how they encourage us to use Google Tag Manager even though it can slow down websites, only to then suggest server-side tracking (also on GCP). Maybe it's a tinfoil hat moment, but curious what others think!
In Dataform, you can reference dependencies by filename, as stated below:
> Replace DEPENDENCY with the filename of the table, assertion, data source declaration, or custom SQL operation that you want to add as a dependency. You can enter multiple filenames, separated by commas
(https://cloud.google.com/dataform/docs/dependencies#config-block-dependencies)
Does this mean filenames should be unique inside the repository? I was not able to find any requirement in the document, and I was wondering if there were any best practices/rules around file names.
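For reference, the mechanism the quoted doc describes looks roughly like this in a .sqlx file (the filenames here are hypothetical):

-- definitions/daily_orders.sqlx
config {
  type: "table",
  // waits on an action that is not referenced via ref() in the SQL below
  dependencies: ["refresh_raw_orders"]
}
SELECT * FROM ${ref("raw_orders")}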
Hi all! I've recently started learning about UDFs (user-defined functions) and found them surprisingly cool and useful. I wrote an article with some function ideas; I'd really appreciate it if you checked it out and let me know what you think!
https://medium.com/@lilya.tantushyan/6-udf-ideas-in-bigquery-funwithsql-918cf2dc6496
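For anyone who hasn't used them, a UDF along these lines (a made-up example, not one taken from the article) looks like:

CREATE TEMP FUNCTION safe_percent(numerator FLOAT64, denominator FLOAT64)
RETURNS FLOAT64
AS (
  ROUND(100 * SAFE_DIVIDE(numerator, denominator), 2)
);

SELECT safe_percent(25, 200) AS pct;  -- returns 12.5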
So I'm calculating conversion rates...
In BigQuery my code looks like this:
SELECT
EXTRACT(ISOWEEK FROM date) AS iso_week_number,
COUNT(DISTINCT user) AS total_user,
COUNT(DISTINCT CASE WHEN ts_pcc < ts_tran THEN transaction_id ELSE NULL END) AS conversion
FROM prep
GROUP BY ALL
That gives me 1-2% on average.
However, if I instead do
SELECT
date,
COUNT(DISTINCT user) AS total_user,
COUNT(DISTINCT CASE WHEN ts_pcc < ts_tran THEN transaction_id ELSE NULL END) AS conversion
FROM prep
GROUP BY ALL
and then explore it in Looker Studio with the date format set to ISO week, my percentages are wildly different (more toward 6-10%).
This percentage is done in a calculated field where I do: conversions / total_users
Am I missing something?
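Two common causes, and it's hard to tell from here which applies: COUNT(DISTINCT ...) is not additive, so weekly figures rebuilt from daily rows cannot match a true weekly distinct count whenever the same user or transaction appears on more than one day; and a Looker Studio calculated field can end up averaging daily ratios instead of dividing weekly totals, depending on how it is aggregated. A sketch to quantify the first effect against the same prep table:

WITH daily AS (
  SELECT
    EXTRACT(ISOWEEK FROM date) AS iso_week_number,
    date,
    COUNT(DISTINCT user) AS daily_users
  FROM prep
  GROUP BY ALL
),
weekly AS (
  SELECT
    EXTRACT(ISOWEEK FROM date) AS iso_week_number,
    COUNT(DISTINCT user) AS weekly_users
  FROM prep
  GROUP BY ALL
)
SELECT
  iso_week_number,
  ANY_VALUE(weekly_users) AS weekly_distinct_users,  -- what the SQL version counts
  SUM(daily_users) AS summed_daily_users             -- what re-aggregating daily rows gives
FROM weekly
JOIN daily USING (iso_week_number)
GROUP BY iso_week_number;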
The post is about geospatial queries, but it could be helpful for other types of queries too.
https://mentin.medium.com/divide-the-query-to-improve-cost-and-performance-df310a502a07
Hi all, I use only a fraction of the free tier allowances, but I wish to set up quotas to prevent a huge bill should anything go wrong. I've set "Query usage per day" to 100 GiB, on the assumption that the most I could use in a month if something goes very wrong is around 3 TiB, which would cost me roughly $12.
Do I have this set up correctly, and are there any other quotas I'd need to set to ensure I can't accidentally run up a bill of, say, $100 or more?
What about storage? Can I limit that in some way too?
Thanks!
How do I download all the SQL queries written in the Google BigQuery console as a .txt file?
I am currently looking for a particular column available in some dataset. I have no idea which dataset it belongs to, but I want to use that column when writing a query in the Google BigQuery console. How do I find it?
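For the second question, the region-qualified INFORMATION_SCHEMA views can search every dataset in the project at once (the region and the column name below are placeholders to adjust):

SELECT table_schema AS dataset, table_name, column_name
FROM `region-us`.INFORMATION_SCHEMA.COLUMNS
WHERE LOWER(column_name) = 'customer_id';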
Same
I currently import all of my company's historic sales into Google Sheets and have created several dashboards and reports based on the data. My problem is that the dataset is getting far too large and everything is running quite slowly.
Currently I have about 200k rows and 15 columns; I add roughly 100 new rows of data daily, around 36,500 yearly.
I've read that BigQuery may be a solution: host my data there and mirror it in Google Sheets, so Sheets isn't storing the data and slowing down.
Is BigQuery right for me? Would there be any costs associated with this? Are there any other recommendations out there?
Appreciate it!
I'm using the BigQuery REST API with Postman. I want to run SELECT * FROM <table_name>, but the output I get is unreadable (nested "v" and "f" fields). How can I convert it into (key): (value) style output? I tried selecting every field individually, which gave results but was very tedious. I need a workaround.
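One workaround, if you control the SQL, is to have BigQuery serialize each row to JSON so the REST response carries one readable string per row instead of the nested f/v structure (the table name is a placeholder):

SELECT TO_JSON_STRING(t) AS row_json
FROM `project.dataset.table_name` AS t;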
This has been a huge pain point for my entire team since I first noticed it about three months ago, and I can't seem to find anyone online with the same issue. The popup comes up every 5-10 minutes, sometimes more often, and asks you to refresh the page. This obviously loses any unsaved progress and is a huge productivity killer.
I know it can't be an isolated issue because my whole team experiences it.
Hi everyone,
I manage a high-volume data warehouse in BigQuery, and controlling costs has become increasingly challenging. I recently noticed monthly spend climbing significantly due to inefficient query patterns, costly joins, and frequent data pulls across our team. I’ve tried using INFORMATION_SCHEMA.JOBS for tracking, but I’m exploring more streamlined ways to identify and optimize costly queries or receive alerts when certain thresholds are hit.
For those with similar issues: what has worked for you?
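Not a full answer, but for the "identify costly queries" part, a recurring report off the same jobs view can surface the worst offenders (US multi-region assumed below):

SELECT
  user_email,
  query,
  total_bytes_billed / POW(1024, 4) AS tib_billed,
  creation_time
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE job_type = 'QUERY'
  AND creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
ORDER BY total_bytes_billed DESC
LIMIT 20;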
Does anyone have experience creating a custom connector to read BigQuery table data? We were recently trying to build a custom connector for MS Power Apps to read data from BigQuery tables.
It appears this requires complex API calls (POST & GET) working in conjunction. Any idea how to make this work? For context, there is one third-party connector from Power Apps to BigQuery, but our org does not whitelist it for use.
Good afternoon, everyone! I have a table of jobs in my BigQuery project. I want to differentiate and categorize these jobs based on the Dataform routine they were executed from. Does anyone know how I can do this?
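If by "table of jobs" you mean the job metadata BigQuery keeps, one avenue (hedged, since I'm not sure which labels Dataform attaches in your setup) is to list the labels recorded on recent jobs and group on whichever key identifies the Dataform workflow:

SELECT
  job_id,
  creation_time,
  l.key AS label_key,
  l.value AS label_value
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT,
     UNNEST(labels) AS l
WHERE creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
ORDER BY creation_time DESC;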
I need to create charts in Power BI. However, to extract data from the database remotely, should I send it directly to BigQuery, or should I first export it to a CSV and then send that to BQ? What should I do to automate this process? Is there a way to use the BQ API to improve it? Which approach would be better, if not one of these?
Basically I'm looking to unnest all the items in the items array as a user continues through the checkout process.
So say they had an apple, an orange, and a banana on the view_cart event; then on begin_checkout they have an apple, an orange, a banana, and grapes.
I want to see the full list of items for each event.
I'm assuming this is possible, correct? I would have a unique cart ID to make it easier to select.
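Yes, the items array in the GA4 export unnests directly. A minimal sketch, assuming the standard export schema plus a custom cart_id event parameter; the table name and parameter name are placeholders:

SELECT
  event_name,
  TIMESTAMP_MICROS(event_timestamp) AS event_ts,
  (SELECT ep.value.string_value
     FROM UNNEST(event_params) ep
     WHERE ep.key = 'cart_id') AS cart_id,
  item.item_name,
  item.quantity
FROM `project.analytics_123456.events_*`,
     UNNEST(items) AS item
WHERE event_name IN ('view_cart', 'begin_checkout');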
Hi, my company is using GA4 and storing the data in BigQuery. Now upper management wants to use the BigQuery data to drive the business.
What are some use cases we can work on with the BigQuery data?
Getting started with BQ. I know that I can add data through the Add Data option, which is like a "wizard" tool, and that I can use a local Python script to connect and upload.
What are the other easiest ways to upload data? What's the closest thing to "drag and drop" functionality?
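Besides the Add Data wizard, the closest thing to drag and drop is dropping a file into a Cloud Storage bucket and loading it with a single statement; the dataset, table, and bucket paths below are placeholders:

LOAD DATA INTO mydataset.sales
FROM FILES (
  format = 'CSV',
  skip_leading_rows = 1,
  uris = ['gs://my-bucket/sales_2024.csv']
);

The bq load command-line tool and external tables over Google Sheets are other low-effort options.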
I'm setting up Python notebooks in BQ, but I'm finding that only some pandas methods are implemented.
I have not found a list of implemented methods. Does one exist?
Hello,
Is it possible to update many rows in a BigQuery table using Python in an asynchronous way?
For context, I have a table of 10,000 rows and I want to update 500 rows in parallel instead of one by one, to speed up the process.
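Worth noting that BigQuery tends to prefer one set-based statement over many small UPDATEs, since concurrent DML against the same table is limited. A hedged sketch, assuming the 500 new values are first loaded into a staging table (all names are placeholders):

MERGE mydataset.target AS t
USING mydataset.updates AS u
ON t.id = u.id
WHEN MATCHED THEN
  UPDATE SET t.status = u.status;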
We are currently trying to visualize changes in Google Ads remarketing audience sizes over time using the automated Google Ads -> BigQuery export, ideally with values for Display, Search, etc.
I've gone through the documentation about the exports and found two tables that might be suitable: AdGroupAudienceBasicStats and CampaignAudienceBasicStats. However, in neither of these two tables (nor any other tables with audience data) can I see anything about audience size.
Is the data even available?
TL;DR - I'm trying to find users who perform 10 or more distinct actions within 60 seconds.
Easy way: truncate the timestamp to the minute and take a distinct count of Action by User & Time.
This doesn't find users who perform 6 actions at 1:59:58 and 6 more at 2:00:01 (12 actions in 4 seconds).
I can't get window functions to compute distinct counts, and it's fine if a user repeats the same action 20 times in a row.
"Window framing clause is not allowed if DISTINCT is specified"
Any ideas to calculate a distinct count over a rolling 60 second time window?
Event Table:
| User  | Action | Time    |
|-------|--------|---------|
| userA | touch  | 1:59:58 |
| userA | ping   | 1:59:58 |
| userA | touch  | 1:59:58 |
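A self-join sidesteps the window-frame restriction: anchor on each event and count the distinct actions the same user performs in the 60 seconds that follow. A sketch, assuming the table is called events and Time is a TIMESTAMP:

SELECT
  a.user,
  a.time AS window_start,
  COUNT(DISTINCT b.action) AS distinct_actions
FROM events AS a
JOIN events AS b
  ON a.user = b.user
  AND b.time >= a.time
  AND b.time < TIMESTAMP_ADD(a.time, INTERVAL 60 SECOND)
GROUP BY a.user, a.time
HAVING COUNT(DISTINCT b.action) >= 10;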
Instead of having to save a tab in order to rename it, you can now just double-click on it to rename it.
I also fixed and simplified the shortcuts to open, close, and switch tabs on Mac and Windows.
I'm working on a better query formatter and query cost evaluation features.
I would be happy to hear your feedback on it, or about your struggles with the UX.
https://chromewebstore.google.com/detail/bigqueryfaster/gnccnhecnbkkdekbmnhkklcjokgikgdo
In BigQuery there is a connector for Google Ads to pull Google Ads data into your tables, but there is no such connector for GA4.
I can write scripts that hit the GA4 API, but I have to go through the GA4 login every time I connect for each account, and I have a lot of accounts, so this gets tedious. Is there a way to run scripts in the Google Cloud console, or on some other platform, where I can handle the authentication once per account and not have to do it every time I need data from the GA4 API?
In other words, can I import from one of my exports and expect to be able to time travel for up to 7 days? Does the export format/method make a difference?