/r/bigquery

18,315 Subscribers

1

Help on price difference: to divide data in BQ or LookerStudio?

Hi.
I'm starting to make some visualization reports in LookerStudio, and I'm wondering if there is a price difference between splitting a large table in BQ beforehand and filtering the same way with a data extraction filter in LS.

Say I have data for categories A, B, and C in one BQ table, and I want to make a report in LS for category A only.

Is it cheaper to make a category A table in BQ then data extract in LS,
OR to use the original BQ table and extract that in LS with a filter for category A?

Even if the difference is minute, we have a lot of reports and users, so every saving counts! Thanks.

3 Comments
2024/12/02
06:58 UTC

12

Did BigQuery save your company money?

We are in the beginning stages of migrating hundreds of terabytes of data. We will likely be hybrid forever.

We have one leased line that's dedicated to off-prem BigQuery.

What's your experience been when trying to blend on-prem and off-prem data in a similar scenario?

Has moving a percentage (not all) of your data to GCP BQ saved your company money?

10 Comments
2024/12/01
19:45 UTC

2

How to filter the base table based on the values in query table while using the vector_search function in BigQuery

According to the documentation for vector_search in BigQuery, if I want to use the vector_search function, I need two things: the base table, which contains all the embeddings, and the query table, which contains the embedding(s) I want to find the closest match for.

For example:

SELECT *
FROM VECTOR_SEARCH(
  (SELECT * FROM mydataset.table1 WHERE doc_id = 4),
  'my_embedding',
  (SELECT doc_id, embedding FROM mydataset.table2),
  'embedding',
  top_k => 2,
  options => '{"use_brute_force":true}');

where table1 is the base table and table2 is the query table.

The issue I am dealing with is that I want to filter the base table based on the corresponding doc_id for each row in the query table. How do I do that?

For example - in my query table I have 3 rows:

doc_id   embedding
1        [1, 2, 3, 4]
2        [5, 5, 6, 7]
3        [9, 10, 11, 12]

I want to find the closest match for each row/embedding, but all the matches should be associated with their doc_ids. It is like applying the vector_search function above three times, but instead of doc_id = 4, I separately do doc_id = 1, doc_id = 2, and doc_id = 3.

I have thought of some approaches like:

1. Having a parameterized Python script and sending asynchronous requests. The issue with that approach is that I have to worry about having the right amount of infrastructure to scale it, and it lives outside of the BigQuery ecosystem.
2. Writing a BigQuery procedure. However, BigQuery scripts loop through the values/parameters sequentially rather than in parallel, making the process slower.
3. Running k-means on the embeddings of each document using BigQuery ML, storing the centroids of the documents in a separate table, then for each document calculating the cosine distance between the centroids and, based on the nearest centroid, querying all the values in that cluster, etc. Long story short: recreating the IVF indexing process from scratch in BigQuery at the document level.

If I could modify the vector_search function to allow filtering the base table based on the values of the query table for the corresponding row, that would save a lot of time and effort.
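
For what it's worth, a minimal sketch of the scripting idea from approach 2, assuming a hypothetical results table mydataset.results (query_doc_id, match_doc_id, distance); it runs one VECTOR_SEARCH per query doc_id, sequentially:

FOR q IN (SELECT DISTINCT doc_id FROM mydataset.table2) DO
  INSERT INTO mydataset.results (query_doc_id, match_doc_id, distance)
  SELECT
    q.doc_id,     -- the query row this match belongs to
    base.doc_id,  -- the matched row from the filtered base table
    distance
  FROM VECTOR_SEARCH(
    (SELECT * FROM mydataset.table1 WHERE doc_id = q.doc_id),
    'my_embedding',
    (SELECT doc_id, embedding FROM mydataset.table2 WHERE doc_id = q.doc_id),
    'embedding',
    top_k => 2,
    options => '{"use_brute_force":true}');
END FOR;

As noted in approach 2, the iterations run sequentially, so this doesn't solve the parallelism concern.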

3 Comments
2024/11/28
14:11 UTC

4

Best Practices for Streaming Data Modeling (Pub/Sub to BigQuery to Power BI)

I’m working on a use case where I receive streaming data from Pub/Sub into BigQuery. The goal is to transform this data and expose it in Power BI for two purposes:

1. Prebuilt dashboards for monitoring.
2. Ad-hoc analysis where users can pull metrics and dimensions as needed.

The incoming data includes:

• Orders: contains nested order items in a single table.
• Products and Warehouses: reference data.
• Sell-In / Sell-Out and Shipments: operational data streams.

My Questions:

1.	Data Modeling:
•	I'm considering flattening the data in one layer (to simplify nested structures) and then creating materialized views for the next layer to expose metrics. Does this approach sound reasonable, or is there a better design for this use case? (See the sketch after this list.)
2.	Power BI Queries:
•	Since users will query the data in real time, should I use direct queries, or would a hybrid approach combining direct and import modes be better for performance and cost optimization?
3.	Cost vs. Performance:
•	What practices or optimizations do you recommend for balancing performance and cost in this pipeline?
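
On question 1, a minimal sketch of the two-layer idea, using made-up table and column names (an orders table with a REPEATED items field); worth checking the materialized view limitations for your exact shape. One caveat: a materialized view can only reference base tables, not logical views, so the second layer repeats the UNNEST rather than selecting from the first:

-- Layer 1: flatten nested order items for ad-hoc exploration.
CREATE OR REPLACE VIEW mydataset.orders_flat AS
SELECT
  o.order_id,
  o.order_ts,
  item.product_id,
  item.quantity,
  item.unit_price
FROM mydataset.orders AS o, UNNEST(o.items) AS item;

-- Layer 2: pre-aggregated metrics for the dashboards.
CREATE MATERIALIZED VIEW mydataset.daily_product_sales AS
SELECT
  DATE(o.order_ts) AS order_date,
  item.product_id,
  SUM(item.quantity) AS units,
  SUM(item.quantity * item.unit_price) AS revenue
FROM mydataset.orders AS o, UNNEST(o.items) AS item
GROUP BY DATE(o.order_ts), item.product_id;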

I’d love to hear your thoughts and suggestions from anyone who has tackled similar use cases. Thanks in advance!

2 Comments
2024/11/22
20:11 UTC

3

Seeking advice from experts on taking over a BigQuery process

I need a starting point. A recently departed co-worker ran a process in BigQuery billed to his own account. I can access the project and see the tables, but the refreshes are a concern. When I approach IT with this, how do I frame the request? Do I need them to access his Google Cloud account as him? What are some things I should be looking out for?

7 Comments
2024/11/22
14:20 UTC

9

How can I create an API call to ingest data into a BigQuery table?

I’m going through tutorials, using ChatGPT, and watching YouTube, and I feel like I'm always missing a piece of the puzzle.

I need to do this for work, and I'm trying to practice by ingesting data into a BigQuery table from openweathermap.org.

I created the account to get an API key, started a BigQuery account, created a table, and created a service account to get an authentication JSON file.

Perhaps the Python code snippets I've been working from are not perfect. Perhaps I'm trying to do too much.

My goal is a very simple project so I can get a better understanding…and some dopamine.

Can any kind souls lead me in the right direction?

11 Comments
2024/11/21
02:05 UTC

4

Python SQL Builder with BigQuery support

Hey, we are using Python quite a bit to dynamically construct SQL queries. However, we are really doing it the hard way, concatenating strings. Is there any recommended Python package for composing BigQuery queries?

I checked out SQLAlchemy, PyPika, and some others, but I wasn't convinced they would handle BigQuery syntax better than what we currently do.

6 Comments
2024/11/20
21:49 UTC

2

BigQuery SA360 Data Transfer says Custom Columns don’t exist - help needed

Hi All,

I am trying to create an SA360 data transfer in BigQuery via the UI.

I add my custom columns using the standard JSON format, but when I run the transfer it states that the custom column with the given ID (which is 100% correct) does not exist: 'Error code 3: Custom Column with id "123456" does not exist'.

Has anyone else encountered this before and managed to resolve it?

2 Comments
2024/11/20
16:03 UTC

2

Retrieving missing data from GA4

I have missing data in my BigQuery; the data was supposed to be exported from GA4 into BigQuery,
but for a while it wasn't, so is there a way to retrieve the missing data?
For example, I have data from 2022 until March 2023, but six months are missing.
My question is how to retrieve this data and save it in the events table in my BigQuery.

4 Comments
2024/11/20
07:38 UTC

3

Can we use Python in BigQuery for Looker Studio reports?

Heya,

I want to create some statistical calculations in bigquery for significance testing. For this I'd need python.

How easily can the two be connected?

11 Comments
2024/11/19
15:25 UTC

1

Purge older partitions without incurring query costs

I have huge tables, about 20 TB each, partitioned by date going back to 2016, and we no longer need all the legacy information. I tried to perform a DELETE statement using a timestamp, but it incurs huge query costs to execute. Is there a better way to do it without incurring query costs?

EDIT: I want to delete data prior to 2022 and keep data from 2022 and 2023 and going forward
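
Two options worth checking, assuming the table is partitioned on a DATE column (hypothetical name partition_date here): a DELETE whose filter is a constant predicate on the partitioning column alone can be executed against whole partitions as a metadata operation rather than a scan, and a partition expiration drops old partitions automatically as a free metadata operation:

-- Option 1: one-off purge of everything before 2022. Keeping the filter
-- to a constant predicate on the partitioning column lets BigQuery drop
-- whole partitions without scanning them.
DELETE FROM mydataset.big_table
WHERE partition_date < DATE '2022-01-01';

-- Option 2: rolling cleanup going forward (3 years ~= 1096 days).
ALTER TABLE mydataset.big_table
SET OPTIONS (partition_expiration_days = 1096);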

12 Comments
2024/11/17
18:47 UTC

5

Bringing multiple data sources into BigQuery - beginner question

Hi, I'm trying to bring in multiple streams of data from:

  1. Search console (100+ acc)
  2. Google analytics (20+ acc)
  3. Airtable
  4. Google sheet
  5. Few custom api

The data isn't huge, but new Search Console accounts are constantly being added. What is the best way to bring the data in? I'm not really a coder.

I am considering a few tools, but they seem quite costly as the data adds up:

  1. Windsor
  2. Hevo
  3. Airbyte

Is there any decent, affordable tool at below $100 per month for the above usage?

PS: I'd prefer a tool that can ingest historical data; the native integrations from Search Console and Analytics bring in overly complicated data and can't backdate.

20 Comments
2024/11/14
02:01 UTC

1

Per routine performance metrics

Is there a way to get performance metrics on a per routine (stored procedure) basis? I can see the information I want in information_schema.jobs but don't know how to link a job to a routine.
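
One way to make the link, sketched under the assumption that the procedures are invoked with CALL: the CALL statement shows up as a parent script job, and each statement inside the routine runs as a child job carrying parent_job_id, so child-job stats can be rolled up to the calling statement (US region assumed; adjust the qualifier):

SELECT
  parent.query AS call_statement,
  COUNT(*) AS child_jobs,
  SUM(child.total_bytes_processed) AS total_bytes_processed,
  SUM(child.total_slot_ms) AS total_slot_ms
FROM `region-us`.INFORMATION_SCHEMA.JOBS AS child
JOIN `region-us`.INFORMATION_SCHEMA.JOBS AS parent
  ON child.parent_job_id = parent.job_id
WHERE STARTS_WITH(UPPER(parent.query), 'CALL')
GROUP BY call_statement
ORDER BY total_slot_ms DESC;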

4 Comments
2024/11/13
17:09 UTC

5

Pricing of Storage compared to Snowflake

Hi, I have a question regarding the cost of storage in BigQuery (compared to Snowflake for the sake of a benchmark).

Storage would be in Europe, so BigQuery charges $0.02/GiB for logical data and $0.044/GiB for physical (compressed) data. You can choose the billing model per dataset.

Snowflake, in comparison, charges $0.02/GB on GCP in Europe and always uses compressed data for that calculation.

In my understanding, that would mean Snowflake is always cheaper than BigQuery when it comes to storage, by up to 50%. Is my thinking correct? I read everywhere that they don't differ much in storage cost, but up to 50% less cost, with an easier calculation that needs no further thought about compression, is a big difference.

Did I miss something?
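
One check before concluding, since BigQuery's two models differ by a factor of 2.2 ($0.044 vs $0.02): physical billing beats logical billing whenever your compression ratio exceeds 2.2, and that ratio is easy to inspect. A sketch (EU multi-region assumed; note that physical billing also counts time-travel bytes):

SELECT
  table_schema AS dataset,
  ROUND(SUM(active_logical_bytes) / POW(1024, 3), 1) AS logical_gib,
  ROUND(SUM(active_physical_bytes) / POW(1024, 3), 1) AS physical_gib,
  ROUND(SAFE_DIVIDE(SUM(active_logical_bytes),
                    SUM(active_physical_bytes)), 2) AS compression_ratio
FROM `region-eu`.INFORMATION_SCHEMA.TABLE_STORAGE
GROUP BY dataset
ORDER BY logical_gib DESC;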

17 Comments
2024/11/12
15:35 UTC

1

How to see the total storage used in Google BigQuery?

I'm a BigQuery beginner who's trying to understand how to track things.

I'm trying to use BigQuery for some querying, but I need to be careful not to go over 10 GB of storage or 1 TB of processing, because I do not want to be charged and wish to remain on the free tier.

I am uploading multiple CSV files to BigQuery, but I cannot find the page that shows the total storage of all the files I've uploaded. I need to be able to see it so that I do not go over the limit as I upload.

Where exactly can I see the total BigQuery storage I've used, as well as the processing I've done per month? There should be something that lets me track those things via the UI, right? No matter how I search online, I cannot find the answer to what, IMO, should be something quite simple.
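
If the UI doesn't surface them where you expect, both numbers can be pulled from INFORMATION_SCHEMA. A sketch, assuming your datasets live in the US multi-region (adjust the region qualifier):

-- Total stored bytes across all datasets in the project.
SELECT
  ROUND(SUM(active_logical_bytes + long_term_logical_bytes)
        / POW(1024, 3), 2) AS stored_gib
FROM `region-us`.INFORMATION_SCHEMA.TABLE_STORAGE;

-- Query bytes billed so far this month.
SELECT
  ROUND(SUM(total_bytes_billed) / POW(1024, 4), 3) AS billed_tib
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE job_type = 'QUERY'
  AND creation_time >= TIMESTAMP_TRUNC(CURRENT_TIMESTAMP(), MONTH);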

8 Comments
2024/11/12
00:44 UTC

5

Getting data from GA4 API with cloud functions?

How hard is it to write a custom Cloud Function that downloads Google Analytics 4 API data? Are there any examples? I tried to find some, but it seems like nothing is out there on the internet.

The reason for a Cloud Function is that the GA4 BigQuery export is such a mess that it's hard to match the UI numbers.

Thanks a lot!

13 Comments
2024/11/10
18:46 UTC

1

GA4 and BigQuery

Hello, I linked Google Analytics to BigQuery, but I want to save the data in a more structured and organized way, so I decided to create a data warehouse schema for it, both to keep things organized and to make it easier to use from Power BI.
My question here is about the schema itself, because I've created several but feel I need a better solution.

Has anyone created something like that before, or does someone have a better idea than mine?

8 Comments
2024/11/10
18:00 UTC

7

Do you think GA4's horribleness is a sneaky strategy to get us to start paying for BigQuery and GCP, or just Google completely missing the mark?

Sometimes I really feel like GA4 is a sales strategy to push us toward GCP—kind of like how they encourage us to use Google Tag Manager even though it can slow down websites, only to then suggest server-side tracking (also on GCP). Maybe it's a tinfoil hat moment, but curious what others think!

7 Comments
2024/11/08
02:48 UTC

3

Should filenames be unique in Dataform?

In Dataform, you can reference dependencies by filename, as stated below:

> Replace DEPENDENCY with the filename of the table, assertion, data source declaration, or custom SQL operation that you want to add as a dependency. You can enter multiple filenames, separated by commas

(https://cloud.google.com/dataform/docs/dependencies#config-block-dependencies)

Does this mean filenames should be unique within the repository? I was not able to find any such requirement in the documentation, and I was wondering if there are any best practices/rules around file names.

4 Comments
2024/11/06
03:28 UTC

9

An article on UDFs

Hi all! I've recently started learning about UDFs (user-defined functions) and found them surprisingly cool and useful. I wrote an article with some function ideas, I would appreciate it a lot if you check it out and let me know what you think!

https://medium.com/@lilya.tantushyan/6-udf-ideas-in-bigquery-funwithsql-918cf2dc6496

5 Comments
2024/11/05
12:54 UTC

6

How come Looker Studio gives me different rates than BigQuery?

So I'm calculating conversion rates...

In BigQuery, my code looks like this:

-- Weekly grouping computed in BigQuery
SELECT
  EXTRACT(ISOWEEK FROM date) AS iso_week_number,
  COUNT(DISTINCT user) AS total_user,
  COUNT(DISTINCT CASE WHEN ts_pcc < ts_tran THEN transaction_id ELSE NULL END) AS conversion
FROM prep
GROUP BY ALL

Which on average gives me 1-2%

However If I instead do

-- Daily grouping; the ISO-week rollup happens later in Looker Studio
SELECT
  date,
  COUNT(DISTINCT user) AS total_user,
  COUNT(DISTINCT CASE WHEN ts_pcc < ts_tran THEN transaction_id ELSE NULL END) AS conversion
FROM prep
GROUP BY ALL

and explore it in Looker Studio, setting the date format to ISO week, then my percentages are wildly different (more toward 6-10%).

This percentage is done in a calculated field where I do: conversions / total_users

Am I missing something?

11 Comments
2024/11/05
12:31 UTC

17

Divide the query to make it dirt cheap

The post is about geospatial queries, but could be helpful with other types of queries too

https://mentin.medium.com/divide-the-query-to-improve-cost-and-performance-df310a502a07

3 Comments
2024/11/03
23:55 UTC

5

Limiting BQ costs

Hi all, I use only a fraction of the free tier allowances, but I wish to set up quotas to prevent a huge bill should anything go wrong. I've set "Query usage per day" to 100 GiB, on the assumption that the most I'll be able to use in a month if something goes very wrong is around 3 TiB, which will cost me $12, give or take.

Do I have this set up correctly, and are there any other quotas I'd need to set to ensure that I can't accidentally run up a bill of, say, $100 or more?

What about storage - can I limit this in some way too?

Thanks!

5 Comments
2024/11/03
15:41 UTC

6

How to download all the SQL queries

How do I download all the SQL query inputs written in the Google BigQuery console as a .txt file?
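
One approach, sketched: queries that were actually run live in the job history, which you can query and then save as a file from the console. Note that the JOBS views only keep roughly the last 180 days, and queries that were written but never executed won't appear (US region assumed):

SELECT creation_time, query
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_USER
WHERE job_type = 'QUERY'
ORDER BY creation_time DESC;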

6 Comments
2024/11/01
18:37 UTC

2

Searching a column in a database

I am currently looking for a particular column that is available in some dataset. I have no idea which dataset it belongs to, but I want to use that column in a query in the Google BigQuery console. How do I find it?
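
A sketch of one way to search every dataset in a region at once, using a made-up column name (adjust the region qualifier and the pattern):

SELECT table_schema, table_name, column_name
FROM `region-us`.INFORMATION_SCHEMA.COLUMNS
WHERE LOWER(column_name) LIKE '%customer_id%'
ORDER BY table_schema, table_name;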

4 Comments
2024/11/01
16:30 UTC

1

How do I rename a saved query in BigQuery?

Same as the title.

3 Comments
2024/11/01
15:34 UTC

2

Is BigQuery right for me?

I currently import all of my company's historic sales into Google Sheets and have created several dashboards and reports based on the data. My problem is the data set is getting far too large and everything is operating quite slowly.

Currently I have about 200k rows and 15 columns, and I add roughly 100 new rows of data daily, around 36,500 yearly.

I've read that BigQuery may be a solution: host my data there and mirror it in Google Sheets, so that Sheets is not storing my data and slowing down.

Is BigQuery right for me? Would there be any costs associated with this? Are there any other recommendations out there?

Appreciate it!

13 Comments
2024/11/01
14:58 UTC

1

REST api to JSON

I'm using the BigQuery REST API via Postman. I want to run SELECT * FROM <table_name>, but when I do, the output is unreadable (nested "v" and "f" keys). How can I convert it into key:value output? I tried selecting every field individually, which gave results but was very tedious. I need a workaround.
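
One workaround, sketched with a made-up table name: have BigQuery serialize each row to JSON on the server side, so each response row is a single readable string keyed by column name instead of the nested v/f structure:

SELECT TO_JSON_STRING(t) AS row_json
FROM mydataset.my_table AS t;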

4 Comments
2024/10/29
08:06 UTC

18

Tired of BQ UX, I created a Chrome extension to fix it

13 Comments
2024/10/28
18:28 UTC
