/r/storj
Storj is private by design and secure by default, delivering unparalleled data protection and privacy vs. centralized cloud object storage alternatives. Developers can trust in innovative decentralization technology to take ownership of their data and build with confidence. For more info visit https://www.storj.io.
Storj is based on blockchain technology and peer-to-peer protocols to provide the most secure, private and efficient cloud storage.
This subreddit is intended for discussion related to Storj and its various applications as well as projects and ideas related to the Storj project.
Kindly read the FAQ and Community Rules before creating a new post.
Hi everyone,
The Storj documentation says '2 TB of available space per storage node process' and '1.5 TB per month of transit per TB of storage node capacity; unlimited preferred'.
Does the transit capacity (minimum 3TB per month, going by the above) have to be on the same drive as the main node capacity (2TB), or should the transit capacity be on a separate disk?
I switched to Storj because I was under the impression that it functioned the same as S3. But I've noticed that ACLs don't do anything. When I upload an object with authenticated-read, it can be seen by the whole world. When I try to call "putObjectAcl", it fails every single time with "NotImplemented: A header you provided implies functionality that is not implemented".
Does Storj not support ACLs? If so, is it still possible to change the visibility of individual files and create signed requests?
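For context, what I'm hoping still works is something like presigned URLs through the S3 gateway, e.g. the sketch below (the endpoint and keys are placeholders, and I haven't confirmed Storj's gateway behaves exactly like AWS here):

```python
# Minimal sketch: generate a time-limited presigned URL via Storj's
# S3-compatible gateway instead of relying on object ACLs.
# Endpoint and credentials are placeholders for whatever your project issues.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://gateway.storjshare.io",  # hosted S3 gateway (assumed)
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# URL valid for one hour; the object itself can stay private.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "photos/example.jpg"},
    ExpiresIn=3600,
)
print(url)
```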
Based on Storj pricing, they collect about $0.13 from a client using 15 GB of egress and storing ~8 GB.
Considering the same file chunk is stored on multiple nodes, let's say 3 nodes: 0.04 * 3 = 0.12.
So Storj only profits $0.01?
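To make my back-of-the-envelope math explicit (the per-GB prices and the $0.04-per-node figure are my own reading and assumptions, and I know Storj actually uses erasure coding rather than full copies on 3 nodes):

```python
# Toy revenue-vs-payout model; all rates are assumptions, not official numbers.
egress_gb    = 15
stored_gb    = 8
egress_rate  = 0.007   # $ per GB egress (assumed from the pricing page)
storage_rate = 0.004   # $ per GB-month stored (assumed from the pricing page)

revenue = egress_gb * egress_rate + stored_gb * storage_rate
payout  = 0.04 * 3     # assumed cost per node, times 3 nodes

print(round(revenue, 3), round(payout, 2), round(revenue - payout, 3))
# ~0.137 revenue vs 0.12 payout -> roughly $0.01-0.02 margin in this toy model
```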
I have ~12TB available and I will start a node.
As far as I understand, we can increase a node's allocated space but we cannot decrease it.
So if I start a node with 12TB and I need the space back in the future, I will be in trouble.
Does it make sense to run 3 nodes with ~4TB each instead? That way, I could shut down individual nodes to claim the space back.
Is it a good strategy? What am I not seeing here?
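For context, what I have in mind is something like the sketch below, using the Docker SDK for Python. The paths, wallet, address and ports are placeholders, and I understand each node would need its own generated identity and its own authorization token:

```python
# Sketch of running three ~4TB nodes instead of one 12TB node, using docker-py
# and the published storjlabs/storagenode image. Everything node-specific here
# (wallet, address, ports, paths) is a placeholder.
import docker
from docker.types import Mount

client = docker.from_env()

for i in range(1, 4):
    ext_port = 28966 + i    # 28967, 28968, 28969 all need to be forwarded
    dash_port = 14001 + i   # separate web dashboard per node
    client.containers.run(
        "storjlabs/storagenode:latest",
        name=f"storagenode{i}",
        detach=True,
        restart_policy={"Name": "unless-stopped"},
        environment={
            "WALLET": "0xYOURWALLET",
            "EMAIL": "you@example.com",
            "ADDRESS": f"your.dyndns.example:{ext_port}",
            "STORAGE": "4TB",
        },
        ports={
            "28967/tcp": ext_port,
            "28967/udp": ext_port,
            "14002/tcp": dash_port,
        },
        mounts=[
            # Each node must have its OWN identity directory; identities cannot be shared.
            Mount(target="/app/identity", source=f"/mnt/storj/identity{i}", type="bind"),
            Mount(target="/app/config", source=f"/mnt/storj/node{i}", type="bind"),
        ],
    )
```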
So, we're hosting data with Backblaze B2 and aren't really happy with their model.
Anyone have experience hosting several hundred GiB on Storj (and offering it publicly)? From what I can tell, we would have to run our own gateway, which kind of means we're doubling the bandwidth usage (data into our gateway, data out of our gateway).
Bandwidth is "multiple TiB a month".
Why doesn't Storj develop a front end service that operates and targets users similar to Google Drive & Dropbox? This would increase usage, demand and ofc revenue for node operators. It really doesn't make sense to me.
Yes, you can set this up through third-party software, but that is not what the masses do or use.
Just got a node up and running with 20TB. It seems to be working well, but I'm really disappointed in the dashboard, especially after seeing that all of the previous month (Dec) has now disappeared. It doesn't seem like there is any configurability in the dashboard.
After digging around quite a bit, it looks like people have created their own solutions based on Grafana or something similar. Everything I can find also seems to be at least a couple of years old. What are the latest common practices for monitoring a Storj node? If it is still a Grafana-like approach, are there any guides for someone that doesn't have much experience with it?
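For now I hacked together something minimal by polling the JSON endpoint the web dashboard itself appears to use. The /api/sno/ path and field names are what I observed on my node version, so verify them on yours; this is not a stable public API:

```python
# Minimal monitoring sketch: poll the storagenode dashboard API on port 14002
# and print a few fields. Adjust host/port/path for your own node.
import json
import urllib.request

DASHBOARD = "http://localhost:14002/api/sno/"

with urllib.request.urlopen(DASHBOARD, timeout=10) as resp:
    data = json.load(resp)

disk = data.get("diskSpace", {})
print("Node ID:        ", data.get("nodeID"))
print("Disk used:      ", disk.get("used"))
print("Disk available: ", disk.get("available"))
print("Last pinged:    ", data.get("lastPinged"))
```

The community Grafana setups seem to do the same thing at scale: scrape these numbers on a schedule and graph them.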
I want to get ETH, but I don't want to waste 25% in fees. If I understand correctly, it waits until the payout is at least 4x the gas fee.
So if my payout is $6 and the fee is $1.5, I don't want to receive just $4.5
Can I set my own threshold? If not, where can I submit the feature idea?
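To be concrete about the rule as I understand it (the 4x factor is just my understanding, not a confirmed Storj parameter):

```python
# Payout is only sent when it is at least `factor` times the current gas fee.
def payout_sent(payout_usd: float, gas_fee_usd: float, factor: float = 4.0) -> bool:
    return payout_usd >= factor * gas_fee_usd

print(payout_sent(6.00, 1.50))  # True  -> sent, netting 6.00 - 1.50 = 4.50
print(payout_sent(5.99, 1.50))  # False -> held over to a later payout period
```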
https://youtu.be/c0RpGxIV8sQ?si=tuQFGza1gsTgP8i4
I just built this crazy 10-bay HDD setup running on a Raspberry Pi 5. I think some of you might be interested in how this works! :)
Merry Christmas 🎄 Best, Andreas
Hello, I have a spare mini PC with a 9TB HDD connected. Is it possible to earn some coins with that on Storj?
Regards.
I'm a node operator who is interested in learning more about the companies/customers using Storj (on the public and/or Select networks). I've listened to their Town Halls where some of these are mentioned, but I'm sure the list is far from comprehensive. Does anyone know if Storj Labs provides any reporting of this information? I think it would be wonderful if they started providing it to their Node Operator community at least... it would interest me even more to know what kinds of use cases and data my HDDs might be storing...
What explains this dip? I find it hard to believe that a huge amount of data was deleted and re-uploaded. Is it some sort of glitch?
Edit: is anyone experiencing the same issue?
STILL?!
Storj leaves files open, hanging my shutdown until I kill the PID.
Been going on fooorrreeeeeveeerrrrr....
Hello. I'm new to Storj, and to S3 storage in general. I'm kind of confused but getting by. I'm really surprised by the speeds. Is there a way to see the transfer speed per second instead of an ETA? The ETA is kind of weird; I think it's per file, because when a file finishes the time changes.
I have mounted the storage as a drive with Mountain Duck. I've also heard about something called "TrueNAS"; what is that?
Hello,
first time using Storj, but I am using it to back up some very important data from my Synology to the cloud.
When I log in to the console and explore the bucket, I cannot see my actual files and folders; I can only see an .hbk folder, and inside it there are some system folders like Config, Guard, Pool, etc.
I do not understand why I cannot explore my files and maybe download a single one if needed. I don't know if this is Storj encryption that won't let me see my actual data, or if it is the Synology that is just uploading the backup in this format.
I suppose that if I ever need to restore the backup from Synology Hyper Backup, the "Restore" functionality should work. But that means it is basically going to download the whole bucket and restore everything. I would really like the freedom to see my files and download only what I need, though.
So in the end, is this Storj's or Synology's fault? Is there a way to bypass this?
How does Storj ensure file durability? After a file is split, what prevents the handful of nodes that happen to hold all the pieces needed for that file from being destroyed over time? (It doesn't have to happen instantly; it can happen gradually.)
Are there any mechanisms in place that ensure durability over time (beyond Reed-Solomon and having enough sub-pieces)?
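For what it's worth, here is the rough model I have in my head. I'm assuming the k=29 / n=80 Reed-Solomon numbers that are commonly cited for Storj, which I haven't verified against the current code:

```python
# Rough durability sketch: a segment is erasure-coded into n pieces, any k of
# which reconstruct it, and each piece survives a period independently with
# probability p. The k=29 / n=80 values are assumed, not verified.
from math import comb

def p_segment_lost(k: int, n: int, p_piece_survives: float) -> float:
    """Probability that fewer than k of n independent pieces survive."""
    q = p_piece_survives
    return sum(comb(n, i) * q**i * (1 - q)**(n - i) for i in range(k))

# Even if each node only keeps its piece with 90% probability over the period,
# the chance of dropping below the reconstruction threshold is astronomically small.
print(p_segment_lost(29, 80, 0.90))
# In practice the satellite also audits pieces and repairs them onto new nodes
# once the count falls to a repair threshold, so the survival odds keep resetting.
```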
My average disk space used has stayed constant at around 700-800 GB and hasn't gone beyond 1TB for the last 5-6 months. What could be the reason?
Have you actually used Storj as a “customer”?
Curious your thoughts and what made you use it instead of more traditional methods of file storage.
Would you recommend it to non-crypto-native family members?
Need help from you guys. I just started a node on TrueNAS Scale; I had a spare disk, and since the NAS is already running anyway, why not make a few bucks to pay for electricity?
Anyway, I started the node, got the ID, and my node is working and receiving traffic. I got to 100 GB stored in about two days. But if I restart TrueNAS, or restart the Storj app in Docker, my used space drops to 10 GB. Every restart I lose stored data. The files are there, since the dataset still occupies the same space it had before the restart and I can browse to them, but the node is not counting that stored data.
Anyone know what the deal is?
I'm currently using Storj with Duplicati for backups and aiming to prevent ransomware from being able to delete or tamper with stored backups. By restricting Duplicati to a read/write-only (no delete) access grant on Storj, I know I can limit deletion permissions, but this also prevents me from setting up a retention policy directly in Duplicati.
Has anyone managed a similar setup on Storj? Are there recommended practices for balancing retention with ransomware protection, possibly through Storj’s native features, immutability settings, or automated solutions? Any insights on achieving a secure and efficient backup setup would be greatly appreciated!
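One route I've been wondering about is S3 Object Lock, roughly as sketched below, assuming the gateway and bucket actually support it (the endpoint and credentials are placeholders; check current Storj docs before relying on it):

```python
# Sketch of an immutability-based approach: upload with S3 Object Lock in
# COMPLIANCE mode so the object cannot be deleted or overwritten before the
# retain-until date. Requires a bucket created with Object Lock enabled, and
# assumes the Storj S3 gateway supports it.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://gateway.storjshare.io",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

retain_until = datetime.now(timezone.utc) + timedelta(days=30)

with open("backup-2024-01-01.zip", "rb") as f:
    s3.put_object(
        Bucket="backups",
        Key="duplicati/backup-2024-01-01.zip",
        Body=f,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=retain_until,
    )
```

That would still leave retention cleanup to a separate, trusted job rather than to Duplicati itself, which is the trade-off I'm unsure about.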
Thanks!
I have a home server with heaps of spare storage, and remember testing out node operations back in the early days and even receiving STORJ to some wallet that's been sitting idle for years since...
I thought I'd explore this given I've been exploring docker containers lately. Observations:
Easy to set up! It only took about 20 minutes to generate the necessary tokens - apparently very lucky, as it seems some people spend hours to get one.
Nice UI / dashboard - I loved geeking out about the data coming in and seeing my volume expand, and get a nice 100% figure for uptime etc.
Then - set and forget. It just worked. Great! I can just watch money pouring in!
But... then a few days later I can literally hear my drive thrashing about whenever I'm nearby, and looking at the processes I see a huge number of files being written/read by Storj. This is basically non-stop.
A couple of days after that I checked out the Storj earnings estimator spreadsheet. $39/year, maybe. $3.22/month.
No point in even doing a graceful exit. Opened up Docker and dumped the image.
I'm happy to have tried, but even with hardware at the ready and expendable, it makes absolutely no sense to run this. Sure, maybe if you have 100TB? And that would only take you what... 8 years to achieve?
I won't say Storj is unsustainable given they still manage to stay around - but it feels like irrational sustainability, carried by node operators who are penny-pinching?
Hi friends,
Using Storj as cloud backup for my media, and I have uploaded about 10 GB of pics and videos already. Can't seem to find a way to change file names though. How can I do it?
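From what I've read, S3-style storage (including Storj's S3-compatible gateway) has no real rename; the usual workaround seems to be copy-then-delete, something like the boto3 sketch below (endpoint, keys, bucket and object names are placeholders):

```python
# "Rename" an object by copying it to the new key and deleting the old one.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://gateway.storjshare.io",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

bucket = "media"
old_key = "pics/IMG_0001.jpg"
new_key = "pics/beach-day-2024.jpg"

s3.copy_object(Bucket=bucket, Key=new_key,
               CopySource={"Bucket": bucket, "Key": old_key})
s3.delete_object(Bucket=bucket, Key=old_key)
```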
I'm looking to start storing data for Storj and have a few server options laying around. All of them are Supermicro platforms, a mix of X10 (Broadwell Xeon E5) and X11 (Skylake Scalable Xeon) boards, with a few complete systems of each configuration. However, I don't know whether more drives per physical server would be best, or whether breaking it up across multi-node systems would be ideal, as a lot of people seem to advise using fewer drives per node. I've got a metric pile of 4TB Exos enterprise drives, so they will be used regardless of the server chosen.
Servers:
36-bay Supermicro: 144 TB raw capacity (36 x 4 TB)
FatTwin Supermicro, 2 nodes in 2U, 12 bays total (6 per node): 48 TB raw, 24 TB per node
TwinPro Supermicro, 4 nodes in 2U, 12 bays total (3 per node): 48 TB raw, 12 TB per node
I set up Storj yesterday on TrueNAS Scale because I have far more storage than I need, so I allocated 10TB, and it has been deploying since about 6pm yesterday. Is this normal, or should I stop it and try again?