/r/storj
Storj is private by design and secure by default, delivering unparalleled data protection and privacy vs. centralized cloud object storage alternatives. Developers can trust in innovative decentralization technology to take ownership of their data and build with confidence. For more info visit https://www.storj.io.
Storj is built on peer-to-peer protocols and blockchain-based payments to provide secure, private, and efficient cloud storage.
This subreddit is intended for discussion related to Storj and its various applications as well as projects and ideas related to the Storj project.
Kindly read the FAQ and Community Rules before creating a new post.
Hello,
This is my first time using Storj; I'm using it to back up some very important data from my Synology to the cloud.
When I log in to the console and explore the bucket, I cannot see my actual files and folders; I can only see an .hbk folder, inside which are some system folders like Config, Guard, Pool, etc.
I would like to understand why I can't explore my files and maybe download a single one if needed. I don't know if this is Storj encryption that won't let me see my actual data, or if it is Synology that is just uploading the backup in this format.
I suppose that if I ever need to restore the backup from Synology Hyper Backup, the "Restore" functionality should work. But that means it is basically going to download the whole bucket and restore everything. I would really like the freedom to see my files and download only what I need.
So in the end, is this Storj's or Synology's doing? Is there a way around this?
How does Storj ensure file durability? After a file is split, what prevents the handful of nodes that happen to hold all the pieces needed to reconstruct it from being destroyed over time? (It doesn't have to happen instantly; it can happen gradually.)
Are there any mechanisms in place that ensure durability over time (beyond Reed-Solomon producing enough pieces)?
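The short answer is that satellites continuously audit nodes and trigger repair when the number of healthy pieces for a segment drops below a repair threshold, re-creating lost pieces on new nodes before too many disappear. A minimal sketch of why erasure coding plus repair gives high durability; the piece counts and per-node survival rate below are illustrative assumptions, not Storj's exact parameters:

```python
from math import comb

def survival_probability(n, k, p_node_alive):
    """P(at least k of n independently stored pieces survive),
    i.e. the binomial tail from k to n."""
    return sum(comb(n, i) * p_node_alive**i * (1 - p_node_alive)**(n - i)
               for i in range(k, n + 1))

# Example: 80 pieces stored, any 29 sufficient to reconstruct,
# 90% chance each node survives between repair passes.
print(survival_probability(80, 29, 0.9))  # very close to 1.0
```

Because only a fraction of the pieces is needed, losing nodes one by one barely moves the tail probability, and repair resets the healthy-piece count before it ever approaches the reconstruction threshold.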
My average disk space used has stayed constant at around 700-800 GB and hasn't gone beyond 1 TB for the last 5-6 months. What could be the reason?
Have you actually used Storj as a “customer”?
Curious about your thoughts and what made you use it instead of more traditional methods of file storage.
Would you recommend it to non-crypto-native family members?
Need help from you guys. I just started a node on TrueNAS Scale; I had a spare disk, and since the NAS is already running, why not make a few bucks to help pay for electricity?
Anyway, I started the node, got an ID, and it is working and receiving traffic; I got to 100 GB stored in about two days. But if I restart TrueNAS, or restart the Storj app in Docker, my used space drops to 10 GB. Every restart I lose stored data. The files are there, since the dataset still occupies the same space it had before the restart and I can browse to them, but the node is not counting that stored data.
Anyone know what the deal is?
I'm currently using Storj with Duplicati for backups and aiming to prevent ransomware from being able to delete or tamper with stored backups. By restricting Duplicati to an access grant without delete permission on Storj, I know I can prevent deletions, but this also prevents me from enforcing a retention policy directly in Duplicati.
Has anyone managed a similar setup on Storj? Are there recommended practices for balancing retention with ransomware protection, possibly through Storj’s native features, immutability settings, or automated solutions? Any insights on achieving a secure and efficient backup setup would be greatly appreciated!
Thanks!
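One approach is to give Duplicati a restricted access grant created with the `uplink` CLI. The sketch below assumes a bucket/prefix named `sj://backups/duplicati`; the flag names reflect the uplink CLI as I understand it, so confirm them with `uplink share --help` before relying on this:

```shell
# Create a restricted access grant for Duplicati: it can list, read, and
# write objects, but not delete them, so ransomware holding this grant
# cannot purge existing backups.
# (--readonly defaults to true in uplink share, hence the explicit override.)
uplink share sj://backups/duplicati \
    --readonly=false \
    --disallow-deletes
```

The trade-off described above is real: without delete permission, Duplicati cannot prune old versions itself, so retention has to be applied separately with a more privileged grant, ideally from a machine the ransomware cannot reach. Storj's S3-compatible gateway has also been adding object-lock style immutability, which is worth checking in the current docs.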
I have a home server with heaps of spare storage, and remember testing out node operations back in the early days and even receiving STORJ to some wallet that's been sitting idle for years since...
I thought I'd explore this given I've been exploring docker containers lately. Observations:
Easy to set up! It only took about 20 minutes to generate the necessary identity token; apparently very lucky, as it seems some people spend hours on it.
Nice UI / dashboard - I loved geeking out about the data coming in and seeing my volume expand, and get a nice 100% figure for uptime etc.
Then - set and forget. It just worked. Great! I can just watch money pouring in!
But... then a few days later I could literally hear my drive thrashing whenever I was in the vicinity, and looking at processes I saw a huge number of files being written and read by Storj, basically non-stop.
A couple of days after that I check out the storj earnings estimator spreadsheet. $39/year, maybe. $3.22/month.
No point even in a graceful exit. I opened up Docker and dumped the image.
I'm happy to have tried, but even with hardware at the ready and expendable, it makes absolutely no sense to run this. Sure, maybe if you have 100 TB? But that would take you what... 8 years to fill?
I won't say Storj is unsustainable, given they still manage to stay around, but it feels like irrational sustainability thanks to node operators who are penny-pinching.
Hi friends,
Using Storj as a cloud backup for my media, and I have uploaded about 10 GB of pics and videos already. Can't seem to find a way to change file names, though. How can I do it?
I'm looking to start storing data for Storj and have a few options for servers that I have lying around. Both are Supermicro platforms, a mix of X10 (Broadwell Xeon E5) and X11 (Skylake Scalable Xeon) boards, with a few complete systems of each configuration. However, I don't know whether more drives per physical server would be best, or whether breaking it up across multi-node systems would be ideal, as many advise using fewer drives per node. I've got a metric pile of 4 TB Exos enterprise drives, so they will be used regardless of the server chosen.
Servers:
36-bay Supermicro: 144 TB (36x 4 TB) raw capacity
FatTwin Supermicro, 2 nodes in 2U, 12 bays total (6 per node): 48 TB raw, 24 TB per node
TwinPro Supermicro, 4 nodes in 2U, 12 bays total (3 per node): 48 TB raw, 12 TB per node
I set up Storj yesterday on TrueNAS Scale because I have far more storage than I need, so I allocated 10 TB, and it's been "deploying" since about 6 PM yesterday. Is this normal? Should I stop it and try again?
Hi storjiers, I called graceful exit on one of my nodes because I have to move and will not be able to maintain uptime. I called graceful exit at least 48 hours ago (likely more), and the four satellites are still showing 0.00% complete.
Am I just being impatient?
I am starting up a Storj node as a fun hobby project. I created my node on a test server on Sept 11. It's storing 291GB, has passed 208 audits, and has an upload success rate of 99.72%. The project is pretty damn cool!
Anyway, the purpose of this post is I'm building out a server to move the node to where it will live permanently. It's a 4-bay 1U chassis. I have 4x 12TB HGST (HUH721212AL56S0) that are perfect for the project. I'm heavily leaning towards building out a ZFS RAIDz1 array with these drives, which would allow me to run a single node on it and give 1 disk redundancy.
Is this a good idea? It seems much easier to manage than one node per drive: less management overhead, better disk performance, and drive redundancy. The downsides are 12 TB of space lost to redundancy, and losing everything if the node gets disqualified. I am also aware it would take a long time to fill.
Or should I just stick with one node per drive, running one drive until it's nearly full (I'm aware of the /24 rule)?
Thanks for the advice. I'd like to do it right from the start and not have to restructure later :) :)
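For what it's worth, the raw numbers behind the trade-off are simple (drive count and size taken from the post; filesystem and ZFS overhead ignored):

```python
# Rough capacity comparison for 4x 12 TB drives.
drives, size_tb = 4, 12

raidz1_usable = (drives - 1) * size_tb   # one drive's worth goes to parity
independent_total = drives * size_tb     # 4 separate nodes, no redundancy

print(raidz1_usable, independent_total)  # 36 48
```

So RAIDz1 trades 12 TB of sellable capacity for the ability to survive a single drive failure, while per-drive nodes keep all 48 TB but lose that drive's node (and its held amount) on any failure.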
I read through this guide: https://support.storj.io/hc/en-us/articles/360026612332-Install-storagenode-on-Raspberry-Pi3-or-higher-version
I'm at the step "Setup the storagenode before the run", a bit further down where port number 28967 appears.
Could this be replaced with, say, 10000? The reason is that I'm behind CG-NAT and can't port forward, but I can use Tailscale Funnel, which limits me to ports 443, 8443, and 10000.
The speed is also only around 10/10, but I think that's the least of my problems.
EDIT: is this what i'm looking for? https://forum.storj.io/t/how-to-change-node-port/2604
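If it helps, with the Docker setup the usual way to use a different external port is to change only the host side of the port mapping and the advertised address; the container keeps listening on 28967 internally. A sketch, with `your.hostname.example` as a placeholder and the remaining flags (`-e WALLET`, `-e EMAIL`, `--mount`, etc.) as in the official guide:

```shell
# Map external port 10000 to the node's internal 28967 (TCP and UDP/QUIC),
# and advertise that port via the ADDRESS environment variable.
docker run -d --restart unless-stopped \
  -p 10000:28967/tcp \
  -p 10000:28967/udp \
  -e ADDRESS="your.hostname.example:10000" \
  storjlabs/storagenode:latest
```

One caveat worth testing first: Tailscale Funnel is designed around TLS/HTTPS services, so whether the storage node's own protocol survives that proxying is not guaranteed.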
I'm a noob/idiot/old fart...
I get that if I want to be paid in Ether I can go create a wallet for that. I don't have any crypto wallets, as I'm not a big fan of crypto in general (I like normal human money).
But for zkSync Era, I see links for it, yet they don't open zkSync Era, just the general zkSync page.
So how does the "opt in" with them work?
I used Storj for a while, but only for very tiny folders (a few KB to MB). I've been subscribing to some established privacy clouds for the past 2-3 years, but am having a lot of unresolved issues involving file corruption when I finally need access to critical files and re-download them from the cloud. If anyone here has also had experience with other privacy clouds like IceDrive, Filen..., could you share your experience with Storj? Are you able to reliably get large folders (about 100 GB+) up into the Storj decentralized network without problems? When you need to download these large folders back to your local system, has this been reliable (no hard-to-decipher error messages, no file corruption)? I'm looking for a cloud home for critical data. Willing to pay well for it, but I need files to be free of corruption from the up-/download process. Thanks!
I want some failover, and pointing a node at a single static IP or DynDNS name is not really failover: if that one IP goes offline, the node is offline.
But how about SRV records for those of us with multiple public IPs that can be routed to the same internal node? I found an old issue on GitHub where someone requested SRV record support, but it seems to have been forgotten; it's years old now and still open.
With SRV records we could do something like:
node01.domain.whatever
  SRV -> node01-ip01.domain.whatever:11111
  SRV -> node01-ip02.domain.whatever:22222
  SRV -> node01-ip03.domain.whatever:33333
And so on, so if the first one fails, satellites and other nodes just try the next one, and only permafail if all of them are offline.
Can something like this be done today for Storj nodes, or does it still only support A and CNAME records? How are the big node operators here handling multiple IPs to the same node on the public/DNS side?
An alternative would be a DynDNS-like setup with a single IP on the node's A record, and a script on the node that changes it to whichever IP is currently online. I just find this hacky and much prefer the native way SRV records work, as they are designed for exactly this. Also, a misconfigured local DNS server or resolver might cache records for days; it would then never see the update and keep trying the old IP, so a scripted solution could spawn weird connection issues later.
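For reference, if storagenode ever honored SRV lookups, the zone could look something like this (hypothetical record names; as far as I know the node's external address currently resolves via A/AAAA/CNAME only):

```
; Hypothetical zone snippet -- storagenode does not consume SRV records today.
;                                                priority weight port  target
_storagenode._tcp.node01.domain.whatever. 300 IN SRV 10      50   11111 node01-ip01.domain.whatever.
_storagenode._tcp.node01.domain.whatever. 300 IN SRV 20      50   22222 node01-ip02.domain.whatever.
_storagenode._tcp.node01.domain.whatever. 300 IN SRV 30      50   33333 node01-ip03.domain.whatever.
```

Lower priority values are tried first, so clients would fall back through the list in order, which is exactly the failover behavior described above.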
I hadn't used Storj in a while, so sorry if I missed the news. I had free storage, and now it just says the trial is over and I must upgrade to Pro.
I'm using a Pro account I got way back when they were offering 250 GB for free; however, all of a sudden I can neither download nor upload anything. Judging by this screenshot, it seems a lack of available segments is causing the issue? https://i.imgur.com/XpswYQp.jpeg
I'm a little confused about the amounts withheld. In months 10-15, 100% is paid out, but from month 16 on you're back to 50%?
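If I'm reading the published held-amount table correctly (worth verifying against the current docs), the month-15/16 row isn't a new 50% withholding: it's the point where 50% of the accumulated held amount is returned to you, with the rest returned on graceful exit. A sketch of the schedule as I understand it:

```python
def held_fraction(month):
    """Fraction of a storage node's monthly earnings withheld in escrow,
    per the published schedule (assumed from Storj docs; verify)."""
    if 1 <= month <= 3:
        return 0.75
    if 4 <= month <= 6:
        return 0.50
    if 7 <= month <= 9:
        return 0.25
    return 0.0  # months 10+ pay out 100% of that month's earnings;
                # around month 15, half the accumulated held amount
                # is returned to the operator

print(held_fraction(2), held_fraction(12))  # 0.75 0.0
```

So nothing new is withheld after month 9; the later rows in the table describe what happens to the money already held.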
My ISP does not provide a public IP. Can I still expose my node port using No-IP dynamic DNS?
Hi,
I just received my monthly payout on L1 despite using zkSync for two years. Has anyone else encountered this?
Has anyone run into the same problem, or is it just me? Incognito mode works, though.
Hi, so if I start a node now with 160 TB, on fiber, running 24/7, what will my payouts look like? And what will my utilization graph (used space over time) look like? Thanks!