/r/cassandra
Hey folks! Following up from my single Cassandra/DataStax DSE node setup, here's how I created a two-node cluster.
What I'm Working With:
Here's the step-by-step:
1. First, Stop Everything
$ node1/bin/nodetool stopdaemon
2. Clean Slate
sudo rm -rf /var/lib/cassandra/*
3. The Important Part - cassandra.yaml Config 🔑
A. Set the same cluster name on both nodes
cluster_name: 'YourClusterName'
B. Seed Provider Setup (this is crucial!)
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "192.168.47.128" # Use Node 1's IP here
Pro tip: Make sure Node 2 also points to Node 1's IP in its seeds config!
C. Network Settings
For Node 1:
listen_address: 192.168.47.128
rpc_address: 192.168.47.128
For Node 2:
listen_address: 192.168.47.129
rpc_address: 192.168.47.129
4. Open Firewall Ports
$ sudo iptables -A INPUT -p tcp --dport 7000 -j ACCEPT
$ sudo iptables -A INPUT -p tcp --dport 9042 -j ACCEPT
5. Fire It Up!
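Assuming the same tarball layout as the nodetool paths earlier in this post (the exact paths are a guess), starting the cluster looks something like this, seed node first:

```
$ node1/bin/cassandra     # start the seed node (Node 1) first
# wait until it's up (bin/nodetool status shows UN), then:
$ node2/bin/cassandra
```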
6. Check If It Worked
$ bin/nodetool status
You should see something like this:
Datacenter: Cassandra
=====================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving/Stopped
--  Address         Load        Tokens  Owns    Host ID                               Rack
UN  192.168.47.128  123.59 KiB  1       100.0%  2f3f874e-74d1-435d-b124-61f249f54d97  rack1
UN  192.168.47.129  204.15 KiB  1       100.0%  76b05689-845c-43e5-9606-50a06d68df14  rack1
Bonus: Checking Data Distribution
Want to see how your data is spread across nodes? Try this in CQL shell:
cqlsh:killrvideo> SELECT token(tag), tag FROM videos_by_tag;
You can also check which node owns what data:
$ node/bin/nodetool getendpoints keyspacename tablename 'partition_key_value'
# Example:
$ node/bin/nodetool getendpoints killrvideo videos_by_tag 'cassandra'
That's it! Let me know if you run into any issues or have questions! 🚀
I'm using a Mac M2 Pro, so the basic setup is VMware Fusion Pro + Ubuntu Server 24.10 for ARM + DataStax DSE (Cassandra 4)
Part 1 – PREPARATION (not mentioned in the official doc, but essential for inexperienced users, LIKE ME)
https://blogs.vmware.com/teamfusion/2024/05/fusion-pro-now-available-free-for-personal-use.html
https://www.datastax.com/products/datastax-enterprise
https://ubuntu.com/download/server/arm
user@MacBook-user% scp dse-6.9.3-bin.tar.gz user@IP:/home/username
Part 2 - INSTALLATION
https://docs.datastax.com/en/dse/6.9/installing/basic-install.html
export CQLSH_PYTHON=python3.11
Start "./cqlsh" from the installation directory; if you can see the "cqlsh>" prompt, that means you're all set!
I have data in the format of (id, data), such as (1, "someDataS").
Initially, when I insert data, it is stored in the Memtable, and reads pull directly from the Memtable.
After more data is inserted, it flushes to the SSTable. At this point, reads start retrieving the data from the SSTable, which makes sense.
However, I’m confused about what happens after updating older data that is already in the SSTable.
For example, if I update a data item that is currently in the SSTable, I expect the Memtable to hold the new version, while the older version remains in the SSTable. But when I perform a read after this update, it still checks the SSTable, even though a newer version should be in the Memtable.
Question: Why doesn’t the read operation return the updated data directly from the Memtable, where the latest version is stored? Is there a reason it still checks the SSTable?
I used the query tracing feature to debug this, and it led me to believe the relevant code is in the following file: https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/db/SinglePartitionReadCommand.java
More specifically, the queryMemtableAndSSTablesInTimestampOrder method. To me it looks like it always checks the SSTables.
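One plausible reason, sketched as a simplified model (an illustration, not Cassandra's actual code): sources are consulted newest-first in timestamp order, but the read can only stop early once every requested column has been resolved, so an update that rewrote only some columns still forces a look at older SSTables for the rest:

```python
# Simplified model of a timestamp-ordered single-partition read.
# Sources are consulted newest-first; the read stops early only
# once every requested column has been resolved.

def read_row(requested_cols, sources):
    """sources: dicts of {column: (write_timestamp, value)}, newest first."""
    result = {}
    for source in sources:                    # memtable first, then SSTables
        for col in requested_cols:
            if col not in result and col in source:
                result[col] = source[col][1]  # newest version seen wins
        if len(result) == len(requested_cols):
            break                             # fully resolved -> stop early
    return result

memtable = {"data": (200, "newValue")}        # the update wrote only 'data'
sstable = {"id": (100, 1), "data": (100, "someDataS")}

# 'id' never made it into the memtable, so the SSTable is still read:
print(read_row(["id", "data"], [memtable, sstable]))
# -> {'data': 'newValue', 'id': 1}
```

Even when the memtable does hold the full newest row, the read path still has to establish that no SSTable could contain newer data before it can skip them, which may be why tracing shows the SSTables being consulted.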
I currently have two Apache Cassandra nodes running on EC2, each with 300 GB of RAM and 120 TB of storage, with about 40 TB of free space left on each. My admin team hasn't raised any concerns about maintaining the current node sizes or expanding to improve performance, but I'm wondering if there's a general guideline or recommendation for how many nodes a Cassandra cluster should have and what the ideal node size might be for my setup? NOTE: the data is read and populated by Geomesa and is using geospatial queries. Should I be looking into adding more nodes or adjusting the current configuration? Any advice or best practices would be appreciated!
I'm gearing up to take the DataStax Architect Certification for Apache Cassandra exam this Friday. I've already completed and passed the Developer and Admin exams with a comfortable margin (Reference by certfun.com). I found those exams to be quite straightforward and passed them without much trouble.
Has anyone here taken the Architect exam? How would you rate its difficulty compared to the other exams? Also, does anyone know the passing criteria? Is there a specific reason why the exam is allocated 90 minutes?
EDIT: I passed! The Architect exam was definitely more challenging than the Admin or Developer exams. While the passing criteria isn't officially published, I suspect it’s higher than 70%—I’m thinking around 75%.
I am currently benchmarking the storage-attached index (SAI) released in Apache Cassandra 5.0, and it doesn't compare anywhere near DataStax Cassandra's SAI.
Can someone please confirm whether both implementations are the same?
TIA!
I am a Computer Science student and I had to choose for my license between MongoDB and Apache Cassandra, and you already know what I have chosen. I have managed to set up a local Cassandra node using the prerequisites from the documentation, but I can't get the PHP driver to work.
What I am looking for: a tutorial on Udemy (or any other platform) that covers Cassandra + connecting to a backend using PHP + some front-end (optional), as I already know HTML+CSS+JS.
Thank you very much guys! 🖤
We have a use case requiring a wide-column database with multi-datacenter support, high availability, and low-latency performance. I’m trying to determine whether Apache Cassandra or ScyllaDB is a better fit. While I’m aware that Apache Cassandra has a more extensive user base with proven stability, ScyllaDB promises lower latency and potentially reduced costs.
Given that both databases support our architecture needs, I would like to know if you’ve had experience with both and, based on that, which one you would recommend.
Does anyone have experience with the DataStax Enterprise (DSE) Administration Certification exam? If so, how was your experience, and how hard was the exam? I’m also curious about the exam format—how is it taken, and what types of questions are asked? Any details on the difficulty level and preparation tips would be really helpful. Thanks!
I'm currently conducting a survey to collect insights into user expectations regarding comparing various data formats. Your expertise in the field would be incredibly valuable to this research.
The survey should take no more than 10 minutes to complete. You can access it here: https://forms.gle/K9AR6gbyjCNCk4FL6
I would greatly appreciate your response!
Hello Everyone,
I am trying to install Cassandra on RHEL 8 using RPM packages, but I couldn't find the packages anywhere.
If possible, please share links to download the RPM packages for Cassandra.
I have a Cassandra cluster with 3 nodes and a replication factor of 3. My use case is read-heavy with a comparatively lighter write workload. Can I tune my write consistency to ALL and read consistency to ONE to achieve nominal consistency and availability? In my understanding, reads would then see the latest version of the data with less latency. If I'm wrong somewhere, how can I configure the cluster (even adding nodes) to get high throughput with low latency?
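For what it's worth, the intuition here can be checked with simple replica arithmetic (a sketch, not a Cassandra API): a read is guaranteed to overlap the latest write whenever read CL + write CL exceeds the replication factor. The trade-off with W=ALL is that every write fails if any replica is down.

```python
# A read is guaranteed to see the latest write when the read and
# write consistency levels together cover more than RF replicas.
def is_strongly_consistent(write_replicas, read_replicas, rf):
    return write_replicas + read_replicas > rf

RF = 3
print(is_strongly_consistent(3, 1, RF))  # W=ALL (3), R=ONE (1): True
print(is_strongly_consistent(1, 1, RF))  # W=ONE, R=ONE: False
print(is_strongly_consistent(2, 2, RF))  # QUORUM writes + QUORUM reads: True
```

QUORUM/QUORUM gives the same guarantee as ALL/ONE while tolerating one replica being down on both the read and write path, which is why it's the more common choice.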
I have a Cassandra cluster with 3 nodes and a replication factor of 3. I have a scenario where 3 parallel update requests with the same timestamp for a row arrive at the cluster's coordinator node, each of which could cause a conflict when I read after updating. How can I handle this situation? Please suggest configurations that can be tuned for this purpose.
I have a Cassandra cluster with a single DC and 3 nodes, in contrast to a 1-master, 2-follower MySQL architecture. I expect roughly 10M reads and 3M writes/updates per 3 hours, with replication. I have no complex queries and a single primary key. What configuration can I use in my cluster to improve performance and latency?
Hey folks. I'm part of the C* project, and I'm hosting a 1 hour webinar + 30 min of Q&A on Thursday morning, 9am PDT, to show off new features coming in 5.0. I'll be covering:
I hope to see you there! Link to sign up is here: https://streamyard.com/watch/i8hUyrMzKEQ9
I know that Cassandra guarantees a row level isolation in a single replica, which means that other requests see either the entire update to that row applied or none. But does this guarantee that there are no dirty writes and dirty reads (in the scope of that row in that replica)?
Hi All,
Want to learn Cassandra a bit by implementing my own deployment on my home server. I've currently got an HP MiniDesk G3 with 32GB ram, 2TB SSD storage, 12TB HDD (6x 2TB WDGreen) storage running Proxmox. My plan was to use this as my "database" for the other components in the server. (Few more HP Minis running a few services - nothing crazy)
Now, the ultimate goal of this is to learn how to deploy Cassandra at scale - given... that is kind of what it does. I'm less concerned with actual HA, than I am simulated HA given my hardware constraints. Let me know if the below sounds crazy.
Was thinking of spinning up 3x LXC Cassandra nodes on the one machine, and provisioning each one of them a 2TB HDD. (Potentially splitting up partitions of the 2TB SSD for the write log... but, need to get through the basics here) That would allow me to not have to RAID10 across the rest for replication, and then can offload snapshots to Azure or something to make sure whatever data I generate I don't lose.
I do have 3 other HP Minis (8GB Ram, 500GB NVMe) but - believe the overhead of running Ceph to get the HDD storage to the other nodes would be too much for the small cluster + Cassandra on three separate pieces of hardware.
Was thinking if I tune the heap size and let them fight over cores I'd be ok? (4x cores per i5-6500 in each machine)
Am I nuts? Anything you'd do differently? Thanks in advance!
-Mousse
We can read the following note in the documentation:
In read repair, Cassandra sends a digest request to each replica not directly involved in the read. Cassandra compares all replicas and writes the most recent version to any replica node that does not have it. If the query's consistency level is above ONE, Cassandra performs this process on all replica nodes in the foreground before the data is returned to the client. Read repair repairs any node queried by the read. This means that for a consistency level of ONE, no data is repaired because no comparison takes place. For QUORUM, only the nodes that the query touches are repaired, not all nodes.
If I understand it right, there are three cases of how a read repair can be carried out:
- ONE/LOCAL_ONE - no read repairs at all
- QUORUM/LOCAL_QUORUM - read repairs only for replicas that are part of the read query (but it may happen that all replicas are repaired due to read_repair_chance?)
Does it work that way?
Cassdio is a tool designed to make database operations simpler and more efficient. With minimal setup, it supports connections to various databases and facilitates easy data processing and query execution. Cassdio offers clean code and an intuitive interface, making it accessible for both beginners and experts. For more information, visit the GitHub page.
#cassandra #webconsole #hakdang
I'm looking at Medusa to do our backups. Is there a possibility to throttle disk IO during backup when using the local storage mode? I have only seen options for S3 bucket throttling.
Hey folks, I wanted to share a tool, easy-cass-lab, I've worked on for a while now that makes it easy to quickly spin up clusters in AWS. These are the same tools I've used for years as a consultant and Cassandra committer to find bugs, do performance analysis, and test C* features. Quickest way to get started is using homebrew.
https://rustyrazorblade.com/post/2024/easy-cass-lab-homebrew/
Project repo is here: https://github.com/rustyrazorblade/easy-cass-lab
Looking forward to hearing any feedback!
I'm currently working on an interface for Cassandra using ImGui with C++, in order to visualize Cassandra data more easily and have better access to your database. I'm wondering, though, how most users of this database host or deploy it. I'm working on making the app use some information from DataStax: the user would have to submit their clientID, secret, and secure connect bundle, all provided by DataStax. I've also been trying to implement a way to connect to the DB from Docker, but nothing I've tried so far has really worked.
In my PhD thesis, I have designed a novel distributed database architecture named "Parallel Committees." This architecture addresses some of the same challenges as NoSQL databases, particularly in terms of scalability and security, but it also aims to provide stronger consistency.
The thesis explores the limitations of classic consensus mechanisms such as Paxos, Raft, or PBFT, which, despite offering strong and strict consistency, suffer from low scalability due to their high time and message complexity. As a result, many systems adopt eventual consistency to achieve higher performance, though at the cost of strong consistency.
In contrast, the Parallel Committees architecture employs classic fault-tolerant consensus mechanisms to ensure strong consistency while achieving very high transactional throughput, even in large-scale networks. This architecture offers an alternative to the trade-offs typically seen in NoSQL databases.
Additionally, my dissertation includes comparisons between the Parallel Committees architecture and various distributed databases and data replication systems, including Apache Cassandra, Amazon DynamoDB, Google Bigtable, Google Spanner, and ScyllaDB.
I have prepared a video presentation outlining the proposed distributed database architecture, which you can access via the following YouTube link:
https://www.youtube.com/watch?v=EhBHfQILX1o
A narrated PowerPoint presentation is also available on ResearchGate at the following link:
My dissertation can be accessed on Researchgate via the following link: Ph.D. Dissertation
If needed, I can provide more detailed explanations of the problem and the proposed solution.
I would greatly appreciate feedback and comments on the distributed database architecture proposed in my PhD dissertation. Your insights and opinions are invaluable, so please feel free to share them without hesitation.
We have 10-15 Spark jobs which take data from one source and push it to Cassandra, and we have a 15-node cluster with 32 cores and 90 GB of memory per node. We create this cluster on demand, and once Cassandra is up with all the nodes, we insert the data with the Spark jobs. Sometimes a job gets stuck during execution; all of these Cassandra nodes are running on GKE. We face this issue frequently: it works sometimes, but most of the time it gets stuck at the last step.
(myenv) [root@e2e-19-193 ~]# medusa backup --backup-name=b11 --mode=full
[2024-05-09 17:44:11,990] INFO: Resolving ip address
[2024-05-09 17:44:12,000] INFO: ip address to resolve 43.252.90.193
[2024-05-09 17:44:12,004] INFO: Registered backup id b11
[2024-05-09 17:44:12,005] INFO: Monitoring provider is noop
[2024-05-09 17:44:12,025] INFO: Found credentials in shared credentials file: /etc/medusa/medusa-minio-credentials
[2024-05-09 17:44:13,368] INFO: Starting backup using Stagger: None Mode: full Name: b11
[2024-05-09 17:44:13,368] INFO: Updated from existing status: -1 to new status: 0 for backup id: b11
[2024-05-09 17:44:13,369] INFO: Saving tokenmap and schema
[2024-05-09 17:44:13,758] INFO: Resolving ip address 172.16.231.75
[2024-05-09 17:44:13,758] INFO: ip address to resolve 172.16.231.75
[2024-05-09 17:44:13,762] INFO: Resolving ip address 172.16.231.63
[2024-05-09 17:44:13,763] INFO: ip address to resolve 172.16.231.63
[2024-05-09 17:44:13,767] INFO: Resolving ip address 172.16.231.72
[2024-05-09 17:44:13,767] INFO: ip address to resolve 172.16.231.72
[2024-05-09 17:44:13,770] INFO: Resolving ip address 172.16.231.75
[2024-05-09 17:44:13,770] INFO: ip address to resolve 172.16.231.75
[2024-05-09 17:52:34,499] ERROR: Issue occurred inside handle_backup Name: b11 Error: <LibcloudError in <class 'libcloud.storage.drivers.s3.S3StorageDriver'> 'Unknown error. Status code: 501'>
[2024-05-09 17:52:34,500] INFO: Updated from existing status: 0 to new status: 2 for backup id: b11
[2024-05-09 17:52:34,500] ERROR: Error occurred during backup: <LibcloudError in <class 'libcloud.storage.drivers.s3.S3StorageDriver'> 'Unknown error. Status code: 501'>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/medusa/backup_node.py", line 199, in handle_backup
enable_md5_checks_flag, backup_name, config, monitoring)
File "/usr/local/lib/python3.6/site-packages/medusa/backup_node.py", line 231, in start_backup
node_backup.schema = schema
File "/usr/local/lib/python3.6/site-packages/medusa/storage/node_backup.py", line 137, in schema
self._storage.storage_driver.upload_blob_from_string(self.schema_path, schema)
File "/usr/local/lib/python3.6/site-packages/retrying.py", line 56, in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
File "/usr/local/lib/python3.6/site-packages/retrying.py", line 266, in call
raise attempt.get()
File "/usr/local/lib/python3.6/site-packages/retrying.py", line 301, in get
six.reraise(self.value[0], self.value[1], self.value[2])
File "/usr/local/lib/python3.6/site-packages/six.py", line 719, in reraise
raise value
File "/usr/local/lib/python3.6/site-packages/retrying.py", line 251, in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
File "/usr/local/lib/python3.6/site-packages/medusa/storage/abstract_storage.py", line 68, in upload_blob_from_string
headers=headers,
File "/usr/local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 753, in upload_object_via_stream
storage_class=ex_storage_class)
File "/usr/local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 989, in _put_object_multipart
headers=headers)
File "/usr/local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 573, in _initiate_multipart
headers=headers, params=params)
File "/usr/local/lib/python3.6/site-packages/libcloud/common/base.py", line 655, in request
response = responseCls(**kwargs)
File "/usr/local/lib/python3.6/site-packages/libcloud/common/base.py", line 166, in __init__
message=self.parse_error(),
File "/usr/local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 148, in parse_error
driver=S3StorageDriver)
libcloud.common.types.LibcloudError: <LibcloudError in <class 'libcloud.storage.drivers.s3.S3StorageDriver'> 'Unknown error. Status code: 501'>
u/medusa u/cassandra u/dbaas u/nosql u/coloumdatabase u/distributeddatabase
I am trying to access a cassandra db I was just informed about. I was able to get the process on Linux for Cassandra running but I'm unable to login to the database.
I have set the following in `/var/lib/cassandra/conf/cassandra.yaml`:
authenticator: AllowAllAuthenticator
authorizer: AllowAllAuthorizer
When I restart Cassandra, I keep getting connection refused:
[root@db1 cassandra]# cqlsh localhost 9042
Connection error: ('Unable to connect to any servers', {'127.0.0.1': error(111, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection refused"), '::1': error(111, "Tried connecting to [('::1', 9042, 0, 0)]. Last error: Connection refused")})
Any ideas why I'm unable to auth into the DB with CQLSH?
storage_port: 7000
ssl_storage_port: 7001
listen_address: 192.168.12.50
start_native_transport: true
native_transport_port: 9042
start_rpc: false
rpc_address: 192.168.12.50
rpc_port: 9160
rpc_keepalive: true
rpc_server_type: sync
I'm doing a project with cassandra and can't decide how to proceed. Example:
users table has fields (userid), name. orders table has ((userid), orderid), name, ...
userid 1 changes his name. How do I sync his orders to reflect the name change?
The easiest is to not denormalize: remove name field in orders. Then do 2 lookups, one for the order, another for the user name.
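In CQL, that two-lookup version would be something like this (non-key columns are elided the same way as in the post):

```
-- orders with the denormalized name column removed
SELECT * FROM orders WHERE userid = 1;
-- second lookup to resolve the user's current name
SELECT name FROM users WHERE userid = 1;
```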
Not great. Then I tried batches, but quickly found that changes aren't atomic, since the tables could be on different nodes. Hard pass for my use case.
I then read about event sourcing pattern. In my case, it would be to replace name in both tables with name and name_version, and then have a new change table with fields ((action), timestamp), version, old, new. To change, I'll add to change table: ChangeName, <time>, 1, foo, bar. Then spin up a program that looks into both user and orders table to set name=bar where name_ver=1.
Is my understanding correct? If so, this sounds like an awful amount of overhead for updates. It also isn't really making an atomic change across tables. Third, is the program going to long-poll the changes table forever looking for changes? How is that efficient?
Cassandra first timer. Appreciate your help!
[2024-05-09 06:55:49,778] ERROR: Issue occurred inside handle_backup Name: b81 Error: <LibcloudError in <class 'libcloud.storage.drivers.s3.S3StorageDriver'> 'Unknown error. Status code: 501'>
[2024-05-09 06:55:49,779] INFO: Updated from existing status: 0 to new status: 2 for backup id: b81
[2024-05-09 06:55:49,780] ERROR: Error occurred during backup: <LibcloudError in <class 'libcloud.storage.drivers.s3.S3StorageDriver'> 'Unknown error. Status code: 501'>
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/medusa/backup_node.py", line 199, in handle_backup
    enable_md5_checks_flag, backup_name, config, monitoring)
  File "/usr/local/lib/python3.6/site-packages/medusa/backup_node.py", line 231, in start_backup
    node_backup.schema = schema
  File "/usr/local/lib/python3.6/site-packages/medusa/storage/node_backup.py", line 137, in schema
    self._storage.storage_driver.upload_blob_from_string(self.schema_path, schema)
  File "/usr/local/lib/python3.6/site-packages/retrying.py", line 56, in wrapped_f
    return Retrying(*dargs, **dkw).call(f, *args, **kw)
  File "/usr/local/lib/python3.6/site-packages/retrying.py", line 266, in call
    raise attempt.get()
  File "/usr/local/lib/python3.6/site-packages/retrying.py", line 301, in get
    six.reraise(self.value[0], self.value[1], self.value[2])
  File "/usr/local/lib/python3.6/site-packages/six.py", line 719, in reraise
    raise value
  File "/usr/local/lib/python3.6/site-packages/retrying.py", line 251, in call
    attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
  File "/usr/local/lib/python3.6/site-packages/medusa/storage/abstract_storage.py", line 68, in upload_blob_from_string
    headers=headers,
  File "/usr/local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 753, in upload_object_via_stream
    storage_class=ex_storage_class)
  File "/usr/local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 989, in _put_object_multipart
    headers=headers)
  File "/usr/local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 573, in _initiate_multipart
    headers=headers, params=params)
  File "/usr/local/lib/python3.6/site-packages/libcloud/common/base.py", line 655, in request
    response = responseCls(**kwargs)
  File "/usr/local/lib/python3.6/site-packages/libcloud/common/base.py", line 166, in __init__
    message=self.parse_error(),
  File "/usr/local/lib/python3.6/site-packages/libcloud/storage/drivers/s3.py", line 148, in parse_error
    driver=S3StorageDriver)
libcloud.common.types.LibcloudError: <LibcloudError in <class 'libcloud.storage.drivers.s3.S3StorageDriver'> 'Unknown error. Status code: 501'>
How would you approach a complete rack migration in Cassandra 4.x? Assume many nodes…let’s say 100 nodes in a particular rack with TBs of data per node. RF is 3 and 3 racks. I have Rack 1,2,3 in a DC and I need to move all of rack 3 to rack 4. Most advice I have read says to rsync data in the new nodes in the new rack ahead of time so as to get the replacement nodes “close” in data then shutdown the old node, do one last rsync and start the new node.
Let’s pretend I have 100 new nodes waiting to join and I have rsynced the data as much as I can ahead of time. How does Cassandra behave in this intermediate time when I am starting new nodes in a new rack and will have 4 racks available until I can stop all nodes in rack 3? What are the nuances of this process? Gotchas? Different approach? Other things to worry about?