/r/redis
All about the Redis key-value store
Redis is a persistent data structure server operating on the key/value model, where values can be hashes, lists, sets, or sorted sets.
Greetings!
I have encountered a problem when using ACL authentication in a Redis Replication + Sentinel configuration.
First, to exclude any questions about permissions, I will use a user with full access to all keys and commands.
aclfile "/etc/redis/users-redis.acl"
masterauth "admin_pass"
masteruser "admin"
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync yes
repl-diskless-sync-delay 5
repl-diskless-sync-max-replicas 0
repl-diskless-load disabled
repl-disable-tcp-nodelay no
replica-priority 20
protected-mode no
port 26379
daemonize no
supervised systemd
dir "/var/lib/redis"
loglevel notice
acllog-max-len 128
logfile "/var/log/redis/redis-sentinel.log"
pidfile "/run/sentinel/redis-sentinel.pid"
sentinel monitor redis-cluster 172.16.0.22 6379 2
sentinel down-after-milliseconds redis-cluster 2000
sentinel failover-timeout redis-cluster 5000
######## ACL ########
aclfile "/etc/redis/users-sentinel.acl"
######## SENTINEL --> REDIS ########
sentinel auth-user redis-cluster admin
sentinel auth-pass redis-cluster admin_pass
######## SENTINEL <--> SENTINEL ########
sentinel sentinel-user sentinel-sync
sentinel sentinel-pass sentinel-sync_password
user default off
user admin ON >admin_pass ~* +@all
user sentinel ON >sentinel_pass allchannels +multi +slaveof +ping +exec +subscribe +config|rewrite +role +publish +info +client|setname +client|kill +script|kill
user replica-user ON >replica_password +psync +replconf +ping
Note: Although the examples below use admin, I kept the permissions taken from the documentation page, where replica-user is used for replica authentication to the master (configured in redis.conf) and sentinel is used for the Sentinel connection to Redis (the sentinel.conf parameters sentinel auth-user and sentinel auth-pass).
(The ACL file for authentication between Sentinel instances does not affect the situation, so I did not describe it.)
With the above configuration, the situation is as follows:
172.16.0.21 (node01), 172.16.0.22 (node02), 172.16.0.23 (node03). On nodes 21 and 23, replicaof 172.16.0.22 is specified. Node 22 is currently the master.
We turn everything on:
Now, we simulate turning off the master server. We can see that the replicas detect that the master has failed, but Sentinel cannot perform a failover to another master.
I try to perform a manual master switch to node 172.16.0.23:
node01: SLAVEOF 172.16.0.23 6379
node02: SLAVEOF 172.16.0.23 6379
node03: SLAVEOF NO ONE
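Spelled out as redis-cli calls, the switch looks roughly like this (a sketch using the admin ACL user from the config above, since the demoted nodes also need masteruser/masterauth pointing at the new master):
```
# Promote node03 (172.16.0.23) and repoint the other two nodes at it.
redis-cli -h 172.16.0.23 -p 6379 --user admin --pass admin_pass REPLICAOF NO ONE
for h in 172.16.0.21 172.16.0.22; do
  redis-cli -h $h -p 6379 --user admin --pass admin_pass CONFIG SET masteruser admin
  redis-cli -h $h -p 6379 --user admin --pass admin_pass CONFIG SET masterauth admin_pass
  redis-cli -h $h -p 6379 --user admin --pass admin_pass REPLICAOF 172.16.0.23 6379
done
```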
We observe that everything successfully reconnects. However, the Sentinel logs display issues of the following nature.
I disable ACL in the Redis configuration by commenting out the following lines:
# aclfile "/etc/redis/users-redis.acl"
# masterauth "admin_pass"
# masteruser "admin"
We turn off the master, wait a bit, turn it on, and check.
The master changes successfully, and the logs are in order.
I need to implement ACL in my environment, but I cannot lose fault tolerance.
I wish to migrate my Redis cluster from one IDC to another.
When I check for options, I come across tools like redis-shake. However, this tool requires at least one node that can connect to both the old and the new Redis cluster.
The problem is that I have no such node available, because both clusters are on their own private networks.
I can enable some sort of rsync to sync the RDB files, but can I rebuild the cluster in the destination?
Note: in my application some amount of downtime is OK; I'm not necessarily looking for a zero-downtime solution.
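What I had in mind is roughly the following, assuming I keep the same shard layout in the destination (hostnames and slot ranges are illustrative, and I would obviously test this before relying on it):
```
# 1. Snapshot each source master and copy its RDB to the matching destination node:
redis-cli -h old-node1 BGSAVE
rsync -avz old-node1:/var/lib/redis/dump.rdb new-node1:/var/lib/redis/dump.rdb

# 2. Start the destination nodes with cluster-enabled yes so each one loads its RDB.

# 3. Join the nodes and give each one the same slot range its source node owned:
redis-cli -h new-node1 CLUSTER MEET <new-node2-ip> 6379
redis-cli -h new-node1 CLUSTER MEET <new-node3-ip> 6379
redis-cli -h new-node1 CLUSTER ADDSLOTSRANGE 0 5460
redis-cli -h new-node2 CLUSTER ADDSLOTSRANGE 5461 10922
redis-cli -h new-node3 CLUSTER ADDSLOTSRANGE 10923 16383
```
Since some downtime is acceptable, I could stop writes on the source before the final BGSAVE/rsync so the last few changes aren't lost.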
Ever wondered how Redis performs under real-world conditions? I recently ran performance benchmarks using the redis-benchmark
tool to understand throughput and latency across various scenarios.
Here’s what I discovered:
✅ Pipelining drastically reduces latency.
✅ Testing with multiple clients mimics real-world traffic.
✅ Redis handles high-concurrency workloads exceptionally well.
I share detailed testing methods and commands to help you optimize Redis for your needs. Let’s discuss your experiences or tips for Redis performance tuning!
(Feel free to ask questions or share your thoughts!)
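For anyone who wants a starting point, a typical redis-benchmark invocation that exercises multiple clients (-c) and pipelining (-P) looks something like this (all values are illustrative):
```
redis-benchmark -h 127.0.0.1 -p 6379 -c 50 -n 100000 -P 16 -t set,get
```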
Hi everyone,
I'm having some serious issues with Redis cache invalidation on our WooCommerce site and could use your help. Let me break down what's happening:
We have around 30,000 products on our site. Earlier today, I did a stress test in production, updating metadata for all 30,000 products and flushing + invalidating their caches. The site handled this perfectly fine using our batching strategy. However, about 45 minutes later, when we tried to do the same operation but only for 8,000 products, the site completely crashed—which makes no sense since it's less than a third of what we just tested successfully.
Here's what our cache invalidation process looks like:
The main issue seems to be that when this fails:
What's particularly frustrating is that according to everything I've read, Redis should be able to handle hundreds of thousands of operations per second on even modest hardware. Yet we're seeing it lock up at around 30,000 ops.
One thing we've noticed is that our term-queries and post_meta cache groups are sharded to the same Redis node. When we flush post_meta, that node gets hammered with traffic and becomes unresponsive.
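(For reference, slot placement can be checked by asking the cluster which slot a group's keys hash to; the key names below are simplified examples, not our exact prefixes:)
```
redis-cli -p 5001 CLUSTER KEYSLOT "post_meta:12345"
redis-cli -p 5001 CLUSTER KEYSLOT "term-queries:12345"
redis-cli -p 5001 CLUSTER SLOTS   # shows which node owns each slot range
```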
We've tried:
What I'm trying to figure out is:
Has anyone dealt with similar issues? Any advice would be appreciated, especially regarding Redis configuration or alternative ways to handle cache invalidation at this scale. However, I am quite limited in terms of groupings, etc. because of WordPress' abstraction layers. I am considering 4 separate instances and then rewriting the Object Cache Pro plugin so I can choose where each group goes, meaning I can avoid heavy groups on the same node.
Thanks!
SERVER INFO:
4 nodes running on the same server as the WordPress install.
# Server
redis_version:7.4.1
redis_git_sha1:00000000
redis_git_dirty:1
redis_build_id:81eea6befd94aa73
redis_mode:cluster
os:Linux 6.6.56 x86_64
arch_bits:64
monotonic_clock:POSIX clock_gettime
multiplexing_api:epoll
atomicvar_api:c11-builtin
gcc_version:14.2.0
process_id:156067
process_supervised:no
run_id:4b8ff9f5e4898f8e981e3c0c9610d815f1fb4c97
tcp_port:5001
server_time_usec:1732694188077621
uptime_in_seconds:30264
uptime_in_days:0
hz:10
configured_hz:10
lru_clock:4640940
executable:/etc/app/j/service/redis-cluster1
config_file:/etc/app/j/config/redis-cluster1.conf
io_threads_active:0
listener0:name=tcp,bind=127.0.0.1,port=5001
# Clients
connected_clients:23
cluster_connections:6
maxclients:10000
client_recent_max_input_buffer:24576
client_recent_max_output_buffer:0
blocked_clients:0
tracking_clients:0
pubsub_clients:0
watching_clients:0
clients_in_timeout_table:0
total_watched_keys:0
total_blocking_keys:0
total_blocking_keys_on_nokey:0
# Memory
used_memory:2849571088
used_memory_human:2.65G
used_memory_rss:2839195648
used_memory_rss_human:2.64G
used_memory_peak:2849764176
used_memory_peak_human:2.65G
used_memory_peak_perc:99.99%
used_memory_overhead:144595488
used_memory_startup:2287720
used_memory_dataset:2704975600
used_memory_dataset_perc:95.00%
allocator_allocated:2850751472
allocator_active:2851196928
allocator_resident:2900549632
allocator_muzzy:0
total_system_memory:135035219968
total_system_memory_human:125.76G
used_memory_lua:31744
used_memory_vm_eval:31744
used_memory_lua_human:31.00K
used_memory_scripts_eval:0
number_of_cached_scripts:0
number_of_functions:0
number_of_libraries:0
used_memory_vm_functions:32768
used_memory_vm_total:64512
used_memory_vm_total_human:63.00K
used_memory_functions:192
used_memory_scripts:192
used_memory_scripts_human:192B
maxmemory:8192000000
maxmemory_human:7.63G
maxmemory_policy:allkeys-lru
allocator_frag_ratio:1.00
allocator_frag_bytes:369424
allocator_rss_ratio:1.02
allocator_rss_bytes:49352704
rss_overhead_ratio:0.98
rss_overhead_bytes:-61353984
mem_fragmentation_ratio:1.00
mem_fragmentation_bytes:-10334552
mem_not_counted_for_evict:0
mem_replication_backlog:0
mem_total_replication_buffers:0
mem_clients_slaves:0
mem_clients_normal:307288
mem_cluster_links:6432
mem_aof_buffer:0
mem_allocator:jemalloc-5.3.0
mem_overhead_db_hashtable_rehashing:0
active_defrag_running:0
lazyfree_pending_objects:0
lazyfreed_objects:0
# Persistence
loading:0
async_loading:0
current_cow_peak:0
current_cow_size:0
current_cow_size_age:0
current_fork_perc:0.00
current_save_keys_processed:0
current_save_keys_total:0
rdb_changes_since_last_save:1758535
rdb_bgsave_in_progress:0
rdb_last_save_time:1732663924
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
rdb_saves:0
rdb_last_cow_size:0
rdb_last_load_keys_expired:0
rdb_last_load_keys_loaded:0
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_rewrites:0
aof_rewrites_consecutive_failures:0
aof_last_write_status:ok
aof_last_cow_size:0
module_fork_in_progress:0
module_fork_last_cow_size:0
# Stats
total_connections_received:165696
total_commands_processed:10601881
instantaneous_ops_per_sec:708
total_net_input_bytes:3275574241
total_net_output_bytes:12690048161
total_net_repl_input_bytes:0
total_net_repl_output_bytes:0
instantaneous_input_kbps:83.99
instantaneous_output_kbps:766.93
instantaneous_input_repl_kbps:0.00
instantaneous_output_repl_kbps:0.00
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_subkeys:0
expired_keys:67
expired_stale_perc:0.00
expired_time_cap_reached_count:0
expire_cycle_cpu_milliseconds:8552
evicted_keys:0
evicted_clients:0
evicted_scripts:0
total_eviction_exceeded_time:0
current_eviction_exceeded_time:0
keyspace_hits:8307641
keyspace_misses:1891368
pubsub_channels:0
pubsub_patterns:0
pubsubshard_channels:0
latest_fork_usec:0
total_forks:0
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0
total_active_defrag_time:0
current_active_defrag_time:0
tracking_total_keys:0
tracking_total_items:0
tracking_total_prefixes:0
unexpected_error_replies:0
total_error_replies:280148
dump_payload_sanitizations:0
total_reads_processed:11050900
total_writes_processed:10885296
io_threaded_reads_processed:0
io_threaded_writes_processed:221318
client_query_buffer_limit_disconnections:0
client_output_buffer_limit_disconnections:0
reply_buffer_shrinks:57760
reply_buffer_expands:49315
eventloop_cycles:10789939
eventloop_duration_sum:885693079
eventloop_duration_cmd_sum:70152906
instantaneous_eventloop_cycles_per_sec:688
instantaneous_eventloop_duration_usec:73
acl_access_denied_auth:0
acl_access_denied_cmd:0
acl_access_denied_key:0
acl_access_denied_channel:0
# Replication
role:master
connected_slaves:0
master_failover_state:no-failover
master_replid:6f39b6572bdcc8b3f7078e75e1bb96c0a97fffeb
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
# CPU
used_cpu_sys:299.243068
used_cpu_user:449.845643
used_cpu_sys_children:0.000000
used_cpu_user_children:0.000000
used_cpu_sys_main_thread:296.645247
used_cpu_user_main_thread:425.392475
# Modules
# Errorstats
errorstat_CLUSTERDOWN:count=33204
errorstat_MOVED:count=246944
# Cluster
cluster_enabled:1
# Keyspace
db0:keys=1617028,expires=1617028,avg_ttl=158380312,subexpiry=0
___
# CPU SERVER INFO
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7742 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 1
Core(s) per socket: 64
Socket(s): 1
Stepping: 0
BogoMIPS: 4499.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge m
ca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall
nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cp
uid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma
cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_t
imer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_
legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefe
tch osvw perfctr_core ssbd ibrs ibpb stibp vmmcall fsg
sbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap c
lflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero x
saveerptr wbnoinvd arat npt nrip_save umip rdpid overf
low_recov succor arch_capabilities
Virtualization features:
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
Caches (sum of all):
L1d: 4 MiB (64 instances)
L1i: 4 MiB (64 instances)
L2: 32 MiB (64 instances)
L3: 1 GiB (64 instances)
NUMA:
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerabilities:
Gather data sampling: Not affected
Itlb multihit: Not affected
L1tf: Not affected
Mds: Not affected
Meltdown: Not affected
Mmio stale data: Not affected
Reg file data sampling: Not affected
Retbleed: Vulnerable
Spec rstack overflow: Vulnerable: No microcode
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prct
l
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointe
r sanitization
Spectre v2: Vulnerable; IBPB: conditional; STIBP: disabled; RSB fi
lling; PBRSB-eIBRS: Not affected; BHI: Not affected
Srbds: Not affected
Tsx async abort: Not affected
___
# MEMORY
MemTotal: 131870332 kB
MemFree: 8178308 kB
MemAvailable: 108269976 kB
Buffers: 4117968 kB
Cached: 91676776 kB
SwapCached: 290564 kB
Active: 36338944 kB
Inactive: 77541588 kB
Active(anon): 5588612 kB
Inactive(anon): 16187016 kB
Active(file): 30750332 kB
Inactive(file): 61354572 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 1730612 kB
SwapFree: 359368 kB
Zswap: 0 kB
Zswapped: 0 kB
Dirty: 796 kB
Writeback: 0 kB
AnonPages: 17731740 kB
Mapped: 3760924 kB
Shmem: 3689144 kB
KReclaimable: 9247864 kB
Slab: 9438552 kB
SReclaimable: 9247864 kB
SUnreclaim: 190688 kB
KernelStack: 26448 kB
PageTables: 69620 kB
SecPageTables: 0 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 67665776 kB
Committed_AS: 32844556 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 31268 kB
VmallocChunk: 0 kB
Percpu: 58368 kB
HardwareCorrupted: 0 kB
AnonHugePages: 7870464 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
FileHugePages: 0 kB
FilePmdMapped: 0 kB
Unaccepted: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 0 kB
DirectMap4k: 40812 kB
DirectMap2M: 10444800 kB
DirectMap1G: 125829120 kB
I’m currently working on my thesis project to implement a write-through or write-behind pattern for my use case, where my Redis Enterprise Software server is running on AWS EC2.
However, I’m facing an issue where I cannot find how to add the RedisGears module to an existing database. When I navigate through the Redis Enterprise Admin Console, there is no option to add or enable RedisGears for the database. I am using Redis Enterprise version 7.8.2, and RedisGears is already installed on the cluster. But, I don’t see the "Modules" section under capabilities or any other place where I can enable or configure RedisGears for a specific database. And when creating a new database, I can only see 4 modules available under the Capabilities menu: Search and Query, JSON, Time Series, and Probabilistic. Could anyone guide me on how to enable RedisGears for my database in this setup?
I expected to see RedisGears as an available module under Capabilities, similar to how other modules like Search and JSON are listed. I also tried creating a new database, but the only modules available are Search, JSON, Time Series, and Probabilistic, with no option for RedisGears.
Thank you
Hi everyone, I'm encountering some concerning data loss issues in my Redis cluster setup and could use some expert advice.
**Setup Details:**
I have a NestJS application interfacing with a local Redis cluster. The application runs one main async function that executes 13 sub-functions, each handling approximately 100k record insertions into Redis.
**The Issue:**
We're experiencing random data loss of approximately 100-1,000 records with no discernible pattern. The concerning part is that all data successfully passes through the application logic and reaches the Redis SET operation, yet some records are mysteriously missing afterwards.
**Environment Configuration:**
- Cluster node specifications:
- 1 core CPU
- 600MB memory allocation
- Current usage: 100-200MB per node
- Network stability verified
- Using both AOF and RDB for persistence
**Current Configuration:**
```typescript
// Client construction: cluster client in production, single instance otherwise.
const redis = environment.clusterMode
  ? new Redis.Cluster(
      [
        {
          host: environment.redisCluster.clusterHost,
          port: parseInt(environment.redisCluster.clusterPort),
        },
      ],
      {
        redisOptions: {
          username: environment.redisCluster.clusterUsername,
          password: environment.redisCluster.clusterPassword,
        },
        maxRedirections: 300,
        retryDelayOnFailover: 300,
      }
    )
  : new Redis({
      host: environment.redisHost,
      port: parseInt(environment.redisPort),
    });
```
Troubleshooting Steps Taken:
Has anyone encountered similar issues or can suggest additional debugging approaches? Any insights would be greatly appreciated.
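In case it helps others debugging the same thing, here is a minimal sketch (not my actual code) of verifying every SET acknowledgment instead of firing and forgetting; it assumes ioredis v5 and records with an id field:
```typescript
import { Cluster } from "ioredis";

// Insert a batch and report which records were not acknowledged by Redis.
async function insertBatch(
  redis: Cluster,
  records: { id: string; payload: string }[]
): Promise<string[]> {
  const failed: string[] = [];
  for (const r of records) {
    try {
      const reply = await redis.set(`record:${r.id}`, r.payload);
      if (reply !== "OK") failed.push(r.id); // SET replies OK on success
    } catch (err) {
      failed.push(r.id); // redirection storms, timeouts, OOM errors, etc. surface here
    }
  }
  if (failed.length > 0) {
    console.error(`${failed.length} records not persisted`, failed.slice(0, 10));
  }
  return failed;
}
```
If the failures cluster around specific hash slots or coincide with evictions or node restarts (each node only has ~600MB), that narrows things down considerably.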
I created a redis benchmark for all platforms including Windows
https://github.com/nrukavkov/another-redis-benchmark
Check this out 😁
I made a Docker image of my Golang application. When my application runs, it connects to a Redis standalone instance and a Redis cluster. This is the command I'm using to run the Docker container:
docker run --network host \
-e RATE_SHIELD_PORT=8080 \
-e REDIS_RULES_INSTANCE_URL=host.docker.internal:6379 \
-e REDIS_CLUSTERS_URLS=host.docker.internal:6380,host.docker.internal:6381,host.docker.internal:6382,host.docker.internal:6382,host.docker.internal:6384,host.docker.internal:6385 \
rate_shield_backend
It is successfully able to connect to the Redis standalone instance but not to the Redis cluster. Also, I entered the Docker container and tried connecting using redis-cli: I can connect to the Redis standalone instance but can't connect to the cluster.
Here is the output of docker run:
Redis Rules Instance Ping Result: PONG
Redis Cluster Instance Ping Result:
2024-11-20T11:29:07Z FTL unable to connect to redis or ping result is nil for rate limit cluster error="dial tcp 127.0.0.1:6380: connect: connection refused"
I'm receiving a PONG from Redis on port 6379, which is the single instance, but not from the cluster.
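One thing I still need to rule out (just a guess from the error): the refused connection is to 127.0.0.1:6380, and with a Redis Cluster the client discovers the other nodes from whatever addresses the cluster itself advertises, not from the seed list passed in. The commands below are a hypothetical check:
```
# From inside the container: what does the cluster say its nodes' addresses are?
redis-cli -h host.docker.internal -p 6380 CLUSTER NODES
# If the advertised IPs/ports are not reachable from the container, setting
# cluster-announce-ip / cluster-announce-port in each node's redis.conf so the
# nodes advertise reachable addresses may be needed.
```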
Hi everyone,
I’ve been trying to resolve an issue related to Redis services on Azure. Azure support advised me to reach out to a specific Redis contact email, which I did, along with sending an email to the general support address, but I haven’t received any response after several days.
Does anyone know the best way to get in touch with Redis support for Azure-related inquiries? I’d greatly appreciate any help or guidance!
Thanks in advance!
I developed a project for one of my government clients years ago, and it uses Redis 6.x for streaming and caching. It runs on Kubernetes with an image from Docker Hub (redis:6.x-alpine). At that time it was open source and free to use. Recently there was a license change with Redis. How does it affect them? Do they need to start paying now? They only have about 200MB of cached data in total. Please let me know.
The Redis producer and server talk to each other over a TCP socket. Currently my producer is getting data from a source that uses DPDK, which is causing my Redis producer/consumer TCP socket to choke. Is there any implementation of Redis that uses DPDK? Or is there any way to match the rate at which the data is being produced? TIA
I'm using Boost ASIO to schedule a thread that pushes high-frequency data to Redis. However, the Redis producer is slower, causing a buildup of Boost ASIO calls, which leads to high memory usage.
I am new to HFT. Any help will be appreciated.
I want to run my ML algorithm on a website with a nice realtime chart. I wrote the data pipeline which takes in different data streams using async python and would like to store it in memory with a TTL. It is financial time series trading data from a websocket.
- Sorted Set: Can't store nested JSON. Trades / order books are nested values.
- RedisTimeSeries: Can only store single values. Same issue as above.
- Redis Streams: Maybe?
- RedisJSON: No pub/sub model.
- redis-om (Python): Have to define fields and closely couple the data. I just want to dump the data in a list ordered by time. I can use it if I have to.
Ideally I would like to dump the data streams, then have a pub/sub model to let the front end know that a new data point is there, so it can run inference with my model and redraw the graph, with a TTL of a few minutes. I also need to do on-the-fly aggregation and cleaning of the data.
Raw data -> aggregated data -> data with model -> front end. Something like that.
When I scraped a training dataset I used a pandas DataFrame, which allowed me to loop and aggregate, and that worked great.
Sorry for the noob question, I've gone through every redis service for past few days and just need some guidance on what to use. My first time building a real website and first time use with Redis.
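Right now Streams look like the closest fit to me: entries are time-ordered field/value pairs, the stream can be capped so it only keeps recent points (there is no per-entry TTL, so MAXLEN is the usual stand-in), and a blocking read gives the "new data point is there" signal without a separate pub/sub channel. A rough sketch of what I'd dump into it (key names and values are made up):
```
XADD trades:BTCUSD MAXLEN ~ 100000 * price 64123.5 qty 0.25 side buy
XRANGE trades:BTCUSD - + COUNT 10          # recent history for the chart
XREAD BLOCK 0 STREAMS trades:BTCUSD $      # block until the next trade arrives
```
Nested order-book snapshots would still need to be serialized (e.g. a JSON string in one field), which may or may not be acceptable for the aggregation step.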
How do I enable RedisJSON (ReJSON) in a Redis cluster? Can I use the same code that I use with standalone Redis?
Hey everyone!
I'm working on a project that involves filtering IPs within ranges, and I need a high-performance solution for storing millions of these IP ranges (specified as start and end IPs as int32). The aim is to quickly check if an IP falls within any of these ranges and provide some associated metadata in case.
Would Redis with some workaround be viable, or are there better alternatives?
Thanks!
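The workaround I've been sketching is a single sorted set scored by the end of each range (values below are illustrative, IPs encoded as unsigned 32-bit integers, and it only works cleanly for non-overlapping ranges):
```
# Member format: "start:end:metadata-id", score = end of range.
ZADD ip_ranges 3232301055 "3232235520:3232301055:office-lan"   # 192.168.0.0 - 192.168.255.255
# Lookup for 192.168.1.50 (= 3232235826): take the first range whose end >= the IP...
ZRANGEBYSCORE ip_ranges 3232235826 +inf LIMIT 0 1
# ...then verify client-side that the returned member's start <= 3232235826.
```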
I am reading a book that uses both JWT and Redis.
According to the book, the ID of the access token (the jti
attribute in the JWT claims) is used as the key, and the user's ID is stored as the value in Redis.
I have one question: I thought JWT was intended for stateless authentication, but the method used in the book seems to add statefulness. Why does the book still use JWT? If statefulness is acceptable, wouldn’t session-based authentication be a better choice?
Thank you!
Hello guys, I'm a newbie with Redis and I'm still wondering whether my feature really needs Redis or whether a cached database table is enough.
I have to generate a "Top videos" list. My idea is to have a cron job that resets the list of top videos and stores the details in a Redis hash named video-details. The problem comes when I have multiple filters and sorts. For example, if I want to filter on 3 values of video_level, I have to define 3 sets in Redis; likewise, sorting by views or average rating means 2 more sets. That adds up to 5 sets and 1 hash in Redis. I wonder whether this is a good design, or whether I should instead have a table named "cachingTopvideo" that the cron job updates.
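To make the idea concrete, a rough sketch of what I mean (key names, fields, and scores are made up):
```
HSET video-details:123 title "Intro to Redis" level beginner views 1500 avg 4.2
ZADD top:views:beginner 1500 123      # one sorted set per (filter, sort) pair
ZADD top:avg:beginner 4.2 123
ZREVRANGE top:views:beginner 0 9      # top 10 beginner videos by views
```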
I appreciate your comment and upvote
Help meeeee.
A new version of Redis is out. It's a milestone release, so maybe don't use it in production, but it's got tons of performance improvements. And I'm particularly excited to share that the Redis Query Engine (which we used to just call RediSearch) supports clustering in Community Edition (i.e. for free). In our benchmarks, we used it to perform vector searches on a billion vectors. Details at the link.
I have been working for some time with GenAI chatbots backed by Redis. While most of my experience is using Python and a bit of Java for such applications, I wanted to try doing the same in PHP.
I developed this proof of concept for Laravel using the LLPhant GenAI framework and Redis as the vector database.
I used Predis, the Redis PHP client library, which supports the Redis Query Engine (vector search, full-text search, exact match, numeric, and geo queries). Combined with the JSON data type, it gives you everything you need to query your Redis database and start building.
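For anyone curious what the index side looks like, this is roughly the kind of schema the PoC relies on; the index name, key prefix, and vector dimension below are illustrative, not the exact ones from the repo:
```
FT.CREATE idx:docs ON JSON PREFIX 1 doc: SCHEMA $.content AS content TEXT $.embedding AS embedding VECTOR FLAT 6 TYPE FLOAT32 DIM 1536 DISTANCE_METRIC COSINE
FT.SEARCH idx:docs "*=>[KNN 5 @embedding $vec AS score]" PARAMS 2 vec "<binary vector blob>" SORTBY score DIALECT 2
```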
Let me know your thoughts; I am looking for feedback. I am not a Laravel expert, as I am coming from a few years with another framework.
Hi all,
I'm running Immich in Docker on a VPS with external block storage. It has four containers: server, postgres, redis, and machine learning.
A week or so ago, I noticed that the server was not accepting uploads or logins, and on top of that the web portal does not load.
Investigation found all containers are 'healthy' but the server container has this error in the logs.
ReplyError: NOAUTH Authentication required. at parseError (/usr/src/app/node_modules/redis-parser/lib/parser.js:179:12) at parseType (/usr/src/app/node_modules/redis-parser/lib/parser.js:302:14) { command: { name: 'info', args: [] } }
I can see it's an authentication error with Redis, but I'm not sure how to fix it.
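My working theory is that the Redis container now expects a password that the server container isn't sending. A hypothetical check I intend to run (the container name is illustrative):
```
docker exec -it immich_redis redis-cli PING                            # NOAUTH here confirms a password is required
docker exec -it immich_redis redis-cli -a 'your-redis-password' PING   # should reply PONG
```
If that's the case, the Immich server container presumably needs the matching credential (a REDIS_PASSWORD-style environment variable according to the Immich docs, though I'd double-check the exact name), or the password needs to come off the Redis container if it was set by accident.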
Any ideas would be greatly appreciated.
Thanks S
Hi y'all!
I'm working on an IoT solution where we want to improve reliability and speed, and I thought Redis might be the kind of DB that fits our case.
So, for context:
We have a bunch [1500~2000] of IoT devices, which are fully featured embedded Linux devices. Each one has about 6GB of RAM and 64GB of disk space with a decent CPU+GPU.
Right now there are some Docker containers on each device making requests to a cloud back end, but some things are cached in a local DB for faster access. That DB is Mongo, with a synchronization service that is soon to be deprecated. We need this approach to make the solution more reliable, since we could offer an offline experience on the same device in case of connection loss.
So I was considering moving to Redis to replace that internal DB, since it seems to be far less memory hungry and it's intended for distributed usage, so it has built-in means of synchronizing against a master. That master in our case could be on-premises or cloud based.
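If I understand Redis replication correctly, each device would essentially run as a read-only replica of the central master; a minimal redis.conf sketch for the edge side (hostname and credentials are illustrative):
```
replicaof redis-master.example.com 6379
masterauth edge-device-password
replica-read-only yes
replica-serve-stale-data yes   # keep serving the cached data while the link is down
```
Worth noting that plain replication is one-way (master to replica), so any data produced on the device itself would still have to be written to the master by the application rather than synced back automatically.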
Thank you all for reading and shedding some light into this matter!
I'm building an e-commerce app and want to implement a lightning-fast, scalable product search feature. I'm working with MongoDB as the database, and each product document has fields like productId, title, description, price, images, inventory_quantity, and more (sample document below). For search, I'd primarily focus on the title, and potentially the description if it doesn't compromise speed too much.
Here is a simple document:
The goal is to make the search feature ultrafast and highly relevant, handling high volumes and returning accurate results in milliseconds. Here are some key requirements:
- Search primarily on the title, and ideally the description if it doesn't slow things down significantly.
- An index over the searchable fields (title and description) to facilitate faster searches, as I've heard this is a technique often used in search systems.
Questions:
Any experiences, insights, or suggestions (technical details especially welcome!) are greatly appreciated. Thank you!
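To make the question concrete, here's roughly what I imagine the Redis side would look like if I mirrored the products into RediSearch / the Redis Query Engine (index name, key prefix, and field weights are illustrative):
```
FT.CREATE idx:products ON HASH PREFIX 1 product: SCHEMA title TEXT WEIGHT 5 description TEXT price NUMERIC SORTABLE
HSET product:123 title "Wireless Mouse" description "Ergonomic 2.4GHz wireless mouse" price 29.99
FT.SEARCH idx:products "wireless mouse" LIMIT 0 10
```
MongoDB would stay the source of truth, with the Redis index treated as a derived, rebuildable copy.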
I'm new to Redis and wondering if it would be a good for something I'm working on.
I have a form on a client-facing site that's collecting data (maybe a dozen fields) from users (maybe 1000 or so). Our internal system can query that data through a REST API for display, but each API call is pretty slow (a few seconds).
I was thinking about caching the data after a call to the API and then having any new form submissions trigger the cache to clear.
Is this a common use case? And is that a reasonable amount of data to store?
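For concreteness, the pattern I have in mind is something like this (the key name and TTL are made up):
```
SET api:form-results '<JSON payload from the API>' EX 3600   # cache the API response for an hour
GET api:form-results                                          # serve from the cache when present
DEL api:form-results                                          # clear it when a new form submission arrives
```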
Hi all,
I am trying to upgrade Redis from 6.2 on Rocky Linux 8 to 7.2 on Rocky Linux 9, and I managed to do almost everything, but the new slaves are in a disconnected state and I can't figure out why.
So this is how I did it:
I thought that should do it, but when I try to failover I get (error) NOGOODSLAVE No suitable replica to promote.
After some digging through statuses, I found that the issue is 10) "slave,disconnected" when I run redis-cli -p 26379 sentinel replicas test-cluster.
Here are some outputs:
[root@redis4 ~]# redis-cli -p 26379 sentinel replicas test-cluster
1) 1) "name"
2) "10.100.200.106:6379"
3) "ip"
4) "10.100.200.106"
5) "port"
6) "6379"
7) "runid"
8) "57bb455a3e7dcb13396696b9e96eaa6463fdf7e2"
9) "flags"
10) "slave,disconnected"
11) "link-pending-commands"
12) "0"
13) "link-refcount"
14) "1"
15) "last-ping-sent"
16) "0"
17) "last-ok-ping-reply"
18) "956"
19) "last-ping-reply"
20) "956"
21) "down-after-milliseconds"
22) "5000"
23) "info-refresh"
24) "4080"
25) "role-reported"
26) "slave"
27) "role-reported-time"
28) "4877433"
29) "master-link-down-time"
30) "0"
31) "master-link-status"
32) "ok"
33) "master-host"
34) "10.100.200.104"
35) "master-port"
36) "6379"
37) "slave-priority"
38) "100"
39) "slave-repl-offset"
40) "2115110"
41) "replica-announced"
42) "1"
2) 1) "name"
2) "10.100.200.105:6379"
3) "ip"
4) "10.100.200.105"
5) "port"
6) "6379"
7) "runid"
8) "5ba882d9d6e44615e9be544e6c5d469d13e9af2c"
9) "flags"
10) "slave,disconnected"
11) "link-pending-commands"
12) "0"
13) "link-refcount"
14) "1"
15) "last-ping-sent"
16) "0"
17) "last-ok-ping-reply"
18) "956"
19) "last-ping-reply"
20) "956"
21) "down-after-milliseconds"
22) "5000"
23) "info-refresh"
24) "4080"
25) "role-reported"
26) "slave"
27) "role-reported-time"
28) "4877433"
29) "master-link-down-time"
30) "0"
31) "master-link-status"
32) "ok"
33) "master-host"
34) "10.100.200.104"
35) "master-port"
36) "6379"
37) "slave-priority"
38) "100"
39) "slave-repl-offset"
40) "2115110"
41) "replica-announced"
42) "1"
Sentinel log on the slave:
251699:X 24 Oct 2024 17:16:35.623 * User requested shutdown...
251699:X 24 Oct 2024 17:16:35.623 # Sentinel is now ready to exit, bye bye...
252065:X 24 Oct 2024 17:16:35.639 * Supervised by systemd. Please make sure you set appropriate values for TimeoutStartSec and TimeoutStopSec in your service unit.
252065:X 24 Oct 2024 17:16:35.639 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
252065:X 24 Oct 2024 17:16:35.639 * Redis version=7.2.6, bits=64, commit=00000000, modified=0, pid=252065, just started
252065:X 24 Oct 2024 17:16:35.639 * Configuration loaded
252065:X 24 Oct 2024 17:16:35.639 * monotonic clock: POSIX clock_gettime
252065:X 24 Oct 2024 17:16:35.639 * Running mode=sentinel, port=26379.
252065:X 24 Oct 2024 17:16:35.639 * Sentinel ID is ca842661e783b16daffecb56638ef2f1036826fa
252065:X 24 Oct 2024 17:16:35.639 # +monitor master test-cluster 10.100.200.104 6379 quorum 2
252065:signal-handler (1729785210) Received SIGTERM scheduling shutdown...
252065:X 24 Oct 2024 17:53:30.528 * User requested shutdown...
252065:X 24 Oct 2024 17:53:30.528 # Sentinel is now ready to exit, bye bye...
252697:X 24 Oct 2024 17:53:30.541 * Supervised by systemd. Please make sure you set appropriate values for TimeoutStartSec and TimeoutStopSec in your service unit.
252697:X 24 Oct 2024 17:53:30.541 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
252697:X 24 Oct 2024 17:53:30.541 * Redis version=7.2.6, bits=64, commit=00000000, modified=0, pid=252697, just started
252697:X 24 Oct 2024 17:53:30.541 * Configuration loaded
252697:X 24 Oct 2024 17:53:30.541 * monotonic clock: POSIX clock_gettime
252697:X 24 Oct 2024 17:53:30.541 * Running mode=sentinel, port=26379.
252697:X 24 Oct 2024 17:53:30.541 * Sentinel ID is ca842661e783b16daffecb56638ef2f1036826fa
252697:X 24 Oct 2024 17:53:30.541 # +monitor master test-cluster 10.100.200.104 6379 quorum 2
Redis log:
Oct 24 18:08:48 redis5 redis[246101]: User requested shutdown...
Oct 24 18:08:48 redis5 redis[246101]: Saving the final RDB snapshot before exiting.
Oct 24 18:08:48 redis5 redis[246101]: DB saved on disk
Oct 24 18:08:48 redis5 redis[246101]: Removing the pid file.
Oct 24 18:08:48 redis5 redis[246101]: Redis is now ready to exit, bye bye...
Oct 24 18:08:48 redis5 redis[252962]: monotonic clock: POSIX clock_gettime
Oct 24 18:08:48 redis5 redis[252962]: Running mode=standalone, port=6379.
Oct 24 18:08:48 redis5 redis[252962]: Server initialized
Oct 24 18:08:48 redis5 redis[252962]: Loading RDB produced by version 7.2.6
Oct 24 18:08:48 redis5 redis[252962]: RDB age 0 seconds
Oct 24 18:08:48 redis5 redis[252962]: RDB memory usage when created 1.71 Mb
Oct 24 18:08:48 redis5 redis[252962]: Done loading RDB, keys loaded: 0, keys expired: 0.
Oct 24 18:08:48 redis5 redis[252962]: DB loaded from disk: 0.000 seconds
Oct 24 18:08:48 redis5 redis[252962]: Before turning into a replica, using my own master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
Oct 24 18:08:48 redis5 redis[252962]: Ready to accept connections tcp
Oct 24 18:08:48 redis5 redis[252962]: Connecting to MASTER 10.100.200.104:6379
Oct 24 18:08:48 redis5 redis[252962]: MASTER <-> REPLICA sync started
Oct 24 18:08:48 redis5 redis[252962]: Non blocking connect for SYNC fired the event.
Oct 24 18:08:48 redis5 redis[252962]: Master replied to PING, replication can continue...
Oct 24 18:08:48 redis5 redis[252962]: Trying a partial resynchronization (request db5a47a36aadccb0c928fc632f5232c0fc07051b:2151335).
Oct 24 18:08:48 redis5 redis[252962]: Successful partial resynchronization with master.
Oct 24 18:08:48 redis5 redis[252962]: MASTER <-> REPLICA sync: Master accepted a Partial Resynchronization.
The firewall is off and SELinux is not running. I have no idea why the slaves are disconnected. Anyone have a clue?
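Things I still plan to rule out (the commands are standard; the password value is illustrative):
```
# Can the Sentinel hosts actually reach the replicas on 6379?
redis-cli -h 10.100.200.106 -p 6379 PING
# What does Sentinel itself report about the master and its replicas?
redis-cli -p 26379 SENTINEL MASTER test-cluster
# If the 7.2 replicas now require a password that Sentinel doesn't know about,
# every Sentinel needs it set as well:
redis-cli -p 26379 SENTINEL SET test-cluster auth-pass <password>
```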
Hi all,
What Redis clients are you using for dev teams?
I'm looking for a Redis client that allows us to control access and roles for dev team members.
Thanks.
Did marketing ask me to post this? Of course. But that doesn't mean it's not worth checking out!
Redis Released: Worldwide is next month. It's virtual, it's free, and it's packed with talks by industry leaders from places like Dell, Viacom, NVIDIA, AWS, and more.
Edit: Here's the link.
Hi all, I have 7x Redis with Sentinel running version 5.0.4, with some hacks in the entrypoint to make it work more or less without problems on a Kubernetes cluster. These Redis instances store their database on File Storage from Oracle Cloud (NFS).
So, I tried to upgrade to version 7.4.1 using the Helm chart from Bitnami, and it went well.
The problem is that the old Redis database is on File Storage from Oracle Cloud (NFS) and has been working as expected for a year or two. With the new Bitnami chart I pointed the volume mount at the NFS share; it recognized the old 5.0.4 DB and upgraded it to 7.4.1, all fine. But after a while under load, the Redis container starts restarting and entering failover, and the logs show errors on the fsync operation and MISCONF errors.
So, after some reading on the internet, I tried mounting a block (disk) volume instead and, voilà, it works fine.
The problem is the cost: it needs 3 disks per Redis cluster, and if I scale it, it will require more disks for each pod. The minimum disk I can create on Oracle Cloud is 50GB, so I need 150GB of disks for each cluster without scaling, and that's not viable for us.
Each of my Redis instances uses around 1~5GB of space; I don't need 150GB that is 99% free all the time.
What am I missing here? What am I doing wrong?
Thank you!
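For context, these are the persistence knobs I've been looking at in the chart values / redis.conf while chasing the fsync and MISCONF errors (standard directives; relaxing them trades durability for stability rather than fixing the NFS behaviour itself):
```
stop-writes-on-bgsave-error no   # keep accepting writes even when a BGSAVE to NFS fails
appendfsync everysec             # if AOF is enabled, avoid an fsync on every write
save ""                          # or disable RDB snapshots entirely if they are not needed
```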