Error when creating a Redis Cluster: "/usr/share/rubygems/rubygems/core_ext/kernel_require.rb:55:in `require': cannot load such file -- redis (LoadError)"
[root@iZbp143t3oxhfc3ar7jey0Z bin]# redis-trib create --replicas 1 127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384
/usr/share/rubygems/rubygems/core_ext/kernel_require.rb:55:in `require': cannot load such file -- redis (LoadError)
	from /usr/share/rubygems/rubygems/core_ext/kernel_require.rb:55:in `require'
	from /usr/local/bin/redis-trib:25:in `<main>'
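The LoadError is raised by Ruby, not by Redis: redis-trib is a Ruby script, and the interpreter cannot find the redis gem it `require`s. A quick way to confirm that diagnosis before installing anything (a sketch, assuming `ruby` and `gem` are on the PATH):

```shell
# Ask RubyGems whether the 'redis' gem is installed; if the check cannot run
# (e.g. gem is not on the PATH), fall through to the "missing" message.
if gem list -i redis >/dev/null 2>&1; then
  status="redis gem present"
else
  status="redis gem missing -- install with: gem install redis"
fi
echo "$status"
```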
Solution: install the missing Ruby redis gem with `gem install redis`, then rerun the redis-trib command.
[root@iZbp143t3oxhfc3ar7jey0Z bin]# gem install redis
Fetching redis-4.1.3.gem
Successfully installed redis-4.1.3
Parsing documentation for redis-4.1.3
Installing ri documentation for redis-4.1.3
Done installing documentation for redis after 1 seconds
1 gem installed
[root@iZbp143t3oxhfc3ar7jey0Z bin]# redis-trib create --replicas 1 127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
127.0.0.1:6379
127.0.0.1:6380
127.0.0.1:6381
Adding replica 127.0.0.1:6383 to 127.0.0.1:6379
Adding replica 127.0.0.1:6384 to 127.0.0.1:6380
Adding replica 127.0.0.1:6382 to 127.0.0.1:6381
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 8de9ebab0e21b4343faaf0aca24f925e1b540607 127.0.0.1:6379
   slots:0-5460 (5461 slots) master
M: 3f2adf61d4cb795a425bc6371b68fc9a92a5068b 127.0.0.1:6380
   slots:5461-10922 (5462 slots) master
M: 274ad8fd63a1a18c68e5deb063d78a9a8a7ab3e9 127.0.0.1:6381
   slots:10923-16383 (5461 slots) master
S: b7011bc56dc685ceb734f3888bd6edc786b349f8 127.0.0.1:6382
   replicates 274ad8fd63a1a18c68e5deb063d78a9a8a7ab3e9
S: 106d83662a79958e0f24c9e1e3d9a51235c5fd63 127.0.0.1:6383
   replicates 8de9ebab0e21b4343faaf0aca24f925e1b540607
S: bcd235ffe5ebebfb96293fea0b4115a729a58fff 127.0.0.1:6384
   replicates 3f2adf61d4cb795a425bc6371b68fc9a92a5068b
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join....
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: 8de9ebab0e21b4343faaf0aca24f925e1b540607 127.0.0.1:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 3f2adf61d4cb795a425bc6371b68fc9a92a5068b 127.0.0.1:6380
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: 274ad8fd63a1a18c68e5deb063d78a9a8a7ab3e9 127.0.0.1:6381
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: bcd235ffe5ebebfb96293fea0b4115a729a58fff 127.0.0.1:6384
   slots: (0 slots) slave
   replicates 3f2adf61d4cb795a425bc6371b68fc9a92a5068b
S: 106d83662a79958e0f24c9e1e3d9a51235c5fd63 127.0.0.1:6383
   slots: (0 slots) slave
   replicates 8de9ebab0e21b4343faaf0aca24f925e1b540607
S: b7011bc56dc685ceb734f3888bd6edc786b349f8 127.0.0.1:6382
   slots: (0 slots) slave
   replicates 274ad8fd63a1a18c68e5deb063d78a9a8a7ab3e9
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@iZbp143t3oxhfc3ar7jey0Z bin]#
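Beyond redis-trib's own check, the cluster state can be verified at any time with the `CLUSTER INFO` command; a healthy cluster reports `cluster_state:ok`. A minimal sketch (assumes `redis-cli` is on the PATH and a cluster node is listening on 6379; it degrades gracefully otherwise):

```shell
# Grab the cluster_state line from CLUSTER INFO; if redis-cli is missing or
# no node is up on 6379, record "unknown" instead of failing.
state=$(redis-cli -p 6379 cluster info 2>/dev/null | grep cluster_state || echo "cluster_state:unknown")
echo "$state"
```

On a freshly created cluster like the one above, this should print `cluster_state:ok`.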
If your installed Ruby version is too old, see this post: https://www.cnblogs.com/dalianpai/p/12686271.html
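Note that from Redis 5 onward, redis-trib.rb's functionality is built into redis-cli itself (`redis-cli --cluster`), so the Ruby and gem dependency disappears entirely. A sketch of the equivalent command (needs six running, cluster-enabled nodes; ports taken from the session above):

```shell
# Redis 5+ equivalent of "redis-trib create --replicas 1 ..."; falls back to a
# note when redis-cli is missing or the nodes are not running.
result=$(redis-cli --cluster create \
  127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381 \
  127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384 \
  --cluster-replicas 1 </dev/null 2>/dev/null || echo "skipped: needs Redis 5+ and running nodes")
echo "$result"
```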