Configuring a bridge on CentOS 7

Note: I had previously configured a bridge on CentOS 7. After switching from bridge mode back to an ordinary NIC configuration, listing the interfaces still shows a lot of leftover junk. To remove the unwanted interface configuration completely, proceed as follows:

[root@linux-node1 ~]# ip add list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:9b:7d:d6 brd ff:ff:ff:ff:ff:ff
inet 10.10.10.11/24 brd 10.10.10.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe9b:7dd6/64 scope link
valid_lft forever preferred_lft forever
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
link/ether 52:54:00:f4:24:05 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
5: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
link/ether 52:54:00:f4:24:05 brd ff:ff:ff:ff:ff:ff

List the libvirt networks:

[root@linux-node1 ~]# virsh net-list
Name                 State      Autostart     Persistent
----------------------------------------------------------
default              active     yes           yes

Stop the network with "virsh net-destroy default":

[root@linux-node1 ~]# virsh net-destroy default
Network default destroyed

Remove it from the persistent configuration:

[root@linux-node1 ~]# virsh net-undefine default
Network default has been undefined

Restart the libvirtd service:

[root@linux-node1 ~]# systemctl restart libvirtd.service
[root@linux-node1 ~]# virsh net-list
Name                 State      Autostart     Persistent
----------------------------------------------------------
[root@linux-node1 ~]# ip add list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:9b:7d:d6 brd ff:ff:ff:ff:ff:ff
inet 10.10.10.11/24 brd 10.10.10.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe9b:7dd6/64 scope link
valid_lft forever preferred_lft forever

Checking again, the unnecessary entries are gone; much cleaner.


SSH connection fails with "Host key verification failed."


[root@cache001 swftools-0.9.0]# ssh 192.168.1.90
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
05:25:84:ea:dd:92:8d:80:ce:ad:5b:79:58:fe:c9:42.
Please contact your system administrator.
Add correct host key in /root/.ssh/known_hosts to get rid of this message.
Offending key in /root/.ssh/known_hosts:10
RSA host key for 192.168.1.90 has changed and you have requested strict checking.
Host key verification failed.

Anyone who uses OpenSSH knows that ssh records the public key of every host you connect to in ~/.ssh/known_hosts. The next time you connect to the same host, OpenSSH compares the key it receives with the stored one; if they differ, it raises a warning, protecting you from attacks such as DNS hijacking. How strictly SSH checks a host's public key is controlled by the StrictHostKeyChecking option, which defaults to StrictHostKeyChecking=ask. Its three settings, briefly:

1.StrictHostKeyChecking=no

#The least secure level, and also the least noisy; reasonable on a relatively trusted internal network. If the server's key is not yet present locally, it is added to the known-hosts file (known_hosts by default) automatically, with just a warning.

2.StrictHostKeyChecking=ask #The default level, which produces the prompt shown above. If the received key does not match the stored one, a warning is printed and the login is refused.

3.StrictHostKeyChecking=yes #The most secure level: if the key does not match, the connection is refused outright, without a detailed prompt.

I usually go with method 2.

-------------
Solution 1
-------------

In my case these are tests on an internal network, so for convenience I pick the lowest security level. Configure it in ~/.ssh/config (or /etc/ssh/ssh_config):

StrictHostKeyChecking no
UserKnownHostsFile /dev/null

(Note: for simplicity, the known-hosts file is pointed at /dev/null here, so host keys are never saved to known_hosts.)
---------------
Solution 2
---------------

vi ~/.ssh/known_hosts

and delete the RSA entry for the offending IP (line 10 of known_hosts, per the error above).
---------------
Solution 3
---------------

rm ~/.ssh/known_hosts

(Note that this drops the stored keys for every host, not just the offending one.)
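Solutions 2 and 3 can also be done in one command, without an editor. The sketch below is my addition (not in the original post) and uses OpenSSH's ssh-keygen -R, which removes the entry for a single host; it is demonstrated against throwaway files so nothing real is touched:

```shell
#!/bin/bash
# Remove one host's entry from a known_hosts file with ssh-keygen -R.
# Demonstrated on temporary files; point -f at ~/.ssh/known_hosts for real use.
work=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$work/key"      # a real key, just for the demo

# Build a two-host known_hosts file from the demo key.
printf '192.168.1.90 %s\n' "$(cut -d' ' -f1,2 "$work/key.pub")"  > "$work/known_hosts"
printf '192.168.1.91 %s\n' "$(cut -d' ' -f1,2 "$work/key.pub")" >> "$work/known_hosts"

# Drop only the 192.168.1.90 entry.
ssh-keygen -R 192.168.1.90 -f "$work/known_hosts" >/dev/null 2>&1

grep 192.168.1.91 "$work/known_hosts"    # the other host's entry survives
rm -rf "$work"
```

ssh-keygen -R also writes a backup of the old file with an .old suffix next to the original.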


Basic Redis installation


REmote DIctionary Server
Written in C and released under the BSD license
A high-performance key/value in-memory database
A NoSQL database service that also supports persisting data to disk
Chinese-language site: www.redis.cn

Features
Supports data persistence: data can be written to disk

Installation
Building from source
Confirm that gcc and gcc-c++ are installed:
[root@51 redis-4.0.8]# rpm -q gcc gcc-c++
gcc-4.8.5-16.el7.x86_64
gcc-c++-4.8.5-16.el7.x86_64

The source ships preconfigured, so compile directly:
[root@51 redis-4.0.8]# make
cd src && make all
make[1]: Entering directory '/root/soft/redis/redis-4.0.8/src'
CC Makefile.dep
make[1]: Leaving directory '/root/soft/redis/redis-4.0.8/src'
make[1]: Entering directory '/root/soft/redis/redis-4.0.8/src'
rm -rf redis-server redis-sentinel redis-cli redis-benchmark redis-check-rdb redis-check-aof *.o *.gcda *.gcno *.gcov redis.info lcov-html Makefile.dep dict-benchmark
(cd ../deps && make distclean)
make[2]: Entering directory '/root/soft/redis/redis-4.0.8/deps'
(cd hiredis && make clean) > /dev/null || true
(cd linenoise && make clean) > /dev/null || true
(cd lua && make clean) > /dev/null || true
(cd jemalloc && [ -f Makefile ] && make distclean) > /dev/null || true
(rm -f .make-*)
make[2]: Leaving directory '/root/soft/redis/redis-4.0.8/deps'
(rm -f .make-*)
echo STD=-std=c99 -pedantic -DREDIS_STATIC='' >> .make-settings
echo WARN=-Wall -W -Wno-missing-field-initializers >> .make-settings
echo OPT=-O2 >> .make-settings
echo MALLOC=jemalloc >> .make-settings
echo CFLAGS= >> .make-settings
echo LDFLAGS= >> .make-settings
echo REDIS_CFLAGS= >> .make-settings
echo REDIS_LDFLAGS= >> .make-settings
echo PREV_FINAL_CFLAGS=-std=c99 -pedantic -DREDIS_STATIC='' -Wall -W -Wno-missing-field-initializers -O2 -g -ggdb -I../deps/hiredis -I../deps/linenoise -I../deps/lua/src -DUSE_JEMALLOC -I../deps/jemalloc/include >> .make-settings
echo PREV_FINAL_LDFLAGS= -g -ggdb -rdynamic >> .make-settings
(cd ../deps && make hiredis linenoise lua jemalloc)
make[2]: Entering directory '/root/soft/redis/redis-4.0.8/deps'
(cd hiredis && make clean) > /dev/null || true
(cd linenoise && make clean) > /dev/null || true
(cd lua && make clean) > /dev/null || true
(cd jemalloc && [ -f Makefile ] && make distclean) > /dev/null || true
(rm -f .make-*)
(echo "" > .make-cflags)
(echo "" > .make-ldflags)
MAKE hiredis
cd hiredis && make static
make[3]: Entering directory '/root/soft/redis/redis-4.0.8/deps/hiredis'
cc -std=c99 -pedantic -c -O3 -fPIC -Wall -W -Wstrict-prototypes -Wwrite-strings -g -ggdb net.c
cc -std=c99 -pedantic -c -O3 -fPIC -Wall -W -Wstrict-prototypes -Wwrite-strings -g -ggdb hiredis.c
cc -std=c99 -pedantic -c -O3 -fPIC -Wall -W -Wstrict-prototypes -Wwrite-strings -g -ggdb sds.c
cc -std=c99 -pedantic -c -O3 -fPIC -Wall -W -Wstrict-prototypes -Wwrite-strings -g -ggdb async.c
cc -std=c99 -pedantic -c -O3 -fPIC -Wall -W -Wstrict-prototypes -Wwrite-strings -g -ggdb read.c
ar rcs libhiredis.a net.o hiredis.o sds.o async.o read.o
make[3]: Leaving directory '/root/soft/redis/redis-4.0.8/deps/hiredis'
MAKE linenoise
cd linenoise && make
make[3]: Entering directory '/root/soft/redis/redis-4.0.8/deps/linenoise'
cc -Wall -Os -g -c linenoise.c
make[3]: Leaving directory '/root/soft/redis/redis-4.0.8/deps/linenoise'
MAKE lua
cd lua/src && make all CFLAGS="-O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' " MYLDFLAGS="" AR="ar rcu"
make[3]: Entering directory '/root/soft/redis/redis-4.0.8/deps/lua/src'
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o lapi.o lapi.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o lcode.o lcode.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o ldebug.o ldebug.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o ldo.o ldo.c
ldo.c: In function 'f_parser':
ldo.c:496:7: warning: unused variable 'c' [-Wunused-variable]
int c = luaZ_lookahead(p->z);
^
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o ldump.o ldump.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o lfunc.o lfunc.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o lgc.o lgc.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o llex.o llex.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o lmem.o lmem.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o lobject.o lobject.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o lopcodes.o lopcodes.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o lparser.o lparser.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o lstate.o lstate.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o lstring.o lstring.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o ltable.o ltable.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o ltm.o ltm.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o lundump.o lundump.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o lvm.o lvm.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o lzio.o lzio.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o strbuf.o strbuf.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o fpconv.o fpconv.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o lauxlib.o lauxlib.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o lbaselib.o lbaselib.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o ldblib.o ldblib.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o liolib.o liolib.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o lmathlib.o lmathlib.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o loslib.o loslib.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o ltablib.o ltablib.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o lstrlib.o lstrlib.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o loadlib.o loadlib.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o linit.o linit.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o lua_cjson.o lua_cjson.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o lua_struct.o lua_struct.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o lua_cmsgpack.o lua_cmsgpack.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o lua_bit.o lua_bit.c
ar rcu liblua.a lapi.o lcode.o ldebug.o ldo.o ldump.o lfunc.o lgc.o llex.o lmem.o lobject.o lopcodes.o lparser.o lstate.o lstring.o ltable.o ltm.o lundump.o lvm.o lzio.o strbuf.o fpconv.o lauxlib.o lbaselib.o ldblib.o liolib.o lmathlib.o loslib.o ltablib.o lstrlib.o loadlib.o linit.o lua_cjson.o lua_struct.o lua_cmsgpack.o lua_bit.o # DLL needs all object files
ranlib liblua.a
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o lua.o lua.c
cc -o lua lua.o liblua.a -lm
liblua.a(loslib.o): In function 'os_tmpname':
loslib.c:(.text+0x28c): warning: the use of `tmpnam' is dangerous, better use `mkstemp'
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o luac.o luac.c
cc -O2 -Wall -DLUA_ANSI -DENABLE_CJSON_GLOBAL -DREDIS_STATIC='' -c -o print.o print.c
cc -o luac luac.o print.o liblua.a -lm
make[3]: Leaving directory '/root/soft/redis/redis-4.0.8/deps/lua/src'
MAKE jemalloc
cd jemalloc && ./configure --with-lg-quantum=3 --with-jemalloc-prefix=je_ --enable-cc-silence CFLAGS="-std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops " LDFLAGS=""
checking for xsltproc… /usr/bin/xsltproc
checking for gcc… gcc
checking whether the C compiler works… yes
checking for C compiler default output file name… a.out
checking for suffix of executables…
checking whether we are cross compiling… no
checking for suffix of object files… o
checking whether we are using the GNU C compiler… yes
checking whether gcc accepts -g… yes
checking for gcc option to accept ISO C89… none needed
checking how to run the C preprocessor… gcc -E
checking for grep that handles long lines and -e… /usr/bin/grep
checking for egrep… /usr/bin/grep -E
checking for ANSI C header files… yes
checking for sys/types.h… yes
checking for sys/stat.h… yes
checking for stdlib.h… yes
checking for string.h… yes
checking for memory.h… yes
checking for strings.h… yes
checking for inttypes.h… yes
checking for stdint.h… yes
checking for unistd.h… yes
checking whether byte ordering is bigendian… no
checking size of void *… 8
checking size of int… 4
checking size of long… 8
checking size of intmax_t… 8
checking build system type… x86_64-unknown-linux-gnu
checking host system type… x86_64-unknown-linux-gnu
checking whether pause instruction is compilable… yes
checking for ar… ar
checking malloc.h usability… yes
checking malloc.h presence… yes
checking for malloc.h… yes
checking whether malloc_usable_size definition can use const argument… no
checking whether __attribute__ syntax is compilable… yes
checking whether compiler supports -fvisibility=hidden… yes
checking whether compiler supports -Werror… yes
checking whether tls_model attribute is compilable… yes
checking whether compiler supports -Werror… yes
checking whether alloc_size attribute is compilable… yes
checking whether compiler supports -Werror… yes
checking whether format(gnu_printf, …) attribute is compilable… yes
checking whether compiler supports -Werror… yes
checking whether format(printf, …) attribute is compilable… yes
checking for a BSD-compatible install… /usr/bin/install -c
checking for ranlib… ranlib
checking for ld… /usr/bin/ld
checking for autoconf… false
checking for memalign… yes
checking for valloc… yes
checking configured backtracing method… N/A
checking for sbrk… yes
checking whether utrace(2) is compilable… no
checking whether valgrind is compilable… no
checking whether a program using __builtin_ffsl is compilable… yes
checking LG_PAGE… 12
checking pthread.h usability… yes
checking pthread.h presence… yes
checking for pthread.h… yes
checking for pthread_create in -lpthread… yes
checking for library containing clock_gettime… none required
checking for secure_getenv… yes
checking for issetugid… no
checking for _malloc_thread_cleanup… no
checking for _pthread_mutex_init_calloc_cb… no
checking for TLS… yes
checking whether C11 atomics is compilable… no
checking whether atomic(9) is compilable… no
checking whether Darwin OSAtomic*() is compilable… no
checking whether madvise(2) is compilable… yes
checking whether to force 32-bit __sync_{add,sub}_and_fetch()… no
checking whether to force 64-bit __sync_{add,sub}_and_fetch()… no
checking for __builtin_clz… yes
checking whether Darwin OSSpin*() is compilable… no
checking whether glibc malloc hook is compilable… yes
checking whether glibc memalign hook is compilable… yes
checking whether pthreads adaptive mutexes is compilable… yes
checking for stdbool.h that conforms to C99… yes
checking for _Bool… yes
configure: creating ./config.status
config.status: creating Makefile
config.status: creating jemalloc.pc
config.status: creating doc/html.xsl
config.status: creating doc/manpages.xsl
config.status: creating doc/jemalloc.xml
config.status: creating include/jemalloc/jemalloc_macros.h
config.status: creating include/jemalloc/jemalloc_protos.h
config.status: creating include/jemalloc/jemalloc_typedefs.h
config.status: creating include/jemalloc/internal/jemalloc_internal.h
config.status: creating test/test.sh
config.status: creating test/include/test/jemalloc_test.h
config.status: creating config.stamp
config.status: creating bin/jemalloc-config
config.status: creating bin/jemalloc.sh
config.status: creating bin/jeprof
config.status: creating include/jemalloc/jemalloc_defs.h
config.status: creating include/jemalloc/internal/jemalloc_internal_defs.h
config.status: creating test/include/test/jemalloc_test_defs.h
config.status: executing include/jemalloc/internal/private_namespace.h commands
config.status: executing include/jemalloc/internal/private_unnamespace.h commands
config.status: executing include/jemalloc/internal/public_symbols.txt commands
config.status: executing include/jemalloc/internal/public_namespace.h commands
config.status: executing include/jemalloc/internal/public_unnamespace.h commands
config.status: executing include/jemalloc/internal/size_classes.h commands
config.status: executing include/jemalloc/jemalloc_protos_jet.h commands
config.status: executing include/jemalloc/jemalloc_rename.h commands
config.status: executing include/jemalloc/jemalloc_mangle.h commands
config.status: executing include/jemalloc/jemalloc_mangle_jet.h commands
config.status: executing include/jemalloc/jemalloc.h commands
===============================================================================
jemalloc version : 4.0.3-0-ge9192eacf8935e29fc62fddc2701f7942b1cc02c
library revision : 2

CONFIG : --with-lg-quantum=3 --with-jemalloc-prefix=je_ --enable-cc-silence 'CFLAGS=-std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops ' LDFLAGS=
CC : gcc
CFLAGS : -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops -fvisibility=hidden
CPPFLAGS : -D_GNU_SOURCE -D_REENTRANT
LDFLAGS :
EXTRA_LDFLAGS :
LIBS : -lpthread
TESTLIBS :
RPATH_EXTRA :

XSLTPROC : /usr/bin/xsltproc
XSLROOT :

PREFIX : /usr/local
BINDIR : /usr/local/bin
DATADIR : /usr/local/share
INCLUDEDIR : /usr/local/include
LIBDIR : /usr/local/lib
MANDIR : /usr/local/share/man

srcroot :
abs_srcroot : /root/soft/redis/redis-4.0.8/deps/jemalloc/
objroot :
abs_objroot : /root/soft/redis/redis-4.0.8/deps/jemalloc/

JEMALLOC_PREFIX : je_
JEMALLOC_PRIVATE_NAMESPACE
: je_
install_suffix :
autogen : 0
cc-silence : 1
debug : 0
code-coverage : 0
stats : 1
prof : 0
prof-libunwind : 0
prof-libgcc : 0
prof-gcc : 0
tcache : 1
fill : 1
utrace : 0
valgrind : 0
xmalloc : 0
munmap : 0
lazy_lock : 0
tls : 1
cache-oblivious : 1
===============================================================================
cd jemalloc && make CFLAGS="-std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops " LDFLAGS="" lib/libjemalloc.a
make[3]: Entering directory '/root/soft/redis/redis-4.0.8/deps/jemalloc'
gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/jemalloc.o src/jemalloc.c
gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/arena.o src/arena.c
gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/atomic.o src/atomic.c
gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/base.o src/base.c
gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/bitmap.o src/bitmap.c
gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/chunk.o src/chunk.c
gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/chunk_dss.o src/chunk_dss.c
gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/chunk_mmap.o src/chunk_mmap.c
gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/ckh.o src/ckh.c
gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/ctl.o src/ctl.c
gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/extent.o src/extent.c
gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/hash.o src/hash.c
gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/huge.o src/huge.c
gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/mb.o src/mb.c
gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/mutex.o src/mutex.c
gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/pages.o src/pages.c
gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/prof.o src/prof.c
gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/quarantine.o src/quarantine.c
gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/rtree.o src/rtree.c
gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/stats.o src/stats.c
gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/tcache.o src/tcache.c
gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/util.o src/util.c
gcc -std=gnu99 -Wall -pipe -g3 -O3 -funroll-loops -c -D_GNU_SOURCE -D_REENTRANT -Iinclude -Iinclude -o src/tsd.o src/tsd.c
ar crus lib/libjemalloc.a src/jemalloc.o src/arena.o src/atomic.o src/base.o src/bitmap.o src/chunk.o src/chunk_dss.o src/chunk_mmap.o src/ckh.o src/ctl.o src/extent.o src/hash.o src/huge.o src/mb.o src/mutex.o src/pages.o src/prof.o src/quarantine.o src/rtree.o src/stats.o src/tcache.o src/util.o src/tsd.o
make[3]: Leaving directory '/root/soft/redis/redis-4.0.8/deps/jemalloc'
make[2]: Leaving directory '/root/soft/redis/redis-4.0.8/deps'
CC adlist.o
CC quicklist.o
CC ae.o
CC anet.o
CC dict.o
CC server.o
CC sds.o
CC zmalloc.o
CC lzf_c.o
CC lzf_d.o
CC pqsort.o
CC zipmap.o
CC sha1.o
CC ziplist.o
CC release.o
CC networking.o
CC util.o
CC object.o
CC db.o
CC replication.o
CC rdb.o
CC t_string.o
CC t_list.o
CC t_set.o
CC t_zset.o
CC t_hash.o
CC config.o
CC aof.o
CC pubsub.o
CC multi.o
CC debug.o
CC sort.o
CC intset.o
CC syncio.o
CC cluster.o
CC crc16.o
CC endianconv.o
CC slowlog.o
CC scripting.o
CC bio.o
CC rio.o
CC rand.o
CC memtest.o
CC crc64.o
CC bitops.o
CC sentinel.o
CC notify.o
CC setproctitle.o
CC blocked.o
CC hyperloglog.o
CC latency.o
CC sparkline.o
CC redis-check-rdb.o
CC redis-check-aof.o
CC geo.o
CC lazyfree.o
CC module.o
CC evict.o
CC expire.o
CC geohash.o
CC geohash_helper.o
CC childinfo.o
CC defrag.o
CC siphash.o
CC rax.o
LINK redis-server
INSTALL redis-sentinel
CC redis-cli.o
LINK redis-cli
CC redis-benchmark.o
LINK redis-benchmark
INSTALL redis-check-rdb
INSTALL redis-check-aof

Hint: It's a good idea to run 'make test' 😉

make[1]: Leaving directory '/root/soft/redis/redis-4.0.8/src'

Install:
[root@51 redis-4.0.8]# make install
cd src && make install
make[1]: Entering directory '/root/soft/redis/redis-4.0.8/src'
CC Makefile.dep
make[1]: Leaving directory '/root/soft/redis/redis-4.0.8/src'
make[1]: Entering directory '/root/soft/redis/redis-4.0.8/src'

Hint: It's a good idea to run 'make test' 😉

INSTALL install
INSTALL install
INSTALL install
INSTALL install
INSTALL install
make[1]: Leaving directory '/root/soft/redis/redis-4.0.8/src'

*Important* Initialize the configuration:
[root@51 redis-4.0.8]# ./utils/install_server.sh
Welcome to the redis service installer
This script will help you easily set up a running redis server
Choose the port; the default is 6379:
Please select the redis port for this instance: [6379]
Selecting default: 6379
Choose the config file name and location; accept the default:
Please select the redis config file name [/etc/redis/6379.conf]
Selected default – /etc/redis/6379.conf
Choose the log file name and location; accept the default:
Please select the redis log file name [/var/log/redis_6379.log]
Selected default – /var/log/redis_6379.log
Choose the data directory; accept the default:
Please select the data directory for this instance [/var/lib/redis/6379]
Selected default – /var/lib/redis/6379
Choose the path to the redis-server executable; accept the default:
Please select the redis executable path [/usr/local/bin/redis-server]
Selected config:
Port : 6379
Config file : /etc/redis/6379.conf
Log file : /var/log/redis_6379.log
Data dir : /var/lib/redis/6379
Executable : /usr/local/bin/redis-server
Cli Executable : /usr/local/bin/redis-cli
Is this ok? Then press ENTER to go on or Ctrl-C to abort.
Copied /tmp/6379.conf => /etc/init.d/redis_6379
Installing service…
Successfully added to chkconfig!
Successfully added to runlevels 345!
Starting Redis server…
Installation successful!

Confirm the service is running:
[root@51 redis-4.0.8]# netstat -antuo | grep 6379
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN off (0.00/0/0)

The service can also be started and stopped manually:
[root@51 redis-4.0.8]# /etc/init.d/
netconsole network redis_6379 rhnsd
[root@51 redis-4.0.8]# /etc/init.d/redis_6379 status
Redis is running (4762)
[root@51 redis-4.0.8]# /etc/init.d/redis_6379 stop
Stopping …
Redis stopped
[root@51 redis-4.0.8]# /etc/init.d/redis_6379 status
cat: /var/run/redis_6379.pid: No such file or directory
Redis is running ()
[root@51 redis-4.0.8]# /etc/init.d/redis_6379 start
Starting Redis server…
[root@51 redis-4.0.8]# /etc/init.d/redis_6379 status
Redis is running (4831)
[root@51 redis-4.0.8]# netstat -antuo | grep 6379
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN off (0.00/0/0)
tcp 0 0 127.0.0.1:6379 127.0.0.1:47670 TIME_WAIT timewait (40.09/0/0)

Check the configuration file:
[root@51 redis-4.0.8]# vim /etc/redis/6379.conf

Check the data directory:
[root@51 redis-4.0.8]# ll /var/lib/redis/6379/
total 4
-rw-r--r--. 1 root root 92 Jun  7 09:34 dump.rdb

Redis commands

[root@51 redis-4.0.8]# redis-cli
127.0.0.1:6379> ping
PONG

Inserting values
Syntax:
127.0.0.1:6379[15]> set key value [EX seconds] [PX milliseconds] [NX|XX]
For example:
127.0.0.1:6379> select 15
OK
127.0.0.1:6379[15]> keys *
(empty list or set)
127.0.0.1:6379[15]> set key jim
OK
127.0.0.1:6379[15]> get key
"jim"

Inserting several values:
127.0.0.1:6379[15]> select 0
OK
127.0.0.1:6379> set v1 9
OK
127.0.0.1:6379> set v2 10
OK
127.0.0.1:6379> set v3 100
OK
127.0.0.1:6379> keys v?
1) "v3"
2) "v2"
3) "v1"

Check a value's type; by default everything is a string:
127.0.0.1:6379> type v1
string

Check a key's time to live, i.e. when it will be removed from memory; by default keys never expire:
127.0.0.1:6379> ttl v1
(integer) -1

Check whether a key exists (1 if it exists, 0 if not):
127.0.0.1:6379> Exists k1
(integer) 0
127.0.0.1:6379> Exists v1
(integer) 1
127.0.0.1:6379>
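To make the SET/TTL/EXISTS semantics above concrete, here is a toy model in bash (my own sketch for illustration, not Redis code): each key stores a value plus an optional absolute expiry timestamp, TTL reports -1 when no expiry is set, and EXISTS answers 1 or 0:

```shell
#!/bin/bash
# Toy model of SET [EX seconds], TTL and EXISTS semantics.
declare -A store expire_at

kv_set() {                         # kv_set <key> <value> [ex_seconds]
    store[$1]=$2
    if [ -n "$3" ]; then
        expire_at[$1]=$(( $(date +%s) + $3 ))
    else
        unset "expire_at[$1]"
    fi
}

kv_ttl() {                         # -1 means "no expiry", like TTL in Redis
    if [ -z "${expire_at[$1]+x}" ]; then
        echo -1
    else
        echo $(( expire_at[$1] - $(date +%s) ))
    fi
}

kv_exists() {                      # 1 if the key is present, else 0
    if [ -n "${store[$1]+x}" ]; then echo 1; else echo 0; fi
}

kv_set v1 9              # no expiry
kv_set session abc 100   # expires 100 s from now
kv_ttl v1                # prints -1
kv_exists k1             # prints 0
kv_exists v1             # prints 1
```

A real Redis also deletes expired keys lazily on access; the sketch leaves that part out.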

Move a key to another database, for example database 1:
127.0.0.1:6379> move v2 1
(integer) 1
127.0.0.1:6379> keys * // no longer in database 0
1) "v3"
2) "v1"
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> keys * // now in database 1
1) "v2"
127.0.0.1:6379[1]>

Redis supports persistence; use the save command:
127.0.0.1:6379[1]> save
OK

Flushing everything out of memory:
127.0.0.1:6379[1]> keys *
1) "v2"
127.0.0.1:6379[1]> flushall
OK
127.0.0.1:6379[1]> keys *
(empty list or set)

Edit the configuration file:
[root@51 ~]# vim /etc/redis/6379.conf
Add the IP on line 70:
70 bind 127.0.0.1 192.168.4.51

Restart the service:
[root@51 ~]# /etc/init.d/redis_6379 start
Starting Redis server…
[root@51 ~]# /etc/init.d/redis_6379 status
Redis is running (5967)

Confirm the change took effect; it is now also listening on 192.168.4.51:
[root@51 ~]# netstat -antup | grep redis
tcp 0 0 192.168.4.51:6379 0.0.0.0:* LISTEN 5967/redis-server 1
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 5967/redis-server 1

The port can be changed as well (line 93):
93 port 8888
[root@51 ~]# vim /etc/redis/6379.conf
[root@51 ~]# /etc/init.d/redis_6379 stop
Stopping …
Waiting for Redis to shutdown …
Redis stopped
[root@51 ~]# /etc/init.d/redis_6379 start
Starting Redis server…
[root@51 ~]# /etc/init.d/redis_6379 status
Redis is running (6028)
[root@51 ~]# netstat -antup | grep redis
tcp 0 0 192.168.4.51:8888 0.0.0.0:* LISTEN 6028/redis-server 1
tcp 0 0 127.0.0.1:8888 0.0.0.0:* LISTEN 6028/redis-server 1

Note that clients must now specify the new port;
the old port no longer works:
[root@51 ~]# redis-cli
Could not connect to Redis at 127.0.0.1:6379: Connection refused
Could not connect to Redis at 127.0.0.1:6379: Connection refused
not connected> exit
Using the new port:
[root@51 ~]# redis-cli -p 8888
127.0.0.1:8888> exit

Verify from another machine (here host 56, connecting to 51):
[root@56 ~]# redis-cli -h 192.168.4.51 -p 8888
192.168.4.51:8888> select 0
OK
192.168.4.51:8888> keys *
(empty list or set)
192.168.4.51:8888>

The number of TCP connections can be limited.
Redis runs over TCP; connections beyond this backlog are queued (line 102):
102 tcp-backlog 511

Make connections never time out (line 114):
114 timeout 0

Connection keepalive interval: check the link every 300 s (line 131):
131 tcp-keepalive 300

Run as a daemon resident in memory (line 137):
137 daemonize yes

Number of databases (0-15 by default, line 187):
187 databases 16

Log file location, configured earlier (line 172):
172 logfile /var/log/redis_6379.log

Maximum number of concurrent client connections (line 533):
533 # maxclients 10000

Redis data directory, also set earlier (line 264):
264 dir /var/lib/redis/6379

How much memory to give the database; note that the units are explained in a comment at the top of the config file (the value is in bytes, line 560):
560 # maxmemory <bytes>

12 # 1k => 1000 bytes
13 # 1kb => 1024 bytes
14 # 1m => 1000000 bytes
15 # 1mb => 1024*1024 bytes
16 # 1g => 1000000000 bytes
17 # 1gb => 1024*1024*1024 bytes
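The distinction between the 1000-based and 1024-based suffixes is easy to get wrong, so here is a small converter (my own illustration, not part of Redis) that reproduces the table above:

```shell
#!/bin/bash
# Convert a redis.conf-style size (1k, 1kb, 1m, ...) to bytes.
# k/m/g are powers of 1000; kb/mb/gb are powers of 1024.
to_bytes() {
    local n=${1%%[a-zA-Z]*}        # numeric part, e.g. "1" from "1kb"
    local unit=${1#"$n"}           # suffix part, e.g. "kb"
    case ${unit,,} in
        "")  echo "$n" ;;
        k)   echo $(( n * 1000 )) ;;
        kb)  echo $(( n * 1024 )) ;;
        m)   echo $(( n * 1000000 )) ;;
        mb)  echo $(( n * 1024 * 1024 )) ;;
        g)   echo $(( n * 1000000000 )) ;;
        gb)  echo $(( n * 1024 * 1024 * 1024 )) ;;
        *)   echo "unknown unit: $unit" >&2; return 1 ;;
    esac
}

to_bytes 1kb   # prints 1024
to_bytes 2m    # prints 2000000
```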

Memory eviction policies
Evict an approximately least-recently-used key, among keys with an expire set:
565 # volatile-lru -> Evict using approximated LRU among the keys with an expire set.
Evict an approximately least-recently-used key, among all keys:
566 # allkeys-lru -> Evict any key using approximated LRU.

The least-frequently-used (LFU) variants:
567 # volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
568 # allkeys-lfu -> Evict any key using approximated LFU.
Remove a random key among those with an expire set:
569 # volatile-random -> Remove a random key among the ones with an expire set.
Remove a random key, any key at all:
570 # allkeys-random -> Remove a random key, any key.
Remove the key closest to expiring:
571 # volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
The default: evict nothing and return an error on writes once memory is full; best avoided in production:
572 # noeviction -> Don't evict anything, just return an error on write operations.
Line 591 notes that the default eviction policy is noeviction.

Number of samples used by the approximated LRU/LFU algorithms (line 602):
602 # maxmemory-samples 5
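maxmemory-samples is the knob behind the "approximated" LRU above: instead of tracking an exact LRU order, Redis samples that many keys and evicts the best candidate within the sample. A rough bash illustration of the idea (my own sketch, not Redis internals):

```shell
#!/bin/bash
# Approximated LRU: sample a few keys and evict the least recently
# used key within the sample, instead of scanning every key.
declare -A last_access=( [a]=100 [b]=50 [c]=70 [d]=20 [e]=90 )
samples=3                          # plays the role of maxmemory-samples

keys=("${!last_access[@]}")
victim="" victim_ts=""
for (( i = 0; i < samples; i++ )); do
    k=${keys[RANDOM % ${#keys[@]}]}     # pick a random key
    ts=${last_access[$k]}
    if [ -z "$victim" ] || [ "$ts" -lt "$victim_ts" ]; then
        victim=$k
        victim_ts=$ts
    fi
done
echo "evicting: $victim"           # oldest access time among the sample
```

With the redis.conf default of 5 samples the approximation is already close to true LRU in practice.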

The entries under /etc/init.d/ are scripts:
[root@51 ~]# cd /etc/init.d/
[root@51 init.d]# ll
total 48
-rw-r--r--. 1 root root 17500 May  3 2017 functions
-rwxr-xr-x. 1 root root 4334 May  3 2017 netconsole
-rwxr-xr-x. 1 root root 7293 May  3 2017 network
-rw-r--r--. 1 root root 1160 Jun 27 2017 README
-rwxr-xr-x. 1 root root 1702 Jun  7 09:28 redis_6379
-rwxr-xr-x. 1 root root 2443 Mar  6 2017 rhnsd

[root@51 init.d]# netstat -antup | grep 2636
tcp 0 0 192.168.4.51:6379 0.0.0.0:* LISTEN 2636/redis-server 1
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 2636/redis-server 1

Require a password (line 501):
501 requirepass 123456

[root@51 init.d]# /etc/init.d/redis_6379 start
Starting Redis server…
[root@51 init.d]# /etc/init.d/redis_6379 status
Redis is running (2636)
[root@51 init.d]# redis-cli
127.0.0.1:6379> ping
(error) NOAUTH Authentication required.
127.0.0.1:6379> auth 123456
OK
127.0.0.1:6379> exit

Note: if you change the port, password, IP, and so on, the init script must be updated with the same information.
For example, with a password set, the stock script cannot stop the server:
[root@51 init.d]# /etc/init.d/redis_6379 stop
Stopping …
(error) NOAUTH Authentication required.
Waiting for Redis to shutdown …
Waiting for Redis to shutdown …
Waiting for Redis to shutdown …
Waiting for Redis to shutdown …
Waiting for Redis to shutdown …
^C

The script needs to be modified:
[root@51 init.d]# vim /etc/init.d/redis_6379
43 $CLIEXEC -p $REDISPORT -a 123456 shutdown

Stopping via the script now works normally again:
[root@51 init.d]# /etc/init.d/redis_6379 stop
Stopping …
Waiting for Redis to shutdown …
Redis stopped

Likewise, after changing the IP, port, and so on, adjust the script accordingly.

Verify the IP binding from another machine:
[root@56 ~]# redis-cli -h 192.168.4.51 -a 123456
192.168.4.51:6379> ping
PONG
192.168.4.51:6379>

Now change the port, from 6379 to 8888, and try again:
[root@51 init.d]# vim /etc/redis/6379.conf
93 port 8888
[root@51 init.d]# /etc/init.d/redis_6379 stop
Stopping …
Redis stopped
[root@51 init.d]#
[root@51 init.d]# /etc/init.d/redis_6379 start
Starting Redis server…
[root@51 init.d]# netstat -antup | grep 8888
tcp 0 0 192.168.4.51:8888 0.0.0.0:* LISTEN 3198/redis-server 1
tcp 0 0 127.0.0.1:8888 0.0.0.0:* LISTEN 3198/redis-server 1
Verify shutdown: as shown, the script can no longer stop the server:
[root@51 init.d]# /etc/init.d/redis_6379 stop
Stopping …
Could not connect to Redis at 127.0.0.1:6379: Connection refused
Waiting for Redis to shutdown …
Waiting for Redis to shutdown …
Waiting for Redis to shutdown …
^C

Fix the port in the script and run it again; now it works:
[root@51 init.d]# vim /etc/init.d/redis_6379
8 REDISPORT="8888"
[root@51 init.d]# /etc/init.d/redis_6379 stop
Stopping …
Redis stopped
[root@51 init.d]#

In short: whatever value you change, mirror the change in the init script, or else pass the matching options to redis-cli by hand.
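One way to stop the script and the config file from drifting apart is to read the values out of the config instead of hard-coding them. A sketch (the awk helper and the demo file are mine; adapt the path to your /etc/redis/6379.conf):

```shell
#!/bin/bash
# Pull port/password out of a redis.conf so scripts never drift from it.

redis_conf_get() {             # redis_conf_get <file> <directive>
    awk -v key="$2" '$1 == key { print $2; exit }' "$1"
}

# Demo against a throwaway config so the sketch is self-contained;
# in real use, pass /etc/redis/6379.conf instead.
demo=$(mktemp)
printf 'port 8888\nrequirepass 123456\n' > "$demo"

port=$(redis_conf_get "$demo" port)
pass=$(redis_conf_get "$demo" requirepass)
echo "would run: redis-cli -p $port -a $pass shutdown"
rm -f "$demo"
```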

LNMP setup
1. Deploy the LNMP stack
(1) Build NGINX from source
[root@51 nginx-1.12.2]# yum install -y gcc gcc-c++ pcre-devel zlib-devel php-common
[root@51 nginx-1.12.2]# ./configure --prefix=/usr/local/nginx
make && make install
[root@51 lnmp]# ln -s /usr/local/nginx/sbin/nginx /usr/bin/nginx
[root@51 lnmp]# nginx
[root@51 lnmp]# netstat -antup | grep nginx
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 8627/nginx: master

Edit the nginx configuration file:
[root@51 nginx]# vim conf/nginx.conf

65 location ~ \.php$ {
66 root html;
67 fastcgi_pass 127.0.0.1:9000;
68 fastcgi_index index.php;
69 fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
70 include fastcgi.conf;
71 }
72
73 # deny access to .htaccess files, if Apache's document root
74 # concurs with nginx's one
75 #
76 location ~ /\.ht {
77 deny all;
78 }
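One caveat about the block above (my observation, not from the original post): fastcgi.conf already sets SCRIPT_FILENAME to $document_root$fastcgi_script_name, so the stock line 69 sends a second, conflicting value rooted at /scripts. When including fastcgi.conf, the location is usually trimmed to:

```nginx
location ~ \.php$ {
    root           html;
    fastcgi_pass   127.0.0.1:9000;
    fastcgi_index  index.php;
    include        fastcgi.conf;   # provides SCRIPT_FILENAME
}
```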

(2) Deploy MySQL
Already installed; skipped.

(3) Deploy PHP
[root@51 lnmp]# rpm -ivh php-fpm-5.4.16-42.el7.x86_64.rpm
warning: php-fpm-5.4.16-42.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:php-fpm-5.4.16-42.el7 ################################# [100%]
[root@51 lnmp]# systemctl restart php-fpm
[root@51 lnmp]# systemctl enable php-fpm
Created symlink from /etc/systemd/system/multi-user.target.wants/php-fpm.service to /usr/lib/systemd/system/php-fpm.service.
[root@51 lnmp]# netstat -antup | grep 9000
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 9038/php-fpm: maste

Write a default home page:
[root@51 lnmp]# echo 123 > /usr/local/nginx/html/index.html

Test access from another machine:
[root@56 ~]# curl 192.168.4.51
123

(*) Important: install php-redis
Install dependency 1:
[root@51 redis]# yum install -y autoconf automake
Installed:
autoconf.noarch 0:2.69-11.el7 automake.noarch 0:1.13.4-3.el7
Installed as dependencies:
m4.x86_64 0:1.4.16-10.el7
Complete!

Install dependency 2:
[root@51 redis]# yum install php-devel-5.4.16-42.el7.x86_64.rpm
Installed:
php-devel.x86_64 0:5.4.16-42.el7
Installed as dependencies:
php-cli.x86_64 0:5.4.16-42.el7
Complete!

Confirm the dependencies are installed (i.e. the PHP build helper commands are now available):
[root@51 phpredis-2.2.4]# ll /usr/bin/php*
-rwxr-xr-x. 1 root root 4617936 Aug 5 2016 /usr/bin/php
-rwxr-xr-x. 1 root root 4596800 Aug 5 2016 /usr/bin/php-cgi
-rwxr-xr-x. 1 root root 4524 Nov 6 2016 /usr/bin/php-config
-rwxr-xr-x. 1 root root 4760 Aug 5 2016 /usr/bin/phpize

Extract the php-redis package:
[root@51 redis]# tar -xf php-redis-2.2.4.tar.gz
[root@51 redis]# cd phpredis-2.2.4/
[root@51 phpredis-2.2.4]# ll
total 504
-rw-rw-r--. 1 root root 8156 Sep 2 2013 arrays.markdown
-rw-rw-r--. 1 root root 5747 Sep 2 2013 common.h
-rw-rw-r--. 1 root root 1966 Sep 2 2013 config.h
-rwxrwxr-x. 1 root root 3344 Sep 2 2013 config.m4
-rw-rw-r--. 1 root root 462 Sep 2 2013 config.w32
-rw-rw-r--. 1 root root 3218 Sep 2 2013 COPYING
-rw-rw-r--. 1 root root 160 Sep 2 2013 CREDITS
drwxrwxr-x. 2 root root 112 Sep 2 2013 debian
-rw-rw-r--. 1 root root 309 Sep 2 2013 debian.control
-rw-rw-r--. 1 root root 49143 Sep 2 2013 library.c
-rw-rw-r--. 1 root root 5246 Sep 2 2013 library.h
-rwxrwxr-x. 1 root root 636 Sep 2 2013 mkdeb-apache2.sh
-rwxrwxr-x. 1 root root 471 Sep 2 2013 mkdeb.sh
-rw-rw-r--. 1 root root 2703 Sep 2 2013 package.xml
-rw-rw-r--. 1 root root 8357 Sep 2 2013 php_redis.h
-rw-rw-r--. 1 root root 86482 Sep 2 2013 README.markdown
-rw-rw-r--. 1 root root 36092 Sep 2 2013 redis_array.c
-rw-rw-r--. 1 root root 1513 Sep 2 2013 redis_array.h
-rw-rw-r--. 1 root root 35592 Sep 2 2013 redis_array_impl.c
-rw-rw-r--. 1 root root 1528 Sep 2 2013 redis_array_impl.h
-rw-rw-r--. 1 root root 200437 Sep 2 2013 redis.c
-rw-rw-r--. 1 root root 12241 Sep 2 2013 redis_session.c
-rw-rw-r--. 1 root root 251 Sep 2 2013 redis_session.h
drwxrwxr-x. 2 root root 45 Sep 2 2013 rpm
-rw-rw-r--. 1 root root 424 Sep 2 2013 serialize.list
drwxrwxr-x. 2 root root 101 Sep 2 2013 tests

Configure before installing.

On Linux, after PHP is installed successfully, the bin directory contains an executable script called phpize, which is used to build PHP extension modules dynamically.
The benefit of installing extensions with phpize: any extension left out when PHP was originally installed can be added later at any time, without reinstalling PHP.
Installation steps:
1. Change into the extension module directory.
In the unpacked PHP source tree there is an ext subdirectory holding install packages for roughly 70 mainstream PHP extensions.
For example, to install the imap extension, change into the imap directory:
cd /software/php-5.5.3/ext/imap
2. Run the phpize script in the imap directory:
/usr/local/php/bin/phpize
On success it prints a few lines:
Configuring for:
PHP Api Version: 20041225
Zend Module Api No: 20060613
Zend Extension Api No: 220060519
3. Configure the build (note the --with-php-config option):
./configure --with-php-config=/usr/local/php/bin/php-config --with-kerberos --with-imap-ssl
4. make
5. make install

Run the phpize command, which prepares the module for dynamic loading:
[root@51 phpredis-2.2.4]# /usr/bin/phpize
Configuring for:
PHP Api Version: 20100412
Zend Module Api No: 20100525
Zend Extension Api No: 220100525

Point configure at the php-config location:
[root@51 phpredis-2.2.4]# ./configure --with-php-config=/usr/bin/php-config

Run make:
[root@51 phpredis-2.2.4]# make
/bin/sh /root/soft/redis/phpredis-2.2.4/libtool –mode=compile cc -I. -I/root/soft/redis/phpredis-2.2.4 -DPHP_ATOM_INC -I/root/soft/redis/phpredis-2.2.4/include -I/root/soft/redis/phpredis-2.2.4/main -I/root/soft/redis/phpredis-2.2.4 -I/usr/include/php -I/usr/include/php/main -I/usr/include/php/TSRM -I/usr/include/php/Zend -I/usr/include/php/ext -I/usr/include/php/ext/date/lib -DHAVE_CONFIG_H -g -O2 -c /root/soft/redis/phpredis-2.2.4/redis.c -o redis.lo

*************************

Run make install:
[root@51 phpredis-2.2.4]# make install
Installing shared extensions: /usr/lib64/php/modules/

After installation, enable the redis module.
First check whether a redis module is already loaded:
[root@51 phpredis-2.2.4]# php -m
[PHP Modules]
bz2
calendar
Core
ctype
curl
date
ereg
exif
fileinfo
filter
ftp
gettext
gmp
hash
iconv
json
libxml
mhash
openssl
pcntl
pcre
Phar
readline
Reflection
session
shmop
SimpleXML
sockets
SPL
standard
tokenizer
xml
zip
zlib

[Zend Modules]

[root@51 phpredis-2.2.4]# php -m | grep -i redis
(nothing is printed, so the redis module is not loaded)

Edit the PHP configuration file php.ini. But where is the module?
From the install output just now:
[root@51 phpredis-2.2.4]# make install
Installing shared extensions: /usr/lib64/php/modules/
the key path is '/usr/lib64/php/modules/'.

Take a look at that directory:
[root@51 phpredis-2.2.4]# ll /usr/lib64/php/modules/
total 4116
-rwxr-xr-x. 1 root root 74648 Aug 5 2016 curl.so
-rwxr-xr-x. 1 root root 2713352 Aug 5 2016 fileinfo.so
-rwxr-xr-x. 1 root root 44680 Aug 5 2016 json.so
-rwxr-xr-x. 1 root root 272000 Aug 5 2016 phar.so
-rwxr-xr-x. 1 root root 1038392 Jun 7 16:50 redis.so
-rwxr-xr-x. 1 root root 58376 Aug 5 2016 zip.so

The redis.so module is installed but not enabled, so configure it in php.ini.
Make sure the directory path is written correctly:
854 extension_dir="/usr/lib64/php/modules/"
855 extension="redis.so"
The configuration file itself gives hints here:
726 ; Directory in which the loadable extensions (modules) reside.
727 ; http://php.net/extension-dir
728 ; extension_dir = "./"
729 ; On windows:
730 ; extension_dir = "ext"
********************************************
842 ; If you wish to have an extension loaded automatically, use the following
843 ; syntax:
844 ;
845 ; extension=modulename.extension
846 ;
847 ; For example, on Windows:
848 ;
849 ; extension=msql.dll
850 ;
851 ; ... or under UNIX:
852 ;
853 ; extension=msql.so
854 extension_dir="./"
855 extension="redis.so"
856 ;
857 ; ... or with a path:
858 ;
859 ; extension=/path/to/extension/msql.so

Check via the page http://192.168.4.51/test.php
Search for redis:
redis
Redis Support enabled
Redis Version 2.2.4
Or confirm locally:
[root@51 phpredis-2.2.4]# php -m | grep -i redis
redis

Write a page to test the connection
(redis is already installed on host 56):
[root@51 phpredis-2.2.4]# cat /usr/local/nginx/html/redis.php
<?php
$redis = new redis();
$redis->connect('192.168.4.56',6379);
$redis->set('redistest','666666');
echo $redis->get('redistest');
?>

Access http://192.168.4.51/redis.php; the test succeeds:
[root@room9pc01 ~]# curl http://192.168.4.51/redis.php
666666
Verify again by logging in to redis remotely; the value is now there:
[root@51 phpredis-2.2.4]# redis-cli -h 192.168.4.56
192.168.4.56:6379> keys *
1) "redistest"
192.168.4.56:6379> get redistest
"666666"
192.168.4.56:6379>


Redis Basic Operations

redis operations
string
127.0.0.1:6351> set strmul abcdefj
OK
127.0.0.1:6351> get strmul
"abcdefj"
127.0.0.1:6351> getrange strmul 1 3
"bcd"
127.0.0.1:6351> getrange strmul 0 2
"abc"

127.0.0.1:6351> set num 123456789
OK
127.0.0.1:6351> getrange num 5 5
"6"
127.0.0.1:6351> getrange num 5 6
"67"

Note that indexing is zero-based: the first character is position 0.
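GETRANGE's inclusive, zero-based indexing can be modelled in Python (a sketch of the semantics only, not a call to a Redis server; Redis also accepts negative offsets counted back from the end of the string):

```python
def getrange(value: str, start: int, end: int) -> str:
    """Model of Redis GETRANGE: zero-based, both ends inclusive,
    negative offsets count back from the end of the string."""
    n = len(value)
    if start < 0:
        start = max(n + start, 0)
    if end < 0:
        end = n + end
    end = min(end, n - 1)          # clamp to the last character
    if n == 0 or start > end or end < 0:
        return ""
    return value[start:end + 1]    # inclusive end, unlike a plain slice
```

For instance, getrange("abcdefj", 1, 3) returns "bcd", matching the transcript above.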

hash
1. Set a single hash field:
127.0.0.1:6351> hset site google 'www.google.com'
127.0.0.1:6351> hset site baidu 'www.baidu.com'

Set multiple fields at once:
127.0.0.1:6351> hmset site sina www.sina.com taobao www.taobao.com

Read a single field:
127.0.0.1:6351> hget site baidu
"www.baidu.com"

Read multiple fields:
127.0.0.1:6351> hmset site sina www.sina.com taobao www.taobao.com
OK
127.0.0.1:6351> hmget site sina google taobao baidu
1) "www.sina.com"
2) "www.google.com"
3) "www.taobao.com"
4) "www.baidu.com"

List everything, field names included:
127.0.0.1:6351> hgetall site
1) "google"
2) "www.google.com"
3) "baidu"
4) "www.baidu.com"
5) "sina"
6) "www.sina.com"
7) "taobao"
8) "www.taobao.com"

List all values:
127.0.0.1:6351> hvals site
1) "www.google.com"
2) "www.baidu.com"
3) "www.sina.com"
4) "www.taobao.com"

List the field names:
127.0.0.1:6351> hkeys site
1) "google"
2) "baidu"
3) "sina"
4) "taobao"
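The hash commands above map naturally onto a Python dict (a model of the semantics only, assuming an insertion-ordered dict, i.e. Python 3.7+):

```python
site = {}

# hset: set single fields
site["google"] = "www.google.com"
site["baidu"] = "www.baidu.com"

# hmset: set multiple fields at once
site.update({"sina": "www.sina.com", "taobao": "www.taobao.com"})

# hget / hmget: read one or several fields, in the order requested
one = site["baidu"]
many = [site[k] for k in ("sina", "google", "taobao", "baidu")]

# hgetall flattens to field, value, field, value, ...
flat = [item for pair in site.items() for item in pair]

# hvals / hkeys
vals = list(site.values())
fields = list(site.keys())
```

Note how hgetall interleaves field names and values in one flat list, exactly as the transcript shows.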

LIST
Note: first in, last out.
127.0.0.1:6351> LPUSH list a b c
(integer) 3
127.0.0.1:6351> lrange list 0 2
1) "c"
2) "b"
3) "a"
127.0.0.1:6351> lrange list 0 0
1) "c"
127.0.0.1:6351> lrange list 1 1
1) "b"

Note: indexes can also be negative; positive indexes count from the head of the list, negative indexes from the tail.
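The head-insert behaviour of LPUSH and the inclusive indexing of LRANGE can be sketched in Python (a model of the semantics, not a Redis client):

```python
def lpush(lst, *items):
    """LPUSH inserts each item at the head, so the last item pushed
    comes out first when reading from index 0."""
    for item in items:
        lst.insert(0, item)
    return len(lst)

def lrange(lst, start, end):
    """LRANGE with inclusive ends; negative indexes count from the tail."""
    n = len(lst)
    if start < 0:
        start = max(n + start, 0)
    if end < 0:
        end = n + end
    end = min(end, n - 1)
    return lst[start:end + 1] if 0 <= start <= end else []

mylist = []
lpush(mylist, "a", "b", "c")   # list is now ["c", "b", "a"]
```

This reproduces the transcript: after LPUSH a b c, LRANGE 0 2 returns c, b, a.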


Redis Cluster Configuration

redis cluster
Set up the environment on hosts 61-66: install redis with ports 6351-6356; in each config file remove 127.0.0.1 and bind 192.168.4.61-66 instead.
When done, confirm the service is running and listening on two ports; the one offset by 10000 is the cluster bus port.
IP plan
• redis server IP addresses and ports
- redisA 192.168.4.61 6351
- redisB 192.168.4.62 6352
- redisC 192.168.4.63 6353
- redisD 192.168.4.64 6354
- redisE 192.168.4.65 6355
- redisF 192.168.4.66 6356

[root@61 ~]# /etc/init.d/redis_6351 restart
Stopping …
Redis stopped
Starting Redis server…
[root@61 ~]# netstat -antup | grep redis
tcp 0 0 192.168.4.61:6351 0.0.0.0:* LISTEN 2807/redis-server 1
tcp 0 0 192.168.4.61:16351 0.0.0.0:* LISTEN 2807/redis-server 1

Log in to every node once to confirm remote access works:
[root@66 ~]# redis-cli -h 192.168.4.61 -p 6351
192.168.4.61:6351> ping
PONG
192.168.4.61:6351> exit
not connected> exit
[root@66 ~]# redis-cli -h 192.168.4.62 -p 6352
192.168.4.62:6352> ping
PONG
192.168.4.62:6352> exit
[root@66 ~]# redis-cli -h 192.168.4.63 -p 6353
192.168.4.63:6353> ping
PONG
192.168.4.63:6353> exit
[root@66 ~]# redis-cli -h 192.168.4.64 -p 6354
192.168.4.64:6354> ping
PONG
192.168.4.64:6354> exit
[root@66 ~]# redis-cli -h 192.168.4.65 -p 6355
192.168.4.65:6355> ping
PONG
192.168.4.65:6355> exit
[root@66 ~]# redis-cli -h 192.168.4.66 -p 6356
192.168.4.66:6356> ping
PONG
192.168.4.66:6356> exit

Install the cluster environment
Install the Ruby runtime, which is needed to run the cluster script shipped with redis:
[root@61 redis-cluster]# yum install -y ruby rubygems
Installed:
ruby.x86_64 0:2.0.0.648-30.el7 rubygems.noarch 0:2.0.14.1-30.el7
Installed as dependencies:
libyaml.x86_64 0:0.1.4-11.el7_0 ruby-irb.noarch 0:2.0.0.648-30.el7
ruby-libs.x86_64 0:2.0.0.648-30.el7 rubygem-bigdecimal.x86_64 0:1.2.0-30.el7
rubygem-io-console.x86_64 0:0.4.2-30.el7 rubygem-json.x86_64 0:1.7.7-30.el7
rubygem-psych.x86_64 0:2.0.0-30.el7 rubygem-rdoc.noarch 0:4.0.0-30.el7
Complete!

Install the gem package; the gem program itself must be present first (it is installed by default):
[root@61 redis-cluster]# rpm -qf /usr/bin/gem
rubygems-2.0.14.1-30.el7.noarch

[root@61 redis-cluster]# gem install redis-3.2.1.gem
Successfully installed redis-3.2.1
Parsing documentation for redis-3.2.1
Installing ri documentation for redis-3.2.1
1 gem installed

Install ruby-devel:
[root@61 redis-cluster]# yum install -y ruby-devel-2.0.0.648-30.el7.x86_64.rpm
Installed:
ruby-devel.x86_64 0:2.0.0.648-30.el7
Complete!

All of the above sets up the Ruby environment required by the redis-cluster script, which lives in the src folder of the source package:
[root@61 src]# pwd
/root/soft/redis/redis-4.0.8/src
[root@61 src]# ls
adlist.c cluster.o geo.h memtest.o rdb.c scripting.c syncio.o
adlist.h config.c geohash.c mkreleasehdr.sh rdb.h scripting.o testhelp.h
adlist.o config.h geohash.h module.c rdb.o sdsalloc.h t_hash.c
ae.c config.o geohash_helper.c module.o redisassert.h sds.c t_hash.o
ae_epoll.c crc16.c geohash_helper.h modules redis-benchmark sds.h t_list.c
ae_evport.c crc16.o geohash_helper.o multi.c redis-benchmark.c sds.o t_list.o
ae.h crc64.c geohash.o multi.o redis-benchmark.o sentinel.c t_set.c
ae_kqueue.c crc64.h geo.o networking.c redis-check-aof sentinel.o t_set.o
ae.o crc64.o help.h networking.o redis-check-aof.c server.c t_string.c
ae_select.c db.c hyperloglog.c notify.c redis-check-aof.o server.h t_string.o
anet.c db.o hyperloglog.o notify.o redis-check-rdb server.o t_zset.c
anet.h debug.c intset.c object.c redis-check-rdb.c setproctitle.c t_zset.o
anet.o debugmacro.h intset.h object.o redis-check-rdb.o setproctitle.o util.c
aof.c debug.o intset.o pqsort.c redis-cli sha1.c util.h
aof.o defrag.c latency.c pqsort.h redis-cli.c sha1.h util.o
asciilogo.h defrag.o latency.h pqsort.o redis-cli.o sha1.o valgrind.sup
atomicvar.h dict.c latency.o pubsub.c redismodule.h siphash.c version.h
bio.c dict.h lazyfree.c pubsub.o redis-sentinel siphash.o ziplist.c
bio.h dict.o lazyfree.o quicklist.c redis-server slowlog.c ziplist.h
bio.o endianconv.c lzf_c.c quicklist.h redis-trib.rb slowlog.h ziplist.o
bitops.c endianconv.h lzf_c.o quicklist.o release.c slowlog.o zipmap.c
bitops.o endianconv.o lzf_d.c rand.c release.h solarisfixes.h zipmap.h
blocked.c evict.c lzf_d.o rand.h release.o sort.c zipmap.o
blocked.o evict.o lzf.h rand.o replication.c sort.o zmalloc.c
childinfo.c expire.c lzfP.h rax.c replication.o sparkline.c zmalloc.h
childinfo.o expire.o Makefile rax.h rio.c sparkline.h zmalloc.o
cluster.c fmacros.h Makefile.dep rax_malloc.h rio.h sparkline.o
cluster.h geo.c memtest.c rax.o rio.o syncio.c

Run redis-trib.rb create to build the cluster, listing the node IP addresses and ports; --replicas 1 gives each master one replica:
[root@61 src]# ./redis-trib.rb create --replicas 1 192.168.4.61:6351 192.168.4.62:6352 192.168.4.63:6353 192.168.4.64:6354 192.168.4.65:6355 192.168.4.66:6356
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes…
Using 3 masters:
192.168.4.61:6351
192.168.4.62:6352
192.168.4.63:6353
Adding replica 192.168.4.65:6355 to 192.168.4.61:6351
Adding replica 192.168.4.66:6356 to 192.168.4.62:6352
Adding replica 192.168.4.64:6354 to 192.168.4.63:6353
M: fb2dc0d120eea527ccf2aedaa61503d58c5c7f80 192.168.4.61:6351
slots:0-5460 (5461 slots) master
M: 6fa6d44af120d0a4be5f0ba2d083b800b59be41a 192.168.4.62:6352
slots:5461-10922 (5462 slots) master
M: 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8 192.168.4.63:6353
slots:10923-16383 (5461 slots) master
S: 4d672007b9e27119049efe94c798ca1b0ab7c7b3 192.168.4.64:6354
replicates 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8
S: 56c7c875ec0ace4902f70c0dabeb401b0e60eeee 192.168.4.65:6355
replicates fb2dc0d120eea527ccf2aedaa61503d58c5c7f80
S: fbe8bb6550867ef98feb2e06238a96e1df85f1d7 192.168.4.66:6356
replicates 6fa6d44af120d0a4be5f0ba2d083b800b59be41a
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join…
>>> Performing Cluster Check (using node 192.168.4.61:6351)
M: fb2dc0d120eea527ccf2aedaa61503d58c5c7f80 192.168.4.61:6351
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: 56c7c875ec0ace4902f70c0dabeb401b0e60eeee 192.168.4.65:6355
slots: (0 slots) slave
replicates fb2dc0d120eea527ccf2aedaa61503d58c5c7f80
S: 4d672007b9e27119049efe94c798ca1b0ab7c7b3 192.168.4.64:6354
slots: (0 slots) slave
replicates 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8
M: 6fa6d44af120d0a4be5f0ba2d083b800b59be41a 192.168.4.62:6352
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: fbe8bb6550867ef98feb2e06238a96e1df85f1d7 192.168.4.66:6356
slots: (0 slots) slave
replicates 6fa6d44af120d0a4be5f0ba2d083b800b59be41a
M: 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8 192.168.4.63:6353
slots:10923-16383 (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots…
>>> Check slots coverage…
[OK] All 16384 slots covered.

Log in and check the status:

[root@61 src]# redis-cli -c -h 192.168.4.61 -p 6351
192.168.4.61:6351> ping
PONG
192.168.4.61:6351> cluster nodes
56c7c875ec0ace4902f70c0dabeb401b0e60eeee 192.168.4.65:6355@16355 slave fb2dc0d120eea527ccf2aedaa61503d58c5c7f80 0 1528428354808 5 connected
4d672007b9e27119049efe94c798ca1b0ab7c7b3 192.168.4.64:6354@16354 slave 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8 0 1528428354808 4 connected
6fa6d44af120d0a4be5f0ba2d083b800b59be41a 192.168.4.62:6352@16352 master - 0 1528428354306 2 connected 5461-10922
fb2dc0d120eea527ccf2aedaa61503d58c5c7f80 192.168.4.61:6351@16351 myself,master - 0 1528428346000 1 connected 0-5460
fbe8bb6550867ef98feb2e06238a96e1df85f1d7 192.168.4.66:6356@16356 slave 6fa6d44af120d0a4be5f0ba2d083b800b59be41a 0 1528428352800 6 connected
7b27e1369fb56a16f4b090d011ba291cf6a1dcf8 192.168.4.63:6353@16353 master - 0 1528428353804 3 connected 10923-16383

192.168.4.61:6351> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:6569
cluster_stats_messages_pong_sent:5928
cluster_stats_messages_sent:12497
cluster_stats_messages_ping_received:5923
cluster_stats_messages_pong_received:6569
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:12497
192.168.4.61:6351> exit

Check the generated cluster configuration file:
[root@61 src]# ls /var/lib/redis/6351/
dump.rdb nodes-6379.conf

[root@61 src]# cat /var/lib/redis/6351/nodes-6379.conf
56c7c875ec0ace4902f70c0dabeb401b0e60eeee 192.168.4.65:6355@16355 slave fb2dc0d120eea527ccf2aedaa61503d58c5c7f80 0 1528425332116 5 connected
4d672007b9e27119049efe94c798ca1b0ab7c7b3 192.168.4.64:6354@16354 slave 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8 0 1528425330101 4 connected
6fa6d44af120d0a4be5f0ba2d083b800b59be41a 192.168.4.62:6352@16352 master - 0 1528425331814 2 connected 5461-10922
fb2dc0d120eea527ccf2aedaa61503d58c5c7f80 192.168.4.61:6351@16351 myself,master - 0 1528425326000 1 connected 0-5460
fbe8bb6550867ef98feb2e06238a96e1df85f1d7 192.168.4.66:6356@16356 slave 6fa6d44af120d0a4be5f0ba2d083b800b59be41a 0 1528425330614 6 connected
7b27e1369fb56a16f4b090d011ba291cf6a1dcf8 192.168.4.63:6353@16353 master - 0 1528425331612 3 connected 10923-16383
vars currentEpoch 6 lastVoteEpoch 0
Interpreting the above: the cluster has 3 master-slave pairs, with exactly one slave per master.
fbe8bb6550867ef98feb2e06238a96e1df85f1d7 192.168.4.66:6356@16356 slave 6fa6d44af120d0a4be5f0ba2d083b800b59be41a 0 1528425330614 6 connected
means this node is a slave of the master whose ID ends in ...41a:
6fa6d44af120d0a4be5f0ba2d083b800b59be41a 192.168.4.62:6352@16352 master - 0 1528425331814 2 connected 5461-10922

And so on for the rest.
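Each line of the nodes file (and of `cluster nodes` output) has a fixed whitespace-separated layout: node id, address@bus-port, flags, master id (or - for a master), ping-sent, pong-received, config epoch, link state, then any slot ranges. A small parsing sketch in Python (the dict keys are my own labels, not Redis terminology):

```python
def parse_node_line(line: str) -> dict:
    """Split one 'cluster nodes' line into its named fields."""
    fields = line.split()
    addr, _, bus_port = fields[1].partition("@")
    return {
        "id": fields[0],
        "addr": addr,                   # ip:client-port
        "bus_port": bus_port,           # cluster bus port (client port + 10000)
        "flags": fields[2].split(","),  # e.g. ["myself", "master"] or ["slave"]
        "master_id": None if fields[3] == "-" else fields[3],
        "link_state": fields[7],        # "connected" / "disconnected"
        "slots": fields[8:],            # slot ranges, masters only
    }

# The slave line quoted from the nodes file above:
line = ("fbe8bb6550867ef98feb2e06238a96e1df85f1d7 192.168.4.66:6356@16356 "
        "slave 6fa6d44af120d0a4be5f0ba2d083b800b59be41a 0 1528425330614 6 connected")
node = parse_node_line(line)
```

A slave line carries its master's ID in the fourth field and owns no slots; a master line has "-" there and lists its slot ranges at the end.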

Slot calculation
The key is run through the CRC16 algorithm, and the result modulo 16384 determines the slot.
Here result % 16384 gives 497, which falls within this master's slot range, so the key is stored there (192.168.4.61):
fb2dc0d120eea527ccf2aedaa61503d58c5c7f80 192.168.4.61:6351@16351 myself,master - 0 1528425326000 1 connected 0-5460
Its slave then replicates the data automatically (192.168.4.65):
56c7c875ec0ace4902f70c0dabeb401b0e60eeee 192.168.4.65:6355@16355 slave fb2dc0d120eea527ccf2aedaa61503d58c5c7f80 0 1528425332116 5 connected

In other words, the corresponding slave copies the master's data automatically.
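The key-to-slot mapping can be sketched in Python (a standalone model of the calculation, not a call into Redis; Redis Cluster uses the CRC16-CCITT/XMODEM variant):

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM): polynomial 0x1021, initial value 0,
    no reflection, no final XOR. The variant Redis Cluster hashes keys with."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Map a key to one of the 16384 cluster hash slots."""
    return crc16(key.encode()) % 16384
```

For example, hash_slot("name") returns 5798, matching the "Redirected to slot [5798]" seen when storing name in the cluster test below. (Keys containing {...} hash tags are hashed only on the tag; that case is omitted here.)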

Test the cluster
Insert a value and see what happens:
192.168.4.61:6351> set name jerry
-> Redirected to slot [5798] located at 192.168.4.62:6352
OK
192.168.4.62:6352> get name
"jerry"
192.168.4.62:6352> exit

Log in to each node to see which one holds the key. From the redirect above, the value was stored on 62, and since 66 is 62's slave, 66 also holds a copy.
This is sharded storage, so values are distributed across the masters: only 1 of the 3 masters holds the key name; the others do not.
[root@61 src]# redis-cli -c -h 192.168.4.61 -p 6351
192.168.4.61:6351> keys *
(empty list or set)
192.168.4.61:6351> exit
[root@61 src]# redis-cli -c -h 192.168.4.62 -p 6352
192.168.4.62:6352> keys *
1) "name"
192.168.4.62:6352> exit
[root@61 src]# redis-cli -c -h 192.168.4.63 -p 6353
192.168.4.63:6353> keys *
(empty list or set)
192.168.4.63:6353> exit
[root@61 src]# redis-cli -c -h 192.168.4.64 -p 6354
192.168.4.64:6354> keys *
(empty list or set)
192.168.4.64:6354> exit
[root@61 src]# redis-cli -c -h 192.168.4.65 -p 6355
192.168.4.65:6355> keys *
(empty list or set)
192.168.4.65:6355> exit
[root@61 src]# redis-cli -c -h 192.168.4.66 -p 6356
192.168.4.66:6356> keys *
1) "name"
192.168.4.66:6356> exit

Note: the slot algorithm decides where each value is stored, so it is normal for the client to be redirected to a different IP on access; lookups use the same algorithm to find the right node.

Managing the cluster
Prepare 2 new machines, 67 and 68, running redis to join the cluster.
Configure both redis servers the same way as the cluster nodes above
(specific configuration omitted):
[root@localhost redis-4.0.8]# pwd
/root/soft/redis/redis-4.0.8
[root@localhost redis-4.0.8]# make
[root@localhost redis-4.0.8]# make install
[root@localhost redis-4.0.8]# ./utils/install_server.sh
[root@localhost redis-4.0.8]# vim /etc/redis/6358.conf
[root@localhost redis-4.0.8]# /etc/init.d/redis_6358 restart
[root@localhost redis-4.0.8]# netstat -antup | grep redis
tcp 0 0 192.168.4.68:6358 0.0.0.0:* LISTEN 4941/redis-server 1
tcp 0 0 192.168.4.68:16358 0.0.0.0:* LISTEN 4941/redis-server 1
[root@localhost redis-4.0.8]#

The status can be viewed even before the node joins a cluster:
192.168.4.68:6358> cluster nodes
f6d60b431050daac64c0a5c020dcf49a399d9cc6 :6358@16358 myself,master - 0 0 0 connected
192.168.4.68:6358> cluster info
cluster_state:fail
cluster_slots_assigned:0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:1
cluster_size:0
cluster_current_epoch:0
cluster_my_epoch:0
cluster_stats_messages_sent:0
cluster_stats_messages_received:0

Plan the hash slots: there are 16384 slots in total; the original 3 masters plus 1 new one makes 4, so each master gets exactly 4096 slots.
The command for this step is reshard, which redistributes slots across the node hosts.

add-node: add a new node
check: run a check against a node host
reshard: redistribute a node host's slots
add-node --slave: add a node as a slave; without this option the node joins as a master
del-node: remove a node host

Slave nodes can be added and removed freely, but master nodes hold hash slots, so adding or removing a master always requires reconfiguring the slots.
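The arithmetic behind this slot plan can be sketched as a simple even split (redis-trib's own assignment order and ranges may differ; this only models the target sizes):

```python
def split_slots(n_masters: int, total: int = 16384) -> list:
    """Evenly split the hash slots across n masters;
    the first (total % n) masters get one extra slot."""
    base, extra = divmod(total, n_masters)
    return [base + (1 if i < extra else 0) for i in range(n_masters)]
```

With 3 masters the split is 5462/5461/5461 (matching the 5461-5462 sizes seen at cluster creation); with 4 masters it is exactly 4096 each.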

Work from a host where the cluster is deployed.
Enter the source directory:
[root@61 src]# pwd
/root/soft/redis/redis-4.0.8/src
Add the first host; with no option it joins as a master by default:
[root@61 src]# ./redis-trib.rb add-node 192.168.4.67:6357 192.168.4.61:6351
>>> Adding node 192.168.4.67:6357 to cluster 192.168.4.61:6351
>>> Performing Cluster Check (using node 192.168.4.61:6351)
M: fb2dc0d120eea527ccf2aedaa61503d58c5c7f80 192.168.4.61:6351
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: 56c7c875ec0ace4902f70c0dabeb401b0e60eeee 192.168.4.65:6355
slots: (0 slots) slave
replicates fb2dc0d120eea527ccf2aedaa61503d58c5c7f80
S: 4d672007b9e27119049efe94c798ca1b0ab7c7b3 192.168.4.64:6354
slots: (0 slots) slave
replicates 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8
M: 6fa6d44af120d0a4be5f0ba2d083b800b59be41a 192.168.4.62:6352
slots:5461-10922 (5462 slots) master
1 additional replica(s)
S: fbe8bb6550867ef98feb2e06238a96e1df85f1d7 192.168.4.66:6356
slots: (0 slots) slave
replicates 6fa6d44af120d0a4be5f0ba2d083b800b59be41a
M: 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8 192.168.4.63:6353
slots:10923-16383 (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots…
>>> Check slots coverage…
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.4.67:6357 to make it join the cluster.
[OK] New node added correctly.

Check the status of the newly added host: there are now 4 masters (M), but the new one has not been assigned any slots yet:
[root@61 src]# ./redis-trib.rb check 192.168.4.61:6351
>>> Performing Cluster Check (using node 192.168.4.61:6351)
M: fb2dc0d120eea527ccf2aedaa61503d58c5c7f80 192.168.4.61:6351
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: 56c7c875ec0ace4902f70c0dabeb401b0e60eeee 192.168.4.65:6355
slots: (0 slots) slave
replicates fb2dc0d120eea527ccf2aedaa61503d58c5c7f80
S: 4d672007b9e27119049efe94c798ca1b0ab7c7b3 192.168.4.64:6354
slots: (0 slots) slave
replicates 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8
M: 6fa6d44af120d0a4be5f0ba2d083b800b59be41a 192.168.4.62:6352
slots:5461-10922 (5462 slots) master
1 additional replica(s)
M: cd07e05296288cb1e5f8e901015783100624b91a 192.168.4.67:6357
slots: (0 slots) master
0 additional replica(s)
S: fbe8bb6550867ef98feb2e06238a96e1df85f1d7 192.168.4.66:6356
slots: (0 slots) slave
replicates 6fa6d44af120d0a4be5f0ba2d083b800b59be41a
M: 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8 192.168.4.63:6353
slots:10923-16383 (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots…
>>> Check slots coverage…
[OK] All 16384 slots covered.

Reshard to assign hash slots to the new master:

[root@61 src]# ./redis-trib.rb reshard 192.168.4.61:6351
>>> Performing Cluster Check (using node 192.168.4.61:6351)
M: fb2dc0d120eea527ccf2aedaa61503d58c5c7f80 192.168.4.61:6351
slots:0-7509,10923-12969 (9557 slots) master
1 additional replica(s)
S: 56c7c875ec0ace4902f70c0dabeb401b0e60eeee 192.168.4.65:6355
slots: (0 slots) slave
replicates fb2dc0d120eea527ccf2aedaa61503d58c5c7f80
S: 4d672007b9e27119049efe94c798ca1b0ab7c7b3 192.168.4.64:6354
slots: (0 slots) slave
replicates 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8
M: 6fa6d44af120d0a4be5f0ba2d083b800b59be41a 192.168.4.62:6352
slots:7510-10922 (3413 slots) master
1 additional replica(s)
M: cd07e05296288cb1e5f8e901015783100624b91a 192.168.4.67:6357
slots: (0 slots) master
0 additional replica(s)
S: fbe8bb6550867ef98feb2e06238a96e1df85f1d7 192.168.4.66:6356
slots: (0 slots) slave
replicates 6fa6d44af120d0a4be5f0ba2d083b800b59be41a
M: 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8 192.168.4.63:6353
slots:12970-16383 (3414 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots…
>>> Check slots coverage…
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096
// asks how many hash slots to move; move 4096 (as computed above: 16384 slots across 4 masters is exactly 4096 each)
What is the receiving node ID? cd07e05296288cb1e5f8e901015783100624b91a
Please enter all the source node IDs.
Type ‘all’ to use all the nodes as source nodes for the hash slots.
Type ‘done’ once you entered all the source nodes IDs.
Source node #1:all
// asks which nodes to take the slots from; all takes them from every master
Ready to move 4096 slots.
Source nodes:
M: fb2dc0d120eea527ccf2aedaa61503d58c5c7f80 192.168.4.61:6351
slots:0-7509,10923-12969 (9557 slots) master
1 additional replica(s)
M: 6fa6d44af120d0a4be5f0ba2d083b800b59be41a 192.168.4.62:6352
slots:7510-10922 (3413 slots) master
1 additional replica(s)
M: 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8 192.168.4.63:6353
slots:12970-16383 (3414 slots) master
1 additional replica(s)
Destination node:
M: cd07e05296288cb1e5f8e901015783100624b91a 192.168.4.67:6357
slots: (0 slots) master
0 additional replica(s)
Resharding plan:
Moving slot 0 from fb2dc0d120eea527ccf2aedaa61503d58c5c7f80
*********************************
Moving slot 8361 from 6fa6d44af120d0a4be5f0ba2d083b800b59be41a
Moving slot 8362 from 6fa6d44af120d0a4be5f0ba2d083b800b59be41a
Do you want to proceed with the proposed reshard plan (yes/no)? yes
// asks whether the proposed move plan is acceptable; answer yes to proceed
Moving slot 0 from 192.168.4.61:6351 to 192.168.4.67:6357:
Moving slot 1 from 192.168.4.61:6351 to 192.168.4.67:6357:
****************************************
When it finishes, check again:

[root@61 src]# ./redis-trib.rb check 192.168.4.61:6351
>>> Performing Cluster Check (using node 192.168.4.61:6351)
M: fb2dc0d120eea527ccf2aedaa61503d58c5c7f80 192.168.4.61:6351
slots:4514-7509,10923-12969 (5043 slots) master
1 additional replica(s)
S: 56c7c875ec0ace4902f70c0dabeb401b0e60eeee 192.168.4.65:6355
slots: (0 slots) slave
replicates fb2dc0d120eea527ccf2aedaa61503d58c5c7f80
S: 4d672007b9e27119049efe94c798ca1b0ab7c7b3 192.168.4.64:6354
slots: (0 slots) slave
replicates 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8
M: 6fa6d44af120d0a4be5f0ba2d083b800b59be41a 192.168.4.62:6352
slots:0-1212,2390-4513,8363-10922,13823-14580 (6655 slots) master
1 additional replica(s)
M: cd07e05296288cb1e5f8e901015783100624b91a 192.168.4.67:6357
slots:1213-2389,7510-8362,12970-13822 (2883 slots) master
0 additional replica(s)
S: fbe8bb6550867ef98feb2e06238a96e1df85f1d7 192.168.4.66:6356
slots: (0 slots) slave
replicates 6fa6d44af120d0a4be5f0ba2d083b800b59be41a
M: 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8 192.168.4.63:6353
slots:14581-16383 (1803 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots…
>>> Check slots coverage…
[OK] All 16384 slots covered.

*************************************************************
The slot layout has become messy, so redistribute it again.
(This is really just: step 1, plan how many hash slots each node should hold;
step 2, inspect each node's slot counts, i.e. see which nodes are short and work out how many slots to move over;
step 3, move the hash slots.)
[root@61 src]# ./redis-trib.rb check 192.168.4.61:6351
>>> Performing Cluster Check (using node 192.168.4.61:6351)
M: fb2dc0d120eea527ccf2aedaa61503d58c5c7f80 192.168.4.61:6351
slots:5461-7509,10923-12969 (4096 slots) master
1 additional replica(s)
S: 56c7c875ec0ace4902f70c0dabeb401b0e60eeee 192.168.4.65:6355
slots: (0 slots) slave
replicates fb2dc0d120eea527ccf2aedaa61503d58c5c7f80
S: 4d672007b9e27119049efe94c798ca1b0ab7c7b3 192.168.4.64:6354
slots: (0 slots) slave
replicates 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8
M: 6fa6d44af120d0a4be5f0ba2d083b800b59be41a 192.168.4.62:6352
slots:3736-4513,8363-10922,13823-14580 (4096 slots) master
1 additional replica(s)
M: cd07e05296288cb1e5f8e901015783100624b91a 192.168.4.67:6357
slots:0-597,1213-2389,4514-4966,7510-8362,12970-13822,14581-14741 (4095 slots) master
0 additional replica(s)
S: fbe8bb6550867ef98feb2e06238a96e1df85f1d7 192.168.4.66:6356
slots: (0 slots) slave
replicates 6fa6d44af120d0a4be5f0ba2d083b800b59be41a
M: 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8 192.168.4.63:6353
slots:598-1212,2390-3735,4967-5460,14742-16383 (4097 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots…
>>> Check slots coverage…
[OK] All 16384 slots covered.
// After moving slots back and forth repeatedly, the layout is close enough: the last two nodes differ by only one slot, so we leave it at that.

Add a slave node
[root@61 src]# ./redis-trib.rb add-node --slave 192.168.4.68:6358 192.168.4.61:6351
>>> Adding node 192.168.4.68:6358 to cluster 192.168.4.61:6351
>>> Performing Cluster Check (using node 192.168.4.61:6351)
M: fb2dc0d120eea527ccf2aedaa61503d58c5c7f80 192.168.4.61:6351
slots:5461-7509,10923-12969 (4096 slots) master
1 additional replica(s)
S: 56c7c875ec0ace4902f70c0dabeb401b0e60eeee 192.168.4.65:6355
slots: (0 slots) slave
replicates fb2dc0d120eea527ccf2aedaa61503d58c5c7f80
S: 4d672007b9e27119049efe94c798ca1b0ab7c7b3 192.168.4.64:6354
slots: (0 slots) slave
replicates 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8
M: 6fa6d44af120d0a4be5f0ba2d083b800b59be41a 192.168.4.62:6352
slots:3736-4513,8363-10922,13823-14580 (4096 slots) master
1 additional replica(s)
M: cd07e05296288cb1e5f8e901015783100624b91a 192.168.4.67:6357
slots:0-597,1213-2389,4514-4966,7510-8362,12970-13822,14581-14741 (4095 slots) master
0 additional replica(s)
S: fbe8bb6550867ef98feb2e06238a96e1df85f1d7 192.168.4.66:6356
slots: (0 slots) slave
replicates 6fa6d44af120d0a4be5f0ba2d083b800b59be41a
M: 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8 192.168.4.63:6353
slots:598-1212,2390-3735,4967-5460,14742-16383 (4097 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots…
>>> Check slots coverage…
[OK] All 16384 slots covered.
Automatically selected master 192.168.4.67:6357
>>> Send CLUSTER MEET to node 192.168.4.68:6358 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 192.168.4.67:6357.
[OK] New node added correctly.
// The output shows the node was automatically added as a slave of 67

Finally, check the result and verify that 68 really became a slave of 67:
[root@61 src]# ./redis-trib.rb check 192.168.4.61:6351
>>> Performing Cluster Check (using node 192.168.4.61:6351)
M: fb2dc0d120eea527ccf2aedaa61503d58c5c7f80 192.168.4.61:6351
slots:5461-7509,10923-12969 (4096 slots) master
1 additional replica(s)
S: 56c7c875ec0ace4902f70c0dabeb401b0e60eeee 192.168.4.65:6355
slots: (0 slots) slave
replicates fb2dc0d120eea527ccf2aedaa61503d58c5c7f80
S: 4d672007b9e27119049efe94c798ca1b0ab7c7b3 192.168.4.64:6354
slots: (0 slots) slave
replicates 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8
M: 6fa6d44af120d0a4be5f0ba2d083b800b59be41a 192.168.4.62:6352
slots:3736-4513,8363-10922,13823-14580 (4096 slots) master
1 additional replica(s)
M: cd07e05296288cb1e5f8e901015783100624b91a 192.168.4.67:6357
slots:0-597,1213-2389,4514-4966,7510-8362,12970-13822,14581-14741 (4095 slots) master
1 additional replica(s)
S: fbe8bb6550867ef98feb2e06238a96e1df85f1d7 192.168.4.66:6356
slots: (0 slots) slave
replicates 6fa6d44af120d0a4be5f0ba2d083b800b59be41a
M: 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8 192.168.4.63:6353
slots:598-1212,2390-3735,4967-5460,14742-16383 (4097 slots) master
1 additional replica(s)
S: f6d60b431050daac64c0a5c020dcf49a399d9cc6 192.168.4.68:6358
slots: (0 slots) slave
replicates cd07e05296288cb1e5f8e901015783100624b91a
[OK] All nodes agree about slots configuration.
>>> Check for open slots…
>>> Check slots coverage…
[OK] All 16384 slots covered.

Removing nodes: masters and slaves
Removing a slave node is simple: it holds no hash slots to release, so del-node works directly.

First log in to 68 and find its node ID:
[root@61 src]# redis-cli -c -h 192.168.4.68 -p 6358
192.168.4.68:6358> cluster node
(error) ERR Wrong CLUSTER subcommand or number of arguments
192.168.4.68:6358> cluster nodes
7b27e1369fb56a16f4b090d011ba291cf6a1dcf8 192.168.4.63:6353@16353 master - 0 1528446557000 11 connected 598-1212 2390-3735 4967-5460 14742-16383
fbe8bb6550867ef98feb2e06238a96e1df85f1d7 192.168.4.66:6356@16356 slave 6fa6d44af120d0a4be5f0ba2d083b800b59be41a 0 1528446559000 9 connected
6fa6d44af120d0a4be5f0ba2d083b800b59be41a 192.168.4.62:6352@16352 master - 0 1528446557000 9 connected 3736-4513 8363-10922 13823-14580
4d672007b9e27119049efe94c798ca1b0ab7c7b3 192.168.4.64:6354@16354 slave 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8 0 1528446558000 11 connected
f6d60b431050daac64c0a5c020dcf49a399d9cc6 192.168.4.68:6358@16358 myself,slave cd07e05296288cb1e5f8e901015783100624b91a 0 1528446553000 0 connected
cd07e05296288cb1e5f8e901015783100624b91a 192.168.4.67:6357@16357 master - 0 1528446559098 10 connected 0-597 1213-2389 4514-4966 7510-8362 12970-13822 14581-14741
56c7c875ec0ace4902f70c0dabeb401b0e60eeee 192.168.4.65:6355@16355 slave fb2dc0d120eea527ccf2aedaa61503d58c5c7f80 0 1528446559000 7 connected
fb2dc0d120eea527ccf2aedaa61503d58c5c7f80 192.168.4.61:6351@16351 master - 0 1528446559298 7 connected 5461-7509 10923-12969

********
// Ignore the slot imbalance from the resharding above for now; it will be redistributed later

Delete the slave node, then run check to confirm the removal succeeded:
[root@61 src]# ./redis-trib.rb del-node 192.168.4.61:6351 f6d60b431050daac64c0a5c020dcf49a399d9cc6
>>> Removing node f6d60b431050daac64c0a5c020dcf49a399d9cc6 from cluster 192.168.4.61:6351
>>> Sending CLUSTER FORGET messages to the cluster…
>>> SHUTDOWN the node.
[root@61 src]# ./redis-trib.rb check 192.168.4.61:6351
>>> Performing Cluster Check (using node 192.168.4.61:6351)
M: fb2dc0d120eea527ccf2aedaa61503d58c5c7f80 192.168.4.61:6351
slots:5461-7509,10923-12969 (4096 slots) master
1 additional replica(s)
S: 56c7c875ec0ace4902f70c0dabeb401b0e60eeee 192.168.4.65:6355
slots: (0 slots) slave
replicates fb2dc0d120eea527ccf2aedaa61503d58c5c7f80
S: 4d672007b9e27119049efe94c798ca1b0ab7c7b3 192.168.4.64:6354
slots: (0 slots) slave
replicates 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8
M: 6fa6d44af120d0a4be5f0ba2d083b800b59be41a 192.168.4.62:6352
slots:3736-4513,8363-10922,13823-14580 (4096 slots) master
1 additional replica(s)
M: cd07e05296288cb1e5f8e901015783100624b91a 192.168.4.67:6357
slots:0-597,1213-2389,4514-4966,7510-8362,12970-13822,14581-14741 (4095 slots) master
0 additional replica(s)
S: fbe8bb6550867ef98feb2e06238a96e1df85f1d7 192.168.4.66:6356
slots: (0 slots) slave
replicates 6fa6d44af120d0a4be5f0ba2d083b800b59be41a
M: 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8 192.168.4.63:6353
slots:598-1212,2390-3735,4967-5460,14742-16383 (4097 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots…
>>> Check slots coverage…
[OK] All 16384 slots covered.

Removing a master node follows the same approach as the resharding above: first move all of its hash slots to the other masters, then delete it.
[root@61 src]# ./redis-trib.rb reshard 192.168.4.61:6351
>>> Performing Cluster Check (using node 192.168.4.61:6351)
M: fb2dc0d120eea527ccf2aedaa61503d58c5c7f80 192.168.4.61:6351
slots:6827-7509,10923-12969 (2730 slots) master
1 additional replica(s)
S: 56c7c875ec0ace4902f70c0dabeb401b0e60eeee 192.168.4.65:6355
slots: (0 slots) slave
replicates fb2dc0d120eea527ccf2aedaa61503d58c5c7f80
S: 4d672007b9e27119049efe94c798ca1b0ab7c7b3 192.168.4.64:6354
slots: (0 slots) slave
replicates 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8
M: 6fa6d44af120d0a4be5f0ba2d083b800b59be41a 192.168.4.62:6352
slots:8950-10922,13823-14580 (2731 slots) master
1 additional replica(s)
M: cd07e05296288cb1e5f8e901015783100624b91a 192.168.4.67:6357
slots:1979-2389,4514-4966,7510-8362,12970-13822,14581-14741 (2731 slots) master
0 additional replica(s)
S: fbe8bb6550867ef98feb2e06238a96e1df85f1d7 192.168.4.66:6356
slots: (0 slots) slave
replicates 6fa6d44af120d0a4be5f0ba2d083b800b59be41a
M: 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8 192.168.4.63:6353
slots:0-1978,2390-4513,4967-6826,8363-8949,14742-16383 (8192 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots…
>>> Check slots coverage…
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 2731
// Note: adjust this to your own case. Before this move I had already done several migrations, so host 67 had only 2731 hash slots left, which is why we move exactly 2731.
What is the receiving node ID? fb2dc0d120eea527ccf2aedaa61503d58c5c7f80
Please enter all the source node IDs.
Type ‘all’ to use all the nodes as source nodes for the hash slots.
Type ‘done’ once you entered all the source nodes IDs.
Source node #1:cd07e05296288cb1e5f8e901015783100624b91a
Source node #2:done

Ready to move 2731 slots.
Source nodes:
M: cd07e05296288cb1e5f8e901015783100624b91a 192.168.4.67:6357
slots:1979-2389,4514-4966,7510-8362,12970-13822,14581-14741 (2731 slots) master
0 additional replica(s)
Destination node:
M: fb2dc0d120eea527ccf2aedaa61503d58c5c7f80 192.168.4.61:6351
slots:6827-7509,10923-12969 (2730 slots) master
1 additional replica(s)
Resharding plan:
Moving slot 1979 from cd07e05296288cb1e5f8e901015783100624b91a
Moving slot 1980 from cd07e05296288cb1e5f8e901015783100624b91a
********************************************************

When it finishes, check again: node 67 (192.168.4.67) now holds no slots, so it can safely be deleted.
[root@61 src]# ./redis-trib.rb check 192.168.4.61:6351
>>> Performing Cluster Check (using node 192.168.4.61:6351)
M: fb2dc0d120eea527ccf2aedaa61503d58c5c7f80 192.168.4.61:6351
slots:1979-2389,4514-4966,6827-8362,10923-13822,14581-14741 (5461 slots) master
1 additional replica(s)
S: 56c7c875ec0ace4902f70c0dabeb401b0e60eeee 192.168.4.65:6355
slots: (0 slots) slave
replicates fb2dc0d120eea527ccf2aedaa61503d58c5c7f80
S: 4d672007b9e27119049efe94c798ca1b0ab7c7b3 192.168.4.64:6354
slots: (0 slots) slave
replicates 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8
M: 6fa6d44af120d0a4be5f0ba2d083b800b59be41a 192.168.4.62:6352
slots:8950-10922,13823-14580 (2731 slots) master
1 additional replica(s)
M: cd07e05296288cb1e5f8e901015783100624b91a 192.168.4.67:6357
slots: (0 slots) master
0 additional replica(s)
S: fbe8bb6550867ef98feb2e06238a96e1df85f1d7 192.168.4.66:6356
slots: (0 slots) slave
replicates 6fa6d44af120d0a4be5f0ba2d083b800b59be41a
M: 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8 192.168.4.63:6353
slots:0-1978,2390-4513,4967-6826,8363-8949,14742-16383 (8192 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots…
>>> Check slots coverage…
[OK] All 16384 slots covered.

Now remove the master node 67 (192.168.4.67:6357):
[root@61 src]# ./redis-trib.rb del-node 192.168.4.61:6351 cd07e05296288cb1e5f8e901015783100624b91a
>>> Removing node cd07e05296288cb1e5f8e901015783100624b91a from cluster 192.168.4.61:6351
>>> Sending CLUSTER FORGET messages to the cluster…
>>> SHUTDOWN the node.

Confirm the removal: node 67 is gone from the cluster.
[root@61 src]# ./redis-trib.rb check 192.168.4.61:6351
>>> Performing Cluster Check (using node 192.168.4.61:6351)
M: fb2dc0d120eea527ccf2aedaa61503d58c5c7f80 192.168.4.61:6351
slots:1979-2389,4514-4966,6827-8362,10923-13822,14581-14741 (5461 slots) master
1 additional replica(s)
S: 56c7c875ec0ace4902f70c0dabeb401b0e60eeee 192.168.4.65:6355
slots: (0 slots) slave
replicates fb2dc0d120eea527ccf2aedaa61503d58c5c7f80
S: 4d672007b9e27119049efe94c798ca1b0ab7c7b3 192.168.4.64:6354
slots: (0 slots) slave
replicates 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8
M: 6fa6d44af120d0a4be5f0ba2d083b800b59be41a 192.168.4.62:6352
slots:8950-10922,13823-14580 (2731 slots) master
1 additional replica(s)
S: fbe8bb6550867ef98feb2e06238a96e1df85f1d7 192.168.4.66:6356
slots: (0 slots) slave
replicates 6fa6d44af120d0a4be5f0ba2d083b800b59be41a
M: 7b27e1369fb56a16f4b090d011ba291cf6a1dcf8 192.168.4.63:6353
slots:0-1978,2390-4513,4967-6826,8363-8949,14742-16383 (8192 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots…
>>> Check slots coverage…
[OK] All 16384 slots covered.

Food for thought


Redis Cluster official documentation: Redis cluster tutorial

Reproduced from https://redis.io/topics/cluster-tutorial#redis-cluster-tutorial

Redis cluster tutorial

This document is a gentle introduction to Redis Cluster that does not use difficult-to-understand distributed systems concepts. It provides instructions on how to set up, test, and operate a cluster, without going into the details that are covered in the Redis Cluster specification, describing instead how the system behaves from the point of view of the user.

However, this tutorial does try to provide information about the availability and consistency characteristics of Redis Cluster from the point of view of the final user, stated in a simple-to-understand way.

Note this tutorial requires Redis version 3.0 or higher.

If you plan to run a serious Redis Cluster deployment, the more formal specification is a suggested reading, even if not strictly required. However it is a good idea to start from this document, play with Redis Cluster some time, and only later read the specification.

 

Redis Cluster 101

Redis Cluster provides a way to run a Redis installation where data is automatically sharded across multiple Redis nodes.

Redis Cluster also provides some degree of availability during partitions, that is, in practical terms, the ability to continue operations when some nodes fail or are unable to communicate. However the cluster stops operating in the event of larger failures (for example when the majority of masters are unavailable).

So in practical terms, what do you get with Redis Cluster?

  • The ability to automatically split your dataset among multiple nodes.
  • The ability to continue operations when a subset of the nodes are experiencing failures or are unable to communicate with the rest of the cluster.

 

Redis Cluster TCP ports

Every Redis Cluster node requires two open TCP connections: the normal Redis TCP port used to serve clients, for example 6379, plus the port obtained by adding 10000 to the data port, so 16379 in the example.

This second high port is used for the Cluster bus, a node-to-node communication channel using a binary protocol. The Cluster bus is used by nodes for failure detection, configuration updates, failover authorization and so forth. Clients should never try to communicate with the cluster bus port, but always with the normal Redis command port; however, make sure you open both ports in your firewall, otherwise Redis Cluster nodes will not be able to communicate.

The command port and cluster bus port offset is fixed and is always 10000.

Note that for a Redis Cluster to work properly you need, for each node:

  1. The normal client communication port (usually 6379), used to communicate with clients, must be open to all the clients that need to reach the cluster, plus all the other cluster nodes (which use the client port for key migrations).
  2. The cluster bus port (the client port + 10000) must be reachable from all the other cluster nodes.

If you don’t open both TCP ports, your cluster will not work as expected.

The cluster bus uses a different, binary protocol for node-to-node data exchange, which is better suited to exchanging information between nodes using little bandwidth and processing time.

 

Redis Cluster and Docker

Currently Redis Cluster does not support NATted environments and in general environments where IP addresses or TCP ports are remapped.

Docker uses a technique called port mapping: programs running inside Docker containers may be exposed with a different port compared to the one the program believes to be using. This is useful in order to run multiple containers using the same ports, at the same time, in the same server.

In order to make Docker compatible with Redis Cluster you need to use the host networking mode of Docker. Please check the --net=host option in the Docker documentation for more information.

 

Redis Cluster data sharding

Redis Cluster does not use consistent hashing, but a different form of sharding where every key is conceptually part of what we call a hash slot.

There are 16384 hash slots in Redis Cluster, and to compute the hash slot of a given key, we simply take the CRC16 of the key modulo 16384.
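The slot computation is easy to reproduce. The following Python sketch implements the CRC16 variant Redis Cluster uses (CRC16-CCITT/XModem); this is an illustration, not the actual C implementation from the Redis source:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem): polynomial 0x1021, initial value 0, no reflection."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: bytes) -> int:
    """Map a key to one of the 16384 Redis Cluster hash slots."""
    return crc16(key) % 16384

# The key "foo" lands in slot 12182.
print(hash_slot(b"foo"))  # 12182
```

The value 12182 for the key foo matches the MOVED redirection shown in the redis-cli session later in this document.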

Every node in a Redis Cluster is responsible for a subset of the hash slots, so for example you may have a cluster with 3 nodes, where:

  • Node A contains hash slots from 0 to 5500.
  • Node B contains hash slots from 5501 to 11000.
  • Node C contains hash slots from 11001 to 16383.

This makes it easy to add and remove nodes in the cluster. For example, if I want to add a new node D, I need to move some hash slots from nodes A, B, C to D. Similarly, if I want to remove node A from the cluster I can just move the hash slots served by A to B and C. When node A is empty I can remove it from the cluster completely.

Because moving hash slots from one node to another does not require stopping operations, adding and removing nodes, or changing the percentage of hash slots held by nodes, does not require any downtime.

Redis Cluster supports multiple-key operations as long as all the keys involved in a single command execution (or whole transaction, or Lua script execution) belong to the same hash slot. The user can force multiple keys to be part of the same hash slot by using a concept called hash tags.

Hash tags are documented in the Redis Cluster specification, but the gist is that if there is a substring between {} braces in a key, only what is inside the braces is hashed, so for example this{foo}key and another{foo}key are guaranteed to be in the same hash slot, and can be used together in a command with multiple keys as arguments.
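The hash-tag extraction rule can be sketched in Python. This is a simplified illustration of the rule from the specification: only the substring between the first '{' and the first '}' after it is hashed, provided it is non-empty:

```python
def hash_tag(key: str) -> str:
    """Return the effective string that gets hashed for a key.

    If the key contains a non-empty substring between the first '{' and the
    first '}' following it, only that substring is hashed; otherwise the
    whole key is hashed.
    """
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end > start + 1:  # the tag must be non-empty
            return key[start + 1:end]
    return key

# Both keys hash only the substring "foo", so they map to the same slot.
print(hash_tag("this{foo}key"))     # foo
print(hash_tag("another{foo}key"))  # foo
print(hash_tag("plainkey"))         # plainkey
```

Feeding the extracted tag into the CRC16-based slot function is what guarantees that this{foo}key and another{foo}key land in the same hash slot.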

 

Redis Cluster master-slave model

In order to remain available when a subset of master nodes are failing or are unable to communicate with the majority of nodes, Redis Cluster uses a master-slave model where every hash slot has from 1 (the master itself) to N replicas (N-1 additional slave nodes).

In our example cluster with nodes A, B, C, if node B fails the cluster is not able to continue, since we no longer have a way to serve hash slots in the range 5501-11000.

However if, when the cluster is created (or at a later time), we add a slave node to every master, so that the final cluster is composed of master nodes A, B, C and slave nodes A1, B1, C1, the system is able to continue if node B fails.

Node B1 replicates B, so if B fails, the cluster will promote node B1 as the new master and will continue to operate correctly.

However note that if nodes B and B1 fail at the same time Redis Cluster is not able to continue to operate.

 

Redis Cluster consistency guarantees

Redis Cluster is not able to guarantee strong consistency. In practical terms this means that under certain conditions it is possible that Redis Cluster will lose writes that were acknowledged by the system to the client.

The first reason why Redis Cluster can lose writes is because it uses asynchronous replication. This means that during writes the following happens:

  • Your client writes to the master B.
  • The master B replies OK to your client.
  • The master B propagates the write to its slaves B1, B2 and B3.

As you can see, B does not wait for an acknowledgment from B1, B2, B3 before replying to the client, since this would be a prohibitive latency penalty for Redis. So if your client writes something and B acknowledges the write but crashes before being able to send it to its slaves, one of the slaves (which did not receive the write) can be promoted to master, losing the write forever.

This is very similar to what happens with most databases that are configured to flush data to disk every second, so it is a scenario you are already able to reason about because of past experience with traditional database systems not involving distributed systems. Similarly, you can improve consistency by forcing the database to flush data to disk before replying to the client, but this usually results in prohibitively low performance. That would be the equivalent of synchronous replication in the case of Redis Cluster.

Basically, there is a trade-off between performance and consistency.

Redis Cluster has support for synchronous writes when absolutely needed, implemented via the WAIT command. This makes losing writes a lot less likely; however, note that Redis Cluster does not implement strong consistency even when synchronous replication is used: it is always possible, under more complex failure scenarios, that a slave that was not able to receive the write is elected as master.

There is another notable scenario where Redis Cluster will lose writes: a network partition where a client is isolated with a minority of instances including at least a master.

Take as an example our 6 nodes cluster composed of A, B, C, A1, B1, C1, with 3 masters and 3 slaves. There is also a client, that we will call Z1.

After a partition occurs, it is possible that in one side of the partition we have A, C, A1, B1, C1, and in the other side we have B and Z1.

Z1 is still able to write to B, which will accept its writes. If the partition heals in a very short time, the cluster will continue normally. However, if the partition lasts long enough for B1 to be promoted to master on the majority side of the partition, the writes that Z1 has sent to B will be lost.

Note that there is a maximum window to the amount of writes Z1 will be able to send to B: if enough time has elapsed for the majority side of the partition to elect a slave as master, every master node in the minority side stops accepting writes.

This amount of time is a very important configuration directive of Redis Cluster, and is called the node timeout.

After node timeout has elapsed, a master node is considered to be failing, and can be replaced by one of its replicas. Similarly, if node timeout elapses without a master node being able to sense the majority of the other master nodes, it enters an error state and stops accepting writes.

 

Redis Cluster configuration parameters

We are about to create an example cluster deployment. Before we continue, let’s introduce the configuration parameters that Redis Cluster introduces in the redis.conf file. Some will be obvious, others will be more clear as you continue reading.

  • cluster-enabled <yes/no>: If yes, enables Redis Cluster support in a specific Redis instance. Otherwise the instance starts as a standalone instance as usual.
  • cluster-config-file <filename>: Note that despite the name of this option, this is not a user-editable configuration file, but the file where a Redis Cluster node automatically persists the cluster configuration (the state, basically) every time there is a change, in order to be able to re-read it at startup. The file lists things like the other nodes in the cluster, their state, persistent variables, and so forth. This file is often rewritten and flushed to disk as a result of some message reception.
  • cluster-node-timeout <milliseconds>: The maximum amount of time a Redis Cluster node can be unavailable, without it being considered as failing. If a master node is not reachable for more than the specified amount of time, it will be failed over by its slaves. This parameter controls other important things in Redis Cluster. Notably, every node that can’t reach the majority of master nodes for the specified amount of time, will stop accepting queries.
  • cluster-slave-validity-factor <factor>: If set to zero, a slave will always try to fail over a master, regardless of the amount of time the link between the master and the slave remained disconnected. If the value is positive, a maximum disconnection time is calculated as the node timeout value multiplied by the factor provided with this option, and if the node is a slave, it will not try to start a failover if the master link was disconnected for more than the specified amount of time. For example, if the node timeout is set to 5 seconds and the validity factor is set to 10, a slave disconnected from the master for more than 50 seconds will not try to fail over its master. Note that any value different from zero may result in Redis Cluster being unavailable after a master failure if there is no slave able to fail it over. In that case the cluster will become available again only when the original master rejoins the cluster.
  • cluster-migration-barrier <count>: Minimum number of slaves a master will remain connected with, for another slave to migrate to a master which is no longer covered by any slave. See the appropriate section about replica migration in this tutorial for more information.
  • cluster-require-full-coverage <yes/no>: If this is set to yes, as it is by default, the cluster stops accepting writes if some percentage of the key space is not covered by any node. If the option is set to no, the cluster will still serve queries even if only requests about a subset of keys can be processed.

 

Creating and using a Redis Cluster

Note: to deploy a Redis Cluster manually it is very important to learn certain operational aspects of it. However if you want to get a cluster up and running ASAP (As Soon As Possible) skip this section and the next one and go directly to Creating a Redis Cluster using the create-cluster script.

To create a cluster, the first thing we need is a few empty Redis instances running in cluster mode. This basically means that clusters are not created using normal Redis instances; a special mode needs to be configured so that the Redis instance enables the Cluster-specific features and commands.

The following is a minimal Redis cluster configuration file:

port 7000
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes

As you can see what enables the cluster mode is simply the cluster-enabled directive. Every instance also contains the path of a file where the configuration for this node is stored, which by default is nodes.conf. This file is never touched by humans; it is simply generated at startup by the Redis Cluster instances, and updated every time it is needed.

Note that the minimal cluster that works as expected requires at least three master nodes. For your first tests it is strongly suggested to start a six-node cluster with three masters and three slaves.

To do so, enter a new directory and create the following directories, named after the port number of the instance we’ll run inside each of them.

Something like:

mkdir cluster-test
cd cluster-test
mkdir 7000 7001 7002 7003 7004 7005

Create a redis.conf file inside each of the directories, from 7000 to 7005. As a template for your configuration file just use the small example above, but make sure to replace the port number 7000 with the right port number according to the directory name.
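If you prefer to script this step, a small Python helper can generate all six directories and config files from the template above (a convenience sketch; the directory layout and file names match what was just described):

```python
import os

# Minimal Redis Cluster config template from the tutorial; only the port varies.
TEMPLATE = """port {port}
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
"""

# Create 7000/redis.conf through 7005/redis.conf in the current directory
# (run this from inside cluster-test).
for port in range(7000, 7006):
    os.makedirs(str(port), exist_ok=True)
    with open(os.path.join(str(port), "redis.conf"), "w") as f:
        f.write(TEMPLATE.format(port=port))
```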

Now copy your redis-server executable, compiled from the latest sources in the unstable branch at GitHub, into the cluster-test directory, and finally open 6 terminal tabs in your favorite terminal application.

Start every instance like this, one in every tab:

cd 7000
../redis-server ./redis.conf

As you can see from the logs of every instance, since no nodes.conf file existed, every node assigns itself a new ID.

[82462] 26 Nov 11:56:55.329 * No cluster configuration found, I'm 97a3a64667477371c4479320d683e4c8db5858b1

This ID will be used forever by this specific instance in order for the instance to have a unique name in the context of the cluster. Every node remembers every other node using these IDs, and not by IP or port. IP addresses and ports may change, but the unique node identifier will never change for the entire life of the node. We call this identifier simply the Node ID.

 

Creating the cluster

Now that we have a number of instances running, we need to create our cluster by writing some meaningful configuration to the nodes.

This is very easy to accomplish as we are helped by the Redis Cluster command line utility called redis-trib, a Ruby program that executes special commands on instances in order to create new clusters, check or reshard an existing cluster, and so forth.

The redis-trib utility is in the src directory of the Redis source code distribution. You need to install the redis gem to be able to run redis-trib.

gem install redis

To create your cluster simply type:

./redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 \
127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005

The command used here is create, since we want to create a new cluster. The option --replicas 1 means that we want a slave for every master created. The other arguments are the list of addresses of the instances I want to use to create the new cluster.

Obviously the only setup meeting our requirements is to create a cluster with 3 masters and 3 slaves.

Redis-trib will propose a configuration. Accept the proposed configuration by typing yes. The cluster will be configured and joined, which means that the instances will be bootstrapped into talking with each other. Finally, if everything went well, you’ll see a message like this:

[OK] All 16384 slots covered

This means that there is at least one master instance serving each of the 16384 slots available.

 

Creating a Redis Cluster using the create-cluster script

If you don’t want to create a Redis Cluster by configuring and executing individual instances manually as explained above, there is a much simpler system (but you’ll not learn the same amount of operational details).

Just check the utils/create-cluster directory in the Redis distribution. There is a script called create-cluster inside (same name as the directory it is contained in); it’s a simple bash script. In order to start a 6-node cluster with 3 masters and 3 slaves just type the following commands:

  1. create-cluster start
  2. create-cluster create

Reply yes in step 2 when the redis-trib utility asks you to accept the cluster layout.

You can now interact with the cluster; the first node will start at port 30001 by default. When you are done, stop the cluster with:

  1. create-cluster stop.

Please read the README inside this directory for more information on how to run the script.

 

Playing with the cluster

At this stage one of the problems with Redis Cluster is the lack of client libraries implementations.

I’m aware of the following implementations:

  • redis-rb-cluster is a Ruby implementation written by me (@antirez) as a reference for other languages. It is a simple wrapper around the original redis-rb, implementing the minimal semantics to talk with the cluster efficiently.
  • redis-py-cluster A port of redis-rb-cluster to Python. Supports majority of redis-py functionality. Is in active development.
  • The popular Predis has support for Redis Cluster, the support was recently updated and is in active development.
  • The most used Java client, Jedis recently added support for Redis Cluster, see the Jedis Cluster section in the project README.
  • StackExchange.Redis offers support for C# (and should work fine with most .NET languages; VB, F#, etc)
  • thunk-redis offers support for Node.js and io.js, it is a thunk/promise-based redis client with pipelining and cluster.
  • redis-go-cluster is an implementation of Redis Cluster for the Go language using the Redigo library client as the base client. Implements MGET/MSET via result aggregation.
  • The redis-cli utility in the unstable branch of the Redis repository at GitHub implements a very basic cluster support when started with the -c switch.

An easy way to test Redis Cluster is either to try any of the above clients or simply the redis-cli command line utility. The following is an example of interaction using the latter:

$ redis-cli -c -p 7000
redis 127.0.0.1:7000> set foo bar
-> Redirected to slot [12182] located at 127.0.0.1:7002
OK
redis 127.0.0.1:7002> set hello world
-> Redirected to slot [866] located at 127.0.0.1:7000
OK
redis 127.0.0.1:7000> get foo
-> Redirected to slot [12182] located at 127.0.0.1:7002
"bar"
redis 127.0.0.1:7000> get hello
-> Redirected to slot [866] located at 127.0.0.1:7000
"world"

Note: if you created the cluster using the script your nodes may listen to different ports, starting from 30001 by default.

The redis-cli cluster support is very basic, so it always relies on the fact that Redis Cluster nodes are able to redirect a client to the right node. A serious client is able to do better than that, caching the map between hash slots and node addresses in order to use the right connection to the right node directly. The map is refreshed only when something changes in the cluster configuration, for example after a failover or after the system administrator changed the cluster layout by adding or removing nodes.
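The caching strategy described above can be illustrated with a toy simulation. Nothing here is a real client or a real connection; nodes are plain dictionaries, and hash(key) % 16384 stands in for the CRC16 computation:

```python
SLOTS = 16384

def make_node(name, lo, hi):
    """A fake cluster node owning the slot range [lo, hi]."""
    return {"name": name, "range": (lo, hi), "data": {}}

def node_set(node, slot, key, value, cluster):
    """Fake SET: store the key, or reply with a MOVED redirection."""
    lo, hi = node["range"]
    if not (lo <= slot <= hi):
        owner = next(n for n in cluster
                     if n["range"][0] <= slot <= n["range"][1])
        return ("MOVED", owner)
    node["data"][key] = value
    return ("OK", None)

class SmartClient:
    """Caches the slot -> node map and updates it on MOVED replies."""
    def __init__(self, cluster):
        self.cluster = cluster
        self.slot_map = {}  # learned slot -> node entries

    def slot_for(self, key):
        return hash(key) % SLOTS  # stand-in for CRC16(key) % 16384

    def set(self, key, value):
        slot = self.slot_for(key)
        node = self.slot_map.get(slot, self.cluster[0])  # guess if not cached
        status, owner = node_set(node, slot, key, value, self.cluster)
        if status == "MOVED":
            self.slot_map[slot] = owner  # refresh only the redirected entry
            node_set(owner, slot, key, value, self.cluster)
            return owner["name"]
        return node["name"]

cluster = [make_node("A", 0, 5500), make_node("B", 5501, 11000),
           make_node("C", 11001, 16383)]
client = SmartClient(cluster)
first = client.set("foo", "bar")   # may follow one MOVED redirection
second = client.set("foo", "baz")  # cached entry: goes straight to the owner
print(first == second)             # True
```

A real client would typically bulk-load the whole map with CLUSTER SLOTS (or CLUSTER NODES) instead of learning one slot at a time, but the refresh-on-redirect behavior is the same idea.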

 

Writing an example app with redis-rb-cluster

Before going forward showing how to operate the Redis Cluster, doing things like a failover or a resharding, we need to create some example application, or at least to be able to understand the semantics of a simple Redis Cluster client interaction.

In this way we can run an example and at the same time try to make nodes fail, or start a resharding, to see how Redis Cluster behaves under real world conditions. It is not very helpful to see what happens while nobody is writing to the cluster.

This section explains some basic usage of redis-rb-cluster showing two examples. The first is the following, and is the example.rb file inside the redis-rb-cluster distribution:

   1  require './cluster'
   2
   3  if ARGV.length != 2
   4      startup_nodes = [
   5          {:host => "127.0.0.1", :port => 7000},
   6          {:host => "127.0.0.1", :port => 7001}
   7      ]
   8  else
   9      startup_nodes = [
  10          {:host => ARGV[0], :port => ARGV[1].to_i}
  11      ]
  12  end
  13
  14  rc = RedisCluster.new(startup_nodes,32,:timeout => 0.1)
  15
  16  last = false
  17
  18  while not last
  19      begin
  20          last = rc.get("__last__")
  21          last = 0 if !last
  22      rescue => e
  23          puts "error #{e.to_s}"
  24          sleep 1
  25      end
  26  end
  27
  28  ((last.to_i+1)..1000000000).each{|x|
  29      begin
  30          rc.set("foo#{x}",x)
  31          puts rc.get("foo#{x}")
  32          rc.set("__last__",x)
  33      rescue => e
  34          puts "error #{e.to_s}"
  35      end
  36      sleep 0.1
  37  }

The application does a very simple thing: it sets keys of the form foo<number> to number, one after the other. So if you run the program the result is the following stream of commands:

  • SET foo0 0
  • SET foo1 1
  • SET foo2 2
  • And so forth…

The program looks more complex than it should, as it is designed to show errors on the screen instead of exiting with an exception, so every operation performed against the cluster is wrapped in begin rescue blocks.

Line 14 is the first interesting line in the program. It creates the Redis Cluster object, using as arguments a list of startup nodes, the maximum number of connections this object is allowed to take against different nodes, and finally the timeout after which a given operation is considered to have failed.

The startup nodes don’t need to be all the nodes of the cluster. The important thing is that at least one node is reachable. Also note that redis-rb-cluster updates this list of startup nodes as soon as it is able to connect with the first node. You should expect such a behavior with any other serious client.

Now that we have the Redis Cluster object instance stored in the rc variable, we are ready to use the object as if it were a normal Redis object instance.

This is exactly what happens in lines 18 to 26: when we restart the example we don’t want to start again with foo0, so we store the counter inside Redis itself. The code above is designed to read this counter, or, if the counter does not exist, to assign it the value of zero.

However note that it is a while loop, as we want to try again and again even if the cluster is down and returning errors. Normal applications don’t need to be so careful.

Lines between 28 and 37 start the main loop where the keys are set or an error is displayed.

Note the sleep call at the end of the loop. In your tests you can remove the sleep if you want to write to the cluster as fast as possible (relative to the fact that this is a busy loop without real parallelism, of course, so you’ll get the usual 10k ops/second in the best of conditions).

Normally writes are slowed down so that the example application is easier for humans to follow.

Starting the application produces the following output:

ruby ./example.rb
1
2
3
4
5
6
7
8
9
^C (I stopped the program here)

This is not a very interesting program and we’ll use a better one in a moment but we can already see what happens during a resharding when the program is running.

 

Resharding the cluster

Now we are ready to try a cluster resharding. To do this, please keep the example.rb program running, so that you can see whether there is some impact on the running program. Also you may want to comment out the sleep call in order to have a more serious write load during the resharding.

Resharding basically means moving hash slots from one set of nodes to another, and like cluster creation it is accomplished using the redis-trib utility.

To start a resharding just type:

./redis-trib.rb reshard 127.0.0.1:7000

You only need to specify a single node; redis-trib will find the other nodes automatically.

Currently redis-trib is only able to reshard with administrator support; you can’t just say move 5% of the slots from this node to the other one (but this is pretty trivial to implement). So it starts with questions. The first is how big a resharding you want to do:

How many slots do you want to move (from 1 to 16384)?

We can try to reshard 1000 hash slots, which should already contain a non-trivial number of keys if the example is still running without the sleep call.

Then redis-trib needs to know the target of the resharding, that is, the node that will receive the hash slots. I’ll use the first master node, 127.0.0.1:7000, but I need to specify the Node ID of the instance. redis-trib already printed this in a list, but I can always find the ID of a node with the following command if needed:

$ redis-cli -p 7000 cluster nodes | grep myself
97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5460

Ok so my target node is 97a3a64667477371c4479320d683e4c8db5858b1.

Now you’ll be asked from which nodes you want to take those slots. I’ll just type all in order to take a few hash slots from all the other master nodes.

After the final confirmation you’ll see a message for every slot that redis-trib is going to move from one node to another, and a dot will be printed for every actual key moved from one side to the other.

While the resharding is in progress you should be able to see your example program running unaffected. You can stop and restart it multiple times during the resharding if you want.

At the end of the resharding, you can test the health of the cluster with the following command:

./redis-trib.rb check 127.0.0.1:7000

All the slots will be covered as usual, but this time the master at 127.0.0.1:7000 will have more hash slots, around 6461.

 

Scripting a resharding operation

Reshardings can be performed automatically without the need to manually enter the parameters in an interactive way. This is possible using a command line like the following:

./redis-trib.rb reshard --from <node-id> --to <node-id> --slots <number of slots> --yes <host>:<port>

This allows you to build some automation if you are likely to reshard often; however, there is currently no way for redis-trib to automatically rebalance the cluster by checking the distribution of keys across the cluster nodes and intelligently moving slots as needed. This feature will be added in the future.

 

A more interesting example application

The example application we wrote earlier is not very good. It writes to the cluster in a simple way without even checking whether what was written is correct.

From our point of view, the cluster receiving the writes could simply write the key foo to 42 on every operation, and we would not notice at all.

So in the redis-rb-cluster repository, there is a more interesting application that is called consistency-test.rb. It uses a set of counters, by default 1000, and sends INCR commands in order to increment the counters.

However instead of just writing, the application does two additional things:

  • When a counter is updated using INCR, the application remembers the write.
  • It also reads a random counter before every write, and checks whether the value is what we expected it to be, comparing it with the value it has in memory.

What this means is that this application is a simple consistency checker: it can tell you if the cluster lost some write, or if it accepted a write for which we did not receive an acknowledgment. In the first case we’ll see a counter with a value smaller than the one we remember, while in the second case the value will be greater.
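
The core of that check can be sketched in a few lines (a simplification, not the real consistency-test.rb; a plain Hash stands in for the cluster):

```ruby
# The application's in-memory model versus the store (a Hash standing in
# for the Redis cluster). Every write is remembered; every read compared.
store    = Hash.new(0)   # stand-in for the cluster
expected = Hash.new(0)   # what the application remembers writing
lost = 0

10.times do              # ten acknowledged INCRs on one counter
  store["key_217"] += 1
  expected["key_217"] += 1
end

store["key_217"] = 0     # someone resets the counter behind our back

value = store["key_217"]
lost += expected["key_217"] - value if value < expected["key_217"]
# the checker now counts 10 lost writes for this counter
```

The real program does the same comparison against the cluster for a random counter before every write, which is how the "114 lost" line below comes about.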

Running the consistency-test application produces a line of output every second:

$ ruby consistency-test.rb
925 R (0 err) | 925 W (0 err) |
5030 R (0 err) | 5030 W (0 err) |
9261 R (0 err) | 9261 W (0 err) |
13517 R (0 err) | 13517 W (0 err) |
17780 R (0 err) | 17780 W (0 err) |
22025 R (0 err) | 22025 W (0 err) |
25818 R (0 err) | 25818 W (0 err) |

The line shows the number of Reads and Writes performed, and the number of errors (queries not accepted because the system was not available).

If some inconsistency is found, new lines are added to the output. This is what happens, for example, if I reset a counter manually while the program is running:

$ redis-cli -h 127.0.0.1 -p 7000 set key_217 0
OK

(in the other tab I see...)

94774 R (0 err) | 94774 W (0 err) |
98821 R (0 err) | 98821 W (0 err) |
102886 R (0 err) | 102886 W (0 err) | 114 lost |
107046 R (0 err) | 107046 W (0 err) | 114 lost |

When I set the counter to 0 the real value was 114, so the program reports 114 lost writes (INCR commands that are not remembered by the cluster).

This program is much more interesting as a test case, so we’ll use it to test the Redis Cluster failover.

 

Testing the failover

Note: during this test, you should keep a tab open with the consistency test application running.

In order to trigger the failover, the simplest thing we can do (that is also the semantically simplest failure that can occur in a distributed system) is to crash a single process, in our case a single master.

We can identify a master and crash it with the following commands:

$ redis-cli -p 7000 cluster nodes | grep master
3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385482984082 0 connected 5960-10921
2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 master - 0 1385482983582 0 connected 11423-16383
97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5959 10922-11422

Ok, so 7000, 7001, and 7002 are masters. Let’s crash node 7002 with the DEBUG SEGFAULT command:

$ redis-cli -p 7002 debug segfault
Error: Server closed the connection

Now we can look at the output of the consistency test to see what it reported.

18849 R (0 err) | 18849 W (0 err) |
23151 R (0 err) | 23151 W (0 err) |
27302 R (0 err) | 27302 W (0 err) |

... many error warnings here ...

29659 R (578 err) | 29660 W (577 err) |
33749 R (578 err) | 33750 W (577 err) |
37918 R (578 err) | 37919 W (577 err) |
42077 R (578 err) | 42078 W (577 err) |

As you can see, during the failover the system was not able to accept 578 reads and 577 writes; however, no inconsistency was created in the database. This may sound unexpected, since in the first part of this tutorial we stated that Redis Cluster can lose writes during the failover because it uses asynchronous replication. What we did not say is that this is not very likely to happen, because Redis sends the reply to the client and the commands to replicate to the slaves at about the same time, so there is a very small window in which to lose data. However, the fact that it is hard to trigger does not mean it is impossible, so this does not change the consistency guarantees provided by Redis Cluster.

We can now check what the cluster setup is after the failover (note that in the meantime I restarted the crashed instance so that it rejoined the cluster as a slave):

$ redis-cli -p 7000 cluster nodes
3fc783611028b1707fd65345e763befb36454d73 127.0.0.1:7004 slave 3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 0 1385503418521 0 connected
a211e242fc6b22a9427fed61285e85892fa04e08 127.0.0.1:7003 slave 97a3a64667477371c4479320d683e4c8db5858b1 0 1385503419023 0 connected
97a3a64667477371c4479320d683e4c8db5858b1 :0 myself,master - 0 0 0 connected 0-5959 10922-11422
3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 127.0.0.1:7005 master - 0 1385503419023 3 connected 11423-16383
3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385503417005 0 connected 5960-10921
2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385503418016 3 connected

Now the masters are running on ports 7000, 7001 and 7005. What was previously a master, that is the Redis instance running on port 7002, is now a slave of 7005.

The output of the CLUSTER NODES command may look intimidating, but it is actually pretty simple, and is composed of the following tokens:

  • Node ID
  • ip:port
  • flags: master, slave, myself, fail, …
  • if it is a slave, the Node ID of the master
  • Time of the last pending PING still waiting for a reply.
  • Time of the last PONG received.
  • Configuration epoch for this node (see the Cluster specification).
  • Status of the link to this node.
  • Slots served…
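
As an illustration, one line of that output can be split into the tokens above like this (parse_cluster_node is a hypothetical helper, not a Redis API; the sample line is taken from the output earlier in this section):

```ruby
# Split one line of CLUSTER NODES output into the tokens listed above.
# parse_cluster_node is a hypothetical helper written for this sketch.
def parse_cluster_node(line)
  id, addr, flags, master_id, ping, pong, epoch, link, *slots = line.split
  {
    id: id,
    addr: addr,
    flags: flags.split(","),   # e.g. ["myself", "master"]
    master_id: master_id,      # "-" when the node is itself a master
    ping_sent: ping.to_i,      # last pending PING still waiting for a reply
    pong_received: pong.to_i,  # last PONG received
    config_epoch: epoch.to_i,
    link_state: link,
    slots: slots               # e.g. ["5960-10921"], possibly several ranges
  }
end

line = "3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385503417005 0 connected 5960-10921"
node = parse_cluster_node(line)
```

Splitting on whitespace is enough because every field is a single token; only the slot ranges vary in number, which is why they are collected at the end.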

 

Manual failover

Sometimes it is useful to force a failover without actually causing any problem on a master. For example, in order to upgrade the Redis process of one of the master nodes, it is a good idea to fail it over so it becomes a slave, with minimal impact on availability.

Manual failovers are supported by Redis Cluster using the CLUSTER FAILOVER command, which must be executed on one of the slaves of the master you want to fail over.

Manual failovers are special, and safer compared to failovers resulting from actual master failures, since they occur in a way that avoids data loss: clients are switched from the original master to the new master only when the system is sure that the new master has processed the entire replication stream from the old one.

This is what you see in the slave log when you perform a manual failover:

# Manual failover user request accepted.
# Received replication offset for paused master manual failover: 347540
# All master replication stream processed, manual failover can start.
# Start of election delayed for 0 milliseconds (rank #0, offset 347540).
# Starting a failover election for epoch 7545.
# Failover election won: I'm the new master.

Basically, clients connected to the master we are failing over are paused. At the same time the master sends its replication offset to the slave, which waits to reach that offset on its side. When the replication offset is reached, the failover starts and the old master is informed of the configuration switch. When the clients are unblocked on the old master, they are redirected to the new master.

 

Adding a new node

Adding a new node is basically the process of adding an empty node and then either moving some data into it, if it is a new master, or telling it to set up as a replica of a known node, if it is a slave.

We’ll show both, starting with the addition of a new master instance.

In both cases the first step to perform is adding an empty node.

This is as simple as starting a new node on port 7006 (we already used 7000 to 7005 for our existing 6 nodes) with the same configuration used for the other nodes, except for the port number. To conform with the setup we used for the previous nodes:

  • Create a new tab in your terminal application.
  • Enter the cluster-test directory.
  • Create a directory named 7006.
  • Create a redis.conf file inside, similar to the one used for the other nodes but using 7006 as port number.
  • Finally start the server with ../redis-server ./redis.conf
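
The steps above can also be scripted. The sketch below assumes it is run from inside the cluster-test directory and that the minimal cluster redis.conf shown earlier in this tutorial is what the other nodes use:

```ruby
# Create the 7006 directory and its redis.conf, mirroring the other nodes.
# The config contents assume the minimal cluster redis.conf from this
# tutorial; adapt them if your nodes use additional settings.
require "fileutils"

FileUtils.mkdir_p("7006")
File.write("7006/redis.conf", <<~CONF)
  port 7006
  cluster-enabled yes
  cluster-config-file nodes.conf
  cluster-node-timeout 5000
  appendonly yes
CONF

# Then, from inside 7006/, start the server: ../redis-server ./redis.conf
```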

At this point the server should be running.

Now we can use redis-trib as usual in order to add the node to the existing cluster.

./redis-trib.rb add-node 127.0.0.1:7006 127.0.0.1:7000

As you can see, I used the add-node command, specifying the address of the new node as the first argument and the address of a random existing node in the cluster as the second argument.

In practical terms redis-trib did very little to help us here: it just sent a CLUSTER MEET message to the node, something that is also possible to accomplish manually. However, redis-trib also checks the state of the cluster before operating, so it is a good idea to always perform cluster operations via redis-trib, even when you know how the internals work.

Now we can connect to the new node to see if it really joined the cluster:

redis 127.0.0.1:7006> cluster nodes
3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 127.0.0.1:7001 master - 0 1385543178575 0 connected 5960-10921
3fc783611028b1707fd65345e763befb36454d73 127.0.0.1:7004 slave 3e3a6cb0d9a9a87168e266b0a0b24026c0aae3f0 0 1385543179583 0 connected
f093c80dde814da99c5cf72a7dd01590792b783b :0 myself,master - 0 0 0 connected
2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385543178072 3 connected
a211e242fc6b22a9427fed61285e85892fa04e08 127.0.0.1:7003 slave 97a3a64667477371c4479320d683e4c8db5858b1 0 1385543178575 0 connected
97a3a64667477371c4479320d683e4c8db5858b1 127.0.0.1:7000 master - 0 1385543179080 0 connected 0-5959 10922-11422
3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 127.0.0.1:7005 master - 0 1385543177568 3 connected 11423-16383

Note that since this node is already connected to the cluster, it is already able to redirect client queries correctly and is, generally speaking, part of the cluster. However it has two peculiarities compared to the other masters:

  • It holds no data as it has no assigned hash slots.
  • Because it is a master without assigned slots, it does not participate in the election process when a slave wants to become a master.

Now it is possible to assign hash slots to this node using the resharding feature of redis-trib. There is no point in showing this again, as we already did it in a previous section; it is simply a resharding with the empty node as the target.

 

Adding a new node as a replica

Adding a new replica can be performed in two ways. The obvious one is to use redis-trib again, but with the --slave option, like this:

./redis-trib.rb add-node --slave 127.0.0.1:7006 127.0.0.1:7000

Note that the command line here is exactly like the one we used to add a new master, so we are not specifying to which master we want to add the replica. In this case, redis-trib will add the new node as a replica of a random master among the masters with the fewest replicas.

However, you can specify exactly which master you want your new replica to target with the following command line:

./redis-trib.rb add-node --slave --master-id 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 127.0.0.1:7006 127.0.0.1:7000

This way we assign the new replica to a specific master.

A more manual way to add a replica to a specific master is to add the new node as an empty master, and then turn it into a replica using the CLUSTER REPLICATE command. This also works if the node was added as a slave but you want to make it a replica of a different master.

For example, in order to add a replica for the node 127.0.0.1:7005, which is currently serving hash slots in the range 11423-16383 and has Node ID 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e, all I need to do is connect to the new node (already added as an empty master) and send the command:

redis 127.0.0.1:7006> cluster replicate 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e

That’s it. Now we have a new replica for this set of hash slots, and all the other nodes in the cluster already know about it (after the few seconds needed to update their configuration). We can verify this with the following command:

$ redis-cli -p 7000 cluster nodes | grep slave | grep 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e
f093c80dde814da99c5cf72a7dd01590792b783b 127.0.0.1:7006 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385543617702 3 connected
2938205e12de373867bf38f1ca29d31d0ddb3e46 127.0.0.1:7002 slave 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 0 1385543617198 3 connected

The node 3c3a0c… now has two slaves, running on ports 7002 (the existing one) and 7006 (the new one).

 

Removing a node

To remove a slave node, just use the del-node command of redis-trib:

./redis-trib.rb del-node 127.0.0.1:7000 <node-id>

The first argument is just a random node in the cluster, the second argument is the ID of the node you want to remove.

You can remove a master node in the same way; however, in order to remove a master node it must be empty. If the master is not empty, you need to reshard its data away to all the other master nodes first.

An alternative way to remove a master node is to perform a manual failover to one of its slaves and remove the node once it has turned into a slave of the new master. Obviously this does not help when you want to reduce the actual number of masters in your cluster; in that case, a resharding is needed.

 

Replicas migration

In Redis Cluster it is possible to reconfigure a slave to replicate from a different master at any time using the following command:

CLUSTER REPLICATE <master-node-id>

However, there is a special scenario where you want replicas to move from one master to another automatically, without the system administrator’s help. The automatic reconfiguration of replicas is called replica migration, and it can improve the reliability of a Redis Cluster.

Note: you can read the details of replica migration in the Redis Cluster Specification; here we’ll only provide some information about the general idea and what you should do in order to benefit from it.

The reason why you may want to let your cluster replicas move from one master to another under certain conditions is that, usually, Redis Cluster is as resistant to failures as the number of replicas attached to a given master.

For example, a cluster where every master has a single replica can’t continue operating if the master and its replica fail at the same time, simply because no other instance has a copy of the hash slots the master was serving. While netsplits are likely to isolate a number of nodes at the same time, many other kinds of failures, such as hardware or software failures local to a single node, are a very notable class of failures that are unlikely to happen at the same time. So it is possible that, in a cluster where every master has a slave, the slave is killed at 4am and the master at 6am. This still results in a cluster that can no longer operate.

To improve the reliability of the system we could add additional replicas to every master, but that is expensive. Replica migration allows you to add more slaves to just a few masters. Say you have 10 masters with 1 slave each, for a total of 20 instances. You then add, for example, 3 more instances as slaves of some of your masters, so certain masters will have more than a single slave.

With replica migration, if a master is left without slaves, a replica from a master that has multiple slaves will migrate to the orphaned master. So after your slave goes down at 4am as in the example above, another slave takes its place, and when the master also fails at 6am, there is still a slave that can be elected so that the cluster can continue to operate.

In short, what should you know about replica migration?

  • The cluster will try to migrate a replica from the master that has the greatest number of replicas at a given moment.
  • To benefit from replica migration you just have to add a few more replicas to a single master in your cluster; it does not matter which master.
  • There is a configuration parameter that controls the replica migration feature, called cluster-migration-barrier: you can read more about it in the example redis.conf file provided with Redis Cluster.
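
The first rule can be illustrated with a toy model (plain Ruby hashes; nothing here talks to a real cluster):

```ruby
# Toy model of the migration rule: when a master is orphaned, a replica
# moves over from the master that currently has the most replicas.
replicas = {
  "m1" => ["s1"],
  "m2" => ["s2", "s3", "s4"],  # the master we gave extra slaves to
  "m3" => ["s5"],
}

replicas["m1"].clear                     # m1's only slave dies: m1 is orphaned

orphan, _ = replicas.find   { |_, slaves| slaves.empty? }
donor,  _ = replicas.max_by { |_, slaves| slaves.size }
replicas[orphan] << replicas[donor].pop  # one spare replica migrates over

# m1 is covered again, and the donor still has more than one replica left
```

This is only the selection rule in miniature; the real cluster also respects cluster-migration-barrier, so a donor never gives away a replica if doing so would leave it below that threshold.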

 

Upgrading nodes in a Redis Cluster

Upgrading slave nodes is easy: you just need to stop the node and restart it with an updated version of Redis. If clients scale reads using slave nodes, they should be able to reconnect to a different slave if a given one is unavailable.

Upgrading masters is a bit more complex, and the suggested procedure is:

  1. Use CLUSTER FAILOVER to trigger a manual failover of the master to one of its slaves (see the “Manual failover” section of this documentation).
  2. Wait for the master to turn into a slave.
  3. Then upgrade the node as you would a slave.
  4. If you want the master to be the node you just upgraded, trigger a new manual failover to turn the upgraded node back into a master.

Following this procedure you should upgrade one node after the other until all the nodes are upgraded.

 

Migrating to Redis Cluster

Users willing to migrate to Redis Cluster may have just a single master, or may already be using a preexisting sharding setup where keys are split among N nodes, using some in-house algorithm or a sharding algorithm implemented by their client library or a Redis proxy.

In both cases it is possible to migrate to Redis Cluster easily. The most important detail is whether multiple-key operations are used by the application, and how. There are three different cases:

  1. Multiple-key operations, transactions, or Lua scripts involving multiple keys are not used. Keys are accessed independently (even if accessed via transactions or Lua scripts that group multiple commands about the same key together).
  2. Multiple-key operations, transactions, or Lua scripts involving multiple keys are used, but only with keys having the same hash tag, which means that the keys used together all have an identical {...} sub-string. For example, the following multiple-key operation is defined in the context of the same hash tag: SUNION {user:1000}.foo {user:1000}.bar.
  3. Multiple-key operations, transactions, or Lua scripts involving multiple keys are used with key names that do not have an explicit, or the same, hash tag.

The third case is not handled by Redis Cluster: the application must be modified so that it either does not use multi-key operations or only uses them in the context of the same hash tag.

Cases 1 and 2 are covered, so we’ll focus on those two. They are handled in the same way, so no distinction will be made in the documentation.
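
The hash-tag rule from case 2 can be verified with a short sketch of the slot computation. Redis Cluster hashes only the substring between the first { and the next } (when non-empty), using CRC16 of the key modulo 16384; the CRC16 variant below (CCITT/XModem) matches the one named in the Cluster specification:

```ruby
# CRC16 (CCITT/XModem variant used by Redis Cluster) and the hash-tag rule.
def crc16(str)
  crc = 0
  str.each_byte do |b|
    crc ^= b << 8
    8.times do
      crc = ((crc << 1) ^ ((crc & 0x8000).zero? ? 0 : 0x1021)) & 0xFFFF
    end
  end
  crc
end

def hash_slot(key)
  s = key.index("{")
  if s
    e = key.index("}", s + 1)
    key = key[(s + 1)...e] if e && e > s + 1  # non-empty hash tag only
  end
  crc16(key) % 16384
end

# Keys sharing the {user:1000} hash tag land in the same slot, so the
# SUNION in case 2 above is legal in Redis Cluster.
hash_slot("{user:1000}.foo") == hash_slot("{user:1000}.bar") # => true
```

Both keys reduce to hashing "user:1000", which is exactly why multi-key operations restricted to one hash tag keep working after the migration.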

Assuming you have your preexisting data set split into N masters, where N=1 if you have no preexisting sharding, the following steps are needed in order to migrate your data set to Redis Cluster:

  1. Stop your clients. No automatic live migration to Redis Cluster is currently possible. You may be able to orchestrate a live migration in the context of your own application / environment.
  2. Generate an append only file for each of your N masters using the BGREWRITEAOF command, and wait for the AOF files to be completely generated.
  3. Save your AOF files, aof-1 to aof-N, somewhere. At this point you can stop your old instances if you wish (this is useful since in non-virtualized deployments you often need to reuse the same computers).
  4. Create a Redis Cluster composed of N masters and zero slaves. You’ll add slaves later. Make sure all your nodes are using the append only file for persistence.
  5. Stop all the cluster nodes and substitute their append only files with your pre-existing ones: aof-1 for the first node, aof-2 for the second node, up to aof-N.
  6. Restart your Redis Cluster nodes with the new AOF files. They’ll complain that there are keys that should not be there according to their configuration.
  7. Use the redis-trib fix command to fix the cluster, so that keys are migrated according to the hash slots each node is authoritative for.
  8. Use redis-trib check at the end to make sure your cluster is ok.
  9. Restart your clients, modified to use a Redis Cluster aware client library.

There is an alternative way to import data from external instances to a Redis Cluster, which is to use the redis-trib import command.

The command moves all the keys of a running instance to the specified pre-existing Redis Cluster (deleting the keys from the source instance). Note, however, that if you use a Redis 2.8 instance as the source, the operation may be slow, since 2.8 does not implement migrate connection caching; you may therefore want to restart your source instance with a Redis 3.x version before performing the operation.


MySQL reports ERROR 1366 (HY000): Incorrect string value: ‘\xE5\x8F\xB0\xE5\xBC\x8F…’ when inserting Chinese text

Symptom: inserting rows fails with an error:

mysql> insert into 学生表 values ("张三丰","武当山"),("章司封","二郎山");
ERROR 1366 (HY000): Incorrect string value: '\xE5\xBC\xA0\xE4\xB8\x89...' for column '姓名' at row 1

Analysis:

mysql> show create table 学生表 ;
| Table | Create Table |
| 学生表 | CREATE TABLE `学生表` (
`姓名` char(20) CHARACTER SET latin1 DEFAULT NULL,
`家庭地址` char(100) CHARACTER SET latin1 DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8 |

1 row in set (0.01 sec)
As you can see, although the table’s default charset was changed, the columns’ charsets were not, so the columns need to be changed as well.
(1) Change the 姓名 column:
mysql> alter table 学生表 change 姓名 姓名 char(20) character set utf8;
Query OK, 0 rows affected (0.52 sec)
Records: 0 Duplicates: 0 Warnings: 0
(2) Change the 家庭地址 column:
mysql> alter table 学生表 change 家庭地址 家庭地址 char(100) character set utf8;
Query OK, 0 rows affected (0.34 sec)
Records: 0 Duplicates: 0 Warnings: 0
(3) Confirm the column charsets:
mysql> show create table 学生表 ;

| Table | Create Table |

| 学生表 | CREATE TABLE `学生表` (
`姓名` char(20) DEFAULT NULL,
`家庭地址` char(100) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8 |

1 row in set (0.00 sec)
(4) Re-run the insert; it now succeeds:
mysql> insert into 学生表 values ("张三丰","武当山"),("章司封","二郎山");
Query OK, 2 rows affected (0.04 sec)
Records: 2 Duplicates: 0 Warnings: 0
(5) Check the inserted data:
mysql> select * from 学生表;
+-----------+--------------+
| 姓名      | 家庭地址     |
+-----------+--------------+
| 张三丰    | 武当山       |
| 章司封    | 二郎山       |
+-----------+--------------+
2 rows in set (0.00 sec)
(6) Summary
1) Change a table’s default character set:
ALTER TABLE `test` DEFAULT CHARACTER SET utf8;
This changes the default charset of table test to utf8.
2) Change a column’s character set:
ALTER TABLE `test` CHANGE `name` `name` char(40) CHARACTER SET utf8 NOT NULL;
This changes the charset of the name column in table test to utf8.
3) Check the column character sets:
show create table `test`;


Configuring Nextcloud/ownCloud to allow uploads larger than 512MB

By default, Nextcloud caps uploads at 512MB. You can raise this limit as long as your file system and operating system allow it. Put another way, the largest file you can upload also depends on your browser and operating system:

  • 32-bit systems can upload files smaller than 2GB
  • IE6 through IE8 can upload files smaller than 2GB
  • IE9 through IE11 can upload files smaller than 4GB

64-bit systems allow larger uploads; exactly how large still depends on your operating system version.

The Nextcloud sync client is not subject to this upload limit, because it splits files into chunks before uploading.

System settings

  • Make sure the installed PHP version is 5.6.6 or later; the latest PHP release is recommended
  • Disable user quotas by setting every user’s quota to “Unlimited”
  • Server requirements: the temporary file space (the /tmp directory) and its partition must be large enough to hold several parallel uploads from several users. For example, with a 10GB upload limit and about 100 users uploading at the same time (usually it won’t be that many), at least 100×10GB of space is needed for temporary files (this depends on how the Linux temporary partition is set up)

Configuring your web server

Nextcloud controls the upload limit through the .htaccess file in its installation root. Because php-fpm cannot read .htaccess files, the relevant PHP settings must instead go into nextcloud/.user.ini.

Setting the following parameters in the .htaccess file raises the upload limit to 16GB:

php_value upload_max_filesize = 16G
php_value post_max_size = 16G

Of course, the exact value depends on your needs.

Another factor that limits large uploads is the PHP timeout. If you see timeout entries in the logs, raise the PHP timeout values:

php_value max_input_time 3600
php_value max_execution_time 3600

Speaking of timeouts: PHP is not the only thing with a timeout; the HTTP server has them too. For example, Apache’s mod_reqtimeout module can also make large uploads fail. If you use this module and run into timeout problems, consider disabling it in the Apache configuration:

Disable it with commands:

a2dismod reqtimeout
service apache2 restart

Or simply delete the module’s configuration files:

rm /etc/apache2/mods-enabled/reqtimeout.*
service apache2 restart

(In practice Apache may have been installed differently, so the way to disable this module may differ as well.)

Other limits

Besides the settings described above, some HTTP server configuration options can also affect large uploads:

Apache

  • LimitRequestBody (limit on the request body)
  • SSLRenegBufferSize (buffer size)

1. LimitRequestBody: this setting is typically relevant when Apache is used as a reverse proxy. Add a line like the following to the Apache configuration (the value is in bytes):

LimitRequestBody 102400   # allow uploads of at most 100KB

2. SSLRenegBufferSize: before an SSL renegotiation completes, Apache buffers the request in memory; this setting determines that buffer’s size:

SSLRenegBufferSize 262144  # buffer size of 262144 bytes

Apache with mod_fcgid

Nginx

Several Nginx configuration options affect file uploads:

  • client_max_body_size (maximum request body size). Example:
    client_max_body_size 512M; # maximum request body of 512M
  • fastcgi_buffers (how many buffers, and of what size, to use locally for buffering FastCGI responses). Example:
    fastcgi_buffers 64 4K; # allocate 64 buffers of 4k each for PHP responses under 256k
  • fastcgi_read_timeout (response timeout for the FastCGI server). Example:
    fastcgi_read_timeout 60; # timeout of 1 minute
  • client_body_temp_path (where POST upload bodies are stored temporarily). Example:
    client_body_temp_path /spool/nginx/client_temp 3 2; # temp path for request bodies; 3 and 2 are the digit counts of the nested temp directory names
  • fastcgi_request_buffering (a new option introduced in 1.7.11), usually turned off:
    fastcgi_request_buffering off;

Make sure client_body_temp_path points to a partition with enough space for the files being uploaded, and that it is on the same partition as upload_tmp_dir or the temp directory (see below). For best performance, put them on a separate hard drive dedicated to swap and temporary storage.

If Nginx acts as a reverse proxy, two more settings are relevant:

  • proxy_buffering
  • proxy_max_temp_file_size

PHP configuration

If you prefer not to use Nextcloud’s .htaccess or .user.ini files, you can modify the PHP configuration directly instead. If you do, be sure to comment out the upload-related lines in .htaccess. If you run Nextcloud on a 32-bit system, any open_basedir directive in your php.ini must also be commented out.

Set the following two parameters in php.ini to raise the upload limit to 16G (or another value):

upload_max_filesize = 16G
post_max_size = 16G

Set the location of the temporary directory PHP should use, for example:

upload_tmp_dir = /var/big_temp_file/

Output buffering must be turned off in .htaccess, .user.ini, or php.ini, or memory-related errors will be returned:

output_buffering = 0

Nextcloud configuration

If you cannot modify php.ini, you can instead set upload_tmp_dir via config.php, for example by adding a line like this:

'tempdirectory' => '/var/big_temp_file/',

See the official documentation on config.php for details.

If the session_lifetime setting is configured in config.php (see the config.php parameters), make sure it is not too low: it must be at least as large as the maximum upload time in seconds. If unsure, remove it from the configuration entirely to reset it to the default shown in config.sample.php.

Setting the upload limit from the admin page

After all of the above, the simplest way is still to change the upload size directly on the admin page (this route is subject to everything described above):

The prerequisite is that the .htaccess file works properly.

You may, however, run into the awkward situation where the change cannot be saved; setting the owner of the .htaccess file to www-data fixes it:

chown www-data .htaccess

Setting the read-only attribute on a MySQL slave

In MySQL, both data migrations and configuring a read-only slave involve the read-only state and the master-slave relationship.

Actual testing shows that, for a standalone MySQL instance or for a master, the following operations put it into read-only state:
Commands to set MySQL to read-only:
# mysql -uroot -p
mysql> show global variables like '%read_only%';
mysql> flush tables with read lock;
mysql> set global read_only=1;
mysql> show global variables like '%read_only%';

Commands to switch MySQL from read-only back to read-write:
mysql> unlock tables;
mysql> set global read_only=0;

For a slave that must keep replicating from its master, only one command is needed to make it read-only:
mysql> set global read_only=1;

To switch the slave back to read-write:
mysql> set global read_only=0;

The read/write state of the database is controlled mainly by the global read_only variable. By default a database handles both reads and writes, so read_only is 0 (false), and both local users and remote users can read and write. To make the instance read-only, set read_only to 1 (true). There are two caveats about read_only=1:
1. read_only=1 does not interfere with slave replication. After setting read_only=1 on a MySQL slave, "show slave status\G" shows that the slave still fetches the master’s logs and applies them, keeping master and slave consistent.
2. read_only=1 blocks data modifications by ordinary users, but not by users with the SUPER privilege. With read_only=1 set, ordinary application users get a “database is read-only” error when running INSERT, UPDATE, DELETE, or other data-changing DML, but a SUPER-privileged user (for example root, logged in locally or remotely) can still run DML that changes data.
Locking the tables:
To ensure that no user at all, including users with the SUPER privilege, can write, you also need to lock every table for reading with "flush tables with read lock;". After that, even a SUPER-privileged user who tries a data-changing operation gets an error saying the table is locked and cannot be modified.

Together, the two commands "set global read_only=1;" and "flush tables with read lock;" guarantee the database is fully read-only and no data can change. During a MySQL data migration, this is how you keep the master free of any data changes.

However, because the global read lock is very strict, running it on a slave means the slave can still fetch binlogs from the master but cannot apply them, so no data changes happen on the slave and replication effectively stops. Running "unlock tables;" releases the global read lock, after which the slave applies the binlogs it fetched from the master and replication catches up again.

To keep replication running continuously while the slave stays read-only, make sure that SUPER-privileged users such as root can only log in locally on the slave and do not change data, and grant remote application users only the privileges they need (select, insert, update, delete) without SUPER. Then setting read_only=1 on the slave both preserves replication and makes the slave read-only.

Correspondingly, the read_only=1 read-only mode is undone by setting read_only=0, and the global lock "flush tables with read lock;" is undone by "unlock tables;".

Of course, with read_only=1 set, all SELECT queries still work normally.


The sync_binlog parameter in MySQL

sync_binlog: this parameter is critically important to MySQL. It affects not only the performance cost the binlog imposes on MySQL, but also the durability of MySQL’s data. Its settings mean the following:

sync_binlog=0: after a transaction commits, MySQL does not issue fsync or any other disk-sync instruction to flush the contents of binlog_cache to disk; it lets the filesystem decide when to sync, or syncs only when the cache fills up.

sync_binlog=n: after every n transaction commits, MySQL issues an fsync-like disk-sync instruction that forces the data in binlog_cache to disk.

The MySQL default is sync_binlog=0, i.e. no forced disk flushes at all. Performance is best in this mode, but so is the risk: if the system crashes, all binlog data still in binlog_cache is lost. Setting it to 1 is the safest but costs the most performance: even if the system crashes, at most one uncompleted transaction in binlog_cache is lost, with no real impact on the actual data.

Experience and testing show that, for systems with highly concurrent transactions, write performance with sync_binlog=0 versus sync_binlog=1 can differ by a factor of 5 or more.
