Testing disk speed with the dd command

Reposted from https://blog.csdn.net/English0523/article/details/78646924

Test method: use dd to write to the disk sequentially, bypassing the in-memory write cache, writing 8k per block for 200,000 blocks, producing a 1.6 GB file.
Test command: dd if=/dev/zero of=/data01/test.dbf bs=8k count=200000 conv=fdatasync

Using dd correctly for disk read/write speed tests
dd is a very useful Linux/UNIX command. It copies a file in blocks of a specified size and can apply conversions while copying, which makes it handy for testing a disk's sequential read/write throughput. It can write to regular files or to raw devices.
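
The examples below only cover writes; here is a minimal sequential-read sketch for the other direction. The device name /dev/sda is a placeholder, and dropping the page cache first avoids measuring RAM instead of the disk:
sync; echo 3 > /proc/sys/vm/drop_caches          # drop cached pages so reads actually hit the disk (run as root)
dd if=/dev/sda of=/dev/null bs=8k count=200000   # sequential read of ~1.6 GB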

dd syntax
———————————————————
Function: read, convert, and output data.
Syntax: dd [bs=<bytes>][cbs=<bytes>][conv=<keyword>][count=<blocks>][ibs=<bytes>][if=<file>][obs=<bytes>][of=<file>][seek=<blocks>][skip=<blocks>][--help][--version]
Notes: dd reads data from standard input or a file, converts it according to the given options, and writes it to a file, a device, or standard output.
Parameters:
bs=<bytes> set both ibs (input) and obs (output) to the given block size.
cbs=<bytes> convert this many bytes at a time.
conv=<keyword> specify how the file is converted.
count=<blocks> copy only this many blocks.
ibs=<bytes> bytes read per block.
if=<file> read from the given file.
obs=<bytes> bytes written per block.
of=<file> write to the given file.
seek=<blocks> skip this many blocks at the start of the output.
skip=<blocks> skip this many blocks at the start of the input.
--help show help.
--version show version information.

Commonly used dd parameters explained
———————————————————
if=xxx read from xxx; e.g. if=/dev/zero, a device that supplies an endless stream of zeros (generates no disk read I/O).
of=xxx write to xxx; can be a regular file or a raw device. E.g. of=/dev/null, the "black hole", which behaves like a write-only file: everything written to it is discarded (generates no disk write I/O).
bs=8k the size of each read or write, i.e. one block.
count=xxx the total number of blocks to copy.

To keep the OS write cache from skewing the results, use sync, fsync, or fdatasync
———————————————————
For background on sync, fsync and fdatasync, see: http://elf8848.iteye.com/blog/2088986

dd bs=8k count=4k if=/dev/zero of=test.log conv=fsync
dd bs=8k count=4k if=/dev/zero of=test.log conv=fdatasync
dd bs=8k count=4k if=/dev/zero of=test.log oflag=dsync
dd bs=8k count=4k if=/dev/zero of=test.log              # write cache in effect by default
dd bs=8k count=4k if=/dev/zero of=test.log conv=sync    # write cache still in effect
dd bs=8k count=4k if=/dev/zero of=test.log; sync        # write cache still in effect

dd bs=8k count=4k if=/dev/zero of=test.log conv=fsync
With this option, dd performs a real sync at the very end, so the measured time reflects actual usage. conv=fsync flushes both the file's data and its metadata to disk (metadata includes the size, the access times st_atime & st_mtime, and so on). Because data and metadata usually live in different places on the disk, fsync needs at least two write I/Os; in practice fsync and fdatasync differ little. (Important: this is the most representative measurement.)

dd bs=8k count=4k if=/dev/zero of=test.log conv=fdatasync
With this option, dd likewise performs a real sync at the end, giving a realistic time. conv=fdatasync flushes only the file's data to disk; fsync and fdatasync differ little. (Important: also highly representative.)

dd bs=8k count=4k if=/dev/zero of=test.log oflag=dsync
With this option, every 8k block must be written to disk before the next 8k is read, repeated 4k (4096) times. This is the slowest mode of all.
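
For comparison (an addition, not in the original list): oflag=direct opens the output with O_DIRECT and bypasses the page cache entirely, another common way to avoid cache effects. Note that some filesystems do not support O_DIRECT:
dd bs=8k count=4k if=/dev/zero of=test.log oflag=direct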

dd bs=8k count=4k if=/dev/zero of=test
With no cache-related option, the OS write cache is in effect: dd considers the write complete as soon as the data lands in the write cache. A system daemon (traditionally called update) calls sync periodically, typically every 30 seconds, to flush the cache to disk. Because the cache absorbs the writes, the measured speed is unrealistically high, e.g.:
163840000 bytes (164 MB) copied, 0.742906 seconds, 221 MB/s

dd bs=8k count=4k if=/dev/zero of=test conv=sync
Despite the name, conv=sync does not flush anything: it merely pads each input block with NULs up to the block size, so the write cache is still in effect and the result is just as misleading as the default.

dd bs=8k count=4k if=/dev/zero of=test; sync
Effectively the same as the first, uncached case: the semicolon just separates two independent commands. By the time sync starts writing data to disk for real, dd has already printed its misleading "write speed" on the screen, so you still don't get the true figure.
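
If you do use the trailing sync, a sketch that times the two commands together at least yields a usable total duration (you then divide the bytes written by the elapsed time yourself):
time ( dd bs=8k count=4k if=/dev/zero of=test.log; sync )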

Raw device tests
———————-
1. Raw device to filesystem:
dd if=/dev/rsd1b of=/backup/df1.dbf bs=8k skip=8 count=3841
2. Filesystem to raw device:
dd if=/backup/df1.dbf of=/dev/rsd2b bs=8k seek=8

For more on raw devices, see: http://czmmiao.iteye.com/blog/1748748


How to enable the RabbitMQ web management plugin

rabbitmq-plugins enable rabbitmq_management

Once enabled, browse to port 15672 for the web UI; the default username is guest, password guest.
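
One caveat worth adding: since RabbitMQ 3.3, the guest account can only log in from localhost. For remote access to the UI, a sketch for creating an administrator account (user name and password here are placeholders):
rabbitmqctl add_user admin StrongPass123
rabbitmqctl set_user_tags admin administrator
rabbitmqctl set_permissions -p / admin ".*" ".*" ".*"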


Online LVM expansion

Reposted from http://www.voidcn.com/article/p-uxhrkuzs-bsd.html

1. Create the PV
pvcreate /dev/sda   # the newly added disk

i.e.: pvcreate /dev/sda

2. Add the new PV to the VG
vgextend <VG name> /dev/sda

i.e.: vgextend centos /dev/sda

3. Extend the logical volume and grow the XFS filesystem
lvextend -r -l +100%FREE /dev/<VG name>/<LV name>

i.e.: lvextend -r -l +100%FREE /dev/centos/centos-home
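
To confirm the expansion worked (the -r flag already grew the filesystem along with the LV), something like the following; the /home mount point is an assumption based on the centos-home name above:
pvs; vgs; lvs   # the VG should show the new PV and the LV its new size
df -h /home     # the filesystem should report the enlarged capacity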


How to fix being unable to create a PV on CentOS 7

Reposted from https://www.cnblogs.com/daynote/p/9747053.html

On CentOS 7 the PV sometimes cannot be created out of the box (the original post left the cause unverified; judging by the error, LVM's filter is excluding the device, typically because of a leftover signature on the disk). The workaround follows.

Default behaviour:

[root@compute1 ~]# pvcreate /dev/sdb
Device /dev/sdb excluded by a filter

The fix:

[root@compute1 ~]# dd if=/dev/urandom of=/dev/sdb bs=512 count=64
64+0 records in
64+0 records out
32768 bytes (33 kB) copied, 0.00760562 s, 4.3 MB/s
[root@compute1 ~]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created.
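
The dd above works by destroying the stale partition-table or filesystem signature that LVM's filter was rejecting. A more targeted equivalent (my suggestion, not the original author's) is wipefs:
wipefs -a /dev/sdb   # erase all detected signatures on the disk
pvcreate /dev/sdb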


A collection of notes on OpenStack live migration

I am currently learning Nova live migration and have collected quite a few resources, listed below:

KVM Introduction (8): migrating QEMU/KVM and Nova VMs with libvirt [Nova Libvirt QEMU/KVM Live Migration]

https://www.cnblogs.com/sammyliu/p/4572287.html

KVM online migration

http://bbs.chinaunix.net/forum.php?mod=viewthread&tid=4072829&ordertype=2

Resizing a cloud instance in OpenStack

http://blog.itpub.net/30345407/viewspace-2084838/

Troubleshooting OpenStack live-migration failures

https://zhuanlan.zhihu.com/p/27275895


Configuring libvirtd for remote TCP connections

Reposted from https://blog.51cto.com/xiaoli110/619709

1. In /etc/libvirt/libvirtd.conf set:
listen_tls = 0
listen_tcp = 1
tcp_port = "16509"
auth_tcp = "sasl"

2. In /etc/sysconfig/libvirtd, uncomment LIBVIRTD_ARGS="--listen".

3. In /etc/libvirt/qemu.conf, set the listen address to 0.0.0.0 and uncomment the password option.

4. Add a user by running:
# saslpasswd2 -a libvirt admin
Password: xxxxxx
Again (for verification): xxxxxx
The users already created can be listed with:
# sasldblistusers2 -f /etc/libvirt/passwd.db
admin@<hostname>: userPassword
5. Restart the libvirtd service. Done!
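
A minimal connection test from a remote machine (HOST is a placeholder for the server's address; virsh will prompt for the SASL credentials created in step 4):
virsh -c qemu+tcp://HOST/system list --all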


Fixing the 403 error from nova-status upgrade check

While installing OpenStack Stein, I finished setting up the nova component and hit a 403 error during the verification step.

[root@controller ~]# nova-status upgrade check
Error:
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 515, in main
ret = fn(*fn_args, **fn_kwargs)
File "/usr/lib/python2.7/site-packages/oslo_upgradecheck/upgradecheck.py", line 99, in check
result = func(self)
File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 160, in _check_placement
versions = self._placement_get("/")
File "/usr/lib/python2.7/site-packages/nova/cmd/status.py", line 150, in _placement_get
return client.get(path, raise_exc=True).json()
File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 375, in get
return self.request(url, 'GET', **kwargs)
File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 237, in request
return self.session.request(url, method, **kwargs)
File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 890, in request
raise exceptions.from_response(resp, method, url)
Forbidden: Forbidden (HTTP 403)

This error stumped me for quite a while. Of the logs I checked, only placement's said anything useful:

[root@controller ~]# tail -f /var/log/placement/placement-api.log 
AH01630: client denied by server configuration: /usr/bin/placement-api
AH01630: client denied by server configuration: /usr/bin/placement-api
AH01630: client denied by server configuration: /usr/bin/placement-api
AH01630: client denied by server configuration: /usr/bin/placement-api
AH01630: client denied by server configuration: /usr/bin/placement-api
AH01630: client denied by server configuration: /usr/bin/placement-api
AH01630: client denied by server configuration: /usr/bin/placement-api
AH01630: client denied by server configuration: /usr/bin/placement-api
AH01630: client denied by server configuration: /usr/bin/placement-api
AH01630: client denied by server configuration: /usr/bin/placement-api
AH01630: client denied by server configuration: /usr/bin/placement-api
AH01630: client denied by server configuration: /usr/bin/placement-api

nova's own log offered no clue at all; only the placement log revealed that this was a permissions problem.

Running the placement check itself succeeded:

[root@controller ~]# placement-status upgrade check
+----------------------------------+
| Upgrade Check Results |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success |
| Details: None |
+----------------------------------+
| Check: Incomplete Consumers |
| Result: Success |
| Details: None |
+----------------------------------+

So the failure was specific to the nova check. After a lot of searching, I learned this is a known bug, and sure enough it was an Apache directory-permission problem; following this guide's method solved it:

[root@dlp ~(keystone)]# vi /etc/httpd/conf.d/00-nova-placement-api.conf
# add near line 15
<Directory /usr/bin>
Require all granted
</Directory>
</VirtualHost>
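
The new directive only takes effect once Apache rereads its configuration; on CentOS 7 that is typically:
systemctl restart httpd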

Running the nova check then succeeded as well:

[root@controller ~]# nova-status upgrade check
+--------------------------------+
| Upgrade Check Results |
+--------------------------------+
| Check: Cells v2 |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Placement API |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Request Spec Migration |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Console Auths |
| Result: Success |
| Details: None |
+--------------------------------+

The guide is here:

https://www.server-world.info/en/note?os=CentOS_7&p=openstack_stein&f=8


White screen when installing FreeNAS 11.2

I grabbed a fairly low-spec PC to install FreeNAS 11.2. After reaching the boot menu, I pressed Enter to select option 1 and install the system, and the screen promptly went white.

Solution:

Following this person's method, I changed a motherboard setting to boot with the integrated graphics (a discrete card had been in use before), and the problem was solved.

Original thread: https://www.ixsystems.com/community/threads/white-screen-when-trying-to-install-freenas.73069/

The reply in question:

Hi,
I’ve found if you use integrated graphics first install is unimpeded. Running through any peripheral graphics hiccups the install. Hope this helps!

The meaning of each Nginx configuration directive

#user nobody; #the user/group the workers run as; default nobody nobody
worker_processes 4; #number of worker processes to spawn; default 1
worker_cpu_affinity 00000001 00000010 00000100 00001000; #pin each worker process to one CPU
worker_rlimit_nofile 102400; #raise the open-file-descriptor limit for worker processes, so the limit can be raised without restarting the master process

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info; #log path and level; may appear in the global, http, or server block; levels: debug|info|notice|warn|error|crit|alert|emerg

#pid logs/nginx.pid; #where the nginx master pid file is stored

events {
accept_mutex on; #serialize connection acceptance to prevent the thundering-herd problem; default on
multi_accept on; #whether a worker accepts multiple new connections at once; default off
use epoll; #use the epoll event model (high performance on Linux 2.6+); options: select|poll|kqueue|epoll|resig|/dev/poll|eventport
worker_connections 102400; #max connections per worker; default 512
}

http {
include mime.types; #map of file extensions to MIME types
default_type application/octet-stream; #default MIME type; default text/plain
lua_package_path "/usr/local/lib/lua/?.lua;;"; #location of lua libraries
charset utf-8; #character set

server_names_hash_bucket_size 128; #hash table holding server names
client_header_buffer_size 4k; #buffer for request headers, 4K; if a header exceeds it and large_client_header_buffers is not configured, nginx returns 400
large_client_header_buffers 4 32k; #if even these buffers cannot hold it, nginx returns 414 (request line) / 400 (request headers)
client_max_body_size 300m; #maximum size of a single client request body

tcp_nodelay on; #improve responsiveness of small writes
client_body_buffer_size 512k; #buffer size for client request bodies (for request-heavy workloads)

proxy_connect_timeout 5s; #timeout for establishing a connection to the upstream (proxy connect timeout)
proxy_read_timeout 60s; #how long to wait for the upstream to respond after connecting (proxy read timeout)
proxy_send_timeout 5s; #timeout for sending data to the upstream (proxy send timeout)

proxy_buffer_size 16k; #buffer for the first part (headers) of the upstream response
proxy_buffers 4 64k; #number and size of buffers holding the response body fetched from the upstream
proxy_busy_buffers_size 128k; #total size of buffers that may be busy sending to the client must not exceed this
proxy_temp_file_write_size 128k; #if the response is large, nginx writes it to a temp file; as busy buffers drain, data is read back from the temp file until the transfer completes

gzip on; #nginx can compress static resources
gzip_min_length 1k;
gzip_buffers 4 16k;
gzip_http_version 1.1;
gzip_comp_level 2; #compression level 1-9: lower levels are faster but compress less; higher levels save more bandwidth at higher CPU cost
gzip_types text/plain application/x-javascript text/css application/xml; #MIME types to compress: text, js, css, xml
gzip_vary on; #adds a Vary header to responses so reverse proxies cache compressed and uncompressed variants correctly

#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';

#access_log logs/access.log main;
#access_log off; #disable the access log

#log format:
# ip, remote user, local time, request URL, status, bytes sent, referer, client user agent, request time, forwarded-for
log_format myFormat '$remote_addr - $remote_user [$time_local] $request $status $body_bytes_sent $http_referer $http_user_agent $request_time $http_x_forwarded_for'; #custom format
access_log logs/access.log myFormat; #combined is the default log format

sendfile on; #allow sendfile() for file transfers; default off; valid in http, server, and location blocks
#sendfile_max_chunk 100k; #limit per sendfile() call per worker; default 0, i.e. no limit

#tcp_nopush on;
tcp_nopush on; #prevent network congestion by coalescing packets

#keepalive_timeout 0;
keepalive_timeout 65; #keep-alive timeout; default 75s; valid in http, server, and location blocks

#gzip on;

#upstream servers
upstream myweb {
#load-balancing algorithm; default round-robin
ip_hash; #route each request by a hash of the client IP so a given visitor always hits the same backend, which sidesteps session problems; ip_hash does not support weight or backup

server 192.168.5.91:7878 max_fails=2 fail_timeout=10s;
server 192.168.5.92:7878 max_fails=2 fail_timeout=10s;

#server 192.168.5.91:7878 max_fails=2 fail_timeout=10s weight=1;
#server 192.168.5.92:7878 max_fails=2 fail_timeout=10s weight=2;
#server 192.168.5.90:7878 backup; #hot standby
}

#error_page 404 https://error.page; #error page

server {
keepalive_requests 120; #max requests per keep-alive connection
listen 9080; #listening port
server_name localhost; #virtual host name to match (here localhost)
#charset koi8-r;

#access_log logs/host.access.log main;

#location ~*^.+$ { #URL filter, regex matched; ~ is case-sensitive, ~* case-insensitive
# #root path; #document root
# #index vv.txt; #default page
# proxy_pass http://myweb; #forward requests to the server list defined by myweb
# deny 127.0.0.1; #denied IP
# allow 172.18.5.54; #allowed IP
#}

location /test {

proxy_next_upstream http_502 http_504 error timeout invalid_header; #conditions under which a request is retried on the next upstream
proxy_next_upstream_timeout 10s;
proxy_next_upstream_tries 2;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
add_header upstream_addr $upstream_addr; #expose which upstream served the request

proxy_pass http://myweb;

}

#nginx home page
location / {
root html;
index index.html index.htm;
}

#store a value in redis via a lua script
location /lua/set {
default_type 'text/plain';
content_by_lua_file conf/lua/setKeyValue.lua;
}

#fetch a value from redis via a lua script
location /lua/get {
default_type 'text/plain';
content_by_lua_file conf/lua/getKey.lua;
}

#serve static assets directly
location ~ .*\.(html|htm|gif|jpg|jpeg|bmp|png|ico|txt|js|css)$ {
root /var/local/static;
expires 30d;
}

#error_page 404 /404.html;

# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}

}
}
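
After editing a configuration like the one above, a quick way to validate it and apply it without dropping connections:
nginx -t          # check the syntax of the edited config
nginx -s reload   # gracefully restart the workers with the new config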


Swift returns 503 after reconfiguration

The old swift host was being retired and replaced with a new one. After configuring the new host, verification from the controller node failed:

Sep 27 02:57:04 controller proxy-server: Auth Token confirmed use of None apis
Sep 27 02:57:04 controller proxy-server: ERROR with Account server 192.168.0.102:6002/sda4 re: Trying to HEAD /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef: Connection refused (txn: tx1f433d3167494f5a8b1e6-005d8db2c0) (client_ip: 192.168.0.101)
Sep 27 02:57:04 controller proxy-server: ERROR with Account server 192.168.0.102:6002/sda7 re: Trying to HEAD /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef: Connection refused (txn: tx1f433d3167494f5a8b1e6-005d8db2c0) (client_ip: 192.168.0.101)
Sep 27 02:57:04 controller proxy-server: ERROR with Account server 192.168.0.102:6002/sda5 re: Trying to HEAD /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef: Connection refused (txn: tx1f433d3167494f5a8b1e6-005d8db2c0) (client_ip: 192.168.0.101)
Sep 27 02:57:04 controller proxy-server: ERROR with Account server 192.168.0.102:6002/sda6 re: Trying to HEAD /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef: Connection refused (txn: tx1f433d3167494f5a8b1e6-005d8db2c0) (client_ip: 192.168.0.101)
Sep 27 02:57:04 controller proxy-server: Account HEAD returning 503 for [] (txn: tx1f433d3167494f5a8b1e6-005d8db2c0) (client_ip: 192.168.0.101)
Sep 27 02:57:04 controller proxy-server: 192.168.0.101 192.168.0.101 27/Sep/2019/06/57/04 HEAD /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef HTTP/1.0 503 - python-swiftclient-2.6.0 28cf7a4fc5d147f0... - - - tx1f433d3167494f5a8b1e6-005d8db2c0 - 0.1169 - - 1569567424.278646946 1569567424.395510912 -
Sep 27 02:57:04 controller proxy-server: Auth Token confirmed use of None apis
Sep 27 02:57:04 controller proxy-server: ERROR with Account server 192.168.0.102:6002/sda4 re: Trying to HEAD /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef: Connection refused (txn: tx5e4261a5720844fcb5037-005d8db2c0) (client_ip: 192.168.0.101)
Sep 27 02:57:04 controller proxy-server: ERROR with Account server 192.168.0.102:6002/sda7 re: Trying to HEAD /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef: Connection refused (txn: tx5e4261a5720844fcb5037-005d8db2c0) (client_ip: 192.168.0.101)
Sep 27 02:57:04 controller proxy-server: ERROR with Account server 192.168.0.102:6002/sda5 re: Trying to HEAD /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef: Connection refused (txn: tx5e4261a5720844fcb5037-005d8db2c0) (client_ip: 192.168.0.101)
Sep 27 02:57:04 controller proxy-server: ERROR with Account server 192.168.0.102:6002/sda6 re: Trying to HEAD /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef: Connection refused (txn: tx5e4261a5720844fcb5037-005d8db2c0) (client_ip: 192.168.0.101)
Sep 27 02:57:04 controller proxy-server: Account HEAD returning 503 for [] (txn: tx5e4261a5720844fcb5037-005d8db2c0) (client_ip: 192.168.0.101)
Sep 27 02:57:04 controller proxy-server: 192.168.0.101 192.168.0.101 27/Sep/2019/06/57/04 HEAD /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef HTTP/1.0 503 - python-swiftclient-2.6.0 28cf7a4fc5d147f0... - - - tx5e4261a5720844fcb5037-005d8db2c0 - 0.1174 - - 1569567424.487860918 1569567424.605252028 -
Sep 27 02:57:05 controller proxy-server: ERROR with Account server 192.168.0.102:6002/sda7 re: Trying to GET /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef: Connection refused (txn: tx229fd3a27bab49f481dbf-005d8db2c1) (client_ip: 192.168.0.101)
Sep 27 02:57:05 controller proxy-server: ERROR with Account server 192.168.0.102:6002/sda5 re: Trying to GET /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef: Connection refused (txn: tx229fd3a27bab49f481dbf-005d8db2c1) (client_ip: 192.168.0.101)
Sep 27 02:57:05 controller proxy-server: ERROR with Account server 192.168.0.102:6002/sda4 re: Trying to GET /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef: Connection refused (txn: tx229fd3a27bab49f481dbf-005d8db2c1) (client_ip: 192.168.0.101)
Sep 27 02:57:05 controller proxy-server: ERROR with Account server 192.168.0.102:6002/sda6 re: Trying to GET /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef: Connection refused (txn: tx229fd3a27bab49f481dbf-005d8db2c1) (client_ip: 192.168.0.101)
Sep 27 02:57:05 controller proxy-server: Account GET returning 503 for [] (txn: tx229fd3a27bab49f481dbf-005d8db2c1) (client_ip: 192.168.0.101)
Sep 27 02:57:05 controller proxy-server: 192.168.0.101 192.168.0.101 27/Sep/2019/06/57/05 GET /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef%3Fformat%3Djson HTTP/1.0 503 - python-swiftclient-2.6.0 28cf7a4fc5d147f0... - 118 - tx229fd3a27bab49f481dbf-005d8db2c1 - 0.0432 - - 1569567425.122399092 1569567425.165566921 -
Sep 27 02:57:05 controller proxy-server: ERROR with Account server 192.168.0.102:6002/sda4 re: Trying to HEAD /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef: Connection refused (txn: tx87264f312e8f4bfc80204-005d8db2c1) (client_ip: 192.168.0.101)
Sep 27 02:57:05 controller proxy-server: ERROR with Account server 192.168.0.102:6002/sda5 re: Trying to HEAD /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef: Connection refused (txn: tx87264f312e8f4bfc80204-005d8db2c1) (client_ip: 192.168.0.101)
Sep 27 02:57:05 controller proxy-server: ERROR with Account server 192.168.0.102:6002/sda7 re: Trying to HEAD /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef: Connection refused (txn: tx87264f312e8f4bfc80204-005d8db2c1) (client_ip: 192.168.0.101)
Sep 27 02:57:05 controller proxy-server: ERROR with Account server 192.168.0.102:6002/sda6 re: Trying to HEAD /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef: Connection refused (txn: tx87264f312e8f4bfc80204-005d8db2c1) (client_ip: 192.168.0.101)
Sep 27 02:57:05 controller proxy-server: Account HEAD returning 503 for [] (txn: tx87264f312e8f4bfc80204-005d8db2c1) (client_ip: 192.168.0.101)
Sep 27 02:57:05 controller proxy-server: 192.168.0.101 192.168.0.101 27/Sep/2019/06/57/05 HEAD /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef HTTP/1.0 503 - python-swiftclient-2.6.0 28cf7a4fc5d147f0... - - - tx87264f312e8f4bfc80204-005d8db2c1 - 0.0056 - - 1569567425.260286093 1569567425.265904903 -
Sep 27 02:57:05 controller proxy-server: Auth Token confirmed use of None apis
Sep 27 02:57:05 controller proxy-server: ERROR with Account server 192.168.0.102:6002/sda4 re: Trying to GET /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef: Connection refused (txn: tx04bd4d67dc004488a90f2-005d8db2c1) (client_ip: 192.168.0.101)
Sep 27 02:57:05 controller proxy-server: ERROR with Account server 192.168.0.102:6002/sda7 re: Trying to GET /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef: Connection refused (txn: tx04bd4d67dc004488a90f2-005d8db2c1) (client_ip: 192.168.0.101)
Sep 27 02:57:05 controller proxy-server: ERROR with Account server 192.168.0.102:6002/sda5 re: Trying to GET /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef: Connection refused (txn: tx04bd4d67dc004488a90f2-005d8db2c1) (client_ip: 192.168.0.101)
Sep 27 02:57:05 controller proxy-server: ERROR with Account server 192.168.0.102:6002/sda6 re: Trying to GET /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef: Connection refused (txn: tx04bd4d67dc004488a90f2-005d8db2c1) (client_ip: 192.168.0.101)
Sep 27 02:57:05 controller proxy-server: Account GET returning 503 for [] (txn: tx04bd4d67dc004488a90f2-005d8db2c1) (client_ip: 192.168.0.101)
Sep 27 02:57:05 controller proxy-server: 192.168.0.101 192.168.0.101 27/Sep/2019/06/57/05 GET /v1/AUTH_46890a3c5f1b45a39dc872d4601de7ef%3Fformat%3Djson HTTP/1.0 503 - python-swiftclient-2.6.0 28cf7a4fc5d147f0... - 118 - tx04bd4d67dc004488a90f2-005d8db2c1 - 0.1144 - - 1569567425.274853945 1569567425.389211893 -

I went through a lot of material, which suggested the usual suspects: a typo in the config, a wrong IP or port, and so on. I had hit those before, but this time nothing helped. In the end I came back to the config files themselves. My OpenStack is the 2016 Liberty release (the current release in 2019 is Stein). The controller node's config files were indeed Liberty's, but the storage node's original download link had long since died, and what I fetched was the Stein version. The differences are only a few options, yet I hadn't bothered to compare them carefully. Switching back to the Liberty config files fixed it. Here are the Liberty config files:

[root@compute1 swift]# cat account-server.conf 
[DEFAULT] 
bind_ip = 192.168.0.102
bind_port = 6002
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
[pipeline:main]
pipeline = healthcheck recon account-server
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
[app:account-server]
use = egg:swift#account
[filter:healthcheck]
use = egg:swift#healthcheck
[root@compute1 swift]# 

[root@compute1 swift]# cat container-server.conf 
[DEFAULT]
bind_ip = 192.168.0.102
bind_port = 6001
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
[pipeline:main]
pipeline = healthcheck recon container-server
[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
[app:container-server]
use = egg:swift#container
[filter:healthcheck]
use = egg:swift#healthcheck
[root@compute1 swift]# 


[root@compute1 swift]# cat object-server.conf 
[DEFAULT]
bind_ip = 192.168.0.102
bind_port = 6000
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = true
[pipeline:main]
pipeline = healthcheck recon object-server
[filter:recon] 
use = egg:swift#recon
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock
[app:object-server]
use = egg:swift#object
[filter:healthcheck]
use = egg:swift#healthcheck
[object-auditor]
[object-replicator]
[object-updater]
[root@compute1 swift]#
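
In cases like this, a quick sanity check that the account/container/object servers are actually listening on the ports the ring expects (6000-6002 per the configs above) can save a lot of time:
ss -tlnp | grep -E ':600[0-2]'   # should list the swift account/container/object services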

Admittedly, the root cause was my own unfamiliarity with swift's options.

After fixing the configs, restart the services.

On the storage node:

systemctl restart openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service

On the controller node:

systemctl restart openstack-swift-proxy.service memcached.service

Finally, verify from the controller node:

[root@controller ~]# swift stat
/usr/lib/python2.7/site-packages/keystoneclient/service_catalog.py:196: UserWarning: Providing attr without filter_value to get_urls() is deprecated as of the 1.7.0 release and may be removed in the 2.0.0 release. Either both should be provided or neither should be provided.
'Providing attr without filter_value to get_urls() is '
Account: AUTH_833fe12cb0f741bbbf520c15578e180f
Containers: 1
Objects: 0
Bytes: 0
Containers in policy "policy-0": 1
Objects in policy "policy-0": 0
Bytes in policy "policy-0": 0
X-Account-Project-Domain-Id: default
X-Timestamp: 1569567780.84655
X-Trans-Id: txdbac7f9af0e54b35847ac-005d8dba97
Content-Type: text/plain; charset=utf-8
Accept-Ranges: bytes
[root@controller ~]#

The test passes now.

Swift's sample configuration files can all be pulled from the upstream repository:

https://opendev.org/openstack/swift/src/branch/master/etc
