2 - OpenStack (Rocky) Minimal Environment Installation (including Cinder and Swift)

1-0. Component Overview
The OpenStack Identity service provides a single point of integration for managing authentication, authorization, and a catalog of services.

The Identity service is typically the first service a user interacts with. Once authenticated, an end user can use their identity to access other OpenStack services. Likewise, other OpenStack services leverage the Identity service to ensure users are who they say they are and discover where other services are within the deployment. The Identity service can also integrate with some external user management systems (such as LDAP).

Users and services can locate other services by using the service catalog, which is managed by the Identity service. As the name implies, a service catalog is a collection of available services in an OpenStack deployment. Each service can have one or many endpoints and each endpoint can be one of three types: admin, internal, or public. In a production environment, different endpoint types might reside on separate networks exposed to different types of users for security reasons. For instance, the public API network might be visible from the Internet so customers can manage their clouds. The admin API network might be restricted to operators within the organization that manages cloud infrastructure. The internal API network might be restricted to the hosts that contain OpenStack services. Also, OpenStack supports multiple regions for scalability. For simplicity, this guide uses the management network for all endpoint types and the default RegionOne region. Together, regions, services, and endpoints created within the Identity service comprise the service catalog for a deployment. Each OpenStack service in your deployment needs a service entry with corresponding endpoints stored in the Identity service. This can all be done after the Identity service has been installed and configured.
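As a quick illustration of the catalog concept (assuming the admin credentials configured later in this guide have been loaded), the registered regions, services, and endpoints can be read back with the openstack client:

$ openstack catalog list
$ openstack service list
$ openstack endpoint list

These commands do nothing more than list what has been stored in the Identity service.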

The Identity service contains these components:

Server
A centralized server provides authentication and authorization services using a RESTful interface.
Drivers
Drivers or a service back end are integrated to the centralized server. They are used for accessing identity information in repositories external to OpenStack, and may already exist in the infrastructure where OpenStack is deployed (for example, SQL databases or LDAP servers).
Modules
Middleware modules run in the address space of the OpenStack component that is using the Identity service. These modules intercept service requests, extract user credentials, and send them to the centralized server for authorization. The integration between the middleware modules and OpenStack components uses the Python Web Server Gateway Interface.
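As a rough sketch of how this wiring looks in practice (the filter and pipeline names below are only illustrative; each service ships its own paste configuration), a service's api-paste.ini declares an authtoken filter backed by keystonemiddleware and places it in the WSGI pipeline:

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory

[pipeline:api-keystone]
pipeline = versionnegotiation authtoken context apiapp

The filter validates each incoming token against the central server using the [keystone_authtoken] options, as shown later for Glance.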

1.1 Keystone Installation and Configuration
First, create the database, create the user, and grant it privileges:

[root@controller ~]

# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 12
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> exit
Bye
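Optionally, confirm the new account and grants work before moving on (a quick sanity check that is not part of the official guide; it assumes the password 123456 chosen above):

[root@controller ~]

# mysql -u keystone -p123456 -e "SHOW DATABASES;"

If the keystone database appears in the output, the user and its privileges are in place.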

Install the required packages. Note: when modifying any configuration file, you should add the options shown to the relevant sections rather than rewriting the existing ones. Also, the commented-out parts of the configuration file (the ones containing …) indicate content you need to keep.
Install keystone, httpd, and mod_wsgi (mod_wsgi is what lets Apache serve the Python WSGI application):

[root@controller ~]

# yum install openstack-keystone httpd mod_wsgi
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile

 * base: mirror.sjc02.svwh.net
 * centos-qemu-ev: sjc.edge.kernel.org
 * epel: ewr.edge.kernel.org
 * extras: linux.mirrors.es.net
 * updates: mirror.sjc02.svwh.net
Resolving Dependencies
--> Running transaction check
---> Package httpd.x86_64 0:2.4.6-88.el7.centos will be installed
--> Processing Dependency: httpd-tools = 2.4.6-88.el7.centos for package: httpd-2.4.6-88.el7.centos.x86_64
--> Processing Dependency: /etc/mime.types for package: httpd-2.4.6-88.el7.centos.x86_64
---> Package mod_wsgi.x86_64 0:3.4-18.el7 will be installed
---> Package openstack-keystone.noarch 1:14.1.0-1.el7 will be installed
--> Processing Dependency: python-keystone = 1:14.1.0-1.el7 for package: 1:openstack-keystone-14.1.0-1.el7.noarch
--> Running transaction check
---> Package httpd-tools.x86_64 0:2.4.6-88.el7.centos will be installed
---> Package mailcap.noarch 0:2.1.41-2.el7 will be installed
---> Package python-keystone.noarch 1:14.1.0-1.el7 will be installed
#

Installed:
httpd.x86_64 0:2.4.6-88.el7.centos mod_wsgi.x86_64 0:3.4-18.el7 openstack-keystone.noarch 1:14.1.0-1.el7

Dependency Installed:
MySQL-python.x86_64 0:1.2.5-1.el7 httpd-tools.x86_64 0:2.4.6-88.el7.centos
mailcap.noarch 0:2.1.41-2.el7 python-aniso8601.noarch 0:0.82-3.el7
python-beaker.noarch 0:1.5.4-10.el7 python-editor.noarch 0:0.4-4.el7
python-jwcrypto.noarch 0:0.4.2-1.el7 python-keystone.noarch 1:14.1.0-1.el7
python-ldap.x86_64 0:2.4.15-2.el7 python-mako.noarch 0:0.8.1-2.el7
python-migrate.noarch 0:0.11.0-1.el7 python-oslo-cache-lang.noarch 0:1.30.3-1.el7
python-oslo-concurrency-lang.noarch 0:3.27.0-1.el7 python-oslo-db-lang.noarch 0:4.40.1-1.el7
python-oslo-middleware-lang.noarch 0:3.36.0-1.el7 python-oslo-policy-lang.noarch 0:1.38.1-1.el7
python-paste.noarch 0:1.7.5.1-9.20111221hg1498.el7 python-paste-deploy.noarch 0:1.5.2-6.el7
python-pycadf-common.noarch 0:2.8.0-1.el7 python-routes.noarch 0:2.4.1-1.el7
python-sqlparse.noarch 0:0.1.18-5.el7 python-tempita.noarch 0:0.5.1-8.el7
python2-alembic.noarch 0:0.9.7-1.el7 python2-amqp.noarch 0:2.4.0-1.el7
python2-bcrypt.x86_64 0:3.1.4-4.el7 python2-cachetools.noarch 0:2.1.0-1.el7
python2-click.noarch 0:6.7-8.el7 python2-defusedxml.noarch 0:0.5.0-2.el7
python2-eventlet.noarch 0:0.20.1-6.el7 python2-fasteners.noarch 0:0.14.1-6.el7
python2-flask.noarch 1:1.0.2-1.el7 python2-flask-restful.noarch 0:0.3.6-7.el7
python2-future.noarch 0:0.16.0-7.el7 python2-futurist.noarch 0:1.7.0-1.el7
python2-greenlet.x86_64 0:0.4.12-1.el7 python2-itsdangerous.noarch 0:0.24-14.el7
python2-jinja2.noarch 0:2.10-2.el7 python2-keystonemiddleware.noarch 0:5.2.0-1.el7
python2-kombu.noarch 1:4.2.2-1.el7 python2-ldappool.noarch 0:2.3.1-1.el7
python2-markupsafe.x86_64 0:0.23-16.el7 python2-oauthlib.noarch 0:2.0.1-8.el7
python2-oslo-cache.noarch 0:1.30.3-1.el7 python2-oslo-concurrency.noarch 0:3.27.0-1.el7
python2-oslo-db.noarch 0:4.40.1-1.el7 python2-oslo-messaging.noarch 0:8.1.2-1.el7
python2-oslo-middleware.noarch 0:3.36.0-1.el7 python2-oslo-policy.noarch 0:1.38.1-1.el7
python2-oslo-service.noarch 0:1.31.8-1.el7 python2-osprofiler.noarch 0:2.3.0-1.el7
python2-passlib.noarch 0:1.7.1-1.el7 python2-pycadf.noarch 0:2.8.0-1.el7
python2-pyngus.noarch 0:2.2.4-1.el7 python2-pysaml2.noarch 0:4.5.0-4.el7
python2-qpid-proton.x86_64 0:0.26.0-2.el7 python2-scrypt.x86_64 0:0.8.0-2.el7
python2-sqlalchemy.x86_64 0:1.2.7-1.el7 python2-statsd.noarch 0:3.2.1-5.el7
python2-tenacity.noarch 0:4.12.0-1.el7 python2-vine.noarch 0:1.2.0-1.el7
python2-webob.noarch 0:1.8.2-1.el7 python2-werkzeug.noarch 0:0.14.1-3.el7
qpid-proton-c.x86_64 0:0.26.0-2.el7

Complete!

Edit the configuration file. The official documentation reads:
Edit the /etc/keystone/keystone.conf file and complete the following actions:

In the [database] section, configure database access:

[database]

connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

Replace KEYSTONE_DBPASS with the password you chose for the database.

Note

Comment out or remove any other connection options in the [database] section.

Note

The host, controller in this example, must be resolvable.

In the [token] section, configure the Fernet token provider:

[token]

provider = fernet

My configuration file is shown below. Once again: do not modify the existing configuration in place; add what is needed and remove what is redundant. It is a matter of good habit.

[root@controller ~]

# grep -v "^#" /etc/keystone/keystone.conf | grep -v "^$"
[DEFAULT]

[application_credential]

[assignment]

[auth]

[cache]

[catalog]

[cors]

[credential]

[database]

connection = mysql+pymysql://keystone:123456@controller/keystone

[domain_config]

[endpoint_filter]

[endpoint_policy]

[eventlet_server]

[federation]

[fernet_tokens]

[healthcheck]

[identity]

[identity_mapping]

[ldap]

[matchmaker_redis]

[memcache]

[oauth1]

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[policy]

[profiler]

[resource]

[revoke]

[role]

[saml]

[security_compliance]

[shadow_users]

[signing]

[token]

provider = fernet

[tokenless_auth]

[trust]

[unified_limit]

[wsgi]

Now populate the keystone database.
Note: from this point on, it is a good idea to open a few extra terminals and keep a tail -f on the logs, because most of these commands produce no output.

[root@controller ~]

# su -s /bin/sh -c "keystone-manage db_sync" keystone

The keystone log confirms the database sync completed:

[root@controller ~]

# tail -f /var/log/keystone/keystone.log
2019-04-14 02:15:04.397 12629 INFO migrate.versioning.api [-] 47 -> 48…
2019-04-14 02:15:04.429 12629 INFO migrate.versioning.api [-] done
2019-04-14 02:15:04.430 12629 INFO migrate.versioning.api [-] 48 -> 49…
2019-04-14 02:15:04.488 12629 INFO migrate.versioning.api [-] done
2019-04-14 02:15:04.489 12629 INFO migrate.versioning.api [-] 49 -> 50…
2019-04-14 02:15:04.521 12629 INFO migrate.versioning.api [-] done
2019-04-14 02:15:04.521 12629 INFO migrate.versioning.api [-] 50 -> 51…
2019-04-14 02:15:04.546 12629 INFO migrate.versioning.api [-] done
2019-04-14 02:15:04.546 12629 INFO migrate.versioning.api [-] 51 -> 52…
2019-04-14 02:15:04.572 12629 INFO migrate.versioning.api [-] done

Initialize the Fernet key repositories:

[root@controller ~]

# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

[root@controller ~]

# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Check the log:
2019-04-14 02:18:05.058 12846 INFO keystone.common.fernet_utils [-] key_repository does not appear to exist; attempting to create it
2019-04-14 02:18:05.058 12846 INFO keystone.common.fernet_utils [-] Created a new temporary key: /etc/keystone/fernet-keys/0.tmp
2019-04-14 02:18:05.058 12846 INFO keystone.common.fernet_utils [-] Become a valid new key: /etc/keystone/fernet-keys/0
2019-04-14 02:18:05.059 12846 INFO keystone.common.fernet_utils [-] Starting key rotation with 1 key files: [‘/etc/keystone/fernet-keys/0’]
2019-04-14 02:18:05.059 12846 INFO keystone.common.fernet_utils [-] Created a new temporary key: /etc/keystone/fernet-keys/0.tmp
2019-04-14 02:18:05.059 12846 INFO keystone.common.fernet_utils [-] Current primary key is: 0
2019-04-14 02:18:05.059 12846 INFO keystone.common.fernet_utils [-] Next primary key will be: 1
2019-04-14 02:18:05.059 12846 INFO keystone.common.fernet_utils [-] Promoted key 0 to be the primary: 1
2019-04-14 02:18:05.059 12846 INFO keystone.common.fernet_utils [-] Become a valid new key: /etc/keystone/fernet-keys/0
2019-04-14 02:18:59.616 12892 INFO keystone.common.fernet_utils [-] key_repository does not appear to exist; attempting to create it
2019-04-14 02:18:59.616 12892 INFO keystone.common.fernet_utils [-] Created a new temporary key: /etc/keystone/credential-keys/0.tmp
2019-04-14 02:18:59.616 12892 INFO keystone.common.fernet_utils [-] Become a valid new key: /etc/keystone/credential-keys/0
2019-04-14 02:18:59.616 12892 INFO keystone.common.fernet_utils [-] Starting key rotation with 1 key files: [‘/etc/keystone/credential-keys/0’]
2019-04-14 02:18:59.617 12892 INFO keystone.common.fernet_utils [-] Created a new temporary key: /etc/keystone/credential-keys/0.tmp
2019-04-14 02:18:59.617 12892 INFO keystone.common.fernet_utils [-] Current primary key is: 0
2019-04-14 02:18:59.617 12892 INFO keystone.common.fernet_utils [-] Next primary key will be: 1
2019-04-14 02:18:59.617 12892 INFO keystone.common.fernet_utils [-] Promoted key 0 to be the primary: 1
2019-04-14 02:18:59.617 12892 INFO keystone.common.fernet_utils [-] Become a valid new key: /etc/keystone/credential-keys/0
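An optional check that both key repositories were created with the expected ownership:

[root@controller ~]

# ls -l /etc/keystone/fernet-keys/ /etc/keystone/credential-keys/

Each directory should contain keys 0 and 1 owned by the keystone user, matching the rotation messages in the log above.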

Bootstrap the Identity service:
Note that before the Queens release, keystone required two separate ports, because the admin-only Identity v2 API commonly ran on port 35357; that is no longer necessary, and all interfaces can now authenticate through a single port.
The official documentation reads:
Note

Before the Queens release, keystone needed to be run on two separate ports to accommodate the Identity v2 API which ran a separate admin-only service commonly on port 35357. With the removal of the v2 API, keystone can be run on the same port for all interfaces.

keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
Replace ADMIN_PASS with a suitable password for an administrative user.
My command is as follows:

[root@controller ~]

# keystone-manage bootstrap --bootstrap-password 123456 \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
Check the log:
2019-04-14 02:22:47.406 13034 INFO keystone.cmd.bootstrap [req-a8f3ea55-3791-4766-acbe-7fe08d7bbcea – – – – -] Created domain default
2019-04-14 02:22:47.489 13034 INFO keystone.cmd.bootstrap [req-a8f3ea55-3791-4766-acbe-7fe08d7bbcea – – – – -] Created project admin
2019-04-14 02:22:47.785 13034 INFO keystone.cmd.bootstrap [req-a8f3ea55-3791-4766-acbe-7fe08d7bbcea – – – – -] Created user admin
2019-04-14 02:22:47.801 13034 INFO keystone.cmd.bootstrap [req-a8f3ea55-3791-4766-acbe-7fe08d7bbcea – – – – -] Created role reader
2019-04-14 02:22:47.836 13034 INFO keystone.cmd.bootstrap [req-a8f3ea55-3791-4766-acbe-7fe08d7bbcea – – – – -] Created role member
2019-04-14 02:22:47.866 13034 INFO keystone.cmd.bootstrap [req-a8f3ea55-3791-4766-acbe-7fe08d7bbcea – – – – -] Created implied role where a95ea890a0d545aba6a56c3b9500d6e6 implies a272788b4adc4e72b4587c20592f42a3
2019-04-14 02:22:47.889 13034 INFO keystone.cmd.bootstrap [req-a8f3ea55-3791-4766-acbe-7fe08d7bbcea – – – – -] Created role admin
2019-04-14 02:22:47.933 13034 INFO keystone.cmd.bootstrap [req-a8f3ea55-3791-4766-acbe-7fe08d7bbcea – – – – -] Created implied role where e064f5e2780a4ef996ab13cf0e8df715 implies a95ea890a0d545aba6a56c3b9500d6e6
2019-04-14 02:22:47.971 13034 INFO keystone.cmd.bootstrap [req-a8f3ea55-3791-4766-acbe-7fe08d7bbcea – – – – -] Granted admin on admin to user admin.
2019-04-14 02:22:47.990 13034 INFO keystone.cmd.bootstrap [req-a8f3ea55-3791-4766-acbe-7fe08d7bbcea – – – – -] Granted admin on the system to user admin.
2019-04-14 02:22:48.016 13034 WARNING py.warnings [req-a8f3ea55-3791-4766-acbe-7fe08d7bbcea – – – – -] /usr/lib/python2.7/site-packages/pycadf/identifier.py:72: UserWarning: Invalid uuid: RegionOne. To ensure interoperability, identifiers should be a valid uuid.
‘identifiers should be a valid uuid.’ % (value)))

2019-04-14 02:22:48.021 13034 INFO keystone.cmd.bootstrap [req-a8f3ea55-3791-4766-acbe-7fe08d7bbcea – – – – -] Created region RegionOne
2019-04-14 02:22:48.079 13034 INFO keystone.cmd.bootstrap [req-a8f3ea55-3791-4766-acbe-7fe08d7bbcea – – – – -] Created admin endpoint http://controller:5000/v3/
2019-04-14 02:22:48.121 13034 INFO keystone.cmd.bootstrap [req-a8f3ea55-3791-4766-acbe-7fe08d7bbcea – – – – -] Created internal endpoint http://controller:5000/v3/
2019-04-14 02:22:48.156 13034 INFO keystone.cmd.bootstrap [req-a8f3ea55-3791-4766-acbe-7fe08d7bbcea – – – – -] Created public endpoint http://controller:5000/v3/

Configure the HTTP service
Edit the /etc/httpd/conf/httpd.conf file and configure the ServerName option to reference the controller node:

ServerName controller
Create a link to the /usr/share/keystone/wsgi-keystone.conf file:

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

Finalize the installation
Start the Apache HTTP service and configure it to start when the system boots:

systemctl enable httpd.service

systemctl start httpd.service

Configure the administrative account

$ export OS_USERNAME=admin
$ export OS_PASSWORD=ADMIN_PASS
$ export OS_PROJECT_NAME=admin
$ export OS_USER_DOMAIN_NAME=Default
$ export OS_PROJECT_DOMAIN_NAME=Default
$ export OS_AUTH_URL=http://controller:5000/v3
$ export OS_IDENTITY_API_VERSION=3
Replace ADMIN_PASS with the password used in the keystone-manage bootstrap command in keystone-install-configure-rdo.

Edit /etc/httpd/conf/httpd.conf.
Set ServerName to controller.
Link in the keystone WSGI configuration file:

[root@controller ~]

# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

[root@controller ~]

# ll /etc/httpd/conf.d/
total 16
-rw-r--r-- 1 root root 2926 Nov 4 20:47 autoindex.conf
-rw-r--r-- 1 root root 366 Nov 4 20:47 README
-rw-r--r-- 1 root root 1252 Oct 30 11:00 userdir.conf
-rw-r--r-- 1 root root 824 Oct 30 11:00 welcome.conf
lrwxrwxrwx 1 root root 38 Apr 14 02:30 wsgi-keystone.conf -> /usr/share/keystone/wsgi-keystone.conf
Enable the httpd service at boot, start it, and verify:

[root@controller ~]

# systemctl enable httpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.

[root@controller ~]

# systemctl start httpd.service

[root@controller ~]

# systemctl status httpd.service
● httpd.service – The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2019-04-14 02:31:06 EDT; 10s ago
Docs: man:httpd(8)
man:apachectl(8)
Main PID: 13506 (httpd)
Status: “Total requests: 0; Current requests/sec: 0; Current traffic: 0 B/sec”
Tasks: 26
CGroup: /system.slice/httpd.service
├─13506 /usr/sbin/httpd -DFOREGROUND
├─13507 (wsgi:keystone- -DFOREGROUND
├─13508 (wsgi:keystone- -DFOREGROUND
├─13509 (wsgi:keystone- -DFOREGROUND
├─13510 (wsgi:keystone- -DFOREGROUND
├─13511 (wsgi:keystone- -DFOREGROUND
├─13512 /usr/sbin/httpd -DFOREGROUND
├─13513 /usr/sbin/httpd -DFOREGROUND
├─13514 /usr/sbin/httpd -DFOREGROUND
├─13515 /usr/sbin/httpd -DFOREGROUND
└─13516 /usr/sbin/httpd -DFOREGROUND

Apr 14 02:31:06 controller systemd[1]: Starting The Apache HTTP Server…
Apr 14 02:31:06 controller systemd[1]: Started The Apache HTTP Server.

[root@controller ~]

# netstat -antup | grep httpd
tcp6 0 0 :::5000 :::* LISTEN 13506/httpd
tcp6 0 0 :::80 :::* LISTEN 13506/httpd

[root@controller ~]

#
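Keystone should now answer on port 5000 even without credentials, since version discovery requires no token. An optional quick check:

[root@controller ~]

# curl http://controller:5000/v3

A short JSON document describing the v3 API should come back.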

Create an environment variable file, source it so it takes effect, and verify it, in preparation for the configuration steps that follow.

[root@controller ~]

# vim administrative.openstack

[root@controller ~]

# cat administrative.openstack
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3

[root@controller ~]

# source administrative.openstack
Verify:

[root@controller ~]

# echo "$OS_PASSWORD"
123456

[root@controller ~]

# echo "$OS_AUTH_URL"
http://controller:5000/v3

[root@controller ~]

#

1.1.1 Create domains, projects, users, and roles
As usual, the official documentation first:
The Identity service provides authentication services for each OpenStack service. The authentication service uses a combination of domains, projects, users, and roles.

Although the “default” domain already exists from the keystone-manage bootstrap step in this guide, a formal way to create a new domain would be:

$ openstack domain create --description "An Example Domain" example

+————-+———————————-+
| Field | Value |
+————-+———————————-+
| description | An Example Domain |
| enabled | True |
| id | 2f4f80574fd84fe6ba9067228ae0a50c |
| name | example |
| tags | [] |
+————-+———————————-+
This guide uses a service project that contains a unique user for each service that you add to your environment. Create the service project:

$ openstack project create --domain default \
  --description "Service Project" service

+————-+———————————-+
| Field | Value |
+————-+———————————-+
| description | Service Project |
| domain_id | default |
| enabled | True |
| id | 24ac7f19cd944f4cba1d77469b2a73ed |
| is_domain | False |
| name | service |
| parent_id | default |
| tags | [] |
+————-+———————————-+
Regular (non-admin) tasks should use an unprivileged project and user. As an example, this guide creates the myproject project and myuser user.

Create the myproject project:

$ openstack project create --domain default \
  --description "Demo Project" myproject

+————-+———————————-+
| Field | Value |
+————-+———————————-+
| description | Demo Project |
| domain_id | default |
| enabled | True |
| id | 231ad6e7ebba47d6a1e57e1cc07ae446 |
| is_domain | False |
| name | myproject |
| parent_id | default |
| tags | [] |
+————-+———————————-+
Note

Do not repeat this step when creating additional users for this project.

Create the myuser user:

$ openstack user create --domain default \
  --password-prompt myuser

User Password:
Repeat User Password:
+———————+———————————-+
| Field | Value |
+———————+———————————-+
| domain_id | default |
| enabled | True |
| id | aeda23aa78f44e859900e22c24817832 |
| name | myuser |
| options | {} |
| password_expires_at | None |
+———————+———————————-+
Create the myrole role:

$ openstack role create myrole

+———–+———————————-+
| Field | Value |
+———–+———————————-+
| domain_id | None |
| id | 997ce8d05fc143ac97d83fdfb5998552 |
| name | myrole |
+———–+———————————-+
Add the myrole role to the myproject project and myuser user:

$ openstack role add --project myproject --user myuser myrole
Note

This command provides no output.

Note

You can repeat this procedure to create additional projects and users.

Below are my commands. Note: keep a close eye on the keystone log throughout.
Create the domain:

[root@controller ~]

# openstack domain create --description "An Example Domain" example
+————-+———————————-+
| Field | Value |
+————-+———————————-+
| description | An Example Domain |
| enabled | True |
| id | 56772bfad89b4604aa8a5419a519ec47 |
| name | example |
| tags | [] |
+————-+———————————-+

[root@controller ~]

#
Log:
2019-04-14 02:40:29.812 13510 INFO keystone.common.wsgi [req-1b17f5a3-35a6-4918-aaa3-bb0e5d30311b – – – – -] POST http://controller:5000/v3/auth/tokens
2019-04-14 02:40:32.150 13511 INFO keystone.common.wsgi [req-0c262245-b37f-46aa-9cd1-bc21eaaec007 – – – – -] POST http://controller:5000/v3/auth/tokens
2019-04-14 02:40:33.698 13509 INFO keystone.common.wsgi [req-ded50e7b-0e3b-41d3-9483-a77df9be59c4 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] POST http://controller:5000/v3/domains
2019-04-14 02:40:33.710 13509 WARNING py.warnings [req-ded50e7b-0e3b-41d3-9483-a77df9be59c4 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] /usr/lib/python2.7/site-packages/oslo_policy/policy.py:896: UserWarning: Policy identity:create_domain failed scope check. The token used to make the request was project scoped but the policy requires [‘system’] scope. This behavior may change in the future where using the intended scope is required
warnings.warn(msg)

Create the service project:

[root@controller ~]

# openstack project create --domain default \
  --description "Service Project" service
+————-+———————————-+
| Field | Value |
+————-+———————————-+
| description | Service Project |
| domain_id | default |
| enabled | True |
| id | 74731514589347a088d52a50c4b9c88b |
| is_domain | False |
| name | service |
| parent_id | default |
| tags | [] |
+————-+———————————-+

[root@controller ~]

#
Log:
2019-04-14 02:42:53.833 13510 INFO keystone.common.wsgi [req-39027956-7595-4dbe-b78a-d83a6f7885e3 – – – – -] POST http://controller:5000/v3/auth/tokens
2019-04-14 02:42:54.282 13508 INFO keystone.common.wsgi [req-9d75ab01-334d-4a70-b219-88630e0bb3ee – – – – -] POST http://controller:5000/v3/auth/tokens
2019-04-14 02:42:54.932 13507 INFO keystone.common.wsgi [req-3d0ea303-3495-4009-809e-56fa127a36db 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] GET http://controller:5000/v3/domains/default
2019-04-14 02:42:54.943 13507 WARNING py.warnings [req-3d0ea303-3495-4009-809e-56fa127a36db 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] /usr/lib/python2.7/site-packages/oslo_policy/policy.py:896: UserWarning: Policy identity:get_domain failed scope check. The token used to make the request was project scoped but the policy requires [‘system’] scope. This behavior may change in the future where using the intended scope is required
warnings.warn(msg)

2019-04-14 02:42:55.021 13510 INFO keystone.common.wsgi [req-8d5c5181-0e18-4110-a99b-4deebf421a54 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] POST http://controller:5000/v3/projects
2019-04-14 02:42:55.032 13510 WARNING py.warnings [req-8d5c5181-0e18-4110-a99b-4deebf421a54 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] /usr/lib/python2.7/site-packages/oslo_policy/policy.py:896: UserWarning: Policy identity:create_project failed scope check. The token used to make the request was project scoped but the policy requires [‘system’] scope. This behavior may change in the future where using the intended scope is required
warnings.warn(msg)

Also create a demo project for regular (non-admin) use:

[root@controller ~]

# openstack project create --domain default \
  --description "Demo Project" myproject
+————-+———————————-+
| Field | Value |
+————-+———————————-+
| description | Demo Project |
| domain_id | default |
| enabled | True |
| id | 48ca2da44aa94fee851cb16211c18aad |
| is_domain | False |
| name | myproject |
| parent_id | default |
| tags | [] |
+————-+———————————-+

[root@controller ~]

#
Log:
2019-04-14 02:43:53.943 13509 INFO keystone.common.wsgi [req-ce00f8e2-9028-48b9-b54f-3e32eb652d8e – – – – -] POST http://controller:5000/v3/auth/tokens
2019-04-14 02:43:54.321 13507 INFO keystone.common.wsgi [req-9c1fb93c-b678-44bd-a3bc-e6bf3fd4dced – – – – -] POST http://controller:5000/v3/auth/tokens
2019-04-14 02:43:54.949 13511 INFO keystone.common.wsgi [req-0aebff0b-83f3-4123-903b-65a4f2044224 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] GET http://controller:5000/v3/domains/default
2019-04-14 02:43:54.960 13511 WARNING py.warnings [req-0aebff0b-83f3-4123-903b-65a4f2044224 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] /usr/lib/python2.7/site-packages/oslo_policy/policy.py:896: UserWarning: Policy identity:get_domain failed scope check. The token used to make the request was project scoped but the policy requires [‘system’] scope. This behavior may change in the future where using the intended scope is required
warnings.warn(msg)

2019-04-14 02:43:55.039 13509 INFO keystone.common.wsgi [req-9a856f21-0a29-44f3-bc85-c33bdb7d85f3 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] POST http://controller:5000/v3/projects
2019-04-14 02:43:55.048 13509 WARNING py.warnings [req-9a856f21-0a29-44f3-bc85-c33bdb7d85f3 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] /usr/lib/python2.7/site-packages/oslo_policy/policy.py:896: UserWarning: Policy identity:create_project failed scope check. The token used to make the request was project scoped but the policy requires [‘system’] scope. This behavior may change in the future where using the intended scope is required
warnings.warn(msg)

Create the user. My password is 123456; pick whatever you like.

[root@controller ~]

# openstack user create --domain default \
  --password-prompt myuser
User Password:
Repeat User Password:
+———————+———————————-+
| Field | Value |
+———————+———————————-+
| domain_id | default |
| enabled | True |
| id | 0a9eefe8b20b4258bbd16af82b8a0132 |
| name | myuser |
| options | {} |
| password_expires_at | None |
+———————+———————————-+

[root@controller ~]

#
Log:
2019-04-14 02:44:53.888 13509 INFO keystone.common.wsgi [req-77a1618c-95fe-4af7-b1b2-e5f4bb983a4b – – – – -] POST http://controller:5000/v3/auth/tokens
2019-04-14 02:44:54.244 13510 INFO keystone.common.wsgi [req-b44156a0-c5bd-43e3-b148-096aaf1e8eb9 – – – – -] POST http://controller:5000/v3/auth/tokens
2019-04-14 02:44:54.672 13507 INFO keystone.common.wsgi [req-2dd0b2fe-e584-4e87-bfb1-c1209d9297ec 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] GET http://controller:5000/v3/domains/default
2019-04-14 02:45:20.629 13509 INFO keystone.common.wsgi [req-1bd3b685-f3b0-415b-b306-151075c90f6b 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] POST http://controller:5000/v3/users
2019-04-14 02:45:20.638 13509 WARNING py.warnings [req-1bd3b685-f3b0-415b-b306-151075c90f6b 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] /usr/lib/python2.7/site-packages/oslo_policy/policy.py:896: UserWarning: Policy identity:create_user failed scope check. The token used to make the request was project scoped but the policy requires [‘system’] scope. This behavior may change in the future where using the intended scope is required
warnings.warn(msg)

Create the role:

[root@controller ~]

# openstack role create myrole
+———–+———————————-+
| Field | Value |
+———–+———————————-+
| domain_id | None |
| id | 690ad257168845adaa7d9b6713fcec70 |
| name | myrole |
+———–+———————————-+

[root@controller ~]

#
Log:
2019-04-14 02:46:04.089 13508 INFO keystone.common.wsgi [req-ea5aaed1-db24-4e48-8087-3bbe2fa7527e – – – – -] POST http://controller:5000/v3/auth/tokens
2019-04-14 02:46:04.502 13509 INFO keystone.common.wsgi [req-5136c0e0-2b2b-459a-9d5e-6bd24a2adb7e – – – – -] POST http://controller:5000/v3/auth/tokens
2019-04-14 02:46:04.929 13510 INFO keystone.common.wsgi [req-db1d0d06-1ab8-49a2-ab14-8829309cdd9b 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] POST http://controller:5000/v3/roles
2019-04-14 02:46:04.938 13510 WARNING py.warnings [req-db1d0d06-1ab8-49a2-ab14-8829309cdd9b 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] /usr/lib/python2.7/site-packages/oslo_policy/policy.py:896: UserWarning: Policy identity:create_role failed scope check. The token used to make the request was project scoped but the policy requires [‘system’] scope. This behavior may change in the future where using the intended scope is required
warnings.warn(msg)

Bind the user we just created to the role we just created.
Note: this command produces no output; check the log.

[root@controller ~]

# openstack role add --project myproject --user myuser myrole
Log:
2019-04-14 02:47:19.089 13508 INFO keystone.common.wsgi [req-d5f9d9de-4030-450a-a0a0-b7fd351f677c – – – – -] POST http://controller:5000/v3/auth/tokens
2019-04-14 02:47:19.442 13507 INFO keystone.common.wsgi [req-38a35913-dcad-41e0-b793-ba8513060a97 – – – – -] POST http://controller:5000/v3/auth/tokens
2019-04-14 02:47:19.924 13511 INFO keystone.common.wsgi [req-0504ae88-cad8-43d3-acf2-21b8cad60756 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] GET http://controller:5000/v3/roles/myrole
2019-04-14 02:47:19.928 13511 WARNING keystone.common.wsgi [req-0504ae88-cad8-43d3-acf2-21b8cad60756 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] Could not find role: myrole.: RoleNotFound: Could not find role: myrole.
2019-04-14 02:47:20.003 13508 INFO keystone.common.wsgi [req-2f6d7f1c-89c4-43f5-bfdc-604ef50f7936 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] GET http://controller:5000/v3/roles?name=myrole
2019-04-14 02:47:20.014 13508 WARNING py.warnings [req-2f6d7f1c-89c4-43f5-bfdc-604ef50f7936 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] /usr/lib/python2.7/site-packages/oslo_policy/policy.py:896: UserWarning: Policy identity:list_roles failed scope check. The token used to make the request was project scoped but the policy requires [‘system’] scope. This behavior may change in the future where using the intended scope is required
warnings.warn(msg)

2019-04-14 02:47:20.092 13507 INFO keystone.common.wsgi [req-26345dbe-cc9e-4ea1-bf6f-d997d898504d 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] GET http://controller:5000/v3/users/myuser
2019-04-14 02:47:20.098 13507 WARNING keystone.common.wsgi [req-26345dbe-cc9e-4ea1-bf6f-d997d898504d 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] Could not find user: myuser.: UserNotFound: Could not find user: myuser.
2019-04-14 02:47:20.179 13511 INFO keystone.common.wsgi [req-2d010b63-efa2-4756-a3e3-a218c6129c83 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] GET http://controller:5000/v3/users?name=myuser
2019-04-14 02:47:20.188 13511 WARNING py.warnings [req-2d010b63-efa2-4756-a3e3-a218c6129c83 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] /usr/lib/python2.7/site-packages/oslo_policy/policy.py:896: UserWarning: Policy identity:list_users failed scope check. The token used to make the request was project scoped but the policy requires [‘system’] scope. This behavior may change in the future where using the intended scope is required
warnings.warn(msg)

2019-04-14 02:47:20.282 13508 INFO keystone.common.wsgi [req-aa90bf4a-6c7b-45dd-b916-0c520dd73e8f 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] GET http://controller:5000/v3/projects/myproject
2019-04-14 02:47:20.286 13508 WARNING keystone.common.wsgi [req-aa90bf4a-6c7b-45dd-b916-0c520dd73e8f 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] Could not find project: myproject.: ProjectNotFound: Could not find project: myproject.
2019-04-14 02:47:20.364 13507 INFO keystone.common.wsgi [req-42674446-abe2-41f1-8da4-3a8cdd1678ae 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] GET http://controller:5000/v3/projects?name=myproject
2019-04-14 02:47:20.373 13507 WARNING py.warnings [req-42674446-abe2-41f1-8da4-3a8cdd1678ae 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] /usr/lib/python2.7/site-packages/oslo_policy/policy.py:896: UserWarning: Policy identity:list_projects failed scope check. The token used to make the request was project scoped but the policy requires [‘system’] scope. This behavior may change in the future where using the intended scope is required
warnings.warn(msg)

2019-04-14 02:47:20.451 13511 INFO keystone.common.wsgi [req-4833313f-74fb-4e73-b6c5-a6a6b06e1e4b 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] PUT http://controller:5000/v3/projects/48ca2da44aa94fee851cb16211c18aad/users/0a9eefe8b20b4258bbd16af82b8a0132/roles/690ad257168845adaa7d9b6713fcec70
2019-04-14 02:47:20.479 13511 WARNING py.warnings [req-4833313f-74fb-4e73-b6c5-a6a6b06e1e4b 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] /usr/lib/python2.7/site-packages/oslo_policy/policy.py:896: UserWarning: Policy identity:create_grant failed scope check. The token used to make the request was project scoped but the policy requires [‘system’] scope. This behavior may change in the future where using the intended scope is required
warnings.warn(msg)

Of course, you can also create other projects and users; you are not limited to these, as shown in the example below.
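For example, repeating the same pattern with purely illustrative names (otherproject and otheruser are not used elsewhere in this guide):

[root@controller ~]

# openstack project create --domain default --description "Another Project" otherproject

[root@controller ~]

# openstack user create --domain default --password-prompt otheruser

[root@controller ~]

# openstack role add --project otherproject --user otheruser myrole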

1.1.2 Verify the operations above
Now we unset the OS_AUTH_URL and OS_PASSWORD environment variables to verify that keystone works properly.

[root@controller ~]

# unset OS_AUTH_URL OS_PASSWORD

[root@controller ~]

# echo "$OS_AUTH_URL"

[root@controller ~]

# echo "$OS_PASSWORD"

Then request a token as admin; my password is 123456.

[root@controller ~]

# openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
Password:
+————+—————————————————————————————————————————————————————————————–+
| Field | Value |
+————+—————————————————————————————————————————————————————————————–+
| expires | 2019-04-14T07:56:35+0000 |
| id | gAAAAABcstmjb1lAbI6PsG62civPM0DIG_uRJofWS551-E6pO4JCFlSJEJRkEv44oTol7Gq-OZt4RHWGfSgpjd-yfKgpobnPGwL-J_6DSELrVa3z-9vWcK6VIly2BC-2_PGjXG_dUrbwEGuwvgTMihb0WK1VyTOSEGx_KUxcGA8owIPrNnq_pwo |
| project_id | 25a82cb651074f3494aeb5639d62ed22 |
| user_id | 5a1ba2f524234e76a97b18a6eb7419c0 |
+————+—————————————————————————————————————————————————————————————–+

[root@controller ~]

#
Log:
2019-04-14 02:56:35.184 13510 INFO keystone.common.wsgi [req-c51c74f8-764b-4dab-b859-398bde9cc142 – – – – -] POST http://controller:5000/v3/auth/tokens

We also created the myuser user above; verify it as well.

[root@controller ~]

# openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue
Password:
+————+—————————————————————————————————————————————————————————————–+
| Field | Value |
+————+—————————————————————————————————————————————————————————————–+
| expires | 2019-04-14T07:58:11+0000 |
| id | gAAAAABcstoDOhTjnKRHn4MsCiLdF_6Xo_Q_a2_YnexmnGA-SW7hO0m16xa735DsXmnuVxQ-yVEyDV_69mdqJmDgtcrz3RycmCYIdysofIUJJKnUYatmJ29Icvr7-zI_uzC4hhN3XF9mPavoFTMFYHrZJlk-2pn37l852-KSBPzgzD8ZLtc8mfw |
| project_id | 48ca2da44aa94fee851cb16211c18aad |
| user_id | 0a9eefe8b20b4258bbd16af82b8a0132 |
+————+—————————————————————————————————————————————————————————————–+

[root@controller ~]

#
Log:
2019-04-14 02:58:11.186 13508 INFO keystone.common.wsgi [req-5ce5187c-fdb2-449a-b6f0-054dc316316d – – – – -] POST http://controller:5000/v3/auth/tokens

If the token comes back successfully, Keystone is working. If yours does not, go back and recheck the previous steps.

1.1.3 Create client environment scripts for admin and other users for later command-line use
The previous sections used a combination of environment variables and command options to interact with the Identity service via the openstack client. To increase efficiency of client operations, OpenStack supports simple client environment scripts also known as OpenRC files. These scripts typically contain common options for all clients, but also support unique options. For more information, see the OpenStack End User Guide.

Creating the scripts
Create client environment scripts for the admin and demo projects and users. Future portions of this guide reference these scripts to load appropriate credentials for client operations.

Note

The paths of the client environment scripts are unrestricted. For convenience, you can place the scripts in any location, however ensure that they are accessible and located in a secure place appropriate for your deployment, as they do contain sensitive credentials.

Create and edit the admin-openrc file and add the following content:

Note

The OpenStack client also supports using a clouds.yaml file. For more information, see the os-client-config.

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Replace ADMIN_PASS with the password you chose for the admin user in the Identity service.

Create and edit the demo-openrc file and add the following content:

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=MYUSER_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Replace MYUSER_PASS with the password you chose for the myuser user in the Identity service.

Using the scripts
To run clients as a specific project and user, you can simply load the associated client environment script prior to running them. For example:

Load the admin-openrc file to populate environment variables with the location of the Identity service and the admin project and user credentials:

$ . admin-openrc
Request an authentication token:

$ openstack token issue

+————+—————————————————————–+
| Field | Value |
+————+—————————————————————–+
| expires | 2016-02-12T20:44:35.659723Z |
| id | gAAAAABWvjYj-Zjfg8WXFaQnUd1DMYTBVrKw4h3fIagi5NoEmh21U72SrRv2trl |
| | JWFYhLi2_uPR31Igf6A8mH2Rw9kv_bxNo1jbLNPLGzW_u5FC7InFqx0yYtTwa1e |
| | eq2b0f6-18KZyQhs7F3teAta143kJEWuNEYET-y7u29y0be1_64KYkM7E |
| project_id | 343d245e850143a096806dfaefa9afdc |
| user_id | ac3377633149401296f6c0d92d79dc16 |
+————+—————————————————————–+

First, create the scripts for the admin and demo users:

[root@controller ~]

# vim admin-openrc

[root@controller ~]

# cat admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

[root@controller ~]

# vim demo-openrc

[root@controller ~]

# cat demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

[root@controller ~]

#

Source the admin script first and request a token:

[root@controller ~]

# . admin-openrc

[root@controller ~]

# echo "$OS_USERNAME"
admin

[root@controller ~]

# openstack token issue
+————+—————————————————————————————————————————————————————————————–+
| Field | Value |
+————+—————————————————————————————————————————————————————————————–+
| expires | 2019-04-14T08:03:28+0000 |
| id | gAAAAABcsttAHJMka_IZPybsPBRhDQ-JhWjnYCtow_eZr1Mh9BLsah8bv0kiPtGzj5TLT5K_PIcIcRYgumVqdB8dn_S9pF6nFkO3bYCpcvIySY9s5JUd2m74MErBJIcazr0LOXN6acLcG7qlxsRJx9SBTLe_fJoXqY58QXJOcQUz-aCbgi_o-h8 |
| project_id | 25a82cb651074f3494aeb5639d62ed22 |
| user_id | 5a1ba2f524234e76a97b18a6eb7419c0 |
+————+—————————————————————————————————————————————————————————————–+

[root@controller ~]

#
Log:
2019-04-14 03:03:28.090 13510 INFO keystone.common.wsgi [req-4b1882dd-ea83-4eee-b6e7-37b696dda051 – – – – -] POST http://controller:5000/v3/auth/tokens


1.2 Glance Installation and Configuration
First, create the database:

[root@controller ~]

# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 20
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE glance;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
    -> IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.01 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
    -> IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> exit
Bye

Source the admin environment again to prepare for the next steps:

[root@controller ~]

# . admin-openrc

Create the glance user and add it to the admin role.
Note: to keep this shorter, routine log output is no longer shown from here on.
Create the glance user:

[root@controller ~]

# openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
+———————+———————————-+
| Field | Value |
+———————+———————————-+
| domain_id | default |
| enabled | True |
| id | b34071ae3d72484783da962fb08b226c |
| name | glance |
| options | {} |
| password_expires_at | None |
+———————+———————————-+

[root@controller ~]

#
Add the glance user to the admin role:

[root@controller ~]

# openstack role add --project service --user glance admin
Create the glance service entity:

[root@controller ~]

# openstack service create --name glance \
  --description "OpenStack Image" image
+————-+———————————-+
| Field | Value |
+————-+———————————-+
| description | OpenStack Image |
| enabled | True |
| id | ed3efd1713724f26893dd77882b3f842 |
| name | glance |
| type | image |
+————-+———————————-+

Create the Glance API endpoints for the service:

[root@controller ~]

# openstack endpoint create --region RegionOne \
  image public http://controller:9292

+————–+———————————-+
| Field | Value |
+————–+———————————-+
| enabled | True |
| id | 2073fa2886a94d4aab1773da178c0b08 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ed3efd1713724f26893dd77882b3f842 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+————–+———————————-+

[root@controller ~]

#

[root@controller ~]

# openstack endpoint create --region RegionOne \
  image internal http://controller:9292
+————–+———————————-+
| Field | Value |
+————–+———————————-+
| enabled | True |
| id | 58ece16cca714b2e9731b70fc3f40943 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ed3efd1713724f26893dd77882b3f842 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+————–+———————————-+

[root@controller ~]

# openstack endpoint create --region RegionOne \
  image admin http://controller:9292
+————–+———————————-+
| Field | Value |
+————–+———————————-+
| enabled | True |
| id | 1d0f0fc7fd444cf0b84a01d2137baaf9 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ed3efd1713724f26893dd77882b3f842 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+————–+———————————-+
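An optional check that the three Image service endpoints were registered as intended:

[root@controller ~]

# openstack endpoint list --service image

All three interfaces (public, internal, admin) should point at http://controller:9292.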

1.2.1 Install and configure the Glance components

[root@controller ~]

# yum install -y openstack-glance
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile

 * base: mirror.sjc02.svwh.net
 * centos-qemu-ev: mirror.scalabledns.com
 * epel: mirror.coastal.edu
 * extras: mirror.scalabledns.com
 * updates: mirror.sjc02.svwh.net
Resolving Dependencies
--> Running transaction check
---> Package openstack-glance.noarch 1:17.0.0-2.el7 will be installed
--> Processing Dependency: python-glance = 1:17.0.0-2.el7 for package: 1:openstack-glance-17.0.0-2.el7.noarch
--> Running transaction check
---> Package python-glance.noarch 1:17.0.0-2.el7 will be installed
--> Processing Dependency: python2-wsme >= 0.8 for package: 1:python-glance-17.0.0-2.el7.noarch
--> Processing Dependency: python2-taskflow >= 2.16.0 for package: 1:python-glance-17.0.0-2.el7.noarch
--> Processing Dependency: python2-swiftclient >= 2.2.0 for package: 1:python-glance-17.0.0-2.el7.noarch
--> Processing Dependency: python2-oslo-vmware >= 0.11.1 for package: 1:python-glance-17.0.0-2.el7.noarch
--> Processing Dependency: python2-os-brick >= 1.8.0 for package: 1:python-glance-17.0.0-2.el7.noarch
--> Processing Dependency: python2-glance-store >= 0.26.1 for package: 1:python-glance-17.0.0-2.el7.noarch
--> Processing Dependency: python2-cursive for package: 1:python-glance-17.0.0-2.el7.noarch
--> Processing Dependency: python2-boto for package: 1:python-glance-17.0.0-2.el7.noarch
--> Processing Dependency: python-retrying for package: 1:python-glance-17.0.0-2.el7.noarch
--> Processing Dependency: python-httplib2 for package: 1:python-glance-17.0.0-2.el7.noarch
--> Processing Dependency: pysendfile for package: 1:python-glance-17.0.0-2.el7.noarch
--> Running transaction check
#

Installed:
openstack-glance.noarch 1:17.0.0-2.el7

Dependency Installed:
atlas.x86_64 0:3.10.1-12.el7 pysendfile.x86_64 0:2.0.0-5.el7 python-glance.noarch 1:17.0.0-2.el7
python-httplib2.noarch 0:0.9.2-1.el7 python-lxml.x86_64 0:3.2.1-4.el7 python-networkx.noarch 0:1.10-1.el7
python-networkx-core.noarch 0:1.10-1.el7 python-nose.noarch 0:1.3.7-7.el7 python-oslo-privsep-lang.noarch 0:1.29.2-1.el7
python-oslo-vmware-lang.noarch 0:2.31.0-1.el7 python-retrying.noarch 0:1.2.3-4.el7 python-simplegeneric.noarch 0:0.8-7.el7
python2-automaton.noarch 0:1.15.0-1.el7 python2-boto.noarch 0:2.45.0-3.el7 python2-castellan.noarch 0:0.19.0-1.el7
python2-cursive.noarch 0:0.2.2-1.el7 python2-glance-store.noarch 0:0.26.1-1.el7 python2-numpy.x86_64 1:1.14.5-1.el7
python2-os-brick.noarch 0:2.5.6-1.el7 python2-os-win.noarch 0:4.0.1-1.el7 python2-oslo-privsep.noarch 0:1.29.2-1.el7
python2-oslo-rootwrap.noarch 0:5.14.1-1.el7 python2-oslo-vmware.noarch 0:2.31.0-1.el7 python2-pyasn1.noarch 0:0.1.9-7.el7
python2-rsa.noarch 0:3.4.1-1.el7 python2-scipy.x86_64 0:0.18.0-3.el7 python2-swiftclient.noarch 0:3.6.0-1.el7
python2-taskflow.noarch 0:3.2.0-1.el7 python2-wsme.noarch 0:0.9.3-1.el7 sysfsutils.x86_64 0:2.1.0-16.el7

Complete!

Edit the /etc/glance/glance-api.conf configuration file.
In the [database] section, configure database access:

[database]

connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
Replace GLANCE_DBPASS with the password you chose for the Image service database.

In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access:

[keystone_authtoken]

www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]

flavor = keystone
Replace GLANCE_PASS with the password you chose for the glance user in the Identity service.

Note

Comment out or remove any other options in the [keystone_authtoken] section.

In the [glance_store] section, configure the local file system store and location of image files:

[glance_store]

stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
My configuration file is as follows:

[root@controller ~]

# grep -v '^#' /etc/glance/glance-api.conf | grep -v '^$'
[DEFAULT]

[cors]

[database]

connection = mysql+pymysql://glance:123456@controller/glance

[glance_store]

stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

[image_format]

[keystone_authtoken]

www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 123456

[matchmaker_redis]

[oslo_concurrency]

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[paste_deploy]

flavor = keystone

[profiler]

[store_type_location_strategy]

[task]

[taskflow_executor]

[root@controller ~]

#
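The filesystem_store_datadir configured above should already exist and be owned by the glance user (the RDO package normally creates it); if in doubt, check it, and create and chown it to glance:glance if it is missing:

[root@controller ~]

# ls -ld /var/lib/glance/images/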

Edit the /etc/glance/glance-registry.conf configuration file.
Edit the /etc/glance/glance-registry.conf file and complete the following actions:

Note

The Glance Registry Service and its APIs have been DEPRECATED in the Queens release and are subject to removal at the beginning of the ‘S’ development cycle, following the OpenStack standard deprecation policy.

For more information, see the Glance specification document Actually Deprecate the Glance Registry.

In the [database] section, configure database access:

[database]

connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
Replace GLANCE_DBPASS with the password you chose for the Image service database.

In the [keystone_authtoken] and [paste_deploy] sections, configure Identity service access:

[keystone_authtoken]

www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS

[paste_deploy]

flavor = keystone
Replace GLANCE_PASS with the password you chose for the glance user in the Identity service.

Note

Comment out or remove any other options in the [keystone_authtoken] section.

My configuration file is as follows:

[root@controller ~]

# grep -v '^#' /etc/glance/glance-registry.conf | grep -v '^$'
[DEFAULT]

[database]

connection = mysql+pymysql://glance:123456@controller/glance

[keystone_authtoken]

www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 123456

[matchmaker_redis]

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_policy]

[paste_deploy]

flavor = keystone

[profiler]

[root@controller ~]

#

Populate the Glance database:

[root@controller ~]

# su -s /bin/sh -c "glance-manage db_sync" glance
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:1352: OsloDBDeprecationWarning: EngineFacade is deprecated; please use oslo_db.sqlalchemy.enginefacade
expire_on_commit=expire_on_commit, _conf=conf)
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> liberty, liberty initial
INFO [alembic.runtime.migration] Running upgrade liberty -> mitaka01, add index on created_at and updated_at columns of ‘images’ table
INFO [alembic.runtime.migration] Running upgrade mitaka01 -> mitaka02, update metadef os_nova_server
INFO [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_expand01, add visibility to images
INFO [alembic.runtime.migration] Running upgrade ocata_expand01 -> pike_expand01, empty expand for symmetry with pike_contract01
INFO [alembic.runtime.migration] Running upgrade pike_expand01 -> queens_expand01
INFO [alembic.runtime.migration] Running upgrade queens_expand01 -> rocky_expand01, add os_hidden column to images table
INFO [alembic.runtime.migration] Running upgrade rocky_expand01 -> rocky_expand02, add os_hash_algo and os_hash_value columns to images table
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Upgraded database to: rocky_expand02, current revision(s): rocky_expand02
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Database migration is up to date. No migration needed.
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_contract01, remove is_public from images
INFO [alembic.runtime.migration] Running upgrade ocata_contract01 -> pike_contract01, drop glare artifacts tables
INFO [alembic.runtime.migration] Running upgrade pike_contract01 -> queens_contract01
INFO [alembic.runtime.migration] Running upgrade queens_contract01 -> rocky_contract01
INFO [alembic.runtime.migration] Running upgrade rocky_contract01 -> rocky_contract02
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Upgraded database to: rocky_contract02, current revision(s): rocky_contract02
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Database is synced successfully.
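Optionally, confirm that the tables were created (a quick check that is not part of the official guide, using the glance password chosen above):

[root@controller ~]

# mysql -u glance -p123456 glance -e "SHOW TABLES;"

The output should include the images and image_properties tables, among others.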

Start the Glance services and enable them to start at boot:

[root@controller ~]

# systemctl enable openstack-glance-api.service \
  openstack-glance-registry.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-api.service to /usr/lib/systemd/system/openstack-glance-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-registry.service to /usr/lib/systemd/system/openstack-glance-registry.service.

[root@controller ~]

# systemctl start openstack-glance-api.service \
openstack-glance-registry.service

[root@controller ~]

# systemctl status openstack-glance-api.service openstack-glance-registry.service
● openstack-glance-api.service – OpenStack Image Service (code-named Glance) API server
Loaded: loaded (/usr/lib/systemd/system/openstack-glance-api.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2019-04-14 03:50:52 EDT; 7s ago
Main PID: 18942 (glance-api)
Tasks: 5
CGroup: /system.slice/openstack-glance-api.service
├─18942 /usr/bin/python2 /usr/bin/glance-api
├─18968 /usr/bin/python2 /usr/bin/glance-api
├─18969 /usr/bin/python2 /usr/bin/glance-api
├─18970 /usr/bin/python2 /usr/bin/glance-api
└─18971 /usr/bin/python2 /usr/bin/glance-api

Apr 14 03:50:53 controller glance-api[18942]: /usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py:22: DeprecationWarning: Parameters…arately.
Apr 14 03:50:53 controller glance-api[18942]: return pkg_resources.EntryPoint.parse(“x=” + s).load(False)
Apr 14 03:50:53 controller glance-api[18942]: /usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py:22: DeprecationWarning: Parameters…arately.
Apr 14 03:50:53 controller glance-api[18942]: return pkg_resources.EntryPoint.parse(“x=” + s).load(False)
Apr 14 03:50:53 controller glance-api[18942]: /usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py:22: DeprecationWarning: Parameters…arately.
Apr 14 03:50:53 controller glance-api[18942]: return pkg_resources.EntryPoint.parse(“x=” + s).load(False)
Apr 14 03:50:53 controller glance-api[18942]: /usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py:22: DeprecationWarning: Parameters…arately.
Apr 14 03:50:53 controller glance-api[18942]: return pkg_resources.EntryPoint.parse(“x=” + s).load(False)
Apr 14 03:50:53 controller glance-api[18942]: /usr/lib/python2.7/site-packages/paste/deploy/util.py:55: DeprecationWarning: Using function…a filter
Apr 14 03:50:53 controller glance-api[18942]: val = callable(*args, **kw)

● openstack-glance-registry.service – OpenStack Image Service (code-named Glance) Registry server
Loaded: loaded (/usr/lib/systemd/system/openstack-glance-registry.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2019-04-14 03:50:52 EDT; 7s ago
Main PID: 18943 (glance-registry)
Tasks: 5
CGroup: /system.slice/openstack-glance-registry.service
├─18943 /usr/bin/python2 /usr/bin/glance-registry
├─18964 /usr/bin/python2 /usr/bin/glance-registry
├─18965 /usr/bin/python2 /usr/bin/glance-registry
├─18966 /usr/bin/python2 /usr/bin/glance-registry
└─18967 /usr/bin/python2 /usr/bin/glance-registry

Apr 14 03:50:53 controller glance-registry[18943]: /usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py:22: DeprecationWarning: Parame…rately.
Apr 14 03:50:53 controller glance-registry[18943]: return pkg_resources.EntryPoint.parse(“x=” + s).load(False)
Apr 14 03:50:53 controller glance-registry[18943]: /usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py:22: DeprecationWarning: Parame…rately.
Apr 14 03:50:53 controller glance-registry[18943]: return pkg_resources.EntryPoint.parse(“x=” + s).load(False)
Apr 14 03:50:53 controller glance-registry[18943]: /usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py:22: DeprecationWarning: Parame…rately.
Apr 14 03:50:53 controller glance-registry[18943]: return pkg_resources.EntryPoint.parse(“x=” + s).load(False)
Apr 14 03:50:53 controller glance-registry[18943]: /usr/lib/python2.7/site-packages/glance/registry/api/init.py:36: DeprecationWarning:…emoval.
Apr 14 03:50:53 controller glance-registry[18943]: debtcollector.deprecate(“Glance Registry service has been “
Apr 14 03:50:53 controller glance-registry[18943]: /usr/lib/python2.7/site-packages/paste/deploy/util.py:55: DeprecationWarning: Using func… filter
Apr 14 03:50:53 controller glance-registry[18943]: val = callable(*args, **kw)
Hint: Some lines were ellipsized, use -l to show in full.

[root@controller ~]

#

1.2.2 Verify the glance installation
In this step we download an image from the Internet, upload it, and verify the service.
Note: the download may fail without a proxy; alternatively, prepare an image of your own.
Verify operation of the Image service using CirrOS, a small Linux image that helps you test your OpenStack deployment.

For more information about how to download and build images, see OpenStack Virtual Machine Image Guide. For information about how to manage images, see the OpenStack End User Guide.

Note

Perform these commands on the controller node.

Source the admin credentials to gain access to admin-only CLI commands:

$ . admin-openrc
Download the source image:

$ wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
Note

Install wget if your distribution does not include it.

Upload the image to the Image service using the QCOW2 disk format, bare container format, and public visibility so all projects can access it:

$ openstack image create "cirros" \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public

+——————+——————————————————+
| Field | Value |
+——————+——————————————————+
| checksum | 133eae9fb1c98f45894a4e60d8736619 |
| container_format | bare |
| created_at | 2015-03-26T16:52:10Z |
| disk_format | qcow2 |
| file | /v2/images/cc5c6982-4910-471e-b864-1098015901b5/file |
| id | cc5c6982-4910-471e-b864-1098015901b5 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | ae7a98326b9c455588edd2656d723b9d |
| protected | False |
| schema | /v2/schemas/image |
| size | 13200896 |
| status | active |
| tags | |
| updated_at | 2015-03-26T16:52:10Z |
| virtual_size | None |
| visibility | public |
+——————+——————————————————+
For information about the openstack image create parameters, see Create or update an image (glance) in the OpenStack User Guide.

For information about disk and container formats for images, see Disk and container formats for images in the OpenStack Virtual Machine Image Guide.

Note

OpenStack generates IDs dynamically, so you will see different values in the example command output.

Confirm upload of the image and validate attributes:

$ openstack image list

+————————————–+——–+——–+
| ID | Name | Status |
+————————————–+——–+——–+
| 38047887-61a7-41ea-9b49-27987d5e8bb9 | cirros | active |
+————————————–+——–+——–+

First source the admin environment variables:

[root@controller ~]

# . admin-openrc

Download an image from the official site; it is only about 13 MB, so not large.

[root@controller ~]

# wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
–2019-04-14 03:54:20– http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
Resolving download.cirros-cloud.net (download.cirros-cloud.net)… 64.90.42.85
Connecting to download.cirros-cloud.net (download.cirros-cloud.net)|64.90.42.85|:80… connected.
HTTP request sent, awaiting response… 200 OK
Length: 12716032 (12M) [text/plain]
Saving to: ‘cirros-0.4.0-x86_64-disk.img’

100%[===========================================================================================================>] 12,716,032 2.92MB/s in 4.2s

2019-04-14 03:54:30 (2.92 MB/s) – ‘cirros-0.4.0-x86_64-disk.img’ saved [12716032/12716032]

[root@controller ~]

# ll cirros-0.4.0-x86_64-disk.img
-rw-r–r– 1 root root 12716032 Nov 19 2017 cirros-0.4.0-x86_64-disk.img

[root@controller ~]

# ll -h cirros-0.4.0-x86_64-disk.img
-rw-r–r– 1 root root 13M Nov 19 2017 cirros-0.4.0-x86_64-disk.img

[root@controller ~]

#

Upload the image, name it cirros, and make it public:

[root@controller ~]

# openstack image create "cirros" \
  --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public
+——————+——————————————————————————————————————————————————————————————–+
| Field | Value |
+——————+——————————————————————————————————————————————————————————————–+
| checksum | 443b7623e27ecf03dc9e01ee93f67afe |
| container_format | bare |
| created_at | 2019-04-14T07:55:24Z |
| disk_format | qcow2 |
| file | /v2/images/e0810f42-705b-4bdd-9e8e-12313a8ff2e0/file |
| id | e0810f42-705b-4bdd-9e8e-12313a8ff2e0 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | 25a82cb651074f3494aeb5639d62ed22 |
| properties | os_hash_algo=’sha512′, os_hash_value=’6513f21e44aa3da349f248188a44bc304a3653a04122d8fb4535423c8e1d14cd6a153f735bb0982e2161b5b5186106570c17a9e58b64dd39390617cd5a350f78′, os_hidden=’False’ |
| protected | False |
| schema | /v2/schemas/image |
| size | 12716032 |
| status | active |
| tags | |
| updated_at | 2019-04-14T07:55:24Z |
| virtual_size | None |
| visibility | public |
+——————+——————————————————————————————————————————————————————————————–+

[root@controller ~]

#

List the images:

[root@controller ~]

# openstack image list
+————————————–+——–+——–+
| ID | Name | Status |
+————————————–+——–+——–+
| e0810f42-705b-4bdd-9e8e-12313a8ff2e0 | cirros | active |
+————————————–+——–+——–+

[root@controller ~]

#

Note: if you have many more images, you can upload them later through the dashboard, which is more convenient.
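If you would rather stay on the command line, a short loop handles batch uploads too. A minimal sketch, assuming the files in the current directory really are qcow2 images and that admin-openrc has already been sourced:

for img in *.img *.qcow2; do
    [ -e "$img" ] || continue            # skip unmatched glob patterns
    # image name = file name without its extension
    openstack image create "${img%.*}" \
        --file "$img" \
        --disk-format qcow2 --container-format bare \
        --public
done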


1.3 Deploying nova, the Compute service
This is the service on which virtual machines actually run; the controller node and every compute node need it deployed.
To avoid getting dizzy and making mistakes, I first went out to buy groceries and get some fresh air.

Controller node configuration
This section describes how to install and configure the Compute service, code-named nova, on the controller node.

Prerequisites
Before you install and configure the Compute service, you must create databases, service credentials, and API endpoints.

To create the databases, complete these steps:

Use the database access client to connect to the database server as the root user:

$ mysql -u root -p
Create the nova_api, nova, nova_cell0, and placement databases:

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> CREATE DATABASE placement;
Grant proper access to the databases:

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' \
  IDENTIFIED BY 'PLACEMENT_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' \
  IDENTIFIED BY 'PLACEMENT_DBPASS';
Replace NOVA_DBPASS and PLACEMENT_DBPASS with a suitable password.

Exit the database access client.
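All of the database work above can also be done in one non-interactive pass. A minimal sketch, assuming MYSQL_ROOT_PASS holds your MariaDB root password and that you substitute your real NOVA_DBPASS and PLACEMENT_DBPASS before running it:

# Create the nova/placement databases and grants in a single heredoc.
mysql -uroot -p"${MYSQL_ROOT_PASS}" <<'EOF'
CREATE DATABASE IF NOT EXISTS nova_api;
CREATE DATABASE IF NOT EXISTS nova;
CREATE DATABASE IF NOT EXISTS nova_cell0;
CREATE DATABASE IF NOT EXISTS placement;
GRANT ALL PRIVILEGES ON nova_api.*   TO 'nova'@'localhost'      IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.*   TO 'nova'@'%'              IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.*       TO 'nova'@'localhost'      IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.*       TO 'nova'@'%'              IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost'      IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%'              IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON placement.*  TO 'placement'@'localhost' IDENTIFIED BY 'PLACEMENT_DBPASS';
GRANT ALL PRIVILEGES ON placement.*  TO 'placement'@'%'         IDENTIFIED BY 'PLACEMENT_DBPASS';
FLUSH PRIVILEGES;
EOF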

Source the admin credentials to gain access to admin-only CLI commands:

$ . admin-openrc
Create the Compute service credentials:

Create the nova user:

$ openstack user create --domain default --password-prompt nova

User Password:
Repeat User Password:
+———————+———————————-+
| Field | Value |
+———————+———————————-+
| domain_id | default |
| enabled | True |
| id | 8a7dbf5279404537b1c7b86c033620fe |
| name | nova |
| options | {} |
| password_expires_at | None |
+———————+———————————-+
Add the admin role to the nova user:

$ openstack role add --project service --user nova admin
Note

This command provides no output.

Create the nova service entity:

$ openstack service create --name nova \
  --description "OpenStack Compute" compute

+————-+———————————-+
| Field | Value |
+————-+———————————-+
| description | OpenStack Compute |
| enabled | True |
| id | 060d59eac51b4594815603d75a00aba2 |
| name | nova |
| type | compute |
+————-+———————————-+
Create the Compute API service endpoints:

$ openstack endpoint create --region RegionOne \
  compute public http://controller:8774/v2.1

+————–+——————————————-+
| Field | Value |
+————–+——————————————-+
| enabled | True |
| id | 3c1caa473bfe4390a11e7177894bcc7b |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+————–+——————————————-+

$ openstack endpoint create --region RegionOne \
  compute internal http://controller:8774/v2.1

+————–+——————————————-+
| Field | Value |
+————–+——————————————-+
| enabled | True |
| id | e3c918de680746a586eac1f2d9bc10ab |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+————–+——————————————-+

$ openstack endpoint create --region RegionOne \
  compute admin http://controller:8774/v2.1

+————–+——————————————-+
| Field | Value |
+————–+——————————————-+
| enabled | True |
| id | 38f7af91666a47cfb97b4dc790b94424 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 060d59eac51b4594815603d75a00aba2 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+————–+——————————————-+
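The three Compute endpoints differ only in the interface type, so a small loop saves repeating the command; a minimal sketch (the same pattern works for the placement endpoints on port 8778 below):

for iface in public internal admin; do
    openstack endpoint create --region RegionOne \
        compute "$iface" http://controller:8774/v2.1
done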
Create a Placement service user using your chosen PLACEMENT_PASS:

$ openstack user create --domain default --password-prompt placement

User Password:
Repeat User Password:
+———————+———————————-+
| Field | Value |
+———————+———————————-+
| domain_id | default |
| enabled | True |
| id | fa742015a6494a949f67629884fc7ec8 |
| name | placement |
| options | {} |
| password_expires_at | None |
+———————+———————————-+
Add the Placement user to the service project with the admin role:

$ openstack role add --project service --user placement admin
Note

This command provides no output.

Create the Placement API entry in the service catalog:

$ openstack service create --name placement \
  --description "Placement API" placement

+————-+———————————-+
| Field | Value |
+————-+———————————-+
| description | Placement API |
| enabled | True |
| id | 2d1a27022e6e4185b86adac4444c495f |
| name | placement |
| type | placement |
+————-+———————————-+
Create the Placement API service endpoints:

$ openstack endpoint create --region RegionOne \
  placement public http://controller:8778

+————–+———————————-+
| Field | Value |
+————–+———————————-+
| enabled | True |
| id | 2b1b2637908b4137a9c2e0470487cbc0 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+————–+———————————-+

$ openstack endpoint create --region RegionOne \
  placement internal http://controller:8778

+————–+———————————-+
| Field | Value |
+————–+———————————-+
| enabled | True |
| id | 02bcda9a150a4bd7993ff4879df971ab |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+————–+———————————-+

$ openstack endpoint create --region RegionOne \
  placement admin http://controller:8778

+————–+———————————-+
| Field | Value |
+————–+———————————-+
| enabled | True |
| id | 3d71177b9e0f406f98cbff198d74b182 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2d1a27022e6e4185b86adac4444c495f |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+————–+———————————-+
Install and configure components
Note

Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (…) in the configuration snippets indicates potential default configuration options that you should retain.

Install the packages:

yum install openstack-nova-api openstack-nova-conductor \

openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler openstack-nova-placement-api
Edit the /etc/nova/nova.conf file and complete the following actions:

In the [DEFAULT] section, enable only the compute and metadata APIs:

[DEFAULT]

enabled_apis = osapi_compute,metadata
In the [api_database], [database], and [placement_database] sections, configure database access:

[api_database]

connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[database]

connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

[placement_database]

connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
Replace NOVA_DBPASS with the password you chose for the Compute databases and PLACEMENT_DBPASS for Placement database.

In the [DEFAULT] section, configure RabbitMQ message queue access:

[DEFAULT]

transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.

In the [api] and [keystone_authtoken] sections, configure Identity service access:

[api]

auth_strategy = keystone

[keystone_authtoken]

auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the password you chose for the nova user in the Identity service.

Note

Comment out or remove any other options in the [keystone_authtoken] section.

In the [DEFAULT] section, configure the my_ip option to use the management interface IP address of the controller node:

[DEFAULT]

my_ip = 10.0.0.11
In the [DEFAULT] section, enable support for the Networking service:

[DEFAULT]

use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
Note

By default, Compute uses an internal firewall driver. Since the Networking service includes a firewall driver, you must disable the Compute firewall driver by using the nova.virt.firewall.NoopFirewallDriver firewall driver.

Configure the [neutron] section of /etc/nova/nova.conf. Refer to the Networking service install guide for more details.

In the [vnc] section, configure the VNC proxy to use the management interface IP address of the controller node:

[vnc]

enabled = true

server_listen = $my_ip
server_proxyclient_address = $my_ip
In the [glance] section, configure the location of the Image service API:

[glance]

api_servers = http://controller:9292
In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]

lock_path = /var/lib/nova/tmp
In the [placement] section, configure the Placement API:

[placement]

region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
Replace PLACEMENT_PASS with the password you choose for the placement user in the Identity service. Comment out any other options in the [placement] section.
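If you prefer not to edit nova.conf by hand, the same options can be set non-interactively. A minimal sketch using openstack-config (a crudini wrapper from the openstack-utils package, so install that first if it is missing); substitute your own passwords and management IP:

conf=/etc/nova/nova.conf
openstack-config --set $conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set $conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set $conf DEFAULT my_ip 10.0.0.11
openstack-config --set $conf DEFAULT use_neutron true
openstack-config --set $conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set $conf api_database connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
openstack-config --set $conf database connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova
openstack-config --set $conf placement_database connection mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
openstack-config --set $conf api auth_strategy keystone
openstack-config --set $conf keystone_authtoken auth_url http://controller:5000/v3
openstack-config --set $conf keystone_authtoken memcached_servers controller:11211
openstack-config --set $conf keystone_authtoken auth_type password
openstack-config --set $conf keystone_authtoken project_domain_name Default
openstack-config --set $conf keystone_authtoken user_domain_name Default
openstack-config --set $conf keystone_authtoken project_name service
openstack-config --set $conf keystone_authtoken username nova
openstack-config --set $conf keystone_authtoken password NOVA_PASS
openstack-config --set $conf vnc enabled true
openstack-config --set $conf vnc server_listen '$my_ip'
openstack-config --set $conf vnc server_proxyclient_address '$my_ip'
openstack-config --set $conf glance api_servers http://controller:9292
openstack-config --set $conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set $conf placement region_name RegionOne
openstack-config --set $conf placement project_domain_name Default
openstack-config --set $conf placement project_name service
openstack-config --set $conf placement auth_type password
openstack-config --set $conf placement user_domain_name Default
openstack-config --set $conf placement auth_url http://controller:5000/v3
openstack-config --set $conf placement username placement
openstack-config --set $conf placement password PLACEMENT_PASS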

Due to a packaging bug, you must enable access to the Placement API by adding the following configuration to /etc/httpd/conf.d/00-nova-placement-api.conf:

<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
Restart the httpd service:

systemctl restart httpd
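Before moving on, a quick smoke test does not hurt. The assumption here is that the placement root URL answers version discovery without a token; a 403 Forbidden instead usually means the <Directory /usr/bin> block above is missing or httpd was not restarted:

# should print a small JSON "versions" document
curl -s http://controller:8778 | python -m json.tool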

Populate the nova-api and placement databases:

su -s /bin/sh -c "nova-manage api_db sync" nova

Note

Ignore any deprecation messages in this output.

Register the cell0 database:

su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

Create the cell1 cell:

su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

109e1d4b-536a-40d0-83c6-5f121b82b650
Populate the nova database:

su -s /bin/sh -c "nova-manage db sync" nova

Verify nova cell0 and cell1 are registered correctly:

su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

+——-+————————————–+
| Name | UUID |
+——-+————————————–+
| cell1 | 109e1d4b-536a-40d0-83c6-5f121b82b650 |
| cell0 | 00000000-0000-0000-0000-000000000000 |
+——-+————————————–+
Finalize installation
Start the Compute services and configure them to start when the system boots:

Note

nova-consoleauth is deprecated since 18.0.0 (Rocky) and will be removed in an upcoming release. Console proxies should be deployed per cell. If performing a fresh install (not an upgrade), then you likely do not need to install the nova-consoleauth service. See workarounds.enable_consoleauth for details.

systemctl enable openstack-nova-api.service \

openstack-nova-consoleauth openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl start openstack-nova-api.service \

openstack-nova-consoleauth openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service

(1) As with the other services, first create the corresponding databases; this time there are quite a few, so be careful not to mix them up.

[root@controller ~]

# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 29
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type ‘help;’ or ‘\h’ for help. Type ‘\c’ to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE nova_api;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> CREATE DATABASE nova;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> CREATE DATABASE nova_cell0;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> CREATE DATABASE placement;
Query OK, 1 row affected (0.00 sec)

Then grant privileges on the databases. Note that each database must be granted to both localhost and %; do not cut corners.
My password is 123456; adjust to your own situation.
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO ‘nova’@’localhost’ \
-> IDENTIFIED BY ‘123456’;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO ‘nova’@’%’ \
-> IDENTIFIED BY ‘123456’;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO ‘nova’@’localhost’ \
-> IDENTIFIED BY ‘123456’;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO ‘nova’@’%’ \
-> IDENTIFIED BY ‘123456’;
Query OK, 0 rows affected (0.01 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO ‘nova’@’localhost’ \
-> IDENTIFIED BY ‘123456’;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO ‘nova’@’%’ \
-> IDENTIFIED BY ‘123456’;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO ‘placement’@’localhost’ \
-> IDENTIFIED BY ‘123456’;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO ‘placement’@’%’ \
-> IDENTIFIED BY ‘123456’;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> exit
Bye

(2) Source the admin environment variables to prepare for creating the domain, users, services, API endpoints, and so on.
[root@controller ~]# . admin-openrc

Create the Compute service credentials.
First, create the nova user.

[root@controller ~]

# openstack user create –domain default –password-prompt nova
User Password:
Repeat User Password:
+———————+———————————-+
| Field | Value |
+———————+———————————-+
| domain_id | default |
| enabled | True |
| id | 942fea1a0f8a46cb9a18f5baee8edc2a |
| name | nova |
| options | {} |
| password_expires_at | None |
+———————+———————————-+

[root@controller ~]

#
Log:
2019-04-14 09:35:40.716 13511 INFO keystone.common.wsgi [req-3fbb172d-f413-41ff-a36d-8f937a89b187 – – – – -] POST http://controller:5000/v3/auth/tokens
2019-04-14 09:35:41.073 13510 INFO keystone.common.wsgi [req-3aeac1e9-6bea-44c3-a088-5bbd5e4a9397 – – – – -] POST http://controller:5000/v3/auth/tokens
2019-04-14 09:35:41.566 13508 INFO keystone.common.wsgi [req-b4c5501c-5112-48be-9421-bb676d2d2eb9 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] GET http://controller:5000/v3/domains/default
2019-04-14 09:35:41.575 13508 WARNING py.warnings [req-b4c5501c-5112-48be-9421-bb676d2d2eb9 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] /usr/lib/python2.7/site-packages/oslo_policy/policy.py:896: UserWarning: Policy identity:get_domain failed scope check. The token used to make the request was project scoped but the policy requires [‘system’] scope. This behavior may change in the future where using the intended scope is required
warnings.warn(msg)

2019-04-14 09:35:56.957 13511 INFO keystone.common.wsgi [req-232dc41e-99eb-4563-8382-a9dd44546337 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] POST http://controller:5000/v3/users
Add the nova user to the admin role.
This command produces no output; the details are in the log.

[root@controller ~]

# openstack role add –project service –user nova admin
Log:
2019-04-14 09:36:56.493 13510 INFO keystone.common.wsgi [req-96d4a10c-de6b-4712-849f-0714f896c0e0 – – – – -] POST http://controller:5000/v3/auth/tokens
2019-04-14 09:36:56.851 13507 INFO keystone.common.wsgi [req-f5c1d47e-1531-4146-bfef-345079ab1473 – – – – -] POST http://controller:5000/v3/auth/tokens
2019-04-14 09:36:57.278 13508 INFO keystone.common.wsgi [req-f443823d-9996-4fc7-a798-f8cc925415dc 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] GET http://controller:5000/v3/roles/admin
2019-04-14 09:36:57.282 13508 WARNING keystone.common.wsgi [req-f443823d-9996-4fc7-a798-f8cc925415dc 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] Could not find role: admin.: RoleNotFound: Could not find role: admin.
2019-04-14 09:36:57.425 13509 INFO keystone.common.wsgi [req-c90aa5b1-e320-432f-9a29-9ac8c5b9b340 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] GET http://controller:5000/v3/roles?name=admin
2019-04-14 09:36:57.434 13509 WARNING py.warnings [req-c90aa5b1-e320-432f-9a29-9ac8c5b9b340 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] /usr/lib/python2.7/site-packages/oslo_policy/policy.py:896: UserWarning: Policy identity:list_roles failed scope check. The token used to make the request was project scoped but the policy requires [‘system’] scope. This behavior may change in the future where using the intended scope is required
warnings.warn(msg)

2019-04-14 09:36:57.511 13507 INFO keystone.common.wsgi [req-5e09f147-fe7b-4d25-951d-40094d301cc5 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] GET http://controller:5000/v3/users/nova
2019-04-14 09:36:57.518 13507 WARNING keystone.common.wsgi [req-5e09f147-fe7b-4d25-951d-40094d301cc5 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] Could not find user: nova.: UserNotFound: Could not find user: nova.
2019-04-14 09:36:57.595 13510 INFO keystone.common.wsgi [req-42e78527-9924-4cc7-a5d7-c2b7c03bd5ef 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] GET http://controller:5000/v3/users?name=nova
2019-04-14 09:36:57.702 13509 INFO keystone.common.wsgi [req-90762380-4b00-40da-9bcb-5a8971c66c3a 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] GET http://controller:5000/v3/projects/service
2019-04-14 09:36:57.705 13509 WARNING keystone.common.wsgi [req-90762380-4b00-40da-9bcb-5a8971c66c3a 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] Could not find project: service.: ProjectNotFound: Could not find project: service.
2019-04-14 09:36:57.783 13511 INFO keystone.common.wsgi [req-f2d58b0f-ff4a-400b-bc33-2d4a98cf96ab 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] GET http://controller:5000/v3/projects?name=service
2019-04-14 09:36:57.792 13511 WARNING py.warnings [req-f2d58b0f-ff4a-400b-bc33-2d4a98cf96ab 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] /usr/lib/python2.7/site-packages/oslo_policy/policy.py:896: UserWarning: Policy identity:list_projects failed scope check. The token used to make the request was project scoped but the policy requires [‘system’] scope. This behavior may change in the future where using the intended scope is required
warnings.warn(msg)

2019-04-14 09:36:57.873 13510 INFO keystone.common.wsgi [req-984c0578-9a07-44e0-9cb8-6954bd751411 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] PUT http://controller:5000/v3/projects/74731514589347a088d52a50c4b9c88b/users/942fea1a0f8a46cb9a18f5baee8edc2a/roles/e064f5e2780a4ef996ab13cf0e8df715
2019-04-14 09:36:57.900 13510 WARNING py.warnings [req-984c0578-9a07-44e0-9cb8-6954bd751411 5a1ba2f524234e76a97b18a6eb7419c0 25a82cb651074f3494aeb5639d62ed22 – default default] /usr/lib/python2.7/site-packages/oslo_policy/policy.py:896: UserWarning: Policy identity:create_grant failed scope check. The token used to make the request was project scoped but the policy requires [‘system’] scope. This behavior may change in the future where using the intended scope is required
warnings.warn(msg)
(3) Create the nova service entity
(I will not paste the routine logs from here on.)

[root@controller ~]

# openstack service create –name nova \

–description “OpenStack Compute” compute
+————-+———————————-+
| Field | Value |
+————-+———————————-+
| description | OpenStack Compute |
| enabled | True |
| id | 3e692d4c610f4d279c3da4ea19158a3a |
| name | nova |
| type | compute |
+————-+———————————-+
(4) Create the Compute API service endpoints

[root@controller ~]

# openstack endpoint create –region RegionOne \
compute public http://controller:8774/v2.1
+————–+———————————-+
| Field | Value |
+————–+———————————-+
| enabled | True |
| id | 3050ab5ae8974469a099b885a8530c58 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 3e692d4c610f4d279c3da4ea19158a3a |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+————–+———————————-+

[root@controller ~]

# openstack endpoint create –region RegionOne \
compute internal http://controller:8774/v2.1
+————–+———————————-+
| Field | Value |
+————–+———————————-+
| enabled | True |
| id | 36a60694691e46b4ad499665d45a97d0 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 3e692d4c610f4d279c3da4ea19158a3a |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+————–+———————————-+

[root@controller ~]

#

[root@controller ~]

# openstack endpoint create –region RegionOne \
compute admin http://controller:8774/v2.1
+————–+———————————-+
| Field | Value |
+————–+———————————-+
| enabled | True |
| id | 066760a082dc47f686a425ff5c247f50 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 3e692d4c610f4d279c3da4ea19158a3a |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+————–+—–
(5) Create the Placement service user; the user name is simply placement

[root@controller ~]

# openstack user create –domain default –password-prompt placement
User Password:
Repeat User Password:
+———————+———————————-+
| Field | Value |
+———————+———————————-+
| domain_id | default |
| enabled | True |
| id | 2d83f8b479e140d0b54af0b08d1560a3 |
| name | placement |
| options | {} |
| password_expires_at | None |
+———————+———————————-+

[root@controller ~]

#
(6) Add the newly created placement user to the admin role

[root@controller ~]

# openstack role add –project service –user placement admin
(7) Create the Placement API service entity

[root@controller ~]

# openstack service create –name placement \
–description “Placement API” placement
+————-+———————————-+
| Field | Value |
+————-+———————————-+
| description | Placement API |
| enabled | True |
| id | 0d6e998b73e84383b1bec4ed91fcf3b9 |
| name | placement |
| type | placement |
+————-+———————————-+

[root@controller ~]

#
(8) Create the Placement API service endpoints

[root@controller ~]

# openstack endpoint create –region RegionOne \
placement public http://controller:8778
+————–+———————————-+
| Field | Value |
+————–+———————————-+
| enabled | True |
| id | f74aa09df5484c84997678aed180d186 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 0d6e998b73e84383b1bec4ed91fcf3b9 |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+————–+———————————-+

[root@controller ~]

# openstack endpoint create –region RegionOne \
placement internal http://controller:8778
+————–+———————————-+
| Field | Value |
+————–+———————————-+
| enabled | True |
| id | a4719725cbe24d63b5094efa2b89f99a |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 0d6e998b73e84383b1bec4ed91fcf3b9 |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+————–+———————————-+

[root@controller ~]

# openstack endpoint create –region RegionOne \
placement admin http://controller:8778
+————–+———————————-+
| Field | Value |
+————–+———————————-+
| enabled | True |
| id | f3e846cd16bc4b84965fd776e71af5e8 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 0d6e998b73e84383b1bec4ed91fcf3b9 |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+————–+———————————-+

[root@controller ~]

#

1.3.1 Install and configure the components on the controller node
(1) Install the packages

[root@controller ~]

# yum install openstack-nova-api openstack-nova-conductor \

openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler openstack-nova-placement-api
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
epel/x86_64/metalink | 12 kB 00:00:00

  • base: mirror.sjc02.svwh.net
  • centos-qemu-ev: sjc.edge.kernel.org
  • epel: mirror.oss.ou.edu
  • extras: linux.mirrors.es.net
  • updates: mirror.sjc02.svwh.net
    base | 3.6 kB 00:00:00
    centos-ceph-luminous | 2.9 kB 00:00:00
    centos-openstack-rocky | 2.9 kB 00:00:00
    centos-qemu-ev | 2.9 kB 00:00:00
    epel | 4.7 kB 00:00:00
    extras | 3.4 kB 00:00:00
    updates | 3.4 kB 00:00:00
    (1/2): epel/x86_64/updateinfo | 985 kB 00:00:04
    (2/2): epel/x86_64/primary_db | 6.7 MB 00:00:07
    Resolving Dependencies
    –> Running transaction check
    —> Package openstack-nova-api.noarch 1:18.2.0-1.el7 will be installed
    –> Processing Dependency: openstack-nova-common = 1:18.2.0-1.el7 for package: 1:openstack-nova-api-18.2.0-1.el7.noarch
    —> Package openstack-nova-conductor.noarch 1:18.2.0-1.el7 will be installed
    —> Package openstack-nova-console.noarch 1:18.2.0-1.el7 will be installed
    –> Processing Dependency: python-websockify >= 0.8.0 for package: 1:openstack-nova-console-18.2.0-1.el7.noarch
    —> Package openstack-nova-novncproxy.noarch 1:18.2.0-1.el7 will be installed
    –> Processing Dependency: novnc for package: 1:openstack-nova-novncproxy-18.2.0-1.el7.noarch
    —> Package openstack-nova-placement-api.noarch 1:18.2.0-1.el7 will be installed
    —> Package openstack-nova-scheduler.noarch 1:18.2.0-1.el7 will be installed
#

Installed:
openstack-nova-api.noarch 1:18.2.0-1.el7 openstack-nova-conductor.noarch 1:18.2.0-1.el7 openstack-nova-console.noarch 1:18.2.0-1.el7
openstack-nova-novncproxy.noarch 1:18.2.0-1.el7 openstack-nova-placement-api.noarch 1:18.2.0-1.el7 openstack-nova-scheduler.noarch 1:18.2.0-1.el7

Dependency Installed:
novnc.noarch 0:0.5.1-2.el7 openstack-nova-common.noarch 1:18.2.0-1.el7 python-kazoo.noarch 0:2.2.1-1.el7
python-nova.noarch 1:18.2.0-1.el7 python-oslo-versionedobjects-lang.noarch 0:1.33.3-1.el7 python-paramiko.noarch 0:2.1.1-9.el7
python-websockify.noarch 0:0.8.0-1.el7 python2-microversion-parse.noarch 0:0.2.1-1.el7 python2-os-traits.noarch 0:0.9.0-1.el7
python2-os-vif.noarch 0:1.11.1-1.el7 python2-oslo-reports.noarch 0:1.28.0-1.el7 python2-oslo-versionedobjects.noarch 0:1.33.3-1.el7
python2-psutil.x86_64 0:5.2.2-2.el7 python2-pyroute2.noarch 0:0.4.21-1.el7 python2-redis.noarch 0:2.10.6-1.el7
python2-tooz.noarch 0:1.62.1-1.el7 python2-voluptuous.noarch 0:0.11.5-1.el7.1 python2-zake.noarch 0:0.2.2-2.el7

Complete!
(2) Edit the /etc/nova/nova.conf configuration file
There is a lot of content here, so read it carefully.

[root@controller ~]

# vim /etc/nova/nova.conf

[root@controller ~]

# grep -v '^#' /etc/nova/nova.conf | grep -v '^$'
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 192.168.0.101
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]

auth_strategy = keystone

[api_database]

connection = mysql+pymysql://nova:123456@controller/nova_api

[barbican]

[cache]

[cells]

[cinder]

[compute]

[conductor]

[console]

[consoleauth]

[cors]

[database]

connection = mysql+pymysql://nova:123456@controller/nova

[devices]

[ephemeral_storage_encryption]

[filter_scheduler]

[glance]

api_servers = http://controller:9292

[guestfs]

[healthcheck]

[hyperv]

[ironic]

[key_manager]

[keystone]

[keystone_authtoken]

auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123456

[libvirt]

[matchmaker_redis]

[metrics]

[mks]

[neutron]

[notifications]

[osapi_v21]

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[pci]

[placement]

region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123456

[placement_database]

connection = mysql+pymysql://placement:123456@controller/placement

[powervm]

[profiler]

[quota]

[rdp]

[remote_debug]

[scheduler]

[serial_console]

[service_user]

[spice]

[upgrade_levels]

[vault]

[vendordata_dynamic_auth]

[vmware]

[vnc]

enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[workarounds]

[wsgi]

[xenserver]

[xvp]

[zvm]

Due to a packaging bug reported by the community, the Placement API must be granted access to /usr/bin, so the httpd configuration file needs to be modified.

[root@controller ~]

# vim /etc/httpd/conf.d/00-nova-placement-api.conf

[root@controller ~]

# cat /etc/httpd/conf.d/00-nova-placement-api.conf
Listen 8778

<VirtualHost *:8778>
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
  WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova group=nova
  WSGIScriptAlias / /usr/bin/nova-placement-api
  <IfVersion >= 2.4>
    ErrorLogFormat "%M"
  </IfVersion>
  ErrorLog /var/log/nova/nova-placement-api.log
  #SSLEngine On
  #SSLCertificateFile ...
  #SSLCertificateKeyFile ...
</VirtualHost>

Alias /nova-placement-api /usr/bin/nova-placement-api
<Location /nova-placement-api>
  SetHandler wsgi-script
  Options +ExecCGI
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
</Location>

<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
When done, restart the httpd service and verify that it is running normally.

[root@controller ~]

# systemctl restart httpd

[root@controller ~]

# systemctl status httpd
● httpd.service – The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2019-04-14 10:06:56 EDT; 6s ago
Docs: man:httpd(8)
man:apachectl(8)
Process: 1939 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, status=0/SUCCESS)
Main PID: 1949 (httpd)
Status: “Processing requests…”
Tasks: 38
CGroup: /system.slice/httpd.service
├─1949 /usr/sbin/httpd -DFOREGROUND
├─1950 /usr/sbin/httpd -DFOREGROUND
├─1951 /usr/sbin/httpd -DFOREGROUND
├─1952 /usr/sbin/httpd -DFOREGROUND
├─1953 (wsgi:keystone- -DFOREGROUND
├─1954 (wsgi:keystone- -DFOREGROUND
├─1955 (wsgi:keystone- -DFOREGROUND
├─1956 (wsgi:keystone- -DFOREGROUND
├─1957 (wsgi:keystone- -DFOREGROUND
├─1958 /usr/sbin/httpd -DFOREGROUND
├─1959 /usr/sbin/httpd -DFOREGROUND
├─1960 /usr/sbin/httpd -DFOREGROUND
├─1961 /usr/sbin/httpd -DFOREGROUND
└─1962 /usr/sbin/httpd -DFOREGROUND

Apr 14 10:06:56 controller systemd[1]: Starting The Apache HTTP Server…
Apr 14 10:06:56 controller systemd[1]: Started The Apache HTTP Server.

[root@controller ~]

#

[root@controller ~]

# ss -antup | grep httpd
tcp LISTEN 0 128 :::5000 :::* users:((“httpd”,pid=1962,fd=8),(“httpd”,pid=1961,fd=8),(“httpd”,pid=1960,fd=8),(“httpd”,pid=1959,fd=8),(“httpd”,pid=1958,fd=8),(“httpd”,pid=1949,fd=8))
tcp LISTEN 0 128 :::8778 :::* users:((“httpd”,pid=1962,fd=6),(“httpd”,pid=1961,fd=6),(“httpd”,pid=1960,fd=6),(“httpd”,pid=1959,fd=6),(“httpd”,pid=1958,fd=6),(“httpd”,pid=1949,fd=6))
tcp LISTEN 0 128 :::80 :::* users:((“httpd”,pid=1962,fd=4),(“httpd”,pid=1961,fd=4),(“httpd”,pid=1960,fd=4),(“httpd”,pid=1959,fd=4),(“httpd”,pid=1958,fd=4),(“httpd”,pid=1949,fd=4))

[root@controller ~]

#

(3) If everything above is fine, now populate the nova-api and placement databases.

[root@controller ~]

# su -s /bin/sh -c "nova-manage api_db sync" nova
Log:

[root@controller ~]

# tail -f /var/log/nova/nova-manage.log
2019-04-14 10:09:45.812 2074 INFO migrate.versioning.api [-] 56 -> 57…
2019-04-14 10:09:45.855 2074 INFO migrate.versioning.api [-] done
2019-04-14 10:09:45.855 2074 INFO migrate.versioning.api [-] 57 -> 58…
2019-04-14 10:09:46.197 2074 INFO migrate.versioning.api [-] done
2019-04-14 10:09:46.197 2074 INFO migrate.versioning.api [-] 58 -> 59…
2019-04-14 10:09:46.624 2074 INFO migrate.versioning.api [-] done
2019-04-14 10:09:46.624 2074 INFO migrate.versioning.api [-] 59 -> 60…
2019-04-14 10:09:47.024 2074 INFO migrate.versioning.api [-] done
2019-04-14 10:09:47.024 2074 INFO migrate.versioning.api [-] 60 -> 61…
2019-04-14 10:09:47.489 2074 INFO migrate.versioning.api [-] done

(4) Register the cell0 database

[root@controller ~]

# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Log:
(not captured; I will paste it here later if I find it)

(5) Create the cell1 cell

[root@controller ~]

# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
e1760c51-f39c-40cf-872f-3c97ff1a7677

(6) Populate the nova database
There is a lot of data; since I am on a mechanical hard disk this took more than 10 minutes, and honestly I am starting to regret not using an SSD.


[root@controller ~]

# su -s /bin/sh -c "nova-manage db sync" nova
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1831, u’Duplicate index block_device_mapping_instance_uuid_virtual_name_device_name_idx. This is deprecated and will be disallowed in a future release.’)
result = self._query(query)
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1831, u’Duplicate index uniq_instances0uuid. This is deprecated and will be disallowed in a future release.’)
result = self._query(query)

[root@controller ~]

#
There are many log lines; here is just the tail.
2019-04-14 10:20:19.488 2519 INFO migrate.versioning.api [req-d4c6925e-429d-494c-8289-fad8d0aa47b9 – – – – -] done
2019-04-14 10:20:19.489 2519 INFO migrate.versioning.api [req-d4c6925e-429d-494c-8289-fad8d0aa47b9 – – – – -] 384 -> 385…
2019-04-14 10:20:19.610 2519 INFO migrate.versioning.api [req-d4c6925e-429d-494c-8289-fad8d0aa47b9 – – – – -] done
2019-04-14 10:20:19.611 2519 INFO migrate.versioning.api [req-d4c6925e-429d-494c-8289-fad8d0aa47b9 – – – – -] 385 -> 386…
2019-04-14 10:20:19.681 2519 INFO migrate.versioning.api [req-d4c6925e-429d-494c-8289-fad8d0aa47b9 – – – – -] done
2019-04-14 10:20:19.682 2519 INFO migrate.versioning.api [req-d4c6925e-429d-494c-8289-fad8d0aa47b9 – – – – -] 386 -> 387…
2019-04-14 10:20:19.703 2519 INFO migrate.versioning.api [req-d4c6925e-429d-494c-8289-fad8d0aa47b9 – – – – -] done
2019-04-14 10:20:19.704 2519 INFO migrate.versioning.api [req-d4c6925e-429d-494c-8289-fad8d0aa47b9 – – – – -] 387 -> 388…
2019-04-14 10:20:19.729 2519 INFO migrate.versioning.api [req-d4c6925e-429d-494c-8289-fad8d0aa47b9 – – – – -] done
2019-04-14 10:20:19.729 2519 INFO migrate.versioning.api [req-d4c6925e-429d-494c-8289-fad8d0aa47b9 – – – – -] 388 -> 389…
2019-04-14 10:20:19.946 2519 INFO migrate.versioning.api [req-d4c6925e-429d-494c-8289-fad8d0aa47b9 – – – – -] done
2019-04-14 10:20:19.947 2519 INFO migrate.versioning.api [req-d4c6925e-429d-494c-8289-fad8d0aa47b9 – – – – -] 389 -> 390…
2019-04-14 10:20:20.654 2519 INFO migrate.versioning.api [req-d4c6925e-429d-494c-8289-fad8d0aa47b9 – – – – -] done

(7) Verify that cell0 and cell1 are registered correctly

[root@controller ~]

# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
+——-+————————————–+————————————+————————————————-+———-+
| Name | UUID | Transport URL | Database Connection | Disabled |
+——-+————————————–+————————————+————————————————-+———-+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/                             | mysql+pymysql://nova:@controller/nova_cell0     | False    |
| cell1 | e1760c51-f39c-40cf-872f-3c97ff1a7677 | rabbit://openstack:@controller     | mysql+pymysql://nova:****@controller/nova       | False    |
+——-+————————————–+————————————+————————————————-+———-+

[root@controller ~]

#

Note: nova-consoleauth is deprecated since 18.0.0 (the current Rocky release) and will be removed in a future release; console proxies should be deployed per cell. For a fresh install there is no need to install the nova-consoleauth service.
Start the services and enable them to start at boot.

[root@controller ~]

# systemctl enable openstack-nova-api.service \

openstack-nova-consoleauth openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-api.service to /usr/lib/systemd/system/openstack-nova-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-consoleauth.service to /usr/lib/systemd/system/openstack-nova-consoleauth.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-scheduler.service to /usr/lib/systemd/system/openstack-nova-scheduler.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-conductor.service to /usr/lib/systemd/system/openstack-nova-conductor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-novncproxy.service to /usr/lib/systemd/system/openstack-nova-novncproxy.service.

[root@controller ~]

# systemctl start openstack-nova-api.service \
openstack-nova-consoleauth openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
Verify:

[root@controller ~]

# systemctl status openstack-nova-api.service openstack-nova-consoleauth openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
● openstack-nova-api.service – OpenStack Nova API Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-api.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2019-04-14 10:26:25 EDT; 7s ago
Main PID: 2969 (nova-api)
Tasks: 9
CGroup: /system.slice/openstack-nova-api.service
├─2969 /usr/bin/python2 /usr/bin/nova-api
├─3043 /usr/bin/python2 /usr/bin/nova-api
├─3044 /usr/bin/python2 /usr/bin/nova-api
├─3045 /usr/bin/python2 /usr/bin/nova-api
├─3046 /usr/bin/python2 /usr/bin/nova-api
├─3051 /usr/bin/python2 /usr/bin/nova-api
├─3052 /usr/bin/python2 /usr/bin/nova-api
├─3053 /usr/bin/python2 /usr/bin/nova-api
└─3054 /usr/bin/python2 /usr/bin/nova-api

Apr 14 10:26:21 controller systemd[1]: Starting OpenStack Nova API Server…
Apr 14 10:26:25 controller systemd[1]: Started OpenStack Nova API Server.

● openstack-nova-consoleauth.service – OpenStack Nova VNC console auth Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-consoleauth.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2019-04-14 10:26:24 EDT; 8s ago
Main PID: 2970 (nova-consoleaut)
Tasks: 1
CGroup: /system.slice/openstack-nova-consoleauth.service
└─2970 /usr/bin/python2 /usr/bin/nova-consoleauth

Apr 14 10:26:21 controller systemd[1]: Starting OpenStack Nova VNC console auth Server…
Apr 14 10:26:24 controller systemd[1]: Started OpenStack Nova VNC console auth Server.

● openstack-nova-scheduler.service – OpenStack Nova Scheduler Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-scheduler.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2019-04-14 10:26:24 EDT; 8s ago
Main PID: 2971 (nova-scheduler)
Tasks: 5
CGroup: /system.slice/openstack-nova-scheduler.service
├─2971 /usr/bin/python2 /usr/bin/nova-scheduler
├─3026 /usr/bin/python2 /usr/bin/nova-scheduler
├─3027 /usr/bin/python2 /usr/bin/nova-scheduler
├─3028 /usr/bin/python2 /usr/bin/nova-scheduler
└─3029 /usr/bin/python2 /usr/bin/nova-scheduler

Apr 14 10:26:21 controller systemd[1]: Starting OpenStack Nova Scheduler Server…
Apr 14 10:26:24 controller systemd[1]: Started OpenStack Nova Scheduler Server.

● openstack-nova-conductor.service – OpenStack Nova Conductor Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-conductor.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2019-04-14 10:26:24 EDT; 8s ago
Main PID: 2972 (nova-conductor)
Tasks: 5
CGroup: /system.slice/openstack-nova-conductor.service
├─2972 /usr/bin/python2 /usr/bin/nova-conductor
├─3035 /usr/bin/python2 /usr/bin/nova-conductor
├─3036 /usr/bin/python2 /usr/bin/nova-conductor
├─3037 /usr/bin/python2 /usr/bin/nova-conductor
└─3038 /usr/bin/python2 /usr/bin/nova-conductor

Apr 14 10:26:21 controller systemd[1]: Starting OpenStack Nova Conductor Server…
Apr 14 10:26:24 controller systemd[1]: Started OpenStack Nova Conductor Server.

● openstack-nova-novncproxy.service – OpenStack Nova NoVNC Proxy Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-novncproxy.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2019-04-14 10:26:21 EDT; 11s ago
Main PID: 2973 (nova-novncproxy)
Tasks: 1
CGroup: /system.slice/openstack-nova-novncproxy.service
└─2973 /usr/bin/python2 /usr/bin/nova-novncproxy –web /usr/share/novnc/

Apr 14 10:26:21 controller systemd[1]: Started OpenStack Nova NoVNC Proxy Server.

Compute node configuration
This section describes how to install and configure the Compute service on a compute node. The service supports several hypervisors to deploy instances or virtual machines (VMs). For simplicity, this configuration uses the Quick EMUlator (QEMU) hypervisor with the kernel-based VM (KVM) extension on compute nodes that support hardware acceleration for virtual machines. On legacy hardware, this configuration uses the generic QEMU hypervisor. You can follow these instructions with minor modifications to horizontally scale your environment with additional compute nodes.

Note

This section assumes that you are following the instructions in this guide step-by-step to configure the first compute node. If you want to configure additional compute nodes, prepare them in a similar fashion to the first compute node in the example architectures section. Each additional compute node requires a unique IP address.

Install and configure components
Note

Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (…) in the configuration snippets indicates potential default configuration options that you should retain.

Install the packages:

yum install openstack-nova-compute

Edit the /etc/nova/nova.conf file and complete the following actions:

In the [DEFAULT] section, enable only the compute and metadata APIs:

[DEFAULT]

enabled_apis = osapi_compute,metadata
In the [DEFAULT] section, configure RabbitMQ message queue access:

[DEFAULT]

transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.

In the [api] and [keystone_authtoken] sections, configure Identity service access:

[api]

auth_strategy = keystone

[keystone_authtoken]

auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the password you chose for the nova user in the Identity service.

Note

Comment out or remove any other options in the [keystone_authtoken] section.

In the [DEFAULT] section, configure the my_ip option:

[DEFAULT]

my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your compute node, typically 10.0.0.31 for the first node in the example architecture.

In the [DEFAULT] section, enable support for the Networking service:

[DEFAULT]

use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
Note

By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall service by using the nova.virt.firewall.NoopFirewallDriver firewall driver.

Configure the [neutron] section of /etc/nova/nova.conf. Refer to the Networking service install guide for more details.

In the [vnc] section, enable and configure remote console access:

[vnc]

enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
The server component listens on all IP addresses and the proxy component only listens on the management interface IP address of the compute node. The base URL indicates the location where you can use a web browser to access remote consoles of instances on this compute node.

Note

If the web browser to access remote consoles resides on a host that cannot resolve the controller hostname, you must replace controller with the management interface IP address of the controller node.

In the [glance] section, configure the location of the Image service API:

[glance]

api_servers = http://controller:9292
In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]

lock_path = /var/lib/nova/tmp
In the [placement] section, configure the Placement API:

[placement]

region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
Replace PLACEMENT_PASS with the password you choose for the placement user in the Identity service. Comment out any other options in the [placement] section.

Finalize installation
Determine whether your compute node supports hardware acceleration for virtual machines:

$ egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of one or greater, your compute node supports hardware acceleration which typically requires no additional configuration.

If this command returns a value of zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.

Edit the [libvirt] section in the /etc/nova/nova.conf file as follows:

[libvirt]

virt_type = qemu
Start the Compute service including its dependencies and configure them to start automatically when the system boots:

systemctl enable libvirtd.service openstack-nova-compute.service

systemctl start libvirtd.service openstack-nova-compute.service

Note

If the nova-compute service fails to start, check /var/log/nova/nova-compute.log. The error message AMQP server on controller:5672 is unreachable likely indicates that the firewall on the controller node is preventing access to port 5672. Configure the firewall to open port 5672 on the controller node and restart nova-compute service on the compute node.
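A minimal sketch of that firewall fix on the controller node, assuming firewalld is the active firewall:

# open the RabbitMQ port persistently and apply the change
firewall-cmd --permanent --add-port=5672/tcp
firewall-cmd --reload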

Add the compute node to the cell database
Important

Run the following commands on the controller node.

Source the admin credentials to enable admin-only CLI commands, then confirm there are compute hosts in the database:

$ . admin-openrc

$ openstack compute service list –service nova-compute
+—-+——-+————–+——+——-+———+—————————-+
| ID | Host | Binary | Zone | State | Status | Updated At |
+—-+——-+————–+——+——-+———+—————————-+
| 1 | node1 | nova-compute | nova | up | enabled | 2017-04-14T15:30:44.000000 |
+—-+——-+————–+——+——-+———+—————————-+
Discover compute hosts:

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell ‘cell1’: ad5a5985-a719-4567-98d8-8d148aaae4bc
Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc
Checking host mapping for compute host ‘compute’: fe58ddc1-1d65-4f87-9456-bc040dc106b3
Creating host mapping for compute host ‘compute’: fe58ddc1-1d65-4f87-9456-bc040dc106b3
Note

When you add new compute nodes, you must run nova-manage cell_v2 discover_hosts on the controller node to register those new compute nodes. Alternatively, you can set an appropriate interval in /etc/nova/nova.conf:

[scheduler]

discover_hosts_in_cells_interval = 300
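When you add more compute nodes later, you can either rerun discovery by hand or switch on periodic discovery. A minimal sketch of both, assuming openstack-config from the openstack-utils package is available (crudini --set is equivalent):

# one-off discovery after adding a node
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
# or let the scheduler rediscover every 300 seconds
openstack-config --set /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 300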

Go back to the jump host to install the packages and distribute the configuration file to every node; do not hand-craft it on each machine, which very easily leads to one-off mistakes.
(1) Install the packages

[root@exp2 ~]

# ansible compute -m yum -a 'name=openstack-nova-compute state=installed disable_gpg_check=yes'
I am not pasting the output; there is simply too much.
(2) Modify the /etc/nova/nova.conf configuration file

[root@exp2 ~]

# grep -v '^#' nova.conf | grep -v '^$'
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 192.168.0.102
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]

auth_strategy = keystone

[api_database]

[barbican]

[cache]

[cells]

[cinder]

[compute]

[conductor]

[console]

[consoleauth]

[cors]

[database]

[devices]

[ephemeral_storage_encryption]

[filter_scheduler]

[glance]

api_servers = http://controller:9292

[guestfs]

[healthcheck]

[hyperv]

[ironic]

[key_manager]

[keystone]

[keystone_authtoken]

auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123456

[libvirt]

[matchmaker_redis]

[metrics]

[mks]

[neutron]

[notifications]

[osapi_v21]

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[pci]

[placement]

region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123456

[placement_database]

[powervm]

[profiler]

[quota]

[rdp]

[remote_debug]

[scheduler]

[serial_console]

[service_user]

[spice]

[upgrade_levels]

[vault]

[vendordata_dynamic_auth]

[vmware]

[vnc]

enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[workarounds]

[wsgi]

[xenserver]

[xvp]

[zvm]

[root@exp2 ~]

#
(3) Distribute the configuration file to the compute nodes, then go to each compute node and adjust my_ip (an alternative one-shot approach is sketched after the transfer output below).

[root@exp2 ~]

# for i in {102..107};do rsync --delete -r -v /root/nova.conf 192.168.0.$i:/etc/nova/nova.conf ;done
sending incremental file list
nova.conf

sent 2,337 bytes received 3,401 bytes 2,295.20 bytes/sec
total size is 392,100 speedup is 68.33
sending incremental file list
nova.conf

sent 8,054 bytes received 3,389 bytes 3,269.43 bytes/sec
total size is 392,100 speedup is 34.27
sending incremental file list
nova.conf

sent 8,054 bytes received 3,389 bytes 4,577.20 bytes/sec
total size is 392,100 speedup is 34.27
sending incremental file list
nova.conf

sent 8,054 bytes received 3,389 bytes 3,269.43 bytes/sec
total size is 392,100 speedup is 34.27
sending incremental file list
nova.conf

sent 8,054 bytes received 3,389 bytes 22,886.00 bytes/sec
total size is 392,100 speedup is 34.27
sending incremental file list
nova.conf

sent 8,054 bytes received 3,389 bytes 3,269.43 bytes/sec
total size is 392,100 speedup is 34.27
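As an aside, the copy and the per-host my_ip fix can also be done in one pass from the jump host. A minimal sketch, assuming the inventory hostnames compute1..compute6 map to 192.168.0.102-107 as in the /etc/hosts below; the copy module also sets the group ownership that nova needs:

ansible compute -m copy -a 'src=/root/nova.conf dest=/etc/nova/nova.conf owner=root group=nova mode=0640'
for n in {1..6}; do
    ip=192.168.0.$((101 + n))
    # rewrite the my_ip line that the shared template already contains
    ansible "compute$n" -m lineinfile \
        -a "path=/etc/nova/nova.conf regexp='^my_ip' line='my_ip = $ip'"
done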
Confirm that every node has been modified correctly; here is /etc/hosts for comparison.

[root@controller ~]

# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.101 controller
192.168.0.102 compute1
192.168.0.103 compute2
192.168.0.104 compute3
192.168.0.105 compute4
192.168.0.106 compute5
192.168.0.107 compute6
192.168.0.9 block1
192.168.0.9 object1
Note: being careful never hurts; check this again and again.

[root@exp2 ~]

# ansible compute -m command -a 'head -4 /etc/nova/nova.conf'
compute5 | CHANGED | rc=0 >>
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 192.168.0.106

compute2 | CHANGED | rc=0 >>
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 192.168.0.103

compute4 | CHANGED | rc=0 >>
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 192.168.0.105

compute1 | CHANGED | rc=0 >>
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 192.168.0.102

compute3 | CHANGED | rc=0 >>
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 192.168.0.104

compute6 | CHANGED | rc=0 >>
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 192.168.0.107
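By the way, instead of logging in to every node and hand-editing my_ip, a small loop run from the jump host can stamp the right address into each copy. This is only a sketch, under the assumption that each node's management IP is 192.168.0.$i as used in the rsync loop above:

for i in {102..107}; do
  # stamp each node's own management IP into its copy of nova.conf
  ssh 192.168.0.$i "sed -i 's/^my_ip = .*/my_ip = 192.168.0.$i/' /etc/nova/nova.conf"
done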

(4) Verify hardware virtualization support

[root@exp2 ~]

# ansible compute -m command -a 'egrep -c "(vmx|svm)" /proc/cpuinfo'
compute5 | CHANGED | rc=0 >>
4

compute2 | CHANGED | rc=0 >>
4

compute3 | CHANGED | rc=0 >>
4

compute1 | CHANGED | rc=0 >>
4

compute4 | CHANGED | rc=0 >>
4

compute6 | CHANGED | rc=0 >>
4
If the output is greater than or equal to 1, the compute node's hardware supports virtualization. If it is 0, you need to change the virtualization type in /etc/nova/nova.conf to qemu, i.e. software virtualization:

[libvirt]

virt_type = qemu
Since mine is 4 and hardware virtualization is supported, nothing needs to be changed.
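If your nodes had reported 0, one way to flip the setting on all of them at once is with crudini (this assumes the crudini package is installed on the compute nodes; otherwise edit the [libvirt] section by hand):

# set virt_type = qemu in the [libvirt] section on every compute node
ansible compute -m command -a "crudini --set /etc/nova/nova.conf libvirt virt_type qemu"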

(5) Start the services, enable them at boot, and verify the result

[root@exp2 ~]

# ansible compute -m command -a 'systemctl start libvirtd.service openstack-nova-compute.service'
compute1 | FAILED | rc=1 >>
Job for openstack-nova-compute.service failed because the control process exited with error code. See “systemctl status openstack-nova-compute.service” and “journalctl -xe” for details.non-zero return code

compute3 | FAILED | rc=1 >>
Job for openstack-nova-compute.service failed because the control process exited with error code. See “systemctl status openstack-nova-compute.service” and “journalctl -xe” for details.non-zero return code

compute2 | FAILED | rc=1 >>
Job for openstack-nova-compute.service failed because the control process exited with error code. See “systemctl status openstack-nova-compute.service” and “journalctl -xe” for details.non-zero return code

compute5 | FAILED | rc=1 >>
Job for openstack-nova-compute.service failed because the control process exited with error code. See “systemctl status openstack-nova-compute.service” and “journalctl -xe” for details.non-zero return code

compute4 | FAILED | rc=1 >>
Job for openstack-nova-compute.service failed because the control process exited with error code. See “systemctl status openstack-nova-compute.service” and “journalctl -xe” for details.non-zero return code

compute6 | FAILED | rc=1 >>
Job for openstack-nova-compute.service failed because the control process exited with error code. See “systemctl status openstack-nova-compute.service” and “journalctl -xe” for details.non-zero return code

They will not start, right? It would be strange if they did. I made a fatal mistake here that cost me almost an hour of troubleshooting. The cause is simple: after the file was distributed, its permissions were wrong; the group owner of the file must be nova.
So fix it right away:

[root@exp2 ~]

# ansible compute -m command -a 'chown .nova /etc/nova/nova.conf'
[WARNING]: Consider using the file module with owner rather than running 'chown'. If you need to use command because file is insufficient you can
add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid of this message.

compute1 | CHANGED | rc=0 >>

compute5 | CHANGED | rc=0 >>

compute4 | CHANGED | rc=0 >>

compute6 | CHANGED | rc=0 >>

compute3 | CHANGED | rc=0 >>

compute2 | CHANGED | rc=0 >>
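As the warning itself suggests, the same fix can also be expressed with the ansible file module, which is idempotent and avoids the warning; a sketch:

# set owner/group/mode of nova.conf on every compute node
ansible compute -m file -a "path=/etc/nova/nova.conf owner=root group=nova mode=0640"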

Verify it:

[root@exp2 ~]

# ansible compute -m command -a 'ls -l /etc/nova/nova.conf'
compute5 | CHANGED | rc=0 >>
-rw-r----- 1 root nova 392100 Apr 14 11:13 /etc/nova/nova.conf

compute4 | CHANGED | rc=0 >>
-rw-r----- 1 root nova 392100 Apr 14 11:13 /etc/nova/nova.conf

compute3 | CHANGED | rc=0 >>
-rw-r----- 1 root nova 392100 Apr 14 11:12 /etc/nova/nova.conf

compute1 | CHANGED | rc=0 >>
-rw-r----- 1 root nova 392100 Apr 14 11:36 /etc/nova/nova.conf

compute2 | CHANGED | rc=0 >>
-rw-r----- 1 root nova 392100 Apr 14 11:12 /etc/nova/nova.conf

compute6 | CHANGED | rc=0 >>
-rw-r----- 1 root nova 392100 Apr 14 11:13 /etc/nova/nova.conf

Good. Now start the services again:

[root@exp2 ~]

# ansible compute -m command -a 'systemctl start libvirtd.service openstack-nova-compute.service'
compute3 | CHANGED | rc=0 >>

compute5 | CHANGED | rc=0 >>

compute4 | CHANGED | rc=0 >>

compute2 | CHANGED | rc=0 >>

compute1 | CHANGED | rc=0 >>

compute6 | CHANGED | rc=0 >>

Happy now? Still not convinced? Verify once more:

[root@exp2 ~]

# ansible compute -m command -a 'systemctl status libvirtd.service openstack-nova-compute.service'
compute5 | CHANGED | rc=0 >>
● libvirtd.service – Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2019-04-13 22:48:08 EDT; 13h ago
Docs: man:libvirtd(8)
https://libvirt.org
Main PID: 4995 (libvirtd)
Tasks: 19 (limit: 32768)
CGroup: /system.slice/libvirtd.service
├─4995 /usr/sbin/libvirtd
├─5461 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
└─5462 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

Apr 13 22:48:08 compute5 systemd[1]: Started Virtualization daemon.
Apr 13 22:48:10 compute5 dnsmasq[5461]: started, version 2.76 cachesize 150
Apr 13 22:48:10 compute5 dnsmasq[5461]: compile time options: IPv6 GNU-getopt DBus no-i18n IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify
Apr 13 22:48:10 compute5 dnsmasq-dhcp[5461]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h
Apr 13 22:48:10 compute5 dnsmasq-dhcp[5461]: DHCP, sockets bound exclusively to interface virbr0
Apr 13 22:48:10 compute5 dnsmasq[5461]: reading /etc/resolv.conf
Apr 13 22:48:10 compute5 dnsmasq[5461]: using nameserver 192.168.0.1#53
Apr 13 22:48:10 compute5 dnsmasq[5461]: read /etc/hosts – 11 addresses
Apr 13 22:48:10 compute5 dnsmasq[5461]: read /var/lib/libvirt/dnsmasq/default.addnhosts – 0 addresses
Apr 13 22:48:10 compute5 dnsmasq-dhcp[5461]: read /var/lib/libvirt/dnsmasq/default.hostsfile

● openstack-nova-compute.service – OpenStack Nova Compute Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2019-04-14 11:50:52 EDT; 10s ago
Main PID: 14727 (nova-compute)
Tasks: 22
CGroup: /system.slice/openstack-nova-compute.service
└─14727 /usr/bin/python2 /usr/bin/nova-compute

Apr 14 11:50:49 compute5 systemd[1]: Starting OpenStack Nova Compute Server…
Apr 14 11:50:52 compute5 systemd[1]: Started OpenStack Nova Compute Server.

That works. I will not paste the output from the other machines.

(6) On the controller node, add the compute nodes to the cell database
First source the admin environment variables:

[root@controller ~]

# . admin-openrc
Then list all nova compute services:

[root@controller ~]

# openstack compute service list --service nova-compute
+—-+————–+———-+——+———+——-+—————————-+
| ID | Binary | Host | Zone | Status | State | Updated At |
+—-+————–+———-+——+———+——-+—————————-+
| 10 | nova-compute | compute3 | nova | enabled | up | 2019-04-14T15:52:33.000000 |
| 11 | nova-compute | compute2 | nova | enabled | up | 2019-04-14T15:52:33.000000 |
| 12 | nova-compute | compute4 | nova | enabled | up | 2019-04-14T15:52:33.000000 |
| 13 | nova-compute | compute5 | nova | enabled | up | 2019-04-14T15:52:33.000000 |
| 14 | nova-compute | compute1 | nova | enabled | up | 2019-04-14T15:52:32.000000 |
| 15 | nova-compute | compute6 | nova | enabled | up | 2019-04-14T15:52:23.000000 |
+—-+————–+———-+——+———+——-+—————————-+
(Seeing them all come up makes me very happy.)
Good. Now discover the compute hosts:

[root@controller ~]

# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell ‘cell1’: e1760c51-f39c-40cf-872f-3c97ff1a7677
Checking host mapping for compute host ‘compute3’: 7d4aa5f7-88c3-413e-ba97-b79a22026a13
Creating host mapping for compute host ‘compute3’: 7d4aa5f7-88c3-413e-ba97-b79a22026a13
Checking host mapping for compute host ‘compute4’: 5f4f28e2-ad6e-4a4f-aa51-6832270615c2
Creating host mapping for compute host ‘compute4’: 5f4f28e2-ad6e-4a4f-aa51-6832270615c2
Checking host mapping for compute host ‘compute2’: fbcc8bd5-3fff-4c93-a624-44832c345809
Creating host mapping for compute host ‘compute2’: fbcc8bd5-3fff-4c93-a624-44832c345809
Checking host mapping for compute host ‘compute1’: bbdcb5a3-a98e-4a99-8d75-f4dd0a2a3509
Creating host mapping for compute host ‘compute1’: bbdcb5a3-a98e-4a99-8d75-f4dd0a2a3509
Checking host mapping for compute host ‘compute5’: 01d99543-5c01-4f78-b445-740754c79424
Creating host mapping for compute host ‘compute5’: 01d99543-5c01-4f78-b445-740754c79424
Checking host mapping for compute host ‘compute6’: 3fada388-bd06-469e-a1bd-78d35b23ece5
Creating host mapping for compute host ‘compute6’: 3fada388-bd06-469e-a1bd-78d35b23ece5
Found 6 unmapped computes in cell: e1760c51-f39c-40cf-872f-3c97ff1a7677

[root@controller ~]

#
A small tip here: every time you add a compute node you need to run the cell_v2 discovery manually, or you can set a discovery interval, which is changed in nova.conf:

[scheduler]

discover_hosts_in_cells_interval = 300

1.3.3 Verify the controller and compute node configuration
(1) Source the admin environment:

[root@controller ~]

# . admin-openrc
(2) Verify the services:

[root@controller ~]

# openstack compute service list
+—-+——————+————+———-+———+——-+—————————-+
| ID | Binary | Host | Zone | Status | State | Updated At |
+—-+——————+————+———-+———+——-+—————————-+
| 1 | nova-scheduler | controller | internal | enabled | up | 2019-04-14T15:57:51.000000 |
| 5 | nova-consoleauth | controller | internal | enabled | up | 2019-04-14T15:57:52.000000 |
| 6 | nova-conductor | controller | internal | enabled | up | 2019-04-14T15:57:51.000000 |
| 10 | nova-compute | compute3 | nova | enabled | up | 2019-04-14T15:57:53.000000 |
| 11 | nova-compute | compute2 | nova | enabled | up | 2019-04-14T15:57:53.000000 |
| 12 | nova-compute | compute4 | nova | enabled | up | 2019-04-14T15:57:53.000000 |
| 13 | nova-compute | compute5 | nova | enabled | up | 2019-04-14T15:57:53.000000 |
| 14 | nova-compute | compute1 | nova | enabled | up | 2019-04-14T15:57:53.000000 |
| 15 | nova-compute | compute6 | nova | enabled | up | 2019-04-14T15:57:53.000000 |
+—-+——————+————+———-+———+——-+—————————-+
(3) List all API endpoints:

[root@controller ~]

# openstack catalog list
+———–+———–+—————————————–+
| Name | Type | Endpoints |
+———–+———–+—————————————–+
| placement | placement | RegionOne |
| | | internal: http://controller:8778 |
| | | RegionOne |
| | | admin: http://controller:8778 |
| | | RegionOne |
| | | public: http://controller:8778 |
| | | |
| nova | compute | RegionOne |
| | | admin: http://controller:8774/v2.1 |
| | | RegionOne |
| | | public: http://controller:8774/v2.1 |
| | | RegionOne |
| | | internal: http://controller:8774/v2.1 |
| | | |
| glance | image | RegionOne |
| | | admin: http://controller:9292 |
| | | RegionOne |
| | | public: http://controller:9292 |
| | | RegionOne |
| | | internal: http://controller:9292 |
| | | |
| keystone | identity | RegionOne |
| | | internal: http://controller:5000/v3/ |
| | | RegionOne |
| | | public: http://controller:5000/v3/ |
| | | RegionOne |
| | | admin: http://controller:5000/v3/ |
| | | |
+———–+———–+—————————————–+
(4) Verify the Image service:

[root@controller ~]

# openstack image list
+————————————–+——–+——–+
| ID | Name | Status |
+————————————–+——–+——–+
| e0810f42-705b-4bdd-9e8e-12313a8ff2e0 | cirros | active |
+————————————–+——–+——–+
(5) Verify that cells and the placement API are working correctly:

[root@controller ~]

# nova-status upgrade check
+——————————–+
| Upgrade Check Results |
+——————————–+
| Check: Cells v2 |
| Result: Success |
| Details: None |
+——————————–+
| Check: Placement API |
| Result: Success |
| Details: None |
+——————————–+
| Check: Resource Providers |
| Result: Success |
| Details: None |
+——————————–+
| Check: Ironic Flavor Migration |
| Result: Success |
| Details: None |
+——————————–+
| Check: API Service Version |
| Result: Success |
| Details: None |
+——————————–+
| Check: Request Spec Migration |
| Result: Success |
| Details: None |
+——————————–+
| Check: Console Auths |
| Result: Success |
| Details: None |
+——————————–+

[root@controller ~]

#


1.4 Neutron deployment
We recommend that you verify network connectivity to the Internet and among the nodes before proceeding further.

From the controller node, test access to the Internet:

ping -c 4 openstack.org

PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms

— openstack.org ping statistics —
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
From the controller node, test access to the management interface on the compute node:

ping -c 4 compute1

PING compute1 (10.0.0.31) 56(84) bytes of data.
64 bytes from compute1 (10.0.0.31): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from compute1 (10.0.0.31): icmp_seq=4 ttl=64 time=0.202 ms

— compute1 ping statistics —
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
From the compute node, test access to the Internet:

ping -c 4 openstack.org

PING openstack.org (174.143.194.225) 56(84) bytes of data.
64 bytes from 174.143.194.225: icmp_seq=1 ttl=54 time=18.3 ms
64 bytes from 174.143.194.225: icmp_seq=2 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=3 ttl=54 time=17.5 ms
64 bytes from 174.143.194.225: icmp_seq=4 ttl=54 time=17.4 ms

— openstack.org ping statistics —
4 packets transmitted, 4 received, 0% packet loss, time 3022ms
rtt min/avg/max/mdev = 17.489/17.715/18.346/0.364 ms
From the compute node, test access to the management interface on the controller node:

ping -c 4 controller

PING controller (10.0.0.11) 56(84) bytes of data.
64 bytes from controller (10.0.0.11): icmp_seq=1 ttl=64 time=0.263 ms
64 bytes from controller (10.0.0.11): icmp_seq=2 ttl=64 time=0.202 ms
64 bytes from controller (10.0.0.11): icmp_seq=3 ttl=64 time=0.203 ms
64 bytes from controller (10.0.0.11): icmp_seq=4 ttl=64 time=0.202 ms

— controller ping statistics —
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.202/0.217/0.263/0.030 ms
Note

Your distribution enables a restrictive firewall by default. During the installation process, certain steps will fail unless you alter or disable the firewall. For more information about securing your environment, refer to the OpenStack Security Guide.
General overview. First, my overall layout is as follows:

[root@controller ~]

# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.101 controller
192.168.0.102 compute1
192.168.0.103 compute2
192.168.0.104 compute3
192.168.0.105 compute4
192.168.0.106 compute5
192.168.0.107 compute6
192.168.0.9 block1
192.168.0.9 object1
Pick a node at random and verify the network.
(ping -c 4 openstack.org)
(My gateway is a proxy gateway, so pinging openstack.org does not get through from here.)

[root@controller ~]

# ping -c 4 compute1
PING compute1 (192.168.0.102) 56(84) bytes of data.
64 bytes from compute1 (192.168.0.102): icmp_seq=1 ttl=64 time=0.480 ms
64 bytes from compute1 (192.168.0.102): icmp_seq=2 ttl=64 time=0.467 ms
^C
— compute1 ping statistics —
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
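Since all compute nodes are already in the ansible inventory, connectivity from the jump host to every node can also be checked in one shot (a convenience check of mine, not part of the official guide):

# confirm every node in the compute group is reachable over SSH
ansible compute -m ping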

Controller node
Prerequisites
Before you configure the OpenStack Networking (neutron) service, you must create a database, service credentials, and API endpoints.

To create the database, complete these steps:

Use the database access client to connect to the database server as the root user:

mysql

Create the neutron database:

MariaDB [(none)] CREATE DATABASE neutron;
Grant proper access to the neutron database, replacing NEUTRON_DBPASS with a suitable password:

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';
Exit the database access client.

Source the admin credentials to gain access to admin-only CLI commands:

$ . admin-openrc
To create the service credentials, complete these steps:

Create the neutron user:

$ openstack user create --domain default --password-prompt neutron

User Password:
Repeat User Password:
+———————+———————————-+
| Field | Value |
+———————+———————————-+
| domain_id | default |
| enabled | True |
| id | fdb0f541e28141719b6a43c8944bf1fb |
| name | neutron |
| options | {} |
| password_expires_at | None |
+———————+———————————-+
Add the admin role to the neutron user:

$ openstack role add --project service --user neutron admin
Note

This command provides no output.

Create the neutron service entity:

$ openstack service create --name neutron \
--description "OpenStack Networking" network

+————-+———————————-+
| Field | Value |
+————-+———————————-+
| description | OpenStack Networking |
| enabled | True |
| id | f71529314dab4a4d8eca427e701d209e |
| name | neutron |
| type | network |
+————-+———————————-+
Create the Networking service API endpoints:

$ openstack endpoint create --region RegionOne \
network public http://controller:9696

+————–+———————————-+
| Field | Value |
+————–+———————————-+
| enabled | True |
| id | 85d80a6d02fc4b7683f611d7fc1493a3 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+————–+———————————-+

$ openstack endpoint create --region RegionOne \
network internal http://controller:9696

+————–+———————————-+
| Field | Value |
+————–+———————————-+
| enabled | True |
| id | 09753b537ac74422a68d2d791cf3714f |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+————–+———————————-+

$ openstack endpoint create --region RegionOne \
network admin http://controller:9696

+————–+———————————-+
| Field | Value |
+————–+———————————-+
| enabled | True |
| id | 1ee14289c9374dffb5db92a5c112fc4e |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+————–+———————————-+
Configure networking options
You can deploy the Networking service using one of two architectures represented by options 1 and 2.

Option 1 deploys the simplest possible architecture that only supports attaching instances to provider (external) networks. No self-service (private) networks, routers, or floating IP addresses. Only the admin or other privileged user can manage provider networks.

Option 2 augments option 1 with layer-3 services that support attaching instances to self-service networks. The demo or other unprivileged user can manage self-service networks including routers that provide connectivity between self-service and provider networks. Additionally, floating IP addresses provide connectivity to instances using self-service networks from external networks such as the Internet.

Self-service networks typically use overlay networks. Overlay network protocols such as VXLAN include additional headers that increase overhead and decrease space available for the payload or user data. Without knowledge of the virtual network infrastructure, instances attempt to send packets using the default Ethernet maximum transmission unit (MTU) of 1500 bytes. The Networking service automatically provides the correct MTU value to instances via DHCP. However, some cloud images do not use DHCP or ignore the DHCP MTU option and require configuration using metadata or a script.

Note

Option 2 also supports attaching instances to provider networks.

Choose one of the following networking options to configure services specific to it. Afterwards, return here and proceed to Configure the metadata agent.

Networking Option 1: Provider networks
Networking Option 2: Self-service networks
Configure the metadata agent
The metadata agent provides configuration information such as credentials to instances.

Edit the /etc/neutron/metadata_agent.ini file and complete the following actions:

In the [DEFAULT] section, configure the metadata host and shared secret:

[DEFAULT]

nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
Replace METADATA_SECRET with a suitable secret for the metadata proxy.
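If you prefer a random value rather than a hand-picked one, something like the following generates a reasonable secret (my own walkthrough simply uses 123456, so this is optional):

# generate a 32-character hex string to use as METADATA_SECRET
openssl rand -hex 16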

Configure the Compute service to use the Networking service
Note

The Nova compute service must be installed to complete this step. For more details see the compute install guide found under the Installation Guides section of the docs website.

Edit the /etc/nova/nova.conf file and perform the following actions:

In the [neutron] section, configure access parameters, enable the metadata proxy, and configure the secret:

[neutron]

url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.

Replace METADATA_SECRET with the secret you chose for the metadata proxy.

Finalize installation
The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file, /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it using the following command:

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Populate the database:

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Note

Database population occurs later for Networking because the script requires complete server and plug-in configuration files.

Restart the Compute API service:

systemctl restart openstack-nova-api.service

Start the Networking services and configure them to start when the system boots.

For both networking options:

systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service

systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
For networking option 2, also enable and start the layer-3 service:

systemctl enable neutron-l3-agent.service

systemctl start neutron-l3-agent.service

Prepare the environment, mainly the database.
(1) Create the database and grant the appropriate privileges:

[root@controller ~]

# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 583
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type ‘help;’ or ‘\h’ for help. Type ‘\c’ to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.01 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
-> IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
-> IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.01 sec)

MariaDB [(none)]> exit
Bye

[root@controller ~]

#
(2) Source the admin environment:

[root@controller ~]

# . admin-openrc
(3) Create the neutron service credentials
First create the neutron user:

[root@controller ~]

# openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+———————+———————————-+
| Field | Value |
+———————+———————————-+
| domain_id | default |
| enabled | True |
| id | 8981748b08a14ed78e11822a35a39f6b |
| name | neutron |
| options | {} |
| password_expires_at | None |
+———————+———————————-+
Then add the admin role to the neutron user. Note that this step produces no output, so there is no log to paste.

[root@controller ~]

# openstack role add --project service --user neutron admin
Create the neutron service entity:

[root@controller ~]

# openstack service create --name neutron \
--description "OpenStack Networking" network
+————-+———————————-+
| Field | Value |
+————-+———————————-+
| description | OpenStack Networking |
| enabled | True |
| id | 6f6de4f156854ca08b33fb778d67a4e4 |
| name | neutron |
| type | network |
+————-+———————————-+

[root@controller ~]

#
Create the API endpoints for the neutron service:

[root@controller ~]

# openstack endpoint create --region RegionOne \
network public http://controller:9696
+————–+———————————-+
| Field | Value |
+————–+———————————-+
| enabled | True |
| id | 3253c5d2464a439895fc65d5423c5c18 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 6f6de4f156854ca08b33fb778d67a4e4 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+————–+———————————-+

[root@controller ~]

# openstack endpoint create --region RegionOne \
network internal http://controller:9696

+————–+———————————-+
| Field | Value |
+————–+———————————-+
| enabled | True |
| id | 3da202d5c98e4f56ac3cbd1672a4ebe5 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 6f6de4f156854ca08b33fb778d67a4e4 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+————–+———————————-+

[root@controller ~]

#

[root@controller ~]

# openstack endpoint create --region RegionOne \
network admin http://controller:9696
+————–+———————————-+
| Field | Value |
+————–+———————————-+
| enabled | True |
| id | b603071fe95f4bba8e2b27fb306e8639 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 6f6de4f156854ca08b33fb778d67a4e4 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+————–+———————————-+

[root@controller ~]

#

Configure networking options
(You can choose option 1 or option 2.)
Option 1 is the simplest: instances attach directly to the external (provider) network.
Option 2 provides both provider networks and self-service networks.
I originally leaned towards option 2, but you can also choose option 1; below I paste the official documentation for both options so you can decide for yourself.

Networking Option 1: Provider networks
Install and configure the Networking components on the controller node.

Install the components

yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables
Configure the server component
The Networking server component configuration includes the database, authentication mechanism, message queue, topology change notifications, and plug-in.

Note

Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (…) in the configuration snippets indicates potential default configuration options that you should retain.

Edit the /etc/neutron/neutron.conf file and complete the following actions:

In the [database] section, configure database access:

[database]

connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
Replace NEUTRON_DBPASS with the password you chose for the database.

Note

Comment out or remove any other connection options in the [database] section.

In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in and disable additional plug-ins:

[DEFAULT]

core_plugin = ml2
service_plugins =
In the [DEFAULT] section, configure RabbitMQ message queue access:

[DEFAULT]

transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.

In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

[DEFAULT]

auth_strategy = keystone

[keystone_authtoken]

www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.

Note

Comment out or remove any other options in the [keystone_authtoken] section.

In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes:

[DEFAULT]

notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[nova]

auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the password you chose for the nova user in the Identity service.

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp
Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging and switching) virtual networking infrastructure for instances.

Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions:

In the [ml2] section, enable flat and VLAN networks:

[ml2]

type_drivers = flat,vlan
In the [ml2] section, disable self-service networks:

[ml2]

tenant_network_types =
In the [ml2] section, enable the Linux bridge mechanism:

[ml2]

mechanism_drivers = linuxbridge
Warning

After you configure the ML2 plug-in, removing values in the type_drivers option can lead to database inconsistency.

In the [ml2] section, enable the port security extension driver:

[ml2]

extension_drivers = port_security
In the [ml2_type_flat] section, configure the provider virtual network as a flat network:

[ml2_type_flat]

flat_networks = provider
In the [securitygroup] section, enable ipset to increase efficiency of security group rules:

[securitygroup]

enable_ipset = true
Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:

In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:

[linux_bridge]

physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
Replace PROVIDER_INTERFACE_NAME with the name of the underlying provider physical network interface. See Host networking for more information.
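To find the interface name to put in PROVIDER_INTERFACE_NAME, just list the NICs on the node; in my environment it is enp2s0:

# list network interfaces, one per line
ip -o link show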

In the [vxlan] section, disable VXLAN overlay networks:

[vxlan]

enable_vxlan = false
In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:

[securitygroup]

enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Ensure your Linux operating system kernel supports network bridge filters by verifying all the following sysctl values are set to 1:

net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-ip6tables
To enable networking bridge support, typically the br_netfilter kernel module needs to be loaded. Check your operating system’s documentation for additional details on enabling this module.
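A quick sketch of what that looks like on CentOS 7 (I walk through the same steps for my own configuration further below):

# load the bridge netfilter module and turn on the two sysctls
modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables=1
sysctl net.bridge.bridge-nf-call-ip6tables=1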

Configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks.

Edit the /etc/neutron/dhcp_agent.ini file and complete the following actions:

In the [DEFAULT] section, configure the Linux bridge interface driver, Dnsmasq DHCP driver, and enable isolated metadata so instances on provider networks can access metadata over the network:

[DEFAULT]

interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Return to Networking controller node configuration.

Option 2
Networking Option 2: Self-service networks
Install and configure the Networking components on the controller node.

Install the components

yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables
Configure the server component
Edit the /etc/neutron/neutron.conf file and complete the following actions:

In the [database] section, configure database access:

[database]

connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
Replace NEUTRON_DBPASS with the password you chose for the database.

Note

Comment out or remove any other connection options in the [database] section.

In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in, router service, and overlapping IP addresses:

[DEFAULT]

core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
In the [DEFAULT] section, configure RabbitMQ message queue access:

[DEFAULT]

transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.

In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

[DEFAULT]

auth_strategy = keystone

[keystone_authtoken]

www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.

Note

Comment out or remove any other options in the [keystone_authtoken] section.

In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of network topology changes:

[DEFAULT]

notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[nova]

auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the password you chose for the nova user in the Identity service.

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp
Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging and switching) virtual networking infrastructure for instances.

Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions:

In the [ml2] section, enable flat, VLAN, and VXLAN networks:

[ml2]

type_drivers = flat,vlan,vxlan
In the [ml2] section, enable VXLAN self-service networks:

[ml2]

tenant_network_types = vxlan
In the [ml2] section, enable the Linux bridge and layer-2 population mechanisms:

[ml2]

mechanism_drivers = linuxbridge,l2population
Warning

After you configure the ML2 plug-in, removing values in the type_drivers option can lead to database inconsistency.

Note

The Linux bridge agent only supports VXLAN overlay networks.

In the [ml2] section, enable the port security extension driver:

[ml2]

extension_drivers = port_security
In the [ml2_type_flat] section, configure the provider virtual network as a flat network:

[ml2_type_flat]

flat_networks = provider
In the [ml2_type_vxlan] section, configure the VXLAN network identifier range for self-service networks:

[ml2_type_vxlan]

vni_ranges = 1:1000
In the [securitygroup] section, enable ipset to increase efficiency of security group rules:

[securitygroup]

enable_ipset = true
Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:

In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:

[linux_bridge]

physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
Replace PROVIDER_INTERFACE_NAME with the name of the underlying provider physical network interface. See Host networking for more information.

In the [vxlan] section, enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay networks, and enable layer-2 population:

[vxlan]

enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
Replace OVERLAY_INTERFACE_IP_ADDRESS with the IP address of the underlying physical network interface that handles overlay networks. The example architecture uses the management interface to tunnel traffic to the other nodes. Therefore, replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the controller node. See Host networking for more information.

In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:

[securitygroup]

enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Ensure your Linux operating system kernel supports network bridge filters by verifying all the following sysctl values are set to 1:

net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-ip6tables
To enable networking bridge support, typically the br_netfilter kernel module needs to be loaded. Check your operating system’s documentation for additional details on enabling this module.

Configure the layer-3 agent
The Layer-3 (L3) agent provides routing and NAT services for self-service virtual networks.

Edit the /etc/neutron/l3_agent.ini file and complete the following actions:

In the [DEFAULT] section, configure the Linux bridge interface driver and external network bridge:

[DEFAULT]

interface_driver = linuxbridge
Configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks.

Edit the /etc/neutron/dhcp_agent.ini file and complete the following actions:

In the [DEFAULT] section, configure the Linux bridge interface driver, Dnsmasq DHCP driver, and enable isolated metadata so instances on provider networks can access metadata over the network:

[DEFAULT]

interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Return to Networking controller node configuration.

In the end I chose option 1, because this is a test environment, but I will still show you my option 2 configuration first; it is as follows.
First, install the required packages on the controller node:

[root@controller ~]

# yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
epel/x86_64/metalink | 16 kB 00:00:00

 * base: linux.mirrors.es.net
 * centos-qemu-ev: sjc.edge.kernel.org
 * epel: mirrors.xmission.com
 * extras: linux.mirrors.es.net
 * updates: linux.mirrors.es.net
base | 3.6 kB 00:00:01
centos-ceph-luminous | 2.9 kB 00:00:00
centos-openstack-rocky | 2.9 kB 00:00:00
centos-qemu-ev | 2.9 kB 00:00:00
epel | 4.7 kB 00:00:00
extras | 3.4 kB 00:00:00
updates | 3.4 kB 00:00:00
(1/2): epel/x86_64/updateinfo | 986 kB 00:00:05
(2/2): epel/x86_64/primary_db | 6.7 MB 00:00:15
Package ebtables-2.0.10-16.el7.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package openstack-neutron.noarch 1:13.0.2-1.el7 will be installed
#

Installed:
openstack-neutron.noarch 1:13.0.2-1.el7 openstack-neutron-linuxbridge.noarch 1:13.0.2-1.el7 openstack-neutron-ml2.noarch 1:13.0.2-1.el7

Dependency Installed:
c-ares.x86_64 0:1.10.0-3.el7 conntrack-tools.x86_64 0:1.4.4-4.el7 dibbler-client.x86_64 0:1.0.1-0.RC1.2.el7
dnsmasq-utils.x86_64 0:2.76-7.el7 haproxy.x86_64 0:1.5.18-8.el7 keepalived.x86_64 0:1.3.5-8.el7_6
libev.x86_64 0:4.15-7.el7 libnetfilter_cthelper.x86_64 0:1.0.0-9.el7 libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7
libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 libxslt-python.x86_64 0:1.1.28-5.el7 openpgm.x86_64 0:5.2.122-2.el7
openstack-neutron-common.noarch 1:13.0.2-1.el7 openvswitch.x86_64 1:2.10.1-3.el7 python-beautifulsoup4.noarch 0:4.6.0-1.el7
python-logutils.noarch 0:0.3.3-3.el7 python-neutron.noarch 1:13.0.2-1.el7 python-openvswitch.x86_64 1:2.10.1-3.el7
python-ryu-common.noarch 0:4.26-1.el7 python-waitress.noarch 0:0.8.9-5.el7 python-webtest.noarch 0:2.0.23-1.el7
python-zmq.x86_64 0:14.7.0-2.el7 python2-designateclient.noarch 0:2.10.0-1.el7 python2-gevent.x86_64 0:1.1.2-2.el7
python2-ncclient.noarch 0:0.4.7-5.el7 python2-neutron-lib.noarch 0:1.18.0-1.el7 python2-os-xenapi.noarch 0:0.3.3-1.el7
python2-ovsdbapp.noarch 0:0.12.3-1.el7 python2-pecan.noarch 0:1.3.2-1.el7 python2-ryu.noarch 0:4.26-1.el7
python2-singledispatch.noarch 0:3.4.0.3-4.el7 python2-tinyrpc.noarch 0:0.5-4.20170523git1f38ac.el7 python2-weakrefmethod.noarch 0:1.0.2-3.el7
zeromq.x86_64 0:4.0.5-4.el7

Complete!

Configure the server component
Edit the /etc/neutron/neutron.conf file:

[root@controller ~]

# grep -v '^#' /etc/neutron/neutron.conf | grep -v '^$'
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[agent]

[cors]

[database]

connection = mysql+pymysql://neutron:123456@controller/neutron

[keystone_authtoken]

www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456

[matchmaker_redis]

[nova]

auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[quotas]

[ssl]

Configure the ML2 plug-in: edit /etc/neutron/plugins/ml2/ml2_conf.ini

[root@controller ~]

# grep -v '^#' /etc/neutron/plugins/ml2/ml2_conf.ini | grep -v '^$'
[DEFAULT]

[l2pop]

[ml2]

type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]

flat_networks = provider

[ml2_type_geneve]

[ml2_type_gre]

[ml2_type_vlan]

[ml2_type_vxlan]

vni_ranges = 1:1000

[securitygroup]

enable_ipset = true

[root@controller ~]

#

Configure the Linux bridge agent: edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[root@controller ~]

# grep -v '^#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini | grep -v '^$'
[DEFAULT]

[agent]

[linux_bridge]

physical_interface_mappings = provider:enp2s0

[network_log]

[securitygroup]

enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[vxlan]

enable_vxlan = true
local_ip = 192.168.0.101
l2_population = true

[root@controller ~]

#

Before enabling the bridge, the net.bridge filters must first be enabled in the kernel.

[root@controller ~]

# vim /etc/sysctl.conf

[root@controller ~]

# cat /etc/sysctl.conf

# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
Applying it straight away produces an error:

[root@controller ~]

# sysctl -p
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
The reason is that the kernel has not loaded the br_netfilter module; load it now:

[root@controller ~]

# modprobe br_netfilter

[root@controller ~]

# ls /proc/sys/net/bridge
bridge-nf-call-arptables bridge-nf-call-iptables bridge-nf-filter-vlan-tagged
bridge-nf-call-ip6tables bridge-nf-filter-pppoe-tagged bridge-nf-pass-vlan-input-dev
Apply it again and it succeeds:

[root@controller ~]

# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
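Keep in mind that modprobe only loads the module for the current boot, so these sysctl values would fail again after a reboot. To load the module automatically at boot you can use systemd's modules-load mechanism; a sketch (the file name is arbitrary):

# have systemd load br_netfilter on every boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf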

Configure the L3 agent:

[root@controller ~]

# vim /etc/neutron/l3_agent.ini

[root@controller ~]

# grep -v '^#' /etc/neutron/l3_agent.ini | grep -v '^$'
[DEFAULT]
interface_driver = linuxbridge

[agent]

[ovs]

[root@controller ~]

#
Configure the DHCP agent:

[root@controller ~]

# vim /etc/neutron/dhcp_agent.ini

[root@controller ~]

# grep -v '^#' /etc/neutron/dhcp_agent.ini | grep -v '^$'
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

[agent]

[ovs]

[root@controller ~]

#

My option 1 configuration goes as follows.
First edit /etc/neutron/neutron.conf:

[root@controller ~]

# vim /etc/neutron/neutron.conf

[root@controller ~]

# grep -v '^#' /etc/neutron/neutron.conf | grep -v '^$'
[DEFAULT]
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[agent]

[cors]

[database]

connection = mysql+pymysql://neutron:123456@controller/neutron

[keystone_authtoken]

www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456

[matchmaker_redis]

[nova]

auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[quotas]

[ssl]

[root@controller ~]

#

Configure the ML2 plug-in: edit /etc/neutron/plugins/ml2/ml2_conf.ini

[root@controller ~]

# vim /etc/neutron/plugins/ml2/ml2_conf.ini

[root@controller ~]

# grep -v '^#' /etc/neutron/plugins/ml2/ml2_conf.ini | grep -v '^$'
[DEFAULT]

[l2pop]

[ml2]

type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security

[ml2_type_flat]

flat_networks = provider

[ml2_type_geneve]

[ml2_type_gre]

[ml2_type_vlan]

[ml2_type_vxlan]

[securitygroup]

enable_ipset = true

[root@controller ~]

#

Configure the Linux bridge agent

[root@controller ~]

# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[root@controller ~]

# grep -v '^#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini | grep -v '^$'
[DEFAULT]

[agent]

[linux_bridge]

physical_interface_mappings = provider:enp2s0

[network_log]

[securitygroup]

enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[vxlan]

enable_vxlan = false

[root@controller ~]

#

Enable the bridge filters

[root@controller ~]

# sysctl -p
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

[root@controller ~]

#

The networking option configuration is now complete; next configure the parts common to both options.
Configure the metadata agent:

[root@controller ~]

# grep -v '^#' /etc/neutron/metadata_agent.ini | grep -v '^$'
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 123456

[agent]

[cache]

[root@controller ~]

#

Edit nova.conf so that Compute uses Neutron

[root@controller ~]

# vim /etc/nova/nova.conf

[root@controller ~]

# grep -v '^#' /etc/nova/nova.conf | grep -v '^$'
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 192.168.0.101
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]

auth_strategy = keystone

[api_database]

connection = mysql+pymysql://nova:123456@controller/nova_api

[barbican]

[cache]

[cells]

[cinder]

[compute]

[conductor]

[console]

[consoleauth]

[cors]

[database]

connection = mysql+pymysql://nova:123456@controller/nova

[devices]

[ephemeral_storage_encryption]

[filter_scheduler]

[glance]

api_servers = http://controller:9292

[guestfs]

[healthcheck]

[hyperv]

[ironic]

[key_manager]

[keystone]

[keystone_authtoken]

auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123456

[libvirt]

[matchmaker_redis]

[metrics]

[mks]

[neutron]

url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = 123456

[notifications]

[osapi_v21]

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[pci]

[placement]

region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123456

[placement_database]

connection = mysql+pymysql://placement:123456@controller/placement

[powervm]

[profiler]

[quota]

[rdp]

[remote_debug]

[scheduler]

[serial_console]

[service_user]

[spice]

[upgrade_levels]

[vault]

[vendordata_dynamic_auth]

[vmware]

[vnc]

enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[workarounds]

[wsgi]

[xenserver]

[xvp]

[zvm]

[root@controller ~]

#

Finalize the installation
Link plugin.ini to ml2_conf.ini:

[root@controller ~]

# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

[root@controller ~]

# ll /etc/neutron/plugin.ini
lrwxrwxrwx 1 root root 37 Apr 15 19:15 /etc/neutron/plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini
Populate the neutron database:

[root@controller ~]

# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Running upgrade for neutron …
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> kilo
INFO [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225
INFO [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151
INFO [alembic.runtime.migration] Running upgrade 599c6a226151 -> 52c5312f6baf
INFO [alembic.runtime.migration] Running upgrade 52c5312f6baf -> 313373c0ffee
INFO [alembic.runtime.migration] Running upgrade 313373c0ffee -> 8675309a5c4f
INFO [alembic.runtime.migration] Running upgrade 8675309a5c4f -> 45f955889773
INFO [alembic.runtime.migration] Running upgrade 45f955889773 -> 26c371498592
INFO [alembic.runtime.migration] Running upgrade 26c371498592 -> 1c844d1677f7
INFO [alembic.runtime.migration] Running upgrade 1c844d1677f7 -> 1b4c6e320f79
INFO [alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 48153cb5f051
INFO [alembic.runtime.migration] Running upgrade 48153cb5f051 -> 9859ac9c136
INFO [alembic.runtime.migration] Running upgrade 9859ac9c136 -> 34af2b5c5a59
INFO [alembic.runtime.migration] Running upgrade 34af2b5c5a59 -> 59cb5b6cf4d
INFO [alembic.runtime.migration] Running upgrade 59cb5b6cf4d -> 13cfb89f881a
INFO [alembic.runtime.migration] Running upgrade 13cfb89f881a -> 32e5974ada25
INFO [alembic.runtime.migration] Running upgrade 32e5974ada25 -> ec7fcfbf72ee
INFO [alembic.runtime.migration] Running upgrade ec7fcfbf72ee -> dce3ec7a25c9
INFO [alembic.runtime.migration] Running upgrade dce3ec7a25c9 -> c3a73f615e4
INFO [alembic.runtime.migration] Running upgrade c3a73f615e4 -> 659bf3d90664
INFO [alembic.runtime.migration] Running upgrade 659bf3d90664 -> 1df244e556f5
INFO [alembic.runtime.migration] Running upgrade 1df244e556f5 -> 19f26505c74f
INFO [alembic.runtime.migration] Running upgrade 19f26505c74f -> 15be73214821
INFO [alembic.runtime.migration] Running upgrade 15be73214821 -> b4caf27aae4
INFO [alembic.runtime.migration] Running upgrade b4caf27aae4 -> 15e43b934f81
INFO [alembic.runtime.migration] Running upgrade 15e43b934f81 -> 31ed664953e6
INFO [alembic.runtime.migration] Running upgrade 31ed664953e6 -> 2f9e956e7532
INFO [alembic.runtime.migration] Running upgrade 2f9e956e7532 -> 3894bccad37f
INFO [alembic.runtime.migration] Running upgrade 3894bccad37f -> 0e66c5227a8a
INFO [alembic.runtime.migration] Running upgrade 0e66c5227a8a -> 45f8dd33480b
INFO [alembic.runtime.migration] Running upgrade 45f8dd33480b -> 5abc0278ca73
INFO [alembic.runtime.migration] Running upgrade kilo -> 30018084ec99
INFO [alembic.runtime.migration] Running upgrade 30018084ec99 -> 4ffceebfada
INFO [alembic.runtime.migration] Running upgrade 4ffceebfada -> 5498d17be016
INFO [alembic.runtime.migration] Running upgrade 5498d17be016 -> 2a16083502f3
INFO [alembic.runtime.migration] Running upgrade 2a16083502f3 -> 2e5352a0ad4d
INFO [alembic.runtime.migration] Running upgrade 2e5352a0ad4d -> 11926bcfe72d
INFO [alembic.runtime.migration] Running upgrade 11926bcfe72d -> 4af11ca47297
INFO [alembic.runtime.migration] Running upgrade 4af11ca47297 -> 1b294093239c
INFO [alembic.runtime.migration] Running upgrade 1b294093239c -> 8a6d8bdae39
INFO [alembic.runtime.migration] Running upgrade 8a6d8bdae39 -> 2b4c2465d44b
INFO [alembic.runtime.migration] Running upgrade 2b4c2465d44b -> e3278ee65050
INFO [alembic.runtime.migration] Running upgrade e3278ee65050 -> c6c112992c9
INFO [alembic.runtime.migration] Running upgrade c6c112992c9 -> 5ffceebfada
INFO [alembic.runtime.migration] Running upgrade 5ffceebfada -> 4ffceebfcdc
INFO [alembic.runtime.migration] Running upgrade 4ffceebfcdc -> 7bbb25278f53
INFO [alembic.runtime.migration] Running upgrade 7bbb25278f53 -> 89ab9a816d70
INFO [alembic.runtime.migration] Running upgrade 5abc0278ca73 -> d3435b514502
INFO [alembic.runtime.migration] Running upgrade d3435b514502 -> 30107ab6a3ee
INFO [alembic.runtime.migration] Running upgrade 30107ab6a3ee -> c415aab1c048
INFO [alembic.runtime.migration] Running upgrade c415aab1c048 -> a963b38d82f4
INFO [alembic.runtime.migration] Running upgrade a963b38d82f4 -> 3d0e74aa7d37
INFO [alembic.runtime.migration] Running upgrade 3d0e74aa7d37 -> 030a959ceafa
INFO [alembic.runtime.migration] Running upgrade 030a959ceafa -> a5648cfeeadf
INFO [alembic.runtime.migration] Running upgrade a5648cfeeadf -> 0f5bef0f87d4
INFO [alembic.runtime.migration] Running upgrade 0f5bef0f87d4 -> 67daae611b6e
INFO [alembic.runtime.migration] Running upgrade 89ab9a816d70 -> c879c5e1ee90
INFO [alembic.runtime.migration] Running upgrade c879c5e1ee90 -> 8fd3918ef6f4
INFO [alembic.runtime.migration] Running upgrade 8fd3918ef6f4 -> 4bcd4df1f426
INFO [alembic.runtime.migration] Running upgrade 4bcd4df1f426 -> b67e765a3524
INFO [alembic.runtime.migration] Running upgrade 67daae611b6e -> 6b461a21bcfc
INFO [alembic.runtime.migration] Running upgrade 6b461a21bcfc -> 5cd92597d11d
INFO [alembic.runtime.migration] Running upgrade 5cd92597d11d -> 929c968efe70
INFO [alembic.runtime.migration] Running upgrade 929c968efe70 -> a9c43481023c
INFO [alembic.runtime.migration] Running upgrade a9c43481023c -> 804a3c76314c
INFO [alembic.runtime.migration] Running upgrade 804a3c76314c -> 2b42d90729da
INFO [alembic.runtime.migration] Running upgrade 2b42d90729da -> 62c781cb6192
INFO [alembic.runtime.migration] Running upgrade 62c781cb6192 -> c8c222d42aa9
INFO [alembic.runtime.migration] Running upgrade c8c222d42aa9 -> 349b6fd605a6
INFO [alembic.runtime.migration] Running upgrade 349b6fd605a6 -> 7d32f979895f
INFO [alembic.runtime.migration] Running upgrade 7d32f979895f -> 594422d373ee
INFO [alembic.runtime.migration] Running upgrade 594422d373ee -> 61663558142c
INFO [alembic.runtime.migration] Running upgrade 61663558142c -> 867d39095bf4, port forwarding
INFO [alembic.runtime.migration] Running upgrade b67e765a3524 -> a84ccf28f06a
INFO [alembic.runtime.migration] Running upgrade a84ccf28f06a -> 7d9d8eeec6ad
INFO [alembic.runtime.migration] Running upgrade 7d9d8eeec6ad -> a8b517cff8ab
INFO [alembic.runtime.migration] Running upgrade a8b517cff8ab -> 3b935b28e7a0
INFO [alembic.runtime.migration] Running upgrade 3b935b28e7a0 -> b12a3ef66e62
INFO [alembic.runtime.migration] Running upgrade b12a3ef66e62 -> 97c25b0d2353
INFO [alembic.runtime.migration] Running upgrade 97c25b0d2353 -> 2e0d7a8a1586
INFO [alembic.runtime.migration] Running upgrade 2e0d7a8a1586 -> 5c85685d616d
OK

[root@controller ~]

#
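If you want to double-check that the migration really completed, neutron-db-manage can report the current database revision; an optional sanity check using the same config files as above:

# print the alembic revision the neutron database is currently at
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current" neutron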
Restart the nova API service, since we just modified the nova configuration file:

[root@controller ~]

# systemctl restart openstack-nova-api.service

[root@controller ~]

# systemctl status openstack-nova-api.service
● openstack-nova-api.service – OpenStack Nova API Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-api.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-04-15 19:19:15 EDT; 5s ago
Main PID: 20157 (nova-api)
Tasks: 9
CGroup: /system.slice/openstack-nova-api.service
├─20157 /usr/bin/python2 /usr/bin/nova-api
├─20168 /usr/bin/python2 /usr/bin/nova-api
├─20169 /usr/bin/python2 /usr/bin/nova-api
├─20170 /usr/bin/python2 /usr/bin/nova-api
├─20171 /usr/bin/python2 /usr/bin/nova-api
├─20176 /usr/bin/python2 /usr/bin/nova-api
├─20177 /usr/bin/python2 /usr/bin/nova-api
├─20178 /usr/bin/python2 /usr/bin/nova-api
└─20179 /usr/bin/python2 /usr/bin/nova-api

Apr 15 19:19:12 controller systemd[1]: Starting OpenStack Nova API Server…
Apr 15 19:19:15 controller systemd[1]: Started OpenStack Nova API Server.

[root@controller ~]

#
Set the neutron services to start at boot, and start them.

[root@controller ~]

# systemctl enable neutron-server.service \

neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-server.service to /usr/lib/systemd/system/neutron-server.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service to /usr/lib/systemd/system/neutron-dhcp-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service to /usr/lib/systemd/system/neutron-metadata-agent.service.

[root@controller ~]

# systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service

[root@controller ~]

# systemctl status neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
● neutron-server.service – OpenStack Neutron Server
Loaded: loaded (/usr/lib/systemd/system/neutron-server.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-04-15 19:20:07 EDT; 9s ago
Main PID: 20253 (neutron-server)
Tasks: 8
CGroup: /system.slice/neutron-server.service
├─20253 /usr/bin/python2 /usr/bin/neutron-server –config-file /usr/share/neutron/neutron-dist.conf –config-dir /usr/share/neutron/ser…
├─20321 /usr/bin/python2 /usr/bin/neutron-server –config-file /usr/share/neutron/neutron-dist.conf –config-dir /usr/share/neutron/ser…
├─20322 /usr/bin/python2 /usr/bin/neutron-server –config-file /usr/share/neutron/neutron-dist.conf –config-dir /usr/share/neutron/ser…
├─20323 /usr/bin/python2 /usr/bin/neutron-server –config-file /usr/share/neutron/neutron-dist.conf –config-dir /usr/share/neutron/ser…
├─20324 /usr/bin/python2 /usr/bin/neutron-server –config-file /usr/share/neutron/neutron-dist.conf –config-dir /usr/share/neutron/ser…
├─20325 /usr/bin/python2 /usr/bin/neutron-server –config-file /usr/share/neutron/neutron-dist.conf –config-dir /usr/share/neutron/ser…
├─20326 /usr/bin/python2 /usr/bin/neutron-server –config-file /usr/share/neutron/neutron-dist.conf –config-dir /usr/share/neutron/ser…
└─20327 /usr/bin/python2 /usr/bin/neutron-server –config-file /usr/share/neutron/neutron-dist.conf –config-dir /usr/share/neutron/ser…

Apr 15 19:20:05 controller systemd[1]: Starting OpenStack Neutron Server…
Apr 15 19:20:07 controller systemd[1]: Started OpenStack Neutron Server.

● neutron-linuxbridge-agent.service – OpenStack Neutron Linux Bridge Agent
Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-04-15 19:20:05 EDT; 11s ago
Process: 20254 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS)
Main PID: 20262 (neutron-linuxbr)
Tasks: 1
CGroup: /system.slice/neutron-linuxbridge-agent.service
└─20262 /usr/bin/python2 /usr/bin/neutron-linuxbridge-agent –config-file /usr/share/neutron/neutron-dist.conf –config-file /etc/neutr…

Apr 15 19:20:05 controller systemd[1]: Starting OpenStack Neutron Linux Bridge Agent…
Apr 15 19:20:05 controller neutron-enable-bridge-firewall.sh[20254]: net.bridge.bridge-nf-call-iptables = 1
Apr 15 19:20:05 controller neutron-enable-bridge-firewall.sh[20254]: net.bridge.bridge-nf-call-ip6tables = 1
Apr 15 19:20:05 controller systemd[1]: Started OpenStack Neutron Linux Bridge Agent.
Apr 15 19:20:07 controller sudo[20311]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-rootwrap /etc/neutron/rootwrap.conf priv…
Apr 15 19:20:08 controller sudo[20344]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf

● neutron-dhcp-agent.service – OpenStack Neutron DHCP Agent
Loaded: loaded (/usr/lib/systemd/system/neutron-dhcp-agent.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-04-15 19:20:05 EDT; 11s ago
Main PID: 20255 (neutron-dhcp-ag)
Tasks: 1
CGroup: /system.slice/neutron-dhcp-agent.service
└─20255 /usr/bin/python2 /usr/bin/neutron-dhcp-agent –config-file /usr/share/neutron/neutron-dist.conf –config-file /etc/neutron/neut…

Apr 15 19:20:05 controller systemd[1]: Started OpenStack Neutron DHCP Agent.

● neutron-metadata-agent.service – OpenStack Neutron Metadata Agent
Loaded: loaded (/usr/lib/systemd/system/neutron-metadata-agent.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-04-15 19:20:05 EDT; 11s ago
Main PID: 20257 (neutron-metadat)
Tasks: 3
CGroup: /system.slice/neutron-metadata-agent.service
├─20257 /usr/bin/python2 /usr/bin/neutron-metadata-agent –config-file /usr/share/neutron/neutron-dist.conf –config-file /etc/neutron/…
├─20309 /usr/bin/python2 /usr/bin/neutron-metadata-agent –config-file /usr/share/neutron/neutron-dist.conf –config-file /etc/neutron/…
└─20310 /usr/bin/python2 /usr/bin/neutron-metadata-agent –config-file /usr/share/neutron/neutron-dist.conf –config-file /etc/neutron/…

Apr 15 19:20:05 controller systemd[1]: Started OpenStack Neutron Metadata Agent.
Hint: Some lines were ellipsized, use -l to show in full.

[root@controller ~]

#

If you chose networking option 2, you also need to enable and start the L3 agent.
(I chose option 1, so this step is not needed for me.)

systemctl enable neutron-l3-agent.service

systemctl start neutron-l3-agent.service

Compute node
The compute node handles connectivity and security groups for instances.

Install the components

yum install openstack-neutron-linuxbridge ebtables ipset

Configure the common component
The Networking common component configuration includes the authentication mechanism, message queue, and plug-in.

Note

Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (…) in the configuration snippets indicates potential default configuration options that you should retain.

Edit the /etc/neutron/neutron.conf file and complete the following actions:

In the [database] section, comment out any connection options because compute nodes do not directly access the database.

In the [DEFAULT] section, configure RabbitMQ message queue access:

[DEFAULT]

transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.

In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

[DEFAULT]

auth_strategy = keystone

[keystone_authtoken]

www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.

Note

Comment out or remove any other options in the [keystone_authtoken] section.

In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp
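If you would rather not edit the file by hand, the same options can be set non-interactively with crudini (only a sketch, assuming the crudini package from the same repositories is installed; the option names are exactly the ones listed above):

crudini --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
crudini --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
crudini --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://controller:5000
crudini --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:5000
crudini --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
crudini --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
crudini --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
crudini --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
crudini --set /etc/neutron/neutron.conf keystone_authtoken project_name service
crudini --set /etc/neutron/neutron.conf keystone_authtoken username neutron
crudini --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_PASS
crudini --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp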
Configure networking options
Choose the same networking option that you chose for the controller node to configure services specific to it. Afterwards, return here and proceed to Configure the Compute service to use the Networking service.

Networking Option 1: Provider networks
Networking Option 2: Self-service networks
Configure the Compute service to use the Networking service
Edit the /etc/nova/nova.conf file and complete the following actions:

In the [neutron] section, configure access parameters:

[neutron]

url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
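Since every compute node in this deployment is driven from the jump host, one way to apply just this [neutron] section without shipping a whole file would be Ansible's ini_file module. A sketch (not what was actually run here, where the complete nova.conf is distributed further below):

ansible compute -m ini_file -a 'path=/etc/nova/nova.conf section=neutron option=url value=http://controller:9696'
ansible compute -m ini_file -a 'path=/etc/nova/nova.conf section=neutron option=auth_url value=http://controller:5000'
ansible compute -m ini_file -a 'path=/etc/nova/nova.conf section=neutron option=username value=neutron'
ansible compute -m ini_file -a 'path=/etc/nova/nova.conf section=neutron option=password value=NEUTRON_PASS'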

Finalize installation
Restart the Compute service:

systemctl restart openstack-nova-compute.service

Start the Linux bridge agent and configure it to start when the system boots:

systemctl enable neutron-linuxbridge-agent.service

systemctl start neutron-linuxbridge-agent.service

The compute nodes are a little simpler.

Environment preparation: first, from the jump host, install the packages on all compute nodes.

[root@exp2 ~]

# ansible compute -m yum -a 'name=openstack-neutron-linuxbridge,ebtables,ipset state=installed disable_gpg_check=yes'


compute6 | CHANGED => {
“ansible_facts”: {
“pkg_mgr”: “yum”
},
“changed”: true,
“msg”: “”,
“rc”: 0,
“results”: [
“ebtables-2.0.10-16.el7.x86_64 providing ebtables is already installed”,
“ipset-6.38-3.el7_6.x86_64 providing ipset is already installed”,
“Loaded plugins: fastestmirror, langpacks\nLoading mirror speeds from cached hostfile\n * base: mirror.sjc02.svwh.net\n * centos-qemu-ev: mirror.scalabledns.com\n * epel: mirrors.sonic.net\n * extras: mirrors.usc.edu\n * updates: mirrors.usc.edu\nResolving Dependencies\n–> Running transaction check\n—> Package openstack-neutron-linuxbridge.noarch 1:13.0.2-1.el7 will be installed\n–> Processing Dependency: openstack-neutron-common = 1:13.0.2-1.el7 for package: 1:openstack-neutron-linuxbridge-13.0.2-1.el7.noarch\n–> Running transaction check\n—> Package openstack-neutron-common.noarch 1:13.0.2-1.el7 will be installed\n–> Processing Dependency: python-neutron = 1:13.0.2-1.el7 for package: 1:openstack-neutron-common-13.0.2-1.el7.noarch\n–> Running transaction check\n—> Package python-neutron.noarch 1:13.0.2-1.el7 will be installed\n–> Processing Dependency: python2-weakrefmethod >= 1.0.2 for package: 1:python-neutron-13.0.2-1.el7.noarch\n–> Processing Dependency: python2-ryu >= 4.24 for package: 1:python-neutron-13.0.2-1.el7.noarch\n–> Processing Dependency: python2-pecan >= 1.3.2 for package: 1:python-neutron-13.0.2-1.el7.noarch\n–> Processing Dependency: python2-osprofiler >= 1.4.0 for package: 1:python-neutron-13.0.2-1.el7.noarch\n–> Processing Dependency: python2-os-xenapi >= 0.3.1 for package: 1:python-neutron-13.0.2-1.el7.noarch\n–> Processing Dependency: python2-neutron-lib >= 1.18.0 for package: 1:python-neutron-13.0.2-1.el7.noarch\n–> Processing Dependency: python2-designateclient >= 2.7.0 for package: 1:python-neutron-13.0.2-1.el7.noarch\n–> Processing Dependency: python-httplib2 >= 0.9.1 for package: 1:python-neutron-13.0.2-1.el7.noarch\n–> Processing Dependency: python2-ovsdbapp for package: 1:python-neutron-13.0.2-1.el7.noarch\n–> Running transaction check\n—> Package python-httplib2.noarch 0:0.9.2-1.el7 will be installed\n—> Package python2-designateclient.noarch 0:2.10.0-1.el7 will be installed\n—> Package python2-neutron-lib.noarch 0:1.18.0-1.el7 will be installed\n—> Package python2-os-xenapi.noarch 0:0.3.3-1.el7 will be installed\n—> Package python2-osprofiler.noarch 0:2.3.0-1.el7 will be installed\n—> Package python2-ovsdbapp.noarch 0:0.12.3-1.el7 will be installed\n–> Processing Dependency: python2-openvswitch for package: python2-ovsdbapp-0.12.3-1.el7.noarch\n—> Package python2-pecan.noarch 0:1.3.2-1.el7 will be installed\n–> Processing Dependency: python2-singledispatch for package: python2-pecan-1.3.2-1.el7.noarch\n–> Processing Dependency: python-webtest for package: python2-pecan-1.3.2-1.el7.noarch\n–> Processing Dependency: python-simplegeneric for package: python2-pecan-1.3.2-1.el7.noarch\n–> Processing Dependency: python-logutils for package: python2-pecan-1.3.2-1.el7.noarch\n—> Package python2-ryu.noarch 0:4.26-1.el7 will be installed\n–> Processing Dependency: python-ryu-common = 4.26-1.el7 for package: python2-ryu-4.26-1.el7.noarch\n–> Processing Dependency: python2-tinyrpc for package: python2-ryu-4.26-1.el7.noarch\n—> Package python2-weakrefmethod.noarch 0:1.0.2-3.el7 will be installed\n–> Running transaction check\n—> Package python-logutils.noarch 0:0.3.3-3.el7 will be installed\n—> Package python-openvswitch.x86_64 1:2.10.1-3.el7 will be installed\n–> Processing Dependency: libopenvswitch-2.10.so.0(libopenvswitch_0)(64bit) for package: 1:python-openvswitch-2.10.1-3.el7.x86_64\n–> Processing Dependency: libopenvswitch-2.10.so.0()(64bit) for package: 1:python-openvswitch-2.10.1-3.el7.x86_64\n—> Package python-ryu-common.noarch 0:4.26-1.el7 will be 
installed\n—> Package python-simplegeneric.noarch 0:0.8-7.el7 will be installed\n—> Package python-webtest.noarch 0:2.0.23-1.el7 will be installed\n–> Processing Dependency: python-waitress for package: python-webtest-2.0.23-1.el7.noarch\n–> Processing Dependency: python-beautifulsoup4 for package: python-webtest-2.0.23-1.el7.noarch\n—> Package python2-singledispatch.noarch 0:3.4.0.3-4.el7 will be installed\n—> Package python2-tinyrpc.noarch 0:0.5-4.20170523git1f38ac.el7 will be installed\n–> Processing Dependency: python-zmq for package: python2-tinyrpc-0.5-4.20170523git1f38ac.el7.noarch\n–> Processing Dependency: python-werkzeug for package: python2-tinyrpc-0.5-4.20170523git1f38ac.el7.noarch\n–> Processing Dependency: python-gevent for package: python2-tinyrpc-0.5-4.20170523git1f38ac.el7.noarch\n–> Running transaction check\n—> Package openvswitch.x86_64 1:2.10.1-3.el7 will be installed\n—> Package python-beautifulsoup4.noarch 0:4.6.0-1.el7 will be installed\n—> Package python-waitress.noarch 0:0.8.9-5.el7 will be installed\n—> Package python-zmq.x86_64 0:14.7.0-2.el7 will be installed\n–> Processing Dependency: libzmq.so.4()(64bit) for package: python-zmq-14.7.0-2.el7.x86_64\n—> Package python2-gevent.x86_64 0:1.1.2-2.el7 will be installed\n–> Processing Dependency: libev.so.4()(64bit) for package: python2-gevent-1.1.2-2.el7.x86_64\n–> Processing Dependency: libcares.so.2()(64bit) for package: python2-gevent-1.1.2-2.el7.x86_64\n—> Package python2-werkzeug.noarch 0:0.14.1-3.el7 will be installed\n–> Running transaction check\n—> Package c-ares.x86_64 0:1.10.0-3.el7 will be installed\n—> Package libev.x86_64 0:4.15-7.el7 will be installed\n—> Package zeromq.x86_64 0:4.0.5-4.el7 will be installed\n–> Processing Dependency: libpgm-5.2.so.0()(64bit) for package: zeromq-4.0.5-4.el7.x86_64\n–> Running transaction check\n—> Package openpgm.x86_64 0:5.2.122-2.el7 will be installed\n–> Finished Dependency Resolution\n\nDependencies Resolved\n\n================================================================================\n Package Arch Version Repository Size\n================================================================================\nInstalling:\n openstack-neutron-linuxbridge\n noarch 1:13.0.2-1.el7 centos-openstack-rocky 14 k\nInstalling for dependencies:\n c-ares x86_64 1.10.0-3.el7 base 78 k\n libev x86_64 4.15-7.el7 extras 44 k\n openpgm x86_64 5.2.122-2.el7 centos-openstack-rocky 172 k\n openstack-neutron-common noarch 1:13.0.2-1.el7 centos-openstack-rocky 222 k\n openvswitch x86_64 1:2.10.1-3.el7 centos-openstack-rocky 1.9 M\n python-beautifulsoup4 noarch 4.6.0-1.el7 centos-openstack-rocky 171 k\n python-httplib2 noarch 0.9.2-1.el7 extras 115 k\n python-logutils noarch 0.3.3-3.el7 centos-ceph-luminous 42 k\n python-neutron noarch 1:13.0.2-1.el7 centos-openstack-rocky 2.1 M\n python-openvswitch x86_64 1:2.10.1-3.el7 centos-openstack-rocky 226 k\n python-ryu-common noarch 4.26-1.el7 centos-openstack-rocky 53 k\n python-simplegeneric noarch 0.8-7.el7 centos-ceph-luminous 12 k\n python-waitress noarch 0.8.9-5.el7 centos-openstack-rocky 152 k\n python-webtest noarch 2.0.23-1.el7 centos-openstack-rocky 84 k\n python-zmq x86_64 14.7.0-2.el7 centos-openstack-rocky 495 k\n python2-designateclient noarch 2.10.0-1.el7 centos-openstack-rocky 117 k\n python2-gevent x86_64 1.1.2-2.el7 centos-openstack-rocky 443 k\n python2-neutron-lib noarch 1.18.0-1.el7 centos-openstack-rocky 297 k\n python2-os-xenapi noarch 0.3.3-1.el7 centos-openstack-rocky 72 k\n python2-osprofiler noarch 2.3.0-1.el7 
centos-openstack-rocky 121 k\n python2-ovsdbapp noarch 0.12.3-1.el7 centos-openstack-rocky 100 k\n python2-pecan noarch 1.3.2-1.el7 centos-openstack-rocky 268 k\n python2-ryu noarch 4.26-1.el7 centos-openstack-rocky 2.0 M\n python2-singledispatch noarch 3.4.0.3-4.el7 centos-ceph-luminous 18 k\n python2-tinyrpc noarch 0.5-4.20170523git1f38ac.el7\n centos-openstack-rocky 32 k\n python2-weakrefmethod noarch 1.0.2-3.el7 centos-openstack-rocky 13 k\n python2-werkzeug noarch 0.14.1-3.el7 centos-openstack-rocky 466 k\n zeromq x86_64 4.0.5-4.el7 centos-openstack-rocky 434 k\n\nTransaction Summary\n================================================================================\nInstall 1 Package (+28 Dependent packages)\n\nTotal download size: 10 M\nInstalled size: 46 M\nDownloading packages:\n——————————————————————————–\nTotal 2.3 MB/s | 10 MB 00:04 \nRunning transaction check\nRunning transaction test\nTransaction test succeeded\nRunning transaction\n Installing : python2-osprofiler-2.3.0-1.el7.noarch 1/29 \n Installing : python2-weakrefmethod-1.0.2-3.el7.noarch 2/29 \n Installing : python-beautifulsoup4-4.6.0-1.el7.noarch 3/29 \n Installing : python-httplib2-0.9.2-1.el7.noarch 4/29 \n Installing : python2-werkzeug-0.14.1-3.el7.noarch 5/29 \n Installing : python-waitress-0.8.9-5.el7.noarch 6/29 \n Installing : python-webtest-2.0.23-1.el7.noarch 7/29 \n Installing : libev-4.15-7.el7.x86_64 8/29 \n Installing : c-ares-1.10.0-3.el7.x86_64 9/29 \n Installing : python2-gevent-1.1.2-2.el7.x86_64 10/29 \n Installing : openpgm-5.2.122-2.el7.x86_64 11/29 \n Installing : zeromq-4.0.5-4.el7.x86_64 12/29 \n Installing : python-zmq-14.7.0-2.el7.x86_64 13/29 \n Installing : python2-tinyrpc-0.5-4.20170523git1f38ac.el7.noarch 14/29 \n Installing : python2-os-xenapi-0.3.3-1.el7.noarch 15/29 \n Installing : python2-designateclient-2.10.0-1.el7.noarch 16/29 \n Installing : python-logutils-0.3.3-3.el7.noarch 17/29 \n Installing : 1:openvswitch-2.10.1-3.el7.x86_64 18/29 \n Installing : 1:python-openvswitch-2.10.1-3.el7.x86_64 19/29 \n Installing : python2-ovsdbapp-0.12.3-1.el7.noarch 20/29 \n Installing : python-ryu-common-4.26-1.el7.noarch 21/29 \n Installing : python2-ryu-4.26-1.el7.noarch 22/29 \n Installing : python2-singledispatch-3.4.0.3-4.el7.noarch 23/29 \n Installing : python-simplegeneric-0.8-7.el7.noarch 24/29 \n Installing : python2-pecan-1.3.2-1.el7.noarch 25/29 \n Installing : python2-neutron-lib-1.18.0-1.el7.noarch 26/29 \n Installing : 1:python-neutron-13.0.2-1.el7.noarch 27/29 \n Installing : 1:openstack-neutron-common-13.0.2-1.el7.noarch 28/29 \n Installing : 1:openstack-neutron-linuxbridge-13.0.2-1.el7.noarch 29/29 \n Verifying : 1:openstack-neutron-common-13.0.2-1.el7.noarch 1/29 \n Verifying : python-simplegeneric-0.8-7.el7.noarch 2/29 \n Verifying : python2-singledispatch-3.4.0.3-4.el7.noarch 3/29 \n Verifying : python2-tinyrpc-0.5-4.20170523git1f38ac.el7.noarch 4/29 \n Verifying : python-ryu-common-4.26-1.el7.noarch 5/29 \n Verifying : python2-neutron-lib-1.18.0-1.el7.noarch 6/29 \n Verifying : python2-ovsdbapp-0.12.3-1.el7.noarch 7/29 \n Verifying : 1:openvswitch-2.10.1-3.el7.x86_64 8/29 \n Verifying : python-logutils-0.3.3-3.el7.noarch 9/29 \n Verifying : python2-designateclient-2.10.0-1.el7.noarch 10/29 \n Verifying : python2-ryu-4.26-1.el7.noarch 11/29 \n Verifying : python2-os-xenapi-0.3.3-1.el7.noarch 12/29 \n Verifying : openpgm-5.2.122-2.el7.x86_64 13/29 \n Verifying : python2-weakrefmethod-1.0.2-3.el7.noarch 14/29 \n Verifying : python2-osprofiler-2.3.0-1.el7.noarch 15/29 \n Verifying 
: c-ares-1.10.0-3.el7.x86_64 16/29 \n Verifying : python2-pecan-1.3.2-1.el7.noarch 17/29 \n Verifying : 1:python-neutron-13.0.2-1.el7.noarch 18/29 \n Verifying : zeromq-4.0.5-4.el7.x86_64 19/29 \n Verifying : 1:openstack-neutron-linuxbridge-13.0.2-1.el7.noarch 20/29 \n Verifying : libev-4.15-7.el7.x86_64 21/29 \n Verifying : python-zmq-14.7.0-2.el7.x86_64 22/29 \n Verifying : python-webtest-2.0.23-1.el7.noarch 23/29 \n Verifying : python-waitress-0.8.9-5.el7.noarch 24/29 \n Verifying : python2-werkzeug-0.14.1-3.el7.noarch 25/29 \n Verifying : python-httplib2-0.9.2-1.el7.noarch 26/29 \n Verifying : 1:python-openvswitch-2.10.1-3.el7.x86_64 27/29 \n Verifying : python-beautifulsoup4-4.6.0-1.el7.noarch 28/29 \n Verifying : python2-gevent-1.1.2-2.el7.x86_64 29/29 \n\nInstalled:\n openstack-neutron-linuxbridge.noarch 1:13.0.2-1.el7 \n\nDependency Installed:\n c-ares.x86_64 0:1.10.0-3.el7 \n libev.x86_64 0:4.15-7.el7 \n openpgm.x86_64 0:5.2.122-2.el7 \n openstack-neutron-common.noarch 1:13.0.2-1.el7 \n openvswitch.x86_64 1:2.10.1-3.el7 \n python-beautifulsoup4.noarch 0:4.6.0-1.el7 \n python-httplib2.noarch 0:0.9.2-1.el7 \n python-logutils.noarch 0:0.3.3-3.el7 \n python-neutron.noarch 1:13.0.2-1.el7 \n python-openvswitch.x86_64 1:2.10.1-3.el7 \n python-ryu-common.noarch 0:4.26-1.el7 \n python-simplegeneric.noarch 0:0.8-7.el7 \n python-waitress.noarch 0:0.8.9-5.el7 \n python-webtest.noarch 0:2.0.23-1.el7 \n python-zmq.x86_64 0:14.7.0-2.el7 \n python2-designateclient.noarch 0:2.10.0-1.el7 \n python2-gevent.x86_64 0:1.1.2-2.el7 \n python2-neutron-lib.noarch 0:1.18.0-1.el7 \n python2-os-xenapi.noarch 0:0.3.3-1.el7 \n python2-osprofiler.noarch 0:2.3.0-1.el7 \n python2-ovsdbapp.noarch 0:0.12.3-1.el7 \n python2-pecan.noarch 0:1.3.2-1.el7 \n python2-ryu.noarch 0:4.26-1.el7 \n python2-singledispatch.noarch 0:3.4.0.3-4.el7 \n python2-tinyrpc.noarch 0:0.5-4.20170523git1f38ac.el7 \n python2-weakrefmethod.noarch 0:1.0.2-3.el7 \n python2-werkzeug.noarch 0:0.14.1-3.el7 \n zeromq.x86_64 0:4.0.5-4.el7 \n\nComplete!\n”
]
}

Configure the neutron configuration file /etc/neutron/neutron.conf and distribute it from the jump host.

[root@exp2 ~]

# grep -v '^#' neutron.conf | grep -v '^$'
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone

[agent]

[cors]

[database]

[keystone_authtoken]

www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456

[matchmaker_redis]

[nova]

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[quotas]

[ssl]

[root@exp2 ~]

#
Distribute neutron.conf to all compute nodes.

[root@exp2 ~]

# for i in {102..107};do rsync --delete -r -v /root/neutron.conf 192.168.0.$i:/etc/neutron/neutron.conf ;done
sending incremental file list
neutron.conf

sent 508 bytes received 653 bytes 2,322.00 bytes/sec
total size is 71,648 speedup is 61.71
sending incremental file list
neutron.conf

sent 2,969 bytes received 647 bytes 7,232.00 bytes/sec
total size is 71,648 speedup is 19.81
sending incremental file list
neutron.conf

sent 2,969 bytes received 647 bytes 2,410.67 bytes/sec
total size is 71,648 speedup is 19.81
sending incremental file list
neutron.conf

sent 2,969 bytes received 647 bytes 7,232.00 bytes/sec
total size is 71,648 speedup is 19.81
sending incremental file list
neutron.conf

sent 2,969 bytes received 647 bytes 7,232.00 bytes/sec
total size is 71,648 speedup is 19.81
sending incremental file list
neutron.conf

sent 2,969 bytes received 647 bytes 7,232.00 bytes/sec
total size is 71,648 speedup is 19.81

[root@exp2 ~]

#
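By the way, the same distribution could be done with Ansible's copy module, which can also set ownership in one step and would avoid the permission problem hit later in this section. A sketch (not what was run here):

ansible compute -m copy -a 'src=/root/neutron.conf dest=/etc/neutron/neutron.conf owner=root group=neutron mode=0640'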

Configure the networking option. As on the controller node, there are option 1 and option 2; pick whichever you like, but it must match what the controller node uses. Mine is option 1.
Networking Option 1: Provider networks
Configure the Networking components on a compute node.

Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:

In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:

[linux_bridge]

physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
Replace PROVIDER_INTERFACE_NAME with the name of the underlying provider physical network interface. See Host networking for more information.

In the [vxlan] section, disable VXLAN overlay networks:

[vxlan]

enable_vxlan = false
In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:

[securitygroup]

enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Ensure your Linux operating system kernel supports network bridge filters by verifying all the following sysctl values are set to 1:

net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-ip6tables
To enable networking bridge support, typically the br_netfilter kernel module needs to be loaded. Check your operating system’s documentation for additional details on enabling this module.

Return to Networking compute node configuration

Networking Option 2: Self-service networks
Configure the Networking components on a compute node.

Configure the Linux bridge agent
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for instances and handles security groups.

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete the following actions:

In the [linux_bridge] section, map the provider virtual network to the provider physical network interface:

[linux_bridge]

physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
Replace PROVIDER_INTERFACE_NAME with the name of the underlying provider physical network interface. See Host networking for more information.

In the [vxlan] section, enable VXLAN overlay networks, configure the IP address of the physical network interface that handles overlay networks, and enable layer-2 population:

[vxlan]

enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
Replace OVERLAY_INTERFACE_IP_ADDRESS with the IP address of the underlying physical network interface that handles overlay networks. The example architecture uses the management interface to tunnel traffic to the other nodes. Therefore, replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the compute node. See Host networking for more information.

In the [securitygroup] section, enable security groups and configure the Linux bridge iptables firewall driver:

[securitygroup]

enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Ensure your Linux operating system kernel supports network bridge filters by verifying all the following sysctl values are set to 1:

net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-ip6tables
To enable networking bridge support, typically the br_netfilter kernel module needs to be loaded. Check your operating system’s documentation for additional details on enabling this module.

Return to Networking compute node configuration.

I chose option 1. The concrete configuration is below; only the linuxbridge agent file needs to be modified.

[root@exp2 ~]

# grep -v '^#' linuxbridge_agent.ini | grep -v '^$'
[DEFAULT]

[agent]

[linux_bridge]

physical_interface_mappings = provider:enp0s31f6

[network_log]

[securitygroup]

enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[vxlan]

enable_vxlan = false
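One caveat before distributing a single file to every node: physical_interface_mappings must name the actual provider NIC on each compute node (enp0s31f6 here), so all nodes need the same interface name. If you are not sure, check first, for example:

ansible compute -m command -a 'ip link show'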
Then distribute it to the compute nodes.

[root@exp2 ~]

# for i in {102..107};do rsync --delete -r -v /root/linuxbridge_agent.ini 192.168.0.$i:/etc/neutron/plugins/ml2/linuxbridge_agent.ini ;done
sending incremental file list
linuxbridge_agent.ini

sent 165 bytes received 125 bytes 193.33 bytes/sec
total size is 10,185 speedup is 35.12
sending incremental file list
linuxbridge_agent.ini

sent 2,434 bytes received 125 bytes 5,118.00 bytes/sec
total size is 10,185 speedup is 3.98
sending incremental file list
linuxbridge_agent.ini

sent 2,434 bytes received 125 bytes 5,118.00 bytes/sec
total size is 10,185 speedup is 3.98
sending incremental file list
linuxbridge_agent.ini

sent 2,434 bytes received 125 bytes 5,118.00 bytes/sec
total size is 10,185 speedup is 3.98
sending incremental file list
linuxbridge_agent.ini

sent 2,434 bytes received 125 bytes 5,118.00 bytes/sec
total size is 10,185 speedup is 3.98
sending incremental file list
linuxbridge_agent.ini

sent 2,434 bytes received 125 bytes 1,706.00 bytes/sec
total size is 10,185 speedup is 3.98

[root@exp2 ~]

#
Note: do not forget to load the br_netfilter module into the kernel.

[root@exp2 ~]

# ansible compute -m command -a 'modprobe br_netfilter'
compute5 | CHANGED | rc=0 >>

compute4 | CHANGED | rc=0 >>

compute1 | CHANGED | rc=0 >>

compute3 | CHANGED | rc=0 >>

compute6 | CHANGED | rc=0 >>

compute2 | CHANGED | rc=0 >>

[root@exp2 ~]

# ansible compute -m command -a 'ls /proc/sys/net/bridge'
compute5 | CHANGED | rc=0 >>
bridge-nf-call-arptables
bridge-nf-call-ip6tables
bridge-nf-call-iptables
bridge-nf-filter-pppoe-tagged
bridge-nf-filter-vlan-tagged
bridge-nf-pass-vlan-input-dev

compute1 | CHANGED | rc=0 >>
bridge-nf-call-arptables
bridge-nf-call-ip6tables
bridge-nf-call-iptables
bridge-nf-filter-pppoe-tagged
bridge-nf-filter-vlan-tagged
bridge-nf-pass-vlan-input-dev

compute4 | CHANGED | rc=0 >>
bridge-nf-call-arptables
bridge-nf-call-ip6tables
bridge-nf-call-iptables
bridge-nf-filter-pppoe-tagged
bridge-nf-filter-vlan-tagged
bridge-nf-pass-vlan-input-dev

compute3 | CHANGED | rc=0 >>
bridge-nf-call-arptables
bridge-nf-call-ip6tables
bridge-nf-call-iptables
bridge-nf-filter-pppoe-tagged
bridge-nf-filter-vlan-tagged
bridge-nf-pass-vlan-input-dev

compute2 | CHANGED | rc=0 >>
bridge-nf-call-arptables
bridge-nf-call-ip6tables
bridge-nf-call-iptables
bridge-nf-filter-pppoe-tagged
bridge-nf-filter-vlan-tagged
bridge-nf-pass-vlan-input-dev

compute6 | CHANGED | rc=0 >>
bridge-nf-call-arptables
bridge-nf-call-ip6tables
bridge-nf-call-iptables
bridge-nf-filter-pppoe-tagged
bridge-nf-filter-vlan-tagged
bridge-nf-pass-vlan-input-dev
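Note that modprobe alone does not persist across a reboot. To make the module load automatically at boot you could drop a modules-load.d file on every compute node, for example (a sketch, not part of the original run):

ansible compute -m shell -a 'echo br_netfilter > /etc/modules-load.d/br_netfilter.conf'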
Then distribute sysctl.conf and apply it.

[root@exp2 ~]

# cat sysctl.conf

# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).

net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

[root@exp2 ~]

# for i in {102..107};do rsync --delete -r -v /root/sysctl.conf 192.168.0.$i:/etc/sysctl.conf ;done
sending incremental file list
sysctl.conf

sent 623 bytes received 41 bytes 1,328.00 bytes/sec
total size is 524 speedup is 0.79
sending incremental file list
sysctl.conf

sent 623 bytes received 41 bytes 1,328.00 bytes/sec
total size is 524 speedup is 0.79
sending incremental file list
sysctl.conf

sent 623 bytes received 41 bytes 1,328.00 bytes/sec
total size is 524 speedup is 0.79
sending incremental file list
sysctl.conf

sent 623 bytes received 41 bytes 442.67 bytes/sec
total size is 524 speedup is 0.79
sending incremental file list
sysctl.conf

sent 623 bytes received 41 bytes 442.67 bytes/sec
total size is 524 speedup is 0.79
sending incremental file list
sysctl.conf

sent 623 bytes received 41 bytes 1,328.00 bytes/sec
total size is 524 speedup is 0.79

[root@exp2 ~]

# ansible compute -m command -a 'sysctl -p'
compute5 | CHANGED | rc=0 >>
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

compute3 | CHANGED | rc=0 >>
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

compute2 | CHANGED | rc=0 >>
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

compute1 | CHANGED | rc=0 >>
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

compute4 | CHANGED | rc=0 >>
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

compute6 | CHANGED | rc=0 >>
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

Then modify the nova configuration file so that it uses neutron.

[root@exp2 ~]

# grep -v '^#' nova.conf | grep -v '^$'
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 192.168.0.102
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]

auth_strategy = keystone

[api_database]

[barbican]

[cache]

[cells]

[cinder]

[compute]

[conductor]

[console]

[consoleauth]

[cors]

[database]

[devices]

[ephemeral_storage_encryption]

[filter_scheduler]

[glance]

api_servers = http://controller:9292

[guestfs]

[healthcheck]

[hyperv]

[ironic]

[key_manager]

[keystone]

[keystone_authtoken]

auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123456

[libvirt]

[matchmaker_redis]

[metrics]

[mks]

[neutron]

url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456

[notifications]

[osapi_v21]

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[pci]

[placement]

region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123456

[placement_database]

[powervm]

[profiler]

[quota]

[rdp]

[remote_debug]

[scheduler]

[serial_console]

[service_user]

[spice]

[upgrade_levels]

[vault]

[vendordata_dynamic_auth]

[vmware]

[vnc]

enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[workarounds]

[wsgi]

[xenserver]

[xvp]

[zvm]

[root@exp2 ~]

#
Distribute it to the compute nodes.

[root@exp2 ~]

# for i in {102..107};do rsync --delete -r -v /root/nova.conf 192.168.0.$i:/etc/nova/nova.conf ;done
sending incremental file list
nova.conf

sent 2,337 bytes received 3,401 bytes 3,825.33 bytes/sec
total size is 392,325 speedup is 68.37
sending incremental file list
nova.conf

sent 3,962 bytes received 3,401 bytes 14,726.00 bytes/sec
total size is 392,325 speedup is 53.28
sending incremental file list
nova.conf

sent 3,962 bytes received 3,401 bytes 14,726.00 bytes/sec
total size is 392,325 speedup is 53.28
sending incremental file list
nova.conf

sent 3,962 bytes received 3,401 bytes 14,726.00 bytes/sec
total size is 392,325 speedup is 53.28
sending incremental file list
nova.conf

sent 3,962 bytes received 3,401 bytes 14,726.00 bytes/sec
total size is 392,325 speedup is 53.28
sending incremental file list
nova.conf

sent 3,962 bytes received 3,401 bytes 4,908.67 bytes/sec
total size is 392,325 speedup is 53.28

[root@exp2 ~]

#

Restart the nova service, then start the neutron agent and set it to start at boot.

[root@exp2 ~]

# ansible compute -m command -a 'systemctl restart openstack-nova-compute.service'
compute1 | FAILED | rc=1 >>
Job for openstack-nova-compute.service failed because the control process exited with error code. See “systemctl status openstack-nova-compute.service” and “journalctl -xe” for details.non-zero return code

compute4 | FAILED | rc=1 >>
Job for openstack-nova-compute.service failed because the control process exited with error code. See “systemctl status openstack-nova-compute.service” and “journalctl -xe” for details.non-zero return code

compute5 | FAILED | rc=1 >>
Job for openstack-nova-compute.service failed because the control process exited with error code. See “systemctl status openstack-nova-compute.service” and “journalctl -xe” for details.non-zero return code

compute3 | FAILED | rc=1 >>
Job for openstack-nova-compute.service failed because the control process exited with error code. See “systemctl status openstack-nova-compute.service” and “journalctl -xe” for details.non-zero return code

compute2 | FAILED | rc=1 >>
Job for openstack-nova-compute.service failed because the control process exited with error code. See “systemctl status openstack-nova-compute.service” and “journalctl -xe” for details.non-zero return code

compute6 | FAILED | rc=1 >>
Job for openstack-nova-compute.service failed because the control process exited with error code. See “systemctl status openstack-nova-compute.service” and “journalctl -xe” for details.non-zero return code
Starting it right away failed again, which is hardly surprising: I forgot to set permissions on the configuration files yet again. Let's fix them one by one.
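If the cause were less obvious, the service journal on one of the failing nodes would show the actual traceback, for example:

ansible compute1 -m command -a 'journalctl -u openstack-nova-compute.service --no-pager -n 30'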
(1)/etc/nova/nova.conf

[root@exp2 ~]

# ansible compute -m command -a 'chown .nova /etc/nova/nova.conf'
[WARNING]: Consider using the file module with owner rather than running ‘chown’. If you need to use command because file is insufficient you can
add ‘warn: false’ to this command task or set ‘command_warnings=False’ in ansible.cfg to get rid of this message.

compute5 | CHANGED | rc=0 >>

compute4 | CHANGED | rc=0 >>

compute2 | CHANGED | rc=0 >>

compute6 | CHANGED | rc=0 >>

compute3 | CHANGED | rc=0 >>

compute1 | CHANGED | rc=0 >>

[root@exp2 ~]

# ansible compute -m command -a 'ls -l /etc/nova/nova.conf'
compute5 | CHANGED | rc=0 >>
-rw-r----- 1 root nova 392325 Apr 15 19:50 /etc/nova/nova.conf

compute2 | CHANGED | rc=0 >>
-rw-r----- 1 root nova 392325 Apr 15 19:50 /etc/nova/nova.conf

compute4 | CHANGED | rc=0 >>
-rw-r----- 1 root nova 392325 Apr 15 19:50 /etc/nova/nova.conf

compute3 | CHANGED | rc=0 >>
-rw-r----- 1 root nova 392325 Apr 15 19:50 /etc/nova/nova.conf

compute1 | CHANGED | rc=0 >>
-rw-r----- 1 root nova 392325 Apr 15 19:50 /etc/nova/nova.conf

compute6 | CHANGED | rc=0 >>
-rw-r----- 1 root nova 392325 Apr 15 19:50 /etc/nova/nova.conf
Restart once more, and everything is OK.

[root@exp2 ~]

# ansible compute -m command -a 'systemctl restart openstack-nova-compute.service'
compute5 | CHANGED | rc=0 >>

compute1 | CHANGED | rc=0 >>

compute4 | CHANGED | rc=0 >>

compute3 | CHANGED | rc=0 >>

compute2 | CHANGED | rc=0 >>

compute6 | CHANGED | rc=0 >>

[root@exp2 ~]

# ansible compute -m command -a 'systemctl status openstack-nova-compute.service'
compute5 | CHANGED | rc=0 >>
● openstack-nova-compute.service – OpenStack Nova Compute Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-04-15 19:55:35 EDT; 10s ago
Main PID: 20639 (nova-compute)
Tasks: 22
CGroup: /system.slice/openstack-nova-compute.service
└─20639 /usr/bin/python2 /usr/bin/nova-compute

Apr 15 19:55:32 compute5 systemd[1]: Starting OpenStack Nova Compute Server…
Apr 15 19:55:35 compute5 systemd[1]: Started OpenStack Nova Compute Server.

compute4 | CHANGED | rc=0 >>
● openstack-nova-compute.service – OpenStack Nova Compute Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-04-15 19:55:35 EDT; 10s ago
Main PID: 15317 (nova-compute)
Tasks: 22
CGroup: /system.slice/openstack-nova-compute.service
└─15317 /usr/bin/python2 /usr/bin/nova-compute

Apr 15 19:55:32 compute4 systemd[1]: Starting OpenStack Nova Compute Server…
Apr 15 19:55:35 compute4 systemd[1]: Started OpenStack Nova Compute Server.

compute1 | CHANGED | rc=0 >>
● openstack-nova-compute.service – OpenStack Nova Compute Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-04-15 19:55:35 EDT; 10s ago
Main PID: 30795 (nova-compute)
Tasks: 22
CGroup: /system.slice/openstack-nova-compute.service
└─30795 /usr/bin/python2 /usr/bin/nova-compute

Apr 15 19:55:32 compute1 systemd[1]: Starting OpenStack Nova Compute Server…
Apr 15 19:55:35 compute1 systemd[1]: Started OpenStack Nova Compute Server.

compute2 | CHANGED | rc=0 >>
● openstack-nova-compute.service – OpenStack Nova Compute Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-04-15 19:55:35 EDT; 9s ago
Main PID: 15163 (nova-compute)
Tasks: 22
CGroup: /system.slice/openstack-nova-compute.service
└─15163 /usr/bin/python2 /usr/bin/nova-compute

Apr 15 19:55:33 compute2 systemd[1]: Starting OpenStack Nova Compute Server…
Apr 15 19:55:35 compute2 systemd[1]: Started OpenStack Nova Compute Server.

compute3 | CHANGED | rc=0 >>
● openstack-nova-compute.service – OpenStack Nova Compute Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-04-15 19:55:35 EDT; 10s ago
Main PID: 20482 (nova-compute)
Tasks: 22
CGroup: /system.slice/openstack-nova-compute.service
└─20482 /usr/bin/python2 /usr/bin/nova-compute

Apr 15 19:55:32 compute3 systemd[1]: Starting OpenStack Nova Compute Server…
Apr 15 19:55:35 compute3 systemd[1]: Started OpenStack Nova Compute Server.

compute6 | CHANGED | rc=0 >>
● openstack-nova-compute.service – OpenStack Nova Compute Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-04-15 19:55:38 EDT; 7s ago
Main PID: 15068 (nova-compute)
Tasks: 22
CGroup: /system.slice/openstack-nova-compute.service
└─15068 /usr/bin/python2 /usr/bin/nova-compute

Apr 15 19:55:35 compute6 systemd[1]: Starting OpenStack Nova Compute Server…
Apr 15 19:55:38 compute6 systemd[1]: Started OpenStack Nova Compute Server.

(2)/etc/neutron/neutron.conf

[root@exp2 ~]

# ansible compute -m command -a 'chown .neutron /etc/neutron/neutron.conf'
[WARNING]: Consider using the file module with owner rather than running ‘chown’. If you need to use command because file is insufficient you can
add ‘warn: false’ to this command task or set ‘command_warnings=False’ in ansible.cfg to get rid of this message.

compute5 | CHANGED | rc=0 >>

compute4 | CHANGED | rc=0 >>

compute1 | CHANGED | rc=0 >>

compute3 | CHANGED | rc=0 >>

compute6 | CHANGED | rc=0 >>

compute2 | CHANGED | rc=0 >>

[root@exp2 ~]

# ansible compute -m command -a 'ls -l /etc/neutron/neutron.conf'
compute5 | CHANGED | rc=0 >>
-rw-r----- 1 root neutron 71648 Apr 15 19:36 /etc/neutron/neutron.conf

compute4 | CHANGED | rc=0 >>
-rw-r----- 1 root neutron 71648 Apr 15 19:36 /etc/neutron/neutron.conf

compute1 | CHANGED | rc=0 >>
-rw-r----- 1 root neutron 71648 Apr 15 19:36 /etc/neutron/neutron.conf

compute3 | CHANGED | rc=0 >>
-rw-r----- 1 root neutron 71648 Apr 15 19:36 /etc/neutron/neutron.conf

compute2 | CHANGED | rc=0 >>
-rw-r----- 1 root neutron 71648 Apr 15 19:36 /etc/neutron/neutron.conf

compute6 | CHANGED | rc=0 >>
-rw-r----- 1 root neutron 71648 Apr 15 19:36 /etc/neutron/neutron.conf

(3)/etc/neutron/plugins/ml2/linuxbridge_agent.ini

[root@exp2 ~]

# ansible compute -m command -a 'chown .neutron /etc/neutron/plugins/ml2/linuxbridge_agent.ini'
[WARNING]: Consider using the file module with owner rather than running ‘chown’. If you need to use command because file is insufficient you can
add ‘warn: false’ to this command task or set ‘command_warnings=False’ in ansible.cfg to get rid of this message.

compute5 | CHANGED | rc=0 >>

compute4 | CHANGED | rc=0 >>

compute3 | CHANGED | rc=0 >>

compute1 | CHANGED | rc=0 >>

compute6 | CHANGED | rc=0 >>

compute2 | CHANGED | rc=0 >>

[root@exp2 ~]

# ansible compute -m command -a 'ls -l /etc/neutron/plugins/ml2/linuxbridge_agent.ini'
compute5 | CHANGED | rc=0 >>
-rw-r----- 1 root neutron 10185 Apr 15 19:44 /etc/neutron/plugins/ml2/linuxbridge_agent.ini

compute3 | CHANGED | rc=0 >>
-rw-r----- 1 root neutron 10185 Apr 15 19:44 /etc/neutron/plugins/ml2/linuxbridge_agent.ini

compute2 | CHANGED | rc=0 >>
-rw-r----- 1 root neutron 10185 Apr 15 19:44 /etc/neutron/plugins/ml2/linuxbridge_agent.ini

compute1 | CHANGED | rc=0 >>
-rw-r----- 1 root neutron 10185 Apr 15 19:44 /etc/neutron/plugins/ml2/linuxbridge_agent.ini

compute6 | CHANGED | rc=0 >>
-rw-r----- 1 root neutron 10185 Apr 15 19:44 /etc/neutron/plugins/ml2/linuxbridge_agent.ini

compute4 | CHANGED | rc=0 >>
-rw-r----- 1 root neutron 10185 Apr 15 19:44 /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[root@exp2 ~]

#
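As the Ansible warning above keeps suggesting, the same ownership fix can also be done with the file module instead of shelling out to chown; it is idempotent and does not trigger the warning. A sketch of the equivalent calls:

ansible compute -m file -a 'path=/etc/nova/nova.conf owner=root group=nova mode=0640'
ansible compute -m file -a 'path=/etc/neutron/neutron.conf owner=root group=neutron mode=0640'
ansible compute -m file -a 'path=/etc/neutron/plugins/ml2/linuxbridge_agent.ini owner=root group=neutron mode=0640'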

Now start the neutron Linux bridge agent again and set it to start at boot.

[root@exp2 ~]

# ansible compute -m command -a 'systemctl enable neutron-linuxbridge-agent.service'
compute5 | CHANGED | rc=0 >>
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.

compute4 | CHANGED | rc=0 >>
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.

compute1 | CHANGED | rc=0 >>
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.

compute2 | CHANGED | rc=0 >>
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.

compute3 | CHANGED | rc=0 >>
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.

compute6 | CHANGED | rc=0 >>
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.

[root@exp2 ~]

# ansible compute -m command -a 'systemctl start neutron-linuxbridge-agent.service'
compute5 | CHANGED | rc=0 >>

compute4 | CHANGED | rc=0 >>

compute2 | CHANGED | rc=0 >>

compute3 | CHANGED | rc=0 >>

compute1 | CHANGED | rc=0 >>

compute6 | CHANGED | rc=0 >>

[root@exp2 ~]

# ansible compute -m command -a 'systemctl status neutron-linuxbridge-agent.service'
compute5 | CHANGED | rc=0 >>
● neutron-linuxbridge-agent.service – OpenStack Neutron Linux Bridge Agent
Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-04-15 19:59:40 EDT; 8s ago
Process: 21123 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS)
Main PID: 21130 (neutron-linuxbr)
Tasks: 1
CGroup: /system.slice/neutron-linuxbridge-agent.service
└─21130 /usr/bin/python2 /usr/bin/neutron-linuxbridge-agent –config-file /usr/share/neutron/neutron-dist.conf –config-file /etc/neutron/neutron.conf –config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini –config-dir /etc/neutron/conf.d/common –config-dir /etc/neutron/conf.d/neutron-linuxbridge-agent –log-file /var/log/neutron/linuxbridge-agent.log

Apr 15 19:59:40 compute5 systemd[1]: Starting OpenStack Neutron Linux Bridge Agent…
Apr 15 19:59:40 compute5 neutron-enable-bridge-firewall.sh[21123]: net.bridge.bridge-nf-call-iptables = 1
Apr 15 19:59:40 compute5 neutron-enable-bridge-firewall.sh[21123]: net.bridge.bridge-nf-call-ip6tables = 1
Apr 15 19:59:40 compute5 systemd[1]: Started OpenStack Neutron Linux Bridge Agent.
Apr 15 19:59:41 compute5 sudo[21151]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-rootwrap /etc/neutron/rootwrap.conf privsep-helper –config-file /usr/share/neutron/neutron-dist.conf –config-file /etc/neutron/neutron.conf –config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini –config-dir /etc/neutron/conf.d/neutron-linuxbridge-agent –privsep_context neutron.privileged.default –privsep_sock_path /tmp/tmpCDExMe/privsep.sock
Apr 15 19:59:42 compute5 sudo[21174]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf

compute4 | CHANGED | rc=0 >>
● neutron-linuxbridge-agent.service – OpenStack Neutron Linux Bridge Agent
Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-04-15 19:59:40 EDT; 8s ago
Process: 15801 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS)
Main PID: 15808 (neutron-linuxbr)
Tasks: 1
CGroup: /system.slice/neutron-linuxbridge-agent.service
└─15808 /usr/bin/python2 /usr/bin/neutron-linuxbridge-agent –config-file /usr/share/neutron/neutron-dist.conf –config-file /etc/neutron/neutron.conf –config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini –config-dir /etc/neutron/conf.d/common –config-dir /etc/neutron/conf.d/neutron-linuxbridge-agent –log-file /var/log/neutron/linuxbridge-agent.log

Apr 15 19:59:40 compute4 systemd[1]: Starting OpenStack Neutron Linux Bridge Agent…
Apr 15 19:59:40 compute4 neutron-enable-bridge-firewall.sh[15801]: net.bridge.bridge-nf-call-iptables = 1
Apr 15 19:59:40 compute4 neutron-enable-bridge-firewall.sh[15801]: net.bridge.bridge-nf-call-ip6tables = 1
Apr 15 19:59:40 compute4 systemd[1]: Started OpenStack Neutron Linux Bridge Agent.
Apr 15 19:59:41 compute4 sudo[15829]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-rootwrap /etc/neutron/rootwrap.conf privsep-helper –config-file /usr/share/neutron/neutron-dist.conf –config-file /etc/neutron/neutron.conf –config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini –config-dir /etc/neutron/conf.d/neutron-linuxbridge-agent –privsep_context neutron.privileged.default –privsep_sock_path /tmp/tmp6sRimD/privsep.sock
Apr 15 19:59:42 compute4 sudo[15852]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf

compute3 | CHANGED | rc=0 >>
● neutron-linuxbridge-agent.service – OpenStack Neutron Linux Bridge Agent
Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-04-15 19:59:40 EDT; 8s ago
Process: 20967 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS)
Main PID: 20974 (neutron-linuxbr)
Tasks: 1
CGroup: /system.slice/neutron-linuxbridge-agent.service
└─20974 /usr/bin/python2 /usr/bin/neutron-linuxbridge-agent –config-file /usr/share/neutron/neutron-dist.conf –config-file /etc/neutron/neutron.conf –config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini –config-dir /etc/neutron/conf.d/common –config-dir /etc/neutron/conf.d/neutron-linuxbridge-agent –log-file /var/log/neutron/linuxbridge-agent.log

Apr 15 19:59:40 compute3 systemd[1]: Starting OpenStack Neutron Linux Bridge Agent…
Apr 15 19:59:40 compute3 neutron-enable-bridge-firewall.sh[20967]: net.bridge.bridge-nf-call-iptables = 1
Apr 15 19:59:40 compute3 neutron-enable-bridge-firewall.sh[20967]: net.bridge.bridge-nf-call-ip6tables = 1
Apr 15 19:59:40 compute3 systemd[1]: Started OpenStack Neutron Linux Bridge Agent.
Apr 15 19:59:41 compute3 sudo[20995]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-rootwrap /etc/neutron/rootwrap.conf privsep-helper –config-file /usr/share/neutron/neutron-dist.conf –config-file /etc/neutron/neutron.conf –config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini –config-dir /etc/neutron/conf.d/neutron-linuxbridge-agent –privsep_context neutron.privileged.default –privsep_sock_path /tmp/tmpexsqHf/privsep.sock
Apr 15 19:59:42 compute3 sudo[21018]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf

compute2 | CHANGED | rc=0 >>
● neutron-linuxbridge-agent.service – OpenStack Neutron Linux Bridge Agent
Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-04-15 19:59:40 EDT; 8s ago
Process: 15647 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS)
Main PID: 15654 (neutron-linuxbr)
Tasks: 1
CGroup: /system.slice/neutron-linuxbridge-agent.service
└─15654 /usr/bin/python2 /usr/bin/neutron-linuxbridge-agent –config-file /usr/share/neutron/neutron-dist.conf –config-file /etc/neutron/neutron.conf –config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini –config-dir /etc/neutron/conf.d/common –config-dir /etc/neutron/conf.d/neutron-linuxbridge-agent –log-file /var/log/neutron/linuxbridge-agent.log

Apr 15 19:59:40 compute2 systemd[1]: Starting OpenStack Neutron Linux Bridge Agent…
Apr 15 19:59:40 compute2 neutron-enable-bridge-firewall.sh[15647]: net.bridge.bridge-nf-call-iptables = 1
Apr 15 19:59:40 compute2 neutron-enable-bridge-firewall.sh[15647]: net.bridge.bridge-nf-call-ip6tables = 1
Apr 15 19:59:40 compute2 systemd[1]: Started OpenStack Neutron Linux Bridge Agent.
Apr 15 19:59:41 compute2 sudo[15675]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-rootwrap /etc/neutron/rootwrap.conf privsep-helper –config-file /usr/share/neutron/neutron-dist.conf –config-file /etc/neutron/neutron.conf –config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini –config-dir /etc/neutron/conf.d/neutron-linuxbridge-agent –privsep_context neutron.privileged.default –privsep_sock_path /tmp/tmpSMgexU/privsep.sock
Apr 15 19:59:42 compute2 sudo[15698]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf

compute1 | CHANGED | rc=0 >>
● neutron-linuxbridge-agent.service – OpenStack Neutron Linux Bridge Agent
Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-04-15 19:59:40 EDT; 8s ago
Process: 31279 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS)
Main PID: 31286 (neutron-linuxbr)
Tasks: 1
CGroup: /system.slice/neutron-linuxbridge-agent.service
└─31286 /usr/bin/python2 /usr/bin/neutron-linuxbridge-agent –config-file /usr/share/neutron/neutron-dist.conf –config-file /etc/neutron/neutron.conf –config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini –config-dir /etc/neutron/conf.d/common –config-dir /etc/neutron/conf.d/neutron-linuxbridge-agent –log-file /var/log/neutron/linuxbridge-agent.log

Apr 15 19:59:40 compute1 systemd[1]: Starting OpenStack Neutron Linux Bridge Agent…
Apr 15 19:59:40 compute1 neutron-enable-bridge-firewall.sh[31279]: net.bridge.bridge-nf-call-iptables = 1
Apr 15 19:59:40 compute1 neutron-enable-bridge-firewall.sh[31279]: net.bridge.bridge-nf-call-ip6tables = 1
Apr 15 19:59:40 compute1 systemd[1]: Started OpenStack Neutron Linux Bridge Agent.
Apr 15 19:59:42 compute1 sudo[31307]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-rootwrap /etc/neutron/rootwrap.conf privsep-helper –config-file /usr/share/neutron/neutron-dist.conf –config-file /etc/neutron/neutron.conf –config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini –config-dir /etc/neutron/conf.d/neutron-linuxbridge-agent –privsep_context neutron.privileged.default –privsep_sock_path /tmp/tmpLnbmRh/privsep.sock
Apr 15 19:59:42 compute1 sudo[31330]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf

compute6 | CHANGED | rc=0 >>
● neutron-linuxbridge-agent.service – OpenStack Neutron Linux Bridge Agent
Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-04-15 19:59:40 EDT; 8s ago
Process: 15554 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS)
Main PID: 15561 (neutron-linuxbr)
Tasks: 1
CGroup: /system.slice/neutron-linuxbridge-agent.service
└─15561 /usr/bin/python2 /usr/bin/neutron-linuxbridge-agent –config-file /usr/share/neutron/neutron-dist.conf –config-file /etc/neutron/neutron.conf –config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini –config-dir /etc/neutron/conf.d/common –config-dir /etc/neutron/conf.d/neutron-linuxbridge-agent –log-file /var/log/neutron/linuxbridge-agent.log

Apr 15 19:59:40 compute6 systemd[1]: Starting OpenStack Neutron Linux Bridge Agent…
Apr 15 19:59:40 compute6 neutron-enable-bridge-firewall.sh[15554]: net.bridge.bridge-nf-call-iptables = 1
Apr 15 19:59:40 compute6 neutron-enable-bridge-firewall.sh[15554]: net.bridge.bridge-nf-call-ip6tables = 1
Apr 15 19:59:40 compute6 systemd[1]: Started OpenStack Neutron Linux Bridge Agent.
Apr 15 19:59:42 compute6 sudo[15583]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-rootwrap /etc/neutron/rootwrap.conf privsep-helper –config-file /usr/share/neutron/neutron-dist.conf –config-file /etc/neutron/neutron.conf –config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini –config-dir /etc/neutron/conf.d/neutron-linuxbridge-agent –privsep_context neutron.privileged.default –privsep_sock_path /tmp/tmpOyMTcX/privsep.sock
Apr 15 19:59:43 compute6 sudo[15606]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf

Verify the network configuration above.
Because there are two networking options, follow the verification that matches the option you chose.
Networking Option 1: Provider networks

List agents to verify successful launch of the neutron agents:

$ openstack network agent list

+————————————–+——————–+————+——————-+——-+——-+—————————+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+————————————–+——————–+————+——————-+——-+——-+—————————+
| 0400c2f6-4d3b-44bc-89fa-99093432f3bf | Metadata agent | controller | None | True | UP | neutron-metadata-agent |
| 83cf853d-a2f2-450a-99d7-e9c6fc08f4c3 | DHCP agent | controller | nova | True | UP | neutron-dhcp-agent |
| ec302e51-6101-43cf-9f19-88a78613cbee | Linux bridge agent | compute | None | True | UP | neutron-linuxbridge-agent |
| fcb9bc6e-22b1-43bc-9054-272dd517d025 | Linux bridge agent | controller | None | True | UP | neutron-linuxbridge-agent |
+————————————–+——————–+————+——————-+——-+——-+—————————+
The output should indicate three agents on the controller node and one agent on each compute node.

Networking Option 2: Self-service networks

List agents to verify successful launch of the neutron agents:

$ openstack network agent list

+————————————–+——————–+————+——————-+——-+——-+—————————+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+————————————–+——————–+————+——————-+——-+——-+—————————+
| f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent | controller | None | True | UP | neutron-metadata-agent |
| 27eee952-a748-467b-bf71-941e89846a92 | Linux bridge agent | controller | None | True | UP | neutron-linuxbridge-agent |
| 08905043-5010-4b87-bba5-aedb1956e27a | Linux bridge agent | compute1 | None | True | UP | neutron-linuxbridge-agent |
| 830344ff-dc36-4956-84f4-067af667a0dc | L3 agent | controller | nova | True | UP | neutron-l3-agent |
| dd3644c9-1a3a-435a-9282-eb306b4b0391 | DHCP agent | controller | nova | True | UP | neutron-dhcp-agent |
+————————————–+——————–+————+——————-+——-+——-+—————————+
The output should indicate four agents on the controller node and one agent on each compute node.

I chose option 1, so there is no L3 agent in my deployment.

[root@controller ~]

# openstack network agent list
+————————————–+——————–+————+——————-+——-+——-+—————————+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+————————————–+——————–+————+——————-+——-+——-+—————————+
| 117ab610-ef0b-4340-b989-34f6ba8aa3b0 | DHCP agent | controller | nova | 🙂 | UP | neutron-dhcp-agent |
| 13efd53b-6d47-496f-a168-3e2f5ce2da41 | Linux bridge agent | compute3 | None | 🙂 | UP | neutron-linuxbridge-agent |
| 23a58100-0d38-437f-a4f7-d50cb2de0679 | Metadata agent | controller | None | 🙂 | UP | neutron-metadata-agent |
| 3b993af6-8cc4-4ccc-a4f7-aff559570306 | Linux bridge agent | compute2 | None | 🙂 | UP | neutron-linuxbridge-agent |
| 53110700-6e83-41ef-8686-0b33890ebd4b | Linux bridge agent | controller | None | 🙂 | UP | neutron-linuxbridge-agent |
| 7a5b1efe-5256-4e25-80bb-9e267f2f08e6 | Linux bridge agent | compute5 | None | 🙂 | UP | neutron-linuxbridge-agent |
| 7c5429d7-62f7-43be-933c-2f5c5b66b96b | Linux bridge agent | compute1 | None | 🙂 | UP | neutron-linuxbridge-agent |
| a8ccf75d-fb8b-4bbe-af43-1a04330e27f3 | Linux bridge agent | compute6 | None | 🙂 | UP | neutron-linuxbridge-agent |
| ec4eaf63-eee8-4ba0-860d-6ef423256d31 | Linux bridge agent | compute4 | None | 🙂 | UP | neutron-linuxbridge-agent |
+————————————–+——————–+————+——————-+——-+——-+—————————+

[root@controller ~]

#

At this point the Neutron networking deployment is complete.
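If, like the output above, you drive the compute nodes with Ansible ad-hoc commands, a quick way to re-check every agent is something like the following (a sketch; the inventory group name "compute" is an assumption):

# Assumption: an Ansible inventory group named "compute" covering compute1..compute6
ansible compute -m shell -a 'systemctl is-active neutron-linuxbridge-agent'
# On the controller, every agent should report alive:
. admin-openrc
openstack network agent list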

#
#
#
#
#
#

1.5 Dashboard deployment
Now for the exciting part: the dashboard. From this point on you get a graphical interface.
The Rocky release has the following minimum requirements.
The Rocky release of horizon has the following dependencies.

Python 2.7 or 3.5
Django 1.11 or 2.0
Django 1.8 to 1.10 are no longer supported since Rocky release.
Horizon usually syncs with Django’s Roadmap and basically supports maintained versions of Django as of the feature freeze of each OpenStack release.
An accessible keystone endpoint
All other services are optional. Horizon supports the following services as of the Rocky release. If the keystone endpoint for a service is configured, horizon detects it and enables its support automatically.
cinder: Block Storage
glance: Image Management
neutron: Networking
nova: Compute
swift: Object Storage
Horizon also supports many other OpenStack services via plugins. For more information, see the Plugin Registry.
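To confirm ahead of time that the CentOS 7 Rocky repository satisfies the Django requirement above, you can query the package metadata (a sketch; python2-django is the package name, which the yum transaction later in this section also lists):

# Expect a 1.11.x version, which is within Horizon's supported range for Rocky
yum info python2-django | grep -E '^(Name|Version)'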

First, install the dashboard.
This section describes how to install and configure the dashboard on the controller node.

The only core service required by the dashboard is the Identity service. You can use the dashboard in combination with other services, such as Image service, Compute, and Networking. You can also use the dashboard in environments with stand-alone services such as Object Storage.

Note

This section assumes proper installation, configuration, and operation of the Identity service using the Apache HTTP server and Memcached service.

Install and configure components¶
Note

Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (…) in the configuration snippets indicates potential default configuration options that you should retain.

Install the packages:

yum install openstack-dashboard

Edit the /etc/openstack-dashboard/local_settings file and complete the following actions:

Configure the dashboard to use OpenStack services on the controller node:

OPENSTACK_HOST = "controller"
Allow your hosts to access the dashboard:

ALLOWED_HOSTS = ['one.example.com', 'two.example.com']
Note

ALLOWED_HOSTS can also be ['*'] to accept all hosts. This may be useful for development work, but is potentially insecure and should not be used in production. See https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts for further information.

Configure the memcached session storage service:

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
Note

Comment out any other session storage configuration.

Enable the Identity API version 3:

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
Enable support for domains:

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
Configure API versions:

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
Configure Default as the default domain for users that you create via the dashboard:

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
Configure user as the default role for users that you create via the dashboard:

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
If you chose networking option 1, disable support for layer-3 networking services:

OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
Optionally, configure the time zone:

TIME_ZONE = "TIME_ZONE"
Replace TIME_ZONE with an appropriate time zone identifier. For more information, see the list of time zones.

Add the following line to /etc/httpd/conf.d/openstack-dashboard.conf if not included.

WSGIApplicationGroup %{GLOBAL}
Finalize installation¶
Restart the web server and session storage service:

systemctl restart httpd.service memcached.service

Note

The systemctl restart command starts each service if not currently running.

All right, time to get down to business and install the packages.

[root@controller ~]

# yum install openstack-dashboard
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
epel/x86_64/metalink | 18 kB 00:00:00

  • base: linux.mirrors.es.net
  • centos-qemu-ev: sjc.edge.kernel.org
  • epel: fedora.mirrors.pair.com
  • extras: linux.mirrors.es.net
  • updates: linux.mirrors.es.net
    base | 3.6 kB 00:00:00
    centos-ceph-luminous | 2.9 kB 00:00:00
    centos-openstack-rocky | 2.9 kB 00:00:00
    centos-qemu-ev | 2.9 kB 00:00:00
    extras | 3.4 kB 00:00:00
    updates | 3.4 kB 00:00:00
    Resolving Dependencies
    –> Running transaction check
    —> Package openstack-dashboard.noarch 1:14.0.2-1.el7 will be installed
#

Installed:
openstack-dashboard.noarch 1:14.0.2-1.el7

Dependency Installed:
XStatic-Angular-common.noarch 1:1.5.8.0-1.el7 bootswatch-common.noarch 0:3.3.7.0-1.el7
bootswatch-fonts.noarch 0:3.3.7.0-1.el7 fontawesome-fonts.noarch 0:4.4.0-1.el7
fontawesome-fonts-web.noarch 0:4.4.0-1.el7 mdi-common.noarch 0:1.4.57.0-4.el7
mdi-fonts.noarch 0:1.4.57.0-4.el7 openstack-dashboard-theme.noarch 1:14.0.2-1.el7
python-XStatic-Angular-lrdragndrop.noarch 0:1.0.2.2-2.el7 python-XStatic-Bootstrap-Datepicker.noarch 0:1.3.1.0-1.el7
python-XStatic-Hogan.noarch 0:2.0.0.2-2.el7 python-XStatic-JQuery-Migrate.noarch 0:1.2.1.1-2.el7
python-XStatic-JQuery-TableSorter.noarch 0:2.14.5.1-2.el7 python-XStatic-JQuery-quicksearch.noarch 0:2.0.3.1-2.el7
python-XStatic-Magic-Search.noarch 0:0.2.0.1-2.el7 python-XStatic-Rickshaw.noarch 0:1.5.0.0-4.el7
python-XStatic-Spin.noarch 0:1.2.5.2-2.el7 python-XStatic-jQuery.noarch 0:1.10.2.1-1.el7
python-XStatic-jquery-ui.noarch 0:1.12.0.1-1.el7 python-bson.x86_64 0:3.0.3-1.el7
python-django-appconf.noarch 0:1.0.1-4.el7 python-django-bash-completion.noarch 0:1.11.20-1.el7
python-django-horizon.noarch 1:14.0.2-1.el7 python-django-pyscss.noarch 0:2.0.2-1.el7
python-lesscpy.noarch 0:0.9j-4.el7 python-pathlib.noarch 0:1.0.1-1.el7
python-pint.noarch 0:0.6-2.el7 python-pymongo.x86_64 0:3.0.3-1.el7
python-semantic_version.noarch 0:2.4.2-2.el7 python-versiontools.noarch 0:1.9.1-4.el7
python2-XStatic.noarch 0:1.0.1-8.el7 python2-XStatic-Angular.noarch 1:1.5.8.0-1.el7
python2-XStatic-Angular-Bootstrap.noarch 0:2.2.0.0-1.el7 python2-XStatic-Angular-FileUpload.noarch 0:12.0.4.0-1.el7
python2-XStatic-Angular-Gettext.noarch 0:2.3.8.0-1.el7 python2-XStatic-Angular-Schema-Form.noarch 0:0.8.13.0-0.1.pre_review.el7
python2-XStatic-Bootstrap-SCSS.noarch 0:3.3.7.1-2.el7 python2-XStatic-D3.noarch 0:3.5.17.0-1.el7
python2-XStatic-Font-Awesome.noarch 0:4.7.0.0-3.el7 python2-XStatic-JSEncrypt.noarch 0:2.3.1.1-1.el7
python2-XStatic-Jasmine.noarch 0:2.4.1.1-1.el7 python2-XStatic-bootswatch.noarch 0:3.3.7.0-1.el7
python2-XStatic-mdi.noarch 0:1.4.57.0-4.el7 python2-XStatic-objectpath.noarch 0:1.2.1.0-0.1.pre_review.el7
python2-XStatic-roboto-fontface.noarch 0:0.5.0.0-1.el7 python2-XStatic-smart-table.noarch 0:1.4.13.2-1.el7
python2-XStatic-termjs.noarch 0:0.0.7.0-1.el7 python2-XStatic-tv4.noarch 0:1.2.7.0-0.1.pre_review.el7
python2-django.noarch 0:1.11.20-1.el7 python2-django-babel.noarch 0:0.6.2-1.el7
python2-django-compressor.noarch 0:2.1-5.el7 python2-rcssmin.x86_64 0:1.0.6-10.el7
python2-rjsmin.x86_64 0:1.0.12-2.el7 python2-scss.x86_64 0:1.3.4-6.el7
roboto-fontface-common.noarch 0:0.5.0.0-1.el7 roboto-fontface-fonts.noarch 0:0.5.0.0-1.el7
web-assets-filesystem.noarch 0:5-1.el7 xstatic-angular-bootstrap-common.noarch 0:2.2.0.0-1.el7
xstatic-angular-fileupload-common.noarch 0:12.0.4.0-1.el7 xstatic-angular-gettext-common.noarch 0:2.3.8.0-1.el7
xstatic-angular-schema-form-common.noarch 0:0.8.13.0-0.1.pre_review.el7 xstatic-bootstrap-scss-common.noarch 0:3.3.7.1-2.el7
xstatic-d3-common.noarch 0:3.5.17.0-1.el7 xstatic-jasmine-common.noarch 0:2.4.1.1-1.el7
xstatic-jsencrypt-common.noarch 0:2.3.1.1-1.el7 xstatic-objectpath-common.noarch 0:1.2.1.0-0.1.pre_review.el7
xstatic-smart-table-common.noarch 0:1.4.13.2-1.el7 xstatic-termjs-common.noarch 0:0.0.7.0-1.el7
xstatic-tv4-common.noarch 0:1.2.7.0-0.1.pre_review.el7

Complete!

Then configure /etc/openstack-dashboard/local_settings.
(Note: because my networking choice is option 1, layer-3 support is disabled; if you chose option 2, do not disable it.)

[root@controller ~]

# vim /etc/openstack-dashboard/local_settings

[root@controller ~]

# grep -v '^#' /etc/openstack-dashboard/local_settings | grep -v '^$'
import os
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard.settings import HORIZON_CONFIG
DEBUG = False
WEBROOT = ‘/dashboard/’
ALLOWED_HOSTS = [‘*’]
OPENSTACK_API_VERSIONS = {
“identity”: 3,
“image”: 2,
“volume”: 2,
}
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = “Default”
LOCAL_PATH = ‘/tmp’
SECRET_KEY=’d4a759fe1e6a5bbe0d04′
SESSION_ENGINE = ‘django.contrib.sessions.backends.cache’
CACHES = {
‘default’: {
‘BACKEND’: ‘django.core.cache.backends.memcached.MemcachedCache’,
‘LOCATION’: ‘controller:11211’,
}
}
EMAIL_BACKEND = ‘django.core.mail.backends.console.EmailBackend’
OPENSTACK_HOST = “controller”
OPENSTACK_KEYSTONE_URL = “http://%s:5000/v3” % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = “user”
OPENSTACK_KEYSTONE_BACKEND = {
‘name’: ‘native’,
‘can_edit_user’: True,
‘can_edit_group’: True,
‘can_edit_project’: True,
‘can_edit_domain’: True,
‘can_edit_role’: True,
}
OPENSTACK_HYPERVISOR_FEATURES = {
‘can_set_mount_point’: False,
‘can_set_password’: False,
‘requires_keypair’: False,
‘enable_quotas’: True
}
OPENSTACK_CINDER_FEATURES = {
‘enable_backup’: False,
}
OPENSTACK_NEUTRON_NETWORK = {
‘enable_router’: False,
‘enable_quotas’: False,
‘enable_distributed_router’: False,
‘enable_ha_router’: False,
‘enable_lb’: False,
‘enable_firewall’: False,
‘enable_vpn’: False,
‘enable_fip_topology_check’: False,
‘physical_networks’: [],
}
OPENSTACK_HEAT_STACK = {
‘enable_user_pass’: True,
}
IMAGE_CUSTOM_PROPERTY_TITLES = {
“architecture”: (“Architecture”), “kernel_id”: (“Kernel ID”),
“ramdisk_id”: (“Ramdisk ID”), “image_state”: (“Euca2ools state”),
“project_id”: (“Project ID”), “image_type”: (“Image Type”),
}
IMAGE_RESERVED_CUSTOM_PROPERTIES = []
API_RESULT_LIMIT = 1000
API_RESULT_PAGE_SIZE = 20
SWIFT_FILE_TRANSFER_CHUNK_SIZE = 512 * 1024
INSTANCE_LOG_LENGTH = 35
DROPDOWN_MAX_ITEMS = 30
TIME_ZONE = “UTC”
POLICY_FILES_PATH = ‘/etc/openstack-dashboard’
LOGGING = {
‘version’: 1,
‘formatters’: {
‘console’: {
‘format’: ‘%(levelname)s %(name)s %(message)s’
},
‘operation’: {
# The format of “%(message)s” is defined by
# OPERATION_LOG_OPTIONS[‘format’]
‘format’: ‘%(message)s’
},
},
‘handlers’: {
‘null’: {
‘level’: ‘DEBUG’,
‘class’: ‘logging.NullHandler’,
},
‘console’: {
# Set the level to “DEBUG” for verbose output logging.
‘level’: ‘INFO’,
‘class’: ‘logging.StreamHandler’,
‘formatter’: ‘console’,
},
‘operation’: {
‘level’: ‘INFO’,
‘class’: ‘logging.StreamHandler’,
‘formatter’: ‘operation’,
},
},
‘loggers’: {
‘horizon’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
‘horizon.operation_log’: {
‘handlers’: [‘operation’],
‘level’: ‘INFO’,
‘propagate’: False,
},
‘openstack_dashboard’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
‘novaclient’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
‘cinderclient’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
‘keystoneauth’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
‘keystoneclient’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
‘glanceclient’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
‘neutronclient’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
‘swiftclient’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
‘oslo_policy’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
‘openstack_auth’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
‘django’: {
‘handlers’: [‘console’],
‘level’: ‘DEBUG’,
‘propagate’: False,
},
# Logging from django.db.backends is VERY verbose, send to null
# by default.
‘django.db.backends’: {
‘handlers’: [‘null’],
‘propagate’: False,
},
‘requests’: {
‘handlers’: [‘null’],
‘propagate’: False,
},
‘urllib3’: {
‘handlers’: [‘null’],
‘propagate’: False,
},
‘chardet.charsetprober’: {
‘handlers’: [‘null’],
‘propagate’: False,
},
‘iso8601’: {
‘handlers’: [‘null’],
‘propagate’: False,
},
‘scss’: {
‘handlers’: [‘null’],
‘propagate’: False,
},
},
}
SECURITY_GROUP_RULES = {
‘all_tcp’: {
‘name’: (‘All TCP’), ‘ip_protocol’: ‘tcp’, ‘from_port’: ‘1’, ‘to_port’: ‘65535’, }, ‘all_udp’: { ‘name’: (‘All UDP’),
‘ip_protocol’: ‘udp’,
‘from_port’: ‘1’,
‘to_port’: ‘65535’,
},
‘all_icmp’: {
‘name’: _(‘All ICMP’),
‘ip_protocol’: ‘icmp’,
‘from_port’: ‘-1’,
‘to_port’: ‘-1’,
},
‘ssh’: {
‘name’: ‘SSH’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ’22’,
‘to_port’: ’22’,
},
‘smtp’: {
‘name’: ‘SMTP’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ’25’,
‘to_port’: ’25’,
},
‘dns’: {
‘name’: ‘DNS’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ’53’,
‘to_port’: ’53’,
},
‘http’: {
‘name’: ‘HTTP’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ’80’,
‘to_port’: ’80’,
},
‘pop3’: {
‘name’: ‘POP3’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘110’,
‘to_port’: ‘110’,
},
‘imap’: {
‘name’: ‘IMAP’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘143’,
‘to_port’: ‘143’,
},
‘ldap’: {
‘name’: ‘LDAP’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘389’,
‘to_port’: ‘389’,
},
‘https’: {
‘name’: ‘HTTPS’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘443’,
‘to_port’: ‘443’,
},
‘smtps’: {
‘name’: ‘SMTPS’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘465’,
‘to_port’: ‘465’,
},
‘imaps’: {
‘name’: ‘IMAPS’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘993’,
‘to_port’: ‘993’,
},
‘pop3s’: {
‘name’: ‘POP3S’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘995’,
‘to_port’: ‘995’,
},
‘ms_sql’: {
‘name’: ‘MS SQL’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘1433’,
‘to_port’: ‘1433’,
},
‘mysql’: {
‘name’: ‘MYSQL’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘3306’,
‘to_port’: ‘3306’,
},
‘rdp’: {
‘name’: ‘RDP’,
‘ip_protocol’: ‘tcp’,
‘from_port’: ‘3389’,
‘to_port’: ‘3389’,
},
}
REST_API_REQUIRED_SETTINGS = [‘OPENSTACK_HYPERVISOR_FEATURES’,
‘LAUNCH_INSTANCE_DEFAULTS’,
‘OPENSTACK_IMAGE_FORMATS’,
‘OPENSTACK_KEYSTONE_BACKEND’,
‘OPENSTACK_KEYSTONE_DEFAULT_DOMAIN’,
‘CREATE_IMAGE_DEFAULTS’,
‘ENFORCE_PASSWORD_CHECK’]
ALLOWED_PRIVATE_SUBNET_CIDR = {‘ipv4’: [], ‘ipv6’: []}

[root@controller ~]

#

Then don't forget to add the following line to /etc/httpd/conf.d/openstack-dashboard.conf:
WSGIApplicationGroup %{GLOBAL}

[root@controller ~]

# vim /etc/httpd/conf.d/openstack-dashboard.conf

[root@controller ~]

# cat /etc/httpd/conf.d/openstack-dashboard.conf
WSGIDaemonProcess dashboard
WSGIProcessGroup dashboard
WSGISocketPrefix run/wsgi

WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
Alias /dashboard/static /usr/share/openstack-dashboard/static
WSGIApplicationGroup %{GLOBAL}

Options All
AllowOverride All
Require all granted

Options All
AllowOverride All
Require all granted

(The two <Directory> wrapper tags around these blocks were stripped when the file contents were pasted here.)

[root@controller ~]

#

Then restart httpd and memcached, and you can start enjoying the dashboard.

[root@controller ~]

# systemctl restart httpd.service memcached.service

[root@controller ~]

# systemctl status httpd.service memcached.service
● httpd.service – The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/httpd.service.d
└─openstack-dashboard.conf
Active: active (running) since Mon 2019-04-15 20:40:12 EDT; 2min 10s ago
Docs: man:httpd(8)
man:apachectl(8)
Process: 23918 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, status=0/SUCCESS)
Process: 23968 ExecStartPre=/usr/bin/python /usr/share/openstack-dashboard/manage.py compress –force -v0 (code=exited, status=0/SUCCESS)
Process: 23937 ExecStartPre=/usr/bin/python /usr/share/openstack-dashboard/manage.py collectstatic –noinput –clear -v0 (code=exited, status=0/SUCCESS)
Main PID: 24023 (httpd)
Status: “Total requests: 104; Current requests/sec: 0.6; Current traffic: 716 B/sec”
Tasks: 61
CGroup: /system.slice/httpd.service
├─24023 /usr/sbin/httpd -DFOREGROUND
├─24024 /usr/sbin/httpd -DFOREGROUND
├─24025 /usr/sbin/httpd -DFOREGROUND
├─24026 /usr/sbin/httpd -DFOREGROUND
├─24027 /usr/sbin/httpd -DFOREGROUND
├─24028 (wsgi:keystone- -DFOREGROUND
├─24029 (wsgi:keystone- -DFOREGROUND
├─24030 (wsgi:keystone- -DFOREGROUND
├─24031 (wsgi:keystone- -DFOREGROUND
├─24032 (wsgi:keystone- -DFOREGROUND
├─24033 /usr/sbin/httpd -DFOREGROUND
├─24034 /usr/sbin/httpd -DFOREGROUND
├─24035 /usr/sbin/httpd -DFOREGROUND
├─24036 /usr/sbin/httpd -DFOREGROUND
├─24037 /usr/sbin/httpd -DFOREGROUND
├─24083 /usr/sbin/httpd -DFOREGROUND
├─24093 /usr/sbin/httpd -DFOREGROUND
├─24094 /usr/sbin/httpd -DFOREGROUND
├─24135 /usr/sbin/httpd -DFOREGROUND
└─24177 /usr/sbin/httpd -DFOREGROUND

Apr 15 20:39:53 controller systemd[1]: Starting The Apache HTTP Server…
Apr 15 20:40:12 controller python[23968]: Compressing… done
Apr 15 20:40:12 controller python[23968]: Compressed 7 block(s) from 4 template(s) for 2 context(s).
Apr 15 20:40:12 controller systemd[1]: Started The Apache HTTP Server.

● memcached.service – memcached daemon
Loaded: loaded (/usr/lib/systemd/system/memcached.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-04-15 20:39:53 EDT; 2min 28s ago
Main PID: 23936 (memcached)
Tasks: 10
CGroup: /system.slice/memcached.service
└─23936 /usr/bin/memcached -p 11211 -u memcached -m 64 -c 1024 -l 127.0.0.1,::1,controller

Apr 15 20:39:53 controller systemd[1]: Started memcached daemon.

Verify the configuration
Verify operation for Red Hat Enterprise Linux and CentOS

Verify operation of the dashboard.

Access the dashboard using a web browser at http://controller/dashboard.

Authenticate using admin or demo user and default domain credentials.
Let's log in and have a look.
URL: http://192.168.0.101/dashboard
Domain: default, user name admin or demo, password 123456 (that is my password).
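If you want a quick command-line sanity check before opening a browser, a request like the following (a sketch; substitute your controller address) should come back with a redirect to the login page:

curl -I http://controller/dashboard/
# Expect an HTTP 302 response with a Location header pointing at /dashboard/auth/login/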

Theme customization and session storage are not the focus of this post; I am pasting the official documentation below and will cover them in a separate post.
Your OpenStack environment now includes the dashboard.

After you install and configure the dashboard, you can complete the following tasks:

Provide users with a public IP address, a username, and a password so they can access the dashboard through a web browser. In case of any SSL certificate connection problems, point the server IP address to a domain name, and give users access.

Customize your dashboard. For details, see Customize and configure the Dashboard.

Set up session storage. For details, see Set up session storage for the Dashboard.

To use the VNC client with the dashboard, the browser must support HTML5 Canvas and HTML5 WebSockets.

For details about browsers that support noVNC, see README.

(1)Customize the Dashboard

Customize and configure the Dashboard

Once you have the Dashboard installed, you can customize the way it looks and feels to suit the needs of your environment, your project, or your business.

You can also configure the Dashboard for a secure HTTPS deployment, or an HTTP deployment. The standard OpenStack installation uses a non-encrypted HTTP channel, but you can enable SSL support for the Dashboard.

For information on configuring HTTPS or HTTP, see Configure the Dashboard.

Customize the Dashboard¶
The OpenStack Dashboard on Ubuntu installs the openstack-dashboard-ubuntu-theme package by default. If you do not want to use this theme, remove it and its dependencies:

apt-get remove --auto-remove openstack-dashboard-ubuntu-theme

Note

This guide focuses on the local_settings.py file.

The following Dashboard content can be customized to suit your needs:

Logo
Site colors
HTML title
Logo link
Help URL
Logo and site colors¶
Create two PNG logo files with transparent backgrounds using the following sizes:

Login screen: 365 x 50
Logged in banner: 216 x 35
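If you only have one large source logo, the two required sizes can be produced with ImageMagick (a sketch; the input file name my_cloud_logo.png is an assumption, and the output names match the CSS example below):

# 365x50 for the login screen, 216x35 for the logged-in banner
convert my_cloud_logo.png -resize 365x50 my_cloud_logo_medium.png
convert my_cloud_logo.png -resize 216x35 my_cloud_logo_small.png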
Upload your new images to /usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/img/.

Create a CSS style sheet in /usr/share/openstack-dashboard/openstack_dashboard/static/dashboard/scss/.

Change the colors and image file names as appropriate. Ensure the relative directory paths are the same. The following example file shows you how to customize your CSS file:

/*
 * New theme colors for dashboard that override the defaults:
 *  dark blue: #355796 / rgb(53, 87, 150)
 *  light blue: #BAD3E1 / rgb(186, 211, 225)
 *
 * By Preston Lee [email protected]
 */
h1.brand {
background: #355796 repeat-x top left;
border-bottom: 2px solid #BAD3E1;
}
h1.brand a {
background: url(../img/my_cloud_logo_small.png) top left no-repeat;
}

#splash .login {
background: #355796 url(../img/my_cloud_logo_medium.png) no-repeat center 35px;
}

#splash .login .modal-header {
border-top: 1px solid #BAD3E1;
}
.btn-primary {
background-image: none !important;
background-color: #355796 !important;
border: none !important;
box-shadow: none;
}
.btn-primary:hover,
.btn-primary:active {
border: none;
box-shadow: none;
background-color: #BAD3E1 !important;
text-decoration: none;
}
Open the following HTML template in an editor of your choice:

/usr/share/openstack-dashboard/openstack_dashboard/templates/_stylesheets.html
Add a line to include your newly created style sheet, for example custom.css. (The example line itself is an HTML <link> tag and was stripped when this page was pasted; it is a stylesheet reference to your custom.css added alongside the existing includes in _stylesheets.html.)

Restart the Apache service.

To view your changes, reload your Dashboard. If necessary, go back and modify your CSS file as appropriate.

HTML title¶
Set the HTML title, which appears at the top of the browser window, by adding the following line to local_settings.py:

SITE_BRANDING = “Example, Inc. Cloud”
Restart Apache for this change to take effect.

Logo link¶
The logo also acts as a hyperlink. The default behavior is to redirect to horizon:user_home. To change this, add the following attribute to local_settings.py:

SITE_BRANDING_LINK = “http://example.com”
Restart Apache for this change to take effect.

Help URL¶
By default, the help URL points to https://docs.openstack.org. To change this, edit the following attribute in local_settings.py:

HORIZON_CONFIG[“help_url”] = “http://openstack.mycompany.org”
Restart Apache for this change to take effect.

Configure the Dashboard¶
The following section on configuring the Dashboard for a secure HTTPS deployment, or a HTTP deployment, uses concrete examples to ensure the procedure is clear. The file path varies by distribution, however. If needed, you can also configure the VNC window size in the Dashboard.

Configure the Dashboard for HTTP¶
You can configure the Dashboard for a simple HTTP deployment. The standard installation uses a non-encrypted HTTP channel.

Specify the host for your Identity service endpoint in the local_settings.py file with the OPENSTACK_HOST setting.

The following example shows this setting. Note that the leading # comment markers were stripped from the commented lines when this sample was pasted, so the lines of narrative text inside the sample are comments in the original local_settings.py:

import os

from django.utils.translation import ugettext_lazy as _

DEBUG = False
TEMPLATE_DEBUG = DEBUG
PROD = True
USE_SSL = False

SITE_BRANDING = ‘OpenStack Dashboard’

Ubuntu-specific: Enables an extra panel in the ‘Settings’ section

that easily generates a Juju environments.yaml for download,

preconfigured with endpoints and credentials required for bootstrap

and service deployment.

ENABLE_JUJU_PANEL = True

Note: You should change this value

SECRET_KEY = ‘elj1IWiLoWHgryYxFT6j7cM5fGOOxWY0’

Specify a regular expression to validate user passwords.

HORIZON_CONFIG = {

“password_validator”: {

“regex”: ‘.*’,

“help_text”: _(“Your password does not meet the requirements.”)

}

}

LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))

CACHES = {
‘default’: {
‘BACKEND’ : ‘django.core.cache.backends.memcached.MemcachedCache’,
‘LOCATION’ : ‘127.0.0.1:11211’
}
}

Send email to the console by default

EMAIL_BACKEND = ‘django.core.mail.backends.console.EmailBackend’

Or send them to /dev/null

EMAIL_BACKEND = ‘django.core.mail.backends.dummy.EmailBackend’

Configure these for your outgoing email host

EMAIL_HOST = ‘smtp.my-company.com’

EMAIL_PORT = 25

EMAIL_HOST_USER = ‘djangomail’

EMAIL_HOST_PASSWORD = ‘top-secret!’

For multiple regions uncomment this configuration, and add (endpoint, title).

AVAILABLE_REGIONS = [

(‘http://cluster1.example.com:5000/v3’, ‘cluster1’),

(‘http://cluster2.example.com:5000/v3’, ‘cluster2’),

]

OPENSTACK_HOST = “127.0.0.1”
OPENSTACK_KEYSTONE_URL = “http://%s:5000/v3” % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = “Member”

The OPENSTACK_KEYSTONE_BACKEND settings can be used to identify the

capabilities of the auth backend for Keystone.

If Keystone has been configured to use LDAP as the auth backend then set

can_edit_user to False and name to ‘ldap’.

#

TODO(tres): Remove these once Keystone has an API to identify auth backend.

OPENSTACK_KEYSTONE_BACKEND = {
‘name’: ‘native’,
‘can_edit_user’: True
}

OPENSTACK_ENDPOINT_TYPE specifies the endpoint type to use for the endpoints

in the Keystone service catalog. Use this setting when Horizon is running

external to the OpenStack environment. The default is ‘internalURL’.

OPENSTACK_ENDPOINT_TYPE = “publicURL”

The number of Swift containers and objects to display on a single page before

providing a paging element (a “more” link) to paginate results.

API_RESULT_LIMIT = 1000

If you have external monitoring links, eg:

EXTERNAL_MONITORING = [

[‘Nagios’,’http://foo.com’],

[‘Ganglia’,’http://bar.com’],

]

LOGGING = {
‘version’: 1,
# When set to True this will disable all logging except
# for loggers specified in this configuration dictionary. Note that
# if nothing is specified here and disable_existing_loggers is True,
# django.db.backends will still log unless it is disabled explicitly.
‘disable_existing_loggers’: False,
‘handlers’: {
‘null’: {
‘level’: ‘DEBUG’,
‘class’: ‘logging.NullHandler’,
},
‘console’: {
# Set the level to “DEBUG” for verbose output logging.
‘level’: ‘INFO’,
‘class’: ‘logging.StreamHandler’,
},
},
‘loggers’: {
# Logging from django.db.backends is VERY verbose, send to null
# by default.
‘django.db.backends’: {
‘handlers’: [‘null’],
‘propagate’: False,
},
‘horizon’: {
‘handlers’: [‘console’],
‘propagate’: False,
},
‘novaclient’: {
‘handlers’: [‘console’],
‘propagate’: False,
},
‘keystoneclient’: {
‘handlers’: [‘console’],
‘propagate’: False,
},
‘nose.plugins.manager’: {
‘handlers’: [‘console’],
‘propagate’: False,
}
}
}
The service catalog configuration in the Identity service determines whether a service appears in the Dashboard. For the full listing, see Settings Reference.

Restart the Apache HTTP Server.

Restart memcached.

Configure the Dashboard for HTTPS¶
You can configure the Dashboard for a secured HTTPS deployment. While the standard installation uses a non-encrypted HTTP channel, you can enable SSL support for the Dashboard.

This example uses the http://openstack.example.com domain. Use a domain that fits your current setup.

In the local_settings.py file, update the following options:

USE_SSL = True
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True
SESSION_COOKIE_HTTPONLY = True
To enable HTTPS, the USE_SSL = True option is required.

The other options require that HTTPS is enabled; these options defend against cross-site scripting.
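If you just want to exercise the HTTPS flow before obtaining a real certificate, a throwaway self-signed pair can be generated to match the paths used in the example below (a sketch; for testing only):

# Self-signed certificate for testing only; use a CA-issued certificate in production
mkdir -p /etc/apache2/SSL
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout /etc/apache2/SSL/openstack.example.com.key \
  -out /etc/apache2/SSL/openstack.example.com.crt \
  -subj "/CN=openstack.example.com"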

Edit the openstack-dashboard.conf file as shown in the Example After. (Note: the <VirtualHost>, <Directory>, and <IfVersion> wrapper tags were stripped from both examples when they were pasted; refer to the official guide for the complete file.)

Example Before

WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi.py
WSGIDaemonProcess horizon user=www-data group=www-data processes=3 threads=10
Alias /static /usr/share/openstack-dashboard/openstack_dashboard/static/

=2.4>
Require all granted

Order allow,deny Allow from all

Example After

ServerName openstack.example.com
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
RedirectPermanent / https://openstack.example.com


ServerName openstack.example.com

SSLEngine On
# Remember to replace certificates and keys with valid paths in your environment
SSLCertificateFile /etc/apache2/SSL/openstack.example.com.crt
SSLCACertificateFile /etc/apache2/SSL/openstack.example.com.crt
SSLCertificateKeyFile /etc/apache2/SSL/openstack.example.com.key
SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown

# HTTP Strict Transport Security (HSTS) enforces that all communications
# with a server go over SSL. This mitigates the threat from attacks such
# as SSL-Strip which replaces links on the wire, stripping away https prefixes
# and potentially allowing an attacker to view confidential information on the
# wire
Header add Strict-Transport-Security “max-age=15768000”

WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi.py
WSGIDaemonProcess horizon user=www-data group=www-data processes=3 threads=10
Alias /static /usr/share/openstack-dashboard/openstack_dashboard/static/

Options None
AllowOverride None
# For Apache http server 2.4 and later:
=2.4>
Require all granted

# For Apache http server 2.2 and earlier:
Order allow,deny Allow from all


In this configuration, the Apache HTTP Server listens on port 443 and redirects all non-secure requests to the HTTPS protocol. The secured section defines the private key, public key, and certificate to use.

Restart the Apache HTTP Server.

Restart memcached.

If you try to access the Dashboard through HTTP, the browser redirects you to the HTTPS page.

Note

Configuring the Dashboard for HTTPS also requires enabling SSL for the noVNC proxy service. On the controller node, add the following additional options to the [DEFAULT] section of the /etc/nova/nova.conf file:

[DEFAULT]

ssl_only = true
cert = /etc/apache2/SSL/openstack.example.com.crt
key = /etc/apache2/SSL/openstack.example.com.key
On the compute nodes, ensure the novncproxy_base_url option points to a URL with an HTTPS scheme:

[DEFAULT]

novncproxy_base_url = https://controller:6080/vnc_auto.html


(2)Set up session storage for the Dashboard
The Dashboard uses Django sessions framework to handle user session data. However, you can use any available session back end. You customize the session back end through the SESSION_ENGINE setting in your local_settings.py file.

After architecting and implementing the core OpenStack services and other required services, combined with the Dashboard service steps below, users and administrators can use the OpenStack dashboard. Refer to the OpenStack User Documentation chapter of the OpenStack End User Guide for further instructions on logging in to the Dashboard.

The following sections describe the pros and cons of each option as it pertains to deploying the Dashboard.

Local memory cache¶
Local memory storage is the quickest and easiest session back end to set up, as it has no external dependencies whatsoever. It has the following significant drawbacks:

No shared storage across processes or workers.
No persistence after a process terminates.
The local memory back end is enabled as the default for Horizon solely because it has no dependencies. It is not recommended for production use, or even for serious development work.

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache'
    }
}
You can use applications such as Memcached or Redis for external caching. These applications offer persistence and shared storage and are useful for small-scale deployments and development.

Memcached¶
Memcached is a high-performance and distributed memory object caching system providing in-memory key-value store for small chunks of arbitrary data.

Requirements:

Memcached service running and accessible.
Python module python-memcached installed.
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'my_memcached_host:11211',
    }
}
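Before pointing Horizon at Memcached you can confirm the cache is reachable from the controller; memcached-tool ships with the memcached package (a sketch; the host name matches the LOCATION above):

# Dumps cache statistics; a connection failure here means Horizon sessions will not work either
memcached-tool my_memcached_host:11211 stats | head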
Redis¶
Redis is an open source, BSD licensed, advanced key-value store. It is often referred to as a data structure server.

Requirements:

Redis service running and accessible.
Python modules redis and django-redis installed.
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    "default": {
        "BACKEND": "redis_cache.cache.RedisCache",
        "LOCATION": "127.0.0.1:6379:1",
        "OPTIONS": {
            "CLIENT_CLASS": "redis_cache.client.DefaultClient",
        }
    }
}
Initialize and configure the database¶
Database-backed sessions are scalable, persistent, and can be made high-concurrency and highly available.

However, database-backed sessions are one of the slower session storages and incur a high overhead under heavy usage. Proper configuration of your database deployment can also be a substantial undertaking and is far beyond the scope of this documentation.

Start the MySQL command-line client.

$ mysql -u root -p
Enter the MySQL root user’s password when prompted.

To configure the MySQL database, create the dash database.

mysql> CREATE DATABASE dash;
Create a MySQL user for the newly created dash database that has full control of the database. Replace DASH_DBPASS with a password for the new user.

mysql> GRANT ALL PRIVILEGES ON dash.* TO 'dash'@'%' IDENTIFIED BY 'DASH_DBPASS';
mysql> GRANT ALL PRIVILEGES ON dash.* TO 'dash'@'localhost' IDENTIFIED BY 'DASH_DBPASS';
Enter quit at the mysql> prompt to exit MySQL.

In the local_settings.py file, change these options:

SESSION_ENGINE = 'django.contrib.sessions.backends.db'
DATABASES = {
    'default': {
        # Database configuration here
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'dash',
        'USER': 'dash',
        'PASSWORD': 'DASH_DBPASS',
        'HOST': 'localhost',
        'default-character-set': 'utf8'
    }
}
After configuring the local_settings.py file as shown, you can run the manage.py syncdb command to populate this newly created database.

/usr/share/openstack-dashboard/manage.py syncdb

The following output is returned:

Installing custom SQL …
Installing indexes …
DEBUG:django.db.backends:(0.008) CREATE INDEX django_session_c25c2c28 ON django_session (expire_date);; args=()
No fixtures found.
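Note that manage.py syncdb comes from older Django releases; the Django 1.11 that Rocky's Horizon package pulls in removed syncdb, so on such an installation the equivalent step is migrate (path as used elsewhere in this guide):

# syncdb was removed in Django 1.9; migrate performs the same schema creation
python /usr/share/openstack-dashboard/manage.py migrate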
To avoid a warning when you restart Apache on Ubuntu, create a blackhole directory in the Dashboard directory, as follows.

mkdir -p /var/lib/dash/.blackhole

Restart the Apache service.

On Ubuntu, restart the nova-api service to ensure that the API server can connect to the Dashboard without error.

service nova-api restart

Cached database¶
To mitigate the performance issues of database queries, you can use the Django cached_db session back end, which utilizes both your database and caching infrastructure to perform write-through caching and efficient retrieval.

Enable this hybrid setting by configuring both your database and cache, as discussed previously. Then, set the following value:

SESSION_ENGINE = "django.contrib.sessions.backends.cached_db"
Cookies¶
If you use Django 1.4 or later, the signed_cookies back end avoids server load and scaling problems.

This back end stores session data in a cookie, which is stored by the user’s browser. The back end uses a cryptographic signing technique to ensure session data is not tampered with during transport. This is not the same as encryption; session data is still readable by an attacker.

The pros of this engine are that it requires no additional dependencies or infrastructure overhead, and it scales indefinitely as long as the quantity of session data being stored fits into a normal cookie.

The biggest downside is that it places session data into storage on the user’s machine and transports it over the wire. It also limits the quantity of session data that can be stored.

See the Django cookie-based sessions documentation.
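For completeness, enabling the cookie back end is also a one-line change in local_settings.py (a sketch, mirroring the snippets shown for the other back ends):

SESSION_ENGINE = 'django.contrib.sessions.backends.signed_cookies'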

The dashboard supports a large number of plugins.
Plugin Registry

Note

Currently, Horizon plugins are responsible for their own compatibility. Check the individual repos for information on support.

Plugin URL Bug Tracker
BGPVPN Dashboard https://github.com/openstack/networking-bgpvpn https://launchpad.net/bgpvpn
Blazar Dashboard https://github.com/openstack/blazar-dashboard https://launchpad.net/blazar
Cloudkitty Dashboard https://github.com/openstack/cloudkitty-dashboard https://launchpad.net/cloudkitty
Congress Dashboard https://github.com/openstack/congress-dashboard https://launchpad.net/congress
Designate Dashboard https://github.com/openstack/designate-dashboard https://launchpad.net/designate-dashboard
Group Based Policy UI https://github.com/openstack/group-based-policy-ui https://launchpad.net/group-based-policy-ui
Freezer Web UI https://github.com/openstack/freezer-web-ui https://launchpad.net/freezer
Heat Dashboard https://github.com/openstack/heat-dashboard https://storyboard.openstack.org/#!/project/992
Ironic UI https://github.com/openstack/ironic-ui https://launchpad.net/ironic-ui
Karbor Dashboard https://github.com/openstack/karbor-dashboard https://launchpad.net/karbor-dashboard
Magnum UI https://github.com/openstack/magnum-ui https://launchpad.net/magnum-ui
Manila UI https://github.com/openstack/manila-ui https://launchpad.net/manila-ui
Mistral Dashboard https://github.com/openstack/mistral-dashboard https://launchpad.net/mistral
Monasca UI https://github.com/openstack/monasca-ui https://launchpad.net/monasca
Murano Dashboard https://github.com/openstack/murano-dashboard https://launchpad.net/murano
Neutron FWaaS Dashboard https://github.com/openstack/neutron-fwaas-dashboard https://launchpad.net/neutron-fwaas-dashboard
Neutron LBaaS Dashboard https://github.com/openstack/neutron-lbaas-dashboard https://storyboard.openstack.org/#!/project/907
Neutron VPNaaS Dashboard https://github.com/openstack/neutron-vpnaas-dashboard https://launchpad.net/neutron-vpnaas-dashboard
Octavia Dashboard https://github.com/openstack/octavia-dashboard https://storyboard.openstack.org/#!/project/909
Sahara Dashboard https://github.com/openstack/sahara-dashboard https://storyboard.openstack.org/#!/project/936
Searchlight UI https://github.com/openstack/searchlight-ui https://launchpad.net/searchlight
Senlin Dashboard https://github.com/openstack/senlin-dashboard https://launchpad.net/senlin-dashboard
Solum Dashboard https://github.com/openstack/solum-dashboard https://launchpad.net/solum
Tacker UI https://github.com/openstack/tacker-horizon https://launchpad.net/tacker
TripleO UI https://github.com/openstack/tripleo-ui/ https://launchpad.net/tripleo
Trove Dashboard https://github.com/openstack/trove-dashboard https://launchpad.net/trove-dashboard
Vitrage Dashboard https://github.com/openstack/vitrage-dashboard https://launchpad.net/vitrage-dashboard
Watcher Dashboard https://github.com/openstack/watcher-dashboard https://launchpad.net/watcher-dashboard
Zaqar UI https://github.com/openstack/zaqar-ui https://launchpad.net/zaqar-ui
Zun UI https://github.com/openstack/zun-ui https://launchpad.net/zun-ui

We will explore these later (in a separate post); for now, let's finish installing the remaining components.

#
#
#
#
#
#

1.6 Cinder
Cinder is cross-platform; Windows is supported as well.
Installation and configuration on the controller node:
Install and configure controller node

This section describes how to install and configure the Block Storage service, code-named cinder, on the controller node. This service requires at least one additional storage node that provides volumes to instances.

Prerequisites¶
Before you install and configure the Block Storage service, you must create a database, service credentials, and API endpoints.

To create the database, complete these steps:

Use the database access client to connect to the database server as the root user:

$ mysql -u root -p
Create the cinder database:

MariaDB [(none)]> CREATE DATABASE cinder;
Grant proper access to the cinder database:

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
Replace CINDER_DBPASS with a suitable password.

Exit the database access client.

Source the admin credentials to gain access to admin-only CLI commands:

$ . admin-openrc
To create the service credentials, complete these steps:

Create a cinder user:

$ openstack user create --domain default --password-prompt cinder

User Password:
Repeat User Password:
+———————+———————————-+
| Field | Value |
+———————+———————————-+
| domain_id | default |
| enabled | True |
| id | 9d7e33de3e1a498390353819bc7d245d |
| name | cinder |
| options | {} |
| password_expires_at | None |
+———————+———————————-+
Add the admin role to the cinder user:

$ openstack role add --project service --user cinder admin
Note

This command provides no output.

Create the cinderv2 and cinderv3 service entities:

$ openstack service create --name cinderv2 \
  --description "OpenStack Block Storage" volumev2

+————-+———————————-+
| Field | Value |
+————-+———————————-+
| description | OpenStack Block Storage |
| enabled | True |
| id | eb9fd245bdbc414695952e93f29fe3ac |
| name | cinderv2 |
| type | volumev2 |
+————-+———————————-+
$ openstack service create --name cinderv3 \
  --description "OpenStack Block Storage" volumev3

+————-+———————————-+
| Field | Value |
+————-+———————————-+
| description | OpenStack Block Storage |
| enabled | True |
| id | ab3bbbef780845a1a283490d281e7fda |
| name | cinderv3 |
| type | volumev3 |
+————-+———————————-+
Note

The Block Storage services require two service entities.

Create the Block Storage service API endpoints:

$ openstack endpoint create --region RegionOne \
  volumev2 public http://controller:8776/v2/%(project_id)s

+————–+——————————————+
| Field | Value |
+————–+——————————————+
| enabled | True |
| id | 513e73819e14460fb904163f41ef3759 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+————–+——————————————+

$ openstack endpoint create --region RegionOne \
  volumev2 internal http://controller:8776/v2/%(project_id)s

+————–+——————————————+
| Field | Value |
+————–+——————————————+
| enabled | True |
| id | 6436a8a23d014cfdb69c586eff146a32 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+————–+——————————————+

$ openstack endpoint create --region RegionOne \
  volumev2 admin http://controller:8776/v2/%(project_id)s

+————–+——————————————+
| Field | Value |
+————–+——————————————+
| enabled | True |
| id | e652cf84dd334f359ae9b045a2c91d96 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | eb9fd245bdbc414695952e93f29fe3ac |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+————–+——————————————+
$ openstack endpoint create --region RegionOne \
  volumev3 public http://controller:8776/v3/%(project_id)s

+————–+——————————————+
| Field | Value |
+————–+——————————————+
| enabled | True |
| id | 03fa2c90153546c295bf30ca86b1344b |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+————–+——————————————+

$ openstack endpoint create --region RegionOne \
  volumev3 internal http://controller:8776/v3/%(project_id)s

+————–+——————————————+
| Field | Value |
+————–+——————————————+
| enabled | True |
| id | 94f684395d1b41068c70e4ecb11364b2 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+————–+——————————————+

$ openstack endpoint create --region RegionOne \
  volumev3 admin http://controller:8776/v3/%(project_id)s

+————–+——————————————+
| Field | Value |
+————–+——————————————+
| enabled | True |
| id | 4511c28a0f9840c78bacb25f10f62c98 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ab3bbbef780845a1a283490d281e7fda |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+————–+——————————————+
Note

The Block Storage services require endpoints for each service entity.

Install and configure components¶
Install the packages:

yum install openstack-cinder

Edit the /etc/cinder/cinder.conf file and complete the following actions:

In the [database] section, configure database access:

[database]

connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
Replace CINDER_DBPASS with the password you chose for the Block Storage database.

In the [DEFAULT] section, configure RabbitMQ message queue access:

[DEFAULT]

transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.

In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

[DEFAULT]

auth_strategy = keystone

[keystone_authtoken]

auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = CINDER_PASS
Replace CINDER_PASS with the password you chose for the cinder user in the Identity service.

Note

Comment out or remove any other options in the [keystone_authtoken] section.

In the [DEFAULT] section, configure the my_ip option to use the management interface IP address of the controller node:

[DEFAULT]

my_ip = 10.0.0.11
In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]

lock_path = /var/lib/cinder/tmp
Populate the Block Storage database:

su -s /bin/sh -c "cinder-manage db sync" cinder

Note

Ignore any deprecation messages in this output.
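If you want to confirm that the sync actually created the schema, a quick query against MariaDB works (a sketch; substitute the cinder database password you chose):

# Lists the first few tables cinder-manage created in the cinder database
mysql -ucinder -pCINDER_DBPASS -h controller -e 'USE cinder; SHOW TABLES;' | head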

Configure Compute to use Block Storage¶
Edit the /etc/nova/nova.conf file and add the following to it:

[cinder]

os_region_name = RegionOne
Finalize installation¶
Restart the Compute API service:

systemctl restart openstack-nova-api.service

Start the Block Storage services and configure them to start when the system boots:

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service

systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

My installation and configuration are as follows.
First, prepare the environment.
(1) Create the cinder database

[root@controller ~]

#

[root@controller ~]

# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 905
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type ‘help;’ or ‘\h’ for help. Type ‘\c’ to clear the current input statement.

MariaDB [(none)]> CREATE DATABASE cinder;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
-> IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
-> IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> exit
Bye

[root@controller ~]

#

(2) Source the admin credentials

[root@controller ~]

# . admin-openrc

(3) Create the cinder service credentials
Create the cinder user

[root@controller ~]

# openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:
+———————+———————————-+
| Field | Value |
+———————+———————————-+
| domain_id | default |
| enabled | True |
| id | 46881cd0c73e453993a1ff9e33c80c37 |
| name | cinder |
| options | {} |
| password_expires_at | None |
+———————+———————————-+

[root@controller ~]

#
Add the cinder user to the admin role; note that this command produces no output.

[root@controller ~]

# openstack role add --project service --user cinder admin
Create the cinderv2 and cinderv3 service entities; note that the Block Storage service requires two service entities.

[root@controller ~]

# openstack service create --name cinderv2 \
  --description "OpenStack Block Storage" volumev2
+————-+———————————-+
| Field | Value |
+————-+———————————-+
| description | OpenStack Block Storage |
| enabled | True |
| id | ca2ef0e21e62474baf2d8d3fa0558780 |
| name | cinderv2 |
| type | volumev2 |
+————-+———————————-+

[root@controller ~]

# openstack service create --name cinderv3 \
  --description "OpenStack Block Storage" volumev3
+————-+———————————-+
| Field | Value |
+————-+———————————-+
| description | OpenStack Block Storage |
| enabled | True |
| id | f5f6f441f84d48d2bb0174362471427f |
| name | cinderv3 |
| type | volumev3 |
+————-+———————————-+

[root@controller ~]

#
Create the Block Storage API endpoints; note that endpoints must be created for both service entities.
Create the cinderv2 endpoints

[root@controller ~]

# openstack endpoint create --region RegionOne \
volumev2 public http://controller:8776/v2/%(project_id)s
+————–+——————————————+
| Field | Value |
+————–+——————————————+
| enabled | True |
| id | e94d384980b948469691641b3f1e3969 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ca2ef0e21e62474baf2d8d3fa0558780 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+————–+——————————————+

[root@controller ~]

# openstack endpoint create --region RegionOne \
volumev2 internal http://controller:8776/v2/%(project_id)s
+————–+——————————————+
| Field | Value |
+————–+——————————————+
| enabled | True |
| id | 44c66d091a974b908ef57dcc3c543902 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ca2ef0e21e62474baf2d8d3fa0558780 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+————–+——————————————+

[root@controller ~]

# openstack endpoint create --region RegionOne \
volumev2 admin http://controller:8776/v2/%(project_id)s
+————–+——————————————+
| Field | Value |
+————–+——————————————+
| enabled | True |
| id | 158dff86a60745a1a393a11807530256 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | ca2ef0e21e62474baf2d8d3fa0558780 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+————–+——————————————+
Create the cinderv3 endpoints

[root@controller ~]

# openstack endpoint create --region RegionOne \
volumev3 public http://controller:8776/v3/%(project_id)s
+————–+——————————————+
| Field | Value |
+————–+——————————————+
| enabled | True |
| id | 0664553839564761a2db9108f077dc1a |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f5f6f441f84d48d2bb0174362471427f |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+————–+——————————————+

[root@controller ~]

# openstack endpoint create --region RegionOne \
volumev3 internal http://controller:8776/v3/%(project_id)s
+————–+——————————————+
| Field | Value |
+————–+——————————————+
| enabled | True |
| id | d06455ebf65144a0b492e5d4f1c600dd |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f5f6f441f84d48d2bb0174362471427f |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+————–+——————————————+

[root@controller ~]

# openstack endpoint create --region RegionOne \
volumev3 admin http://controller:8776/v3/%(project_id)s
+————–+——————————————+
| Field | Value |
+————–+——————————————+
| enabled | True |
| id | 1216545b62e14eed90d8f305d00eb0de |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f5f6f441f84d48d2bb0174362471427f |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+————–+——————————————+

[root@controller ~]

#
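Before moving on, it is worth confirming that both volume services and all six endpoints made it into the catalog (a sketch):

# Should show cinderv2/volumev2 and cinderv3/volumev3, each with three endpoints
openstack service list | grep volume
openstack endpoint list --service volumev2
openstack endpoint list --service volumev3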

1.6.1 Install and configure components
(1) Install the packages

[root@controller ~]

# yum install openstack-cinder
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile

  • base: mirror.scalabledns.com
  • centos-qemu-ev: mirror.scalabledns.com
  • epel: ewr.edge.kernel.org
  • extras: mirror.sjc02.svwh.net
  • updates: mirror.scalabledns.com
    Resolving Dependencies
    –> Running transaction check
    —> Package openstack-cinder.noarch 1:13.0.4-1.el7 will be installed
    –> Processing Dependency: python-cinder = 1:13.0.4-1.el7 for package: 1:openstack-cinder-13.0.4-1.el7.noarch
    –> Running transaction check
    —> Package python-cinder.noarch 1:13.0.4-1.el7 will be installed
    –> Processing Dependency: python2-oauth2client >= 1.5.0 for package: 1:python-cinder-13.0.4-1.el7.noarch
    –> Processing Dependency: python2-google-api-client >= 1.4.2 for package: 1:python-cinder-13.0.4-1.el7.noarch
    –> Processing Dependency: python2-barbicanclient >= 4.5.2 for package: 1:python-cinder-13.0.4-1.el7.noarch
    –> Running transaction check
    —> Package python2-barbicanclient.noarch 0:4.7.2-1.el7 will be installed
    —> Package python2-google-api-client.noarch 0:1.6.3-1.el7 will be installed
    –> Processing Dependency: python2-uritemplate >= 3.0.0 for package: python2-google-api-client-1.6.3-1.el7.noarch
    —> Package python2-oauth2client.noarch 0:4.0.0-2.el7 will be installed
    –> Processing Dependency: python2-pyasn1-modules >= 0.0.5 for package: python2-oauth2client-4.0.0-2.el7.noarch
    –> Processing Dependency: python2-gflags for package: python2-oauth2client-4.0.0-2.el7.noarch
    –> Running transaction check
    —> Package python2-gflags.noarch 0:2.0-5.el7 will be installed
    —> Package python2-pyasn1-modules.noarch 0:0.1.9-7.el7 will be installed
    —> Package python2-uritemplate.noarch 0:3.0.0-1.el7 will be installed
    –> Finished Dependency Resolution

Dependencies Resolved

=====================================================================================================================================================

Package Arch Version Repository Size

Installing:
openstack-cinder noarch 1:13.0.4-1.el7 centos-openstack-rocky 59 k
Installing for dependencies:
python-cinder noarch 1:13.0.4-1.el7 centos-openstack-rocky 3.9 M
python2-barbicanclient noarch 4.7.2-1.el7 centos-openstack-rocky 122 k
python2-gflags noarch 2.0-5.el7 centos-openstack-rocky 60 k
python2-google-api-client noarch 1.6.3-1.el7 epel 87 k
python2-oauth2client noarch 4.0.0-2.el7 epel 144 k
python2-pyasn1-modules noarch 0.1.9-7.el7 base 59 k
python2-uritemplate noarch 3.0.0-1.el7 epel 18 k

Transaction Summary

Install 1 Package (+7 Dependent packages)

Total download size: 4.4 M
Installed size: 22 M
Is this ok [y/d/N]: y
Downloading packages:
(1/8): openstack-cinder-13.0.4-1.el7.noarch.rpm | 59 kB 00:00:02
(2/8): python2-barbicanclient-4.7.2-1.el7.noarch.rpm | 122 kB 00:00:00
(3/8): python2-gflags-2.0-5.el7.noarch.rpm | 60 kB 00:00:00
(4/8): python-cinder-13.0.4-1.el7.noarch.rpm | 3.9 MB 00:00:03
(5/8): python2-pyasn1-modules-0.1.9-7.el7.noarch.rpm | 59 kB 00:00:01
(6/8): python2-uritemplate-3.0.0-1.el7.noarch.rpm | 18 kB 00:00:01
(7/8): python2-google-api-client-1.6.3-1.el7.noarch.rpm | 87 kB 00:00:02

(8/8): python2-oauth2client-4.0.0-2.el7.noarch.rpm | 144 kB 00:00:03

Total 650 kB/s | 4.4 MB 00:00:06
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : python2-barbicanclient-4.7.2-1.el7.noarch 1/8
Installing : python2-uritemplate-3.0.0-1.el7.noarch 2/8
Installing : python2-gflags-2.0-5.el7.noarch 3/8
Installing : python2-pyasn1-modules-0.1.9-7.el7.noarch 4/8
Installing : python2-oauth2client-4.0.0-2.el7.noarch 5/8
Installing : python2-google-api-client-1.6.3-1.el7.noarch 6/8
Installing : 1:python-cinder-13.0.4-1.el7.noarch 7/8
Installing : 1:openstack-cinder-13.0.4-1.el7.noarch 8/8
Verifying : python2-pyasn1-modules-0.1.9-7.el7.noarch 1/8
Verifying : python2-oauth2client-4.0.0-2.el7.noarch 2/8
Verifying : 1:python-cinder-13.0.4-1.el7.noarch 3/8
Verifying : 1:openstack-cinder-13.0.4-1.el7.noarch 4/8
Verifying : python2-gflags-2.0-5.el7.noarch 5/8
Verifying : python2-uritemplate-3.0.0-1.el7.noarch 6/8
Verifying : python2-google-api-client-1.6.3-1.el7.noarch 7/8
Verifying : python2-barbicanclient-4.7.2-1.el7.noarch 8/8

Installed:
openstack-cinder.noarch 1:13.0.4-1.el7

Dependency Installed:
python-cinder.noarch 1:13.0.4-1.el7 python2-barbicanclient.noarch 0:4.7.2-1.el7 python2-gflags.noarch 0:2.0-5.el7
python2-google-api-client.noarch 0:1.6.3-1.el7 python2-oauth2client.noarch 0:4.0.0-2.el7 python2-pyasn1-modules.noarch 0:0.1.9-7.el7
python2-uritemplate.noarch 0:3.0.0-1.el7

Complete!
(2) Edit the configuration file /etc/cinder/cinder.conf

[root@controller ~]

# vim /etc/cinder/cinder.conf

[root@controller ~]

#

[root@controller ~]

# grep -v '^#' /etc/cinder/cinder.conf | grep -v '^$'
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
my_ip = 192.168.0.101

[backend]

[backend_defaults]

[barbican]

[brcd_fabric_example]

[cisco_fabric_example]

[coordination]

[cors]

[database]

connection = mysql+pymysql://cinder:123456@controller/cinder

[fc-zone-manager]

[healthcheck]

[key_manager]

[keystone_authtoken]

auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = 123456

[matchmaker_redis]

[nova]

[oslo_concurrency]

lock_path = /var/lib/cinder/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[oslo_reports]

[oslo_versionedobjects]

[profiler]

[sample_remote_file_source]

[service_user]

[ssl]

[vault]

[root@controller ~]

#
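If you prefer not to edit the file by hand, the same options can be set non-interactively with crudini (a small INI editor; the package name below assumes it is available from EPEL, which this environment already has enabled). A sketch equivalent to the values shown above:

# yum install crudini
# crudini --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:123456@controller
# crudini --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
# crudini --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.0.101
# crudini --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:123456@controller/cinder
# crudini --set /etc/cinder/cinder.conf keystone_authtoken username cinder
# crudini --set /etc/cinder/cinder.conf keystone_authtoken password 123456

The remaining [keystone_authtoken] and [oslo_concurrency] options follow the same pattern.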

(3) Populate the Block Storage database

[root@controller ~]

# su -s /bin/sh -c "cinder-manage db sync" cinder
Deprecated: Option “logdir” from group “DEFAULT” is deprecated. Use option “log-dir” from group “DEFAULT”.
The migration log:

[root@controller ~]

# cat /var/log/cinder/cinder-manage.log
2019-04-15 23:50:56.915 32272 INFO migrate.versioning.api [-] 84 -> 85…
2019-04-15 23:51:07.231 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:07.232 32272 INFO migrate.versioning.api [-] 85 -> 86…
2019-04-15 23:51:07.297 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:07.298 32272 INFO migrate.versioning.api [-] 86 -> 87…
2019-04-15 23:51:07.563 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:07.563 32272 INFO migrate.versioning.api [-] 87 -> 88…
2019-04-15 23:51:08.583 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:08.583 32272 INFO migrate.versioning.api [-] 88 -> 89…
2019-04-15 23:51:09.083 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:09.084 32272 INFO migrate.versioning.api [-] 89 -> 90…
2019-04-15 23:51:09.642 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:09.643 32272 INFO migrate.versioning.api [-] 90 -> 91…
2019-04-15 23:51:10.085 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:10.086 32272 INFO migrate.versioning.api [-] 91 -> 92…
2019-04-15 23:51:10.111 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:10.112 32272 INFO migrate.versioning.api [-] 92 -> 93…
2019-04-15 23:51:10.143 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:10.143 32272 INFO migrate.versioning.api [-] 93 -> 94…
2019-04-15 23:51:10.168 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:10.168 32272 INFO migrate.versioning.api [-] 94 -> 95…
2019-04-15 23:51:10.192 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:10.193 32272 INFO migrate.versioning.api [-] 95 -> 96…
2019-04-15 23:51:10.217 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:10.218 32272 INFO migrate.versioning.api [-] 96 -> 97…
2019-04-15 23:51:10.336 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:10.336 32272 INFO migrate.versioning.api [-] 97 -> 98…
2019-04-15 23:51:10.510 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:10.511 32272 INFO migrate.versioning.api [-] 98 -> 99…
2019-04-15 23:51:10.887 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:10.887 32272 INFO migrate.versioning.api [-] 99 -> 100…
2019-04-15 23:51:10.896 32272 INFO 100_add_foreign_key_indexes [-] Skipped adding attachment_specs_attachment_id_idx because an equivalent index already exists.
2019-04-15 23:51:10.902 32272 INFO 100_add_foreign_key_indexes [-] Skipped adding cgsnapshots_consistencygroup_id_idx because an equivalent index already exists.
2019-04-15 23:51:10.907 32272 INFO 100_add_foreign_key_indexes [-] Skipped adding group_snapshots_group_id_idx because an equivalent index already exists.
2019-04-15 23:51:10.912 32272 INFO 100_add_foreign_key_indexes [-] Skipped adding group_type_specs_group_type_id_idx because an equivalent index already exists.
2019-04-15 23:51:10.917 32272 INFO 100_add_foreign_key_indexes [-] Skipped adding group_volume_type_mapping_group_id_idx because an equivalent index already exists.
2019-04-15 23:51:10.921 32272 INFO 100_add_foreign_key_indexes [-] Skipped adding group_volume_type_mapping_volume_type_id_idx because an equivalent index already exists.
2019-04-15 23:51:10.924 32272 INFO 100_add_foreign_key_indexes [-] Skipped adding quality_of_service_specs_specs_id_idx because an equivalent index already exists.
2019-04-15 23:51:10.927 32272 INFO 100_add_foreign_key_indexes [-] Skipped adding reservations_allocated_id_idx because an equivalent index already exists.
2019-04-15 23:51:10.930 32272 INFO 100_add_foreign_key_indexes [-] Skipped adding reservations_usage_id_idx because an equivalent index already exists.
2019-04-15 23:51:10.932 32272 INFO 100_add_foreign_key_indexes [-] Skipped adding snapshot_metadata_snapshot_id_idx because an equivalent index already exists.
2019-04-15 23:51:10.935 32272 INFO 100_add_foreign_key_indexes [-] Skipped adding snapshots_cgsnapshot_id_idx because an equivalent index already exists.
2019-04-15 23:51:10.937 32272 INFO 100_add_foreign_key_indexes [-] Skipped adding snapshots_group_snapshot_id_idx because an equivalent index already exists.
2019-04-15 23:51:10.939 32272 INFO 100_add_foreign_key_indexes [-] Skipped adding snapshots_volume_id_idx because an equivalent index already exists.
2019-04-15 23:51:10.942 32272 INFO 100_add_foreign_key_indexes [-] Skipped adding transfers_volume_id_idx because an equivalent index already exists.
2019-04-15 23:51:10.944 32272 INFO 100_add_foreign_key_indexes [-] Skipped adding volume_admin_metadata_volume_id_idx because an equivalent index already exists.
2019-04-15 23:51:10.946 32272 INFO 100_add_foreign_key_indexes [-] Skipped adding volume_attachment_volume_id_idx because an equivalent index already exists.
2019-04-15 23:51:10.948 32272 INFO 100_add_foreign_key_indexes [-] Skipped adding volume_glance_metadata_snapshot_id_idx because an equivalent index already exists.
2019-04-15 23:51:10.950 32272 INFO 100_add_foreign_key_indexes [-] Skipped adding volume_glance_metadata_volume_id_idx because an equivalent index already exists.
2019-04-15 23:51:10.952 32272 INFO 100_add_foreign_key_indexes [-] Skipped adding volume_metadata_volume_id_idx because an equivalent index already exists.
2019-04-15 23:51:10.954 32272 INFO 100_add_foreign_key_indexes [-] Skipped adding volume_type_extra_specs_volume_type_id_idx because an equivalent index already exists.
2019-04-15 23:51:10.955 32272 INFO 100_add_foreign_key_indexes [-] Skipped adding volume_types_qos_specs_id_idx because an equivalent index already exists.
2019-04-15 23:51:10.958 32272 INFO 100_add_foreign_key_indexes [-] Skipped adding volumes_consistencygroup_id_idx because an equivalent index already exists.
2019-04-15 23:51:10.960 32272 INFO 100_add_foreign_key_indexes [-] Skipped adding volumes_group_id_idx because an equivalent index already exists.
2019-04-15 23:51:10.962 32272 INFO 100_add_foreign_key_indexes [-] Skipped adding workers_service_id_idx because an equivalent index already exists.
2019-04-15 23:51:10.992 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:10.992 32272 INFO migrate.versioning.api [-] 100 -> 101…
2019-04-15 23:51:11.025 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:11.025 32272 INFO migrate.versioning.api [-] 101 -> 102…
2019-04-15 23:51:11.295 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:11.296 32272 INFO migrate.versioning.api [-] 102 -> 103…
2019-04-15 23:51:12.029 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:12.030 32272 INFO migrate.versioning.api [-] 103 -> 104…
2019-04-15 23:51:12.655 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:12.656 32272 INFO migrate.versioning.api [-] 104 -> 105…
2019-04-15 23:51:13.013 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:13.014 32272 INFO migrate.versioning.api [-] 105 -> 106…
2019-04-15 23:51:13.047 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:13.048 32272 INFO migrate.versioning.api [-] 106 -> 107…
2019-04-15 23:51:13.080 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:13.081 32272 INFO migrate.versioning.api [-] 107 -> 108…
2019-04-15 23:51:13.156 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:13.157 32272 INFO migrate.versioning.api [-] 108 -> 109…
2019-04-15 23:51:13.187 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:13.187 32272 INFO migrate.versioning.api [-] 109 -> 110…
2019-04-15 23:51:13.212 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:13.212 32272 INFO migrate.versioning.api [-] 110 -> 111…
2019-04-15 23:51:13.455 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:13.455 32272 INFO migrate.versioning.api [-] 111 -> 112…
2019-04-15 23:51:13.940 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:13.941 32272 INFO migrate.versioning.api [-] 112 -> 113…
2019-04-15 23:51:14.108 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:14.108 32272 INFO migrate.versioning.api [-] 113 -> 114…
2019-04-15 23:51:15.343 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:15.344 32272 INFO migrate.versioning.api [-] 114 -> 115…
2019-04-15 23:51:15.751 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:15.751 32272 INFO migrate.versioning.api [-] 115 -> 116…
2019-04-15 23:51:16.078 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:16.078 32272 INFO migrate.versioning.api [-] 116 -> 117…
2019-04-15 23:51:16.312 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:16.313 32272 INFO migrate.versioning.api [-] 117 -> 118…
2019-04-15 23:51:16.360 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:16.360 32272 INFO migrate.versioning.api [-] 118 -> 119…
2019-04-15 23:51:16.394 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:16.394 32272 INFO migrate.versioning.api [-] 119 -> 120…
2019-04-15 23:51:16.417 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:16.417 32272 INFO migrate.versioning.api [-] 120 -> 121…
2019-04-15 23:51:16.452 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:16.452 32272 INFO migrate.versioning.api [-] 121 -> 122…
2019-04-15 23:51:16.486 32272 INFO migrate.versioning.api [-] done
2019-04-15 23:51:16.487 32272 INFO migrate.versioning.api [-] 122 -> 123…
2019-04-15 23:51:16.803 32272 INFO migrate.versioning.api [-] done
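If you want to confirm that the sync reached the latest revision, cinder-manage can also print the current schema version (an optional check; judging from the log above, the final migration in this run is 123):

# su -s /bin/sh -c "cinder-manage db version" cinder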

(4) Edit the Nova configuration file /etc/nova/nova.conf
Add the following options:

[cinder]

os_region_name = RegionOne

[root@controller ~]

# grep -v '^#' /etc/nova/nova.conf | grep -v '^$'
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@controller
my_ip = 192.168.0.101
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]

auth_strategy = keystone

[api_database]

connection = mysql+pymysql://nova:123456@controller/nova_api

[barbican]

[cache]

[cells]

[cinder]

os_region_name = RegionOne

[compute]

[conductor]

[console]

[consoleauth]

[cors]

[database]

connection = mysql+pymysql://nova:123456@controller/nova

[devices]

[ephemeral_storage_encryption]

[filter_scheduler]

[glance]

api_servers = http://controller:9292

[guestfs]

[healthcheck]

[hyperv]

[ironic]

[key_manager]

[keystone]

[keystone_authtoken]

auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123456

[libvirt]

[matchmaker_redis]

[metrics]

[mks]

[neutron]

url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = 123456

[notifications]

[osapi_v21]

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[pci]

[placement]

region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123456

[placement_database]

connection = mysql+pymysql://placement:123456@controller/placement

[powervm]

[profiler]

[quota]

[rdp]

[remote_debug]

[scheduler]

[serial_console]

[service_user]

[spice]

[upgrade_levels]

[vault]

[vendordata_dynamic_auth]

[vmware]

[vnc]

enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[workarounds]

[wsgi]

[xenserver]

[xvp]

[zvm]

Restart the Nova API service:

[root@controller ~]

# systemctl restart openstack-nova-api.service

[root@controller ~]

# systemctl status openstack-nova-api.service
● openstack-nova-api.service – OpenStack Nova API Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-api.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-04-15 23:54:27 EDT; 9s ago
Main PID: 32469 (nova-api)
Tasks: 9
CGroup: /system.slice/openstack-nova-api.service
├─32469 /usr/bin/python2 /usr/bin/nova-api
├─32482 /usr/bin/python2 /usr/bin/nova-api
├─32483 /usr/bin/python2 /usr/bin/nova-api
├─32484 /usr/bin/python2 /usr/bin/nova-api
├─32485 /usr/bin/python2 /usr/bin/nova-api
├─32490 /usr/bin/python2 /usr/bin/nova-api
├─32491 /usr/bin/python2 /usr/bin/nova-api
├─32492 /usr/bin/python2 /usr/bin/nova-api
└─32493 /usr/bin/python2 /usr/bin/nova-api

Apr 15 23:54:25 controller systemd[1]: Starting OpenStack Nova API Server…
Apr 15 23:54:27 controller systemd[1]: Started OpenStack Nova API Server.

[root@controller ~]

#

(5) Start the Cinder services and enable them at boot

[root@controller ~]

# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-api.service to /usr/lib/systemd/system/openstack-cinder-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-scheduler.service to /usr/lib/systemd/system/openstack-cinder-scheduler.service.

[root@controller ~]

# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

[root@controller ~]

# systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service
● openstack-cinder-api.service – OpenStack Cinder API Server
Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-api.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-04-15 23:55:23 EDT; 7s ago
Main PID: 32581 (cinder-api)
Tasks: 5
CGroup: /system.slice/openstack-cinder-api.service
├─32581 /usr/bin/python2 /usr/bin/cinder-api –config-file /usr/share/cinder/cinder-dist.conf –config-file /etc/cinder/cinder.conf –l…
├─32609 /usr/bin/python2 /usr/bin/cinder-api –config-file /usr/share/cinder/cinder-dist.conf –config-file /etc/cinder/cinder.conf –l…
├─32610 /usr/bin/python2 /usr/bin/cinder-api –config-file /usr/share/cinder/cinder-dist.conf –config-file /etc/cinder/cinder.conf –l…
├─32611 /usr/bin/python2 /usr/bin/cinder-api –config-file /usr/share/cinder/cinder-dist.conf –config-file /etc/cinder/cinder.conf –l…
└─32612 /usr/bin/python2 /usr/bin/cinder-api –config-file /usr/share/cinder/cinder-dist.conf –config-file /etc/cinder/cinder.conf –l…

Apr 15 23:55:23 controller systemd[1]: Started OpenStack Cinder API Server.
Apr 15 23:55:24 controller cinder-api[32581]: Deprecated: Option “logdir” from group “DEFAULT” is deprecated. Use option “log-dir” from gr…EFAULT”.

● openstack-cinder-scheduler.service – OpenStack Cinder Scheduler Server
Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-scheduler.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-04-15 23:55:23 EDT; 7s ago
Main PID: 32582 (cinder-schedule)
Tasks: 1
CGroup: /system.slice/openstack-cinder-scheduler.service
└─32582 /usr/bin/python2 /usr/bin/cinder-scheduler –config-file /usr/share/cinder/cinder-dist.conf –config-file /etc/cinder/cinder.co…

Apr 15 23:55:23 controller systemd[1]: Started OpenStack Cinder Scheduler Server.
Apr 15 23:55:24 controller cinder-scheduler[32582]: Deprecated: Option “logdir” from group “DEFAULT” is deprecated. Use option “log-dir” fr…FAULT”.
Hint: Some lines were ellipsized, use -l to show in full.

[root@controller ~]

#

Storage node installation and configuration
Prerequisites
Before you install and configure the Block Storage service on the storage node, you must prepare the storage device.

Note

Perform these steps on the storage node.

Install the supporting utility packages:

Install the LVM packages:

yum install lvm2 device-mapper-persistent-data

Start the LVM metadata service and configure it to start when the system boots:

systemctl enable lvm2-lvmetad.service

systemctl start lvm2-lvmetad.service

Note

Some distributions include LVM by default.

Create the LVM physical volume /dev/sdb:

pvcreate /dev/sdb

Physical volume “/dev/sdb” successfully created
Create the LVM volume group cinder-volumes:

vgcreate cinder-volumes /dev/sdb

Volume group “cinder-volumes” successfully created
The Block Storage service creates logical volumes in this volume group.

Only instances can access Block Storage volumes. However, the underlying operating system manages the devices associated with the volumes. By default, the LVM volume scanning tool scans the /dev directory for block storage devices that contain volumes. If projects use LVM on their volumes, the scanning tool detects these volumes and attempts to cache them which can cause a variety of problems with both the underlying operating system and project volumes. You must reconfigure LVM to scan only the devices that contain the cinder-volumes volume group. Edit the /etc/lvm/lvm.conf file and complete the following actions:

In the devices section, add a filter that accepts the /dev/sdb device and rejects all other devices:

devices {
    filter = [ "a/sdb/", "r/.*/" ]

Each item in the filter array begins with a for accept or r for reject and includes a regular expression for the device name. The array must end with r/.*/ to reject any remaining devices. You can use the vgs -vvvv command to test filters.

Warning

If your storage nodes use LVM on the operating system disk, you must also add the associated device to the filter. For example, if the /dev/sda device contains the operating system:

filter = [ "a/sda/", "a/sdb/", "r/.*/" ]
Similarly, if your compute nodes use LVM on the operating system disk, you must also modify the filter in the /etc/lvm/lvm.conf file on those nodes to include only the operating system disk. For example, if the /dev/sda device contains the operating system:

filter = [ "a/sda/", "r/.*/" ]
Install and configure components
Install the packages:

yum install openstack-cinder targetcli python-keystone

Edit the /etc/cinder/cinder.conf file and complete the following actions:

In the [database] section, configure database access:

[database]

connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
Replace CINDER_DBPASS with the password you chose for the Block Storage database.

In the [DEFAULT] section, configure RabbitMQ message queue access:

[DEFAULT]

transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in RabbitMQ.

In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

[DEFAULT]

auth_strategy = keystone

[keystone_authtoken]

www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = CINDER_PASS
Replace CINDER_PASS with the password you chose for the cinder user in the Identity service.

Note

Comment out or remove any other options in the [keystone_authtoken] section.

In the [DEFAULT] section, configure the my_ip option:

[DEFAULT]

my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network interface on your storage node, typically 10.0.0.41 for the first node in the example architecture.

In the [lvm] section, configure the LVM back end with the LVM driver, cinder-volumes volume group, iSCSI protocol, and appropriate iSCSI service. If the [lvm] section does not exist, create it:

[lvm]

volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
In the [DEFAULT] section, enable the LVM back end:

[DEFAULT]

enabled_backends = lvm
Note

Back-end names are arbitrary. As an example, this guide uses the name of the driver as the name of the back end.

In the [DEFAULT] section, configure the location of the Image service API:

[DEFAULT]

glance_api_servers = http://controller:9292
In the [oslo_concurrency] section, configure the lock path:

[oslo_concurrency]

lock_path = /var/lib/cinder/tmp
Finalize installation
Start the Block Storage volume service including its dependencies and configure them to start when the system boots:

systemctl enable openstack-cinder-volume.service target.service

systemctl start openstack-cinder-volume.service target.service

My installation and configuration are as follows.
Note: my storage node already runs quite a few of my own services, so the detailed logs are not listed here.
First, the environment preparation.
(1) Install the packages

[root@server2 ~]

# yum install lvm2 device-mapper-persistent-data
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
epel/x86_64/metalink | 14 kB 00:00:00

  • base: mirrors.usc.edu
  • centos-qemu-ev: repos-lax.psychz.net
  • centos-sclo-rh: repos-lax.psychz.net
  • centos-sclo-sclo: repos-lax.psychz.net
  • epel: mirror.rnet.missouri.edu
  • extras: repos-lax.psychz.net
  • updates: repos-lax.psychz.net
  • webtatic: uk.repo.webtatic.com
    base | 3.6 kB 00:00:00
    centos-ceph-luminous | 2.9 kB 00:00:00
    centos-openstack-rocky | 2.9 kB 00:00:00
    centos-qemu-ev | 2.9 kB 00:00:00
    centos-sclo-rh | 3.0 kB 00:00:00
    centos-sclo-sclo | 2.9 kB 00:00:00
    epel | 4.7 kB 00:00:00
    extras | 3.4 kB 00:00:00
    updates | 3.4 kB 00:00:00
    webtatic | 3.6 kB 00:00:00
    (1/2): epel/x86_64/updateinfo | 987 kB 00:00:02
    (2/2): epel/x86_64/primary_db | 6.7 MB 00:00:04
    Package 7:lvm2-2.02.180-10.el7_6.3.x86_64 already installed and latest version
    Package device-mapper-persistent-data-0.7.3-3.el7.x86_64 already installed and latest version
    Nothing to do
Start the LVM metadata service and enable it at boot:

[root@server2 ~]

# systemctl enable lvm2-lvmetad.service

[root@server2 ~]

# systemctl start lvm2-lvmetad.service

[root@server2 ~]

# systemctl status lvm2-lvmetad.service
● lvm2-lvmetad.service – LVM2 metadata daemon
Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; static; vendor preset: enabled)
Active: active (running) since Sat 2019-04-13 22:48:32 EDT; 2 days ago
Docs: man:lvmetad(8)
Main PID: 2196 (lvmetad)
CGroup: /system.slice/lvm2-lvmetad.service
└─2196 /usr/sbin/lvmetad -f

Apr 13 22:48:32 server2 systemd[1]: Started LVM2 metadata daemon.

[root@server2 ~]

#

(2) Create the LVM physical volumes

[root@server2 ~]

# pvcreate /dev/sdd1
Physical volume “/dev/sdd1” successfully created.

[root@server2 ~]

# pvcreate /dev/sde1
Physical volume “/dev/sde1” successfully created.

(3) Create the LVM volume group

[root@server2 ~]

# vgcreate cinder-volumes /dev/sdd1 /dev/sde1
Volume group “cinder-volumes” successfully created

[root@server2 ~]

# vgs
VG             #PV #LV #SN Attr   VSize    VFree
centos           1   2   0 wz--n- <463.76g        0
cinder-volumes   2   0   0 wz--n- <931.52g <931.52g

(4) Add the LVM device filter
The reasoning is the same as in the official text quoted above: by default LVM scans every block device under /dev, so on a storage node it must be told to look only at the devices that hold the cinder-volumes volume group, plus the operating-system disk if that disk also uses LVM. Each filter entry starts with a (accept) or r (reject) followed by a device-name regular expression, the list must end with r/.*/ to reject everything else, and you can test the result with vgs -vvvv.
On this node the operating system sits on /dev/sda (which uses LVM) and the cinder-volumes volume group uses /dev/sdd and /dev/sde, so my filter in /etc/lvm/lvm.conf is:

filter = [ "a/sda/", "a/sdd/", "a/sde/", "r/.*/" ]

The resulting devices section is sketched below.
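A minimal sketch of the relevant part of /etc/lvm/lvm.conf on this storage node, assuming the rest of the file keeps its distribution defaults:

devices {
    # accept the OS disk and the two cinder-volumes disks, reject all other devices
    filter = [ "a/sda/", "a/sdd/", "a/sde/", "r/.*/" ]
}

After editing, vgs should still report both the centos and cinder-volumes volume groups; if one disappears, the filter is rejecting a device it should accept.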

Install and configure the components

[root@server2 ~]

# yum install openstack-cinder targetcli python-keystone
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile

  • base: mirrors.usc.edu
  • centos-qemu-ev: repos-lax.psychz.net
  • centos-sclo-rh: repos-lax.psychz.net
  • centos-sclo-sclo: repos-lax.psychz.net
  • epel: mirror.rnet.missouri.edu
  • extras: repos-lax.psychz.net
  • updates: repos-lax.psychz.net
  • webtatic: uk.repo.webtatic.com
    Package targetcli-2.1.fb46-7.el7.noarch already installed and latest version
    Resolving Dependencies
    –> Running transaction check
    —> Package openstack-cinder.noarch 1:13.0.4-1.el7 will be installed
#

Installed:
openstack-cinder.noarch 1:13.0.4-1.el7 python-keystone.noarch 1:14.1.0-1.el7

Dependency Installed:
MySQL-python.x86_64 0:1.2.5-1.el7 atlas.x86_64 0:3.10.1-12.el7
python-aniso8601.noarch 0:0.82-3.el7 python-beaker.noarch 0:1.5.4-10.el7
python-cinder.noarch 1:13.0.4-1.el7 python-editor.noarch 0:0.4-4.el7
python-httplib2.noarch 0:0.9.2-1.el7 python-jwcrypto.noarch 0:0.4.2-1.el7
python-kazoo.noarch 0:2.2.1-1.el7 python-ldap.x86_64 0:2.4.15-2.el7
python-lxml.x86_64 0:3.2.1-4.el7 python-mako.noarch 0:0.8.1-2.el7
python-memcached.noarch 0:1.58-1.el7 python-migrate.noarch 0:0.11.0-1.el7
python-networkx.noarch 0:1.10-1.el7 python-networkx-core.noarch 0:1.10-1.el7
python-nose.noarch 0:1.3.7-7.el7 python-oslo-cache-lang.noarch 0:1.30.3-1.el7
python-oslo-concurrency-lang.noarch 0:3.27.0-1.el7 python-oslo-db-lang.noarch 0:4.40.1-1.el7
python-oslo-middleware-lang.noarch 0:3.36.0-1.el7 python-oslo-policy-lang.noarch 0:1.38.1-1.el7
python-oslo-privsep-lang.noarch 0:1.29.2-1.el7 python-oslo-versionedobjects-lang.noarch 0:1.33.3-1.el7
python-oslo-vmware-lang.noarch 0:2.31.0-1.el7 python-paramiko.noarch 0:2.1.1-9.el7
python-paste.noarch 0:1.7.5.1-9.20111221hg1498.el7 python-paste-deploy.noarch 0:1.5.2-6.el7
python-pycadf-common.noarch 0:2.8.0-1.el7 python-retrying.noarch 0:1.2.3-4.el7
python-routes.noarch 0:2.4.1-1.el7 python-sqlparse.noarch 0:0.1.18-5.el7
python-tempita.noarch 0:0.5.1-8.el7 python2-alembic.noarch 0:0.9.7-1.el7
python2-amqp.noarch 0:2.4.0-1.el7 python2-automaton.noarch 0:1.15.0-1.el7
python2-barbicanclient.noarch 0:4.7.2-1.el7 python2-bcrypt.x86_64 0:3.1.4-4.el7
python2-cachetools.noarch 0:2.1.0-1.el7 python2-castellan.noarch 0:0.19.0-1.el7
python2-click.noarch 0:6.7-8.el7 python2-cursive.noarch 0:0.2.2-1.el7
python2-defusedxml.noarch 0:0.5.0-2.el7 python2-eventlet.noarch 0:0.20.1-6.el7
python2-fasteners.noarch 0:0.14.1-6.el7 python2-flask.noarch 1:1.0.2-1.el7
python2-flask-restful.noarch 0:0.3.6-7.el7 python2-future.noarch 0:0.16.0-7.el7
python2-futurist.noarch 0:1.7.0-1.el7 python2-gflags.noarch 0:2.0-5.el7
python2-google-api-client.noarch 0:1.6.3-1.el7 python2-greenlet.x86_64 0:0.4.12-1.el7
python2-itsdangerous.noarch 0:0.24-14.el7 python2-jinja2.noarch 0:2.10-2.el7
python2-keystonemiddleware.noarch 0:5.2.0-1.el7 python2-kombu.noarch 1:4.2.2-1.el7
python2-ldappool.noarch 0:2.3.1-1.el7 python2-markupsafe.x86_64 0:0.23-16.el7
python2-numpy.x86_64 1:1.14.5-1.el7 python2-oauth2client.noarch 0:4.0.0-2.el7
python2-oauthlib.noarch 0:2.0.1-8.el7 python2-os-brick.noarch 0:2.5.6-1.el7
python2-os-win.noarch 0:4.0.1-1.el7 python2-oslo-cache.noarch 0:1.30.3-1.el7
python2-oslo-concurrency.noarch 0:3.27.0-1.el7 python2-oslo-db.noarch 0:4.40.1-1.el7
python2-oslo-messaging.noarch 0:8.1.2-1.el7 python2-oslo-middleware.noarch 0:3.36.0-1.el7
python2-oslo-policy.noarch 0:1.38.1-1.el7 python2-oslo-privsep.noarch 0:1.29.2-1.el7
python2-oslo-reports.noarch 0:1.28.0-1.el7 python2-oslo-rootwrap.noarch 0:5.14.1-1.el7
python2-oslo-service.noarch 0:1.31.8-1.el7 python2-oslo-versionedobjects.noarch 0:1.33.3-1.el7
python2-oslo-vmware.noarch 0:2.31.0-1.el7 python2-osprofiler.noarch 0:2.3.0-1.el7
python2-passlib.noarch 0:1.7.1-1.el7 python2-psutil.x86_64 0:5.2.2-2.el7
python2-pyasn1.noarch 0:0.1.9-7.el7 python2-pyasn1-modules.noarch 0:0.1.9-7.el7
python2-pycadf.noarch 0:2.8.0-1.el7 python2-pyngus.noarch 0:2.2.4-1.el7
python2-pysaml2.noarch 0:4.5.0-4.el7 python2-qpid-proton.x86_64 0:0.26.0-2.el7
python2-redis.noarch 0:2.10.6-1.el7 python2-rsa.noarch 0:3.4.1-1.el7
python2-scipy.x86_64 0:0.18.0-3.el7 python2-scrypt.x86_64 0:0.8.0-2.el7
python2-sqlalchemy.x86_64 0:1.2.7-1.el7 python2-statsd.noarch 0:3.2.1-5.el7
python2-swiftclient.noarch 0:3.6.0-1.el7 python2-taskflow.noarch 0:3.2.0-1.el7
python2-tenacity.noarch 0:4.12.0-1.el7 python2-tooz.noarch 0:1.62.1-1.el7
python2-uritemplate.noarch 0:3.0.0-1.el7 python2-vine.noarch 0:1.2.0-1.el7
python2-voluptuous.noarch 0:0.11.5-1.el7.1 python2-webob.noarch 0:1.8.2-1.el7
python2-werkzeug.noarch 0:0.14.1-3.el7 python2-zake.noarch 0:0.2.2-2.el7
qpid-proton-c.x86_64 0:0.26.0-2.el7 sysfsutils.x86_64 0:2.1.0-16.el7

Complete!

Edit the configuration file /etc/cinder/cinder.conf:

[root@server2 ~]

# vim /etc/cinder/cinder.conf

[root@server2 ~]

# grep -v '^#' /etc/cinder/cinder.conf | grep -v '^$'
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
my_ip = 192.168.0.9
enabled_backends = lvm
glance_api_servers = http://controller:9292

[backend]

[backend_defaults]

[lvm]

volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

[barbican]

[brcd_fabric_example]

[cisco_fabric_example]

[coordination]

[cors]

[database]

connection = mysql+pymysql://cinder:123456@controller/cinder

[fc-zone-manager]

[healthcheck]

[key_manager]

[keystone_authtoken]

www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = 123456

[matchmaker_redis]

[nova]

[oslo_concurrency]

lock_path = /var/lib/cinder/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[oslo_reports]

[oslo_versionedobjects]

[profiler]

[sample_remote_file_source]

[service_user]

[ssl]

[vault]

Start the services and enable them at boot:

[root@server2 ~]

# systemctl enable openstack-cinder-volume.service target.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-volume.service to /usr/lib/systemd/system/openstack-cinder-volume.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/target.service to /usr/lib/systemd/system/target.service.

[root@server2 ~]

# systemctl start openstack-cinder-volume.service target.service

[root@server2 ~]

# systemctl status openstack-cinder-volume.service target.service
● openstack-cinder-volume.service – OpenStack Cinder Volume Server
Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-volume.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-04-16 01:02:45 EDT; 5s ago
Main PID: 22778 (cinder-volume)
Tasks: 1
CGroup: /system.slice/openstack-cinder-volume.service
└─22778 /usr/bin/python2 /usr/bin/cinder-volume –config-file /usr/share/cinder/cinder-dist.conf –config-file /etc/cinder/cinder.conf …

Apr 16 01:02:45 server2 systemd[1]: Started OpenStack Cinder Volume Server.

● target.service – Restore LIO kernel target configuration
Loaded: loaded (/usr/lib/systemd/system/target.service; enabled; vendor preset: disabled)
Active: active (exited) since Tue 2019-04-16 01:02:45 EDT; 5s ago
Process: 22779 ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
Main PID: 22779 (code=exited, status=0/SUCCESS)

Apr 16 01:02:45 server2 systemd[1]: Starting Restore LIO kernel target configuration…
Apr 16 01:02:45 server2 target[22779]: No saved config file at /etc/target/saveconfig.json, ok, exiting
Apr 16 01:02:45 server2 systemd[1]: Started Restore LIO kernel target configuration.

[root@server2 ~]

#

Verification: go to the controller node.
First source the admin credentials:

[root@controller ~]

# . admin-openrc
Then list the volume service components:

[root@controller ~]

# openstack volume service list
+——————+————-+——+———+——-+—————————-+
| Binary | Host | Zone | Status | State | Updated At |
+——————+————-+——+———+——-+—————————-+
| cinder-scheduler | controller | nova | enabled | up | 2019-04-16T05:04:16.000000 |
| cinder-volume | server2@lvm | nova | enabled | up | 2019-04-16T05:04:20.000000 |
+——————+————-+——+———+——-+—————————-+
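With both services up, you can optionally create a small test volume to confirm the LVM back end works end to end (the volume name here is just an example):

# openstack volume create --size 1 test-vol
# openstack volume list

The volume should reach the available status within a few seconds; it can be removed afterwards with openstack volume delete test-vol.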

Install the backup service. Note that it depends on the Swift component; if you have not installed Swift yet, skip ahead to the Swift installation first.
Install and configure the backup service

Optionally, install and configure the backup service. For simplicity, this configuration uses the Block Storage node and the Object Storage (swift) driver, thus depending on the Object Storage service.

Note

You must install and configure a storage node prior to installing and configuring the backup service.

Install and configure components
Note

Perform these steps on the Block Storage node.

Install the packages:

yum install openstack-cinder

Edit the /etc/cinder/cinder.conf file and complete the following actions:

In the [DEFAULT] section, configure backup options:

[DEFAULT]

backup_driver = cinder.backup.drivers.swift
backup_swift_url = SWIFT_URL
Replace SWIFT_URL with the URL of the Object Storage service. The URL can be found by showing the object-store API endpoints:

$ openstack catalog show object-store
Finalize installation
Start the Block Storage backup service and configure it to start when the system boots:

systemctl enable openstack-cinder-backup.service

systemctl start openstack-cinder-backup.service


(1) Install the cinder package (it was already installed earlier):

[root@controller ~]

# yum install openstack-cinder
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile

  • base: mirror.scalabledns.com
  • centos-qemu-ev: mirror.scalabledns.com
  • epel: muug.ca
  • extras: mirror.sjc02.svwh.net
  • updates: mirror.scalabledns.com
    Package 1:openstack-cinder-13.0.4-1.el7.noarch already installed and latest version
    Nothing to do
(2) Edit the cinder configuration file to enable the backup service.
First look up the swift URL, i.e. the object-store API endpoint:

[root@controller tmp]

# openstack catalog show object-store
+———–+—————————————————————————–+
| Field | Value |
+———–+—————————————————————————–+
| endpoints | RegionOne |
| | public: http://controller:8080/v1/AUTH_25a82cb651074f3494aeb5639d62ed22 |
| | RegionOne |
| | internal: http://controller:8080/v1/AUTH_25a82cb651074f3494aeb5639d62ed22 |
| | RegionOne |
| | admin: http://controller:8080/v1 |
| | |
| id | b96b751091184bfbb8cca6c2622adefa |
| name | swift |
| type | object-store |
+———–+—————————————————————————–+
Note down the admin URL http://controller:8080/v1, then edit the configuration file:

[root@controller ~]

# vim /etc/cinder/cinder.conf

[root@controller ~]

# grep -v '^#' /etc/cinder/cinder.conf | grep -v '^$'
[DEFAULT]
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
my_ip = 192.168.0.101
backup_driver = cinder.backup.drivers.swift
backup_swift_url = http://controller:8080/v1

[backend]

[backend_defaults]

[barbican]

[brcd_fabric_example]

[cisco_fabric_example]

[coordination]

[cors]

[database]

connection = mysql+pymysql://cinder:123456@controller/cinder

[fc-zone-manager]

[healthcheck]

[key_manager]

[keystone_authtoken]

auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = 123456

[matchmaker_redis]

[nova]

[oslo_concurrency]

lock_path = /var/lib/cinder/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[oslo_reports]

[oslo_versionedobjects]

[profiler]

[sample_remote_file_source]

[service_user]

[ssl]

[vault]

[root@controller ~]

#

(3) Start the service and enable it at boot

[root@controller ~]

# systemctl enable openstack-cinder-backup.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-backup.service to /usr/lib/systemd/system/openstack-cinder-backup.service.

[root@controller ~]

# systemctl start openstack-cinder-backup.service

[root@controller ~]

# systemctl status openstack-cinder-backup.service
● openstack-cinder-backup.service – OpenStack Cinder Backup Server
Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-backup.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-04-16 05:54:59 EDT; 5s ago
Main PID: 18626 (cinder-backup)
Tasks: 1
CGroup: /system.slice/openstack-cinder-backup.service
└─18626 /usr/bin/python2 /usr/bin/cinder-backup –config-file /usr/share/cinder/cinder-dist.conf –config-file /etc/cinder/cinder.conf …

Apr 16 05:54:59 controller systemd[1]: Started OpenStack Cinder Backup Server.
Apr 16 05:55:00 controller cinder-backup[18626]: Deprecated: Option “logdir” from group “DEFAULT” is deprecated. Use option “log-dir” from …FAULT”.
Hint: Some lines were ellipsized, use -l to show in full.
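Once the backup service is running, it should also appear in the volume service listing on the controller, alongside cinder-scheduler and cinder-volume:

# openstack volume service list

A cinder-backup row with State up means the service registered correctly.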


1.7 Swift
Install and configure the controller node for Red Hat Enterprise Linux and CentOS

This section describes how to install and configure the proxy service that handles requests for the account, container, and object services operating on the storage nodes. For simplicity, this guide installs and configures the proxy service on the controller node. However, you can run the proxy service on any node with network connectivity to the storage nodes. Additionally, you can install and configure the proxy service on multiple nodes to increase performance and redundancy. For more information, see the Deployment Guide.

This section applies to Red Hat Enterprise Linux 7 and CentOS 7.

Prerequisites
The proxy service relies on an authentication and authorization mechanism such as the Identity service. However, unlike other services, it also offers an internal mechanism that allows it to operate without any other OpenStack services. Before you configure the Object Storage service, you must create service credentials and an API endpoint.

Note

The Object Storage service does not use an SQL database on the controller node. Instead, it uses distributed SQLite databases on each storage node.

Source the admin credentials to gain access to admin-only CLI commands:

$ . admin-openrc
To create the Identity service credentials, complete these steps:

Create the swift user:

$ openstack user create --domain default --password-prompt swift
User Password:
Repeat User Password:
+———–+———————————-+
| Field | Value |
+———–+———————————-+
| domain_id | default |
| enabled | True |
| id | d535e5cbd2b74ac7bfb97db9cced3ed6 |
| name | swift |
+———–+———————————-+
Add the admin role to the swift user:

$ openstack role add --project service --user swift admin
Note

This command provides no output.

Create the swift service entity:

$ openstack service create --name swift \
  --description "OpenStack Object Storage" object-store
+————-+———————————-+
| Field | Value |
+————-+———————————-+
| description | OpenStack Object Storage |
| enabled | True |
| id | 75ef509da2c340499d454ae96a2c5c34 |
| name | swift |
| type | object-store |
+————-+———————————-+
Create the Object Storage service API endpoints:

$ openstack endpoint create --region RegionOne \
object-store public http://controller:8080/v1/AUTH_%(project_id)s
+————–+———————————————-+
| Field | Value |
+————–+———————————————-+
| enabled | True |
| id | 12bfd36f26694c97813f665707114e0d |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 75ef509da2c340499d454ae96a2c5c34 |
| service_name | swift |
| service_type | object-store |
| url | http://controller:8080/v1/AUTH_%(project_id)s |
+————–+———————————————-+

$ openstack endpoint create --region RegionOne \
object-store internal http://controller:8080/v1/AUTH_%(project_id)s
+————–+———————————————-+
| Field | Value |
+————–+———————————————-+
| enabled | True |
| id | 7a36bee6733a4b5590d74d3080ee6789 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 75ef509da2c340499d454ae96a2c5c34 |
| service_name | swift |
| service_type | object-store |
| url | http://controller:8080/v1/AUTH_%(project_id)s |
+————–+———————————————-+

$ openstack endpoint create --region RegionOne \
object-store admin http://controller:8080/v1
+————–+———————————-+
| Field | Value |
+————–+———————————-+
| enabled | True |
| id | ebb72cd6851d4defabc0b9d71cdca69b |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 75ef509da2c340499d454ae96a2c5c34 |
| service_name | swift |
| service_type | object-store |
| url | http://controller:8080/v1 |
+————–+———————————-+
Install and configure components
Note

Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (…) in the configuration snippets indicates potential default configuration options that you should retain.

Install the packages:

yum install openstack-swift-proxy python-swiftclient \
  python-keystoneclient python-keystonemiddleware \
  memcached
Note

Complete OpenStack environments already include some of these packages.

Obtain the proxy service configuration file from the Object Storage source repository:

curl -o /etc/swift/proxy-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/queens

Edit the /etc/swift/proxy-server.conf file and complete the following actions:

In the [DEFAULT] section, configure the bind port, user, and configuration directory:

[DEFAULT]

bind_port = 8080
user = swift
swift_dir = /etc/swift
In the [pipeline:main] section, remove the tempurl and tempauth modules and add the authtoken and keystoneauth modules:

[pipeline:main]

pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
Note

Do not change the order of the modules.

Note

For more information on other modules that enable additional features, see the Deployment Guide.

In the [app:proxy-server] section, enable automatic account creation:

[app:proxy-server]

use = egg:swift#proxy

account_autocreate = True
In the [filter:keystoneauth] section, configure the operator roles:

[filter:keystoneauth]

use = egg:swift#keystoneauth

operator_roles = admin,user
In the [filter:authtoken] section, configure Identity service access:

[filter:authtoken]

paste.filter_factory = keystonemiddleware.auth_token:filter_factory

www_authenticate_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = SWIFT_PASS
delay_auth_decision = True
Replace SWIFT_PASS with the password you chose for the swift user in the Identity service.

Note

Comment out or remove any other options in the [filter:authtoken] section.

In the [filter:cache] section, configure the memcached location:

[filter:cache]

use = egg:swift#memcache

memcache_servers = controller:11211

Controller node installation and configuration
Note that the Object Storage service (swift) does not use an SQL database on the controller node; instead, it uses distributed SQLite databases on each storage node.
(1) Source the admin credentials

[root@controller ~]

# . admin-openrc
(2) Create the swift service credentials.
First create the swift user:

[root@controller ~]

# openstack user create --domain default --password-prompt swift
User Password:
Repeat User Password:
+———————+———————————-+
| Field | Value |
+———————+———————————-+
| domain_id | default |
| enabled | True |
| id | 1969a84f93bd40b2bb8c1e1b83bba3c4 |
| name | swift |
| options | {} |
| password_expires_at | None |
+———————+———————————-+

[root@controller ~]

#
Add the admin role to the swift user:

[root@controller ~]

# openstack role add --project service --user swift admin
Create the swift service entity:

[root@controller ~]

# openstack service create --name swift \
  --description "OpenStack Object Storage" object-store
+————-+———————————-+
| Field | Value |
+————-+———————————-+
| description | OpenStack Object Storage |
| enabled | True |
| id | 299e609439534abcacb396b873726994 |
| name | swift |
| type | object-store |
+————-+———————————-+
Create the swift service API endpoints:

[root@controller ~]

# openstack endpoint create --region RegionOne \
object-store public http://controller:8080/v1/AUTH_%(project_id)s
+————–+———————————————–+
| Field | Value |
+————–+———————————————–+
| enabled | True |
| id | 8f9341ef9db5404d929a5d317ce7a69f |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 299e609439534abcacb396b873726994 |
| service_name | swift |
| service_type | object-store |
| url | http://controller:8080/v1/AUTH_%(project_id)s |
+————–+———————————————–+

[root@controller ~]

# openstack endpoint create --region RegionOne \
object-store internal http://controller:8080/v1/AUTH_%(project_id)s
+————–+———————————————–+
| Field | Value |
+————–+———————————————–+
| enabled | True |
| id | 013d60928ca04eb1b8ddaa4f38e276a1 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 299e609439534abcacb396b873726994 |
| service_name | swift |
| service_type | object-store |
| url | http://controller:8080/v1/AUTH_%(project_id)s |
+————–+———————————————–+

[root@controller ~]

# openstack endpoint create --region RegionOne \
object-store admin http://controller:8080/v1
+————–+———————————-+
| Field | Value |
+————–+———————————-+
| enabled | True |
| id | 598f20c962bb419997c2e76cb22cfba1 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 299e609439534abcacb396b873726994 |
| service_name | swift |
| service_type | object-store |
| url | http://controller:8080/v1 |
+————–+———————————-+

1.7.1 Install and configure components

[root@controller ~]

# yum install openstack-swift-proxy python-swiftclient \
  python-keystoneclient python-keystonemiddleware \
  memcached
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile

  • base: linux.mirrors.es.net
  • centos-qemu-ev: sjc.edge.kernel.org
  • epel: mirror.coastal.edu
  • extras: mirror.sjc02.svwh.net
  • updates: linux.mirrors.es.net
    Package python2-swiftclient-3.6.0-1.el7.noarch already installed and latest version
#

Installed:
openstack-swift-proxy.noarch 0:2.19.1-1.el7

Dependency Installed:
liberasurecode.x86_64 0:1.5.0-1.el7 libtomcrypt.x86_64 0:1.17-26.el7 libtommath.x86_64 0:0.42.0-6.el7
python-dns.noarch 0:1.15.0-5.el7 python-swift.noarch 0:2.19.1-1.el7 python2-ceilometermiddleware.noarch 0:1.3.0-1.el7
python2-crypto.x86_64 0:2.6.1-16.el7 python2-pyeclib.x86_64 0:1.5.0-3.el7

Complete!

Fetch the proxy service configuration file from the upstream source repository:

[root@controller ~]

# curl -o /etc/swift/proxy-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/queens
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 41237 100 41237 0 0 22580 0 0:00:01 0:00:01 –:–:– 22570

[root@controller ~]

# ll /etc/swift/proxy-server.conf
-rw-r----- 1 root swift 41237 Apr 16 02:12 /etc/swift/proxy-server.conf

Edit the configuration file /etc/swift/proxy-server.conf:

[root@controller ~]

# vim /etc/swift/proxy-server.conf

[root@controller ~]

# grep -v '^#' /etc/swift/proxy-server.conf | grep -v '^$'
[DEFAULT]
bind_port = 8080
user = swift
swift_dir = /etc/swift

[pipeline:main]

pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

[app:proxy-server]

use = egg:swift#proxy
account_autocreate = True

[filter:tempauth]

use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
user_test5_tester5 = testing5 service

[filter:authtoken]

paste.filter_factory = keystonemiddleware.auth_token:filter_factory
www_authenticate_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = 123456
delay_auth_decision = True

[filter:keystoneauth]

use = egg:swift#keystoneauth
operator_roles = admin,user

[filter:healthcheck]

use = egg:swift#healthcheck

[filter:cache]

use = egg:swift#memcache
memcache_servers = controller:11211

[filter:ratelimit]

use = egg:swift#ratelimit

[filter:domain_remap]

use = egg:swift#domain_remap

[filter:catch_errors]

use = egg:swift#catch_errors

[filter:cname_lookup]

use = egg:swift#cname_lookup

[filter:staticweb]

use = egg:swift#staticweb

[filter:tempurl]

use = egg:swift#tempurl

[filter:formpost]

use = egg:swift#formpost

[filter:name_check]

use = egg:swift#name_check

[filter:list-endpoints]

use = egg:swift#list_endpoints

[filter:proxy-logging]

use = egg:swift#proxy_logging

[filter:bulk]

use = egg:swift#bulk

[filter:slo]

use = egg:swift#slo

[filter:dlo]

use = egg:swift#dlo

[filter:container-quotas]

use = egg:swift#container_quotas

[filter:account-quotas]

use = egg:swift#account_quotas

[filter:gatekeeper]

use = egg:swift#gatekeeper

[filter:container_sync]

use = egg:swift#container_sync

[filter:xprofile]

use = egg:swift#xprofile

[filter:versioned_writes]

use = egg:swift#versioned_writes

[filter:copy]

use = egg:swift#copy

[filter:keymaster]

use = egg:swift#keymaster
encryption_root_secret = changeme

[filter:kms_keymaster]

use = egg:swift#kms_keymaster

[filter:encryption]

use = egg:swift#encryption

[filter:listing_formats]

use = egg:swift#listing_formats

[filter:symlink]

use = egg:swift#symlink

[root@controller ~]

#

Storage node deployment
Install and configure the storage nodes for Red Hat Enterprise Linux and CentOS

This section describes how to install and configure storage nodes that operate the account, container, and object services. For simplicity, this configuration references two storage nodes, each containing two empty local block storage devices. The instructions use /dev/sdb and /dev/sdc, but you can substitute different values for your particular nodes.

Although Object Storage supports any file system with extended attributes (xattr), testing and benchmarking indicate the best performance and reliability on XFS. For more information on horizontally scaling your environment, see the Deployment Guide.

This section applies to Red Hat Enterprise Linux 7 and CentOS 7.

Prerequisites
Before you install and configure the Object Storage service on the storage nodes, you must prepare the storage devices.

Note

Perform these steps on each storage node.

Install the supporting utility packages:

yum install xfsprogs rsync

Format the /dev/sdb and /dev/sdc devices as XFS:

mkfs.xfs /dev/sdb

mkfs.xfs /dev/sdc

Create the mount point directory structure:

mkdir -p /srv/node/sdb

mkdir -p /srv/node/sdc

Edit the /etc/fstab file and add the following to it:

/dev/sdb /srv/node/sdb xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdc /srv/node/sdc xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
Mount the devices:

mount /srv/node/sdb

mount /srv/node/sdc

Create or edit the /etc/rsyncd.conf file to contain the following:

uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = MANAGEMENT_INTERFACE_IP_ADDRESS

[account]

max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]

max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]

max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.

Note

The rsync service requires no authentication, so consider running it on a private network in production environments.

Start the rsyncd service and configure it to start when the system boots:

systemctl enable rsyncd.service

systemctl start rsyncd.service

Install and configure components
Note

Default configuration files vary by distribution. You might need to add these sections and options rather than modifying existing sections and options. Also, an ellipsis (…) in the configuration snippets indicates potential default configuration options that you should retain.

Note

Perform these steps on each storage node.

Install the packages:

yum install openstack-swift-account openstack-swift-container \
  openstack-swift-object
Obtain the accounting, container, and object service configuration files from the Object Storage source repository:

curl -o /etc/swift/account-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/queens

curl -o /etc/swift/container-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/queens

curl -o /etc/swift/object-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/queens

Edit the /etc/swift/account-server.conf file and complete the following actions:

In the [DEFAULT] section, configure the bind IP address, bind port, user, configuration directory, and mount point directory:

[DEFAULT]

bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6202
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.

In the [pipeline:main] section, enable the appropriate modules:

[pipeline:main]

pipeline = healthcheck recon account-server
Note

For more information on other modules that enable additional features, see the Deployment Guide.

In the [filter:recon] section, configure the recon (meters) cache directory:

[filter:recon]

use = egg:swift#recon

recon_cache_path = /var/cache/swift
Edit the /etc/swift/container-server.conf file and complete the following actions:

In the [DEFAULT] section, configure the bind IP address, bind port, user, configuration directory, and mount point directory:

[DEFAULT]

bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6201
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.

In the [pipeline:main] section, enable the appropriate modules:

[pipeline:main]

pipeline = healthcheck recon container-server
Note

For more information on other modules that enable additional features, see the Deployment Guide.

In the [filter:recon] section, configure the recon (meters) cache directory:

[filter:recon]

use = egg:swift#recon

recon_cache_path = /var/cache/swift
Edit the /etc/swift/object-server.conf file and complete the following actions:

In the [DEFAULT] section, configure the bind IP address, bind port, user, configuration directory, and mount point directory:

[DEFAULT]

bind_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
bind_port = 6200
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node.

In the [pipeline:main] section, enable the appropriate modules:

[pipeline:main]

pipeline = healthcheck recon object-server
Note

For more information on other modules that enable additional features, see the Deployment Guide.

In the [filter:recon] section, configure the recon (meters) cache and lock directories:

[filter:recon]

use = egg:swift#recon

recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock
Ensure proper ownership of the mount point directory structure:

chown -R swift:swift /srv/node

Create the recon directory and ensure proper ownership of it:

mkdir -p /var/cache/swift

chown -R root:swift /var/cache/swift

chmod -R 775 /var/cache/swift

Preparation
(1) Install the required packages first

[root@server2 ~]

# yum install xfsprogs rsync
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile

  • base: mirrors.usc.edu
  • centos-qemu-ev: repos-lax.psychz.net
  • centos-sclo-rh: repos-lax.psychz.net
  • centos-sclo-sclo: repos-lax.psychz.net
  • epel: mirror.rnet.missouri.edu
  • extras: repos-lax.psychz.net
  • updates: repos-lax.psychz.net
  • webtatic: uk.repo.webtatic.com
    Package xfsprogs-4.5.0-19.el7_6.x86_64 already installed and latest version
    Package rsync-3.1.2-4.el7.x86_64 already installed and latest version
    Nothing to do

(2) Format the partitions as XFS

[root@server2 ~]

# mkfs.xfs -f /dev/sdf
meta-data=/dev/sdf isize=512 agcount=4, agsize=30524162 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=122096646, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=59617, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0

[root@server2 ~]

# mkfs.xfs -f /dev/sdg
meta-data=/dev/sdg isize=512 agcount=4, agsize=30524162 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=122096646, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=59617, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0

[root@server2 ~]

# mkfs.xfs -f /dev/sdh
meta-data=/dev/sdh isize=512 agcount=4, agsize=30524162 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=122096646, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=59617, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none

(3) Create the mount point directories

[root@server2 ~]

# mkdir -p /srv/node/sdf

[root@server2 ~]

# mkdir -p /srv/node/sdg

[root@server2 ~]

# mkdir -p /srv/node/sdh

(4) Add the mounts to /etc/fstab so they are mounted at boot

[root@server2 ~]

# vim /etc/fstab

[root@server2 ~]

# tail -3 /etc/fstab
/dev/sdf /srv/node/sdf xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdg /srv/node/sdg xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdh /srv/node/sdh xfs noatime,nodiratime,nobarrier,logbufs=8 0 2

(5) Mount the partitions

[root@server2 ~]

# mount /srv/node/sdf

[root@server2 ~]

# mount /srv/node/sdg

[root@server2 ~]

# mount /srv/node/sdh

[root@server2 ~]

# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdf 466G 33M 466G 1% /srv/node/sdf
/dev/sdg 466G 33M 466G 1% /srv/node/sdg
/dev/sdh 466G 33M 466G 1% /srv/node/sdh

(6) Configure rsync

[root@server2 ~]

#

[root@server2 ~]

# cat /etc/rsyncd.conf

# /etc/rsyncd: configuration file for rsync daemon mode

# See rsyncd.conf man page for more options.

# configuration example:

# uid = nobody
# gid = nobody
# use chroot = yes
# max connections = 4
# pid file = /var/run/rsyncd.pid
# exclude = lost+found/
# transfer logging = yes
# timeout = 900
# ignore nonreadable = yes
# dont compress = *.gz *.tgz *.zip *.z *.Z *.rpm *.deb *.bz2

# [ftp]
#        path = /home/ftp
#        comment = ftp export area

uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.0.9

[account]

max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]

max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]

max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock

[root@server2 ~]

#

(7) Start the rsync service

[root@server2 ~]

# systemctl enable rsyncd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/rsyncd.service to /usr/lib/systemd/system/rsyncd.service.

[root@server2 ~]

# systemctl start rsyncd.service

[root@server2 ~]

# systemctl status rsyncd.service
● rsyncd.service – fast remote file copy program daemon
Loaded: loaded (/usr/lib/systemd/system/rsyncd.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-04-16 02:37:33 EDT; 4s ago
Main PID: 26003 (rsync)
Tasks: 1
CGroup: /system.slice/rsyncd.service
└─26003 /usr/bin/rsync --daemon --no-detach

Apr 16 02:37:33 server2 systemd[1]: Started fast remote file copy program daemon.

[root@server2 ~]

#
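As a quick sanity check (optional, not part of the official guide), you can ask the rsync daemon to list its modules; the account, container, and object modules defined above should appear. The address is the storage node's management IP used in this example:

rsync 192.168.0.9::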

Install and configure the Swift components
(1) Install the packages

[root@server2 ~]

# yum install openstack-swift-account openstack-swift-container \

openstack-swift-object
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile

  • base: mirrors.usc.edu
  • centos-qemu-ev: repos-lax.psychz.net
  • centos-sclo-rh: repos-lax.psychz.net
  • centos-sclo-sclo: repos-lax.psychz.net
  • epel: mirror.rnet.missouri.edu
  • extras: repos-lax.psychz.net
  • updates: repos-lax.psychz.net
  • webtatic: uk.repo.webtatic.com
    Resolving Dependencies
    –> Running transaction check
    —> Package openstack-swift-account.noarch 0:2.19.1-1.el7 will be installed
#

Installed:
openstack-swift-account.noarch 0:2.19.1-1.el7 openstack-swift-container.noarch 0:2.19.1-1.el7 openstack-swift-object.noarch 0:2.19.1-1.el7

Dependency Installed:
liberasurecode.x86_64 0:1.5.0-1.el7 libtomcrypt.x86_64 0:1.17-26.el7 libtommath.x86_64 0:0.42.0-6.el7 python-dns.noarch 0:1.15.0-5.el7
python-swift.noarch 0:2.19.1-1.el7 python2-crypto.x86_64 0:2.6.1-16.el7 python2-pyeclib.x86_64 0:1.5.0-3.el7

Complete!

(2) Download the configuration files

[root@server2 ~]

# curl -o /etc/swift/account-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/account-server.conf-sample?h=stable/queens
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 9777 100 9777 0 0 8272 0 0:00:01 0:00:01 –:–:– 8271

[root@server2 ~]

# curl -o /etc/swift/container-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/container-server.conf-sample?h=stable/queens
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 11028 100 11028 0 0 9419 0 0:00:01 0:00:01 –:–:– 9425

[root@server2 ~]

# curl -o /etc/swift/object-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/object-server.conf-sample?h=stable/queens
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 18178 100 18178 0 0 14659 0 0:00:01 0:00:01 –:–:– 14671

[root@server2 ~]

# ll /etc/swift/
total 52
drwxr-xr-x 2 root root 6 Feb 25 22:28 account-server
-rw-r—– 1 swift swift 9777 Apr 16 02:39 account-server.conf
drwxr-xr-x 2 root root 6 Feb 25 22:28 container-server
-rw-r—– 1 swift swift 11028 Apr 16 02:39 container-server.conf
-rw-r—– 1 swift swift 1181 Feb 25 22:26 internal-client.conf
drwxr-xr-x 2 root root 6 Feb 25 22:28 object-server
-rw-r—– 1 swift swift 18178 Apr 16 02:39 object-server.conf
-rw-r—– 1 root swift 63 Feb 25 22:26 swift.conf

(3) Edit /etc/swift/account-server.conf

[root@server2 ~]

# vim /etc/swift/account-server.conf

[root@server2 ~]

# grep -v '^#' /etc/swift/account-server.conf | grep -v '^$'
[DEFAULT]
bind_ip = 192.168.0.9
bind_port = 6202
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]

pipeline = healthcheck recon account-server

[app:account-server]

use = egg:swift#account

[filter:healthcheck]

use = egg:swift#healthcheck

[filter:recon]

use = egg:swift#recon
recon_cache_path = /var/cache/swift

[account-replicator]

[account-auditor]

[account-reaper]

[filter:xprofile]

use = egg:swift#xprofile

[root@server2 ~]

#

(4) Edit the configuration file /etc/swift/container-server.conf

[root@server2 ~]

# vim /etc/swift/container-server.conf

[root@server2 ~]

# grep -v '^#' /etc/swift/container-server.conf | grep -v '^$'
[DEFAULT]
bind_ip = 192.168.0.9
bind_port = 6201
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]

pipeline = healthcheck recon container-server

[app:container-server]

use = egg:swift#container

[filter:healthcheck]

use = egg:swift#healthcheck

[filter:recon]

use = egg:swift#recon
recon_cache_path = /var/cache/swift

[container-replicator]

[container-updater]

[container-auditor]

[container-sync]

[filter:xprofile]

use = egg:swift#xprofile

[root@server2 ~]

#

(5) Edit the configuration file /etc/swift/object-server.conf

[root@server2 ~]

# vim /etc/swift/object-server.conf

[root@server2 ~]

# grep -v '^#' /etc/swift/object-server.conf | grep -v '^$'
[DEFAULT]
bind_ip = 192.168.0.9
bind_port = 6200
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]

pipeline = healthcheck recon object-server

[app:object-server]

use = egg:swift#object

[filter:healthcheck]

use = egg:swift#healthcheck

[filter:recon]

use = egg:swift#recon
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock

[object-replicator]

[object-reconstructor]

[object-updater]

[object-auditor]

[filter:xprofile]

use = egg:swift#xprofile

(6) Ensure the mount point directories have the correct ownership

[root@server2 ~]

# chown -R swift.swift /srv/node/

[root@server2 ~]

# ll /srv/node/
total 0
drwxr-xr-x 2 swift swift 6 Apr 16 02:31 sdf
drwxr-xr-x 2 swift swift 6 Apr 16 02:31 sdg
drwxr-xr-x 2 swift swift 6 Apr 16 02:31 sdh

(7) Create the recon directory and ensure correct ownership

[root@server2 ~]

# mkdir -p /var/cache/swift

[root@server2 ~]

# chown -R root:swift /var/cache/swift

[root@server2 ~]

# chmod -R 775 /var/cache/swift

[root@server2 ~]

# ll /var/cache/swift/
total 0

[root@server2 ~]

# ll -d /var/cache/swift/
drwxrwxr-x 2 root swift 6 Feb 25 22:28 /var/cache/swift/

1.7.3 Create and distribute the rings
Create and distribute initial rings

Before starting the Object Storage services, you must create the initial account, container, and object rings. The ring builder creates configuration files that each node uses to determine and deploy the storage architecture. For simplicity, this guide uses one region and two zones with 2^10 (1024) maximum partitions, 3 replicas of each object, and 1 hour minimum time between moving a partition more than once. For Object Storage, a partition indicates a directory on a storage device rather than a conventional partition table. For more information, see the Deployment Guide.
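As a reference for the steps below, the general shape of the ring-builder commands is as follows; the angle-bracket values are placeholders, not literal arguments:

swift-ring-builder <name>.builder create <part_power> <replicas> <min_part_hours>

swift-ring-builder <name>.builder add --region <r> --zone <z> --ip <ip> --port <port> \
--device <device> --weight <weight>

swift-ring-builder <name>.builder rebalance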

Note

Perform these steps on the controller node.

Create account ring
The account server uses the account ring to maintain lists of containers.

Change to the /etc/swift directory.

Create the base account.builder file:

swift-ring-builder account.builder create 10 3 1

Note

This command provides no output.

Add each storage node to the ring:

swift-ring-builder account.builder \

add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6202 \
--device DEVICE_NAME --weight DEVICE_WEIGHT
Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with a storage device name on the same storage node. For example, using the first storage node in Install and configure the storage nodes with the /dev/sdb storage device and weight of 100:

swift-ring-builder account.builder add \

--region 1 --zone 1 --ip 10.0.0.51 --port 6202 --device sdb --weight 100
Repeat this command for each storage device on each storage node. In the example architecture, use the command in four variations:

swift-ring-builder account.builder add \

--region 1 --zone 1 --ip 10.0.0.51 --port 6202 --device sdb --weight 100
Device d0r1z1-10.0.0.51:6202R10.0.0.51:6202/sdb_"" with 100.0 weight got id 0

swift-ring-builder account.builder add \

--region 1 --zone 1 --ip 10.0.0.51 --port 6202 --device sdc --weight 100
Device d1r1z2-10.0.0.51:6202R10.0.0.51:6202/sdc_"" with 100.0 weight got id 1

swift-ring-builder account.builder add \

--region 1 --zone 2 --ip 10.0.0.52 --port 6202 --device sdb --weight 100
Device d2r1z3-10.0.0.52:6202R10.0.0.52:6202/sdb_"" with 100.0 weight got id 2

swift-ring-builder account.builder add \

--region 1 --zone 2 --ip 10.0.0.52 --port 6202 --device sdc --weight 100
Device d3r1z4-10.0.0.52:6202R10.0.0.52:6202/sdc_"" with 100.0 weight got id 3
Verify the ring contents:

swift-ring-builder account.builder

account.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 2 zones, 4 devices, 100.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices: id region zone ip address port replication ip replication port name weight partitions balance meta
0 1 1 10.0.0.51 6202 10.0.0.51 6202 sdb 100.00 0 -100.00
1 1 1 10.0.0.51 6202 10.0.0.51 6202 sdc 100.00 0 -100.00
2 1 2 10.0.0.52 6202 10.0.0.52 6202 sdb 100.00 0 -100.00
3 1 2 10.0.0.52 6202 10.0.0.52 6202 sdc 100.00 0 -100.00
Rebalance the ring:

swift-ring-builder account.builder rebalance

Reassigned 1024 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
Create container ring
The container server uses the container ring to maintain lists of objects. However, it does not track object locations.

Change to the /etc/swift directory.

Create the base container.builder file:

swift-ring-builder container.builder create 10 3 1

Note

This command provides no output.

Add each storage node to the ring:

swift-ring-builder container.builder \

add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6201 \
--device DEVICE_NAME --weight DEVICE_WEIGHT
Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with a storage device name on the same storage node. For example, using the first storage node in Install and configure the storage nodes with the /dev/sdb storage device and weight of 100:

swift-ring-builder container.builder add \

--region 1 --zone 1 --ip 10.0.0.51 --port 6201 --device sdb --weight 100
Repeat this command for each storage device on each storage node. In the example architecture, use the command in four variations:

swift-ring-builder container.builder add \

--region 1 --zone 1 --ip 10.0.0.51 --port 6201 --device sdb --weight 100
Device d0r1z1-10.0.0.51:6201R10.0.0.51:6201/sdb_"" with 100.0 weight got id 0

swift-ring-builder container.builder add \

--region 1 --zone 1 --ip 10.0.0.51 --port 6201 --device sdc --weight 100
Device d1r1z2-10.0.0.51:6201R10.0.0.51:6201/sdc_"" with 100.0 weight got id 1

swift-ring-builder container.builder add \

--region 1 --zone 2 --ip 10.0.0.52 --port 6201 --device sdb --weight 100
Device d2r1z3-10.0.0.52:6201R10.0.0.52:6201/sdb_"" with 100.0 weight got id 2

swift-ring-builder container.builder add \

--region 1 --zone 2 --ip 10.0.0.52 --port 6201 --device sdc --weight 100
Device d3r1z4-10.0.0.52:6201R10.0.0.52:6201/sdc_"" with 100.0 weight got id 3
Verify the ring contents:

swift-ring-builder container.builder

container.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 2 zones, 4 devices, 100.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices: id region zone ip address port replication ip replication port name weight partitions balance meta
0 1 1 10.0.0.51 6201 10.0.0.51 6201 sdb 100.00 0 -100.00
1 1 1 10.0.0.51 6201 10.0.0.51 6201 sdc 100.00 0 -100.00
2 1 2 10.0.0.52 6201 10.0.0.52 6201 sdb 100.00 0 -100.00
3 1 2 10.0.0.52 6201 10.0.0.52 6201 sdc 100.00 0 -100.00
Rebalance the ring:

swift-ring-builder container.builder rebalance

Reassigned 1024 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
Create object ring
The object server uses the object ring to maintain lists of object locations on local devices.

Change to the /etc/swift directory.

Create the base object.builder file:

swift-ring-builder object.builder create 10 3 1

Note

This command provides no output.

Add each storage node to the ring:

swift-ring-builder object.builder \

add --region 1 --zone 1 --ip STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS --port 6200 \
--device DEVICE_NAME --weight DEVICE_WEIGHT
Replace STORAGE_NODE_MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the management network on the storage node. Replace DEVICE_NAME with a storage device name on the same storage node. For example, using the first storage node in Install and configure the storage nodes with the /dev/sdb storage device and weight of 100:

swift-ring-builder object.builder add \

--region 1 --zone 1 --ip 10.0.0.51 --port 6200 --device sdb --weight 100
Repeat this command for each storage device on each storage node. In the example architecture, use the command in four variations:

swift-ring-builder object.builder add \

--region 1 --zone 1 --ip 10.0.0.51 --port 6200 --device sdb --weight 100
Device d0r1z1-10.0.0.51:6200R10.0.0.51:6200/sdb_"" with 100.0 weight got id 0

swift-ring-builder object.builder add \

--region 1 --zone 1 --ip 10.0.0.51 --port 6200 --device sdc --weight 100
Device d1r1z2-10.0.0.51:6200R10.0.0.51:6200/sdc_"" with 100.0 weight got id 1

swift-ring-builder object.builder add \

--region 1 --zone 2 --ip 10.0.0.52 --port 6200 --device sdb --weight 100
Device d2r1z3-10.0.0.52:6200R10.0.0.52:6200/sdb_"" with 100.0 weight got id 2

swift-ring-builder object.builder add \

--region 1 --zone 2 --ip 10.0.0.52 --port 6200 --device sdc --weight 100
Device d3r1z4-10.0.0.52:6200R10.0.0.52:6200/sdc_"" with 100.0 weight got id 3
Verify the ring contents:

swift-ring-builder object.builder

object.builder, build version 4
1024 partitions, 3.000000 replicas, 1 regions, 2 zones, 4 devices, 100.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1
The overload factor is 0.00% (0.000000)
Devices: id region zone ip address port replication ip replication port name weight partitions balance meta
0 1 1 10.0.0.51 6200 10.0.0.51 6200 sdb 100.00 0 -100.00
1 1 1 10.0.0.51 6200 10.0.0.51 6200 sdc 100.00 0 -100.00
2 1 2 10.0.0.52 6200 10.0.0.52 6200 sdb 100.00 0 -100.00
3 1 2 10.0.0.52 6200 10.0.0.52 6200 sdc 100.00 0 -100.00
Rebalance the ring:

swift-ring-builder object.builder rebalance

Reassigned 1024 (100.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
Distribute ring configuration files
Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to the /etc/swift directory on each storage node and any additional nodes running the proxy service.
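With more than one storage node, a small loop saves repeating the copy by hand. A sketch, assuming the node addresses from the example architecture above (10.0.0.51 and 10.0.0.52) and that the ring files were built in /etc/swift on the controller:

for node in 10.0.0.51 10.0.0.52; do
  scp /etc/swift/account.ring.gz /etc/swift/container.ring.gz /etc/swift/object.ring.gz \
      root@${node}:/etc/swift/
done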

Create the distributed rings
The arguments 10 3 1 mean: 2^10 = 1024 partitions, 3 replicas of each object, and a minimum interval of 1 hour before a partition may be moved again.
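This also explains the rebalance output seen below: 1024 partitions times 3 replicas gives 3072 partition-replica assignments, which is why the first rebalance reports "Reassigned 3072 (300.00%) partitions" even though there are only 1024 partitions. A quick check of the arithmetic:

echo $(( (1 << 10) * 3 ))   # 1024 partitions * 3 replicas = 3072 assignments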

First create the account ring
(1) Change to the /etc/swift directory

[root@controller ~]

# cd /etc/swift/

[root@controller swift]

# pwd
/etc/swift

[root@controller swift]

# ll
total 56
-rw-r—– 1 root swift 1415 Feb 25 22:26 container-reconciler.conf
-rw-r—– 1 root swift 291 Feb 25 22:26 object-expirer.conf
drwxr-xr-x 2 root root 6 Feb 25 22:28 proxy-server
-rw-r—– 1 root swift 41942 Apr 16 02:21 proxy-server.conf
-rw-r—– 1 root swift 63 Feb 25 22:26 swift.conf

[root@controller swift]

#

(2) Create the account ring builder file

[root@controller swift]

# swift-ring-builder account.builder create 10 3 1

[root@controller swift]

# ll
total 60
-rw-r–r– 1 root root 2443 Apr 16 02:53 account.builder
drwxr-xr-x 2 root root 40 Apr 16 02:53 backups
-rw-r—– 1 root swift 1415 Feb 25 22:26 container-reconciler.conf
-rw-r—– 1 root swift 291 Feb 25 22:26 object-expirer.conf
drwxr-xr-x 2 root root 6 Feb 25 22:28 proxy-server
-rw-r—– 1 root swift 41942 Apr 16 02:21 proxy-server.conf
-rw-r—– 1 root swift 63 Feb 25 22:26 swift.conf

(3) Add the storage node devices to the ring

[root@controller swift]

# swift-ring-builder account.builder add \

--region 1 --zone 1 --ip 192.168.0.9 --port 6202 --device sdf --weight 100
Device d0r1z1-192.168.0.9:6202R192.168.0.9:6202/sdf_"" with 100.0 weight got id 0

[root@controller swift]

# swift-ring-builder account.builder add --region 1 --zone 1 --ip 192.168.0.9 --port 6202 --device sdg --weight 100
Device d1r1z1-192.168.0.9:6202R192.168.0.9:6202/sdg_"" with 100.0 weight got id 1

[root@controller swift]

# swift-ring-builder account.builder add --region 1 --zone 1 --ip 192.168.0.9 --port 6202 --device sdh --weight 100
Device d2r1z1-192.168.0.9:6202R192.168.0.9:6202/sdh_"" with 100.0 weight got id 2

(4) Verify the ring contents

[root@controller swift]

# swift-ring-builder account.builder
account.builder, build version 3, id 7bd98c47ea974305ba9a6267bebb2eaa
1024 partitions, 3.000000 replicas, 1 regions, 1 zones, 3 devices, 100.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1 (0:00:00 remaining)
The overload factor is 0.00% (0.000000)
Ring file account.ring.gz not found, probably it hasn’t been written yet
Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta
0 1 1 192.168.0.9:6202 192.168.0.9:6202 sdf 100.00 0 -100.00
1 1 1 192.168.0.9:6202 192.168.0.9:6202 sdg 100.00 0 -100.00
2 1 1 192.168.0.9:6202 192.168.0.9:6202 sdh 100.00 0 -100.00

(5) Rebalance the ring

[root@controller swift]

# swift-ring-builder account.builder rebalance
Reassigned 3072 (300.00%) partitions. Balance is now 0.00. Dispersion is now 0.00

Create the container ring
(1) Change to the /etc/swift directory

[root@controller swift]

# pwd
/etc/swift

[root@controller swift]

# ll
total 72
-rw-r–r– 1 root root 9339 Apr 16 02:56 account.builder
-rw-r–r– 1 root root 241 Apr 16 02:56 account.ring.gz
drwxr-xr-x 2 root root 108 Apr 16 02:56 backups
-rw-r—– 1 root swift 1415 Feb 25 22:26 container-reconciler.conf
-rw-r—– 1 root swift 291 Feb 25 22:26 object-expirer.conf
drwxr-xr-x 2 root root 6 Feb 25 22:28 proxy-server
-rw-r—– 1 root swift 41942 Apr 16 02:21 proxy-server.conf
-rw-r—– 1 root swift 63 Feb 25 22:26 swift.conf
(2) Create the container ring builder file

[root@controller swift]

# swift-ring-builder container.builder create 10 3 1

[root@controller swift]

# ll
total 76
-rw-r–r– 1 root root 9339 Apr 16 02:56 account.builder
-rw-r–r– 1 root root 241 Apr 16 02:56 account.ring.gz
drwxr-xr-x 2 root root 144 Apr 16 02:57 backups
-rw-r–r– 1 root root 2443 Apr 16 02:57 container.builder
-rw-r—– 1 root swift 1415 Feb 25 22:26 container-reconciler.conf
-rw-r—– 1 root swift 291 Feb 25 22:26 object-expirer.conf
drwxr-xr-x 2 root root 6 Feb 25 22:28 proxy-server
-rw-r—– 1 root swift 41942 Apr 16 02:21 proxy-server.conf
-rw-r—– 1 root swift 63 Feb 25 22:26 swift.conf
(3) Add the storage node devices to the ring

[root@controller swift]

# swift-ring-builder container.builder add \

--region 1 --zone 1 --ip 192.168.0.9 --port 6201 --device sdf --weight 100
Device d0r1z1-192.168.0.9:6201R192.168.0.9:6201/sdf_"" with 100.0 weight got id 0

[root@controller swift]

# swift-ring-builder container.builder add --region 1 --zone 1 --ip 192.168.0.9 --port 6201 --device sdg --weight 100
Device d1r1z1-192.168.0.9:6201R192.168.0.9:6201/sdg_"" with 100.0 weight got id 1

[root@controller swift]

# swift-ring-builder container.builder add --region 1 --zone 1 --ip 192.168.0.9 --port 6201 --device sdh --weight 100
Device d2r1z1-192.168.0.9:6201R192.168.0.9:6201/sdh_"" with 100.0 weight got id 2
(4) Verify the ring contents

[root@controller swift]

# swift-ring-builder container.builder
container.builder, build version 3, id 2442d2c046794f668c249cd4e52f8eec
1024 partitions, 3.000000 replicas, 1 regions, 1 zones, 3 devices, 100.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1 (0:00:00 remaining)
The overload factor is 0.00% (0.000000)
Ring file container.ring.gz not found, probably it hasn’t been written yet
Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta
0 1 1 192.168.0.9:6201 192.168.0.9:6201 sdf 100.00 0 -100.00
1 1 1 192.168.0.9:6201 192.168.0.9:6201 sdg 100.00 0 -100.00
2 1 1 192.168.0.9:6201 192.168.0.9:6201 sdh 100.00 0 -100.00
(5) Rebalance the ring

[root@controller swift]

# swift-ring-builder container.builder rebalance
Reassigned 3072 (300.00%) partitions. Balance is now 0.00. Dispersion is now 0.00

Create the object ring
(1) Change to the /etc/swift directory

[root@controller swift]

# pwd
/etc/swift

[root@controller swift]

# ll
total 88
-rw-r–r– 1 root root 9339 Apr 16 02:56 account.builder
-rw-r–r– 1 root root 241 Apr 16 02:56 account.ring.gz
drwxr-xr-x 2 root root 216 Apr 16 02:59 backups
-rw-r–r– 1 root root 9339 Apr 16 02:59 container.builder
-rw-r—– 1 root swift 1415 Feb 25 22:26 container-reconciler.conf
-rw-r–r– 1 root root 242 Apr 16 02:59 container.ring.gz
-rw-r—– 1 root swift 291 Feb 25 22:26 object-expirer.conf
drwxr-xr-x 2 root root 6 Feb 25 22:28 proxy-server
-rw-r—– 1 root swift 41942 Apr 16 02:21 proxy-server.conf
-rw-r—– 1 root swift 63 Feb 25 22:26 swift.conf

[root@controller swift]

#
(2) Create the object ring builder file

[root@controller swift]

# swift-ring-builder object.builder create 10 3 1

[root@controller swift]

# ll
total 92
-rw-r–r– 1 root root 9339 Apr 16 02:56 account.builder
-rw-r–r– 1 root root 241 Apr 16 02:56 account.ring.gz
drwxr-xr-x 2 root root 249 Apr 16 03:02 backups
-rw-r–r– 1 root root 9339 Apr 16 02:59 container.builder
-rw-r—– 1 root swift 1415 Feb 25 22:26 container-reconciler.conf
-rw-r–r– 1 root root 242 Apr 16 02:59 container.ring.gz
-rw-r–r– 1 root root 2443 Apr 16 03:02 object.builder
-rw-r—– 1 root swift 291 Feb 25 22:26 object-expirer.conf
drwxr-xr-x 2 root root 6 Feb 25 22:28 proxy-server
-rw-r—– 1 root swift 41942 Apr 16 02:21 proxy-server.conf
-rw-r—– 1 root swift 63 Feb 25 22:26 swift.conf
(3) Add the storage node devices to the ring

[root@controller swift]

# swift-ring-builder object.builder add \

--region 1 --zone 1 --ip 192.168.0.9 --port 6200 --device sdf --weight 100
Device d0r1z1-192.168.0.9:6200R192.168.0.9:6200/sdf_"" with 100.0 weight got id 0

[root@controller swift]

# swift-ring-builder object.builder add --region 1 --zone 1 --ip 192.168.0.9 --port 6200 --device sdg --weight 100
Device d1r1z1-192.168.0.9:6200R192.168.0.9:6200/sdg_"" with 100.0 weight got id 1

[root@controller swift]

# swift-ring-builder object.builder add --region 1 --zone 1 --ip 192.168.0.9 --port 6200 --device sdh --weight 100
Device d2r1z1-192.168.0.9:6200R192.168.0.9:6200/sdh_"" with 100.0 weight got id 2
(4) Verify the ring contents

[root@controller swift]

# swift-ring-builder object.builder
object.builder, build version 3, id 10c3196111ba4a278860bfefd6a4d510
1024 partitions, 3.000000 replicas, 1 regions, 1 zones, 3 devices, 100.00 balance, 0.00 dispersion
The minimum number of hours before a partition can be reassigned is 1 (0:00:00 remaining)
The overload factor is 0.00% (0.000000)
Ring file object.ring.gz not found, probably it hasn’t been written yet
Devices: id region zone ip address:port replication ip:port name weight partitions balance flags meta
0 1 1 192.168.0.9:6200 192.168.0.9:6200 sdf 100.00 0 -100.00
1 1 1 192.168.0.9:6200 192.168.0.9:6200 sdg 100.00 0 -100.00
2 1 1 192.168.0.9:6200 192.168.0.9:6200 sdh 100.00 0 -100.00
(5) Rebalance the ring

[root@controller swift]

# swift-ring-builder object.builder rebalance
Reassigned 3072 (300.00%) partitions. Balance is now 0.00. Dispersion is now 0.00

If you have multiple nodes, distribute the account.ring.gz, container.ring.gz, and object.ring.gz files to every object storage node (and to any node running the proxy service).

[root@controller swift]

# scp *.gz 192.168.0.9:/etc/swift/
root@192.168.0.9's password:
account.ring.gz 100% 241 153.1KB/s 00:00
container.ring.gz 100% 242 167.8KB/s 00:00
object.ring.gz 100% 240 184.1KB/s 00:00

[root@controller swift]

#

1.7.4 Finish the Swift installation
(1) Download the configuration file from the OpenStack source repository

[root@controller swift]

# curl -o /etc/swift/swift.conf \

https://git.openstack.org/cgit/openstack/swift/plain/etc/swift.conf-sample?h=stable/queens
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 7894 100 7894 0 0 7576 0 0:00:01 0:00:01 –:–:– 7583

[root@controller swift]

# ll /etc/swift/
total 108
-rw-r–r– 1 root root 9339 Apr 16 02:56 account.builder
-rw-r–r– 1 root root 241 Apr 16 02:56 account.ring.gz
drwxr-xr-x 2 root root 315 Apr 16 03:04 backups
-rw-r–r– 1 root root 9339 Apr 16 02:59 container.builder
-rw-r—– 1 root swift 1415 Feb 25 22:26 container-reconciler.conf
-rw-r–r– 1 root root 242 Apr 16 02:59 container.ring.gz
-rw-r–r– 1 root root 9339 Apr 16 03:04 object.builder
-rw-r—– 1 root swift 291 Feb 25 22:26 object-expirer.conf
-rw-r–r– 1 root root 240 Apr 16 03:04 object.ring.gz
drwxr-xr-x 2 root root 6 Feb 25 22:28 proxy-server
-rw-r—– 1 root swift 41942 Apr 16 02:21 proxy-server.conf
-rw-r—– 1 root swift 7894 Apr 16 03:06 swift.conf

(2) Edit the configuration file /etc/swift/swift.conf

[root@controller swift]

# vim /etc/swift/swift.conf

[root@controller swift]

# grep -v '^#' /etc/swift/swift.conf | grep -v '^$'

[swift-hash]

swift_hash_path_suffix = swift
swift_hash_path_prefix = swift

[storage-policy:0]

name = Policy-0
default = yes
aliases = yellow, orange

[swift-constraints]

[root@controller swift]

#
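Note that swift_hash_path_suffix and swift_hash_path_prefix salt the hashing of object paths. The value "swift" used here is fine for a lab, but in a real deployment they should be unique, secret random strings that are identical on every node and never changed afterwards. One possible way to generate such values (an example, not from the official guide):

openssl rand -hex 16
openssl rand -hex 16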

(3) Copy the edited swift.conf to every object storage node

[root@controller swift]

# scp /etc/swift/swift.conf 192.168.0.9:/etc/swift/swift.conf
root@192.168.0.9's password:
swift.conf 100% 7888 2.9MB/s 00:00

[root@controller swift]

#

(4) Ensure correct ownership on all nodes
On the controller node:

[root@controller swift]

# chown -R root.swift /etc/swift/

[root@controller swift]

# ll /etc/swift/
total 108
-rw-r–r– 1 root swift 9339 Apr 16 02:56 account.builder
-rw-r–r– 1 root swift 241 Apr 16 02:56 account.ring.gz
drwxr-xr-x 2 root swift 315 Apr 16 03:04 backups
-rw-r–r– 1 root swift 9339 Apr 16 02:59 container.builder
-rw-r—– 1 root swift 1415 Feb 25 22:26 container-reconciler.conf
-rw-r–r– 1 root swift 242 Apr 16 02:59 container.ring.gz
-rw-r–r– 1 root swift 9339 Apr 16 03:04 object.builder
-rw-r—– 1 root swift 291 Feb 25 22:26 object-expirer.conf
-rw-r–r– 1 root swift 240 Apr 16 03:04 object.ring.gz
drwxr-xr-x 2 root swift 6 Feb 25 22:28 proxy-server
-rw-r—– 1 root swift 41942 Apr 16 02:21 proxy-server.conf
-rw-r—– 1 root swift 7888 Apr 16 03:37 swift.conf
On the storage node:

[root@server2 ~]

# chown -R root.swift /etc/swift/

[root@server2 ~]

# ll /etc/swift/
total 56
drwxr-xr-x 2 root swift 6 Feb 25 22:28 account-server
-rw-r—– 1 root swift 9890 Apr 16 02:42 account-server.conf
drwxr-xr-x 2 root swift 6 Feb 25 22:28 container-server
-rw-r—– 1 root swift 11142 Apr 16 02:44 container-server.conf
-rw-r—– 1 root swift 1181 Feb 25 22:26 internal-client.conf
drwxr-xr-x 2 root swift 6 Feb 25 22:28 object-server
-rw-r—– 1 root swift 18291 Apr 16 02:46 object-server.conf
-rw-r—– 1 root swift 7888 Apr 16 03:39 swift.conf

(5) On the controller node, start the services and enable them to start at boot

[root@controller ~]

# systemctl enable openstack-swift-proxy.service memcached.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-proxy.service to /usr/lib/systemd/system/openstack-swift-proxy.service.

[root@controller ~]

# systemctl start openstack-swift-proxy.service memcached.service

[root@controller ~]

# systemctl status openstack-swift-proxy.service memcached.service
● openstack-swift-proxy.service – OpenStack Object Storage (swift) – Proxy Server
Loaded: loaded (/usr/lib/systemd/system/openstack-swift-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-04-16 03:42:08 EDT; 4s ago
Main PID: 10867 (swift-proxy-ser)
Tasks: 5
CGroup: /system.slice/openstack-swift-proxy.service
├─10867 /usr/bin/python2 /usr/bin/swift-proxy-server /etc/swift/proxy-server.conf
├─10884 /usr/bin/python2 /usr/bin/swift-proxy-server /etc/swift/proxy-server.conf
├─10885 /usr/bin/python2 /usr/bin/swift-proxy-server /etc/swift/proxy-server.conf
├─10886 /usr/bin/python2 /usr/bin/swift-proxy-server /etc/swift/proxy-server.conf
└─10887 /usr/bin/python2 /usr/bin/swift-proxy-server /etc/swift/proxy-server.conf

Apr 16 03:42:09 controller proxy-server[10886]: Adding required filter listing_formats to pipeline at position 5
Apr 16 03:42:09 controller proxy-server[10886]: Pipeline was modified. New pipeline is “catch_errors gatekeeper healthcheck proxy-logging …server”.
Apr 16 03:42:09 controller proxy-server[10884]: Starting Keystone auth_token middleware
Apr 16 03:42:09 controller proxy-server[10885]: Starting Keystone auth_token middleware
Apr 16 03:42:09 controller proxy-server[10887]: Starting Keystone auth_token middleware
Apr 16 03:42:09 controller proxy-server[10884]: AuthToken middleware is set with keystone_authtoken.service_token_roles_required set to Fa…to True.
Apr 16 03:42:09 controller proxy-server[10885]: AuthToken middleware is set with keystone_authtoken.service_token_roles_required set to Fa…to True.
Apr 16 03:42:09 controller proxy-server[10887]: AuthToken middleware is set with keystone_authtoken.service_token_roles_required set to Fa…to True.
Apr 16 03:42:09 controller proxy-server[10886]: Starting Keystone auth_token middleware
Apr 16 03:42:09 controller proxy-server[10886]: AuthToken middleware is set with keystone_authtoken.service_token_roles_required set to Fa…to True.

● memcached.service – memcached daemon
Loaded: loaded (/usr/lib/systemd/system/memcached.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2019-04-15 20:39:53 EDT; 7h ago
Main PID: 23936 (memcached)
Tasks: 10
CGroup: /system.slice/memcached.service
└─23936 /usr/bin/memcached -p 11211 -u memcached -m 64 -c 1024 -l 127.0.0.1,::1,controller

Apr 15 20:39:53 controller systemd[1]: Started memcached daemon.
Hint: Some lines were ellipsized, use -l to show in full.

[root@controller ~]

#

(6) On the storage node, start the services and enable them to start at boot
The account services:

[root@server2 ~]

# systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service \

openstack-swift-account-reaper.service openstack-swift-account-replicator.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-account.service to /usr/lib/systemd/system/openstack-swift-account.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-account-auditor.service to /usr/lib/systemd/system/openstack-swift-account-auditor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-account-reaper.service to /usr/lib/systemd/system/openstack-swift-account-reaper.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-account-replicator.service to /usr/lib/systemd/system/openstack-swift-account-replicator.service.

[root@server2 ~]

# systemctl status openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
● openstack-swift-account.service – OpenStack Object Storage (swift) – Account Server
Loaded: loaded (/usr/lib/systemd/system/openstack-swift-account.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-04-16 03:43:21 EDT; 1min 17s ago
Main PID: 28859 (swift-account-s)
Tasks: 5
CGroup: /system.slice/openstack-swift-account.service
├─28859 /usr/bin/python2 /usr/bin/swift-account-server /etc/swift/account-server.conf
├─28895 /usr/bin/python2 /usr/bin/swift-account-server /etc/swift/account-server.conf
├─28898 /usr/bin/python2 /usr/bin/swift-account-server /etc/swift/account-server.conf
├─28901 /usr/bin/python2 /usr/bin/swift-account-server /etc/swift/account-server.conf
└─28904 /usr/bin/python2 /usr/bin/swift-account-server /etc/swift/account-server.conf

Apr 16 03:43:21 server2 systemd[1]: Started OpenStack Object Storage (swift) – Account Server.
Apr 16 03:43:21 server2 account-server[28859]: Started child 28895 from parent 28859
Apr 16 03:43:21 server2 account-server[28859]: Started child 28898 from parent 28859
Apr 16 03:43:21 server2 account-server[28859]: Started child 28901 from parent 28859
Apr 16 03:43:21 server2 account-server[28859]: Started child 28904 from parent 28859

● openstack-swift-account-auditor.service – OpenStack Object Storage (swift) – Account Auditor
Loaded: loaded (/usr/lib/systemd/system/openstack-swift-account-auditor.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-04-16 03:43:21 EDT; 1min 17s ago
Main PID: 28860 (swift-account-a)
Tasks: 1
CGroup: /system.slice/openstack-swift-account-auditor.service
└─28860 /usr/bin/python2 /usr/bin/swift-account-auditor /etc/swift/account-server.conf

Apr 16 03:43:21 server2 systemd[1]: Started OpenStack Object Storage (swift) – Account Auditor.
Apr 16 03:43:21 server2 account-auditor[28860]: Starting 28860

● openstack-swift-account-reaper.service – OpenStack Object Storage (swift) – Account Reaper
Loaded: loaded (/usr/lib/systemd/system/openstack-swift-account-reaper.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-04-16 03:43:21 EDT; 1min 17s ago
Main PID: 28861 (swift-account-r)
Tasks: 1
CGroup: /system.slice/openstack-swift-account-reaper.service
└─28861 /usr/bin/python2 /usr/bin/swift-account-reaper /etc/swift/account-server.conf

Apr 16 03:43:21 server2 systemd[1]: Started OpenStack Object Storage (swift) – Account Reaper.
Apr 16 03:43:21 server2 account-reaper[28861]: Starting 28861

● openstack-swift-account-replicator.service – OpenStack Object Storage (swift) – Account Replicator
Loaded: loaded (/usr/lib/systemd/system/openstack-swift-account-replicator.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-04-16 03:44:36 EDT; 2s ago
Main PID: 28995 (swift-account-r)
Tasks: 1
CGroup: /system.slice/openstack-swift-account-replicator.service
└─28995 /usr/bin/python2 /usr/bin/swift-account-replicator /etc/swift/account-server.conf

Apr 16 03:44:36 server2 systemd[1]: Started OpenStack Object Storage (swift) – Account Replicator.
Apr 16 03:44:37 server2 account-replicator[28995]: Starting 28995

Start the container services

[root@server2 ~]

# systemctl enable openstack-swift-container.service \

openstack-swift-container-auditor.service openstack-swift-container-replicator.service \
openstack-swift-container-updater.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-container.service to /usr/lib/systemd/system/openstack-swift-container.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-container-auditor.service to /usr/lib/systemd/system/openstack-swift-container-auditor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-container-replicator.service to /usr/lib/systemd/system/openstack-swift-container-replicator.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-container-updater.service to /usr/lib/systemd/system/openstack-swift-container-updater.service.

[root@server2 ~]

# systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service

[root@server2 ~]

# systemctl status openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
● openstack-swift-container.service – OpenStack Object Storage (swift) – Container Server
Loaded: loaded (/usr/lib/systemd/system/openstack-swift-container.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-04-16 03:46:46 EDT; 5s ago
Main PID: 29097 (swift-container)
Tasks: 5
CGroup: /system.slice/openstack-swift-container.service
├─29097 /usr/bin/python2 /usr/bin/swift-container-server /etc/swift/container-server.conf
├─29135 /usr/bin/python2 /usr/bin/swift-container-server /etc/swift/container-server.conf
├─29137 /usr/bin/python2 /usr/bin/swift-container-server /etc/swift/container-server.conf
├─29139 /usr/bin/python2 /usr/bin/swift-container-server /etc/swift/container-server.conf
└─29140 /usr/bin/python2 /usr/bin/swift-container-server /etc/swift/container-server.conf

Apr 16 03:46:46 server2 systemd[1]: Started OpenStack Object Storage (swift) – Container Server.
Apr 16 03:46:47 server2 container-server[29097]: Started child 29135 from parent 29097
Apr 16 03:46:47 server2 container-server[29097]: Started child 29137 from parent 29097
Apr 16 03:46:47 server2 container-server[29097]: Started child 29139 from parent 29097
Apr 16 03:46:47 server2 container-server[29097]: Started child 29140 from parent 29097

● openstack-swift-container-auditor.service – OpenStack Object Storage (swift) – Container Auditor
Loaded: loaded (/usr/lib/systemd/system/openstack-swift-container-auditor.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-04-16 03:46:47 EDT; 4s ago
Main PID: 29098 (swift-container)
Tasks: 1
CGroup: /system.slice/openstack-swift-container-auditor.service
└─29098 /usr/bin/python2 /usr/bin/swift-container-auditor /etc/swift/container-server.conf

Apr 16 03:46:47 server2 systemd[1]: Started OpenStack Object Storage (swift) – Container Auditor.
Apr 16 03:46:47 server2 container-auditor[29098]: Starting 29098

● openstack-swift-container-replicator.service – OpenStack Object Storage (swift) – Container Replicator
Loaded: loaded (/usr/lib/systemd/system/openstack-swift-container-replicator.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-04-16 03:46:47 EDT; 4s ago
Main PID: 29099 (swift-container)
Tasks: 1
CGroup: /system.slice/openstack-swift-container-replicator.service
└─29099 /usr/bin/python2 /usr/bin/swift-container-replicator /etc/swift/container-server.conf

Apr 16 03:46:47 server2 systemd[1]: Started OpenStack Object Storage (swift) – Container Replicator.
Apr 16 03:46:47 server2 container-replicator[29099]: Starting 29099

● openstack-swift-container-updater.service – OpenStack Object Storage (swift) – Container Updater
Loaded: loaded (/usr/lib/systemd/system/openstack-swift-container-updater.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-04-16 03:46:47 EDT; 4s ago
Main PID: 29100 (swift-container)
Tasks: 1
CGroup: /system.slice/openstack-swift-container-updater.service
└─29100 /usr/bin/python2 /usr/bin/swift-container-updater /etc/swift/container-server.conf

Apr 16 03:46:47 server2 systemd[1]: Started OpenStack Object Storage (swift) – Container Updater.
Apr 16 03:46:47 server2 container-updater[29100]: Starting 29100

[root@server2 ~]

#

Start the object services

[root@server2 ~]

# systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service \

openstack-swift-object-replicator.service openstack-swift-object-updater.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-object.service to /usr/lib/systemd/system/openstack-swift-object.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-object-auditor.service to /usr/lib/systemd/system/openstack-swift-object-auditor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-object-replicator.service to /usr/lib/systemd/system/openstack-swift-object-replicator.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-swift-object-updater.service to /usr/lib/systemd/system/openstack-swift-object-updater.service.

[root@server2 ~]

# systemctl start openstack-swift-object.service openstack-swift-object-auditor.service \
openstack-swift-object-replicator.service openstack-swift-object-updater.service

[root@server2 ~]

# systemctl status openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
● openstack-swift-object.service – OpenStack Object Storage (swift) – Object Server
Loaded: loaded (/usr/lib/systemd/system/openstack-swift-object.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-04-16 03:47:39 EDT; 5s ago
Main PID: 29208 (swift-object-se)
Tasks: 5
CGroup: /system.slice/openstack-swift-object.service
├─29208 /usr/bin/python2 /usr/bin/swift-object-server /etc/swift/object-server.conf
├─29238 /usr/bin/python2 /usr/bin/swift-object-server /etc/swift/object-server.conf
├─29239 /usr/bin/python2 /usr/bin/swift-object-server /etc/swift/object-server.conf
├─29240 /usr/bin/python2 /usr/bin/swift-object-server /etc/swift/object-server.conf
└─29241 /usr/bin/python2 /usr/bin/swift-object-server /etc/swift/object-server.conf

Apr 16 03:47:39 server2 systemd[1]: Started OpenStack Object Storage (swift) – Object Server.
Apr 16 03:47:40 server2 object-server[29208]: Started child 29238 from parent 29208
Apr 16 03:47:40 server2 object-server[29208]: Started child 29239 from parent 29208
Apr 16 03:47:40 server2 object-server[29208]: Started child 29240 from parent 29208
Apr 16 03:47:40 server2 object-server[29208]: Started child 29241 from parent 29208

● openstack-swift-object-auditor.service – OpenStack Object Storage (swift) – Object Auditor
Loaded: loaded (/usr/lib/systemd/system/openstack-swift-object-auditor.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-04-16 03:47:39 EDT; 5s ago
Main PID: 29209 (swift-object-au)
Tasks: 1
CGroup: /system.slice/openstack-swift-object-auditor.service
└─29209 /usr/bin/python2 /usr/bin/swift-object-auditor /etc/swift/object-server.conf

Apr 16 03:47:39 server2 systemd[1]: Started OpenStack Object Storage (swift) – Object Auditor.
Apr 16 03:47:40 server2 object-auditor[29209]: Starting 29209
Apr 16 03:47:40 server2 object-auditor[29284]: Begin object audit “forever” mode (ZBF)
Apr 16 03:47:40 server2 object-auditor[29285]: Begin object audit “forever” mode (ALL)
Apr 16 03:47:40 server2 object-auditor[29284]: Object audit (ZBF) “forever” mode completed: 0.00s. Total quarantined: 0, Total errors: 0, …te: 0.00
Apr 16 03:47:40 server2 object-auditor[29285]: Object audit (ALL) “forever” mode completed: 0.00s. Total quarantined: 0, Total errors: 0, …te: 0.00

● openstack-swift-object-replicator.service – OpenStack Object Storage (swift) – Object Replicator
Loaded: loaded (/usr/lib/systemd/system/openstack-swift-object-replicator.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-04-16 03:47:39 EDT; 5s ago
Main PID: 29210 (swift-object-re)
Tasks: 1
CGroup: /system.slice/openstack-swift-object-replicator.service
└─29210 /usr/bin/python2 /usr/bin/swift-object-replicator /etc/swift/object-server.conf

Apr 16 03:47:39 server2 systemd[1]: Started OpenStack Object Storage (swift) – Object Replicator.
Apr 16 03:47:40 server2 object-replicator[29210]: Starting 29210
Apr 16 03:47:40 server2 object-replicator[29210]: Starting object replicator in daemon mode.
Apr 16 03:47:40 server2 object-replicator[29210]: Starting object replication pass.
Apr 16 03:47:40 server2 object-replicator[29210]: Nothing replicated for 0.429885149002 seconds.
Apr 16 03:47:40 server2 object-replicator[29210]: Object replication complete. (0.01 minutes)

● openstack-swift-object-updater.service – OpenStack Object Storage (swift) – Object Updater
Loaded: loaded (/usr/lib/systemd/system/openstack-swift-object-updater.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-04-16 03:47:39 EDT; 5s ago
Main PID: 29211 (swift-object-up)
Tasks: 1
CGroup: /system.slice/openstack-swift-object-updater.service
└─29211 /usr/bin/python2 /usr/bin/swift-object-updater /etc/swift/object-server.conf

Apr 16 03:47:39 server2 systemd[1]: Started OpenStack Object Storage (swift) – Object Updater.
Apr 16 03:47:40 server2 object-updater[29211]: Starting 29211
Hint: Some lines were ellipsized, use -l to show in full.

[root@server2 ~]

#

1.7.5 Verify the configuration
Verify operation

Verify operation of the Object Storage service.

Note

Perform these steps on the controller node.

Warning

If you are using Red Hat Enterprise Linux 7 or CentOS 7 and one or more of these steps do not work, check the /var/log/audit/audit.log file for SELinux messages indicating denial of actions for the swift processes. If present, change the security context of the /srv/node directory to the lowest security level (s0) for the swift_data_t type, object_r role and the system_u user:

chcon -R system_u:object_r:swift_data_t:s0 /srv/node

Source the demo credentials:

$ . demo-openrc
Show the service status:

$ swift stat
Account: AUTH_ed0b60bf607743088218b0a533d5943f
Containers: 0
Objects: 0
Bytes: 0
X-Account-Project-Domain-Id: default
X-Timestamp: 1444143887.71539
X-Trans-Id: tx1396aeaf17254e94beb34-0056143bde
X-Openstack-Request-Id: tx1396aeaf17254e94beb34-0056143bde
Content-Type: text/plain; charset=utf-8
Accept-Ranges: bytes
Create container1 container:

$ openstack container create container1
+—————————————+————+————————————+
| account | container | x-trans-id |
+—————————————+————+————————————+
| AUTH_ed0b60bf607743088218b0a533d5943f | container1 | tx8c4034dc306c44dd8cd68-0056f00a4a |
+—————————————+————+————————————+
Upload a test file to the container1 container:

$ openstack object create container1 FILE
+——–+————+———————————-+
| object | container | etag |
+——–+————+———————————-+
| FILE | container1 | ee1eca47dc88f4879d8a229cc70a07c6 |
+——–+————+———————————-+
Replace FILE with the name of a local file to upload to the container1 container.

List files in the container1 container:

$ openstack object list container1
+——+
| Name |
+——+
| FILE |
+——+
Download a test file from the container1 container:

$ openstack object save container1 FILE
Replace FILE with the name of the file uploaded to the container1 container.

Note

This command provides no output.

First, if SELinux is enabled, fix the security context:

chcon -R system_u:object_r:swift_data_t:s0 /srv/node
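If a later step fails, you can look for SELinux denials against the swift processes before (or after) changing the context. A possible check, assuming auditd is logging to the default location:

grep denied /var/log/audit/audit.log | grep swift | tail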

(1) Source the admin credentials

[root@controller ~]

# . admin-openrc

(2) Check the Swift status

[root@controller ~]

# swift stat
You will probably find that the command hangs for a long time and finally fails with a 500 error.
This problem cost several hours of troubleshooting before it became clear that the official configuration example is at fault.
Specifically, Keystone no longer uses port 35357 for the admin authentication endpoint as it used to; everything now goes through port 5000, so simply changing the port in the proxy configuration fixes it.
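One way to see what the proxy is actually complaining about in a situation like this is to follow its service log in another terminal while re-running swift stat, for example:

journalctl -u openstack-swift-proxy.service -f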
The correct configuration file looks like this:

[root@controller ~]

# grep -v '^#' /etc/swift/proxy-server.conf | grep -v '^$'
[DEFAULT]
bind_port = 8080
user = swift
swift_dir = /etc/swift

[pipeline:main]

pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

[app:proxy-server]

use = egg:swift#proxy
account_autocreate = True

[filter:tempauth]

use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
user_test5_tester5 = testing5 service

[filter:authtoken]

paste.filter_factory = keystonemiddleware.auth_token:filter_factory
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = 123456
delay_auth_decision = True

[filter:keystoneauth]

use = egg:swift#keystoneauth
operator_roles = admin,user

[filter:healthcheck]

use = egg:swift#healthcheck

[filter:cache]

use = egg:swift#memcache
memcache_servers = controller:11211

[filter:ratelimit]

use = egg:swift#ratelimit

[filter:domain_remap]

use = egg:swift#domain_remap

[filter:catch_errors]

use = egg:swift#catch_errors

[filter:cname_lookup]

use = egg:swift#cname_lookup

[filter:staticweb]

use = egg:swift#staticweb

[filter:tempurl]

use = egg:swift#tempurl

[filter:formpost]

use = egg:swift#formpost

[filter:name_check]

use = egg:swift#name_check

[filter:list-endpoints]

use = egg:swift#list_endpoints

[filter:proxy-logging]

use = egg:swift#proxy_logging

[filter:bulk]

use = egg:swift#bulk

[filter:slo]

use = egg:swift#slo

[filter:dlo]

use = egg:swift#dlo

[filter:container-quotas]

use = egg:swift#container_quotas

[filter:account-quotas]

use = egg:swift#account_quotas

[filter:gatekeeper]

use = egg:swift#gatekeeper

[filter:container_sync]

use = egg:swift#container_sync

[filter:xprofile]

use = egg:swift#xprofile

[filter:versioned_writes]

use = egg:swift#versioned_writes

[filter:copy]

use = egg:swift#copy

[filter:keymaster]

use = egg:swift#keymaster
encryption_root_secret = changeme

[filter:kms_keymaster]

use = egg:swift#kms_keymaster

[filter:encryption]

use = egg:swift#encryption

[filter:listing_formats]

use = egg:swift#listing_formats

[filter:symlink]

use = egg:swift#symlink

[root@controller ~]

#

Verify again after the change

[root@controller ~]

# swift stat
Account: AUTH_25a82cb651074f3494aeb5639d62ed22
Containers: 0
Objects: 0
Bytes: 0
X-Put-Timestamp: 1555407656.09982
X-Timestamp: 1555407656.09982
X-Trans-Id: txfe1e256f37be4b408e481-005cb5a327
Content-Type: text/plain; charset=utf-8
X-Openstack-Request-Id: txfe1e256f37be4b408e481-005cb5a327
Now it works correctly.

(3) Create the container1 container

[root@controller ~]

# openstack container create container1
+—————————————+————+————————————+
| account | container | x-trans-id |
+—————————————+————+————————————+
| AUTH_25a82cb651074f3494aeb5639d62ed22 | container1 | tx779df8491079492abdc28-005cb5a359 |
+—————————————+————+————————————+

(4) Try uploading a file to the container

[root@controller ~]

# ll
total 12440
-rw-r–r– 1 root root 231 Apr 14 02:34 administrative.openstack
-rw-r–r– 1 root root 286 Apr 16 04:38 admin-openrc
-rw——-. 1 root root 1862 Apr 1 20:25 anaconda-ks.cfg
-rw-r–r–. 1 root root 1137 Apr 11 12:04 chrony.conf
-rw-r–r– 1 root root 12716032 Nov 19 2017 cirros-0.4.0-x86_64-disk.img
-rw-r–r– 1 root root 266 Apr 14 03:02 demo-openrc

[root@controller ~]

# openstack object create container1 admin-openrc
+————–+————+———————————-+
| object | container | etag |
+————–+————+———————————-+
| admin-openrc | container1 | 33752ab7aa54be908a503010d4a47699 |
+————–+————+———————————-+

(5) List the files in container1

[root@controller ~]

# openstack object list container1
+————–+
| Name |
+————–+
| admin-openrc |
+————–+

[root@controller ~]

#

(6) Change to another directory and try downloading the file just uploaded

[root@controller ~]

# cd /tmp/

[root@controller tmp]

# pwd
/tmp

[root@controller tmp]

# ls

[root@controller tmp]

# openstack object save container1 admin-openrc

[root@controller tmp]

# ls
admin-openrc
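If you want to clean up after the test, the object and the now-empty container can be removed again (optional, not part of the official verification steps):

openstack object delete container1 admin-openrc
openstack container delete container1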

With the Object Storage service in place, we can return to the Cinder block storage service from earlier and enable its backup service.
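A rough sketch of what that involves, based on the install-guide steps for the cinder backup service; treat the exact backup_driver string as an assumption and check it against the Cinder documentation for your release. On the node running the Cinder services, add to /etc/cinder/cinder.conf:

[DEFAULT]
# store volume backups in Swift (verify the exact driver value for your release)
backup_driver = cinder.backup.drivers.swift

Then start the backup service and enable it at boot:

# systemctl enable openstack-cinder-backup.service
# systemctl start openstack-cinder-backup.service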

Launch an instance

First, create the network. My test environment uses the provider (external) network, so instances connect directly to the outside network.
Provider network

Before launching an instance, you must create the necessary virtual network infrastructure. For networking option 1, an instance uses a provider (external) network that connects to the physical network infrastructure via layer-2 (bridging/switching). This network includes a DHCP server that provides IP addresses to instances.

The admin or other privileged user must create this network because it connects directly to the physical network infrastructure.

Note

The following instructions and diagrams use example IP address ranges. You must adjust them for your particular environment.

[Figure: Networking Option 1: Provider networks - Overview]

[Figure: Networking Option 1: Provider networks - Connectivity]

Create the provider network
On the controller node, source the admin credentials to gain access to admin-only CLI commands:

$ . admin-openrc
Create the network:

$ openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider

Created a new network:

+—————————+————————————–+
| Field | Value |
+—————————+————————————–+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2017-03-14T14:37:39Z |
| description | |
| dns_domain | None |
| id | 54adb94a-4dce-437f-a33b-e7e2e7648173 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | None |
| mtu | 1500 |
| name | provider |
| port_security_enabled | True |
| project_id | 4c7f48f1da5b494faaa66713686a7707 |
| provider:network_type | flat |
| provider:physical_network | provider |
| provider:segmentation_id | None |
| qos_policy_id | None |
| revision_number | 3 |
| router:external | External |
| segments | None |
| shared | True |
| status | ACTIVE |
| subnets | |
| updated_at | 2017-03-14T14:37:39Z |
+—————————+————————————–+
The --share option allows all projects to use the virtual network.

The --external option defines the virtual network to be external. If you wish to create an internal network, you can use --internal instead. Default value is internal.

The --provider-physical-network provider and --provider-network-type flat options connect the flat virtual network to the flat (native/untagged) physical network on the eth1 interface on the host using information from the following files:

ml2_conf.ini:

[ml2_type_flat]

flat_networks = provider
linuxbridge_agent.ini:

[linux_bridge]

physical_interface_mappings = provider:eth1
Create a subnet on the network:

$ openstack subnet create --network provider \
  --allocation-pool start=START_IP_ADDRESS,end=END_IP_ADDRESS \
  --dns-nameserver DNS_RESOLVER --gateway PROVIDER_NETWORK_GATEWAY \
  --subnet-range PROVIDER_NETWORK_CIDR provider
Replace PROVIDER_NETWORK_CIDR with the subnet on the provider physical network in CIDR notation.

Replace START_IP_ADDRESS and END_IP_ADDRESS with the first and last IP address of the range within the subnet that you want to allocate for instances. This range must not include any existing active IP addresses.

Replace DNS_RESOLVER with the IP address of a DNS resolver. In most cases, you can use one from the /etc/resolv.conf file on the host.

Replace PROVIDER_NETWORK_GATEWAY with the gateway IP address on the provider network, typically the “.1” IP address.

Example

The provider network uses 203.0.113.0/24 with a gateway on 203.0.113.1. A DHCP server assigns each instance an IP address from 203.0.113.101 to 203.0.113.250. All instances use 8.8.4.4 as a DNS resolver.

$ openstack subnet create --network provider \
  --allocation-pool start=203.0.113.101,end=203.0.113.250 \
  --dns-nameserver 8.8.4.4 --gateway 203.0.113.1 \
  --subnet-range 203.0.113.0/24 provider

Created a new subnet:
+——————-+————————————–+
| Field | Value |
+——————-+————————————–+
| allocation_pools | 203.0.113.101-203.0.113.250 |
| cidr | 203.0.113.0/24 |
| created_at | 2017-03-29T05:48:29Z |
| description | |
| dns_nameservers | 8.8.4.4 |
| enable_dhcp | True |
| gateway_ip | 203.0.113.1 |
| host_routes | |
| id | e84b4972-c7fc-4ce9-9742-fdc845196ac5 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | provider |
| network_id | 1f816a46-7c3f-4ccf-8bf3-fe0807ddff8d |
| project_id | 496efd248b0c46d3b80de60a309177b5 |
| revision_number | 2 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| updated_at | 2017-03-29T05:48:29Z |
+——————-+————————————–+

(1) Source the admin credentials first

[root@controller ~]

# . admin-openrc
(2) Create the network

[root@controller ~]

# openstack network create --share --external \
  --provider-physical-network provider \
  --provider-network-type flat provider
+—————————+————————————–+
| Field | Value |
+—————————+————————————–+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2019-04-16T10:20:01Z |
| description | |
| dns_domain | None |
| id | 53cb6072-1fc7-4b31-af29-dc5693e648f7 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | None |
| is_vlan_transparent | None |
| mtu | 1500 |
| name | provider |
| port_security_enabled | True |
| project_id | 25a82cb651074f3494aeb5639d62ed22 |
| provider:network_type | flat |
| provider:physical_network | provider |
| provider:segmentation_id | None |
| qos_policy_id | None |
| revision_number | 0 |
| router:external | External |
| segments | None |
| shared | True |
| status | ACTIVE |
| subnets | |
| tags | |
| updated_at | 2019-04-16T10:20:01Z |
+—————————+————————————–+

[root@controller ~]

#
--share allows all projects to use the network.
--external marks the network as external; --internal marks it as internal (the default).
The --provider-physical-network provider and --provider-network-type flat options
come from the ml2_conf.ini and linuxbridge_agent.ini configuration files:
ml2_conf.ini:

[ml2_type_flat]

flat_networks = provider

linuxbridge_agent.ini:

[linux_bridge]

physical_interface_mappings = provider:enp2s0
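
As a quick sanity check (not required by the guide), confirm that the Linux bridge agents picked up this mapping and show as alive:

# openstack network agent list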

(3) Create a subnet

[root@controller ~]

# openstack subnet create --network provider \
  --allocation-pool start=192.168.0.200,end=192.168.0.249 \
  --dns-nameserver 192.168.0.1 --gateway 192.168.0.1 \
  --subnet-range 192.168.0.0/24 provider
+——————-+————————————–+
| Field | Value |
+——————-+————————————–+
| allocation_pools | 192.168.0.200-192.168.0.249 |
| cidr | 192.168.0.0/24 |
| created_at | 2019-04-16T10:27:43Z |
| description | |
| dns_nameservers | 192.168.0.1 |
| enable_dhcp | True |
| gateway_ip | 192.168.0.1 |
| host_routes | |
| id | 673b843c-181d-4852-9789-b528acb7abde |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | provider |
| network_id | 53cb6072-1fc7-4b31-af29-dc5693e648f7 |
| project_id | 25a82cb651074f3494aeb5639d62ed22 |
| revision_number | 0 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| tags | |
| updated_at | 2019-04-16T10:27:43Z |
+——————-+————————————–+

[root@controller ~]

#

Now we can start creating the instance.
Launch an instance

This section creates the necessary virtual networks to support launching instances. Networking option 1 includes one provider (external) network with one instance that uses it. Networking option 2 includes one provider network with one instance that uses it and one self-service (private) network with one instance that uses it.

The instructions in this section use command-line interface (CLI) tools on the controller node. However, you can follow the instructions on any host that the tools are installed.

For more information on the CLI tools, see the OpenStackClient documentation for Pike, the OpenStackClient documentation for Queens, or the OpenStackClient documentation for Rocky.

To use the dashboard, see the Dashboard User Documentation for Pike, the Dashboard User Documentation for Queens, or the Dashboard User Documentation for Rocky.

Create virtual networks
Create virtual networks for the networking option that you chose when configuring Neutron. If you chose option 1, create only the provider network. If you chose option 2, create the provider and self-service networks.

Provider network
Self-service network
After creating the appropriate networks for your environment, you can continue preparing the environment to launch an instance.

Create m1.nano flavor
The smallest default flavor consumes 512 MB memory per instance. For environments with compute nodes containing less than 4 GB memory, we recommend creating the m1.nano flavor that only requires 64 MB per instance. Only use this flavor with the CirrOS image for testing purposes.

$ openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano

+—————————-+———+
| Field | Value |
+—————————-+———+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 1 |
| id | 0 |
| name | m1.nano |
| os-flavor-access:is_public | True |
| properties | |
| ram | 64 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+—————————-+———+
Generate a key pair
Most cloud images support public key authentication rather than conventional password authentication. Before launching an instance, you must add a public key to the Compute service.

Source the demo project credentials:

$ . demo-openrc
Generate a key pair and add a public key:

$ ssh-keygen -q -N ""
$ openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

+————-+————————————————-+
| Field | Value |
+————-+————————————————-+
| fingerprint | ee:3d:2e:97:d4:e2:6a:54:6d:0d:ce:43:39:2c:ba:4d |
| name | mykey |
| user_id | 58126687cbcc4888bfa9ab73a2256f27 |
+————-+————————————————-+
Note

Alternatively, you can skip the ssh-keygen command and use an existing public key.

Verify addition of the key pair:

$ openstack keypair list

+——-+————————————————-+
| Name | Fingerprint |
+——-+————————————————-+
| mykey | ee:3d:2e:97:d4:e2:6a:54:6d:0d:ce:43:39:2c:ba:4d |
+——-+————————————————-+
Add security group rules
By default, the default security group applies to all instances and includes firewall rules that deny remote access to instances. For Linux images such as CirrOS, we recommend allowing at least ICMP (ping) and secure shell (SSH).

Add rules to the default security group:

Permit ICMP (ping):

$ openstack security group rule create --proto icmp default

+——————-+————————————–+
| Field | Value |
+——————-+————————————–+
| created_at | 2017-03-30T00:46:43Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 1946be19-54ab-4056-90fb-4ba606f19e66 |
| name | None |
| port_range_max | None |
| port_range_min | None |
| project_id | 3f714c72aed7442681cbfa895f4a68d3 |
| protocol | icmp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 1 |
| security_group_id | 89ff5c84-e3d1-46bb-b149-e621689f0696 |
| updated_at | 2017-03-30T00:46:43Z |
+——————-+————————————–+
Permit secure shell (SSH) access:

$ openstack security group rule create --proto tcp --dst-port 22 default

+——————-+————————————–+
| Field | Value |
+——————-+————————————–+
| created_at | 2017-03-30T00:43:35Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 42bc2388-ae1a-4208-919b-10cf0f92bc1c |
| name | None |
| port_range_max | 22 |
| port_range_min | 22 |
| project_id | 3f714c72aed7442681cbfa895f4a68d3 |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 1 |
| security_group_id | 89ff5c84-e3d1-46bb-b149-e621689f0696 |
| updated_at | 2017-03-30T00:43:35Z |
+——————-+————————————–+
Launch an instance
If you chose networking option 1, you can only launch an instance on the provider network. If you chose networking option 2, you can launch an instance on the provider network and the self-service network.

Launch an instance on the provider network
Launch an instance on the self-service network
Block Storage
If your environment includes the Block Storage service, you can create a volume and attach it to an instance.

Block Storage
Orchestration
If your environment includes the Orchestration service, you can create a stack that launches an instance.

For more information, see the Orchestration installation guide for Pike, the Orchestration installation guide for Queens, or the Orchestration installation guide for Rocky.

Shared File Systems
If your environment includes the Shared File Systems service, you can create a share and mount it in an instance.

For more information, see the Shared File Systems installation guide for Pike, the Shared File Systems installation guide for Queens, or the Shared File Systems installation guide for Rocky.

First create the smallest flavor, which we can use to boot the minimal image uploaded earlier.

[root@controller ~]

# openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
+—————————-+———+
| Field | Value |
+—————————-+———+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| disk | 1 |
| id | 0 |
| name | m1.nano |
| os-flavor-access:is_public | True |
| properties | |
| ram | 64 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+—————————-+———+

(1) Create a key pair for the demo project
First, source the demo credentials

[root@controller ~]

# . demo-openrc

(2) Generate a key pair and add the public key

[root@controller ~]

# ssh-keygen -q -N ""
Enter file in which to save the key (/root/.ssh/id_rsa):

[root@controller ~]

# ll /root/.ssh/
total 16
-rw-------. 1 root root 391 Apr 10 04:25 authorized_keys
-rw------- 1 root root 1675 Apr 16 06:34 id_rsa
-rw-r--r-- 1 root root 397 Apr 16 06:34 id_rsa.pub
-rw-r--r-- 1 root root 346 Apr 16 03:39 known_hosts

[root@controller ~]

# openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
+————-+————————————————-+
| Field | Value |
+————-+————————————————-+
| fingerprint | 3d:77:26:1e:eb:10:ff:b1:21:c4:24:7a:72:bf:fb:9c |
| name | mykey |
| user_id | 0a9eefe8b20b4258bbd16af82b8a0132 |
+————-+————————————————-+

(3) Verify the uploaded public key

[root@controller ~]

# openstack keypair list
+——-+————————————————-+
| Name | Fingerprint |
+——-+————————————————-+
| mykey | 3d:77:26:1e:eb:10:ff:b1:21:c4:24:7a:72:bf:fb:9c |
+——-+————————————————-+

Add security group rules
Permit ICMP (ping):

[root@controller ~]

# openstack security group rule create --proto icmp default
+——————-+————————————–+
| Field | Value |
+——————-+————————————–+
| created_at | 2019-04-16T10:36:45Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 8cc4ab8d-caec-41ac-b3f3-3cd917f5aa30 |
| name | None |
| port_range_max | None |
| port_range_min | None |
| project_id | 48ca2da44aa94fee851cb16211c18aad |
| protocol | icmp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 0 |
| security_group_id | 3a639cd7-dc6e-48a1-a7ae-355080546168 |
| updated_at | 2019-04-16T10:36:45Z |
+——————-+————————————–+

Permit port 22 (SSH):

[root@controller ~]

# openstack security group rule create --proto tcp --dst-port 22 default
+——————-+————————————–+
| Field | Value |
+——————-+————————————–+
| created_at | 2019-04-16T10:37:11Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 942acf20-f5e8-459e-8d9c-c7b2c72e29de |
| name | None |
| port_range_max | 22 |
| port_range_min | 22 |
| project_id | 48ca2da44aa94fee851cb16211c18aad |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 0 |
| security_group_id | 3a639cd7-dc6e-48a1-a7ae-355080546168 |
| updated_at | 2019-04-16T10:37:11Z |
+——————-+————————————–+
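
Optionally, double-check the two rules just added by listing the rules of the default group:

# openstack security group rule list default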

Verify the environment
(1) Source the demo credentials

[root@controller ~]

# . demo-openrc
(2) List the flavors

[root@controller ~]

# openstack flavor list
+————————————–+———+——+——+———–+——-+———–+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+————————————–+———+——+——+———–+——-+———–+
| 0 | m1.nano | 64 | 1 | 0 | 1 | True |
| 28ff8c10-02fd-4a5e-ad6d-47da4d15e3d9 | tiny | 1024 | 10 | 0 | 1 | True |
+————————————–+———+——+——+———–+——-+———–+
(3) List the available images

[root@controller ~]

# openstack image list
+————————————–+——————–+——–+
| ID | Name | Status |
+————————————–+——————–+——–+
| e0810f42-705b-4bdd-9e8e-12313a8ff2e0 | cirros | active |
| dbe2e017-c688-4e4b-a19a-a25533fa4a31 | cloud_node_centos7 | active |
+————————————–+——————–+——–+
(4) List the available networks

[root@controller ~]

# openstack network list
+————————————–+———-+————————————–+
| ID | Name | Subnets |
+————————————–+———-+————————————–+
| 53cb6072-1fc7-4b31-af29-dc5693e648f7 | provider | 673b843c-181d-4852-9789-b528acb7abde |
+————————————–+———-+————————————–+
(5) List the security groups

[root@controller ~]

# openstack security group list

+————————————–+———+————————+———————————-+——+
| ID | Name | Description | Project | Tags |
+————————————–+———+————————+———————————-+——+
| 3a639cd7-dc6e-48a1-a7ae-355080546168 | default | Default security group | 48ca2da44aa94fee851cb16211c18aad | [] |
+————————————–+———+————————+———————————-+——+

Launch the instance
(1) (I have only one network, so the --nic option can be omitted and OpenStack will pick it automatically.)

[root@controller ~]

# openstack server create --flavor m1.nano --image cirros \
  --security-group default \
  --key-name mykey provider-instance
+—————————–+———————————————–+
| Field | Value |
+—————————–+———————————————–+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | W72HyZtRCZo9 |
| config_drive | |
| created | 2019-04-16T10:42:19Z |
| flavor | m1.nano (0) |
| hostId | |
| id | 2f822b20-79bd-4df4-b715-7ec0774b75df |
| image | cirros (e0810f42-705b-4bdd-9e8e-12313a8ff2e0) |
| key_name | mykey |
| name | provider-instance |
| progress | 0 |
| project_id | 48ca2da44aa94fee851cb16211c18aad |
| properties | |
| security_groups | name=’3a639cd7-dc6e-48a1-a7ae-355080546168′ |
| status | BUILD |
| updated | 2019-04-16T10:42:19Z |
| user_id | 0a9eefe8b20b4258bbd16af82b8a0132 |
| volumes_attached | |
+—————————–+———————————————–+
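
If more than one network existed, nova would refuse to pick one automatically and the network would have to be given explicitly; a sketch of that form (PROVIDER_NET_ID stands for the id shown by openstack network list):

# openstack server create --flavor m1.nano --image cirros \
  --nic net-id=PROVIDER_NET_ID \
  --security-group default --key-name mykey provider-instance
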
(2) View the instance we just launched

[root@controller ~]

# openstack server list
+————————————–+——————-+——–+————————+——–+———+
| ID | Name | Status | Networks | Image | Flavor |
+————————————–+——————-+——–+————————+——–+———+
| 2f822b20-79bd-4df4-b715-7ec0774b75df | provider-instance | ACTIVE | provider=192.168.0.206 | cirros | m1.nano |
+————————————–+——————-+——–+————————+——–+———+
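
If the instance had stayed in BUILD or gone to ERROR instead of ACTIVE, the boot console output is usually the first thing worth reading (a troubleshooting aside, not needed here):

# openstack console log show provider-instance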

Log in to the instance
(1) Get the VNC console URL

[root@controller ~]

# openstack console url show provider-instance
+——-+———————————————————————————+
| Field | Value |
+——-+———————————————————————————+
| type | novnc |
| url | http://controller:6080/vnc_auto.html?token=aa94fed7-9ee7-411a-8c63-1a34fc938c9d |
+——-+———————————————————————————+
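
Note that the URL uses the hostname controller. If the machine running your browser cannot resolve that name, either replace it with the controller's management IP or add a hosts entry there, for example (CONTROLLER_MANAGEMENT_IP is a placeholder for your own environment):

CONTROLLER_MANAGEMENT_IP  controller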

Since this demo instance is tiny, just SSH in directly to take a look.

[root@controller ~]

# ssh [email protected]
$
$ ifconfig
eth0 Link encap:Ethernet HWaddr FA:16:3E:BA:B7:E6
inet addr:192.168.0.206 Bcast:192.168.0.255 Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:feba:b7e6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:161 errors:0 dropped:0 overruns:0 frame:0
TX packets:169 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:22393 (21.8 KiB) TX bytes:18421 (17.9 KiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

$ ping www.baidu.com
PING www.baidu.com (180.97.33.107): 56 data bytes
64 bytes from 180.97.33.107: seq=0 ttl=53 time=36.826 ms
64 bytes from 180.97.33.107: seq=1 ttl=53 time=36.587 ms
64 bytes from 180.97.33.107: seq=2 ttl=53 time=36.463 ms
^C
— www.baidu.com ping statistics —
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 36.463/36.625/36.826 ms
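
Since the Block Storage service is also installed in this environment, a volume can be created and attached to this instance as well; a minimal sketch with standard CLI commands (volume1 is an arbitrary name):

# openstack volume create --size 1 volume1
# openstack server add volume provider-instance volume1
# openstack volume list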
