Ceph Ansible

This post shows how to deploy the environment with ceph-ansible.

Deploying with Ansible is the most standardized and official approach among the main vendors that ship Ceph, e.g. SUSE, Oracle, and Red Hat.

The installation presented in this document follows a similar deployment flow:

Source: Red Hat

1) Requirements

The requirements below will be met by the Vagrant lab further on, but a deployment via Ansible requires:

  • Admin node: the server with the Ansible package and the ceph-ansible module.
  • 3 Monitors and Managers: the Ceph Monitors; the same hosts will also run the Managers.
  • 3 Storage nodes: for a deployment on physical hardware, an in-depth hardware study is required. http://docs.ceph.com/docs/luminous/start/hardware-recommendations/

Enable the EPEL repo on all hosts:

yum install epel-release

Firewall rules for a Ceph environment (an example of opening these ports follows the list below):

Firewall documentation: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/installation_guide_for_red_hat_enterprise_linux/requirements-for-installing-rhcs#configuring-a-firewall-for-red-hat-ceph-storage-install

  • Monitor rule:
    • 6789/tcp
  • Manager and OSD node rule:
    • ports 6800/tcp through 7300/tcp
  • Metadata server rule:
    • 6800/tcp
  • Object gateway rule:
    • uses 8080/tcp, 80/tcp, and 443/tcp (if you want SSL).
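
As an illustration only, assuming firewalld on CentOS/RHEL 7 (as in this lab), opening these ports could look like the sketch below; run only the rule that matches each host's role:

# On a monitor host (sketch, assumes firewalld is active):
firewall-cmd --zone=public --add-port=6789/tcp --permanent
# On a manager/OSD host:
firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
# Apply the permanent rules:
firewall-cmd --reload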

2) Lab

We will use the site's lab; the link below has more details on the lab requirements: https://cephbrasil.com/laboratorio-do-site/

Clone the deploy repository:

git clone https://github.com/cephbrasil/deploy.git

Bring up the VMs with the command below:

vagrant up mon1 mon2 mon3 osd1 osd2 osd3 controller client
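
Optionally, confirm that all the VMs are running before moving on:

vagrant status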

3) Admin node and Ceph Ansible

The admin node mentioned in the requirements will be the lab's controller node.

Install the ansible and git packages on the controller host:

[root@controller ~]# yum install ansible git -y

Let's set up ceph-ansible, where $BRANCH is the branch to be used, explained right below.

[root@controller ~]# cd /usr/share
[root@controller share]# git clone https://github.com/ceph/ceph-ansible.git
[root@controller share]# cd ceph-ansible
[root@controller ceph-ansible]# git checkout $BRANCH
[root@controller ceph-ansible]# ln -s /usr/share/ceph-ansible/group_vars /etc/ansible/group_vars

According to the ceph-ansible documentation, the following branches are available:

  • stable-3.0 Supports Ceph versions jewel and luminous. This branch requires Ansible version 2.4.
  • stable-3.1 Supports Ceph versions luminous and mimic. This branch requires Ansible version 2.4.
  • stable-3.2 Supports Ceph versions luminous and mimic. This branch requires Ansible version 2.6.
  • stable-4.0 Supports Ceph version nautilus. This branch requires Ansible version 2.8.
  • master Supports Ceph@master version. This branch requires Ansible version 2.8.

The official documentation has all the details: http://docs.ceph.com/ceph-ansible/master/#releases

This deployment uses the stable-3.2 branch.
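
To double-check that the checked-out branch and the installed Ansible version match the list above, you can run:

[root@controller ceph-ansible]# git branch
[root@controller ceph-ansible]# ansible --version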

Create the user that will run Ansible, on all hosts:

useradd admin
passwd admin 
echo "admin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/admin
chmod 0440 /etc/sudoers.d/admin
sed -i 's/Defaults requiretty/#Defaults requiretty/g' /etc/sudoers

Create the SSH key on the controller host and distribute it to all hosts:

[admin@controller ~]$ ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/home/admin/.ssh/id_rsa): 
Created directory '/home/admin/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/admin/.ssh/id_rsa.
Your public key has been saved in /home/admin/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:NpdAJw1vFexb/PPUFPFbX18DS4K550hiRMLH9oZEc3E admin@controller
The key's randomart image is:
+---[RSA 2048]----+
|    ..+++==Eo+ ..|
|     ..Bo*o.+ o..|
|      = + +. o .*|
|       + B o. o X|
|      . S *  o ++|
|       . + ..  .+|
|               .o|
|                .|
|                 |
+----[SHA256]-----+

Configure the SSH client so that connections to every host are made as the admin user:

[admin@controller ~]$ vi ~/.ssh/config
...

Host controller
        Hostname controller
        User admin
 
Host mon1
        Hostname mon1
        User admin

Host mon2
        Hostname mon2
        User admin 

Host mon3
        Hostname mon3
        User admin

Host osd1
        Hostname osd1
        User admin
 
Host osd2
        Hostname osd2
        User admin
 
Host osd3
        Hostname osd3
        User admin

Host client
        Hostname client
        User admin
...


Create the ceph-ansible-keys directory:

[admin@controller ~]$ mkdir ~/ceph-ansible-keys
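
For context: this is the directory where ceph-ansible stores the keys it fetches during the deploy, via the fetch_directory variable. If you want to point it here explicitly, an example entry (an assumption, adjust to your setup) in group_vars/all.yml would be:

fetch_directory: ~/ceph-ansible-keys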

Create the Ansible log directory:

[root@controller ~]# mkdir /var/log/ansible
[root@controller ~]# chown admin.admin /var/log/ansible 
[root@controller ~]# chmod 755 /var/log/ansible
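
This path is only used if Ansible logging points at it; assuming you manage ansible.cfg yourself, the relevant setting (shown here as an example) is:

[defaults]
log_path = /var/log/ansible/ansible.log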

Distribute the key to all the hosts involved (repeat ssh-copy-id for each host, or use the loop sketch right after):

[admin@controller ~]$ ssh-keyscan osd1 osd2 osd3 mon1 mon2 mon3 client >> ~/.ssh/known_hosts
[admin@controller ~]$ ssh-copy-id <HOST>
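
A simple loop, as a sketch, to copy the key to every lab host in one go (you will be prompted for the admin password of each host):

[admin@controller ~]$ for h in mon1 mon2 mon3 osd1 osd2 osd3 client; do ssh-copy-id admin@$h; done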

Let's configure the Ansible inventory, as in the example below:

vi /etc/ansible/hosts
...
[mons]
mon1
mon2
mon3 

[mgrs]
mon1
mon2
mon3

[osds]
osd1
osd2
osd3
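
Before touching group_vars, a quick sanity check of the inventory and the SSH setup is Ansible's ping module, run as the admin user:

[admin@controller ~]$ ansible all -m ping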

Let's configure /etc/ansible/group_vars/all.yml:

[root@controller ~]# cd /etc/ansible/group_vars/
[root@controller group_vars]# vi /etc/ansible/group_vars/all.yml
...

ceph_origin: repository
ceph_repository: community
ceph_repository_type: cdn
ceph_stable_release: luminous
monitor_interface: eth1
public_network: 192.168.0.0/24
cluster_network: 10.10.10.0/24
osd_scenario: non-collocated
osd_objectstore: bluestore
devices:
  - /dev/sdb
  - /dev/sdc
dedicated_devices: 
  - /dev/sdd
  - /dev/sdd


Below is an explanation of the lab settings. Keep in mind that for a production environment each of these settings must be studied and understood, and that is entirely the responsibility of the sysadmin performing the deployment.

The dedicated_devices entries must be distributed according to the number of devices: if you have 6 disks and 2 NVMe drives, you need the following distribution:

devices:
  - /dev/disk1
  - /dev/disk2
  - /dev/disk3
  - /dev/disk4
  - /dev/disk5
  - /dev/disk6
dedicated_devices: 
  - /dev/nvme0
  - /dev/nvme0
  - /dev/nvme0
  - /dev/nvme1
  - /dev/nvme1
  - /dev/nvme1

We can also set ceph.conf options through the ceph_conf_overrides parameter, which lets us put settings in the global, mon, osd, and mds sections:

ceph_origin: repository
ceph_repository: community
ceph_repository_type: cdn
ceph_stable_release: luminous
monitor_interface: eth1
public_network: 192.168.0.0/24
cluster_network: 10.10.10.0/24
osd_scenario: non-collocated
osd_objectstore: bluestore
devices:
  - /dev/sdb
  - /dev/sdc
dedicated_devices: 
  - /dev/sdd
  - /dev/sdd

ceph_conf_overrides:
  global:
    max_open_files: 131072
    osd_pool_default_size: 3
    osd_pool_default_min_size: 2
    osd_pool_default_crush_rule: 0
    osd_pool_default_pg_num: 32
    osd_pool_default_pgp_num: 32
  mon:
    mon_osd_down_out_interval: 600
    mon_osd_min_down_reporters: 7
    mon_clock_drift_allowed: 0.15
    mon_clock_drift_warn_backoff: 30
    mon_osd_full_ratio: 0.95
    mon_osd_nearfull_ratio: 0.85
    mon_osd_report_timeout: 300
    mon_pg_warn_max_per_osd: 300
    mon_osd_allow_primary_affinity: true
  osd:
    osd_mon_heartbeat_interval: 30
    osd_recovery_max_active: 1
    osd_max_backfills: 1
    osd_recovery_sleep: 0.1
    osd_recovery_max_chunk: 1048576
    osd_recovery_threads: 1
    osd_scrub_sleep: 0.1
    osd_deep_scrub_stride: 1048576
    osd_snap_trim_sleep: 0.1
    osd_client_message_cap: 10000
    osd_client_message_size_cap: 1048576000
    osd_scrub_begin_hour: 23
    osd_scrub_end_hour: 5
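
After the deployment, you can confirm that an override actually reached a daemon through its admin socket; for example, on mon1 (using one of the options set above):

[root@mon1 ~]# ceph daemon mon.mon1 config get mon_osd_down_out_interval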

4) Deploy

Let's run the deployment with the following steps:

[root@controller ~]# su - admin
[admin@controller ~]$ cd /usr/share/ceph-ansible/
[admin@controller ceph-ansible]$ cp site.yml.sample site.yml
[admin@controller ceph-ansible]$ ansible-playbook site.yml

If the deployment succeeds, the following output is shown at the end:

TASK [show ceph status for cluster ceph] ****************************************************************************************************
Wednesday 19 June 2019  20:44:24 -0300 (0:00:00.612)       0:06:16.827 ******** 
ok: [mon1 -> mon1] => {
    "msg": [
        "  cluster:", 
        "    id:     c9e1807a-56fd-472b-aced-9479273d18a6", 
        "    health: HEALTH_OK", 
        " ", 
        "  services:", 
        "    mon: 3 daemons, quorum mon1,mon2,mon3", 
        "    mgr: mon1(active), standbys: mon3, mon2", 
        "    osd: 6 osds: 6 up, 6 in", 
        " ", 
        "  data:", 
        "    pools:   0 pools, 0 pgs", 
        "    objects: 0 objects, 0B", 
        "    usage:   6.02GiB used, 23.8GiB / 29.8GiB avail", 
        "    pgs:     ", 
        " "
    ]
}

PLAY RECAP **********************************************************************************************************************************
mon1                       : ok=165  changed=9    unreachable=0    failed=0   
mon2                       : ok=151  changed=9    unreachable=0    failed=0   
mon3                       : ok=153  changed=9    unreachable=0    failed=0   
osd1                       : ok=128  changed=11   unreachable=0    failed=0   
osd2                       : ok=124  changed=11   unreachable=0    failed=0   
osd3                       : ok=124  changed=11   unreachable=0    failed=0   


INSTALLER STATUS ****************************************************************************************************************************
Install Ceph Monitor        : Complete (0:01:15)
Install Ceph Manager        : Complete (0:00:54)
Install Ceph OSD            : Complete (0:02:32)

Log in to a monitor and check the Ceph status:

[root@mon1 ~]# ceph -s 
  cluster:
    id:     c9e1807a-56fd-472b-aced-9479273d18a6
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum mon1,mon2,mon3
    mgr: mon1(active), standbys: mon3, mon2
    osd: 6 osds: 6 up, 6 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0B
    usage:   6.03GiB used, 23.8GiB / 29.8GiB avail
    pgs:     

5) Client configuration

Update the /etc/ansible/hosts inventory and add the clients group:

[mons]
mon1
mon2
mon3 

[mgrs]
mon1
mon2
mon3

[osds]
osd1
osd2
osd3

[clients]
client
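
With the inventory updated, re-run the playbook from the ceph-ansible directory so the client role gets applied; re-running the entire site.yml against the existing cluster is the simplest option here:

[admin@controller ~]$ cd /usr/share/ceph-ansible/
[admin@controller ceph-ansible]$ ansible-playbook site.yml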

Once the run finishes, confirm the installation on the client host:

[root@client ~]# rpm -qa | grep ceph 
ceph-fuse-12.2.12-0.el7.x86_64
python-cephfs-12.2.12-0.el7.x86_64
ceph-common-12.2.12-0.el7.x86_64
ceph-selinux-12.2.12-0.el7.x86_64
libcephfs2-12.2.12-0.el7.x86_64
ceph-base-12.2.12-0.el7.x86_64
[root@client ~]# cat /etc/ceph/ceph.conf 
# Please do not change this file directly since it is managed by Ansible and will be overwritten

[global]
fsid = c9e1807a-56fd-472b-aced-9479273d18a6

mon host = 192.168.0.100,192.168.0.101,192.168.0.102

public network = 192.168.0.0/24
cluster network = 10.10.10.0/24

[client.libvirt]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok # must be writable by QEMU and allowed by SELinux or AppArmor
log file = /var/log/ceph/qemu-guest-$pid.log # must be writable by QEMU and allowed by SELinux or AppArmor

Reference:

https://cephbrasil.com
