
Using Ansible to Batch-Configure CentOS 7 Base OS Settings

1. Server List

IP         Memory (GB)  CPU cores  Disk   OS               CPU arch  Role
10.0.0.13  8            1          500GB  CentOS 7.9.2009  x86_64    Ansible control node + managed node
10.0.0.14  8            1          500GB  CentOS 7.9.2009  x86_64    Managed node
10.0.0.15  8            1          500GB  CentOS 7.9.2009  x86_64    Managed node
10.0.0.16  16           2          500GB  CentOS 7.9.2009  x86_64    Managed node
10.0.0.17  16           2          500GB  CentOS 7.9.2009  x86_64    Managed node
10.0.0.18  16           2          500GB  CentOS 7.9.2009  x86_64    Managed node
10.0.0.19  16           2          500GB  CentOS 7.9.2009  x86_64    Managed node
10.0.0.20  16           2          500GB  CentOS 7.9.2009  x86_64    Managed node
10.0.0.21  16           2          500GB  CentOS 7.9.2009  x86_64    Managed node

Notes:

  • Every server has an administrator user named admin that can run sudo without a password (see the sketch after this list);
  • All operations in this article are performed as the admin user.
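
A minimal sketch of what such a passwordless-sudo entry typically looks like (an assumption for illustration only; in practice the ops team will already have provisioned it, often as a drop-in file such as /etc/sudoers.d/admin):

# /etc/sudoers.d/admin  (assumed example, usually pre-provisioned by ops)
admin ALL=(ALL) NOPASSWD: ALL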

2. Installing and Deploying Ansible

2.1 Install Ansible on the control node

Do this on any CentOS 7 server that has internet access:

$ pwd
/home/admin
$ mkdir -p package
$ sudo yum install -y yum-utils
$ yumdownloader --resolve --destdir=package/ansible ansible
$ cd package
$ tar -zcvf ansible.tar.gz ansible

Download ansible.tar.gz to your workstation, then upload it to /home/admin on 10.0.0.13.
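
For example, if the staging machine can reach 10.0.0.13 directly, scp works; the path below assumes the tarball was created under /home/admin/package as in the previous step:

$ scp /home/admin/package/ansible.tar.gz admin@10.0.0.13:/home/admin/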

Run on 10.0.0.13:

$ pwd
/home/admin
$ tar -zxvf ansible.tar.gz
$ sudo yum install -y ansible/*.rpm
$ rm -rf ansible.tar.gz ansible/*
$ mkdir -p ansible/system

If 10.0.0.13 has internet access and the Aliyun base + EPEL YUM repos are already configured, run the following instead:

$ sudo yum install -y ansible
$ cd ~
$ mkdir -p ansible/system
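
Either way, a quick sanity check confirms the installation (on CentOS 7 the EPEL package is typically Ansible 2.9.x):

$ ansible --version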

2.2 Passwordless SSH across the cluster

Run on 10.0.0.13:

$ pwd
/home/admin
$ cd ansible
$ tee ansible.cfg > /dev/null << 'EOF'
[defaults]
inventory=./hosts
host_key_checking=False
EOF

# Note: in production, provisioned servers usually share the same admin username,
# but each server's admin password is very likely different.
$ tee hosts > /dev/null << 'EOF'
[cluster]
10.0.0.13 ansible_password=123456
10.0.0.14 ansible_password=123456
10.0.0.15 ansible_password=123456
10.0.0.16 ansible_password=123456
10.0.0.17 ansible_password=123456
10.0.0.18 ansible_password=123456
10.0.0.19 ansible_password=123456
10.0.0.20 ansible_password=123456
10.0.0.21 ansible_password=123456

[cluster:vars]
ansible_user=admin
EOF
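
Note: because the inventory initially authenticates with ansible_password, the default SSH connection plugin needs the sshpass program on the control node. If it is not installed yet (it is available from EPEL; for offline hosts, download the RPM the same way as the Ansible packages):

$ sudo yum install -y sshpass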

Create /home/admin/ansible/system/ssh-key.yml with the following content:

---
- name: Setup SSH key authentication for admin user
  hosts: cluster
  become: false
  gather_facts: false
  vars:
    admin_user: "admin"
    ssh_key_file: "~/.ssh/id_rsa"
  tasks:
    - name: Ensure .ssh directory exists
      file:
        path: "~/.ssh"
        state: directory
        owner: "{{ admin_user }}"
        group: "{{ admin_user }}"
        mode: '0700'

    - name: Generate SSH key on management node
      openssh_keypair:
        path: "{{ ssh_key_file }}"
        type: rsa
        size: 4096
        owner: "{{ admin_user }}"
        group: "{{ admin_user }}"
        mode: '0600'
      delegate_to: localhost
      run_once: true

    - name: Fetch public key from management node
      slurp:
        src: "{{ ssh_key_file }}.pub"
      delegate_to: localhost
      run_once: true
      register: pubkey

    - name: Authorize SSH key on all nodes
      authorized_key:
        user: "{{ admin_user }}"
        state: present
        key: "{{ pubkey['content'] | b64decode }}"

    - shell: whoami
      register: result

    - debug:
        msg: "{{ result.stdout }}"
$ pwd
/home/admin/ansible
$ ansible-playbook system/ssh-key.yml

# Key output:
TASK [debug] **********************...
ok: [10.0.0.13] => {"msg": "admin"
}
ok: [10.0.0.15] => {"msg": "admin"
}
ok: [10.0.0.14] => {"msg": "admin"
}
ok: [10.0.0.16] => {"msg": "admin"
}
ok: [10.0.0.17] => {"msg": "admin"
}
ok: [10.0.0.18] => {"msg": "admin"
}
ok: [10.0.0.19] => {"msg": "admin"
}
ok: [10.0.0.20] => {"msg": "admin"
}
ok: [10.0.0.21] => {"msg": "admin"
}

$ cp hosts hosts-backup-1
$ sed -i 's/[[:space:]]*ansible_password=[^ ]*//g' hosts

# Test passwordless SSH and privilege escalation (sudo) at the same time
$ ansible all -a "whoami" -b
10.0.0.14 | CHANGED | rc=0 >>
root
10.0.0.16 | CHANGED | rc=0 >>
root
10.0.0.17 | CHANGED | rc=0 >>
root
10.0.0.15 | CHANGED | rc=0 >>
root
10.0.0.13 | CHANGED | rc=0 >>
root
10.0.0.18 | CHANGED | rc=0 >>
root
10.0.0.20 | CHANGED | rc=0 >>
root
10.0.0.19 | CHANGED | rc=0 >>
root
10.0.0.21 | CHANGED | rc=0 >>
root

3. Cluster System Configuration

3.1 Set hostnames

Note: in production, freshly provisioned servers usually come with random hostnames, which makes cluster management harder.

Goal: set the hostname on every cluster node (10.0.0.{13..21}) according to the following plan:

IP         Hostname
10.0.0.13 arc-pro-dc01
10.0.0.14 arc-pro-dc02
10.0.0.15 arc-pro-dc03
10.0.0.16 arc-pro-dc04
10.0.0.17 arc-pro-dc05
10.0.0.18 arc-pro-dc06
10.0.0.19 arc-pro-dc07
10.0.0.20 arc-pro-dc08
10.0.0.21 arc-pro-dc09

Run on 10.0.0.13

Create /home/admin/ansible/system/set-hostname.yml with the following content:

---
- name: Set hostnames for cluster nodes
  hosts: cluster
  gather_facts: false
  become: true
  vars:
    hostnames:
      "10.0.0.13": "arc-pro-dc01"
      "10.0.0.14": "arc-pro-dc02"
      "10.0.0.15": "arc-pro-dc03"
      "10.0.0.16": "arc-pro-dc04"
      "10.0.0.17": "arc-pro-dc05"
      "10.0.0.18": "arc-pro-dc06"
      "10.0.0.19": "arc-pro-dc07"
      "10.0.0.20": "arc-pro-dc08"
      "10.0.0.21": "arc-pro-dc09"
  tasks:
    - name: Set the hostname
      hostname:
        name: "{{ hostnames[inventory_hostname] }}"

    - shell: hostname
      register: result

    - debug:
        msg: "{{ result.stdout }}"
$ pwd
/home/admin/ansible
$ ansible-playbook system/set-hostname.yml

TASK [debug] ******************************...
ok: [10.0.0.15] => {"msg": "arc-pro-dc03"
}
ok: [10.0.0.13] => {"msg": "arc-pro-dc01"
}
ok: [10.0.0.14] => {"msg": "arc-pro-dc02"
}
ok: [10.0.0.16] => {"msg": "arc-pro-dc04"
}
ok: [10.0.0.17] => {"msg": "arc-pro-dc05"
}
ok: [10.0.0.18] => {"msg": "arc-pro-dc06"
}
ok: [10.0.0.19] => {"msg": "arc-pro-dc07"
}
ok: [10.0.0.20] => {"msg": "arc-pro-dc08"
}
ok: [10.0.0.21] => {"msg": "arc-pro-dc09"
}

3.2 Update the /etc/hosts file

Goal: update /etc/hosts on every cluster node (10.0.0.{13..21}) so that each node's /etc/hosts contains the IP-to-hostname mappings of all 9 cluster servers.

Run on 10.0.0.13

Create /home/admin/ansible/system/set-hosts.yml with the following content:

---
- name: Distribute /etc/hosts to all cluster nodes
  hosts: cluster
  gather_facts: false
  become: true
  vars:
    src_hosts_file: "/etc/hosts"
  tasks:
    - name: Backup existing /etc/hosts
      copy:
        src: /etc/hosts
        dest: /etc/hosts.bak
        remote_src: yes
        owner: root
        group: root
        mode: '0644'
      ignore_errors: yes   # avoid failing if a node has no /etc/hosts yet

    - name: Copy /etc/hosts from management node to all nodes
      copy:
        src: "{{ src_hosts_file }}"
        dest: /etc/hosts
        owner: root
        group: root
        mode: '0644'

    - shell: cat /etc/hosts
      register: result

    - debug:
        msg: "{{ result.stdout }}"
$ pwd
/home/admin/ansible

# First, write the IP-to-hostname mappings into the control node's own /etc/hosts
$ sudo tee /etc/hosts > /dev/null << 'EOF'
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.0.0.13 arc-pro-dc01
10.0.0.14 arc-pro-dc02
10.0.0.15 arc-pro-dc03
10.0.0.16 arc-pro-dc04
10.0.0.17 arc-pro-dc05
10.0.0.18 arc-pro-dc06
10.0.0.19 arc-pro-dc07
10.0.0.20 arc-pro-dc08
10.0.0.21 arc-pro-dc09
EOF

# Then distribute it to the other nodes
$ ansible-playbook system/set-hosts.yml

# In the output, each node's /etc/hosts contains the IP/hostname mappings of all 9 cluster servers
TASK [debug] **********************************************...
ok: [10.0.0.15] => {"msg": "127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4\n::1         localhost localhost.localdomain localhost6 localhost6.localdomain6\n\n10.0.0.13\tarc-pro-dc01\n10.0.0.14\tarc-pro-dc02\n10.0.0.15\tarc-pro-dc03\n10.0.0.16\tarc-pro-dc04\n10.0.0.17\tarc-pro-dc05\n10.0.0.18\tarc-pro-dc06\n10.0.0.19\tarc-pro-dc07\n10.0.0.20\tarc-pro-dc08\n10.0.0.21\tarc-pro-dc09"
}
ok: [10.0.0.13] => {"msg": "127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4\n::1         localhost localhost.localdomain localhost6 localhost6.localdomain6\n\n10.0.0.13\tarc-pro-dc01\n10.0.0.14\tarc-pro-dc02\n10.0.0.15\tarc-pro-dc03\n10.0.0.16\tarc-pro-dc04\n10.0.0.17\tarc-pro-dc05\n10.0.0.18\tarc-pro-dc06\n10.0.0.19\tarc-pro-dc07\n10.0.0.20\tarc-pro-dc08\n10.0.0.21\tarc-pro-dc09"
}
ok: [10.0.0.14] => {"msg": "127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4\n::1         localhost localhost.localdomain localhost6 localhost6.localdomain6\n\n10.0.0.13\tarc-pro-dc01\n10.0.0.14\tarc-pro-dc02\n10.0.0.15\tarc-pro-dc03\n10.0.0.16\tarc-pro-dc04\n10.0.0.17\tarc-pro-dc05\n10.0.0.18\tarc-pro-dc06\n10.0.0.19\tarc-pro-dc07\n10.0.0.20\tarc-pro-dc08\n10.0.0.21\tarc-pro-dc09"
}
ok: [10.0.0.16] => {"msg": "127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4\n::1         localhost localhost.localdomain localhost6 localhost6.localdomain6\n\n10.0.0.13\tarc-pro-dc01\n10.0.0.14\tarc-pro-dc02\n10.0.0.15\tarc-pro-dc03\n10.0.0.16\tarc-pro-dc04\n10.0.0.17\tarc-pro-dc05\n10.0.0.18\tarc-pro-dc06\n10.0.0.19\tarc-pro-dc07\n10.0.0.20\tarc-pro-dc08\n10.0.0.21\tarc-pro-dc09"
}
ok: [10.0.0.17] => {"msg": "127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4\n::1         localhost localhost.localdomain localhost6 localhost6.localdomain6\n\n10.0.0.13\tarc-pro-dc01\n10.0.0.14\tarc-pro-dc02\n10.0.0.15\tarc-pro-dc03\n10.0.0.16\tarc-pro-dc04\n10.0.0.17\tarc-pro-dc05\n10.0.0.18\tarc-pro-dc06\n10.0.0.19\tarc-pro-dc07\n10.0.0.20\tarc-pro-dc08\n10.0.0.21\tarc-pro-dc09"
}
ok: [10.0.0.18] => {"msg": "127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4\n::1         localhost localhost.localdomain localhost6 localhost6.localdomain6\n\n10.0.0.13\tarc-pro-dc01\n10.0.0.14\tarc-pro-dc02\n10.0.0.15\tarc-pro-dc03\n10.0.0.16\tarc-pro-dc04\n10.0.0.17\tarc-pro-dc05\n10.0.0.18\tarc-pro-dc06\n10.0.0.19\tarc-pro-dc07\n10.0.0.20\tarc-pro-dc08\n10.0.0.21\tarc-pro-dc09"
}
ok: [10.0.0.19] => {"msg": "127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4\n::1         localhost localhost.localdomain localhost6 localhost6.localdomain6\n\n10.0.0.13\tarc-pro-dc01\n10.0.0.14\tarc-pro-dc02\n10.0.0.15\tarc-pro-dc03\n10.0.0.16\tarc-pro-dc04\n10.0.0.17\tarc-pro-dc05\n10.0.0.18\tarc-pro-dc06\n10.0.0.19\tarc-pro-dc07\n10.0.0.20\tarc-pro-dc08\n10.0.0.21\tarc-pro-dc09"
}
ok: [10.0.0.20] => {"msg": "127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4\n::1         localhost localhost.localdomain localhost6 localhost6.localdomain6\n\n10.0.0.13\tarc-pro-dc01\n10.0.0.14\tarc-pro-dc02\n10.0.0.15\tarc-pro-dc03\n10.0.0.16\tarc-pro-dc04\n10.0.0.17\tarc-pro-dc05\n10.0.0.18\tarc-pro-dc06\n10.0.0.19\tarc-pro-dc07\n10.0.0.20\tarc-pro-dc08\n10.0.0.21\tarc-pro-dc09"
}
ok: [10.0.0.21] => {"msg": "127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4\n::1         localhost localhost.localdomain localhost6 localhost6.localdomain6\n\n10.0.0.13\tarc-pro-dc01\n10.0.0.14\tarc-pro-dc02\n10.0.0.15\tarc-pro-dc03\n10.0.0.16\tarc-pro-dc04\n10.0.0.17\tarc-pro-dc05\n10.0.0.18\tarc-pro-dc06\n10.0.0.19\tarc-pro-dc07\n10.0.0.20\tarc-pro-dc08\n10.0.0.21\tarc-pro-dc09"
}

# Update the Ansible inventory file
$ cp hosts hosts-backup-2
$ tee hosts > /dev/null << 'EOF'
[cluster]
arc-pro-dc01
arc-pro-dc02
arc-pro-dc03
arc-pro-dc04
arc-pro-dc05
arc-pro-dc06
arc-pro-dc07
arc-pro-dc08
arc-pro-dc09

[cluster:vars]
ansible_user=admin
EOF

# Test
$ ansible all -a "hostname"
arc-pro-dc05 | CHANGED | rc=0 >>
arc-pro-dc05
arc-pro-dc04 | CHANGED | rc=0 >>
arc-pro-dc04
arc-pro-dc02 | CHANGED | rc=0 >>
arc-pro-dc02
arc-pro-dc03 | CHANGED | rc=0 >>
arc-pro-dc03
arc-pro-dc01 | CHANGED | rc=0 >>
arc-pro-dc01
arc-pro-dc06 | CHANGED | rc=0 >>
arc-pro-dc06
arc-pro-dc09 | CHANGED | rc=0 >>
arc-pro-dc09
arc-pro-dc08 | CHANGED | rc=0 >>
arc-pro-dc08
arc-pro-dc07 | CHANGED | rc=0 >>
arc-pro-dc07

3.3 Disable the firewall

> Run on 10.0.0.13

Create /home/admin/ansible/system/disable-firewall.yml with the following content:

---
- name: Disable firewalld on all nodes
  hosts: cluster
  gather_facts: false
  become: true
  tasks:
    - name: Stop firewalld service
      systemd:
        name: firewalld
        state: stopped
        enabled: no

    - name: Check firewalld service status
      command: systemctl is-active firewalld
      register: firewalld_active
      ignore_errors: true

    - name: Check firewalld enabled status
      command: systemctl is-enabled firewalld
      register: firewalld_enabled
      ignore_errors: true

    - name: Display firewalld status
      debug:
        msg: |
          Host: {{ inventory_hostname }}
          firewalld active: {{ firewalld_active.stdout | default('unknown') }}
          firewalld enabled: {{ firewalld_enabled.stdout | default('unknown') }}

    - name: Fail if firewalld is running or enabled
      fail:
        msg: "Firewalld is still active or enabled on {{ inventory_hostname }}"
      when: firewalld_active.stdout == "active" or firewalld_enabled.stdout == "enabled"
$ pwd
/home/admin/ansible
$ ansible-playbook system/disable-firewall.yml

TASK [Display firewalld status] ***********************************************...
ok: [arc-pro-dc01] => {"msg": "Host: arc-pro-dc01\nfirewalld active: unknown\nfirewalld enabled: disabled\n"
}
ok: [arc-pro-dc02] => {"msg": "Host: arc-pro-dc02\nfirewalld active: unknown\nfirewalld enabled: disabled\n"
}
ok: [arc-pro-dc03] => {"msg": "Host: arc-pro-dc03\nfirewalld active: unknown\nfirewalld enabled: disabled\n"
}
ok: [arc-pro-dc05] => {"msg": "Host: arc-pro-dc05\nfirewalld active: unknown\nfirewalld enabled: disabled\n"
}
ok: [arc-pro-dc04] => {"msg": "Host: arc-pro-dc04\nfirewalld active: unknown\nfirewalld enabled: disabled\n"
}
ok: [arc-pro-dc06] => {"msg": "Host: arc-pro-dc06\nfirewalld active: unknown\nfirewalld enabled: disabled\n"
}
ok: [arc-pro-dc07] => {"msg": "Host: arc-pro-dc07\nfirewalld active: unknown\nfirewalld enabled: disabled\n"
}
ok: [arc-pro-dc08] => {"msg": "Host: arc-pro-dc08\nfirewalld active: unknown\nfirewalld enabled: disabled\n"
}
ok: [arc-pro-dc09] => {"msg": "Host: arc-pro-dc09\nfirewalld active: unknown\nfirewalld enabled: disabled\n"
}

3.4 Disable SELinux

> Run on 10.0.0.13

Create /home/admin/ansible/system/disable-selinux.yml with the following content:

---
- name: Disable SELinux on all nodes
  hosts: cluster
  gather_facts: false
  become: true
  tasks:
    - name: Temporarily disable SELinux
      command: setenforce 0
      ignore_errors: true

    - name: Permanently disable SELinux in config
      lineinfile:
        path: /etc/selinux/config
        regexp: '^SELINUX='
        line: 'SELINUX=disabled'
        backup: yes

    - name: Check running SELinux status
      command: getenforce
      register: selinux_status

    - name: Check permanent SELinux status
      command: grep '^SELINUX=' /etc/selinux/config
      register: selinux_config

    - name: Display SELinux status
      debug:
        msg: |
          Host: {{ inventory_hostname }}
          Running SELinux status: {{ selinux_status.stdout }}
          Configured SELinux status: {{ selinux_config.stdout }}
$ pwd
/home/admin/ansible
$ ansible-playbook system/disable-selinux.yml

TASK [Display SELinux status] ****************************************************************...
ok: [arc-pro-dc01] => {"msg": "Host: arc-pro-dc01\nRunning SELinux status: Disabled\nConfigured SELinux status: SELINUX=disabled\n"
}
ok: [arc-pro-dc04] => {"msg": "Host: arc-pro-dc04\nRunning SELinux status: Disabled\nConfigured SELinux status: SELINUX=disabled\n"
}
ok: [arc-pro-dc03] => {"msg": "Host: arc-pro-dc03\nRunning SELinux status: Disabled\nConfigured SELinux status: SELINUX=disabled\n"
}
ok: [arc-pro-dc02] => {"msg": "Host: arc-pro-dc02\nRunning SELinux status: Disabled\nConfigured SELinux status: SELINUX=disabled\n"
}
ok: [arc-pro-dc05] => {"msg": "Host: arc-pro-dc05\nRunning SELinux status: Disabled\nConfigured SELinux status: SELINUX=disabled\n"
}
ok: [arc-pro-dc06] => {"msg": "Host: arc-pro-dc06\nRunning SELinux status: Disabled\nConfigured SELinux status: SELINUX=disabled\n"
}
ok: [arc-pro-dc07] => {"msg": "Host: arc-pro-dc07\nRunning SELinux status: Disabled\nConfigured SELinux status: SELINUX=disabled\n"
}
ok: [arc-pro-dc08] => {"msg": "Host: arc-pro-dc08\nRunning SELinux status: Disabled\nConfigured SELinux status: SELINUX=disabled\n"
}
ok: [arc-pro-dc09] => {"msg": "Host: arc-pro-dc09\nRunning SELinux status: Disabled\nConfigured SELinux status: SELINUX=disabled\n"
}
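
If getenforce still reports Enforcing or Permissive on some nodes (the sample output above already shows Disabled), the permanent setting only takes full effect after a reboot. A minimal sketch using Ansible's reboot module, assuming it is acceptable to reboot every node at this point:

$ ansible all -b -m reboot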

3.5 Time Synchronization

3.5.1 A dedicated time server is available on the LAN

In production, the ops team usually provides an NTP server that is reachable on the internal network and shared by all company servers, or assigns one per department or project. Assume its address is 11.11.11.11.

Run on 10.0.0.13

Create /home/admin/ansible/system/ntp-sync.yml with the following content:

---
- name: Configure NTP time synchronization using ntpd
  hosts: cluster
  gather_facts: false
  become: true
  vars:
    ntp_server: "11.11.11.11"
  tasks:
    - name: Install NTP package
      yum:
        name: ntp
        state: present

    - name: Configure NTP server
      lineinfile:
        path: /etc/ntp.conf
        regexp: '^server'
        line: "server {{ ntp_server }} iburst"
        state: present
        backup: yes

    - name: Ensure ntpd service is enabled and started
      systemd:
        name: ntpd
        enabled: yes
        state: started

    - name: Force immediate time synchronization
      command: ntpdate -u {{ ntp_server }}
      ignore_errors: yes

    - name: Restart ntpd to apply changes
      systemd:
        name: ntpd
        state: restarted

    - name: Check ntpd service status
      systemd:
        name: ntpd
      register: ntp_status
      ignore_errors: yes

    - name: Show ntpd running status
      debug:
        msg: "Host {{ inventory_hostname }} ntpd is {{ 'running' if ntp_status.status.ActiveState == 'active' else 'stopped' }}"

    - name: Show ntpd enabled status
      debug:
        msg: "Host {{ inventory_hostname }} ntpd is {{ 'enabled' if ntp_status.status.UnitFileState == 'enabled' else 'disabled' }}"

    - name: Check NTP peers
      command: ntpq -p
      register: ntpq_output
      ignore_errors: yes

    - name: Show NTP peers
      debug:
        msg: "{{ ntpq_output.stdout_lines }}"
$ pwd
/home/admin/ansible
$ ansible-playbook system/ntp-sync.yml

If the cluster is allowed to reach the internet, use Aliyun's public NTP server instead; edit /home/admin/ansible/system/ntp-sync.yml:

---
- name: Configure NTP time synchronization using ntpd
  hosts: cluster
  gather_facts: false
  become: true
  vars:
    ntp_server: "ntp.aliyun.com"
  ...

Then run ansible-playbook system/ntp-sync.yml again.
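
As a quick spot check (not part of the original steps), the peer list can also be queried on every node with an ad-hoc command:

$ ansible all -b -a "ntpq -p"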

3.5.2 No time server is available on the LAN

Pick one server in the cluster to act as the time server (for example 10.0.0.13); the other servers then sync their clocks from 10.0.0.13.

Run on 10.0.0.13

Update the Ansible inventory file:

$ pwd
/home/admin/ansible
$ cat <<EOF >> hosts

[ntp_server]
arc-pro-dc01

[ntp_client]
arc-pro-dc02
arc-pro-dc03
arc-pro-dc04
arc-pro-dc05
arc-pro-dc06
arc-pro-dc07
arc-pro-dc08
arc-pro-dc09
EOF
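
As a sanity check (optional), confirm the new groups resolve to the intended hosts:

$ ansible ntp_server --list-hosts
$ ansible ntp_client --list-hosts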

Create /home/admin/ansible/system/ntp-server.yml with the following content:

---
- name: Deploy and configure NTP server on management node
  hosts: ntp_server
  gather_facts: false
  become: true
  vars:
    allow_ip_scope: "10.0.0.0"
    mask: "255.255.255.0"
  tasks:
    - name: Install ntp package
      yum:
        name: ntp
        state: present

    - name: Enable and start ntpd service
      systemd:
        name: ntpd
        state: started
        enabled: yes

    - name: Configure ntp server
      lineinfile:
        path: /etc/ntp.conf
        regexp: '^restrict default'
        line: 'restrict default nomodify notrap nopeer noquery'
        state: present

    - lineinfile:
        path: /etc/ntp.conf
        regexp: '^server'
        line: 'server 127.127.1.0'
        state: present

    - lineinfile:
        path: /etc/ntp.conf
        line: 'fudge 127.127.1.0 stratum 10'
        state: present

    - lineinfile:
        path: /etc/ntp.conf
        line: "restrict {{ allow_ip_scope }} mask {{ mask }} nomodify notrap"
        state: present

    - name: restart ntpd
      systemd:
        name: ntpd
        state: restarted

Create /home/admin/ansible/system/ntp-client.yml with the following content:

---
- name: Configure NTP client on all nodes
  hosts: ntp_client
  gather_facts: false
  become: true
  vars:
    ntp_server: "10.0.0.13"
  tasks:
    - name: Install ntp package
      yum:
        name: ntp
        state: present

    - name: Enable and start ntpd service
      systemd:
        name: ntpd
        state: started
        enabled: yes

    - name: Configure ntp client to sync with management node
      lineinfile:
        path: /etc/ntp.conf
        regexp: '^server {{ ntp_server }}'
        line: "server {{ ntp_server }} iburst"
        state: present

    - lineinfile:
        path: /etc/ntp.conf
        regexp: '^server 127\.127\.1\.0'
        line: 'server 127.127.1.0'
        state: present

    - lineinfile:
        path: /etc/ntp.conf
        regexp: '^fudge 127\.127\.1\.0'
        line: 'fudge 127.127.1.0 stratum 10'
        state: present

    - lineinfile:
        path: /etc/ntp.conf
        line: "restrict {{ ntp_server }} nomodify notrap nopeer noquery"
        state: present

    - name: stop ntpd
      systemd:
        name: ntpd
        state: stopped

    - name: ntpdate
      shell: "ntpdate {{ ntp_server }}"

    - name: start ntpd
      systemd:
        name: ntpd
        state: started

Create /home/admin/ansible/system/check-ntp-sync.yml with the following content:

---
- name: Check NTP status on all nodes
  hosts: cluster
  gather_facts: false
  become: true
  tasks:
    - name: Check NTP peers
      command: ntpq -p
      register: ntpq_output
      ignore_errors: yes

    - name: Show NTP peers
      debug:
        msg: "{{ ntpq_output.stdout_lines }}"
$ pwd
/home/admin/ansible
$ ansible-playbook system/ntp-server.yml
$ ansible-playbook system/ntp-client.yml
$ ansible-playbook system/check-ntp-sync.yml

TASK [Show NTP peers] ***************************************
# On arc-pro-dc01, *LOCAL(0) means the node syncs from its own local clock
ok: [arc-pro-dc01] => {"msg": ["     remote           refid      st t when poll reach   delay   offset  jitter", "==============================================================================", "*LOCAL(0)        .LOCL.          10 l    4   64  377    0.000    0.000   0.000"]
}
# On the other servers the output shows *arc-pro-dc01; the * in front of arc-pro-dc01 means that node syncs from arc-pro-dc01
# Some servers may still show *LOCAL(0); that is normal, and it can take several minutes (sometimes longer) before they switch over to arc-pro-dc01
ok: [arc-pro-dc02] => {"msg": ["     remote           refid      st t when poll reach   delay   offset  jitter", "==============================================================================", "*arc-pro-dc01    LOCAL(0)        11 u   40   64   17    0.161   22.831  23.328", " LOCAL(0)        .LOCL.          10 l  179   64   14    0.000    0.000   0.000"]
}
ok: [arc-pro-dc03] => {"msg": ["     remote           refid      st t when poll reach   delay   offset  jitter", "==============================================================================", "*arc-pro-dc01    LOCAL(0)        11 u   42   64   17    0.234  -25.571  25.300", " LOCAL(0)        .LOCL.          10 l  179   64   14    0.000    0.000   0.000"]
}
ok: [arc-pro-dc04] => {"msg": ["     remote           refid      st t when poll reach   delay   offset  jitter", "==============================================================================", "*arc-pro-dc01    LOCAL(0)        11 u   38   64   17    0.222   -3.610   6.001", " LOCAL(0)        .LOCL.          10 l  179   64   14    0.000    0.000   0.000"]
}
ok: [arc-pro-dc05] => {"msg": ["     remote           refid      st t when poll reach   delay   offset  jitter", "==============================================================================", "*arc-pro-dc01    LOCAL(0)        11 u   42   64   17    0.357  -43.293  37.725", " LOCAL(0)        .LOCL.          10 l   51   64   17    0.000    0.000   0.000"]
}
ok: [arc-pro-dc06] => {"msg": ["     remote           refid      st t when poll reach   delay   offset  jitter", "==============================================================================", "*arc-pro-dc01    LOCAL(0)        11 u   38   64   17    0.270   23.929  24.081", " LOCAL(0)        .LOCL.          10 l  180   64   14    0.000    0.000   0.000"]
}
ok: [arc-pro-dc07] => {"msg": ["     remote           refid      st t when poll reach   delay   offset  jitter", "==============================================================================", "*arc-pro-dc01    LOCAL(0)        11 u   42   64   17    0.356   11.410  12.059", " LOCAL(0)        .LOCL.          10 l   51   64   17    0.000    0.000   0.000"]
}
ok: [arc-pro-dc08] => {"msg": ["     remote           refid      st t when poll reach   delay   offset  jitter", "==============================================================================", "*arc-pro-dc01    LOCAL(0)        11 u   41   64   17    0.214   -5.287   6.890", " LOCAL(0)        .LOCL.          10 l   51   64   17    0.000    0.000   0.000"]
}
ok: [arc-pro-dc09] => {"msg": ["     remote           refid      st t when poll reach   delay   offset  jitter", "==============================================================================", "*arc-pro-dc01    LOCAL(0)        11 u   43   64   17    0.310  -10.642  11.711", " LOCAL(0)        .LOCL.          10 l   51   64   17    0.000    0.000   0.000"]
}

# Confirm the clocks are in sync
$ for host in arc-pro-dc0{1..9}; do ssh $host "date +'%F %T'"; done
2025-09-01 14:27:07
2025-09-01 14:27:07
2025-09-01 14:27:07
2025-09-01 14:27:07
2025-09-01 14:27:07
2025-09-01 14:27:07
2025-09-01 14:27:07
2025-09-01 14:27:07
2025-09-01 14:27:07

# If the time differs too much from the real (internet) time, set it manually:
$ for host in arc-pro-dc0{1..9}; do ssh $host "sudo date -s '2025-09-01 15:34:30'"; done

3.6 Set up a LAN-local YUM repository (optional)

Use 10.0.0.13 as the YUM repository server for the LAN.

Attach the installation media to the server (in production this step is handled by the ops team).


Run on 10.0.0.13

Mount the ISO:

$ pwd
/home/admin/ansible
$ sudo ls -l /dev | grep sr
lrwxrwxrwx 1 root root           3 Sep  1 14:56 cdrom -> sr0
srw-rw-rw- 1 root root           0 Sep  1 14:56 log
brw-rw---- 1 root cdrom    11,   0 Sep  1 14:56 sr0
$ sudo mkdir -p /mnt/centos
$ sudo su -
[root@arc-dev-dc01 ~]# echo -e "/dev/sr0\t/mnt/centos\tiso9660\tloop,defaults\t0\t0" >> /etc/fstab
[root@arc-dev-dc01 ~]# exit
$ sudo mount -a
$ sudo mkdir -p /etc/yum.repos.d/backup
$ sudo mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/backup/
$ sudo tee /etc/yum.repos.d/local.repo > /dev/null << 'EOF'
[local]
name=CentOS7 ISO Local Repo
baseurl=file:///mnt/centos
enabled=1
gpgcheck=0
EOF
$ sudo yum clean all
$ sudo yum makecache
$ sudo yum install -y httpd
$ sudo systemctl enable httpd --now
$ cd /var/www/html
$ sudo ln -s /mnt/centos centos7
$ sudo sed -i 's|^baseurl=file:///mnt/centos|baseurl=http://10.0.0.13/centos7/|' /etc/yum.repos.d/local.repo
$ sudo yum clean all
$ sudo yum makecache

Create /home/admin/ansible/system/local-repo.yml with the following content:

---
- name: Distribute the local YUM repo config
  hosts: "cluster:!arc-pro-dc01"
  gather_facts: false
  become: yes
  tasks:
    - name: Create the backup directory
      command: mkdir -p /etc/yum.repos.d/backup

    - name: Back up existing repo files
      shell: mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/backup
      ignore_errors: yes

    - name: Distribute local.repo to the target nodes
      copy:
        src: /etc/yum.repos.d/local.repo
        dest: /etc/yum.repos.d/local.repo
        owner: root
        group: root
        mode: '0644'

    - name: make cache
      shell: yum clean all && yum makecache
      ignore_errors: yes

Run:

$ pwd
/home/admin/ansible
$ ansible-playbook system/local-repo.yml
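
As an optional verification, an ad-hoc command can confirm the local repo is usable on the managed nodes:

$ ansible 'cluster:!arc-pro-dc01' -b -a "yum repolist"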

3.7 Disable swap

> Run on 10.0.0.13

Create /home/admin/ansible/system/disable-swap.yml with the following content:

---
- name: Disable swap on all cluster nodes
  hosts: cluster
  gather_facts: false
  become: yes
  tasks:
    - name: Turn off all swap immediately
      command: swapoff -a
      register: swapoff_result
      changed_when: swapoff_result.rc == 0

    - name: Backup fstab before modifying
      copy:
        src: /etc/fstab
        dest: "/etc/fstab.backup"
        remote_src: yes   # back up each node's own fstab, not the controller's
        owner: root
        group: root
        mode: 0644

    - name: Comment out swap entries in fstab to disable on boot
      lineinfile:
        path: /etc/fstab
        regexp: '^([^#].*\s+swap\s+.*)$'
        line: '# \1'
        backrefs: yes
$ pwd
/home/admin/ansible
$ ansible-playbook system/disable-swap.yml
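
As an optional check, confirm swap now reports 0 on every node:

$ ansible all -a "free -m"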
