1. Prerequisites
System environment
OS: 4 × Rocky Linux 9.6 x86_64 minimal
Memory: 12 GB (12288 MB)
CPU: 6 virtual cores
Storage: 150 GB
Components:
① Apache Ambari 3.0.0
② Apache Bigtop 3.3.0
Download links:
Rocky Linux:
https://rockylinux.org/zh-CN/download
Apache Ambari:
https://apache-ambari.com/dist/ambari/3.0.0/rocky9/
Apache Bigtop:
https://apache-ambari.com/dist/bigtop/3.3.0/rocky9/
Server settings
User: hadoop
Hostnames: hadoop-app, hadoop-node1, hadoop-node2, hadoop-node3
# Set this machine's hostname (replace with the name for each node)
sudo hostnamectl set-hostname hadoop-node1
Configure /etc/hosts
# Install nano (or another text editor), then open /etc/hosts with sudo
sudo nano /etc/hosts
# Append the following lines to the file
# Hadoop cluster host configuration
192.168.122.10 hadoop-app
192.168.122.11 hadoop-node1
192.168.122.12 hadoop-node2
192.168.122.13 hadoop-node3
# Or run the following command directly
echo "# Hadoop cluster host configuration
192.168.122.10 hadoop-app
192.168.122.11 hadoop-node1
192.168.122.12 hadoop-node2
192.168.122.13 hadoop-node3" | sudo tee -a /etc/hosts > /dev/null
Disable swap
sudo nano /etc/fstab
# Comment out the swap entry:
# /dev/mapper/rl-swap none swap defaults 0 0
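The fstab edit can also be scripted. Below is a minimal sketch (the `comment_swap` helper is a hypothetical convenience; the rl-swap entry is the one assumed by this guide), rehearsed on a scratch copy first; once it looks right, run it against /etc/fstab with sudo, then `sudo swapoff -a` to disable swap immediately without a reboot.

```shell
# Sketch: comment out every active swap line in an fstab-style file.
# Rehearsed on a temporary copy; point it at /etc/fstab (with sudo)
# once the result looks right.
comment_swap() {
    sed -i -E 's|^([^#].*[[:space:]]swap[[:space:]].*)$|#\1|' "$1"
}

tmp=$(mktemp)
printf '/dev/mapper/rl-swap none swap defaults 0 0\n' > "$tmp"
comment_swap "$tmp"
cat "$tmp"   # → #/dev/mapper/rl-swap none swap defaults 0 0
```

The regex only touches uncommented lines whose filesystem type field is `swap`, so other fstab entries are left alone.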
Security settings (development mode)
# Disable and stop firewalld
systemctl disable firewalld
systemctl stop firewalld
# Temporarily disable SELinux
setenforce 0
# Permanently disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
Configure passwordless SSH
# Log in as root on the server machine
sudo su -
# Generate an SSH key if one does not exist
if [ ! -f ~/.ssh/id_rsa ]; then
    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
fi
# Run the following configuration on all nodes
# Edit sshd_config
vi /etc/ssh/sshd_config
# Ensure these lines are set
PasswordAuthentication yes
PermitRootLogin yes
# Restart the SSH service
systemctl restart sshd
# Then, on the server, copy the key to each node
ssh-copy-id -o StrictHostKeyChecking=no root@hadoop-app
ssh-copy-id -o StrictHostKeyChecking=no root@hadoop-node1
ssh-copy-id -o StrictHostKeyChecking=no root@hadoop-node2
ssh-copy-id -o StrictHostKeyChecking=no root@hadoop-node3
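After copying the keys, a quick loop can confirm that every node really accepts passwordless root login; `BatchMode=yes` makes ssh fail instead of prompting for a password. The host list below matches the names assumed throughout this guide.

```shell
# Check which nodes accept passwordless root SSH.
# BatchMode=yes forbids password prompts, so failures surface immediately.
for host in hadoop-app hadoop-node1 hadoop-node2 hadoop-node3; do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "root@$host" true 2>/dev/null; then
        echo "$host: passwordless SSH OK"
    else
        echo "$host: passwordless SSH FAILED"
    fi
done
```

Any host reported as FAILED needs its ssh-copy-id step (or its sshd_config) revisited before continuing.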
Basic dependencies and repository configuration
Run the following on all nodes:
# Update package lists and upgrade packages
dnf update -y
# Install basic utilities
dnf install -y sudo openssh-server openssh-clients which iproute net-tools less vim-enhanced
dnf install -y wget curl tar unzip git
# Edit the Rocky-Devel repository configuration
vi /etc/yum.repos.d/rocky-devel.repo
# There are two possible scenarios:
# 1. If all lines are commented (start with #), uncomment all lines
# 2. If you see "enabled=0", change it to "enabled=1"
# Usually it is scenario 2: enable the first repository
# After editing, verify the repository is enabled
dnf repolist | grep devel
# Install OpenJDK 8
dnf install -y java-1.8.0-openjdk-devel
# Verify the Java installation
java -version
# Install chrony (an NTP implementation)
dnf install -y chrony
# Start and enable the chronyd service
systemctl start chronyd
systemctl enable chronyd
# Verify time synchronization
chronyc sources
2. Create a local repository
This is not a Docker-style registry: it is a plain file tree served over HTTP by nginx.
The following steps are performed on the hadoop-app machine.
Steps:
# 1. Install the createrepo package
sudo dnf install createrepo -y
# 2. Create the local repository path and set its permissions
sudo mkdir -p /var/www/html/ambari-repo
sudo chmod -R 755 /var/www/html/ambari-repo
# 3. Download the RPM packages
# For Rocky Linux 9:
cd /var/www/html/ambari-repo
wget -r -np -nH --cut-dirs=4 --reject 'index.html*' https://www.apache-ambari.com/dist/ambari/3.0.0/rocky9/
wget -r -np -nH --cut-dirs=4 --reject 'index.html*' https://www.apache-ambari.com/dist/bigtop/3.3.0/rocky9/
# 4. Create the repository metadata
cd /var/www/html/ambari-repo
sudo createrepo .
# 5. Create the local repository configuration file
# This step must be run on every node
# For Rocky Linux 9:
sudo tee /etc/yum.repos.d/ambari.repo << EOF
[ambari]
name=Ambari Repository
baseurl=http://hadoop-app/ambari-repo
gpgcheck=0
enabled=1
EOF
# 6. Clean and rebuild the yum cache
sudo dnf clean all
sudo dnf makecache
3. Publish the local repository with nginx
On hadoop-app:
sudo dnf install nginx -y
# Configure the repository server
sudo tee /etc/nginx/conf.d/ambari-repo.conf << EOF
server {
    listen 80;
    server_name _;
    root /var/www/html/;
    autoindex on;
    location / {
        try_files \$uri \$uri/ =404;
    }
}
EOF
# Start and enable nginx
sudo systemctl start nginx
sudo systemctl enable nginx
# Test that it is serving correctly
curl -XGET http://hadoop-app/
# If the output looks like the following, it is working:
<html>
<head><title>Index of /</title></head>
<body>
<h1>Index of /</h1><hr><pre><a href="../">../</a>
<a href="ambari-repo/">ambari-repo/</a> 22-Sep-2025 02:20 -
</pre><hr></body>
</html>
4. Install Ambari
Install Ambari's basic dependencies plus the server and agent packages.
Run the following on all nodes:
yum install -y python3-distro
yum install -y java-17-openjdk-devel
yum install -y java-1.8.0-openjdk-devel
yum install -y ambari-agent
Run the following on hadoop-app:
yum install -y python3-psycopg2
yum install -y ambari-server
Install MySQL as the backend database
yum -y install https://dev.mysql.com/get/mysql80-community-release-el8-1.noarch.rpm
yum -y install mysql-server
systemctl start mysqld.service
systemctl enable mysqld.service
# Find the temporary root password in the MySQL log
grep 'temporary password' /var/log/mysqld.log
2025-09-22T03:39:48.418689Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: e(Rndh=DX6)6
# Initialize the database using the mysql shell or another database tool
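Copying the password by eye from that log line is error-prone. The helper below (a hypothetical convenience, not part of MySQL) pulls out everything after "root@localhost: ", rehearsed here on a copy of the sample log line above.

```shell
# Sketch: extract the generated temporary root password from a
# MySQL 8 log file (the format follows the sample line above).
temp_mysql_pw() {
    grep 'temporary password' "$1" | sed 's/.*root@localhost: //'
}

tmp=$(mktemp)
echo '2025-09-22T03:39:48.418689Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: e(Rndh=DX6)6' > "$tmp"
temp_mysql_pw "$tmp"   # → e(Rndh=DX6)6
```

On the real host this could be used as `mysql -uroot -p"$(temp_mysql_pw /var/log/mysqld.log)"`.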
Change the root password
ALTER USER 'root'@'localhost' IDENTIFIED BY '***';
FLUSH PRIVILEGES;
Initialize the database
Create users and schemas
-- Create the Ambari user and grant privileges
CREATE USER 'ambari'@'localhost' IDENTIFIED BY 'ambari';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'localhost';
CREATE USER 'ambari'@'%' IDENTIFIED BY 'ambari';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'%';
-- Create the required databases
CREATE DATABASE ambari CHARACTER SET utf8 COLLATE utf8_general_ci;
CREATE DATABASE hive;
CREATE DATABASE ranger;
CREATE DATABASE rangerkms;
-- Create service users
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%';
CREATE USER 'ranger'@'%' IDENTIFIED BY 'ranger';
GRANT ALL PRIVILEGES ON *.* TO 'ranger'@'%' WITH GRANT OPTION;
CREATE USER 'rangerkms'@'%' IDENTIFIED BY 'rangerkms';
GRANT ALL PRIVILEGES ON rangerkms.* TO 'rangerkms'@'%';
FLUSH PRIVILEGES;
Run the Ambari database initialization script
mysql -uambari -pambari ambari < /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql
# Download the MySQL JDBC jar; keep it, as Hadoop and other components will need it later
wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar \
    -O /usr/share/java/mysql-connector-java.jar
# Set up the JDBC driver
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
# Configure MySQL 8 compatibility
echo "server.jdbc.url=jdbc:mysql://hadoop-app:3306/ambari?useSSL=true&verifyServerCertificate=false&enabledTLSProtocols=TLSv1.2" \
    >> /etc/ambari-server/conf/ambari.properties
# Configure the Ambari server
ambari-server setup -s \
    -j /usr/lib/jvm/java-1.8.0-openjdk \
    --ambari-java-home /usr/lib/jvm/java-17-openjdk \
    --database=mysql \
    --databasehost=hadoop-app \
    --databaseport=3306 \
    --databasename=ambari \
    --databaseusername=ambari \
    --databasepassword=ambari
# Run on all nodes: point each agent at the Ambari server
sed -i "s/hostname=.*/hostname=hadoop-app/" /etc/ambari-agent/conf/ambari-agent.ini
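That sed edit can be rehearsed on a scratch copy before touching every node's /etc/ambari-agent/conf/ambari-agent.ini; the `[server]` stanza below is a minimal stand-in for the real file, and the helper name is a hypothetical convenience.

```shell
# Sketch: rewrite the hostname= key that tells an Ambari agent where
# the server lives, tried on a minimal stand-in for ambari-agent.ini.
set_ambari_server_host() {
    sed -i "s/^hostname=.*/hostname=$2/" "$1"
}

tmp=$(mktemp)
printf '[server]\nhostname=localhost\nurl_port=8440\n' > "$tmp"
set_ambari_server_host "$tmp" hadoop-app
grep '^hostname=' "$tmp"   # → hostname=hadoop-app
```

Anchoring the pattern on `^hostname=` avoids accidentally rewriting other keys that merely contain the substring `hostname=`.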
Start Ambari
# On hadoop-app
ambari-server start
# On each node
ambari-agent start
Ambari Web UI: http://hadoop-app:8080
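The server takes a while to come up after `ambari-server start`. A small polling helper (a hypothetical convenience; the URL, attempt count, and delay are assumptions) can wait for the web UI instead of refreshing the browser by hand:

```shell
# Sketch: poll a URL until it responds or the attempts run out.
# Usage: wait_for_url URL [ATTEMPTS] [DELAY_SECONDS]
wait_for_url() {
    url=$1; tries=${2:-30}; delay=${3:-5}; i=0
    while [ "$i" -lt "$tries" ]; do
        curl -sf "$url" > /dev/null 2>&1 && return 0
        i=$((i+1))
        sleep "$delay"
    done
    return 1
}

# Example (run on a machine that can reach hadoop-app):
# wait_for_url http://hadoop-app:8080/ 30 5 && echo "Ambari UI is up"
```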
Installation complete.
Troubleshooting
1. "Error: GPG check FAILED" when installing MySQL
See the official MySQL documentation:
https://dev.mysql.com/doc/refman/8.4/en/checking-gpg-signature.html
Solution:
sudo rpm --import https://repo.mysql.com/RPM-GPG-KEY-mysql-2023
# Or save the public key from that page to the server, then import it locally
sudo rpm --import {local_publickey_path}
# After that, the installation can proceed
2. Cannot log in to MySQL: null, message from server: "Host '_gateway' is not allowed to connect to this MySQL server"
Solution:
# Add a remote account with host '%'
mysql>
CREATE USER 'root'@'%' IDENTIFIED BY 'leo130';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%';
FLUSH PRIVILEGES;
systemctl restart mysqld.service
3. MySQL reports password-policy errors when creating users or databases
Solution:
Add the following to /etc/my.cnf, then restart mysqld (systemctl restart mysqld.service):
default-authentication-plugin=mysql_native_password
validate_password.policy=LOW
validate_password.length=1
validate_password.number_count=0
validate_password.special_char_count=0