1. Environment preparation
2. Setting up Pinpoint
2.1 Install ansible
2.2 Configure ansible
2.3 Mount the disk
3. Fetching the components
3.1 Download zookeeper
3.2 Download hadoop
3.3 Download hbase
3.4 Download the jdk (open this URL manually)
3.5 Fetch pinpoint
3.6 Fetch tomcat
4. Distributing and extracting the components
4.1 Distribute the software
4.2 Extract zookeeper
4.3 Extract hadoop
4.4 Extract hbase
4.5 Install the jdk and jsvc and set environment variables
5. Configuring zookeeper
5.1 Copy the config file
5.2 Edit the config file
5.3 Apply the config in bulk
5.4 Check the status of every node
6. Configuring hadoop
6.1 Back up the config files
6.2 The hadoop-env.sh variables file
6.3 Configure hdfs-site.xml
6.4 Configure yarn-site.xml
6.5 Configure mapred-site.xml
6.6 Configure core-site.xml
6.7 Configure slaves
6.8 Distribute the config files
6.9 Format hadoop
6.10 Check the listening ports
6.11 Check the java processes
7. Configuring HBase
7.1 Edit the hbase environment file
7.2 Edit the hbase service config file
7.3 Backup-master and regionserver files
7.4 Distribute, start, and initialize hbase
8. Configuring haproxy
8.1 Install haproxy
8.2 Configure haproxy
8.3 Check the status
9. Configuring pinpoint
9.1 Extract tomcat
9.2 Deploy pinpoint-collector
9.3 Edit the config files
9.4 Edit the tomcat port config
9.5 Edit server.xml
9.6 Start the tomcat service
10. Configuring mysql
11. Deploying pinpoint-web
11.1 Edit the tomcat port config
11.2 Edit server.xml
11.3 Deploy pinpoint-web
11.4 Edit the config files
11.5 Configure the database connection
11.6 Start the tomcat service
11.7 Distribute tomcat and the corresponding config files to the other nodes and start the services
12. haproxy configuration for pinpoint-web
12.1 Edit the haproxy config file
12.2 Restart haproxy
12.3 Check the ports
13. pinpoint-agent
13.1 Copy the pinpoint agent to the client machine
13.2 Edit catalina.sh
13.3 Start tomcat
1. Environment preparation
Six servers:
haproxy 192.168.15.24
pinpoint-1 192.168.15.30
pinpoint-2 192.168.15.31
pinpoint-3 192.168.15.32
tomcat 192.168.6.150
mysql 192.168.15.253
2. Setting up Pinpoint
2.1 Install ansible
yum install ansible -y
2.2 Configure ansible
# Configure the hosts file
vim /etc/hosts
192.168.15.30 pinpoint-1
192.168.15.31 pinpoint-2
192.168.15.32 pinpoint-3
# Configure the ansible inventory
vim /etc/ansible/hosts
[pinpoint]
pinpoint-1
pinpoint-2
pinpoint-3
# Run on every node to set up mutual key authentication
ssh-keygen
cd /root/.ssh
for (( i=1 ; i<4 ; i++ ));do ssh-copy-id -i id_rsa.pub root@pinpoint-$i;done
# Set the hostnames
for (( i=1 ; i<4 ; i++ ));do ansible pinpoint-$i -m hostname -a "name=pinpoint-$i" ;done
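Ansible's builtin ping module (a connectivity test over ssh, not ICMP) gives a quick confirmation that the inventory and the new hostnames work before continuing; a minimal check:
ansible pinpoint -m ping
#expect "pong" from every node, then verify the hostnames
ansible pinpoint -m shell -a "hostname"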
2.3 Mount the disk
vim disk.sh
# Create one primary partition spanning /dev/vdc, non-interactively
echo -e "n\np\n\n\n\nw\n "|fdisk /dev/vdc
mkfs.ext4 /dev/vdc1
# Create the mount point and append an fstab entry keyed on the partition UUID
mkdir -p /pinpoint
echo -e "`blkid|grep vdc1|awk '{print $2}' |tr -d '\"'`\t/pinpoint\text4\tdefaults\t0 0" >> /etc/fstab
mount -a
findmnt
ansible all -m copy -a "src=disk.sh dest=/root"
ansible all -m shell -a "bash disk.sh"
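A quick sanity check that the partition actually mounted on every node (a sketch using the same ansible pattern):
ansible all -m shell -a "df -h /pinpoint"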
3. Fetching the components
mkdir /root/buxunxian/hbase
cd /root/buxunxian/hbase
3.1 Download zookeeper
wget http://mirrors.hust.edu.cn/apache/zookeeper/zookeeper-3.4.12/zookeeper-3.4.12.tar.gz
3.2 Download hadoop
wget https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/hadoop-2.7.6/hadoop-2.7.6.tar.gz
3.3 Download hbase
wget https://mirrors.tuna.tsinghua.edu.cn/apache/hbase/2.0.0/hbase-2.0.0-bin.tar.gz
3.4 Download the jdk (open this URL manually)
http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
3.5 Fetch pinpoint
mkdir -p /pinpoint/pinpoint
git clone https://github.com/naver/pinpoint.git /pinpoint/pinpoint
cd /pinpoint/pinpoint
git checkout 1.7.3
# See the official documentation for build instructions
3.6 Fetch tomcat
ansible pinpoint -m shell -a "wget http://mirrors.shu.edu.cn/apache/tomcat/tomcat-8/v8.5.31/bin/apache-tomcat-8.5.31.tar.gz"
4. Distributing and extracting the components
4.1 Distribute the software
ansible pinpoint -m shell -a "mkdir /root/buxunxian/"
ansible pinpoint -m copy -a "src=/root/buxunxian/. dest=/root/buxunxian/"
4.2 Extract zookeeper
# Extract zookeeper and rename the directory
ansible pinpoint -m shell -a "tar xvf /root/buxunxian/hbase/zookeeper-3.4.12.tar.gz -C /pinpoint"
ansible pinpoint -m shell -a "mv /pinpoint/zookeeper-3.4.12 /pinpoint/zookeeper"
4.3 Extract hadoop
# Extract hadoop and rename the directory
ansible pinpoint -m shell -a "tar xvf /root/buxunxian/hbase/hadoop-2.7.6.tar.gz -C /pinpoint"
ansible pinpoint -m shell -a "mv /pinpoint/hadoop-2.7.6 /pinpoint/hadoop"
4.4 Extract hbase
# Extract hbase and rename the directory
ansible pinpoint -m shell -a "tar xvf /root/buxunxian/hbase/hbase-2.0.0-bin.tar.gz -C /pinpoint"
ansible pinpoint -m shell -a "mv /pinpoint/hbase-2.0.0 /pinpoint/hbase"
4.5 Install the jdk and jsvc and set environment variables
ansible pinpoint -m shell -a "yum localinstall /root/buxunxian/hbase/*.rpm -y"
vim java.sh
export JAVA_HOME=/usr
export ZOOKEEPER_HOME=/pinpoint/zookeeper
export HADOOP_HOME=/pinpoint/hadoop
export HBASE_HOME=/pinpoint/hbase
export JSVC_HOME=/usr
export PATH=$PATH:$HBASE_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$ZOOKEEPER_HOME/bin:$JAVA_HOME/bin:$JSVC_HOME/bin
ansible pinpoint -m copy -a "src=java.sh dest=/etc/profile.d"
ansible pinpoint -m shell -a "source /etc/profile.d/java.sh"
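Note that sourcing the file through ansible only affects that single remote shell; new login shells pick up /etc/profile.d automatically, while ansible's shell module runs non-login shells, so wrap a command in bash -lc if a binary from these paths is not found. A minimal verification sketch:
ansible pinpoint -m shell -a "bash -lc 'java -version 2>&1; echo \$HBASE_HOME'"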
5. Configuring zookeeper
5.1 Copy the config file
cd /pinpoint/zookeeper/conf
cp zoo_sample.cfg zoo.cfg
5.2 Edit the config file
vim zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/pinpoint/data/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=pinpoint-1:2888:3888
server.2=pinpoint-2:2888:3888
server.3=pinpoint-3:2888:3888
tickTime: the heartbeat interval, in milliseconds, between ZooKeeper servers and between clients and servers; one heartbeat is sent every tickTime.
dataDir: the directory where ZooKeeper stores its snapshots; by default the transaction logs are written here as well.
clientPort: the port the ZooKeeper server listens on for client connections.
initLimit: the maximum number of tickTime intervals a Follower (here "client" means a cluster member connecting to the Leader, not an end-user client) may take to finish its initial sync with the Leader. With initLimit=10 and tickTime=2000 as configured above, a Follower that has not reported back within 10*2000 ms = 20 seconds is considered failed.
syncLimit: the maximum number of tickTime intervals a request/acknowledgement exchange between Leader and Follower may take; with syncLimit=5 that is 5*2000 ms = 10 seconds.
server.A=B:C:D: A is the server's id number; B is its hostname or IP address; C is the port this server uses to exchange data with the cluster Leader; D is the port used to hold a new leader election if the current Leader fails. In a pseudo-cluster (all instances on one host) B is identical for every entry, so each ZooKeeper instance must be assigned distinct C and D ports.
5.3 Apply the config in bulk
# Copy the file to every node in the cluster
ansible pinpoint -m copy -a "src=zoo.cfg dest=/pinpoint/zookeeper/conf/"
# Create the data directory
ansible pinpoint -m shell -a "mkdir /pinpoint/data/zookeeper -p"
# Write each node's myid (the echoed number n matches server.n in zoo.cfg)
for (( i=1 ;i < 4 ; i++ ));do ansible pinpoint-$i -m shell -a "echo $i > /pinpoint/data/zookeeper/myid" ;done
# Start the zookeeper service on every node via ansible
ansible pinpoint -m shell -a "zkServer.sh start"
5.4 Check the status of every node
ansible pinpoint -m shell -a "jps;ss -tnl;zkServer.sh status"
pinpoint-1 | SUCCESS | rc=0 >>
1845 QuorumPeerMain
2238 Jps
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:22 *:*
LISTEN 0 50 ::ffff:192.168.15.30:3888 :::*
LISTEN 0 128 :::22 :::*
LISTEN 0 50 :::54620 :::*
LISTEN 0 50 :::2181 :::*
Mode: follower
Using config: /pinpoint/zookeeper/bin/../conf/zoo.cfg
pinpoint-2 | SUCCESS | rc=0 >>
26634 QuorumPeerMain
26879 Jps
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:22 *:*
LISTEN 0 50 ::ffff:192.168.15.31:3888 :::*
LISTEN 0 128 :::22 :::*
LISTEN 0 50 :::2181 :::*
LISTEN 0 50 :::55462 :::*
Mode: follower
Using config: /pinpoint/zookeeper/bin/../conf/zoo.cfg
pinpoint-3 | SUCCESS | rc=0 >>
5349 QuorumPeerMain
5622 Jps
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:22 *:*
LISTEN 0 50 ::ffff:192.168.15.32:2888 :::*
LISTEN 0 50 :::48142 :::*
LISTEN 0 50 ::ffff:192.168.15.32:3888 :::*
LISTEN 0 128 :::22 :::*
LISTEN 0 50 :::2181 :::*
Mode: leader
Using config: /pinpoint/zookeeper/bin/../conf/zoo.cfg
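Besides zkServer.sh status, ZooKeeper answers four-letter-word commands on the client port (enabled by default in 3.4.x), which is handy for scripted health checks; a minimal sketch, assuming nc is installed:
#each node should answer "imok"
for (( i=1 ; i<4 ; i++ ));do echo ruok | nc pinpoint-$i 2181 ; echo ;done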
6. Configuring hadoop
6.1 Back up the config files
cd /pinpoint/hadoop/etc/hadoop
cp hdfs-site.xml{,.bak}
cp yarn-site.xml{,.bak}
cp mapred-site.xml.template mapred-site.xml
cp hadoop-env.sh{,.bak}
6.2 The hadoop-env.sh variables file
vim hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_171-amd64
export JRE_HOME=/usr/java/jdk1.8.0_171-amd64/jre
export JSVC_HOME=/usr/bin
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export HDFS_NAMENODE_USER=root
6.3 Configure hdfs-site.xml
vim hdfs-site.xml
<configuration>
<!-- The hdfs nameservice is ns1; it must match core-site.xml -->
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>ns1</value>
</property>
<!-- ns1 has three NameNodes: nn1, nn2, nn3 -->
<property>
<name>dfs.ha.namenodes.ns1</name>
<value>nn1,nn2,nn3</value>
</property>
<!-- RPC address of nn1 -->
<property>
<name>dfs.namenode.rpc-address.ns1.nn1</name>
<value>pinpoint-1:9000</value>
</property>
<!-- HTTP address of nn1 -->
<property>
<name>dfs.namenode.http-address.ns1.nn1</name>
<value>pinpoint-1:50070</value>
</property>
<!-- RPC address of nn2 -->
<property>
<name>dfs.namenode.rpc-address.ns1.nn2</name>
<value>pinpoint-2:9000</value>
</property>
<property>
<name>dfs.namenode.http-address.ns1.nn2</name>
<value>pinpoint-2:50070</value>
</property>
<!-- RPC address of nn3 -->
<property>
<name>dfs.namenode.rpc-address.ns1.nn3</name>
<value>pinpoint-3:9000</value>
</property>
<property>
<name>dfs.namenode.http-address.ns1.nn3</name>
<value>pinpoint-3:50070</value>
</property>
<!-- Where the NameNode metadata is stored on the JournalNodes -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://pinpoint-1:8485;pinpoint-2:8485;pinpoint-3:8485/ns1</value>
</property>
<!-- Where each JournalNode keeps its data on local disk -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/pinpoint/data/zookeeper/hadoop/journaldata</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/pinpoint/data/zookeeper/hadoop/dfs/data</value>
</property>
<!-- Enable automatic NameNode failover -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<!-- Failover proxy provider implementation -->
<property>
<name>dfs.client.failover.proxy.provider.ns1</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing methods; separate multiple methods with newlines, one per line -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>
sshfence
shell(/bin/true)
</value>
</property>
<!-- sshfence requires passwordless ssh -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
<!-- sshfence connection timeout -->
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
</configuration>
6.4 Configure yarn-site.xml
vim yarn-site.xml
<configuration>
<!-- Enable ResourceManager HA -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<!-- RM cluster id -->
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>yrc</value>
</property>
<!-- RM ids -->
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2,rm3</value>
</property>
<!-- Hostname of each RM -->
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>pinpoint-1</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>pinpoint-2</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm3</name>
<value>pinpoint-3</value>
</property>
<!-- zk ensemble address -->
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>pinpoint-1:2181,pinpoint-2:2181,pinpoint-3:2181</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- Maximum memory the NodeManager may use; size this to the machine's actual RAM -->
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>15000</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>1</value>
</property>
</configuration>
6.5 Configure mapred-site.xml
vim mapred-site.xml
<configuration>
<!-- Run MapReduce on yarn -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
6.6 Configure core-site.xml
vim core-site.xml
<configuration>
<!--NameNode address (the nameservice)-->
<property>
<name>fs.defaultFS</name>
<value>hdfs://ns1</value>
</property>
<!--Base directory for files hadoop generates at runtime-->
<property>
<name>hadoop.tmp.dir</name>
<value>/pinpoint/data/zookeeper/hadoop/tmp</value>
</property>
<property>
<!--Maximum interval between checkpoints, in seconds-->
<name>fs.checkpoint.period</name>
<value>3600</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>pinpoint-1:2181,pinpoint-2:2181,pinpoint-3:2181</value>
</property>
</configuration>
6.7 Configure slaves
vim slaves
pinpoint-1
pinpoint-2
pinpoint-3
6.8 Distribute the config files
cd /pinpoint/hadoop/etc/hadoop
ansible pinpoint -m copy -a "src=. dest=/pinpoint/hadoop/etc/hadoop"
# Start the journalnode service
ansible pinpoint -m shell -a "hadoop-daemon.sh start journalnode"
6.9 Format hadoop
- Run on the primary node only
# Format the NameNode (DataNodes are never formatted)
hdfs namenode -format
# Format the zkfc state in ZooKeeper
hdfs zkfc -formatZK
# Copy the metadata to the other nodes
scp -r /pinpoint/data/zookeeper/hadoop/tmp/ pinpoint-2:/pinpoint/data/zookeeper/hadoop/
scp -r /pinpoint/data/zookeeper/hadoop/tmp/ pinpoint-3:/pinpoint/data/zookeeper/hadoop/
# Start the services
ansible pinpoint -m shell -a "start-all.sh"
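Once the daemons are up, the stock hadoop CLI can confirm which NameNode and ResourceManager are active; a minimal sketch, where nn1..nn3 and rm1 are the ids defined in hdfs-site.xml and yarn-site.xml above:
#exactly one of nn1..nn3 should report "active"
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
hdfs haadmin -getServiceState nn3
yarn rmadmin -getServiceState rm1
#summary of live datanodes
hdfs dfsadmin -report | head -n 20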
6.10 Check the listening ports
[root@pinpoint-1 ~]# ss -tnl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:8040 *:*
LISTEN 0 128 192.168.15.30:9000 *:*
LISTEN 0 128 *:9864 *:*
LISTEN 0 128 *:8042 *:*
LISTEN 0 128 *:9866 *:*
LISTEN 0 128 *:9867 *:*
LISTEN 0 128 192.168.15.30:8019 *:*
LISTEN 0 128 *:55156 *:*
LISTEN 0 128 127.0.0.1:48628 *:*
LISTEN 0 128 192.168.15.30:50070 *:*
LISTEN 0 128 *:22 *:*
LISTEN 0 128 192.168.15.30:8088 *:*
LISTEN 0 128 *:13562 *:*
LISTEN 0 128 192.168.15.30:8030 *:*
LISTEN 0 128 192.168.15.30:8031 *:*
LISTEN 0 128 192.168.15.30:8032 *:*
LISTEN 0 128 *:8480 *:*
LISTEN 0 128 192.168.15.30:8033 *:*
LISTEN 0 128 *:8485 *:*
LISTEN 0 50 :::33256 :::*
LISTEN 0 50 ::ffff:192.168.15.30:3888 :::*
LISTEN 0 128 :::22 :::*
LISTEN 0 50 :::2181 :::*
6.11 Check the java processes
[root@pinpoint-1 ~]# jps
3122 JournalNode
2724 NameNode
3286 DFSZKFailoverController
5047 Jps
4489 NodeManager
4234 ResourceManager
2378 QuorumPeerMain
2957 DataNode
7. Configuring HBase
7.1 Edit the hbase environment file
cd /pinpoint/hbase/conf
vim hbase-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_171-amd64/
export HBASE_MANAGES_ZK=false
7.2 Edit the hbase service config file
vim hbase-site.xml
<configuration>
<property>
<name>hbase.procedure.store.wal.use.hsync</name>
<value>false</value>
</property>
<property>
<name>hbase.procedure.check.owner.set</name>
<value>false</value>
<description>Whether ProcedureExecutor should enforce that each
procedure to have an owner
</description>
</property>
<property>
<name>hbase.unsafe.stream.capability.enforce</name>
<value>false</value>
<description>
Controls whether HBase will check for stream capabilities (hflush/hsync).
Disable this if you intend to run on LocalFileSystem.
WARNING: Doing so may expose you to additional risk of data loss!
</description>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://ns1/hbase</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>pinpoint-1,pinpoint-2,pinpoint-3</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/pinpoint/data/zookeeper</value>
</property>
</configuration>
7.3 Backup-master and regionserver files
echo 'pinpoint-2' > backup-masters
echo 'pinpoint-3' > regionservers
7.4 Distribute, start, and initialize hbase
cd /pinpoint/hbase/conf
# Distribute the config files
ansible pinpoint -m copy -a "src=. dest=/pinpoint/hbase/conf"
# Start hbase (start-hbase.sh is on the PATH via java.sh)
start-hbase.sh
# Initialize the pinpoint tables in hbase
hbase shell /pinpoint/pinpoint/hbase/scripts/hbase-create.hbase
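If the init script ran cleanly, the pinpoint tables now exist; commands can be piped into the stock hbase shell for a quick non-interactive look (a sketch):
#expect three live servers
echo "status" | hbase shell
#the pinpoint tables (AgentInfo, ApplicationIndex, TraceV2, ...) should be listed
echo "list" | hbase shell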
8. Configuring haproxy
8.1 Install haproxy
yum install haproxy -y
cp /etc/haproxy/haproxy.cfg{,.bak}
8.2 Configure haproxy
vim /etc/haproxy/haproxy.cfg
frontend zookeeper
bind *:2181
default_backend zookeeper_back
backend zookeeper_back
balance leastconn
server pinpoint-1 192.168.15.30:2181
server pinpoint-2 192.168.15.31:2181
server pinpoint-3 192.168.15.32:2181
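Before restarting the service, haproxy can validate the file itself:
haproxy -c -f /etc/haproxy/haproxy.cfg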
8.3 Check the status
systemctl restart haproxy
[root@haproxy ~]# ss -tnl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:5000 *:*
LISTEN 0 128 *:22 *:*
LISTEN 0 128 *:2181 *:*
LISTEN 0 128 :::22 :::*
9. Configuring pinpoint
9.1 Extract tomcat
- pinpoint needs two tomcat instances
- one tomcat for pinpoint-collector
- one tomcat for pinpoint-web
tar xvf apache-tomcat-8.5.31.tar.gz -C /pinpoint
mv /pinpoint/apache-tomcat-8.5.31/ /pinpoint/tomcat-pincol
cp -r /pinpoint/tomcat-pincol /pinpoint/tomcat-pinweb
9.2 Deploy pinpoint-collector
# Remove tomcat's default webapps
rm -rf /pinpoint/tomcat-pincol/webapps/*
# Create a ROOT directory
mkdir /pinpoint/tomcat-pincol/webapps/ROOT
# Copy the pinpoint-collector-1.7.3.war package
cd /pinpoint/tomcat-pincol/webapps/ROOT
cp /pinpoint/pinpoint/collector/target/pinpoint-collector-1.7.3.war .
# Unpack the war
unzip pinpoint-collector-1.7.3.war
rm -rf pinpoint-collector-1.7.3.war
9.3 Edit the config files
cd /pinpoint/tomcat-pincol/webapps/ROOT/WEB-INF/classes/
vim hbase.properties
# The haproxy address
hbase.client.host=192.168.15.24
hbase.client.port=2181
# hbase default:/hbase (the znode parent set in hbase; defaults to /hbase when unset)
hbase.zookeeper.znode.parent=/hbase
9.4 Edit the tomcat port config
cd /pinpoint/tomcat-pincol/conf
sed -i 's/port="8005"/port="18005"/g' server.xml
sed -i 's/port="8080"/port="18080"/g' server.xml
sed -i 's/port="8443"/port="18443"/g' server.xml
sed -i 's/port="8009"/port="18009"/g' server.xml
sed -i 's/redirectPort="8443"/redirectPort="18443"/g' server.xml
9.5 Edit server.xml
vim server.xml
<!--
Change defaultHost
Change the Host name
Change the access log prefix
-->
<Engine name="Catalina" defaultHost="192.168.15.30">
<!--For clustering, please take a look at documentation at:
/docs/cluster-howto.html (simple how to)
/docs/config/cluster.html (reference documentation) -->
<!--
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
-->
<!-- Use the LockOutRealm to prevent attempts to guess user passwords
via a brute-force attack -->
<Realm className="org.apache.catalina.realm.LockOutRealm">
<!-- This Realm uses the UserDatabase configured in the global JNDI
resources under the key "UserDatabase". Any edits
that are performed against this UserDatabase are immediately
available for use by the Realm. -->
<Realm className="org.apache.catalina.realm.UserDatabaseRealm"
resourceName="UserDatabase"/>
</Realm>
<Host name="192.168.15.30" appBase="webapps"
unpackWARs="true" autoDeploy="true">
<!-- SingleSignOn valve, share authentication between web applications
Documentation at: /docs/config/valve.html -->
<!--
<Valve className="org.apache.catalina.authenticator.SingleSignOn" />
-->
<!-- Access log processes all example.
Documentation at: /docs/config/valve.html
Note: The pattern used is equivalent to using pattern="common" -->
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
prefix="pinpoint-1_access_log" suffix=".txt"
pattern="%h %l %u %t "%r" %s %b" />
</Host>
# Edit the pinpoint collector properties
vim /pinpoint/tomcat-pincol/webapps/ROOT/WEB-INF/classes/pinpoint-collector.properties
collector.receiver.stat.tcp=true
collector.receiver.span.tcp=true
# Edit the hbase connection file (point at haproxy, as in 9.3)
vim /pinpoint/tomcat-pincol/webapps/ROOT/WEB-INF/classes/hbase.properties
hbase.client.host=192.168.15.24
hbase.client.port=2181
# hbase default:/hbase
hbase.zookeeper.znode.parent=/hbase
9.6 Start the tomcat service
cd /pinpoint/tomcat-pincol/bin
./startup.sh
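If the collector deployed correctly, its listeners come up alongside tomcat's 18080; a sketch, assuming the default collector ports (9994 tcp, 9995/9996 udp) in pinpoint-collector.properties were left unchanged:
ss -tnl | grep -E '18080|9994'
ss -unl | grep -E '9995|9996'
#watch the deployment log for errors
tail -n 50 /pinpoint/tomcat-pincol/logs/catalina.out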
10. Configuring mysql
# Install mariadb
yum install mariadb-server -y
# Start it and enable it at boot
systemctl start mariadb
systemctl enable mariadb
# Initialize and secure the database
mysql_secure_installation
# Connect to the database
mysql -u[username] -p[password] -h[hostip]
# Create the database
CREATE DATABASE pinpoint CHARACTER SET 'utf8';
# Download the pinpoint SQL files (manually)
# Save the contents into CreateTableStatement-mysql.sql and SpringBatchJobRepositorySchema-mysql.sql
https://github.com/naver/pinpoint/blob/master/web/src/main/resources/sql/SpringBatchJobRepositorySchema-mysql.sql#L96
https://github.com/naver/pinpoint/blob/master/web/src/main/resources/sql/CreateTableStatement-mysql.sql
touch CreateTableStatement-mysql.sql SpringBatchJobRepositorySchema-mysql.sql
# Import the tables
mysql -upinpoint -p -h192.168.15.253 pinpoint < CreateTableStatement-mysql.sql
mysql -upinpoint -p -h192.168.15.253 pinpoint < SpringBatchJobRepositorySchema-mysql.sql
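A quick check that both scripts imported (a sketch):
mysql -upinpoint -p -h192.168.15.253 pinpoint -e "show tables;"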
11. Deploying pinpoint-web
11.1 Edit the tomcat port config
cd /pinpoint/tomcat-pinweb/conf/
sed -i 's/port="8005"/port="18005"/g' server.xml
sed -i 's/port="8080"/port="18080"/g' server.xml
sed -i 's/port="8443"/port="18443"/g' server.xml
sed -i 's/port="8009"/port="18009"/g' server.xml
sed -i 's/redirectPort="8443"/redirectPort="18443"/g' server.xml
11.2 Edit server.xml
vi server.xml
<!--
Change defaultHost
Change the Host name
Change the access log prefix
-->
<Engine name="Catalina" defaultHost="192.168.15.30">
<!--For clustering, please take a look at documentation at:
/docs/cluster-howto.html (simple how to)
/docs/config/cluster.html (reference documentation) -->
<!--
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
-->
<!-- Use the LockOutRealm to prevent attempts to guess user passwords
via a brute-force attack -->
<Realm className="org.apache.catalina.realm.LockOutRealm">
<!-- This Realm uses the UserDatabase configured in the global JNDI
resources under the key "UserDatabase". Any edits
that are performed against this UserDatabase are immediately
available for use by the Realm. -->
<Realm className="org.apache.catalina.realm.UserDatabaseRealm"
resourceName="UserDatabase"/>
</Realm>
<Host name="192.168.15.30" appBase="webapps"
unpackWARs="true" autoDeploy="true">
<!-- SingleSignOn valve, share authentication between web applications
Documentation at: /docs/config/valve.html -->
<!--
<Valve className="org.apache.catalina.authenticator.SingleSignOn" />
-->
<!-- Access log processes all example.
Documentation at: /docs/config/valve.html
Note: The pattern used is equivalent to using pattern="common" -->
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
prefix="pinpoint-1_access_log" suffix=".txt"
pattern="%h %l %u %t "%r" %s %b" />
</Host>
11.3 Deploy pinpoint-web
rm -rf /pinpoint/tomcat-pinweb/webapps/*
mkdir /pinpoint/tomcat-pinweb/webapps/ROOT
cd /pinpoint/tomcat-pinweb/webapps/ROOT/
cp /pinpoint/pinpoint/web/target/pinpoint-web-1.7.3.war .
unzip pinpoint-web-1.7.3.war
rm -rf pinpoint-web-1.7.3.war
11.4 Edit the config files
cd /pinpoint/tomcat-pinweb/webapps/ROOT/WEB-INF/classes/
vim batch.properties
#batch enable config
batch.enable=true
#batch server ip to execute batch
batch.server.ip=127.0.0.1
#flink server list
batch.flink.server=
11.5 Configure the database connection
vim jdbc.properties
jdbc.driverClassName=com.mysql.jdbc.Driver
jdbc.url=jdbc:mysql://192.168.15.253:3306/pinpoint?characterEncoding=UTF-8
jdbc.username=root
# Set this to your own mysql password
jdbc.password=[MysqlPassword]
11.6 Start the tomcat service
cd /pinpoint/tomcat-pinweb/bin
./startup.sh
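After startup the UI can be probed locally, before haproxy is wired up in section 12; a sketch, where 28080 is the HTTP port set in 11.1:
#expect an HTTP 200
curl -I http://127.0.0.1:28080/
tail -n 50 /pinpoint/tomcat-pinweb/logs/catalina.out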
11.7 Distribute tomcat and the corresponding config files to the other nodes and start the services; nothing else needs changing.
scp -r tomcat-pincol tomcat-pinweb 192.168.15.31:/pinpoint/
scp -r tomcat-pincol tomcat-pinweb 192.168.15.32:/pinpoint/
12. haproxy configuration for pinpoint-web
12.1 Edit the haproxy config file
vim /etc/haproxy/haproxy.cfg
frontend pinpoint
bind *:80
default_backend pinpoint_back
backend pinpoint_back
balance roundrobin
server pinpoint-1 192.168.15.30:28080
server pinpoint-2 192.168.15.31:28080
server pinpoint-3 192.168.15.32:28080
12.2 Restart haproxy
systemctl restart haproxy
12.3 Check the ports
[root@haproxy ~]# ss -tnl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:5000 *:*
LISTEN 0 128 *:80 *:*
LISTEN 0 128 *:22 *:*
LISTEN 0 128 *:2181 *:*
LISTEN 0 128 :::22 :::*
13. pinpoint-agent
13.1 Copy the pinpoint agent to the client machine
scp /pinpoint/pinpoint/agent/target/pinpoint-agent-1.7.3.tar.gz 192.168.6.150:/pinpoint/
# On the client machine
cd /pinpoint
tar xvf pinpoint-agent-1.7.3.tar.gz
cd /tomcat/bin
13.2 Edit catalina.sh
vim catalina.sh
SERVER_NAME=[Your Server Name]
CATALINA_OPTS="-javaagent:/pinpoint/pinpoint-bootstrap-1.7.3.jar -Dpinpoint.agentId=${SERVER_NAME}-agent -Dpinpoint.applicationName=${SERVER_NAME}"
13.3 Start tomcat
cd /tomcat/bin
./startup.sh
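If the agent attached, its banner shows up in the client tomcat's log and the application appears in the pinpoint web UI once traffic flows; a sketch, assuming the client tomcat lives at /tomcat as above:
grep -i pinpoint /tomcat/logs/catalina.out | head
#the web UI through haproxy
curl -I http://192.168.15.24/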