MQTT (EMQX) Performance Tuning and Stress Testing

一、System Tuning

1. Disable Swap

Swap on Linux can introduce unpredictable memory latency in the Erlang VM and seriously degrade system stability. It is recommended to disable swap permanently.

swapoff -a

Then comment out the swap line in /etc/fstab so swap stays disabled after a reboot.
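To confirm that swap is actually off (for example after editing /etc/fstab and rebooting), a minimal check using standard util-linux/procfs tools:

```shell
# Lists active swap devices; prints nothing when swap is fully disabled.
swapon --show
# Equivalent check via procfs: count data lines in /proc/swaps
# (the first line is a header, so 0 means no swap in use).
awk 'NR > 1' /proc/swaps | wc -l
```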

2. Linux Kernel Parameters

Maximum number of file handles the system can allocate globally:

# 2 million file handles system-wide
sysctl -w fs.file-max=2097152
sysctl -w fs.nr_open=2097152
echo 2097152 > /proc/sys/fs/nr_open

Maximum number of file handles the current session / process may open:

ulimit -SHn 1048576

Persist the per-user / per-process open file limit in /etc/security/limits.conf:

*      soft   nofile      1048576
*      hard   nofile      1048576

Set the default file handle limit for systemd-managed services in /etc/systemd/system.conf:

DefaultLimitNOFILE=1048576
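After applying the settings above, it is worth verifying the limits that are actually in effect; a minimal check:

```shell
# Kernel-wide limits currently in effect:
cat /proc/sys/fs/file-max
cat /proc/sys/fs/nr_open
# Soft and hard open-file limits of the current shell:
ulimit -Sn
ulimit -Hn
# For an already-running process, inspect /proc/<pid>/limits,
# e.g.: grep 'open files' /proc/<pid>/limits
```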

3. TCP Stack Network Parameters

An optimize.sh script:

#!/bin/bash
sysctl -w fs.file-max=2097152
sysctl -w fs.nr_open=2097152
sysctl -w net.ipv4.ip_forward=1
# Backlog settings for concurrent connections:
sysctl -w net.core.somaxconn=32768
sysctl -w net.ipv4.tcp_max_syn_backlog=16384
sysctl -w net.core.netdev_max_backlog=16384
# Local (ephemeral) port range available for outgoing connections:
sysctl -w net.ipv4.ip_local_port_range='1024 65535'
# TCP socket read/write buffer settings:
sysctl -w net.core.rmem_default=262144
sysctl -w net.core.wmem_default=262144
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.core.optmem_max=16777216
sysctl -w net.ipv4.tcp_rmem='1024 4096 16777216'
sysctl -w net.ipv4.tcp_wmem='1024 4096 16777216'
# TCP connection-tracking settings:
sysctl -w net.nf_conntrack_max=1000000
sysctl -w net.netfilter.nf_conntrack_max=1000000
sysctl -w net.netfilter.nf_conntrack_tcp_timeout_time_wait=30
# Maximum number of TIME-WAIT sockets:
sysctl -w net.ipv4.tcp_max_tw_buckets=1048576
# FIN-WAIT-2 socket timeout:
sysctl -w net.ipv4.tcp_fin_timeout=15
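Settings applied with sysctl -w are lost on reboot. One way to persist them is a drop-in file under /etc/sysctl.d (a sketch; the filename 99-emqx.conf is an arbitrary choice, and the subset of keys below is illustrative):

```shell
# Write the tuning values to a sysctl drop-in file (created in the current
# directory here; install it under /etc/sysctl.d/ as root).
cat > 99-emqx.conf <<'EOF'
fs.file-max = 2097152
fs.nr_open = 2097152
net.core.somaxconn = 32768
net.ipv4.tcp_max_syn_backlog = 16384
net.core.netdev_max_backlog = 16384
net.ipv4.ip_local_port_range = 1024 65535
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_max_tw_buckets = 1048576
net.ipv4.tcp_fin_timeout = 15
EOF
# As root: install the file and load it without rebooting.
# install -m 644 99-emqx.conf /etc/sysctl.d/99-emqx.conf
# sysctl -p /etc/sysctl.d/99-emqx.conf
```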

二、Erlang VM Parameters

Tune the Erlang VM startup parameters in the configuration file etc/emqx.conf:

## Maximum number of ports (sockets, file descriptors) the Erlang system may hold concurrently
node.max_ports = 2097152
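As the emqx.conf header below notes, settings can also be supplied through environment variables with the EMQX_ prefix; in EMQX 5, nested config keys are joined with a double underscore (stated here as an assumption to verify against your EMQX version):

```shell
# Equivalent override without editing etc/emqx.conf:
# node.max_ports maps to EMQX_NODE__MAX_PORTS (double underscore = nesting).
export EMQX_NODE__MAX_PORTS=2097152
```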

三、EMQX Broker Parameters

Set the acceptor pool size and the maximum number of allowed connections for the TCP listener.

For example, a TCP listener can use the following configuration.

## TCP Listener
listeners.tcp.$name.acceptors = 64
listeners.tcp.$name.max_connections = 1024000

Full emqx.conf:

## NOTE:
## This config file overrides data/configs/cluster.hocon,
## and is merged with environment variables which start with 'EMQX_' prefix.
##
## Config changes made from EMQX dashboard UI, management HTTP API, or CLI
## are stored in data/configs/cluster.hocon.
## To avoid confusion, please do not store the same configs in both files.
##
## See https://www.emqx.io/docs/en/v5.0/configuration/configuration.html for more details.
## Configuration full example can be found in etc/examples
node {
  name = "emqx@127.0.0.1"
  cookie = "emqxsecretcookie"
  data_dir = "data"
  max_ports = 2097152
}

cluster {
  name = emqxcl
  discovery_strategy = manual
}

dashboard {
    listeners.http {
        bind = 18083
    }
}
mqtt {
    max_mqueue_len = 100000
}
force_shutdown {
    enable = true
    max_mailbox_size = 100000
    max_heap_size = 2GB
}
listeners.tcp.default {
    bind = 1883
    acceptors = 64
    max_connections = 1024000
}

四、Test Client Setup

From a single source IP / interface, a test client machine can create at most about 65,000 connections (one source port per connection):

sysctl -w net.ipv4.ip_local_port_range="500 65535"
echo 1000000 > /proc/sys/fs/nr_open
ulimit -n 100000
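The ~65,000 ceiling follows from the TCP 4-tuple: with one local IP and one broker IP:port, every connection must use a distinct local source port, so the configured port range bounds the connection count:

```shell
# Usable source ports in the range 500-65535 configured above:
echo $((65535 - 500 + 1))   # prints 65036, i.e. at most ~65K connections per local IP
```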

emqtt_bench

Concurrent connection benchmarking tool: emqtt_bench

emqtt-bench requires libatomic

# CentOS 7
sudo yum install libatomic
# Ubuntu 20.04
sudo apt install libatomic1

Build

git clone https://github.com/emqx/emqtt-bench.git
cd emqtt-bench
make
# Optional: disable QUIC support if you have problems compiling
BUILD_WITHOUT_QUIC=1 make

Connect Benchmark

For example, create 50K concurrent connections at the arrival rate of 100/sec:

./emqtt_bench conn -c 50000 -i 10
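Here -c is the total connection count and -i is the interval between new connections in milliseconds, so -i 10 gives 1000/10 = 100 connections per second; the full ramp-up time can be estimated as:

```shell
# 50000 connections, one started every 10 ms => ramp-up time in seconds:
echo $((50000 * 10 / 1000))   # prints 500
```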

Sub Benchmark

For example, create 50K subscriber connections at the arrival rate of 100/sec, each subscribing with QoS 2 to its own topic (%i is replaced by the client's sequence number):

./emqtt_bench sub -c 50000 -i 10 -t bench/%i -q 2

Pub Benchmark

For example, create 100 connections, each publishing a 256-byte message (-s 256) every 10 ms, i.e. about 100 msg/sec per connection.

./emqtt_bench pub -c 100 -I 10 -t bench/%i -s 256

TLS/SSL (client certificate is not required by the server)

./emqtt_bench sub -c 100 -i 10 -t bench/%i -p 8883 --ssl
./emqtt_bench pub -c 100 -I 10 -t bench/%i -p 8883 -s 256 --ssl
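To push past the ~65K-per-IP limit from section 四, connections can be spread over multiple local source addresses; emqtt_bench accepts a local bind address via --ifaddr (per the emqtt-bench README; check the option against your version). A dry-run sketch that prints one bench process per address; the IPs are illustrative and must already be assigned to the NIC:

```shell
# Print one emqtt_bench invocation per local source IP (50K conns each).
# Drop the `echo` to actually launch the processes (append `&` and `wait`).
for ip in 192.168.0.101 192.168.0.102 192.168.0.103 192.168.0.104; do
    echo ./emqtt_bench conn -c 50000 -i 10 --ifaddr "$ip"
done
```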