This started when I read the article [容器中某Go服务GC停顿经常超过100ms排查](https://mp.weixin.qq.com/s/Lk1EbiT7WprVOyX_dXYMyg) ("Investigating a Go service in a container whose GC pauses often exceed 100ms"). It points out that inside a container, GOMAXPROCS picks up the host's CPU count, so the number of Ps inside the container grows, which increases GC pressure and produces a very high T99 under load testing. I happened to be wrestling with the same problem in my own load tests, so I reproduced the experiment from that article. Along the way I ran into some results that did not fully match, which seemed interesting enough to write down.
Experimental setup:

- Machine: Mac mini (Late 2012) 387, 4c/12g
- OS: Linux unas-mini 4.9.0-11-amd64 #1 SMP Debian 4.9.189-3 (2019-09-02) x86_64 GNU/Linux
- Go environment: docker 17.12.0-ce, golang 1.10.3 / golang 1.13.3
- Container limits: 512m memory, 2 CPUs
- Load-test environment: home LAN, ~2ms ping from the load generator to the target machine
Program template (three binaries were built, with GOMAXPROCS set to 2, 4, and 32 respectively):
```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"runtime"

	"github.com/gorilla/mux"
)

type Response struct {
	Status string
	Data   int
}

func GetStatus(w http.ResponseWriter, _ *http.Request) {
	json.NewEncoder(w).Encode(Response{Status: "idle", Data: 32})
}

func main() {
	runtime.GOMAXPROCS(4) // set to 2 / 4 / 32 for each binary
	router := mux.NewRouter()
	router.HandleFunc("/status", GetStatus).Methods("GET")
	log.Fatal(http.ListenAndServe(":8123", router))
}
```
First, a test with very light traffic:
```shell
wrk -t2 -c4 -d30s --latency http://192.168.10.24:32778/status
wrk -t4 -c8 -d30s --latency http://192.168.10.24:32778/status
```
GOMAXPROCS=2

```
  4 threads and 8 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.06ms    2.22ms  59.27ms   97.29%
    Req/Sec     1.09k   173.04     1.27k    92.33%
  Latency Distribution
     50%    1.71ms
     75%    1.95ms
     90%    2.40ms
     99%   11.51ms
  130308 requests in 30.05s, 17.90MB read
Requests/sec:   4336.24
Transfer/sec:    609.78KB
```
GOMAXPROCS=4

```
  4 threads and 8 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.79ms   17.93ms  355.91ms  98.43%
    Req/Sec     1.06k   201.50     1.29k    87.58%
  Latency Distribution
     50%    1.71ms
     75%    2.04ms
     90%    2.71ms
     99%   39.86ms
  125557 requests in 30.08s, 17.36MB read
Requests/sec:   4174.79
Transfer/sec:    591.16KB
```

```
  4 threads and 8 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.98ms    2.20ms  56.15ms   97.80%
    Req/Sec     1.10k   122.68     1.29k    83.50%
  Latency Distribution
     50%    1.69ms
     75%    1.95ms
     90%    2.36ms
     99%    6.87ms
  131746 requests in 30.05s, 18.22MB read
Requests/sec:   4384.52
Transfer/sec:    620.85KB
```
GOMAXPROCS=32