I have a simple stream block to proxy MySQL TCP traffic to MaxScale instances. The second instance acts as a failover only:
stream {
    upstream maxscale {
        zone upstream_maxscale 64k;
        server 10.1.0.11:3307;
        server 10.1.0.12:3307 backup;
    }

    server {
        listen 3307;
        proxy_pass maxscale;
    }
}
When connections are low (<30), everything works fine. But when connections are high (>40, if we can call 40 connections high...), the nginx error log keeps complaining about something I don't know how to debug:
recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 10.1.0.16, server: 10.1.0.15:3307, upstream: "10.1.0.11:3307", bytes from/to client:15738/64316, bytes from/to upstream:64316/15738
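For what it's worth, I know I could raise the error log to debug level in the stream context to get more detail (this needs an nginx binary built with --with-debug; the log path is just an example):

stream {
    # debug level only produces output if nginx was compiled with --with-debug
    error_log /var/log/nginx/stream-debug.log debug;
    ...
}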
I've tried playing with options like reuseport, worker_connections, and so_keepalive, but no luck. For reference, this is roughly where those options live, as shown below.
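The exact values here are just placeholders, not recommendations:

events {
    # default is 512; I experimented with larger values
    worker_connections 4096;
}

stream {
    server {
        # reuseport gives each worker process its own listening socket;
        # so_keepalive=idle:interval:count tunes TCP keepalive probes
        listen 3307 reuseport so_keepalive=30s:10s:3;
        proxy_pass maxscale;
    }
}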
https://nginx.org/en/docs/stream/ngx_stream_core_module.html
https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/

Could it be a problem on the MaxScale side?
Here is the MaxScale 2.4 listener and service configuration:
# Listener
[listener-rw]
type=listener
service=readwritesplit
protocol=MariaDBClient
address=10.1.0.11
port=3307
ssl=required
ssl_ca_cert=/var/lib/maxscale/ssl/ca-cert.pem
ssl_cert=/var/lib/maxscale/ssl/server.pem
ssl_key=/var/lib/maxscale/ssl/server.key
ssl_version=MAX
# Service
[readwritesplit]
type=service
router=readwritesplit
servers=sql1,sql2,sql3
user=maxscale
password=324F74A347291B3BE79956AD5F4BB2FAD65E1F9052A976722917701742729400
enable_root_user=1
max_sescmd_history=150
max_slave_connections=100%
lazy_connect=true
slave_selection_criteria=LEAST_CURRENT_OPERATIONS
optimistic_trx=true
connection_keepalive=300
master_failure_mode=fail_on_write
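One more data point: connection_keepalive=300 is in seconds on the MaxScale side (5 minutes), while nginx's proxy_timeout in the stream proxy module defaults to 10 minutes. In case the resets are idle-timeout related, I could pin the nginx side explicitly, something like this (the values are guesses, not tested):

stream {
    server {
        listen 3307;
        proxy_pass maxscale;

        # default 60s: time allowed to establish a connection to the upstream
        proxy_connect_timeout 30s;

        # default 10m: nginx closes both connections if no data flows
        # in either direction for this long
        proxy_timeout 1h;
    }
}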