This article was originally published on cnblogs (博客园) in mid-2019. The CSRedisCore troubleshooting approach drew a strong response at the time and was featured on 张队's WeChat account; I'm now reposting it on my own public account.
Background
After the previous round of distributed refactoring of the Redis MQ, the orchestrated containers had been running stably for more than a month. Yesterday a colleague on the ETL side suddenly reported that no parsing logs were being collected.
I jumped onto the server and ran docker ps to check the containers:
The ReceiverApp container, which receives the incoming data, had gone down;
I tried docker container start [containerid], but a few minutes later the container crashed again.
Redis connection limit exceeded
Running docker logs [containerid] showed that the number of clients connected to the Redis server had exceeded the limit:
CSRedis.RedisException: ERR max number of clients reached.
Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker[2]
      Executed action EqidManager.Controllers.EqidController.BatchPutEqidAndProfileIds (EqidReceiver) in 7.1767ms
fail: Microsoft.AspNetCore.Server.Kestrel[13]
      Connection id "0HLPR3AP8ODKH", Request id "0HLPR3AP8ODKH:00000001": An unhandled exception was thrown by the application.
CSRedis.RedisException: ERR max number of clients reached
   at CSRedis.CSRedisClient.GetAndExecute[T](RedisClientPool pool, Func`2 handler, Int32 jump, Int32 errtimes)
   at CSRedis.CSRedisClient.ExecuteScalar[T](String key, Func`3 hander)
   at CSRedis.CSRedisClient.LPush[T](String key, T[] value)
   at RedisHelper.LPush[T](String key, T[] value)
   at EqidManager.Controllers.EqidController.BatchPutEqidAndProfileIds(List`1 eqidPairs) in /home/gitlab-runner/builds/haD2h5xC/0/webdissector/datasource/eqid-manager/src/EqidReceiver/Controllers/EqidController.cs:line 31
   at lambda_method(Closure , Object )
   at Microsoft.Extensions.Internal.ObjectMethodExecutorAwaitable.Awaiter.GetResult()
   at Microsoft.AspNetCore.Mvc.Internal.ActionMethodExecutor.AwaitableResultExecutor.Execute(IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Object[] arguments)
   at System.Threading.Tasks.ValueTask`1.get_Result()
   at Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker.InvokeActionMethodAsync()
   at Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker.InvokeNextActionFilterAsync()
   at Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker.Rethrow(ActionExecutedContext context)
   at Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
   at Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker.InvokeInnerFilterAsync()
   at Microsoft.AspNetCore.Mvc.Internal.ResourceInvoker.InvokeNextResourceFilter()
   at Microsoft.AspNetCore.Mvc.Internal.ResourceInvoker.Rethrow(ResourceExecutedContext context)
   at Microsoft.AspNetCore.Mvc.Internal.ResourceInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
   at Microsoft.AspNetCore.Mvc.Internal.ResourceInvoker.InvokeFilterPipelineAsync()
   at Microsoft.AspNetCore.Mvc.Internal.ResourceInvoker.InvokeAsync()
   at Microsoft.AspNetCore.Builder.RouterMiddleware.Invoke(HttpContext httpContext)
   at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ProcessRequests[TContext](IHttpApplication`1 application)
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
      Request finished in 8.9549ms 500
【dockerhost:6379/0】is still unavailable, next recovery check at 09/17/2019 03:11:25, error: (ERR max number of clients reached)
【dockerhost:6379/0】is still unavailable, next recovery check at 09/17/2019 03:11:25, error: (ERR max number of clients reached)
【dockerhost:6379/0】is still unavailable, next recovery check at 09/17/2019 03:11:25, error: (ERR max number of clients reached)
(CSRedisCore kept repeating this recovery-check message over and over in the log.)
A quick first thought: one of the orchestrated containers uses CSRedisCore and instantiates 16 clients, one for each of the 16 Redis DBs, but surely the Redis server isn't that fragile.
I went straight to redis.io to dig into the documentation.
After the client is initialized, Redis checks if we are already at the limit of the number of clients that it is possible to handle simultaneously (this is configured using the maxclients configuration directive, see the next section of this document for further information).
In case it can't accept the current client because the maximum number of clients was already accepted, Redis tries to send an error to the client in order to make it aware of this condition, and closes the connection immediately. The error message will be able to reach the client even if the connection is closed immediately by Redis because the new socket output buffer is usually big enough to contain the error, so the kernel will handle the transmission of the error.
Roughly: maxclients configures the maximum number of client connections the Redis server will accept. If the number of connected clients exceeds that limit, Redis sends an error message back to the new client and closes its connection immediately.
I immediately logged into the Redis host to check the configuration and confirmed that the server's effective maxclients was 10000 (this is a dynamic value, determined by the maxclients directive and the maximum number of file descriptors available to the process).
# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
# maxclients 10000
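For reference, the effective limit and the live connection count can be checked directly from redis-cli with standard commands (the output values below are illustrative, not the original capture):

CONFIG GET maxclients
1) "maxclients"
2) "10000"
INFO clients
# Clients
connected_clients:10000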
When I logged into the Redis server with redis-cli at the time, though, I was kicked off almost immediately.
So it was fairly safe to conclude that the problem lay in how the Redis client was being used.
How CSRedisCore was being used
According to the official Redis documentation, the redis-cli commands info clients and client list can be used to analyze client connections.
info clients confirmed that there were indeed 10000 connections at the scene;
The official explanation of the fields output by client list:
addr: The client address, that is, the client IP and the remote port number it used to connect with the Redis server.
fd: The client socket file descriptor number.
name: The client name as set by CLIENT SETNAME.
age: The number of seconds the connection existed for.
idle: The number of seconds the connection is idle.
flags: The kind of client (N means normal client, check the full list of flags).
omem: The amount of memory used by the client for the output buffer.
cmd: The last executed command.
These fields showed that the Redis server held a large number of client connections from ip=172.16.1.3 (the failing container's IP on the Docker bridge network), and the last command those connections had issued was ping (a connectivity-test command).
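For illustration, an entry in the client list output at the time would have looked something like the following (the values here are reconstructed, not the original capture):

addr=172.16.1.3:50404 fd=11 name= age=7423 idle=7000 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=32768 obl=0 oll=0 omem=0 events=r cmd=ping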
The Redis client used by the failing container is CSRedisCore, and all it does is push messages into a Redis list; a related GitHub issue on CSRedisCore provided the decisive hint.
It turned out that I had written the CSRedisClient instantiation code in the constructor of a .NET Core API controller. Because a new controller instance is constructed for every request, a new Redis client was instantiated on every request as well, and the number of Redis client connections eventually reached the maximum allowed.
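The offending code was shaped roughly like this. This is a simplified sketch rather than the original source: the controller and action names come from the stack trace above, while the connection string, route, list key, and the EqidPair DTO are placeholders.

using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;

// Anti-pattern sketch: a controller instance is constructed for every HTTP request,
// so a brand-new CSRedisClient (with its own connection pool) is created per request.
[Route("api/[controller]")]
public class EqidController : Controller
{
    public EqidController()
    {
        // New Redis connections on every request; nothing reuses or disposes them,
        // so the server eventually hits maxclients.
        var csredis = new CSRedis.CSRedisClient("dockerhost:6379,defaultDatabase=0");
        RedisHelper.Initialization(csredis);
    }

    [HttpPost]
    public IActionResult BatchPutEqidAndProfileIds([FromBody] List<EqidPair> eqidPairs)
    {
        // Simply pushes the received messages into a Redis list.
        RedisHelper.LPush("eqid:pairs", eqidPairs.ToArray());
        return Ok();
    }
}

public class EqidPair { }   // placeholder DTO; the real type lives in the original project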
Dependency injection offers three lifetimes: singleton (a single instance for the whole application, injected once), transient (a new instance created and injected for every resolution/request), and scoped (instances tied to a custom scope).
On .NET API controllers being constructed per request (effectively transient), see the link at the end of this article.
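For reference, the three lifetimes map onto the standard ASP.NET Core registration calls (the service types here are placeholders, not from the original project):

// Startup.ConfigureServices -- illustrative registrations only.
services.AddSingleton<ICacheService, RedisCacheService>();   // one instance for the application's lifetime
services.AddScoped<IOrderRepository, OrderRepository>();     // one instance per request scope
services.AddTransient<IEmailSender, SmtpEmailSender>();      // a new instance every time it is resolved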
One more question remained:
Why didn't the Redis server release the idle client connections? If idle connections had been reclaimed, even my sloppy code would not have caused this.
Back to the official documentation:
By default recent versions of Redis don't close the connection with the client if the client is idle for many seconds: the connection will remain open forever.
However if you don't like this behavior, you can configure a timeout, so that if the client is idle for more than the specified number of seconds, the client connection will be closed.
You can configure this limit via redis.conf or simply using CONFIG SET timeout <value>.
In short, recent versions of Redis do not close idle client connections by default.
# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0
Changing this setting would make Redis close idle client connections.
Of course, the right fix is not to tweak the Redis idle timeout; the root cause was that I was instantiating multiple clients. So I promptly moved the CSRedisCore instantiation into Startup.cs and registered it as a singleton.
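The shape of the fix, as a simplified sketch (the connection string is a placeholder; the point is that the client is created exactly once per process):

// Startup.cs -- create the CSRedisCore client once and share it application-wide.
public void ConfigureServices(IServiceCollection services)
{
    var csredis = new CSRedis.CSRedisClient("dockerhost:6379,defaultDatabase=0");
    RedisHelper.Initialization(csredis);   // the static RedisHelper now reuses this single client
    services.AddSingleton(csredis);        // also available as a singleton for constructor injection
    services.AddMvc();
}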
Verification
info clients now showed a stable 53 Redis connections.
client list showed that 172.16.1.3 (the previously failing container) held 50 client connections, the other orchestrated container, webapp, held 2, and my redis-cli session accounted for 1.
Which raised a new question: after the fix, why did the ReceiverApp container still hold a steady 50 Redis connections?
Following up with the author of CSRedisCore confirmed that CSRedisCore pre-warms its connection pool, creating 50 connections by default.
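If I read the CSRedisCore README correctly, the pool size is set through a poolsize option in the connection string (50 by default), so the number of pre-created connections can be tuned; treat the exact option name as an assumption to verify against the README:

// Assumption based on the CSRedisCore README: poolsize controls the connection pool size (default 50).
var csredis = new CSRedis.CSRedisClient("dockerhost:6379,defaultDatabase=0,poolsize=10");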
Bingo. Both the failure and the remaining puzzle were now fully explained.
Summary
Lessons from this incident for anyone using the CSRedisCore client:
① StackExchange.Redis uses a multiplexed connection (which naturally leads you to register it as a singleton; see the sketch after this list), whereas the open-source CSRedisCore library uses a connection pool. Under high concurrency it is strongly recommended to register CSRedisCore as a singleton as well; otherwise it is easy to end up instantiating it per transient request in production and exhausting the Redis connections within a few days.
② CSRedisCore builds a connection pool by default and pre-warms 50 connections; developers should keep that number in mind.
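For contrast with ①, a minimal sketch of the StackExchange.Redis singleton registration (the connection string is illustrative, not from the original project):

// StackExchange.Redis multiplexes all traffic over one connection; register it once.
services.AddSingleton<IConnectionMultiplexer>(
    ConnectionMultiplexer.Connect("dockerhost:6379"));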
A broader lesson on methodology: don't just reach for Baidu; learn to ask good questions and look for answers in the official docs, on Stack Overflow, and in the GitHub community. Whatever hole you have fallen into, someone has probably fallen into it before you and already filled it in.
Update
Many readers pointed out that the real problem was that I hadn't read the official CSRedisCore README carefully (it recommends singleton usage), and indeed I did not use it as a singleton:
③ Most connection pools have an idle-connection reclamation mechanism (and CSRedisCore is pool-based), so at the time I didn't take the singleton recommendation to heart;
④ The key lesson this time: Redis does not release idle client connections by default (while still enforcing a maximum number of connections), and that combination is what directly led to this container crash.
Yes, the hole was one of my own digging.
+ https://stackoverflow.com/questions/57553401/net-core-are-mvc-controllers-default-singleton
+ https://redis.io/topics/clients
+ https://github.com/2881099/csredis/issues/115