Docker container stop flow

Let's trace the call path of the StopContainer interface, starting from the API route.
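Before diving in, it helps to see what actually hits this route. The sketch below POSTs to the stop endpoint over the daemon's unix socket, equivalent to `docker stop -t 10 mycontainer`. It is illustrative only: the socket path and the container name "mycontainer" are assumptions, not taken from this walkthrough.

// Minimal sketch of calling POST /containers/{name}/stop over the unix socket.
package main

import (
    "context"
    "fmt"
    "net"
    "net/http"
)

func main() {
    client := &http.Client{
        Transport: &http.Transport{
            // Dial every request to the local docker socket instead of TCP.
            DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
            },
        },
    }
    // t=10 is the graceful-stop timeout in seconds; it becomes the `seconds`
    // argument of postContainersStop below.
    resp, err := client.Post("http://docker/containers/mycontainer/stop?t=10", "application/json", nil)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    fmt.Println(resp.Status) // a successful stop returns 204 No Content
}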

// NewRouter initializes a new container router
func NewRouter(b Backend, decoder httputils.ContainerDecoder) router.Router {
    r := &containerRouter{
        backend: b,
        decoder: decoder,
    }
    r.initRoutes()
    return r
}
...
// initRoutes initializes the routes in container router
func (r *containerRouter) initRoutes() {
    r.routes = []router.Route{
        ...
        router.NewPostRoute("/containers/{name:.*}/stop", r.postContainersStop),
        ...
    }
}
func (s *containerRouter) postContainersStop(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
    ...
    if err := s.backend.ContainerStop(vars["name"], seconds); err != nil {
        return err
    }
    w.WriteHeader(http.StatusNoContent)
    return nil
}
func (cli *DaemonCli) start(opts *daemonOptions) (err error) {
    ...
    d, err := daemon.NewDaemon(ctx, cli.Config, pluginStore)
    ...
}
// ContainerStop looks for the given container and stops it.
// In case the container fails to stop gracefully within a time duration
// specified by the timeout argument, in seconds, it is forcefully
// terminated (killed).
//
// If the timeout is nil, the container's StopTimeout value is used, if set,
// otherwise the engine default. A negative timeout value can be specified,
// meaning no timeout, i.e. no forceful termination is performed.
func (daemon *Daemon) ContainerStop(name string, timeout *int) error {
    container, err := daemon.GetContainer(name)
    if err != nil {
        return err
    }
    if !container.IsRunning() {
        return containerNotModifiedError{running: false}
    }
    if timeout == nil {
        stopTimeout := container.StopTimeout()
        timeout = &stopTimeout
    }
    if err := daemon.containerStop(container, *timeout); err != nil {
        return errdefs.System(errors.Wrapf(err, "cannot stop container: %s", name))
    }
    return nil
}
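The same entry point can be reached through the official Go client (github.com/docker/docker/client). A hedged sketch: in the SDK vintage matching this code, ContainerStop takes a *time.Duration (newer releases take a container.StopOptions struct instead); "mycontainer" is a placeholder.

package main

import (
    "context"
    "time"

    "github.com/docker/docker/client"
)

func main() {
    cli, err := client.NewClientWithOpts(client.FromEnv)
    if err != nil {
        panic(err)
    }
    timeout := 10 * time.Second
    // Lands in Daemon.ContainerStop via the route registered above;
    // passing nil instead falls back to the container's StopTimeout.
    if err := cli.ContainerStop(context.Background(), "mycontainer", &timeout); err != nil {
        panic(err)
    }
}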
// containerStop sends a stop signal, waits, sends a kill signal.
func (daemon *Daemon) containerStop(container *containerpkg.Container, seconds int) error {
    if !container.IsRunning() {
        return nil
    }
    stopSignal := container.StopSignal()
    // 1. Send a stop signal
    if err := daemon.killPossiblyDeadProcess(container, stopSignal); err != nil {
        // While normally we might "return err" here we're not going to
        // because if we can't stop the container by this point then
        // it's probably because it's already stopped. Meaning, between
        // the time of the IsRunning() call above and now it stopped.
        // Also, since the err return will be environment specific we can't
        // look for any particular (common) error that would indicate
        // that the process is already dead vs something else going wrong.
        // So, instead we'll give it up to 2 more seconds to complete and if
        // by that time the container is still running, then the error
        // we got is probably valid and so we force kill it.
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        if status := <-container.Wait(ctx, containerpkg.WaitConditionNotRunning); status.Err() != nil {
            logrus.Infof("Container failed to stop after sending signal %d to the process, force killing", stopSignal)
            if err := daemon.killPossiblyDeadProcess(container, 9); err != nil {
                return err
            }
        }
    }
    // 2. Wait for the process to exit on its own
    ctx := context.Background()
    if seconds >= 0 {
        var cancel context.CancelFunc
        ctx, cancel = context.WithTimeout(ctx, time.Duration(seconds)*time.Second)
        defer cancel()
    }
    if status := <-container.Wait(ctx, containerpkg.WaitConditionNotRunning); status.Err() != nil {
        logrus.Infof("Container %v failed to exit within %d seconds of signal %d - using the force", container.ID, seconds, stopSignal)
        // 3. If it doesn't, then send SIGKILL
        if err := daemon.Kill(container); err != nil {
            // Wait without a timeout, ignore result.
            <-container.Wait(context.Background(), containerpkg.WaitConditionNotRunning)
            logrus.Warn(err) // Don't return error because we only care that container is stopped, not what function stopped it
        }
    }
    daemon.LogContainerEvent(container, "stop")
    return nil
}

container.StopSignal() prefers the signal configured for the container, defaulting to SIGTERM if none is set. If the container still hasn't exited 2 seconds after the signal is sent, dockerd keeps waiting for it to exit, up to the timeout specified by the caller (e.g. kubelet).
If the container never exits, daemon.Kill(container) sends it SIGKILL to force it out.
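The whole function boils down to the classic "graceful, then forceful" pattern. Here is a self-contained sketch of that pattern against an ordinary child process (my own illustration, not moby code):

package main

import (
    "os/exec"
    "syscall"
    "time"
)

// stopGracefully mirrors containerStop: send SIGTERM, wait up to timeout,
// then fall back to SIGKILL.
func stopGracefully(cmd *exec.Cmd, timeout time.Duration) error {
    if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
        return err
    }
    done := make(chan error, 1)
    go func() { done <- cmd.Wait() }()
    select {
    case err := <-done:
        return err // exited on its own within the timeout
    case <-time.After(timeout):
        // step 3: "using the force"
        if err := cmd.Process.Kill(); err != nil {
            return err
        }
        return <-done
    }
}

func main() {
    cmd := exec.Command("sleep", "60")
    if err := cmd.Start(); err != nil {
        panic(err)
    }
    _ = stopGracefully(cmd, 2*time.Second)
}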

This brings up the two forms in which a container's command can be started:

  • shell form

The PID 1 process is /bin/sh -c. Because /bin/sh does not forward signals to its child processes, our application will never receive the SIGTERM. The obvious fix is to run our own process as PID 1.

  • exec form

PID 1 is the application's executable itself (a script or binary), so our program can catch the SIGTERM that docker stop sends — see the sketch below.
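In Dockerfile terms, `CMD ./app` (shell form) produces the /bin/sh -c PID 1, while `CMD ["./app"]` (exec form) makes the application itself PID 1. A minimal PID-1-friendly handler looks like this (illustrative sketch, not from the sources above):

package main

import (
    "fmt"
    "os"
    "os/signal"
    "syscall"
)

func main() {
    sigs := make(chan os.Signal, 1)
    // docker stop delivers the container's StopSignal — SIGTERM by default.
    signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)
    fmt.Println("running; waiting for a stop signal")
    s := <-sigs
    fmt.Println("got", s, "- cleaning up before the stop timeout expires")
    // ... flush buffers, close connections, etc. ...
    os.Exit(0)
}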

Let's look at the force-kill path first.

// Kill forcefully terminates a container.
func (daemon *Daemon) Kill(container *containerpkg.Container) error {
    if !container.IsRunning() {
        return errNotRunning(container.ID)
    }
    // 1. Send SIGKILL
    if err := daemon.killPossiblyDeadProcess(container, int(syscall.SIGKILL)); err != nil {
        // While normally we might "return err" here we're not going to
        // because if we can't stop the container by this point then
        // it's probably because it's already stopped. Meaning, between
        // the time of the IsRunning() call above and now it stopped.
        // Also, since the err return will be environment specific we can't
        // look for any particular (common) error that would indicate
        // that the process is already dead vs something else going wrong.
        // So, instead we'll give it up to 2 more seconds to complete and if
        // by that time the container is still running, then the error
        // we got is probably valid and so we return it to the caller.
        if isErrNoSuchProcess(err) {
            return nil
        }
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        if status := <-container.Wait(ctx, containerpkg.WaitConditionNotRunning); status.Err() != nil {
            return err
        }
    }
    // 2. Wait for the process to die, in last resort, try to kill the process directly
    if err := killProcessDirectly(container); err != nil {
        if isErrNoSuchProcess(err) {
            return nil
        }
        return err
    }
    // Wait for exit with no timeout.
    // Ignore returned status.
    <-container.Wait(context.Background(), containerpkg.WaitConditionNotRunning)
    return nil
}

killWithSignal() first tries to stop the container at the container level. If the container is in the Restarting state, this Kill attempt is abandoned.
If the container is Paused, it is resumed first, and the SIGKILL is delivered as soon as the container is resumed.

If after waiting 2 seconds the container state still has not turned to NotRunning, SIGKILL is sent to the process inside the container. From there it waits up to another 10 seconds; if the container still hasn't exited, the container's init (PID 1) process is looked up and sent SIGKILL directly.

<-container.Wait: after SIGKILL has been sent, the code blocks waiting. This time no timeout is set — it waits indefinitely — and meanwhile the current goroutine is holding a container-level lock (state.Lock()).

TODO: daemon.containerd.Resume()

// killWithSignal sends the container the given signal. This wrapper for the
// host specific kill command prepares the container before attempting
// to send the signal. An error is returned if the container is paused
// or not running, or if there is a problem returned from the
// underlying kill command.
func (daemon *Daemon) killWithSignal(container *containerpkg.Container, sig int) error {
    logrus.Debugf("Sending kill signal %d to container %s", sig, container.ID)
    container.Lock()
    defer container.Unlock()
    if !container.Running {
        return errNotRunning(container.ID)
    }
    var unpause bool
    if container.Config.StopSignal != "" && syscall.Signal(sig) != syscall.SIGKILL {
        ...
    } else {
        container.ExitOnNext()
        unpause = container.Paused
    }
    if !daemon.IsShuttingDown() {
        container.HasBeenManuallyStopped = true
        container.CheckpointTo(daemon.containersReplica)
    }
    // if the container is currently restarting we do not need to send the signal
    // to the process. Telling the monitor that it should exit on its next event
    // loop is enough
    if container.Restarting {
        return nil
    }
    if err := daemon.kill(container, sig); err != nil {
        if errdefs.IsNotFound(err) {
            unpause = false
            logrus.WithError(err).WithField("container", container.ID).WithField("action", "kill").Debug("container kill failed because of 'container not found' or 'no such process'")
        } else {
            return errors.Wrapf(err, "Cannot kill container %s", container.ID)
        }
    }
    if unpause {
        // above kill signal will be sent once resume is finished
        if err := daemon.containerd.Resume(context.Background(), container.ID); err != nil {
            logrus.Warnf("Cannot unpause container %s: %s", container.ID, err)
        }
    }
    attributes := map[string]string{
        "signal": fmt.Sprintf("%d", sig),
    }
    daemon.LogContainerEventWithAttributes(container, "kill", attributes)
    return nil
}
func killProcessDirectly(cntr *container.Container) error {
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    // Block until the container to stops or timeout.
    status := <-cntr.Wait(ctx, container.WaitConditionNotRunning)
    if status.Err() != nil {
        // Ensure that we don't kill ourselves
        if pid := cntr.GetPID(); pid != 0 {
            logrus.Infof("Container %s failed to exit within 10 seconds of kill - trying direct SIGKILL", stringid.TruncateID(cntr.ID))
            if err := unix.Kill(pid, 9); err != nil {
                if err != unix.ESRCH {
                    return err
                }
                e := errNoSuchProcess{pid, 9}
                logrus.Debug(e)
                return e
            }
        }
    }
    return nil
}
// Wait waits until the container is in a certain state indicated by the given
// condition. A context must be used for cancelling the request, controlling
// timeouts, and avoiding goroutine leaks. Wait must be called without holding
// the state lock. Returns a channel from which the caller will receive the
// result. If the container exited on its own, the result's Err() method will
// be nil and its ExitCode() method will return the container's exit code,
// otherwise, the results Err() method will return an error indicating why the
// wait operation failed.
func (s *State) Wait(ctx context.Context, condition WaitCondition) <-chan StateStatus {
    s.Lock()
    defer s.Unlock()
    if condition == WaitConditionNotRunning && !s.Running {
        // Buffer so we can put it in the channel now.
        resultC := make(chan StateStatus, 1)
        // Send the current status.
        resultC <- StateStatus{
            exitCode: s.ExitCode(),
            err:      s.Err(),
        }
        return resultC
    }
    // If we are waiting only for removal, the waitStop channel should
    // remain nil and block forever.
    var waitStop chan struct{}
    if condition < WaitConditionRemoved {
        waitStop = s.waitStop
    }
    // Always wait for removal, just in case the container gets removed
    // while it is still in a "created" state, in which case it is never
    // actually stopped.
    waitRemove := s.waitRemove
    resultC := make(chan StateStatus)
    go func() {
        select {
        case <-ctx.Done():
            // Context timeout or cancellation.
            resultC <- StateStatus{
                exitCode: -1,
                err:      ctx.Err(),
            }
            return
        case <-waitStop:
        case <-waitRemove:
        }
        s.Lock()
        result := StateStatus{
            exitCode: s.ExitCode(),
            err:      s.Err(),
        }
        s.Unlock()
        resultC <- result
    }()
    return resultC
}

What Kill() is dead-waiting on is either the container's waitStop channel firing or its waitRemove channel firing.

// SetStopped sets the container state to "stopped" without locking.
func (s *State) SetStopped(exitStatus *ExitStatus) {
    s.Running = false
    s.Paused = false
    s.Restarting = false
    s.Pid = 0
    if exitStatus.ExitedAt.IsZero() {
        s.FinishedAt = time.Now().UTC()
    } else {
        s.FinishedAt = exitStatus.ExitedAt
    }
    s.ExitCodeValue = exitStatus.ExitCode
    s.OOMKilled = exitStatus.OOMKilled
    close(s.waitStop) // fire waiters for stop
    s.waitStop = make(chan struct{})
}
...
// SetRestarting sets the container state to "restarting" without locking.
// It also sets the container PID to 0.
func (s *State) SetRestarting(exitStatus *ExitStatus) {
    // we should consider the container running when it is restarting because of
    // all the checks in docker around rm/stop/etc
    s.Running = true
    s.Restarting = true
    s.Paused = false
    s.Pid = 0
    s.FinishedAt = time.Now().UTC()
    s.ExitCodeValue = exitStatus.ExitCode
    s.OOMKilled = exitStatus.OOMKilled
    close(s.waitStop) // fire waiters for stop
    s.waitStop = make(chan struct{})
}

Look at waitStop first: it is closed and re-created in SetStopped and SetRestarting, which is what lets Kill() finish waiting and release that lock.
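The close-then-replace move is a standard Go broadcast: closing a channel wakes every goroutine blocked on it at once, and re-making the channel re-arms it for the next round of waiters. A stripped-down sketch of the pattern (my own illustration mirroring State.waitStop, not the moby type itself):

package main

import (
    "fmt"
    "sync"
    "time"
)

type state struct {
    mu       sync.Mutex
    waitStop chan struct{}
}

// wait snapshots the current generation of the channel under the lock.
func (s *state) wait() <-chan struct{} {
    s.mu.Lock()
    defer s.mu.Unlock()
    return s.waitStop
}

// setStopped fires all current waiters, then re-arms for future ones.
func (s *state) setStopped() {
    s.mu.Lock()
    defer s.mu.Unlock()
    close(s.waitStop)
    s.waitStop = make(chan struct{})
}

func main() {
    s := &state{waitStop: make(chan struct{})}
    for i := 0; i < 3; i++ {
        go func(i int) {
            <-s.wait()
            fmt.Println("waiter", i, "woke up")
        }(i)
    }
    time.Sleep(100 * time.Millisecond)
    s.setStopped() // all three waiters wake together
    time.Sleep(100 * time.Millisecond)
}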

  1. When the docker daemon restarts and restores state, it handles containers in one batch: it queries containerd for each container's status, and if containerd reports the container is dead, it runs SetStopped() once.

Note that if the container is still alive but dockerd was not started with --live-restore, dockerd runs daemon.kill() once, sending the stop signal directly to the container's init process.

func (daemon *Daemon) restore() error {
    ...
    for _, c := range containers {
        group.Add(1)
        go func(c *container.Container) {
            ...
            alive, _, process, err = daemon.containerd.Restore(context.Background(), c.ID, c.InitializeStdio)
            ...
            if !alive && process != nil {
                ec, exitedAt, err = process.Delete(context.Background())
                if err != nil && !errdefs.IsNotFound(err) {
                    logrus.WithError(err).Errorf("Failed to delete container %s from containerd", c.ID)
                    return
                }
            } else if !daemon.configStore.LiveRestoreEnabled {
                if err := daemon.kill(c, c.StopSignal()); err != nil && !errdefs.IsNotFound(err) {
                    logrus.WithError(err).WithField("container", c.ID).Error("error shutting down container")
                    return
                }
            }
            ...
            if !alive {
                c.Lock()
                c.SetStopped(&container.ExitStatus{ExitCode: int(ec), ExitedAt: exitedAt})
                daemon.Cleanup(c)
                if err := c.CheckpointTo(daemon.containersReplica); err != nil {
                    logrus.Errorf("Failed to update stopped container %s state: %v", c.ID, err)
                }
                c.Unlock()
            }
            ...
  2. In docker's event handling, SetStopped is called in two places.

When docker receives the exit event, it grabs a container-level lock (container.Lock()), tells containerd to delete the corresponding task, waits just 2 seconds (for the container's I/O streams to drain), and then continues.

If it decides the container does not need to be restarted, it calls SetStopped once.

If a restart is needed but the restart fails, SetStopped is also called once — by that point the lock has already been released.

// ProcessEvent is called by libcontainerd whenever an event occurs
func (daemon *Daemon) ProcessEvent(id string, e libcontainerdtypes.EventType, ei libcontainerdtypes.EventInfo) error {
    c, err := daemon.GetContainer(id)
    if err != nil {
        return errors.Wrapf(err, "could not find container %s", id)
    }
    switch e {
    ...
    case libcontainerdtypes.EventExit:
        if int(ei.Pid) == c.Pid {
            c.Lock()
            _, _, err := daemon.containerd.DeleteTask(context.Background(), c.ID)
            ...
            ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
            c.StreamConfig.Wait(ctx)
            cancel()
            c.Reset(false)
            exitStatus := container.ExitStatus{
                ExitCode:  int(ei.ExitCode),
                ExitedAt:  ei.ExitedAt,
                OOMKilled: ei.OOMKilled,
            }
            restart, wait, err := c.RestartManager().ShouldRestart(ei.ExitCode, daemon.IsShuttingDown() || c.HasBeenManuallyStopped, time.Since(c.StartedAt))
            if err == nil && restart {
                c.RestartCount++
                c.SetRestarting(&exitStatus)
            } else {
                if ei.Error != nil {
                    c.SetError(ei.Error)
                }
                c.SetStopped(&exitStatus)
                defer daemon.autoRemove(c)
            }
            defer c.Unlock()
            ...
            if err == nil && restart {
                go func() {
                    err := <-wait
                    if err == nil {
                        // daemon.netController is initialized when daemon is restoring containers.
                        // But containerStart will use daemon.netController segment.
                        // So to avoid panic at startup process, here must wait util daemon restore done.
                        daemon.waitForStartupDone()
                        if err = daemon.containerStart(c, "", "", false); err != nil {
                            logrus.Debugf("failed to restart container: %+v", err)
                        }
                    }
                    if err != nil {
                        c.Lock()
                        c.SetStopped(&exitStatus)
                        daemon.setStateCounter(c)
                        c.CheckpointTo(daemon.containersReplica)
                        c.Unlock()
                        defer daemon.autoRemove(c)
                        if err != restartmanager.ErrRestartCanceled {
                            logrus.Errorf("restartmanger wait error: %+v", err)
                        }
                    }
                }()
            }
            ...
func (c *client) processEventStream(ctx context.Context, ns string) {
    ...
    // Filter on both namespace *and* topic. To create an "and" filter,
    // this must be a single, comma-separated string
    eventStream, errC := c.client.EventService().Subscribe(ctx, "namespace=="+ns+",topic~=|^/tasks/|")
    ...
    for {
        var oomKilled bool
        select {
        ...
        case ev = <-eventStream:
            ...
            switch t := v.(type) {
            ...
            case *apievents.TaskExit:
                et = libcontainerdtypes.EventExit
                ei = libcontainerdtypes.EventInfo{
                    ContainerID: t.ContainerID,
                    ProcessID:   t.ID,
                    Pid:         t.Pid,
                    ExitCode:    t.ExitStatus,
                    ExitedAt:    t.ExitedAt,
                }
                ...
            }
            ...
            c.processEvent(ctx, et, ei)
        }
    }
}
// libcontainerd/remote/client.go
func (c *client) processEvent(ctx context.Context, et libcontainerdtypes.EventType, ei libcontainerdtypes.EventInfo) {
    c.eventQ.Append(ei.ContainerID, func() {
        err := c.backend.ProcessEvent(ei.ContainerID, et, ei)
        ...
        if et == libcontainerdtypes.EventExit && ei.ProcessID != ei.ContainerID {
            p, err := c.getProcess(ctx, ei.ContainerID, ei.ProcessID)
            ...
            ctr, err := c.getContainer(ctx, ei.ContainerID)
            if err != nil {
                c.logger.WithFields(logrus.Fields{
                    "container": ei.ContainerID,
                    "error":     err,
                }).Error("failed to find container")
            } else {
                labels, err := ctr.Labels(ctx)
                if err != nil {
                    c.logger.WithFields(logrus.Fields{
                        "container": ei.ContainerID,
                        "error":     err,
                    }).Error("failed to get container labels")
                    return
                }
                newFIFOSet(labels[DockerContainerBundlePath], ei.ProcessID, true, false).Close()
            }
            _, err = p.Delete(context.Background())
            ...
        }
    })
}
// plugin/executor/containerd/containerd.go
// deleteTaskAndContainer deletes plugin task and then plugin container from containerd
func deleteTaskAndContainer(ctx context.Context, cli libcontainerdtypes.Client, id string, p libcontainerdtypes.Process) {
    if p != nil {
        if _, _, err := p.Delete(ctx); err != nil && !errdefs.IsNotFound(err) {
            logrus.WithError(err).WithField("id", id).Error("failed to delete plugin task from containerd")
        }
    } else {
        if _, _, err := cli.DeleteTask(ctx, id); err != nil && !errdefs.IsNotFound(err) {
            logrus.WithError(err).WithField("id", id).Error("failed to delete plugin task from containerd")
        }
    }
    if err := cli.Delete(ctx, id); err != nil && !errdefs.IsNotFound(err) {
        logrus.WithError(err).WithField("id", id).Error("failed to delete plugin container from containerd")
    }
}
...
// ProcessEvent handles events from containerd
// All events are ignored except the exit event, which is sent of to the stored handler
func (e *Executor) ProcessEvent(id string, et libcontainerdtypes.EventType, ei libcontainerdtypes.EventInfo) error {
    switch et {
    case libcontainerdtypes.EventExit:
        deleteTaskAndContainer(context.Background(), e.client, id, nil)
        return e.exitHandler.HandleExitEvent(ei.ContainerID)
    }
    return nil
}

dockerd subscribes to the containerd service's /tasks/exit events — so the baton now passes to containerd.
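You can watch that hand-off yourself with the containerd Go client. A hedged sketch — the socket path and the "moby" namespace are common defaults (adjust for your setup), and the topic filter is the same one processEventStream uses:

package main

import (
    "context"
    "fmt"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
)

func main() {
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        panic(err)
    }
    defer client.Close()

    // dockerd's containers live in the "moby" namespace.
    ctx := namespaces.WithNamespace(context.Background(), "moby")
    envelopes, errs := client.EventService().Subscribe(ctx, "topic~=|^/tasks/|")
    for {
        select {
        case env := <-envelopes:
            // A /tasks/exit envelope here is what ends up in ProcessEvent.
            fmt.Println(env.Timestamp, env.Namespace, env.Topic)
        case err := <-errs:
            panic(err)
        }
    }
}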

The places in containerd that publish a TaskExit event:

  • containerd publishes the exit event itself when cleaning up after a dead shim
func (r *Runtime) cleanupAfterDeadShim(ctx context.Context, bundle *bundle, ns, id string) error {
    ...
    // Notify Client
    exitedAt := time.Now().UTC()
    r.events.Publish(ctx, runtime.TaskExitEventTopic, &eventstypes.TaskExit{
        ContainerID: id,
        ID:          id,
        Pid:         uint32(pid),
        ExitStatus:  128 + uint32(unix.SIGKILL),
        ExitedAt:    exitedAt,
    })
    r.tasks.Delete(ctx, id)
    ...
}
  • containerd-shim publishes the exit event when it receives SIGCHLD and the exiting process is the init process
func (s *Service) checkProcesses(e runc.Exit) {
    for _, p := range s.allProcesses() {
        if p.Pid() != e.Pid {
            continue
        }
        if ip, ok := p.(*process.Init); ok {
            shouldKillAll, err := shouldKillAllOnExit(s.bundle)
            if err != nil {
                log.G(s.context).WithError(err).Error("failed to check shouldKillAll")
            }
            // Ensure all children are killed
            if shouldKillAll {
                if err := ip.KillAll(s.context); err != nil {
                    log.G(s.context).WithError(err).WithField("id", ip.ID()).Error("failed to kill init's children")
                }
            }
        }
        p.SetExited(e.Status)
        s.events <- &eventstypes.TaskExit{
            ContainerID: s.id,
            ID:          p.ID(),
            Pid:         uint32(e.Pid),
            ExitStatus:  uint32(e.Status),
            ExitedAt:    p.ExitedAt(),
        }
        return
    }
}

The places that call cleanupAfterDeadShim():

  • When creating a task, an exitHandler is set
// Create a new task
func (r *Runtime) Create(ctx context.Context, id string, opts runtime.CreateOpts) (_ runtime.Task, err error) {
    namespace, err := namespaces.NamespaceRequired(ctx)
    ...
    ropts, err := r.getRuncOptions(ctx, id)
    if err != nil {
        return nil, err
    }
    bundle, err := newBundle(id,
        filepath.Join(r.state, namespace),
        filepath.Join(r.root, namespace),
        opts.Spec.Value)
    ...
    shimopt := ShimLocal(r.config, r.events)
    if !r.config.NoShim {
        ...
        exitHandler := func() {
            log.G(ctx).WithField("id", id).Info("shim reaped")
            if _, err := r.tasks.Get(ctx, id); err != nil {
                // Task was never started or was already successfully deleted
                return
            }
            if err = r.cleanupAfterDeadShim(context.Background(), bundle, namespace, id); err != nil {
                log.G(ctx).WithError(err).WithFields(logrus.Fields{
                    "id":        id,
                    "namespace": namespace,
                }).Warn("failed to clean up after killed shim")
            }
        }
        shimopt = ShimRemote(r.config, r.address, cgroup, exitHandler)
    }
    s, err := bundle.NewShimClient(ctx, namespace, shimopt, ropts)
    if err != nil {
        return nil, err
    }
    defer func() {
        if err != nil {
            deferCtx, deferCancel := context.WithTimeout(namespaces.WithNamespace(context.TODO(), namespace), cleanupTimeout)
            defer deferCancel()
            if kerr := s.KillShim(deferCtx); kerr != nil {
                log.G(ctx).WithError(err).Error("failed to kill shim")
            }
        }
    }()
    rt := r.config.Runtime
    if ropts != nil && ropts.Runtime != "" {
        rt = ropts.Runtime
    }
    ...
    cr, err := s.Create(ctx, sopts)
    ...
    t, err := newTask(id, namespace, int(cr.Pid), s, r.events, r.tasks, bundle)
    ...
    r.events.Publish(ctx, runtime.TaskCreateEventTopic, &eventstypes.TaskCreate{
        ContainerID: sopts.ID,
        Bundle:      sopts.Bundle,
        Rootfs:      sopts.Rootfs,
        IO: &eventstypes.TaskIO{
            Stdin:    sopts.Stdin,
            Stdout:   sopts.Stdout,
            Stderr:   sopts.Stderr,
            Terminal: sopts.Terminal,
        },
        Checkpoint: sopts.Checkpoint,
        Pid:        uint32(t.pid),
    })
    return t, nil
}
  • When containerd restarts, it reloads all tasks
func (r *Runtime) loadTasks(ctx context.Context, ns string) ([]*Task, error) {
    dir, err := ioutil.ReadDir(filepath.Join(r.state, ns))
    if err != nil {
        return nil, err
    }
    var o []*Task
    for _, path := range dir {
        ctx = namespaces.WithNamespace(ctx, ns)
        pid, _ := runc.ReadPidFile(filepath.Join(bundle.path, process.InitPidFile))
        shimExit := make(chan struct{})
        s, err := bundle.NewShimClient(ctx, ns, ShimConnect(r.config, func() {
            defer close(shimExit)
            if _, err := r.tasks.Get(ctx, id); err != nil {
                // Task was never started or was already successfully deleted
                return
            }
            if err := r.cleanupAfterDeadShim(ctx, bundle, ns, id); err != nil {
                ...
            }
        }), nil)
        if err != nil {
            log.G(ctx).WithError(err).WithFields(logrus.Fields{
                "id":        id,
                "namespace": ns,
            }).Error("connecting to shim")
            err := r.cleanupAfterDeadShim(ctx, bundle, ns, id)
            if err != nil {
                log.G(ctx).WithError(err).WithField("bundle", bundle.path).Error("cleaning up after dead shim")
            }
            continue
        }

func (r *Runtime) restoreTasks(ctx context.Context) ([]*Task, error) {
    dir, err := ioutil.ReadDir(r.state)
    ...
    for _, namespace := range dir {
        ...
        log.G(ctx).WithField("namespace", name).Debug("loading tasks in namespace")
        tasks, err := r.loadTasks(ctx, name)
        if err != nil {
            return nil, err
        }
        o = append(o, tasks...)
    }
    return o, nil
}
// New returns a configured runtime
func New(ic *plugin.InitContext) (interface{}, error) {
    ...
    tasks, err := r.restoreTasks(ic.Context)
    if err != nil {
        return nil, err
    }
    ...

When containerd-shim receives SIGCHLD, it generates a runc.Exit event and pushes it to all subscribers — and here the subscriber is essentially containerd-shim itself.

In the processExit goroutine it calls checkProcesses, which then pushes the TaskExit event to containerd.

func handleSignals(logger *logrus.Entry, signals chan os.Signal, server *ttrpc.Server, sv *shim.Service) error {
    var (
        termOnce sync.Once
        done     = make(chan struct{})
    )
    for {
        select {
        case <-done:
            return nil
        case s := <-signals:
            switch s {
            case unix.SIGCHLD:
                if err := reaper.Reap(); err != nil {
                    logger.WithError(err).Error("reap exit status")
                }
                ...
// Reap should be called when the process receives an SIGCHLD.  Reap will reap
// all exited processes and close their wait channels
func Reap() error {
    now := time.Now()
    exits, err := sys.Reap(false)
    for _, e := range exits {
        done := Default.notify(runc.Exit{
            Timestamp: now,
            Pid:       e.Pid,
            Status:    e.Status,
        })
        select {
        case <-done:
        case <-time.After(1 * time.Second):
        }
    }
    return err
}
...
func (m *Monitor) notify(e runc.Exit) chan struct{} {
    const timeout = 1 * time.Millisecond
    var (
        done    = make(chan struct{}, 1)
        timer   = time.NewTimer(timeout)
        success = make(map[chan runc.Exit]struct{})
    )
    stop(timer, true)
    go func() {
        defer close(done)
        for {
            var (
                failed      int
                subscribers = m.getSubscribers()
            )
            for _, s := range subscribers {
                s.do(func() {
                    if s.closed {
                        return
                    }
                    if _, ok := success[s.c]; ok {
                        return
                    }
                    timer.Reset(timeout)
                    recv := true
                    select {
                    case s.c <- e:
                        success[s.c] = struct{}{}
                    case <-timer.C:
                        recv = false
                        failed++
                    }
                    stop(timer, recv)
                })
            }
            // all subscribers received the message
            if failed == 0 {
                return
            }
        }
    }()
    return done
}
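Under the hood, sys.Reap(false) is essentially a non-blocking waitpid loop. A minimal stand-alone reaper showing the same mechanics (illustrative only, not the containerd code):

package main

import (
    "fmt"
    "os"
    "os/exec"
    "os/signal"

    "golang.org/x/sys/unix"
)

func main() {
    sigs := make(chan os.Signal, 32)
    signal.Notify(sigs, unix.SIGCHLD)

    cmd := exec.Command("sleep", "1")
    if err := cmd.Start(); err != nil {
        panic(err)
    }

    for range sigs {
        // Reap every child that has exited; WNOHANG keeps the loop non-blocking.
        for {
            var ws unix.WaitStatus
            pid, err := unix.Wait4(-1, &ws, unix.WNOHANG, nil)
            if err != nil || pid <= 0 {
                break
            }
            // In the shim this is where a runc.Exit is built and handed to notify().
            fmt.Printf("reaped pid %d, status %d\n", pid, ws.ExitStatus())
            if pid == cmd.Process.Pid {
                return
            }
        }
    }
}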
