
[k8s source code analysis][kubelet] devicemanager


1. Preface

Please credit the original source when reposting, and respect the author's work!

This article analyzes the remaining methods of the device manager, and what happens when the kubelet or a device plugin restarts.

2. readCheckpoint and writeCheckpoint

Checkpoint data is persisted to /var/lib/kubelet/device-plugins/kubelet_internal_checkpoint.

func (m *ManagerImpl) writeCheckpoint() error {
    m.mutex.Lock()
    registeredDevs := make(map[string][]string)
    // only healthy devices are persisted
    for resource, devices := range m.healthyDevices {
        registeredDevs[resource] = devices.UnsortedList()
    }
    // persist the contents of podDevices
    data := checkpoint.New(m.podDevices.toCheckpointData(),
        registeredDevs)
    m.mutex.Unlock()
    err := m.checkpointManager.CreateCheckpoint(kubeletDeviceManagerCheckpoint, data)
    if err != nil {
        return fmt.Errorf("failed to write checkpoint file %q: %v", kubeletDeviceManagerCheckpoint, err)
    }
    return nil
}

Note that the device manager only persists the contents of healthyDevices and podDevices; other fields such as allocatedDevices (the devices already allocated) and unhealthyDevices are not persisted.
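For reference, the data written into that file is shaped roughly like the following (a simplified sketch of the types in the devicemanager checkpoint package; field names may differ slightly across kubelet versions):

// sketch: the structures serialized into
// /var/lib/kubelet/device-plugins/kubelet_internal_checkpoint
type PodDevicesEntry struct {
    PodUID        string
    ContainerName string
    ResourceName  string
    DeviceIDs     []string // device IDs given to this container for this resource
    AllocResp     []byte   // marshalled AllocateResponse returned by the device plugin
}

// checkpoint.New bundles the podDevices entries together with the
// healthy (registered) devices of every resource.
type checkpointData struct {
    PodDeviceEntries  []PodDevicesEntry
    RegisteredDevices map[string][]string
}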

func (m *ManagerImpl) readCheckpoint() error {
    registeredDevs := make(map[string][]string)
    devEntries := make([]checkpoint.PodDevicesEntry, 0)
    cp := checkpoint.New(devEntries, registeredDevs)
    err := m.checkpointManager.GetCheckpoint(kubeletDeviceManagerCheckpoint, cp)
    if err != nil {
        if err == errors.ErrCheckpointNotFound {
            klog.Warningf("Failed to retrieve checkpoint for %q: %v", kubeletDeviceManagerCheckpoint, err)
            return nil
        }
        return err
    }
    m.mutex.Lock()
    defer m.mutex.Unlock()
    podDevices, registeredDevs := cp.GetData()
    // only the contents of podDevices are restored; healthyDevices is not restored here
    m.podDevices.fromCheckpointData(podDevices)
    m.allocatedDevices = m.podDevices.devices()
    for resource := range registeredDevs {
        // for each resource, create an endpoint that is already marked as stopped, and wait for the device plugin to re-register
        m.healthyDevices[resource] = sets.NewString()
        m.unhealthyDevices[resource] = sets.NewString()
        m.endpoints[resource] = endpointInfo{e: newStoppedEndpointImpl(resource), opts: nil}
    }
    return nil
}

Three points to note:
1. The contents of podDevices are restored.
2. The contents of healthyDevices are not restored.
3. For every resource an endpoint with a stop time is created, and the manager waits for the device plugin to re-register. When does that re-registration happen? That is analyzed later; on re-registration the callback is invoked, which updates healthyDevices and unhealthyDevices.
So the healthyDevices persisted by writeCheckpoint is used in readCheckpoint only to create an endpoint with a stop time for each of its resources.
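The "endpoint with a stop time" is essentially a placeholder. A simplified sketch of how it behaves, based on endpointImpl in the kubelet source (the grace period is on the order of a few minutes; treat the exact value here as an assumption):

// sketch: a stopped endpoint only remembers when it stopped;
// stopGracePeriodExpired later tells GetCapacity whether the resource
// should be garbage-collected because the plugin never re-registered.
const endpointStopGracePeriod = 5 * time.Minute

func newStoppedEndpointImpl(resourceName string) *endpointImpl {
    return &endpointImpl{
        resourceName: resourceName,
        stopTime:     time.Now(), // marked as stopped right away by readCheckpoint
    }
}

func (e *endpointImpl) stopGracePeriodExpired() bool {
    e.mutex.Lock()
    defer e.mutex.Unlock()
    return !e.stopTime.IsZero() && time.Since(e.stopTime) > endpointStopGracePeriod
}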

3. GetCapacity

func (m *ManagerImpl) GetCapacity() (v1.ResourceList, v1.ResourceList, []string) {
    needsUpdateCheckpoint := false
    var capacity = v1.ResourceList{}
    var allocatable = v1.ResourceList{}
    deletedResources := sets.NewString()
    m.mutex.Lock()
    for resourceName, devices := range m.healthyDevices {
        eI, ok := m.endpoints[resourceName]
        if (ok && eI.e.stopGracePeriodExpired()) || !ok {
            // The resources contained in endpoints and (un)healthyDevices
            // should always be consistent. Otherwise, we run with the risk
            // of failing to garbage collect non-existing resources or devices.
            if !ok {
                klog.Errorf("unexpected: healthyDevices and endpoints are out of sync")
            }
            // remove every reference to this resourceName from the device manager
            delete(m.endpoints, resourceName)
            delete(m.healthyDevices, resourceName)
            deletedResources.Insert(resourceName)
            needsUpdateCheckpoint = true
        } else {
            capacity[v1.ResourceName(resourceName)] = *resource.NewQuantity(int64(devices.Len()), resource.DecimalSI)
            allocatable[v1.ResourceName(resourceName)] = *resource.NewQuantity(int64(devices.Len()), resource.DecimalSI)
        }
    }
    for resourceName, devices := range m.unhealthyDevices {
        eI, ok := m.endpoints[resourceName]
        if (ok && eI.e.stopGracePeriodExpired()) || !ok {
            if !ok {
                klog.Errorf("unexpected: unhealthyDevices and endpoints are out of sync")
            }
            delete(m.endpoints, resourceName)
            delete(m.unhealthyDevices, resourceName)
            deletedResources.Insert(resourceName)
            needsUpdateCheckpoint = true
        } else {
            capacityCount := capacity[v1.ResourceName(resourceName)]
            unhealthyCount := *resource.NewQuantity(int64(devices.Len()), resource.DecimalSI)
            capacityCount.Add(unhealthyCount)
            capacity[v1.ResourceName(resourceName)] = capacityCount
        }
    }
    m.mutex.Unlock()
    if needsUpdateCheckpoint {
        // some resourceName had no endpoint, or its endpoint's stop grace period expired
        m.writeCheckpoint()
    }
    return capacity, allocatable, deletedResources.UnsortedList()
}

capacity: for each resourceName, the sum of its unhealthyDevices and healthyDevices.
allocatable: for each resourceName, its healthyDevices only.
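As a concrete, hypothetical example: if nvidia.com/gpu has 3 healthy and 1 unhealthy device, GetCapacity reports capacity = 4 and allocatable = 3. A minimal self-contained sketch (not kubelet code) of the quantity arithmetic it performs:

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
)

func main() {
    healthy, unhealthy := 3, 1 // hypothetical device counts
    res := v1.ResourceName("nvidia.com/gpu")

    capacity := v1.ResourceList{}
    allocatable := v1.ResourceList{}

    // allocatable counts healthy devices only
    allocatable[res] = *resource.NewQuantity(int64(healthy), resource.DecimalSI)

    // capacity = healthy + unhealthy, built the same way GetCapacity builds it
    capQty := *resource.NewQuantity(int64(healthy), resource.DecimalSI)
    capQty.Add(*resource.NewQuantity(int64(unhealthy), resource.DecimalSI))
    capacity[res] = capQty

    c, a := capacity[res], allocatable[res]
    fmt.Println(c.Value(), a.Value()) // prints: 4 3
}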

4. GetDeviceRunContainerOptions

func (m *ManagerImpl) GetDeviceRunContainerOptions(pod *v1.Pod, container *v1.Container) (*DeviceRunContainerOptions, error) {
    podUID := string(pod.UID)
    contName := container.Name
    needsReAllocate := false
    for k := range container.Resources.Limits {
        resource := string(k)
        if !m.isDevicePluginResource(resource) {
            continue
        }
        err := m.callPreStartContainerIfNeeded(podUID, contName, resource)
        if err != nil {
            return nil, err
        }
        if m.podDevices.containerDevices(podUID, contName, resource) == nil {
            needsReAllocate = true
        }
    }
    if needsReAllocate {
        klog.V(2).Infof("needs re-allocate device plugin resources for pod %s", podUID)
        m.allocatePodResources(pod)
    }
    m.mutex.Lock()
    defer m.mutex.Unlock()
    return m.podDevices.deviceRunContainerOptions(string(pod.UID), container.Name), nil
}
func (m *ManagerImpl) callPreStartContainerIfNeeded(podUID, contName, resource string) error {
    ...
    devices := m.podDevices.containerDevices(podUID, contName, resource)
    if devices == nil {
        m.mutex.Unlock()
        return fmt.Errorf("no devices found allocated in local cache for pod %s, container %s, resource %s", podUID, contName, resource)
    }
    // devs is the unsorted list of device IDs allocated to this container
    devs := devices.UnsortedList()
    _, err := eI.e.preStartContainer(devs)
    ...
}

This method is also called by the kubelet; it returns the runtime options the container needs when it starts. If required, it also calls the device plugin's eI.e.preStartContainer(devs) method over gRPC.
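The DeviceRunContainerOptions returned here collects everything the container runtime needs in order to expose the allocated devices; it is assembled from the AllocateResponse saved in podDevices. Roughly (a simplified sketch of the struct in the devicemanager package; details vary by version):

// sketch: what GetDeviceRunContainerOptions hands back to the kubelet
type DeviceRunContainerOptions struct {
    Envs        []kubecontainer.EnvVar     // e.g. NVIDIA_VISIBLE_DEVICES=GPU-xxx
    Mounts      []kubecontainer.Mount      // host paths to mount into the container
    Devices     []kubecontainer.DeviceInfo // device nodes to expose (path + permissions)
    Annotations []kubecontainer.Annotation // extra runtime annotations
}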

5. Restarting the kubelet

5.1 Restarting the kubelet

When the kubelet restarts, readCheckpoint does not put anything into healthyDevices; as explained above, the manager waits for the device plugins to re-register. So how does a device plugin know that the kubelet has restarted and that it needs to re-register?

// k8s-device-plugin/main.go
func main() {
    ...
    log.Println("Starting FS watcher.")
    watcher, err := newFSWatcher(pluginapi.DevicePluginPath)
    ...
    restart := true
    var devicePlugin *NvidiaDevicePlugin

L:
    for {
        if restart {
            if devicePlugin != nil {
                devicePlugin.Stop()
            }

            devicePlugin = NewNvidiaDevicePlugin()
            if err := devicePlugin.Serve(); err != nil {
                ...
            } else {
                restart = false
            }
        }

        select {
        case event := <-watcher.Events:
            if event.Name == pluginapi.KubeletSocket && event.Op&fsnotify.Create == fsnotify.Create {
                log.Printf("inotify: %s created, restarting.", pluginapi.KubeletSocket)
                restart = true
            }

        case err := <-watcher.Errors:
            log.Printf("inotify: %s", err)
        ...
        }
    }
}
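newFSWatcher is a thin wrapper around fsnotify; watching the device-plugin directory is what lets the plugin see kubelet.sock being re-created. A minimal sketch of what such a helper presumably looks like (the exact implementation in k8s-device-plugin may differ):

// sketch: watch the given paths (here pluginapi.DevicePluginPath, i.e.
// /var/lib/kubelet/device-plugins/) so that re-creation of kubelet.sock
// shows up as an fsnotify.Create event in watcher.Events.
func newFSWatcher(files ...string) (*fsnotify.Watcher, error) {
    watcher, err := fsnotify.NewWatcher()
    if err != nil {
        return nil, err
    }
    for _, f := range files {
        if err := watcher.Add(f); err != nil {
            watcher.Close()
            return nil, err
        }
    }
    return watcher, nil
}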

5.2 Restarting the device plugin

Now suppose the kubelet is running normally but the device plugin keeps restarting. Would that keep producing new endpoints, so that more and more endpoints end up running? To answer this, look at runEndpoint:

func (m *ManagerImpl) runEndpoint(resourceName string, e endpoint) {
    e.run()
    e.stop()

    m.mutex.Lock()
    defer m.mutex.Unlock()

    if old, ok := m.endpoints[resourceName]; ok && old.e == e {
        m.markResourceUnhealthy(resourceName)
    }

    klog.V(2).Infof("Endpoint (%s, %v) became unhealthy", resourceName, e)
}

When the device plugin restarts, its previous service (nvidia.sock) goes away, which interrupts the corresponding endpoint's run method. Because e.run() is a blocking call, only after it returns does e.stop() run and record the stop time. Moreover, as long as the restarted device plugin keeps the same resourceName, its entry in the device manager's endpoints map is simply overwritten by the newly created endpoint, so stale endpoints do not accumulate.
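The overwrite happens at registration time. A simplified sketch of the relevant manager code (abridged from the kubelet source; treat exact names as approximate):

// sketch: when a plugin (re)registers, a fresh endpoint is dialed and
// simply replaces the old map entry for the same resourceName, and a new
// goroutine drives it via runEndpoint.
func (m *ManagerImpl) addEndpoint(r *pluginapi.RegisterRequest) {
    ep, err := newEndpointImpl(filepath.Join(m.socketdir, r.Endpoint), r.ResourceName, m.callback)
    if err != nil {
        klog.Errorf("Failed to dial device plugin with request %v: %v", r, err)
        return
    }
    m.registerEndpoint(r.ResourceName, r.Options, ep)
    go m.runEndpoint(r.ResourceName, ep)
}

func (m *ManagerImpl) registerEndpoint(resourceName string, options *pluginapi.DevicePluginOptions, e endpoint) {
    m.mutex.Lock()
    defer m.mutex.Unlock()
    // the previous endpointInfo (if any) for this resourceName is dropped here
    m.endpoints[resourceName] = endpointInfo{e: e, opts: options}
}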

6. Summary

[figure: conclusion.png]

1. In total, three methods (genericDeviceUpdateCallback, GetCapacity, and allocateContainerResources) call writeCheckpoint to persist the device manager's podDevices and registered devices to the file /var/lib/kubelet/device-plugins/kubelet_internal_checkpoint, because each of these methods may change the content that is persisted.

2. When the kubelet restarts, two actions are worth noting:

2.1 The device manager loads the checkpoint contents back into podDevices via readCheckpoint.
2.2 The file /var/lib/kubelet/device-plugins/kubelet.sock is re-created, which triggers the device plugin to restart. After it restarts and re-registers, the callback genericDeviceUpdateCallback loads the devices into healthyDevices and unhealthyDevices. From then on the connection is kept open via ListAndWatch, which keeps reporting the latest healthyDevices and unhealthyDevices.

3. The contents of /var/lib/kubelet/device-plugins/kubelet_internal_checkpoint lag behind the actual state.
Note: the "lag" here means that some pods that held resources have already finished running, but allocatedDevices in the device manager is not updated in time.

Of the three methods that may call writeCheckpoint, only genericDeviceUpdateCallback always calls it; the other two call writeCheckpoint only when their respective conditions hold. Here is one scenario in which the contents of /var/lib/kubelet/device-plugins/kubelet_internal_checkpoint fall behind. Recall that GetCapacity only calls writeCheckpoint when a resource's endpoint is missing or its stop grace period has expired.
Suppose all resources (for example, all GPU cards) have already been allocated, and some time later one of the pods holding resources finishes. Even if GetCapacity or genericDeviceUpdateCallback then calls writeCheckpoint, it only refreshes the registered-devices part, not the podDevices part, because only a call to Allocate (a new resource request) triggers updateAllocatedDevices, which actually releases the resources of pods that have finished. That is why resource usage computed from kubelet_internal_checkpoint can differ from actual usage: some pods have already finished, yet the device manager's state has not been updated and the kubelet_internal_checkpoint file still contains those pods' resource-usage records.
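For completeness, this is roughly what updateAllocatedDevices does once Allocate is next called (a simplified sketch based on the kubelet source, with some checks omitted); it is exactly the step that never runs as long as no new allocation request arrives:

// sketch: drop podDevices entries for pods that are no longer active,
// then rebuild allocatedDevices from what remains. Because this only runs
// on the Allocate path, the checkpoint can lag behind reality until the
// next resource request comes in.
func (m *ManagerImpl) updateAllocatedDevices(activePods []*v1.Pod) {
    m.mutex.Lock()
    defer m.mutex.Unlock()
    activePodUids := sets.NewString()
    for _, pod := range activePods {
        activePodUids.Insert(string(pod.UID))
    }
    allocatedPodUids := m.podDevices.pods()
    podsToBeRemoved := allocatedPodUids.Difference(activePodUids)
    if len(podsToBeRemoved) == 0 {
        return
    }
    m.podDevices.delete(podsToBeRemoved.List())
    // regenerate allocatedDevices from the remaining pods
    m.allocatedDevices = m.podDevices.devices()
}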