Archive host: aio1, timezone UTC (Etc/UTC)
Host labels: {"domainname":"localdomain","groupid":989,"hostname":"aio1","machineid":"c9ce8e4178408b77ed2a4c341e694a92","userid":999}

pmcd.pmlogger.host
pmcd.pmlogger.port
pmcd.pmlogger.archive
pmcd.pid
pmcd.seqnum

hinv.nfchost [Number of fibre channel host bus adapters from /sys/class/fc_host/host*]
kernel.all.pid_max [maximum process identifier from /proc/sys/kernel/pid_max]
kernel.all.entropy.poolsize [maximum size of the entropy pool]
kernel.all.entropy.avail [entropy available to random number generators]
hinv.ntape [number of Linux scsi tape devices]
hinv.map.mdname [per-multi-device device persistent name mapping to md[0-9]*]
    Instance domain: per md device [multi-device driver devices]
hinv.cpu.frequency_scaling.min [Minimum scaled CPU frequency from /sys/devices/system/cpu/*/cpufreq]
    Instance domain: per cpu [set of all processors]: cpu0 cpu1 cpu2 cpu3 cpu4 cpu5 cpu6 cpu7
hinv.cpu.frequency_scaling.max [Maximum scaled CPU frequency from /sys/devices/system/cpu/*/cpufreq]
hinv.cpu.frequency_scaling.time [CPU frequency scaled time from /sys/devices/system/cpu/*/cpufreq]
hinv.cpu.frequency_scaling.count [CPU frequency scaled count from /sys/devices/system/cpu/*/cpufreq]
hinv.cpu.thermal_throttle.package.time [CPU package throttle time from /sys/devices/system/cpu/*/thermal_throttle]
hinv.cpu.thermal_throttle.package.count [CPU package throttles from /sys/devices/system/cpu/*/thermal_throttle]
hinv.cpu.thermal_throttle.core.time [CPU core throttle time from /sys/devices/system/cpu/*/thermal_throttle]
hinv.cpu.thermal_throttle.core.count [CPU core throttles from /sys/devices/system/cpu/*/thermal_throttle]
hinv.node.online [NUMA node online state from /sys/devices/system/node/*/online]
    Instance domain: per numa_node [non-uniform memory access (NUMA) nodes]: node0
hinv.cpu.online [CPU online state from /sys/devices/system/cpu/*/online]
hinv.map.dmname [per-device-mapper device persistent name mapping to dm-[0-9]*]
    Instance domain: per dm device [device mapper driver devices]
network.interface.inet_addr [string INET interface address (ifconfig style)]
    Instance domain: per interface [network interface addresses (inet and ipv6)]:
    lo enX0 enX1 br-vlan br-dbaas br-lbaas br-bmaas br-mgmt br-vxlan br-storage
    br-dbaas-veth eth13 dummy-vlan dummy-bmaas dummy-vxlan dummy-dbaas
    dummy-lbaas eth14 bonding_masters eth12 br-lbaas-veth dummy-mgmt
    br-bmaas-veth dummy-storage eth15 br-vlan-veth
mem.slabinfo.objects.size [size of individual objects of each cache]
    Instance domain: per slab [kernel memory slabs]
hinv.cpu.cache_alignment [Cache alignment for each CPU as reported by /proc/cpuinfo]
hinv.cpu.flags [Hardware capability flags for each CPU as reported by /proc/cpuinfo]
hinv.cpu.model_name [model name of each CPU as reported by /proc/cpuinfo]
hinv.map.cpu_node [logical CPU to NUMA node mapping for each CPU]
hinv.machine [hardware identifier as reported by uname(2)]
hinv.map.cpu_num [logical to physical CPU mapping for each CPU]
hinv.cpu.bogomips [bogo mips rating for each CPU as reported by /proc/cpuinfo]
hinv.cpu.cache [primary cache size of each CPU as reported by /proc/cpuinfo]
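Many of the hinv.cpu.* values above are parsed straight out of /proc/cpuinfo.
The sketch below (plain Python, not part of PCP itself) shows the
stanza-per-processor layout these metrics rely on; the field names are the
usual x86 ones and may differ on other architectures.

    # Minimal sketch: yield one {field: value} dict per processor stanza
    # of /proc/cpuinfo; stanzas are separated by blank lines.
    def cpuinfo_fields(path="/proc/cpuinfo"):
        cpu = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line:            # blank line ends a processor stanza
                    if cpu:
                        yield cpu
                    cpu = {}
                    continue
                key, _, value = line.partition(":")
                cpu[key.strip()] = value.strip()
        if cpu:
            yield cpu

    for ncpu, cpu in enumerate(cpuinfo_fields()):
        # "model name", "cpu MHz" and "bogomips" back hinv.cpu.model_name,
        # hinv.cpu.clock and hinv.cpu.bogomips respectively
        print(ncpu, cpu.get("model name"), cpu.get("cpu MHz"), cpu.get("bogomips"))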
hinv.cpu.stepping [stepping of each CPU as reported by /proc/cpuinfo]
hinv.cpu.model [model number of each CPU as reported by /proc/cpuinfo]
hinv.cpu.vendor [manufacturer of each CPU as reported by /proc/cpuinfo]
hinv.cpu.clock [clock rate in MHz for each CPU as reported by /proc/cpuinfo]
hinv.map.scsi [list of active SCSI devices]
    Instance domain: per disk [SCSI devices]
    There is one string value for each SCSI device active in the system, as
    extracted from /proc/scsi/scsi. The external instance name for each
    device is in the format scsiD:C:I:L, where D is the controller number,
    C is the channel number, I is the device ID and L is the SCSI LUN number
    for the device. The values for this metric are the actual device names
    (sd[a-z] are SCSI disks, st[0-9] are SCSI tapes and scd[0-9] are SCSI
    CD-ROMs).
kernel.uname.distro [Linux distribution name]
    The Linux distribution name, as determined by a number of heuristics.
    For example:
      + on Fedora, the contents of /etc/fedora-release
      + on RedHat, the contents of /etc/redhat-release
pmda.uname [identity and type of current system]
    Identity and type of current system. The concatenation of the values
    returned from utsname(2), also similar to uname -a. See also the
    kernel.uname.* metrics.
kernel.uname.nodename [host name of this node on the network]
    Name of this node on the network as reported by the nodename[] value
    returned from uname(2) or uname -n. Usually a synonym for the host name.
    See also pmda.uname.
kernel.uname.machine [name of the hardware type the system is running on]
    Name of the hardware type the system is running on as reported by the
    machine[] value returned from uname(2) or uname -m, e.g. "i686". See
    also pmda.uname.
kernel.uname.sysname [name of the implementation of the operating system]
    Name of the implementation of the running operating system as reported
    by the sysname[] value returned from uname(2) or uname -s. Usually
    "Linux". See also pmda.uname.
kernel.uname.version [version level (build number) and build date of the running kernel]
    Version level of the running kernel as reported by the version[] value
    returned from uname(2) or uname -v. Usually a build number followed by
    a build date. See also pmda.uname.
kernel.uname.release [release level of the running kernel]
    Release level of the running kernel as reported via the release[] value
    returned from uname(2) or uname -r. See also pmda.uname.
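The kernel.uname.* entries above all come from the utsname(2) structure. A
minimal sketch of the mapping, using Python's os.uname() as a stand-in for a
direct uname(2) call:

    import os

    u = os.uname()
    uname_metrics = {
        "kernel.uname.sysname":  u.sysname,   # usually "Linux"
        "kernel.uname.nodename": u.nodename,  # host name, cf. uname -n
        "kernel.uname.release":  u.release,   # cf. uname -r
        "kernel.uname.version":  u.version,   # build number and date, uname -v
        "kernel.uname.machine":  u.machine,   # e.g. "x86_64", cf. uname -m
    }
    # pmda.uname concatenates the same fields, similar to "uname -a"
    print(" ".join(u))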
filesys.blocksize [Size of each block on mounted filesystem (Bytes)]
    Instance domain: mounted block-device-backed filesystem: /dev/xvda1
    /dev/xvdd /dev/xvde1 /dev/xvde2 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4
filesys.mountdir [File system mount point]
filesys.capacity [Total capacity of mounted filesystem (Kbytes)]
hinv.nfilesys [number of (local) file systems currently mounted]
kernel.all.interrupts.errors [interrupt error count from /proc/interrupts]
    This is a global counter (normally converted to a count/second) for any
    and all errors that occur while handling interrupts.
hinv.ninterface [number of active (up) network interfaces]
network.interface.duplex [value one for half or two for full duplex interface]
    Instance domain: per interface [set of network interfaces]: lo enX0 enX1
    dummy-mgmt dummy-vxlan br-vlan br-dbaas br-lbaas br-bmaas eth12
    br-vlan-veth eth13 br-dbaas-veth eth14 br-lbaas-veth eth15 br-bmaas-veth
    dummy-storage dummy-vlan dummy-dbaas dummy-lbaas dummy-bmaas br-mgmt
    br-vxlan br-storage
network.interface.speed [interface speed in megabytes per second]
    The linespeed on the network interface, as reported by the kernel,
    scaled from Megabits/second to Megabytes/second. See also
    network.interface.baudrate for the bytes/second value.
network.interface.mtu [maximum transmission unit on network interface]
kernel.all.lastpid [most recently allocated process identifier]
hinv.hugepagesize [Huge page size from /proc/meminfo]
    The memory huge page size of the running kernel in bytes.
hinv.pagesize [Memory page size]
    The memory page size of the running kernel in bytes.
hinv.physmem [total system memory metric from /proc/meminfo]
swap.length [total swap available metric from /proc/meminfo]
mem.physmem [total system memory metric reported by /proc/meminfo]
    The value of this metric corresponds to the "MemTotal" field reported by
    /proc/meminfo. Note that this does not necessarily correspond to actual
    installed physical memory - there may be areas of the physical address
    space mapped as ROM in various peripheral devices, and the BIOS may be
    mirroring certain ROMs in RAM.
hinv.map.scsi_id [scsi disk physical device unique identifier]
    Instance domain: per disk [set of all disks]: xvda xvde xvdd
    The unique identifier (e.g. World Wide Id) of a scsi device. SCSI path
    aliases to the same physical device share the same ID string, e.g. for
    multipath scsi path names. This can be used to aggregate traffic
    statistics to each physical device and to determine the proportion of
    traffic over different paths (see the aggregation sketch below). See
    also multipathd(1), where the same ID strings (WWID) are used to
    identify different paths to each physical device.
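As the hinv.map.scsi_id help above notes, paths sharing a WWID are the same
physical device, so their traffic can be summed. A hypothetical sketch with
made-up path names, WWIDs and byte counts (none of these values come from the
metrics above):

    from collections import defaultdict

    # Illustrative data only: two paths (sda, sdb) share one WWID
    wwid_of = {"sda": "3600a0980...", "sdb": "3600a0980...", "sdc": "3600b1111..."}
    bytes_read = {"sda": 4096, "sdb": 8192, "sdc": 512}

    per_device = defaultdict(int)
    for path, wwid in wwid_of.items():
        per_device[wwid] += bytes_read[path]   # aggregate multipath traffic

    for wwid, total in per_device.items():
        print(wwid, total)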
disk.dev.scheduler [per-disk I/O scheduler]
    The name of the I/O scheduler in use for each device. The scheduler is
    part of the block layer in the kernel, and attempts to optimise the I/O
    submission patterns using various techniques (typically, sorting and
    merging adjacent requests into larger ones to reduce seek activity, but
    certainly not limited to that).
kernel.all.hz [value of HZ (jiffies/second) for the currently running kernel]
hinv.ndisk [number of disks in the system]
hinv.ncpu [number of CPUs in the system]
hinv.nnode [number of NUMA nodes in the system]
kernel.all.boottime [boot time from /proc/stat]
pmcd.pmie.eval.actual [count of actual rule evaluations]
    Instance domain: pmie Instance Domain. One instance per running pmie
    process. The internal and external instance identifiers are the process
    IDs of the pmie instances. The primary pmie has an extra instance with
    the instance name "primary" and an instance ID of zero (in addition to
    its normal process ID instance).
    A cumulative count of the pmie rules which have been evaluated. This
    value is incremented once for each evaluation of each rule.
pmcd.pmie.eval.expected [expected rate of rule evaluations]
    This is the expected rate of evaluation of pmie rules. The value is
    calculated once when pmie starts, and is the number of pmie rules
    divided by the average time interval over which they are to be
    evaluated.
pmcd.pmie.eval.unknown [count of pmie predicates not evaluated]
    The predicate part of a pmie rule can be said to evaluate to either
    true, false, or not known. This metric is a cumulative count of the
    number of rules which have not been successfully evaluated. This could
    be due to not yet having sufficient values to evaluate the rule, or a
    metric fetch may have been unsuccessful in retrieving current values
    for metrics required for evaluation of the rule.
pmcd.pmie.eval.false [count of pmie predicates evaluated to false]
    The predicate part of a pmie rule can be said to evaluate to either
    true, false, or not known. This metric is a cumulative count of the
    number of rules which have evaluated to false for each pmie instance.
pmcd.pmie.eval.true [count of pmie predicates evaluated to true]
    The predicate part of a pmie rule can be said to evaluate to either
    true, false, or not known. This metric is a cumulative count of the
    number of rules which have evaluated to true for each pmie instance.
pmcd.pmie.actions [count of rules evaluating to true]
    A cumulative count of the evaluated pmie rules which have evaluated to
    true. This value is incremented once each time an action is executed.
    This value will always be less than or equal to pmcd.pmie.eval.true
    because predicates which have evaluated to true may be suppressed in
    the action part of the pmie rule, in which case this counter will not
    be incremented.
pmcd.pmie.numrules [number of rules being evaluated]
    The total number of rules being evaluated by each pmie process.
pmcd.pmie.pmcd_host [default hostname for pmie instance]
    The default host from which pmie is fetching metrics. This is either
    the hostname given to pmie on the command line or the local host. Note
    that this does not consider host names specified in the pmie
    configuration file (these are considered non-default and can be more
    than one per pmie instance). All daemon pmie instances started through
    pmie_check(1) will have their default host passed in on their command
    line.
pmcd.pmie.logfile [filename of pmie instance event log]
    The file to which each instance of pmie is writing events. No two pmie
    instances can share the same log file. If no logfile was specified when
    pmie was started, this metric has the value "". All daemon pmie
    instances started through pmie_check(1) must have an associated log
    file.
pmcd.pmie.configfile [configuration file name]
    The full path in the filesystem to the configuration file containing
    the rules being evaluated by each pmie instance. If the configuration
    file was supplied on the standard input, then this metric will have the
    value "". If multiple configuration files were given to pmie, then the
    value of this metric will be the first configuration file specified.
pmcd.agent.status [PMDA status]
    Instance domain: PMDA Instance Domain: root pmcd proc pmproxy xfs linux
    mmv kvm jbd2. One instance per PMDA managed by PMCD. The external and
    internal instance identifiers are taken from the first two fields of
    the PMDA specification in $PCP_PMCDCONF_PATH.
    This metric encodes the current status of each PMDA. The default value
    is 0 if the PMDA is active. Other values encode various degrees of PMDA
    difficulty in three bit fields (bit 0 is the low-order bit) as follows
    (see the decoding sketch below):
      bits 7..0
        1  the PMDA is connected, but not yet "ready" to accept requests
           from PMCD
        2  the PMDA has exited of its own accord
        4  some error prevented the PMDA being started
        8  PMCD stopped communication with the PMDA due to a protocol or
           timeout error
      bits 15..8   the exit() status from the PMDA
      bits 23..16  the number of the signal that terminated the PMDA
pmcd.pmlogger.pmcd_host [host from which active pmlogger is fetching metrics]
    Instance domain: "pmloggers" from PMCD PMDA. This is the list of
    currently active pmlogger instances on the same machine as this PMCD.
    The instance names are the process IDs of the pmlogger instances. The
    primary pmlogger has an extra instance with the instance name "primary"
    and an instance ID of zero (in addition to its normal process ID
    instance).
    The fully qualified domain name of the host from which a pmlogger
    instance is fetching metrics to be archived.
pmcd.build [build version for installed PCP package]
    Minor part of the PCP build version numbering. For example on Linux
    with RPM packaging, if the PCP RPM version is pcp-2.5.99-20070323 then
    pmcd.build returns the string "20070323".
pmcd.services [running PCP services on the local host]
    A space-separated string representing all running PCP services with PID
    files in $PCP_RUN_DIR (such as pmcd itself, pmproxy and a few others).
pmcd.version [PMCD version]
pmcd.numclients [Number of clients currently connected to PMCD]
    The number of connections open to client programs retrieving
    information from PMCD.
pmcd.numagents [Number of agents (PMDAs) currently connected to PMCD]
    The number of agents (PMDAs) currently connected to PMCD. This may
    differ from the number of agents configured in $PCP_PMCDCONF_PATH if
    agents have terminated and/or been timed-out by PMCD.
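A sketch decoding the pmcd.agent.status bit fields exactly as described
above (the flag descriptions are paraphrases, not official PCP strings):

    FLAGS = {1: "connected but not ready", 2: "exited of its own accord",
             4: "failed to start", 8: "dropped by PMCD (protocol/timeout error)"}

    def decode_agent_status(status):
        if status == 0:
            return "active"
        flags = [text for bit, text in FLAGS.items() if status & bit]
        exit_status = (status >> 8) & 0xff    # bits 15..8
        signal = (status >> 16) & 0xff        # bits 23..16
        return f"flags={flags} exit={exit_status} signal={signal}"

    print(decode_agent_status(0))             # active
    print(decode_agent_status(2 | (1 << 8)))  # exited, exit() status 1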
kvm.tlb_flush [Number of tlb_flush operations performed by the hypervisor.]
kvm.signal_exits [Number of guest exits due to pending signals from the host.]
kvm.request_irq [Number of guest interrupt request exits.]
kvm.remote_tlb_flush [Number of tlb_flush operations performed by the hypervisor.]
kvm.pf_guest [Number of page faults injected into guests.]
kvm.pf_fixed [Number of fixed (non-paging) page table entry (PTE) maps.]
kvm.nmi_window [Number of guest exits from (outstanding) Non-maskable Interrupt (NMI) windows.]
kvm.nmi_injections [Number of Non-maskable Interrupt (NMI) injections.]
kvm.mmu_unsync [Number of non-synchronized pages which are not yet unlinked.]
kvm.mmu_shadow_zapped [Number of shadow pages that have been zapped.]
kvm.mmu_recycled [Number of shadow pages that can be reclaimed.]
kvm.mmu_pte_write [Number of PTE write operations.]
kvm.mmu_pte_updated [Number of PTE updates.]
kvm.mmu_pde_zapped [Number of page directory entry (PDE) destruction operations.]
kvm.mmu_flooded [Detection count of excessive write operations to an MMU page.]
    This counts detected write operations, not individual write operations.
kvm.mmu_cache_miss [Number of cache misses.]
kvm.mmio_exits [Number of guest exits due to memory mapped I/O (MMIO) accesses.]
kvm.largepages [Number of large pages currently in use.]
kvm.irq_window [Number of guest exits from an outstanding interrupt window.]
kvm.irq_injections [Number of interrupts sent to guests.]
kvm.irq_exits [Number of guest exits due to external interrupts.]
kvm.io_exits [Number of guest exits from I/O port accesses.]
kvm.invlpg [Number of invlpg attempts.]
kvm.insn_emulation_fail [Number of failed insn_emulation attempts.]
kvm.insn_emulation [Number of insn_emulation attempts.]
kvm.hypercalls [Number of guest hypervisor service calls.]
kvm.host_state_reload [Number of full reloads of the host state]
    Currently tallies MSR setup and guest MSR reads.
kvm.halt_wakeup [Number of wakeups from a halt.]
kvm.halt_successful_poll [Number of times the vcpu polls successfully.]
kvm.halt_exits [Number of guest exits due to halt calls.]
    This type of exit is usually seen when a guest is idle.
kvm.halt_attempted_poll [Number of times the vcpu attempts to poll.]
kvm.fpu_reload [Number of FPU (floating point unit) reloads.]
kvm.exits [Number of guest exits from I/O port accesses.]
kvm.efer_reload [Number of Extended Feature Enable Register (EFER) reloads.]
kernel.all.pressure.irq.full.total [Total time when all tasks stall on IRQ processing]
    The CPU time in which all tasks stalled on IRQ resources. Pressure
    stall information (PSI) from /proc/pressure/irq.
kernel.all.pressure.irq.full.avg [Percentage of time all work is delayed from IRQ pressure]
    Instance domain: pressure time averages for 10 seconds, 1 minute and
    5 minutes: "10 second" "1 minute" "5 minute"
    Indicates the time in which all tasks stalled on IRQ resources. The
    ratios are tracked as recent trends over ten second, one minute, and
    five minute windows. Pressure stall information (PSI) from
    /proc/pressure/irq.
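The kernel.all.pressure.* metrics above and below all share one file format.
A sketch of a /proc/pressure reader: each line of /proc/pressure/{cpu,io,
memory,irq} looks like "some avg10=0.00 avg60=0.00 avg300=0.00 total=12345",
with the averages in percent and the total in microseconds.

    def read_psi(resource):
        psi = {}
        with open(f"/proc/pressure/{resource}") as f:
            for line in f:
                kind, *fields = line.split()       # "some" or "full"
                psi[kind] = {k: float(v) for k, v in
                             (field.split("=") for field in fields)}
        return psi

    io = read_psi("io")
    print(io["some"]["avg10"], io["some"]["total"])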
network.all.out.packets [network packets sent from physical network interfaces]
    Sum of the packets column on the "Transmit" side of /proc/net/dev for
    network interfaces deemed to be 'physical' interfaces, using the
    regular expression pattern described in the
    $PCP_SYSCONF_DIR/linux/interfaces.conf file. A parsing sketch follows
    the pressure metrics below.
network.all.out.bytes [network bytes sent from physical network interfaces]
    Sum of the bytes column on the "Transmit" side of /proc/net/dev for
    network interfaces deemed to be 'physical' interfaces, using the
    regular expression pattern described in the
    $PCP_SYSCONF_DIR/linux/interfaces.conf file.
network.all.in.packets [network recv packets from physical network interfaces]
    Sum of the packets column on the "Receive" side of /proc/net/dev for
    network interfaces deemed to be 'physical' interfaces, using the
    regular expression pattern described in the
    $PCP_SYSCONF_DIR/linux/interfaces.conf file.
network.all.in.bytes [network recv bytes from physical network interfaces]
    Sum of the bytes column on the "Receive" side of /proc/net/dev for
    network interfaces deemed to be 'physical' interfaces, using the
    regular expression pattern described in the
    $PCP_SYSCONF_DIR/linux/interfaces.conf file.
zram.mm_stat.data_size.compressed [compressed data stored in this disk]
    Instance domain: set of compressed memory devices
zram.mm_stat.data_size.original [uncompressed data stored in this disk]
zram.capacity [per-compressed-memory-device capacity]
    Total space presented by each zram device, from /proc/partitions.
kernel.all.pressure.io.full.total [Total time when all tasks stall on IO resources]
    The CPU time in which all tasks stalled on IO resources. Pressure stall
    information (PSI) from /proc/pressure/io.
kernel.all.pressure.io.full.avg [Percentage of time all work is delayed from IO pressure]
    Instance domain: "10 second" "1 minute" "5 minute"
    Indicates the time in which all tasks stalled on IO resources. The
    ratios are tracked as recent trends over ten second, one minute, and
    five minute windows. Pressure stall information (PSI) from
    /proc/pressure/io.
kernel.all.pressure.io.some.total [Total time processes stalled for IO resources]
    The CPU time in which at least some tasks stalled on IO resources.
    Pressure stall information (PSI) from /proc/pressure/io.
kernel.all.pressure.io.some.avg [Percentage of time runnable processes delayed for IO resources]
    Indicates the time in which at least some tasks stalled on IO
    resources. The ratios are tracked as recent trends over ten second, one
    minute, and five minute windows. Pressure stall information (PSI) from
    /proc/pressure/io.
kernel.all.pressure.memory.full.total [Total time when all tasks stall on memory resources]
    The CPU time for which all tasks stalled on memory resources. Pressure
    stall information (PSI) from /proc/pressure/memory.
kernel.all.pressure.memory.full.avg [Percentage of time all work is delayed from memory pressure]
    Indicates the time in which all tasks stalled on memory resources. The
    ratios are tracked as recent trends over ten second, one minute, and
    five minute windows. Pressure stall information (PSI) from
    /proc/pressure/memory.
kernel.all.pressure.memory.some.total [Total time processes stalled for memory resources]
    The CPU time for which at least some tasks stalled on memory resources.
    Pressure stall information (PSI) from /proc/pressure/memory.
kernel.all.pressure.memory.some.avg [Percentage of time runnable processes delayed for memory resources]
    Indicates the time in which at least some tasks stalled on memory
    resources. The ratios are tracked as recent trends over ten second, one
    minute, and five minute windows. Pressure stall information (PSI) from
    /proc/pressure/memory.
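The network.all.* totals above can be reproduced by summing /proc/net/dev
columns for interfaces matching a 'physical' pattern. The pattern below is
an illustrative assumption standing in for the one configured in
$PCP_SYSCONF_DIR/linux/interfaces.conf.

    import re

    PHYSICAL = re.compile(r"^(eth|en|wlan)")   # illustrative pattern only

    def network_all_totals():
        rx_bytes = rx_pkts = tx_bytes = tx_pkts = 0
        with open("/proc/net/dev") as f:
            for line in f.readlines()[2:]:      # skip the two header lines
                name, stats = line.split(":", 1)
                if not PHYSICAL.match(name.strip()):
                    continue
                cols = stats.split()
                rx_bytes += int(cols[0]); rx_pkts += int(cols[1])   # Receive
                tx_bytes += int(cols[8]); tx_pkts += int(cols[9])   # Transmit
        return rx_bytes, rx_pkts, tx_bytes, tx_pkts

    print(network_all_totals())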
kernel.all.pressure.cpu.some.total [Total time processes stalled for CPU resources]
    Indicates the time in which at least some tasks stalled on CPU
    resources. Pressure stall information (PSI) from /proc/pressure/cpu.
kernel.all.pressure.cpu.some.avg [Percentage of time runnable processes delayed for CPU resources]
    Indicates the time in which at least some tasks stalled on CPU
    resources. The ratios are tracked as recent trends over ten second, one
    minute, and five minute windows. Pressure stall information (PSI) from
    /proc/pressure/cpu.
tty.serial.overrun [Number of overrun errors for current serial line.]
    Instance domain: serial devices (aka ttys)
tty.serial.brk [Number of breaks for current serial line.]
tty.serial.parity [Number of parity errors for current serial line.]
tty.serial.frame [Number of frame errors for current serial line.]
tty.serial.rx [Number of receive interrupts for current serial line.]
tty.serial.tx [Number of transmit interrupts for current serial line.]
network.sockstat.frag6.memory [instantaneous amount of memory used for frag6]
    See the sockstat6 parsing sketch at the end of this block.
network.sockstat.frag6.inuse [instantaneous number of frag6 sockets currently in use]
network.sockstat.raw6.inuse [instantaneous number of raw6 sockets currently in use]
network.sockstat.udplite6.inuse [instantaneous number of udplite6 sockets currently in use]
network.sockstat.udp6.inuse [instantaneous number of udp6 sockets currently in use]
network.sockstat.tcp6.inuse [instantaneous number of tcp6 sockets currently in use]
tape.dev.write_ns [cumulative amount of time spent waiting for write requests to complete]
    Instance domain: scsi tape devices, per scsi tape device. The scsi tape
    instance domain includes st[0-9]+ devices, but not any of the derived
    devices such as nst0, nst0a, st0l, st0m and so forth. The derived
    devices all share the same statistics in the kernel as the st devices.
tape.dev.write_cnt [number of write requests issued to the tape drive]
tape.dev.write_byte_cnt [number of bytes written to the tape drive]
tape.dev.resid_cnt [count of read or write residual data, per tape device]
    Number of times during a read or write we found the residual amount to
    be non-zero. For reads this means a program is issuing a read larger
    than the block size on tape. For writes it means not all data made it
    to tape.
tape.dev.read_ns [cumulative amount of time spent waiting for read requests to complete]
tape.dev.read_cnt [number of read requests issued to the tape drive]
tape.dev.read_byte_cnt [number of bytes read from the tape drive]
tape.dev.other_cnt [number of I/Os issued to the tape drive other than read or write commands]
tape.dev.io_ns [cumulative amount of time spent waiting for all I/O to complete to tape device]
    The amount of time spent waiting (in nanoseconds) for all I/O to
    complete (including read and write). This includes tape movement
    commands such as seeking between file or set marks and implicit tape
    movement such as when rewind on close tape devices are used.
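The network.sockstat.*6 metrics above are read from /proc/net/sockstat6,
where each line has the form "PROTO6: key value [key value ...]", for
example "TCP6: inuse 4" or "FRAG6: inuse 0 memory 0". A sketch:

    def read_sockstat6(path="/proc/net/sockstat6"):
        stats = {}
        with open(path) as f:
            for line in f:
                proto, fields = line.split(":", 1)
                vals = fields.split()
                # alternate tokens are key, value, key, value, ...
                stats[proto] = dict(zip(vals[::2], map(int, vals[1::2])))
        return stats

    s = read_sockstat6()
    print(s["TCP6"]["inuse"], s["FRAG6"]["memory"])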
tape.dev.in_flight [number of I/Os currently outstanding to this tape device]
mem.zoneinfo.protection [protection space in each zone for each NUMA node]
    Instance domain: low memory regions for each zoneinfo memory type,
    per-NUMA-node. One instance per lowmem_reserved index (0..4) for each
    of the DMA, DMA32, Normal, Movable and Device zones on node0, named
    ZONE::node0::lowmem_reservedN, e.g. DMA::node0::lowmem_reserved0
    through Device::node0::lowmem_reserved4.
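mem.zoneinfo.protection corresponds to the "protection: (...)" line inside
each zone stanza of /proc/zoneinfo, one value per lowmem_reserved index. A
parsing sketch that reproduces the instance naming above:

    def zone_protection(path="/proc/zoneinfo"):
        prot = {}
        node = zone = None
        with open(path) as f:
            for line in f:
                if line.startswith("Node"):
                    # stanza header, e.g. "Node 0, zone   Normal"
                    left, zone = line.split(", zone")
                    node, zone = left.split()[1], zone.strip()
                elif "protection:" in line and zone is not None:
                    values = line.split("(")[1].rstrip(")\n").split(",")
                    for i, v in enumerate(values):
                        prot[f"{zone}::node{node}::lowmem_reserved{i}"] = int(v)
        return prot

    print(zone_protection())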
mem.ksm.sleep_time [Time ksmd should sleep between batches]
    A sysfs reader sketch for this mem.ksm.* group follows the zoneinfo
    NUMA counters below.
mem.ksm.run_state [Whether the KSM daemon has run and/or is running]
mem.ksm.pages_volatile [Number of pages that are candidates to be shared]
mem.ksm.pages_unshared [The number of nodes in the unstable tree]
mem.ksm.pages_to_scan [Number of pages to scan at a time]
mem.ksm.pages_sharing [The number of virtual pages that are sharing a single page]
mem.ksm.pages_shared [The number of nodes in the stable tree]
mem.ksm.merge_across_nodes [Kernel allows merging across NUMA nodes]
mem.ksm.full_scans [Number of times that KSM has scanned for duplicated content]
mem.zoneinfo.nr_free_cma [count of free Contiguous Memory Allocator pages in each zone for each NUMA node]
    Instance domain: zoneinfo memory types, per-NUMA-node: node0 DMA::node0
    DMA32::node0 Normal::node0
mem.zoneinfo.nr_anon_transparent_hugepages [number of anonymous transparent huge pages in each zone for each NUMA node]
mem.zoneinfo.workingset_nodereclaim [count of NUMA node working set page reclaims in each zone for each NUMA node]
mem.zoneinfo.workingset_activate [count of page activations to form the working set in each zone for each NUMA node]
mem.zoneinfo.workingset_refault [count of refaults of previously evicted pages in each zone for each NUMA node]
mem.zoneinfo.numa_other [unsuccessful allocations from local NUMA zone]
    Count of unsuccessful allocations from the local NUMA zone in each zone
    for each NUMA node.
mem.zoneinfo.numa_local [successful allocations from local NUMA zone]
    Count of successful allocations from the local NUMA zone in each zone
    for each NUMA node.
mem.zoneinfo.numa_interleave [count of interleaved NUMA allocations in each zone for each NUMA node]
mem.zoneinfo.numa_foreign [foreign NUMA zone allocations]
    Count of foreign NUMA zone allocations in each zone for each NUMA node.
mem.zoneinfo.numa_miss [unsuccessful allocations from preferred NUMA zone]
    Count of unsuccessful allocations from the preferred NUMA zone in each
    zone for each NUMA node.
mem.zoneinfo.numa_hit [successful allocations from preferred NUMA zone]
    Count of successful allocations from the preferred NUMA zone in each
    zone for each NUMA node.
mem.zoneinfo.nr_written [count of pages written out in each zone for each NUMA node]
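The mem.ksm.* group above maps to single-value files under
/sys/kernel/mm/ksm/ (mem.ksm.sleep_time is sleep_millisecs, mem.ksm.run_state
is the "run" flag). Assuming every file there remains integer-valued, a
reader sketch:

    from pathlib import Path

    KSM = Path("/sys/kernel/mm/ksm")

    def read_ksm():
        # one integer per file, e.g. pages_shared, pages_sharing, run
        return {f.name: int(f.read_text())
                for f in KSM.iterdir() if f.is_file()}

    ksm = read_ksm()
    print(ksm.get("pages_shared"), ksm.get("pages_sharing"), ksm.get("run"))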
mem.zoneinfo.nr_dirtied [count of pages entering dirty state in each zone for each NUMA node]
mem.zoneinfo.nr_shmem [number of shared memory pages in each zone for each NUMA node]
mem.zoneinfo.nr_isolated_file [number of isolated file memory pages in each zone for each NUMA node]
mem.zoneinfo.nr_isolated_anon [number of isolated anonymous memory pages in each zone for each NUMA node]
mem.zoneinfo.nr_writeback_temp [number of temporary writeback pages in each zone for each NUMA node]
mem.zoneinfo.nr_vmscan_immediate_reclaim [prioritise for reclaim when writeback ends in each zone for each NUMA node]
mem.zoneinfo.nr_vmscan_write [pages written from the LRU by the VM scanner]
    Count of pages written from the LRU by the VM scanner in each zone for
    each NUMA node. The VM is supposed to minimise the number of pages
    which get written from the LRU (for IO scheduling efficiency, and for
    high reclaim-success rates).
mem.zoneinfo.nr_bounce [number of bounce buffer pages in each zone for each NUMA node]
mem.zoneinfo.nr_unstable [number of pages in unstable state in each zone for each NUMA node]
mem.zoneinfo.nr_kernel_stack [number of pages of kernel stack in each zone for each NUMA node]
mem.zoneinfo.nr_page_table_pages [number of page table pages in each zone for each NUMA node]
mem.zoneinfo.nr_slab_unreclaimable [number of unreclaimable slab pages in each zone for each NUMA node]
mem.zoneinfo.nr_slab_reclaimable [number of reclaimable slab pages in each zone for each NUMA node]
mem.zoneinfo.nr_writeback [number of pages in writeback state in each zone for each NUMA node]
mem.zoneinfo.nr_dirty [number of pages in dirty state in each zone for each NUMA node]
mem.zoneinfo.nr_file_pages [number of file pagecache pages in each zone for each NUMA node]
mem.zoneinfo.nr_mapped [number of mapped pagecache pages in each zone for each NUMA node]
mem.zoneinfo.nr_anon_pages [number of anonymous mapped pagecache pages in each zone for each NUMA node]
mem.zoneinfo.nr_mlock [number of pages under mlock in each zone for each NUMA node]
mem.zoneinfo.nr_unevictable [number of unevictable pages in each zone for each NUMA node]
mem.zoneinfo.nr_active_file [number of active file memory pages in each zone for each NUMA node]
mem.zoneinfo.nr_inactive_file [number of inactive file memory pages in each zone for each NUMA node]
mem.zoneinfo.nr_active_anon [number of active anonymous memory pages in each zone for each NUMA node]
mem.zoneinfo.nr_inactive_anon [number of inactive anonymous memory pages in each zone for each NUMA node]
mem.zoneinfo.nr_alloc_batch [number of pages allocated to other zones due to insufficient memory]
    Number of pages allocated to other zones due to insufficient memory,
    for each zone for each NUMA node.
mem.zoneinfo.nr_free_pages [number of free pages in each zone for each NUMA node]
mem.zoneinfo.managed [managed space in each zone for each NUMA node]
mem.zoneinfo.present [present space in each zone for each NUMA node]
mem.zoneinfo.spanned [spanned space in each zone for each NUMA node]
mem.zoneinfo.scanned [scanned space in each zone for each NUMA node]
mem.zoneinfo.high [high space in each zone for each NUMA node]
mem.zoneinfo.low [low space in each zone for each NUMA node]
mem.zoneinfo.min [min space in each zone for each NUMA node]
mem.zoneinfo.free [free space in each zone for each NUMA node]
!&{"numa_node":0,"order":0,"zone":"DMA"}   &{"numa_node":0,"order":1,"zone":"DMA"}   &{"numa_node":0,"order":2,"zone":"DMA"}   &{"numa_node":0,"order":3,"zone":"DMA"}   &{"numa_node":0,"order":4,"zone":"DMA"}   &{"numa_node":0,"order":5,"zone":"DMA"}   &{"numa_node":0,"order":6,"zone":"DMA"}   &{"numa_node":0,"order":7,"zone":"DMA"}   &{"numa_node":0,"order":8,"zone":"DMA"}    &{"numa_node":0,"order":9,"zone":"DMA"}    '{"numa_node":0,"order":10,"zone":"DMA"}   ! ({"numa_node":0,"order":0,"zone":"DMA32"}    ({"numa_node":0,"order":1,"zone":"DMA32"}    ({"numa_node":0,"order":2,"zone":"DMA32"}   ({"numa_node":0,"order":3,"zone":"DMA32"}   ({"numa_node":0,"order":4,"zone":"DMA32"}   ({"numa_node":0,"order":5,"zone":"DMA32"}   ({"numa_node":0,"order":6,"zone":"DMA32"}   ({"numa_node":0,"order":7,"zone":"DMA32"}   ({"numa_node":0,"order":8,"zone":"DMA32"}   ({"numa_node":0,"order":9,"zone":"DMA32"}   ){"numa_node":0,"order":10,"zone":"DMA32"}   !){"numa_node":0,"order":0,"zone":"Normal"}   ){"numa_node":0,"order":1,"zone":"Normal"}   ){"numa_node":0,"order":2,"zone":"Normal"}   ){"numa_node":0,"order":3,"zone":"Normal"}   ){"numa_node":0,"order":4,"zone":"Normal"}   ){"numa_node":0,"order":5,"zone":"Normal"}   ){"numa_node":0,"order":6,"zone":"Normal"}   ){"numa_node":0,"order":7,"zone":"Normal"}   ){"numa_node":0,"order":8,"zone":"Normal"}   ){"numa_node":0,"order":9,"zone":"Normal"}    *{"numa_node":0,"order":10,"zone":"Normal"}   ! A page fragmentation size from /proc/buddyinfoAG buddyinfo memory fragmentation sets, per-NUMA-nodeG  hM+Әr!  &9L_r&;Pez(>TjDMA::order0::node0DMA::order1::node0DMA::order2::node0DMA::order3::node0DMA::order4::node0DMA::order5::node0DMA::order6::node0DMA::order7::node0DMA::order8::node0DMA::order9::node0DMA::order10::node0DMA32::order0::node0DMA32::order1::node0DMA32::order2::node0DMA32::order3::node0DMA32::order4::node0DMA32::order5::node0DMA32::order6::node0DMA32::order7::node0DMA32::order8::node0DMA32::order9::node0DMA32::order10::node0Normal::order0::node0Normal::order1::node0Normal::order2::node0Normal::order3::node0Normal::order4::node0Normal::order5::node0Normal::order6::node0Normal::order7::node0Normal::order8::node0Normal::order9::node0Normal::order10::node0; mem.buddyinfo.pages;? fragmented page count from /proc/buddyinfo? 5 ipc.sem.nsems5Fnumber of semaphore (from semctl(..,SEM_STAT,..))F/ IPC sem_stat semaphore IDs/ 5 ipc.sem.perms5Eaccess permissions (from msgctl(..,SEM_STAT,..))E5 ipc.sem.owner5Dusername of owner (from msgctl(..,SEM_STAT,..))D3 ipc.sem.key3Ikey of these semaphore (from msgctl(..,SEM_STAT,..))I8ipc.msg.messages8Vnumber of messages currently queued (from msgctl(..,MSG_STAT,..))V3 IPC msg_stat message queue IDs3 5 ipc.msg.msgsz5Eused size in bytes (from msgctl(..,MSG_STAT,..))E5 ipc.msg.perms5Eaccess permissions (from msgctl(..,MSG_STAT,..))E5 ipc.msg.owner5Dusername of owner (from msgctl(..,MSG_STAT,..))D3 ipc.msg.key3Nname of these messages slot (from msgctl(..,MSG_STAT,..))N6ipc.shm.status6Pshare memory segment status (from shmctl(.., SHM_STAT, ..))P3 IPC shm_stat shared memory IDs3The string value may contain the space-separated values "dest" (a shared memory segment marked for destruction on last detach) and "locked" or the empty string. 6ipc.shm.nattch6Lno. 
ipc.shm.segsz [size of segment (bytes) (from shmctl(..,SHM_STAT,..))]
ipc.shm.perms [operation perms (from shmctl(..,SHM_STAT,..))]
ipc.shm.owner [shared memory segment owner (from shmctl(..,SHM_STAT,..))]
ipc.shm.key [Key supplied to shmget (from shmctl(..,SHM_STAT,..))]
ipc.msg.tot_bytes [number of bytes in all messages in all queues (from msgctl(..,MSG_INFO,..))]
ipc.msg.tot_msg [total number of messages in all queues (from msgctl(..,MSG_INFO,..))]
ipc.msg.used_queues [number of message queues that currently exist (from msgctl(..,MSG_INFO,..))]
ipc.sem.tot_sem [number of semaphores in all sets on the system (from semctl(..,SEM_INFO,..))]
ipc.sem.used_sem [number of semaphore sets currently on the system (from semctl(..,SEM_INFO,..))]
disk.md.status [per-multi-device "mdadm --test --detail <device>" return code]
disk.md.total_rawactive [per-multi-device raw count of I/O response time]
    For each completed I/O on each multi-device device the response time
    (queue time plus service time) in milliseconds is added to the
    associated instance of this metric. When converted to a normalized
    rate, the value represents the time average of the number of
    outstanding I/Os for a multi-device device. When divided by the number
    of completed I/Os for a multi-device device (disk.md.total), the value
    represents the stochastic average of the I/O response (or wait) time
    for that multi-device device. It is suitable mainly for use in
    calculations with other metrics, e.g. mirroring the results from
    existing performance tools (see the worked example below):
      iostat.md.await = delta(disk.md.total_rawactive) / delta(disk.md.total)
disk.md.write_rawactive [per-multi-device raw count of write response time]
    For each completed write on each multi-device device the response time
    (queue time plus service time) in milliseconds is added to the
    associated instance of this metric. When converted to a normalized
    rate, the value represents the time average of the number of
    outstanding writes for a multi-device device. When divided by the
    number of completed writes for a multi-device device (disk.md.write),
    the value represents the stochastic average of the write response (or
    wait) time for that device. It is suitable mainly for use in
    calculations with other metrics, e.g. mirroring the results from
    existing performance tools:
      iostat.md.w_await = delta(disk.md.write_rawactive) / delta(disk.md.write)
disk.md.read_rawactive [per-multi-device raw count of read response time]
    For each completed read on each multi-device device the response time
    (queue time plus service time) in milliseconds is added to the
    associated instance of this metric. When converted to a normalized
    rate, the value represents the time average of the number of
    outstanding reads for a multi-device device. When divided by the number
    of completed reads for a multi-device device (disk.md.read), the value
    represents the stochastic average of the read response (or wait) time
    for that device. It is suitable mainly for use in calculations with
    other metrics, e.g. mirroring the results from existing performance
    tools:
      iostat.md.r_await = delta(disk.md.read_rawactive) / delta(disk.md.read)
disk.md.aveq [per-multi-device device time averaged count of request queue length]
disk.md.avactive [per-multi-device device count of active time]
    Counts the number of milliseconds for which at least one I/O is in
    progress for each multi-device device. When converted to a rate, this
    metric represents the average utilization of the device during the
    sampling interval. A value of 0.5 (or 50%) means the device was active
    (i.e. busy) half the time.
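A worked example of the iostat-style calculations given for the
disk.md.*_rawactive metrics above, using two hypothetical samples taken one
interval apart (the counter values are made up):

    t0 = {"write_rawactive": 120_000, "write": 30_000}   # sample at time 0
    t1 = {"write_rawactive": 126_400, "write": 31_600}   # one interval later

    delta_rawactive = t1["write_rawactive"] - t0["write_rawactive"]  # 6400 ms
    delta_writes    = t1["write"] - t0["write"]                      # 1600 writes

    # iostat.md.w_await = delta(disk.md.write_rawactive) / delta(disk.md.write)
    w_await = delta_rawactive / delta_writes
    print(f"{w_await:.2f} ms average write response time")           # 4.00 ms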
disk.md.write_merge [per-multi-device device count of merged write requests]
disk.md.read_merge [per-multi-device device count of merged read requests]
disk.md.total_bytes [per-multi-device device count of total bytes read and written]
disk.md.write_bytes [per-multi-device device count of bytes written]
disk.md.read_bytes [per-multi-device device count of bytes read]
disk.md.blkwrite [per-multi-device device block write operations]
disk.md.blkread [per-multi-device device block read operations]
disk.md.total [per-multi-device device total (read+write) operations]
disk.md.write [per-multi-device device write operations]
disk.md.read [per-multi-device device read operations]
network.udp6.ignoredmulti [count of udp6 ignoredmulti]
network.udp6.incsumerrors [count of udp6 incsumerrors]
network.udp6.sndbuferrors [count of udp6 sndbuferrors]
network.udp6.rcvbuferrors [count of udp6 rcvbuferrors]
network.udp6.outdatagrams [count of udp6 outdatagrams]
network.udp6.inerrors [count of udp6 inerrors]
network.udp6.noports [count of udp6 noports]
network.udp6.indatagrams [count of udp6 indatagrams]
network.icmp6.outmldv2reports [count of icmp6 outmldv2reports]
network.icmp6.outredirects [count of icmp6 outredirects]
network.icmp6.outneighboradvertisements [count of icmp6 outneighboradvertisements]
network.icmp6.outneighborsolicits [count of icmp6 outneighborsolicits]
network.icmp6.outrouteradvertisements [count of icmp6 outrouteradvertisements]
network.icmp6.outroutersolicits [count of icmp6 outroutersolicits]
network.icmp6.outgroupmembreductions [count of icmp6 outgroupmembreductions]
network.icmp6.outechos [count of icmp6 outechos]
network.icmp6.outparmproblems [count of icmp6 outparmproblems]
network.icmp6.outtimeexcds [count of icmp6 outtimeexcds]
network.icmp6.outpkttoobigs [count of icmp6 outpkttoobigs]
network.icmp6.outdestunreachs [count of icmp6 outdestunreachs]
network.icmp6.inmldv2reports [count of icmp6 inmldv2reports]
network.icmp6.inredirects [count of icmp6 inredirects]
network.icmp6.inneighboradvertisements [count of icmp6 inneighboradvertisements]
network.icmp6.inneighborsolicits [count of icmp6 inneighborsolicits]
network.icmp6.inrouteradvertisements [count of icmp6 inrouteradvertisements]
network.icmp6.inroutersolicits [count of icmp6 inroutersolicits]
network.icmp6.ingroupmembreductions [count of icmp6 ingroupmembreductions]
network.icmp6.ingroupmembresponses [count of icmp6 ingroupmembresponses]
network.icmp6.ingroupmembqueries [count of icmp6 ingroupmembqueries]
network.icmp6.inechoreplies [count of icmp6 inechoreplies]
network.icmp6.inechos [count of icmp6 inechos]
network.icmp6.inparmproblems [count of icmp6 inparmproblems]
network.icmp6.intimeexcds [count of icmp6 intimeexcds]
network.icmp6.inpkttoobigs [count of icmp6 inpkttoobigs]
network.icmp6.indestunreachs [count of icmp6 indestunreachs]
network.icmp6.incsumerrors [count of icmp6 incsumerrors]
network.icmp6.outerrors [count of icmp6 outerrors]
network.icmp6.outmsgs [count of icmp6 outmsgs]
network.icmp6.inerrors [count of icmp6 inerrors]
network.icmp6.inmsgs [count of icmp6 inmsgs]
network.ip6.incepkts [count of ip6 Congestion Experimented packets in]
network.ip6.inect0pkts [count of ip6 packets received with ECT(0)]
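The udp6, icmp6 and ip6 counters above and below are read from
/proc/net/snmp6, which holds one "Name value" pair per line (e.g.
"Ip6InReceives 123456", "Icmp6InMsgs 42"). A sketch:

    def read_snmp6(path="/proc/net/snmp6"):
        with open(path) as f:
            return {name: int(value) for name, value in
                    (line.split() for line in f)}

    snmp6 = read_snmp6()
    print(snmp6["Ip6InReceives"], snmp6.get("Udp6InDatagrams"))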
network.ip6.inect1pkts [count of ip6 packets received with ECT(1)]
network.ip6.innoectpkts [count of ip6 packets received with NOECT]
network.ip6.outbcastoctets [count of ip6 broadcast octets out]
network.ip6.inbcastoctets [count of ip6 broadcast octets in]
network.ip6.outmcastoctets [count of ip6 multicast octets out]
network.ip6.inmcastoctets [count of ip6 multicast octets in]
network.ip6.outoctets [count of ip6 octets out]
network.ip6.inoctets [count of ip6 octets in]
network.ip6.outmcastpkts [count of ip6 multicast packets out]
network.ip6.inmcastpkts [count of ip6 multicast packets in]
network.ip6.fragcreates [count of ip6 fragmentation creations]
network.ip6.fragfails [count of ip6 fragmentation failures]
network.ip6.fragoks [count of ip6 fragmentation oks]
network.ip6.reasmfails [count of ip6 reassembly failures]
    The number of failures detected by the IPv6 re-assembly algorithm (for
    whatever reason: timed out, errors, etc). Note that this is not
    necessarily a count of discarded IPv6 fragments since some algorithms
    can lose track of the number of fragments by combining them as they are
    received.
network.ip6.reasmoks [count of ip6 reassembly oks]
network.ip6.reasmreqds [count of ip6 reassembly requireds]
network.ip6.reasmtimeout [count of ip6 reasmtimeout]
network.ip6.outnoroutes [count of ip6 outnoroutes]
network.ip6.outdiscards [count of ip6 outdiscards]
network.ip6.outrequests [count of ip6 outrequests]
network.ip6.outforwdatagrams [count of ip6 outforwdatagrams]
network.ip6.indelivers [count of ip6 indelivers]
network.ip6.indiscards [count of ip6 indiscards]
network.ip6.intruncatedpkts [count of ip6 intruncatedpkts]
network.ip6.inunknownprotos [count of ip6 inunknownprotos]
network.ip6.inaddrerrors [count of ip6 inaddrerrors]
network.ip6.innoroutes [count of ip6 innoroutes]
network.ip6.intoobigerrors [count of ip6 intoobigerrors]
network.ip6.inhdrerrors [count of ip6 inhdrerrors]
network.ip6.inreceives [count of ip6 inreceives]
network.softnet.percpu.flow_limit_count [softnet_data flow limit counter]
    The network stack has to drop packets when a CPU's receive backlog
    reaches netdev_max_backlog. The flow_limit_count counter is the number
    of times very active flows have dropped their traffic earlier to
    maintain capacity for other, less active flows.
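The network.softnet.* metrics above and below come from
/proc/net/softnet_stat: one row of hexadecimal counters per CPU. The column
positions used here match older kernels (newer kernels append further
fields), so treat the layout as an assumption to verify against your kernel:

    COLS = {0: "processed", 1: "dropped", 2: "time_squeeze",
            8: "cpu_collision", 9: "received_rps", 10: "flow_limit_count"}

    def softnet_stat(path="/proc/net/softnet_stat"):
        percpu = []
        with open(path) as f:
            for line in f:                       # one row per CPU
                fields = line.split()
                percpu.append({name: int(fields[i], 16)
                               for i, name in COLS.items() if i < len(fields)})
        return percpu

    for cpu, row in enumerate(softnet_stat()):
        print(cpu, row["processed"], row["dropped"])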
network.softnet.percpu.received_rps [number of times rps_trigger_softirq has been called]
network.softnet.percpu.cpu_collision [number of times that two cpus collided trying to get the device queue lock]
network.softnet.percpu.time_squeeze [number of times ksoftirq ran out of netdev_budget or time slice with work remaining]
network.softnet.percpu.dropped [number of packets that were dropped because netdev_max_backlog was exceeded]
network.softnet.percpu.processed [number of packets (not including netpoll) received by the interrupt handler]
network.softnet.flow_limit_count [softnet_data flow limit counter]
network.softnet.received_rps [number of times rps_trigger_softirq has been called]
network.softnet.cpu_collision [number of times that two cpus collided trying to get the device queue lock]
network.softnet.time_squeeze [number of times ksoftirq ran out of netdev_budget or time slice with work remaining]
network.softnet.dropped [number of packets that were dropped because netdev_max_backlog was exceeded]
network.softnet.processed [number of packets (not including netpoll) received by the interrupt handler]
ipc.shm.swap_successes [number of swap successes (from shmctl(..,SHM_INFO,..))]
ipc.shm.swap_attempts [number of swap attempts (from shmctl(..,SHM_INFO,..))]
ipc.shm.used_ids [number of currently existing segments (from shmctl(..,SHM_INFO,..))]
ipc.shm.swp [number of swapped shared memory pages (from shmctl(..,SHM_INFO,..))]
ipc.shm.rss [number of resident shared memory pages (from shmctl(..,SHM_INFO,..))]
ipc.shm.tot [total number of shared memory pages (from shmctl(..,SHM_INFO,..))]
disk.dm.total_rawactive [per-device-mapper raw count of I/O response time]
    For each completed I/O on each device-mapper device the response time
    (queue time plus service time) in milliseconds is added to the
    associated instance of this metric. When converted to a normalized
    rate, the value represents the time average of the number of
    outstanding I/Os for a device-mapper device. When divided by the number
    of completed I/Os for a device-mapper device (disk.dm.total), the value
    represents the stochastic average of the I/O response (or wait) time
    for that device-mapper device. It is suitable mainly for use in
    calculations with other metrics, e.g. mirroring the results from
    existing performance tools:
      iostat.dm.await = delta(disk.dm.total_rawactive) / delta(disk.dm.total)
disk.dm.write_rawactive [per-device-mapper raw count of write response time]
    For each completed write on each device-mapper device the response time
    (queue time plus service time) in milliseconds is added to the
    associated instance of this metric. When converted to a normalized
    rate, the value represents the time average of the number of
    outstanding writes for a device-mapper device. When divided by the
    number of completed writes for a device-mapper device (disk.dm.write),
    the value represents the stochastic average of the write response (or
    wait) time for that device. It is suitable mainly for use in
    calculations with other metrics, e.g. mirroring the results from
    existing performance tools:
      iostat.dm.w_await = delta(disk.dm.write_rawactive) / delta(disk.dm.write)
disk.dm.read_rawactive [per-device-mapper raw count of read response time]
    For each completed read on each device-mapper device the response time
    (queue time plus service time) in milliseconds is added to the
    associated instance of this metric. When converted to a normalized
    rate, the value represents the time average of the number of
    outstanding reads for a device-mapper device. When divided by the
    number of completed reads for a device-mapper device (disk.dm.read),
    the value represents the stochastic average of the read response (or
    wait) time for that device. It is suitable mainly for use in
    calculations with other metrics, e.g. mirroring the results from
    existing performance tools:
      iostat.dm.r_await = delta(disk.dm.read_rawactive) / delta(disk.dm.read)
disk.dm.aveq [per-device-mapper device time averaged count of request queue length]
disk.dm.avactive [per-device-mapper device count of active time]
    Counts the number of milliseconds for which at least one I/O is in
    progress for each device-mapper device. When converted to a rate, this
    metric represents the average utilization of the device during the
    sampling interval. A value of 0.5 (or 50%) means the device was active
    (i.e. busy) half the time.
disk.dm.write_merge [per-device-mapper device count of merged write requests]
disk.dm.read_merge [per-device-mapper device count of merged read requests]
disk.dm.total_bytes [per-device-mapper device count of total bytes read and written]
disk.dm.write_bytes [per-device-mapper device count of bytes written]
disk.dm.read_bytes [per-device-mapper device count of bytes read]
disk.dm.blkwrite [per-device-mapper device block write operations]
disk.dm.blkread [per-device-mapper device block read operations]
disk.dm.total [per-device-mapper device total (read+write) operations]
disk.dm.write [per-device-mapper device write operations]
disk.dm.read [per-device-mapper device read operations]
network.tcp.tcpplbrehash [the TCPPLBRehash field of the Tcp line from /proc/net/netstat]
network.tcp.tcploss [the TCPLOSS field of the Tcp line from /proc/net/netstat]
network.tcp.tcpmigratereqfailure [the TCPMigrateReqFailure field of the Tcp line from /proc/net/netstat]
network.tcp.tcpmigratereqsuccess [the TCPMigrateReqSuccess field of the Tcp line from /proc/net/netstat]
network.tcp.tcpdsackignoreddubious [the TCPDSACKIgnoredDubious field of the Tcp line from /proc/net/netstat]
network.tcp.tcpdsackrecvsegs [the TCPDSACKRecvSegs field of the Tcp line from /proc/net/netstat]
network.tcp.tcpduplicatedatarehash [the TcpDuplicateDataRehash field of the Tcp line from /proc/net/netstat]
network.tcp.tcptimeoutrehash [the TcpTimeoutRehash field of the Tcp line from /proc/net/netstat]
network.tcp.tcpfastopenpassivealtkey [the TCPFastOpenPassiveAltKey field of the Tcp line from /proc/net/netstat]
network.tcp.tcpwqueuetoobig [the TCPWqueueTooBig field of the Tcp line from /proc/net/netstat]
network.tcp.tcprcvqdrop [the TCPRcvQDrop field of the Tcp line from /proc/net/netstat]
network.tcp.tcpzerowindowdrop [the TCPZeroWindowDrop field of the Tcp line from /proc/net/netstat]
network.tcp.tcpackcompressed [the TCPAckCompressed field of the Tcp line from /proc/net/netstat]
network.tcp.tcpdeliveredce [the TCPDeliveredCE field of the Tcp line from /proc/net/netstat]
network.tcp.tcpdelivered [the TCPDelivered field of the Tcp line from /proc/net/netstat]
network.tcp.tcpmtupsuccess [the TCPMTUPSuccess field of the Tcp line from /proc/net/netstat]
network.tcp.tcpmtupfail [the TCPMTUPFail field of the Tcp line from /proc/net/netstat]
network.tcp.tcpkeepalive [the TCPKeepAlive field of the Tcp line from /proc/net/netstat]
network.tcp.tcpwinprobe [the TCPWinProbe field of the Tcp line from /proc/net/netstat]
network.tcp.tcpackskippedchallenge [the TCPACKSkippedChallenge field of the Tcp line from /proc/net/netstat]
network.tcp.tcpackskippedtimewait [the TCPACKSkippedTimeWait field of the Tcp line from /proc/net/netstat]
network.tcp.tcpackskippedfinwait2 [the TCPACKSkippedFinWait2 field of the Tcp line from /proc/net/netstat]
network.tcp.tcpackskippedseq [the TCPACKSkippedSeq field of the Tcp line from /proc/net/netstat]
network.tcp.tcpackskippedpaws [the TCPACKSkippedPAWS field of the Tcp line from /proc/net/netstat]
network.tcp.tcpackskippedsynrecv [the TCPACKSkippedSynRecv field of the Tcp line from /proc/net/netstat]
network.tcp.tcphystartdelaycwnd [the TCPHystartDelayCwnd field of the Tcp line from /proc/net/netstat]
network.tcp.tcphystartdelaydetect [the TCPHystartDelayDetect field of the Tcp line from /proc/net/netstat]
network.tcp.tcphystarttraincwnd [the TCPHystartTrainCwnd field of the Tcp line from /proc/net/netstat]
network.tcp.tcphystarttraindetect [the TCPHystartTrainDetect field of the Tcp line from /proc/net/netstat]
network.tcp.tcpfastopenblackhole [Number of times the TFO blackhole has been enabled]
    The TCPFastOpenBlackhole field of the Tcp line from /proc/net/netstat.
network.tcp.tcpfastopenactivefail [Fast Open attempts (SYN/data) failed]
    Failure because the remote does not accept it or the attempts timed
    out. The TCPFastOpenActiveFail field of the Tcp line from
    /proc/net/netstat.
network.tcp.pfmemallocdrop [Dropped skb allocated from pfmemalloc]
    Just counts the cases for packets which did not have the SOCK_MEMALLOC
    flag set. The PFMemallocDrop field of the Tcp line from
    /proc/net/netstat.
network.tcp.tcpmd5failure [Counter for drops caused by md5 mismatches]
    The TCPMD5Failure field of the Tcp line from /proc/net/netstat.
network.tcp.tcpmemorypressureschrono [Cumulative counter tracking duration of memory pressure events]
    The TCPMemoryPressuresChrono field of the Tcp line from
    /proc/net/netstat.
network.tcp.tcpbacklogcoalesce [Number of coalesced packets that were in the backlog queue]
    The TCPBacklogCoalesce field of the Tcp line from /proc/net/netstat.
network.tcp.origdatasent [Number of outgoing packets with original data]
    (Excluding retransmission but including data-in-SYN.) This counter is
    different from TcpOutSegs because TcpOutSegs also tracks pure ACKs.
    TCPOrigDataSent is more useful to track the TCP retransmission rate.
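The network.tcp.* fields in this section are read from /proc/net/netstat,
where lines come in header/value pairs per protocol group, e.g.
"TcpExt: SyncookiesSent SyncookiesRecv ..." followed by "TcpExt: 0 0 ...".
A parsing sketch:

    def read_netstat(path="/proc/net/netstat"):
        stats = {}
        with open(path) as f:
            lines = f.readlines()
        # even lines are field-name headers, odd lines the matching values
        for header, values in zip(lines[::2], lines[1::2]):
            proto = header.split(":")[0]
            names = header.split()[1:]
            nums = map(int, values.split()[1:])
            stats[proto] = dict(zip(names, nums))
        return stats

    tcpext = read_netstat()["TcpExt"]
    print(tcpext.get("TCPOrigDataSent"), tcpext.get("TCPFastOpenActive"))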
network.tcp.synretrans [Number of SYN-SYN/ACK retransmits]
    Number of SYN-SYN/ACK retransmits, to break down retransmissions into
    SYN, fast and timeout retransmits.
network.tcp.wantzerowindowadv [Number of times zero window announced]
network.tcp.tozerowindowadv [Number of times window went from non-zero to zero]
network.tcp.fromzerowindowadv [Number of times window went from zero to non-zero]
network.tcp.autocorking [Number of times stack detected skb was underused and its flush was deferred]
network.tcp.busypollrxpackets [Number of low latency application-fetched packets]
network.tcp.spuriousrtxhostqueues [Number of times that the fast clone is not yet freed in tcp_transmit_skb()]
network.tcp.fastopencookiereqd [Number of fast open cookies requested]
network.tcp.fastopenlistenoverflow [Number of times the fastopen listen queue overflowed]
network.tcp.fastopenpassivefail [Number of passive fast open attempts failed]
network.tcp.fastopenpassive [Number of successful passive fast opens]
network.tcp.fastopenactivefail [Number of fast open attempts failed due to remote not accepting it or time outs]
network.tcp.fastopenactive [Number of successful active fast opens]
network.tcp.synchallenge [Number of challenge ACKs sent in response to SYN packets]
network.tcp.challengeack [Number of challenge ACKs sent (RFC 5961 3.2)]
network.tcp.ofomerge [Number of packets in OFO that were merged with other packets]
network.tcp.ofodrop [Number of packets meant to be queued in OFO but dropped due to limits hit]
    Number of packets meant to be queued in OFO but dropped because the
    socket rcvbuf limit was reached.
network.tcp.ofoqueue [Number of packets queued in OFO queue]
network.tcp.rcvcoalesce [Number of times tried to coalesce the receive queue]
network.tcp.retransfail [Number of failed tcp_retransmit_skb() calls]
network.tcp.reqqfulldrop [Number of times a SYN request was dropped due to disabled syncookies]
network.tcp.reqqfulldocookies [Number of times a SYNCOOKIE was replied to client]
network.tcp.timewaitoverflow [Number of occurrences of time wait bucket overflow]
network.tcp.iprpfilter [Number of packets dropped in input path because of rp_filter settings]
network.tcp.deferacceptdrop [Number of dropped ACK frames when socket is in SYN-RECV state]
    Due to SYNACK retrans count lower than defer_accept value.
network.tcp.minttldrop [Number of frames dropped when TTL is under the minimum]
network.tcp.backlogdrop [Number of frames dropped because of full backlog queue]
network.tcp.sackshiftfallback [Number of SACKs fallbacks]
network.tcp.sackmerged [Number of SACKs merged]
network.tcp.sackshifted [Number of SACKs shifted]
network.tcp.md5unexpected [Number of times MD5 hash unexpected but found]
network.tcp.md5notfound [Number of times MD5 hash expected but not found]
network.tcp.spuriousrtos [Number of FRTO's successfully detected spurious RTOs]
network.tcp.dsackignorednoundo [Number of ignored duplicate SACKs with undo_marker not set]
network.tcp.dsackignoredold [Number of ignored old duplicate SACKs]
network.tcp.sackdiscard [Number of SACKs discarded]
network.tcp.memorypressures [Number of times TCP ran low on memory]
network.tcp.abortfailed [Number of times unable to send RST due to no memory]
network.tcp.abortonlinger [Number of connections aborted after user close in linger timeout]
network.tcp.abortontimeout [Number of connections aborted due to timeout]
network.tcp.abortonmemory [Number of connections aborted due to memory pressure]
network.tcp.abortonclose [Number of connections reset due to early user close]
network.tcp.abortondata [Number of connections reset due to unexpected data]
network.tcp.dsackoforecv [Number of DSACKs for out of order packets received]
network.tcp.dsackrecv [Number of DSACKs received]
network.tcp.dsackofosent [Number of DSACKs sent for out of order packets]
network.tcp.dsackoldsent [Number of DSACKs sent for old packets]
network.tcp.rcvcollapsed [Number of packets collapsed in receive queue due to low socket buffer]
network.tcp.schedulerfail [Number of times receiver scheduled too late for direct processing]
network.tcp.sackrecoveryfail [Number of SACK retransmits failed]
network.tcp.renorecoveryfail [Number of reno fast retransmits failed]
network.tcp.lossproberecovery [Number of TCP loss probe recoveries]
network.tcp.lossprobes [Number of sent TCP loss probes]
network.tcp.timeouts [Number of other TCP timeouts]
network.tcp.slowstartretrans [Number of retransmits in slow start]
network.tcp.forwardretrans [Number of forward retransmits]
network.tcp.fastretrans [Number of fast retransmits]
network.tcp.lossfailures
network.tcp.timewaited [Number of TCP sockets finished time wait in fast timer]
network.tcp.arpfilter [Number of arp packets filtered]
network.tcp.lockdroppedicmps [Number of dropped ICMPs because socket was locked]
network.tcp.outofwindowicmps [Number of dropped out of window ICMPs]
network.tcp.ofopruned [Number of packets dropped from out-of-order queue because of socket buffer overrun]
network.tcp.rcvpruned [Number of packets pruned from receive queue]
network.tcp.prunecalled [Number of packets pruned from receive queue because of socket buffer overrun]
network.tcp.embryonicrsts [Number of resets received for embryonic SYN_RECV sockets]
network.tcp.syncookiesfailed [Number of failed SYN cookies]
network.tcp.syncookiesrecv [Number of received SYN cookies]
network.tcp.syncookiessent [Number of sent SYN cookies]
network.ip.cepkts [Number of packets received with Congestion Experienced]
network.ip.ect0pkts [Number of packets received with ECT(0)]
network.ip.ect1pkts [Number of packets received with ECT(1)]
network.ip.noectpkts [Number of packets received with NOECT]
network.ip.csumerrors [Number of IP datagrams with checksum errors]
network.ip.outbcastoctets [Number of sent IP broadcast octets]
network.ip.inbcastoctets [Number of received IP broadcast octets]
network.ip.outmcastoctets [Number of sent IP multicast octets]
network.ip.inmcastoctets [Number of received IP multicast octets]
network.ip.outoctets [Number of sent octets]
network.ip.inoctets [Number of received octets]
network.ip.outbcastpkts [Number of sent IP broadcast datagrams]
network.ip.inbcastpkts [Number of received IP broadcast datagrams]
network.ip.outmcastpkts [Number of sent IP multicast datagrams]
network.ip.inmcastpkts [Number of received IP multicast datagrams]
network.ip.intruncatedpkts [Number of IP datagrams discarded due to frame not carrying enough data]
network.ip.innoroutes [Number of IP datagrams discarded due to no routes in forwarding path]
mem.numa.util.hugepagesSurpBytes [per-node amount of surplus hugepages memory]
mem.numa.util.hugepagesFreeBytes [per-node amount of free hugepages memory]
mem.numa.util.hugepagesTotalBytes [per-node total amount of hugepages memory]
mem.numa.max_bandwidth [maximum memory bandwidth supported on each numa node]
    Maximum memory bandwidth supported on each numa node. It makes use of a bandwidth.conf file which holds the bandwidth information for each node, one node_num:bandwidth entry per node. The node_num must match a node in the sysfs/devices/system/node directory, and the bandwidth is expressed in MBps. This config file should be filled in manually after running a bandwidth saturation benchmark tool.
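Since the bandwidth.conf format described above is just node_num:bandwidth pairs, a parser is easy to sketch. This is illustrative only; the file path and the comment handling are assumptions, not the PMDA's actual behaviour:

    # Minimal sketch of a parser for the node_num:bandwidth format
    # described above. The path is illustrative, not the PMDA's
    # actual configuration location.
    def parse_bandwidth_conf(path="bandwidth.conf"):
        bandwidth = {}  # node number -> bandwidth in MBps
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue  # skip blanks and (assumed) comments
                node, mbps = line.split(":", 1)
                bandwidth[int(node)] = float(mbps)
        return bandwidth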
mem.numa.alloc.other_node [count of times a process ran on this node and got memory from another node]
mem.numa.alloc.local_node [count of times a process ran on this node and got memory on this node]
mem.numa.alloc.interleave_hit [count of times interleaving wanted to allocate on this node and succeeded]
mem.numa.alloc.foreign [count of times a task on another node alloced on that node, but got this node]
mem.numa.alloc.miss [per-node count of times a task wanted alloc on local node but got another node]
mem.numa.alloc.hit [per-node count of times a task wanted alloc on local node and succeeded]
mem.numa.util.hugepagesSurp [per-node count of surplus hugepages]
mem.numa.util.hugepagesFree [per-node count of free hugepages]
mem.numa.util.hugepagesTotal [per-node total count of hugepages]
mem.numa.util.slabUnreclaimable [per-node memory used for slab objects that is unreclaimable]
mem.numa.util.slabReclaimable [per-node memory used for slab objects that can be reclaimed]
mem.numa.util.slab [per-node memory used for slab objects]
mem.numa.util.writebackTmp [per-node temporary memory used for writeback]
mem.numa.util.bounce [per-node memory used for bounce buffers]
mem.numa.util.NFS_Unstable [per-node memory holding NFS data that needs writeback]
mem.numa.util.pageTables [per-node memory used for pagetables]
mem.numa.util.kernelStack [per-node memory used as kernel stacks]
mem.numa.util.shmem [per-node amount of shared memory]
mem.numa.util.anonpages [per-node anonymous memory]
mem.numa.util.mapped [per-node mapped memory]
mem.numa.util.filePages [per-node count of memory backed by files]
mem.numa.util.writeback [per-node count of memory locked for writeback to stable storage]
mem.numa.util.dirty [per-node dirty memory]
mem.numa.util.mlocked [per-node count of Mlocked memory]
mem.numa.util.unevictable [per-node Unevictable memory]
mem.numa.util.lowFree [per-node lowmem free]
mem.numa.util.lowTotal [per-node lowmem total]
mem.numa.util.highFree [per-node highmem free]
mem.numa.util.highTotal [per-node highmem total]
mem.numa.util.inactive_file [per-node file-backed Inactive list LRU memory]
mem.numa.util.active_file [per-node file-backed Active list LRU memory]
mem.numa.util.inactive_anon [per-node anonymous Inactive list LRU memory]
mem.numa.util.active_anon [per-node anonymous Active list LRU memory]
mem.numa.util.inactive [per-node Inactive list LRU memory]
mem.numa.util.active [per-node Active list LRU memory]
mem.numa.util.used [per-node used memory]
mem.numa.util.free [per-node free memory]
mem.numa.util.total [per-node total memory]
network.interface.hw_addr [hardware address (from sysfs)]
network.interface.ipv6_scope [string IPv6 interface scope (ifconfig style)]
network.interface.ipv6_addr [string IPv6 interface address (ifconfig style)]
mem.vmstat.oom_kill [count of out-of-memory kills]
mem.vmstat.numa_hint_faults_local [count of NUMA PTE fault hints satisfied locally]
mem.vmstat.numa_hint_faults [count of page migrations from NUMA PTE fault hints]
mem.vmstat.pgsteal_kswapd [mem pages reclaimed by kswapd]
    Count of mem pages reclaimed by kswapd since boot, from /proc/vmstat
mem.vmstat.pgsteal_direct [mem pages directly reclaimed]
    Count of mem pages directly reclaimed since boot, from /proc/vmstat
mem.vmstat.pgscan_kswapd [mem pages scanned by kswapd]
    Count of mem pages scanned by kswapd since boot, from /proc/vmstat
mem.vmstat.pgscan_direct_throttle [throttled direct scanned mem pages]
    Count of throttled mem pages scanned directly since boot, from /proc/vmstat
mem.vmstat.pgscan_direct [directly scanned mem pages]
    Count of mem pages scanned directly since boot, from /proc/vmstat
mem.vmstat.pgsteal_direct_movable [movable mem pages reclaimed]
    Count of movable mem pages reclaimed since boot, from /proc/vmstat
mem.vmstat.pgsteal_direct_normal [normal mem pages reclaimed]
    Count of normal mem pages reclaimed since boot, from /proc/vmstat
mem.vmstat.pgsteal_direct_dma32 [dma32 mem pages reclaimed]
    Count of dma32 mem pages reclaimed since boot, from /proc/vmstat
mem.vmstat.pgsteal_direct_dma [dma mem pages reclaimed]
    Count of dma mem pages reclaimed since boot, from /proc/vmstat
mem.vmstat.pgsteal_kswapd_movable [movable mem pages reclaimed by kswapd]
    Count of movable mem pages reclaimed by kswapd since boot, from /proc/vmstat
mem.vmstat.pgsteal_kswapd_normal [normal mem pages reclaimed by kswapd]
    Count of normal mem pages reclaimed by kswapd since boot, from /proc/vmstat
mem.vmstat.pgsteal_kswapd_dma32 [dma32 mem pages reclaimed by kswapd]
    Count of dma32 mem pages reclaimed by kswapd since boot, from /proc/vmstat
mem.vmstat.pgsteal_kswapd_dma [dma mem pages reclaimed by kswapd]
    Count of dma mem pages reclaimed by kswapd since boot, from /proc/vmstat
mem.vmstat.thp_file_mapped [transparent huge page file mappings]
mem.vmstat.thp_file_alloc [transparent huge page file allocations]
mem.vmstat.nr_zspages [number of compressed pages]
mem.vmstat.nr_zone_write_pending [count of dirty, writeback and unstable pages]
mem.vmstat.nr_zone_unevictable [number of unevictable memory pages in zones]
mem.vmstat.nr_zone_active_file [number of active file memory pages in zones]
mem.vmstat.nr_zone_inactive_file [number of inactive file memory pages in zones]
mem.vmstat.nr_zone_active_anon [number of active anonymous memory pages in zones]
mem.vmstat.nr_zone_inactive_anon [number of inactive anonymous memory pages in zones]
mem.vmstat.nr_shmem_pmdmapped [number of PMD mappings used for shared memory]
mem.vmstat.nr_shmem_hugepages [number of huge pages used for shared memory]
mem.vmstat.compact_isolated [count of isolated compaction pages]
mem.vmstat.workingset_refault [count of refaults of previously evicted pages]
mem.vmstat.workingset_nodereclaim [count of NUMA node working set page reclaims]
mem.vmstat.workingset_activate [count of page activations to form the working set]
mem.vmstat.thp_split_pmd [count of times a PMD was split into a table of PTEs]
    This can happen, for instance, when an application calls mprotect() or munmap() on part of a huge page. It doesn't split the huge page, only the page table entry.
mem.vmstat.thp_split_page_failed [count of failures to split a huge page]
mem.vmstat.thp_split_page [count of huge page splits into base pages]
mem.vmstat.thp_deferred_split_page [count of huge page enqueues for splitting]
mem.vmstat.pgmigrate_success [count of successful NUMA page migrations]
mem.vmstat.pgmigrate_fail [count of unsuccessful NUMA page migrations]
mem.vmstat.pglazyfreed [count of pages lazily freed]
mem.vmstat.numa_pte_updates [count of NUMA page table entry updates]
mem.vmstat.numa_pages_migrated [count of NUMA page migrations]
mem.vmstat.nr_vmscan_immediate_reclaim [pages prioritised for reclaim when writeback ends]
mem.vmstat.nr_pages_scanned [count of pages scanned during page reclaim]
mem.vmstat.nr_free_cma [count of free Contiguous Memory Allocator pages]
mem.vmstat.drop_slab [count of calls to drop slab cache pages]
mem.vmstat.drop_pagecache [count of calls to drop page cache pages]
mem.vmstat.compact_migrate_scanned [count of pages scanned for migration]
mem.vmstat.compact_free_scanned [count of pages scanned for freeing]
mem.vmstat.compact_daemon_wake [number of times the memory compaction daemon was woken]
mem.vmstat.balloon_migrate [number of virt guest balloon page migrations]
mem.vmstat.balloon_inflate [count of virt guest balloon page inflations]
mem.vmstat.balloon_deflate [number of virt guest balloon page deflations]
mem.vmstat.thp_zero_page_alloc_failed [count of transparent huge page zeroed page allocation failures]
mem.vmstat.thp_zero_page_alloc [count of transparent huge page zeroed page allocations]
mem.vmstat.numa_other [count of unsuccessful allocations from local NUMA zone]
mem.vmstat.numa_miss [count of unsuccessful allocations from preferred NUMA zone]
mem.vmstat.numa_local [count of successful allocations from local NUMA zone]
mem.vmstat.numa_interleave [count of interleaved NUMA allocations]
mem.vmstat.numa_hit [count of successful allocations from preferred NUMA zone]
mem.vmstat.numa_foreign [count of foreign NUMA zone allocations]
mem.vmstat.nr_written [count of pages written out]
    Count of pages written out, from /proc/vmstat
mem.vmstat.nr_dirty_threshold [dirty throttling threshold]
mem.vmstat.nr_dirty_background_threshold [background writeback threshold]
mem.vmstat.nr_dirtied [count of pages dirtied]
    Count of pages entering dirty state, from /proc/vmstat
mem.vmstat.nr_anon_transparent_hugepages [number of anonymous transparent huge pages]
    Instantaneous number of anonymous transparent huge pages, from /proc/vmstat
mem.vmstat.kswapd_skip_congestion_wait [count of times kswapd skipped waiting on device congestion]
    Count of times kswapd skipped waiting due to device congestion as a result of being under the low watermark, from /proc/vmstat
mem.vmstat.kswapd_high_wmark_hit_quickly [count of times high watermark reached quickly]
    Count of times kswapd reached high watermark quickly, from /proc/vmstat
mem.vmstat.kswapd_low_wmark_hit_quickly [count of times low watermark reached quickly]
    Count of times kswapd reached low watermark quickly, from /proc/vmstat
mem.vmstat.zone_reclaim_failed [number of zone reclaim failures]
mem.vmstat.unevictable_pgs_stranded [count of unevictable pages stranded]
mem.vmstat.unevictable_pgs_scanned [count of unevictable pages scanned]
mem.vmstat.unevictable_pgs_rescued [count of unevictable pages rescued]
mem.vmstat.unevictable_pgs_munlocked [count of unevictable pages munlocked]
mem.vmstat.unevictable_pgs_mlockfreed [count of unevictable pages mlock freed]
mem.vmstat.unevictable_pgs_mlocked [count of mlocked unevictable pages]
mem.vmstat.unevictable_pgs_culled [count of unevictable pages culled]
mem.vmstat.unevictable_pgs_cleared [count of unevictable pages cleared]
mem.vmstat.thp_split [count of transparent huge page splits]
mem.vmstat.thp_collapse_alloc_failed [transparent huge page collapse failures]
mem.vmstat.thp_collapse_alloc [transparent huge page collapse allocations]
mem.vmstat.thp_fault_fallback [transparent huge page fault fallbacks]
mem.vmstat.thp_fault_alloc [transparent huge page fault allocations]
mem.vmstat.pgsteal_movable [movable mem pages reclaimed]
    Count of movable mem pages reclaimed since boot, from /proc/vmstat
mem.vmstat.pgsteal_dma32 [dma32 mem pages reclaimed]
    Count of dma32 mem pages reclaimed since boot, from /proc/vmstat
mem.vmstat.pgscan_kswapd_movable [movable mem pages scanned by kswapd]
    Count of movable mem pages scanned by kswapd since boot, from /proc/vmstat
mem.vmstat.pgscan_kswapd_dma32 [dma32 mem pages scanned by kswapd]
    Count of dma32 mem pages scanned by kswapd since boot, from /proc/vmstat
mem.vmstat.pgscan_direct_movable [movable mem pages scanned]
    Count of movable mem pages scanned since boot, from /proc/vmstat
mem.vmstat.pgscan_direct_dma32 [dma32 mem pages scanned]
    Count of dma32 mem pages scanned since boot, from /proc/vmstat
mem.vmstat.pgrefill_movable [movable mem pages inspected in refill_inactive_zone]
    Count of movable mem pages inspected in refill_inactive_zone since boot, from /proc/vmstat
mem.vmstat.pgrefill_dma32 [dma32 mem pages inspected in refill_inactive_zone]
    Count of dma32 mem pages inspected in refill_inactive_zone since boot, from /proc/vmstat
mem.vmstat.pgalloc_movable [movable mem page allocations]
    Count of movable mem page allocations since boot, from /proc/vmstat
mem.vmstat.pgalloc_dma32 [dma32 mem page allocations]
    Count of dma32 mem page allocations since boot, from /proc/vmstat
mem.vmstat.compact_success [count of successful compactions for high order allocations]
mem.vmstat.compact_stall [count of failures to even start compacting]
mem.vmstat.compact_pages_moved
mem.vmstat.nr_isolated_anon [number of isolated anonymous memory pages]
mem.vmstat.nr_inactive_file [number of inactive file memory pages]
mem.vmstat.nr_inactive_anon [number of inactive anonymous memory pages]
mem.vmstat.nr_free_pages [number of free pages]
mem.vmstat.nr_active_file [number of active file memory pages]
mem.vmstat.nr_active_anon [number of active anonymous memory pages]
mem.vmstat.htlb_buddy_alloc_success [huge TLB page buddy allocation successes]
    Count of huge TLB page buddy allocation successes, from /proc/vmstat
mem.vmstat.htlb_buddy_alloc_fail [huge TLB page buddy allocation failures]
    Count of huge TLB page buddy allocation failures, from /proc/vmstat
mem.vmstat.nr_vmscan_write [pages written by VM scanner from LRU]
    Count of pages written from the LRU by the VM scanner, from /proc/vmstat. The VM is supposed to minimise the number of pages which get written from the LRU (for IO scheduling efficiency, and for high reclaim-success rates).
mem.vmstat.nr_bounce [number of bounce buffer pages]
    Instantaneous number of bounce buffer pages, from /proc/vmstat
mem.vmstat.nr_anon_pages [number of anonymous mapped pagecache pages]
    Instantaneous number of anonymous mapped pagecache pages, from /proc/vmstat. See also mem.vmstat.nr_mapped for other mapped pages.
mem.vmstat.nr_slab_unreclaimable [unreclaimable slab pages]
    Instantaneous number of unreclaimable slab pages, from /proc/vmstat.
mem.vmstat.nr_slab_reclaimable [reclaimable slab pages]
    Instantaneous number of reclaimable slab pages, from /proc/vmstat.
mem.vmstat.pgrotated [pages rotated to tail of the LRU]
    Count of pages rotated to tail of the LRU since boot, from /proc/vmstat
mem.vmstat.allocstall [direct reclaim calls]
    Count of direct reclaim calls since boot, from /proc/vmstat
mem.vmstat.pageoutrun [kswapd calls to page reclaim]
    Count of kswapd calls to page reclaim since boot, from /proc/vmstat
mem.vmstat.kswapd_inodesteal [pages reclaimed via kswapd inode freeing]
    Count of pages reclaimed via kswapd inode freeing since boot, from /proc/vmstat
mem.vmstat.kswapd_steal [pages reclaimed by kswapd]
    Count of pages reclaimed by kswapd since boot, from /proc/vmstat
mem.vmstat.slabs_scanned [slab pages scanned]
    Count of slab pages scanned since boot, from /proc/vmstat
mem.vmstat.pginodesteal [pages reclaimed via inode freeing]
    Count of pages reclaimed via inode freeing since boot, from /proc/vmstat
mem.vmstat.pgscan_direct_dma [dma mem pages scanned]
    Count of dma mem pages scanned since boot, from /proc/vmstat
mem.vmstat.pgscan_direct_normal [normal mem pages scanned]
    Count of normal mem pages scanned since boot, from /proc/vmstat
mem.vmstat.pgscan_direct_high [high mem pages scanned]
    Count of high mem pages scanned since boot, from /proc/vmstat
mem.vmstat.pgscan_kswapd_dma [dma mem pages scanned by kswapd]
    Count of dma mem pages scanned by kswapd since boot, from /proc/vmstat
mem.vmstat.pgscan_kswapd_normal [normal mem pages scanned by kswapd]
    Count of normal mem pages scanned by kswapd since boot, from /proc/vmstat
mem.vmstat.pgscan_kswapd_high [high mem pages scanned by kswapd]
    Count of high mem pages scanned by kswapd since boot, from /proc/vmstat
mem.vmstat.pgsteal_dma [dma mem pages reclaimed]
    Count of dma mem pages reclaimed since boot, from /proc/vmstat
mem.vmstat.pgsteal_normal [normal mem pages reclaimed]
    Count of normal mem pages reclaimed since boot, from /proc/vmstat
mem.vmstat.pgsteal_high [high mem pages reclaimed]
    Count of high mem pages reclaimed since boot, from /proc/vmstat
mem.vmstat.pgrefill_dma [dma mem pages inspected in refill_inactive_zone]
    Count of dma mem pages inspected in refill_inactive_zone since boot, from /proc/vmstat
mem.vmstat.pgrefill_normal [normal mem pages inspected in refill_inactive_zone]
    Count of normal mem pages inspected in refill_inactive_zone since boot, from /proc/vmstat
mem.vmstat.pgrefill_high [high mem pages inspected in refill_inactive_zone]
    Count of high mem pages inspected in refill_inactive_zone since boot, from /proc/vmstat
mem.vmstat.pgmajfault [major page fault operations]
    Count of major page fault operations since boot, from /proc/vmstat
mem.vmstat.pgfault [page major and minor fault operations]
    Count of page major and minor fault operations since boot, from /proc/vmstat
mem.vmstat.pgdeactivate [pages moved from active to inactive]
    Count of pages moved from active to inactive since boot, from /proc/vmstat
mem.vmstat.pgactivate [pages moved from inactive to active]
    Count of pages moved from inactive to active since boot, from /proc/vmstat
mem.vmstat.pgfree [page free operations]
    Count of page free operations since boot, from /proc/vmstat
mem.vmstat.pgalloc_dma [dma mem page allocations]
    Count of dma mem page allocations since boot, from /proc/vmstat
mem.vmstat.pgalloc_normal [normal mem page allocations]
    Count of normal mem page allocations since boot, from /proc/vmstat
mem.vmstat.pgalloc_high [high mem page allocations]
    Count of high mem page allocations since boot, from /proc/vmstat
mem.vmstat.pswpout [pages swapped out]
    Count of pages swapped out since boot, from /proc/vmstat
mem.vmstat.pswpin [pages swapped in]
    Count of pages swapped in since boot, from /proc/vmstat
mem.vmstat.pgpgout [page out operations]
    Count of page out operations since boot, from /proc/vmstat
mem.vmstat.pgpgin [page in operations]
    Count of page in operations since boot, from /proc/vmstat
mem.vmstat.nr_slab [number of slab pages]
    Instantaneous number of slab pages, from /proc/vmstat. This counter was retired in 2.6.18 kernels, and is now the sum of mem.vmstat.nr_slab_reclaimable and mem.vmstat.nr_slab_unreclaimable.
mem.vmstat.nr_mapped [number of mapped pagecache pages]
    Instantaneous number of mapped pagecache pages, from /proc/vmstat. See also mem.vmstat.nr_anon_pages for anonymous mapped pages.
mem.vmstat.nr_page_table_pages [number of page table pages]
    Instantaneous number of page table pages, from /proc/vmstat
mem.vmstat.nr_unstable [number of pages in unstable state]
    Instantaneous number of pages in unstable state, from /proc/vmstat
mem.vmstat.nr_writeback [number of pages in writeback state]
    Instantaneous number of pages in writeback state, from /proc/vmstat
mem.vmstat.nr_dirty [number of pages in dirty state]
    Instantaneous number of pages in dirty state, from /proc/vmstat
vfs.dentry.free [number of available directory entry structures]
vfs.dentry.count [number of in-use directory entry structures]
vfs.inodes.free [number of available inode structures]
vfs.inodes.count [number of in-use inode structures]
vfs.files.max [hard maximum on number of file structures]
vfs.files.free [number of available file structures]
vfs.files.count [number of in-use file structures]
kernel.all.idletime [time the current kernel has been idle since boot]
kernel.all.uptime [time the current kernel has been running]
kernel.all.nusers [number of user sessions on the system (including root)]
ipc.shm.max_shmsys [maximum amount of shared memory in system in pages (from shmctl(..,IPC_INFO,..))]
ipc.shm.max_segproc [maximum number of shared segments per process (from shmctl(..,IPC_INFO,..))]
ipc.shm.max_seg [maximum number of shared segments in system (from shmctl(..,IPC_INFO,..))]
ipc.shm.min_segsz [minimum shared segment size in bytes (from shmctl(..,IPC_INFO,..))]
ipc.shm.max_segsz [maximum shared segment size in bytes (from shmctl(..,IPC_INFO,..))]
ipc.msg.max_seg [maximum number of message segments (from msgctl(..,IPC_INFO,..))]
ipc.msg.num_smsghdr [number of system message headers (from msgctl(..,IPC_INFO,..))]
ipc.msg.max_msgseg [message segment size (from msgctl(..,IPC_INFO,..))]
ipc.msg.max_msgqid [maximum number of message queue identifiers (from msgctl(..,IPC_INFO,..))]
ipc.msg.max_defmsgq [default maximum size of a message queue (from msgctl(..,IPC_INFO,..))]
ipc.msg.max_msgsz [maximum size of a message in bytes (from msgctl(..,IPC_INFO,..))]
ipc.msg.mapent [number of entries in a message map (from msgctl(..,IPC_INFO,..))]
ipc.msg.sz_pool [size of message pool in kilobytes (from msgctl(..,IPC_INFO,..))]
ipc.sem.max_exit [adjust on exit maximum value (from semctl(..,IPC_INFO,..))]
ipc.sem.max_semval [semaphore maximum value (from semctl(..,IPC_INFO,..))]
ipc.sem.sz_semundo [size of struct sem_undo (from semctl(..,IPC_INFO,..))]
ipc.sem.max_undoent [maximum number of undo entries per process (from semctl(..,IPC_INFO,..))]
ipc.sem.max_ops [maximum number of operations per semop call (from semctl(..,IPC_INFO,..))]
ipc.sem.max_perid [maximum number of semaphores per identifier (from semctl(..,IPC_INFO,..))]
ipc.sem.num_undo [number of undo structures in system (from semctl(..,IPC_INFO,..))]
ipc.sem.max_sem [maximum number of semaphores in system (from semctl(..,IPC_INFO,..))]
ipc.sem.max_semid [maximum number of semaphore identifiers (from semctl(..,IPC_INFO,..))]
ipc.sem.max_semmap [maximum number of entries in a semaphore map (from semctl(..,IPC_INFO,..))]
network.udp.incsumerrors [count of udp in checksum errors]
network.udp.sndbuferrors [count of udp send buffer errors]
network.udp.recvbuferrors [count of udp receive buffer errors]
network.udp.outdatagrams [count of udp outdatagrams]
network.udp.inerrors [count of udp inerrors]
network.udp.noports [count of udp noports]
network.udp.indatagrams [count of udp indatagrams]
network.tcp.incsumerrors [count of tcp segments received with checksum errors]
network.tcp.outrsts [count of tcp segments sent with RST flag]
network.tcp.inerrs [count of tcp segments received in error]
network.tcp.retranssegs [count of tcp segments retransmitted]
network.tcp.outsegs [count of tcp segments sent]
network.icmp.outerrors [count of icmp outerrors]
network.icmp.outmsgs [count of icmp outmsgs]
network.icmp.inaddrmaskreps [count of icmp inaddrmaskreps]
network.icmp.inaddrmasks [count of icmp inaddrmasks]
network.icmp.intimestampreps [count of icmp intimestampreps]
network.icmp.intimestamps [count of icmp intimestamps]
network.icmp.inechoreps [count of icmp inechoreps]
network.icmp.inechos [count of icmp inechos]
network.icmp.inredirects [count of icmp inredirects]
network.icmp.insrcquenchs [count of icmp insrcquenchs]
network.icmp.inparmprobs [count of icmp inparmprobs]
network.icmp.intimeexcds [count of icmp intimeexcds]
network.icmp.indestunreachs [count of icmp indestunreachs]
network.icmp.inerrors [count of icmp inerrors]
network.icmp.inmsgs [count of icmp inmsgs]
network.ip.fragcreates [count of ip fragcreates]
network.ip.fragfails [count of ip fragfails]
network.ip.fragoks [count of ip fragoks]
network.ip.reasmfails [count of ip reasmfails]
    The number of failures detected by the IP re-assembly algorithm (for whatever reason: timed out, errors, etc). Note that this is not necessarily a count of discarded IP fragments since some algorithms can lose track of the number of fragments by combining them as they are received.
network.ip.reasmoks [count of ip reasmoks]
network.ip.reasmreqds [count of ip reasmreqds]
network.ip.reasmtimeout [count of ip reasmtimeout]
network.ip.outnoroutes [count of ip outnoroutes]
network.ip.outdiscards [count of ip outdiscards]
network.ip.outrequests [count of ip outrequests]
network.ip.indelivers [count of ip indelivers]
network.ip.indiscards [count of ip indiscards]
network.ip.inunknownprotos [count of ip inunknownprotos]
network.ip.forwdatagrams [count of ip forwdatagrams]
network.ip.inaddrerrors [count of ip inaddrerrors]
network.ip.inhdrerrors [count of ip inhdrerrors]
network.ip.inreceives [count of ip inreceives]
network.ip.defaultttl [count of ip defaultttl]
network.ip.forwarding [count of ip forwarding]
network.sockstat.frag.memory [instantaneous amount of memory used for frag]
network.sockstat.frag.inuse [instantaneous number of frag sockets currently in use]
network.sockstat.udp.mem [instantaneous amount of memory used for udp]
network.sockstat.tcp.mem [instantaneous amount of memory used for tcp]
network.sockstat.tcp.alloc [instantaneous number of allocated sockets]
network.sockstat.tcp.tw [instantaneous number of sockets waiting close]
network.sockstat.tcp.orphan [instantaneous number of orphan sockets]
network.sockstat.total [total number of sockets used by the system]
network.sockstat.udplite.inuse [instantaneous number of udplite sockets currently in use]
network.sockstat.raw.inuse [instantaneous number of raw sockets currently in use]
network.sockstat.udp.inuse [instantaneous number of udp sockets currently in use]
network.sockstat.tcp.inuse [instantaneous number of tcp sockets currently in use]
disk.partitions.total_rawactive [per-disk-partition raw count of I/O response time]
    Instance domain: set of all disk partitions (here xvda1, xvde1, xvde2).
    For each completed I/O on each disk partition the response time (queue time plus service time) in milliseconds is added to the associated instance of this metric. When converted to a normalized rate, the value represents the time average of the number of outstanding I/Os for a disk partition. When divided by the number of completed I/Os for a disk partition (disk.partitions.total), the value represents the stochastic average of the I/O response (or wait) time for that disk partition.
disk.partitions.write_rawactive [per-disk-partition raw count of write response time]
    For each completed write on each disk partition the response time (queue time plus service time) in milliseconds is added to the associated instance of this metric. When converted to a normalized rate, the value represents the time average of the number of outstanding writes for a disk partition. When divided by the number of completed writes for a disk partition (disk.partitions.write), the value represents the stochastic average of the write response (or wait) time for that disk partition. It is suitable mainly for use in calculations with other metrics, e.g. mirroring the results from existing performance tools:
        iostat.partitions.w_await = delta(disk.partitions.write_rawactive) / delta(disk.partitions.write)
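A worked form of the w_await recipe above (the r_await variant for reads, which follows, is identical in shape). The helper and its sample values are invented for illustration; the two counters are assumed to have been sampled at the start and end of an interval:

    # Sketch of iostat.partitions.w_await from two samples, per the
    # formula above: delta(write_rawactive) / delta(write).
    def w_await(rawactive_prev, rawactive_now, writes_prev, writes_now):
        completed = writes_now - writes_prev   # writes completed in the interval
        if completed == 0:
            return 0.0
        # accumulated response time (ms) per completed write
        return (rawactive_now - rawactive_prev) / completed

    print(w_await(10_000, 10_480, 3_000, 3_060))  # 480 ms / 60 writes = 8.0 ms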
disk.partitions.read_rawactive [per-disk-partition raw count of read response time]
    For each completed read on each disk partition the response time (queue time plus service time) in milliseconds is added to the associated instance of this metric. When converted to a normalized rate, the value represents the time average of the number of outstanding reads for a disk partition. When divided by the number of completed reads for a disk partition (disk.partitions.read), the value represents the stochastic average of the read response (or wait) time for that disk partition. It is suitable mainly for use in calculations with other metrics, e.g. mirroring the results from existing performance tools:
        iostat.partitions.r_await = delta(disk.partitions.read_rawactive) / delta(disk.partitions.read)
disk.partitions.aveq [per-disk-partition device time averaged count of request queue length]
disk.partitions.avactive [per-disk-partition device count of active time]
    Counts the number of milliseconds for which at least one I/O is in progress for each disk partition. When converted to a rate, this metric represents the average utilization of the disk partition during the sampling interval. A value of 0.5 (or 50%) means the disk partition was active (i.e. busy) half the time.
disk.partitions.write_merge [per-disk-partition count of merged write requests]
disk.partitions.read_merge [per-disk-partition count of merged read requests]
disk.partitions.total_bytes [total number of bytes read and written for storage partitions]
    Cumulative number of bytes read and written since system boot time (subject to counter wrap) for individual disk partitions or logical volumes.
disk.partitions.write_bytes [number of bytes written for storage partitions]
    Cumulative number of bytes written since system boot time (subject to counter wrap) for individual disk partitions or logical volumes.
disk.partitions.read_bytes [number of bytes read for storage partitions]
    Cumulative number of bytes read since system boot time (subject to counter wrap) for individual disk partitions or logical volumes.
disk.partitions.blktotal [total (read+write) block operations metric for storage partitions]
    Cumulative number of disk block read and write operations since system boot time (subject to counter wrap) for individual disk partitions or logical volumes.
disk.partitions.blkwrite [block write operations metric for storage partitions]
    Cumulative number of disk block write operations since system boot time (subject to counter wrap) for individual disk partitions or logical volumes.
disk.partitions.blkread [block read operations metric for storage partitions]
    Cumulative number of disk block read operations since system boot time (subject to counter wrap) for individual disk partitions or logical volumes.
disk.partitions.total [total (read+write) I/O operations metric for storage partitions]
    Cumulative number of disk read and write operations since system boot time (subject to counter wrap) for individual disk partitions or logical volumes.
disk.partitions.write [write operations metric for storage partitions]
    Cumulative number of disk write operations since system boot time (subject to counter wrap) for individual disk partitions or logical volumes.
disk.partitions.read [read operations metric for storage partitions]
    Cumulative number of disk read operations since system boot time (subject to counter wrap) for individual disk partitions or logical volumes.
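The disk.partitions.avactive description above maps onto iostat's %util figure; a sketch of that conversion, with invented sample values:

    # Sketch: average utilization over a sampling interval, per the
    # disk.partitions.avactive help text (0.5 means busy half the time).
    def utilization(avactive_prev_ms, avactive_now_ms, interval_ms):
        busy_ms = avactive_now_ms - avactive_prev_ms  # ms with >=1 I/O in flight
        return busy_ms / interval_ms

    print(utilization(50_000, 52_500, 5_000))  # 2500/5000 = 0.5, i.e. 50% busy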
nfs4.server.reqs [cumulative total for each server NFSv4 operation, and for NULL requests]
    Instance domain: network filesystem (NFS) v4 server operations.
nfs4.client.reqs [cumulative total for each client NFSv4 request type]
    Instance domain: network filesystem (NFS) v4 client operations.
nfs3.server.reqs [cumulative total of server NFSv3 requests by request type]
    Instance domain: network filesystem (NFS) v3 operations.
nfs3.client.reqs [cumulative total of client NFSv3 requests by request type]
rpc.server.io_write [cumulative count of bytes passed into write requests]
rpc.server.io_read [cumulative count of bytes returned from read requests]
rpc.server.nettcpconn [cumulative total of server RPC TCP network layer connection requests]
rpc.server.nettcpcnt [cumulative total of server RPC TCP network layer requests]
rpc.server.netudpcnt [cumulative total of server RPC UDP network layer requests]
rpc.server.netcnt [cumulative total of server RPC network layer requests]
rpc.server.rcnocache [cumulative total of uncached request-reply-cache requests]
rpc.server.rcmisses [cumulative total of request-reply-cache misses]
rpc.server.rchits [cumulative total of request-reply-cache hits]
rpc.server.rpcbadclnt [cumulative total of server RPC bad client errors]
rpc.server.rpcbadauth [cumulative total of server RPC bad auth errors]
rpc.server.rpcbadfmt [cumulative total of server RPC bad format errors]
rpc.server.rpcerr [cumulative total of server RPC errors]
rpc.server.rpccnt [cumulative total of server RPC requests]
rpc.client.nettcpconn [cumulative total of client RPC TCP network layer connection requests]
rpc.client.nettcpcnt [cumulative total of client RPC TCP network layer requests]
rpc.client.netudpcnt [cumulative total of client RPC UDP network layer requests]
rpc.client.netcnt [cumulative total of client RPC network layer requests]
rpc.client.rpcauthrefresh [cumulative total of client RPC auth refreshes]
rpc.client.rpcretrans [cumulative total of client RPC retransmissions]
rpc.client.rpccnt [cumulative total of client RPC requests]
nfs.server.reqs [cumulative total of server NFSv2 requests by request type]
    Instance domain: network filesystem (NFS) v2 operations.
nfs.client.reqs [cumulative total of client NFSv2 requests by request type]
filesys.avail [Total space free to non-superusers on mounted filesystem (Kbytes)]
filesys.full [Percentage of filesystem in use]
filesys.freefiles [Number of unallocated inodes on mounted filesystem]
filesys.usedfiles [Number of inodes allocated on mounted filesystem]
filesys.maxfiles [Inodes capacity of mounted filesystem]
filesys.free [Total space free on mounted filesystem (Kbytes)]
filesys.used [Total space used on mounted filesystem (Kbytes)]
kernel.percpu.intr [interrupt count metric from /proc/interrupts]
    Aggregate count of each CPU's interrupt processing count, calculated as the sum of all interrupt types in /proc/interrupts for each CPU.
network.interface.wireless [boolean for whether interface is wireless]
network.interface.running [boolean for whether interface has resources allocated]
network.interface.up [boolean for whether interface is currently up or down]
network.interface.baudrate [interface speed in bytes per second]
    The linespeed on the network interface, as reported by the kernel, scaled up from Megabits/second to bits/second and divided by 8 to convert to bytes/second. See also network.interface.speed for the Megabytes/second value.
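The baudrate conversion just described is simple arithmetic; a sketch, assuming the usual 10^6 bits per Megabit:

    # Sketch of the conversion described for network.interface.baudrate:
    # kernel line speed in Megabits/second, scaled to bits/second, then
    # divided by 8 to give bytes/second. 1 Mbit = 10^6 bits is assumed.
    def baudrate_bytes_per_sec(speed_mbps):
        bits_per_sec = speed_mbps * 1_000_000
        return bits_per_sec // 8

    print(baudrate_bytes_per_sec(1000))  # 1 Gb/s link -> 125000000 bytes/sec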
network.interface.total.mcasts [network total (in) mcasts from /proc/net/dev per network interface]
    Linux does not account for outgoing mcast packets per device, so this counter is identical to the incoming mcast metric.
network.interface.total.drops [network total (in+out) drops from /proc/net/dev per network interface]
network.interface.total.errors [network total (in+out) errors from /proc/net/dev per network interface]
network.interface.total.packets [network total (in+out) packets from /proc/net/dev per network interface]
network.interface.total.bytes [network total (in+out) bytes from /proc/net/dev per network interface]
network.interface.out.compressed [network send compressed from /proc/net/dev per network interface]
    compressed column on the "Transmit" side of /proc/net/dev (stats->tx_compressed counter in rtnl_link_stats64). Almost exclusively used for CSLIP or HDLC devices.
network.interface.out.carrier [network send carrier from /proc/net/dev per network interface]
    carrier column on the "Transmit" side of /proc/net/dev (stats->{tx_carrier_errors + tx_aborted_errors + tx_window_errors + tx_heartbeat_errors} counters in rtnl_link_stats64).
network.interface.collisions [network send collisions from /proc/net/dev per network interface]
    colls column on the "Transmit" side of /proc/net/dev (stats->collisions counter in rtnl_link_stats64). Counter only valid for outgoing packets.
network.interface.out.fifo [network send fifos from /proc/net/dev per network interface]
    fifo column on the "Transmit" side of /proc/net/dev (stats->tx_fifo_errors counter in rtnl_link_stats64).
network.interface.out.drops [network send drops from /proc/net/dev per network interface]
    drop column on the "Transmit" side of /proc/net/dev (stats->tx_dropped counter in rtnl_link_stats64).
network.interface.out.errors [network send errors from /proc/net/dev per network interface]
    errors column on the "Transmit" side of /proc/net/dev (stats->tx_errors counter in rtnl_link_stats64).
network.interface.out.packets [network send packets from /proc/net/dev per network interface]
    packets column on the "Transmit" side of /proc/net/dev (stats->tx_packets counter in rtnl_link_stats64).
network.interface.out.bytes [network send bytes from /proc/net/dev per network interface]
    bytes column on the "Transmit" side of /proc/net/dev (stats->tx_bytes counter in rtnl_link_stats64).
network.interface.in.mcasts [network recv multicast packets from /proc/net/dev per network interface]
    multicast column on the "Receive" side of /proc/net/dev (stats->multicast counter in rtnl_link_stats64).
network.interface.in.compressed [network recv compressed from /proc/net/dev per network interface]
    compressed column on the "Receive" side of /proc/net/dev (stats->rx_compressed counter in rtnl_link_stats64). Almost exclusively used for CSLIP or HDLC devices.
network.interface.in.frame [network recv frame errors from /proc/net/dev per network interface]
    frame column on the "Receive" side of /proc/net/dev (stats->{rx_length_errors + rx_over_errors + rx_crc_errors + rx_frame_errors} counters in rtnl_link_stats64).
network.interface.in.fifo [network recv fifo overrun errors from /proc/net/dev per network interface]
    fifo column on the "Receive" side of /proc/net/dev (stats->rx_fifo_errors counter in rtnl_link_stats64).
network.interface.in.drops [network recv read drops from /proc/net/dev per network interface]
    drop column on the "Receive" side of /proc/net/dev (stats->{rx_dropped + rx_missed_errors} counters in rtnl_link_stats64). rx_dropped counts packets dropped due to no space in linux buffers, and rx_missed those the receiving NIC missed. Not all NICs use the rx_missed_errors counter.
network.interface.in.errors [network recv read errors from /proc/net/dev per network interface]
    errors column on the "Receive" side of /proc/net/dev (stats->rx_errors counter in rtnl_link_stats64).
network.interface.in.packets [network recv read packets from /proc/net/dev per network interface]
    packets column on the "Receive" side of /proc/net/dev (stats->rx_packets counter in rtnl_link_stats64).
network.interface.in.bytes [network recv read bytes from /proc/net/dev per network interface]
    bytes column on the "Receive" side of /proc/net/dev (stats->rx_bytes counter in rtnl_link_stats64).
kernel.all.nprocs [total number of processes (lightweight)]
kernel.all.runnable [total number of processes in the (per-CPU) run queues]
kernel.all.load [1, 5 and 15 minute load average]
    Instance domain: load averages for 1, 5, and 15 minutes ("1 minute", "5 minute", "15 minute").
mem.util.percpu [amount of per CPU allocator memory]
mem.util.shadowcallstack [memory used for shadow call stacks]
    Shadow stacks are a modern security feature allowing for detection of corruption of the call stack, allowing the kernel to react to such an attack in an appropriate fashion. Shadow stacks are often maintained by the processor hardware and require additional stack memory.
mem.util.zswapped [current memory used from the zswap memory pool]
mem.util.zswap [total size of the zswap memory pool]
mem.util.unaccepted [amount of unaccepted memory]
    When this mechanism is in use, a virtual machine can be launched with its memory in an unaccepted state. Such a system will not be able to make use of the memory provided until that memory has been explicitly accepted. On these systems, the bootloader will typically pre-accept enough memory to allow the guest kernel to boot, then that kernel must take responsibility for accepting the rest before using it.
mem.util.cmafree [free contiguous memory allocator memory]
mem.util.cmatotal [total contiguous memory allocator memory]
mem.util.filepmdmapped [page cache mapped into userspace with hugepages]
mem.util.filehugepages [page cache (file) pages allocated with hugepages]
mem.util.shmempmdmapped [shared memory mapped into userspace with hugepages]
mem.util.shmemhugepages [amount of shared memory allocated with hugepages]
mem.util.hugepagesSurpBytes [amount of memory in surplus hugepages]
mem.util.hugepagesRsvdBytes [amount of memory in reserved hugepages]
mem.util.hugepagesFreeBytes [amount of memory in free hugepages]
mem.util.hugepagesTotalBytes [amount of memory in total hugepages]
mem.util.anonhugepages [amount of memory in anonymous huge pages]
mem.util.corrupthardware [amount of memory in hardware corrupted pages]
mem.util.quicklists [amount of memory in the per-CPU quicklists]
mem.util.mmap_copy [amount of mmap_copy space (non-MMU kernels only)]
mem.util.vmallocChunk [amount of vmalloc chunk memory]
mem.util.vmallocUsed [amount of used vmalloc memory]
mem.util.vmallocTotal [amount of kernel memory allocated via vmalloc]
mem.util.directMap2M [amount of memory that is directly mapped in 2MB pages]
mem.util.directMap4k [amount of memory that is directly mapped in 4kB pages]
mem.util.hugepagesSurp [a count of surplus hugepages]
mem.util.hugepagesRsvd [a count of reserved hugepages]
mem.util.hugepagesFree [a count of free hugepages]
mem.util.hugepagesTotal [a count of total hugepages]
mem.util.kernelStack [kbytes of memory used for kernel stacks]
mem.util.shmem [kbytes of shmem]
mem.util.mlocked [kbytes of memory that is pinned via mlock()]
mem.util.unevictable [kbytes of memory that is unevictable]
mem.util.inactive_file [file-backed Inactive list LRU memory]
mem.util.active_file [file-backed Active list LRU memory]
mem.util.inactive_anon [anonymous Inactive list LRU memory]
mem.util.active_anon [anonymous Active list LRU memory]
mem.util.slabUnreclaimable [Kbytes in unreclaimable slab pages, from /proc/meminfo]
mem.util.slabReclaimable [Kbytes in reclaimable slab pages, from /proc/meminfo]
mem.util.NFS_Unstable [Kbytes in NFS unstable memory, from /proc/meminfo]
mem.util.bounce [Kbytes in bounce buffers, from /proc/meminfo]
mem.util.commitLimit [Kbytes limit for address space commit, from /proc/meminfo]
    The static total, in Kbytes, available for commitment to address spaces. Thus, mem.util.committed_AS may range up to this total. Normally the kernel overcommits memory, so this value may exceed mem.physmem.
mem.util.anonpages [Kbytes in user pages not backed by files, from /proc/meminfo]
    User memory (Kbytes) in pages not backed by files, e.g. from malloc().
mem.util.cache_clean [Kbytes cached and not dirty or writeback, derived from /proc/meminfo]
mem.util.reverseMaps [Kbytes in reverse mapped pages, from /proc/meminfo]
mem.util.pageTables [Kbytes in kernel page tables, from /proc/meminfo]
mem.util.committed_AS [Kbytes committed to address spaces, from /proc/meminfo]
    An estimate of how much RAM you would need to make a 99.99% guarantee that there never is OOM (out of memory) for this workload. Normally the kernel will overcommit memory. That means, say you do a 1GB malloc, nothing happens, really. Only when you start USING that malloc memory will you get real memory on demand, and just as much as you use.
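The commitLimit and committed_AS entries above describe a bound that can be checked directly against /proc/meminfo, whose Committed_AS and CommitLimit fields back these metrics. A minimal sketch:

    # Sketch: compare Committed_AS against CommitLimit from /proc/meminfo,
    # the fields behind mem.util.committed_AS and mem.util.commitLimit.
    def meminfo(path="/proc/meminfo"):
        fields = {}
        with open(path) as f:
            for line in f:
                key, rest = line.split(":", 1)
                fields[key] = int(rest.split()[0])  # value in Kbytes
        return fields

    m = meminfo()
    ratio = m["Committed_AS"] / m["CommitLimit"]
    print(f"address space committed: {ratio:.1%} of CommitLimit")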
mem.util.slab [Kbytes in slab memory, from /proc/meminfo]
    In-kernel data structures cache.
mem.util.mapped [Kbytes in mapped pages, from /proc/meminfo]
    Files which have been mmaped, such as libraries.
mem.util.writeback [Kbytes in writeback pages, from /proc/meminfo]
    Memory which is actively being written back to the disk.
mem.util.dirty [Kbytes in dirty pages, from /proc/meminfo]
    Memory which is waiting to get written back to the disk.
mem.util.swapFree [Kbytes free swap, from /proc/meminfo]
    Memory which has been evicted from RAM, and is temporarily on the disk.
mem.util.swapTotal [Kbytes swap, from /proc/meminfo]
    Total amount of swap space available.
mem.util.lowFree [Kbytes free low memory, from /proc/meminfo]
    See mem.util.lowTotal.
mem.util.lowTotal [Kbytes in low memory total, from /proc/meminfo]
    Lowmem is memory which can be used for everything that highmem can be used for, but it is also available for the kernel's use for its own data structures. Among many other things, it is where everything from the Slab is allocated. Bad things happen when you're out of lowmem. (This may only be true on i386 architectures.)
mem.util.highFree [Kbytes free high memory, from /proc/meminfo]
    See mem.util.highTotal. Not used on the ia64 arch (and possibly others).
mem.util.highTotal [Kbytes in high memory, from /proc/meminfo]
    This is apparently an i386 specific metric, and seems to be always zero on ia64 architecture (and possibly others). On i386 arch (at least), highmem is all memory above ~860MB of physical memory. Highmem areas are for use by userspace programs, or for the pagecache. The kernel must use tricks to access this memory, making it slower to access than lowmem.
mem.util.inactive [Kbytes on the inactive page list (candidates for discarding)]
    Memory which has been less recently used. It is more eligible to be reclaimed for other purposes.
mem.util.active [Kbytes on the active page list (recently referenced pages)]
    Memory that has been used more recently and usually not reclaimed unless absolutely necessary.
mem.util.swapCached [Kbytes in swap cache, from /proc/meminfo]
    Memory that once was swapped out, is swapped back in, but still also is in the swapfile (if memory is needed it doesn't need to be swapped out AGAIN because it is already in the swapfile; this saves I/O).
mem.util.other [unaccounted memory]
    Memory that is not free (i.e. has been referenced) and is not cached:
        mem.physmem - mem.util.free - mem.util.cached - mem.util.buffers
mem.freemem [free system memory metric from /proc/meminfo]
swap.free [swap free metric from /proc/meminfo]
swap.used [swap used metric from /proc/meminfo]
mem.util.cached [page cache metric from /proc/meminfo]
    Memory used by the page cache, including buffered file data. This is in-memory cache for files read from the disk (the pagecache) but doesn't include SwapCached.
mem.util.bufmem [I/O buffers metric from /proc/meminfo]
    Memory allocated for buffer_heads.
mem.util.shared [shared memory metric from /proc/meminfo]
    Shared memory metric. Currently always zero on Linux 2.4 kernels and has been removed from 2.6 kernels.
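The subtraction defining mem.util.other above can be reproduced from /proc/meminfo fields; a sketch, assuming MemTotal stands in for mem.physmem and MemFree/Cached/Buffers for the other three metrics:

    # Sketch of the mem.util.other derivation given above:
    # mem.physmem - mem.util.free - mem.util.cached - mem.util.buffers,
    # using the corresponding /proc/meminfo field names.
    def other_kbytes(meminfo):
        return (meminfo["MemTotal"] - meminfo["MemFree"]
                - meminfo["Cached"] - meminfo["Buffers"])

    sample = {"MemTotal": 16384000, "MemFree": 8192000,
              "Cached": 4096000, "Buffers": 512000}   # illustrative Kbyte values
    print(other_kbytes(sample))  # 3584000 Kbytes unaccounted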
mem.util.free [free memory metric from /proc/meminfo]
    Alias for mem.freemem.
mem.util.used [used memory metric from /proc/meminfo]
    Used memory is the difference between mem.physmem and mem.freemem.
disk.all.blkdiscard [block discard operations, summed for all disks]
disk.all.discard [total discard operations, summed for all disks]
disk.dev.discard_bytes [per-disk count of bytes discard'ed]
disk.dev.blkdiscard [per-disk block discard operations]
    Cumulative number of disk block discard operations since system boot time.
disk.dev.discard [per-disk discard operations]
    Cumulative number of disk discard operations since system boot time.
kernel.pernode.cpu.guest_nice [total virtual nice guest CPU time for each node]
kernel.percpu.cpu.vnice [percpu nice user CPU time metric from /proc/stat, excluding guest CPU time]
kernel.percpu.cpu.guest_nice [percpu nice guest CPU time]
    Per-CPU nice time spent running (virtual) guest operating systems.
kernel.all.cpu.vnice [total nice user CPU time from /proc/stat for all CPUs, excluding guest time]
kernel.all.cpu.guest_nice [total virtual guest CPU nice time for all CPUs]
    Total CPU nice time spent running virtual guest operating systems.
disk.all.total_rawactive [raw count of I/O response time, summed for all disks]
    For each completed I/O on every disk the response time (queue time plus service time) in milliseconds is added to this metric. When converted to a normalized rate, the value represents the time average of the number of outstanding I/Os across all disks. When divided by the number of completed I/Os for all disks (disk.all.total), the value represents the stochastic average of the I/O response (or wait) time across all disks.
disk.dev.total_rawactive [per-disk raw count of I/O response time]
    For each completed I/O on each disk the response time (queue time plus service time) in milliseconds is added to the associated instance of this metric. When converted to a normalized rate, the value represents the time average of the number of outstanding I/Os for a disk. When divided by the number of completed I/Os for a disk (disk.dev.total), the value represents the stochastic average of the I/O response (or wait) time for that disk. It is suitable mainly for use in calculations with other metrics, e.g. mirroring the results from existing performance tools:
        iostat.dev.await = delta(disk.dev.total_rawactive) / delta(disk.dev.total)
kernel.all.cpu.vuser [total user CPU time from /proc/stat for all CPUs, excluding guest CPU time]
kernel.pernode.cpu.vuser [total user CPU time from /proc/stat for each node, excluding guest CPU time]
kernel.percpu.cpu.vuser [percpu user CPU time metric from /proc/stat, excluding guest CPU time]
disk.all.write_rawactive [raw count of write response time, summed for all disks]
    For each completed write on every disk the response time (queue time plus service time) in milliseconds is added to this metric. When converted to a normalized rate, the value represents the time average of the number of outstanding writes across all disks. When divided by the number of completed writes for all disks (disk.all.write), the value represents the stochastic average of the write response (or wait) time across all disks.
disk.all.read_rawactive [raw count of read response time, summed for all disks]
    For each completed read on every disk the response time (queue time plus service time) in milliseconds is added to this metric. When converted to a normalized rate, the value represents the time average of the number of outstanding reads across all disks. When divided by the number of completed reads for all disks (disk.all.read), the value represents the stochastic average of the read response (or wait) time across all disks. It is suitable mainly for use in calculations with other metrics, e.g. mirroring the results from existing performance tools:
        iostat.all.r_await = delta(disk.all.read_rawactive) / delta(disk.all.read)
disk.dev.write_rawactive [per-disk raw count of write response time]
    For each completed write on each disk the response time (queue time plus service time) in milliseconds is added to the associated instance of this metric. When converted to a normalized rate, the value represents the time average of the number of outstanding writes for a disk. When divided by the number of completed writes for a disk (disk.dev.write), the value represents the stochastic average of the write response (or wait) time for that disk. It is suitable mainly for use in calculations with other metrics, e.g. mirroring the results from existing performance tools:
        iostat.dev.w_await = delta(disk.dev.write_rawactive) / delta(disk.dev.write)
disk.dev.read_rawactive [per-disk raw count of read response time]
    For each completed read on each disk the response time (queue time plus service time) in milliseconds is added to the associated instance of this metric. When converted to a normalized rate, the value represents the time average of the number of outstanding reads for a disk. When divided by the number of completed reads for a disk (disk.dev.read), the value represents the stochastic average of the read response (or wait) time for that disk. It is suitable mainly for use in calculations with other metrics, e.g. mirroring the results from existing performance tools:
        iostat.dev.r_await = delta(disk.dev.read_rawactive) / delta(disk.dev.read)
kernel.pernode.cpu.irq.hard [hard interrupt CPU time from /proc/stat for each node]
kernel.pernode.cpu.irq.soft [soft interrupt CPU time from /proc/stat for each node]
kernel.pernode.cpu.wait.total [total wait CPU time from /proc/stat for each node]
kernel.pernode.cpu.guest [total virtual guest CPU time for each node]
kernel.pernode.cpu.steal [total virtualisation CPU steal time for each node]
kernel.pernode.cpu.idle [total idle CPU time from /proc/stat for each node]
kernel.pernode.cpu.sys [total sys CPU time from /proc/stat for each node]
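Besides the await ratios, the *_rawactive help texts above note that the normalized rate of these counters is the time-averaged number of outstanding I/Os. A sketch of that second reading, with invented values:

    # Sketch: time-averaged queue depth from a *_rawactive style counter,
    # per the help texts above: delta(rawactive_ms) / delta(wallclock_ms).
    def avg_outstanding(rawactive_prev_ms, rawactive_now_ms, interval_ms):
        return (rawactive_now_ms - rawactive_prev_ms) / interval_ms

    print(avg_outstanding(100_000, 115_000, 5_000))  # 15000/5000 = 3.0 I/Os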
kernel.pernode.cpu.nice [total nice user CPU time from /proc/stat for each node, including guest time]
kernel.pernode.cpu.user [total user CPU time from /proc/stat for each node, including guest CPU time]
kernel.percpu.cpu.guest [percpu guest CPU time]
    Per-CPU time spent running (virtual) guest operating systems.
kernel.all.cpu.guest [total virtual guest CPU time for all CPUs]
kernel.percpu.cpu.intr [percpu interrupt CPU time]
    Total time spent processing interrupts on each CPU (this includes both soft and hard interrupt processing time).
kernel.percpu.cpu.wait.total [percpu wait CPU time]
    Per-CPU I/O wait CPU time - time spent with outstanding I/O requests.
disk.all.total [total (read+write) operations, summed for all disks]
    Cumulative number of disk read and write operations since system boot time (subject to counter wrap), summed over all disk devices.
disk.dev.total [per-disk total (read+write) operations]
    Cumulative number of disk read and write operations since system boot time (subject to counter wrap).
disk.all.blkwrite [block write operations, summed for all disks]
    Cumulative number of disk block write operations since system boot time (subject to counter wrap), summed over all disk devices.
disk.all.blkread [block read operations, summed for all disks]
    Cumulative number of disk block read operations since system boot time (subject to counter wrap), summed over all disk devices.
disk.all.write [total write operations, summed for all disks]
    Cumulative number of disk write operations since system boot time (subject to counter wrap), summed over all disk devices.
disk.all.read [total read operations, summed for all disks]
    Cumulative number of disk read operations since system boot time (subject to counter wrap), summed over all disk devices.
kernel.all.cpu.idle [total idle CPU time from /proc/stat for all CPUs]
kernel.all.cpu.sys [total sys CPU time from /proc/stat for all CPUs]
kernel.all.cpu.nice [total nice user CPU time from /proc/stat for all CPUs, including guest time]
kernel.all.cpu.user [total user CPU time from /proc/stat for all CPUs, including guest CPU time]
kernel.all.blocked [number of currently blocked processes from /proc/stat]
kernel.all.running [number of currently running processes from /proc/stat]
kernel.all.sysfork [fork rate metric from /proc/stat]
kernel.all.pswitch [context switches metric from /proc/stat]
kernel.all.intr [interrupt count metric from /proc/stat]
    The value is the first value from the intr field in /proc/stat, which is a counter of the total number of interrupts processed. The value is normally converted to a rate (count/second). This counter usually increases by at least HZ/second, i.e. the clock interrupt rate, which is usually 100/second. See also kernel.percpu.intr and kernel.percpu.interrupts to get the breakdown of interrupt count by interrupt type and which CPU processed each one.
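kernel.all.intr above is "normally converted to a rate (count/second)"; that conversion is the same counter-to-rate step implied for the other cumulative counters in this catalogue, sketched here with invented values:

    # Sketch of counter-to-rate conversion as described for kernel.all.intr:
    # (later value - earlier value) / elapsed seconds.
    def rate_per_sec(prev_count, now_count, elapsed_sec):
        return (now_count - prev_count) / elapsed_sec

    print(rate_per_sec(1_000_000, 1_000_550, 5.0))  # 550/5 = 110 interrupts/sec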
swap.out
number of swap out operations

swap.in
number of swap in operations

swap.pagesout
pages written to swap devices due to demand for physical memory

swap.pagesin
pages read from swap devices due to demand for physical memory

disk.dev.blkwrite
per-disk block write operations
Cumulative number of disk block write operations since system boot time (subject to counter wrap).

disk.dev.blkread
per-disk block read operations
Cumulative number of disk block read operations since system boot time (subject to counter wrap).

disk.dev.write
per-disk write operations
Cumulative number of disk write operations since system boot time (subject to counter wrap).

disk.dev.read
per-disk read operations
Cumulative number of disk read operations since system boot time (subject to counter wrap).

kernel.percpu.cpu.idle
percpu idle CPU time metric from /proc/stat

kernel.percpu.cpu.sys
percpu sys CPU time metric from /proc/stat

kernel.percpu.cpu.nice
percpu nice user CPU time metric from /proc/stat, including guest CPU time

kernel.percpu.cpu.user
percpu user CPU time metric from /proc/stat, including guest CPU time

xfs.buffer.get_read
number of buffer get calls requiring immediate device reads

xfs.buffer.page_found
number of hits in the page cache when looking for a page

xfs.buffer.page_retries
number of retry attempts when allocating a page for insertion in a buffer

xfs.buffer.miss_locked
number of requests for a locked buffer which failed due to no buffer

xfs.buffer.busy_locked
number of non-blocking requests for a locked buffer which failed

xfs.buffer.get_locked_waited
number of requests for a locked buffer which waited

xfs.buffer.get_locked
number of requests for a locked buffer which succeeded

xfs.buffer.create
number of buffers created

xfs.buffer.get
number of request buffer calls

xfs.quota.cachehits
value from xs_qm_dqcachehits field of struct xfsstats

xfs.read_bytes
number of bytes read in XFS file system read operations
This is the number of bytes read via read(2) system calls to files in XFS file systems. It can be used in conjunction with the read_calls count to calculate the average size of the read operations to files in XFS file systems (see the sketch below).

xfs.read
number of XFS file system read operations
This is the number of read(2) system calls made to files in XFS file systems.

xfs.write_bytes
number of bytes written in XFS file system write operations
This is the number of bytes written via write(2) system calls to files in XFS file systems. It can be used in conjunction with the write_calls count to calculate the average size of the write operations to files in XFS file systems.

xfs.write
number of XFS file system write operations
This is the number of write(2) system calls made to files in XFS file systems.

xfs.log.noiclogs
count of failures for immediate get of buffered/internal
This variable keeps track of times when a logged transaction cannot get any log buffer space. When this occurs, all of the internal log buffers are busy flushing their data to the physical on-disk log.

xfs.log.blocks
write throughput to the physical XFS log
This variable counts the number of Kbytes of information being written to the physical log partitions of XFS filesystems. Log data traffic is proportional to the level of meta-data updating. The rate at which log data gets written depends on the size of internal log buffers and disk write speed. Therefore, filesystems with very high meta-data updating may need to stripe the log partition or put the log partition on a separate drive.
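Combining the byte and call counters above gives the average I/O size over a sampling interval. A small illustrative sketch; the delta values are assumed to come from two samples of xfs.read_bytes and xfs.read (or the write equivalents):

    def avg_io_size(delta_bytes, delta_calls):
        # average bytes per read(2)/write(2) call over the interval
        return delta_bytes / delta_calls if delta_calls else 0.0

    # e.g. 8 MiB read via 2048 read(2) calls in the interval:
    print(avg_io_size(8 * 1024 * 1024, 2048))   # -> 4096.0 bytes per call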
xfs.log.writes
number of buffer writes going to the disk from the log
This variable counts the number of log buffer writes going to the physical log partitions of XFS filesystems. Log data traffic is proportional to the level of meta-data updating. Log buffer writes get generated when they fill up or external syncs occur.

proc.runq.blocked
number of processes in uninterruptible sleep
Instantaneous number of processes in uninterruptible sleep or parked; state 'D' in ps(1).

proc.runq.runnable
number of runnable (on run queue) processes
Instantaneous number of runnable (on run queue) processes; state 'R' in ps(1) (see the sketch below).

proc.nprocs
instantaneous number of processes
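The two proc.runq metrics correspond to per-process state codes that can also be read directly from /proc. This is not the proc PMDA's actual implementation, just an illustrative sketch of what is being counted:

    import os

    def runq_counts():
        runnable = blocked = 0
        for pid in filter(str.isdigit, os.listdir('/proc')):
            try:
                with open(f'/proc/{pid}/stat') as f:
                    # the state code is the first field after the
                    # parenthesised command name
                    state = f.read().rsplit(')', 1)[1].split()[0]
            except OSError:
                continue              # process exited while scanning
            if state == 'R':
                runnable += 1         # proc.runq.runnable
            elif state == 'D':
                blocked += 1          # proc.runq.blocked
        return runnable, blocked

    print(runq_counts())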
{"pid":19502} > {"pid":62} @ {"pid":64}  {"pid":904} x {"pid":1144} F {"pid":17944} B {"pid":66} C {"pid":67} \ {"pid":5468} E {"pid":69}  {"pid":909} F {"pid":70}  {"pid":910} L7 {"pid":19511} G {"pid":71} H {"pid":72}  {"pid":193} I {"pid":73} J {"pid":74} K {"pid":75}  {"pid":195} .; {"pid":11835} L {"pid":76}  {"pid":196} .< {"pid":11836} M {"pid":77} N {"pid":78} 8 {"pid":14478} O {"pid":79} P {"pid":80} M {"pid":19881} Q {"pid":81} R {"pid":82} B {"pid":322} ! {"pid":8602} S {"pid":83}  {"pid":443} 3 {"pid":563} ! {"pid":8603} T {"pid":84} -T {"pid":11604} F, {"pid":17964} M {"pid":19885} U {"pid":85} V {"pid":86} W {"pid":87} X {"pid":88}  {"pid":3808} . {"pid":11968} Y {"pid":89}  {"pid":209}  {"pid":929} . {"pid":11969} F1 {"pid":17969} Z {"pid":90} [ {"pid":91}  {"pid":211} \ {"pid":92}  {"pid":5612} F4 {"pid":17972} ] {"pid":93} = {"pid":573} ^ {"pid":94} V {"pid":22175} _ {"pid":95}  ' {"pid":2855}  {"pid":456} F8 {"pid":17976}  {"pid":5617} b {"pid":98} J {"pid":19178} c {"pid":99} d {"pid":100} e {"pid":101}  {"pid":461} f {"pid":102}  {"pid":222} g {"pid":103} 07 {"pid":12343} h {"pid":104} 08 {"pid":12344} i {"pid":105}  {"pid":5385}  {"pid":5505} 09 {"pid":12345} j {"pid":106} Z {"pid":346} 0: {"pid":12346} k {"pid":107} 0; {"pid":12347} E {"pid":17867} l {"pid":108} m {"pid":109} 0= {"pid":12349} K {"pid":19429} n {"pid":110} ^ {"pid":350} 0> {"pid":12350} 0? {"pid":12351} p {"pid":112} 0@ {"pid":12352} TY {"pid":21593} 0A {"pid":12353} K {"pid":19433} r {"pid":114} B {"pid":834}  {"pid":475} UL {"pid":21836} t {"pid":116} d {"pid":356} K {"pid":19436} u {"pid":117} 8= {"pid":14397} K {"pid":19437} UO {"pid":21839} G {"pid":839}  e$instantaneous resident size of process, excluding page table and task structure.e* all current processes*e$instantaneous resident size of process, excluding page table and task structure.e #,hMiP   !"#$&'()*+,-./023456789:;<=>@BCEFGHIJKLMNOPQRSTUVWXYZ[\]^_bcdefghijklmnprtuxyz#BZ^dhko3=BGQSTwx 'EG \!!-T.;.<..0708090:0;0=0>0?0@0A000000001^1_1`1a1b1c1d1e111111117778=8A8Z8[8EbEFF,F1F4F8HUHVHWHXHYIUJKKKKKKL.L7MMNQPQTYTqULUOUPUQURUSVV$D]v,Lm&Bgx)>Sx$;Pe'>Sh9\o1?[o ";Qj  * : W i x  5 V m   0 G ^ u  " @ ^  D ` En~)PjANn~*Mj"/Haz'@Yr8Qj0Iby(?Vr$Rx '?Kq"/;c>d000001 /sbin/init000002 (kthreadd)000003 (pool_workqueue_release)000004 (kworker/R-rcu_g)000005 (kworker/R-rcu_p)000006 (kworker/R-slub_)000007 (kworker/R-netns)000010 (kworker/0:0H-events_highpri)000011 (kworker/u30:0-ext4-rsv-conversion)000012 (kworker/R-mm_pe)000013 (rcu_tasks_kthread)000014 (rcu_tasks_rude_kthread)000015 (rcu_tasks_trace_kthread)000016 (ksoftirqd/0)000017 (rcu_preempt)000018 (migration/0)000019 (idle_inject/0)000020 (cpuhp/0)000021 (cpuhp/1)000022 (idle_inject/1)000023 (migration/1)000024 (ksoftirqd/1)000025 (kworker/1:0-events)000026 (kworker/1:0H-events_highpri)000027 (cpuhp/2)000028 (idle_inject/2)000029 (migration/2)000030 (ksoftirqd/2)000031 (kworker/2:0-ipv6_addrconf)000032 (kworker/2:0H-events_highpri)000033 (cpuhp/3)000034 (idle_inject/3)000035 (migration/3)000036 (ksoftirqd/3)000038 (kworker/3:0H-events_highpri)000039 (cpuhp/4)000040 (idle_inject/4)000041 (migration/4)000042 (ksoftirqd/4)000043 (kworker/4:0-cgroup_destroy)000044 (kworker/4:0H-events_highpri)000045 (cpuhp/5)000046 (idle_inject/5)000047 (migration/5)000048 (ksoftirqd/5)000050 (kworker/5:0H-events_highpri)000051 (cpuhp/6)000052 (idle_inject/6)000053 (migration/6)000054 (ksoftirqd/6)000055 (kworker/6:0-events)000056 (kworker/6:0H-kblockd)000057 (cpuhp/7)000058 (idle_inject/7)000059 (migration/7)000060 
proc.memory.size
instantaneous virtual size of process, excluding page table and task structure

proc.psinfo.maj_flt
count of page faults other than reclaims
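For illustration, a rough user-space analogue of what proc.memory.rss and proc.memory.size report, reading the VmRSS and VmSize lines from /proc/<pid>/status; the proc PMDA itself derives these values differently, so treat this only as an approximation:

    def vm_stats(pid='self'):
        # VmRSS ~ proc.memory.rss, VmSize ~ proc.memory.size (values in kB)
        stats = {}
        with open(f'/proc/{pid}/status') as f:
            for line in f:
                if line.startswith(('VmRSS', 'VmSize')):
                    key, value = line.split(':', 1)
                    stats[key] = int(value.split()[0])
        return stats

    print(vm_stats())    # e.g. {'VmSize': 16640, 'VmRSS': 9216}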