• A case study: idle JVM heap, but the application process occupies high memory


    Notes

    A complete record of the steps taken to investigate this memory-leak suspicion.

    JVM parameter settings

    ENV JAVA_OPTS="-Xrs -Dfile.encoding=utf-8 \
      -Xmx2000m -Xms1200m -Xmn750m -Xss256k -XX:MetaspaceSize=400m -XX:MaxMetaspaceSize=800m \
      -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSInitiatingOccupancyOnly \
      -XX:CMSInitiatingOccupancyFraction=75 \
      -XX:+PrintClassHistogram -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
      -XX:+PrintHeapAtGC -Xloggc:/home/ewei/dump/open_gc.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/ewei/dump/open_java_error.hprof"

    Symptoms

     

     jstat showed the JVM heap was idle at the time; no heap memory leak had occurred.

    JVM requested space: 213952+213952+214080+1280000+180864 = 2102848 KB = 2053.5625 MB ≈ 2.0 GB. Under the lazy-allocation principle, the full GCs show this space has actually been written to, so the process occupying 2 GB is normal. What remains to confirm is the Xms shrink behavior, plus roughly 500 MB of usage beyond what the JVM accounts for.

    If this excess stays flat, I think it is acceptable; it may simply be memory the Linux process itself needs at startup.

    One obvious problem here is that S0/S1 are oversized and wasteful, but that is not the focus right now. For the collection rules, see:

    https://www.cnblogs.com/LQBlog/p/9205848.html#autoid-1-0-0
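    As a sanity check, the capacity arithmetic above can be scripted. A minimal sketch (the column values are the ones quoted from jstat -gc; `committedMb` is an illustrative helper, not part of the original investigation):

    ```java
    public class JstatHeapSum {
        // Sum jstat -gc capacity columns (KB): S0C, S1C, EC, OC, MC
        static double committedMb(double... capacitiesKb) {
            double totalKb = 0;
            for (double c : capacitiesKb) totalKb += c;
            return totalKb / 1024.0; // KB -> MB
        }

        public static void main(String[] args) {
            // figures from this incident: S0C, S1C, EC, OC, MC
            double mb = committedMb(213952, 213952, 214080, 1280000, 180864);
            System.out.println(mb + " MB"); // ≈ 2053.56 MB ≈ 2.0 GB
        }
    }
    ```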

    How a Java process's memory footprint is calculated

    Reference: https://www.cnblogs.com/LQBlog/p/9200810.html#_label8

    Possible cause analysis

    Memory allocation: the operating system vs. the JVM

    The JVM's automatic memory management really just requests one large block of memory from the operating system up front, then performs its own "automatic memory management" inside that block. Before a Java object is created, a slice of this block is carved out for it; during GC, the memory the object occupied is merely cleared and marked free. It is not returned to the operating system.

    Why not return the memory to the operating system?

    The JVM does return memory to the OS, but the cost is high, so it does not do so lightly. Moreover, different garbage collectors use different allocation algorithms, and the cost of returning memory differs accordingly.

    For example, a sweep-based collector allocates via a free list: in short, the large requested block is divided into N small regions that are organized in a linked-list structure.

    Each data region can hold N objects. After a GC, some objects are reclaimed, but the data region may still contain other live objects, so the region as a whole certainly cannot be released.

    So returning memory to the operating system is not that simple, and executing it is expensive; naturally the JVM does not return memory after every GC.
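    The constraint described above (a region with even one live object cannot be given back to the OS) can be modeled in a toy sketch. Purely illustrative; this is not the JVM's real free-list structure:

    ```java
    import java.util.ArrayList;
    import java.util.List;

    // Toy model of sweep + free-list: the heap is split into fixed regions;
    // a region can only be returned to the OS when no live object remains in it.
    public class FreeListSketch {
        static class Region {
            final List<String> live = new ArrayList<>(); // objects still reachable
            boolean releasableToOs() { return live.isEmpty(); }
        }

        public static void main(String[] args) {
            Region r = new Region();
            r.live.add("survivor");                 // one live object pins the whole region
            System.out.println(r.releasableToOs()); // false: cannot unmap this region
            r.live.clear();                         // once every object in it has died...
            System.out.println(r.releasableToOs()); // true: the whole region is free
        }
    }
    ```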

    How to make it return memory

    Set Xms lower than Xmx; see https://www.cnblogs.com/LQBlog/p/9194927.html and search for "xms注意事项". Because returning memory has a cost, -Xms and -Xmx are usually set to the same value.

    However, this return mechanism differs across garbage collectors and even across JDK versions. CMS, for instance, prioritizes pause time: a full GC may not reclaim everything, and even when it does, the memory is not released to the OS. You have to call System.gc() yourself, and each call releases some memory back to the OS. The test data below shows that the JDK 8 default collector instead releases about 50% of the free space.

    Test code

    import java.util.LinkedList;
    import java.util.List;
    import javax.servlet.http.HttpServletRequest;
    import org.springframework.stereotype.Controller;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.ResponseBody;

    @Controller
    public class HelloWordController {
        // strong reference: allocated byte[] blocks stay live until /clean or /remove
        List<byte[]> arrList = new LinkedList<>();

        @RequestMapping("/hello")
        @ResponseBody
        public String test(HttpServletRequest request) {
            for (int i = 0; i < 100; i++) {
                arrList.add(new byte[1024333]); // ~1 MB per allocation
            }
            return String.valueOf(arrList.size());
        }

        @RequestMapping("/clean")
        @ResponseBody
        public String clean(HttpServletRequest request) {
            arrList.clear();
            return "true";
        }

        @RequestMapping("/remove")
        @ResponseBody
        public String remove(HttpServletRequest request) {
            for (int i = 0; i < 100; i++) {
                arrList.remove(arrList.size() - 1);
            }
            return String.valueOf(arrList.size());
        }

        @RequestMapping("/gc")
        @ResponseBody
        public String gc(HttpServletRequest request) {
            System.gc();
            return "true";
        }
    }
     
    
    
    Parameters
        -Xrs -server -verbose:gc -Xms200m -Xmx1024m -XX:MaxHeapFreeRatio=40 -XX:MetaspaceSize=200m -XX:MaxMetaspaceSize=300m -Xss256k  -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=0 -XX:+CMSClassUnloadingEnabled -XX:LargePageSizeInBytes=128M -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+PrintClassHistogram -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC 
    
    ------------------------------------CMS-------------------------------
     Called /hello 10 times and /remove 8 times, leaving 2 batches. Process RES before gc: 1095 MB; after gc: 1087 MB.
    Before GC:
    S0C    S1C    S0U    S1U      EC       EU        OC         OU       MC     MU    CCSC   CCSU   YGC     YGCT    FGC    FGCT     GCT   
    34944.0 34944.0  0.0   30014.8 279616.0 279616.0  699072.0   698679.4  33792.0 31419.8 4352.0 3951.5     19    0.360  20      0.419    0.779
    After full GC (heap had expanded):
     jstat -gc 31509
     S0C    S1C    S0U    S1U      EC       EU        OC         OU       MC     MU    CCSC   CCSU   YGC     YGCT    FGC    FGCT     GCT   
    34944.0 34944.0  0.0   1041.0 279616.0 102123.5  699072.0   607755.3  33792.0 31430.6 4352.0 3947.0     19    0.369  72      0.515    0.884
    
    ------------------------------------JDK 8 default-------------------------------
    
    Parameters
    -Xms200m -Xmx1024m -XX:MaxHeapFreeRatio=40 -XX:MetaspaceSize=200m -XX:MaxMetaspaceSize=300m -Xss256k  -XX:LargePageSizeInBytes=128M  -XX:SoftRefLRUPolicyMSPerMB=0 
    Called /hello 7 times and /remove 5 times, leaving 2 batches. Process RES before gc: 1018 MB; after gc: 851 MB.
    Before gc:
    liqiangdeMacBook-Pro:~ liqiang$ jstat -gc 31627
     S0C    S1C    S0U    S1U      EC       EU        OC         OU       MC     MU    CCSC   CCSU   YGC     YGCT    FGC    FGCT     GCT   
    88064.0 97280.0 59020.6  0.0   90112.0  15137.7   699392.0   645524.3  33792.0 31418.2 4352.0 3952.4     18    0.286   5      0.316    0.603
    After gc:
     S0C    S1C    S0U    S1U      EC       EU        OC         OU       MC     MU    CCSC   CCSU   YGC     YGCT    FGC    FGCT     GCT   
    88064.0 97280.0  0.0    0.0   90112.0  26444.7   473088.0   283694.1  33792.0 31443.3 4352.0 3954.0     18    0.286   6      0.362    0.648
    After collection: (59020+15137+645524+31418)-(26444+283694+31443) = 409518 KB ≈ 399.9 MB released, i.e. about 50% of the free space.
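    That release arithmetic can be made checkable. A minimal sketch (the values are quoted from the jstat "used" columns above; `releasedMb` is an illustrative helper):

    ```java
    public class FreeReleaseCalc {
        // Difference of summed jstat "used" columns (S0U/S1U + EU + OU + MU), in MB
        static double releasedMb(double[] usedBeforeKb, double[] usedAfterKb) {
            double before = 0, after = 0;
            for (double v : usedBeforeKb) before += v;
            for (double v : usedAfterKb) after += v;
            return (before - after) / 1024.0; // KB -> MB
        }

        public static void main(String[] args) {
            double mb = releasedMb(new double[]{59020, 15137, 645524, 31418},
                                   new double[]{26444, 283694, 31443});
            System.out.println(mb + " MB"); // ≈ 399.9 MB returned to the OS
        }
    }
    ```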

    Lazy allocation

    When a process requests memory, physical memory is not allocated immediately. A block of virtual address space is assigned, and physical pages are only allocated, via the page-fault mechanism, once data is actually written into that space. This is what shows up in the process's RES metric.

    You can roughly think of OS memory allocation as "lazy": allocation alone does not cause actual occupancy; memory is only occupied, affecting RES, once data is written.

    So even if you configure Xms6G, the process will not occupy 6 GB right after startup; the actual usage depends on whether you have written data into that 6 GB region.

    Caveat: when the JVM grows the heap, if the system does not have enough available memory to satisfy the request, the OS OOM killer may be triggered and kill the process.

    Also, because of lazy allocation, the capacity (C) columns in jstat after heap growth do not necessarily reflect physical occupancy; only once the space has actually been used and written to (e.g. a full GC has cycled through it) is physical memory really allocated.
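    To watch RES from inside the process, the value top reports can be read from /proc. A Linux-only sketch, assuming /proc is available; `ResReader` is an illustrative helper, not something used in this investigation:

    ```java
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class ResReader {
        // Parse the VmRSS line (resident set size, i.e. top's RES) from /proc/self/status
        static long vmRssKb() {
            try {
                for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
                    if (line.startsWith("VmRSS:")) {
                        return Long.parseLong(line.replaceAll("[^0-9]", "")); // value is in KB
                    }
                }
            } catch (IOException e) {
                // not on Linux, or /proc unavailable
            }
            return -1;
        }

        public static void main(String[] args) {
            System.out.println("RES = " + vmRssKb() + " KB");
        }
    }
    ```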

    Analyzing the possible cause

    top - 10:25:21 up 14 days, 12:09,  2 users,  load average: 0.42, 0.35, 0.33
    Tasks: 103 total,   1 running, 102 sleeping,   0 stopped,   0 zombie
    %Cpu(s):  3.5 us,  1.2 sy,  0.0 ni, 95.3 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
    KiB Mem :  3733564 total,   272588 free,  2379472 used,  1081504 buff/cache
    KiB Swap:        0 total,        0 free,        0 used.  1119908 avail Mem 
    
      PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                       
    11126 root      20   0 5881916   1.9g  17356 S   8.7 52.2  52:56.66 java   

    JVM memory allocation

    jstat -gc 1
     S0C    S1C    S0U    S1U      EC       EU        OC         OU       MC       MU      CCSC   CCSU      YGC     YGCT    FGC     FGCT     GCT   
    2560.0 2560.0 1114.1  0.0   250880.0 230464.7  512000.0   245147.4  164096.0 151513.7 14336.0 12074.5   4090   32.943   6      4.744   37.687
    2560.0+2560.0+250880.0+512000.0+164096.0 = 932096 KB ≈ 910.25 MB, yet the process occupies 1.9 GB. Where does the extra ~1 GB come from? NMT may be needed to see clearly; an off-heap (native) memory leak is suspected.

    Suspecting an off-heap memory leak

    The jstat calculation above already includes Metaspace, so the excess may come from some other off-heap category. Use NMT to analyze which category is high.

    Parameter: -XX:MaxDirectMemorySize limits the size of direct (off-heap) memory.

    There are two kinds of ByteBuffer:
    heap ByteBuffer -> -Xmx
    1. A heap ByteBuffer is allocated inside the JVM heap and is garbage-collected directly by the JVM.
    direct ByteBuffer -> -XX:MaxDirectMemorySize
    2. A direct ByteBuffer is allocated outside the JVM heap via JNI. Its usage cannot be inspected through the usual JVM tools; you can only observe it through top.
    The JVM heap size is set with -Xmx; similarly, direct ByteBuffer memory can be capped with -XX:MaxDirectMemorySize. When the off-heap memory allocated to direct ByteBuffers reaches this limit, a full GC is triggered. The value has an upper bound, by default influenced by the heap's available space, at most
    sun.misc.VM.maxDirectMemory(), which can be read in code to obtain the effective -XX:MaxDirectMemorySize. If the full GC still fails to reclaim enough, an OOM is thrown.

    @RequestMapping("/max")
    @ResponseBody
    public String max(HttpServletRequest request) {
        return String.valueOf(sun.misc.VM.maxDirectMemory());
    }
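    A minimal sketch distinguishing the two buffer kinds at runtime (standalone illustration, not part of the controller above):

    ```java
    import java.nio.ByteBuffer;

    public class BufferKinds {
        // byte[] inside the Java heap, counted against -Xmx, GC-managed
        static ByteBuffer heapBuf()   { return ByteBuffer.allocate(1024); }
        // native memory, counted against -XX:MaxDirectMemorySize
        static ByteBuffer directBuf() { return ByteBuffer.allocateDirect(1024); }

        public static void main(String[] args) {
            System.out.println(heapBuf().isDirect());   // false
            System.out.println(directBuf().isDirect()); // true
            System.out.println(heapBuf().hasArray());   // true: backed by an on-heap array
        }
    }
    ```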

    Analyzing with NMT plus pmap

    Adjusted the JVM parameters to drop the CMS collector, because CMS does not release memory back to the OS; a manual System.gc() call is needed for that.

    java -XX:NativeMemoryTracking=summary -Djava.util.logging.config.file=/home/ewei/app/tomcat-ewei-open/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djdk.tls.ephemeralDHKeySize=2048 -Dorg.apache.catalina.security.SecurityListener.UMASK=0027 -Xrs -server -verbose:gc -Xms750m -Xmx2000m -XX:MaxHeapFreeRatio=40 -XX:MetaspaceSize=200m -XX:MaxMetaspaceSize=300m -Xss256k -XX:+DisableExplicitGC -XX:LargePageSizeInBytes=128M -XX:+UseFastAccessorMethods -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+PrintClassHistogram -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintHeapAtGC -Xloggc:/home/ewei/log/gc_ewei-com-open1-%t.log -XX:+HeapDumpOnOutOfMemoryError -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/ewei/dump/ewei-com-open1-oom-dump-%t.hprof -jar /var/lib/jetty/start.jar -Djetty.port=8280

    jstat metrics

    jstat -gc 1
     S0C    S1C    S0U    S1U      EC       EU        OC         OU       MC         MU    CCSC   CCSU      YGC     YGCT    FGC    FGCT     GCT   
    3072.0 3072.0  0.0   1183.6 249856.0 130285.9  512000.0   353560.2  167680.0 156222.1 14592.0 12487.1   5497   48.958   7      4.911   53.868
    Heap: 3072.0+3072.0+249856+512000 = 768000 KB = 750 MB, i.e. the heap has not grown beyond Xms yet
    Metaspace: 167680 KB

    pmap metrics

    The anon segments are memory the process requested; Linux does not know what they are for. Cross-reference them with NMT: since the JVM requested them, the JVM knows.

    The [heap] segment is not the JVM heap. According to the references I found, it is the native heap segment, from which the C library's malloc obtains memory.

    Or it is memory Linux needs to start the Java program; other services have this segment too, though with different sizes. Treat it as startup overhead: as long as it does not keep growing, it is fine.

    Follow-up note: after the jcmd monitoring was removed, this segment dropped to around 200 MB.

    [liqiang@ewei-com-open1 pmap1]$ cat pmap-2022-05-19-14-52.txt | head -n 100
    1: java -XX:NativeMemoryTracking=summary -Djava.util.logging.config.file=/home/ewei/app/tomcat-ewei-open/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djdk.tls.ephemeralDHKeySize=2048 -Dorg.apache.catalina.securit
    Address           Kbytes     PSS   Dirty    Swap  Mode  Mapping
    0000000083000000  512000  512000  512000       0  rw-p    [ anon ]
    00000000a2400000  853504       0       0       0  ---p    [ anon ]
    00000000d6580000  256000  256000  256000       0  rw-p    [ anon ]
    00000000e5f80000  426496       0       0       0  ---p    [ anon ]
    0000000100000000   14592   14512   14512       0  rw-p    [ anon ]
    0000000100e40000 1033984       0       0       0  ---p    [ anon ]
    0000558e0d2c1000       4       4       0       0  r--p  /usr/lib/jvm/java-1.8-openjdk/jre/bin/java
    0000558e0d2c2000       4       4       0       0  r-xp  /usr/lib/jvm/java-1.8-openjdk/jre/bin/java
    0000558e0d2c3000       4       0       0       0  r--p  /usr/lib/jvm/java-1.8-openjdk/jre/bin/java
    0000558e0d2c4000       4       4       4       0  r--p  /usr/lib/jvm/java-1.8-openjdk/jre/bin/java
    0000558e0d2c5000       4       4       4       0  rw-p  /usr/lib/jvm/java-1.8-openjdk/jre/bin/java
    0000558e0e185000 1457840  563680  563680       0  rw-p  [heap]
    00007fb6cf4ec000      12       0       0       0  ---p    [ anon ]
    00007fb6cf4ef000     248       8       8       0  rw-p    [ anon ]
    ..... other small entries omitted
    ----------------  ------  ------  ------  ------
    total         5804300 1623024 1604816       0
    Cross-reference with the GC log below:
    d6580000~e5f80000: young generation
    83000000~a2400000: old generation
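    Matching a pmap segment to a generation is just an interval check against the [low, high) ranges printed in the GC log. A sketch; `AddrMatch` is a hypothetical helper:

    ```java
    public class AddrMatch {
        // Does a pmap segment start address fall inside [low, high)?
        static boolean inRange(String addrHex, String lowHex, String highHex) {
            long a = Long.parseUnsignedLong(addrHex, 16);
            return a >= Long.parseUnsignedLong(lowHex, 16)
                && a <  Long.parseUnsignedLong(highHex, 16);
        }

        public static void main(String[] args) {
            // pmap segment 0xd6580000 vs PSYoungGen [0xd6580000, 0xe5f80000) from the GC log
            System.out.println(inRange("d6580000", "d6580000", "e5f80000")); // true -> young gen
            // pmap segment 0x83000000 vs ParOldGen [0x83000000, 0xa2400000)
            System.out.println(inRange("83000000", "83000000", "a2400000")); // true -> old gen
        }
    }
    ```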

    NMT metrics

    750+172+108+148+72+15+26 = 1291 MB ≈ 1.26 GB, so roughly 500 MB is still unaccounted for.

    The ~500 MB excess

    I suspect the [heap] segment. If this excess stays flat, I think it is acceptable; it may simply be memory the Linux process itself needs at startup.

    reserved is the virtual memory requested; committed is the physical memory in use.

    Total: reserved=3686MB, committed=1298MB +1MB
    
    -                 Java Heap (reserved=2000MB, committed=750MB)
                                (mmap: reserved=2000MB, committed=750MB)
     
    -                     Class (reserved=1182MB, committed=172MB)
                                (classes #18759 +15)
                                (malloc=8MB #56167 +106)
                                (mmap: reserved=1174MB, committed=164MB)
     
    -                    Thread (reserved=108MB, committed=108MB)
                                (thread #403)
                                (stack: reserved=106MB, committed=106MB)
                                (malloc=1MB #2014)
                                (arena=1MB #802)
     
    -                      Code (reserved=272MB, committed=148MB)
                                (malloc=28MB #23338 +7)
                                (mmap: reserved=244MB, committed=120MB)
     
    -                        GC (reserved=76MB, committed=72MB)
                                (malloc=3MB #913 +15)
                                (mmap: reserved=73MB, committed=69MB)
     
    -                  Compiler (reserved=1MB, committed=1MB)
                                (malloc=1MB #3068 +8)
     
    -                  Internal (reserved=15MB, committed=15MB)
                                (malloc=15MB #29829 +31)
     
    -                    Symbol (reserved=26MB, committed=26MB)
                                (malloc=22MB #245084 +15)
                                (arena=4MB #1)
     
    -    Native Memory Tracking (reserved=6MB, committed=6MB)
                                (tracking overhead=6MB)

    The [heap] segment is the prime suspect.

    Normally, with NMT configured at the detail level, you can map address blocks to the corresponding JVM categories, but we cannot run that in production. Example detail output:

    0xd5a00000 - 0xe5a00000 is the Java heap; matching it against the pmap addresses tells you what that address range is used for.
    14179:
    
    Native Memory Tracking:
    
    Total: reserved=653853KB, committed=439409KB
    -                 Java Heap (reserved=262144KB, committed=262144KB)
                                (mmap: reserved=262144KB, committed=262144KB) 
    
    -                     Class (reserved=82517KB, committed=81725KB)
                                (classes #17828)
                                (malloc=1317KB #26910) 
                                (mmap: reserved=81200KB, committed=80408KB) 
    
    -                    Thread (reserved=20559KB, committed=20559KB)
                                (thread #58)
                                (stack: reserved=20388KB, committed=20388KB)
                                (malloc=102KB #292) 
                                (arena=69KB #114)
    
    -                      Code (reserved=255309KB, committed=41657KB)
                                (malloc=5709KB #11730) 
                                (mmap: reserved=249600KB, committed=35948KB) 
    
    -                        GC (reserved=1658KB, committed=1658KB)
                                (malloc=798KB #676) 
                                (mmap: reserved=860KB, committed=860KB) 
    
    -                  Compiler (reserved=130KB, committed=130KB)
                                (malloc=31KB #357) 
                                (arena=99KB #3)
    
    -                  Internal (reserved=5039KB, committed=5039KB)
                                (malloc=5007KB #20850) 
                                (mmap: reserved=32KB, committed=32KB) 
    
    -                    Symbol (reserved=18402KB, committed=18402KB)
                                (malloc=14972KB #221052) 
                                (arena=3430KB #1)
    
    -    Native Memory Tracking (reserved=2269KB, committed=2269KB)
                                (malloc=53KB #1597) 
                                (tracking overhead=2216KB)
    
    
    -               Arena Chunk (reserved=187KB, committed=187KB)
                                (malloc=187KB) 
    
    -                   Unknown (reserved=5640KB, committed=5640KB)
                                (mmap: reserved=5640KB, committed=5640KB) 
     . . .
    Virtual memory map:
    
    [0xceb00000 - 0xcec00000] reserved 1024KB for Class from
    [0xced00000 - 0xcee00000] reserved 1024KB for Class from
    . . .
    [0xcf85e000 - 0xcf8af000] reserved and committed 324KB for Thread Stack from
    [0xd4eaf000 - 0xd4f00000] reserved and committed 324KB for Thread Stack from
        [0xf687866e] Thread::record_stack_base_and_size()+0x1be
        [0xf68818bf] JavaThread::run()+0x2f
        [0xf67541f9] java_start(Thread*)+0x119
        [0xf7606395] start_thread+0xd5
    [0xd5a00000 - 0xe5a00000] reserved 262144KB for Java Heap from
    . . .
    [0xe5e00000 - 0xf4e00000] reserved 245760KB for Code from
    [0xf737f000 - 0xf7400000] reserved 516KB for GC from
    [0xf745d000 - 0xf747d000] reserved 128KB for Unknown from
    [0xf7700000 - 0xf7751000] reserved and committed 324KB for Thread Stack from
    [0xf7762000 - 0xf776a000] reserved and committed 32KB for Internal from

     Still, we can recover part of this mapping from the GC log, for example:

    Compare the highlighted address ranges against pmap to learn what each block of memory is used for.

    Heap after GC invocations=7735 (full 11):
     PSYoungGen      total 253440K, used 1549K [0x00000000d6580000, 0x00000000e5f80000, 0x0000000100000000)
      eden space 250880K, 0% used [0x00000000d6580000,0x00000000d6580000,0x00000000e5a80000)
      from space 2560K, 60% used [0x00000000e5d00000,0x00000000e5e836f0,0x00000000e5f80000)
      to   space 2560K, 0% used [0x00000000e5a80000,0x00000000e5a80000,0x00000000e5d00000)
     ParOldGen       total 512000K, used 483338K [0x0000000083000000, 0x00000000a2400000, 0x00000000d6580000)
      object space 512000K, 94% used [0x0000000083000000,0x00000000a0802848,0x00000000a2400000)
     Metaspace       used 160423K, capacity 166904K, committed 169728K, reserved 1204224K
      class space    used 12873K, capacity 14138K, committed 14592K, reserved 1048576K
    }
    {Heap before GC invocations=7736 (full 11):
     PSYoungGen      total 253440K, used 252429K [0x00000000d6580000, 0x00000000e5f80000, 0x0000000100000000)
      eden space 250880K, 100% used [0x00000000d6580000,0x00000000e5a80000,0x00000000e5a80000)
      from space 2560K, 60% used [0x00000000e5d00000,0x00000000e5e836f0,0x00000000e5f80000)
      to   space 2560K, 0% used [0x00000000e5a80000,0x00000000e5a80000,0x00000000e5d00000)
     ParOldGen       total 512000K, used 483338K [0x0000000083000000, 0x00000000a2400000, 0x00000000d6580000)
      object space 512000K, 94% used [0x0000000083000000,0x00000000a0802848,0x00000000a2400000)
     Metaspace       used 160438K, capacity 166928K, committed 169728K, reserved 1204224K
      class space    used 12875K, capacity 14142K, committed 14592K, reserved 1048576K
    80315.866: [GC (Allocation Failure) [PSYoungGen: 252429K->976K(253440K)] 735767K->485216K(765440K), 0.0101597 secs] [Times: user=0.02 sys=0.00, real=0.01 secs] 

    Final conclusion

    Off-heap memory leak ruled out. With the CMS collector, Xms shrinking is unreliable (it almost never shrinks), so memory usage stayed high long-term; after switching to the default collector, shrinking does happen. There is a 500~600 MB excess outside the heap, namely the pmap [heap] segment; as long as it does not keep growing, leave it alone.

    Note: shrinking is usually not wanted, since it causes extra garbage collection plus OS memory allocate/release overhead, but we apparently need it: Kubernetes resource scheduling requires the footprint to shrink.

    Peak-hour metrics

    top

     jstat

    jcmd

    pmap

    [liqiang@ewei-com-open1 pmap1]$ cat pmap-2022-05-25-09-38.txt |head -n 20
    1: java -XX:NativeMemoryTracking=summary -Djava.util.logging.config.file=/home/ewei/app/tomcat-ewei-open/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djdk.tls.ephemeralDHKeySize=2048 -Dorg.apache.catalina.securit
    Address           Kbytes     PSS   Dirty    Swap  Mode  Mapping
    0000000083000000 1365504 1340780 1340780       0  rw-p    [ anon ]
    00000000d6580000  530432  530424  530424       0  rw-p    [ anon ]
    00000000f6b80000  152064       0       0       0  ---p    [ anon ]
    0000000100000000   17152   17040   17040       0  rw-p    [ anon ]
    00000001010c0000 1031424       0       0       0  ---p    [ anon ]
    0000561b28e66000       4       0       0       0  r--p  /usr/lib/jvm/java-1.8-openjdk/jre/bin/java
    0000561b28e67000       4       0       0       0  r-xp  /usr/lib/jvm/java-1.8-openjdk/jre/bin/java
    0000561b28e68000       4       0       0       0  r--p  /usr/lib/jvm/java-1.8-openjdk/jre/bin/java
    0000561b28e69000       4       4       4       0  r--p  /usr/lib/jvm/java-1.8-openjdk/jre/bin/java
    0000561b28e6a000       4       4       4       0  rw-p  /usr/lib/jvm/java-1.8-openjdk/jre/bin/java
    0000561b2971c000 1231396  601832  601832       0  rw-p  [heap]
    00007fdfa7a1b000      12       0       0       0  ---p    [ anon ]
    00007fdfa7a1e000     248       8       8       0  rw-p    [ anon ]
    00007fdfa7a5c000      12       0       0       0  ---p    [ anon ]
    00007fdfa7a5f000     248       8       8       0  rw-p    [ anon ]
    00007fdfa7a9d000      12       0       0       0  ---p    [ anon ]
    00007fdfa7aa0000     248       8       8       0  rw-p    [ anon ]
    00007fdfa7ade000      12       0       0       0  ---p    [ anon ]
    ..... other entries omitted
    ----------------  ------  ------  ------  ------
    total            5041524 2907608 2905448       0

    Off-peak metrics

    top

     jstat

    jcmd 

     pmap

    [liqiang@ewei-com-open1 pmap1]$ cat pmap-2022-05-25-10-50.txt |head -n 20
    1: java -XX:NativeMemoryTracking=summary -Djava.util.logging.config.file=/home/ewei/app/tomcat-ewei-open/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djdk.tls.ephemeralDHKeySize=2048 -Dorg.apache.catalina.securit
    Address           Kbytes     PSS   Dirty    Swap  Mode  Mapping
    0000000083000000  518144  512000  512000       0  rw-p    [ anon ]
    00000000a2a00000  847360       0       0       0  ---p    [ anon ]
    00000000d6580000  302592  302592  302592       0  rw-p    [ anon ]
    00000000e8d00000  379904       0       0       0  ---p    [ anon ]
    0000000100000000   17152   17040   17040       0  rw-p    [ anon ]
    00000001010c0000 1031424       0       0       0  ---p    [ anon ]
    0000561b28e66000       4       0       0       0  r--p  /usr/lib/jvm/java-1.8-openjdk/jre/bin/java
    0000561b28e67000       4       0       0       0  r-xp  /usr/lib/jvm/java-1.8-openjdk/jre/bin/java
    0000561b28e68000       4       0       0       0  r--p  /usr/lib/jvm/java-1.8-openjdk/jre/bin/java
    0000561b28e69000       4       4       4       0  r--p  /usr/lib/jvm/java-1.8-openjdk/jre/bin/java
    0000561b28e6a000       4       4       4       0  rw-p  /usr/lib/jvm/java-1.8-openjdk/jre/bin/java
    0000561b2971c000 1231396  614420  614420       0  rw-p  [heap]
    00007fdfa7be2000      12       0       0       0  ---p    [ anon ]
    00007fdfa7be5000     248     116     116       0  rw-p    [ anon ]
    00007fdfa7d27000      12       0       0       0  ---p    [ anon ]
    00007fdfa7d2a000     248     108     108       0  rw-p    [ anon ]
    00007fdfa7ff2000      12       0       0       0  ---p    [ anon ]
    00007fdfa7ff5000     248     108     108       0  rw-p    [ anon ]
    ..... other entries omitted
    ----------------  ------  ------  ------  ------
    total            5022844 1860180 1855104       0

    Shell script for capturing Linux metrics

    Written by ops; scheduled to capture once per minute.

    /usr/bin/docker-compose -f /root/docker-compose-ewei-open.yml exec -T ewei-open  jcmd 1  VM.native_memory summary.diff scale=MB >> /tmp/jcmd-summary1/jcmd-summary-`date +%F-%H-%M`.txt 2>&1
    sleep 5
    /usr/bin/docker-compose -f /root/docker-compose-ewei-open.yml exec -T ewei-open  jcmd 1 VM.native_memory baseline scale=MB >> /tmp/jcmd-baseline1/jcmd-baseline-`date +%F-%H-%M`.txt 2>&1
    sleep 5
    /usr/bin/docker-compose -f /root/docker-compose-ewei-open.yml exec -T ewei-open  pmap -x 1 >> /tmp/pmap1/pmap-`date +%F-%H-%M`.txt 2>&1
    sleep 5
    top -bn 1 -c >> /tmp/top1/top-`date +%F-%H-%M`.txt 2>&1
    sleep 5
    /usr/bin/docker-compose -f /root/docker-compose-ewei-open.yml exec -T ewei-open jstat -gc 1  >> /tmp/jstat1/jstat-`date +%F-%H-%M`.txt 2>&1

    An OOM-kill case

    Actual JVM space calculation, cross-checked with pmap.

    top

    Press m to show memory usage as percentage bars.

    top - 13:43:50 up 8 days, 22:22,  3 users,  load average: 0.06, 0.07, 0.06
    Tasks: 104 total,   1 running, 103 sleeping,   0 stopped,   0 zombie
    %Cpu(s):  0.7 us,  0.3 sy,  0.0 ni, 98.8 id,  0.0 wa,  0.0 hi,  0.2 si,  0.0 st
    KiB Mem : 93.8/3733564  [||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||      ]
    KiB Swap:  0.0/0        [                                                                                                    ]
    
      PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND                                                                                                      
    17103 root      20   0 5141820   2.6g   2876 S   1.0 73.4 208:32.41 java                                                                                                         
     1207 root      10 -10  148936  19488   4456 S   0.7  0.5  95:36.11 AliYunDun                                                                                                    
     1039 root      20   0 1354468  21728   4260 S   0.3  0.6  41:21.24 /usr/local/clou                                                                                              
        1 root      20   0   43848   3320   1792 S   0.0  0.1   0:36.74 systemd                                                                                                      
        2 root      20   0       0      0      0 S   0.0  0.0   0:00.03 kthreadd    

    GC log

    {Heap before GC invocations=4076 (full 26):
     par new generation   total 409600K, used 326157K [0x0000000744400000, 0x00000007637f0000, 0x0000000763800000)
      eden space 307264K, 100% used [0x0000000744400000, 0x0000000757010000, 0x0000000757010000)
      from space 102336K,  18% used [0x0000000757010000, 0x0000000758283530, 0x000000075d400000)
      to   space 102336K,   0% used [0x000000075d400000, 0x000000075d400000, 0x00000007637f0000)
     concurrent mark-sweep generation total 2048000K, used 1617523K [0x0000000763800000, 0x00000007e0800000, 0x00000007e0800000)
     Metaspace       used 81554K, capacity 82710K, committed 83968K, reserved 1124352K
      class space    used 7952K, capacity 8197K, committed 8448K, reserved 1048576K
    740607.521: [GC (Allocation Failure) 740607.521: [ParNew: 326157K->28606K(409600K), 0.0563768 secs] 1943680K->1646437K(2457600K), 0.0564730 secs] [Times: user=0.11 sys=0.00, real=0.05 secs] 

    pmap

    Address           Kbytes     PSS   Dirty    Swap  Mode  Mapping
    0000000744400000  511936  506692  506692       0  rw-p    [ anon ]
    00000007637f0000      64       0       0       0  ---p    [ anon ]
    0000000763800000 2048000 1654116 1654116       0  rw-p    [ anon ]
    00000007e0800000    8448    8320    8320       0  rw-p    [ anon ]
    00000007e1040000 1040128       0       0       0  ---p    [ anon ]
    000055828c8fc000       4       0       0       0  r--p  /usr/lib/jvm/java-1.8-openjdk/jre/bin/java
    000055828c8fd000       4       0       0       0  r-xp  /usr/lib/jvm/java-1.8-openjdk/jre/bin/java
    000055828c8fe000       4       0       0       0  r--p  /usr/lib/jvm/java-1.8-openjdk/jre/bin/java
    000055828c8ff000       4       4       4       0  r--p  /usr/lib/jvm/java-1.8-openjdk/jre/bin/java
    000055828c900000       4       4       4       0  rw-p  /usr/lib/jvm/java-1.8-openjdk/jre/bin/java
    000055828dd18000 1032852  381708  381708       0  rw-p  [heap]
    00007f933b154000    1792    1424    1424       0  rw-p    [ anon ]
    00007f933b314000     256       0       0       0  ---p    [ anon ]
    00007f933b417000      12       0       0       0  ---p    [ anon ]
    00007f933b41a000     248     100     100       0  rw-p    [ anon ]
    00007f933b51b000    2048    2048    2048       0  rw-p    [ anon ]
    00007f933b71b000    2048    2048    2048       0  rw-p    [ anon ]
    00007f933b922000      12       0       0       0  ---p    [ anon ]

  • Original: https://www.cnblogs.com/LQBlog/p/16229687.html