Why echo 1 > /proc/sys/vm/drop_caches does not free memory

As for how to clear the kernel caches, from man 5 proc:

/proc/sys/vm/drop_caches (since Linux 2.6.16)

Writing to this file causes the kernel to drop clean caches, dentries, and inodes from memory, causing that memory to become free.
This can be useful for memory management testing and performing reproducible filesystem benchmarks.
Because writing to this file causes the benefits of caching to be lost, it can degrade overall system performance.

To free pagecache, use: echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes, use: echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes, use: echo 3 > /proc/sys/vm/drop_caches
Because writing to this file is a nondestructive operation and dirty objects are not freeable, the user should run sync(8) first.

You generally don't want to flush the cache, as its entire purpose is to improve performance, but for debugging purposes you can do so by using drop_caches like this (note: you must be root to write to drop_caches, while sync can be run as any user):

# sync && echo 3 > /proc/sys/vm/drop_caches
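To see how much a given write actually reclaims, you can compare /proc/meminfo before and after. A minimal sketch, run as root (the field names are the standard /proc/meminfo ones; values are in kB):

# grep -E '^(MemFree|Buffers|Cached|SReclaimable)' /proc/meminfo    # before
# sync
# echo 3 > /proc/sys/vm/drop_caches
# grep -E '^(MemFree|Buffers|Cached|SReclaimable)' /proc/meminfo    # after: Cached/Buffers/SReclaimable shrink, MemFree grows

Note that Cached in /proc/meminfo includes Shmem (tmpfs and shared memory), which is exactly the part drop_caches cannot release, as the case below shows.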

Applies to

Oracle Database - Enterprise Edition - Version 12.2.0.1 and later
Linux OS - Version Oracle Linux 7.0 and later
Linux x86-64

Symptoms

The node crashes regularly due to low available memory and no free swap space.

Cause

Available memory keeps running out even though a large amount of memory is allocated to cache. Normally the kernel should be able to reclaim that memory, since it is usually just I/O buffer/page cache.
However, issuing "echo 1 > /proc/sys/vm/drop_caches" to free the I/O buffer cache released only about 27 GB of memory (see the output below), while the amount of memory reported as cache still showed over 300 GB.
 

Before issuing "echo 1 > /proc/sys/vm/drop_caches", 24 GB of available memory and 354 GB of buff/cache are reported:

[root@node04 ~]# free -g
              total        used        free      shared  buff/cache   available
Mem:            755         375          25         321         354          24
Swap:            23          23           0

However, issuing "echo 1 > /proc/sys/vm/drop_caches" moved only a small amount of memory from the buff/cache area to available memory.

Even after issuing "echo 1 > /proc/sys/vm/drop_caches", available memory increases by only 27 GB, because just 27 GB out of the 354 GB is released from the buff/cache area:

[root@node04 ~]# echo 1 > /proc/sys/vm/drop_caches; free -g
              total        used        free      shared  buff/cache   available
Mem:            755         372          55         321         327          51
Swap:            23          23           0

The reason for this is that most of the memory accounted as cache is shared memory allocated by the database (not the SGA, which is covered by hugepages, but shared memory used in /dev/shm).
The amount of memory in /dev/shm grows daily, eventually leading to available memory running out, because the kernel cannot free the memory in /dev/shm unless the application/database frees it.

[root@node04 ~]# du -s /dev/shm
37460176 /dev/shm

"ls -l /dev/shm" output shows most of the memory allocated to /dev/shm is used by instance2 database for java client connection (i.e./dev/shm/xxxxSHM)

[root@node04 ~]# date; ls -ltrh /dev/shm/xxxxSHM_instance2* |tail -10
Thu Dec 8 17:21:59 GMT 2022
-rwxrwx--- 1 oraph asmadmin 16M Dec 8 17:12 /dev/shm/xxxxSHM_instance2_2_0_1_0_0_704348541
-rwxrwx--- 1 oraph asmadmin 16M Dec 8 17:13 /dev/shm/xxxxSHM_instance2_2_0_1_0_0_704354558
-rwxrwx--- 1 oraph asmadmin 16M Dec 8 17:14 /dev/shm/xxxxSHM_instance2_2_0_1_0_0_704360581
-rwxrwx--- 1 oraph asmadmin 16M Dec 8 17:15 /dev/shm/xxxxSHM_instance2_2_0_1_0_0_704366608
-rwxrwx--- 1 oraph asmadmin 16M Dec 8 17:16 /dev/shm/xxxxSHM_instance2_2_0_1_0_0_704372634
-rwxrwx--- 1 oraph asmadmin 16M Dec 8 17:17 /dev/shm/xxxxSHM_instance2_2_0_1_0_0_704378648
-rwxrwx--- 1 oraph asmadmin 16M Dec 8 17:18 /dev/shm/xxxxSHM_instance2_2_0_1_0_0_704384672
-rwxrwx--- 1 oraph asmadmin 16M Dec 8 17:19 /dev/shm/xxxxSHM_instance2_2_0_1_0_0_704390700
-rwxrwx--- 1 oraph asmadmin 16M Dec 8 17:20 /dev/shm/xxxxSHM_instance2_2_0_1_0_0_704396715
-rwxrwx--- 1 oraph asmadmin 16M Dec 8 17:21 /dev/shm/xxxxSHM_instance2_2_0_1_0_0_704402743

More and more /dev/shm/xxxxSHM_instance2* segments are created daily.
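This behaviour is easy to reproduce on a test machine: data written into /dev/shm (tmpfs) is accounted under buff/cache and Shmem, but drop_caches cannot discard it because tmpfs pages have no on-disk copy to be re-read from; only deleting the file (or having the owning process release it) returns the memory. A minimal sketch, run as root, using a hypothetical 1 GB test file:

# dd if=/dev/zero of=/dev/shm/droptest bs=1M count=1024    # put 1 GB into tmpfs
# free -g                                                  # buff/cache and shared grow by ~1 GB
# sync && echo 3 > /proc/sys/vm/drop_caches
# free -g                                                  # the 1 GB stays in buff/cache
# rm /dev/shm/droptest
# free -g                                                  # only now is the memory released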


 

Solution

Recycle (restart) the problem instance, instance2, to free the memory already allocated in /dev/shm.

Find out the root cause of the increased usage of shared memory in /dev/shm (for example, by checking which processes hold the segments open, as sketched below).
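If it is not obvious which processes own the segments, the file owners and the processes holding the files open point at the instance to recycle. A sketch assuming lsof is installed and the xxxxSHM_instance2 naming from the output above:

# du -sh /dev/shm                                # total tmpfs usage
# ls -ltrh /dev/shm | tail -20                   # newest segments and their owners
# lsof +D /dev/shm 2>/dev/null | awk 'NR>1 {print $1, $2, $3}' | sort | uniq -c | sort -rn | head

The last command counts open files under /dev/shm per command/PID/user; on this node it should point back at the instance2 processes.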


The above case is caused by the following known bug:
bug 33423397 - ORA-07445: EXCEPTION ENCOUNTERED: CORE DUMP [xxxx_SHM_MAP_CALLBACK()+65]
