From: Vernon Yang <vernon2gm@gmail.com>
To: "David Hildenbrand (Red Hat)" <david@kernel.org>
Cc: Wei Yang <richard.weiyang@gmail.com>,
	akpm@linux-foundation.org,  lorenzo.stoakes@oracle.com,
	ziy@nvidia.com, baohua@kernel.org, lance.yang@linux.dev,
	 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	 Vernon Yang <yanglincheng@kylinos.cn>
Subject: Re: [PATCH 3/4] mm: khugepaged: move mm to list tail when MADV_COLD/MADV_FREE
Date: Thu, 25 Dec 2025 23:12:18 +0800	[thread overview]
Message-ID: <6sciluv3ylow6frheij6imhhsglaez6d6vsbtyndwlfuetzwmf@tbs6ivsitehm> (raw)
In-Reply-To: <ad2c1355-be13-45d9-8474-a32a4b710aa5@kernel.org>

On Tue, Dec 23, 2025 at 10:59:29AM +0100, David Hildenbrand (Red Hat) wrote:
> On 12/21/25 13:34, Vernon Yang wrote:
> > On Sun, Dec 21, 2025 at 10:24:11AM +0100, David Hildenbrand (Red Hat) wrote:
> > > On 12/21/25 05:25, Vernon Yang wrote:
> > > > On Sun, Dec 21, 2025 at 02:10:44AM +0000, Wei Yang wrote:
> > > > > On Fri, Dec 19, 2025 at 09:58:17AM +0100, David Hildenbrand (Red Hat) wrote:
> > > > > > On 12/19/25 06:29, Vernon Yang wrote:
> > > > > > > On Thu, Dec 18, 2025 at 10:31:58AM +0100, David Hildenbrand (Red Hat) wrote:
> > > > > > > > On 12/15/25 10:04, Vernon Yang wrote:
> > > > > > > > > For example, create three tasks: hot1 -> cold -> hot2. After all three
> > > > > > > > > tasks are created, each allocates 128 MB of memory. The hot1/hot2 tasks
> > > > > > > > > continuously access their 128 MB of memory, while the cold task only
> > > > > > > > > accesses its memory briefly and then calls madvise(MADV_COLD). However,
> > > > > > > > > khugepaged still prioritizes scanning the cold task and only scans the
> > > > > > > > > hot2 task after completing the scan of the cold task.
> > > > > > > > >
> > > > > > > > > So if the user has explicitly informed us via MADV_COLD/FREE that this
> > > > > > > > > memory is cold or will be freed, it is appropriate for khugepaged to
> > > > > > > > > scan it only at the latest possible moment, thereby avoiding unnecessary
> > > > > > > > > scan and collapse operations and reducing CPU waste.
> > > > > > > > >
> > > > > > > > > Here are the performance test results:
> > > > > > > > > (Throughput bigger is better, other smaller is better)
> > > > > > > > >
> > > > > > > > > Testing on x86_64 machine:
> > > > > > > > >
> > > > > > > > > | task hot2           | without patch | with patch    |  delta  |
> > > > > > > > > |---------------------|---------------|---------------|---------|
> > > > > > > > > | total accesses time |  3.14 sec     |  2.92 sec     | -7.01%  |
> > > > > > > > > | cycles per access   |  4.91         |  2.07         | -57.84% |
> > > > > > > > > | Throughput          |  104.38 M/sec |  112.12 M/sec | +7.42%  |
> > > > > > > > > | dTLB-load-misses    |  288966432    |  1292908      | -99.55% |
> > > > > > > > >
> > > > > > > > > Testing on qemu-system-x86_64 -enable-kvm:
> > > > > > > > >
> > > > > > > > > | task hot2           | without patch | with patch    |  delta  |
> > > > > > > > > |---------------------|---------------|---------------|---------|
> > > > > > > > > | total accesses time |  3.35 sec     |  2.96 sec     | -11.64% |
> > > > > > > > > | cycles per access   |  7.23         |  2.12         | -70.68% |
> > > > > > > > > | Throughput          |  97.88 M/sec  |  110.76 M/sec | +13.16% |
> > > > > > > > > | dTLB-load-misses    |  237406497    |  3189194      | -98.66% |
> > > > > > > >
> > > > > > > > Again, I also don't like that, because you make assumptions about a full
> > > > > > > > process based on some part of its address space.
> > > > > > > >
> > > > > > > > E.g., if a library issues a MADV_COLD on some part of the memory the library
> > > > > > > > manages, why should the remaining part of the process suffer as well?
> > > > > > >
> > > > > > > Yes, you make a good point, thanks!
> > > > > > >
> > > > > > > > This seems to be a heuristic focused on some specific workloads, no?
> > > > > > >
> > > > > > > Right.
> > > > > > >
> > > > > > > Could we use the VM_NOHUGEPAGE flag to indicate that this region should
> > > > > > > not be collapsed, so that khugepaged can simply skip this VMA during
> > > > > > > scanning? This way, it won't affect the remaining part of the task's
> > > > > > > memory regions.
> > > > > >
> > > > > > I thought we would skip these regions already properly in khugepaged, or
> > > > > > maybe I misunderstood your question.
> > > > > >
> > > > >
> > > > > I think we should, but it seems we don't do this for anonymous memory during
> > > > > khugepaged.
> > > > >
> > > > > We check the vma with thp_vma_allowable_order() during scan.
> > > > >
> > > > >     * For anonymous memory during khugepaged, if we always enable 2M collapse,
> > > > >       we will scan this vma even if VM_NOHUGEPAGE is set.
> > > > >
> > > > >     * For other cases, it looks good since __thp_vma_allowable_order() will skip
> > > > >       this vma with vma_thp_disabled().
> > > >
> > > > Hi David, Wei,
> > > >
> > > > khugepaged already checks the VM_NOHUGEPAGE flag for anonymous memory
> > > > during the scan, as below:
> > > >
> > > > khugepaged_scan_mm_slot()
> > > >       thp_vma_allowable_order()
> > > >           thp_vma_allowable_orders()
> > > >               __thp_vma_allowable_orders()
> > > >                   vma_thp_disabled() {
> > > >                        if (vm_flags & VM_NOHUGEPAGE)
> > > >                            return true;
> > > >                   }
> > > >
> > > > REAL ISSUE: madvise(MADV_COLD) does not set the VM_NOHUGEPAGE flag on the
> > > > vma, so khugepaged will continue to scan this vma.
> > > >
> > > > I set the VM_NOHUGEPAGE flag on the vma when madvise(MADV_COLD) is issued,
> > > > and the test passed. I will send it in the next version.
> > >
> > > No we must not do that. That's a user-space visible change. :/
> >
> > David, what good ideas do you have to achieve this goal? Please let me
> > know, thanks!
>
> Your idea would be to skip a VMA when we issue madvise(MADV_COLD).
>
> That sounds like yet another heuristic that can easily be wrong? :/
>
> In particular, imagine if the VMA is much larger than the madvise'd region
> (other parts used for something else) or if the previously cold memory area
> is used for something that is now hot.
>
> With memory allocators that manage most of the memory in a single large VMA,
> it's rather easy to see how such a heuristic would be bad, no?

Thanks for your explanation, but my current approach is as follows: the large
VMA will be split in this case.

madvise_vma_behavior
    madvise_cold
    madvise_update_vma
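
For reference, here is a rough, untested sketch of the idea. The exact hook
point (the MADV_COLD case in madvise_vma_behavior()) and the new_flags /
madvise_update_vma() plumbing are assumed here and differ between kernel
versions, so please read it as pseudo-code rather than the actual patch:

	case MADV_COLD:
		error = madvise_cold(vma, prev, start, end);
		if (error)
			return error;
		/*
		 * Sketch only: hint khugepaged away from this range.
		 * madvise_update_vma() later splits the VMA when the
		 * madvise'd range does not cover it completely, so only
		 * that part of the address space is affected.
		 */
		new_flags |= VM_NOHUGEPAGE;
		break;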

Maybe I'll send v2 first, and we'll discuss it more clearly :)

--
Merry Christmas,
Vernon


Thread overview: 42+ messages
2025-12-15  9:04 [PATCH 0/4] Improve khugepaged scan logic Vernon Yang
2025-12-15  9:04 ` [PATCH 1/4] mm: khugepaged: add trace_mm_khugepaged_scan event Vernon Yang
2025-12-18  9:24   ` David Hildenbrand (Red Hat)
2025-12-19  5:21     ` Vernon Yang
2025-12-15  9:04 ` [PATCH 2/4] mm: khugepaged: remove mm when all memory has been collapsed Vernon Yang
2025-12-15 11:52   ` Lance Yang
2025-12-16  6:27     ` Vernon Yang
2025-12-15 21:45   ` kernel test robot
2025-12-16  6:30     ` Vernon Yang
2025-12-15 23:01   ` kernel test robot
2025-12-16  6:32     ` Vernon Yang
2025-12-17  3:31   ` Wei Yang
2025-12-18  3:27     ` Vernon Yang
2025-12-18  3:48       ` Wei Yang
2025-12-18  4:41         ` Vernon Yang
2025-12-18  9:29   ` David Hildenbrand (Red Hat)
2025-12-19  5:24     ` Vernon Yang
2025-12-19  9:00       ` David Hildenbrand (Red Hat)
2025-12-19  8:35     ` Vernon Yang
2025-12-19  8:55       ` David Hildenbrand (Red Hat)
2025-12-23 11:18       ` Dev Jain
2025-12-25 16:07         ` Vernon Yang
2025-12-29  6:02         ` Vernon Yang
2025-12-22 19:00   ` kernel test robot
2025-12-15  9:04 ` [PATCH 3/4] mm: khugepaged: move mm to list tail when MADV_COLD/MADV_FREE Vernon Yang
2025-12-15 21:12   ` kernel test robot
2025-12-16  7:00     ` Vernon Yang
2025-12-16 13:08   ` kernel test robot
2025-12-16 13:31   ` kernel test robot
2025-12-18  9:31   ` David Hildenbrand (Red Hat)
2025-12-19  5:29     ` Vernon Yang
2025-12-19  8:58       ` David Hildenbrand (Red Hat)
2025-12-21  2:10         ` Wei Yang
2025-12-21  4:25           ` Vernon Yang
2025-12-21  9:24             ` David Hildenbrand (Red Hat)
2025-12-21 12:34               ` Vernon Yang
2025-12-23  9:59                 ` David Hildenbrand (Red Hat)
2025-12-25 15:12                   ` Vernon Yang [this message]
2025-12-21 12:38             ` Wei Yang
2025-12-15  9:04 ` [PATCH 4/4] mm: khugepaged: set to next mm direct when mm has MMF_DISABLE_THP_COMPLETELY Vernon Yang
2025-12-18  9:33   ` David Hildenbrand (Red Hat)
2025-12-19  5:31     ` Vernon Yang
