Subject: Re: [PATCH v4 2/3] mm: changes to split_huge_page() to free zero filled tail pages
From: Ning Zhang <ningzhang@linux.alibaba.com>
Date: Tue, 25 Oct 2022 14:21:18 +0800
To: "Alex Zhu (Kernel)" <alexlzhu@fb.com>, Yu Zhao <yuzhao@google.com>
Cc: linux-mm@kvack.org, Kernel Team, Matthew Wilcox, Rik van Riel, Johannes Weiner
Message-ID: <351a87a2-3bb1-f66b-af95-34ec15a9af54@linux.alibaba.com>

On 2022/10/20 02:48, Alex Zhu (Kernel) wrote:
>
>
>> On Oct 18, 2022, at 10:12 PM, Yu Zhao <yuzhao@google.com> wrote:
>>
>> On Tue, Oct 18, 2022 at 9:42 PM <alexlzhu@fb.com> wrote:
>>>
>>> From: Alexander Zhu <alexlzhu@fb.com>
>>>
>>> Currently, when /sys/kernel/mm/transparent_hugepage/enabled=always is set
>>> there are a large number of transparent hugepages that are almost entirely
>>> zero filled.  This is mentioned in a number of previous patchsets
>>> including:
>>> https://lore.kernel.org/all/20210731063938.1391602-1-yuzhao@google.com/
>>> https://lore.kernel.org/all/1635422215-99394-1-git-send-email-ningzhang@linux.alibaba.com/
>>>
>>> Currently, split_huge_page() does not have a way to identify zero filled
>>> pages within the THP. Thus these zero pages get remapped and continue to
>>> create memory waste. In this patch, we identify and free tail pages that
>>> are zero filled in split_huge_page(). In this way, we avoid mapping these
>>> pages back into page table entries and can free up unused memory within
>>> THPs. This is based off the previously mentioned patchset by Yu Zhao.
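
For illustration, a minimal sketch of the kind of check being described
above; the helper name and details are illustrative, not the patch's
actual code. A tail page qualifies for freeing when every byte in it is
zero:

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

/* Illustrative only: is this tail page entirely zero filled? */
static bool thp_tail_page_is_zero(struct page *page)
{
	void *kaddr;
	bool zero;

	kaddr = kmap_local_page(page);
	zero = !memchr_inv(kaddr, 0, PAGE_SIZE);
	kunmap_local(kaddr);

	return zero;
}

split_huge_page() could then release such tail pages instead of
installing them back into the page tables.
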
>>
>> Hi Alex,
>>
>> Generally the process [1] to follow is that you keep my patches
>> separate from yours, rather than squash them into one, e.g., [2].
>>
>> [1] https://www.kernel.org/doc/html/latest/process/submitting-patches.html
>> [2] https://lore.kernel.org/linux-mm/cover.1665568707.git.christophe.leroy@csgroup.eu/
>>
>> Also it's a courtesy to cc Ning, since his approach is (very) similar
>> to yours. Naturally he would wonder if you are reinventing the wheel,
>> so you'd have to address it in your cover letter.
>
> Sorry about that. Will cc Ning as well in future iterations. I will
> split out the second patch into a few patches as well.
>
> This patchset differs from Ning's RFC in that we make use of list_lru
> and a shrinker, as discussed previously:
> https://lore.kernel.org/linux-mm/CAOUHufYeuMN9As58BVwMKSN6viOZKReXNeCBgGeeL6ToWGsEKw@mail.gmail.com/
> The approach is different, but we are fundamentally still cleaning up
> underutilized THPs (containing a large number of zero pages).

I used a shrinker in a previous version (see
https://gitee.com/anolis/cloud-kernel/commit/62f8852885cc7f23063886d36fd36d94b48d3982).

But the shrinker has a problem: it cannot control the number of splits
accurately. For example, I may want to split only two THPs to avoid OOM,
but the shrinker may split many more.
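
For context, a rough sketch of what a list_lru-backed THP shrinker pair
typically looks like; the list name and callback below are hypothetical,
not the exact code from either patchset. The point is that
scan_objects() is driven by sc->nr_to_scan, which reclaim chooses, so
the caller cannot ask for "split exactly two THPs":

#include <linux/list_lru.h>
#include <linux/shrinker.h>

static struct list_lru thp_zero_lru;	/* hypothetical queue of candidate THPs */

/* Hypothetical walk callback: splits one queued THP (definition omitted). */
static enum lru_status split_underutilized_thp(struct list_head *item,
					       struct list_lru_one *list,
					       spinlock_t *lock, void *cb_arg);

static unsigned long thp_zero_count(struct shrinker *shrink,
				    struct shrink_control *sc)
{
	/* How many underutilized THPs are currently queued. */
	return list_lru_shrink_count(&thp_zero_lru, sc);
}

static unsigned long thp_zero_scan(struct shrinker *shrink,
				   struct shrink_control *sc)
{
	/*
	 * Reclaim chooses sc->nr_to_scan; the walk splits up to that many
	 * THPs, so the caller cannot request an exact number.
	 */
	return list_lru_shrink_walk(&thp_zero_lru, sc,
				    split_underutilized_thp, NULL);
}

static struct shrinker thp_zero_shrinker = {
	.count_objects	= thp_zero_count,
	.scan_objects	= thp_zero_scan,
	.seeks		= DEFAULT_SEEKS,
};

Registering this with register_shrinker() puts it on the reclaim path;
that is also the point Yu makes below: a shrinker, slab or not, is
driven by reclaim.
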

>>
>>> However, we chose to free anonymous zero tail pages whenever they are
>>> encountered instead of only on reclaim or migration.
>>
>> What are cases that are not on reclaim or migration?
>
> It would be any case where split_huge_page is called on anonymous
> memory. split_huge_page is also called from KSM and madvise. It can
> also be called from debugfs, which is what the self test relies on. We
> thought this implementation would be more generic. As far as I can
> tell there is no reason to keep zero pages around in anonymous THPs
> that have been split.
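
To make the debugfs case concrete, a small userspace sketch of the kind
of trigger the THP selftest uses (error handling omitted; the debugfs
file accepts a pid and a virtual address range):

#include <stdio.h>
#include <unistd.h>

/* Illustrative only: ask the kernel to split THPs in [start, end) of pid. */
static int split_range(pid_t pid, unsigned long start, unsigned long end)
{
	FILE *f = fopen("/sys/kernel/debug/split_huge_pages", "w");

	if (!f)
		return -1;
	fprintf(f, "%d,0x%lx,0x%lx", pid, start, end);
	fclose(f);
	return 0;
}

None of these callers are on the reclaim or migration path, which is
why the zero-page handling is done in split_huge_page() itself.
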
>
> We also handled remapping to a shared zero page on userfaultfd in a
> previous iteration. That is the only use case I am aware of where we
> do not want to zap the zero pages.
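
A minimal sketch of the distinction being described, illustrative only
(hypothetical helper name, and it assumes the caller holds the usual
page table locks): a uffd-armed VMA keeps a mapping to the shared zero
page so a later read does not raise a missing fault for memory the
application had already touched; everywhere else the PTE can simply be
left empty.

#include <linux/mm.h>
#include <linux/userfaultfd_k.h>

static void remap_dropped_zero_tail(struct vm_area_struct *vma,
				    unsigned long address, pte_t *ptep)
{
	if (userfaultfd_armed(vma)) {
		/* Keep a readable zero mapping for uffd-registered ranges. */
		pte_t entry = pte_mkspecial(pfn_pte(my_zero_pfn(address),
						    vma->vm_page_prot));
		set_pte_at(vma->vm_mm, address, ptep, entry);
	} else {
		/* Otherwise the zero-filled tail page is simply not remapped. */
		pte_clear(vma->vm_mm, address, ptep);
	}
}
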
>>
>> As I've explained off the mailing list, it's likely a bug if you
>> really have one. And I don't think you do. I'm currently under the
>> impression that you have a slab shrinker, and slab shrinkers are on
>> the reclaim path.
>>
>> Thanks.
>
> This shrinker is not only for slabs. It’s for all anonymous THPs in
> physical memory. That’s why we needed to add list_lru_add_page and
> list_lru_delete_page as well, as list_lru_add/delete assumes slab
> objects.
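
As a rough illustration of why page-aware variants are needed (a
sketch, not the patchset's actual code): list_lru_add() derives the
NUMA node, and for memcg-aware LRUs the memcg, from the item pointer
itself via page_to_nid(virt_to_page(item)), which is only meaningful
when the item is embedded in a slab object. A THP is not a slab object,
so a page-aware variant has to be told which page it is queueing,
along these hypothetical lines:

#include <linux/list_lru.h>
#include <linux/mm.h>

bool list_lru_add_page(struct list_lru *lru, struct page *page,
		       struct list_head *item)
{
	/* Node (and, in a memcg-aware version, memcg) come from the THP. */
	int nid = page_to_nid(page);
	struct list_lru_node *nlru = &lru->node[nid];
	bool added = false;

	spin_lock(&nlru->lock);
	if (list_empty(item)) {
		list_add_tail(item, &nlru->lru.list);
		nlru->lru.nr_items++;
		nlru->nr_items++;
		added = true;
	}
	spin_unlock(&nlru->lock);
	return added;
}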