From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: "Huang, Ying"
Cc: Andrew Morton, David Hildenbrand, Matthew Wilcox, Muchun Song,
Subject: Re: [PATCH v2 1/2] mm: use aligned address in clear_gigantic_page()
Date: Mon, 28 Oct 2024 16:35:29 +0800
References: <20241026054307.3896926-1-wangkefeng.wang@huawei.com> <874j4wycnn.fsf@yhuang6-desk2.ccr.corp.intel.com> <34acebee-f072-47eb-8710-3ef1addd664f@huawei.com> <87zfmowvxi.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: <87zfmowvxi.fsf@yhuang6-desk2.ccr.corp.intel.com>
On 2024/10/28 15:03, Huang, Ying wrote:
> Kefeng Wang writes:
>
>> On 2024/10/28 14:17, Huang, Ying wrote:
>>> Kefeng Wang writes:
>>>
>>>> When clearing a gigantic page, it zeroes the page from the first
>>>> page to the last page. If we directly pass addr_hint, which may be
>>>> not the address of the first page of the folio, some archs could
>>>> flush the wrong cache if they do use addr_hint as a hint. For a
>>>> non-gigantic page, the base address is calculated internally, so
>>>> even if the wrong addr_hint is passed, there is only a performance
>>>> impact (process_huge_page() wants to process the target page last
>>>> to keep its cache lines hot), no functional impact.
>>>>
>>>> Let's pass the real accessed address to folio_zero_user() and use
>>>> the aligned address in clear_gigantic_page() to fix it.
>>>>
>>>> Fixes: 78fefd04c123 ("mm: memory: convert clear_huge_page() to folio_zero_user()")
>>>> Signed-off-by: Kefeng Wang
>>>> ---
>>>> v2:
>>>> - update changelog to clarify the impact, per Andrew
>>>>
>>>>  fs/hugetlbfs/inode.c | 2 +-
>>>>  mm/memory.c          | 1 +
>>>>  2 files changed, 2 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
>>>> index a4441fb77f7c..a5ea006f403e 100644
>>>> --- a/fs/hugetlbfs/inode.c
>>>> +++ b/fs/hugetlbfs/inode.c
>>>> @@ -825,7 +825,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
>>>>  			error = PTR_ERR(folio);
>>>>  			goto out;
>>>>  		}
>>>> -		folio_zero_user(folio, ALIGN_DOWN(addr, hpage_size));
>>>> +		folio_zero_user(folio, addr);
>>>
>>> 'addr' is set with the following statement above,
>>>
>>>         /* addr is the offset within the file (zero based) */
>>>         addr = index * hpage_size;
>>>
>>> So, we just don't need to ALIGN_DOWN() here.  Or do I miss
>>> something?
>>
>> Yes, it is already aligned,
>>
>>>>  	__folio_mark_uptodate(folio);
>>>>  	error = hugetlb_add_to_page_cache(folio, mapping, index);
>>>>  	if (unlikely(error)) {
>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>> index 75c2dfd04f72..ef47b7ea5ddd 100644
>>>> --- a/mm/memory.c
>>>> +++ b/mm/memory.c
>>>> @@ -6821,6 +6821,7 @@ static void clear_gigantic_page(struct folio *folio, unsigned long addr,
>>>>  	int i;
>>>>
>>>>  	might_sleep();
>>>> +	addr = ALIGN_DOWN(addr, folio_size(folio));
>>
>> but for hugetlb_no_page(), we do need to align the addr as it uses
>> vmf->real_address, so I moved the alignment into
>> clear_gigantic_page().
>
> That sounds good.  You may need to revise the patch description to
> describe why you make the change.  Maybe something like below?
>
>   In the current kernel, hugetlb_no_page() calls folio_zero_user()
>   with the fault address, which may not be aligned with the huge page
>   size.  Then, folio_zero_user() may call clear_gigantic_page() with
>   that address, while clear_gigantic_page() requires the address to
>   be huge page size aligned.  So, this may cause memory corruption or
>   an information leak.

OK, will use it and update all patches, thanks.

>
>>>>  	for (i = 0; i < nr_pages; i++) {
>>>>  		cond_resched();
>>>>  		clear_user_highpage(folio_page(folio, i),
>>>>  				    addr + i * PAGE_SIZE);
>
> --
> Best Regards,
> Huang, Ying