From: "Huang, Ying" <ying.huang@intel.com>
To: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Andrew Morton, David Hildenbrand, Matthew Wilcox, Muchun Song,
Subject: Re: [PATCH v2 1/2] mm: use aligned address in clear_gigantic_page()
In-Reply-To: <34acebee-f072-47eb-8710-3ef1addd664f@huawei.com> (Kefeng Wang's message of "Mon, 28 Oct 2024 14:35:57 +0800")
References: <20241026054307.3896926-1-wangkefeng.wang@huawei.com> <874j4wycnn.fsf@yhuang6-desk2.ccr.corp.intel.com> <34acebee-f072-47eb-8710-3ef1addd664f@huawei.com>
Date: Mon, 28 Oct 2024 15:03:53 +0800
Message-ID: <87zfmowvxi.fsf@yhuang6-desk2.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ascii
Kefeng Wang writes:

> On 2024/10/28 14:17, Huang, Ying wrote:
>> Kefeng Wang writes:
>>
>>> When clearing a gigantic page, it zeroes the page from the first page
>>> to the last page. If we directly pass addr_hint, which may not be the
>>> address of the first page of the folio, some architectures could flush
>>> the wrong cache if they use addr_hint as a hint. For a non-gigantic
>>> page, the base address is calculated inside, so even with a wrong
>>> addr_hint there is only a performance impact (process_huge_page()
>>> wants to process the target page last to keep its cache lines hot),
>>> no functional impact.
>>>
>>> Let's pass the real accessed address to folio_zero_user() and use the
>>> aligned address in clear_gigantic_page() to fix it.
>>>
>>> Fixes: 78fefd04c123 ("mm: memory: convert clear_huge_page() to folio_zero_user()")
>>> Signed-off-by: Kefeng Wang
>>> ---
>>> v2:
>>> - update changelog to clarify the impact, per Andrew
>>>
>>>  fs/hugetlbfs/inode.c | 2 +-
>>>  mm/memory.c          | 1 +
>>>  2 files changed, 2 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
>>> index a4441fb77f7c..a5ea006f403e 100644
>>> --- a/fs/hugetlbfs/inode.c
>>> +++ b/fs/hugetlbfs/inode.c
>>> @@ -825,7 +825,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
>>>  			error = PTR_ERR(folio);
>>>  			goto out;
>>>  		}
>>> -		folio_zero_user(folio, ALIGN_DOWN(addr, hpage_size));
>>> +		folio_zero_user(folio, addr);
>>
>> 'addr' is set with the following statement above,
>>
>> 	/* addr is the offset within the file (zero based) */
>> 	addr = index * hpage_size;
>>
>> So, we just don't need to ALIGN_DOWN() here. Or do I miss
>> something?
>
> Yes, it is already aligned,
>
>>>  		__folio_mark_uptodate(folio);
>>>  		error = hugetlb_add_to_page_cache(folio, mapping, index);
>>>  		if (unlikely(error)) {
>>> diff --git a/mm/memory.c b/mm/memory.c
>>> index 75c2dfd04f72..ef47b7ea5ddd 100644
>>> --- a/mm/memory.c
>>> +++ b/mm/memory.c
>>> @@ -6821,6 +6821,7 @@ static void clear_gigantic_page(struct folio *folio, unsigned long addr,
>>>  	int i;
>>>
>>>  	might_sleep();
>>> +	addr = ALIGN_DOWN(addr, folio_size(folio));
>
> but for hugetlb_no_page(), we do need to align the addr as it uses
> vmf->real_address, so I moved the alignment into
> clear_gigantic_page().

That sounds good. You may need to revise the patch description to
explain why you made the change. Maybe something like below?

In the current kernel, hugetlb_no_page() calls folio_zero_user() with
the fault address, which may not be aligned with the huge page size.
Then, folio_zero_user() may call clear_gigantic_page() with that
address, while clear_gigantic_page() requires the address to be huge
page size aligned.
So, this may cause memory corruption or information leak.

>>>
>>>  	for (i = 0; i < nr_pages; i++) {
>>>  		cond_resched();
>>>  		clear_user_highpage(folio_page(folio, i), addr + i * PAGE_SIZE);

--
Best Regards,
Huang, Ying