From: Liu Yuntao <liuyuntao10@huawei.com>
Subject: Re: [PATCH] fix judgment error in shmem_is_huge()
Date: Thu, 9 Sep 2021 10:39:19 +0800
Message-ID: <20210909023919.2520886-1-liuyuntao10@huawei.com>
In-Reply-To: <20210908145844.wqkyfuizqaj5mmrj@box>
References: <20210908145844.wqkyfuizqaj5mmrj@box>

On Wed, 8 Sep 2021 17:58:44 +0300, Kirill A. Shutemov wrote:
> On Wed, Sep 08, 2021 at 06:26:48PM +0800, Liu Yuntao wrote:
> > In the case of SHMEM_HUGE_WITHIN_SIZE, the page index is not rounded
> > up correctly. When the page index points to the first page in a huge
> > page, round_up() cannot bring it to the end of the huge page, but
> > to the end of the previous one.
> >
> > An example:
> > HPAGE_PMD_NR on my machine is 512 (2 MB huge page size).
> > After allocating a 3000 KB buffer, I access it at location 2050 KB.
> > In shmem_is_huge(), the corresponding index happens to be 512.
> > After rounding it up by HPAGE_PMD_NR, it will still be 512, which is
> > smaller than i_size, and shmem_is_huge() will return true.
> > As a result, my buffer takes an additional huge page, and that
> > shouldn't happen when shmem_enabled is set to within_size.
> >
> > Fixes: f3f0e1d2150b2b ("khugepaged: add support of collapse for tmpfs/shmem pages")
> > Signed-off-by: Liu Yuntao <liuyuntao10@huawei.com>
> > ---
> >  mm/shmem.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index 88742953532c..5747572859d1 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -490,7 +490,7 @@ bool shmem_is_huge(struct vm_area_struct *vma,
> >  	case SHMEM_HUGE_ALWAYS:
> >  		return true;
> >  	case SHMEM_HUGE_WITHIN_SIZE:
> > -		index = round_up(index, HPAGE_PMD_NR);
> > +		index = round_up(index + 1, HPAGE_PMD_NR);
> >  		i_size = round_up(i_size_read(inode), PAGE_SIZE);
> >  		if (i_size >= HPAGE_PMD_SIZE && (i_size >> PAGE_SHIFT) >= index)
>
> With the change, the condition can be simplified to
>
> 	if (i_size >> PAGE_SHIFT >= index)
>
> right?

Yes, will add it.

>
> >  			return true;
> > --
> > 2.23.0
>
> --
> Kirill A. Shutemov
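
For illustration, the off-by-one can be reproduced outside the kernel.
Below is a minimal userspace sketch, not the kernel code itself:
round_up() here reimplements the kernel's power-of-two rounding macro,
and HPAGE_PMD_NR is hard-coded to 512 (2 MB huge pages over 4 KB base
pages), matching the example in the commit message.

	#include <stdio.h>

	#define HPAGE_PMD_NR 512UL

	/* same behavior as the kernel's round_up() for power-of-two alignments */
	static unsigned long round_up(unsigned long x, unsigned long align)
	{
		return (x + align - 1) & ~(align - 1);
	}

	int main(void)
	{
		/* index 512 = first page of the second huge page (access at 2050 KB) */
		unsigned long index = 512;
		/* 3000 KB file = 750 base pages, i.e. i_size >> PAGE_SHIFT */
		unsigned long i_size_pages = 750;

		/* old code: an index already on a huge-page boundary stays put,
		 * so 512 <= 750 and a huge page is wrongly allowed */
		printf("round_up(index, 512)     = %lu\n",
		       round_up(index, HPAGE_PMD_NR));

		/* fixed code: index + 1 rounds past the page itself,
		 * so 1024 > 750 and the huge page is refused */
		printf("round_up(index + 1, 512) = %lu\n",
		       round_up(index + 1, HPAGE_PMD_NR));
		return 0;
	}

Compiled with any C compiler, this prints 512 for the old rounding and
1024 for the fix, which is why the faulty condition held for the 750-page
file above while the corrected one does not.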