From: "Kirill A. Shutemov"
To: Andrew Morton
Cc: Linus Torvalds, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    "Kirill A. Shutemov", Naresh Kamboju, Joel Fernandes, William Kucharski
Subject: [PATCHv2] mm: Fix warning in move_normal_pmd()
Date: Wed, 15 Jul 2020 16:50:11 +0300
Message-Id: <20200715135011.42743-1-kirill.shutemov@linux.intel.com>

mremap(2) does not allow the source and destination regions to overlap, but
shift_arg_pages() calls move_page_tables() directly, and in that case the
source and destination often do overlap.
It confuses move_normal_pmd():

  WARNING: CPU: 3 PID: 27091 at mm/mremap.c:211 move_page_tables+0x6ef/0x720

move_normal_pmd() expects the destination PMD to be empty, but when the
ranges overlap nobody removes the PTE page tables on the source side.
move_ptes() only removes PTE entries, leaving the tables behind. When the
source PMD later becomes a destination and the alignment/size is right, we
trip the warning.

The warning is harmless: the kernel correctly falls back to handling the
entries on a per-entry basis.

The fix is to avoid move_normal_pmd() if we see that the source and
destination ranges overlap.

Signed-off-by: Kirill A. Shutemov
Fixes: 2c91bd4a4e2e ("mm: speed up mremap by 20x on large regions")
Link: https://lore.kernel.org/lkml/20200713025354.GB3644504@google.com/
Reported-and-tested-by: Naresh Kamboju
Reviewed-by: Joel Fernandes (Google)
Cc: William Kucharski
---
 mm/mremap.c | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/mm/mremap.c b/mm/mremap.c
index 5dd572d57ca9..340a96a29cbb 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -245,6 +245,26 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 	unsigned long extent, next, old_end;
 	struct mmu_notifier_range range;
 	pmd_t *old_pmd, *new_pmd;
+	bool overlaps;
+
+	/*
+	 * shift_arg_pages() can call move_page_tables() on overlapping ranges.
+	 * In this case we cannot use move_normal_pmd() because the destination
+	 * pmd might be an established page table: move_ptes() doesn't free
+	 * page tables.
+	 */
+	if (old_addr > new_addr) {
+		overlaps = old_addr - new_addr < len;
+	} else {
+		overlaps = new_addr - old_addr < len;
+
+		/*
+		 * We are iterating over the ranges forward. It means we cannot
+		 * handle overlapping ranges with new_addr > old_addr without
+		 * risking data corruption. Don't do this.
+		 */
+		WARN_ON(overlaps);
+	}
 
 	old_end = old_addr + len;
 	flush_cache_range(vma, old_addr, old_end);
@@ -282,7 +302,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 			split_huge_pmd(vma, old_pmd, old_addr);
 			if (pmd_trans_unstable(old_pmd))
 				continue;
-		} else if (extent == PMD_SIZE) {
+		} else if (!overlaps && extent == PMD_SIZE) {
 #ifdef CONFIG_HAVE_MOVE_PMD
 			/*
 			 * If the extent is PMD-sized, try to speed the move by
-- 
2.26.2