Date: Sun, 6 Feb 2022 13:51:45 -0800 (PST)
From: Hugh Dickins <hughd@google.com>
To: Andrew Morton
cc: Michal Hocko, Vlastimil Babka, "Kirill A.
    Shutemov", Matthew Wilcox, David Hildenbrand, Alistair Popple,
    Johannes Weiner, Rik van Riel, Suren Baghdasaryan, Yu Zhao,
    Greg Thelen, Shakeel Butt, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 12/13] mm/thp: collapse_file() do try_to_unmap(TTU_BATCH_FLUSH)
In-Reply-To: <8e4356d-9622-a7f0-b2c-f116b5f2efea@google.com>
References: <8e4356d-9622-a7f0-b2c-f116b5f2efea@google.com>

collapse_file() uses unmap_mapping_pages(1) on each small page found mapped,
unlike the others (reclaim, migration, splitting, memory-failure), which use
try_to_unmap().  There are four advantages to try_to_unmap(): first, its
TTU_IGNORE_MLOCK option now avoids leaving an mlocked page in a pagevec;
second, its vma lookup uses i_mmap_lock_read() not i_mmap_lock_write();
third, it breaks out early if the page is not mapped everywhere it might be;
fourth, its TTU_BATCH_FLUSH option can be used, as in page reclaim, to save
up all the TLB flushing until all of the pages have been unmapped.

Wild guess: perhaps collapse_file() was originally written to use
try_to_unmap(), but hit the VM_BUG_ON_PAGE(page_mapped) after unmapping,
because without TTU_SYNC try_to_unmap() may skip page table locks; whereas
unmap_mapping_pages() never skips them, so switching to it fixed the issue.

I did once hit that VM_BUG_ON_PAGE() after making this change: we could pass
TTU_SYNC here, but I think it better just to delete the check - the race is
very rare, this is an ordinary small page so we do not need to be so paranoid
about mapcount surprises, and the page_ref_freeze() just below already
handles the case adequately.

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 mm/khugepaged.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index d5e387c58bde..e0883a33efd6 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1829,13 +1829,12 @@ static void collapse_file(struct mm_struct *mm,
 		}
 
 		if (page_mapped(page))
-			unmap_mapping_pages(mapping, index, 1, false);
+			try_to_unmap(page, TTU_IGNORE_MLOCK | TTU_BATCH_FLUSH);
 
 		xas_lock_irq(&xas);
 		xas_set(&xas, index);
 
 		VM_BUG_ON_PAGE(page != xas_load(&xas), page);
-		VM_BUG_ON_PAGE(page_mapped(page), page);
 
 		/*
 		 * The page is expected to have page_count() == 3:
@@ -1899,6 +1898,13 @@ static void collapse_file(struct mm_struct *mm,
 	}
 	xas_unlock_irq(&xas);
 xa_unlocked:
+	/*
+	 * If collapse is successful, flush must be done now before copying.
+	 * If collapse is unsuccessful, does flush actually need to be done?
+	 * Do it anyway, to clear the state.
+	 */
+	try_to_unmap_flush();
+
 	if (result == SCAN_SUCCEED) {
 		struct page *page, *tmp;
 
-- 
2.34.1
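
An aside for readers who have not met the batched-flush pattern before: the
point of TTU_BATCH_FLUSH is that each try_to_unmap() call only records that a
TLB flush is owed, and a single try_to_unmap_flush() afterwards pays the whole
debt.  The user-space C sketch below models only that shape; unmap_one() and
flush_if_pending() are hypothetical stand-ins invented for illustration, not
kernel APIs.

/*
 * Illustrative sketch only (not kernel code): models deferred TLB flushing.
 * unmap_one() and flush_if_pending() are hypothetical stand-ins; the kernel
 * uses try_to_unmap(page, ... | TTU_BATCH_FLUSH) in the loop and a single
 * try_to_unmap_flush() once the loop is done.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_PAGES 8

static bool tlb_flush_pending;

/* Remove one page's mappings; only note that a flush is now owed. */
static void unmap_one(int index)
{
	printf("unmapped page %d, flush deferred\n", index);
	tlb_flush_pending = true;
}

/* Pay the whole debt with one flush covering the entire batch. */
static void flush_if_pending(void)
{
	if (tlb_flush_pending) {
		printf("one TLB flush for the whole batch\n");
		tlb_flush_pending = false;
	}
}

int main(void)
{
	for (int i = 0; i < NR_PAGES; i++)
		unmap_one(i);	/* like try_to_unmap(..., TTU_BATCH_FLUSH) */

	flush_if_pending();	/* like try_to_unmap_flush() */
	return 0;
}

In collapse_file(), the loop body corresponds to the per-page try_to_unmap()
above, and the single flush is the try_to_unmap_flush() added after the
xa_unlocked: label in the patch.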