From: Hugh Dickins <hughd@google.com>
Date: Sat, 26 Feb 2022 21:20:54 -0800 (PST)
To: Andrew Morton
Cc: Mike Kravetz, Matthew Wilcox, cgel.zte@gmail.com, kirill@shutemov.name,
    songliubraving@fb.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    yang.yang29@zte.com.cn, wang.yong12@zte.com.cn
Subject: [PATCH] memfd: fix F_SEAL_WRITE after shmem huge page allocated
In-Reply-To: <8986d97-3933-8fa7-abba-aabd67924bc2@google.com>
References: <20220215073743.1769979-1-cgel.zte@gmail.com>
 <1f486393-3829-4618-39a1-931afc580835@oracle.com>
 <8986d97-3933-8fa7-abba-aabd67924bc2@google.com>

Wangyong reports: after enabling tmpfs to support transparent hugepage with
the following command:

    echo always > /sys/kernel/mm/transparent_hugepage/shmem_enabled

the docker program tries to add F_SEAL_WRITE through the following command,
but it fails unexpectedly with errno EBUSY:

    fcntl(5, F_ADD_SEALS, F_SEAL_WRITE) = -1.
That is because memfd_tag_pins() and memfd_wait_for_pins() were never
updated for shmem huge pages: checking page_mapcount() against page_count()
is hopeless on THP subpages - they need to check total_mapcount() against
page_count() on THP heads only.

Make memfd_tag_pins() (compared > 1) as strict as memfd_wait_for_pins()
(compared != 1): either can be justified, but given the non-atomic
total_mapcount() calculation, it is better now to be strict.  Bear in mind
that total_mapcount() itself scans all of the THP subpages, when choosing
to take an XA_CHECK_SCHED latency break.

Also fix the unlikely xa_is_value() case in memfd_wait_for_pins(): if a
page has been swapped out since memfd_tag_pins(), then its refcount must
have fallen, and so it can safely be untagged.

Reported-by: Zeal Robot
Reported-by: wangyong
Signed-off-by: Hugh Dickins
Cc:
---
Andrew, please remove
 fix-shmem-huge-page-failed-to-set-f_seal_write-attribute-problem.patch
 fix-shmem-huge-page-failed-to-set-f_seal_write-attribute-problem-fix.patch
from mmotm, and replace them by this patch against 5.17-rc5: wangyong's
patch did not handle the case of pte-mapped huge pages, and I had this one
from earlier, when I found the same issue with MFD_HUGEPAGE (but
MFD_HUGEPAGE did not go in, so I didn't post this one, forgetting the
transparent_hugepage/shmem_enabled case).
 mm/memfd.c | 40 ++++++++++++++++++++++++++++------------
 1 file changed, 28 insertions(+), 12 deletions(-)

--- 5.17-rc5/mm/memfd.c
+++ linux/mm/memfd.c
@@ -31,20 +31,28 @@
 static void memfd_tag_pins(struct xa_state *xas)
 {
 	struct page *page;
-	unsigned int tagged = 0;
+	int latency = 0;
+	int cache_count;
 
 	lru_add_drain();
 
 	xas_lock_irq(xas);
 	xas_for_each(xas, page, ULONG_MAX) {
-		if (xa_is_value(page))
-			continue;
-		page = find_subpage(page, xas->xa_index);
-		if (page_count(page) - page_mapcount(page) > 1)
+		cache_count = 1;
+		if (!xa_is_value(page) &&
+		    PageTransHuge(page) && !PageHuge(page))
+			cache_count = HPAGE_PMD_NR;
+
+		if (!xa_is_value(page) &&
+		    page_count(page) - total_mapcount(page) != cache_count)
 			xas_set_mark(xas, MEMFD_TAG_PINNED);
+		if (cache_count != 1)
+			xas_set(xas, page->index + cache_count);
 
-		if (++tagged % XA_CHECK_SCHED)
+		latency += cache_count;
+		if (latency < XA_CHECK_SCHED)
 			continue;
+		latency = 0;
 
 		xas_pause(xas);
 		xas_unlock_irq(xas);
@@ -73,7 +81,8 @@ static int memfd_wait_for_pins(struct ad
 	error = 0;
 	for (scan = 0; scan <= LAST_SCAN; scan++) {
-		unsigned int tagged = 0;
+		int latency = 0;
+		int cache_count;
 
 		if (!xas_marked(&xas, MEMFD_TAG_PINNED))
 			break;
@@ -87,10 +96,14 @@ static int memfd_wait_for_pins(struct ad
 		xas_lock_irq(&xas);
 		xas_for_each_marked(&xas, page, ULONG_MAX, MEMFD_TAG_PINNED) {
 			bool clear = true;
-			if (xa_is_value(page))
-				continue;
-			page = find_subpage(page, xas.xa_index);
-			if (page_count(page) - page_mapcount(page) != 1) {
+
+			cache_count = 1;
+			if (!xa_is_value(page) &&
+			    PageTransHuge(page) && !PageHuge(page))
+				cache_count = HPAGE_PMD_NR;
+
+			if (!xa_is_value(page) && cache_count !=
+			    page_count(page) - total_mapcount(page)) {
 				/*
 				 * On the last scan, we clean up all those tags
 				 * we inserted; but make a note that we still
@@ -103,8 +116,11 @@ static int memfd_wait_for_pins(struct ad
 			}
 			if (clear)
 				xas_clear_mark(&xas, MEMFD_TAG_PINNED);
-			if (++tagged % XA_CHECK_SCHED)
+
+			latency += cache_count;
+			if (latency < XA_CHECK_SCHED)
 				continue;
+			latency = 0;
 
 			xas_pause(&xas);
 			xas_unlock_irq(&xas);