From: Suren Baghdasaryan <surenb@google.com>
Date: Wed, 25 Jan 2023 09:00:00 -0800
Subject: Re: [PATCH v2 5/6] mm: introduce mod_vm_flags_nolock and use it in untrack_pfn
To: Michal Hocko
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com,
	vbabka@suse.cz, hannes@cmpxchg.org, mgorman@techsingularity.net,
	dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com,
	peterz@infradead.org, ldufour@linux.ibm.com, paulmck@kernel.org,
	luto@kernel.org, songliubraving@fb.com, peterx@redhat.com,
	david@redhat.com, dhowells@redhat.com, hughd@google.com,
	bigeasy@linutronix.de, kent.overstreet@linux.dev,
	punit.agrawal@bytedance.com, lstoakes@gmail.com,
	peterjung1337@gmail.com, rientjes@google.com,
	axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
	jannh@google.com, shakeelb@google.com, tatashin@google.com,
	edumazet@google.com, gthelen@google.com, gurua@google.com,
	arjunroy@google.com, soheil@google.com, hughlynch@google.com,
	leewalsh@google.com, posk@google.com, will@kernel.org,
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com, chenhuacai@kernel.org,
	tglx@linutronix.de, mingo@redhat.com, bp@alien8.de,
	dave.hansen@linux.intel.com, richard@nod.at,
	anton.ivanov@cambridgegreys.com, johannes@sipsolutions.net,
	qianweili@huawei.com, wangzhou1@hisilicon.com,
	herbert@gondor.apana.org.au, davem@davemloft.net, vkoul@kernel.org,
	airlied@gmail.com, daniel@ffwll.ch, maarten.lankhorst@linux.intel.com,
	mripard@kernel.org, tzimmermann@suse.de, l.stach@pengutronix.de,
	krzysztof.kozlowski@linaro.org, patrik.r.jakobsson@gmail.com,
	matthias.bgg@gmail.com, robdclark@gmail.com, quic_abhinavk@quicinc.com,
	dmitry.baryshkov@linaro.org, tomba@kernel.org, hjc@rock-chips.com,
	heiko@sntech.de, ray.huang@amd.com, kraxel@redhat.com, sre@kernel.org,
	mcoquelin.stm32@gmail.com, alexandre.torgue@foss.st.com,
	tfiga@chromium.org, m.szyprowski@samsung.com, mchehab@kernel.org,
	dimitri.sivanich@hpe.com, zhangfei.gao@linaro.org, jejb@linux.ibm.com,
	martin.petersen@oracle.com, dgilbert@interlog.com,
	hdegoede@redhat.com, mst@redhat.com, jasowang@redhat.com,
	alex.williamson@redhat.com, deller@gmx.de, jayalk@intworks.biz,
	viro@zeniv.linux.org.uk, nico@fluxnic.net, xiang@kernel.org,
	chao@kernel.org, tytso@mit.edu, adilger.kernel@dilger.ca,
	miklos@szeredi.hu, mike.kravetz@oracle.com, muchun.song@linux.dev,
	bhe@redhat.com, andrii@kernel.org, yoshfuji@linux-ipv6.org,
	dsahern@kernel.org, kuba@kernel.org, pabeni@redhat.com,
	perex@perex.cz, tiwai@suse.com, haojian.zhuang@gmail.com,
	robert.jarzmik@free.fr, linux-mm@kvack.org,
	linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
	x86@kernel.org, linux-kernel@vger.kernel.org,
	linux-graphics-maintainer@vmware.com, linux-ia64@vger.kernel.org,
	linux-arch@vger.kernel.org, loongarch@lists.linux.dev,
	kvm@vger.kernel.org, linux-s390@vger.kernel.org,
	linux-sgx@vger.kernel.org, linux-um@lists.infradead.org,
	linux-acpi@vger.kernel.org, linux-crypto@vger.kernel.org,
	nvdimm@lists.linux.dev, dmaengine@vger.kernel.org,
	amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	etnaviv@lists.freedesktop.org, linux-samsung-soc@vger.kernel.org,
	intel-gfx@lists.freedesktop.org, linux-mediatek@lists.infradead.org,
	linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org, linux-tegra@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	linux-stm32@st-md-mailman.stormreply.com, linux-rdma@vger.kernel.org,
	linux-media@vger.kernel.org, linux-accelerators@lists.ozlabs.org,
	sparclinux@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-staging@lists.linux.dev, target-devel@vger.kernel.org,
	linux-usb@vger.kernel.org, netdev@vger.kernel.org,
	linux-fbdev@vger.kernel.org, linux-aio@kvack.org,
	linux-fsdevel@vger.kernel.org, linux-erofs@lists.ozlabs.org,
	linux-ext4@vger.kernel.org, devel@lists.orangefs.org,
	kexec@lists.infradead.org, linux-xfs@vger.kernel.org,
	bpf@vger.kernel.org, linux-perf-users@vger.kernel.org,
	kasan-dev@googlegroups.com, selinux@vger.kernel.org,
	alsa-devel@alsa-project.org, kernel-team@android.com
References: <20230125083851.27759-1-surenb@google.com> <20230125083851.27759-6-surenb@google.com>
On Wed, Jan 25, 2023 at 1:42 AM Michal Hocko wrote:
>
> On Wed 25-01-23 00:38:50, Suren Baghdasaryan wrote:
> > In cases when VMA flags are modified after VMA was isolated and mmap_lock
> > was downgraded, flags modifications would result in an assertion because
> > mmap write lock is not held.
> > Introduce mod_vm_flags_nolock to be used in such situation.
> > Pass a hint to untrack_pfn to conditionally use mod_vm_flags_nolock for
> > flags modification and to avoid assertion.
>
> The changelog nor the documentation of mod_vm_flags_nolock
> really explain when it is safe to use it. This is really important for
> future potential users.

True. I'll add clarification in the comments and in the changelog. Thanks!

> > Signed-off-by: Suren Baghdasaryan
> > ---
> >  arch/x86/mm/pat/memtype.c | 10 +++++++---
> >  include/linux/mm.h        | 12 +++++++++---
> >  include/linux/pgtable.h   |  5 +++--
> >  mm/memory.c               | 13 +++++++------
> >  mm/memremap.c             |  4 ++--
> >  mm/mmap.c                 | 16 ++++++++++------
> >  6 files changed, 38 insertions(+), 22 deletions(-)
> >
> > diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
> > index ae9645c900fa..d8adc0b42cf2 100644
> > --- a/arch/x86/mm/pat/memtype.c
> > +++ b/arch/x86/mm/pat/memtype.c
> > @@ -1046,7 +1046,7 @@ void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot, pfn_t pfn)
> >   * can be for the entire vma (in which case pfn, size are zero).
> >   */
> >  void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
> > -		 unsigned long size)
> > +		 unsigned long size, bool mm_wr_locked)
> >  {
> >  	resource_size_t paddr;
> >  	unsigned long prot;
> > @@ -1065,8 +1065,12 @@ void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
> >  		size = vma->vm_end - vma->vm_start;
> >  	}
> >  	free_pfn_range(paddr, size);
> > -	if (vma)
> > -		clear_vm_flags(vma, VM_PAT);
> > +	if (vma) {
> > +		if (mm_wr_locked)
> > +			clear_vm_flags(vma, VM_PAT);
> > +		else
> > +			mod_vm_flags_nolock(vma, 0, VM_PAT);
> > +	}
> >  }
> >
> >  /*
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 55335edd1373..48d49930c411 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -656,12 +656,18 @@ static inline void clear_vm_flags(struct vm_area_struct *vma,
> >  	vma->vm_flags &= ~flags;
> >  }
> >
> > +static inline void mod_vm_flags_nolock(struct vm_area_struct *vma,
> > +				       unsigned long set, unsigned long clear)
> > +{
> > +	vma->vm_flags |= set;
> > +	vma->vm_flags &= ~clear;
> > +}
> > +
> >  static inline void mod_vm_flags(struct vm_area_struct *vma,
> >  				unsigned long set, unsigned long clear)
> >  {
> >  	mmap_assert_write_locked(vma->vm_mm);
> > -	vma->vm_flags |= set;
> > -	vma->vm_flags &= ~clear;
> > +	mod_vm_flags_nolock(vma, set, clear);
> >  }
> >
> >  static inline void vma_set_anonymous(struct vm_area_struct *vma)
> > @@ -2087,7 +2093,7 @@ static inline void zap_vma_pages(struct vm_area_struct *vma)
> >  }
> >  void unmap_vmas(struct mmu_gather *tlb, struct maple_tree *mt,
> >  		struct vm_area_struct *start_vma, unsigned long start,
> > -		unsigned long end);
> > +		unsigned long end, bool mm_wr_locked);
> >
> >  struct mmu_notifier_range;
> >
> > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > index 5fd45454c073..c63cd44777ec 100644
> > --- a/include/linux/pgtable.h
> > +++ b/include/linux/pgtable.h
> > @@ -1185,7 +1185,8 @@ static inline int track_pfn_copy(struct vm_area_struct *vma)
> >   * can be for the entire vma (in which case pfn, size are zero).
> >   */
> >  static inline void untrack_pfn(struct vm_area_struct *vma,
> > -			       unsigned long pfn, unsigned long size)
> > +			       unsigned long pfn, unsigned long size,
> > +			       bool mm_wr_locked)
> >  {
> >  }
> >
> > @@ -1203,7 +1204,7 @@ extern void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot,
> >  			     pfn_t pfn);
> >  extern int track_pfn_copy(struct vm_area_struct *vma);
> >  extern void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
> > -			unsigned long size);
> > +			unsigned long size, bool mm_wr_locked);
> >  extern void untrack_pfn_moved(struct vm_area_struct *vma);
> >  #endif
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index d6902065e558..5b11b50e2c4a 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -1613,7 +1613,7 @@ void unmap_page_range(struct mmu_gather *tlb,
> >  static void unmap_single_vma(struct mmu_gather *tlb,
> >  		struct vm_area_struct *vma, unsigned long start_addr,
> >  		unsigned long end_addr,
> > -		struct zap_details *details)
> > +		struct zap_details *details, bool mm_wr_locked)
> >  {
> >  	unsigned long start = max(vma->vm_start, start_addr);
> >  	unsigned long end;
> > @@ -1628,7 +1628,7 @@ static void unmap_single_vma(struct mmu_gather *tlb,
> >  		uprobe_munmap(vma, start, end);
> >
> >  	if (unlikely(vma->vm_flags & VM_PFNMAP))
> > -		untrack_pfn(vma, 0, 0);
> > +		untrack_pfn(vma, 0, 0, mm_wr_locked);
> >
> >  	if (start != end) {
> >  		if (unlikely(is_vm_hugetlb_page(vma))) {
> > @@ -1675,7 +1675,7 @@ static void unmap_single_vma(struct mmu_gather *tlb,
> >   */
> >  void unmap_vmas(struct mmu_gather *tlb, struct maple_tree *mt,
> >  		struct vm_area_struct *vma, unsigned long start_addr,
> > -		unsigned long end_addr)
> > +		unsigned long end_addr, bool mm_wr_locked)
> >  {
> >  	struct mmu_notifier_range range;
> >  	struct zap_details details = {
> > @@ -1689,7 +1689,8 @@ void unmap_vmas(struct mmu_gather *tlb, struct maple_tree *mt,
> >  				start_addr, end_addr);
> >  	mmu_notifier_invalidate_range_start(&range);
> >  	do {
> > -		unmap_single_vma(tlb, vma, start_addr, end_addr, &details);
> > +		unmap_single_vma(tlb, vma, start_addr, end_addr, &details,
> > +				 mm_wr_locked);
> >  	} while ((vma = mas_find(&mas, end_addr - 1)) != NULL);
> >  	mmu_notifier_invalidate_range_end(&range);
> >  }
> > @@ -1723,7 +1724,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
> >  	 * unmap 'address-end' not 'range.start-range.end' as range
> >  	 * could have been expanded for hugetlb pmd sharing.
> >  	 */
> > -	unmap_single_vma(&tlb, vma, address, end, details);
> > +	unmap_single_vma(&tlb, vma, address, end, details, false);
> >  	mmu_notifier_invalidate_range_end(&range);
> >  	tlb_finish_mmu(&tlb);
> >  }
> > @@ -2492,7 +2493,7 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
> >
> >  	err = remap_pfn_range_notrack(vma, addr, pfn, size, prot);
> >  	if (err)
> > -		untrack_pfn(vma, pfn, PAGE_ALIGN(size));
> > +		untrack_pfn(vma, pfn, PAGE_ALIGN(size), true);
> >  	return err;
> >  }
> >  EXPORT_SYMBOL(remap_pfn_range);
> > diff --git a/mm/memremap.c b/mm/memremap.c
> > index 08cbf54fe037..2f88f43d4a01 100644
> > --- a/mm/memremap.c
> > +++ b/mm/memremap.c
> > @@ -129,7 +129,7 @@ static void pageunmap_range(struct dev_pagemap *pgmap, int range_id)
> >  	}
> >  	mem_hotplug_done();
> >
> > -	untrack_pfn(NULL, PHYS_PFN(range->start), range_len(range));
> > +	untrack_pfn(NULL, PHYS_PFN(range->start), range_len(range), true);
> >  	pgmap_array_delete(range);
> >  }
> >
> > @@ -276,7 +276,7 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params,
> >  	if (!is_private)
> >  		kasan_remove_zero_shadow(__va(range->start), range_len(range));
> >  err_kasan:
> > -	untrack_pfn(NULL, PHYS_PFN(range->start), range_len(range));
> > +	untrack_pfn(NULL, PHYS_PFN(range->start), range_len(range), true);
> >  err_pfn_remap:
> >  	pgmap_array_delete(range);
> >  	return error;
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index 2c6e9072e6a8..69d440997648 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -78,7 +78,7 @@ core_param(ignore_rlimit_data, ignore_rlimit_data, bool, 0644);
> >  static void unmap_region(struct mm_struct *mm, struct maple_tree *mt,
> >  		struct vm_area_struct *vma, struct vm_area_struct *prev,
> >  		struct vm_area_struct *next, unsigned long start,
> > -		unsigned long end);
> > +		unsigned long end, bool mm_wr_locked);
> >
> >  static pgprot_t vm_pgprot_modify(pgprot_t oldprot, unsigned long vm_flags)
> >  {
> > @@ -2136,14 +2136,14 @@ static inline void remove_mt(struct mm_struct *mm, struct ma_state *mas)
> >  static void unmap_region(struct mm_struct *mm, struct maple_tree *mt,
> >  		struct vm_area_struct *vma, struct vm_area_struct *prev,
> >  		struct vm_area_struct *next,
> > -		unsigned long start, unsigned long end)
> > +		unsigned long start, unsigned long end, bool mm_wr_locked)
> >  {
> >  	struct mmu_gather tlb;
> >
> >  	lru_add_drain();
> >  	tlb_gather_mmu(&tlb, mm);
> >  	update_hiwater_rss(mm);
> > -	unmap_vmas(&tlb, mt, vma, start, end);
> > +	unmap_vmas(&tlb, mt, vma, start, end, mm_wr_locked);
> >  	free_pgtables(&tlb, mt, vma, prev ? prev->vm_end : FIRST_USER_ADDRESS,
> >  		      next ? next->vm_start : USER_PGTABLES_CEILING);
> >  	tlb_finish_mmu(&tlb);
> > @@ -2391,7 +2391,11 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
> >  		mmap_write_downgrade(mm);
> >  	}
> >
> > -	unmap_region(mm, &mt_detach, vma, prev, next, start, end);
> > +	/*
> > +	 * We can free page tables without write-locking mmap_lock because VMAs
> > +	 * were isolated before we downgraded mmap_lock.
> > +	 */
> > +	unmap_region(mm, &mt_detach, vma, prev, next, start, end, !downgrade);
> >  	/* Statistics and freeing VMAs */
> >  	mas_set(&mas_detach, start);
> >  	remove_mt(mm, &mas_detach);
> > @@ -2704,7 +2708,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
> >
> >  		/* Undo any partial mapping done by a device driver. */
> >  		unmap_region(mm, &mm->mm_mt, vma, prev, next, vma->vm_start,
> > -			     vma->vm_end);
> > +			     vma->vm_end, true);
> >  	}
> >  	if (file && (vm_flags & VM_SHARED))
> >  		mapping_unmap_writable(file->f_mapping);
> > @@ -3031,7 +3035,7 @@ void exit_mmap(struct mm_struct *mm)
> >  	tlb_gather_mmu_fullmm(&tlb, mm);
> >  	/* update_hiwater_rss(mm) here? but nobody should be looking */
> >  	/* Use ULONG_MAX here to ensure all VMAs in the mm are unmapped */
> > -	unmap_vmas(&tlb, &mm->mm_mt, vma, 0, ULONG_MAX);
> > +	unmap_vmas(&tlb, &mm->mm_mt, vma, 0, ULONG_MAX, false);
> >  	mmap_read_unlock(mm);
> >
> >  	/*
> > --
> > 2.39.1
>
> --
> Michal Hocko
> SUSE Labs