From: Suren Baghdasaryan <surenb@google.com>
Date: Fri, 4 Aug 2023 12:41:23 -0700
Subject: Re: [PATCH v4 1/6] mm: enable page walking API to lock vmas during the walk
To: Andrew Morton
Cc: torvalds@linux-foundation.org, jannh@google.com, willy@infradead.org, liam.howlett@oracle.com, david@redhat.com, peterx@redhat.com, ldufour@linux.ibm.com, vbabka@suse.cz, michel@lespinasse.org, jglisse@google.com, mhocko@suse.com, hannes@cmpxchg.org, dave@stgolabs.net, hughd@google.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, stable@vger.kernel.org, kernel-team@android.com, Linus Torvalds
References: <20230804152724.3090321-1-surenb@google.com> <20230804152724.3090321-2-surenb@google.com> <20230804121416.533bb81336ded8f170da097e@linux-foundation.org>
In-Reply-To: <20230804121416.533bb81336ded8f170da097e@linux-foundation.org>
On Fri, Aug 4, 2023 at 7:14 PM Andrew Morton wrote:
>
> On Fri, 4 Aug 2023 08:27:19 -0700 Suren Baghdasaryan wrote:
>
> > walk_page_range() and friends often operate under write-locked mmap_lock.
> > With the introduction of vma locks, the vmas have to be locked as well
> > during such walks to prevent concurrent page faults in these areas.
> > Add an additional member to mm_walk_ops to indicate locking requirements
> > for the walk.
> >
> > ...
> >
> > 18 files changed, 100 insertions(+), 20 deletions(-)
>
> That's a big patch for a -stable backport.
>
> Presumably the various -stable maintainers will be wondering why we're
> doing this. But, as is so often the case, the changelog fails to
> describe any user-visible effects of the change. Please send this info
> and I'll add it to the changelog.

The change ensures that page walks which prevent concurrent page faults
by write-locking mmap_lock operate correctly after the introduction of
per-vma locks. With per-vma locks, page faults can be handled under the
vma lock without taking mmap_lock at all, so write-locking mmap_lock
alone no longer stops them. The change ensures vmas are properly locked
during such walks.

A sample issue this solves is do_mbind() performing queue_pages_range()
to queue pages for migration. Without this change, a page can be faulted
into the area concurrently and be left out of the migration.