From: Suren Baghdasaryan <surenb@google.com>
Date: Mon, 3 Jul 2023 15:27:09 +0000
Subject: Re: [PATCH v7 0/6] Per-VMA lock support for swap and userfaults
To: Andrew Morton
Cc: willy@infradead.org, hannes@cmpxchg.org, mhocko@suse.com, josef@toxicpanda.com, jack@suse.cz, ldufour@linux.ibm.com, laurent.dufour@fr.ibm.com, michel@lespinasse.org, liam.howlett@oracle.com, jglisse@google.com, vbabka@suse.cz, minchan@google.com, dave@stgolabs.net, punit.agrawal@bytedance.com, lstoakes@gmail.com, hdanton@sina.com, apopple@nvidia.com, peterx@redhat.com, ying.huang@intel.com, david@redhat.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, pasha.tatashin@soleen.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@android.com
In-Reply-To: <20230702105038.5d0f729109d329013af4caa3@linux-foundation.org>
References: <20230630211957.1341547-1-surenb@google.com> <20230702105038.5d0f729109d329013af4caa3@linux-foundation.org>
On Sun, Jul 2, 2023 at 5:50 PM Andrew Morton wrote:
>
> On Fri, 30 Jun 2023 14:19:51 -0700 Suren Baghdasaryan <surenb@google.com> wrote:
>
> > When per-VMA locks were introduced in [1] several types of page faults
> > would still fall back to mmap_lock to keep the patchset simple. Among them
> > are swap and userfault pages. The main reason for skipping those cases was
> > the fact that mmap_lock could be dropped while handling these faults and
> > that required additional logic to be implemented.
> > Implement the mechanism to allow per-VMA locks to be dropped for these
> > cases.
> > First, change handle_mm_fault to drop per-VMA locks when returning
> > VM_FAULT_RETRY or VM_FAULT_COMPLETED to be consistent with the way
> > mmap_lock is handled. Then change folio_lock_or_retry to accept vm_fault
> > and return vm_fault_t which simplifies later patches. Finally allow swap
> > and uffd page faults to be handled under per-VMA locks by dropping per-VMA
> > and retrying, the same way it's done under mmap_lock.
> > Naturally, once the VMA lock is dropped that VMA should be assumed unstable
> > and can't be used.
>
> Is there any measurable performance benefit from this?

Good point. I haven't measured it, but I assume it will have the same
effect as the other page fault cases already handled under per-VMA locks
(reduced mmap_lock contention). I'll try to create a test to measure
the effect.
Thanks,
Suren.
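
P.S. For anyone skimming the thread, here is a rough sketch of the
fallback protocol the cover letter describes. It is simplified and not
the actual patch: the helper name is made up, and the real series wires
the vma_end_read() call into handle_mm_fault() and the individual fault
paths rather than open-coding it like this. It only illustrates the
idea: a fault taken under a per-VMA lock that is about to block (swap-in,
userfaultfd, ...) releases the VMA lock and returns VM_FAULT_RETRY so the
caller retries, falling back to mmap_lock.

#include <linux/mm.h>

/* Sketch only: condensed illustration of "drop per-VMA lock and retry". */
static vm_fault_t sketch_blocking_fault(struct vm_fault *vmf)
{
	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
		/*
		 * We only hold the per-VMA read lock and cannot block here.
		 * Release it and ask the caller to retry; after this point
		 * the VMA is unstable and must not be touched.
		 */
		vma_end_read(vmf->vma);
		return VM_FAULT_RETRY;
	}

	/* Under mmap_lock: the existing slow path can proceed as before. */
	return 0;
}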