Date: Mon, 12 Jun 2023 14:34:30 -0400
From: Peter Xu <peterx@redhat.com>
To: Suren Baghdasaryan
Cc: akpm@linux-foundation.org, willy@infradead.org, hannes@cmpxchg.org,
    mhocko@suse.com, josef@toxicpanda.com, jack@suse.cz, ldufour@linux.ibm.com,
    laurent.dufour@fr.ibm.com, michel@lespinasse.org, liam.howlett@oracle.com,
    jglisse@google.com, vbabka@suse.cz, minchan@google.com, dave@stgolabs.net,
    punit.agrawal@bytedance.com, lstoakes@gmail.com, hdanton@sina.com,
    apopple@nvidia.com, ying.huang@intel.com, david@redhat.com,
    yuzhao@google.com, dhowells@redhat.com, hughd@google.com,
    viro@zeniv.linux.org.uk, brauner@kernel.org, pasha.tatashin@soleen.com,
    linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, kernel-team@android.com
Subject: Re: [PATCH v2 4/6] mm: drop VMA lock before waiting for migration
References: <20230609005158.2421285-1-surenb@google.com>
 <20230609005158.2421285-5-surenb@google.com>
On Mon, Jun 12, 2023 at 09:07:38AM -0700, Suren Baghdasaryan wrote:
> On Mon, Jun 12, 2023 at 6:36 AM Peter Xu wrote:
> >
> > On Fri, Jun 09, 2023 at 06:29:43PM -0700, Suren Baghdasaryan wrote:
> > > On Fri, Jun 9, 2023 at 3:30 PM Suren Baghdasaryan wrote:
> > > >
> > > > On Fri, Jun 9, 2023 at 1:42 PM Peter Xu wrote:
> > > > >
> > > > > On Thu, Jun 08, 2023 at 05:51:56PM -0700, Suren Baghdasaryan wrote:
> > > > > >
> > > > > > migration_entry_wait does not need VMA lock, therefore it can be dropped
> > > > > > before waiting. Introduce VM_FAULT_VMA_UNLOCKED to indicate that VMA
> > > > > > lock was dropped while in handle_mm_fault().
> > > > > > Note that once VMA lock is dropped, the VMA reference can't be used as
> > > > > > there are no guarantees it was not freed.
> > > > >
> > > > > Then vma lock behaves differently from mmap read lock, am I right?  Can we
> > > > > still make them match on behaviors, or there's reason not to do so?
> > > >
> > > > I think we could match their behavior by also dropping mmap_lock here
> > > > when fault is handled under mmap_lock (!(fault->flags &
> > > > FAULT_FLAG_VMA_LOCK)).
> > > > I missed the fact that VM_FAULT_COMPLETED can be used to skip dropping
> > > > mmap_lock in do_page_fault(), so indeed, I might be able to use
> > > > VM_FAULT_COMPLETED to skip vma_end_read(vma) for per-vma locks as well
> > > > instead of introducing FAULT_FLAG_VMA_LOCK. I think that was your idea
> > > > of reusing existing flags?
> > >
> > > Sorry, I meant VM_FAULT_VMA_UNLOCKED, not FAULT_FLAG_VMA_LOCK in the
> > > above reply.
> > >
> > > I took a closer look into using VM_FAULT_COMPLETED instead of
> > > VM_FAULT_VMA_UNLOCKED but when we fall back from per-vma lock to
> > > mmap_lock we need to retry with an indication that the per-vma lock
> > > was dropped. Returning (VM_FAULT_RETRY | VM_FAULT_COMPLETE) to
> > > indicate such state seems strange to me ("retry" and "complete" seem
> >
> > Not relevant to this migration patch, but for the whole idea I was thinking
> > whether it should just work if we simply:
> >
> >         fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
> > -       vma_end_read(vma);
> > +       if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
> > +               vma_end_read(vma);
> >
> > ?
>
> Today when we can't handle a page fault under per-vma locks we return
> VM_FAULT_RETRY, in which case per-vma lock is dropped and the fault is

Oh I see what I missed.  I think it may not be a good idea to reuse
VM_FAULT_RETRY just for that, because it makes VM_FAULT_RETRY even more
complicated - now it adds one more case where the lock is not released,
that's when PER_VMA even if !NOWAIT.

> retried under mmap_lock. The condition you suggest above would not
> drop per-vma lock for VM_FAULT_RETRY case and would break the current
> fallback mechanism.
>
> However your suggestion gave me an idea. I could indicate that per-vma
> lock got dropped using vmf structure (like Matthew suggested before)
> and once handle_pte_fault(vmf) returns I could check if it returned
> VM_FAULT_RETRY but per-vma lock is still held.  If that happens I can
> call vma_end_read() before returning from __handle_mm_fault(). That
> way any time handle_mm_fault() returns VM_FAULT_RETRY per-vma lock
> will be already released, so your condition in do_page_fault() will
> work correctly. That would eliminate the need for a new
> VM_FAULT_VMA_UNLOCKED flag. WDYT?

Sounds good.  So probably that's the major pain for now with the legacy
fallback (I'll have commented if I noticed it with the initial vma lock
support..).  I assume that'll go away as long as we recover the
VM_FAULT_RETRY semantics to before.

-- 
Peter Xu