From: Suren Baghdasaryan <surenb@google.com>
Date: Mon, 16 Jan 2023 14:36:12 -0800
Subject: Re: [PATCH 41/41] mm: replace rw_semaphore with atomic_t in vma_lock
To: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com,
    mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org,
    mgorman@techsingularity.net, dave@stgolabs.net, willy@infradead.org,
    liam.howlett@oracle.com, peterz@infradead.org, ldufour@linux.ibm.com,
    laurent.dufour@fr.ibm.com, paulmck@kernel.org, luto@kernel.org,
    songliubraving@fb.com, peterx@redhat.com, david@redhat.com,
    dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de,
    kent.overstreet@linux.dev, punit.agrawal@bytedance.com,
    lstoakes@gmail.com, peterjung1337@gmail.com, rientjes@google.com,
    axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
    jannh@google.com, shakeelb@google.com, tatashin@google.com,
    edumazet@google.com, gthelen@google.com, gurua@google.com,
    arjunroy@google.com, soheil@google.com, hughlynch@google.com,
    leewalsh@google.com, posk@google.com, linux-mm@kvack.org,
    linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
    x86@kernel.org, linux-kernel@vger.kernel.org, kernel-team@android.com
References: <20230109205336.3665937-1-surenb@google.com>
    <20230109205336.3665937-42-surenb@google.com>

On Mon, Jan 16, 2023 at 3:15 AM Hyeonggon Yoo <42.hyeyoo@gmail.com> wrote:
>
> On Mon, Jan 09, 2023 at 12:53:36PM -0800, Suren Baghdasaryan wrote:
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index d40bf8a5e19e..294dd44b2198 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -627,12 +627,16 @@ static inline void vma_write_lock(struct vm_area_struct *vma)
> >          * mm->mm_lock_seq can't be concurrently modified.
> >          */
> >         mm_lock_seq = READ_ONCE(vma->vm_mm->mm_lock_seq);
> > -       if (vma->vm_lock_seq == mm_lock_seq)
> > +       if (vma->vm_lock->lock_seq == mm_lock_seq)
> >                 return;
> >
> > -       down_write(&vma->vm_lock->lock);
> > -       vma->vm_lock_seq = mm_lock_seq;
> > -       up_write(&vma->vm_lock->lock);
> > +       if (atomic_cmpxchg(&vma->vm_lock->count, 0, -1))
> > +               wait_event(vma->vm_mm->vma_writer_wait,
> > +                          atomic_cmpxchg(&vma->vm_lock->count, 0, -1) == 0);
> > +       vma->vm_lock->lock_seq = mm_lock_seq;
> > +       /* Write barrier to ensure lock_seq change is visible before count */
> > +       smp_wmb();
> > +       atomic_set(&vma->vm_lock->count, 0);
> >  }
> >
> >  /*
> > @@ -643,20 +647,28 @@ static inline void vma_write_lock(struct vm_area_struct *vma)
> >  static inline bool vma_read_trylock(struct vm_area_struct *vma)
> >  {
> >         /* Check before locking. A race might cause false locked result. */
> > -       if (vma->vm_lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))
> > +       if (vma->vm_lock->lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))
> >                 return false;
> >
> > -       if (unlikely(down_read_trylock(&vma->vm_lock->lock) == 0))
> > +       if (unlikely(!atomic_inc_unless_negative(&vma->vm_lock->count)))
> >                 return false;
> >
> > +       /* If atomic_t overflows, restore and fail to lock. */
> > +       if (unlikely(atomic_read(&vma->vm_lock->count) < 0)) {
> > +               if (atomic_dec_and_test(&vma->vm_lock->count))
> > +                       wake_up(&vma->vm_mm->vma_writer_wait);
> > +               return false;
> > +       }
> > +
> >         /*
> >          * Overflow might produce false locked result.
> >          * False unlocked result is impossible because we modify and check
> >          * vma->vm_lock_seq under vma->vm_lock protection and mm->mm_lock_seq
> >          * modification invalidates all existing locks.
> >          */
> > -       if (unlikely(vma->vm_lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))) {
> > -               up_read(&vma->vm_lock->lock);
> > +       if (unlikely(vma->vm_lock->lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))) {
> > +               if (atomic_dec_and_test(&vma->vm_lock->count))
> > +                       wake_up(&vma->vm_mm->vma_writer_wait);
> >                 return false;
> >         }
>
> With this change readers can cause writers to starve.
> What about checking waitqueue_active() before or after increasing
> vma->vm_lock->count?

The readers are in the page fault path, which is the fast path, while
writers performing updates are in the slow path. Therefore I *think*
starving writers should not be a big issue. So far I haven't seen
issues with that in benchmarks, but maybe there is such a case?

>
> --
> Thanks,
> Hyeonggon
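
For reference, a minimal sketch of the waitqueue_active() variant being
discussed, reusing the names from this patch (illustrative only, not a
tested change): readers refuse the per-VMA lock whenever a writer is
already queued, so the fault falls back to mmap_lock instead of keeping
vma->vm_lock->count elevated indefinitely.

static inline bool vma_read_trylock(struct vm_area_struct *vma)
{
        /* Check before locking. A race might cause false locked result. */
        if (vma->vm_lock->lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))
                return false;

        /*
         * Writer-priority check (the suggested addition): if a writer
         * is already sleeping on vma_writer_wait, back off.
         * waitqueue_active() is racy without the queue lock, but both
         * race outcomes are benign here: a missed writer waits for at
         * most one more reader to drain, and a false positive only
         * sends one fault down the mmap_lock slow path.
         */
        if (waitqueue_active(&vma->vm_mm->vma_writer_wait))
                return false;

        if (unlikely(!atomic_inc_unless_negative(&vma->vm_lock->count)))
                return false;

        /* If atomic_t overflows, restore and fail to lock. */
        if (unlikely(atomic_read(&vma->vm_lock->count) < 0)) {
                if (atomic_dec_and_test(&vma->vm_lock->count))
                        wake_up(&vma->vm_mm->vma_writer_wait);
                return false;
        }

        /*
         * Recheck the sequence count: a concurrent writer may have
         * invalidated this lock between the first check and the
         * increment, as in the patch above.
         */
        if (unlikely(vma->vm_lock->lock_seq == READ_ONCE(vma->vm_mm->mm_lock_seq))) {
                if (atomic_dec_and_test(&vma->vm_lock->count))
                        wake_up(&vma->vm_mm->vma_writer_wait);
                return false;
        }

        return true;
}

Checking after the increment instead (the other variant Hyeonggon
mentions) would close the window in which a writer queues between the
waitqueue_active() check and the increment, at the cost of an extra
atomic_dec_and_test() on the reader side when it fires.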