From: Yosry Ahmed <yosryahmed@google.com>
Date: Fri, 24 Mar 2023 15:50:10 -0700
Subject: Re: [RFC PATCH 1/7] cgroup: rstat: only disable interrupts for the percpu lock
To: Waiman Long
Cc: Tejun Heo, Josef Bacik, Jens Axboe, Zefan Li, Johannes Weiner,
    Michal Hocko, Roman Gushchin, Shakeel Butt, Muchun Song,
    Andrew Morton, Vasily Averin, cgroups@vger.kernel.org,
    linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, bpf@vger.kernel.org
In-Reply-To: <53582a07-81c6-35eb-10bf-7920b27483be@redhat.com>
References: <20230323040037.2389095-1-yosryahmed@google.com>
    <20230323040037.2389095-2-yosryahmed@google.com>
    <53582a07-81c6-35eb-10bf-7920b27483be@redhat.com>

On Fri, Mar 24, 2023 at 7:12 AM Waiman Long wrote:
>
> On 3/24/23 03:22, Yosry Ahmed wrote:
> > On Thu, Mar 23, 2023 at 6:39 PM Tejun Heo wrote:
> >> Hello,
> >>
> >> On Thu, Mar 23, 2023 at 04:00:31AM +0000, Yosry Ahmed wrote:
> >>> Currently, when sleeping is not allowed during rstat flushing, we hold
> >>> the global rstat lock with interrupts disabled throughout the entire
> >>> flush operation. Flushing is an O(# cgroups * # cpus) operation, and
> >>> having interrupts disabled throughout is dangerous.
> >>>
> >>> For some contexts, we may not want to sleep, but can be interrupted
> >>> (e.g. while holding a spinlock or an RCU read lock). As such, do not
> >>> disable interrupts throughout rstat flushing, only while holding the
> >>> percpu lock. This breaks down the O(# cgroups * # cpus) duration with
> >>> interrupts disabled into a series of O(# cgroups) durations.
> >>>
> >>> Furthermore, if a cpu is spinning waiting for the global rstat lock, it
> >>> doesn't need to spin with interrupts disabled anymore.
> >> I'm generally not a fan of big spin locks w/o irq protection. They too
> >> often become a source of unpredictable latency spikes. As you said, the
> >> global rstat lock can be held for quite a while. Removing _irq makes irq
> >> latency better on the CPU, but on the other hand it makes it a lot more
> >> likely that the lock is gonna be held even longer, possibly significantly
> >> so depending on the configuration and workload, which will in turn stall
> >> other CPUs waiting for the lock. Sure, irqs are being serviced quicker,
> >> but if the cost is more and longer !irq context multi-cpu stalls, what's
> >> the point?
> >>
> >> I don't think there's anything which requires the global lock to be held
> >> throughout the entire flushing sequence, and irq needs to be disabled
> >> when grabbing the percpu lock anyway, so why not just release the global
> >> lock on CPU boundaries instead? We don't really lose anything significant
> >> that way. The durations of the irq-disabled sections are still about the
> >> same as in the currently proposed solution at O(# cgroups), and we avoid
> >> the risk of unexpectedly holding the global lock for too long from
> >> getting hit repeatedly by irqs while holding it.
> > Thanks for taking a look!
> >
> > I think a problem with this approach is that we risk having to contend
> > for the global lock at every CPU boundary in atomic contexts. Right
> Isn't it the plan to just do a trylock in atomic contexts so that it
> won't get stuck spinning for the lock for an indeterminate amount of
> time?

Not exactly. On the memory controller side, we currently only allow
one flusher at a time and force all flushers to flush the full
hierarchy, such that concurrent flushers can skip. This is done for
both atomic and non-atomic contexts.
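
Roughly, the idea is something like the following sketch (simplified,
with a made-up flag and function name, not the actual memcg code):

static atomic_t flush_ongoing;

void flush_memcg_stats(void)
{
        /*
         * Allow only one flusher at a time. The ongoing flusher
         * flushes the whole hierarchy anyway, so everyone else can
         * skip instead of waiting, whether they can sleep or not.
         */
        if (atomic_xchg(&flush_ongoing, 1))
                return;

        cgroup_rstat_flush(root_mem_cgroup->css.cgroup);
        atomic_set(&flush_ongoing, 0);
}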

Flushers outside the memory controller can still contend for the lock
among themselves or with flushers in the memory controller. In that
case, instead of contending for the lock once per flush, they would
contend for it at each CPU boundary.
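
To make the two schemes concrete, here is a rough sketch of the flush
loop in each case (simplified, with made-up function names, not the
actual cgroup_rstat_flush_locked() code):

/*
 * This series: hold the global lock for the whole flush; disable
 * irqs only around the percpu lock.
 */
static void flush_hold_throughout(struct cgroup *cgrp)
{
        int cpu;

        spin_lock(&cgroup_rstat_lock);
        for_each_possible_cpu(cpu) {
                raw_spinlock_t *cpu_lock =
                        per_cpu_ptr(&cgroup_rstat_cpu_lock, cpu);
                unsigned long flags;

                raw_spin_lock_irqsave(cpu_lock, flags);
                /* ... flush cgrp's updated tree on this cpu ... */
                raw_spin_unlock_irqrestore(cpu_lock, flags);
        }
        spin_unlock(&cgroup_rstat_lock);
}

/*
 * Tejun's suggestion: also drop the global lock at each CPU
 * boundary. The irq-disabled sections are the same, but the global
 * lock is taken once per cpu instead of once per flush.
 */
static void flush_release_at_cpu_boundary(struct cgroup *cgrp)
{
        int cpu;

        for_each_possible_cpu(cpu) {
                spin_lock(&cgroup_rstat_lock);
                /* ... take the percpu lock with irqs disabled and
                 * flush this cpu, as above ... */
                spin_unlock(&cgroup_rstat_lock);
        }
}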

> > now we contend for the global lock once, and once we have it we go
> > through all CPUs to flush, only having to contend with updates taking
> > the percpu locks at this point. If we unconditionally release &
> > reacquire the global lock at every CPU boundary then we may contend
> > for it much more frequently with concurrent flushers.
>
> Note that with the use of qspinlock in all the major arches, the impact
> of thundering herds of lockers is much less serious than before. There
> is certainly some overhead in doing multiple lock acquires and
> releases, but that shouldn't be too excessive.

I ran some tests to measure this. Since I am using a cgroup v1
hierarchy, I cannot reproduce contention between memory controller
flushers and non-memory controller flushers, so I removed the "one
memory flusher only" restriction to have concurrent memory flushers
compete for the global rstat lock, to measure the impact:

Before (only one flusher allowed to compete for the global rstat lock):

---cgroup_rstat_flush
   |
    --1.27%--cgroup_rstat_flush_locked
              |
               --0.94%--mem_cgroup_css_rstat_flush

After (concurrent flushers allowed to compete for the global rstat lock):

---cgroup_rstat_flush
   |
   |--4.94%--_raw_spin_lock
   |          |
   |           --4.94%--queued_spin_lock_slowpath
   |
    --0.92%--cgroup_rstat_flush_locked
              |
               --0.56%--mem_cgroup_css_rstat_flush

This was run with 20 processes trying to flush concurrently, so it may
be excessive, but it seems like in this case lock contention makes a
significant difference.

Again, this is not a regression for non-atomic flushers, as they
already compete for the lock at every CPU boundary, but for atomic
flushers that don't give up the lock at all today, it would be a
regression to start competing for the lock at every CPU boundary.

This patch series aims to minimize the number of atomic flushers (it
brings them down to two, one of which is uncommon), so this may be
fine. My main concern is that for some flushers that this series
converts from atomic to non-atomic, we may notice a regression later
and revert the conversion (e.g. the refault path), which is why I have
them in separate patches. If we regress the atomic flushing path, it
would be a larger surgery to restore the performance of these paths --
which is why I would rather keep the atomic path free of excessive
lock contention.

Thoughts?

> I am all in for reducing lock hold time as much as possible as it will
> improve the response time.
>
> Cheers,
> Longman