Subject: Re: [RFC 0/3] mm/page_alloc: Remote per-cpu lists drain support
From: nsaenzju@redhat.com
To: Vlastimil Babka, akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, frederic@kernel.org,
    tglx@linutronix.de, peterz@infradead.org, mtosatti@redhat.com,
    nilal@redhat.com, mgorman@suse.de, linux-rt-users@vger.kernel.org,
    cl@linux.com, paulmck@kernel.org, ppandit@redhat.com
Date: Wed, 13 Oct 2021 14:50:08 +0200
Message-ID: <06e96819a964ca4b4ba504d0da71e81d79f3a87b.camel@redhat.com>
In-Reply-To: <38d28332-6b15-b353-5bcb-f691455c6495@suse.cz>
References: <20211008161922.942459-1-nsaenzju@redhat.com>
            <38d28332-6b15-b353-5bcb-f691455c6495@suse.cz>

Hi Vlastimil, thanks for spending time on this. Also, excuse me if I
over-explain things.

On Tue, 2021-10-12 at 17:45 +0200, Vlastimil Babka wrote:
> On 10/8/21 18:19, Nicolas Saenz Julienne wrote:
> > This series replaces mm/page_alloc's per-cpu lists drain mechanism in
> > order for it to be able to be run remotely. Currently, only a local CPU
> > is permitted to change its per-cpu lists, and it's expected to do so,
> > on-demand, whenever a process demands it (by means of queueing a drain
> > task on the local CPU). Most systems will handle this promptly, but it'll
> > cause problems for NOHZ_FULL CPUs that can't take any sort of
> > interruption without breaking their functional guarantees (latency,
> > bandwidth, etc.). Having a way for these processes to remotely drain the
> > lists themselves will make co-existing with isolated CPUs possible, and
> > comes with minimal performance[1]/memory cost to other users.
> >
> > The new algorithm will atomically switch the pointer to the per-cpu
> > lists and use RCU to make sure it's not being used before draining them.
> >
> > I'm interested in any sort of feedback, but especially validating that
> > the approach is acceptable, and any tests/benchmarks you'd like to see
> > run against it.
>
> So let's consider the added alloc/free fast paths overhead:
> - Patch 1 - __alloc_pages_bulk() used to determine pcp_list once, now
>   it's determined for each allocated page in __rmqueue_pcplist().
This one I can avoid, I missed the performance aspect of it. I was aiming
at making the code bearable.

> - Patch 2 - adds indirection from pcp->$foo to pcp->lp->$foo in each
>   operation
> - Patch 3
>   - extra irqsave/irqrestore in free_pcppages_bulk (amortized)
>   - rcu_dereference_check() in free_unref_page_commit() and
>     __rmqueue_pcplist()

Yes.

> BTW - I'm not sure if the RCU usage is valid here.
>
> The "read side" (normal operations) is using:
> rcu_dereference_check(pcp->lp,
>                       lockdep_is_held(this_cpu_ptr(&pagesets.lock)));
>
> where the lockdep parameter according to the comments for
> rcu_dereference_check() means
>
> "indicate to lockdep that foo->bar may only be dereferenced if either
> rcu_read_lock() is held, or that the lock required to replace the bar
> struct at foo->bar is held."

You missed the "Could be used to" at the beginning of the sentence :). That
said, I believe this is similar to what I'm doing, only that the situation
is more complex.

> but you are not taking rcu_read_lock()

I am taking the rcu_read_lock() implicitly, it's explained in 'struct
per_cpu_pages', and in more depth below.

> and the "write side" (remote draining) actually doesn't take
> pagesets.lock, so it's not true that the "lock required to replace ... is
> held"? The write side uses rcu_replace_pointer(...,
> mutex_is_locked(&pcpu_drain_mutex)) which is a different lock.

The thing 'pagesets.lock' protects against is concurrent access to pcp->lp's
content, as opposed to its address. pcp->lp is dereferenced atomically, so no
need for locking on that operation.

The drain side never accesses pcp->lp's contents concurrently, it changes
pcp->lp's address and makes sure all CPUs are in sync with the new address
before clearing the stale data.

Just for the record, I think a better representation of what the 'check' in
rcu_dereference_check() means is:

 * Do an rcu_dereference(), but check that the conditions under which the
 * dereference will take place are correct.  Typically the conditions
 * indicate the various locking conditions that should be held at that
 * point.  The check should return true if the conditions are satisfied.
 * An implicit check for being in an RCU read-side critical section
 * (rcu_read_lock()) is included.

So for the read side, that is, code reading pcp->lp's address and its
contents, the conditions to be met are: being in an RCU critical section, to
make sure RCU is keeping track of it, and holding 'pagesets.lock', to avoid
concurrently accessing pcp->lp's contents. The latter is achieved either by
disabling local irqs or by disabling migration and taking a per-cpu
rt_spinlock. Conveniently, these are actions that implicitly delimit an RCU
critical section (see [1] and [2]). So the 'pagesets.lock' check fully covers
the read side locking/RCU concerns.

On the write side, the drain has to make sure pcp->lp's address change is
atomic (this is achieved through rcu_replace_pointer()) and that lp->drain is
emptied before a new drain happens. So checking for pcpu_drain_mutex being
held is good enough.

> IOW, synchronize_rcu_expedited() AFAICS has nothing (no rcu_read_lock())
> to synchronize against? Might accidentally work on !RT thanks to disabled
> irqs, but not sure about with RT lock semantics of the local_lock...
>
> So back to overhead, if I'm correct above we can assume that there would
> be also rcu_read_lock() in the fast paths.

As I explained above, no need.
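
FWIW, to make the pattern I'm describing concrete, here is a rough sketch of
both sides. It's simplified and only illustrative, not verbatim from the
patches; names like 'new_lp' and 'old' are just placeholders:

    /* Read side: runs under pagesets.lock (a local_lock). On !RT this
     * disables local irqs, on RT it disables migration and takes a
     * per-cpu rt_spinlock; both implicitly delimit an RCU read-side
     * critical section, so no explicit rcu_read_lock() is needed.
     */
    local_lock_irqsave(&pagesets.lock, flags);
    lp = rcu_dereference_check(pcp->lp,
                lockdep_is_held(this_cpu_ptr(&pagesets.lock)));
    /* ... manipulate the lists hanging off 'lp' ... */
    local_unlock_irqrestore(&pagesets.lock, flags);

    /* Write side (remote drain): serialized by pcpu_drain_mutex. The
     * pointer switch is atomic; the stale lists are only emptied once
     * every CPU is guaranteed to see the new pointer.
     */
    mutex_lock(&pcpu_drain_mutex);
    old = rcu_replace_pointer(pcp->lp, new_lp,
                mutex_is_locked(&pcpu_drain_mutex));
    synchronize_rcu_expedited();    /* wait out all users of 'old' */
    /* ... free whatever pages are still sitting on 'old' ... */
    mutex_unlock(&pcpu_drain_mutex);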
> The alternative proposed by tglx was IIRC that there would be a spinlock
> on each cpu, which would be mostly uncontended except when draining. Maybe
> an uncontended spin lock/unlock would have lower overhead than all of the
> above? It would be certainly simpler, so I would probably try that first
> and see if it's acceptable?

You have a point here. I'll provide a performance rundown of both solutions.
This one is a bit more complex, that's for sure.

Thanks!

[1] See rcu_read_lock()'s description: "synchronize_rcu() wait for regions
    of code with preemption disabled, including regions of code with
    interrupts or softirqs disabled."

[2] See kernel/locking/spinlock_rt.c: "The RT [spinlock] substitutions
    explicitly disable migration and take rcu_read_lock() across the lock
    held section."

-- 
Nicolás Sáenz