From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID:
Date: Wed, 11 Sep 2024 09:39:37 -0400
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC PATCH v1 1/4] Introducing qpw_lock() and per-cpu queue &
 flush work
To: Leonardo Bras
Cc: Johannes Weiner, Michal Hocko, Roman Gushchin, Shakeel Butt,
 Muchun Song, Andrew Morton, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Vlastimil Babka,
 Hyeonggon Yoo <42.hyeyoo@gmail.com>, Thomas Gleixner, Marcelo Tosatti,
 linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-mm@kvack.org
References: <20240622035815.569665-1-leobras@redhat.com>
 <20240622035815.569665-2-leobras@redhat.com>
Content-Language: en-US
From: Waiman Long
In-Reply-To:
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 9/11/24 03:17, Leonardo Bras wrote:
> On Wed, Sep 04, 2024 at 05:39:01PM -0400, Waiman Long wrote:
>> On 6/21/24 23:58, Leonardo Bras wrote:
>>> Some places in the kernel implement a parallel programming strategy
>>> consisting of local_lock()s for most of the work, with a few rare remote
>>> operations scheduled on the target cpu. This keeps cache bouncing low,
>>> since the cacheline tends to stay mostly local, and avoids the cost of
>>> locks in non-RT kernels, even though the very few remote operations will
>>> be expensive due to scheduling overhead.
>>>
>>> On the other hand, for RT workloads this can represent a problem: getting
>>> an important workload scheduled out to deal with some unrelated task is
>>> sure to introduce unexpected deadline misses.
>>>
>>> It's interesting, though, that local_lock()s in RT kernels become
>>> spinlock()s. We can make use of those to avoid scheduling work on a
>>> remote cpu by directly updating another cpu's per_cpu structure, while
>>> holding its spinlock().
>>>
>>> In order to do that, it's necessary to introduce a new set of functions
>>> to make it possible to get another cpu's per-cpu "local" lock
>>> (qpw_{un,}lock*), and also the corresponding queue_percpu_work_on() and
>>> flush_percpu_work() helpers to run the remote work.
>>>
>>> On non-RT kernels, no changes are expected, as every one of the
>>> introduced helpers works exactly the same as the current implementation:
>>> qpw_{un,}lock*()       -> local_{un,}lock*() (ignores cpu parameter)
>>> queue_percpu_work_on() -> queue_work_on()
>>> flush_percpu_work()    -> flush_work()
>>>
>>> For RT kernels, though, qpw_{un,}lock*() will use the extra cpu parameter
>>> to select the correct per-cpu structure to work on, and acquire the
>>> spinlock for that cpu.
>>>
>>> queue_percpu_work_on() will just call the requested function on the
>>> current cpu, which will operate on another cpu's per-cpu object. Since
>>> the local_lock()s become spinlock()s in PREEMPT_RT, we are safe doing
>>> that.
>>>
>>> flush_percpu_work() then becomes a no-op since no work is actually
>>> scheduled on a remote cpu.
>>>
>>> Some minimal code rework is needed in order to make this mechanism work:
>>> the calls to local_{un,}lock*() in the functions that are currently
>>> scheduled on remote cpus need to be replaced by qpw_{un,}lock_n*(), so
>>> in RT kernels they can reference a different cpu. It's also necessary to
>>> use a qpw_struct instead of a work_struct, but it just contains a work
>>> struct and, in PREEMPT_RT, the target cpu.
>>>
>>> This should have almost no impact on non-RT kernels: a few
>>> this_cpu_ptr() calls will become per_cpu_ptr(, smp_processor_id()).
>>>
>>> On RT kernels, this should improve performance and reduce latency by
>>> removing scheduling noise.
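
For illustration, here is a minimal sketch of how a caller might use the
proposed API. All names below (my_pcpu, my_wq, my_refresh, my_reset_all)
are hypothetical and not taken from the patch:

struct my_pcpu {
	local_lock_t lock;	/* becomes a spinlock_t on PREEMPT_RT */
	long counter;
	struct qpw_struct qpw;
};

static DEFINE_PER_CPU(struct my_pcpu, my_pcpu) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static struct workqueue_struct *my_wq;

/* On non-RT this runs as a work item on the target cpu; on RT it is
 * called synchronously on the requesting cpu and reaches across with
 * per_cpu_ptr() while holding the target cpu's spinlock.
 */
static void my_refresh(struct work_struct *work)
{
	struct qpw_struct *qpw = container_of(work, struct qpw_struct, work);
	int cpu = qpw_get_cpu(qpw);
	struct my_pcpu *p = per_cpu_ptr(&my_pcpu, cpu);

	qpw_lock(&my_pcpu.lock, cpu);
	p->counter = 0;
	qpw_unlock(&my_pcpu.lock, cpu);
}

/* Reset every cpu's counter and wait for completion. */
static void my_reset_all(void)
{
	int cpu;

	for_each_online_cpu(cpu) {
		struct qpw_struct *qpw = &per_cpu_ptr(&my_pcpu, cpu)->qpw;

		INIT_QPW(qpw, my_refresh, cpu);
		queue_percpu_work_on(cpu, my_wq, qpw);
	}

	/* Effectively a no-op on RT, where the work ran synchronously. */
	for_each_online_cpu(cpu)
		flush_percpu_work(&per_cpu_ptr(&my_pcpu, cpu)->qpw);
}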
>>>
>>> Signed-off-by: Leonardo Bras
>>> ---
>>>  include/linux/qpw.h | 88 +++++++++++++++++++++++++++++++++++++++++++++
>>>  1 file changed, 88 insertions(+)
>>>  create mode 100644 include/linux/qpw.h
>>>
>>> diff --git a/include/linux/qpw.h b/include/linux/qpw.h
>>> new file mode 100644
>>> index 000000000000..ea2686a01e5e
>>> --- /dev/null
>>> +++ b/include/linux/qpw.h
>>> @@ -0,0 +1,88 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>> +#ifndef _LINUX_QPW_H
>>> +#define _LINUX_QPW_H
>>> +
>>> +#include "linux/local_lock.h"
>>> +#include "linux/workqueue.h"
>>> +
>>> +#ifndef CONFIG_PREEMPT_RT
>>> +
>>> +struct qpw_struct {
>>> +	struct work_struct work;
>>> +};
>>> +
>>> +#define qpw_lock(lock, cpu) \
>>> +	local_lock(lock)
>>> +
>>> +#define qpw_unlock(lock, cpu) \
>>> +	local_unlock(lock)
>>> +
>>> +#define qpw_lock_irqsave(lock, flags, cpu) \
>>> +	local_lock_irqsave(lock, flags)
>>> +
>>> +#define qpw_unlock_irqrestore(lock, flags, cpu) \
>>> +	local_unlock_irqrestore(lock, flags)
>>> +
>>> +#define queue_percpu_work_on(c, wq, qpw) \
>>> +	queue_work_on(c, wq, &(qpw)->work)
>>> +
>>> +#define flush_percpu_work(qpw) \
>>> +	flush_work(&(qpw)->work)
>>> +
>>> +#define qpw_get_cpu(qpw) \
>>> +	smp_processor_id()
>>> +
>>> +#define INIT_QPW(qpw, func, c) \
>>> +	INIT_WORK(&(qpw)->work, (func))
>>> +
>>> +#else /* !CONFIG_PREEMPT_RT */
>>> +
>>> +struct qpw_struct {
>>> +	struct work_struct work;
>>> +	int cpu;
>>> +};
>>> +
>>> +#define qpw_lock(__lock, cpu) \
>>> +	do { \
>>> +		migrate_disable(); \
>>> +		spin_lock(per_cpu_ptr((__lock), cpu)); \
>>> +	} while (0)
>>> +
>>> +#define qpw_unlock(__lock, cpu) \
>>> +	do { \
>>> +		spin_unlock(per_cpu_ptr((__lock), cpu)); \
>>> +		migrate_enable(); \
>>> +	} while (0)
>> Why is there a migrate_disable/enable() call in qpw_lock/unlock()? The
>> rt_spin_lock/unlock() calls already include a migrate_disable/enable()
>> pair.
> This was copied from the PREEMPT_RT=y local_locks.
>
> In my tree, I see:
>
> #define __local_unlock(__lock)				\
> 	do {						\
> 		spin_unlock(this_cpu_ptr((__lock)));	\
> 		migrate_enable();			\
> 	} while (0)
>
> But you are right:
> For PREEMPT_RT=y, spin_{un,}lock() will be defined in spinlock_rt.h
> as rt_spin_{un,}lock(), which already runs migrate_{en,dis}able().
>
> On the other hand, rt_spin_lock() will run migrate_disable() just before
> finishing the function, while local_lock() will run it before calling
> spin_lock() and thus, before spin_acquire().
>
> (local_unlock looks like it has an unnecessary extra migrate_enable(),
> though.)
>
> I am not sure if it's actually necessary to run this extra
> migrate_disable() in the local_lock() case; maybe Thomas could help us
> understand this.
>
> But sure, if we can remove this from local_{un,}lock(), I am sure we can
> also remove this from qpw.

I see. I believe the reason for this extra migrate_disable/enable() is to
protect the this_cpu_ptr() call, preventing a switch to another CPU right
after this_cpu_ptr() but before the migrate_disable() inside
rt_spin_lock(). So keep the migrate_disable/enable() as is.

Cheers,
Longman
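
To spell out the race that this closes, here is a simplified sketch of the
PREEMPT_RT local_lock()/qpw_lock() path (illustrative, not actual kernel
source):

	migrate_disable();			/* pins the task to this cpu */
	spin_lock(this_cpu_ptr(__lock));	/* so this_cpu_ptr() and the
						 * rt_spin_lock() inside act
						 * on the same cpu's lock */

Without the outer migrate_disable(), this_cpu_ptr() could resolve to, say,
CPU 0's lock, the task could then be preempted and migrated to CPU 1, and
only inside rt_spin_lock() would migration be disabled -- too late to keep
the "local" lock local.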