From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 3 Jan 2024 10:14:42 +0100
From: Oscar Salvador <osalvador@suse.de>
To: andrey.konovalov@linux.dev
Cc: Andrew Morton, Andrey Konovalov, Marco Elver, Alexander Potapenko,
	Dmitry Vyukov, Vlastimil Babka, kasan-dev@googlegroups.com,
	Evgenii Stepanov, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Andrey Konovalov
Subject: Re: [PATCH v4 12/22] lib/stackdepot: use read/write lock
In-Reply-To: <9f81ffcc4bb422ebb6326a65a770bf1918634cbb.1700502145.git.andreyknvl@google.com>
References: <9f81ffcc4bb422ebb6326a65a770bf1918634cbb.1700502145.git.andreyknvl@google.com>

On Mon, Nov 20, 2023 at 06:47:10PM +0100, andrey.konovalov@linux.dev wrote:
> From: Andrey Konovalov
> 
> Currently, stack depot uses the following locking scheme:
> 
> 1. Lock-free accesses when looking up a stack record, which allows
>    multiple users to look up records in parallel;
> 2. Spinlock for protecting the stack depot pools and the hash table
>    when adding a new record.
> 
> For implementing the eviction of stack traces from stack depot, the
> lock-free approach is not going to work anymore, as we will need to be
> able to also remove records from the hash table.
> 
> Convert the spinlock into a read/write lock, and drop the atomic
> accesses, as they are no longer required.
> 
> Looking up stack traces is now protected by the read lock and adding new
> records - by the write lock. One of the following patches will add a new
> function for evicting stack records, which will be protected by the write
> lock as well.
> 
> With this change, multiple users can still look up records in parallel.
> 
> This is a preparatory patch for implementing the eviction of stack
> records from the stack depot.
> 
> Reviewed-by: Alexander Potapenko
> Signed-off-by: Andrey Konovalov

Reviewed-by: Oscar Salvador <osalvador@suse.de>

> ---
> 
> Changes v2->v3:
> - Use lockdep_assert_held_read annotation in depot_fetch_stack.
> 
> Changes v1->v2:
> - Add lockdep_assert annotations.
> ---
>  lib/stackdepot.c | 87 +++++++++++++++++++++++++-----------------
>  1 file changed, 46 insertions(+), 41 deletions(-)
> 
> diff --git a/lib/stackdepot.c b/lib/stackdepot.c
> index a5eff165c0d5..8378b32b5310 100644
> --- a/lib/stackdepot.c
> +++ b/lib/stackdepot.c
> @@ -23,6 +23,7 @@
>  #include <linux/mutex.h>
>  #include <linux/percpu.h>
>  #include <linux/printk.h>
> +#include <linux/rwlock.h>
>  #include <linux/slab.h>
>  #include <linux/spinlock.h>
>  #include <linux/stacktrace.h>
> @@ -91,15 +92,15 @@ static void *new_pool;
>  static int pools_num;
>  /* Next stack in the freelist of stack records within stack_pools. */
>  static struct stack_record *next_stack;
> -/* Lock that protects the variables above. */
> -static DEFINE_RAW_SPINLOCK(pool_lock);
>  /*
>   * Stack depot tries to keep an extra pool allocated even before it runs out
>   * of space in the currently used pool. This flag marks whether this extra pool
>   * needs to be allocated. It has the value 0 when either an extra pool is not
>   * yet allocated or if the limit on the number of pools is reached.
>   */
> -static int new_pool_required = 1;
> +static bool new_pool_required = true;
> +/* Lock that protects the variables above. */
> +static DEFINE_RWLOCK(pool_rwlock);
>  
>  static int __init disable_stack_depot(char *str)
>  {
> @@ -232,6 +233,8 @@ static void depot_init_pool(void *pool)
>  	const int records_in_pool = DEPOT_POOL_SIZE / DEPOT_STACK_RECORD_SIZE;
>  	int i, offset;
>  
> +	lockdep_assert_held_write(&pool_rwlock);
> +
>  	/* Initialize handles and link stack records to each other. */
>  	for (i = 0, offset = 0;
>  	     offset <= DEPOT_POOL_SIZE - DEPOT_STACK_RECORD_SIZE;
> @@ -254,22 +257,17 @@ static void depot_init_pool(void *pool)
>  
>  	/* Save reference to the pool to be used by depot_fetch_stack(). */
>  	stack_pools[pools_num] = pool;
> -
> -	/*
> -	 * WRITE_ONCE() pairs with potential concurrent read in
> -	 * depot_fetch_stack().
> -	 */
> -	WRITE_ONCE(pools_num, pools_num + 1);
> +	pools_num++;
>  }
>  
>  /* Keeps the preallocated memory to be used for a new stack depot pool. */
>  static void depot_keep_new_pool(void **prealloc)
>  {
> +	lockdep_assert_held_write(&pool_rwlock);
> +
>  	/*
>  	 * If a new pool is already saved or the maximum number of
>  	 * pools is reached, do not use the preallocated memory.
> -	 * Access new_pool_required non-atomically, as there are no concurrent
> -	 * write accesses to this variable.
>  	 */
>  	if (!new_pool_required)
>  		return;
> @@ -287,15 +285,15 @@ static void depot_keep_new_pool(void **prealloc)
>  	 * At this point, either a new pool is kept or the maximum
>  	 * number of pools is reached. In either case, take note that
>  	 * keeping another pool is not required.
> -	 * smp_store_release() pairs with smp_load_acquire() in
> -	 * stack_depot_save().
>  	 */
> -	smp_store_release(&new_pool_required, 0);
> +	new_pool_required = false;
>  }
>  
>  /* Updates references to the current and the next stack depot pools. */
>  static bool depot_update_pools(void **prealloc)
>  {
> +	lockdep_assert_held_write(&pool_rwlock);
> +
>  	/* Check if we still have objects in the freelist. */
>  	if (next_stack)
>  		goto out_keep_prealloc;
> @@ -307,7 +305,7 @@ static bool depot_update_pools(void **prealloc)
>  
>  	/* Take note that we might need a new new_pool. */
>  	if (pools_num < DEPOT_MAX_POOLS)
> -		smp_store_release(&new_pool_required, 1);
> +		new_pool_required = true;
>  
>  	/* Try keeping the preallocated memory for new_pool. */
>  	goto out_keep_prealloc;
> @@ -341,6 +339,8 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
>  {
>  	struct stack_record *stack;
>  
> +	lockdep_assert_held_write(&pool_rwlock);
> +
>  	/* Update current and new pools if required and possible. */
>  	if (!depot_update_pools(prealloc))
>  		return NULL;
> @@ -376,18 +376,15 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
>  static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle)
>  {
>  	union handle_parts parts = { .handle = handle };
> -	/*
> -	 * READ_ONCE() pairs with potential concurrent write in
> -	 * depot_init_pool().
> -	 */
> -	int pools_num_cached = READ_ONCE(pools_num);
>  	void *pool;
>  	size_t offset = parts.offset << DEPOT_STACK_ALIGN;
>  	struct stack_record *stack;
>  
> -	if (parts.pool_index > pools_num_cached) {
> +	lockdep_assert_held_read(&pool_rwlock);
> +
> +	if (parts.pool_index > pools_num) {
>  		WARN(1, "pool index %d out of bounds (%d) for stack id %08x\n",
> -		     parts.pool_index, pools_num_cached, handle);
> +		     parts.pool_index, pools_num, handle);
>  		return NULL;
>  	}
>  
> @@ -429,6 +426,8 @@ static inline struct stack_record *find_stack(struct stack_record *bucket,
>  {
>  	struct stack_record *found;
>  
> +	lockdep_assert_held(&pool_rwlock);
> +
>  	for (found = bucket; found; found = found->next) {
>  		if (found->hash == hash &&
>  		    found->size == size &&
> @@ -446,6 +445,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
>  	depot_stack_handle_t handle = 0;
>  	struct page *page = NULL;
>  	void *prealloc = NULL;
> +	bool need_alloc = false;
>  	unsigned long flags;
>  	u32 hash;
>  
> @@ -465,22 +465,26 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
>  	hash = hash_stack(entries, nr_entries);
>  	bucket = &stack_table[hash & stack_hash_mask];
>  
> -	/*
> -	 * Fast path: look the stack trace up without locking.
> -	 * smp_load_acquire() pairs with smp_store_release() to |bucket| below.
> -	 */
> -	found = find_stack(smp_load_acquire(bucket), entries, nr_entries, hash);
> -	if (found)
> +	read_lock_irqsave(&pool_rwlock, flags);
> +
> +	/* Fast path: look the stack trace up without full locking. */
> +	found = find_stack(*bucket, entries, nr_entries, hash);
> +	if (found) {
> +		read_unlock_irqrestore(&pool_rwlock, flags);
>  		goto exit;
> +	}
> +
> +	/* Take note if another stack pool needs to be allocated. */
> +	if (new_pool_required)
> +		need_alloc = true;
> +
> +	read_unlock_irqrestore(&pool_rwlock, flags);
>  
>  	/*
> -	 * Check if another stack pool needs to be allocated. If so, allocate
> -	 * the memory now: we won't be able to do that under the lock.
> -	 *
> -	 * smp_load_acquire() pairs with smp_store_release() in
> -	 * depot_update_pools() and depot_keep_new_pool().
> +	 * Allocate memory for a new pool if required now:
> +	 * we won't be able to do that under the lock.
>  	 */
> -	if (unlikely(can_alloc && smp_load_acquire(&new_pool_required))) {
> +	if (unlikely(can_alloc && need_alloc)) {
>  		/*
>  		 * Zero out zone modifiers, as we don't have specific zone
>  		 * requirements. Keep the flags related to allocation in atomic
> @@ -494,7 +498,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
>  		prealloc = page_address(page);
>  	}
>  
> -	raw_spin_lock_irqsave(&pool_lock, flags);
> +	write_lock_irqsave(&pool_rwlock, flags);
>  
>  	found = find_stack(*bucket, entries, nr_entries, hash);
>  	if (!found) {
> @@ -503,11 +507,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
>  
>  		if (new) {
>  			new->next = *bucket;
> -			/*
> -			 * smp_store_release() pairs with smp_load_acquire()
> -			 * from |bucket| above.
> -			 */
> -			smp_store_release(bucket, new);
> +			*bucket = new;
>  			found = new;
>  		}
>  	} else if (prealloc) {
> @@ -518,7 +518,7 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
>  		depot_keep_new_pool(&prealloc);
>  	}
>  
> -	raw_spin_unlock_irqrestore(&pool_lock, flags);
> +	write_unlock_irqrestore(&pool_rwlock, flags);
>  exit:
>  	if (prealloc) {
>  		/* Stack depot didn't use this memory, free it. */
> @@ -542,6 +542,7 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
>  			       unsigned long **entries)
>  {
>  	struct stack_record *stack;
> +	unsigned long flags;
>  
>  	*entries = NULL;
>  	/*
> @@ -553,8 +554,12 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
>  	if (!handle || stack_depot_disabled)
>  		return 0;
>  
> +	read_lock_irqsave(&pool_rwlock, flags);
> +
>  	stack = depot_fetch_stack(handle);
> +
> +	read_unlock_irqrestore(&pool_rwlock, flags);
>  
>  	*entries = stack->entries;
>  	return stack->size;
>  }
> -- 
> 2.25.1
> 

-- 
Oscar Salvador
SUSE Labs
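
For reference, below is a minimal sketch of the reader/writer locking
pattern the quoted patch adopts, reduced to a single hash bucket. It is an
illustration, not the actual lib/stackdepot.c code: "struct record",
"table_lock", lookup() and insert() are hypothetical stand-ins; only the
locking calls themselves (DEFINE_RWLOCK, read_lock_irqsave /
read_unlock_irqrestore for the shared lookup path, write_lock_irqsave /
write_unlock_irqrestore for the exclusive update path) mirror the patch.
The rwlock API is pulled in via <linux/spinlock.h>.

/*
 * Illustrative sketch of the read/write-lock pattern described in the
 * commit message above -- not the actual lib/stackdepot.c code.
 */
#include <linux/spinlock.h>	/* provides the rwlock_t API */
#include <linux/types.h>

struct record {
	struct record *next;
	u32 hash;
};

static struct record *bucket;		/* one hash bucket, for brevity */
static DEFINE_RWLOCK(table_lock);	/* protects bucket and record links */

/* Lookup path: many readers may hold the lock shared, in parallel. */
static struct record *lookup(u32 hash)
{
	struct record *r;
	unsigned long flags;

	read_lock_irqsave(&table_lock, flags);
	for (r = bucket; r; r = r->next)
		if (r->hash == hash)
			break;
	read_unlock_irqrestore(&table_lock, flags);
	return r;
}

/* Insert path (and, later in the series, eviction): one exclusive writer. */
static void insert(struct record *new)
{
	unsigned long flags;

	write_lock_irqsave(&table_lock, flags);
	new->next = bucket;
	bucket = new;	/* plain store: no smp_store_release() needed */
	write_unlock_irqrestore(&table_lock, flags);
}

Because writers exclude all readers, the RELEASE/ACQUIRE pairings the patch
deletes become unnecessary: the lock's own ordering guarantees make plain
loads and stores of bucket, pools_num, and new_pool_required safe.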