From: Toke Høiland-Jørgensen
To: Jesper Dangaard Brouer, Ilias Apalodimas, netdev@vger.kernel.org,
	Eric Dumazet, linux-mm@kvack.org, Mel Gorman
Cc: Jesper Dangaard Brouer, lorenzo@kernel.org, linyunsheng@huawei.com,
	bpf@vger.kernel.org, "David S. Miller",
Miller" , Jakub Kicinski , Paolo Abeni , Andrew Morton , willy@infradead.org Subject: Re: [PATCH RFC net-next/mm V3 1/2] page_pool: Remove workqueue in new shutdown scheme In-Reply-To: <168269857929.2191653.13267688321246766547.stgit@firesoul> References: <168269854650.2191653.8465259808498269815.stgit@firesoul> <168269857929.2191653.13267688321246766547.stgit@firesoul> X-Clacks-Overhead: GNU Terry Pratchett Date: Fri, 28 Apr 2023 23:38:19 +0200 Message-ID: <87edo37kms.fsf@toke.dk> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Content-Type: text/plain X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: AF8CD8000B X-Stat-Signature: mnnrz5eez5yf31ra157hesyuoz6n5c8o X-Rspam-User: X-HE-Tag: 1682717904-774466 X-HE-Meta: U2FsdGVkX18wEaUvubsB2m+qGSlHXB9SD3uqFkVv7lS6jX9KCIeSG/pfm/6wIceexMhm81+1uqtzR+SV3WLG2g1RhCZmC/f5l330IJUR1mXQ6PaSJy8AF1WoeES2mRsklCVRMBfx8o7zBjqNY6yUB5BaLHZJnAQtCb9nzlzIepGDMHoPDmB/sEPDd/zssFVgrKzVKGlducKFBzVH1ZDIHtBJO0E2EG9CT9z3Pj+GbuIedJHaxF5BtCBkB1XoWNY4XwT50Llmfv0pyblPnFm94oI5JNF5NCY0bkDoPj/Epder3ELZy4wvFVeyjRIQP6MTT5lkwnymJ8y4Qa+NppkYdPFe94zfNV1U0r/cL0pFP+l8URUC/NfgpMtCWoBai9vFfI0kwcqRA2eHjY9bgJUySbdyh50VKNr/jRn/99JDrU7oxob8CACQf57Hc2Z0+MBZT3eA8VcpoqSy8G1RQV2dh1BMBjrphPlJZ8VM0l81nsJtPaelz+ZJ1eZtt1TqFfjFBM2jiM11kiBETWEv/XFgeHQD2vDdPmxG3WT0u7DaQUl2fLYBRS/LnmUyGy6XW5+/OfTZL06FVOtW+D40zdleGfwXGJn/WAOpRSS/z4JZM9urtgh4OHI7KQymyYhAYOvLXRSGLc5tBX5rJuYkmiqXeqqXEFGO47/9yHNqa1PVrIp6VxqaTCycekwpWWVtYQvEqXh48iRGtsHrvOEjAr9Oa7toZpeGu+EY4R1WfwpohAYSlZUixrfPZVzENTZFsaurapRMaGkCPV6G/tevT7EZo/YAyXewB6ZauciYLDzT3yPfkuIpg5yL+sLa0Fyfmwfnr7TVaYtHhO4wZdl6uC0wfGWsk0E0z5IHSTm6lozj2ln1vxK8t1nZQU0sHAPluqcg0g2vtrQt8Q/jdF0mFzQ29L7YRO9QdX4pWttB6pZs9DfHqUrZA/xZwIRuIo1TbqrpUeQC9WsXNmS4vrmDrPJ UiQV8+rf i/hXqbGvecG700APpUG78JBfaxIzbh7F4kShdHZfK7fP25RTxF+S5Py2uhwtsvWJMm8JDA/GeeOz0CgGd8Cjcc/Sj6XeSso/g+4wZp41QvJxO2mBCcEWVLLmNYKLRjBZZCp6KqYyOvllRcSePvqcJp5Cscg2P5eooT2HhL99zypkqcgQzrLS28MVQSKc8CV8LiTVriy5Btqn25fzwSeBq69fczV0eSRXZ/zERDFw0V+vG87IyeYn5kyZVsiBpXOMILmHgnzfErHiqt8Pgh9Ua6JVFeewjQv6Dubhxq1bIlZY7zuRrPmdaPW635BJCqUbfyDv0S8+l+lPMUf5Be+SM/VsQvxgyLwVPX4wrOab5X1UWMLU0UjA0qfMuzoK1JgPmKk4aFC2bQmEGD+Yxo/JkRTxlrQMdBxhdDhpCK3z4THNGnFGPaxsuaYA0mfuhR/xBSZoI5lhVt/8VXaEn6trqbd/cWmLdA2oL04ILDpI6VVZHHAGoP6pReSLgzBm7ofArWPhW X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Jesper Dangaard Brouer writes: > This removes the workqueue scheme that periodically tests when > inflight reach zero such that page_pool memory can be freed. > > This change adds code to fast-path free checking for a shutdown flags > bit after returning PP pages. > > Performance is very important for PP, as the fast path is used for > XDP_DROP use-cases where NIC drivers recycle PP pages directly into PP > alloc cache. > > This patch (since V3) shows zero impact on this fast path. Micro > benchmarked with [1] on Intel CPU E5-1650 @3.60GHz. The slight code > reorg of likely() are deliberate. Oh, you managed to get rid of the small difference you were seeing before? Nice! 
Just a few questions, see below:

> [1] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/lib/bench_page_pool_simple.c
>
> Signed-off-by: Jesper Dangaard Brouer
> ---
>  include/net/page_pool.h |    9 +--
>  net/core/page_pool.c    |  138 ++++++++++++++++++++++++++++++++++-------------
>  2 files changed, 103 insertions(+), 44 deletions(-)
>
> diff --git a/include/net/page_pool.h b/include/net/page_pool.h
> index c8ec2f34722b..a71c0f2695b0 100644
> --- a/include/net/page_pool.h
> +++ b/include/net/page_pool.h
> @@ -50,6 +50,9 @@
>  				PP_FLAG_DMA_SYNC_DEV |\
>  				PP_FLAG_PAGE_FRAG)
>
> +/* Internal flag: PP in shutdown phase, waiting for inflight pages */
> +#define PP_FLAG_SHUTDOWN	BIT(8)
> +
>  /*
>   * Fast allocation side cache array/stack
>   *
> @@ -151,11 +154,6 @@ static inline u64 *page_pool_ethtool_stats_get(u64 *data, void *stats)
>  struct page_pool {
>  	struct page_pool_params p;
>
> -	struct delayed_work release_dw;
> -	void (*disconnect)(void *);
> -	unsigned long defer_start;
> -	unsigned long defer_warn;
> -
>  	u32 pages_state_hold_cnt;
>  	unsigned int frag_offset;
>  	struct page *frag_page;
> @@ -165,6 +163,7 @@ struct page_pool {
>  	/* these stats are incremented while in softirq context */
>  	struct page_pool_alloc_stats alloc_stats;
>  #endif
> +	void (*disconnect)(void *);
>  	u32 xdp_mem_id;
>
>  	/*
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index e212e9d7edcb..54bdd140b7bd 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -23,9 +23,6 @@
>
>  #include <trace/events/page_pool.h>
>
> -#define DEFER_TIME (msecs_to_jiffies(1000))
> -#define DEFER_WARN_INTERVAL (60 * HZ)
> -
>  #define BIAS_MAX	LONG_MAX
>
>  #ifdef CONFIG_PAGE_POOL_STATS
> @@ -380,6 +377,10 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
>  	struct page *page;
>  	int i, nr_pages;
>
> +	/* API usage BUG: PP in shutdown phase, cannot alloc new pages */
> +	if (WARN_ON(pool->p.flags & PP_FLAG_SHUTDOWN))
> +		return NULL;
> +
>  	/* Don't support bulk alloc for high-order pages */
>  	if (unlikely(pp_order))
>  		return __page_pool_alloc_page_order(pool, gfp);
> @@ -450,10 +451,9 @@ EXPORT_SYMBOL(page_pool_alloc_pages);
>   */
>  #define _distance(a, b) (s32)((a) - (b))
>
> -static s32 page_pool_inflight(struct page_pool *pool)
> +static s32 __page_pool_inflight(struct page_pool *pool,
> +				u32 hold_cnt, u32 release_cnt)
>  {
> -	u32 release_cnt = atomic_read(&pool->pages_state_release_cnt);
> -	u32 hold_cnt = READ_ONCE(pool->pages_state_hold_cnt);
>  	s32 inflight;
>
>  	inflight = _distance(hold_cnt, release_cnt);
> @@ -464,6 +464,17 @@ static s32 page_pool_inflight(struct page_pool *pool)
>  	return inflight;
>  }
>
> +static s32 page_pool_inflight(struct page_pool *pool)
> +{
> +	u32 hold_cnt = READ_ONCE(pool->pages_state_hold_cnt);
> +	u32 release_cnt = atomic_read(&pool->pages_state_release_cnt);
> +	return __page_pool_inflight(pool, hold_cnt, release_cnt);
> +}
> +
> +static int page_pool_free_attempt(struct page_pool *pool,
> +				  u32 hold_cnt, u32 release_cnt);
> +static u32 pp_read_hold_cnt(struct page_pool *pool);
> +
>  /* Disconnects a page (from a page_pool). API users can have a need
>   * to disconnect a page (from a page_pool), to allow it to be used as
>   * a regular page (that will eventually be returned to the normal
> @@ -471,8 +482,10 @@ static s32 page_pool_inflight(struct page_pool *pool)
>   */
>  void page_pool_release_page(struct page_pool *pool, struct page *page)
>  {
> +	unsigned int flags = READ_ONCE(pool->p.flags);
>  	dma_addr_t dma;
> -	int count;
> +	u32 release_cnt;
> +	u32 hold_cnt;
>
>  	if (!(pool->p.flags & PP_FLAG_DMA_MAP))
>  		/* Always account for inflight pages, even if we didn't
> @@ -490,11 +503,15 @@ void page_pool_release_page(struct page_pool *pool, struct page *page)
>  skip_dma_unmap:
>  	page_pool_clear_pp_info(page);
>
> -	/* This may be the last page returned, releasing the pool, so
> -	 * it is not safe to reference pool afterwards.
> -	 */
> -	count = atomic_inc_return_relaxed(&pool->pages_state_release_cnt);
> -	trace_page_pool_state_release(pool, page, count);
> +	if (flags & PP_FLAG_SHUTDOWN)
> +		hold_cnt = pp_read_hold_cnt(pool);
> +
> +	release_cnt = atomic_inc_return(&pool->pages_state_release_cnt);
> +	trace_page_pool_state_release(pool, page, release_cnt);
> +
> +	/* In shutdown phase, last page will free pool instance */
> +	if (flags & PP_FLAG_SHUTDOWN)
> +		page_pool_free_attempt(pool, hold_cnt, release_cnt);

I'm curious why you decided to keep the hold_cnt read separate from the
call to free attempt? Not a huge deal, and I'm fine with keeping it this
way; just curious if you have any functional reason that I missed, or if
you just prefer this style? :)
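To spell out what I mean, the alternative I had in mind is roughly the
sketch below, where the helper does the hold_cnt read itself. This is my
paraphrase for illustration only, not code from your patch:

	/* Hypothetical alternative: the hold_cnt read moves inside the
	 * helper, so *pool is dereferenced after the caller has already
	 * bumped pages_state_release_cnt.
	 */
	static int page_pool_free_attempt(struct page_pool *pool,
					  u32 release_cnt)
	{
		u32 hold_cnt = READ_ONCE(pool->pages_state_hold_cnt);
		int inflight;

		inflight = __page_pool_inflight(pool, hold_cnt, release_cnt);
		if (!inflight)
			page_pool_free(pool);

		return inflight;
	}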
> }
> EXPORT_SYMBOL(page_pool_release_page);
>
> @@ -535,7 +552,7 @@ static bool page_pool_recycle_in_ring(struct page_pool *pool, struct page *page)
>  static bool page_pool_recycle_in_cache(struct page *page,
>  				       struct page_pool *pool)
>  {
> -	if (unlikely(pool->alloc.count == PP_ALLOC_CACHE_SIZE)) {
> +	if (pool->alloc.count == PP_ALLOC_CACHE_SIZE) {
>  		recycle_stat_inc(pool, cache_full);
>  		return false;
>  	}
> @@ -546,6 +563,8 @@ static bool page_pool_recycle_in_cache(struct page *page,
>  	return true;
>  }
>
> +static void page_pool_empty_ring(struct page_pool *pool);
> +
>  /* If the page refcnt == 1, this will try to recycle the page.
>   * if PP_FLAG_DMA_SYNC_DEV is set, we'll try to sync the DMA area for
>   * the configured size min(dma_sync_size, pool->max_len).
> @@ -572,7 +591,8 @@ __page_pool_put_page(struct page_pool *pool, struct page *page,
>  			page_pool_dma_sync_for_device(pool, page,
>  						      dma_sync_size);
>
> -	if (allow_direct && in_softirq() &&
> +	/* During PP shutdown, no direct recycle must occur */
> +	if (likely(allow_direct && in_softirq()) &&
>  	    page_pool_recycle_in_cache(page, pool))
>  		return NULL;
>
> @@ -609,6 +629,8 @@ void page_pool_put_defragged_page(struct page_pool *pool, struct page *page,
>  		recycle_stat_inc(pool, ring_full);
>  		page_pool_return_page(pool, page);
>  	}
> +	if (page && pool->p.flags & PP_FLAG_SHUTDOWN)
> +		page_pool_empty_ring(pool);
>  }
>  EXPORT_SYMBOL(page_pool_put_defragged_page);
>
> @@ -646,6 +668,9 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
>  	recycle_stat_add(pool, ring, i);
>  	page_pool_ring_unlock(pool);
>
> +	if (pool->p.flags & PP_FLAG_SHUTDOWN)
> +		page_pool_empty_ring(pool);
> +
>  	/* Hopefully all pages was return into ptr_ring */
>  	if (likely(i == bulk_len))
>  		return;
> @@ -737,12 +762,18 @@ struct page *page_pool_alloc_frag(struct page_pool *pool,
>  }
>  EXPORT_SYMBOL(page_pool_alloc_frag);
>
> +noinline
>  static void page_pool_empty_ring(struct page_pool *pool)
>  {
> -	struct page *page;
> +	struct page *page, *next;
> +
> +	next = ptr_ring_consume_bh(&pool->ring);
>
>  	/* Empty recycle ring */
> -	while ((page = ptr_ring_consume_bh(&pool->ring))) {
> +	while (next) {
> +		page = next;
> +		next = ptr_ring_consume_bh(&pool->ring);
> +
>  		/* Verify the refcnt invariant of cached pages */
>  		if (!(page_ref_count(page) == 1))
>  			pr_crit("%s() page_pool refcnt %d violation\n",
> @@ -796,39 +827,36 @@ static void page_pool_scrub(struct page_pool *pool)
>  	page_pool_empty_ring(pool);
>  }
>
> -static int page_pool_release(struct page_pool *pool)
> +/* Avoid inlining code to avoid speculative fetching cacheline */
> +noinline
> +static u32 pp_read_hold_cnt(struct page_pool *pool)
> +{
> +	return READ_ONCE(pool->pages_state_hold_cnt);
> +}
> +
> +noinline
> +static int page_pool_free_attempt(struct page_pool *pool,
> +				  u32 hold_cnt, u32 release_cnt)
>  {
>  	int inflight;
>
> -	page_pool_scrub(pool);
> -	inflight = page_pool_inflight(pool);
> +	inflight = __page_pool_inflight(pool, hold_cnt, release_cnt);
>  	if (!inflight)
>  		page_pool_free(pool);
>
>  	return inflight;
>  }
>
> -static void page_pool_release_retry(struct work_struct *wq)
> +static int page_pool_release(struct page_pool *pool)
>  {
> -	struct delayed_work *dwq = to_delayed_work(wq);
> -	struct page_pool *pool = container_of(dwq, typeof(*pool), release_dw);
>  	int inflight;
>
> -	inflight = page_pool_release(pool);
> +	page_pool_scrub(pool);
> +	inflight = page_pool_inflight(pool);
>  	if (!inflight)
> -		return;
> -
> -	/* Periodic warning */
> -	if (time_after_eq(jiffies, pool->defer_warn)) {
> -		int sec = (s32)((u32)jiffies - (u32)pool->defer_start) / HZ;
> -
> -		pr_warn("%s() stalled pool shutdown %d inflight %d sec\n",
> -			__func__, inflight, sec);
> -		pool->defer_warn = jiffies + DEFER_WARN_INTERVAL;
> -	}
> +		page_pool_free(pool);
>
> -	/* Still not ready to be disconnected, retry later */
> -	schedule_delayed_work(&pool->release_dw, DEFER_TIME);
> +	return inflight;
>  }
>
>  void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *),
> @@ -856,6 +884,10 @@ EXPORT_SYMBOL(page_pool_unlink_napi);
>
>  void page_pool_destroy(struct page_pool *pool)
>  {
> +	unsigned int flags;
> +	u32 release_cnt;
> +	u32 hold_cnt;
> +
>  	if (!pool)
>  		return;
>
> @@ -868,11 +900,39 @@ void page_pool_destroy(struct page_pool *pool)
>  	if (!page_pool_release(pool))
>  		return;
>
> -	pool->defer_start = jiffies;
> -	pool->defer_warn = jiffies + DEFER_WARN_INTERVAL;
> +	/* PP have pages inflight, thus cannot immediately release memory.
> +	 * Enter into shutdown phase, depending on remaining in-flight PP
> +	 * pages to trigger shutdown process (on concurrent CPUs) and last
> +	 * page will free pool instance.
> +	 *
> +	 * There exist two race conditions here, we need to take into
> +	 * account in the following code.
> +	 *
> +	 * 1. Before setting PP_FLAG_SHUTDOWN another CPU released the last
> +	 *    pages into the ptr_ring. Thus, it missed triggering shutdown
> +	 *    process, which can then be stalled forever.
> +	 *
> +	 * 2. After setting PP_FLAG_SHUTDOWN another CPU released the last
> +	 *    page, which triggered shutdown process and freed pool
> +	 *    instance. Thus, its not safe to dereference *pool afterwards.
> +	 *
> +	 * Handling races by holding a fake in-flight count, via
> +	 * artificially bumping pages_state_hold_cnt, which assures pool
> +	 * isn't freed under us. For race(1) its safe to recheck ptr_ring
> +	 * (it will not free pool). Race(2) cannot happen, and we can
> +	 * release fake in-flight count as last step.
> +	 */
> +	hold_cnt = READ_ONCE(pool->pages_state_hold_cnt) + 1;
> +	smp_store_release(&pool->pages_state_hold_cnt, hold_cnt);
> +	barrier();
> +	flags = READ_ONCE(pool->p.flags) | PP_FLAG_SHUTDOWN;
> +	smp_store_release(&pool->p.flags, flags);

So in the memory-barrier documentation, smp_store_release() is usually
paired with smp_load_acquire(), but the code reading the flag uses
READ_ONCE(). I'm not sure if those are equivalent? (As in, I am asking
more than I'm saying they're not; I find it difficult to keep these
things straight...)
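For reference, the canonical release/acquire pairing I have in mind
looks roughly like the sketch below (variable and function names are
mine, not from the patch):

	/* Writer: the release orders the shared_data store before the
	 * flag store.
	 */
	WRITE_ONCE(shared_data, 42);
	smp_store_release(&ready_flag, 1);

	/* Reader: the acquire pairs with the release above, so a reader
	 * that observes ready_flag == 1 is guaranteed to also observe
	 * shared_data == 42.
	 */
	if (smp_load_acquire(&ready_flag))
		use_data(READ_ONCE(shared_data)); /* use_data() is made up */

	/* A plain READ_ONCE(ready_flag) only keeps the compiler from
	 * tearing or fusing that one load; it does not order the
	 * subsequent shared_data load on weakly ordered CPUs.
	 */

-Toke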