From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Sat, 3 Jul 2021 16:02:53 -0700
From: Guenter Roeck
To: Dennis Zhou
Cc: Tejun Heo, Christoph Lameter, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] percpu: flush tlb in pcpu_reclaim_populated()
Message-ID: <20210703230253.GA2242521@roeck-us.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Sat, Jul 03, 2021 at 09:05:23PM +0000, Dennis Zhou wrote:
> Prior to "percpu: implement partial chunk depopulation",
> pcpu_depopulate_chunk() was called only on the destruction path. This
> meant the virtual address range was on its way back to vmalloc which
> will handle flushing the tlbs for us.
>
> However, with pcpu_reclaim_populated(), we are now calling
> pcpu_depopulate_chunk() during the active lifecycle of a chunk.
> Therefore, we need to flush the tlb as well otherwise we can end up
> accessing the wrong page through an invalid tlb mapping as reported in
> [1].
>
> [1] https://lore.kernel.org/lkml/20210702191140.GA3166599@roeck-us.net/
>
> Fixes: f183324133ea ("percpu: implement partial chunk depopulation")
> Reported-by: Guenter Roeck
> Signed-off-by: Dennis Zhou

Tested-by: Guenter Roeck

Guenter

> ---
> I think I'm happier with this. It does the same thing as [2] but moves
> the flush to the caller so we can batch per chunk.
>
> [2] https://lore.kernel.org/lkml/20210703040449.3213210-1-dennis@kernel.org/
>
>  mm/percpu-km.c |  6 ++++++
>  mm/percpu-vm.c |  5 +++--
>  mm/percpu.c    | 29 +++++++++++++++++++++++------
>  3 files changed, 32 insertions(+), 8 deletions(-)
>
> diff --git a/mm/percpu-km.c b/mm/percpu-km.c
> index c9d529dc7651..fe31aa19db81 100644
> --- a/mm/percpu-km.c
> +++ b/mm/percpu-km.c
> @@ -32,6 +32,12 @@
>  
>  #include
>  
> +static void pcpu_post_unmap_tlb_flush(struct pcpu_chunk *chunk,
> +				      int page_start, int page_end)
> +{
> +	/* nothing */
> +}
> +
>  static int pcpu_populate_chunk(struct pcpu_chunk *chunk,
>  			       int page_start, int page_end, gfp_t gfp)
>  {
> diff --git a/mm/percpu-vm.c b/mm/percpu-vm.c
> index ee5d89fcd66f..2054c9213c43 100644
> --- a/mm/percpu-vm.c
> +++ b/mm/percpu-vm.c
> @@ -303,6 +303,9 @@ static int pcpu_populate_chunk(struct pcpu_chunk *chunk,
>   * For each cpu, depopulate and unmap pages [@page_start,@page_end)
>   * from @chunk.
>   *
> + * Caller is required to call pcpu_post_unmap_tlb_flush() if not returning the
> + * region back to vmalloc() which will lazily flush the tlb.
> + *
>   * CONTEXT:
>   * pcpu_alloc_mutex.
>   */
> @@ -324,8 +327,6 @@ static void pcpu_depopulate_chunk(struct pcpu_chunk *chunk,
>  
>  	pcpu_unmap_pages(chunk, pages, page_start, page_end);
>  
> -	/* no need to flush tlb, vmalloc will handle it lazily */
> -
>  	pcpu_free_pages(chunk, pages, page_start, page_end);
>  }
>  
> diff --git a/mm/percpu.c b/mm/percpu.c
> index b4cebeca4c0c..8d8efd668f76 100644
> --- a/mm/percpu.c
> +++ b/mm/percpu.c
> @@ -1572,6 +1572,7 @@ static void pcpu_chunk_depopulated(struct pcpu_chunk *chunk,
>   *
>   * pcpu_populate_chunk		- populate the specified range of a chunk
>   * pcpu_depopulate_chunk	- depopulate the specified range of a chunk
> + * pcpu_post_unmap_tlb_flush	- flush tlb for the specified range of a chunk
>   * pcpu_create_chunk		- create a new chunk
>   * pcpu_destroy_chunk		- destroy a chunk, always preceded by full depop
>   * pcpu_addr_to_page		- translate address to physical address
> @@ -1581,6 +1582,8 @@ static int pcpu_populate_chunk(struct pcpu_chunk *chunk,
>  			       int page_start, int page_end, gfp_t gfp);
>  static void pcpu_depopulate_chunk(struct pcpu_chunk *chunk,
>  				  int page_start, int page_end);
> +static void pcpu_post_unmap_tlb_flush(struct pcpu_chunk *chunk,
> +				      int page_start, int page_end);
>  static struct pcpu_chunk *pcpu_create_chunk(gfp_t gfp);
>  static void pcpu_destroy_chunk(struct pcpu_chunk *chunk);
>  static struct page *pcpu_addr_to_page(void *addr);
> @@ -2137,11 +2140,12 @@ static void pcpu_reclaim_populated(void)
>  {
>  	struct pcpu_chunk *chunk;
>  	struct pcpu_block_md *block;
> +	int freed_page_start, freed_page_end;
>  	int i, end;
> +	bool reintegrate;
>  
>  	lockdep_assert_held(&pcpu_lock);
>  
> -restart:
>  	/*
>  	 * Once a chunk is isolated to the to_depopulate list, the chunk is no
>  	 * longer discoverable to allocations whom may populate pages. The only
> @@ -2157,6 +2161,9 @@ static void pcpu_reclaim_populated(void)
>  		 * Scan chunk's pages in the reverse order to keep populated
>  		 * pages close to the beginning of the chunk.
>  		 */
> +		freed_page_start = chunk->nr_pages;
> +		freed_page_end = 0;
> +		reintegrate = false;
>  		for (i = chunk->nr_pages - 1, end = -1; i >= 0; i--) {
>  			/* no more work to do */
>  			if (chunk->nr_empty_pop_pages == 0)
> @@ -2164,8 +2171,8 @@ static void pcpu_reclaim_populated(void)
>  
>  			/* reintegrate chunk to prevent atomic alloc failures */
>  			if (pcpu_nr_empty_pop_pages < PCPU_EMPTY_POP_PAGES_HIGH) {
> -				pcpu_reintegrate_chunk(chunk);
> -				goto restart;
> +				reintegrate = true;
> +				goto end_chunk;
>  			}
>  
>  			/*
> @@ -2194,16 +2201,26 @@ static void pcpu_reclaim_populated(void)
>  		spin_lock_irq(&pcpu_lock);
>  
>  		pcpu_chunk_depopulated(chunk, i + 1, end + 1);
> +		freed_page_start = min(freed_page_start, i + 1);
> +		freed_page_end = max(freed_page_end, end + 1);
>  
>  		/* reset the range and continue */
>  		end = -1;
>  	}
>  
> -	if (chunk->free_bytes == pcpu_unit_size)
> +end_chunk:
> +		/* batch tlb flush per chunk to amortize cost */
> +		if (freed_page_start < freed_page_end) {
> +			pcpu_post_unmap_tlb_flush(chunk,
> +						  freed_page_start,
> +						  freed_page_end);
> +		}
> +
> +		if (reintegrate || chunk->free_bytes == pcpu_unit_size)
>  			pcpu_reintegrate_chunk(chunk);
>  		else
> -			list_move(&chunk->list,
> -				  &pcpu_chunk_lists[pcpu_sidelined_slot]);
> +			list_move_tail(&chunk->list,
> +				       &pcpu_chunk_lists[pcpu_sidelined_slot]);
>  	}
> }
>
> -- 
> 2.32.0.93.g670b81a890-goog
>
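[Editor's note] The per-chunk batching the patch introduces with freed_page_start/freed_page_end can be sketched in isolation. The following is an illustrative userspace model, not the kernel code: all names (`struct flush_batch`, `batch_add`, etc.) are hypothetical, and the actual flush primitive is stubbed out by a range check.

```c
#include <assert.h>

/*
 * Sketch of the flush batching in pcpu_reclaim_populated(): instead of
 * flushing the tlb after every depopulated run, track the union of all
 * freed page ranges and issue a single flush per chunk.
 */
struct flush_batch {
	int start;	/* lowest freed page index seen so far */
	int end;	/* one past the highest freed page index */
};

/*
 * Start with an empty (inverted) range, mirroring the patch's
 * freed_page_start = chunk->nr_pages and freed_page_end = 0.
 */
static void batch_init(struct flush_batch *b, int nr_pages)
{
	b->start = nr_pages;
	b->end = 0;
}

/* Fold a freed run [start, end) into the pending flush range. */
static void batch_add(struct flush_batch *b, int start, int end)
{
	if (start < b->start)
		b->start = start;
	if (end > b->end)
		b->end = end;
}

/* A flush is needed only if at least one run was recorded. */
static int batch_pending(const struct flush_batch *b)
{
	return b->start < b->end;
}
```

Folding runs [10, 12) and [3, 5) leaves one pending range [3, 12), so a single flush covers both runs; the pages in between may be flushed needlessly, which is the trade the patch accepts to amortize the flush cost per chunk.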