From: Mikulas Patocka <mpatocka@redhat.com>
To: Jan Kara <jack@suse.cz>
Cc: "Vlastimil Babka" <vbabka@suse.cz>,
"Marek Marczykowski-Górecki" <marmarek@invisiblethingslab.com>,
"Andrew Morton" <akpm@linux-foundation.org>,
"Matthew Wilcox" <willy@infradead.org>,
"Michal Hocko" <mhocko@suse.com>,
stable@vger.kernel.org, regressions@lists.linux.dev,
"Alasdair Kergon" <agk@redhat.com>,
"Mike Snitzer" <snitzer@kernel.org>,
dm-devel@lists.linux.dev, linux-mm@kvack.org
Subject: Re: Intermittent storage (dm-crypt?) freeze - regression 6.4->6.5
Date: Mon, 30 Oct 2023 12:49:01 +0100 (CET) [thread overview]
Message-ID: <7355fe90-5176-ea11-d6ed-a187c0146fdc@redhat.com> (raw)
In-Reply-To: <20231030112844.g7b76cm2xxpovt6e@quack3>
On Mon, 30 Oct 2023, Jan Kara wrote:
> > >> What if we end up in "goto retry" more than once? I don't see a matching
> > >
> > > It is impossible. Before we jump to the retry label, we set
> > > __GFP_DIRECT_RECLAIM. mempool_alloc can't ever fail if
> > > __GFP_DIRECT_RECLAIM is present (it will just wait until some other task
> > > frees some objects into the mempool).
> >
> > Ah, missed that. And the traces don't show that we would be waiting for
> > that. I'm starting to think the allocation itself is really not the issue
> > here. Also I don't think it deprives something else of large order pages, as
> > per the sysrq listing they still existed.
> >
> > What I rather suspect is what happens next to the allocated bio such that it
> > works well with order-0 or up to costly_order pages, but there's some
> > problem causing a deadlock if the bio contains larger pages than that?
>
> Hum, so in all the backtraces presented we see that we are waiting for page
> writeback to complete but I don't see anything that would be preventing the
> bios from completing. Page writeback can submit quite large bios so it kind
> of makes sense that it trips up some odd behavior. Looking at the code
> I can see one possible problem in crypt_alloc_buffer() but it doesn't
> explain why reducing initial page order would help. Anyway: Are we
> guaranteed mempool has enough pages for arbitrarily large bio that can
> enter crypt_alloc_buffer()? I can see crypt_page_alloc() does limit the
> number of pages in the mempool to dm_crypt_pages_per_client plus I assume
> the percpu counter bias in cc->n_allocated_pages can limit the really
> available number of pages even further. So if a single bio is large enough
> to trip percpu_counter_read_positive(&cc->n_allocated_pages) >=
> dm_crypt_pages_per_client condition in crypt_page_alloc(), we can loop
> forever? But maybe this cannot happen for some reason...
>
> If this is not it, I think we need to find out why the writeback bios are
> not completing. Probably I'd start with checking what kcryptd,
> presumably responsible for processing these bios, is doing.
>
> Honza
cc->page_pool is initialized to hold BIO_MAX_VECS pages. crypt_map will
restrict the bio size to BIO_MAX_VECS (see dm_accept_partial_bio being
called from crypt_map).
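To make the size cap concrete, here is a rough userspace sketch (not the kernel code) of the effect dm_accept_partial_bio has in crypt_map: a bio larger than what the pool was sized for is truncated to BIO_MAX_VECS worth of pages, and the device-mapper core resubmits the remainder as a new bio. The sector arithmetic assumes 4K pages and 512-byte sectors; the function name is illustrative.

```c
#include <assert.h>

#define BIO_MAX_VECS 256   /* pages the pool is sized for */

/* Returns the number of sectors this pass will handle; the caller
 * (the device-mapper core) resubmits the rest as a new bio, so no
 * single crypt_alloc_buffer() call needs more than BIO_MAX_VECS pages. */
static unsigned accept_partial(unsigned bio_sectors)
{
    /* 4K page = 8 sectors of 512 bytes */
    unsigned max_sectors = BIO_MAX_VECS << 3;

    return bio_sectors > max_sectors ? max_sectors : bio_sectors;
}
```

So even an arbitrarily large writeback bio is processed in chunks that can never exceed the mempool's reserve.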
When we allocate a buffer in crypt_alloc_buffer, we first try the
allocation without waiting; then we grab the mutex and try the allocation
with waiting.
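The two-phase pattern can be sketched in userspace as follows (this is a simulation of the idea, not the kernel code; pool_alloc is a stand-in for mempool_alloc, and the refill stands in for another task returning pages to the pool). The point of the mutex is that only one task at a time may block waiting on the pool.

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t bio_alloc_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for mempool_alloc(): a non-waiting attempt fails when the
 * pool is empty; a waiting attempt always succeeds eventually (here we
 * just pretend another task refilled the pool). */
static bool pool_alloc(bool nowait, int *pool)
{
    if (*pool > 0) { (*pool)--; return true; }
    if (nowait)
        return false;
    *pool = 8;          /* simulate pages freed back by another task */
    (*pool)--;
    return true;
}

static int alloc_pages(int npages, int *pool)
{
    int got = 0;

    /* Phase 1: opportunistic, non-blocking allocation. */
    while (got < npages && pool_alloc(true, pool))
        got++;
    if (got == npages)
        return got;

    /* Phase 2: serialize blocking allocations, so two tasks that each
     * need many pages cannot deadlock waiting for each other to free
     * some of theirs. */
    pthread_mutex_lock(&bio_alloc_lock);
    while (got < npages)
        if (pool_alloc(false, pool))
            got++;
    pthread_mutex_unlock(&bio_alloc_lock);
    return got;
}
```

Without the mutex, two tasks each holding half the pool and blocking for the rest would wait forever; with it, the lock holder is guaranteed to make progress as other tasks complete and free pages.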
The mutex should prevent a deadlock when two processes allocate 128 pages
concurrently and wait for each other to free some pages.
The limit to dm_crypt_pages_per_client only applies to pages allocated
from the kernel - when this limit is reached, we can still allocate from
the mempool, so it shouldn't cause deadlocks.
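The distinction can be illustrated with a small userspace model (plain ints standing in for the percpu counter and the mempool's reserved elements; constants and names are illustrative, not the kernel's): the dm_crypt_pages_per_client limit only rejects fresh kernel page allocations, and once it trips, the mempool falls back to its pre-allocated reserve, so forward progress is still possible.

```c
#include <assert.h>
#include <stdbool.h>

#define PAGES_PER_CLIENT 4      /* stand-in for dm_crypt_pages_per_client */

static int n_allocated_pages;   /* stand-in for cc->n_allocated_pages */
static int reserved_pool = 2;   /* pages pre-allocated into the mempool */

/* Like crypt_page_alloc(): refuse fresh kernel allocations once the
 * per-client limit is reached. */
static bool page_alloc_from_kernel(void)
{
    if (n_allocated_pages >= PAGES_PER_CLIENT)
        return false;
    n_allocated_pages++;
    return true;
}

/* Like mempool_alloc(): try the backing allocator first, then fall
 * back to the reserved elements, which the limit does not apply to. */
static bool pool_page_alloc(void)
{
    if (page_alloc_from_kernel())
        return true;
    if (reserved_pool > 0) {
        reserved_pool--;
        return true;
    }
    return false;   /* here the real code would sleep and retry */
}
```

The limit therefore throttles how many pages dm-crypt takes from the system, but a bio capped at the pool's reserve size can always be satisfied.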
Mikulas