From: John Hubbard <jhubbard@nvidia.com>
To: David Hildenbrand <david@redhat.com>,
	Alistair Popple <apopple@nvidia.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	LKML <linux-kernel@vger.kernel.org>, <linux-mm@kvack.org>,
	Shigeru Yoshida <syoshida@redhat.com>,
	Jason Gunthorpe <jgg@nvidia.com>,
	Minchan Kim <minchan@kernel.org>,
	Pasha Tatashin <pasha.tatashin@soleen.com>
Subject: Re: [PATCH] mm/gup: stop leaking pinned pages in low memory conditions
Date: Thu, 17 Oct 2024 14:57:05 -0700
Message-ID: <a84d415d-dd18-49ef-b72a-ee381a44a429@nvidia.com>
In-Reply-To: <52bd4862-d6ce-48f6-9bc2-0f7376a56115@redhat.com>

On 10/17/24 2:47 PM, David Hildenbrand wrote:
> On 17.10.24 23:28, Alistair Popple wrote:
>> David Hildenbrand <david@redhat.com> writes:
>>> On 16.10.24 22:22, John Hubbard wrote:
...
>>>> +        if (rc != -EAGAIN && rc != 0)
>>>> +            unpin_user_pages(pages, nr_pinned_pages);
>>>> +
>>>>        } while (rc == -EAGAIN);
>>>
>>> Wouldn't it be cleaner to simply have here after the loop (possibly
>>> even after the memalloc_pin_restore())
>>>
>>> if (rc)
>>>     unpin_user_pages(pages, nr_pinned_pages);
>>>
>>> But maybe I am missing something.
>>
>> I initially thought the same thing but I'm not sure it is
>> correct. Consider what happens when __get_user_pages_locked() fails
>> earlier in the loop. In this case it will have bailed out of the loop
>> with rc <= 0 but we shouldn't call unpin_user_pages().

doh. yes. Thanks for catching that, Alistair! I actually considered
it during the first draft, too, but got tunnel vision during the
review, sigh.
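
To spell it out for the archive, the shape of the loop is roughly this (a
simplified sketch of the retry loop in __gup_longterm_locked(), from memory
rather than the literal upstream code):

    do {
        nr_pinned_pages = __get_user_pages_locked(mm, start, nr_pages,
                                                  pages, locked, gup_flags);
        if (nr_pinned_pages <= 0) {
            /* Pinning itself failed: nothing for this caller to unpin. */
            rc = nr_pinned_pages;
            break;
        }

        rc = check_and_migrate_movable_pages(nr_pinned_pages, pages);

        /*
         * Only a failure here leaves pages pinned behind (-EAGAIN retries
         * the whole sequence), so the unpin belongs on this path. A blanket
         * "if (rc)" after the loop would also fire for the break above,
         * where there is nothing to unpin.
         */
        if (rc != -EAGAIN && rc != 0)
            unpin_user_pages(pages, nr_pinned_pages);
    } while (rc == -EAGAIN);

So hoisting the unpin after the loop would need an extra check to skip the
pin-failure path, which rather defeats the point of moving it.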

> 
> Ah, I see what you mean, I primarily only stared at the diff.
> 
> We should likely avoid using nr_pinned_pages as a temporary variable that
> can hold an error value.
> 
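
Agreed, that would read better. Just to sketch the direction (untested, and
deliberately rough): keep the error code and the pin count in separate
locals, so nr_pinned_pages never carries an error value:

    long nr_pinned_pages = 0;
    long rc;

    do {
        rc = __get_user_pages_locked(mm, start, nr_pages, pages,
                                     locked, gup_flags);
        if (rc <= 0)
            break;

        nr_pinned_pages = rc;
        rc = check_and_migrate_movable_pages(nr_pinned_pages, pages);
        if (rc && rc != -EAGAIN)
            unpin_user_pages(pages, nr_pinned_pages);
    } while (rc == -EAGAIN);

    return rc ? rc : nr_pinned_pages;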

OK, I still want to defer all the pretty refactoring ideas into some
future effort, so for now, let's just leave v1 alone except for fixing
the typo in the comment, yes?

I'll still send out a 2-patch series with that, plus a suitably
modified fix for the memfd case too.


thanks,
-- 
John Hubbard


