Date: Tue, 8 Jun 2021 11:19:41 -0400
From: Peter Xu <peterx@redhat.com>
To: Alistair Popple <apopple@nvidia.com>
Cc: linux-mm@kvack.org, akpm@linux-foundation.org, rcampbell@nvidia.com,
	linux-doc@vger.kernel.org, nouveau@lists.freedesktop.org,
	hughd@google.com, linux-kernel@vger.kernel.org,
	dri-devel@lists.freedesktop.org, hch@infradead.org,
	bskeggs@redhat.com, jgg@nvidia.com, shakeelb@google.com,
	jhubbard@nvidia.com, willy@infradead.org
Subject: Re: [PATCH v10 06/10] mm/memory.c: Allow different return codes for copy_nonpresent_pte()
References: <20210607075855.5084-1-apopple@nvidia.com> <20210607075855.5084-7-apopple@nvidia.com>
In-Reply-To: <20210607075855.5084-7-apopple@nvidia.com>

On Mon, Jun 07, 2021 at 05:58:51PM +1000, Alistair Popple wrote:
> Currently if copy_nonpresent_pte() returns a non-zero value it is
> assumed to be a swap entry which requires further processing outside the
> loop in copy_pte_range() after dropping locks. This prevents other
> values being returned to signal conditions such as failure which a
> subsequent change requires.
> 
> Instead make copy_nonpresent_pte() return an error code if further
> processing is required and read the value for the swap entry in the main
> loop under the ptl.
> 
> Signed-off-by: Alistair Popple <apopple@nvidia.com>
> 
> ---
> 
> v10:
> 
> Use a unique error code and only check return codes for handling.
> 
> v9:
> 
> New for v9 to allow device exclusive handling to occur in
> copy_nonpresent_pte().
> ---
>  mm/memory.c | 26 ++++++++++++++++----------
>  1 file changed, 16 insertions(+), 10 deletions(-)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index 2fb455c365c2..0982cab37ecb 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -718,7 +718,7 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> 
>  	if (likely(!non_swap_entry(entry))) {
>  		if (swap_duplicate(entry) < 0)
> -			return entry.val;
> +			return -EIO;
> 
>  		/* make sure dst_mm is on swapoff's mmlist. */
>  		if (unlikely(list_empty(&dst_mm->mmlist))) {
> @@ -974,11 +974,13 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>  			continue;
>  		}
>  		if (unlikely(!pte_present(*src_pte))) {
> -			entry.val = copy_nonpresent_pte(dst_mm, src_mm,
> -							dst_pte, src_pte,
> -							src_vma, addr, rss);
> -			if (entry.val)
> +			ret = copy_nonpresent_pte(dst_mm, src_mm,
> +						  dst_pte, src_pte,
> +						  src_vma, addr, rss);
> +			if (ret == -EIO) {
> +				entry = pte_to_swp_entry(*src_pte);
>  				break;
> +			}
>  			progress += 8;
>  			continue;
>  		}
> @@ -1011,20 +1013,24 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
>  	pte_unmap_unlock(orig_dst_pte, dst_ptl);
>  	cond_resched();
> 
> -	if (entry.val) {
> +	if (ret == -EIO) {
> +		VM_WARN_ON_ONCE(!entry.val);
>  		if (add_swap_count_continuation(entry, GFP_KERNEL) < 0) {
>  			ret = -ENOMEM;
>  			goto out;
>  		}
>  		entry.val = 0;
> -	} else if (ret) {
> -		WARN_ON_ONCE(ret != -EAGAIN);
> +	} else if (ret ==  -EAGAIN) {
                       ^
                       |----------------------------- one more space here

>  		prealloc = page_copy_prealloc(src_mm, src_vma, addr);
>  		if (!prealloc)
>  			return -ENOMEM;
> -		/* We've captured and resolved the error. Reset, try again. */
> -		ret = 0;
> +	} else if (ret) {
> +		VM_WARN_ON_ONCE(1);
>  	}
> +
> +	/* We've captured and resolved the error. Reset, try again. */

Maybe better as:

	/*
	 * We've resolved all error even if there is, reset error code and try
	 * again if necessary.
	 */

as it also covers the no-error path.  But I guess not a big deal..

Reviewed-by: Peter Xu <peterx@redhat.com>

Thanks,

> +	ret = 0;
> +
>  	if (addr != end)
>  		goto again;
>  out:
> -- 
> 2.20.1
> 

-- 
Peter Xu