Date: Mon, 3 Jun 2024 11:08:26 -0400
From: Peter Xu <peterx@redhat.com>
To: David Hildenbrand
Cc: Yang Shi, kernel test robot, Jason Gunthorpe, Vivek Kasireddy,
    Rik van Riel, oe-lkp@lists.linux.dev, lkp@intel.com,
    linux-kernel@vger.kernel.org, Andrew Morton, Matthew Wilcox,
    Christopher Lameter, linux-mm@kvack.org
Subject: Re: [linus:master] [mm] efa7df3e3b: kernel_BUG_at_include/linux/page_ref.h
In-Reply-To: <0edfcfed-e8c4-4c46-bbce-528c07084792@redhat.com>
References: <202405311534.86cd4043-lkp@intel.com>
 <890e5a79-8574-4a24-90ab-b9888968d5e5@redhat.com>
 <0edfcfed-e8c4-4c46-bbce-528c07084792@redhat.com>
On Sat, Jun 01, 2024 at 08:22:21AM +0200, David Hildenbrand wrote:
> On 01.06.24 02:59, Yang Shi wrote:
> > On Fri, May 31, 2024 at 5:01 PM Yang Shi wrote:
> > >
> > > On Fri, May 31, 2024 at 4:25 PM Peter Xu wrote:
> > > >
> > > > On Fri, May 31, 2024 at 07:46:41PM +0200, David Hildenbrand wrote:
> > > > > try_grab_folio()->try_get_folio()->folio_ref_try_add_rcu()
> > > > >
> > > > > Is called (mm-unstable) from:
> > > > >
> > > > > (1) gup_fast function, here IRQs are disabled
> > > > > (2) gup_hugepte(), possibly problematic
> > > > > (3) memfd_pin_folios(), possibly problematic
> > > > > (4) __get_user_pages(), likely problematic
> > > > >
> > > > > (1) should be fine.
> > > > >
> > > > > (2) is possibly problematic on the !fast path. If so, due to commit
> > > > > a12083d721d7 ("mm/gup: handle hugepd for follow_page()")? CCing Peter.
> > > > >
> > > > > (3) is possibly wrong. CCing Vivek.
> > > > >
> > > > > (4) is what we hit here.
> > > >
> > > > I guess it was overlooked because try_grab_folio() didn't have any
> > > > comment or implication on RCU or IRQ internal helpers being used,
> > > > hence a bit confusing. E.g. it has different context requirements
> > > > than try_grab_page(), even though they look like sister functions.
> > > > It might be helpful to have a better name, something like
> > > > try_grab_folio_rcu() in this case.
> > > >
> > > > Btw, none of the above cases (2-4) has a real bug, but we're looking
> > > > at some way to avoid triggering the sanity check, am I right? I hope
> > > > that besides the host splash I didn't overlook any other side effect
> > > > this issue would cause, and the splash IIUC should so far be benign,
> > > > as either gup slow (2,4) or the newly added memfd_pin_folios() (3)
> > > > looks like it has the refcount stabilized anyway.
> > > >
> > > > Yang's patch in the other email looks sane to me, just that we'd
> > > > then add quite some code just to avoid this sanity check in paths
> > > > 2-4, which seems like a slight overkill.
> > > >
> > > > One thing I'm thinking is whether folio_ref_try_add_rcu() can get
> > > > rid of its RCU limitation. It boils down to whether we can use
> > > > atomic_add_unless() on a TINY_RCU / UP setup too. I mean, we do
> > > > plenty of similar things (get_page_unless_zero, etc.) in generic
> > > > code and I don't understand why here we need to treat
> > > > folio_ref_try_add_rcu() specially.
> > > >
> > > > IOW, the assertion we added here:
> > > >
> > > >     VM_BUG_ON(!in_atomic() && !irqs_disabled());
> > > >
> > > > is because we need atomicity of the below sequence:
> > > >
> > > >     VM_BUG_ON_FOLIO(folio_ref_count(folio) == 0, folio);
> > > >     folio_ref_add(folio, count);
> > > >
> > > > But atomic ops avoid it.
> > >
> > > Yeah, I didn't think of why atomic can't do it either. But is it
> > > written in this way because we want to catch the refcount == 0 case,
> > > since it means a severe bug? Did we miss something?
> >
> > Thought more about it and disassembled the code. IIUC, this is an
> > optimization for non-SMP kernels. When in an RCU critical section or
> > with IRQs disabled, we just need an atomic add instruction.
> > folio_ref_add_unless() would yield more instructions, including a
> > branch instruction. But I'm wondering how useful it would be nowadays.
> > Is it really worth the complexity? AFAIK, for example, ARM64 has not
> > supported non-SMP kernels for years.
> >
> > My patch actually replaced all folio_ref_add_unless() with
> > folio_ref_add() for slow paths, so it is supposed to run faster, but
> > we are already in the slow path, so it may not be measurable at all.
> > So having simpler and more readable code may outweigh the potential
> > slight performance gain in this case?
>
> Yes, we don't want to use atomic RMW operations that return values where
> we can use atomic RMW operations that don't return values. The former
> are slower and imply a memory barrier, which can be optimized out on
> some architectures (arm64 IIRC).
>
> We should clean that up here, and make it clearer that the old function
> is only for grabbing a folio if it can be freed concurrently -- GUP-fast.

Note again that this only affects TINY_RCU, which mostly implies
!PREEMPTION and UP.
It's a matter of whether we prefer adding that bunch of code to optimize
it. Also, we haven't yet measured in a real workload how that "unless"
plays when buried in other paths; for that we'd need a special kernel
build first, and again I'm not sure whether it'd be worthwhile.

Thanks,

-- 
Peter Xu