Date: Wed, 29 Jan 2025 11:48:03 +0100
From: Simona Vetter
To: David Hildenbrand
Cc: Alistair Popple, linux-mm@kvack.org, John Hubbard,
    nouveau@lists.freedesktop.org, Jason Gunthorpe, DRI Development,
    Karol Herbst, Lyude Paul, Danilo Krummrich
Subject: Re: [Question] Are "device exclusive non-swap entries" / "SVM atomics in Nouveau" still getting used in practice?
References: <346518a4-a090-4eaa-bc04-634388fd4ca3@redhat.com>
 <8c6f3838-f194-4a42-845d-10011192a234@redhat.com>

On Tue, Jan 28, 2025 at 09:24:33PM +0100, David Hildenbrand wrote:
> On 28.01.25 21:14, Simona Vetter wrote:
> > On Tue, Jan 28, 2025 at 11:09:24AM +1100, Alistair Popple wrote:
> > > On Fri, Jan 24, 2025 at 06:54:02PM +0100, David Hildenbrand wrote:
> > > > > > > On integrated the gpu is tied into the coherency
> > > > > > > fabric, so there it's not needed.
> > > > > > >
> > > > > > > I think the more fundamental question with both this function here and
> > > > > > > with forced migration to device memory is that there's no guarantee it
> > > > > > > will work out.
> > > > > >
> > > > > > Yes, in particular with device-exclusive, it doesn't really work with THP
> > > > > > and is only limited to anonymous memory. I have patches to at least make it
> > > > > > work reliably with THP.
> > > > >
> > > > > I should have crawled through the implementation first before replying.
> > > > > Since it only looks at folio_mapcount() make_device_exclusive() should at
> > > > > least in theory work reliably on anon memory, and not be impacted by
> > > > > elevated refcounts due to migration/ksm/thp/whatever.
> > > >
> > > > Yes, there is -- in theory -- nothing blocking the conversion except the
> > > > folio lock. That's different than page migration.
> > >
> > > Indeed - this was the entire motivation for make_device_exclusive() - that we
> > > needed a way to reliably exclude CPU access that couldn't be blocked in the same
> > > way page migration can (otherwise we could have just migrated to a device page,
> > > even if that may have added unwanted overhead).
> >
> > The folio_trylock worries me a bit. I guess this is to avoid deadlocks
> > when locking multiple folios, but I think at least on the first one we
> > need an unconditional folio_lock to guarantee forward progress.
>
> At least on the hmm path I was able to trigger the EBUSY a couple of times
> due to concurrent swapout. But the hmm-tests selftest fails immediately
> instead of retrying.

My worry with just retrying is that it's very hard to assess whether
there's a livelock or whether the retry has a good chance of success. As
an example, the ->migrate_to_ram path has some trylocks, and the window
in which other threads get halfway and then fail the trylock is big
enough that once you pile up enough threads spinning through there,
you're stuck forever. Which isn't great. So if we could convert at least
the first folio_trylock into a plain folio_lock then forward progress is
obviously assured, and there's no need to crawl through large chunks of
mm/ code to hunt for corner cases where we could be too unlucky to ever
win the race. There's a rough sketch of the locking scheme I mean further
below.

> > Since
> > atomics can't cross 4k boundaries (or the hw is just really broken) this
> > should be enough to avoid being stuck in a livelock. I'm also not seeing
> > any other reason why a folio_lock shouldn't work here, but then my
> > understanding of mm/ stuff is really just scratching the surface.
> >
> > I did crawl through all the other code and it looks like everything else
> > is unconditional locks. So looks all good and I didn't spot anything else
> > that seemed problematic.
> >
> > Somewhat aside, I do wonder whether we really want to require callers to
> > hold the mmap lock, or whether with all the work towards lockless fastpaths
> > that shouldn't instead just be an implementation detail.
>
> We might be able to use the VMA lock in the future, but that will require
> GUP support and a bunch more. Until then, the mmap_lock in read mode is
> required.

Yup. I also don't think we should try to improve this before benchmarks
show an actual need. What I'm worried about is more the future proofing,
and making sure mmap_lock doesn't leak into driver data structures.
Because I've seen some hmm/gpu rfc patches that heavily relied on
mmap_lock to keep everything correct on the driver side, which is not a
clean design.

> I was not able to convince myself that we'll really need the folio lock, but
> that's also a separate discussion.

This is way above my level of understanding of mm/, unfortunately.

> > At least for the
> > gpu hmm code I've seen I've tried to push hard towards a world where the
> > gpu side does not rely on mmap_read_lock being held at all, to future
> > proof this all. And currently we only have one caller of
> > make_device_exclusive_range(), so it would be simple to do.
>
> We could likely move the mmap_lock into that function, but avoiding it is
> more effort.

I didn't mean more than just that, which would make sure drivers at least
do not rely on mmap_lock being held. That then allows us to switch over
to the vma lock or anything else entirely within mm/ code. If we leave it
as-is then more drivers will, accidentally or intentionally, rely on
this, like I think is already the case for ->migrate_to_ram for hmm. And
then it's more pain to untangle. The wrapper sketch below is roughly all
I had in mind.
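To make the folio_trylock point concrete, here's a rough sketch of the
locking scheme I have in mind. Illustration only: the helpers are made up
and this is not the actual make_device_exclusive() code.

#include <linux/mm.h>
#include <linux/pagemap.h>

/*
 * Hypothetical helper, not real mm/ code: lock a batch of folios before
 * converting their mappings to device-exclusive entries.  Assumes nr >= 1.
 *
 * Blocking on the first folio guarantees forward progress (we always end
 * up owning at least that one), while trylock on the remaining folios
 * avoids ABBA deadlocks between concurrent callers that lock overlapping
 * batches in different orders.
 */
static unsigned int lock_folio_batch(struct folio **folios, unsigned int nr)
{
	unsigned int i;

	/* Sleep until we own the first folio: forward progress assured. */
	folio_lock(folios[0]);

	/* Opportunistically grab the rest, stop at the first contended one. */
	for (i = 1; i < nr; i++) {
		if (!folio_trylock(folios[i]))
			break;
	}

	/* Number of folios we hold locked, always >= 1. */
	return i;
}

static void unlock_folio_batch(struct folio **folios, unsigned int nr_locked)
{
	while (nr_locked--)
		folio_unlock(folios[nr_locked]);
}

The point is just that a caller can convert whatever it managed to lock
and retry for the remainder, without ever spinning forever while making
no progress at all.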
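And for the mmap_lock discussion, something like this thin wrapper is all
I meant - the wrapper name is made up and I'm going by the
make_device_exclusive_range() signature in current kernels, so again
treat it as a sketch, not a patch:

#include <linux/mm.h>
#include <linux/mmap_lock.h>
#include <linux/rmap.h>

/*
 * Hypothetical wrapper, illustration only: take the mmap_lock here (or
 * inside make_device_exclusive_range() itself) instead of in the driver,
 * so drivers never grow a dependency on it and mm/ stays free to switch
 * to the vma lock or something else later without touching drivers.
 */
static int driver_make_device_exclusive(struct mm_struct *mm,
					unsigned long start,
					unsigned long end,
					struct page **pages,
					void *owner)
{
	int ret;

	mmap_read_lock(mm);
	ret = make_device_exclusive_range(mm, start, end, pages, owner);
	mmap_read_unlock(mm);

	return ret;
}

With only the one nouveau caller today that would be a pretty small
change.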
> In any case, I'll send something out probably tomorrow to fix page
> migration/swapout of pages with device-exclusive entries and a bunch of
> other things (THP, interaction with hugetlb, ...).

Thanks a lot!

Cheers, Sima

> --
> Cheers,
>
> David / dhildenb

-- 
Simona Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch