From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tal Zussman
Date: Sat, 07 Jun 2025 02:40:01 -0400
Subject: [PATCH v2 2/4] userfaultfd: remove (VM_)BUG_ON()s
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Message-Id: <20250607-uffd-fixes-v2-2-339dafe9a2fe@columbia.edu>
References: <20250607-uffd-fixes-v2-0-339dafe9a2fe@columbia.edu>
In-Reply-To: <20250607-uffd-fixes-v2-0-339dafe9a2fe@columbia.edu>
To: Andrew Morton, Peter Xu, "Jason A. Donenfeld", David Hildenbrand,
 Alexander Viro, Christian Brauner, Jan Kara, Andrea Arcangeli
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, Tal Zussman
X-Mailer: b4 0.14.3-dev-d7477

BUG_ON() is deprecated [1]. Convert all the BUG_ON()s and VM_BUG_ON()s
to use VM_WARN_ON_ONCE().

While at it, also convert the WARN_ON_ONCE()s in move_pages() to use
VM_WARN_ON_ONCE(), as the relevant conditions are already checked in
validate_range() in move_pages()'s caller.

[1] https://www.kernel.org/doc/html/v6.15/process/coding-style.html#use-warn-rather-than-bug

Signed-off-by: Tal Zussman
---
 fs/userfaultfd.c | 59 +++++++++++++++++++++++++------------------------
 mm/userfaultfd.c | 66 +++++++++++++++++++++++++++-----------------------------
 2 files changed, 61 insertions(+), 64 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 22f4bf956ba1..80c95c712266 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -165,14 +165,14 @@ static void userfaultfd_ctx_get(struct userfaultfd_ctx *ctx)
 static void userfaultfd_ctx_put(struct userfaultfd_ctx *ctx)
 {
 	if (refcount_dec_and_test(&ctx->refcount)) {
-		VM_BUG_ON(spin_is_locked(&ctx->fault_pending_wqh.lock));
-		VM_BUG_ON(waitqueue_active(&ctx->fault_pending_wqh));
-		VM_BUG_ON(spin_is_locked(&ctx->fault_wqh.lock));
-		VM_BUG_ON(waitqueue_active(&ctx->fault_wqh));
-		VM_BUG_ON(spin_is_locked(&ctx->event_wqh.lock));
-		VM_BUG_ON(waitqueue_active(&ctx->event_wqh));
-		VM_BUG_ON(spin_is_locked(&ctx->fd_wqh.lock));
-		VM_BUG_ON(waitqueue_active(&ctx->fd_wqh));
+		VM_WARN_ON_ONCE(spin_is_locked(&ctx->fault_pending_wqh.lock));
+		VM_WARN_ON_ONCE(waitqueue_active(&ctx->fault_pending_wqh));
+		VM_WARN_ON_ONCE(spin_is_locked(&ctx->fault_wqh.lock));
+		VM_WARN_ON_ONCE(waitqueue_active(&ctx->fault_wqh));
+		VM_WARN_ON_ONCE(spin_is_locked(&ctx->event_wqh.lock));
+		VM_WARN_ON_ONCE(waitqueue_active(&ctx->event_wqh));
+		VM_WARN_ON_ONCE(spin_is_locked(&ctx->fd_wqh.lock));
+		VM_WARN_ON_ONCE(waitqueue_active(&ctx->fd_wqh));
 		mmdrop(ctx->mm);
 		kmem_cache_free(userfaultfd_ctx_cachep, ctx);
 	}
@@ -383,12 +383,12 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
 	if (!ctx)
 		goto out;
 
-	BUG_ON(ctx->mm != mm);
+	VM_WARN_ON_ONCE(ctx->mm != mm);
 
 	/* Any unrecognized flag is a bug. */
-	VM_BUG_ON(reason & ~__VM_UFFD_FLAGS);
+	VM_WARN_ON_ONCE(reason & ~__VM_UFFD_FLAGS);
 	/* 0 or > 1 flags set is a bug; we expect exactly 1. */
-	VM_BUG_ON(!reason || (reason & (reason - 1)));
+	VM_WARN_ON_ONCE(!reason || (reason & (reason - 1)));
 
 	if (ctx->features & UFFD_FEATURE_SIGBUS)
 		goto out;
@@ -411,12 +411,11 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
 		 * to be sure not to return SIGBUS erroneously on
 		 * nowait invocations.
 		 */
-		BUG_ON(vmf->flags & FAULT_FLAG_RETRY_NOWAIT);
+		VM_WARN_ON_ONCE(vmf->flags & FAULT_FLAG_RETRY_NOWAIT);
 #ifdef CONFIG_DEBUG_VM
 		if (printk_ratelimit()) {
-			printk(KERN_WARNING
-			       "FAULT_FLAG_ALLOW_RETRY missing %x\n",
-			       vmf->flags);
+			pr_warn("FAULT_FLAG_ALLOW_RETRY missing %x\n",
+				vmf->flags);
 			dump_stack();
 		}
 #endif
@@ -602,7 +601,7 @@ static void userfaultfd_event_wait_completion(struct userfaultfd_ctx *ctx,
 	 */
 out:
 	atomic_dec(&ctx->mmap_changing);
-	VM_BUG_ON(atomic_read(&ctx->mmap_changing) < 0);
+	VM_WARN_ON_ONCE(atomic_read(&ctx->mmap_changing) < 0);
 	userfaultfd_ctx_put(ctx);
 }
 
@@ -710,7 +709,7 @@ void dup_userfaultfd_fail(struct list_head *fcs)
 		struct userfaultfd_ctx *ctx = fctx->new;
 
 		atomic_dec(&octx->mmap_changing);
-		VM_BUG_ON(atomic_read(&octx->mmap_changing) < 0);
+		VM_WARN_ON_ONCE(atomic_read(&octx->mmap_changing) < 0);
 		userfaultfd_ctx_put(octx);
 		userfaultfd_ctx_put(ctx);
 
@@ -1317,8 +1316,8 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
 	do {
 		cond_resched();
 
-		BUG_ON(!!cur->vm_userfaultfd_ctx.ctx ^
-		       !!(cur->vm_flags & __VM_UFFD_FLAGS));
+		VM_WARN_ON_ONCE(!!cur->vm_userfaultfd_ctx.ctx ^
+				!!(cur->vm_flags & __VM_UFFD_FLAGS));
 
 		/* check not compatible vmas */
 		ret = -EINVAL;
@@ -1372,7 +1371,7 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
 		found = true;
 	} for_each_vma_range(vmi, cur, end);
-	BUG_ON(!found);
+	VM_WARN_ON_ONCE(!found);
 
 	ret = userfaultfd_register_range(ctx, vma, vm_flags, start, end,
 					 wp_async);
@@ -1464,8 +1463,8 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
 	do {
 		cond_resched();
 
-		BUG_ON(!!cur->vm_userfaultfd_ctx.ctx ^
-		       !!(cur->vm_flags & __VM_UFFD_FLAGS));
+		VM_WARN_ON_ONCE(!!cur->vm_userfaultfd_ctx.ctx ^
+				!!(cur->vm_flags & __VM_UFFD_FLAGS));
 
 		/*
 		 * Check not compatible vmas, not strictly required
@@ -1479,7 +1478,7 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
 		found = true;
 	} for_each_vma_range(vmi, cur, end);
-	BUG_ON(!found);
+	VM_WARN_ON_ONCE(!found);
 
 	vma_iter_set(&vmi, start);
 	prev = vma_prev(&vmi);
@@ -1490,7 +1489,7 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
 	for_each_vma_range(vmi, vma, end) {
 		cond_resched();
 
-		BUG_ON(!vma_can_userfault(vma, vma->vm_flags, wp_async));
+		VM_WARN_ON_ONCE(!vma_can_userfault(vma, vma->vm_flags, wp_async));
 
 		/*
 		 * Nothing to do: this vma is already registered into this
@@ -1564,7 +1563,7 @@ static int userfaultfd_wake(struct userfaultfd_ctx *ctx,
 	 * len == 0 means wake all and we don't want to wake all here,
 	 * so check it again to be sure.
 	 */
-	VM_BUG_ON(!range.len);
+	VM_WARN_ON_ONCE(!range.len);
 
 	wake_userfault(ctx, &range);
 	ret = 0;
@@ -1621,7 +1620,7 @@ static int userfaultfd_copy(struct userfaultfd_ctx *ctx,
 		return -EFAULT;
 	if (ret < 0)
 		goto out;
-	BUG_ON(!ret);
+	VM_WARN_ON_ONCE(!ret);
 	/* len == 0 would wake all */
 	range.len = ret;
 	if (!(uffdio_copy.mode & UFFDIO_COPY_MODE_DONTWAKE)) {
@@ -1676,7 +1675,7 @@ static int userfaultfd_zeropage(struct userfaultfd_ctx *ctx,
 	if (ret < 0)
 		goto out;
 	/* len == 0 would wake all */
-	BUG_ON(!ret);
+	VM_WARN_ON_ONCE(!ret);
 	range.len = ret;
 	if (!(uffdio_zeropage.mode & UFFDIO_ZEROPAGE_MODE_DONTWAKE)) {
 		range.start = uffdio_zeropage.range.start;
@@ -1788,7 +1787,7 @@ static int userfaultfd_continue(struct userfaultfd_ctx *ctx, unsigned long arg)
 		goto out;
 
 	/* len == 0 would wake all */
-	BUG_ON(!ret);
+	VM_WARN_ON_ONCE(!ret);
 	range.len = ret;
 	if (!(uffdio_continue.mode & UFFDIO_CONTINUE_MODE_DONTWAKE)) {
 		range.start = uffdio_continue.range.start;
@@ -1845,7 +1844,7 @@ static inline int userfaultfd_poison(struct userfaultfd_ctx *ctx, unsigned long
 		goto out;
 
 	/* len == 0 would wake all */
-	BUG_ON(!ret);
+	VM_WARN_ON_ONCE(!ret);
 	range.len = ret;
 	if (!(uffdio_poison.mode & UFFDIO_POISON_MODE_DONTWAKE)) {
 		range.start = uffdio_poison.range.start;
@@ -2106,7 +2105,7 @@ static int new_userfaultfd(int flags)
 	struct file *file;
 	int fd;
 
-	BUG_ON(!current->mm);
+	VM_WARN_ON_ONCE(!current->mm);
 
 	/* Check the UFFD_* constants for consistency.  */
 	BUILD_BUG_ON(UFFD_USER_MODE_ONLY & UFFD_SHARED_FCNTL_FLAGS);
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index bc473ad21202..41e67ded5a6e 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -561,7 +561,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	}
 
 	while (src_addr < src_start + len) {
-		BUG_ON(dst_addr >= dst_start + len);
+		VM_WARN_ON_ONCE(dst_addr >= dst_start + len);
 
 		/*
 		 * Serialize via vma_lock and hugetlb_fault_mutex.
@@ -602,7 +602,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 		if (unlikely(err == -ENOENT)) {
 			up_read(&ctx->map_changing_lock);
 			uffd_mfill_unlock(dst_vma);
-			BUG_ON(!folio);
+			VM_WARN_ON_ONCE(!folio);
 
 			err = copy_folio_from_user(folio,
 						   (const void __user *)src_addr, true);
@@ -614,7 +614,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 			dst_vma = NULL;
 			goto retry;
 		} else
-			BUG_ON(folio);
+			VM_WARN_ON_ONCE(folio);
 
 		if (!err) {
 			dst_addr += vma_hpagesize;
@@ -635,9 +635,9 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 out:
 	if (folio)
 		folio_put(folio);
-	BUG_ON(copied < 0);
-	BUG_ON(err > 0);
-	BUG_ON(!copied && !err);
+	VM_WARN_ON_ONCE(copied < 0);
+	VM_WARN_ON_ONCE(err > 0);
+	VM_WARN_ON_ONCE(!copied && !err);
 	return copied ? copied : err;
 }
 #else /* !CONFIG_HUGETLB_PAGE */
@@ -711,12 +711,12 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 	/*
 	 * Sanitize the command parameters:
 	 */
-	BUG_ON(dst_start & ~PAGE_MASK);
-	BUG_ON(len & ~PAGE_MASK);
+	VM_WARN_ON_ONCE(dst_start & ~PAGE_MASK);
+	VM_WARN_ON_ONCE(len & ~PAGE_MASK);
 
 	/* Does the address range wrap, or is the span zero-sized? */
-	BUG_ON(src_start + len <= src_start);
-	BUG_ON(dst_start + len <= dst_start);
+	VM_WARN_ON_ONCE(src_start + len <= src_start);
+	VM_WARN_ON_ONCE(dst_start + len <= dst_start);
 
 	src_addr = src_start;
 	dst_addr = dst_start;
@@ -775,7 +775,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 	while (src_addr < src_start + len) {
 		pmd_t dst_pmdval;
 
-		BUG_ON(dst_addr >= dst_start + len);
+		VM_WARN_ON_ONCE(dst_addr >= dst_start + len);
 
 		dst_pmd = mm_alloc_pmd(dst_mm, dst_addr);
 		if (unlikely(!dst_pmd)) {
@@ -818,7 +818,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 
 			up_read(&ctx->map_changing_lock);
 			uffd_mfill_unlock(dst_vma);
-			BUG_ON(!folio);
+			VM_WARN_ON_ONCE(!folio);
 
 			kaddr = kmap_local_folio(folio, 0);
 			err = copy_from_user(kaddr,
@@ -832,7 +832,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 			flush_dcache_folio(folio);
 			goto retry;
 		} else
-			BUG_ON(folio);
+			VM_WARN_ON_ONCE(folio);
 
 		if (!err) {
 			dst_addr += PAGE_SIZE;
@@ -852,9 +852,9 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 out:
 	if (folio)
 		folio_put(folio);
-	BUG_ON(copied < 0);
-	BUG_ON(err > 0);
-	BUG_ON(!copied && !err);
+	VM_WARN_ON_ONCE(copied < 0);
+	VM_WARN_ON_ONCE(err > 0);
+	VM_WARN_ON_ONCE(!copied && !err);
 	return copied ? copied : err;
 }
 
@@ -940,11 +940,11 @@ int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start,
 	/*
 	 * Sanitize the command parameters:
 	 */
-	BUG_ON(start & ~PAGE_MASK);
-	BUG_ON(len & ~PAGE_MASK);
+	VM_WARN_ON_ONCE(start & ~PAGE_MASK);
+	VM_WARN_ON_ONCE(len & ~PAGE_MASK);
 
 	/* Does the address range wrap, or is the span zero-sized? */
-	BUG_ON(start + len <= start);
+	VM_WARN_ON_ONCE(start + len <= start);
 
 	mmap_read_lock(dst_mm);
 
@@ -1709,15 +1709,13 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
 	ssize_t moved = 0;
 
 	/* Sanitize the command parameters. */
-	if (WARN_ON_ONCE(src_start & ~PAGE_MASK) ||
-	    WARN_ON_ONCE(dst_start & ~PAGE_MASK) ||
-	    WARN_ON_ONCE(len & ~PAGE_MASK))
-		goto out;
+	VM_WARN_ON_ONCE(src_start & ~PAGE_MASK);
+	VM_WARN_ON_ONCE(dst_start & ~PAGE_MASK);
+	VM_WARN_ON_ONCE(len & ~PAGE_MASK);
 
 	/* Does the address range wrap, or is the span zero-sized? */
-	if (WARN_ON_ONCE(src_start + len <= src_start) ||
-	    WARN_ON_ONCE(dst_start + len <= dst_start))
-		goto out;
+	VM_WARN_ON_ONCE(src_start + len < src_start);
+	VM_WARN_ON_ONCE(dst_start + len < dst_start);
 
 	err = uffd_move_lock(mm, dst_start, src_start, &dst_vma, &src_vma);
 	if (err)
@@ -1867,9 +1865,9 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
 	up_read(&ctx->map_changing_lock);
 	uffd_move_unlock(dst_vma, src_vma);
 out:
-	VM_WARN_ON(moved < 0);
-	VM_WARN_ON(err > 0);
-	VM_WARN_ON(!moved && !err);
+	VM_WARN_ON_ONCE(moved < 0);
+	VM_WARN_ON_ONCE(err > 0);
+	VM_WARN_ON_ONCE(!moved && !err);
 	return moved ? moved : err;
 }
 
@@ -1956,9 +1954,9 @@ int userfaultfd_register_range(struct userfaultfd_ctx *ctx,
 	for_each_vma_range(vmi, vma, end) {
 		cond_resched();
 
-		BUG_ON(!vma_can_userfault(vma, vm_flags, wp_async));
-		BUG_ON(vma->vm_userfaultfd_ctx.ctx &&
-		       vma->vm_userfaultfd_ctx.ctx != ctx);
+		VM_WARN_ON_ONCE(!vma_can_userfault(vma, vm_flags, wp_async));
+		VM_WARN_ON_ONCE(vma->vm_userfaultfd_ctx.ctx &&
+				vma->vm_userfaultfd_ctx.ctx != ctx);
 		WARN_ON(!(vma->vm_flags & VM_MAYWRITE));
 
 		/*
@@ -2035,8 +2033,8 @@ void userfaultfd_release_all(struct mm_struct *mm,
 	prev = NULL;
 	for_each_vma(vmi, vma) {
 		cond_resched();
-		BUG_ON(!!vma->vm_userfaultfd_ctx.ctx ^
-		       !!(vma->vm_flags & __VM_UFFD_FLAGS));
+		VM_WARN_ON_ONCE(!!vma->vm_userfaultfd_ctx.ctx ^
+				!!(vma->vm_flags & __VM_UFFD_FLAGS));
 		if (vma->vm_userfaultfd_ctx.ctx != ctx) {
 			prev = vma;
 			continue;

-- 
2.39.5