From: Tal Zussman <tz2294@columbia.edu>
Date: Thu, 19 Jun 2025 21:24:25 -0400
Subject: [PATCH v3 3/4] userfaultfd: remove (VM_)BUG_ON()s
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Message-Id: <20250619-uffd-fixes-v3-3-a7274d3bd5e4@columbia.edu>
References: <20250619-uffd-fixes-v3-0-a7274d3bd5e4@columbia.edu>
In-Reply-To: <20250619-uffd-fixes-v3-0-a7274d3bd5e4@columbia.edu>
To: Andrew Morton, Peter Xu, "Jason A. Donenfeld", David Hildenbrand,
 Alexander Viro, Christian Brauner, Jan Kara, Andrea Arcangeli
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, Tal Zussman
X-Mailer: b4 0.14.3-dev-d7477

BUG_ON() is deprecated [1]. Convert all the BUG_ON()s and VM_BUG_ON()s
to use VM_WARN_ON_ONCE().

There are a few additional cases that are converted or modified:

- Convert the printk(KERN_WARNING ...) in handle_userfault() to use
  pr_warn().

- Convert the WARN_ON_ONCE()s in move_pages() to use VM_WARN_ON_ONCE(),
  as the relevant conditions are already checked in validate_range() in
  move_pages()'s caller.

- Convert the VM_WARN_ON()s in move_pages() to VM_WARN_ON_ONCE(). These
  cases should never happen and are similar to those in mfill_atomic()
  and mfill_atomic_hugetlb(), which were previously BUG_ON()s.
  move_pages() was added later than those functions and makes use of
  VM_WARN_ON() as a replacement for the deprecated BUG_ON(), but
  VM_WARN_ON_ONCE() is likely a better direct replacement.

- Convert the WARN_ON() for !VM_MAYWRITE in userfaultfd_unregister()
  and userfaultfd_register_range() to VM_WARN_ON_ONCE(). This condition
  is enforced in userfaultfd_register(), so it should never happen and
  can be converted to a debug check.

[1] https://www.kernel.org/doc/html/v6.15/process/coding-style.html#use-warn-rather-than-bug

Signed-off-by: Tal Zussman <tz2294@columbia.edu>
---
 fs/userfaultfd.c | 59 ++++++++++++++++++++++++------------------------
 mm/userfaultfd.c | 68 +++++++++++++++++++++++++++-----------------------------
 2 files changed, 62 insertions(+), 65 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 8e7fb2a7a6aa..771e81ea4ef6 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -165,14 +165,14 @@ static void userfaultfd_ctx_get(struct userfaultfd_ctx *ctx)
 static void userfaultfd_ctx_put(struct userfaultfd_ctx *ctx)
 {
 	if (refcount_dec_and_test(&ctx->refcount)) {
-		VM_BUG_ON(spin_is_locked(&ctx->fault_pending_wqh.lock));
-		VM_BUG_ON(waitqueue_active(&ctx->fault_pending_wqh));
-		VM_BUG_ON(spin_is_locked(&ctx->fault_wqh.lock));
-		VM_BUG_ON(waitqueue_active(&ctx->fault_wqh));
-		VM_BUG_ON(spin_is_locked(&ctx->event_wqh.lock));
-		VM_BUG_ON(waitqueue_active(&ctx->event_wqh));
-		VM_BUG_ON(spin_is_locked(&ctx->fd_wqh.lock));
-		VM_BUG_ON(waitqueue_active(&ctx->fd_wqh));
+		VM_WARN_ON_ONCE(spin_is_locked(&ctx->fault_pending_wqh.lock));
+		VM_WARN_ON_ONCE(waitqueue_active(&ctx->fault_pending_wqh));
+		VM_WARN_ON_ONCE(spin_is_locked(&ctx->fault_wqh.lock));
+		VM_WARN_ON_ONCE(waitqueue_active(&ctx->fault_wqh));
+		VM_WARN_ON_ONCE(spin_is_locked(&ctx->event_wqh.lock));
+		VM_WARN_ON_ONCE(waitqueue_active(&ctx->event_wqh));
+		VM_WARN_ON_ONCE(spin_is_locked(&ctx->fd_wqh.lock));
+		VM_WARN_ON_ONCE(waitqueue_active(&ctx->fd_wqh));
 		mmdrop(ctx->mm);
 		kmem_cache_free(userfaultfd_ctx_cachep, ctx);
 	}
@@ -383,12 +383,12 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
 	if (!ctx)
 		goto out;
 
-	BUG_ON(ctx->mm != mm);
+	VM_WARN_ON_ONCE(ctx->mm != mm);
 
 	/* Any unrecognized flag is a bug. */
-	VM_BUG_ON(reason & ~__VM_UFFD_FLAGS);
+	VM_WARN_ON_ONCE(reason & ~__VM_UFFD_FLAGS);
 	/* 0 or > 1 flags set is a bug; we expect exactly 1. */
-	VM_BUG_ON(!reason || (reason & (reason - 1)));
+	VM_WARN_ON_ONCE(!reason || (reason & (reason - 1)));
 
 	if (ctx->features & UFFD_FEATURE_SIGBUS)
 		goto out;
@@ -411,12 +411,11 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
 	 * to be sure not to return SIGBUS erroneously on
 	 * nowait invocations.
 	 */
-	BUG_ON(vmf->flags & FAULT_FLAG_RETRY_NOWAIT);
+	VM_WARN_ON_ONCE(vmf->flags & FAULT_FLAG_RETRY_NOWAIT);
 #ifdef CONFIG_DEBUG_VM
 	if (printk_ratelimit()) {
-		printk(KERN_WARNING
-		       "FAULT_FLAG_ALLOW_RETRY missing %x\n",
-		       vmf->flags);
+		pr_warn("FAULT_FLAG_ALLOW_RETRY missing %x\n",
+			vmf->flags);
 		dump_stack();
 	}
 #endif
@@ -602,7 +601,7 @@ static void userfaultfd_event_wait_completion(struct userfaultfd_ctx *ctx,
 	 */
 out:
 	atomic_dec(&ctx->mmap_changing);
-	VM_BUG_ON(atomic_read(&ctx->mmap_changing) < 0);
+	VM_WARN_ON_ONCE(atomic_read(&ctx->mmap_changing) < 0);
 	userfaultfd_ctx_put(ctx);
 }
 
@@ -710,7 +709,7 @@ void dup_userfaultfd_fail(struct list_head *fcs)
 		struct userfaultfd_ctx *ctx = fctx->new;
 
 		atomic_dec(&octx->mmap_changing);
-		VM_BUG_ON(atomic_read(&octx->mmap_changing) < 0);
+		VM_WARN_ON_ONCE(atomic_read(&octx->mmap_changing) < 0);
 		userfaultfd_ctx_put(octx);
 		userfaultfd_ctx_put(ctx);
 
@@ -1317,8 +1316,8 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
 	do {
 		cond_resched();
 
-		BUG_ON(!!cur->vm_userfaultfd_ctx.ctx ^
-		       !!(cur->vm_flags & __VM_UFFD_FLAGS));
+		VM_WARN_ON_ONCE(!!cur->vm_userfaultfd_ctx.ctx ^
+				!!(cur->vm_flags & __VM_UFFD_FLAGS));
 
 		/* check not compatible vmas */
 		ret = -EINVAL;
@@ -1372,7 +1371,7 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
 
 		found = true;
 	} for_each_vma_range(vmi, cur, end);
-	BUG_ON(!found);
+	VM_WARN_ON_ONCE(!found);
 
 	ret = userfaultfd_register_range(ctx, vma, vm_flags, start, end,
 					 wp_async);
@@ -1464,8 +1463,8 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
 	do {
 		cond_resched();
 
-		BUG_ON(!!cur->vm_userfaultfd_ctx.ctx ^
-		       !!(cur->vm_flags & __VM_UFFD_FLAGS));
+		VM_WARN_ON_ONCE(!!cur->vm_userfaultfd_ctx.ctx ^
+				!!(cur->vm_flags & __VM_UFFD_FLAGS));
 
 		/*
 		 * Prevent unregistering through a different userfaultfd than
@@ -1487,7 +1486,7 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
 
 		found = true;
 	} for_each_vma_range(vmi, cur, end);
-	BUG_ON(!found);
+	VM_WARN_ON_ONCE(!found);
 
 	vma_iter_set(&vmi, start);
 	prev = vma_prev(&vmi);
@@ -1504,7 +1503,7 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
 
 		VM_WARN_ON_ONCE(vma->vm_userfaultfd_ctx.ctx != ctx);
 		VM_WARN_ON_ONCE(!vma_can_userfault(vma, vma->vm_flags, wp_async));
-		WARN_ON(!(vma->vm_flags & VM_MAYWRITE));
+		VM_WARN_ON_ONCE(!(vma->vm_flags & VM_MAYWRITE));
 
 		if (vma->vm_start > start)
 			start = vma->vm_start;
@@ -1569,7 +1568,7 @@ static int userfaultfd_wake(struct userfaultfd_ctx *ctx,
 	 * len == 0 means wake all and we don't want to wake all here,
 	 * so check it again to be sure.
 	 */
-	VM_BUG_ON(!range.len);
+	VM_WARN_ON_ONCE(!range.len);
 
 	wake_userfault(ctx, &range);
 	ret = 0;
@@ -1626,7 +1625,7 @@ static int userfaultfd_copy(struct userfaultfd_ctx *ctx,
 		return -EFAULT;
 	if (ret < 0)
 		goto out;
-	BUG_ON(!ret);
+	VM_WARN_ON_ONCE(!ret);
 	/* len == 0 would wake all */
 	range.len = ret;
 	if (!(uffdio_copy.mode & UFFDIO_COPY_MODE_DONTWAKE)) {
@@ -1681,7 +1680,7 @@ static int userfaultfd_zeropage(struct userfaultfd_ctx *ctx,
 	if (ret < 0)
 		goto out;
 	/* len == 0 would wake all */
-	BUG_ON(!ret);
+	VM_WARN_ON_ONCE(!ret);
 	range.len = ret;
 	if (!(uffdio_zeropage.mode & UFFDIO_ZEROPAGE_MODE_DONTWAKE)) {
 		range.start = uffdio_zeropage.range.start;
@@ -1793,7 +1792,7 @@ static int userfaultfd_continue(struct userfaultfd_ctx *ctx, unsigned long arg)
 		goto out;
 
 	/* len == 0 would wake all */
-	BUG_ON(!ret);
+	VM_WARN_ON_ONCE(!ret);
 	range.len = ret;
 	if (!(uffdio_continue.mode & UFFDIO_CONTINUE_MODE_DONTWAKE)) {
 		range.start = uffdio_continue.range.start;
@@ -1850,7 +1849,7 @@ static inline int userfaultfd_poison(struct userfaultfd_ctx *ctx, unsigned long
 		goto out;
 
 	/* len == 0 would wake all */
-	BUG_ON(!ret);
+	VM_WARN_ON_ONCE(!ret);
 	range.len = ret;
 	if (!(uffdio_poison.mode & UFFDIO_POISON_MODE_DONTWAKE)) {
 		range.start = uffdio_poison.range.start;
@@ -2111,7 +2110,7 @@ static int new_userfaultfd(int flags)
 	struct file *file;
 	int fd;
 
-	BUG_ON(!current->mm);
+	VM_WARN_ON_ONCE(!current->mm);
 
 	/* Check the UFFD_* constants for consistency. */
 	BUILD_BUG_ON(UFFD_USER_MODE_ONLY & UFFD_SHARED_FCNTL_FLAGS);
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index bc473ad21202..240578bed181 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -561,7 +561,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 	}
 
 	while (src_addr < src_start + len) {
-		BUG_ON(dst_addr >= dst_start + len);
+		VM_WARN_ON_ONCE(dst_addr >= dst_start + len);
 
 		/*
 		 * Serialize via vma_lock and hugetlb_fault_mutex.
@@ -602,7 +602,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 		if (unlikely(err == -ENOENT)) {
 			up_read(&ctx->map_changing_lock);
 			uffd_mfill_unlock(dst_vma);
-			BUG_ON(!folio);
+			VM_WARN_ON_ONCE(!folio);
 
 			err = copy_folio_from_user(folio,
 						   (const void __user *)src_addr, true);
@@ -614,7 +614,7 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 			dst_vma = NULL;
 			goto retry;
 		} else
-			BUG_ON(folio);
+			VM_WARN_ON_ONCE(folio);
 
 		if (!err) {
 			dst_addr += vma_hpagesize;
@@ -635,9 +635,9 @@ static __always_inline ssize_t mfill_atomic_hugetlb(
 out:
 	if (folio)
 		folio_put(folio);
-	BUG_ON(copied < 0);
-	BUG_ON(err > 0);
-	BUG_ON(!copied && !err);
+	VM_WARN_ON_ONCE(copied < 0);
+	VM_WARN_ON_ONCE(err > 0);
+	VM_WARN_ON_ONCE(!copied && !err);
 	return copied ? copied : err;
 }
 #else /* !CONFIG_HUGETLB_PAGE */
@@ -711,12 +711,12 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 	/*
 	 * Sanitize the command parameters:
 	 */
-	BUG_ON(dst_start & ~PAGE_MASK);
-	BUG_ON(len & ~PAGE_MASK);
+	VM_WARN_ON_ONCE(dst_start & ~PAGE_MASK);
+	VM_WARN_ON_ONCE(len & ~PAGE_MASK);
 
 	/* Does the address range wrap, or is the span zero-sized? */
-	BUG_ON(src_start + len <= src_start);
-	BUG_ON(dst_start + len <= dst_start);
+	VM_WARN_ON_ONCE(src_start + len <= src_start);
+	VM_WARN_ON_ONCE(dst_start + len <= dst_start);
 
 	src_addr = src_start;
 	dst_addr = dst_start;
@@ -775,7 +775,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 	while (src_addr < src_start + len) {
 		pmd_t dst_pmdval;
 
-		BUG_ON(dst_addr >= dst_start + len);
+		VM_WARN_ON_ONCE(dst_addr >= dst_start + len);
 
 		dst_pmd = mm_alloc_pmd(dst_mm, dst_addr);
 		if (unlikely(!dst_pmd)) {
@@ -818,7 +818,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 
 			up_read(&ctx->map_changing_lock);
 			uffd_mfill_unlock(dst_vma);
-			BUG_ON(!folio);
+			VM_WARN_ON_ONCE(!folio);
 
 			kaddr = kmap_local_folio(folio, 0);
 			err = copy_from_user(kaddr,
@@ -832,7 +832,7 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 			flush_dcache_folio(folio);
 			goto retry;
 		} else
-			BUG_ON(folio);
+			VM_WARN_ON_ONCE(folio);
 
 		if (!err) {
 			dst_addr += PAGE_SIZE;
@@ -852,9 +852,9 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
 out:
 	if (folio)
 		folio_put(folio);
-	BUG_ON(copied < 0);
-	BUG_ON(err > 0);
-	BUG_ON(!copied && !err);
+	VM_WARN_ON_ONCE(copied < 0);
+	VM_WARN_ON_ONCE(err > 0);
+	VM_WARN_ON_ONCE(!copied && !err);
 	return copied ? copied : err;
 }
 
@@ -940,11 +940,11 @@ int mwriteprotect_range(struct userfaultfd_ctx *ctx, unsigned long start,
 	/*
 	 * Sanitize the command parameters:
 	 */
-	BUG_ON(start & ~PAGE_MASK);
-	BUG_ON(len & ~PAGE_MASK);
+	VM_WARN_ON_ONCE(start & ~PAGE_MASK);
+	VM_WARN_ON_ONCE(len & ~PAGE_MASK);
 
 	/* Does the address range wrap, or is the span zero-sized? */
-	BUG_ON(start + len <= start);
+	VM_WARN_ON_ONCE(start + len <= start);
 
 	mmap_read_lock(dst_mm);
 
@@ -1709,15 +1709,13 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
 	ssize_t moved = 0;
 
 	/* Sanitize the command parameters. */
-	if (WARN_ON_ONCE(src_start & ~PAGE_MASK) ||
-	    WARN_ON_ONCE(dst_start & ~PAGE_MASK) ||
-	    WARN_ON_ONCE(len & ~PAGE_MASK))
-		goto out;
+	VM_WARN_ON_ONCE(src_start & ~PAGE_MASK);
+	VM_WARN_ON_ONCE(dst_start & ~PAGE_MASK);
+	VM_WARN_ON_ONCE(len & ~PAGE_MASK);
 
 	/* Does the address range wrap, or is the span zero-sized? */
-	if (WARN_ON_ONCE(src_start + len <= src_start) ||
-	    WARN_ON_ONCE(dst_start + len <= dst_start))
-		goto out;
+	VM_WARN_ON_ONCE(src_start + len < src_start);
+	VM_WARN_ON_ONCE(dst_start + len < dst_start);
 
 	err = uffd_move_lock(mm, dst_start, src_start, &dst_vma, &src_vma);
 	if (err)
@@ -1867,9 +1865,9 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
 	up_read(&ctx->map_changing_lock);
 	uffd_move_unlock(dst_vma, src_vma);
 out:
-	VM_WARN_ON(moved < 0);
-	VM_WARN_ON(err > 0);
-	VM_WARN_ON(!moved && !err);
+	VM_WARN_ON_ONCE(moved < 0);
+	VM_WARN_ON_ONCE(err > 0);
+	VM_WARN_ON_ONCE(!moved && !err);
 	return moved ? moved : err;
 }
 
@@ -1956,10 +1954,10 @@ int userfaultfd_register_range(struct userfaultfd_ctx *ctx,
 	for_each_vma_range(vmi, vma, end) {
 		cond_resched();
 
-		BUG_ON(!vma_can_userfault(vma, vm_flags, wp_async));
-		BUG_ON(vma->vm_userfaultfd_ctx.ctx &&
-		       vma->vm_userfaultfd_ctx.ctx != ctx);
-		WARN_ON(!(vma->vm_flags & VM_MAYWRITE));
+		VM_WARN_ON_ONCE(!vma_can_userfault(vma, vm_flags, wp_async));
+		VM_WARN_ON_ONCE(vma->vm_userfaultfd_ctx.ctx &&
+				vma->vm_userfaultfd_ctx.ctx != ctx);
+		VM_WARN_ON_ONCE(!(vma->vm_flags & VM_MAYWRITE));
 
 		/*
 		 * Nothing to do: this vma is already registered into this
@@ -2035,8 +2033,8 @@ void userfaultfd_release_all(struct mm_struct *mm,
 	prev = NULL;
 	for_each_vma(vmi, vma) {
 		cond_resched();
-		BUG_ON(!!vma->vm_userfaultfd_ctx.ctx ^
-		       !!(vma->vm_flags & __VM_UFFD_FLAGS));
+		VM_WARN_ON_ONCE(!!vma->vm_userfaultfd_ctx.ctx ^
+				!!(vma->vm_flags & __VM_UFFD_FLAGS));
 		if (vma->vm_userfaultfd_ctx.ctx != ctx) {
 			prev = vma;
 			continue;

-- 
2.39.5