From: Zhaoyang Huang
Date: Fri, 31 May 2024 17:11:45 +0800
Subject: Re: [PATCHv3] mm: fix incorrect vbq reference in purge_fragmented_block
To: Uladzislau Rezki
Cc: "zhaoyang.huang", Andrew Morton, Christoph Hellwig, Lorenzo Stoakes,
    Baoquan He, Thomas Gleixner, hailong liu, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, steve.kang@unisoc.com
References: <20240531030520.1615833-1-zhaoyang.huang@unisoc.com>

On Fri, May 31, 2024 at 4:05 PM Uladzislau Rezki wrote:
>
> On Fri, May 31, 2024 at 11:05:20AM +0800, zhaoyang.huang wrote:
> > From: Zhaoyang Huang
> >
> > The vmalloc area runs out in our ARM64 system during an erofs test
> > because vm_map_ram() fails[1]. Following the debug log, we found that
> > vm_map_ram()->vb_alloc() keeps allocating a new vb->va, each covering
> > a 4MB vmalloc area, because list_for_each_entry_rcu returns
> > immediately when vbq->free->next points back to vbq->free. In other
> > words, 65536 page faults after the list is broken exhaust the whole
> > vmalloc area. The cause is a vbq->free->next that points to vbq->free
> > itself, which prevents list_for_each_entry_rcu from iterating the
> > list and hides the bug.
> >
> > [1]
> > PID: 1  TASK: ffffff80802b4e00  CPU: 6  COMMAND: "init"
> >  #0 [ffffffc08006afe0] __switch_to at ffffffc08111d5cc
> >  #1 [ffffffc08006b040] __schedule at ffffffc08111dde0
> >  #2 [ffffffc08006b0a0] schedule at ffffffc08111e294
> >  #3 [ffffffc08006b0d0] schedule_preempt_disabled at ffffffc08111e3f0
> >  #4 [ffffffc08006b140] __mutex_lock at ffffffc08112068c
> >  #5 [ffffffc08006b180] __mutex_lock_slowpath at ffffffc08111f8f8
> >  #6 [ffffffc08006b1a0] mutex_lock at ffffffc08111f834
> >  #7 [ffffffc08006b1d0] reclaim_and_purge_vmap_areas at ffffffc0803ebc3c
> >  #8 [ffffffc08006b290] alloc_vmap_area at ffffffc0803e83fc
> >  #9 [ffffffc08006b300] vm_map_ram at ffffffc0803e78c0
> >
> > Fixes: fc1e0d980037 ("mm/vmalloc: prevent stale TLBs in fully utilized blocks")
> >
> > Suggested-by: Hailong.Liu
> > Signed-off-by: Zhaoyang Huang
> >
> Is the problem related to running out of vmalloc space _only_, or is it
> a problem with a broken list? From the commit message it is hard to
> follow the reason.
>
> Could you please post a full trace or panic?

Please refer to the scenario below for how vbq->free gets broken.

step 1:  new_vmap_block() is called on CPU0 and gets vb->va->addr = 0xffffffc000400000.
step 2:  vb is added to CPU1's vbq->vmap_blocks (xarray) via
         xa = addr_to_vb_xa(va->va_start); fc1e0d980037 ("mm/vmalloc:
         prevent stale TLBs in fully utilized blocks") introduced a
         per-CPU xarray mechanism in which the vb is added to the xarray
         of the CPU derived from the address, not the local CPU's.
step 3:  vb is added to CPU0's vbq->free by
         list_add_tail_rcu(&vb->free_list, &vbq->free);
step 4:  purge_fragmented_blocks gets CPU1's vbq and then finds the vb above.
step 5:  purge_fragmented_blocks deletes vb from CPU0's free list while
         taking CPU1's vbq->lock.
step 5': vb_alloc() on CPU0 can race with step 5 and break CPU0's vbq->free.

Since fc1e0d980037 solved the stale-TLB issue, we need to introduce a new
field in vmap_block to record the CPU, rather than reverting the way the
list is iterated (which would leave stale TLB entries).
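To make the CPU mismatch concrete, here is a heavily abridged sketch of the
two placements in new_vmap_block() (simplified from my reading of current
mm/vmalloc.c, not the literal code):

	/* new_vmap_block(), running on CPU0 in the scenario above */
	xa = addr_to_vb_xa(va->va_start);      /* address-based: CPU1's xarray */
	err = xa_insert(xa, addr_to_vb_idx(va->va_start), vb, gfp_mask);

	vbq = raw_cpu_ptr(&vmap_block_queue);  /* local queue: CPU0's free list */
	spin_lock(&vbq->lock);
	list_add_tail_rcu(&vb->free_list, &vbq->free);
	spin_unlock(&vbq->lock);

	/*
	 * A purge path that later looks the vb up through CPU1 (steps 4/5
	 * above) passes CPU1's vbq to purge_fragmented_block(), so
	 * list_del_rcu(&vb->free_list) runs under CPU1's vbq->lock while
	 * the block actually sits on CPU0's vbq->free, racing with
	 * vb_alloc() on CPU0, which modifies that list under CPU0's lock.
	 */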
> > ---
> > v2: introduce cpu in vmap_block to record the right CPU number
> > v3: use get_cpu/put_cpu to prevent scheduling between cores
> > ---
> > ---
> >  mm/vmalloc.c | 12 ++++++++----
> >  1 file changed, 8 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index 22aa63f4ef63..ecdb75d10949 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -2458,6 +2458,7 @@ struct vmap_block {
> >  	struct list_head free_list;
> >  	struct rcu_head rcu_head;
> >  	struct list_head purge;
> > +	unsigned int cpu;
> >  };
> >
> >  /* Queue of free and dirty vmap blocks, for allocation and flushing purposes */
> > @@ -2586,10 +2587,12 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
> >  		return ERR_PTR(err);
> >  	}
> >
> > +	vb->cpu = get_cpu();
> >  	vbq = raw_cpu_ptr(&vmap_block_queue);
> >  	spin_lock(&vbq->lock);
> >  	list_add_tail_rcu(&vb->free_list, &vbq->free);
> >  	spin_unlock(&vbq->lock);
> > +	put_cpu();
> >
> Why do you need get_cpu() here? Can you go with raw_smp_processor_id()
> and then access the per-cpu "vmap_block_queue"? get_cpu() disables
> preemption and then a spin-lock is taken within this critical section.
> At first glance, PREEMPT_RT is broken in this case.

get_cpu() here is to prevent the current task from being migrated to
another core before we get the per-CPU vmap_block_queue. Could you please
suggest a correct way of doing this?

> I am on a vacation, so responses may be delayed.
>
> --
> Uladzislau Rezki
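For reference, a minimal sketch of the direction suggested above (record the
CPU with raw_smp_processor_id() and pick the queue from that id, so that
vb->cpu always matches the list the block is put on even if the task
migrates right afterwards); untested, just to illustrate the idea:

	/* record the CPU id once; the queue is derived from it, not from
	 * wherever the task happens to run when the list is updated */
	vb->cpu = raw_smp_processor_id();
	vbq = per_cpu_ptr(&vmap_block_queue, vb->cpu);
	spin_lock(&vbq->lock);
	list_add_tail_rcu(&vb->free_list, &vbq->free);
	spin_unlock(&vbq->lock);

A migration right after raw_smp_processor_id() only means the block lands on
the recorded CPU's queue instead of the current CPU's, which should be
harmless as long as vb->cpu and the queue agree, and it avoids taking a
spinlock inside a preemption-disabled section, which is the PREEMPT_RT
concern raised above.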