From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 9 Aug 2024 15:25:56 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 4/4] mm: split underutilized THPs
To: David Hildenbrand , akpm@linux-foundation.org, linux-mm@kvack.org
Cc: hannes@cmpxchg.org, riel@surriel.com, shakeel.butt@linux.dev,
 roman.gushchin@linux.dev, yuzhao@google.com, baohua@kernel.org,
 ryan.roberts@arm.com, rppt@kernel.org, willy@infradead.org,
 cerasuolodomenico@gmail.com, corbet@lwn.net, linux-kernel@vger.kernel.org,
 linux-doc@vger.kernel.org, kernel-team@meta.com
References: <20240807134732.3292797-1-usamaarif642@gmail.com>
 <20240807134732.3292797-5-usamaarif642@gmail.com>
 <5adb120e-5408-43a6-b418-33dc17c086f0@redhat.com>
 <3f6e1e0a-6132-4222-abb6-133224e11009@redhat.com>
Content-Language: en-US
From: Usama Arif 
In-Reply-To: <3f6e1e0a-6132-4222-abb6-133224e11009@redhat.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 09/08/2024 14:21, David Hildenbrand wrote:
> On 09.08.24 12:31, Usama Arif wrote:
>>
>>
>> On 08/08/2024 16:55, David Hildenbrand wrote:
>>> On 07.08.24 15:46, Usama Arif wrote:
>>>> This is an attempt to mitigate the issue of running out of memory when THP
>>>> is always enabled. During runtime, whenever a THP is faulted in
>>>> (__do_huge_pmd_anonymous_page) or collapsed by khugepaged
>>>> (collapse_huge_page), the THP is added to _deferred_list. Whenever memory
>>>> reclaim happens in Linux, the kernel runs the deferred_split
>>>> shrinker, which goes through the _deferred_list.
>>>>
>>>> If the folio was partially mapped, the shrinker attempts to split it.
>>>> A new boolean is added to be able to distinguish between partially
>>>> mapped folios and others in the deferred_list at split time in
>>>> deferred_split_scan. It's needed because __folio_remove_rmap decrements
>>>> the folio mapcount elements, hence it won't be possible to distinguish
>>>> between partially mapped folios and others in deferred_split_scan
>>>> without the boolean.
>>>
>>> Just so I get this right: Are you saying that we might now add fully mapped folios to the deferred split queue and that's what you want to distinguish?
>>
>> Yes
>>
>>>
>>> If that's the case, then could we use a bit in folio->_flags_1 instead?
>>
>> Yes, that's a good idea.
>> Will create the below flag for the next revision:
>>
>> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
>> index 5769fe6e4950..5825bd1cf6db 100644
>> --- a/include/linux/page-flags.h
>> +++ b/include/linux/page-flags.h
>> @@ -189,6 +189,11 @@ enum pageflags {
>>
>>  #define PAGEFLAGS_MASK         ((1UL << NR_PAGEFLAGS) - 1)
>>
>> +enum folioflags_1 {
>> +       /* The first 8 bits of folio->_flags_1 are used to keep track of folio order */
>> +       FOLIO_PARTIALLY_MAPPED = 8,     /* folio is partially mapped */
>> +}
>
> This might be what you want to achieve:
>
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index a0a29bd092f8..d4722ed60ef8 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -182,6 +182,7 @@ enum pageflags {
>         /* At least one page in this folio has the hwpoison flag set */
>         PG_has_hwpoisoned = PG_active,
>         PG_large_rmappable = PG_workingset, /* anon or file-backed */
> +       PG_partially_mapped, /* was identified to be partially mapped */
>  };
>
>  #define PAGEFLAGS_MASK         ((1UL << NR_PAGEFLAGS) - 1)
> @@ -861,8 +862,9 @@ static inline void ClearPageCompound(struct page *page)
>         ClearPageHead(page);
>  }
>  FOLIO_FLAG(large_rmappable, FOLIO_SECOND_PAGE)
> +FOLIO_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
>  #else
> -FOLIO_FLAG_FALSE(large_rmappable)
> +FOLIO_FLAG_FALSE(partially_mapped)
>  #endif
>
>  #define PG_head_mask ((1UL << PG_head))
>
> The downside is an atomic op to set/clear, but it should likely not really matter
> (initially, the flag will be clear, and we should only ever set it once when
> partially unmapping). If it hurts, we can reconsider.
>
> [...]

I was looking for where the bits for _flags_1 were specified! I just saw the start of enum pageflags, saw that the compound order isn't specified anywhere there, and ignored the end :)

Yes, this is what I wanted to do. Thanks.
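
[Editor's sketch, for illustration only.] Assuming the FOLIO_FLAG(partially_mapped, FOLIO_SECOND_PAGE) declaration from David's diff above, FOLIO_FLAG() generates folio_test_partially_mapped(), folio_set_partially_mapped() and folio_clear_partially_mapped() accessors. The helpers below, note_partial_unmap() and folio_should_try_split(), are hypothetical names invented here to show how the flag could replace the separate boolean discussed in the commit message; they are not functions from this series.

/*
 * Illustrative sketch, not the actual patch. Assumes the
 * FOLIO_FLAG(partially_mapped, FOLIO_SECOND_PAGE) accessors exist as in
 * the diff quoted above.
 */
#include <linux/page-flags.h>

/* Hypothetical: called where a partial unmap of a large folio is noticed. */
static void note_partial_unmap(struct folio *folio)
{
	/* Atomic bitop on folio->_flags_1; the flag is only ever set once. */
	if (!folio_test_partially_mapped(folio))
		folio_set_partially_mapped(folio);
}

/*
 * Hypothetical: in deferred_split_scan(), the flag would distinguish
 * partially mapped folios (split candidates) from fully mapped folios that
 * were queued at fault/collapse time only for the underused check.
 */
static bool folio_should_try_split(const struct folio *folio)
{
	return folio_test_partially_mapped(folio);
}

As noted above, the cost of this scheme is one atomic bitop per transition, which should be negligible because the flag starts clear and is set at most once per folio.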