From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Matthew Wilcox (Oracle)"
To: Minchan Kim , Sergey Senozhatsky
Cc: Alex Shi , linux-mm@kvack.org
Subject: [PATCH v8 21/21] mm/zsmalloc: update comments for page->zpdesc changes
Date: Thu, 5 Dec 2024 17:49:58 +0000
Message-ID: <20241205175000.3187069-22-willy@infradead.org>
X-Mailer: git-send-email 2.47.0
In-Reply-To: <20241205175000.3187069-1-willy@infradead.org>
References: <20241205175000.3187069-1-willy@infradead.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Sender: owner-linux-mm@kvack.org
Precedence: bulk

From: Alex Shi

After the page to zpdesc conversion, a few comments and one function
name still refer to page rather than zpdesc. Update the comments and
rename create_page_chain() to create_zpdesc_chain().
Signed-off-by: Alex Shi
---
 mm/zsmalloc.c | 61 ++++++++++++++++++++++++++-------------------------
 1 file changed, 31 insertions(+), 30 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index c0e7c055847a..1f5ff0fdeb42 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -15,20 +15,19 @@
 /*
  * Following is how we use various fields and flags of underlying
- * struct page(s) to form a zspage.
+ * struct zpdesc(page) to form a zspage.
  *
- * Usage of struct page fields:
- *	page->private: points to zspage
- *	page->index: links together all component pages of a zspage
+ * Usage of struct zpdesc fields:
+ *	zpdesc->zspage: points to zspage
+ *	zpdesc->next: links together all component zpdescs of a zspage
  *		For the huge page, this is always 0, so we use this field
  *		to store handle.
- *	page->page_type: PGTY_zsmalloc, lower 24 bits locate the first object
- *		offset in a subpage of a zspage
- *
- * Usage of struct page flags:
- *	PG_private: identifies the first component page
- *	PG_owner_priv_1: identifies the huge component page
+ *	zpdesc->first_obj_offset: PGTY_zsmalloc, lower 24 bits locate the first
+ *		object offset in a subpage of a zspage
  *
+ * Usage of struct zpdesc(page) flags:
+ *	PG_private: identifies the first component zpdesc
+ *	PG_lock: lock all component zpdescs for a zspage free, serialize with
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@@ -194,7 +193,10 @@ struct size_class {
 	 */
 	int size;
 	int objs_per_zspage;
-	/* Number of PAGE_SIZE sized pages to combine to form a 'zspage' */
+	/*
+	 * Number of PAGE_SIZE sized zpdescs/pages to combine to
+	 * form a 'zspage'
+	 */
 	int pages_per_zspage;
 
 	unsigned int index;
@@ -908,7 +910,7 @@ static void free_zspage(struct zs_pool *pool, struct size_class *class,
 
 	/*
 	 * Since zs_free couldn't be sleepable, this function cannot call
-	 * lock_page. The page locks trylock_zspage got will be released
+	 * lock_page. The zpdesc locks trylock_zspage got will be released
 	 * by __free_zspage.
 	 */
 	if (!trylock_zspage(zspage)) {
@@ -965,7 +967,7 @@ static void init_zspage(struct size_class *class, struct zspage *zspage)
 	set_freeobj(zspage, 0);
 }
 
-static void create_page_chain(struct size_class *class, struct zspage *zspage,
+static void create_zpdesc_chain(struct size_class *class, struct zspage *zspage,
 				struct zpdesc *zpdescs[])
 {
 	int i;
@@ -974,9 +976,9 @@ static void create_page_chain(struct size_class *class, struct zspage *zspage,
 	int nr_zpdescs = class->pages_per_zspage;
 
 	/*
-	 * Allocate individual pages and link them together as:
-	 * 1. all pages are linked together using zpdesc->next
-	 * 2. each sub-page point to zspage using zpdesc->zspage
+	 * Allocate individual zpdescs and link them together as:
+	 * 1. all zpdescs are linked together using zpdesc->next
+	 * 2. each sub-zpdesc point to zspage using zpdesc->zspage
 	 *
 	 * we set PG_private to identify the first zpdesc (i.e. no other zpdesc
 	 * has this flag set).
@@ -1034,7 +1036,7 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 		zpdescs[i] = zpdesc;
 	}
 
-	create_page_chain(class, zspage, zpdescs);
+	create_zpdesc_chain(class, zspage, zpdescs);
 	init_zspage(class, zspage);
 	zspage->pool = pool;
 	zspage->class = class->index;
@@ -1351,7 +1353,7 @@ static unsigned long obj_malloc(struct zs_pool *pool,
 		/* record handle in the header of allocated chunk */
 		link->handle = handle | OBJ_ALLOCATED_TAG;
 	else
-		/* record handle to page->index */
+		/* record handle to zpdesc->handle */
 		zspage->first_zpdesc->handle = handle | OBJ_ALLOCATED_TAG;
 
 	kunmap_local(vaddr);
@@ -1441,7 +1443,6 @@ static void obj_free(int class_size, unsigned long obj)
 	unsigned int f_objidx;
 	void *vaddr;
 
-
 	obj_to_location(obj, &f_zpdesc, &f_objidx);
 	f_offset = offset_in_page(class_size * f_objidx);
 	zspage = get_zspage(f_zpdesc);
@@ -1684,19 +1685,19 @@ static int putback_zspage(struct size_class *class, struct zspage *zspage)
 #ifdef CONFIG_COMPACTION
 /*
  * To prevent zspage destroy during migration, zspage freeing should
- * hold locks of all pages in the zspage.
+ * hold locks of all component zpdesc in the zspage.
  */
 static void lock_zspage(struct zspage *zspage)
 {
 	struct zpdesc *curr_zpdesc, *zpdesc;
 
 	/*
-	 * Pages we haven't locked yet can be migrated off the list while we're
+	 * Zpdesc we haven't locked yet can be migrated off the list while we're
 	 * trying to lock them, so we need to be careful and only attempt to
-	 * lock each page under migrate_read_lock(). Otherwise, the page we lock
-	 * may no longer belong to the zspage. This means that we may wait for
-	 * the wrong page to unlock, so we must take a reference to the page
-	 * prior to waiting for it to unlock outside migrate_read_lock().
+	 * lock each zpdesc under migrate_read_lock(). Otherwise, the zpdesc we
+	 * lock may no longer belong to the zspage. This means that we may wait
+	 * for the wrong zpdesc to unlock, so we must take a reference to the
+	 * zpdesc prior to waiting for it to unlock outside migrate_read_lock().
 	 */
 	while (1) {
 		migrate_read_lock(zspage);
@@ -1771,7 +1772,7 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,
 		idx++;
 	} while ((zpdesc = get_next_zpdesc(zpdesc)) != NULL);
 
-	create_page_chain(class, zspage, zpdescs);
+	create_zpdesc_chain(class, zspage, zpdescs);
 	first_obj_offset = get_first_obj_offset(oldzpdesc);
 	set_first_obj_offset(newzpdesc, first_obj_offset);
 	if (unlikely(ZsHugePage(zspage)))
@@ -1782,8 +1783,8 @@
 static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
 {
 	/*
-	 * Page is locked so zspage couldn't be destroyed. For detail, look at
-	 * lock_zspage in free_zspage.
+	 * Page/zpdesc is locked so zspage couldn't be destroyed. For detail,
+	 * look at lock_zspage in free_zspage.
 	 */
 	VM_BUG_ON_PAGE(PageIsolated(page), page);
@@ -1810,7 +1811,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
 	/* We're committed, tell the world that this is a Zsmalloc page.
 	 */
 	__zpdesc_set_zsmalloc(newzpdesc);
 
-	/* The page is locked, so this pointer must remain valid */
+	/* The zpdesc/page is locked, so this pointer must remain valid */
 	zspage = get_zspage(zpdesc);
 	pool = zspage->pool;
@@ -1883,7 +1884,7 @@ static const struct movable_operations zsmalloc_mops = {
 };
 
 /*
- * Caller should hold page_lock of all pages in the zspage
+ * Caller should hold zpdesc locks of all in the zspage
 * In here, we cannot use zspage meta data.
 */
static void async_free_zspage(struct work_struct *work)
-- 
2.45.2