From mboxrd@z Thu Jan 1 00:00:00 1970
From: Rong Tao <rtoax@foxmail.com>
To: cl@linux.com
Cc: sdf@google.com, yhs@fb.com, Rong Tao, Pekka Enberg, David Rientjes,
 Joonsoo Kim, Andrew Morton, Vlastimil Babka, Roman Gushchin,
 Hyeonggon Yoo <42.hyeyoo@gmail.com>,
 linux-mm@kvack.org (open list:SLAB ALLOCATOR),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH] mm: Functions used internally should not be put into slub_def.h
Date: Mon, 16 Jan 2023 16:50:05 +0800
X-OQ-MSGID: <20230116085005.24972-1-rtoax@foxmail.com>
X-Mailer: git-send-email 2.39.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Rong Tao

commit 40f3bf0cb04c ("mm: Convert struct page to struct slab in functions
used by other subsystems") introduced 'slab_address()' and 'struct slab'
into slab_def.h (CONFIG_SLAB) and slub_def.h (CONFIG_SLUB). When one of
these header files is included from a kernel module or BPF program,
'slab_address()' and 'struct slab' are not defined, resulting in
incomplete-type and undefined-symbol errors (see the bcc slabratetop.py
error [0]).

Moving the inline functions that reference these internals, namely
nearest_obj(), obj_to_index(), and objs_per_slab(), into the internal
header file mm/slab.h fixes this problem.
[0] https://github.com/iovisor/bcc/issues/4438

Signed-off-by: Rong Tao
---
 include/linux/slab_def.h | 33 --------------------
 include/linux/slub_def.h | 32 -------------------
 mm/slab.h                | 66 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 66 insertions(+), 65 deletions(-)

diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index 5834bad8ad78..5658b5fddf9b 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -88,37 +88,4 @@ struct kmem_cache {
 	struct kmem_cache_node *node[MAX_NUMNODES];
 };
 
-static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
-				void *x)
-{
-	void *object = x - (x - slab->s_mem) % cache->size;
-	void *last_object = slab->s_mem + (cache->num - 1) * cache->size;
-
-	if (unlikely(object > last_object))
-		return last_object;
-	else
-		return object;
-}
-
-/*
- * We want to avoid an expensive divide : (offset / cache->size)
- * Using the fact that size is a constant for a particular cache,
- * we can replace (offset / cache->size) by
- * reciprocal_divide(offset, cache->reciprocal_buffer_size)
- */
-static inline unsigned int obj_to_index(const struct kmem_cache *cache,
-					const struct slab *slab, void *obj)
-{
-	u32 offset = (obj - slab->s_mem);
-	return reciprocal_divide(offset, cache->reciprocal_buffer_size);
-}
-
-static inline int objs_per_slab(const struct kmem_cache *cache,
-				const struct slab *slab)
-{
-	if (is_kfence_address(slab_address(slab)))
-		return 1;
-	return cache->num;
-}
-
 #endif	/* _LINUX_SLAB_DEF_H */
diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index aa0ee1678d29..660fd6b2a748 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -163,36 +163,4 @@ static inline void sysfs_slab_release(struct kmem_cache *s)
 
 void *fixup_red_left(struct kmem_cache *s, void *p);
 
-static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
-				void *x) {
-	void *object = x - (x - slab_address(slab)) % cache->size;
-	void *last_object = slab_address(slab) +
-		(slab->objects - 1) * cache->size;
-	void *result = (unlikely(object > last_object)) ? last_object : object;
-
-	result = fixup_red_left(cache, result);
-	return result;
-}
-
-/* Determine object index from a given position */
-static inline unsigned int __obj_to_index(const struct kmem_cache *cache,
-					  void *addr, void *obj)
-{
-	return reciprocal_divide(kasan_reset_tag(obj) - addr,
-				 cache->reciprocal_size);
-}
-
-static inline unsigned int obj_to_index(const struct kmem_cache *cache,
-					const struct slab *slab, void *obj)
-{
-	if (is_kfence_address(obj))
-		return 0;
-	return __obj_to_index(cache, slab_address(slab), obj);
-}
-
-static inline int objs_per_slab(const struct kmem_cache *cache,
-				const struct slab *slab)
-{
-	return slab->objects;
-}
 #endif /* _LINUX_SLUB_DEF_H */
diff --git a/mm/slab.h b/mm/slab.h
index 7cc432969945..38350a0efa91 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -227,10 +227,76 @@ struct kmem_cache {
 
 #ifdef CONFIG_SLAB
 #include
+
+static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
+				void *x)
+{
+	void *object = x - (x - slab->s_mem) % cache->size;
+	void *last_object = slab->s_mem + (cache->num - 1) * cache->size;
+
+	if (unlikely(object > last_object))
+		return last_object;
+	else
+		return object;
+}
+
+/*
+ * We want to avoid an expensive divide : (offset / cache->size)
+ * Using the fact that size is a constant for a particular cache,
+ * we can replace (offset / cache->size) by
+ * reciprocal_divide(offset, cache->reciprocal_buffer_size)
+ */
+static inline unsigned int obj_to_index(const struct kmem_cache *cache,
+					const struct slab *slab, void *obj)
+{
+	u32 offset = (obj - slab->s_mem);
+	return reciprocal_divide(offset, cache->reciprocal_buffer_size);
+}
+
+static inline int objs_per_slab(const struct kmem_cache *cache,
+				const struct slab *slab)
+{
+	if (is_kfence_address(slab_address(slab)))
+		return 1;
+	return cache->num;
+}
 #endif
 
 #ifdef CONFIG_SLUB
 #include
+
+static inline void *nearest_obj(struct kmem_cache *cache, const struct slab *slab,
+				void *x) {
+	void *object = x - (x - slab_address(slab)) % cache->size;
+	void *last_object = slab_address(slab) +
+		(slab->objects - 1) * cache->size;
+	void *result = (unlikely(object > last_object)) ? last_object : object;
+
+	result = fixup_red_left(cache, result);
+	return result;
+}
+
+/* Determine object index from a given position */
+static inline unsigned int __obj_to_index(const struct kmem_cache *cache,
+					  void *addr, void *obj)
+{
+	return reciprocal_divide(kasan_reset_tag(obj) - addr,
+				 cache->reciprocal_size);
+}
+
+static inline unsigned int obj_to_index(const struct kmem_cache *cache,
+					const struct slab *slab, void *obj)
+{
+	if (is_kfence_address(obj))
+		return 0;
+	return __obj_to_index(cache, slab_address(slab), obj);
+}
+
+static inline int objs_per_slab(const struct kmem_cache *cache,
+				const struct slab *slab)
+{
+	return slab->objects;
+}
 #endif
 
 #include
-- 
2.39.0