linux-mm.kvack.org archive mirror
* [PATCH v3] mm/shmem: fix THP allocation and fallback loop
@ 2025-10-23  6:59 Kairui Song
  2025-10-23 16:13 ` David Hildenbrand
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Kairui Song @ 2025-10-23  6:59 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Baolin Wang, Hugh Dickins, Dev Jain,
	David Hildenbrand, Barry Song, Liam Howlett, Lorenzo Stoakes,
	Mariano Pache, Matthew Wilcox, Ryan Roberts, Zi Yan,
	linux-kernel, Kairui Song, stable

From: Kairui Song <kasong@tencent.com>

The order check and fallback loop updates the index value on every
iteration, so the index can end up wrongly aligned to the boundary of a
larger order even after the loop has fallen back to a smaller order.

This may result in inserting and returning a folio of the wrong index
and cause data corruption with some userspace workloads [1].
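
For illustration only, a minimal userspace sketch of the drift (the
round_down() macro mirrors the kernel one; the fault index, the order
list and the "only the largest order fails to allocate" outcome are
assumptions picked for the example). With a fault at index 5, rounding
the index in place while trying order 4 leaves it at 0, so the order-2
fallback folio lands at index 0; rounding into a temporary keeps the
fault index intact and yields index 4, which covers the faulting page:

  #include <stdio.h>

  /* Userspace stand-in for the kernel's round_down() macro. */
  #define round_down(x, y)  ((x) & ~((y) - 1UL))

  int main(void)
  {
          unsigned long fault_index = 5;          /* faulting page offset */
          int orders[] = { 4, 2 };                /* try 16 pages, then 4 */
          unsigned long index = fault_index;      /* buggy: rounded in place */
          unsigned long aligned_index = 0;        /* fixed: temporary only */

          for (int i = 0; i < 2; i++) {
                  unsigned long pages = 1UL << orders[i];

                  index = round_down(index, pages);
                  aligned_index = round_down(fault_index, pages);
                  if (i == 0)     /* pretend the order-4 allocation fails */
                          continue;
                  break;          /* the order-2 allocation "succeeds" */
          }
          /* Prints: buggy index: 0, fixed index: 4 */
          printf("buggy index: %lu, fixed index: %lu\n", index, aligned_index);
          return 0;
  }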

Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/linux-mm/CAMgjq7DqgAmj25nDUwwu1U2cSGSn8n4-Hqpgottedy0S6YYeUw@mail.gmail.com/ [1]
Fixes: e7a2ab7b3bb5d ("mm: shmem: add mTHP support for anonymous shmem")
Signed-off-by: Kairui Song <kasong@tencent.com>

---

Changes from V2:
- Introduce a temporary variable to improve code,
  no behavior change, generated code is identical.
- Link to V2: https://lore.kernel.org/linux-mm/20251022105719.18321-1-ryncsn@gmail.com/

Changes from V1:
- Remove unnecessary cleanup and simplify the commit message.
- Link to V1: https://lore.kernel.org/linux-mm/20251021190436.81682-1-ryncsn@gmail.com/

---
 mm/shmem.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index b50ce7dbc84a..e1dc2d8e939c 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1882,6 +1882,7 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	unsigned long suitable_orders = 0;
 	struct folio *folio = NULL;
+	pgoff_t aligned_index;
 	long pages;
 	int error, order;
 
@@ -1895,10 +1896,12 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
 		order = highest_order(suitable_orders);
 		while (suitable_orders) {
 			pages = 1UL << order;
-			index = round_down(index, pages);
-			folio = shmem_alloc_folio(gfp, order, info, index);
-			if (folio)
+			aligned_index = round_down(index, pages);
+			folio = shmem_alloc_folio(gfp, order, info, aligned_index);
+			if (folio) {
+				index = aligned_index;
 				goto allocated;
+			}
 
 			if (pages == HPAGE_PMD_NR)
 				count_vm_event(THP_FILE_FALLBACK);
-- 
2.51.0




* Re: [PATCH v3] mm/shmem: fix THP allocation and fallback loop
  2025-10-23  6:59 [PATCH v3] mm/shmem: fix THP allocation and fallback loop Kairui Song
@ 2025-10-23 16:13 ` David Hildenbrand
  2025-10-23 16:14   ` David Hildenbrand
  2025-10-23 17:48 ` Zi Yan
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 8+ messages in thread
From: David Hildenbrand @ 2025-10-23 16:13 UTC (permalink / raw)
  To: Kairui Song, linux-mm
  Cc: Andrew Morton, Baolin Wang, Hugh Dickins, Dev Jain, Barry Song,
	Liam Howlett, Lorenzo Stoakes, Mariano Pache, Matthew Wilcox,
	Ryan Roberts, Zi Yan, linux-kernel, Kairui Song, stable

On 23.10.25 08:59, Kairui Song wrote:
> From: Kairui Song <kasong@tencent.com>
> 
> The order check and fallback loop updates the index value on every
> iteration, so the index can end up wrongly aligned to the boundary of a
> larger order even after the loop has fallen back to a smaller order.
> 
> This may result in inserting and returning a folio of the wrong index
> and cause data corruption with some userspace workloads [1].
> 
> Cc: stable@vger.kernel.org
> Link: https://lore.kernel.org/linux-mm/CAMgjq7DqgAmj25nDUwwu1U2cSGSn8n4-Hqpgottedy0S6YYeUw@mail.gmail.com/ [1]
> Fixes: e7a2ab7b3bb5d ("mm: shmem: add mTHP support for anonymous shmem")
> Signed-off-by: Kairui Song <kasong@tencent.com>
> 
> ---
> 
> Changes from V2:
> - Introduce a temporary variable to improve code,
>    no behavior change, generated code is identical.
> - Link to V2: https://lore.kernel.org/linux-mm/20251022105719.18321-1-ryncsn@gmail.com/
> 
> Changes from V1:
> - Remove unnecessary cleanup and simplify the commit message.
> - Link to V1: https://lore.kernel.org/linux-mm/20251021190436.81682-1-ryncsn@gmail.com/
> 
> ---
>   mm/shmem.c | 9 ++++++---
>   1 file changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index b50ce7dbc84a..e1dc2d8e939c 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1882,6 +1882,7 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
>   	struct shmem_inode_info *info = SHMEM_I(inode);
>   	unsigned long suitable_orders = 0;
>   	struct folio *folio = NULL;
> +	pgoff_t aligned_index;
>   	long pages;
>   	int error, order;
>   
> @@ -1895,10 +1896,12 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
>   		order = highest_order(suitable_orders);
>   		while (suitable_orders) {
>   			pages = 1UL << order;
> -			index = round_down(index, pages);
> -			folio = shmem_alloc_folio(gfp, order, info, index);
> -			if (folio)
> +			aligned_index = round_down(index, pages);
> +			folio = shmem_alloc_folio(gfp, order, info, aligned_index);
> +			if (folio) {
> +				index = aligned_index;
>   				goto allocated;
> +			}

Was this found by code inspection, or was there a report about it?

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers

David / dhildenb




* Re: [PATCH v3] mm/shmem: fix THP allocation and fallback loop
  2025-10-23 16:13 ` David Hildenbrand
@ 2025-10-23 16:14   ` David Hildenbrand
  2025-10-23 17:42     ` Kairui Song
  0 siblings, 1 reply; 8+ messages in thread
From: David Hildenbrand @ 2025-10-23 16:14 UTC (permalink / raw)
  To: Kairui Song, linux-mm
  Cc: Andrew Morton, Baolin Wang, Hugh Dickins, Dev Jain, Barry Song,
	Liam Howlett, Lorenzo Stoakes, Mariano Pache, Matthew Wilcox,
	Ryan Roberts, Zi Yan, linux-kernel, Kairui Song, stable

On 23.10.25 18:13, David Hildenbrand wrote:
> On 23.10.25 08:59, Kairui Song wrote:
>> From: Kairui Song <kasong@tencent.com>
>>
>> The order check and fallback loop updates the index value on every
>> iteration, so the index can end up wrongly aligned to the boundary of a
>> larger order even after the loop has fallen back to a smaller order.
>>
>> This may result in inserting and returning a folio of the wrong index
>> and cause data corruption with some userspace workloads [1].
>>
>> Cc: stable@vger.kernel.org
>> Link: https://lore.kernel.org/linux-mm/CAMgjq7DqgAmj25nDUwwu1U2cSGSn8n4-Hqpgottedy0S6YYeUw@mail.gmail.com/ [1]
>> Fixes: e7a2ab7b3bb5d ("mm: shmem: add mTHP support for anonymous shmem")
>> Signed-off-by: Kairui Song <kasong@tencent.com>
>>
>> ---
>>
>> Changes from V2:
>> - Introduce a temporary variable to improve code,
>>     no behavior change, generated code is identical.
>> - Link to V2: https://lore.kernel.org/linux-mm/20251022105719.18321-1-ryncsn@gmail.com/
>>
>> Changes from V1:
>> - Remove unnecessary cleanup and simplify the commit message.
>> - Link to V1: https://lore.kernel.org/linux-mm/20251021190436.81682-1-ryncsn@gmail.com/
>>
>> ---
>>    mm/shmem.c | 9 ++++++---
>>    1 file changed, 6 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/shmem.c b/mm/shmem.c
>> index b50ce7dbc84a..e1dc2d8e939c 100644
>> --- a/mm/shmem.c
>> +++ b/mm/shmem.c
>> @@ -1882,6 +1882,7 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
>>    	struct shmem_inode_info *info = SHMEM_I(inode);
>>    	unsigned long suitable_orders = 0;
>>    	struct folio *folio = NULL;
>> +	pgoff_t aligned_index;
>>    	long pages;
>>    	int error, order;
>>    
>> @@ -1895,10 +1896,12 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
>>    		order = highest_order(suitable_orders);
>>    		while (suitable_orders) {
>>    			pages = 1UL << order;
>> -			index = round_down(index, pages);
>> -			folio = shmem_alloc_folio(gfp, order, info, index);
>> -			if (folio)
>> +			aligned_index = round_down(index, pages);
>> +			folio = shmem_alloc_folio(gfp, order, info, aligned_index);
>> +			if (folio) {
>> +				index = aligned_index;
>>    				goto allocated;
>> +			}
> 
> Was this found by code inspection, or was there a report about it?

Answering my own question, the "Link:" above should be

Closes: 
https://lore.kernel.org/linux-mm/CAMgjq7DqgAmj25nDUwwu1U2cSGSn8n4-Hqpgottedy0S6YYeUw@mail.gmail.com/


-- 
Cheers

David / dhildenb




* Re: [PATCH v3] mm/shmem: fix THP allocation and fallback loop
  2025-10-23 16:14   ` David Hildenbrand
@ 2025-10-23 17:42     ` Kairui Song
  2025-10-24  0:47       ` Barry Song
  0 siblings, 1 reply; 8+ messages in thread
From: Kairui Song @ 2025-10-23 17:42 UTC (permalink / raw)
  To: David Hildenbrand, Andrew Morton
  Cc: linux-mm, Baolin Wang, Hugh Dickins, Dev Jain, Barry Song,
	Liam Howlett, Lorenzo Stoakes, Mariano Pache, Matthew Wilcox,
	Ryan Roberts, Zi Yan, linux-kernel, stable

On Fri, Oct 24, 2025 at 12:14 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 23.10.25 18:13, David Hildenbrand wrote:
> > On 23.10.25 08:59, Kairui Song wrote:
> >> From: Kairui Song <kasong@tencent.com>
> >>
> >> The order check and fallback loop updates the index value on every
> >> iteration, so the index can end up wrongly aligned to the boundary of a
> >> larger order even after the loop has fallen back to a smaller order.
> >>
> >> This may result in inserting and returning a folio of the wrong index
> >> and cause data corruption with some userspace workloads [1].
> >>
> >> Cc: stable@vger.kernel.org
> >> Link: https://lore.kernel.org/linux-mm/CAMgjq7DqgAmj25nDUwwu1U2cSGSn8n4-Hqpgottedy0S6YYeUw@mail.gmail.com/ [1]
> >> Fixes: e7a2ab7b3bb5d ("mm: shmem: add mTHP support for anonymous shmem")
> >> Signed-off-by: Kairui Song <kasong@tencent.com>
> >>
> >> ---
> >>
> >> Changes from V2:
> >> - Introduce a temporary variable to improve code,
> >>     no behavior change, generated code is identical.
> >> - Link to V2: https://lore.kernel.org/linux-mm/20251022105719.18321-1-ryncsn@gmail.com/
> >>
> >> Changes from V1:
> >> - Remove unnecessary cleanup and simplify the commit message.
> >> - Link to V1: https://lore.kernel.org/linux-mm/20251021190436.81682-1-ryncsn@gmail.com/
> >>
> >> ---
> >>    mm/shmem.c | 9 ++++++---
> >>    1 file changed, 6 insertions(+), 3 deletions(-)
> >>
> >> diff --git a/mm/shmem.c b/mm/shmem.c
> >> index b50ce7dbc84a..e1dc2d8e939c 100644
> >> --- a/mm/shmem.c
> >> +++ b/mm/shmem.c
> >> @@ -1882,6 +1882,7 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
> >>      struct shmem_inode_info *info = SHMEM_I(inode);
> >>      unsigned long suitable_orders = 0;
> >>      struct folio *folio = NULL;
> >> +    pgoff_t aligned_index;
> >>      long pages;
> >>      int error, order;
> >>
> >> @@ -1895,10 +1896,12 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
> >>              order = highest_order(suitable_orders);
> >>              while (suitable_orders) {
> >>                      pages = 1UL << order;
> >> -                    index = round_down(index, pages);
> >> -                    folio = shmem_alloc_folio(gfp, order, info, index);
> >> -                    if (folio)
> >> +                    aligned_index = round_down(index, pages);
> >> +                    folio = shmem_alloc_folio(gfp, order, info, aligned_index);
> >> +                    if (folio) {
> >> +                            index = aligned_index;
> >>                              goto allocated;
> >> +                    }
> >
> > Was this found by code inspection, or was there a report about it?
>
> Answering my own question, the "Link:" above should be
>
> Closes:
> https://lore.kernel.org/linux-mm/CAMgjq7DqgAmj25nDUwwu1U2cSGSn8n4-Hqpgottedy0S6YYeUw@mail.gmail.com/
>

Thanks for the review. It was reported and fixed by me, so I didn't
include an extra Reported-by & Closes; I thought that would be kind of
redundant. Do we need that? Maybe Andrew can help add it :) ?



* Re: [PATCH v3] mm/shmem: fix THP allocation and fallback loop
  2025-10-23  6:59 [PATCH v3] mm/shmem: fix THP allocation and fallback loop Kairui Song
  2025-10-23 16:13 ` David Hildenbrand
@ 2025-10-23 17:48 ` Zi Yan
  2025-10-24  0:46 ` Baolin Wang
  2025-10-24 14:02 ` Lorenzo Stoakes
  3 siblings, 0 replies; 8+ messages in thread
From: Zi Yan @ 2025-10-23 17:48 UTC (permalink / raw)
  To: Kairui Song
  Cc: linux-mm, Andrew Morton, Baolin Wang, Hugh Dickins, Dev Jain,
	David Hildenbrand, Barry Song, Liam Howlett, Lorenzo Stoakes,
	Mariano Pache, Matthew Wilcox, Ryan Roberts, linux-kernel,
	Kairui Song, stable

On 23 Oct 2025, at 2:59, Kairui Song wrote:

> From: Kairui Song <kasong@tencent.com>
>
> The order check and fallback loop updates the index value on every
> iteration, so the index can end up wrongly aligned to the boundary of a
> larger order even after the loop has fallen back to a smaller order.
>
> This may result in inserting and returning a folio of the wrong index
> and cause data corruption with some userspace workloads [1].
>
> Cc: stable@vger.kernel.org
> Link: https://lore.kernel.org/linux-mm/CAMgjq7DqgAmj25nDUwwu1U2cSGSn8n4-Hqpgottedy0S6YYeUw@mail.gmail.com/ [1]
> Fixes: e7a2ab7b3bb5d ("mm: shmem: add mTHP support for anonymous shmem")
> Signed-off-by: Kairui Song <kasong@tencent.com>
>
> ---
>
> Changes from V2:
> - Introduce a temporary variable to improve code,
>   no behavior change, generated code is identical.
> - Link to V2: https://lore.kernel.org/linux-mm/20251022105719.18321-1-ryncsn@gmail.com/
>
> Changes from V1:
> - Remove unnecessary cleanup and simplify the commit message.
> - Link to V1: https://lore.kernel.org/linux-mm/20251021190436.81682-1-ryncsn@gmail.com/
>
> ---
>  mm/shmem.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
>

Acked-by: Zi Yan <ziy@nvidia.com>


--
Best Regards,
Yan, Zi



* Re: [PATCH v3] mm/shmem: fix THP allocation and fallback loop
  2025-10-23  6:59 [PATCH v3] mm/shmem: fix THP allocation and fallback loop Kairui Song
  2025-10-23 16:13 ` David Hildenbrand
  2025-10-23 17:48 ` Zi Yan
@ 2025-10-24  0:46 ` Baolin Wang
  2025-10-24 14:02 ` Lorenzo Stoakes
  3 siblings, 0 replies; 8+ messages in thread
From: Baolin Wang @ 2025-10-24  0:46 UTC (permalink / raw)
  To: Kairui Song, linux-mm
  Cc: Andrew Morton, Hugh Dickins, Dev Jain, David Hildenbrand,
	Barry Song, Liam Howlett, Lorenzo Stoakes, Mariano Pache,
	Matthew Wilcox, Ryan Roberts, Zi Yan, linux-kernel, Kairui Song,
	stable



On 2025/10/23 14:59, Kairui Song wrote:
> From: Kairui Song <kasong@tencent.com>
> 
> The order check and fallback loop updates the index value on every
> iteration, so the index can end up wrongly aligned to the boundary of a
> larger order even after the loop has fallen back to a smaller order.
> 
> This may result in inserting and returning a folio of the wrong index
> and cause data corruption with some userspace workloads [1].
> 
> Cc: stable@vger.kernel.org
> Link: https://lore.kernel.org/linux-mm/CAMgjq7DqgAmj25nDUwwu1U2cSGSn8n4-Hqpgottedy0S6YYeUw@mail.gmail.com/ [1]
> Fixes: e7a2ab7b3bb5d ("mm: shmem: add mTHP support for anonymous shmem")
> Signed-off-by: Kairui Song <kasong@tencent.com>
> 
> ---

LGTM. Thanks.
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>



* Re: [PATCH v3] mm/shmem: fix THP allocation and fallback loop
  2025-10-23 17:42     ` Kairui Song
@ 2025-10-24  0:47       ` Barry Song
  0 siblings, 0 replies; 8+ messages in thread
From: Barry Song @ 2025-10-24  0:47 UTC (permalink / raw)
  To: Kairui Song
  Cc: David Hildenbrand, Andrew Morton, linux-mm, Baolin Wang,
	Hugh Dickins, Dev Jain, Liam Howlett, Lorenzo Stoakes,
	Mariano Pache, Matthew Wilcox, Ryan Roberts, Zi Yan,
	linux-kernel, stable

> >
> > Answering my own question, the "Link:" above should be
> >
> > Closes:
> > https://lore.kernel.org/linux-mm/CAMgjq7DqgAmj25nDUwwu1U2cSGSn8n4-Hqpgottedy0S6YYeUw@mail.gmail.com/
> >
>
> Thanks for the review. It was reported and fixed by me, so I didn't
> include an extra Reported-by & Closes; I thought that would be kind of
> redundant. Do we need that? Maybe Andrew can help add it :) ?

I also think it’s better to use “Closes:”. In that case, we might need to
slightly adjust the commit log to remove "[1]" here.

" This may result in inserting and returning a folio of the wrong index
 and cause data corruption with some userspace workloads [1]."

With that,
Reviewed-by: Barry Song <baohua@kernel.org>

Thanks
Barry



* Re: [PATCH v3] mm/shmem: fix THP allocation and fallback loop
  2025-10-23  6:59 [PATCH v3] mm/shmem: fix THP allocation and fallback loop Kairui Song
                   ` (2 preceding siblings ...)
  2025-10-24  0:46 ` Baolin Wang
@ 2025-10-24 14:02 ` Lorenzo Stoakes
  3 siblings, 0 replies; 8+ messages in thread
From: Lorenzo Stoakes @ 2025-10-24 14:02 UTC (permalink / raw)
  To: Kairui Song
  Cc: linux-mm, Andrew Morton, Baolin Wang, Hugh Dickins, Dev Jain,
	David Hildenbrand, Barry Song, Liam Howlett, Mariano Pache,
	Matthew Wilcox, Ryan Roberts, Zi Yan, linux-kernel, Kairui Song,
	stable

On Thu, Oct 23, 2025 at 02:59:13PM +0800, Kairui Song wrote:
> From: Kairui Song <kasong@tencent.com>
>
> The order check and fallback loop updates the index value on every
> iteration, so the index can end up wrongly aligned to the boundary of a
> larger order even after the loop has fallen back to a smaller order.
>
> This may result in inserting and returning a folio of the wrong index
> and cause data corruption with some userspace workloads [1].
>
> Cc: stable@vger.kernel.org
> Link: https://lore.kernel.org/linux-mm/CAMgjq7DqgAmj25nDUwwu1U2cSGSn8n4-Hqpgottedy0S6YYeUw@mail.gmail.com/ [1]
> Fixes: e7a2ab7b3bb5d ("mm: shmem: add mTHP support for anonymous shmem")
> Signed-off-by: Kairui Song <kasong@tencent.com>

Yikes... LGTM so:

Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

See below for a small nit.

>
> ---
>
> Changes from V2:
> - Introduce a temporary variable to improve code,
>   no behavior change, generated code is identical.
> - Link to V2: https://lore.kernel.org/linux-mm/20251022105719.18321-1-ryncsn@gmail.com/
>
> Changes from V1:
> - Remove unnecessary cleanup and simplify the commit message.
> - Link to V1: https://lore.kernel.org/linux-mm/20251021190436.81682-1-ryncsn@gmail.com/
>
> ---
>  mm/shmem.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index b50ce7dbc84a..e1dc2d8e939c 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1882,6 +1882,7 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
>  	struct shmem_inode_info *info = SHMEM_I(inode);
>  	unsigned long suitable_orders = 0;
>  	struct folio *folio = NULL;
> +	pgoff_t aligned_index;

Nit, but can't we just declare this in the loop? That makes it even clearer
that we don't reuse the value.
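
i.e. something like this (untested, just rearranging the hunk above):

		while (suitable_orders) {
			pgoff_t aligned_index;

			pages = 1UL << order;
			aligned_index = round_down(index, pages);
			folio = shmem_alloc_folio(gfp, order, info, aligned_index);
			if (folio) {
				index = aligned_index;
				goto allocated;
			}
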

>  	long pages;
>  	int error, order;
>
> @@ -1895,10 +1896,12 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
>  		order = highest_order(suitable_orders);
>  		while (suitable_orders) {
>  			pages = 1UL << order;
> -			index = round_down(index, pages);
> -			folio = shmem_alloc_folio(gfp, order, info, index);
> -			if (folio)
> +			aligned_index = round_down(index, pages);
> +			folio = shmem_alloc_folio(gfp, order, info, aligned_index);
> +			if (folio) {
> +				index = aligned_index;
>  				goto allocated;
> +			}
>
>  			if (pages == HPAGE_PMD_NR)
>  				count_vm_event(THP_FILE_FALLBACK);
> --
> 2.51.0
>

