From: Usama Arif <usamaarif642@gmail.com>
Date: Thu, 5 Sep 2024 11:37:00 +0100
Subject: Re: [PATCH v4 1/2] mm: store zero pages to be swapped out in a bitmap
To: Barry Song <21cnbao@gmail.com>, Yosry Ahmed
Cc: akpm@linux-foundation.org, chengming.zhou@linux.dev, david@redhat.com,
 hannes@cmpxchg.org, hughd@google.com, kernel-team@meta.com,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, nphamcs@gmail.com,
 shakeel.butt@linux.dev, willy@infradead.org, ying.huang@intel.com,
 hanchuanhua@oppo.com
References: <20240612124750.2220726-2-usamaarif642@gmail.com>
 <20240904055522.2376-1-21cnbao@gmail.com>
On 05/09/2024 11:10, Barry Song wrote:
> On Thu, Sep 5, 2024 at 8:49 PM Barry Song <21cnbao@gmail.com> wrote:
>>
>> On Thu, Sep 5, 2024 at 7:55 PM Yosry Ahmed wrote:
>>>
>>> On Thu, Sep 5, 2024 at 12:03 AM Barry Song <21cnbao@gmail.com> wrote:
>>>>
>>>> On Thu, Sep 5, 2024 at 5:41 AM Yosry Ahmed wrote:
>>>>>
>>>>> [..]
>>>>>>> I understand the point of doing this to unblock the synchronous large
>>>>>>> folio swapin support work, but at some point we're gonna have to
>>>>>>> actually handle the cases where a large folio being swapped in is
>>>>>>> partially in the swap cache, zswap, the zeromap, etc.
>>>>>>>
>>>>>>> All these cases will need similar-ish handling, and I suspect we won't
>>>>>>> just skip swapping in large folios in all these cases.
>>>>>>
>>>>>> I agree that this is definitely the goal. `swap_read_folio()` should be a
>>>>>> dependable API that always returns reliable data, regardless of whether
>>>>>> `zeromap` or `zswap` is involved. Despite these issues, mTHP swap-in
>>>>>> shouldn't be held back. Significant efforts are underway to support
>>>>>> large folios in `zswap`, and progress is being made. Not to mention
>>>>>> we've already allowed `zeromap` to proceed, even though it doesn't
>>>>>> support large folios.
>>>>>>
>>>>>> It's genuinely unfair to let the lack of mTHP support in `zeromap` and
>>>>>> `zswap` hold swap-in hostage.
>>>>>
>>>>
>>>> Hi Yosry,
>>>>
>>>>> Well, two points here:
>>>>>
>>>>> 1. I did not say that we should block the synchronous mTHP swapin work
>>>>> for this :) I said the next item on the TODO list for mTHP swapin
>>>>> support should be handling these cases.
>>>>
>>>> Thanks for your clarification!
>>>>
>>>>> 2. I think two things are getting conflated here. Zswap needs to
>>>>> support mTHP swapin. Zeromap already supports mTHPs AFAICT. What is
>>>>> truly missing, and is outside the scope of zswap/zeromap, is being able
>>>>> to support hybrid mTHP swapin.
>>>>>
>>>>> When swapping in an mTHP, the swapped entries can be on disk, in the
>>>>> swapcache, in zswap, or in the zeromap. Even if all these things
>>>>> support mTHPs individually, we essentially need support to form an
>>>>> mTHP from swap entries in different backends. That's what I meant.
>>>>> Actually, if we have that, we may not really need mTHP swapin support
>>>>> in zswap, because we can just form the large folio in the swap layer
>>>>> from multiple zswap entries.
>>>>
>>>> After further consideration, I've actually started to disagree with the
>>>> idea of supporting hybrid swapin (forming an mTHP from swap entries in
>>>> different backends). My reasoning is as follows:
>>>
>>> I do not have any data about this, so you could very well be right
>>> here. Handling hybrid swapin could be simply falling back to the
>>> smallest order we can swap in from a single backend. We can at least
>>> start with this, and collect data about how many mTHP swapins fall back
>>> due to hybrid backends. This way we only take the complexity if needed.
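
To make that fallback idea concrete, here is a rough userspace sketch of
how such a decision could look: pick a swap-in order from the leading run
of entries that sit in the same backend. Everything here (the backend
ids, leading_same_backend(), swapin_order()) is made up for illustration;
none of it is kernel API.

#include <stdio.h>

enum backend { SWAP_DISK, SWAP_ZSWAP, SWAP_ZEROMAP, SWAP_CACHE };

/* How many leading entries share the backend of the first entry? */
static int leading_same_backend(const enum backend *ents, int nr)
{
        int i;

        for (i = 1; i < nr; i++)
                if (ents[i] != ents[0])
                        break;
        return i;
}

/* Round the usable run length down to a power-of-two folio order. */
static int swapin_order(int run)
{
        int order = 0;

        while ((2 << order) <= run)
                order++;
        return order;
}

int main(void)
{
        /* 3 zeromap entries followed by 5 on-disk entries */
        enum backend ents[8] = {
                SWAP_ZEROMAP, SWAP_ZEROMAP, SWAP_ZEROMAP, SWAP_DISK,
                SWAP_DISK, SWAP_DISK, SWAP_DISK, SWAP_DISK,
        };
        int run = leading_same_backend(ents, 8);

        /* prints "run=3 -> order-1 swapin" */
        printf("run=%d -> order-%d swapin\n", run, swapin_order(run));
        return 0;
}

A counter incremented whenever run < nr would give exactly the fallback
statistics mentioned above.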
>>>
>>> I did imagine though that it's possible for two virtually contiguous
>>> folios to be swapped out to contiguous swap entries and end up in
>>> different media (e.g. if only one of them is zero-filled). I am not
>>> sure how rare that would be in practice.
>>>
>>>> 1. The scenario where an mTHP is partially zeromap, partially zswap,
>>>> etc., would be an extremely rare case, as long as we're swapping out the
>>>> mTHP as a whole and all the modules handle it accordingly. It's highly
>>>> unlikely to form this mix of zeromap, zswap, and swapcache unless the
>>>> contiguous virtual addresses of a VMA happen to get small folios with
>>>> aligned and contiguous swap slots. Even then, they would need to be
>>>> partially zeromap and partially non-zeromap, zswap, etc.
>>>
>>> As I mentioned, we can start simple and collect data for this. If it's
>>> rare and we don't need to handle it, that's good.
>>>
>>>> As you mentioned, zeromap handles an mTHP as a whole during swap-out,
>>>> marking all subpages of the entire mTHP as zeromap rather than just a
>>>> subset of them.
>>>>
>>>> And swap-in can also entirely map a large folio found in the swapcache,
>>>> based on our previous patchset, which is already in mainline:
>>>> "mm: swap: entirely map large folios found in swapcache"
>>>> https://lore.kernel.org/all/20240529082824.150954-1-21cnbao@gmail.com/
>>>>
>>>> It seems the only thing we're missing is zswap support for mTHP.
>>>
>>> It is still possible for two virtually contiguous folios to be swapped
>>> out to contiguous swap entries. It is also possible that a large folio
>>> is swapped out as a whole, then only a part of it is swapped in later
>>> due to memory pressure. If that part is later reclaimed again and gets
>>> added to the swapcache, we can run into the hybrid swapin situation.
>>> There may be other scenarios as well, I did not think this through.
>>>
>>>> 2. Implementing hybrid swap-in would be extremely tricky and could
>>>> disrupt several software layers. I can share some pseudo code below:
>>>
>>> Yeah, it definitely would be complex, so we need proper justification
>>> for it.
>>>
>>>> swap_read_folio()
>>>> {
>>>>         if (zeromap_full)
>>>>                 folio_read_from_zeromap()
>>>>         else if (zswap_map_full)
>>>>                 folio_read_from_zswap()
>>>>         else {
>>>>                 folio_read_from_swapfile()
>>>>                 if (zeromap_partial)
>>>>                         /* fill zero for partially zeromap subpages */
>>>>                         folio_read_from_zeromap_fixup()
>>>>                 if (zswap_partial)
>>>>                         /* zswap_load for partially zswap-mapped subpages */
>>>>                         folio_read_from_zswap_fixup()
>>>>
>>>>                 folio_mark_uptodate()
>>>>                 folio_unlock()
>>>>         }
>>>> }
>>>>
>>>> We'd also need to modify folio_read_from_swapfile() to skip
>>>> folio_mark_uptodate() and folio_unlock() after completing the BIO. This
>>>> approach seems to entirely disrupt the software layers.
>>>>
>>>> This could also lead to unnecessary IO operations for subpages that
>>>> require fixup. Since such cases are quite rare, I believe the added
>>>> complexity isn't worth it.
>>>>
>>>> My point is that we should simply check that all PTEs have consistent
>>>> zeromap, zswap, and swapcache statuses before proceeding, and otherwise
>>>> fall back to the next lower order if needed. This approach improves
>>>> performance and avoids complex corner cases.
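
For the zeromap part, that consistency check really is just two bitmap
searches, as the swap_zeromap_entries_check() sketch further down in this
thread shows. Here is a minimal userspace model of the same logic, with
plain loops standing in for the kernel's
find_next_bit()/find_next_zero_bit(); only the enum names come from the
thread, the rest is illustrative.

#include <stdbool.h>
#include <stdio.h>

enum zeromap_stat { SWAP_ZEROMAP_NON, SWAP_ZEROMAP_FULL, SWAP_ZEROMAP_PARTIAL };

/* Find the first index in [start, end) whose bit equals val. */
static unsigned long next_bit(const bool *map, unsigned long end,
                              unsigned long start, bool val)
{
        while (start < end && map[start] != val)
                start++;
        return start;
}

static enum zeromap_stat zeromap_check(const bool *map,
                                       unsigned long start, int nr)
{
        unsigned long end = start + nr;

        if (next_bit(map, end, start, true) == end)
                return SWAP_ZEROMAP_NON;        /* no entry is zeromap */
        if (next_bit(map, end, start, false) == end)
                return SWAP_ZEROMAP_FULL;       /* every entry is zeromap */
        return SWAP_ZEROMAP_PARTIAL;            /* mixed: fall back */
}

int main(void)
{
        bool map[8] = { true, true, false, false, true, true, true, true };

        printf("%d\n", zeromap_check(map, 0, 4));       /* 2: PARTIAL */
        printf("%d\n", zeromap_check(map, 4, 4));       /* 1: FULL */
        printf("%d\n", zeromap_check(map, 2, 2));       /* 0: NON */
        return 0;
}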
>>>
>>> Agree that we should start with that, although we should probably
>>> fall back to the largest order we can swap in from a single backend,
>>> rather than the next lower order.
>>>
>>>> So once zswap mTHP is there, I would also expect an API similar to
>>>> swap_zeromap_entries_check(), for example:
>>>> zswap_entries_check(entry, nr), which can return whether we have
>>>> full, none, or partial zswap, to replace the existing
>>>> zswap_never_enabled().
>>>
>>> I think a better API would be similar to what Usama had. Basically,
>>> take in (entry, nr) and return how much of it is in zswap starting at
>>> entry, so that we can decide the swapin order.
>>>
>>> Maybe we can adjust your proposed swap_zeromap_entries_check() as well
>>> to do that? Basically, return the number of swap entries in the zeromap
>>> starting at 'entry'. If 'entry' itself is not in the zeromap, we return
>>> 0 naturally. That would be a small adjustment/fix over what Usama had,
>>> but implementing it with bitmap operations like you did would be
>>> better.
>>
>> I assume you mean the below:
>>
>> /*
>>  * Return the number of contiguous zeromap entries starting at entry
>>  */
>> static inline unsigned int swap_zeromap_entries_count(swp_entry_t entry, int nr)
>> {
>>         struct swap_info_struct *sis = swp_swap_info(entry);
>>         unsigned long start = swp_offset(entry);
>>         unsigned long end = start + nr;
>>         unsigned long idx;
>>
>>         idx = find_next_bit(sis->zeromap, end, start);
>>         if (idx != start)
>>                 return 0;
>>
>>         return find_next_zero_bit(sis->zeromap, end, start) - idx;
>> }
>>
>> If yes, I really like this idea.
>>
>> It seems much better than using an enum, which would require adding a
>> new data structure :-) Additionally, returning the number allows callers
>> to fall back to the largest possible order, rather than trying the next
>> lower orders sequentially.
>
> No, returning 0 after checking only the first entry would still
> reintroduce the current bug, where the start entry is zeromap but other
> entries might not be. We need another value to indicate whether the
> entries are consistent if we want to avoid the enum:
>
> /*
>  * Return the number of contiguous zeromap entries starting at entry;
>  * if all entries have a consistent zeromap, *consistent will be true,
>  * otherwise false.
>  */
> static inline unsigned int swap_zeromap_entries_count(swp_entry_t entry,
>                 int nr, bool *consistent)
> {
>         struct swap_info_struct *sis = swp_swap_info(entry);
>         unsigned long start = swp_offset(entry);
>         unsigned long end = start + nr;
>         unsigned long s_idx, c_idx;
>
>         s_idx = find_next_bit(sis->zeromap, end, start);

In all of the implementations you sent, you are using
find_next_bit(.., end, start), but I believe it should be
find_next_bit(.., nr, start)?

TBH, I liked the enum implementation you had in
https://lore.kernel.org/all/20240905002926.1055-1-21cnbao@gmail.com/
It's the easiest to review and understand, and the least likely to
introduce any bugs. But that could be a personal preference.

The likelihood of having a contiguous zeromap run that is *shorter* than
nr is very low, right? If so, we could go with the enum implementation?

>         if (s_idx == end) {
>                 *consistent = true;
>                 return 0;
>         }
>
>         c_idx = find_next_zero_bit(sis->zeromap, end, start);
>         if (c_idx == end) {
>                 *consistent = true;
>                 return nr;
>         }
>
>         *consistent = false;
>         if (s_idx == start)
>                 return 0;
>         return c_idx - s_idx;
> }
>
> I can actually switch the places of the "consistent" flag and the
> returned number if that looks better.
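
For reference, here is how I read the intended semantics of the function
above, modeled in userspace so it can actually be run (loop-based
stand-ins replace the kernel bitmap helpers; this is my sketch, not the
exact code quoted above). Note that, as quoted, the final "if (s_idx ==
start) return 0;" looks inverted relative to the comment; the model below
follows the comment instead: return the length of the leading zeromap
run, and report whether the whole range is consistent.

#include <stdbool.h>
#include <stdio.h>

/* Find the first index in [start, end) whose bit equals val. */
static unsigned long next_bit(const bool *map, unsigned long end,
                              unsigned long start, bool val)
{
        while (start < end && map[start] != val)
                start++;
        return start;
}

static unsigned int count_run(const bool *map, unsigned long start,
                              int nr, bool *consistent)
{
        unsigned long end = start + nr;
        unsigned long s = next_bit(map, end, start, true);
        unsigned long c = next_bit(map, end, start, false);

        /* consistent iff the range is all zeromap or all non-zeromap */
        *consistent = (s == end || c == end);
        if (s != start)
                return 0;               /* entry itself is not zeromap */
        return c - start;               /* leading zeromap run length */
}

int main(void)
{
        bool map[8] = { true, true, true, false, false, false, false, false };
        bool ok;
        unsigned int n;

        n = count_run(map, 0, 4, &ok);  /* n=3, ok=false (partial) */
        printf("n=%u consistent=%d\n", n, ok);
        n = count_run(map, 0, 2, &ok);  /* n=2, ok=true (all zeromap) */
        printf("n=%u consistent=%d\n", n, ok);
        n = count_run(map, 4, 4, &ok);  /* n=0, ok=true (none) */
        printf("n=%u consistent=%d\n", n, ok);
        return 0;
}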
> >> >> Hi Usama, >> what is your take on this? >> >>> >>>> >>>> Though I am not sure how cheap zswap can implement it, >>>> swap_zeromap_entries_check() >>>> could be two simple bit operations: >>>> >>>> +static inline zeromap_stat_t swap_zeromap_entries_check(swp_entry_t >>>> entry, int nr) >>>> +{ >>>> + struct swap_info_struct *sis = swp_swap_info(entry); >>>> + unsigned long start = swp_offset(entry); >>>> + unsigned long end = start + nr; >>>> + >>>> + if (find_next_bit(sis->zeromap, end, start) == end) >>>> + return SWAP_ZEROMAP_NON; >>>> + if (find_next_zero_bit(sis->zeromap, end, start) == end) >>>> + return SWAP_ZEROMAP_FULL; >>>> + >>>> + return SWAP_ZEROMAP_PARTIAL; >>>> +} >>>> >>>> 3. swapcache is different from zeromap and zswap. Swapcache indicates >>>> that the memory >>>> is still available and should be re-mapped rather than allocating a >>>> new folio. Our previous >>>> patchset has implemented a full re-map of an mTHP in do_swap_page() as mentioned >>>> in 1. >>>> >>>> For the same reason as point 1, partial swapcache is a rare edge case. >>>> Not re-mapping it >>>> and instead allocating a new folio would add significant complexity. >>>> >>>>>> >>>>>> Nonetheless, `zeromap` and `zswap` are distinct cases. With `zeromap`, we >>>>>> permit almost all mTHP swap-ins, except for those rare situations where >>>>>> small folios that were swapped out happen to have contiguous and aligned >>>>>> swap slots. >>>>>> >>>>>> swapcache is another quite different story, since our user scenarios begin from >>>>>> the simplest sync io on mobile phones, we don't quite care about swapcache. >>>>> >>>>> Right. The reason I bring this up is as I mentioned above, there is a >>>>> common problem of forming large folios from different sources, which >>>>> includes the swap cache. The fact that synchronous swapin does not use >>>>> the swapcache was a happy coincidence for you, as you can add support >>>>> mTHP swapins without handling this case yet ;) >>>> >>>> As I mentioned above, I'd really rather filter out those corner cases >>>> than support >>>> them, not just for the current situation to unlock swap-in series :-) >>> >>> If they are indeed corner cases, then I definitely agree. >> >> Thanks >> Barry