* Re: [PATCH] pre3 corrections!
[not found] <Pine.LNX.3.91.980317105548.385B-100000@mirkwood.dummy.home>
@ 1998-03-17 19:09 ` Linus Torvalds
1998-03-17 20:20 ` Zlatko Calusic
` (2 more replies)
0 siblings, 3 replies; 4+ messages in thread
From: Linus Torvalds @ 1998-03-17 19:09 UTC (permalink / raw)
To: Rik van Riel; +Cc: Stephen C. Tweedie, linux-mm
[ Cc'd to Stephen and the mm-list because I'm explaining what I really
think should happen and why I've rejected some patches: sorry for the
lack of explanations but I was fairly busy last week.. ]
On Tue, 17 Mar 1998, Rik van Riel wrote:
>
> apparently friday 13th has struck you very badly,
> since some _important_ pieces of kswapd got lost...
> (and page_alloc.c had an obvious bug).
page_alloc.c had an obvious bug, but no, the changes did not "get lost".
I decided that it was time to stop with the band-aid patches, and just
wait for the problem to be fixed _correctly_, which I didn't think this
patch does:
> --- vmscan.c.pre3 Tue Mar 17 10:47:44 1998
> +++ vmscan.c Tue Mar 17 10:52:05 1998
> @@ -568,8 +568,13 @@
> while (tries--) {
> int gfp_mask;
>
> - if (free_memory_available())
> - break;
> + if (BUFFER_MEM < buffer_mem.max * num_physpages / 100) {
> + if (free_memory_available() && nr_free_pages +
> + atomic_read(&nr_async_pages) > freepages.high)
> + break;
> + if( nr_free_pages > freepages.high * 4)
> + break;
> + }
> gfp_mask = __GFP_IO;
> try_to_free_page(gfp_mask);
> /*
Basically, I consider any patch that adds another "nr_free_pages"
occurrence to be buggy.
Why? Because I have 512 MB (yes, that's half a gig) of memory, and I don't
think it is valid to compare the number of free pages against anything,
because they have so little relevance when they may not be the basic
reason for why an allocation failed. I may have 8MB worth of free memory
(aka "a lot"), but if all those twothousand pages are single pages (or
even dual pages) then NFS won't work correctly because NFS needs to
allocate about 9000 bytes for incoming full-sized packets.
That is why I want to have the "free_memory_available()" approach of
checking that there are free large-page areas still, and continuing to
swap out IN THE BACKGROUND when this isn't true.
What I _think_ the patch should look like is roughly something like
do {
if (free_memory_available())
break;
gfp_mask = __GFP_IO;
if (!try_to_free_page(gfp_mask))
break;
run_task_queue(&tq_disk); /* or whatever */
} while (--tries);
AND then "swap_tick()" should also be changed to not look at nr_free_pages
at all, but only at whether we can easily allocate new memory (ie
"free_memory_available()")
The plan would be that
- kswapd should wake up every so often until we have large areas
(swap_tick())
- kswapd would never /ever/ run for too long ("tries"), even when low on
memory. So the solution would be to make "tries" have a low enough
value that kswapd never hogs the machine, and "swap_tick()" would make
sure that while we don't hog the machine we do keep running
occasionally until everything is dandy again..
- "nr_free_pages" should go away: we end up just spending time
maintaining it, yet it doesn't really ever tell us enough about the
actul state of the machine due to fragmentation.
I could do this myself, but I also know that my particular machine usage
isn't interesting enough to guarantee that I get the tweaking anywhere
close to reasonable, which is why I've been so happy that others (ie you)
have been looking into this - you probably have more real-world usage than
I do.
At the same time I _really_ hate the "arbitrary" kinds of tests that you
cannot actually explain _why_ they are there, and the only explanation for
them is that they hide some basic problem. This is why I want to have
"free_memory_available()" be important: because that function very clearly
states whether we can allocate things atomically or not. It's not a case
of "somebody feels that this is the right thing", but a case of "when the
free_memory_available() function returns true, we can _prove_ that some
specific condition is fine (ie the ability to allocate memory)".
(Btw, I think my original "free_memory_available()" function that only
tested the highest memory order was probably a better one: the only reason
it was downgraded was due to the interactive issues due to swap_tick() and
the pageout loop disagreeing about when things should be done).
One other thing that should probably be more aggressively looked into: the
buffer cache. It used to be that the buffer cache was of supreme
importance for performance, and we needed to keep the buffer cache big
because it was also our source of shared pages. That is no longer true.
These days we should penalize the buffer cache _heavily_: _especially_
dirty data pages that have been written out should generally be thrown
away as quickly as possible instead of leaving them in memory. Not
immediately, because re-writing parts of some file is fairly common, but
they should be aged much more aggressively (but we should not age the
metadata pages all that quickly - only the pages we have used for
write-outs).
I've too often seen my machine with 200MB worth of ex-dirty buffers (ie
they are clean now and have been synched, but they still lay around just
in case) when I've written out a large file, and I just know that that is
just all wasted memory.
Again, this is something that needs to be tested on more "normal" machines
than my particular machine is - I doubt my use is even close to what most
people tend to do..
Linus
^ permalink raw reply [flat|nested] 4+ messages in thread
* Re: [PATCH] pre3 corrections!
1998-03-17 19:09 ` [PATCH] pre3 corrections! Linus Torvalds
@ 1998-03-17 20:20 ` Zlatko Calusic
1998-03-17 20:20 ` Rik van Riel
1998-03-18 21:16 ` Stephen C. Tweedie
2 siblings, 0 replies; 4+ messages in thread
From: Zlatko Calusic @ 1998-03-17 20:20 UTC (permalink / raw)
To: Linus Torvalds; +Cc: Rik van Riel, Stephen C. Tweedie, linux-mm
Linus Torvalds <torvalds@transmeta.com> writes:
[snip]
> I decided that it was time to stop with the band-aid patches, and just
> wait for the problem to be fixed _correctly_, which I didn't think this
> patch does:
>
> > --- vmscan.c.pre3 Tue Mar 17 10:47:44 1998
> > +++ vmscan.c Tue Mar 17 10:52:05 1998
> > @@ -568,8 +568,13 @@
> > while (tries--) {
> > int gfp_mask;
> >
> > - if (free_memory_available())
> > - break;
> > + if (BUFFER_MEM < buffer_mem.max * num_physpages / 100) {
> > + if (free_memory_available() && nr_free_pages +
> > + atomic_read(&nr_async_pages) > freepages.high)
> > + break;
> > + if( nr_free_pages > freepages.high * 4)
> > + break;
> > + }
> > gfp_mask = __GFP_IO;
> > try_to_free_page(gfp_mask);
> > /*
>
> Basically, I consider any patch that adds another "nr_free_pages"
> occurrence to be buggy.
>
> Why? Because I have 512 MB (yes, that's half a gig) of memory, and I don't
> think it is valid to compare the number of free pages against anything,
> because they have so little relevance when they may not be the basic
> reason for why an allocation failed. I may have 8MB worth of free memory
> (aka "a lot"), but if all those twothousand pages are single pages (or
> even dual pages) then NFS won't work correctly because NFS needs to
> allocate about 9000 bytes for incoming full-sized packets.
>
> That is why I want to have the "free_memory_available()" approach of
> checking that there are free large-page areas still, and continuing to
> swap out IN THE BACKGROUND when this isn't true.
^^^^^^^^^^^^^^^^^
Agreed!!!
>
> What I _think_ the patch should look like is roughly something like
>
> do {
> if (free_memory_available())
> break;
> gfp_mask = __GFP_IO;
> if (!try_to_free_page(gfp_mask))
> break;
> run_task_queue(&tq_disk); /* or whatever */
> } while (--tries);
>
> AND then "swap_tick()" should also be changed to not look at nr_free_pages
> at all, but only at whether we can easily allocate new memory (ie
> "free_memory_available()")
>
> The plan would be that
> - kswapd should wake up every so often until we have large areas
> (swap_tick())
> - kswapd would never /ever/ run for too long ("tries"), even when low on
> memory. So the solution would be to make "tries" have a low enough
> value that kswapd never hogs the machine, and "swap_tick()" would make
> sure that while we don't hog the machine we do keep running
> occasionally until everything is dandy again..
I could not agree more. Few latest kernel revisions were too much
aggresive swapping things out. Of course, we need a way to assure we
have big enough chunks free, but I believe that's tough to accomplish
without deep thinking.
What I don't like is excessive swapout, under which I lose control
over machine for a few seconds, in .89 processes got killed randomly,
sound stops playing (Sound: DMA (output) timed out - IRQ/DRQ config
error?), and eventually I have machine with lots of free (unused!) RAM
and tens of MB's worth of data swapped out that is now slowly paging
in. I wouldn't call that a "performance improvement". :(
Benjamin's patch (rev-ptes) could be a big win in kernel
functionality, if you decide it's good enough to be included in the
mainstream. Reverse page tables is the only thing, _I_ can think of,
that could help in freeing of big areas of memory.
Blindly throwing pages out leads to heavy swapouts. If my machine
swaps out 50MB to get one 128KB free chunk, that's overkill.
> - "nr_free_pages" should go away: we end up just spending time
> maintaining it, yet it doesn't really ever tell us enough about the
> actul state of the machine due to fragmentation.
>
> I could do this myself, but I also know that my particular machine usage
> isn't interesting enough to guarantee that I get the tweaking anywhere
> close to reasonable, which is why I've been so happy that others (ie you)
> have been looking into this - you probably have more real-world usage than
> I do.
>
> At the same time I _really_ hate the "arbitrary" kinds of tests that you
> cannot actually explain _why_ they are there, and the only explanation for
> them is that they hide some basic problem. This is why I want to have
> "free_memory_available()" be important: because that function very clearly
> states whether we can allocate things atomically or not. It's not a case
> of "somebody feels that this is the right thing", but a case of "when the
> free_memory_available() function returns true, we can _prove_ that some
> specific condition is fine (ie the ability to allocate memory)".
Again agreed. I like simple things more than anything and it looks
like kswapd and others are becaming progressively less and less
readable. That leads to bugs that are harder to track.
>
> (Btw, I think my original "free_memory_available()" function that only
> tested the highest memory order was probably a better one: the only reason
> it was downgraded was due to the interactive issues due to swap_tick() and
> the pageout loop disagreeing about when things should be done).
>
> One other thing that should probably be more aggressively looked into: the
> buffer cache. It used to be that the buffer cache was of supreme
> importance for performance, and we needed to keep the buffer cache big
> because it was also our source of shared pages. That is no longer true.
On this one I don't agree. My opinion is that buffer cache gets shrunk
slightly too fast. I don't like unused pages around, too, but it looks
to me that buffer cache pages dissapear five or ten times faster than
pages from page cache. Maybe that is intended behaviour, but nobody
actually profiled caches to see what is really happening.
At one occasion, I did put some code in kernel to calculate hit rate
of the caches, but didn't know how to interpret values I got. :)
But some benchmarking and profiling could be very helpful, definitely.
>
> These days we should penalize the buffer cache _heavily_: _especially_
> dirty data pages that have been written out should generally be thrown
> away as quickly as possible instead of leaving them in memory. Not
> immediately, because re-writing parts of some file is fairly common, but
> they should be aged much more aggressively (but we should not age the
> metadata pages all that quickly - only the pages we have used for
> write-outs).
I believe all your wishes about buffer cache are already fulfilled in
recent kernels. At least on the machines with "normal" amount of
RAM. :)
>
> I've too often seen my machine with 200MB worth of ex-dirty buffers (ie
> they are clean now and have been synched, but they still lay around just
> in case) when I've written out a large file, and I just know that that is
> just all wasted memory.
>
> Again, this is something that needs to be tested on more "normal" machines
> than my particular machine is - I doubt my use is even close to what most
> people tend to do..
>
> Linus
>
>
Hmm... godzilla type of machine. :)
Everything I said applies to 64MB (and 32MB) machines that are, I
presume, "slightly" more common these days. :)
Regards,
--
Posted by Zlatko Calusic E-mail: <Zlatko.Calusic@CARNet.hr>
---------------------------------------------------------------------
Sign here please:_______________________Thanks.
^ permalink raw reply [flat|nested] 4+ messages in thread
* Re: [PATCH] pre3 corrections!
1998-03-17 19:09 ` [PATCH] pre3 corrections! Linus Torvalds
1998-03-17 20:20 ` Zlatko Calusic
@ 1998-03-17 20:20 ` Rik van Riel
1998-03-18 21:16 ` Stephen C. Tweedie
2 siblings, 0 replies; 4+ messages in thread
From: Rik van Riel @ 1998-03-17 20:20 UTC (permalink / raw)
To: Linus Torvalds; +Cc: Stephen C. Tweedie, linux-mm
On Tue, 17 Mar 1998, Linus Torvalds wrote:
> On Tue, 17 Mar 1998, Rik van Riel wrote:
> >
> > apparently friday 13th has struck you very badly,
> > since some _important_ pieces of kswapd got lost...
> > (and page_alloc.c had an obvious bug).
>
> page_alloc.c had an obvious bug, but no, the changes did not "get lost".
>
> I decided that it was time to stop with the band-aid patches, and just
> wait for the problem to be fixed _correctly_, which I didn't think this
> patch does:
OK, you're right about that. But your free_memory_available()
function is just too easily overwhelmed on normal systems.
Also, on your system, you should have noticed an extra 200
context switches a second, since swap_tick() wakes up kswapd
even when free_memory_available() is satisfied ;-)
!!! The BUFFER_MEM test however needs to stay. It is the way
we implement the maximum quota for buffermem+page_cache_size !!!
> Basically, I consider any patch that adds another "nr_free_pages"
> occurrence to be buggy.
>
> Why? Because I have 512 MB (yes, that's half a gig) of memory, and I don't
> think it is valid to compare the number of free pages against anything,
> because they have so little relevance when they may not be the basic
> reason for why an allocation failed. I may have 8MB worth of free memory
> (aka "a lot"), but if all those twothousand pages are single pages (or
> even dual pages) then NFS won't work correctly because NFS needs to
> allocate about 9000 bytes for incoming full-sized packets.
1. You can tune the amount of pages you want free
2. On small-memory machines, kswapd swaps out up to half of total
memory when free_memory_available() can't be satisfied. We should
place some upper limit to the amount of memory that can be
freed at once.
3. When you only look at free_memory_available() and a huge amount
of memory is allocated at once, the large area's all 'dissappear'
at once, and the system enters swapping madness (has happened to
me and other people too!).
4. Large-mem machines usually have more allocations/second too...
For me, point 3 is the most important one.
> That is why I want to have the "free_memory_available()" approach of
> checking that there are free large-page areas still, and continuing to
> swap out IN THE BACKGROUND when this isn't true.
>
> AND then "swap_tick()" should also be changed to not look at nr_free_pages
> at all, but only at whether we can easily allocate new memory (ie
> "free_memory_available()")
We'll also want a lot of 'extra' free memory around, as it takes
a _long_ time to 'create' new large free areas...
I almost swapped my machine to death when the free-mem limitation
wasn't built into kswapd... And with less memory it's even worse!
> What I _think_ the patch should look like is roughly something like
>
> do {
> if (free_memory_available())
> break;
> gfp_mask = __GFP_IO;
> if (!try_to_free_page(gfp_mask))
> break;
??? Why break when one try_to_free_page() fails ???
> run_task_queue(&tq_disk); /* or whatever */
> } while (--tries);
>
> The plan would be that
> - kswapd should wake up every so often until we have large areas
> (swap_tick())
> - kswapd would never /ever/ run for too long ("tries"), even when low on
> memory. So the solution would be to make "tries" have a low enough
> value that kswapd never hogs the machine, and "swap_tick()" would make
> sure that while we don't hog the machine we do keep running
> occasionally until everything is dandy again..
This was a _serious_ bug in the 1.2 days. Then kswapd
couldn't keep up with the amount of memory allocations,
since there was an upper limit on it's memory freeing rate.
> - "nr_free_pages" should go away: we end up just spending time
> maintaining it, yet it doesn't really ever tell us enough about the
> actul state of the machine due to fragmentation.
I agree that freepages.[min,low,high] could go away, but I'd
like to keep nr_free_pages so we can free on a somewhat more
intelligent rate.
It would be nice to keep enough free pages around for the
allocations that happen until kswapd is woken up again.
> At the same time I _really_ hate the "arbitrary" kinds of tests that you
> cannot actually explain _why_ they are there, and the only explanation for
> them is that they hide some basic problem. This is why I want to have
> "free_memory_available()" be important: because that function very clearly
> states whether we can allocate things atomically or not. It's not a case
> of "somebody feels that this is the right thing", but a case of "when the
> free_memory_available() function returns true, we can _prove_ that some
> specific condition is fine (ie the ability to allocate memory)".
>
> (Btw, I think my original "free_memory_available()" function that only
> tested the highest memory order was probably a better one: the only reason
> it was downgraded was due to the interactive issues due to swap_tick() and
> the pageout loop disagreeing about when things should be done).
Your free_memory_available() test is almost as arbitrary
as the nr_free_pages tests...
Your free_memory_available() test tests whether the system
can do one or two really big allocations.
The nr_free_pages test tests whether the system can do loads
of small allocations.
Since the systems I see usually do a lot of small allocations,
the second test seems quite useful to me...
I can't grasp why it should be eradicated and never used again...
Please enlighten me... :-)
> These days we should penalize the buffer cache _heavily_: _especially_
It is penalized a lot more heavily than the page cache, because:
- the page cache memory gets properly aged
- the buffer cache doesn't grow fast when buffer+page > max, while
the page cache just grows on until kswapd trims 'em both
> dirty data pages that have been written out should generally be thrown
> away as quickly as possible instead of leaving them in memory. Not
> immediately, because re-writing parts of some file is fairly common, but
> they should be aged much more aggressively (but we should not age the
> metadata pages all that quickly - only the pages we have used for
> write-outs).
>
> I've too often seen my machine with 200MB worth of ex-dirty buffers (ie
> they are clean now and have been synched, but they still lay around just
> in case) when I've written out a large file, and I just know that that is
> just all wasted memory.
When it's wasted memory, the system will free it very soon
because:
- the pages aren't touched
- there's a quota for buffer+pagecache now
(there's been a lot of activity lately, so you might have
missed out on some of the subtle stuff. If you haven't, sorry
for the newbie-like speech)
> Again, this is something that needs to be tested on more "normal" machines
> than my particular machine is - I doubt my use is even close to what most
> people tend to do..
It is nothing like the way we mere mortals use our computers.
For example, your original free_memory_available() test might
have worked perfectly when you tested it, but it nearly
killed my box :-)
(and it _did_ kill the boxes of people with only 8 or 12 MB)
Rik.
+-------------------------------------------+--------------------------+
| Linux: - LinuxHQ MM-patches page | Scouting webmaster |
| - kswapd ask-him & complain-to guy | Vries cubscout leader |
| http://www.fys.ruu.nl/~riel/ | <H.H.vanRiel@fys.ruu.nl> |
+-------------------------------------------+--------------------------+
^ permalink raw reply [flat|nested] 4+ messages in thread
* Re: [PATCH] pre3 corrections!
1998-03-17 19:09 ` [PATCH] pre3 corrections! Linus Torvalds
1998-03-17 20:20 ` Zlatko Calusic
1998-03-17 20:20 ` Rik van Riel
@ 1998-03-18 21:16 ` Stephen C. Tweedie
2 siblings, 0 replies; 4+ messages in thread
From: Stephen C. Tweedie @ 1998-03-18 21:16 UTC (permalink / raw)
To: Linus Torvalds; +Cc: Rik van Riel, Stephen C. Tweedie, linux-mm
Hi,
On Tue, 17 Mar 1998 11:09:52 -0800 (PST), Linus Torvalds
<torvalds@transmeta.com> said:
> I decided that it was time to stop with the band-aid patches, and
> just wait for the problem to be fixed _correctly_
Indeed --- we've had a series of vm tuning/balancing fixups (remember
1.2.4/1.2.5 and 2.0.30?) which have improved a few cases but have made
for catastrophically bad worst-case behaviour. If we are upgrading
the mechanism, we do finish that first and _then_ concentrate on
policy tuning.
> , which I didn't think this patch does:
> Basically, I consider any patch that adds another "nr_free_pages"
> occurrence to be buggy.
> Why? Because I have 512 MB (yes, that's half a gig) of memory, and I don't
> think it is valid to compare the number of free pages against anything,
> because they have so little relevance when they may not be the basic
> reason for why an allocation failed.
Yes.
> That is why I want to have the "free_memory_available()" approach of
> checking that there are free large-page areas still, and continuing to
> swap out IN THE BACKGROUND when this isn't true.
Absolutely, and this has the second advantage of clearly making the
distinction between the mechanism code and the decision-making policy
code.
> What I _think_ the patch should look like is roughly something like
> do {
> if (free_memory_available())
> break;
> gfp_mask = __GFP_IO;
> if (!try_to_free_page(gfp_mask))
> break;
> run_task_queue(&tq_disk); /* or whatever */
> } while (--tries);
Actually, I'm trying to eliminate some of the GFP_IO stuff in the
future. Something I did a while ago was to separate out the page
scanner from the swap IO code, with separate kswapd and kswiod
threads. That allows us to continue scanning for clean pages to free
even if the IO of dirty pages to disk is blocked. A possible
extension would be to keep a reserved pool of buffer_heads for swap IO
(in much the same way that we have a static struct request pool), to
guarantee that the swapout code can never be deadlocked on memory
failure.
> (Btw, I think my original "free_memory_available()" function that only
> tested the highest memory order was probably a better one: the only reason
> it was downgraded was due to the interactive issues due to swap_tick() and
> the pageout loop disagreeing about when things should be done).
I'd actually like to see _all_ memory scanning reclamation done from
within kswapd. It makes it much more obvious where and when things
are being done. There's no reason why we can't simply wakeup kswapd
and block on a free-memory waitq (or perhaps semaphore, but that's
more messy) when we are want to wait for free memory. If free_page()
wakes up that waitq, then we have all the synchronisation we need, but
with several advantages. In particular, we can minimise context
switching and mmscan restart overhead by keeping in kswapd until we
have freed a small number of pages (still minimising the per-run CPU
usage, of course). I'm still open to persuasion either way on this
one, since at least some cases (such as a single task reclaiming page
cache) may run more slowly due to the extra context switch necessary,
but if kswapd is doing its job properly anyway then there's no point
in letting *everybody* dabble in page reclamation.
> One other thing that should probably be more aggressively looked
> into: the buffer cache. It used to be that the buffer cache was of
> supreme importance for performance, and we needed to keep the buffer
> cache big because it was also our source of shared pages. That is no
> longer true.
> These days we should penalize the buffer cache _heavily_: _especially_
> dirty data pages that have been written out should generally be thrown
> away as quickly as possible instead of leaving them in memory.
True, and on my TODO list. I really want to make writes do a
write-through from the page cache, and use alias buffer_heads to mark
the dirty data. This requires minimal change to the existing code,
but eliminates our extra copy, keeps the written data in the page
cache where it can be found again more quickly, and makes it simple to
keep parity between read and write data in the page cache.
Having said that, Linus, there is a _big_ problem with penalising the
buffer cache today. If there is no space to grow the buffer cache,
then getting a new buffer tries to reuse an old one rather than obtain
a new free page for the cache. If the use of the buffer cache is
readonly, then it is easy to find an unused buffer, so we can end up
making heavy use of the buffer cache but only having a handful of
buffers there. I've spotted quite busy machines doing lots of
directory tree access but with only a dozen or so pages of buffer
cache, and you can hear the results as the disk is thrashing
unnecessarily. This is also a major limiting factor in swap file
performance, since we end up thrashing the swap file's inode
indirection blocks for the same reason.
So, we do need to be careful to avoid arbitrarily penalising _all_ use
of the buffer cache. Writes are the obvious target for elimination
from memory, but other buffers may be much more valuable to us: I
think all metadata buffers ought to get more, not less, protection
than they have right now, since this is generally random-access data
which is more expensive to reread than sequential file data.
> Not immediately, because re-writing parts of some file is fairly
> common, but they should be aged much more aggressively (but we
> should not age the metadata pages all that quickly - only the pages
> we have used for write-outs).
Doing writes through the page cache, combined with a variant on Ben's
page queues, should allow us to identify such pages quite easily.
Cheers,
Stephen.
^ permalink raw reply [flat|nested] 4+ messages in thread
end of thread, other threads:[~1998-03-19 0:07 UTC | newest]
Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <Pine.LNX.3.91.980317105548.385B-100000@mirkwood.dummy.home>
1998-03-17 19:09 ` [PATCH] pre3 corrections! Linus Torvalds
1998-03-17 20:20 ` Zlatko Calusic
1998-03-17 20:20 ` Rik van Riel
1998-03-18 21:16 ` Stephen C. Tweedie
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox