On Sat, 31 May 2014 15:44:52 -0700 Daniel Phillips wrote:

> On 05/29/2014 04:43 PM, Greg KH wrote:
> > ...you know how this all works, we don't have to have meetings in
> > order to do design decisions that are "large".
>
> Perhaps there is something wrong with that approach. Certainly in
> regards to how to bridge the gap between what we now have for logical
> volume support, and what we should have, or what BSD has, that approach
> is demonstrably a perennial failure. After all these years, we still
> have dm and md as separate islands, no usable snapshotting block device,
> and roughly zero interaction between filesystems and volume managers.

dm-raid.c is a bridge between those islands.
Does dm-thin.c not provide usable snapshots?  I admit I haven't looked
in detail.

> The larger issue would be, why is there no design process in Linux for
> large design issues? Maybe that is the core topic that is really missing.

What sort of "design process" do you imagine?  Something like IETF?
While it certainly has had some successes, I don't see that its process
is conducive to quality.

The design rule for Linux is simple: show me the code.  If it passes
review, it goes in.  If it doesn't, you should know why and can try
again.

You can certainly start with a design proposal if you like, and you
might get valuable feedback from that.  The more concrete your design,
the easier it is to respond to, so the quality of the responses you get
will be higher.

But there is no way to escape the fact that, for a "big design" which
affects multiple subsystems, you will probably need to develop several
prototypes before you find something that works well.  Be ready to
discard and try again.

Like Greg said - it is "evolutionary", and evolution isn't just
"survival of the fittest", it is also "death to the weak".

NeilBrown