Why not just create all the folders?

Hey Guys,

Product looks good, but why not just have all the folders created and have the files syncable?
This way the OS will automatically sort the .cloud files below the actual folders, so it's not all a mess.

As it stands, .cloud and .cloudf files are all mixed together, since Windows sees them all as just files.

All you would need is for the directory tree to be synced and created. It would take up very little space on the user's computer and give them a better experience.
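
Just to illustrate what I mean (a minimal sketch, not a claim about how odrive works internally): creating the bare folder tree is cheap, since empty directories take essentially no disk space. The `remote_folders` listing below is a hypothetical input standing in for whatever folder list the client already has.

```python
import os

def create_folder_tree(remote_folders, local_root):
    """Create an empty local directory for every remote folder path.

    remote_folders is assumed to be an iterable of relative folder paths
    (hypothetical; whatever listing the sync client already has). Real
    directories cost almost nothing on disk, and Explorer will sort them
    above the flat .cloud placeholder files.
    """
    for rel_path in remote_folders:
        os.makedirs(os.path.join(local_root, rel_path), exist_ok=True)

# Example with a hypothetical listing:
# create_folder_tree(["Docs", "Docs/2016", "Photos"], r"C:\odrive\OneDrive")
```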


Hi,
Take a look at this thread for a possible workaround for folder sorting. It’s not perfect, but it will help when trying to differentiate folders from files.

Jake,

That sounds like a good idea, as long as it’s optional. We’ve synced large folder trees containing files that are not essential for day-to-day operations; nevertheless, some are needed at times. A right-click option to recursively sync a folder tree, creating .cloud placeholder files, would save the tedious click-sync-wait-repeat cycles now needed.

I would argue against making this the default. Folks with shares containing hundreds of thousands of folders should not need to waste disk and index space on them, maintain search databases for huge path trees, or tie up network bandwidth unnecessarily.

For clarification: you can recursively sync everything, expanding all folders and showing all files as placeholders, using a right-click action. As @Ethan stated, there are good reasons why we don’t want to do this by default, but users have the option to do it if they want to.


1. Right-click->Sync on the desired folder.

2. Move the slider to “Nothing” and check the “Include subfolders” checkbox.

“Save and apply to new files and folders” is a Premium feature that will set a rule on the selected folder, enforcing these settings on any remotely added content as well.


I tried this and it works. However, it takes a very long time to sync a small dir tree. I’m guessing you are querying every file/dir individually, syncing the dirs and skipping the files. Whatever you’re doing, it’s crazy inefficient. My small OneDrive dir tree is still running after 5+ minutes, and odrive is consuming a steady 20% CPU.

I think the OneDrive API supports searching with a filter to return only folders. Then you could get all the folder names without iterating over everything.
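
For what it’s worth, here is a rough sketch of that kind of folder-only walk against the Microsoft Graph `children` endpoint for OneDrive. It’s only an illustration of the idea, not odrive’s code, and the `requests` usage and access-token handling are assumptions for the example.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def list_folders(token, item_id=None, prefix=""):
    """Yield the relative path of every folder in the drive, issuing one
    request per folder (plus paging) and never querying individual files."""
    if item_id is None:
        url = f"{GRAPH}/me/drive/root/children"
    else:
        url = f"{GRAPH}/me/drive/items/{item_id}/children"
    headers = {"Authorization": f"Bearer {token}"}
    while url:
        resp = requests.get(url, headers=headers)
        resp.raise_for_status()
        data = resp.json()
        for item in data.get("value", []):
            if "folder" in item:                  # the folder facet marks directories
                path = f"{prefix}/{item['name']}"
                yield path
                yield from list_folders(token, item["id"], path)
        url = data.get("@odata.nextLink")         # follow pagination, if present

# Usage, assuming you already have an OAuth access token:
# for folder_path in list_folders(ACCESS_TOKEN):
#     print(folder_path)
```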


Directory expansion requires laying down placeholder files for every file in every folder that is expanded, so a filter for only folders would not do much good.
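
To make the cost concrete, here is a minimal sketch of what “expansion” implies, assuming a hypothetical `list_children(folder)` helper that returns the remote contents of one folder: every file in every expanded folder gets its own .cloud placeholder on disk, so the work scales with the file count rather than the folder count.

```python
import os

def expand(remote_folder, local_path, list_children):
    """Recursively expand a remote folder: real directories for subfolders,
    a zero-byte .cloud placeholder for every file.

    list_children is a hypothetical callable returning (name, is_folder)
    pairs for a single remote folder.
    """
    os.makedirs(local_path, exist_ok=True)
    for name, is_folder in list_children(remote_folder):
        child_local = os.path.join(local_path, name)
        if is_folder:
            expand(f"{remote_folder}/{name}", child_local, list_children)
        else:
            open(child_local + ".cloud", "a").close()   # one placeholder per file
```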

How small was the directory tree? How many files?

I’m finding this interface unsatisfactory. I was hoping that I could save a file to any folder on any of my linked accounts without having to sync all of the data to my local computer (I wouldn’t have sufficient space anyway). I find the proposed solutions confusing and counter-intuitive. If syncing the directory structure is not going to be a default setting, can we at least have a simple “sync directory structure” button somewhere in the interface?

Also, it would be nice if I could click on a placeholder from the open-file dialog in an application and have that file sync/download and open in the application. This is less of an issue for me, as I can navigate to a file in Internet Explorer, but it would be nice if I could open files from stubs within the open-file dialog of the application I’m working in.

Because I can’t easily work with odrive from applications other than Windows Explorer, I’m finding it much less useful than I had hoped.


While I don’t find the placeholder files to be a perfect solution, I think it’s a fine way to solve the problem of progressively syncing thousands of directories in a manageable way. The more I use it, the more familiar, and hence transparent, it becomes. I’m actually really getting into the whole sync/unsync mindset. However, I have tried syncing a big tree without files; it worked after being left for hours, but I think the odrive sync engine is still working hard to track the results. I’m seeing this on Amazon Cloud Drive, where I suspect there are 100,000 folders or more to work with. (Hey Amazon, if you call it unlimited, I’m going to make good use of that!)

So, to my question: is this simply an unavoidable challenge of tracking files via a remote key-value data store, or is there a workaround, such as a local cache, that could be added in the future somehow?


It is really the “proactive syncing” that contributes to the performance overhead on large structures. odrive actively monitors both sides of the sync relationship to proactively make sure everything remains in sync. The larger the structure, the more odrive has to do to make absolutely sure that things are correct on both sides.

If we eliminated that proactivity, it would reduce the work needed drastically, but at the expense of the user experience. The upside would be that overhead is reduced as odrive is no longer performing background tasks, like scanning structures and analyzing the remote and local sides to detect and act on changes. The downside is that odrive would only pick up changes as the user navigates through the structure, so things wouldn’t just be happening “magically”.
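
To make the tradeoff concrete, here is a conceptual sketch (not our actual implementation) of what non-proactive, on-demand change detection could look like: a folder’s remote listing is refreshed only when the user navigates into it and the cached copy is stale. The `fetch_remote_listing` callable is a hypothetical stand-in for the storage API call.

```python
import time

CACHE_TTL = 300  # seconds; refresh a folder at most once every 5 minutes

class LazyIndex:
    """On-demand change detection: a folder's remote listing is fetched only
    when the user opens it and the cached copy is older than CACHE_TTL,
    instead of being proactively rescanned in the background."""

    def __init__(self, fetch_remote_listing):
        self._fetch = fetch_remote_listing   # hypothetical API call
        self._cache = {}                     # folder path -> (timestamp, listing)

    def open_folder(self, path):
        entry = self._cache.get(path)
        if entry is None or time.time() - entry[0] > CACHE_TTL:
            listing = self._fetch(path)      # only hit the remote side now
            self._cache[path] = (time.time(), listing)
            return listing
        return entry[1]                      # serve the cached view

# Usage with a fake remote, for illustration:
# index = LazyIndex(lambda p: [f"{p}/example.txt"])
# print(index.open_folder("/Docs"))
```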
