Using junction points to make an "inter-cloud sync" host


First, some background. I use junctions a lot in my current Dropbox setup, since Dropbox doesn’t have the “sync to” feature that odrive has. A junction lets any folder on the filesystem point at the actual folder, which resides in Dropbox. Applications think they’re using the AppData (or similar) location, but they’re actually updating the Dropbox folder - which the client can watch, sync, etc. Given that odrive supports multiple cloud providers, I was considering using junctions to do something crazy.

Since I have unlimited data on a fibre connection available to me, I was considering installing odrive on a machine with a large-ish hard drive and using it as a “cross-cloud sync host” - it would have “Sync Everything” rules for certain folders, which would actually be junctions into another cloud provider. The same proposal could also work on a hosted VPS using symlinks (or bind mounts) on Linux - Linux does not allow hard links to directories - or any similar setup.

For example, if I had the following folders:

  • odrive\Amazon Cloud\CloudSync\SomethingImportant (an actual folder)
  • odrive\Google Drive\CloudSync\SomethingImportant (a junction to the above folder)

If I were to put something into one of those locations from elsewhere, the “Sync Everything” rule should pull it down into that folder - which would in turn trigger odrive to upload it to the other provider, right? I am a little concerned about what would happen if the junction were broken - since that acts as essentially an “access denied” error, would it delete the files from Google Drive?
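To make the layout concrete, here is a minimal sketch of the wiring, using a Python symlink and a temporary directory as a stand-in for the real odrive root (the paths and file names are hypothetical; on Windows the link would be a junction created with `mklink /J` from an elevated prompt instead):

```python
import os
import tempfile
from pathlib import Path

# Stand-in for the real odrive root (e.g. C:\Users\you\odrive);
# a temp dir is used here so the sketch is safe to run anywhere.
root = Path(tempfile.mkdtemp()) / "odrive"

real = root / "Amazon Cloud" / "CloudSync" / "SomethingImportant"  # the actual folder
link = root / "Google Drive" / "CloudSync" / "SomethingImportant"  # points at it

real.mkdir(parents=True)
link.parent.mkdir(parents=True)

# On Linux/macOS this creates a symlink; on Windows you would create a
# junction instead, e.g.:  mklink /J "<link>" "<real>"
os.symlink(real, link, target_is_directory=True)

# A file dropped in via either path is visible through both:
(real / "vault.kdbx").write_text("dummy database")
print((link / "vault.kdbx").read_text())  # prints: dummy database
```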

This has come about because I currently use MultCloud to move data between cloud providers; however, it is a schedule-based system with no option to trigger on new or modified files.

The specific use case this matters for is my password manager database. My day job allows access only to the company-provided OneDrive for Business; all other cloud providers are blocked. This means I need a copy of the DB in that provider to be able to use it in the office. However, I do not want it to exist only in that provider, as it would become inaccessible if I ever lost access to that company’s data. Currently it syncs between a local Dropbox folder on my laptop and the odrive ODfB folder (using “sync with odrive”), but that requires my laptop to be on in order to function. MultCloud is an option, but without an easy way to force it to trigger more than three times a day, the delay becomes quite evident.


Hi @yukihyou,
First, I’ll preface this by saying we don’t officially support syncing across symlinks/junctions. This post goes into more detail about that: Syncing symlinks/hardlinks/junctions/etc.. - how does it work?

Now that that’s out of the way, the idea you propose should be possible, although it is not something we’ve tested so you’d want to experiment with some test data beforehand. The key aspects to hit would be:

  1. Ensure a full recursive sync:
    Recursive sync will need to be done on each folder in the “proxy” system, and it will need to run to completion. This can be an issue at times, because the bulk recursive download operation can abort when certain types of errors are encountered, or when multiple errors pile up, leaving folders in a placeholder state. If this happens, the rules you set on the structure will not affect folders that are still in a placeholder state, so you will want everything expanded. You will probably want to use a basic script that brute-forces the recursive sync. See here for more details: Downloading my 3 TB+ Amazon Cloud Drive contents to my external drive

  2. Test behavior of a broken junction:
    This will probably depend on the nature of the break, but if odrive’s scanning is able to enumerate the target folder and it looks empty, it’s going to think you purposely deleted data. With odrive’s trash functionality, however, remote deletes can be held back, and once the junction is restored the detected deletes will be negated (since the data will be back). You would just want to make sure not to have any auto-trash rules set.
    You will probably want to conduct a fair amount of testing in this area.
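For point 1, the brute-force recursive sync could be sketched along these lines: repeatedly collect odrive’s placeholder files (`.cloud` for files, `.cloudf` for folders) and ask the odrive CLI to sync each one until none remain. The `python odrive.py sync <placeholder>` invocation and the CLI path are assumptions here - adapt them to your own odrive install.

```python
import subprocess
from pathlib import Path

# Placeholder extensions odrive uses for not-yet-downloaded files/folders.
PLACEHOLDER_EXTS = {".cloud", ".cloudf"}

def find_placeholders(root):
    """Return every odrive placeholder file under root."""
    return sorted(p for p in Path(root).rglob("*") if p.suffix in PLACEHOLDER_EXTS)

def brute_force_sync(root, odrive_cli="odrive.py", max_passes=50):
    """Repeatedly sync placeholders until none remain.

    Expanding a .cloudf folder can reveal new placeholders inside it,
    hence the outer loop. The CLI command below is an assumption; check
    how the odrive agent/CLI is invoked on your system.
    """
    for _ in range(max_passes):
        placeholders = find_placeholders(root)
        if not placeholders:
            return True  # everything is now a "real" file/folder
        for ph in placeholders:
            subprocess.run(["python", odrive_cli, "sync", str(ph)], check=False)
    return False  # placeholders still remain after max_passes
```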
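For point 2, a quick way to see why the nature of the break matters: on Linux a dangling symlink surfaces as an error rather than an empty folder, so a scanner sees a failure instead of a mass delete. Windows junctions can break differently (the “access denied”-style behavior mentioned above), which is exactly what the test-data experiments would need to confirm. A small sketch:

```python
import os
import shutil
import tempfile
from pathlib import Path

base = Path(tempfile.mkdtemp())
target = base / "target"
target.mkdir()
(target / "file.txt").write_text("data")

link = base / "link"
os.symlink(target, link, target_is_directory=True)

# Break the link by removing its target - a stand-in for a broken junction.
shutil.rmtree(target)

# A dangling symlink raises an error instead of listing as empty, so a
# scanner sees a failure here, not a deletion. Verify what your platform
# actually does before trusting real data to this setup.
try:
    contents = os.listdir(link)
except OSError:
    contents = None
print(contents)  # prints: None
```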


It looks like it may not be easy to do what I am after, since this point seems to indicate that I would need to run the script on a schedule, and that some folders may end up in an inconsistent state.

I agree, but unless there is a much more pressing use case, I think I might wait for the upcoming “odrive of the future” before spending too much time testing an unsupported setup.

Thanks for your comments though!


Hi @yukihyou,

Just to clarify this: you should only have to run that script once, to get everything initially synced 100% (turning all placeholders into “real” files/folders). After that, the sync rules applied to your folders will take care of anything new that shows up.