Hi, I have a 15 GB disk image file stored on an Encryptor volume hosted on Amazon Drive. If I ever update the image, it obviously resyncs, so I try to keep changes to a minimum. However, whenever there is an update to this file, there is always a "(conflicted)" version of the file that has to be synced as well, which is simply the previous version of the file. What's worse is that when I attempt to stop the syncing of the file by clicking on it and then clicking "OK" to "Cancel syncing of (conflict).dmg?", it stops and then immediately starts copying the same file again from the beginning! No matter how many times I try this, it just keeps starting over. I even opened a remote session to the other computer that has the Encryptor volume and the shared file, and performed this action on both machines at the same time (by opening the dialog box on both computers and, once both were open, clicking "OK"), but that doesn't help either.

This is totally insane because the computers currently sharing this file are _on the same LAN!_ That means the file is uploaded and downloaded twice for each computer, so it is transferred FOUR TIMES over my internet link! (Aside from the problem of there always being a conflict, it is silly to download the "conflicted" version of the file when it is just the old version that the machine already has!) Either way, that's 60 GB of internet data traffic! Nothing I do stops this insanity except stopping all syncing in odrive. It's frustrating enough that odrive lacks any concept of using the LAN for syncing, but the fact that this file is always handled as a conflict is maddening! (It's probably because of its large size, which of course is exactly the sort of problem you want to have with a large file, right?) Can anything be done to stop this craziness? This problem happening just once burned 1/15th of my data cap for the entire month! HELP!
Hi @DarfNader,
Can you send over a diagnostic when this happens next? I will see if I can identify why the conflict is happening.
So when this happens, a conflict version is created and the previous version is always downloaded? Files should not download automatically unless folder sync rules have been set somewhere in the structure above. When you right-click->sync on the parent folder for this file, what settings are set in that dialog?
Sorry, I ended up deleting the file and starting from scratch because I couldn't take it anymore. I have now locked the file so that it doesn't get gratuitous modification-date changes, which force the thing to update when nothing has actually changed. (When you look for changes, do you actually compute a checksum, or do you see the date variance and assume the file is different? It would be nice if, when the file got touched without changes, odrive saw this, updated the dates appropriately, but didn't bother syncing data.)
Hi @DarfNader,
Currently a modification to either the content or the timestamp will trigger an upload. If the file is touched, so the timestamp changes but the content doesn't, that still causes an upload. I have passed on the request for more tolerance in this area to the product team.
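For what it's worth, the distinction is easy to see on disk: touching a file updates its modification time but leaves its content checksum unchanged. A quick illustration in the shell (using the POSIX cksum tool; the file path is just an example):

```shell
# Create a sample file and record its content checksum.
printf 'disk image contents' > /tmp/sample.dmg
sum_before=$(cksum < /tmp/sample.dmg)

# "Touch" the file: the mtime changes, the content does not.
touch /tmp/sample.dmg
sum_after=$(cksum < /tmp/sample.dmg)

# A checksum-based change detector would see these as identical,
# while a timestamp-based one (odrive's current behavior) would not.
[ "$sum_before" = "$sum_after" ] && echo "content unchanged"
```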
Yeah, considering that odrive has pretty much no support for file metadata except in Google Drive, I'd think you should get this addressed pronto, because if you have three computers trying to keep files in sync, odrive is basically going to have you copying files around the tree for ever… and ever… and ever… and ever…
Seriously, that’s a pretty big fail.
Hi @DarfNader,
There shouldn't be any cases of an infinite-loop type of situation. If you had three machines syncing the same remote location and one made a change to the date or content of a file locally, the file would upload from that machine. The two other machines would see this remote change and download it (if set to do so), and that should be the end of it.
@tony hmm… okay, then… but I don't understand why, for the past week, odrive has been churning on my files. I honestly haven't done much of anything within the directories that it syncs, but it is clearly trying to sync them over and over. For instance, right now there is a file being synced that I haven't even opened in probably a year.

I do realize that I moved some files around maybe 24-48 hours ago just to better organize things, but it wasn't a lot of changes. Would that have triggered odrive to re-sync the files in their entirety, or is it clever enough to just update the file structure?
I just sent a diagnostic report in the hopes it says more about what’s going on.
Thanks for your help, as always, and sorry for when I become a pill in these forums. I appreciate that you always keep your cool.
Hi @DarfNader,
A move can trigger a re-upload if the move isn't caught within a certain window. What I mean is that we try to optimize moves so that only the structure change is synced and the files themselves are not, but that optimization can be missed. It depends on a few factors, including current activity, source type, structure size, and scanning position. I know these are all vague terms, but suffice it to say that a move action can end up as a "delete then add" scenario instead of a "move": odrive picks up the missing files (in the location the files were moved from) as a delete and then, later, sees the files in the new location and picks that up as an add.
Do you have auto-trash rules enabled?
I took a look at the diagnostic, but I didn’t see any individual files actively uploading, at least when the diagnostic was sent. Sometimes activity can be shown in the menu but it is metadata activity, or scanning, and not actual transfer. There are a couple of waiting items due to an internal server error on Google Drive too. These exceptions can also affect move optimization.
No worries at all. I know this stuff can get frustrating, and our feedback can be murky to non-existent in some places. We don't have everything perfect yet, but we are striving to make things better and better. Sync is hard…
I can't remember if I have suggested it to you yet, but I will sometimes suggest using the CLI to get some additional insight into what odrive is doing/seeing.
To use the CLI commands from a Mac:

- Open a terminal session (type "terminal" in Spotlight search)
- Copy and paste the following command into the terminal and hit enter:
python $(ls -d "$HOME/.odrive/bin/"*/ | tail -1)odrive.py status
This will return a summary of odrive status. To get more detail, you can add parameters to the status command like this:
python $(ls -d "$HOME/.odrive/bin/"*/ | tail -1)odrive.py status --uploads
python $(ls -d "$HOME/.odrive/bin/"*/ | tail -1)odrive.py status --downloads
python $(ls -d "$HOME/.odrive/bin/"*/ | tail -1)odrive.py status --waiting
python $(ls -d "$HOME/.odrive/bin/"*/ | tail -1)odrive.py status --trash
python $(ls -d "$HOME/.odrive/bin/"*/ | tail -1)odrive.py status --not_allowed
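If you find yourself running these often, the long invocation can be wrapped in a small shell function (just a sketch, assuming the default install path used above; `odrive_cli` is a made-up name):

```shell
# Hypothetical helper: locate the newest odrive CLI under the default
# install path and forward any subcommand to it.
odrive_cli() {
    python "$(ls -d "$HOME/.odrive/bin/"*/ | tail -1)odrive.py" "$@"
}

# Usage (only on a machine with odrive installed):
# odrive_cli status --uploads
# odrive_cli status --waiting
```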
Hi @Tony, I am not sure I know what "auto-trash rules" are or how I can enable them. Since there are no preferences to adjust in the typical macOS sense, I wouldn't even know where to do that.
I have not messed with the CLI commands, but now that I know they exist I will surely be writing some shell scripts to report on the data. I should probably set up some monitoring so that if requests start stacking up, or if some sync task just sort of "runs endlessly", I can get alerted. If I come up with something useful I'll be sure to share it with the community.
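Something along these lines is what I have in mind. This is only a rough sketch: the "Category: count" summary format I'm parsing here is my own assumption, not the CLI's documented output, so the patterns would need adjusting against real status output:

```shell
#!/bin/sh
# Sketch of an alerting check on odrive status output. The input format
# ("Category: count" lines) is assumed for illustration; feed it the real
# output of "odrive.py status" and adjust the patterns to match.
check_status() {
    printf '%s\n' "$1" | awk -F': ' '
        /^(Waiting|Not Allowed)/ && $2 > 0 { print "ALERT:", $1, "=", $2 }'
}

# Example with canned input, since the CLI is not available here:
check_status "Uploads: 0
Waiting: 2
Not Allowed: 1"
```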
I have to say, since I wrote this email, odrive's overall CPU consumption seems to be particularly high on average. While it does seem to be niced (or whatever they call process prioritization in macOS), so other processes that spin up under heavy load get dibs on the CPU cycles, it's like odrive is always doing something. Leave it alone and it will consume 20% CPU just sitting there. I am out the door, so I don't have time to try out the CLI and do some forensics, but I did just send another diagnostic report. For one thing, there is a file called .dbhsserviced that seems to be stuck in the "Not allowed" state, and I don't know why, what the problem might be, or whether it will just stay in that "Not allowed" menu until I go a-digging for a file with a matching name to figure out what might be aggravating odrive and what I might do to relieve it. If the diagnostics tell you what is going on there, that would at least be helpful.
Thanks again, and I will report back if the CLI shows me anything that seems odd.
Best,
Matt
Hi @DarfNader,
Auto trash rules are an odrive-specific setting. You can learn more about them here: https://docs.odrive.com/docs/sync-changes#section--empty-trash-
CPU use is something we are working hard on for the next version. Right now it really depends on how much data is being monitored. The expectation is that the CPU ramps you see are periodic, coinciding with the scanning odrive does to double-check that everything is in sync. The more data odrive is overseeing, the larger the impact.
For the not-allowed file, if you click on it in the not-allowed listing, it should show a message that gives a bit more detail on why it is not allowed. Can you take a look at that when you get a chance?
Hi @Tony, the error about the file that is "not allowed" reports only that the machine either doesn't have permission to perform the operation or that the name contains illegal characters. The file is named .dbsfeventd and I do not recognize it.
Hi @DarfNader,
I believe the .dbfseventsd directory only exists in the root of a volume, which is strange because odrive shouldn't be picking it up. It is used for filesystem-events-related bookkeeping on macOS. What does it show as the location of that item when you click on it in the not-allowed list?
Aha! So much for me being "Mr. Smart Guy". That file is from one of my machines, which has an AFS share of its root filesystem as an odrive datastore, which is probably not even remotely smart from a security standpoint. It has come in useful in the past, but the convenience is not worth the risk, either in having the root FS as a file share or in having it also be an odrive location. If odrive were ever breached, not only would all of my cloud versions of files be compromised (all of which I have stored locally somewhere), but my primary computer would be as well, AND it could be completely hacked, turned inside out, and made into a DDoS bot with virtually no effort. So yeah, I probably should turn that off and just limit AFS shares to subdirectories that don't affect system operation. If I need access to that host's files, I will use ssh and rsync.
Thanks for the follow-up on that @DarfNader. Good catch and I agree, probably best to turn off the root mapping.