Reset my odrive install, and now getting “This file does not exist” messages any time I try to sync anything

I have a pretty serious problem. I completely sync’d all my .cloudf files and ended up with 2 million .cloud files. Then I had to reset my VM OS. I still have the extra disk, 100% ready to sync, but I keep getting “This file does not exist” messages any time I try to sync anything.

Is there anything I can do to use all my perfectly valid .cloud files? It took 3 or 4 days to generate all of these. Things were going so slowly, though, that I had to rescale my plans for this migration, so I moved all the files from a 23TB drive to a 4TB drive. In order to delete the 23TB drive I ended up having to delete my OS drive, which contained my odrive settings. I figured I could just reinstall odrive fresh and it would use all my old .cloud files. It doesn’t seem willing to do this, though. Is there anything I can do here short of deleting everything I have and starting all over?

edit: in theory this is basically what I’m going to do? yes?

although I don’t really know how to “move” the odrive folder. I already used this command, targeting the correct folder, and got no functionality:
python "$HOME/.odrive-agent/bin/odrive.py" mount /mnt/storage /
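(For context, the full agent bring-up I did before that mount looked roughly like this; the paths assume the standard ~/.odrive-agent install from the docs, and the auth key is one generated on the odrive.com account page:)

# start the agent in the background (standard install location assumed)
nohup "$HOME/.odrive-agent/bin/odriveagent" > /dev/null 2>&1 &
# authenticate with an auth key from the odrive.com account page
python "$HOME/.odrive-agent/bin/odrive.py" authenticate 00000000-0000-0000-0000-000000000000
# then mount the local folder against the remote root, as above
python "$HOME/.odrive-agent/bin/odrive.py" mount /mnt/storage /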

I don’t even have my root .cloudf file to start the process; that file was created automatically the last time I ran that command. It seems like the folder already existing blocks the first root .cloudf file from being created? But at the same time, I cannot make use of it?

edit2: this proposes a solution

Tony (odrive Team), Mar '17:
Hi @christianorpinell,
For all non-Encryptor data, you can basically just copy/move the local data over into the newly initialized odrive folder, alongside the linked storage placeholders. The steps would be:

1. Install odrive on the new system, go through the setup process, and log back in (make sure you are logging in as the same user you were using previously).
2. Once odrive fully initializes, go to the new odrive folder, which should now have all placeholder (.cloudf) files. The cache has now been reset.
3. Go to the old/backup odrive folder from the previous install and move all of the folders inside it to the new odrive folder. You will end up with “real” folders and .cloudf placeholders sitting alongside each other at first. Once odrive starts processing, it will remove the .cloudf placeholders where “real” folders are now present.

I will try this. Although I already randomly tried to sync a bunch of .cloudf files that I didn’t think existed, so who knows what sort of damage I may have caused by following these steps in reverse order: moving the files into place and then installing odrive and pointing it at them :-p
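(In CLI terms on my setup, step 3 would be roughly the following; /mnt/storage-old is just a hypothetical name for wherever the backup data survived, and /mnt/storage is the freshly mounted folder:)

# move the surviving "real" folders in alongside the fresh placeholders
mv /mnt/storage-old/* /mnt/storage/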

edit3: for future reference, of course, this is all my fault. I didn’t need to leave the “delete boot disk when deleting this instance” checkbox checked. If it had been unchecked I’d have had no trouble at all. So that’s what I’m doing now, and if I ever do get this going again, this problem won’t arise again.

edit4: so I tried that, and before running a single sync command I ran a status command and got this:

Sync Requests: 0
Background Requests: 1
Uploads: 0
Downloads: 0
Trash: 0
Waiting: 0
Not Allowed: 1
no clue what a “not allowed” might mean…
figured maybe I didn’t chown my directories, so I killed odrive and tried that
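(roughly this, with /mnt/storage standing in for my data disk:)

# take ownership of the data disk so the agent can access everything
sudo chown -R "$USER":"$USER" /mnt/storage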
then restarted odrive and got:
Sync Requests: 0
Background Requests: 5
Uploads: 0
Downloads: 0
Trash: 0
Waiting: 0
Not Allowed: 1
does this mean odrive is actually working? that it’s going through, checking things, fixing things, and making them work? and perhaps all this worrying was for nothing? and the “Not Allowed: 1” is because I used the advice in that other thread to trick the system and generate the “Amazon Cloud Drive.cloudf” file that I never actually needed in the first place, because I already had the folder? In which case I do nothing and it’ll fix itself?
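(I later learned, per edit12 below, that status takes per-category flags; this would have told me exactly which item was not allowed:)

# list the specific not-allowed item(s) (flag discovered in edit12)
python "$HOME/.odrive-agent/bin/odrive.py" status --not_allowed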

edit5:
Ran status again a minute later and got:

odrive Make Cloud Storage THE WAY IT SHOULD BE.
isActivated: True hasSession: True
email: accountType: Google
syncEnabled: True version: prod 924
placeholderThreshold: neverDownload autoUnsyncThreshold: never
downloadThrottlingThreshold: unlimited uploadThrottlingThreshold: normal
autoTrashThreshold: never Mounts: 1
xlThreshold: never Backups: 0
Sync Requests: 0
Background Requests: 10
Uploads: 0
Downloads: 0
Trash: 0
Waiting: 7
Not Allowed: 1

not sure I like odrive just trying to fix itself on its own without me having any ability to affect it at all. Getting that many “waiting” right away is a bit odd, although there were 2000 write operations per second on my system at that time; perhaps my computer cannot handle the speed at which odrive is trying to fix itself. I suppose 10 requests at a time is the default background setting?

edit6: so status doesn’t provide me any real information on my progress. I’m not getting spammed with feedback like I would from my own manual sync commands… how can I tell if odrive is “stuck” or if it’s working perfectly? or how much is left to do? should I just go afk for a day?

still seeing:

isActivated: True hasSession: True
email: accountType: Google
syncEnabled: True version: prod 924
placeholderThreshold: neverDownload autoUnsyncThreshold: never
downloadThrottlingThreshold: unlimited uploadThrottlingThreshold: normal
autoTrashThreshold: never Mounts: 1
xlThreshold: never Backups: 0
Sync Requests: 0
Background Requests: 9
Uploads: 0
Downloads: 0
Trash: 0
Waiting: 11
Not Allowed: 1

edit7:
this is what status shows now.
Sync Requests: 0
Background Requests: 6
Uploads: 0
Downloads: 0
Trash: 0
Waiting: 11
Not Allowed: 1
my monitoring charts also show that disk I/O speed has been cut from 2500 to 1500 operations per second (of course, I had previously thought 100-200 was the maximum rate, so this is still quite fast).
Is this somehow due to the ratio of background requests to stuck waiting requests? Since the waiting number never goes down, only up, I feel as though once an element is stuck waiting it will stay that way forever.

so my questions would be

  1. How can I tell when this process is done?

  2. If I kill the odrive agent and restart it, where will I be left off? Will it resume? Will it retry those stuck waiting files? etc? (restart sketch below)

  3. btw, there are some issues with a very, very small number of files, maybe 3 or 4 out of 2 million, that are in no way odrive’s fault, but are filenames using faulty characters outside the alphabet/unicode. I only mention this here because they may or may not have an impact on things getting stuck in “waiting” mode. I don’t actually care about these particular files, though.
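(For question 2, the clean way to bounce the agent appears to be the shutdown command plus relaunching the binary; a sketch assuming the standard install paths:)

# stop the agent cleanly, then bring it back up
python "$HOME/.odrive-agent/bin/odrive.py" shutdown
nohup "$HOME/.odrive-agent/bin/odriveagent" > /dev/null 2>&1 &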

edit8:
So after the first 2 hours, the system switched from heavy I/O writing to moderately heavy I/O reading…
Somehow I now have 338 .cloudf files (the total number of subdirectories in this cloud is around 80,000). Not really sure if it’s working or not still. The total amount of disk space used and files on disk seems relatively constant, with only very minor changes, so at least it’s not deleting the 3.7TB of data that’s already been sync’d, although it might be re-creating the .cloudf files for the folders that data is in. Given that I never ran any commands other than turning the odrive agent on, I’m not sure how it will communicate to me if or when it’s done, other than maybe showing 0 background requests on a status command. It’s still got 5. So I’ll try to distract myself and go afk a bit.
edit9: now there are 352 .cloudf files. hmm, not sure if that’s good or bad. Is it doing all the .cloudf files over again? or is it somehow comparing new .cloudf files with directories and files that already exist? I really am not sure how background requests work, as I was only familiar with manual sync inputs. I also ran find with -mtime -1 to show that, in theory at least, 77 .cloud files were created today and existed at the time I ran the find. So the background requests are processing both .cloudf’s and .cloud’s? in an order I have no clue about… Both of these numbers seem somewhat small when having to analyze 2 million files, although, since those files were already correct… maybe most of them are left untouched?

edit10: Another half hour or so later, still 77 .cloud files created or modified in the past day. So either the process is stuck and I should restart it, OR the process is checking all the perfectly fine files that are already there, which is what I want it to do anyway… Disk usage seems to suggest it’s still working; not sure how to find any feedback about that, though. Status remains at 4 background requests, 11 waiting.

edit11: ran
find . -mmin +1 -mmin -480 -type f | wc -l
and found 358… so if I’ve only managed to affect 358 files after 6 hours, that’s pretty bad; given how hard the disks are chugging, I’d have expected a lot more. It feels like maybe it’s stuck and I should kill and restart odrive, but I’m still worried that this magical background request system will start over instead of continuing if I use a kill command. If I was confident it would pick up where it left off, I could even shut the VM down, add RAM, and restart it; this task seems pretty memory intensive. When I created this VM instance I assumed it would just instantly be ready to download files using the .cloud files; I didn’t realize I’d have to re-examine the entire lot. It’s possible that the heavy disk usage is pagefile related. That does sound a little far fetched, though.

edit12:
I finally found the flags for the odrive status command!!!
optional arguments:
  -h, --help       show this help message and exit
  --mounts         get status on mounts
  --backups        get status on backup jobs
  --sync_requests  get status on sync requests
  --uploads        get status on uploads
  --downloads      get status on downloads
  --background     get status on background requests
  --trash          get status of trash items
  --waiting        get status of waiting items
  --not_allowed    get status of not allowed items

python odrive.py status --background
used to be f 0%
UploadAsE 0%
backupsfrom2013summerihaveothercopiesof 0%

these aren’t changing at all, so it’s been stuck, possibly for four hours. Whoops. Also, all the waiting files are “illegal filename”.
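(Now that the flags are known, polling the interesting categories beats staring at the summary; a quick sketch, assuming the standard agent path:)

# poll background and waiting items every minute; Ctrl-C to stop
while true; do
  python "$HOME/.odrive-agent/bin/odrive.py" status --background
  python "$HOME/.odrive-agent/bin/odrive.py" status --waiting
  sleep 60
done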

edit13: ran syncstate. Two are pink, two are blue, and the rest are white; white must mean odrive doesn’t know anything about them. I am going to assume that the refresh command, used on a white directory, will make background requests try to fix it? (commands sketched below the listing)
Active
Documents
Dec12011 clone from New Volume
Videos
New Volume Samsung 2.0TB This drive was cloned from dec 1 2011
Pictures
320GB settings only mostly
Windows Fat 1.0TB western digital black drive from 2010
WD 2TB Drive Bought Jan2010
C:
phone samsung s7
D:
E:
K:
Elements. That Silver Drive from ages ago backup may 24nd 2017
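(The commands in question, for reference: /mnt/storage is my mount, “Documents” is just an example path, and --textonly is per the CLI help, if I’m reading it right:)

# colorized per-item state for the mount root
python "$HOME/.odrive-agent/bin/odrive.py" syncstate /mnt/storage
# same info as plain text, easier to grep
python "$HOME/.odrive-agent/bin/odrive.py" syncstate --textonly /mnt/storage
# ask odrive to re-scan a directory it doesn't seem to know about
python "$HOME/.odrive-agent/bin/odrive.py" refresh "/mnt/storage/Documents"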

edit14: didn’t refresh anything, re-ran syncstate, and now 6 of them are blue… but when I do status I don’t see any background requests, so for syncstate to be reporting more blueness is odd. On top of that, if I enter some of the white directories there are still many blue names inside, so maybe the past 6 hours weren’t wasted after all. Still debating popping my refresh or not, but ATM my 1 concurrent .cloud download is working fine, so I’m just gonna let that run (and hope it adds to those 77 downloads).

as a test I ctrl-c’d my single odrive download request (my find . sync command).
Instantly, background requests started populating. Great, I think I totally understand what’s going on! and how to monitor progress! I made mistakes, but it looks like I now know everything I need to know to solve them.

I am debating deleting this thread; it’s entirely worthless now, as I don’t want help anymore.
If any staff waste their time reading this thread, feel free to delete it.
Unless somehow this story is a good teaching moment about how the odrive CLI works for other users, in which case it can continue to exist for some poor soul like me to google up and read a year or two from now; maybe it will help them understand how the odrive CLI agent works.

Hi @bouya.daman,
My apologies for not getting back sooner. I’ve been heads-down on stuff all day (still am), but I saw this pop up in my feed.

I haven’t gone through the whole post above yet… heh … but I wanted to point out the expected behavior in a case like this.

When you add all of the files/folders from a backup into a fresh odrive folder, odrive will still need to process everything. In other words, it can’t take anything at face value, and it still needs to check the whole local structure and remote structure to make sure everything lines up. This is going to be a time-consuming process, as you know.

Essentially odrive is navigating and re-indexing everything. You don’t really save on metadata processing in this case. However, any files that were already local will be checked and skipped if they are consistent with the remote side, meaning you can save a considerable amount of time (and bandwidth) on transfer.

I think the progress (or lack thereof) you were seeing was a result of odrive just being really busy trying to reindex the entire structure.

It sounds like things are starting to progress more now, which is good to hear. I appreciate the updates you were providing as you were going through the process. The CLI takes a little getting used to, and that can be frustrating and a bit confusing, especially when things aren’t behaving the way you would expect.

I put this in the other thread, but I’ll write it here too.

background requests are ENTIRELY MEANINGLESS TO THIS PROCESS. Having status --background be stuck is not that big a deal (oh, also, background requests for .cloudf files seem to just go from 0% to 100% instantly; the progress tracker is also meaningless, and not actually proof of anything being wrong, as I had first feared). The background requests are only for situations in which a new .cloud, .cloudf, or directory is needed. When everything is 100% fine, the files just turn from white to pink to blue silently. This isn’t included in any status report, but it is something I can see by repeatedly using syncstate and targeting new directories as I follow progress.

Also the fact that this might take a long while is not really that big a deal, as I still have 3TB of data to upload to gdrive, which will take roughly 4 days.

The biggest deal isn’t re-indexing so much as having to redownload 4TB of data, but so far it’s like you said: actual data I’ve downloaded from ACD is being checked off by odrive as already perfect.

Also, even though you said there wasn’t going to be much of a savings in terms of metadata, it appears there is actually a pretty big savings. It might not reduce the number of API requests I’m making to ACD, and I might get banned from ACD again for a day, but it’s running a lot smoother. My guess would be that when it wants to generate new .cloud or .cloudf files and finds those files already exist, it can simply check them instead of re-creating them, which, I don’t know why, but is somehow a bit faster. I have proof of this too, because it’s very rarely creating or modifying any of the .cloud files. I feared at first this was a problem, but I now see that a lot of .cloud files are being ACCESSED, then left alone and turned blue by odrive. I was using -mtime, not -atime, in my find commands.
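(In other words, my earlier counts were misleading because -mtime only catches modifications; something like this shows the difference, with /mnt/storage standing in for my mount:)

# placeholders *modified* in the past day (what I was counting before)
find /mnt/storage -name "*.cloud" -mtime -1 | wc -l
# placeholders merely *accessed* (read and verified) in the past day
find /mnt/storage -name "*.cloud" -atime -1 | wc -l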

Thanks @bouya.daman!

I was thinking more about API calls, but you are right, there should be a savings on CPU and I/O overhead in this case since it is not laying down all of the objects it has to when working from a completely fresh cache.

Also, don’t worry about not being here for me. It was still you who solved my problems, when I finally found this thread:

Really top-tier thread; it needs to be more visible, although maybe it was just silly of me not to find it faster.

Also, yeah, getting banned for a day or two because of API requests won’t matter anymore. It was really, really bad the first time it happened because I had 0TB of data to work on, just lots of .cloud files. Now that I have 4TB of data, I have plenty of slow uploads to make during any future ACD temporary bans.


Thanks @bouya.daman,
We also have a section in our docs that covers the basics of the agent and CLI here, for future reference: https://docs.odrive.com/docs/odrive-sync-agent

Hmm, now I see that odrive will automatically replace any missing .cloudf or .cloud files within the sync folder. How do I disable this? Leave odrive on, but have this feature off?

When I look at status I see:
odrive Make Cloud Storage THE WAY IT SHOULD BE.
isActivated: True hasSession: True
email: accountType: Google
syncEnabled: True version: prod 924
placeholderThreshold: neverDownload autoUnsyncThreshold: never
downloadThrottlingThreshold: unlimited uploadThrottlingThreshold: normal
autoTrashThreshold: never Mounts: 1
xlThreshold: never Backups: 0

Is there any way I can adjust these settings? So that I can, say, manually run an odrive sync on a list of .cloud files, move those files to Google Drive, then delete those files, all while odrive is turned on, but then NOT have odrive try to replace those .cloud files? My goal is to start deleting things only after I’m 100% sure I have 100% of the .cloud files I want to sync. Because, well, obviously my ACD cloud isn’t ever going to change again; I’m never uploading to ACD again… so barring any more accidental deletions of odrive, I won’t want this auto-sync’ing feature, say, tomorrow, or the day after that.

I guess what I’m referring to is “unsync’ing”.
You did write a script for this here:

I guess in theory I could sync a .cloud file, then instantly unsync the file once it’s done being downloaded. For my use case, though, it would be nice to simply disable creation of new .cloudf and .cloud files entirely, while still allowing manual sync commands on specific .cloud files I already have.
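(A minimal sketch of that idea, under my assumptions: the downloaded file is the placeholder path minus its .cloud suffix, and /mnt/outbound is a hypothetical staging folder for the Google Drive mover:)

ODRIVE="$HOME/.odrive-agent/bin/odrive.py"
find /mnt/storage -name "*.cloud" | while read -r placeholder; do
  python "$ODRIVE" sync "$placeholder"   # download the real file
  realfile="${placeholder%.cloud}"       # assumed: placeholder path minus .cloud
  cp -a "$realfile" /mnt/outbound/       # hand a copy to the gdrive mover
  python "$ODRIVE" unsync "$realfile"    # collapse it back to a placeholder
done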

I’m assuming that if I, say, unsync’d a directory full of .cloud files and then tried to manually sync one of those files, it would give me an error, right? Or am I misunderstanding the unsync command? Does it already do exactly what I want? Or even the opposite of what I want (heh)?

I even read this:

But I’m still not sure. Is it possible to have a sync’d file in an unsync’d directory? Or is it possible to have a sync’d directory with both unsync’d and sync’d files inside? (clearly yes) But what’s the best way for me to achieve this?

The part that scares me about unsync is that in the help for unsync I see this:

--force    force unsync a file or a folder - permanently deleting any
           changes or files that have not been uploaded.

This seems to imply that if I sync a .cloud file to generate a download of the real file, and then unsync the real file, it will instantly delete the real file? When what I want it to do is block attempts to download future placeholders for that file. Given how scary the wording on --force is, though, I think I might be afraid to attempt unsync’ing a file.

This might mean I end up having to move folders to Google Drive in entire lots, and once an entire folder is safely on Google Drive I could then unsync the folder in odrive. This might delete the folder, but it should also block re-downloading via the .cloudf file? Right? Or will I need to build my own list of .cloudf files to exclude from future find . -name "*.cloudf" sync commands?
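(The folder-level version would presumably be just this; “SomeFolder” is a hypothetical name, and I’m deliberately avoiding --force:)

# collapse a fully-migrated folder back into a .cloudf placeholder
python "$HOME/.odrive-agent/bin/odrive.py" unsync "/mnt/storage/SomeFolder"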

Does the unsync command simply turn healthy files back into placeholder files? Is there no way to block infinite regeneration of placeholder files, short of deleting files off of ACD? (Speaking of which, is there any way to make odrive delete files off of ACD? Since I’m migrating, the copy of the files I care about is the local copy, not the remote copy; the way sync and unsync are worded seems to indicate the main usage of odrive is when the remote is the main copy and the local copy is unimportant.)

Or am I still misunderstanding sync states? Once everything is fully refreshed, can I simply sync a .cloud file, move it to Google Drive, delete it, and it won’t magically reappear as a .cloud file unless I run a refresh command?

Assuming everything I said above is gibberish, I guess I can just run
find . -name "*.cloud" -mtime -1 -exec python "$HOME/.odrive-agent/bin/odrive.py" sync {} \;

I can simply let odrive function normally, just ignore everything it’s done recently, and leave behind a bunch of files and placeholder creation that I don’t want done. Well, just ignore it and leave it be. This solution is extremely easy and simple, but I don’t like it; it’s messy and doesn’t fill me with confidence.

Obviously, if I were left with moving entire folders to Google Drive and then manually supervising their “unsync”, I could just delete each folder from ACD. The reason for asking all these annoying questions, though, is to avoid that level of manual supervision.

I’m avoiding all the problems the user had in “Using Sync Agent to sync a folder outside odrive folder?”
simply by using odrive connected to ACD, and another program to move things to Google Drive (won’t mention it by name because I don’t want to advertise a competitor). Of course, maybe this is silly of me and I should just link Google Drive to odrive as well, but that method seemed even harder (I’m not actually that well versed in scripting) than the questions I’m asking here.

Hi @bouya.daman,
I think your questions may have been answered in the other thread, but let me know if that is not the case.


Yes. Perfect. Answered here: Using Sync Agent to sync a folder outside odrive folder?

While I’m asking silly questions I have one more.
So sync’ing colors go from white to pink to light blue, light blue being sync’d.
So then what are dark blue and red, colors I’ve also seen? I’ve only seen folders be dark blue and files be dark red.

Never mind, red is just archive files. duh.
