“Amazon Cloud Drive enforces a limit on the rate that requests can be made to their service. You have made too many requests recently and hit this limit. Please wait a few minutes before making additional requests to Amazon Cloud Drive.”
Anyone know how long this will last? odrive crashed roughly 8 hours ago on my VM. I killed and reset the odrive agent, and then I started getting the much scarier error message above. So I'm wondering: has ACD banned me for the past 8 hours? Or, when the odrive agent froze, did it hammer ACD the entire time it was "frozen" with bogus API requests? (Note: while running normally the odrive agent was using 30-50% of my CPU, but while "frozen" it was using 100%.)
I should also mention that this weekend I synced 2 million .cloudf files without much trouble. Then I downloaded 2 TB of data over 6 hours. Then, like I said, my logs show that while I was asleep odrive had been "frozen" for about 8 hours... and now I'm seeing the scary message above from Amazon. Is this likely a 24-hour ban? Is it possibly a shorter one? Was the ban over the 2-million-plus API requests made this weekend, or over the 2 TB of downloads? (I'm hoping it's the former, because I have 19 TB left to go and I can only afford to use this VM for at most another week or so.)
Follow-up question: if I test odrive to see whether the error is gone, say after an hour, will that reset my total ban duration? Or am I free to keep checking whether the ban is gone, with the ban length fixed regardless?
edit: waited an hour, still getting the message. Not sure what to do.
edit2: I called them. They said my account is not locked; accounts flagged as locked get 48-hour bans. The supervisor's guess was that my ban would last at most 24 hours. That's doable, as long as it doesn't happen again, but by my estimation I've made 2.2 million of the roughly 4 million API requests I'll need.
As a follow-up, I would guess that either Amazon's limit is somewhere around 2.2 million API requests per 48 hours, OR a crashed odrive agent generates an effectively unbounded number of useless API requests (a crash pins my CPU at 100% until I kill the agent, whereas normal operation is 30-50%).
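(For scale, if that first guess is right: 2.2 million requests over 48 hours averages out to roughly 46,000 requests per hour, or about 13 requests per second sustained.)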
Hi @bouya.daman,
In previous instances I have seen, the ban was 24 hours or less. The way they enforce their limits, and what triggers them, is pretty much a mystery to me. It sounds like you had some very heavy usage in the past 24 hours.
Since it sounds like you are scripting the operations, you may need to keep an eye on things periodically. odrive will try an operation as often as you tell it to. If you start hitting rate limits, and your script is set to just continue making calls, then odrive will continue to make those calls.
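For example, a script can back off instead of hammering. A rough sketch (this assumes the default agent install path, that odrive.py returns a nonzero exit code when a sync fails, and the target path here is just a placeholder):

# retry one sync with an increasing pause instead of looping at full speed
odrive="$HOME/.odrive-agent/bin/odrive.py"
delay=60
until python "$odrive" sync "/path/to/some/folder.cloudf"; do  # placeholder path
    echo "sync failed; waiting ${delay}s before retrying"
    sleep "$delay"
    delay=$((delay * 2))                 # exponential backoff...
    [ "$delay" -gt 3600 ] && delay=3600  # ...capped at one hour
done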
How many concurrent threads are you using for your downloads?
On Saturday I was running .cloudf syncs one at a time, but I found that it would randomly stall out after 70-120 minutes, which was very annoying.
I then moved to 10 at a time, and I found that after many hours odrive status would report that it was waiting on 3 syncs. This made me think that what had happened before was that, running one at a time, if a sync ever got stuck waiting, it required manual input to restart.
So I moved to 25 instances at once. Then I was able to confidently go afk, and a day later, Sunday afternoon, it was waiting on 7 files; to my mind that meant 18 were still running, and sure enough I finished the .cloudf sync phase.
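For reference, roughly what my concurrency setup looked like (reconstructed from memory; the mount path is mine, and -P controls how many syncs run at once):

# expand folder placeholders 25 at a time; each pass only expands one level
# of folders, so this has to be re-run until no .cloudf files remain
find /mnt/storage -name '*.cloudf' -print0 | \
    xargs -0 -n 1 -P 25 python "$HOME/.odrive-agent/bin/odrive.py" sync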
I then moved to the .cloud sync phase, this time using only ten at a time, figuring that with things being slower, ten was plenty. That ran for 6 hours and then stopped, although at that point I was away from home. My logs show that when it stopped working my CPU usage went from 40% to 100%, and it stayed that way all night, roughly 8 hours, until I returned and killed the agent.
So, yes, my usage was excessive, although it became less excessive before I started getting the message. If the ban lasts 24 hours or less, I should be okay. The rest of the .cloud sync phase should run much more slowly than the original .cloudf sync phase. I might try going from 10 down to 5, though, if I'm able to access ACD again tonight/tomorrow.
Of course my usage is going to be finite. Once this sync is finished I will never use ACD again. (I'm moving to Google.)
edit: clarifications, mostly for myself.
At 9:38pm last night I lost network traffic, either because of an odrive crash or an ACD block; I have no way to know, I was too silly to check. Then, for unknown reasons, my CPU usage spiked from 20-40% to 98.8% at 10:15pm.
To me this means the ACD block came at 9:38pm and then odrive crashed as a result at 10:15pm; for some reason the "crash" caused odrive to thrash the system. This is common behavior for odrive (at least the way I was overusing it via scripts 24/7). There are many alternate theories that might explain it, though. Also, times are rounded to the nearest 5-minute interval in the log history; I didn't witness either event.
edit2: 24 hours after these events, it's working again. So that answers that.
Managed to get the error message again, this time after only 9 hours. Not quite sure how 2.2 million requests over 48 hours triggered the message and then 200,000 requests over 33 hours triggered it as well (maybe when they unfroze my account they had me on some sort of "thin ice"). This is starting to make me think the task will be impossible, with 1.6 million .cloud files left to go. Although, to be fair, this time I did go with 8 concurrent requests instead of 4, like you suggested, since 4 wasn't giving me quite as much bandwidth. So by trying to make up for lost time, I only ended up costing myself more time.
This time, though, status reported I had 24 waiting requests, despite my having only made 8 requests at a time. That disproves my previous understanding of what waiting requests mean, and possibly also invalidates my count of API requests (if odrive is making more requests of ACD than I ask it to, by, say, repeating stuck/frozen/crashed requests some large number of times that I am unaware of).
On the bright side, the logs suggest it's possible this ban lasted only 30-60 minutes and was issued 5 or 6 times this morning while I was afk.
edit: Just checked the odrive logs and found no evidence to support the previous sentence, even though the evidence exists in the network traffic logs. Might have to wait 24 or 48 hours and resume with 1 connection at a time, I guess. I really cannot explain this second ban, though; I was roughly 90% less active in how many requests I tried to make.
edit: Decided to make use of this ban time by starting to upload what I had so far to Google Drive, and hit the 750 GB Google Drive upload limit, despite the fact that, as far as I know, I was transferring data within the same building, per the method people were using 6 months ago. This means that instead of completing the transfer in 4-8 days, it'll take roughly 30 days. It also means there was no reason for me to be pounding on ACD so hard, and no benefit to these bans: getting 3.7 TB out of ACD was meaningless, because over the same period GD would've limited me to 1.5 TB of upload anyway. As it stands, I'll have to monitor this every day for the next 30 days, but at least I shouldn't keep getting banned anymore, I hope, because I cannot really afford more bans at these speeds, given that I only have roughly 40 days left to complete this task. Oh well, silly me; that's what I get for following a 6-month-old guide.
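(Rough math behind that 30-day figure: about 19 TB still to move plus the ~3.7 TB already pulled is roughly 23 TB total, and 23 TB at 0.75 TB per day is about 30 days.)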
Amazon's algorithm for throttling is not documented anywhere, so it's a mystery to me, unfortunately.
I know Google fairly recently (within the last 5 months or so) introduced the 750GB upload limit, although I haven't actually seen any official documentation on it. The information you are reading may have been written just prior to this limitation going in?
Migrating large amounts of data is definitely a pain…
Yes, one reason I'm trying to document this is that it's so very undocumented. Even my casual observations may help another user one day if they try to google this topic.
Also, as a test I just uploaded a small file to Google Drive via the web interface… it worked just fine… so only API uploads are facing this 750 GB limit?
I got a lot of good replies here, so I'll crosspost this question here:
Big problem: I completely synced all .cloudf files and ended up with 2 million .cloud files. Then I had to reset my VM OS. I still have the extra disk 100% ready to sync, but I keep getting "This file does not exist" messages any time I try to sync anything.
Is there anything I can do to use all my perfectly valid .cloud files? It took 3 or 4 days to generate them all. Things were going so slowly that I had to rescale my plans for this migration, so I moved all the files from a 23 TB drive to a 4 TB drive. In order to delete the 23 TB drive I ended up having to delete my OS drive, which contained my odrive settings. I figured I could just reinstall odrive fresh and it could use all my old .cloud files, but it doesn't seem willing to do that. Is there anything I can do here, short of deleting everything I have and starting all over?
edit: in theory this is basically what I'm going to do, yes?
Although I don't really know how to "move" the odrive folder; I already used this command, targeting the correct folder, with no effect:
python “$HOME/.odrive-agent/bin/odrive.py” mount /mnt/storage /
I don't even have my root .cloudf file to start the process; that file was created automatically the last time I ran that command. It seems like already having that folder exist blocks the first root .cloudf file from being created? But at the same time, I cannot make use of the folder?
edit2: this proposes a solution
Tony (odrive Team), Mar '17:
Hi @christianorpinell,
For all non-Encryptor data, you can basically just copy/move the local data over into the newly initialized odrive folder, alongside the linked storage placeholders. The steps would be:
Install odrive on the new system, go through the setup process, and log back in (make sure you are logging in as the same user you were using previously).
Once odrive fully initializes, go to the new odrive folder, which should now have all placeholder (.cloudf) files. The cache has now been reset.
Go to the old/backup odrive folder from the previous install and move all of the folders inside it into the new odrive folder. You will end up with "real" folders and .cloudf placeholders sitting alongside each other at first. Once odrive starts processing, it will replace the .cloudf placeholders where the "real" folders are now present.
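In shell terms, that move step would be roughly this (a sketch; both paths are hypothetical stand-ins for my old backup location and the new odrive folder):

# move the previously synced folders in next to the fresh placeholders;
# "real" folders and their .cloudf twins coexist until odrive reconciles them
mv /mnt/old-backup/odrive/* /mnt/storage/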
I will try this. Although I already tried, somewhat at random, to sync a bunch of .cloudf files that I didn't think existed, so who knows what damage I may have caused by doing these steps in reverse: moving the files into place first, then installing odrive and pointing it at them :-p
edit3: for future reference, of course, this is all my fault. I didn't need to leave the "delete boot disk when deleting this instance" box checked. If it had been unchecked I'd have had no trouble at all, so that's what I'm doing now; if I ever do get this going again, this problem won't arise again.
edit4: so I tried that and before running a single sync command I ran a status command and got this:
Sync Requests: 0
Background Requests: 1
Uploads: 0
Downloads: 0
Trash: 0
Waiting: 0
Not Allowed: 1
No clue what a "Not Allowed" might mean…
I figured maybe I hadn't chown'd my directories, so I killed odrive and tried that.
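Roughly what I ran (a sketch; the mount point is mine, and I'm assuming the agent binary name and nohup launch line from the default Linux install):

kill $(pgrep -f odriveagent)                            # stop the agent
sudo chown -R "$USER":"$USER" /mnt/storage              # fix ownership
nohup "$HOME/.odrive-agent/bin/odriveagent" > /dev/null 2>&1 &   # relaunch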
Then I restarted odrive and got:
Sync Requests: 0
Background Requests: 5
Uploads: 0
Downloads: 0
Trash: 0
Waiting: 0
Not Allowed: 1
Does this mean odrive is actually working, going through and checking and fixing things to make them work? Perhaps all this worrying was for nothing? And is the Not Allowed: 1 because I used the advice in that other thread to trick the system and generate the "Amazon Cloud Drive.cloudf" file, which I never actually needed in the first place because I already had the folder? In which case I do nothing and it'll fix itself?
edit5:
Ran status again a minute later and got:
odrive Make Cloud Storage THE WAY IT SHOULD BE.
isActivated: True hasSession: True
email: accountType: Google
syncEnabled: True version: prod 924
placeholderThreshold: neverDownload autoUnsyncThreshold: never
downloadThrottlingThreshold: unlimited uploadThrottlingThreshold: normal
autoTrashThreshold: never Mounts: 1
xlThreshold: never Backups: 0
Sync Requests: 0
Background Requests: 10
Uploads: 0
Downloads: 0
Trash: 0
Waiting: 7
Not Allowed: 1
Not sure I like odrive just trying to fix itself on its own without me having any ability to affect it at all. Getting that many "waiting" right away is a bit odd, although there were 2000 write operations per second on my system at the time; perhaps my computer cannot handle the speed at which odrive is trying to fix itself. I suppose 10 requests at a time is the default background setting?
edit6: so status doesn't provide me any real information on my progress. I'm not getting spammed with feedback like I would be from my own manual sync commands… how can I tell whether odrive is "stuck" or working perfectly? Or how much is left to do? Should I just go afk for a day?
Still seeing the same status header as before, with:
Sync Requests: 0
Background Requests: 9
Uploads: 0
Downloads: 0
Trash: 0
Waiting: 11
Not Allowed: 1
edit7:
this is what status shows now.
Sync Requests: 0
Background Requests: 6
Uploads: 0
Downloads: 0
Trash: 0
Waiting: 11
Not Allowed: 1
My monitoring charts also show that disk I/O has been cut from 2500 to 1500 operations per second (of course, I had previously thought 100-200 was the maximum rate of disk I/O operations per second, so this is still quite fast).
Is this somehow due to the ratio of background requests to stuck waiting requests? Since the waiting number never goes down, only up, I feel as though once an element is stuck waiting, it stays that way forever.
So my questions would be:
1) how can I tell when this process is done?
2) if I kill the odrive agent and restart it, where will I be left off? Will it resume? Will it retry those stuck waiting files? etc.
btw, there are some issues with a very, very small number of files, maybe 3 or 4 out of 2 million, that are in no way odrive's fault: their filenames use invalid characters outside the usual alphabet/Unicode range. I only mention them here because they may or may not have an impact on things getting stuck in "waiting" mode; I don't actually care about these particular files.
edit8:
So after the first 2 hours the system switched from heavy I/O writing to moderately heavy I/O reading…
Somehow I now have 338 .cloudf files (the total number of subdirectories in this cloud is around 80,000). Not really sure if it's working or not. The total amount of disk space used and the number of files on disk seem relatively constant, with only very minor changes, so at least it's not deleting the 3.7 TB of data that's already been synced, although it might be redownloading the .cloudf files for the folders that data is in. Given that I never ran any commands other than turning the odrive agent on, I'm not sure how it will communicate to me if or when it's done, other than maybe showing 0 background requests on a status command. It's still got 5. So I'll try to distract myself and go afk a bit.
edit9: now there are 352 .cloudf files. Hmm, not sure if that's good or bad. Is it doing all the .cloudf files over again, or is it somehow comparing new .cloudf files with directories and files that already exist? I really am not sure how background requests work, as I was only familiar with manual sync inputs. I also ran find with -mtime -1, which showed, in theory at least, that 77 .cloud files were created today and existed when I ran it. So the background requests are processing both .cloudf's and .cloud's, in an order I have no clue about? Both of these numbers seem somewhat small for analyzing 2 million files, although, since those files were already correct… maybe most of them are left untouched?
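For reference, that count came from something like this (a reconstruction; I didn't save the exact command, and the mount path is mine):

# count .cloud placeholders created/modified within the last 24 hours
find /mnt/storage -name '*.cloud' -mtime -1 | wc -l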
edit10: Another half hour or so later, still 77 .cloud files created or modified in the past day. So either the process is stuck and I should restart it, OR it's checking all the perfectly fine files that are already there, which is what I want it to do anyway… disk usage seems to suggest it's still working, but I'm not sure how to find any feedback about that. Status remains at 4 background requests, 11 waiting.
edit11: ran find . -mmin +1 -mmin -480 -type f | wc -l
It found 358… so if I've only managed to affect 358 files after 6 hours, that's pretty bad; given how hard the disks are chugging, I'd have expected a lot more. It feels like maybe it's stuck and I should kill and restart odrive, but I'm still worried that this magical background request system will start over instead of continuing if I use a kill command.
If I were confident it would pick up where it left off, I could even shut down the VM, add RAM, and restart it; this task seems pretty memory-intensive. When I created this VM instance I assumed it would just instantly be ready to download files using the .cloud files; I didn't realize I'd have to re-examine the entire lot. It's possible that the heavy disk usage is pagefile-related, though that sounds a little far-fetched.
edit12:
I finally found the flags for the odrive status command!!!
optional arguments:
-h, --help       show this help message and exit
--mounts         get status on mounts
--backups        get status on backup jobs
--sync_requests  get status on sync requests
--uploads        get status on uploads
--downloads      get status on downloads
--background     get status on background requests
--trash          get status of trash items
--waiting        get status of waiting items
--not_allowed    get status of not allowed items
python odrive.py status --background
used to be f 0%
UploadAsE 0%
backupsfrom2013summerihaveothercopiesof 0%
These aren't changing at all, so it's been stuck, possibly for four hours. Whoops. Also, all the waiting files are "illegal filename" cases.
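Since that output can apparently sit unchanged for hours, a simple poll would have caught the stall much sooner. A sketch (the 5-minute interval is arbitrary):

# log the background-request status every 5 minutes to spot stalls
while true; do
    date
    python "$HOME/.odrive-agent/bin/odrive.py" status --background
    sleep 300
done >> background.log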
edit13: ran syncstate; two entries are pink, two are blue, and the rest are white. White must mean odrive doesn't know anything about them. I am going to assume that the refresh command, used on a white directory, will make background requests try to fix it?
Active
Documents
Dec12011 clone from New Volume
Videos
New Volume Samsung 2.0TB This drive was cloned from dec 1 2011
Pictures
320GB settings only mostly
Windows Fat 1.0TB western digital black drive from 2010
WD 2TB Drive Bought Jan2010
C:
phone samsung s7
D:
E:
K:
Elements. That Silver Drive from ages ago backup may 24nd 2017
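For reference, the commands I mean here (a sketch; the mount point is mine, and "Videos" is just one of the white directories from the listing above):

python "$HOME/.odrive-agent/bin/odrive.py" syncstate /mnt/storage
python "$HOME/.odrive-agent/bin/odrive.py" refresh /mnt/storage/Videos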
edit14: didn't refresh anything, re-ran syncstate, and now 6 of them are blue… but when I run status I don't see any background requests, so it's odd for syncstate to be reporting more blueness. On top of that, if I enter some of the white directories, there are still many blue names inside, so maybe the past 6 hours weren't wasted after all. Still debating whether to run my refresh or not, but at the moment my single concurrent .cloud download is working fine, so I'm just gonna let that run (and hope it downloads more than those 77 files).
As a test, I ctrl-c'd my single odrive download request (the find . loop).
Instantly, background requests started populating. Great, I think I totally understand what's going on now, and how to monitor progress! I made mistakes, but it looks like I now know everything I need to know to solve them.
I can no longer edit this post, I edited it too many times. Sorry if this is annoying anyone, but keeping my notes here publicly has helped me think through this problem a lot.
This is the edit I wanted to add:
I’ll now note:
Background requests are ENTIRELY MEANINGLESS TO THIS PROCESS. Having status --background be stuck is not that big a deal; background requests are only for situations in which a new .cloud, .cloudf, or directory is needed. When everything is 100% fine, the files just turn from white to pink to blue silently. This isn't included in any status report, but it is something I can see by repeatedly using syncstate and targeting new directories as I follow progress.
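So the practical progress check ends up being file counts rather than status output; something like this, run periodically (the mount path is mine again):

# count remaining folder (.cloudf) and file (.cloud) placeholders;
# both should fall toward zero as the sync completes
find /mnt/storage -name '*.cloudf' | wc -l
find /mnt/storage -name '*.cloud' | wc -l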