B2 extremely slow

Hi @amazon12,
I’m glad those settings helped get everything running a little quicker.

The diagnostic shows you have about 850,000 objects to track, so the overhead will be fairly substantial. odrive should be able to deal with it, but it would be best to unsync folders that you can consider “archived” and don’t need quick access to. Are you able to unsync portions to reduce the scope?

Collapsing folders will also help to reduce overall B2 cost, since B2 charges for API calls (https://www.backblaze.com/b2/b2-transactions-price.html). b2_list_file_names calls would be reduced for any unsynced folders.
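As a rough back-of-the-envelope, the listing cost scales with object count. The per-call price and scan frequency below are assumptions for illustration only; check the pricing page linked above for current rates:

```shell
# Rough estimate of monthly list-call cost. ASSUMPTIONS (verify against the
# pricing page above): b2_list_file_names returns up to 1,000 entries per
# call, and Class C transactions cost about $0.004 per 1,000 calls.
OBJECTS=850000
CALLS_PER_SCAN=$(( OBJECTS / 1000 + 1 ))   # ~851 list calls per full scan
SCANS_PER_DAY=100                          # hypothetical scan frequency
MONTHLY=$(awk -v c="$CALLS_PER_SCAN" -v s="$SCANS_PER_DAY" \
  'BEGIN { printf "%.2f", c * s * 30 * 0.004 / 1000 }')
echo "~\$$MONTHLY/month in list transactions alone"
```

Unsyncing folders shrinks `OBJECTS` (and thus `CALLS_PER_SCAN`), and disabling background scanning shrinks `SCANS_PER_DAY`, which is why both levers matter.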

Additionally, there is a setting in the odrive menu to turn off periodic background scans. It is at the top, under “Ready to sync new changes”, and is called “Disable background scanning”. With this setting enabled, odrive will still upload any local changes, but it won’t interrogate the storage for remote changes unless you navigate into those folders (which will then kick off an on-demand remote query for that folder). Note that the setting resets on restart, so it needs to be re-enabled any time odrive is started.

I’m seeing bills for this, and you’re not kidding. :slight_smile:

About $18/month at this point.

I do wish OS X had the ability to “peek” into a zip like Windows does; that would totally solve this problem for me (it would be a nice odrive feature!).

What’s the best way to deal with zipping the most obnoxious archives up to minimize B2 transactions? Stop odrive, do my work, then restart? Or just let it run while I’m zipping and then deleting the origin directories?

Hi @amazon12,
If you use the unsync capability on a folder (Manage Disk Space) you can collapse it and remove its structure from odrive’s view. Whether this works for you may depend on whether you can cordon off particular sections, so that you can unsync them without losing access to areas you need immediate access to.

Zipping to an archive outside odrive and then moving that in should be fine. If you zip inside odrive, it will constantly interrogate that file to see if it can sync it, which can produce some general overhead. Try to keep the zip files to a reasonable size to facilitate consistent upload and eventual download.
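A minimal sketch of that workflow (the paths here are examples, not your actual layout): build the archive in a staging directory outside anything odrive watches, then move the finished file in as a single step, so odrive only ever sees one complete new file instead of a growing one. `tar` is used here for portability; `zip -r` works the same way.

```shell
set -e
# Example paths only -- substitute your real locations.
WORK=$(mktemp -d)
STAGING="$WORK/staging"          # outside any odrive-watched folder
ODRIVE_DIR="$WORK/odrive/B2"     # stands in for your synced folder
SRC="$WORK/some-project"         # the folder being archived
mkdir -p "$STAGING" "$ODRIVE_DIR" "$SRC"
echo "demo" > "$SRC/file.txt"

# 1. Build the archive in the staging area, where odrive cannot see it.
tar -czf "$STAGING/some-project.tar.gz" -C "$WORK" "some-project"

# 2. Move the completed archive into the synced folder in one step.
mv "$STAGING/some-project.tar.gz" "$ODRIVE_DIR/"
ls "$ODRIVE_DIR"
```

On the same filesystem, `mv` is an atomic rename, which is what makes the second step safe.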

Hi Tony,

So I finally zipped some stuff up. I’ve gone from several hundred thousand files under my B2 share to just this:

frankentosh:ODRIVE1 spork$ find . | wc -l
11481
frankentosh:ODRIVE1 spork$

Just sent a diag report. Right now it seems to be stuck (since sometime late last night) on the deleted items, trying to get them out of the trash:

[screenshot]

Hi @amazon12,
Before taking any other actions, please read all of the information below and let me know if you have any questions about anything.

It looks like the two zip files you are trying to upload may be hitting an error on B2. Instead of zipping them up and uploading those to B2, I think a better option is to just unsync those folders on the local side. The cost will actually be less than converting things into zip files, since you’ve already uploaded that data, and you will be able to keep those items as individual files and folders so you can retrieve specific items if/when needed.

The “sync” cost is really going to come down to the amount of data you have “exposed” locally. This means folders that have not been unsynced (converted into placeholders) yet.

My recommendation is to do the following:

  • Do not sync the local deletes (the items in the odrive Trash bin) to B2.
  • Restore the folders in the odrive trash that you zipped up (from within the “Trash bin” submenu you can click on the individual items to restore them, or click on “Restore all trashed items” to recover them all). This cancels the pending deletes so that they are never synced to the cloud. When you restore from trash, the items are restored as placeholders (unsynced).
  • In the future, for the folders that you do not need local access to and want to “archive”, right-click on those folders and select unsync. This will leave them as they are in the remote storage, but turn them into placeholder files locally (.cloudf).
  • Remove the large zip files you created. They will not be needed since you already have that data remotely.

Important: This plan will not work if you have already sent your deletes to B2 (clicked on “Empty trash and sync all deletes” in the “Trash bin” submenu). Please verify this so we know how to proceed.

Hi Tony,

I definitely already sent my deletes, I assumed that’s what’s hanging…

I’m totally OK with not having the unzipped stuff anywhere. It’s archival, and if I need to poke in the zip files I’m OK with unpacking them outside of an odrive-watched directory…

I get the idea of having some stuff only in B2, but I really like the idea of the data existing in at least two places (my desktop machine + cloud).

Hi @amazon12,
Are you seeing an error when clicking on the “Empty trash and sync all deletes” option, or is it just not doing anything once you do?

I need to have the team look at why your two files are not uploading. Can you send one more diagnostic after retrying the empty trash option?

Hi Tony,

There is no immediate error, but after a very long time I’ll get an error about a single directory. If it pops up again, I’ll record it and post here. I just hit the “empty trash and sync all deletes” button and sent a diagnostic.

Just got this, I guess about 2 hours later: [screenshot]

Sent a diag report after closing the alert.

Hi @amazon12,
I think the issue here is that deletes on B2 can be very slow, due to the nature of the storage. Since there aren’t really any folders, odrive has to delete every single file individually, so the more files you have, the longer it can take and the more often it can run into errors.

If you want, you can run a CLI script that will continue to retry the trash emptying, even if it hits several errors.
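A sketch of what such a retry loop could look like. The stand-in function below fails twice so the loop itself can be demonstrated; in practice you would replace its body with the actual odrive CLI empty-trash invocation (not shown here):

```shell
# Generic retry loop of the kind described above: keep re-issuing the
# empty-trash operation until it succeeds. Replace the stand-in function
# body with the real odrive CLI empty-trash command for actual use.
ATTEMPTS=0
empty_trash() {
  ATTEMPTS=$(( ATTEMPTS + 1 ))
  [ "$ATTEMPTS" -ge 3 ]      # simulate two transient errors, then success
}

until empty_trash; do
  echo "empty trash hit an error; retrying (attempt $ATTEMPTS)..."
  sleep 1                    # back off briefly between attempts
done
echo "trash emptied after $ATTEMPTS attempts"
```

The point is simply that transient B2 delete errors are retried automatically instead of stopping the whole operation.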

We also found an issue with larger file uploads (over 5 GB in size), which is preventing your two large files from uploading. We should be able to release a new version to address that next week.

Hi @amazon12,
We released a new version of the desktop client that fixes B2 uploads for files over 5 GB in size and tweaks the trash behavior. You may still need to run the empty trash command multiple times if you have lots and lots of items in there, so let me know if you want to explore the CLI method I mentioned in the previous post.

Hi Tony,

The update did fix the upload issue, so that’s all cleared up. I think I will need the command-line stuff to fix up the trash though - it’s pretty much stalled.

Hi @amazon12,

Thanks for the update!

The CLI script for trashing is here:

Don’t worry, it looks worse than it is. It basically comes down to copying and pasting some lines into the terminal, but let me know if you have any questions.

OK, I have that running… Should I see any progress? I’m seeing the “still items in trash” message echoed repeatedly, and running the status command shows 42 items in the trash - same as when I started about an hour ago. Should I send a debug, or is it just going to take a really long time?

Hi @amazon12,
It will probably just take a really long time. The folders in the 42 items listed could have thousands of files inside, so you won’t see visible progress on those until the entire folder is done.

Still going… :slight_smile:

Here’s something to look out for… I had two Desktop folders (let’s call them desktop-folder1 and desktop-folder2) syncing to B2. Apparently when I was moving stuff to Trash, I included my odrive/B2/odrive/desktop-folder1.cloudf and odrive/B2/odrive/desktop-folder2.cloudf. Sometime yesterday, I guess those placeholders got deleted and that triggered the delete of the actual desktop folders.

I guess that makes sense (the whole “sync any folder anywhere” feature is neat, but confusing when you start to think about where the files actually live), but just wanted to verify that’s expected behavior.

Also while I was poking around the B2 web interface it was sorely tempting to just tick the 40-some folders left and hit delete there… What would odrive’s reaction to that be?

Hi @amazon12,

That is correct. The sync engine will keep everything mirrored in terms of what exists on the cloud. Even though the placeholders don’t take up real space locally, they represent objects in the cloud. When you delete something locally it is put into the local odrive trash, which is really just a holding area for remote deletes. When you empty the trash, the remote delete command is sent to delete that item. Any other odrive clients running will see that the item is gone and reflect that change locally.

Deleting them through the B2 web interface would be okay. odrive will pick up the changes once they are processed and remove the queued items from its own trash.

I like this method (deleting on B2). Not sure if you’ve seen the interface, but even though the API just shows you a flat list of files per bucket, the web UI presents a tree (it takes a minute to build when logging in), and you can select a list of things to delete pretty easily. Nuking a bunch of WordPress directories last night took maybe 2-3 hours. Not fast, but with odrive I think we were running at about one per day.

Here’s what the UI looks like: [screenshot]

Cool @amazon12!

Deletes at the source should definitely be faster. odrive should reflect the changes once they are processed fully by B2.