Bulk sync operations can be resource-intensive, but I have never seen 60GB of memory utilized, even from users who have reported uploading over a million objects.
The complexity of the structure can impact performance. Is there anything of note about the structure of the 100,000+ files?
In any case, a bulk operation like this, with over 100K objects, will carry a lot of overhead. We recommend breaking the total into smaller chunks to alleviate some of that overhead. We also recommend unsyncing sections you do not need immediate access to, so that odrive can focus on just the items you need to access and sync on that machine. Since odrive is a full bi-directional, near real-time sync engine, the larger the data set, the more work it must continually do to keep everything in sync on both the local and remote sides.
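To illustrate the chunking idea, here is a minimal sketch (not an official odrive tool) of splitting a large set of placeholder paths into smaller batches and processing them one batch at a time. The `chunked` helper, the example paths, and the batch size of 5,000 are all illustrative assumptions, not odrive recommendations:

```python
import itertools

def chunked(iterable, size):
    """Yield successive lists of at most `size` items from `iterable`."""
    it = iter(iterable)
    while True:
        batch = list(itertools.islice(it, size))
        if not batch:
            return
        yield batch

# Hypothetical example: 100,000 placeholder paths, handled 5,000 at a time.
# (The batch size is an arbitrary choice for illustration only.)
paths = [f"file_{i}.cloud" for i in range(100_000)]
for batch in chunked(paths, 5_000):
    # Sync this batch (for example, by driving the odrive client over each
    # path), then let the engine settle before starting the next batch.
    pass
```

Working in batches like this keeps the number of in-flight objects bounded, so memory and CPU usage stay closer to steady-state levels throughout the operation.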
Once the bulk operation has completed, you should see standard behavior, without excessive memory or CPU usage.