Error: argument placeholderPath: invalid unicode_path value

Hello, hoping someone is able to assist. I am running a CLI sync and I am getting the error output below:

error: argument placeholderPath: invalid unicode_path value
Tim\xe2\x80\x99s iPhone.cloudf

I noticed that the apostrophe is not standard; it does not look like UTF-8 is being handled. Any suggestions? I have a bunch of folders that are showing this way, and they are not syncing when using sync.

Thank you

Hi @iptvcld,
Can you tell me what operating system you are running?
Can you also provide me with an example of the full command you are running?

I got this corrected by updating my Python to version 3.
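For anyone landing here later, a quick illustration of what those `\xe2\x80\x99` bytes in the error output are (the byte sequence is taken from the error message above; Python 3 decodes it transparently, which is consistent with the upgrade fixing the issue):

```python
# The odd bytes in the error output are just the UTF-8 encoding of a
# right single quotation mark (U+2019), the curly apostrophe in "Tim's".
raw = b"Tim\xe2\x80\x99s iPhone.cloudf"  # bytes exactly as shown in the error
name = raw.decode("utf-8")               # Python 3 handles this natively
print(name)  # Tim’s iPhone.cloudf
```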

However, maybe you might know the answer to this…
I have this refresh line in my script:
find /mnt/disks/MediaCapture/odrive/amazonmount -type d -exec "/mnt/cache/appdata/customscripts/odrive/bin/" refresh "{}" \;

And I am seeing this message while the refresh is going on: tput: No value for $TERM and no -T specified

Hi @iptvcld,
To clarify, you fixed your previous issue by running the CLI with python3 instead of python2?

For the tput issue, it sounds like an environmental problem where something is calling tput but $TERM is not defined.

What operating system and version is this?
What shell are you using?

Hi @Tony

Yes, I updated my Python from 2 to 3 and that fixed the first issue I had. Thanks for the info on my tput issue.

Sorry, one more thing. I am trying to understand how this refresh process works. How often is the refresh performed on the backend (without having to run the refresh command manually)? I find that sometimes I see the updated/new files on my local mount within 2 to 3 minutes, and sometimes it takes 10 to 15 minutes. I tried putting the refresh line in my script (refresh mountLocation), but I find this also takes time for the new changes to come down.

I then tried this:
find mountLocation -type d -exec "location/" refresh "{}" \;
The only thing with this is that it takes a long time to go through each folder to perform a refresh. BUT, it ensures it always gets the new files. Is there a way to make this faster, perhaps with multiple threads? Thank you!!
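In case it is useful, here is one way the per-folder refresh could be run a few directories at a time from Python instead of find (a sketch only; the CLI path below is a hypothetical placeholder, and the `refresh <dir>` invocation simply mirrors the command used in this thread):

```python
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical placeholder -- substitute your own CLI path, e.g. the
# /mnt/cache/appdata/customscripts/odrive/bin/ script from the thread.
ODRIVE = "/path/to/odrive.py"

def refresh_tree(root, cli=ODRIVE, workers=4):
    """Run `cli refresh <dir>` for every directory under root, a few at a time."""
    dirs = [dirpath for dirpath, _, _ in os.walk(root)]
    def refresh(d):
        # check=False: one failed folder should not abort the whole sweep
        return subprocess.run([cli, "refresh", d], check=False).returncode
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(refresh, dirs))
```

Note Tony's caution below about request rates: more parallelism means a burstier request pattern against the storage provider, so keep `workers` small.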

Hi @iptvcld,
Can you tell me what storage sources you have linked?

The speed of detecting remote changes will depend on the storage that you are using. If the storage supports an API request to get changes, then odrive should see new remote changes within 5 minutes (some sources are shorter than others). If the storage does not support querying for remote changes then it can be much longer since it will depend on odrive performing a full background scan.

odrive does a full local scan every 30 minutes, so files added or changed locally will be picked up within 30 minutes.

Trying to force faster “hard” remote refreshes, where you refresh in every folder, can cause issues for some storage sources. For example, you can be throttled or even temporarily restricted from access if they perceive the use to be excessive or against their terms of service. You will want to keep this in mind when trying to force faster reflection.
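If you do script refreshes anyway, one way to keep the request rate gentle is to pace the calls. A minimal sketch (the CLI path and the interval are placeholder assumptions, not odrive recommendations):

```python
import subprocess
import time

# Hypothetical placeholder for the odrive CLI path.
ODRIVE = "/path/to/odrive.py"

def paced_refresh(dirs, cli=ODRIVE, interval=2.0):
    """Refresh each directory sequentially, sleeping `interval` seconds
    between calls so the storage provider sees a gentle request rate."""
    codes = []
    for d in dirs:
        codes.append(subprocess.run([cli, "refresh", d], check=False).returncode)
        time.sleep(interval)
    return codes
```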

Thank you; I am using Amazon Cloud Drive.

Hi @iptvcld,
Amazon Drive’s remote content should refresh every 5 minutes.

They can be pretty picky about request rates, so keep that in mind if you are trying to perform recursive refreshes. They are actually the odrive integration that has the longest interval for requesting changes because of this.

Thank you @Tony!! Thanks for the information on the 5-minute refresh rate for Amazon. I will remove my refresh line from my script…

I think odrive is the only service out there now that Amazon allows to use their API.
