TL;DR
The OP's problem couldn't be reproduced on identical hardware, so I believe it to be a misreading of units (10 kB/s == 80 kbps). A similar, but not identical, problem prompted a bounty on the question, so I have also offered some diagnostic techniques for that. I've broken the response into sections:
- Original problem
- Diagnostic step suggestions
- Hypothesis of what may be the problem with `awscli`
1. Original problem (non-)reproduction
I set up a Raspberry Pi using the Raspbian OS (no GUI) with exactly the same libraries as the OP (all done with `sudo` for speed...):
sudo pip install flickr_api
sudo apt-get install nethogs
sudo apt-get install trickle
You then have to register for a Flickr account, create a (non-commercial) app and set up the API keys etc.; instructions for app creation are here, API key acquisition here or, via the Python wrapper, here.
I found a 1.9 MB image, named it `test.jpg`, and tried the following:
/usr/bin/python flickr.py test.jpg
trickle -s -u 10 /usr/bin/python flickr.py test.jpg
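The flickr.py script itself is not shown in this answer; a minimal sketch of what it might contain, assuming the `flickr_api` package's documented `set_keys`, `set_auth_handler` and `upload` helpers (the key values and the `auth.txt` token file are placeholders from the app-registration step above):

```python
# flickr.py -- hypothetical reconstruction; the original script is not shown.
# Assumes the flickr_api package's documented helpers.
import sys

def upload_photo(path):
    import flickr_api  # third-party: pip install flickr_api

    # Key/secret come from the Flickr app registration described above
    flickr_api.set_keys(api_key="YOUR_API_KEY", api_secret="YOUR_API_SECRET")
    # auth.txt holds the OAuth token saved during the one-off authorisation
    flickr_api.set_auth_handler("auth.txt")
    return flickr_api.upload(photo_file=path)

if __name__ == "__main__" and len(sys.argv) > 1:
    upload_photo(sys.argv[1])
```

Run as `/usr/bin/python flickr.py test.jpg`, as above.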
Results
- without `trickle`, the picture uploaded in ~3 secs, with `nethogs` showing a peak upload speed of ~400 kB/s
- with `trickle -u 10`, the picture took well over a minute to upload, and `nethogs` showed the upload rate peaking at 13 kB/s (briefly) before stabilising at 10.081 kB/s.
Everything working as expected
Thus, I believe the OP's issue is as simple as this: the upload limit in `trickle` is set in kilobytes per second (kB/s), and I think the OP was probably reading the speed in kilobits per second (kbps). Hence the factor of 8.
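A quick sanity check of the unit arithmetic makes the misreading easy to see:

```python
# trickle's -u limit is in kilobytes per second (kB/s); many network
# monitors report kilobits per second (kbps). 1 byte = 8 bits.
def kBps_to_kbps(kBps):
    return kBps * 8

print(kBps_to_kbps(10))  # a 10 kB/s trickle limit reads as 80 kbps
```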
2. However... Diagnostics for similar problems
@James_pic posted a bounty on this question, and it turns out that his scenario is not identical to the OP's. In particular, he observes no limiting when uploading to Amazon Web Services using `aws-cli`.
This being the case, I'm going to post some further diagnostic methods. The paper describing how `trickle` works is here; it describes a couple of scenarios where `trickle` won't work:
- When the user does not voluntarily run the program under `trickle`
- When the binary is statically linked (a second case, with smaller impact, since `trickle` relies on dynamic linking to interpose itself)
It is possible that the second case is relevant here, so I'll outline a method you might use to start diagnosing it with `strace`. There is another question on this site covering `strace` usage in general, and its specific use for finding the shared libraries a process loads is covered here.
As I haven't got a reproducible example with `aws-cli`, I'll show what I did to verify operation in the `flickr_api` case.
From the man page, `strace` intercepts and records the system calls which are called by a process and the signals which are received by a process. As we are investigating which system libraries are in use, we are interested in the `open` calls, so I used the following, in effect "wrapping" the commands from above with `strace` and then grepping the output for `open`:
strace /usr/bin/python flickr.py test.jpg 2>&1 | grep open > strace.out
strace trickle -s -u 10 /usr/bin/python flickr.py test.jpg 2>&1 | grep open > strace_with_trickle.out
Finally, I ran a diff of these two files:
diff strace_with_trickle.out strace.out
Unsurprisingly, the version with `trickle` output a few more lines; specifically these:
< open("/lib/arm-linux-gnueabihf/libbsd.so.0", O_RDONLY) = 3
< open("/lib/arm-linux-gnueabihf/libc.so.6", O_RDONLY) = 3
< open("/lib/arm-linux-gnueabihf/libgcc_s.so.1", O_RDONLY) = 3
< open("/usr/lib/trickle/trickle-overload.so", O_RDONLY) = 3
< open("/etc/ld.so.preload", O_RDONLY) = 3
< open("/usr/lib/arm-linux-gnueabihf/libcofi_rpi.so", O_RDONLY) = 3
< open("/etc/ld.so.cache", O_RDONLY) = 3
These are the lines covering the interpositioning of `trickle-overload.so` when the process is run. In particular, we can observe that the standard socket libraries are in use and dynamically loaded here, i.e. `libc.so.6` and `libbsd.so.0`, giving some confidence that `trickle` should work.
This technique is quite generic: take a working case and a non-working case, then `diff` the outputs; that should get you a lot further along the road.
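The same comparison can also be scripted; a small sketch that takes two lists of captured `open` lines (the sample data below is drawn from the traces above) and reports the ones unique to the `trickle`-wrapped run:

```python
# Sketch: given the "open(...)" lines captured from two strace runs,
# report the libraries opened only in the trickle-wrapped run.
def extra_opens(with_trickle, without_trickle):
    return sorted(set(with_trickle) - set(without_trickle))

# Sample data taken from the captures shown above
plain = [
    'open("/etc/ld.so.cache", O_RDONLY) = 3',
    'open("/lib/arm-linux-gnueabihf/libc.so.6", O_RDONLY) = 3',
]
wrapped = plain + [
    'open("/usr/lib/trickle/trickle-overload.so", O_RDONLY) = 3',
    'open("/lib/arm-linux-gnueabihf/libbsd.so.0", O_RDONLY) = 3',
]

for line in extra_opens(wrapped, plain):
    print(line)  # the interposed trickle-overload.so shows up here
```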
3. Finally... `awscli` hypothesis
I have a suspicion that `awscli` might spawn new processes as part of its uploading. If this is the case, I think `trickle` with the `-s` (standalone) option will not limit those child processes. You can use it in daemon mode instead, which might work better; you can see how to do this by typing `man trickled`.
In essence, you first start the daemon, e.g.
trickled -u 10
and then "subscribe" processes to it, e.g.
trickle /usr/bin/python flickr.py test.jpg
Caveat: I have no reproducible test case for the `awscli` problem, so this is speculation.