docker-py
An API client for docker written in Python
Or: SaltStack + docker-py AttributeError: 'RecentlyUsedContainer' object has no attribute 'lock'
I have been digging into this issue to no avail. I'm trying to use SaltStack to manage my docker images/containers but ran into this problem.
Initially I was using the salt state docker.running, but that presented as though the command did not exist. When I changed the state to docker.pulled, I got the traceback I posted over at that GitHub issue:
ID: scheduler
Function: docker.pulled
Result: False
Comment: An exception occurred in this state: Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/salt/state.py", line 1563, in call
**cdata['kwargs'])
File "/usr/lib/python2.7/dist-packages/salt/states/dockerio.py", line 271, in pulled
returned = pull(name, tag=tag, insecure_registry=insecure_registry)
File "/usr/lib/python2.7/dist-packages/salt/modules/dockerio.py", line 1599, in pull
client = _get_client()
File "/usr/lib/python2.7/dist-packages/salt/modules/dockerio.py", line 277, in _get_client
client._version = client.version()['ApiVersion']
File "/usr/local/lib/python2.7/dist-packages/docker/client.py", line 837, in version
return self._result(self._get(url), json=True)
File "/usr/local/lib/python2.7/dist-packages/docker/clientbase.py", line 86, in _get
return self.get(url, **self._set_request_timeout(kwargs))
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 310, in get
#: Stream response content default.
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 279, in request
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 374, in send
url=request.url,
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 155, in send
**proxy_kwargs)
File "/usr/local/lib/python2.7/dist-packages/docker/unixconn/unixconn.py", line 74, in get_connection
with self.pools.lock:
AttributeError: 'RecentlyUsedContainer' object has no attribute 'lock'
Started: 09:33:42.873628
Duration: 22.115 ms
After searching Google a bit more and coming up with nothing, I went ahead and started reading the source.
After reading unixconn.py and realizing that RecentlyUsedContainer was coming from urllib3, I went and tracked down the source for that and discovered that there was a _lock attribute that was changed to lock a while ago. That seemed strange.
I looked closer at the imports and realized that unixconn.py was attempting to use requests' built-in urllib3 and then falling back to the standalone urllib3. So I checked out the requests urllib3 and found that it did, indeed, have the _lock -> lock change. But it was newer than my version of requests. So I upgraded requests and tried again. Still no dice - same AttributeError.
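For reference, the fallback I'm describing in unixconn.py looks roughly like this (paraphrased from memory, not a verbatim copy of the docker-py source):

try:
    # Prefer the urllib3 copy vendored inside requests
    from requests.packages import urllib3
except ImportError:
    # Fall back to the standalone urllib3 package
    import urllib3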
Now things start to get weird.
In order to get information back to my salt master, I started mucking with the docker-py and urllib3 code on my salt minion. At first I raised exceptions with urllib3.__file__ to make sure I was using the right file. But occasionally the file name that it returned pointed to a file and a folder that did not exist. Usually it was displaying /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/_collections.pyc, but when I deleted that file, thinking that maybe the cached .pyc was causing a problem, it would still report that as the __file__, even though it didn't exist.
Then I discovered inspect.getfile. And I got the same bizarre behavior - I could delete the .pyc file and yet inspect.getfile(self.pools) would return the non-existent file.
To make life even better, I've added

raise Exception('Pining for the Fjords')

to /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/_collections.py, at the end of RecentlyUsedContainer.__init__. Yet that exception is never raised.
And I have just confirmed that something is in fact lying to me, because despite changing unixconn.py's get_connection to:
def get_connection(self, url, proxies=None):
    import inspect
    r = RecentlyUsedContainer(10)
    raise Exception(inspect.getfile(r.__class__) + '\n' + r.__doc__)
which returns /usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/_collections.pyc - yet when I go and edit that .pyc and modify RecentlyUsedContainer's docstring, I still get the original docstring.
And finally, when I edit /usr/lib/python2.7/dist-packages/urllib3/_collections.pyc and change its docstring (or do the same at that path with _collections.py instead)... I still get the same docstring!
Why is the wrong code getting executed here, and how can I find out where it is so I can fix the problem?
Source: (StackOverflow)
I am trying to use the Jenkins Docker plugin. Unfortunately I am not yet able to get Docker on RHEL to listen on a specific port.
I know I have to add:
DOCKER_OPTS="-H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock"
But my RHEL installation doesn't have the /etc/init/docker.conf file. So which file should I modify?
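For reference, this is the sort of thing I expected to end up with; /etc/sysconfig/docker is my guess at the RHEL equivalent of the Ubuntu-style config above, so please correct me if the daemon reads a different file:

# /etc/sysconfig/docker -- path and OPTIONS variable are my assumption for RHEL
OPTIONS="-H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock"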
Source: (StackOverflow)
I'm trying to use docker-py to run a docker container and drop me into a bash shell in that container. I get as far as running the container (I can see it with docker ps, and I can attach to it just fine with the native docker client), but when I use attach() from the official Python library, it just gives me an empty string in response. How do I attach to my bash shell?
>>> import docker
>>> c = docker.Client()
>>> container = c.create_container(image='d11wtq/python:2.7.7', command='/bin/bash', stdin_open=True, tty=True, name='docker-test')
>>> container
{u'Id': u'dd87e4ec75496d8369e0e526f343492f7903a0a45042d312b37859a81e575303', u'Warnings': None}
>>> c.start(container)
>>> c.attach(container)
''
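For what it's worth, my reading of the docs suggests attach() can also stream (sketch below, using docker-py's documented stream and logs parameters), but it appears to be read-only - it never wires up stdin - so I don't see how it would get me an interactive shell:

# Streamed (generator) form of attach(); read-only, no stdin
for chunk in c.attach(container, stream=True, logs=True):
    print(chunk)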
Source: (StackOverflow)
Considering this shell example:
echo "hello" | docker run --rm -ti -a stdin busybox \
/bin/sh -c "cat - >/out"
This will execute a busybox container and create a new file /out with the contents hello.
How would I accomplish this with docker-py?
The docker-py equivalent:
container = docker_client.create_container(
    'busybox',
    stdin_open=True,
    command='sh -c "cat - >/out"'
)
docker_client.start(container)
There is stdin_open=True, but where do I write the 'hello'?
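The closest thing I've found in docker-py is attach_socket(), which hands back a raw socket. Here's a sketch of how I imagine feeding stdin through it (untested; the params keys mirror the remote API's attach query parameters, which is an assumption on my part):

# Attach to stdin before starting, so the write isn't missed
sock = docker_client.attach_socket(container,
                                   params={'stdin': 1, 'stream': 1})
docker_client.start(container)
sock.send('hello\n')  # write to the container's stdin
sock.close()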
Source: (StackOverflow)
The aim here is to use a docker container as a secure sandbox to run untrusted python scripts in, but to do so from within python using the docker-py module, and be able to capture the output of that script.
I'm running a python script foo.py inside a docker container (it's set as the ENTRYPOINT command in my Dockerfile, so it's executed as soon as the container is run) and am unable to capture the output of that script. When I run the container via the normal CLI using
docker run -v /host_dirpath:/cont_dirpath my_image
(host_dirpath is the directory containing foo.py), I get the expected output of foo.py printed to stdout, which is just a dictionary of key-value pairs. However, I'm trying to do this from within python using the docker-py module, and somehow the script output is not being captured by the logs method. Here's the python code I'm using:
from docker import Client

docker = Client(base_url='unix://var/run/docker.sock',
                version='1.10',
                timeout=10)
contid = docker.create_container('my_image', volumes={"/cont_dirpath": ""})
docker.start(contid, binds={"/host_dirpath": {"bind": "/cont_dirpath"}})
print "Docker logs: " + str(docker.logs(contid))
Which just results in "Docker logs: " - nothing is being captured in the logs, neither stdout nor stderr (I tried raising an exception inside foo.py to test this).
The results I'm after are calculated by foo.py and are currently just printed to stdout with a python print statement. How can I get this included in the docker container logs so I can read it from within python? Or capture this output some other way from outside the container?
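One possibility I still need to rule out (my own guess, not a confirmed cause): start() returns immediately, so logs() may be running before foo.py has printed anything. This sketch blocks until the container exits before reading the logs - wait() and logs() are both standard docker-py methods:

contid = docker.create_container('my_image', volumes={"/cont_dirpath": ""})
docker.start(contid, binds={"/host_dirpath": {"bind": "/cont_dirpath"}})
docker.wait(contid)  # block until the container exits
print "Docker logs: " + str(docker.logs(contid))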
Any help would be greatly appreciated. Thanks in advance!
EDIT:
Still no luck with docker-py, but it is working well when running the container with the normal CLI using subprocess.Popen - the output is indeed correctly grabbed by stdout when doing this.
Source: (StackOverflow)
I'm trying to pull docker images from a private repository hosted on Docker Hub (https://registry.hub.docker.com/u/myname/myapp) using the docker remote API. The docs are not clear about how to specify the authentication credentials in a POST request like this:
curl -XPOST -H "X-Registry-Auth: base64_encoded_authconfig_object" "http://localhost:4243/images/create?fromImage=myname/myapp"
The docs also don't elaborate on how exactly the authconfig is generated; they talk about sending a base64-encoded JSON object with a structure like this:
{
    "index_url": {
        "username": "string",
        "password": "string",
        "email": "string",
        "serveraddress": "string"
    }
}
But they don't explain what index_url and serveraddress are. Are they:
index_url = https://registry.hub.docker.com/u/myname/myapp
serveraddress = https://registry.hub.docker.com
The above configuration gives me a 404; probably the registry hub private repo is not being recognized. I also tried base64-encoding the contents of my ~/.dockercfg:
{
    "https://index.docker.io/v1/": {
        "auth": "xxxxxxxxxxxxxxxxxxx==",
        "email": "myname@myemail.com"
    }
}
Could you tell me how to generate the base64-encoded authconfig object and get the above curl command working?
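For concreteness, here is how I'm currently generating the header value in Python (a sketch: I've flattened the structure because I don't know whether the index_url wrapper is required, and the serveraddress value is my guess):

import base64
import json

# Field names copied from the structure in the docs above; whether they
# need to be nested under an index URL is exactly what I'm unsure of.
auth_config = {
    'username': 'myname',
    'password': 'mypassword',
    'email': 'myname@myemail.com',
    'serveraddress': 'https://index.docker.io/v1/',  # my guess
}
encoded = base64.b64encode(json.dumps(auth_config))
# 'encoded' is what I then paste into the X-Registry-Auth header above.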
Thanks in advance
Docker version
Client version: 0.11.1
Client API version: 1.11
Go version (client): go1.2.1
Git commit (client): fb99f99
Server version: 0.11.1
Server API version: 1.11
Git commit (server): fb99f99
Go version (server): go1.2.1
Source: (StackOverflow)
I'm having trouble getting the output of a Python script I'm running in a docker container using the docker-py module in Python.
First, some context:
I've created a Dockerfile and built my image (with id 84730be6107f) in the normal way via the command line (docker build -t myimage /path/to/dockerfile). The Python script is executed as the ENTRYPOINT command in the Dockerfile:
ENTRYPOINT ["python", "/cont_dirpath/script.py"]
The directory containing the script is added (bound) to the container upon running it.
When I do this via the usual docker command line (in Ubuntu) using:
docker run -v /host_dirpath:/cont_dirpath 84730be6107f
I get the expected Python script output printed to stdout (displayed in the terminal - the script ends with a print result command). This is exactly the output I'm trying to get while doing this from within Python.
However, when trying to do this from within Python using the docker-py package, I am unable to view the results - the logs method returns an empty string. Here's the Python code I'm using:
from docker import Client

docker = Client(base_url='unix://var/run/docker.sock',
                version='1.10',
                timeout=10)
contid = docker.create_container('84730be6107f', volumes={"/cont_dirpath": ""})
docker.start(contid, binds={"/host_dirpath": {"bind": "/cont_dirpath"}})
print "Docker logs: \n" + str(docker.logs(contid))
I just get an empty string. I've tried piping the output of the script to echo and cat in the ENTRYPOINT command, but to no avail. How can I get the output of script.py, run as the ENTRYPOINT command, to be part of the log for that container, so I can read it using the docker-py methods? I've also tried the docker-py attach method (I think logs is just a wrapped version of this) with the same result.
I've considered just writing the script output to a file instead of stdout, but it isn't clear how I would access and read that file using the docker-py methods.
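On that last idea: my understanding from reading the docker-py source is that copy() can pull a path out of a container as a tar stream. The sketch below is untested, and '/cont_dirpath/results.txt' is a placeholder filename of my own:

import tarfile
from StringIO import StringIO

# copy() should return the path's contents wrapped in a tar archive
raw = docker.copy(contid, '/cont_dirpath/results.txt')
tar = tarfile.open(fileobj=StringIO(raw.read()))
print tar.extractfile('results.txt').read()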
Any insights or suggestions would be greatly appreciated!
Source: (StackOverflow)
Is there a way to do access control for push and pull on a private docker registry?
I have a machine where I am running a private docker registry like this:
sudo yum install python-devel libevent-devel python-pip gcc xz-devel
sudo python-pip install docker-registry[bugsnag]
gunicorn --access-logfile - --debug -k gevent -b 0.0.0.0:5000 -w 1 docker_registry.wsgi:application
I took this from the docker-registry GitHub, under the "Run the Registry" section.
This works fine, but then anybody can pull from and push to it. I would like to restrict who can pull/push to the registry.
Is there a way to do it?
Appreciate your response.
Source: (StackOverflow)
I'm using docker-py to manage docker containers for one of my apps. I want to retrieve the list of all running containers, identical to docker ps. But the containers method only returns an empty list.
>>> import docker
>>> c = docker.Client()
>>> container = c.create_container(image='base', command='/bin/bash', stdin_open=True, tty=True, name='docker-test')
>>> container
{u'Id': u'1998f08a031d8632400264af667d93162465a04348b61144a4c730c7c4462875', u'Warnings': None}
>>> c.start(container)
>>> c.containers(quiet=False, all=True, trunc=True, latest=False, since=None, before=None, limit=-1)
[]
But, of course, doing $ docker ps gives me this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1998f08a031d base:latest /bin/bash 3 minutes ago Up 3 minutes docker-test
What am I missing out here?
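In case it helps narrow things down, this is how I've been double-checking what the daemon itself thinks of the container (inspect_container is a documented docker-py call; dumping State is just my own sanity check, not a diagnosis):

info = c.inspect_container(container['Id'])
print info['State']  # e.g. whether the daemon considers it running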
Source: (StackOverflow)
I think this used to work up to a few months ago. The regular command-line docker:
>> docker run --name 'mycontainer' -d -v '/new' ubuntu /bin/bash -c 'touch /new/hello.txt'
>> docker run --volumes-from mycontainer ubuntu /bin/bash -c 'ls new'
>> hello.txt
works as expected but I cannot get this to work in docker-py:
from docker import Client  # docker-py
import time

docker = Client(base_url='unix://var/run/docker.sock')
response1 = docker.create_container('ubuntu', detach=True, volumes=['/new'],
                                    command="/bin/bash -c 'touch /new/hello.txt'",
                                    name='mycontainer2')
docker.start(response1['Id'])
time.sleep(1)
response = docker.create_container('ubuntu',
                                   command="/bin/bash -c 'ls new'",
                                   volumes_from='mycontainer2')
docker.start(response['Id'])
time.sleep(1)
print(docker.logs(response['Id']))
...always tells me that new doesn't exist. How is volumes_from supposed to be done with docker-py?
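One variation I still mean to try, based on my (possibly wrong) reading that some docker-py versions only honour volumes_from when it is passed to start() rather than create_container():

# Same second container, but with volumes_from moved to start()
response = docker.create_container('ubuntu',
                                   command="/bin/bash -c 'ls new'")
docker.start(response['Id'], volumes_from='mycontainer2')
time.sleep(1)
print(docker.logs(response['Id']))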
Source: (StackOverflow)
I'm trying to deploy an ELK stack with docker-py on a VirtualBox VM running Ubuntu 14.04. Currently running docker version 1.7 and am using the Docker Hub library official containers for elasticsearch, kibana, and logstash.
I have written a short script to pull, configure, and start the containers. The elasticsearch and kibana containers are running successfully, but the logstash container is exiting after about 23 seconds.
my logstash.start.py:
from docker import Client
import docker
import simplejson as json
import os

c = Client()

##### LOGSTASH #####

### configure container
logstash = c.create_container(
    image='logstash:latest',
    name='logstash',
    volumes=['/home/ops/projects/dockerfiles/scripts/elk/conf-dir', '/data/csv'],
    ports=[25826],
    host_config=docker.utils.create_host_config(
        binds={
            '/home/projects/dockerfiles/scripts/elk/conf-dir': {
                'bind': '/conf-dir',
                'ro': True
            },
            '/home/ops/csv': {
                'bind': '/data/csv',
                'ro': True
            }
        },
        links={
            'elasticsearch': 'elasticsearch'
        },
        port_bindings={
            25826: 25826
        }
    )
)

### start container
c.start(logstash)
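For diagnosis, this is what I run right after it dies to collect whatever the container managed to say (wait() and logs() are standard docker-py calls; checking the exit code is just my own sanity check):

print c.wait('logstash')  # blocks until the container exits; returns the exit code
print c.logs('logstash')  # stdout/stderr logstash produced before exiting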
Any suggestions?
Source: (StackOverflow)
I have a docker image built from the ubuntu base image with a few pieces of software installed.
I have a startup script, as below:
#!/bin/bash
/usr/local/sbin/process1 -d
/usr/local/sbin/process2 -d
/bin/bash
Now I use the docker-py python library to start several of these containers from a python file.
c = docker.Client(base_url='unix://var/run/docker.sock',
                  version='1.12',
                  timeout=10)
container = c.create_container("p12", command="/startup.sh", hostname=None, user=None,
                               detach=False, stdin_open=False, tty=False, mem_limit=0,
                               ports=None, environment=None, dns=None, volumes=None,
                               volumes_from=None, network_disabled=False, name=None,
                               entrypoint=None, cpu_shares=None, working_dir=None,
                               memswap_limit=0)
c.start(container, binds=None, port_bindings=None, lxc_conf=None,
        publish_all_ports=False, links=None, privileged=False,
        dns=None, dns_search=None, volumes_from=None, network_mode=None,
        restart_policy=None, cap_add=None, cap_drop=None)
This worked fine and I could start multiple containers (say 3) when I tested it on Ubuntu Desktop 14.04.1 LTS with docker-py version 1.10. It would start the dockers, and I could do a docker attach later and work in the terminal.
Now I have moved my testing environment to Ubuntu Server edition 14.04.1 LTS with docker-py version 1.12.
The issue I see is that, when I use the same script and try to start 3 dockers, after starting process1 and process2 as background processes, all the dockers simply exit. It appears as if /bin/bash doesn't execute at all.
If I execute the same docker image as "docker run -t -i p14 /startup.sh", then everything is fine again. The docker is started appropriately and I get terminal access.
The only issue is when I execute it via this python library.
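One difference I've noticed between the two runs: docker run -t -i allocates a tty and keeps stdin open, whereas my create_container call passes tty=False and stdin_open=False. This sketch is what I believe would be the closer equivalent of the working CLI command (my inference only; I haven't verified it fixes the exit):

container = c.create_container("p12", command="/startup.sh",
                               stdin_open=True, tty=True)
c.start(container)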
Has anybody had similar issues? Any idea how to debug this problem, or any pointers for a fix?
Thanks,
Kiran
Source: (StackOverflow)
I've run into issues pulling Docker images from a private Docker Hub repo using the Docker module of Ansible, so to sanity-check that code I decided to try pulling the image in question first using the shell. This also fails. What's going on here? If I SSH onto the box, I am able to run exactly the same command in the shell and it works, pulling the right image.
Isolated example play:
---
- hosts: <host-ip>
  gather_facts: True
  remote_user: ubuntu
  sudo: yes
  tasks:
    - include_vars: vars/client_vars.yml
    - name: Pull stardog docker image [private]
      shell: sudo docker pull {{stardog_docker_repo}}
    - name: Tag stardog docker image [private]
      shell: sudo docker tag {{stardog_docker_repo}} stardog_tag
The error that's being output is:
failed: [<host-ip>] => {"changed": true, "cmd": "sudo docker pull <org>/<image>:latest", "delta": "0:00:01.395931", "end": "2015-08-05 17:35:22.480811", "rc": 1, "start": "2015-08-05 17:35:21.084880", "warnings": []}
stderr: Error: image <org>/<image>:latest not found
stdout: Pulling repository <org>/<image>
FATAL: all hosts have already failed -- aborting
NB: I've sanitised my <org> and <image>, but rest assured the image identifier in the playbook and in the error logging perfectly match the image that I can successfully run in the shell over ssh by doing:
$ sudo docker pull <org>/<image>:latest
I'm aware of various GitHub issues (like this one I had when using the Docker module), patches, et cetera related to the docker-py library, but the thing here is that I'm just using the Ansible shell module. What have I missed?
Source: (StackOverflow)
I'm trying to start a jar file inside a running container. In order to do this I use the command docker exec -t -d [containerID] java -jar jarname.jar.
The command executes successfully, but I am unable to see its output.
Docker allocates a new tty on the host, but how can I see its output?
What am I doing wrong?
Source: (StackOverflow)
I have integrated the Docker Plugin with Jenkins.
https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin
I am sure the integration was done successfully, as after installing it I checked "Test Connection" and it showed my Docker version correctly.
Now I am trying to provision a slave from my build and it is failing. I am getting:
pending #5
(pending—All nodes of label ‘docker test’ are offline)
Can anyone please help me figure out how to debug this?
Source: (StackOverflow)