I seem to be running into odd issues accessing Docker volumes that are bind-mounted from my host (an EC2 instance) when the host directories sit on EBS drives attached to that instance.
For clarification, this is how my physical host is set up:
ubuntu@ip-10-0-1-123:/usr/bin$ df -k
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/xvda1      10178756 2226608   7433444  24% /
/dev/xvdb       10190136   23032   9626432   1% /zookeeper/data
/dev/xvdc       10190136   23032   9626432   1% /zookeeper/log
As you can see, I have a drive for the root filesystem and two additional EBS drives mounted on the host at /zookeeper/data and /zookeeper/log.
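For context, the mounts were set up along these lines (a rough sketch; ext4 is used here only as an example, and the device names match the df output above):

    # Format the attached EBS volumes and mount them on the ZooKeeper paths
    sudo mkfs -t ext4 /dev/xvdb
    sudo mkfs -t ext4 /dev/xvdc
    sudo mkdir -p /zookeeper/data /zookeeper/log
    sudo mount /dev/xvdb /zookeeper/data
    sudo mount /dev/xvdc /zookeeper/log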
When I run my container, I have my Docker volume mounts configured with docker-compose like so:
zookeeper1:
  image: lu4nm3/zookeeper:3.4.6
  hostname: zookeeper
  container_name: zookeeper
  restart: always
  privileged: true
  volumes:
    - /var/log/zookeeper:/var/log/zookeeper                # this one is on the root drive
    - /opt/zookeeper/3.4.6/conf:/opt/zookeeper/3.4.6/conf  # this one is on the root drive
    - /zookeeper/data:/zookeeper/data                      # this one is on EBS drive 1
    - /zookeeper/log:/zookeeper/log                        # this one is on EBS drive 2
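For reference, outside of compose the same container would be started with roughly the following docker run command (an illustrative sketch that mirrors the options above, not the exact command I use):

    docker run -d \
      --name zookeeper \
      --hostname zookeeper \
      --restart always \
      --privileged \
      -v /var/log/zookeeper:/var/log/zookeeper \
      -v /opt/zookeeper/3.4.6/conf:/opt/zookeeper/3.4.6/conf \
      -v /zookeeper/data:/zookeeper/data \
      -v /zookeeper/log:/zookeeper/log \
      lu4nm3/zookeeper:3.4.6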
So far this seems pretty normal and you would expect it to work, but the order in which the host is set up is where it gets strange. I've found that if I mount the EBS drives before installing Docker, everything works as expected. If I install Docker first and then mount the drives, however, the container runs into strange issues that seem to be tied to these new drives.
Has anyone run into similar issues when working with additional drives mounted on the physical host? The ordering-dependent behavior described above seems to indicate that Docker inspects the host's drives when the daemon/client initializes, and that any drives mounted after that point simply aren't visible to it. Does this sound accurate?
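One way I can think of to check this (a sketch, assuming a systemd-based host where the daemon runs as dockerd; older packages may name the process docker instead) is to compare the mounts the host sees with the mounts visible in the daemon's mount namespace, and to restart the daemon after mounting the drives:

    # Mounts as seen by the host:
    grep zookeeper /proc/self/mountinfo

    # Mounts as seen from the Docker daemon's mount namespace:
    sudo grep zookeeper /proc/$(pidof dockerd)/mountinfo

    # If the daemon's namespace is missing the new mounts, restarting it
    # after the drives are mounted should make them visible:
    sudo systemctl restart docker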