Sunday, November 29, 2015

Configure openVswitch with POX openflow controller

List all tables on the switch
ovs-ofctl dump-tables ovs-switch
List all flow entries on the switch
ovs-ofctl dump-flows ovs-switch
Delete all flow entries on port 100
ovs-ofctl del-flows ovs-switch "in_port=100"
Show port information on the switch
ovs-ofctl show ovs-switch
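
A flow entry can also be added with ovs-ofctl. This is a small sketch, not from the original notes; the port numbers and match fields are placeholders:

# forward everything arriving on port 1 out of port 2
ovs-ofctl add-flow ovs-switch "in_port=1,actions=output:2"
# a higher-priority entry matching a destination subnet
ovs-ofctl add-flow ovs-switch "priority=100,ip,nw_dst=192.168.10.0/24,actions=normal"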

Configure openVswitch with POX controller
Configure openVswitch on PC1

PC1 eth0.10 interface IP is 192.168.10.100

#We attach PC1's eth0.10 interface to the bridge; this bridge connects openVswitch on PC1 to the controller

$sudo ovs-vsctl add-br br0
$sudo ovs-vsctl add-port br0 eth0.10
$sudo ifconfig br0 192.168.10.100 netmask 255.255.255.0 
// Define the switch's policy if the connection with the controller is lost
// (standalone or secure), see the ovs-vsctl manual
$sudo ovs-vsctl set-fail-mode br0 standalone

#Attach OpenvSwitch to the Controller which is in 192.168.100.30

$ovs-vsctl set-controller br0 tcp:192.168.100.30:6633
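
To confirm the bridge is actually talking to the controller, the state can be checked like this (a quick sketch using the br0 bridge from above):

$sudo ovs-vsctl get-controller br0    # should print tcp:192.168.100.30:6633
$sudo ovs-vsctl get-fail-mode br0     # standalone or secure
$sudo ovs-vsctl show                  # shows "is_connected: true" under the Controller record once connected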

To remove openVswitch bridge connection

$sudo ovs-vsctl del-port br0 eth0.10
$sudo ovs-vsctl del-br br0

To remove the Controller

$sudo ovs-vsctl del-controller br0

http://www.ibm.com/developerworks/cn/cloud/library/1401_zhaoyi_openswitch/index.html#icomments

http://windysdn.blogspot.com/2013/10/configure-openvswitch-with-pox.html

Tuesday, November 17, 2015

Deploy kolla all in one node from source

Docker 1.8.2 and Ansible 1.9.4

sudo apt-get install docker-engine=1.8.2-0~trusty



Note when updating ansible, be sure to not only update the source tree, but also the “submodules” in git which point at Ansible’s own modules (not the same kind of modules, alas).
$ git pull --rebase
$ git submodule update --init --recursive

install Kolla Python dependencies

apt-get install -y python-dev python-pip libffi-dev libssl-dev
git clone https://git.openstack.org/openstack/kolla
cd kolla
sudo pip install -r requirements.txt
pip install -U python-openstackclient

Post-Install Setup

ssh-keygen -t rsa -C "ansi@ansi.com"

 ssh-copy-id deployer@host
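
A quick way to verify the key-based login and Ansible connectivity before deploying (deployer@host is a placeholder, and the inventory path is assumed to match the one used later in these notes):

# should log in without a password prompt
ssh deployer@host hostname
# Ansible should reach every node listed in the inventory
ansible all -i inventory/all-in-one -m ping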

Building behind a proxy


To use this feature, create a file called .header, with the following content for example:
ENV http_proxy=https://evil.corp.proxy:80
ENV https_proxy=https://evil.corp.proxy:80
Then create another file called .footer, with the following content:
ENV http_proxy=""
ENV https_proxy=""
Finally, pass them to the build script using the -i and -I flags:
tools/build.py -i .header -I .footer  keystone

Building the Ubuntu binary image has a bug; only CentOS works.

Can't build the base image because Docker fails to install systemd. The workaround is to
add -s devicemapper to DOCKER_OPTS (/etc/default/docker),
and also add --insecure-registry 172.22.2.81:4000 to DOCKER_OPTS:

DOCKER_OPTS="-s devicemapper --insecure-registry 172.22.2.81:4000"
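
After editing DOCKER_OPTS the daemon has to be restarted for the change to take effect; on Ubuntu 14.04 (upstart) that looks roughly like:

sudo service docker restart
# confirm the storage driver actually switched
docker info | grep "Storage Driver"    # expect: Storage Driver: devicemapper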

Deploy a v2 registry container

docker run -d -p 4000:5000 --restart=always --name registry registry:2
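
The registry can be sanity-checked with the v2 catalog endpoint (it is empty until images are pushed):

curl http://localhost:4000/v2/_catalog
# {"repositories":[]}  -- image names appear here after tools/build.py --push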

build images from source and push images to local registry

tools/build.py --registry localhost:4000 --base ubuntu --type source  --push  keystone 

tools/build.py  --base ubuntu --type source  -p <profile> (kolla-build.conf)

Deploy using local image

cp -r etc/kolla /etc/
./tools/kolla-ansible deploy
To use locally built images for an AIO deployment, set the value 'docker_pull_policy: "missing"' in globals.yml.

ansible-playbook -i inventory/all-in-one -e @/etc/kolla/globals.yml -e @./etc/kolla/passwords.yml site.yml --tags rabbitmq,mariadb
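
A minimal globals.yml fragment for this local-image setup might look like the following. docker_pull_policy comes from the note above; the other key names are assumptions based on the Liberty-era etc/kolla/globals.yml and may differ in your checkout:

cat >> /etc/kolla/globals.yml <<'EOF'
docker_pull_policy: "missing"        # use locally built images, do not pull
docker_registry: "172.22.2.81:4000"  # assumed key name for the local registry
kolla_base_distro: "ubuntu"          # assumed key name
kolla_install_type: "source"         # assumed key name
EOF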


Disable service deployment

Modify ansible/group_vars/all.yml (or override it in etc/kolla/globals.yml) to disable haproxy:
enable_haproxy: "no"

#stop all containers, then remove them
$docker stop $(docker ps -a -q)
$docker rm -v $(docker ps -a -q -f status=exited)
#remove all images
$docker rmi -f $(docker images -q)


$. /etc/default/docker
$ sudo docker -d -D $DOCKER_OPTS
Warning: '-d' is deprecated, it will be removed soon. See usage.
WARN[0000] please use 'docker daemon' instead.
INFO[0000] API listen on /var/run/docker.sock

FATA[0000] Error starting daemon: error initializing graphdriver: "/var/lib/docker" contains other graphdrivers: devicemapper; Please cleanup or explicitly choose storage driver (-s <DRIVER>)

$sudo rm -rf /var/lib/docker/devicemapper/



docker exec -t kolla_ansible /usr/bin/ansible localhost -m mysql_db -a "login_host='16.158.50.191' login_port='3306' login_user='root' login_password='password' name='keystone'"



The rsyslog container logs all services:

    docker exec -it rsyslog bash

The logs from all services in all containers may be read from
/var/log/SERVICE_NAME
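
For example, to peek at the collected logs without an interactive shell (SERVICE_NAME is whatever shows up in the listing):

docker exec rsyslog ls /var/log
docker exec rsyslog tail -n 50 /var/log/SERVICE_NAME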

POST-DEPLOY 

OPENRC

ansible-playbook -vvvv -i inventory/all-in-one -e @/etc/kolla/globals.yml -e @/etc/kolla/passwords.yml post-deploy.yml
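
Once post-deploy.yml has generated the credentials file (the path below is the usual Kolla location, but verify it in your tree), source it and exercise the cloud:

source /etc/kolla/admin-openrc.sh
openstack service list
openstack endpoint list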






Tuesday, November 10, 2015

docker network

By default, when a container is started, Docker copies certain host system files into the directory on the host that stores the container's configuration (/var/lib/docker/containers), then links the copies into the container with a bind mount (mount --bind). Run mount inside a running container:
root@c0bfa0fff107:/# mount |grep etc
/dev/disk/by-uuid/780b6c34-5a41-4be3-a954-171ce2f4c855 on /etc/resolv.conf type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/disk/by-uuid/780b6c34-5a41-4be3-a954-171ce2f4c855 on /etc/hostname type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/disk/by-uuid/780b6c34-5a41-4be3-a954-171ce2f4c855 on /etc/hosts type ext4 (rw,relatime,errors=remount-ro,data=ordered)


Ways to enter a container

docker exec
or nsenter

Even if the docker daemon is not responding and docker exec cannot be used, you can still enter the container with nsenter.

Most Linux distributions ship nsenter in the util-linux package.

Install nsenter:

$docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter





When building an image, keep in mind that the image's configuration can be supplied through some combination of configuration files, command-line arguments, and environment variables. This configuration should be decoupled from the image contents in order to keep the containerized application portable.
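
A hypothetical run command illustrating that decoupling; the image name, variable, and paths below are made up for the example:

# environment variable, host-mounted config file, and a command-line argument
docker run -d --name myapp \
  -e APP_LOG_LEVEL=debug \
  -v /etc/myapp/app.conf:/etc/myapp/app.conf:ro \
  myorg/myapp:1.0 --listen-port 8080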

Verify that mounts become visible in the container using "findmnt -o TARGET".

docker save and docker load will preserve image metadata (CMD, ENTRYPOINT, etc) and all layers.
docker export and docker import don't preserve metadata. This is by design and it's not being changed.
docker import will be extended with a --change option to allow CMD, ENTRYPOINT, ENV and many other options to be set. Please take a look at #7239 for the plan concerning this, especially #7239 (comment)
Squashing layers will also be implemented, so that will be another way to address the problem of flattening images while keeping metadata.




Basically the official party line from Solomon Hykes and Docker is that containers should be as close to single-process micro-servers as possible. There may be many such servers on a single 'real' server. If a process fails, you should just launch a new container rather than try to set up initialization etc. inside the container. So if you are looking for the canonical best practice, the answer is: no basic Linux services inside containers. It also makes sense when you think in terms of many containers running on a single node; do you really want them all running their own copies of these services?

Default Networks

When you install Docker, it creates three networks automatically. You can list these networks using the docker network ls command:
sudo iptables -nL
$ docker network ls
$docker network inspect bridge

sudo apt-get install bridge-utils

brctl show

Docker does not support automatic service discovery on the default bridge network. If you want to communicate with container names in this default bridge network, you must connect the containers via the legacy docker run --link option.

The default docker0 bridge network supports the use of port mapping and docker run --link to allow communications between containers in the docker0 network. These techniques are cumbersome to set up and prone to error. While they are still available to you as techniques, it is better to avoid them and define your own bridge networks instead.

Within a user-defined bridge network, linking is not supported. You can expose and publish container ports on containers in this network. This is useful if you want to make a portion of the bridge network available to an outside network.
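
A quick sketch of a user-defined bridge network in which containers reach each other by name and one port is published to the outside (image and network names are just examples):

docker network create --driver bridge isolated_nw
docker run -d --name web --net=isolated_nw -p 8080:80 nginx    # published port reachable from outside
docker run --rm --net=isolated_nw busybox ping -c 3 web        # name resolution works on the user-defined network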

docker behind proxy

Add http_proxy to /etc/default/docker (see the example below)
sudo restart docker
tail -f /var/log/upstart/docker.log
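
The lines added to /etc/default/docker typically look like this (the proxy host and port are placeholders); the file is sourced by the upstart job, so plain exports work:

# /etc/default/docker
export http_proxy="http://proxy.example.com:3128"
export https_proxy="http://proxy.example.com:3128"
export no_proxy="localhost,127.0.0.1"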

cgroup

$sudo  dpkg --get-selections | grep cgroup
cgroup-lite install
$dpkg-query -L cgroup-lite
/etc/init/cgroup-lite.conf
/bin/cgroups-mount
/bin/cgroups-umount
/usr/bin/cgroups-mount
/usr/bin/cgroups-umount

restart a container

1: docker ps -a | grep "<container-name>"
2: docker start <container-id>   (docker run <image-id> would create a new container instead)
3: docker attach <container-id>
   or: sudo docker exec -it <container-id> bash


The build’s context

The build's context is the set of files at a specified location PATH or URL. The PATH is a directory on your local filesystem. The URL is the location of a Git repository.
The build is run by the Docker daemon, not by the CLI. The first thing a build process does is send the entire context (recursively) to the daemon. In most cases, it's best to start with an empty directory as the context and keep your Dockerfile in that directory. Add only the files needed for building the Dockerfile.
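
A minimal illustration of that advice (names are arbitrary): keep the Dockerfile in an otherwise-empty directory and pass that directory as the context:

mkdir myapp && cd myapp              # empty directory used as the build context
cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
COPY app.sh /usr/local/bin/app.sh
CMD ["/bin/sh", "/usr/local/bin/app.sh"]
EOF
echo 'echo hello from the container' > app.sh
docker build -t myapp .              # "." sends only this directory to the daemon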


extract volume data inside a container to host

docker cp $ID:/var/jenkins_home  .
or copy host file  to volume data inside a container
docker cp file $ID:/var/jenkins


docker vs VM






[Figure: VM stack vs. container stack]

References

https://www.openstack.org/summit/tokyo-2015/videos/presentation/beginners-guide-to-containers-technology-and-how-it-actually-works

Gerrit checkout a file from a specified patch set

$git ls-remote | grep 92305   (the Gerrit change number)
From ssh://wufei@review.hpcloud.net:29418/hp/horizon-selenium.git
f43051737e948ea6278c4c53edaceb1f14ecc2cb refs/changes/05/92305/1
fb99aee1551ad6da69d21f3999969bce63b6a7c9 refs/changes/05/92305/2
0d380f82530903a42f59acf6a1ba73df5a0853e6 refs/changes/05/92305/3
c0892064a341c70481fa7dc1dc11910e94e1acc3 refs/changes/05/92305/4
3b445d0a02bc8439fdf96fd4e218d5a812eb317a refs/changes/05/92305/5
b65f5050a44a851689be0793712621303dccf60f refs/changes/05/92305/6
c8c78f49ffac0ee8ec91a61ca3ca51493f7e5aa3 refs/changes/05/92305/7
b6d5dd7102401b4bd53d6643ef49748d3300abb1 refs/changes/05/92305/8

$ git checkout b6d5dd7102401b4bd53d6643ef49748d3300abb1 <file-name>
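
An equivalent approach is to fetch the patch-set ref directly instead of grepping ls-remote; the remote name and file path below are placeholders:

$ git fetch origin refs/changes/05/92305/8
$ git checkout FETCH_HEAD -- <file-name>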



docker-machine -D create -d virtualbox mh-keystore

docker $(docker-machine config mh-keystore) run -d \
  -p "8500:8500" \
  -h "consul" \
  progrium/consul -server -bootstrap

Set your local environment to the mh-keystore machine:

$ docker-machine env mh-keystore
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/home/whg/.docker/machine/machines/mh-keystore"
export DOCKER_MACHINE_NAME="mh-keystore"
# Run this command to configure your shell:
# eval "$(docker-machine env mh-keystore)"

$ eval "$(docker-machine env mh-keystore)"


Create a Swarm master

$docker-machine -D create \
  -d virtualbox \
  --swarm --swarm-master \
  --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
  --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
  --engine-opt="cluster-advertise=eth1:2376" \
  mhs-demo0

Create another host and add it to the Swarm cluster:

$docker-machine -D create -d virtualbox \
  --swarm \
  --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
  --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
  --engine-opt="cluster-advertise=eth1:2376" \
  mhs-demo1

$docker-machine ls



Check which Linux bridge the Docker bridge network is attached to

$brctl show
bridge name     bridge id           STP enabled    interfaces
docker0         8000.0242a46ea17a   no

whg@devstack:~$ docker network ls
NETWORK ID          NAME                DRIVER
1868956e675b        bridge              bridge

$ docker network inspect bridge | grep bridge.name
"com.docker.network.bridge.name": "docker0"


From a network architecture point of view, all containers on a given Docker host are sitting on bridge interfaces. This means that they are just like physical machines connected through a common Ethernet switch; no more, no less.

docker diff <container-id>
docker history <image-id>



Pluggable Backends

Execution Drivers

$ docker info | grep "Execution Driver"
Execution Driver: native-0.2


If you are considering using Docker containers in production, you should make certain that the systems you are running have AppArmor or SELinux enabled and running. For the most part, both systems are reasonably equivalent. But in the Docker context, one notable limitation of SELinux is that it only works fully on systems that support filesystem metadata, which means that it won’t work for you on BTRFS-backed Docker daemons, for example. Only the devicemapper backend currently fully supports SELinux. Unfortunately, that backend is also not currently very stable for production. The OverlayFS backend is going to support this shortly. AppArmor, on the other hand, does not use filesystem metadata and so works on all of the Docker backends. Which one you use is going to be somewhat distribution-centric, so you may be forced to choose a filesystem backend based on which distribution you run.



If we have a client somewhere on the network that wants to talk to the nginx server running on TCP port 80 inside Container 1, the request will come into the eth0 interface on the Docker server. Because Docker knows this is a public port, it has spun up an instance of docker-proxy to listen on port 10520. So our request is passed to the docker-proxy process, which then makes the request to the correct container address and port on the private network. Return traffic from the request flows through the same route

When Docker creates a container, it creates two virtual interfaces, one of which sits on the server-side and is attached to the docker0 bridge, and one that is exposed into the container’s namespace

It would be entirely possible to run a container without the whole networking configuration that Docker puts in place for you. And the docker-proxy can be somewhat throughput limiting for very high-volume data services. So what does it look like if we turn it off? Docker lets you do this on a per-container basis with the --net=host command-line switch. There are times, like when you want to run high throughput applications, where you might want to do this
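
A rough example of host networking: with --net=host the container binds straight to the host's interfaces and no docker-proxy is started (nginx is just a convenient test image):

docker run -d --name web --net=host nginx
sudo netstat -tlnp | grep :80     # nginx listens directly on the host
ps aux | grep docker-proxy        # no docker-proxy entry for this container, unlike with -p port publishing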


Docker containers don’t have a separate kernel, as a VM does. Commands run from a Docker container appear in the process table on the host and, in most ways, look very much like any other process running on the system

To display the processes that a container is running, use the docker top command:
$docker top <container-id>


flannel



Running etcd under Docker

Running etcd in standalone mode

export HostIP="192.168.12.50"
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
 --name etcd quay.io/coreos/etcd \
 -name etcd0 \
 -advertise-client-urls http://${HostIP}:2379,http://${HostIP}:4001 \
 -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
 -initial-advertise-peer-urls http://${HostIP}:2380 \
 -listen-peer-urls http://0.0.0.0:2380 \
 -initial-cluster-token etcd-cluster-1 \
 -initial-cluster etcd0=http://${HostIP}:2380 \
 -initial-cluster-state new
docker exec etcd /etcdctl member list
docker exec etcd /etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
curl -L -X PUT http://127.0.0.1:2379/v2/keys/message -d value="Hello"
./etcdctl --endpoint http://10.0.0.10:2379 member list
etcdctl rm /message
etcdctl mkdir /foo-service
etcdctl set /foo-service/container1 localhost:1111
curl -L -X PUT http://127.0.0.1:2379/v2/keys/foo-service/container1 -d value="localhost:1111"
etcdctl ls /foo-service
$ cd flannel
$ docker run -v `pwd`:/opt/flannel -i -t google/golang /bin/bash -c "cd /opt/flannel && ./build"
$ curl -L http://127.0.0.1:4001/v2/keys/coreos.com/network/config \
  -XPUT -d value='{
    "Network": "10.0.0.0/8",
    "SubnetLen": 20,
    "SubnetMin": "10.10.0.0",
    "SubnetMax": "10.99.0.0",
    "Backend": {"Type": "udp", "Port": 7890}
  }'
source /run/flannel/subnet.env
$ sudo ifconfig docker0 ${FLANNEL_SUBNET}
$ sudo docker -d --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} &
Set up another flannel agent:
./flanneld  -etcd-endpoints http://10.0.0.10:2379
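
A quick sanity check on each host, assuming the udp backend and the default files/devices flannel creates:

cat /run/flannel/subnet.env                  # FLANNEL_SUBNET and FLANNEL_MTU written by flanneld
ip addr show flannel0                        # the overlay device created by the udp backend
docker run --rm busybox ip addr show eth0    # container address should fall inside FLANNEL_SUBNET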

Debug a service in container

This will install nsenter in /usr/local/bin and you will be able to use it immediately. nsenter might also be available in your distro (in the util-linux package).
docker run -v /usr/local/bin:/target jpetazzo/nsenter
First, figure out the PID of the container you want to enter:
PID=$(docker inspect --format {{.State.Pid}} <container_name_or_ID>)
Then enter the container:
nsenter --target $PID --mount --uts --ipc --net --pid
You will get a shell inside the container.