Thursday, December 17, 2015

Kubernetes up and running on bare-metal Ubuntu

Access a service in a public cloud using an SSH local port forward

$ kubectl describe service fabric8
Name:                   fabric8
Namespace:              default
Labels:                 expose=true,group=io.fabric8.apps,project=console,provider=fabric8,version=2.2.173
Selector:               expose=true,group=io.fabric8.apps,project=console,provider=fabric8
Type:                   LoadBalancer
IP:                     192.168.3.145
Port:                   <unset> 80/TCP
NodePort:               <unset> 30156/TCP
Endpoints:              172.16.48.3:9090
Session Affinity:       None

ssh -L <local_port>:<service_endpoint> remote_ssh_user@floating_ip

curl http://localhost:<local_port>
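
For example, using the fabric8 endpoint shown above (the local port, SSH user, and floating IP are placeholders):

ssh -L 8080:172.16.48.3:9090 remote_ssh_user@floating_ip
curl http://localhost:8080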


change a deployment online

kubectl edit deployment exposecontroller
kubectl delete deployment exposecontroller



Labels are key/value string pairs attached to pods; use any keys or values you like. A selector is a set of key/value pairs used to query for the pods you need, i.e. to select all pods whose labels match those in the selector.
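
For example (label keys taken from the fabric8 service above), a label selector on the command line looks like:

kubectl get pods -l project=console,provider=fabric8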


kubernetes/cluster/ubuntu/binaries$ ./kubectl describe pod jnlp-slave

  FirstSeen     LastSeen        Count   From                                    Type                Reason          Message
  ---------     --------        -----   ----                                    ----                ------          -------

  2m            2m              1       {default-scheduler }                    Normal              Scheduled       Successfully assigned jnlp-slave-27ab4c18d2d7a to 10.25.237.100

  1m            1m              1       {kubelet 10.25.237.100}                 Warning             FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for gcr.io/google_containers/pause:2.0, this may be because there are no credentials on this request.  details: (API error (500): Get https://gcr.io/v1/_ping: dial tcp 64.233.189.82:443: i/o timeout\n)"


  30s   30s     1       {kubelet 10.25.237.100}         Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for gcr.io/google_containers/pause:2.0, this may be because there are no credentials on this request.  details: (API error (500): Get https://gcr.io/v1/_ping: dial tcp 64.233.187.82:443: i/o timeout\n)"


# Workaround: gcr.io is unreachable, so pull the pause image from Docker Hub, retag it, and copy it to the node
docker pull docker.io/kubernetes/pause
docker tag docker.io/kubernetes/pause gcr.io/google_containers/pause:2.0
docker save gcr.io/google_containers/pause:2.0 > pause.tar
scp pause.tar s1:~/
ssh s1
docker load -i pause.tar



Upstart service logs are under /var/log/upstart




Docker daemon can't start after installing Kubernetes in Ubuntu

We used the Kubernetes Ubuntu provider scripts to deploy a multi-node cluster.
After installing with kube-up.sh, the script overrides /etc/default/docker, and the Docker daemon fails to start with this configuration:
DOCKER_OPTS=" -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=172.16.48.1/24 --mtu=8951 --insecure-registry 10.69.1.246 --insecure-registry tobe.com -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=172.16.19.1/24 --mtu=8951"
A: in ubuntu/reconfDocker.sh, comment out the line "source /etc/default/docker"





Proxy


Wednesday, December 2, 2015

Effective Python

The Pylint tool (http://www.pylint.org/) is a popular static analyzer for Python
source code. Pylint provides automated enforcement of the PEP 8 style guide and
detects many other types of common errors in Python programs.

Python supports closures: functions that refer to variables from the scope in which they were defined. This is why the helper function is able to access the group argument to sort_priority.

Functions are first-class objects in Python, meaning you can refer to them directly, assign them to variables, pass them as arguments to other functions, compare them in expressions and if statements, etc. This is how the sort method can accept a closure function as the key argument.

Python has specific rules for comparing tuples. It first compares items in index zero, then index one, then index two, and so on. This is why the return value from the helper closure causes the sort order to have two distinct groups.
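
A minimal sketch of what the book is describing (names follow its sort_priority example; the input values are illustrative):

def sort_priority(values, group):
    def helper(x):
        # helper is a closure: it can read `group` from the enclosing scope.
        # Returning a tuple exploits tuple comparison: (0, x) sorts before (1, y),
        # so items in the priority group form the first of two distinct groups.
        if x in group:
            return (0, x)
        return (1, x)
    values.sort(key=helper)   # functions are first-class, so helper can be passed as key

numbers = [8, 3, 1, 2, 5, 4, 7, 6]
group = {2, 3, 5, 7}
sort_priority(numbers, group)
print(numbers)  # [2, 3, 5, 7, 1, 4, 6, 8]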


This shows that the system calls will all
run in parallel from multiple Python threads even though they’re limited by the GIL. The
GIL prevents my Python code from running in parallel, but it has no negative effect on
system calls. This works because Python threads release the GIL just before they make
system calls and reacquire the GIL as soon as the system calls are done.
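
A small illustration of that behavior, following the book's slow_systemcall pattern (the 0.1-second select timeout and thread count are illustrative):

import select, socket, time
from threading import Thread

def slow_systemcall():
    # select() blocks in a system call; the thread releases the GIL while it waits.
    select.select([socket.socket()], [], [], 0.1)

start = time.time()
threads = [Thread(target=slow_systemcall) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# All five blocking calls overlap, so this prints roughly 0.1s rather than 0.5s.
print('Took %.3f seconds' % (time.time() - start))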


Python can work around all these issues with coroutines. Coroutines let you have many
seemingly simultaneous functions in your Python programs. They’re implemented as an
extension to generators (see Item 16: “Consider Generators Instead of Returning Lists”).
The cost of starting a generator coroutine is a function call. Once active, they each use less
than 1 KB of memory until they’re exhausted.
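
A minimal sketch of a generator coroutine (the running-minimum example is illustrative):

def minimize():
    current = yield            # the coroutine receives its first value via send()
    while True:
        value = yield current  # yield the running minimum, then wait for the next value
        current = min(value, current)

it = minimize()
next(it)             # prime the generator up to the first yield
print(it.send(10))   # 10
print(it.send(4))    # 4
print(it.send(22))   # 4
print(it.send(-1))   # -1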


Although it looks simple to the programmer, the multiprocessing module and
ProcessPoolExecutor class do a huge amount of work to make parallelism possible.
In most other languages, the only touch point you need to coordinate two threads is a
single lock or atomic operation. The overhead of using multiprocessing is high
because of all of the serialization and deserialization that must happen between the parent
and child processes.

multiprocessing provides more advanced
facilities for shared memory, cross-process locks, queues, and proxies. But all of these
features are very complex. It’s hard enough to reason about such tools in the memory
space of a single process shared between Python threads

You can start by using the
ThreadPoolExecutor class to run isolated, high-leverage functions in threads. Later,
you can move to the ProcessPoolExecutor to get a speedup. Finally, once you’ve
completely exhausted the other options, you can consider using the multiprocessing
module directly

Moving CPU bottlenecks to C-extension modules can be an effective way to
improve performance while maximizing your investment in Python code. However,
the cost of doing so is high and may introduce bugs.
The multiprocessing module provides powerful tools that can parallelize
certain types of Python computation with minimal effort.
The power of multiprocessing is best accessed through the
concurrent.futures built-in module and its simple
ProcessPoolExecutor class
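
A minimal sketch of that progression (the gcd workload and input numbers are illustrative):

from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def gcd(pair):
    a, b = pair
    low = min(a, b)
    for i in range(low, 0, -1):
        if a % i == 0 and b % i == 0:
            return i

numbers = [(1963309, 2265973), (2030677, 3814172), (1551645, 2229620)]

if __name__ == '__main__':
    # CPU-bound work in threads stays serialized by the GIL...
    with ThreadPoolExecutor(max_workers=2) as pool:
        thread_results = list(pool.map(gcd, numbers))
    # ...while ProcessPoolExecutor pickles each item to a child process,
    # computes in parallel, and pickles the result back to the parent.
    with ProcessPoolExecutor(max_workers=2) as pool:
        process_results = list(pool.map(gcd, numbers))
    print(thread_results == process_results)  # True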


Sunday, November 29, 2015

Configure Open vSwitch with the POX OpenFlow controller

# List all tables on the switch
ovs-ofctl dump-tables ovs-switch
# List all flow entries on the switch
ovs-ofctl dump-flows ovs-switch
# Delete all flow entries for port 100
ovs-ofctl del-flows ovs-switch "in_port=100"
# Show port information on the switch
ovs-ofctl show ovs-switch

Configure Open vSwitch with the POX controller
Configure Open vSwitch on PC1

PC1's eth0.10 interface IP is 192.168.10.100

# Attach PC1's eth0.10 interface to the bridge that connects the Open vSwitch on PC1 to the controller

$sudo ovs-vsctl add-br br0
$sudo ovs-vsctl add-port br0 eth0.10
$sudo ifconfig br0 192.168.10.100 netmask 255.255.255.0 
// Define the switch's fail policy if the connection to the controller is lost
// (standalone or secure; see the ovs-vsctl manual)
root@debian:/# ovs-vsctl set-fail-mode ovs-switch standalone

# Attach the Open vSwitch to the controller at 192.168.100.30

$ovs-vsctl set-controller br0 tcp:192.168.100.30:6633

To remove the Open vSwitch bridge and port

$sudo ovs-vsctl del-br br-0
$sudo ovs-vsctl del-port br-0 eth0.10

To remove the Controller

$sudo ovs-vsctl del-controller br-0 

http://www.ibm.com/developerworks/cn/cloud/library/1401_zhaoyi_openswitch/index.html#icomments

http://windysdn.blogspot.com/2013/10/configure-openvswitch-with-pox.html

Tuesday, November 17, 2015

Deploy Kolla all-in-one from source

Docker 1.8.2 and Ansible 1.9.4

sudo apt-get install docker-engine=1.8.2-0~trusty



Note when updating ansible, be sure to not only update the source tree, but also the “submodules” in git which point at Ansible’s own modules (not the same kind of modules, alas).
$ git pull --rebase
$ git submodule update --init --recursive

install Kolla Python dependencies

git clone https://git.openstack.org/openstack/kolla
cd kolla
sudo pip install -r requirements.txt
apt-get install -y python-dev python-pip libffi-dev libssl-dev
pip install -U python-openstackclient

Post-Install Setup

ssh-keygen -t rsa -C "ansi@ansi.com"

 ssh-copy-id deployer@host

Building behind a proxy


To use this feature, create a file called .header, with the following content for example:
ENV http_proxy=https://evil.corp.proxy:80
ENV https_proxy=https://evil.corp.proxy:80
Then create another file called .footer, with the following content:
ENV http_proxy=""
ENV https_proxy=""
Finally, pass them to the build script using the -i and -I flags:
tools/build.py -i .header -I .footer  keystone

Building the Ubuntu binary image has a bug; only CentOS works.

Can't build the base image because Docker fails to install systemd. The workaround is to add -s devicemapper to DOCKER_OPTS in /etc/default/docker, and also add --insecure-registry 172.22.2.81:4000 to DOCKER_OPTS:

DOCKER_OPTS="-s devicemapper --insecure-registry 172.22.2.81:4000"

Deploy a v2 registry container

docker run -d -p 4000:5000 --restart=always --name registry registry:2

build images from source and push images to local registry

tools/build.py --registry localhost:4000 --base ubuntu --type source  --push  keystone 

tools/build.py --base ubuntu --type source -p <profile>   (profiles are defined in kolla-build.conf)

Deploy using local image

cp etc/kolla /etc/
./tools/kolla-ansible deploy
To use locally built images for an AIO deployment, set docker_pull_policy: "missing" in globals.yml.

ansible-playbook -i inventory/all-in-one -e @/etc/kolla/globals.yml -e @./etc/kolla/passwords.yml site.yml --tags rabbitmq,mariadb


Disable service deployment

Modify ansible/group_vars/all.yml (or override it in etc/kolla/globals.yml) to disable haproxy:
enable_haproxy: "no"

# stop all containers, then remove them
$ docker stop $(docker ps -a -q)
$ docker rm -v $(docker ps -a -q -f status=exited)
# remove all images
$ docker rmi -f $(docker images -q)


$. /etc/default/docker
$ sudo docker -d -D $OPT
Warning: '-d' is deprecated, it will be removed soon. See usage.
WARN[0000] please use 'docker daemon' instead.
INFO[0000] API listen on /var/run/docker.sock

FATA[0000] Error starting daemon: error initializing graphdriver: "/var/lib/docker" contains other graphdrivers: devicemapper; Please cleanup or explicitly choose storage driver (-s <DRIVER>)

$sudo rm -rf /var/lib/docker/devicemapper/



docker exec -t kolla_ansible /usr/bin/ansible localhost -m mysql_db -a "login_host='16.158.50.191' login_port='3306' login_user='root' login_password='password' name='keystone'"



The rsyslog container logs all services:

    docker exec -it rsyslog bash

The logs from all services in all containers may be read from
/var/log/SERVICE_NAME

POST-DEPLOY 

OPENRC

ansible-playbook -vvvv -i inventory/all-in-one -e @/etc/kolla/globals.yml -e @/etc/kolla/passwords.yml post-deploy.yml






Tuesday, November 10, 2015

docker network

By default, when a container is started, Docker copies certain host system files into the host directory that holds the container's configuration (/var/lib/docker/containers), then uses bind mounts (mount --bind) to link the copies into the container. Run mount inside the started container:
root@c0bfa0fff107:/# mount |grep etc
/dev/disk/by-uuid/780b6c34-5a41-4be3-a954-171ce2f4c855 on /etc/resolv.conf type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/disk/by-uuid/780b6c34-5a41-4be3-a954-171ce2f4c855 on /etc/hostname type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/disk/by-uuid/780b6c34-5a41-4be3-a954-171ce2f4c855 on /etc/hosts type ext4 (rw,relatime,errors=remount-ro,data=ordered)


Ways to get into a container

docker exec
or nsenter

Even if the Docker daemon is not responding and docker exec cannot be used, nsenter can still get you into the container.

Most Linux distributions ship nsenter in the util-linux package.

Install nsenter:

$ docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter





When building an image, keep in mind that the image's configuration can be supplied through a combination of configuration files, command-line arguments, and environment variables. This configuration should be decoupled from the image contents in order to keep the containerized application portable.
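
For example (the image name, variable, and file paths are illustrative), configuration can be injected at run time instead of baked into the image:

docker run -d \
  -e DB_HOST=10.0.0.5 \
  -v $(pwd)/app.conf:/etc/myapp/app.conf:ro \
  myapp:latest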

Verify that mounts become visible in the container using "findmnt -o TARGET".

docker save and docker load will preserve image metadata (CMD, ENTRYPOINT, etc) and all layers.
docker export and docker import don't preserve metadata. This is by design and it's not being changed.
docker import will be extended with a --change option to allow CMD, ENTRYPOINT, ENV and many other options to be set. Please take a look at #7239 for the plan concerning this, especially #7239 (comment)
Squashing layers will also be implemented, so that will be another way to address the problem of flattening images while keeping metadata.




Basically the official party line from Solomon Hykes and Docker is that containers should be as close to single-process micro-servers as possible; there may be many such servers on a single 'real' server. If a process fails, you should just launch a new container rather than try to set up init systems inside the containers. So if you are looking for the canonical best practice, the answer is: no basic Linux services inside the container. It also makes sense when you think in terms of many Docker containers running on a single node — do you really want them all to run their own copies of these services?

Default Networks

When you install Docker, it creates three networks automatically. You can list these networks using the docker network ls command:
sudo iptables -nL
$ docker network ls
$docker network inspect bridge

sudo apt-get install bridge-utils

brctl show

Docker does not support automatic service discovery on the default bridge network. If you want to communicate with container names in this default bridge network, you must connect the containers via the legacy docker run --link option.

The default docker0 bridge network supports the use of port mapping and docker run --link to allow communications between containers in the docker0 network. These techniques are cumbersome to set up and prone to error. While they are still available to you as techniques, it is better to avoid them and define your own bridge networks instead.

Within a user-defined bridge network, linking is not supported. You can expose and publish container ports on containers in this network. This is useful if you want to make a portion of the bridge network available to an outside network.
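
For example (the network and container names are illustrative), containers on a user-defined bridge can reach each other by name without --link:

docker network create my_bridge
docker run -d --net=my_bridge --name db redis
docker run -d --net=my_bridge --name web -p 8080:80 nginx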

docker behind proxy

Edit /etc/default/docker to add http_proxy (see the sketch after these steps)
sudo restart docker
tail -f /var/log/upstart/docker.log
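
A sketch of the line to add to /etc/default/docker (the proxy address is illustrative):

export http_proxy="http://proxy.example.com:3128"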

cgroup

$sudo  dpkg --get-selections | grep cgroup
cgroup-lite install
$dpkg-query -L cgroup-lite
/etc/init/cgroup-lite.conf
/bin/cgroups-mount
/bin/cgroups-umount
/usr/bin/cgroups-mount
/usr/bin/cgroups-umount

restart a container

1: docker ps -a | grep "<container-name>"
2: docker start <container-id>
3: docker attach <container-id>

or start a fresh container from the image: docker run <image-id>
or get a shell inside the running container: sudo docker exec -it <container-id> bash


The build’s context

The build's context is the set of files at a specified location, a PATH or a URL. The PATH is a directory on your local filesystem; the URL is the location of a Git repository.
The build is run by the Docker daemon, not by the CLI. The first thing a build process does is send the entire context (recursively) to the daemon. In most cases, it's best to start with an empty directory as context and keep your Dockerfile in that directory, adding only the files needed for the build.
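
For example (the directory and file names are illustrative), keep the context minimal:

mkdir app-context
cp Dockerfile app.py requirements.txt app-context/
cd app-context && docker build -t myapp .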


extract volume data inside a container to host

docker cp $ID:/var/jenkins_home  .
or copy a host file into a volume inside a container
docker cp file $ID:/var/jenkins


docker vs VM

[Diagrams: VM stack vs. container stack]

References

https://www.openstack.org/summit/tokyo-2015/videos/presentation/beginners-guide-to-containers-technology-and-how-it-actually-works

Gerrit: check out a file from a specific patch set

$ git ls-remote | grep 92305    (92305 is the Gerrit change number; each ref below is one patch set)
From ssh://wufei@review.hpcloud.net:29418/hp/horizon-selenium.git
f43051737e948ea6278c4c53edaceb1f14ecc2cb refs/changes/05/92305/1
fb99aee1551ad6da69d21f3999969bce63b6a7c9 refs/changes/05/92305/2
0d380f82530903a42f59acf6a1ba73df5a0853e6 refs/changes/05/92305/3
c0892064a341c70481fa7dc1dc11910e94e1acc3 refs/changes/05/92305/4
3b445d0a02bc8439fdf96fd4e218d5a812eb317a refs/changes/05/92305/5
b65f5050a44a851689be0793712621303dccf60f refs/changes/05/92305/6
c8c78f49ffac0ee8ec91a61ca3ca51493f7e5aa3 refs/changes/05/92305/7
b6d5dd7102401b4bd53d6643ef49748d3300abb1 refs/changes/05/92305/8

$ git checkout b6d5dd7102401b4bd53d6643ef49748d3300abb1 <file-name>



docker-machine -D create -d virtualbox mh-keystore

docker $(docker-machine config mh-keystore) run -d \
    -p "8500:8500" \
    -h "consul" \
    progrium/consul -server -bootstrap
Set your local environment to the mh-keystore machine:
$ docker-machine env mh-keystore
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/home/whg/.docker/machine/machines/mh-keystore"
export DOCKER_MACHINE_NAME="mh-keystore"
# Run this command to configure your shell:
# eval "$(docker-machine env mh-keystore)"

$ eval "$(docker-machine env mh-keystore)"


Create a Swarm master

$ docker-machine -D create \
    -d virtualbox \
    --swarm --swarm-master \
    --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
    --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
    --engine-opt="cluster-advertise=eth1:2376" \
    mhs-demo0

Create another host and add it to the Swarm cluster.

$ docker-machine -D create -d virtualbox \
    --swarm \
    --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
    --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
    --engine-opt="cluster-advertise=eth1:2376" \
    mhs-demo1

$docker-machine ls



Check which Linux bridge the Docker bridge network corresponds to

$ brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.0242a46ea17a       no
whg@devstack:~$ docker network ls
NETWORK ID          NAME                DRIVER
1868956e675b        bridge              bridge
$ docker network inspect bridge | grep bridge.name
        "com.docker.network.bridge.name": "docker0"


From a network architecture point of view, all containers on a given Docker host are sitting on bridge interfaces. This means that they are just like physical machines connected through a common Ethernet switch; no more, no less.

docker diff <container-id>
docker history <image-id>



Pluggable Backends

Execution Drivers

$ docker info | grep "Execution Driver"
Execution Driver: native-0.2


If you are considering using Docker containers in production, you should make certain that the systems you are running have AppArmor or SELinux enabled and running. For the most part, both systems are reasonably equivalent. But in the Docker context, one notable limitation of SELinux is that it only works fully on systems that support filesystem metadata, which means that it won’t work for you on BTRFS-backed Docker daemons, for example. Only the devicemapper backend currently fully supports SELinux. Unfortunately, that backend is also not currently very stable for production. The OverlayFS backend is going to support this shortly. AppArmor, on the other hand, does not use filesystem metadata and so works on all of the Docker backends. Which one you use is going to be somewhat distribution-centric, so you may be forced to choose a filesystem backend based on which distribution you run.



If we have a client somewhere on the network that wants to talk to the nginx server running on TCP port 80 inside Container 1, the request will come into the eth0 interface on the Docker server. Because Docker knows this is a public port, it has spun up an instance of docker-proxy to listen on port 10520. So our request is passed to the docker-proxy process, which then makes the request to the correct container address and port on the private network. Return traffic from the request flows through the same route

When Docker creates a container, it creates two virtual interfaces, one of which sits on the server-side and is attached to the docker0 bridge, and one that is exposed into the container’s namespace

It would be entirely possible to run a container without the whole networking configuration that Docker puts in place for you. And the docker-proxy can be somewhat throughput limiting for very high-volume data services. So what does it look like if we turn it off? Docker lets you do this on a per-container basis with the --net=host command-line switch. There are times, such as when you want to run high-throughput applications, when you might want to do this.
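
For example (the image is illustrative), sharing the host's network namespace bypasses docker-proxy entirely:

docker run -d --net=host nginx
# nginx now listens directly on the host's port 80; no port mapping or proxy is involved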


Docker containers don’t have a separate kernel, as a VM does. Commands run from a Docker container appear in the process table on the host and, in most ways, look very much like any other process running on the system

To display the processes that a container is running, use the docker top command:
$docker top container-id


flannel



Running etcd under Docker

Running etcd in standalone mode

export HostIP="192.168.12.50"
docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
 --name etcd quay.io/coreos/etcd \
 -name etcd0 \
 -advertise-client-urls http://${HostIP}:2379,http://${HostIP}:4001 \
 -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
 -initial-advertise-peer-urls http://${HostIP}:2380 \
 -listen-peer-urls http://0.0.0.0:2380 \
 -initial-cluster-token etcd-cluster-1 \
 -initial-cluster etcd0=http://${HostIP}:2380 \
 -initial-cluster-state new
docker exec etcd /etcdctl member list
docker exec etcd /etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
curl -L -X PUT http://127.0.0.1:2379/v2/keys/message -d value="Hello"
./etcdctl --endpoint http://10.0.0.10:2379 member list
etcdctl rm /message
etcdctl mkdir /foo-service
etcdctl set /foo-service/container1 localhost:1111
curl -L -X PUT http://127.0.0.1:2379/v2/keys/foo-service/container1 -d value="localhost:1111"
etcdctl ls /foo-service
$ cd flannel
$ docker run -v `pwd`:/opt/flannel -i -t google/golang /bin/bash -c "cd /opt/flannel && ./build"
$ curl -L http://127.0.0.1:4001/v2/keys/coreos.com/network/config \
  -XPUT -d value='{
  "Network": "10.0.0.0/8",
  "SubnetLen": 20,
  "SubnetMin": "10.10.0.0",
  "SubnetMax": "10.99.0.0",
  "Backend": {"Type": "udp",
              "Port": 7890}}'
source /run/flannel/subnet.env
$ sudo ifconfig docker0 ${FLANNEL_SUBNET}
$ sudo docker -d --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU} &
Set up another flannel agent:
./flanneld  -etcd-endpoints http://10.0.0.10:2379

Debug a service in a container

This will install nsenter in /usr/local/bin and you will be able to use it immediately. nsenter might also be available in your distro (in the util-linux package).
docker run -v /usr/local/bin:/target jpetazzo/nsenter
First, figure out the PID of the container you want to enter:
PID=$(docker inspect --format {{.State.Pid}} <container_name_or_ID>)
Then enter the container:
nsenter --target $PID --mount --uts --ipc --net --pid
You will get a shell inside the container.