Friday, September 11, 2015

Access server side LAN with a routed VPN

Including multiple machines on the server side when using a routed VPN (dev tun)

Once the VPN is operational in a point-to-point capacity between client and server, it may be desirable to expand the scope of the VPN so that clients can reach multiple machines on the server network, rather than only the server machine itself.
For the purpose of this example, we will assume that the server-side LAN uses a subnet of 10.66.0.0/24 and the VPN IP address pool uses 10.8.0.0/24, as cited in the server directive in the OpenVPN server configuration file.
First, you must advertise the 10.66.0.0/24 subnet to VPN clients as being accessible through the VPN. This can easily be done with the following server-side config file directive:
push "route 10.66.0.0 255.255.255.0"
Next, you must set up a route on the server-side LAN gateway to route the VPN client subnet (10.8.0.0/24) to the OpenVPN server (this is only necessary if the OpenVPN server and the LAN gateway are different machines).
Make sure that you've enabled IP and TUN/TAP forwarding on the OpenVPN server machine.
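A minimal sketch of those last two steps on Linux (10.66.0.10 is a hypothetical LAN address for the OpenVPN server; substitute your own):

# on the LAN gateway (only if it is a separate machine from the OpenVPN server):
route add -net 10.8.0.0 netmask 255.255.255.0 gw 10.66.0.10

# on the OpenVPN server, enable IP forwarding:
sysctl -w net.ipv4.ip_forward=1
# make it persistent by setting net.ipv4.ip_forward=1 in /etc/sysctl.conf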

Tuesday, August 18, 2015

openstack upstream testing setup from source step-by-step

prerequisites

1: install setuptools and pip
$sudo -E python ez_setup.py
$sudo -E python get-pip.py

$sudo route del default    (i.e. route del -net 0.0.0.0 netmask 0.0.0.0 gw 172.16.116.2 dev eth0)
$sudo route add default gw 16.158.48.1    (i.e. route add -net 0.0.0.0 netmask 0.0.0.0 gw 16.158.48.1 dev br-ex)
2:sudo apt-get install build-essential autoconf libtool python-dev libffi-dev libssl-dev

keystone

4:python setup.py install
5:sudo -E pip install -r requirements.txt
6:apt-get install mariadb-server python-mysqldb
[mysqld]
...
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table
init-connect = 'SET NAMES utf8'
character-set-server = utf8
collation-server = utf8_general_ci
7:apt-get install rabbitmq-server
$rabbitmqctl add_user openstack RABBIT_PASS    (add_user requires a password argument; substitute your own)
Permit configuration, write, and read access for the openstack user:
$rabbitmqctl set_permissions openstack ".*" ".*" ".*"
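To verify, list_users and list_permissions are standard rabbitmqctl subcommands:
$rabbitmqctl list_users
$rabbitmqctl list_permissions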

8:apt-get install memcached python-memcache

9: The Keystone primary configuration file is expected to be named keystone.conf; when starting Keystone, you can specify a different configuration file with --config-file. Keystone will look in the following directories for a configuration file, in order:
  • ~/.keystone/
  • ~/
  • /etc/keystone/
  • /etc/
mkdir -p /etc/keystone/
cp etc/keystone-paste.ini /etc/keystone/
cp etc/policy.json /etc/keystone/
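Before running db_sync, point Keystone at the database. A minimal sketch of the [database] section in /etc/keystone/keystone.conf, assuming the file was copied from etc/keystone.conf.sample and that the keystone database and user already exist (names and password here are placeholders):
[database]
connection = mysql://keystone:KEYSTONE_DBPASS@localhost/keystone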

10: keystone-manage db_sync
11: keystone-all

osc

12:  INSTALL OSC 


GLANCE GLANCE-CLIENT


cp etc/*.conf /etc/glance
cp etc/*.ini /etc/glance
cp etc/policy.json /etc/glance
$glance-registry
$glance-api
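With the registry and API running, a quick sanity check (assuming admin credentials are exported in the shell):
$glance image-list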


INSTALL NOVA

sudo apt-get install libxml2-dev libxslt1-dev
 
sudo apt-get install libpq-dev
 
sudo pip install tox
tox -egenconfig    (regenerates the sample configuration files; rerun if it errors out)

apt-get install sysfsutils
apt-get install python-libvirt
sudo usermod -G libvirtd -a <username>
then log out and log back in
nova-manage db sync
$ sudo mkdir -p /var/lib/nova
$ sudo chown -R whg:whg /var/lib/nova
$nova-api
$nova-cert
$nova-consoleauth
$nova-scheduler
$nova-conductor
$nova-novncproxy
$nova-compute
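Once all services are running, each should register itself; with admin credentials exported, the State column of nova service-list should read "up":
$nova service-list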

install neutron



neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head
apt-get install openvswitch-switch
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex INTERFACE_NAME
neutron-openvswitch-agent --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --config-file /etc/neutron/neutron.conf   --log-file=/var/log/neutron/openvswitch-agent.log
neutron-server  --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini   --log-file=/var/log/neutron/neutron-server.log

cp fwaas_driver.ini /etc/neutron/    (uncomment the relevant lines to enable the firewall service)
mkdir /var/lib/neutron && chown -R $STACK_USER /var/lib/neutron
neutron-l3-agent --config-file=/etc/neutron/l3_agent.ini --config-file=/etc/neutron/fwaas_driver.ini --config-file /etc/neutron/neutron.conf --log-file=/var/log/neutron/l3-agent.log
neutron-dhcp-agent --config-file=/etc/neutron/dhcp_agent.ini  --config-file=/etc/neutron/neutron.conf  --log-file=/var/log/neutron/dhcp-agent.log
neutron-metadata-agent  --config-file=/etc/neutron/dhcp_agent.ini  --config-file=/etc/neutron/metadata_agent.ini --log-file=/var/log/neutron/metadata-agent.log
neutron-ovs-cleanup --config-file /etc/neutron/neutron.conf  --log-file=/var/log/neutron/ovs-cleanup.log
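After the agents start, verify that they registered with the server (the alive column shows :-) for healthy agents):
$neutron agent-list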

horizon

python setup.py compile_catalog
pip install -e .
Add openstack_auth to settings.INSTALLED_APPS
Add 'openstack_auth.backend.KeystoneBackend' to your settings.AUTHENTICATION_BACKENDS
Include 'openstack_auth.urls' somewhere in your urls.py file

chown -R $STACK_USER $1/*.egg-info
# install dependencies with pip before running `setup.py develop`
python setup.py develop


sudo pip install .

cp openstack_dashboard/local/local_settings.py.example  /etc/openstack_dashboard/local/local_settings.py
$ ./manage.py collectstatic
$ ./manage.py compress
If offline compression fails, set COMPRESS_OFFLINE = False in openstack_dashboard/local/local_settings.py to disable it and compress assets at request time instead
mkdir -p /var/lib/keystone
./manage.py make_web_conf --apache > /etc/apache2/sites-available/horizon.conf
Same as above but if you want ssl support:
$ ./manage.py make_web_conf --apache --ssl --sslkey=/path/to/ssl/key --sslcert=/path/to/ssl/cert > /etc/apache2/sites-available/horizon.conf
$ sudo a2ensite horizon
$ sudo service apache2 restart



upstart service

If you use Upstart 1.4 or newer, put `console log` into your Upstart job and all output to stdout/stderr will end up in /var/log/upstart/<job>.log. Then you can run tail -f /var/log/upstart/<job>.log & to have the output appear in the terminal.
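A minimal sketch of such a job, here a hypothetical /etc/init/keystone.conf that runs keystone-all (adjust the exec path to wherever setup.py installed it):

description "OpenStack Keystone (keystone-all)"
start on runlevel [2345]
stop on runlevel [016]
# with Upstart 1.4+, stdout/stderr go to /var/log/upstart/keystone.log
console log
exec /usr/local/bin/keystone-all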

Tuesday, July 28, 2015

Openstack data_processing (sahara) deployed in a virtual environment for the kilo release

prerequisites
  • Cinder: storage_availability_zone = nova
  • Heat 


1: Install sahara into a virtual environment
   $ sudo apt-get install python-setuptools python-virtualenv python-dev
   $ virtualenv sahara-venv
   $ sahara-venv/bin/pip install 'http://tarballs.openstack.org/sahara/sahara-stable-kilo.tar.gz'
   $ mkdir sahara-venv/etc
   $ cp sahara-venv/share/sahara/sahara.conf.sample-basic sahara-venv/etc/sahara.conf

2:install local mysql for sahara

3: Sahara configuration, sahara.conf
    [DEFAULT]
    use_neutron=true
    use_namespaces=True
    [database]
    connection=mysql://username:password@host:port/database
    [keystone_authtoken]
    auth_uri=http://127.0.0.1:5000/v2.0/
    identity_uri=http://127.0.0.1:35357/

4: Policy configuration
    cat sahara-venv/etc/policy.json
   {
    "default": ""
   }
   By default sahara will search for a policy.json file in the same directory as the configuration file.

5: Create the database schema
     $ sahara-venv/bin/sahara-db-manage --config-file sahara-venv/etc/sahara.conf upgrade head
6: start sahara
   $ sahara-venv/bin/sahara-all --config-file sahara-venv/etc/sahara.conf
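   A quick check that the API is listening (8386 is sahara's default port; the exact response body may vary):
   $ curl http://127.0.0.1:8386/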

7:  register sahara in the Identity service catalog    
$openstack service create --name sahara --description "Sahara Data Processing" data-processing
$openstack endpoint create --region RegionOne \
--publicurl "http://16.158.50.211:8386/v1.1/%(tenant_id)s" \
--adminurl "http://16.158.50.211:8386/v1.1/%(tenant_id)s" \
--internalurl "http://16.158.50.211:8386/v1.1/%(tenant_id)s"

8: Building images for the sahara plugin
As of now the sahara plugin works with images with pre-installed versions of Apache Hadoop. To simplify the task of building such images, we use Disk Image Builder.




  • Clone the repository https://github.com/openstack/sahara-image-elements locally (sudo bash diskimage-create.sh -h shows usage)
  • tox -e venv -- sahara-image-create -p [vanilla|spark|hdp|cloudera|storm|mapr]
    tox -e venv -- sahara-image-create -i [ubuntu|fedora|centos]

  • glance image-create --name=ubuntu_sahara_vanilla_hadoop_2_6 --disk-format=qcow2 --container-format=bare --file ./ubuntu_sahara_vanilla_hadoop_2_6_latest.qcow2 --progress
  • $sahara image-register --id $IMAGE_ID --username ubuntu
    Wednesday, July 8, 2015

    Learning neutron by debugging openstack neutron client

    1: install neutron server on a node named neutron-server
        $apt-get install neutron-server neutron-plugin-ml2 python-neutronclient

    2:install neutron Networking on a node named neutron
        $apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent \
    neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent

    3: clone neutron client repo
    $ git clone https://review.openstack.org/p/openstack/python-neutronclient.git

    4: create a virtual env
        $ cd $NEUTRON_CLIENT_HOME && virtualenv .venv
        $. ./.venv/bin/activate

    5: pip install -r requirements.txt -r test-requirements.txt

    6:Run neutron client from command line
       $ export PYTHONPATH=$PYTHONPATH:.
       $ $NEUTRON_CLIENT_HOME/bin/neutron ext-list

    7:Run neutron client from IDE
      
       Pycharm setup

    • set project interpreter as $NEUTRON_CLIENT_HOME/.venv/
    • import neutron client source code into pycharm project
    • Run/edit Configuration/

    • keystone environment setup
        


    Note:
    • neutron-server and neutron-client IP and host name entries should be appended to /etc/hosts on the neutron client node, for example:
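        # hypothetical addresses; substitute your own
        192.168.1.10  neutron-server
        192.168.1.11  neutron-client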


    Every network created in Neutron, whether created by an administrator or tenant, has provider attributes that describe it. Attributes that describe a network include the network's type (such as flat, vlan, gre, vxlan, or local), the physical network interface that the traffic will traverse, and the segmentation ID of the network. The difference between a provider and tenant network is in who or what sets these attributes and how they are managed within OpenStack.

    The Linux bridge driver doesn't support GRE or DVR.


    In the Juno release of OpenStack, the Neutron community introduced two methods of attaining highly available routing in a reference implementation. This chapter focuses on a method that uses Virtual Router Redundancy Protocol, also known as VRRP, to implement redundancy between two or more Neutron routers; the other method is high availability using distributed virtual routers, otherwise known as DVR.


    VRRP utilizes a virtual router identifier, or VRID, within a subnet and exchanges VRRP protocol messages with other routers with the same VRID using multicast to determine the master router. The VRID is 8 bits in length, and the valid range is 1 to 255. As each tenant uses a single administrative network for VRRP communication between routers, tenants are limited to only 255 HA virtual routers.

    Wednesday, July 1, 2015

    openstack swift client debug environment setup

    1: install swift proxy on a node named swift
        install swift account, container and object servers on a node named swift-node

    2: clone swift client repo
    https://review.openstack.org/p/openstack/python-swiftclient.git

    3: create a virtual env
    $ cd $SWIFT_CLIENT_HOME && virtualenv .venv
    $. ./.venv/bin/activate

    4: pip install -r requirements.txt -r test-requirements.txt

    5:Run swift client from command line
       $ export PYTHONPATH=$PYTHONPATH:.
       $ $SWIFT_CLIENT_HOME/bin/swift --auth-version 3 list

    6:Run swift client from IDE
      
       Pycharm setup

    • set project interpreter as $SWIFT_CLIENT_HOME/.venv/
    • import swift client source code into pycharm project
    • Run/edit Configuration/

    • keystone environment setup
        


    Note:

    • --auth-version (-V) has to be passed to the swift client, otherwise it defaults to auth v1; alternatively set OS_AUTH_VERSION in openrc
    • swift-node and swift IP and host name entries should be appended to /etc/hosts on the swift client node

    check the serialized data structures in the object.builder, account.builder and container.builder files using pickle:
    $ python
    >>> import pickle
    >>> print pickle.load(open('object.builder'))

    Wednesday, June 24, 2015

    Add new hard disk partition to ubuntu guest

    install by force after dpkg dependency problem

    You can fix this by installing missing dependencies.
    Just run the following command
    (after you have run sudo dpkg -i google-chrome-stable_current_i386.deb).
    sudo apt-get install -f
    This will install missing dependencies and configure Google Chrome for you.

    MBR or GPT

    MBR only works with disks up to 2 TB in size. MBR also supports only four primary partitions; if you want more, you have to make one of your primary partitions an “extended partition” and create logical partitions inside it. This is a silly little hack and shouldn't be necessary.

    GPT allows for a nearly unlimited amount of partitions, and the limit here will be your operating system — Windows allows up to 128 partitions on a GPT drive, and you don’t have to create an extended partition

    Fdisk does not work for gpt, currently. But parted does. Or you can just easily install gdisk.

    If the drive is over 1.5 TB and unpartitioned, or the system boots via UEFI, partitioning tools default to GPT; otherwise they default to MBR.

    Prepare a MBR partition table
             #list new hard disk device for MBR
             fdisk -l
             #Partition type has to be primary
             fdisk /dev/sdb
           

    Prepare a GPT partition table
    $ sudo parted -l
    $ sudo parted /dev/sda
    # MBR disk
    (parted) mklabel msdos
    # GPT disk
    (parted) mklabel gpt
    (parted) mkpart primary xfs 0 100%
    (parted) quit

    mkfs -t xfs /dev/sdb1



    #verify the file system mounted
    cat /proc/mounts

    #find all block devices
    ls /sys/block

    #block device attribute
    blkid


    Mount swift disk automatically at system boot with Upstart script
    $cat /opt/swift/bin/mount_devices

    #!/bin/bash
    mount -t xfs -o noatime,nodiratime,logbufs=8 /dev/sdb1 /srv/node/d1

    $chmod +x /opt/swift/bin/mount_devices

    $mkdir -p /srv/node/d1

    $chown -R swift:swift /srv/node

    Next, create an Upstart script in the /etc/init directory called start_swift.conf with the following commands:

    description "mount swift drives"
    start on runlevel [234]
    stop on runlevel [0156]
    exec /opt/swift/bin/mount_devices



    10 Ways to Generate a Random 32-Byte String from the Command Line

    date +%s | sha256sum | base64 | head -c 32 ; echo
    openssl rand -base64 32

    [swift-hash]
    # set each value to the output of: head -c 64 /dev/random | base64
    swift_hash_path_suffix = ...
    swift_hash_path_prefix = ...



    Creating the Log Configuration File

    Create a configuration file named 0-swift.conf in the /etc/rsyslog.d directory. It will contain
    one line:
    local0.* /var/log/swift/all.log
    Since this configuration line tells the system to write the all.log file in the directory
    /var/log/swift, we will need to create that directory and set the correct permissions
    on it.
    This command will create the directory the log files will be created in:

    mkdir /var/log/swift
    You also need to set permissions on the directory so the log process can write to it. For
    instance, the following commands do this on Ubuntu:
    chown -R syslog.adm /var/log/swift
    chmod -R g+w /var/log/swift
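    For the new rule to take effect, restart rsyslog:
    service rsyslog restart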

    Tuesday, June 23, 2015

    openstack KILO Minimal deployment with neutron on VMs

    Controller config:
    https://drive.google.com/open?id=0BzMCYv5KIAi-Yko1NURtU0FVcFE&authuser=0

    Network config(flat+GRE):
    https://drive.google.com/open?id=0BzMCYv5KIAi-eTJqT01TeFkwX1k&authuser=0

    Compute config:
    https://drive.google.com/open?id=0BzMCYv5KIAi-N2tCb18tV1lQMTA&authuser=0





    If you are building your OpenStack nodes as virtual machines, you must configure
    the hypervisor to permit promiscuous mode on the external network
    auto eth0
    iface eth0 inet manual
    up ip link set dev $IFACE up
    up ip link set $IFACE promisc on
    down ip link set $IFACE promisc off
    down ip link set dev $IFACE down
    allow-hotplug br-ex
    iface br-ex inet static
            bridge_ports eth0
            address 16.157.134.232
            netmask 255.255.248.0
            gateway 16.157.128.1




    soauser ALL=(ALL) NOPASSWD: ALL

    By default, the SSH server denies password-based login for root. In /etc/ssh/sshd_config, change:


    PermitRootLogin without-password
    to
    PermitRootLogin yes
    And restart SSH:
    sudo service ssh restart

    rabbitmqctl change_password  openstack admin


    apt-get install keystone python-openstackclient apache2 libapache2-mod-wsgi memcached python-memcache

    Note: openstack client will not work in a proxy environment

    export no_proxy=localhost,127.0.0.1,controller,nova,neutron

    $mysql -u root -p
    SET PASSWORD FOR 'keystone'@'localhost' = PASSWORD('admin');
    SET PASSWORD FOR 'keystone'@'%' = PASSWORD('admin');


    For security reasons, disable the temporary authentication token mechanism:
    Edit the /etc/keystone/keystone-paste.ini file and remove
    admin_token_auth from the [pipeline:public_api],
    [pipeline:admin_api], and [pipeline:api_v3] sections.


    The Identity version 3 API adds support for domains that contain projects and users.
    Projects and users can use the same names in different domains. Therefore, in order
    to use the version 3 API, requests must also explicitly contain at least the default domain
    or use IDs. For simplicity, this guide explicitly uses the default domain so examples
    can use names instead of IDs.
    $ openstack --os-auth-url http://controller:35357 \
    --os-project-domain-id default --os-user-domain-id default \
    --os-project-name admin --os-username admin --os-auth-type password \
    token issue



    You can store virtual machine images made
    available through the Image service in a variety of locations, from simple file systems to object-
    storage systems like OpenStack Object Storage.


    https://bugs.launchpad.net/openstack-manuals/+bug/1453534(logdir -> log_dir)


    Following the external network subnet, the tenant router gateway should occupy the lowest IP address in the floating IP address range.




    /etc/neutron/plugins/ml2/ml2_conf.ini(on every compute to config tunnel network)
    In the [ovs] section, enable tunnels and configure the local tunnel endpoint:
    [ovs]
    ...
    local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
    Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of
    the instance tunnels network interface on your compute node.
    In the [agent] section, enable GRE tunnels:
    [agent]
    ...
    tunnel_types = gre
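    After changing ml2_conf.ini on a compute node, restart the Open vSwitch agent so the tunnel settings take effect (service name as shipped in the Ubuntu packages; if running from source, restart the neutron-openvswitch-agent process instead):
    $ sudo service neutron-plugin-openvswitch-agent restart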



    sed '/^$/d' neutron.conf >neutron-remove-empty-line.conf
    sed '/^#/d' neutron.conf >neutron-remove-line-start-with#.conf
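    The two filters can be combined into a single pass:
    sed '/^$/d;/^#/d' neutron.conf > neutron-minimal.conf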

    Edit the /etc/neutron/l3_agent.ini file and complete the following actions:

    [DEFAULT]
    ...
    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
    external_network_bridge =
    router_delete_namespaces = True
    The external_network_bridge option intentionally lacks a value
    to enable multiple external networks on a single agent.
    (Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT] section (verbose = True).

    http://bderzhavets.blogspot.com/2014/10/forwarding-packet-from-br-int-to-br-ex.html