Tuesday, July 28, 2015

OpenStack data processing (Sahara) deployed in a virtual environment for the Kilo release

Prerequisites
  • Cinder: storage_availability_zone = nova
  • Heat


1: Install Sahara into a virtual environment
    $ sudo apt-get install python-setuptools python-virtualenv python-dev
    $ virtualenv sahara-venv
    $ sahara-venv/bin/pip install 'http://tarballs.openstack.org/sahara/sahara-stable-kilo.tar.gz'
    $ mkdir sahara-venv/etc
    $ cp sahara-venv/share/sahara/sahara.conf.sample-basic sahara-venv/etc/sahara.conf

2: Install a local MySQL server for Sahara
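A minimal sketch of preparing the database in the mysql shell, assuming the database is named sahara and the credentials are sahara / SAHARA_DBPASS (all three are placeholders; use whatever you put in the connection= line below):

```sql
-- placeholder names; run in the mysql client as root
CREATE DATABASE sahara;
GRANT ALL PRIVILEGES ON sahara.* TO 'sahara'@'localhost' IDENTIFIED BY 'SAHARA_DBPASS';
GRANT ALL PRIVILEGES ON sahara.* TO 'sahara'@'%' IDENTIFIED BY 'SAHARA_DBPASS';
FLUSH PRIVILEGES;
```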

3: Sahara configuration (sahara.conf)
    [DEFAULT]
    use_neutron=true
    use_namespaces=true

    [database]
    connection=mysql://username:password@host:port/database

    [keystone_authtoken]
    auth_uri=http://127.0.0.1:5000/v2.0/
    identity_uri=http://127.0.0.1:35357/

4: Policy configuration
    $ cat sahara-venv/etc/policy.json
    {
        "default": ""
    }
    By default Sahara searches for a policy.json file in the same directory as the configuration file.

5: Create the database schema
     $ sahara-venv/bin/sahara-db-manage --config-file sahara-venv/etc/sahara.conf upgrade head
6: Start Sahara
   $ sahara-venv/bin/sahara-all --config-file sahara-venv/etc/sahara.conf

7: Register Sahara in the Identity service catalog
    $ openstack service create --name sahara --description "Sahara Data Processing" data-processing
    $ openstack endpoint create --region RegionOne \
      --publicurl "http://16.158.50.211:8386/v1.1/%(tenant_id)s" \
      --adminurl "http://16.158.50.211:8386/v1.1/%(tenant_id)s" \
      --internalurl "http://16.158.50.211:8386/v1.1/%(tenant_id)s"

8: Building images for the Sahara plugins
As of now, the Sahara plugins work with images that have pre-installed versions of Apache Hadoop. To simplify the task of building such images, we use Disk Image Builder.




  • Clone the repository https://github.com/openstack/sahara-image-elements locally (run sudo bash diskimage-create.sh -h to see the options)
  • tox -e venv -- sahara-image-create -p [vanilla|spark|hdp|cloudera|storm|mapr]
    tox -e venv -- sahara-image-create -i [ubuntu|fedora|centos]

  • glance image-create --name=ubuntu_sahara_vanilla_hadoop_2_6 \
        --disk-format=qcow2 --container-format=bare \
        --file ./ubuntu_sahara_vanilla_hadoop_2_6_latest.qcow2 --progress
  • $ sahara image-register --id $IMAGE_ID --username ubuntu
Wednesday, July 8, 2015

    Learning Neutron by debugging the OpenStack Neutron client

    1: Install the Neutron server on a node named neutron-server
        $apt-get install neutron-server neutron-plugin-ml2 python-neutronclient

    2: Install the Neutron networking agents on a node named neutron
        $apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent \
    neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent

    3: Clone the Neutron client repo
        $ git clone https://review.openstack.org/p/openstack/python-neutronclient.git

    4: Create a virtual env
        $ cd $NEUTRON_CLIENT_HOME && virtualenv .venv
        $ . ./.venv/bin/activate

    5: pip install -r requirements.txt -r test-requirements.txt

    6: Run the Neutron client from the command line
       $ export PYTHONPATH=$PYTHONPATH:.
       $ $NEUTRON_CLIENT_HOME/bin/neutron ext-list

    7:Run neutron client from IDE
      
       Pycharm setup

    • Set the project interpreter to $NEUTRON_CLIENT_HOME/.venv/
    • Import the Neutron client source code into the PyCharm project
    • Run → Edit Configurations...
    • Set up the Keystone environment variables in the run configuration
        


    Note:
    • The neutron-server and neutron-client IP and hostname entries should be appended to /etc/hosts on the Neutron client node
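    For example, the appended entries might look like this (the addresses are placeholders for your lab network):

```
# /etc/hosts on the neutron client node -- placeholder addresses
192.168.1.10  neutron-server
192.168.1.11  neutron
```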


    Every network created in Neutron, whether created by an administrator or a tenant, has provider attributes that describe it. Attributes that describe a network include the network's type (such as flat, vlan, gre, vxlan, or local), the physical network interface that the traffic will traverse, and the segmentation ID of the network. The difference between a provider and a tenant network is in who or what sets these attributes and how they are managed within OpenStack.
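    As a sketch, an administrator setting these attributes explicitly for a VLAN provider network might run something like the following (the network name, the physnet1 label, and segmentation ID 100 are all placeholders that must match your ML2 configuration):

```
$ neutron net-create provider-vlan-100 \
    --provider:network_type vlan \
    --provider:physical_network physnet1 \
    --provider:segmentation_id 100
```

    A tenant creating a network omits these flags and lets Neutron pick the type and segmentation ID from its configured tenant-network ranges.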

    The Linux bridge driver doesn't support GRE or DVR.


    In the Juno release of OpenStack, the Neutron community introduced two methods of attaining high availability in routing in a reference implementation. This chapter focuses on a method that uses the Virtual Router Redundancy Protocol, also known as VRRP, to implement redundancy between two or more Neutron routers. The other is high availability using distributed virtual routers, otherwise known as DVR.


    VRRP utilizes a virtual router identifier, or VRID, within a subnet and exchanges VRRP protocol messages with other routers with the same VRID using multicast to determine the master router. The VRID is 8 bits in length, and the valid range is 1 to 255. As each tenant uses a single administrative network for VRRP communication between routers, tenants are limited to only 255 HA virtual routers.
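    The 255-router ceiling follows directly from the field width; a quick sanity check in Python:

```python
# The VRID is an 8-bit field in the VRRP packet; 0 is reserved,
# so the valid identifiers run from 1 to 255.
FIELD_BITS = 8
valid_vrids = (2 ** FIELD_BITS) - 1  # exclude the reserved value 0

print(valid_vrids)  # 255 HA routers at most per tenant admin network
```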

    Wednesday, July 1, 2015

    OpenStack Swift client debug environment setup

    1: Install the Swift proxy on a node named swift;
        install the Swift account, container and object servers on a node named swift-node

    2: Clone the Swift client repo
        $ git clone https://review.openstack.org/p/openstack/python-swiftclient.git

    3: Create a virtual env
        $ cd $SWIFT_CLIENT_HOME && virtualenv .venv
        $ . ./.venv/bin/activate

    4: pip install -r requirements.txt -r test-requirements.txt

    5: Run the Swift client from the command line
       $ export PYTHONPATH=$PYTHONPATH:.
       $ $SWIFT_CLIENT_HOME/bin/swift --auth-version 3 list

    6:Run swift client from IDE
      
       Pycharm setup

    • Set the project interpreter to $SWIFT_CLIENT_HOME/.venv/
    • Import the Swift client source code into the PyCharm project
    • Run → Edit Configurations...
    • Set up the Keystone environment variables in the run configuration
        


    Note:

    • --auth-version (-V) has to be passed to the Swift client, otherwise it defaults to v1; alternatively, set OS_AUTH_VERSION in the openrc file
    • The swift-node and swift IP and hostname entries should be appended to /etc/hosts on the Swift client node
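    A minimal openrc sketch for Keystone v3 (every value below is a placeholder for your own credentials):

```shell
# placeholder Keystone v3 credentials; source this file before running swift
export OS_AUTH_VERSION=3
export OS_AUTH_URL=http://keystone-host:5000/v3
export OS_USERNAME=demo
export OS_PASSWORD=secret
export OS_PROJECT_NAME=demo
```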

    Check the serialized data structures of object.builder, account.builder and container.builder using pickle:
    $ python
    >>>
    >>> import pickle

    >>> print pickle.load(open('object.builder'))
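    To see the same technique end to end without a real ring, here is a self-contained round trip with a stand-in dict (a real object.builder holds fields such as part_power, replicas and the device list; the names and values below are placeholders):

```python
import pickle

# Stand-in for ring-builder state; field names mimic a real builder file.
builder_state = {"part_power": 10, "replicas": 3, "devs": []}

# Serialize to disk, then load it back the same way the
# `pickle.load(open('object.builder'))` one-liner above does.
with open("demo.builder", "wb") as f:
    pickle.dump(builder_state, f)

with open("demo.builder", "rb") as f:
    loaded = pickle.load(f)

print(loaded["replicas"])  # 3
```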