
Show posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - burnyd

#1
Blogs of Interest and Note / A new journey with Arista
September 20, 2018, 06:02:41 AM
A new journey with Arista

As the title suggests, after many years working in large enterprises, service providers, and most recently partners, I am headed to a network vendor.  This is a large change for me but a very good one.  I am tremendously excited for the opportunity. I am joining Arista for multiple reasons.  I was able to […]



As the title suggests, after many years working in large enterprises, service providers, and most recently partners, I am headed to a network vendor.  This is a large change for me but a very good one.  I am tremendously excited for the opportunity.


I am joining Arista for multiple reasons.  I was able to bring in Arista at a prior position and watch the entire network change drastically.  We were able to automate many tasks that were previously manual.  It seems like every time I turn around, Arista has integrations with different technologies, i.e. Docker, OpenStack, Go, etc.


I was fortunate enough to attend Tech Field Day last year, where Ken Duda, CTO of Arista Networks, delivered one of the most awesome presentations on the evolution of EOS and code quality.



Everyone I have met within Arista has been amazing.  This was an easy decision given the talent and passion of every individual I have met within the organization. It fits my career goals of moving toward SDN, continuous integration/continuous delivery, and all things automation.  I start in late August.


Source: A new journey with Arista
#2
Network Continuous integration using Jenkins, Jinja2 and Ansible.

I have been into the devops and agile life lately.
#3
Tech field day VMware NSX at Interop

#4
Moving docker containers manually and automated.

This is a blog post that will cover part of my container obsession, just the basics.
#5
Blogs of Interest and Note / Arista ZTP basics
September 18, 2018, 06:23:57 PM
Arista ZTP basics

ZTP within Arista switches makes deploying infrastructure really easy.  ZTP takes away a lot of human errors and allows for zero touch provisioning of switches.  Zero touch provisioning switches really falls in line with a lot of the SDN/Automation craze.  We have all been there when it comes to installing a switch and doing the […]

ZTP within Arista switches makes deploying infrastructure really easy.  ZTP takes away a lot of human error and allows for zero touch provisioning of switches.  Zero touch provisioning really falls in line with a lot of the SDN/automation craze.  We have all been there when it comes to installing a switch: the simple CTRL+C and CTRL+V into Notepad.  It never really works out too well.  From personal experience, at my last position we were able to hand switch installs off to facilities, who just plugged them in after a simple script was run per environment.


In this blog post we will work with the following environment.


[Diagram: ZTP lab environment]


I have 4 VMs, EOS-1 to EOS-4, within VMware.  vEOS inside of VMware is an easy install.  Once all 4 vEOS VMs are loaded, an Ubuntu 14.04 LTS VM will be needed.  That VM will require an install of ISC DHCP Server and the Arista ZTP server.  For the Arista ZTP server, simply follow all the apt-get packages and it should install rather easily.  Once that is out of the way, make sure to change the default ZTP identifier to MAC address for all the vEOS VMs.


Now that everything is installed, let's take a look at how ZTP actually works.


[Diagram: ZTP process]


The switch comes online, in this case vEOS1 through vEOS4.  The switch then sends out a DHCP discover, on the management interface first and then on all interfaces.  Once the switch receives an IP from the DHCP server, it learns a file location from the DHCP options; that is step two.  This file is generally called a “bootstrap” file, and it is simply a Python script which tells the switch where to grab its config from.  After the script has been downloaded and all of its configuration has been applied, the switch reboots.


Okay, that's great, but how does that work?


Here is a screenshot of my DHCP server configuration on the Ubuntu 14.04 LTS VM.


[Screenshot: dhcpd.conf]
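
The screenshot boils down to a dhcpd.conf fragment along these lines (a sketch: the subnet, range, and router values here are made up, and option bootfile-name, DHCP option 67, is what carries the bootstrap URL):

subnet 192.168.3.0 netmask 255.255.255.0 {
  range 192.168.3.220 192.168.3.229;
  option routers 192.168.3.1;
  # hand every ZTP'ing switch the bootstrap URL
  option bootfile-name "http://192.168.3.230:8080/bootstrap";
}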


The http://192.168.3.230:8080/bootstrap URL is really where the magic happens.  If the Arista ZTP server guide was followed correctly on the Ubuntu 14.04 VM, you should have your ZTP server running on port 8080.  So when a vEOS VM is brought online, it will receive its bootstrap file directly from the ZTP server.  Let's reload one of my switches and watch the ZTP server at the same time.


[Screenshot: switch console during ZTP]


From the switch's perspective, it booted up with zero configuration on it.  It sent out a DHCP request and received the IP address 192.168.3.227/24.  The DHCP server, through a DHCP option, then instructed the switch to get its bootstrap file from http://192.168.3.230:8080/bootstrap.  The switch downloaded the bootstrap file, executed it, and rebooted.


From the Ubuntu 14.04 LTS server's perspective (the ZTP server):


[Screenshot: ZTP server log]


The first line says that the node 005056a878fa is requesting the bootstrap file.  What I will explain next is how the ZTP server knows this is a distinct switch, i.e. ToR5 vs ToR1.  Each switch identifies itself by either its system MAC address or its serial number; I just used the MAC address feature.  Not to go off path here, but 005056a878fa is the switch's system MAC address.  I will explain the next few lines in a little bit.


So far we know somewhat how this process works, but where do we store the configuration for each device?  How do we make each switch unique?


This is handled in the following file: /etc/ztpserver/ztpserver.conf


[Screenshot: ztpserver.conf]


The location setting is where the ZTP files live, which I will explain later; the bootstrap, for example, is located within /usr/share/ztpserver/bootstrap, and the same goes for image files, etc.  For the unique identifier I chose systemmac (serial number can be used as well), and the server URL is where the ZTP server runs.
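
Pieced together from that description, the file looks something like this (the option names are per the ztpserver docs as best I recall, so double-check them against your version):

[default]
# where bootstrap, nodes/, definitions, and images live
data_root = /usr/share/ztpserver
# identify nodes by system MAC (serialnumber also works)
identifier = systemmac
# where switches reach the ZTP server
server_url = http://192.168.3.230:8080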


So under /usr/share/ztpserver there is the ZTP configuration for each of the VMs I have created.  Let's go to the node I just ZTP'd.


/usr/share/ztpserver/nodes/005056a878fa/ is where the node information is.  Each time a new switch is added, all of its information lives in its own directory, named after its MAC address, within the nodes folder.  There are 3 files located within this directory:


-Startup-configuration – Holds the configuration for the unique switch


-Definition – This is one of my favorite ZTP features, which I will get into below: a definition can, for example, make sure the switch always boots a certain version of code, or download a given bash, Python, or Go script.


-Pattern – A pattern allows you to dynamically match and build the environment via LLDP neighbors if need be.


So here are some snippets of my startup-config, definition, and pattern.  My pattern file is the default, but all of these files are needed by ZTP.  Keep in mind I built the configs beforehand; they just ZTP.  In a few days or weeks I will get a ZTP setup going where it builds the configurations from a simple Python script.


Startup-Config.


[Screenshot: startup-config]


Definitions


[Screenshot: definition file]
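
A hypothetical definition along the lines described, pinning the image and then shipping the DNS script; the image file name, version, and destination path are assumptions:

name: veos-node
actions:
  - action: install_image
    attributes:
      url: files/images/vEOS-4.16.6M.swi
      version: 4.16.6M
  - action: copy_file
    attributes:
      src_url: files/scripts/dnsscript
      dst_url: /mnt/flash
      mode: 0755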


Pattern


[Screenshot: pattern file]
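
My pattern file is the default catch-all, something like the following, which matches any node regardless of LLDP neighbors (treat the exact syntax as an assumption):

name: default
definition: default
interfaces:
  - any: any:any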


In my definition files the very first action is install_image, which simply makes sure the switch is running the intended image.  The second takes the dnsscript from /usr/share/ztpserver/files/scripts/dnsscripts and sends it to the switch.  That script adds a DNS A record for the switch's management address each time it ZTPs.  It's a simple bash script I put together here.


[Screenshot: DNS update bash script]


The script does an nsupdate against the DNS server I have here at home.  HOST is the string returned by hostname, and MGTINT is the IP address pulled with a somewhat complex grep of ifconfig on ma1, the management interface.  In the future I need to do this same thing for all interfaces with some sort of fancy for loop.
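
A rough reconstruction of that script; the DNS server address and zone are assumptions, and the ifconfig parsing assumes the classic "inet addr:" output format:

#!/bin/bash
# register this switch's management address in DNS after ZTP
HOST=$(hostname)
# pull the ma1 address out of the ifconfig output
MGTINT=$(ifconfig ma1 | grep 'inet addr' | cut -d: -f2 | awk '{print $1}')
nsupdate <<EOF
server 192.168.3.1
update add ${HOST}.lab.local. 3600 A ${MGTINT}
send
EOF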


Once that script is on the switch, it is wired into the configuration with an event-handler so it runs from the CLI at boot.


[Screenshot: event-handler configuration]


Lastly, I wanted to create a quick Python script that simply connects to all 3 vEOS switches and does a write erase / reload on each of them.


[Screenshots: the regen Python script and a run of it]
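
A sketch of the idea, assuming eAPI is enabled on each switch and that write erase now / reload now run non-interactively on your EOS build; the addresses and credentials are placeholders:

#!/usr/bin/env python
# wipe and reboot each vEOS node so it ZTPs again
from jsonrpclib import Server

SWITCHES = ["192.168.3.221", "192.168.3.222", "192.168.3.223"]

for ip in SWITCHES:
    switch = Server("https://admin:admin@%s/command-api" % ip)
    # erase startup-config and reboot; the node comes back up in ZTP mode
    switch.runCmds(1, ["enable", "write erase now", "reload now"])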


Right now it's a build-up/tear-down for any type of testing I want to do.  In the future I definitely want an automated script that builds the configuration and the nodes directory for me.  But for right now I can write erase and remove all nodes.  I also have more DNS entries to make for all the interfaces.




Source: Arista ZTP basics
#6
Cloning VM's from powercli then automatically add to a NSX load balancer.

This environment will be built out from the script.
#7
Blogs of Interest and Note / NSX Python automated build
September 18, 2018, 12:31:17 AM
NSX Python automated build

I figured I would share this NSX Python script to build an entire environment.
#8
Building Hypervisor leaf spine overlay network with BGP

This has been long overdue.
#9
Blogs of Interest and Note / VCP-NV 610 passed
September 17, 2018, 12:06:19 PM
VCP-NV 610 passed

On my second shot November 27th I passed the VCP-NV 610 exam. The test was all multiple choice and tested my knowledge of NSX in all aspects. For this test I walked in and took it the first time not expecting much. I failed by a lot. I had to go back and study things […]

[Image: VCP-NV 610 badge]


On my second shot, November 27th, I passed the VCP-NV 610 exam. The test was all multiple choice and tested my knowledge of NSX in all aspects. I walked in and took it the first time not expecting much, and I failed by a lot. I had to go back and study the things I was weak on. In my production/lab environments I had focused a lot on the routing aspect of NSX and not enough on all the other services, because routing is what I felt most comfortable with. Once I went back and made sure I understood all the concepts of NSX, I passed the exam on the second try.


To be honest, the exam is not structured very well. There were a few questions that seemed unrelated to the entire blueprint; it is a good thing I did not need high marks to pass. That said, for someone with hands-on NSX experience, i.e. the hands-on labs or a home lab, it would be an easy pass. I would recommend this test to all my network friends, as if you have a valid CCNA you do not need the normal VMware class to get the certification.


Source: VCP-NV 610 passed
#10
Blogs of Interest and Note / NSX troubleshooting commands.
September 17, 2018, 06:01:49 AM
NSX troubleshooting commands.

NSX Controller related commands show control-cluster status – Shows if a controller is connected to the cluster This command is run on every NSX controller to make sure that each controller is added to the 3-node cluster. For some reason or another if the NSX controller is not enabled for all processes it either […]

NSX Controller related commands

show control-cluster status – Shows if a controller is connected to the cluster


This command is run on every NSX controller to make sure each controller has joined the 3-node cluster. If for some reason the NSX controller is not enabled for all processes, it either has to be deleted or rebooted and then re-added.


If the join does not complete, do the following:

1.) Ping the other NSX controllers for connectivity.

2.) Reload the controller.

3.) Check NSX install management to see if the controller is set up.


show control-cluster logical-switches vni xxx – This command shows which one of the NSX controllers handles all the functionality for a particular VXLAN/VNI.

In my experience, if you do not see a logical switch/VNI associated with a specific controller, do the following:

1.) Make sure the right VNI is being used

2.) Find the logical switch and quickly change its mode to multicast and then back to unicast.


show control-cluster logical-switches vtep-table xxx – Discover which hosts participate in a VXLAN


1.) If you do not see VTEPs showing up on the controller that owns that VNI, restart the netcpa agent by logging into the ESXi host and issuing the following command: /etc/init.d/netcpad restart

2.) If restarting netcpa did not resolve the issue, the only fix at this point is a reboot of the host.


show control-cluster logical-switches arp-table xxx – Discover VMs' ARP entries in a VXLAN

The Connection-ID shows the host the entry belongs to, mapping back to the previous command's VTEP table.


1.) If an IP address does not show up when issuing the arp-table command on the controller for its VXLAN/VNI, chances are that VM cannot communicate with the outside world due to an issue with the host where it lives. Migrate that VM to another host that has a working VTEP.

2.) If the IP address shows up but the VM cannot ping its default gateway, check that the default gateway configured in the guest OS matches the default gateway of the LIF.


show control-cluster logical-switches mac-table xxx – Discover VMs' MAC addresses in a VXLAN


Same as with the arp-table: the Connection-ID maps directly to the VTEP table.


1.) If the MAC does not show up in the controller, chances are there is an issue with the host. Check that the host's VTEP interface shows up when issuing the vtep-table command for that VXLAN/VNI. vMotion the VM to another host and reboot the non-functional host.

2.) Check that the MAC address is correct in the guest operating system.


show control-cluster logical-routers instance all – Shows each edge's association with each host.

This command, like the other controller commands, will look different per controller. The LR-ID number will be needed for the commands that follow.


show control-cluster logical-routers interface-summary – Provides all the interfaces for the associated LDR/Edge


show control-cluster logical-routers interface routerID interface – Provides the default gateway IP / MAC and MTU


show control-cluster logical-routers routes routerID – Shows all the routes for a given ESG. Note this is different per controller.


NSX edge commands

show ip route

show ip route ospf/static/bgp

show ip ospf

show ip ospf neighbors

show ip ospf database

show firewall flows – Shows every single flow going through the Edge router at that time. Similar to iptables -L

show firewall flows top 10 – Provides the top 10 largest sessions

show firewall flows top 10 sort-by-pkts – Provides the top 10 by packet count

show flowtable – Shows all flows.

show ip forwarding – Displays the FIB, whereas show ip route shows the RIB

show system uptime – Shows the uptime of the device.


ESXi Related troubleshooting commands

esxcli network vswitch dvs vmware vxlan list – Lists the VTEP segment, default gateway, and MTU for each VTEP

net-vdr --instance -l – Lists the distributed routers along with their associated LIFs, etc.

esxcli software vib list | grep vxlan – Shows the VXLAN VIB, which needs to be installed on each host. If the VIB is not installed, the host cannot participate in VXLAN.


Source: NSX troubleshooting commands.
#11
A new journey with Arista

As the title suggests, after many years working in large enterprises, service providers and most recently partners, I am headed to a network vendor.
#12
Network Continuous integration using Jenkins, Jinja2 and Ansible.

I have been into the devops and agile life lately.  I have read the following books, which have been extremely technology-changing for me: Devops 2.0, Ansible for Devops, Learning Continuous Integration with Jenkins. The Devops 2.0 book is fantastic and @vfarcic constantly updates the book so it is worth the price of admission.  I have said this […]

I have been into the devops and agile life lately.  I have read the following books, which have been extremely technology-changing for me:


Devops 2.0

Ansible for Devops

Learning Continuous Integration with Jenkins


The Devops 2.0 book is fantastic, and @vfarcic constantly updates it, so it is worth the price of admission.  I have said this many times before about devops: devops is simply using open source tooling to run through a workflow in an automated fashion.  Continuous integration means constantly testing your code before integrating it into production.


We as network engineers commonly push changes into production with the CLI or other manual tasks because we know what we are doing will “just work.”  We have done it a million times and it has never failed.  We do not take into account human error or issues within the process, i.e. firewall rules, wrong IP addresses, etc.


The devops way, shown below, is what should be followed for any new configuration.


[Diagram: devops workflow]


The script/configuration is committed to Jenkins.  Jenkins schedules the task.  Ansible iterates over the script.  Checks and balances are done.  The script/config is then run on the switch.  This is another use case for Docker on a switch, as this could all be run in a test container before trying the code on the actual switch.  Finally, notifications that the test build succeeded are sent out through Slack or email.


So in this blog post we are going to do something really simple, so any network engineer can follow along and get their feet wet with continuous integration.  I also highly recommend the three books I posted above.


Our topology is simple

-2 Oracle VirtualBox VMs

1.) Ubuntu 14.04 LTS

2.) Arista vEOS running 4.16

-Jenkins is running on the Ubuntu VM

-Ansible is running on the Ubuntu VM

-Python 2.7 is running on the Ubuntu VM


I will start with a simple line of configuration.  NTP is probably the easiest one-liner of configuration we as network people normally use.  An NTP server can be added to a switch with a single line:


ntp server 10.10.10.10


We are going to make this slightly fancy here with Ansible and Jinja2 templates, so that if we ever wanted to change our NTP server we could do it across a large number of switches in an Ansible inventory.  The J2 template is very simple, but let's take a look at our Ansible structure first.

[Screenshot: directory tree]

group_vars – holds all the variables, in this case the NTP server config.

veos.yaml – contains the NTP server.

inventory – contains all the hosts in the inventory file.

ntpserver.yaml – the Ansible playbook.

scripts – optional directory; this holds a Python script we will get to later.

templates – directory that holds the Jinja2 templates.

ntpserver.j2 – holds the configuration “ntp server x.x.x.x” (the layout is sketched below).
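
Roughly, the directory lays out like this (a reconstruction; the script file name is an assumption):

.
├── group_vars/
│   └── veos.yaml
├── inventory
├── ntpserver.yaml
├── scripts/
│   └── check_ntp.py
└── templates/
    └── ntpserver.j2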


Let's first check out our ansible-playbook.


[Screenshot: ntpserver.yaml playbook]

This ansible-playbook simply targets the veos hosts, which are located in the inventory file.

The second step uses the eos_template module with templates/ntpserver.j2 and applies the NTP server to each host, like a giant for loop.
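
A minimal sketch of what that playbook might look like, assuming the era's eos_template module with connection details coming from the .eapi file mentioned below; any parameters beyond src are omitted here:

---
- hosts: veos
  gather_facts: no
  tasks:
    - name: render and apply the NTP template to every host
      eos_template:
        src: ntpserver.j2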


[Screenshots: group_vars/veos.yaml, templates/ntpserver.j2, and the resulting NTP configuration]
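
The two files are tiny; reconstructed from the post (the variable name is a guess):

# group_vars/veos.yaml
ntpserver: 10.10.10.10

# templates/ntpserver.j2
ntp server {{ ntpserver }}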


This would simply add an NTP server to an Arista vEOS device, as long as the device is in the .eapi file and in the inventory file.


For those following along, the next step is running this through CI.  That is where Jenkins comes in.  Jenkins can execute the playbook and run through its checks and balances.  Here this is extremely simple, as we are not doing much.  The workflow is, once again, as follows:

[Diagram: devops workflow]


Here is the job we will run.

[Screenshot: Jenkins job configuration]

Here is an example of the job.

[Screenshot: Jenkins job build steps]


The first step is to run the playbook in “--check” mode.  This runs the playbook as a dry run and makes no changes.  It is simply there to catch errors, such as a failure to connect to one of the hosts in the inventory file; if either switch did not connect, the build stops here.


The second step is the actual configuration run.  This applies the configuration to the switch as rendered from the Jinja2 templates.
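
As shell build steps in Jenkins, those two stages would look something like this (a sketch; the inventory path is an assumption):

ansible-playbook -i inventory ntpserver.yaml --check
ansible-playbook -i inventory ntpserver.yaml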


The last step is a Python script shown here.

[Screenshot: verification Python script]
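
A hypothetical version of such a verification step, polling the switch over eAPI and failing the Jenkins build if the NTP server is missing; the host name and credentials are placeholders:

#!/usr/bin/env python
# fail the build (non-zero exit) if the NTP server is absent
import sys
from jsonrpclib import Server

switch = Server("https://admin:admin@veos1/command-api")
resp = switch.runCmds(1, ["enable", "show running-config"], "text")
config = resp[1]["output"]
sys.exit(0 if "ntp server 10.10.10.10" in config else 1)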


Alright, enough typing; let's go ahead and run the job and watch the console in Jenkins CI.

[Screenshot: Jenkins console output]

We were able to run through the tests and everything is successful. Checking the switch, it shows the new NTP server of 10.10.10.10.


Let's purposely fail this setup and try to set the NTP server to something bogus that the switch would never take.


[Screenshot: failed Jenkins build]


This time it failed.  The first dry run works because it can still connect.  The second execution fails when it tries to add the commands to the switch.

Typically at this point either an email or a chat notification is sent out to make the rest of the team aware.


This was a good exercise in walking through CI for network changes.  This is definitely the way to go for testing and checking network changes before they hit live production.




Source: Network Continuous integration using Jenkins, Jinja2 and Ansible.
#13
Tech field day VMware NSX at Interop

I was fortunate enough to go to Interop in Las Vegas on behalf of Tech Field Day.  So I cannot thank them enough for the opportunity to go to Interop.  For me this was my first Interop I was able to take part in, which was really eye-opening and will for […]

[Image: Run NSX]


I was fortunate enough to go to Interop in Las Vegas on behalf of Tech Field Day, and I cannot thank them enough for the opportunity.  This was the first Interop I was able to take part in, which was really eye-opening and will for sure add many new fortes in regard to new trends in networking.


I have been using the NSX product for the better part of almost 3 years in a large environment.  Before NSX I also leveraged VMware vCNS, the predecessor to NSX.  I have written numerous blog posts and scripts related to NSX.  Alright, enough about me and on to the VMware NSX presentations.



Bruce Davie, CTO of VMware's networking and security business unit, presented on the current state of VMware NSX.  The most compelling reasons for customers to evaluate or run NSX in their environments come down to three things.

[Slide: the vision for the future of network virtualization with VMware NSX]

Agility – Having an open and central API that can talk to all components within the NSX stack, for example NSX Edge routers, the distributed firewall, etc.


Security – Allowing true micro-segmentation of VMs at the vNIC level and the ability to virtualize security components as virtual machines.


Application Continuity – Allowing NSX-related virtual machines to live in the private or public cloud.


Where NSX is today, customer-wise:

[Slide: NSX customer numbers today]

Where NSX was 8 months ago at VMworld.

[Slide: NSX customer numbers at VMworld, 8 months earlier]


The numbers are quite impressive: customers doubled, and the number of customers going into production with NSX-based networks tripled.  The spread of verticals is impressive as well; the customer list shows use cases across health care providers, financial institutions, large enterprises, and retailers.



Talking about operations and visibility, Bruce was quick to address criticism about the lack of operational visibility once a customer moves to an overlay network.  VMware took those criticisms to heart and made it a priority to invest in visibility.  Bruce gave a demo, originally from VMworld last year, of vROps, VMware's go-to operational tool for all of their products, with the network management pack installed.  The network management pack monitors and alerts on both the physical and virtual networks.  Traceflow was also mentioned.  Traceflow injects packets from one NSX virtual machine to another to troubleshoot whether a particular service is blocked along the path within the virtual network.


VMware has integrated many third-party monitoring tools into their portfolio.  The two Bruce mentioned were Gigamon, which provides a virtual tap directly into the hypervisor and can also strip VXLAN headers to view raw data packets on the physical network, and Arkin, a super slick UI that gives performance statistics on both the physical and virtual networks as well as vSphere information.



NSX everywhere was quite possibly the most intriguing presentation of the day.  Bruce talked about the future of NSX: taking the security policies within the private cloud and moving those same policies to any hypervisor or bare-metal machine, whether in AWS, Azure, etc.  We also touched on running NSX with VDI and AirWatch, VMware's mobility product.


We talked about the possibilities of future hypervisors and platforms that NSX will integrate with. There are plans to integrate NSX with Hyper-V in the future.  As of today, NSX “Transformers” (NSX-T) is supported on bare-metal Linux, KVM, and vSphere 6.x, while NSX-V will continue to operate on vSphere 6.x.



Source: Tech field day VMware NSX at Interop

#14
Moving docker containers manually and automated.

This is a blog post that will cover part of my container obsession, just the basics.  I'll go over how to manually create a Docker container, move a container anywhere, and publish the container to Docker Hub; finally, I wrote a quick Python script that automates the creation of as many containers […]

This is a blog post that will cover part of my container obsession, just the basics.  I'll go over how to manually create a Docker container, move a container anywhere, and publish the container to Docker Hub; finally, I wrote a quick Python script that automates the creation of as many containers as a user wants.


This is all within my home test lab, which uses the following:

-Ubuntu 14.04 LTS

-Docker 1.11.0

-Python 2.7 (yah yah I know)

-Assuming the docker client and daemon are on the same host.


First things first: in typical Debian fashion we will want to install Docker.  The funny thing about Ubuntu is that a package called docker already existed, so in the Ubuntu world the Docker package is docker.io and we want to run the following commands:


#sudo apt-get update

#sudo apt-get install docker.io

[Screenshot: apt-get install docker.io]


With the 14.04 repository, 1.6 is the current Docker version, which would most likely work for this exercise.  I upgraded to 1.11 for the ipvlan/macvlan project, so we will upgrade to the latest and greatest straight from the Docker source:


#sudo wget -qO- https://get.docker.com/ | sh

This command should grab the latest version off of the docker repo

[Screenshot: Docker version after the upgrade]


Alright, we are about to deploy our very first container within Ubuntu 14.04.  The process is really easy.  I love what Docker has done with containers, packaging everything up so it can be run from a simple one-liner.  So let's get into it.


#sudo docker run -dit ubuntu:14.04.1 /bin/bash

docker run is the command to start a container; the -dit flags run it detached with an interactive TTY, ubuntu:14.04.1 is the container image, and /bin/bash is the command to run.

[Screenshot: docker run pulling the ubuntu image]

It is important to note that this ubuntu image comes directly from Docker Hub.  With Docker Hub, anyone can pull any public image down from anywhere running Docker.  After Docker realizes the ubuntu image is not local, it pulls the image down.  Notice there are 4 pulls: this image is made of 4 layers, which is part of what makes Docker really significant.  What is even more awesome is that if I build something on top of the ubuntu image, I only ever have to update the layer I added; I never have to download the ubuntu image over and over again.


So the container is running within this system.

#sudo docker ps -a

[Screenshot: docker ps -a output]

We can see the container's unique ID (which I will get into in a bit), the image it uses, the command it started with, and how long it has been running.  Ports are something I am not going to get into, but basically we can expose TCP/UDP ports on each one of these containers.


We can easily kill the container with the following commands

#sudo docker stop CONTAINERID

#sudo docker rm CONTAINERID

The container needs to be stopped and then deleted.  Just as an FYI, a stopped container can be brought back up at any point.  Its files live within /var/lib/docker/aufs/diff under the container ID, so they will need to be deleted as well.


So let's go ahead and create our very own Docker container, unique to, let's say, a project, and try to move it from the VM I am using off to another Docker host.


The first thing is to create a file, in whichever directory, called Dockerfile; it has to be called exactly that.

#sudo vim Dockerfile

[Screenshot: Dockerfile contents]

This file is what Docker builds the image from.  The first line is a simple comment.  The FROM ubuntu line says to pull down the ubuntu image.  I am the maintainer.  The next two commands tell the build to first update the repo and then download iperf.  The last simply writes output to a test file so later we know it all worked.
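
A reconstruction of that Dockerfile from the description above; the comment text and test-file strings are guesses:

# iperf test image
FROM ubuntu:14.04
MAINTAINER burnyd
RUN apt-get update
RUN apt-get install -y iperf
RUN echo "it all worked" > /testfile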


We are ready to create our docker image.

#docker build -t firstimage:0.1 .

#docker run -it firstimage:0.1 /bin/bash

[Screenshot: docker build and run output]

We can see that this image has iperf installed, from the apt-get we told it to run in the Dockerfile.


The first option to get this container off of this host is to move it manually.  Once the Docker image is built, it is possible to save it, move it to any Docker platform, and run it there!


Saving the docker image

#docker save -o firstimage.tar firstimage:0.1


The image is now saved; we need to SCP it to another host.

#scp firstimage.tar user@machine:/pathtoscpfile

[Screenshot: scp of the image archive]

The part that always gets me is that this file is only 156MB!


Next, let's jump on the machine we moved the file to, load it, and run it:


#docker load -i firstimage.tar

#docker run -it firstimage:0.1 /bin/bash


[Screenshot: docker load and run on the second host]

With that, we have successfully moved a container the manual way.


Next we will walk through how to push this to Docker Hub.  The first thing to do is sign up for Docker Hub.


So here is my docker hub repository

[Screenshot: Docker Hub repository]

The repository we are going to move firstimage to is going to be called burnyd/iperf-test


To create a Docker tag we first need to find the unique image ID:

#docker images | grep firstimage

[Screenshot: docker images output]

We have found the unique image ID.  Now we create the tag:

#docker tag 4ea31abc539d burnyd/iperf-test:1.0

If we check the images again we can see there is a new image created.






So let's go ahead and push this to Docker Hub.

#docker push burnyd/iperf-test:1.0

[Screenshot: docker push output]

Now if we check our docker hub we should see this image.

[Screenshot: image listed on Docker Hub]

It is that simple.  I should be able to pull this image and run it on any host running Docker; let's try a new host.


#docker pull burnyd/iperf-test:1.0

[Screenshot: docker pull on the new host]

So now we should be able to simply run the image.

#docker run -it burnyd/iperf-test:1.0 /bin/bash

[Screenshot: running the pulled image]

So there we have it.  We can grab this image from anywhere in the world!


The last section of this blog post is an automated script I created to run Docker containers. Schedulers like Kubernetes and Swarm are really the way to go, but I simply created a Python script that prompts the user for how many containers to run and what commands to run in them; I posted it on GitHub here.


In this test we will create 20 containers, let them run, then delete them and tear it all down.  We could easily create 20 iperf client streams to a server, for example.


[Screenshot: creating 20 containers]


Okay, now let's go ahead and tear it all down!

[Screenshot: tearing the containers down]

Here is the code for anyone interested, without going to GitHub.


[Screenshot: the container automation script]
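
A minimal sketch of that kind of script (not the exact GitHub version), assuming Python 2.7 and the docker CLI on the path; run it with sudo or as a user in the docker group:

#!/usr/bin/env python
import subprocess

count = int(raw_input("How many containers? "))
command = raw_input("Command to run in each container: ")

ids = []
for _ in range(count):
    # start each container detached and remember its ID for teardown
    cid = subprocess.check_output(
        ["docker", "run", "-d", "ubuntu:14.04", "sh", "-c", command]).strip()
    ids.append(cid)
    print "started %s" % cid

if raw_input("Tear it all down? (y/n) ") == "y":
    for cid in ids:
        subprocess.call(["docker", "stop", cid])
        subprocess.call(["docker", "rm", cid])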




Source: Moving docker containers manually and automated.
#15
Blogs of Interest and Note / Arista ZTP basics
June 09, 2018, 06:02:05 AM
Arista ZTP basics

ZTP within Arista switches makes deploying infrastructure really easy.  ZTP takes away a lot of human errors and allows for zero touch provisioning of switches.  Zero touch provisioning switches really falls in line with a lot of the SDN/Automation craze.  We have all been there when it comes to installing a switch and doing the […]

Source: Arista ZTP basics