Practical Network Automation, Part 4: Building the Framework
A how-to guide for building an automation framework using Nautobot, Nornir, and a few other tools.
Overview
By the end of this lab, you’ll configure a BGP leaf/spine fabric entirely through Nautobot, then validate the network state with SuzieQ (or the CLI). Here’s an overview of the architecture we’ll build:
Nautobot:
- Front-end for device backups, generating intended configs, pushing configs
- Data storage (intended state of the network)
- Dynamic device automation inventory
GitHub:
- Version control for intended/backup configs, Jinja2 templates, Job code
Nornir:
- The configuration engine used by Nautobot Jobs
SuzieQ:
- Observability tool for network information and validation
My Environment
I’m using two GCP VMs: one for EVE-NG Pro and another for Nautobot. So far I’m happy with the setup. Here are the resources I used:
- Installing EVE-NG - Section 3.4, “GOOGLE CLOUD PLATFORM”
- Making EVE-NG VMs reachable from other VMs in GCP
- Installing Nautobot
Lab Instructions
1. Install a Virtualization App to Run Virtual Network Devices
- For running the virtual devices in your lab, I’d recommend EVE-NG or GNS3.
- This lab is based on Arista’s vEOS-lab image, but the process can be replicated for other vendors’ devices.
- I’m using version vEOS-lab 4.26.5M.
- I will provide Jinja2 templates for Arista’s EOS.
- You’ll need 5 devices for this lab - 2 spines, 3 leafs.
- Out-of-band connectivity to devices from the Nautobot VM is required for this lab.
- The out-of-band network for this lab is 192.168.0.0/24.
- Your out-of-band gateway should be configured with 192.168.0.1/24.
- You may need to run zerotouch cancel on the Arista devices on first boot.
- The credentials (baked into the Jinja2 templates) are admin/admin.
If you follow the details closely (including IP addressing), this solution will work nicely out of the box.
Copy the base config, update the device hostname and management IP address (interface Management1), and provision your devices. That’s right, no ZTP instructions this time. Strap in - lots of copy/paste incoming!
devices:
  - hostname: dc1-spine-1
    mgmt_ip_addr: 192.168.0.2
  - hostname: dc1-spine-2
    mgmt_ip_addr: 192.168.0.3
  - hostname: dc1-leaf-1
    mgmt_ip_addr: 192.168.0.6
  - hostname: dc1-leaf-2
    mgmt_ip_addr: 192.168.0.7
  - hostname: dc1-leaf-3
    mgmt_ip_addr: 192.168.0.8
Base config:
config t
no aaa root
!
username admin privilege 15 role network-admin secret sha512 $6$9Twcc4lhC5y86gWc$ZBcxj95YVFMlKqOxEf/kfGHsDFZaSjRV7YSUcq09xx3ze6CpZkGgBsLUJizyN/M6IWU6b.eFA5Gw5WM/zZof//
!
service routing protocols model multi-agent
!
hostname "hostname"
!
management api http-commands
   no shutdown
!
interface Management1
   ip address "mgmt_ip_addr"/24
!
ip routing
!
ip route 0.0.0.0/0 192.168.0.1
!
end
wr m
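If you’d rather script the copy/paste, here’s a minimal sketch (assuming Python with PyYAML and Jinja2 installed; the file names are hypothetical) that renders one config per device from the base config and the device list above:
# render_base_configs.py: a sketch, assuming base_config.j2 holds the base config
# above with {{ hostname }} and {{ mgmt_ip_addr }} in place of the quoted
# placeholders, and devices.yml holds the device list above.
from pathlib import Path

import yaml
from jinja2 import Template

template = Template(Path("base_config.j2").read_text())
devices = yaml.safe_load(Path("devices.yml").read_text())["devices"]

for device in devices:
    # Fill in hostname and mgmt_ip_addr for this device.
    config = template.render(**device)
    Path(f"{device['hostname']}.cfg").write_text(config)
    print(f"Rendered {device['hostname']}.cfg")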
Once your devices are provisioned, run cables between interfaces like so:
dc1-spine-1 Ethernet1 <-> dc1-leaf-1 Ethernet1
dc1-spine-1 Ethernet2 <-> dc1-leaf-2 Ethernet1
dc1-spine-1 Ethernet3 <-> dc1-leaf-3 Ethernet1
dc1-spine-2 Ethernet1 <-> dc1-leaf-1 Ethernet2
dc1-spine-2 Ethernet2 <-> dc1-leaf-2 Ethernet2
dc1-spine-2 Ethernet3 <-> dc1-leaf-3 Ethernet2
Finally, reboot all devices. This step is necessary!
2. Install Nautobot
If you are labbing with an Arista image, use Python 3.9 to install Nautobot! Python 3.10+ has stricter default TLS/SSL settings that cause connection problems when interfacing with Arista devices. Using Python 3.9 will keep our lives simple for now. More info here
- Use this guide to install Nautobot.
- I installed my Nautobot instance natively on an Ubuntu 22.04 virtual machine.
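Before installing anything, here’s a quick sanity check you can run inside your virtualenv (a hypothetical helper, not part of the install guide):
# check_python.py: abort early if the active interpreter isn't Python 3.9.
import sys

if sys.version_info[:2] != (3, 9):
    raise SystemExit(f"This lab expects Python 3.9, found {sys.version.split()[0]}")
print(f"Python version OK: {sys.version.split()[0]}")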
3. Install Nautobot Apps
If you didn’t create a local_requirements.txt file in Nautobot’s home directory during installation, do so now:
Do this step as the nautobot user.
nautobot@nb:~$ pwd
/opt/nautobot
nautobot@nb:~$ echo -e "nautobot-golden-config\nnautobot-plugin-nornir\nnautobot-bgp-models\nnautobot-device-lifecycle-mgmt" >> local_requirements.txt
nautobot@nb:~$ cat local_requirements.txt
nautobot-golden-config
nautobot-plugin-nornir
nautobot-bgp-models
nautobot-device-lifecycle-mgmt
Install the apps:
nautobot@nb:~$ pip install -r local_requirements.txt
Configure the plugins by copying these settings into the PLUGINS section of your nautobot_config.py:
PLUGINS = ["nautobot_plugin_nornir", "nautobot_golden_config", "nautobot_bgp_models", "nautobot_device_lifecycle_mgmt"]

# Plugins configuration settings. These settings are used by various plugins that the user may have installed.
# Each key in the dictionary is the name of an installed plugin and its value is a dictionary of settings.
#
PLUGINS_CONFIG = {
    "nautobot_golden_config": {
        "per_feature_bar_width": 0.15,
        "per_feature_width": 13,
        "per_feature_height": 4,
        "enable_backup": True,
        "enable_compliance": True,
        "enable_intended": True,
        "enable_sotagg": True,
        "sot_agg_transposer": None,
        "enable_postprocessing": False,
        "postprocessing_callables": [],
        "postprocessing_subscribed": [],
        "platform_slug_map": None,
        # "get_custom_compliance": "my.custom_compliance.func"
    },
    "nautobot_plugin_nornir": {
        "use_config_context": {"secrets": False, "connection_options": True},
        # Optionally set global connection options.
        "connection_options": {
            "napalm": {
                "extras": {
                    "optional_args": {"global_delay_factor": 1},
                },
            },
            "netmiko": {
                "extras": {
                    "global_delay_factor": 1,
                },
            },
        },
        "nornir_settings": {
            "credentials": "nautobot_plugin_nornir.plugins.credentials.settings_vars.CredentialsSettingsVars",
            "runner": {
                "plugin": "threaded",
                "options": {
                    "num_workers": 20,
                },
            },
        },
        "username": "admin",
        "password": "admin",
    },
    "nautobot_bgp_models": {
        "default_statuses": {
            "AutonomousSystem": ["active", "available", "planned"],
            "Peering": ["active", "decommissioned", "deprovisioning", "offline", "planned", "provisioning"],
        }
    },
    "nautobot_device_lifecycle_mgmt": {
        "barchart_bar_width": float(os.environ.get("BARCHART_BAR_WIDTH", 0.1)),
        "barchart_width": int(os.environ.get("BARCHART_WIDTH", 12)),
        "barchart_height": int(os.environ.get("BARCHART_HEIGHT", 5)),
    },
}
Also in nautobot_config.py, add admin credentials for use with NAPALM:
NAPALM_USERNAME = os.getenv("NAUTOBOT_NAPALM_USERNAME", "admin")
NAPALM_PASSWORD = os.getenv("NAUTOBOT_NAPALM_PASSWORD", "admin")
Run the following to apply database migrations:
nautobot@nb:~$ nautobot-server post-upgrade
Finally, as a non-nautobot user (one with sudo privileges), restart Nautobot and related services:
nautobot@nb:~$ exit
logout
housepbass@nb:~$ sudo systemctl restart nautobot nautobot-worker nautobot-scheduler
4. Populate Nautobot Data
Navigate to the URL for your new Nautobot instance and get a feel for the layout. Move on once you’re semi-comfortable navigating through the views.
For each data set below, find the blue Import button and copy/paste these lines to import the data.
- Region
name,slug,parent,description
Main Campus,main-campus,,"Main Campus Location"
- Sites
name,status,region,facility
Primary Data Center,active,Main Campus,DC1
- Racks
site,name,status,width,u_height
Primary Data Center,DC1-R1-R1,active,19,42
Primary Data Center,DC1-R1-R2,active,19,42
- Manufacturer
name
Arista
- Device Type
manufacturer: Arista
model: vEOS-lab
slug: veos-lab
u_height: 1
is_full_depth: true
interfaces:
  - name: Ethernet1
    type: 1000base-t
  - name: Ethernet2
    type: 1000base-t
  - name: Ethernet3
    type: 1000base-t
  - name: Ethernet4
    type: 1000base-t
  - name: Ethernet5
    type: 1000base-t
  - name: Ethernet6
    type: 1000base-t
  - name: Ethernet7
    type: 1000base-t
  - name: Ethernet8
    type: 1000base-t
  - name: Management1
    type: 1000base-t
    mgmt_only: true
- Platforms
name,slug,manufacturer,napalm_driver
Arista,arista-eos,Arista,eos
- Device Roles
name,color
Leaf,0000ff
Spine,00ff00
- Devices
manufacturer,device_type,status,site,platform,rack,position,face,device_role,name
Arista,vEOS-lab,active,Primary Data Center,Arista,DC1-R1-R1,38,front,Spine,dc1-spine-1
Arista,vEOS-lab,active,Primary Data Center,Arista,DC1-R1-R1,37,front,Spine,dc1-spine-2
Arista,vEOS-lab,active,Primary Data Center,Arista,DC1-R1-R2,38,front,Leaf,dc1-leaf-1
Arista,vEOS-lab,active,Primary Data Center,Arista,DC1-R1-R2,37,front,Leaf,dc1-leaf-2
Arista,vEOS-lab,active,Primary Data Center,Arista,DC1-R1-R2,36,front,Leaf,dc1-leaf-3
- Interfaces
device,name,type,status
dc1-spine-1,Loopback0,virtual,active
dc1-spine-2,Loopback0,virtual,active
dc1-leaf-1,Loopback0,virtual,active
dc1-leaf-2,Loopback0,virtual,active
dc1-leaf-3,Loopback0,virtual,active
- VRF
name
management
- Prefixes
prefix,vrf,tenant,site,location,vlan_group,vlan,status,role,is_pool,description
10.0.0.0/8,,,Primary Data Center,,,,active,,False,DC1 Supernet
10.0.0.0/24,,,Primary Data Center,,,,active,,True,DC1 Loopbacks
10.0.1.0/24,,,Primary Data Center,,,,active,,True,dc1-spine-1 Underlay Transits
10.0.1.0/31,,,Primary Data Center,,,,active,,True,
10.0.1.2/31,,,Primary Data Center,,,,active,,True,
10.0.1.4/31,,,Primary Data Center,,,,active,,True,
10.0.2.0/24,,,Primary Data Center,,,,active,,True,dc1-spine-2 Underlay Transits
10.0.2.0/31,,,Primary Data Center,,,,active,,True,
10.0.2.2/31,,,Primary Data Center,,,,active,,True,
10.0.2.4/31,,,Primary Data Center,,,,active,,True,
192.168.0.0/24,management,,Primary Data Center,,,,active,,True,OOB Mgmt
- IP addresses
address,vrf,status,device,interface,is_primary,description
10.0.0.1/32,,active,dc1-spine-1,Loopback0,false,
10.0.0.2/32,,active,dc1-spine-2,Loopback0,false,
10.0.0.5/32,,active,dc1-leaf-1,Loopback0,false,
10.0.0.6/32,,active,dc1-leaf-2,Loopback0,false,
10.0.0.7/32,,active,dc1-leaf-3,Loopback0,false,
10.0.1.0/31,,active,dc1-spine-1,Ethernet1,false,
10.0.1.1/31,,active,dc1-leaf-1,Ethernet1,false,
10.0.1.2/31,,active,dc1-spine-1,Ethernet2,false,
10.0.1.3/31,,active,dc1-leaf-2,Ethernet1,false,
10.0.1.4/31,,active,dc1-spine-1,Ethernet3,false,
10.0.1.5/31,,active,dc1-leaf-3,Ethernet1,false,
10.0.2.0/31,,active,dc1-spine-2,Ethernet1,false,
10.0.2.1/31,,active,dc1-leaf-1,Ethernet2,false,
10.0.2.2/31,,active,dc1-spine-2,Ethernet2,false,
10.0.2.3/31,,active,dc1-leaf-2,Ethernet2,false,
10.0.2.4/31,,active,dc1-spine-2,Ethernet3,false,
10.0.2.5/31,,active,dc1-leaf-3,Ethernet2,false,
192.168.0.1/24,management,reserved,,,,OOB Gateway
192.168.0.2/24,management,active,dc1-spine-1,Management1,true,
192.168.0.3/24,management,active,dc1-spine-2,Management1,true,
192.168.0.4/24,management,reserved,,,,
192.168.0.5/24,management,reserved,,,,
192.168.0.6/24,management,active,dc1-leaf-1,Management1,true,
192.168.0.7/24,management,active,dc1-leaf-2,Management1,true,
192.168.0.8/24,management,active,dc1-leaf-3,Management1,true,
- Cables
status,side_a_type,side_b_type,side_a_name,side_b_name,side_a_device,side_b_device
connected,dcim.interface,dcim.interface,Ethernet1,Ethernet1,dc1-spine-1,dc1-leaf-1
connected,dcim.interface,dcim.interface,Ethernet2,Ethernet1,dc1-spine-1,dc1-leaf-2
connected,dcim.interface,dcim.interface,Ethernet3,Ethernet1,dc1-spine-1,dc1-leaf-3
connected,dcim.interface,dcim.interface,Ethernet1,Ethernet2,dc1-spine-2,dc1-leaf-1
connected,dcim.interface,dcim.interface,Ethernet2,Ethernet2,dc1-spine-2,dc1-leaf-2
connected,dcim.interface,dcim.interface,Ethernet3,Ethernet2,dc1-spine-2,dc1-leaf-3
- Config Contexts
- Name: Spine BGP
- Weight: 1000
- Roles: 'Spine'
- Data:
{
    "router_bgp": {
        "cluster_id": "10.0.0.1",
        "evpn_overlay_neighbors": [
            {
                "address": "10.0.0.5",
                "description": "dc1-leaf-1"
            },
            {
                "address": "10.0.0.6",
                "description": "dc1-leaf-2"
            },
            {
                "address": "10.0.0.7",
                "description": "dc1-leaf-3"
            }
        ],
        "local_asn": "65000"
    }
}
- Name: Leaf BGP
- Weight: 1000
- Roles: 'Leaf'
- Data:
{
    "router_bgp": {
        "evpn_overlay_neighbors": [
            {
                "address": "10.0.0.1",
                "description": "dc1-spine-1"
            },
            {
                "address": "10.0.0.2",
                "description": "dc1-spine-2"
            }
        ],
        "local_asn": "65000"
    }
}
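To make the mapping concrete, here’s a hypothetical Jinja2 fragment showing how a template could consume these keys (the real EOS templates ship in the repo we attach in the next step, and may differ):
{# Hypothetical fragment: assumes the device's config_context is exposed to the template. #}
router bgp {{ config_context.router_bgp.local_asn }}
{% for neighbor in config_context.router_bgp.evpn_overlay_neighbors %}
   neighbor {{ neighbor.address }} remote-as {{ config_context.router_bgp.local_asn }}
   neighbor {{ neighbor.address }} description {{ neighbor.description }}
{% endfor %}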
5. Attach GitHub Repository
We’ll use the Golden Config app to manage device backups and intended configurations. This app requires a GitHub repo for configuration and template storage. Lucky for you, there’s a public repo ready for use out of the box! The only thing you’ll need is a new private repo. We’re going to duplicate my public repo into your private repo to give you a head start on this project.
GitHub’s docs for this process can be found here
Head to GitHub and create a new private repository. Once it’s created, follow this process in your Linux or Mac terminal.
Clone the public repo onto your machine:
git clone --bare https://github.com/housepbass9664/nautobot-blog-repo.git
Enter the created directory:
cd nautobot-blog-repo.git
Push the contents of my provided repo to your private repo:
git push --mirror [email protected]:YOUR_GITHUB_USER/YOUR_NEW_PRIVATE_REPO.git
Delete the local bare clone:
cd ..
rm -rf nautobot-blog-repo.git
Your repo is now ready to be used by your Nautobot instance. Let’s attach your instance to this repo.
Navigate to Extensibility -> Git Repositories -> Add. Fill out the fields, and in the Provides section choose the following options:
jobs
backup configs
intended configs
jinja templates
Click Create & Sync. If the Job status is Completed, you’re good to move on to the next section. Otherwise, spend time getting your repo connected.
This feature is well documented in the NTC docs.
6. Finish Golden Config App Settings
This app leverages Nautobot’s integrated GraphQL API to query device data for use with the Jinja2 templates; this is how intended configs are generated. Add a GraphQL query now:
Navigate to Extensibility -> GraphQL Queries -> Add, choose a name, and populate the query:
query ($device_id: ID!) {
  device(id: $device_id) {
    config_context
    hostname: name
    rack {
      name
    }
    serial
    device_role {
      name
    }
    interfaces {
      description
      enabled
      name
      mode
      mtu
      type
      lag {
        name
      }
      ip_addresses {
        address
        family
        tags {
          name
        }
        vrf {
          name
        }
      }
      connected_circuit_termination {
        circuit {
          cid
          commit_rate
          provider {
            name
          }
        }
      }
      tagged_vlans {
        name
        vid
      }
      untagged_vlan {
        name
        vid
      }
      cable {
        termination_a_type
        status {
          name
        }
        color
      }
      tags {
        name
      }
    }
  }
}
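You can sanity-check the query before wiring it into Golden Config by POSTing it to Nautobot’s GraphQL endpoint. Here’s a minimal sketch, assuming the requests library and hypothetical token/device-UUID values:
# test_graphql.py: a sketch; substitute your own Nautobot URL, API token, and device UUID.
import requests

NAUTOBOT_URL = "http://localhost:8080"
TOKEN = "replace-with-your-api-token"
DEVICE_ID = "replace-with-a-device-uuid"

QUERY = """
query ($device_id: ID!) {
  device(id: $device_id) {
    hostname: name
    config_context
  }
}
"""

response = requests.post(
    f"{NAUTOBOT_URL}/api/graphql/",
    headers={"Authorization": f"Token {TOKEN}"},
    json={"query": QUERY, "variables": {"device_id": DEVICE_ID}},
    timeout=30,
)
response.raise_for_status()
print(response.json()["data"]["device"])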
Navigate to Golden Config -> Settings -> Default Settings -> Edit and update the following fields:
- Backup repository: your repo
- Backup Path: backup/{{obj.site.slug}}/{{obj.name}}.cfg
- Intended repository: your repo
- Intended Path: intended/{{obj.site.slug}}/{{obj.name}}.cfg
- Jinja repository: your repo
- Template Path: templates/{{obj.platform.slug}}.j2
- Sot agg query: your GraphQL query
For dc1-spine-1, for example, the backup path resolves to backup/primary-data-center/dc1-spine-1.cfg.
Save the settings via Update.
7. Provision the Devices
Now we’re ready to configure the devices.
Go to Jobs -> Jobs and set enabled to true for each job.
From the same Jobs page, click Generate Intended Configurations and then Run/Schedule.
This page is where you choose the inventory to run the job against. By not specifying any devices, the job will run against all of them. Click Run Job Now.
Your intended configurations have now been generated and can be viewed from several places, including the Device page and the Golden Config app homepage.
Now we need to push the new intended configurations.
From the Jobs page, click the blue run button on the Replace configuration job. Again, don’t specify any targets, so the job runs against all devices. Uncheck Dry Run, then click Run Job Now.
Since we replaced the configuration, any lines that needed to be deleted are now gone and all the new lines we want are configured. Hooray for declarative provisioning!
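For a sense of what the job does under the hood, here’s a minimal standalone sketch of a NAPALM config replace against a single device (the intended-config path is hypothetical):
# config_replace.py: a sketch of the replace operation outside of Nautobot.
from napalm import get_network_driver

driver = get_network_driver("eos")
device = driver(hostname="192.168.0.2", username="admin", password="admin")

device.open()
try:
    # Stage the full intended config as a replacement candidate.
    device.load_replace_candidate(filename="intended/primary-data-center/dc1-spine-1.cfg")
    print(device.compare_config())  # the diff a Dry Run would report
    device.commit_config()          # replace the running config
finally:
    device.close()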
8. Observe the Network
Traditionally, validating a deployment like this means logging into each router and running CLI commands:
show lldp neighbors
show ip ospf neighbor
show bgp evpn summary
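Those checks are easy to script, too; here’s a minimal Netmiko sketch that loops over the OOB addresses from earlier:
# cli_checks.py: a sketch that runs the validation commands on every device.
from netmiko import ConnectHandler

HOSTS = ["192.168.0.2", "192.168.0.3", "192.168.0.6", "192.168.0.7", "192.168.0.8"]
COMMANDS = ["show lldp neighbors", "show ip ospf neighbor", "show bgp evpn summary"]

for host in HOSTS:
    with ConnectHandler(device_type="arista_eos", host=host, username="admin", password="admin") as conn:
        for command in COMMANDS:
            print(f"=== {host}: {command} ===")
            print(conn.send_command(command))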
Alternatively, you can use an observability tool named SuzieQ, which lets you skip the manual CLI checks altogether. I’m going to use SuzieQ to validate the deployment now.
Are the devices reachable?
suzieq> device show
namespace hostname model version vendor architecture status address bootupTimestamp
0 network dc1-leaf-1 vEOS-lab 4.26.5M Arista i686 alive 192.168.0.6 2023-04-19 17:33:30+00:00
1 network dc1-leaf-2 vEOS-lab 4.26.5M Arista i686 alive 192.168.0.7 2023-04-19 17:33:24+00:00
2 network dc1-leaf-3 vEOS-lab 4.26.5M Arista i686 alive 192.168.0.8 2023-04-19 17:33:23+00:00
3 network dc1-spine-1 vEOS-lab 4.26.5M Arista i686 alive 192.168.0.2 2023-04-19 17:33:28+00:00
4 network dc1-spine-2 vEOS-lab 4.26.5M Arista i686 alive 192.168.0.3 2023-04-19 17:33:30+00:00
Do all devices have LLDP adjacency?
suzieq> lldp show hostname=dc1-spine-1
namespace hostname ifname peerHostname peerIfname description mgmtIP
0 network dc1-spine-1 Ethernet1 dc1-leaf-1 Ethernet1 Arista Networks EOS version 4.26.5M running on... 10.0.0.5
1 network dc1-spine-1 Ethernet2 dc1-leaf-2 Ethernet1 Arista Networks EOS version 4.26.5M running on... 10.0.0.6
2 network dc1-spine-1 Ethernet3 dc1-leaf-3 Ethernet1 Arista Networks EOS version 4.26.5M running on... 10.0.0.7
suzieq> lldp show hostname=dc1-spine-2
namespace hostname ifname peerHostname peerIfname description mgmtIP
0 network dc1-spine-2 Ethernet1 dc1-leaf-1 Ethernet2 Arista Networks EOS version 4.26.5M running on... 10.0.0.5
1 network dc1-spine-2 Ethernet2 dc1-leaf-2 Ethernet2 Arista Networks EOS version 4.26.5M running on... 10.0.0.6
2 network dc1-spine-2 Ethernet3 dc1-leaf-3 Ethernet2 Arista Networks EOS version 4.26.5M running on... 10.0.0.7
What about OSPF and BGP adjacencies?
suzieq> ospf show
namespace hostname vrf ifname peerHostname area ifState nbrCount adjState peerIP numChanges lastChangeTime
0 network dc1-leaf-1 default Ethernet1 dc1-spine-1 0.0.0.0 up 1 full 10.0.1.0 6.0 2023-04-19 17:40:39.647000+00:00
1 network dc1-leaf-1 default Ethernet2 dc1-spine-2 0.0.0.0 up 1 full 10.0.2.0 7.0 2023-04-19 17:52:34.918000+00:00
2 network dc1-leaf-1 default Loopback0 0.0.0.0 up 0 fail - 0.0 1970-01-01 00:00:00+00:00
3 network dc1-leaf-2 default Ethernet1 dc1-spine-1 0.0.0.0 up 1 full 10.0.1.2 6.0 2023-04-19 17:40:08.252000+00:00
4 network dc1-leaf-2 default Ethernet2 dc1-spine-2 0.0.0.0 up 1 full 10.0.2.2 6.0 2023-04-19 17:52:34.252000+00:00
5 network dc1-leaf-2 default Loopback0 0.0.0.0 up 0 fail - 0.0 1970-01-01 00:00:00+00:00
6 network dc1-leaf-3 default Ethernet1 dc1-spine-1 0.0.0.0 up 1 full 10.0.1.4 6.0 2023-04-19 17:39:48.680000+00:00
7 network dc1-leaf-3 default Ethernet2 dc1-spine-2 0.0.0.0 up 1 full 10.0.2.4 6.0 2023-04-19 17:52:34.528000+00:00
8 network dc1-leaf-3 default Loopback0 0.0.0.0 up 0 fail - 0.0 1970-01-01 00:00:00+00:00
9 network dc1-spine-1 default Ethernet1 dc1-leaf-1 0.0.0.0 up 1 full 10.0.1.1 7.0 2023-04-19 17:40:39.055000+00:00
10 network dc1-spine-1 default Ethernet2 dc1-leaf-2 0.0.0.0 up 1 full 10.0.1.3 7.0 2023-04-19 17:40:08.057000+00:00
11 network dc1-spine-1 default Ethernet3 dc1-leaf-3 0.0.0.0 up 1 full 10.0.1.5 7.0 2023-04-19 17:39:48.057000+00:00
12 network dc1-spine-1 default Loopback0 0.0.0.0 up 0 fail - 0.0 1970-01-01 00:00:00+00:00
13 network dc1-spine-2 default Ethernet1 dc1-leaf-1 0.0.0.0 up 1 full 10.0.2.1 7.0 2023-04-19 17:52:34.846000+00:00
14 network dc1-spine-2 default Ethernet2 dc1-leaf-2 0.0.0.0 up 1 full 10.0.2.3 7.0 2023-04-19 17:52:33.847000+00:00
15 network dc1-spine-2 default Ethernet3 dc1-leaf-3 0.0.0.0 up 1 full 10.0.2.5 7.0 2023-04-19 17:52:34.846000+00:00
16 network dc1-spine-2 default Loopback0 0.0.0.0 up 0 fail - 0.0 1970-01-01 00:00:00+00:00
suzieq> bgp show
namespace hostname vrf peer peerHostname state afi safi asn peerAsn pfxRx pfxTx numChanges estdTime
0 network dc1-leaf-1 default 10.0.0.1 dc1-spine-1 Established ipv4 unicast 65000 65000 0 0 1 2023-04-19 00:01:44.994000+00:00
1 network dc1-leaf-1 default 10.0.0.1 dc1-spine-1 Established ipv6 unicast 65000 65000 0 0 1 2023-04-19 00:01:44.994000+00:00
2 network dc1-leaf-1 default 10.0.0.1 dc1-spine-1 Established l2vpn evpn 65000 65000 0 0 1 2023-04-19 00:01:44.994000+00:00
3 network dc1-leaf-1 default 10.0.0.2 dc1-spine-2 Established ipv4 unicast 65000 65000 0 0 1 2023-04-19 00:00:58.991000+00:00
4 network dc1-leaf-1 default 10.0.0.2 dc1-spine-2 Established ipv6 unicast 65000 65000 0 0 1 2023-04-19 00:00:58.991000+00:00
5 network dc1-leaf-1 default 10.0.0.2 dc1-spine-2 Established l2vpn evpn 65000 65000 0 0 1 2023-04-19 00:00:58.991000+00:00
6 network dc1-leaf-2 default 10.0.0.1 dc1-spine-1 Established ipv4 unicast 65000 65000 0 0 1 2023-04-19 00:01:53.994000+00:00
7 network dc1-leaf-2 default 10.0.0.1 dc1-spine-1 Established ipv6 unicast 65000 65000 0 0 1 2023-04-19 00:01:53.994000+00:00
8 network dc1-leaf-2 default 10.0.0.1 dc1-spine-1 Established l2vpn evpn 65000 65000 0 0 1 2023-04-19 00:01:53.994000+00:00
9 network dc1-leaf-2 default 10.0.0.2 dc1-spine-2 Established ipv4 unicast 65000 65000 0 0 1 2023-04-19 00:00:06.988000+00:00
10 network dc1-leaf-2 default 10.0.0.2 dc1-spine-2 Established ipv6 unicast 65000 65000 0 0 1 2023-04-19 00:00:06.988000+00:00
11 network dc1-leaf-2 default 10.0.0.2 dc1-spine-2 Established l2vpn evpn 65000 65000 0 0 1 2023-04-19 00:00:06.988000+00:00
12 network dc1-leaf-3 default 10.0.0.1 dc1-spine-1 Established ipv4 unicast 65000 65000 0 0 1 2023-04-19 00:00:18.994000+00:00
13 network dc1-leaf-3 default 10.0.0.1 dc1-spine-1 Established ipv6 unicast 65000 65000 0 0 1 2023-04-19 00:00:18.994000+00:00
14 network dc1-leaf-3 default 10.0.0.1 dc1-spine-1 Established l2vpn evpn 65000 65000 0 0 1 2023-04-19 00:00:18.994000+00:00
15 network dc1-leaf-3 default 10.0.0.2 dc1-spine-2 Established ipv4 unicast 65000 65000 0 0 1 2023-04-19 00:00:18.992000+00:00
16 network dc1-leaf-3 default 10.0.0.2 dc1-spine-2 Established ipv6 unicast 65000 65000 0 0 1 2023-04-19 00:00:18.992000+00:00
17 network dc1-leaf-3 default 10.0.0.2 dc1-spine-2 Established l2vpn evpn 65000 65000 0 0 1 2023-04-19 00:00:18.992000+00:00
18 network dc1-spine-1 default 10.0.0.5 dc1-leaf-1 Established ipv4 unicast 65000 65000 0 0 1 2023-04-19 00:01:45.994000+00:00
19 network dc1-spine-1 default 10.0.0.5 dc1-leaf-1 Established ipv6 unicast 65000 65000 0 0 1 2023-04-19 00:01:45.994000+00:00
20 network dc1-spine-1 default 10.0.0.5 dc1-leaf-1 Established l2vpn evpn 65000 65000 0 0 1 2023-04-19 00:01:45.994000+00:00
21 network dc1-spine-1 default 10.0.0.6 dc1-leaf-2 Established ipv4 unicast 65000 65000 0 0 1 2023-04-19 00:01:45.992000+00:00
22 network dc1-spine-1 default 10.0.0.6 dc1-leaf-2 Established ipv6 unicast 65000 65000 0 0 1 2023-04-19 00:01:45.992000+00:00
23 network dc1-spine-1 default 10.0.0.6 dc1-leaf-2 Established l2vpn evpn 65000 65000 0 0 1 2023-04-19 00:01:45.992000+00:00
24 network dc1-spine-1 default 10.0.0.7 dc1-leaf-3 Established ipv4 unicast 65000 65000 0 0 1 2023-04-19 00:00:02.988000+00:00
25 network dc1-spine-1 default 10.0.0.7 dc1-leaf-3 Established ipv6 unicast 65000 65000 0 0 1 2023-04-19 00:00:02.988000+00:00
26 network dc1-spine-1 default 10.0.0.7 dc1-leaf-3 Established l2vpn evpn 65000 65000 0 0 1 2023-04-19 00:00:02.988000+00:00
27 network dc1-spine-2 default 10.0.0.5 dc1-leaf-1 Established ipv4 unicast 65000 65000 0 0 1 2023-04-19 00:00:39.994000+00:00
28 network dc1-spine-2 default 10.0.0.5 dc1-leaf-1 Established ipv6 unicast 65000 65000 0 0 1 2023-04-19 00:00:39.994000+00:00
29 network dc1-spine-2 default 10.0.0.5 dc1-leaf-1 Established l2vpn evpn 65000 65000 0 0 1 2023-04-19 00:00:39.994000+00:00
30 network dc1-spine-2 default 10.0.0.6 dc1-leaf-2 Established ipv4 unicast 65000 65000 0 0 1 2023-04-19 00:00:39.992000+00:00
31 network dc1-spine-2 default 10.0.0.6 dc1-leaf-2 Established ipv6 unicast 65000 65000 0 0 1 2023-04-19 00:00:39.992000+00:00
32 network dc1-spine-2 default 10.0.0.6 dc1-leaf-2 Established l2vpn evpn 65000 65000 0 0 1 2023-04-19 00:00:39.992000+00:00
33 network dc1-spine-2 default 10.0.0.7 dc1-leaf-3 Established ipv4 unicast 65000 65000 0 0 1 2023-04-19 00:00:43.988000+00:00
34 network dc1-spine-2 default 10.0.0.7 dc1-leaf-3 Established ipv6 unicast 65000 65000 0 0 1 2023-04-19 00:00:43.988000+00:00
35 network dc1-spine-2 default 10.0.0.7 dc1-leaf-3 Established l2vpn evpn 65000 65000 0 0 1 2023-04-19 00:00:43.988000+00:00
Everything looks good! I can also run assert tests against the network to get more specific about what’s valid:
suzieq> bgp assert
namespace hostname vrf peer asn peerAsn state peerHostname result assertReason
0 network dc1-leaf-1 default 10.0.0.1 65000 65000 Established dc1-spine-1 pass -
1 network dc1-leaf-1 default 10.0.0.2 65000 65000 Established dc1-spine-2 pass -
2 network dc1-leaf-2 default 10.0.0.1 65000 65000 Established dc1-spine-1 pass -
3 network dc1-leaf-2 default 10.0.0.2 65000 65000 Established dc1-spine-2 pass -
4 network dc1-leaf-3 default 10.0.0.1 65000 65000 Established dc1-spine-1 pass -
5 network dc1-leaf-3 default 10.0.0.2 65000 65000 Established dc1-spine-2 pass -
6 network dc1-spine-1 default 10.0.0.5 65000 65000 Established dc1-leaf-1 pass -
7 network dc1-spine-1 default 10.0.0.6 65000 65000 Established dc1-leaf-2 pass -
8 network dc1-spine-1 default 10.0.0.7 65000 65000 Established dc1-leaf-3 pass -
9 network dc1-spine-2 default 10.0.0.5 65000 65000 Established dc1-leaf-1 pass -
10 network dc1-spine-2 default 10.0.0.6 65000 65000 Established dc1-leaf-2 pass -
11 network dc1-spine-2 default 10.0.0.7 65000 65000 Established dc1-leaf-3 pass -
Assert passed
SuzieQ deserves a standalone deep-dive, so I’ll expand on these outputs and SuzieQ’s capabilities in a later blog post. Stay tuned!
9. Final Thoughts
Obviously this tutorial was copy/paste heavy, but it gives you a usable foundation to experiment with and learn from. You can extend it to support other platforms, modify the Jinja2 templates to suit your configuration needs, model and test different network designs, write custom Jobs, and more, all on top of Nautobot’s fantastic out-of-the-box feature set.
Feel free to message me on LinkedIn if you have questions or need assistance getting through this tutorial.
Side note: I’m open to suggestions for reducing the copy/paste material. I considered writing a custom Job or a pg_dump to seed the learner’s Nautobot instance, but I figured copy/paste would be the easiest to follow and the most compatible approach.
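For example, a pynautobot sketch along these lines (URL and token are hypothetical) could seed some of the step 4 records programmatically:
# seed_nautobot.py: a sketch of scripted seeding with pynautobot.
import pynautobot

nautobot = pynautobot.api("http://localhost:8080", token="replace-with-your-api-token")

# Create the manufacturer and platform from the step 4 import data.
manufacturer = nautobot.dcim.manufacturers.create(name="Arista", slug="arista")
platform = nautobot.dcim.platforms.create(
    name="Arista",
    slug="arista-eos",
    manufacturer=manufacturer.id,
    napalm_driver="eos",
)
print(f"Created manufacturer {manufacturer.name} and platform {platform.name}")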