I am getting a reply from tunnel0 of the home router. However, I am able to communicate with the radios at work from home. 10.10.10.1 is VLAN 10 on the home router.
This is likely a difference between you using real hardware and me using a virtual instance of IOSv. There are random nuances like that. The most important things are that the multicast traffic transited the tunnel and your system works.
I did have to make a few changes to the config you posted on both routers. The home router's FA0/0 is the WAN port and is configured for DHCP; it gets its address from the AT&T NVG router that AT&T requires. On both routers the VLANs are their own interfaces, and on the home router I did not implement ip route 0.0.0.0 0.0.0.0 200.200.200.254, or actually any ip route that would point anything outside. Instead I used the INSIDE_NETWORKS ACL to limit what I wanted pointed outside my router, in this case VLANs 1 & 2. I would be in big trouble if the iPads, computers, or Alexa didn't work: not so happy wife!
Yes, I was expecting you'd have to make changes to adapt to the difference in interface names and the fact you were using DHCP for your WAN interface. I wanted you to see which IPs went where, so I hard coded the hub's WAN interface; that way you'd see the IP listed in the configuration and be able to match it to the commands where it was necessary to enter your WAN IP.
I wanted to take a moment and clarify what the commands do so that you understand them for when you roll out the other spokes.
This enables multicast routing (you can verify with show ip mroute):
ip multicast-routing
The following config is on the Hub side of the network. We don't have to specify any IPs or mappings because in DMVPN, all spokes phone home into the hub and do the initiation of the tunnel.
interface Tunnel0
ip address 172.16.0.1 255.255.255.0 <-- All Tunnel Interfaces will need to be in this subnet. Think of the Tunnel 0 interfaces as all directly connected to each other on the same switch. They will need to directly communicate with each other, so we put them all in the same subnet.
no ip redirects
ip mtu 1400
ip pim sparse-dense-mode <-- This tells the Tunnel0 interface to participate in PIM
ip nhrp authentication cisco123 <-- This is like the DMVPN password. It's used to control who can join the DMVPN network
ip nhrp network-id 1 <-- This is a unique number that identifies this particular DMVPN Network Instance. It will be the same across all devices on this DMVPN network. Sometimes we do crazy things like run multiple parallel DMVPN networks using the same routers on either end. We might do this to create private networks for different customers, or different classes of service or to match different physical paths. I've done it for numerous reasons over the years.
tunnel source GigabitEthernet0/0 <-- This sets the source IP address of the encapsulated tunnel traffic.
tunnel mode gre multipoint <-- This enables a form of GRE tunnel (mGRE) which supports multiple endpoints
interface GigabitEthernet0/0
ip address 200.200.200.1 255.255.255.0 <-- This is my hardcoded WAN IP. I put it here because you will have to manually enter the WAN IP of the Hub in all of the Spokes so they know where to phone home to.
ip nat outside
ip virtual-reassembly in
!
interface GigabitEthernet0/1
description VLAN10
ip address 10.10.10.1 255.255.255.0 <-- This is the local LAN subnet and must be unique at each site. If using static routing you will need to manually enter routes on the Hub side that point to this network across the tunnels.
ip pim sparse-dense-mode
ip nat inside
ip virtual-reassembly in
ip igmp join-group 225.8.11.81 <-- This forces the IGMP join for this LAN segment. If your device uses IGMP you don't have to do this and you might not want to as you'd want the multicast traffic to only flow when there's a receiver actually asking for it.
!
ip nat inside source list INSIDE_NETWORKS interface GigabitEthernet0/0 overload <-- While people often call this NAT, it's actually PAT since we are having multiple source IPs all being masked as the same outside IP address. I point out this distinction because if you go looking in the help docs you'll need to look at PAT instead of NAT.
ip route 0.0.0.0 0.0.0.0 200.200.200.254 <-- Since I hard coded the WAN IP I need to set the gateway of last resort (the default route). When you get your WAN IP via DHCP, the DHCP reply will normally include the default gateway and you won't need to hard code this. In fact, you shouldn't if you're using DHCP, because your default gateway may change if your WAN IP changes.
ip route 10.10.11.0 255.255.255.0 172.16.0.2 <-- Since I'm not running OSPF, EIGRP or RIP (ewww) I have to set a static route that points traffic destined for the Spoke LAN to the proper Spoke tunnel endpoint IP. You'll have to do this for every single spoke you add unless you enable a dynamic routing protocol.
!
ip access-list standard INSIDE_NETWORKS
permit 10.10.10.0 0.0.0.255 <-- This is the ACL that controls the PAT translation. It matches on source IP and if the source IP is in the permit, it translates the source IP to be the WAN IP per the PAT command earlier.
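A quick aside on verification before we move to the spoke: once a spoke registers, there are a few standard IOS show commands that are handy for checking the hub side. (These are generic commands, nothing specific to your config; output format varies a bit by platform.)

show dmvpn <-- Lists each spoke's tunnel IP, its NBMA (WAN) address, and the tunnel state
show ip nhrp <-- Shows the NHRP mappings the hub has learned from spoke registrations
show ip mroute <-- Shows the multicast routing table so you can confirm the group is being forwarded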
Now let's look at the Spoke side. I'll remove most of the other config to make this simpler and focus only on the DMVPN portion.
interface Tunnel0
ip address 172.16.0.2 255.255.255.0 <-- As we said above, each tunnel endpoint must have a unique IP in the same subnet as all the others. I hate .1 default gateways and as a matter of preference always put my upstreams at the top of the IP block. For instance if I did your config, my Hub would have been numbered 172.16.0.254. Then I would number my spokes starting at 172.16.0.1. That way Spoke 1 would be .1 and Spoke 2 would be .2 and so on. It's just my personal preference. Doing things like that also makes DHCP pools cleaner if you ever set those up because they start issuing addresses at the bottom of the subnet (.1) and I put all my infrastructure stuff at the top. Usually I reserve .250 - .254 for things I might need to run the network. Like if I use HSRP I'd have Router1 as .253, Router2 as .252 and my HSRP IP would be .254. Anyway it's again just personal preference.
no ip redirects
ip mtu 1400
ip pim sparse-dense-mode
ip nhrp authentication cisco123
ip nhrp map 172.16.0.1 200.200.200.1 <-- This is the first command that is unique to spokes. We need to know how to reach the hub since we are the one initiating contact. This command gives us a mapping of the inner tunnel IP of the Hub to its outer WAN IP. We only have to enter the mapping statically for the Hub routers because all other routers will be learned dynamically on the fly via NHRP. If you were running Dual Hub DMVPN then you'd have two mappings for the two hubs.
ip nhrp map multicast 200.200.200.1 <-- This is a static mapping telling us where to point multicast traffic to for distribution. We will take the multicast traffic and encapsulate it in a unicast frame destined for this IP and let the HUB be responsible for further distribution. This means that all multicast traffic is sent to the Hub even if it's not needed there. We have to do it this way because the underlying transport (Internet) is not multicast enabled and because the distribution tree can get quite complex if we tried to send MC from Spoke to Spoke. In this case we send it all to the Hub and the Hub is responsible for reflecting it down to the Spokes.
ip nhrp network-id 1
ip nhrp nhs 172.16.0.1 <-- This tells us that the Next Hop Server (NHS), the NHRP server, is running on 172.16.0.1. This command works together with the static mapping command above to determine how to send lookup requests to the DMVPN Hub (NHRP server) for resolution.
tunnel source GigabitEthernet0/0
tunnel mode gre multipoint
!
ip route 10.10.10.0 255.255.255.0 172.16.0.1 <-- Again we aren't running dynamic routing so we have to hardcode the routing across the tunnel.
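Once the spoke config is in, you can sanity check it from the spoke end with the same kind of standard IOS commands (again, nothing here is specific to your setup other than the IPs):

show ip nhrp <-- Should show the static mapping of 172.16.0.1 to 200.200.200.1
ping 172.16.0.1 source Tunnel0 <-- Confirms the tunnel is passing traffic to the hub
show ip pim neighbor <-- Confirms the hub is seen as a PIM neighbor across the tunnel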
This is the most basic DMVPN configuration you can possibly have. As such it's functional, but it does have some issues. Foremost is that you have to add static routes on the hub and every single spoke to reach from end to end. Most people use DMVPN because they want to enable spoke-to-spoke traffic, which gets rather painful and dangerous when you have to configure static routes and you may not have direct connectivity everywhere. That's when you'd switch to dynamic routing.
Now the more I think about your application, the more I'm seeing it as primarily hub and spoke traffic flows. Especially since that's how your multicast will flow. I can see several reasons to have your unicast traffic follow the same path for simplicity, and that would let you configure a supernet route on the spokes which forwards all internal traffic via the tunnel to the hub. Then the hub has each subnet as an individual route pointed to the proper spoke. Makes your config simple and clean.
For instance, assuming spokes are 10.10.X.0/24 where X represents the LAN subnet for each spoke.
On all the Spokes we’d add a single supernet route pointing to the hub.
ip route 10.10.0.0 255.255.0.0 172.16.0.1
Then on the hub we’d have:
ip route 10.10.11.0 255.255.255.0 172.16.0.2
ip route 10.10.12.0 255.255.255.0 172.16.0.3
ip route 10.10.13.0 255.255.255.0 172.16.0.4
…
Now if you didn’t want to go that way, we could enable dynamic routing using an IGP like OSPF. That introduces a number of complications. One of those is how traffic will be handled when the tunnel is down. A basic OSPF configuration will cause us to leak traffic around the tunnel when it's down. This is because traffic destined for the remote LAN will be checked against the routing table and it will match the default route and will be sent towards the WAN bypassing the inactive tunnel since a route learned via OSPF will not be present. Since we have PAT enabled we will start blasting traffic out to the Internet that should have been dropped and never left our network. Probably not a good behavior. This means that you need to now insert a null route that will be used to black hole (or squash) traffic when the dynamic routing protocol is down.
ip route 10.10.11.0 255.255.255.0 Null0 250
That route will only appear in the routing table when a route with a better administrative distance does not exist. (The 250 on that static route is an administrative distance, not a metric.) It's fairly typical to use 250 for black hole or hold down routes.
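For reference, a minimal OSPF setup for this topology would look something like the below on each router. This is just a sketch to show the shape of it, not a drop-in config: the process ID of 1 and area 0 are arbitrary choices, and the LAN network statement changes per site. The point-to-multipoint network type on the tunnel matters because the mGRE tunnel is a multi-access interface, and the default OSPF network type for a tunnel won't handle multiple neighbors correctly.

router ospf 1
 network 172.16.0.0 0.0.0.255 area 0 <-- Advertise the tunnel subnet
 network 10.10.10.0 0.0.0.255 area 0 <-- Advertise the local LAN (unique per site)
!
interface Tunnel0
 ip ospf network point-to-multipoint <-- Treat the mGRE tunnel as point-to-multipoint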
Anyway there are other things that we sometimes enable that do more complicated things such as provide redundancy (dual hub and/or using loopbacks if we have multiple uplinks) or greater traffic isolation (VRFs) but those can get extremely complex.
Andrew