Monday, June 30, 2014

WiFi Site Survey

 

Many network managers will simply install extra WiFi APs as a quick fix to increase wireless coverage. However, that often does little on its own; a proper site survey is the better starting point.

There are three different types of site surveys widely used in the industry: passive site survey, active site survey, and predictive site survey.

A passive site survey tool listens to existing access points, both inside and outside your managed infrastructure, measuring signal strength, interference, and AP coverage. Passive site surveys, in which the surveying WiFi adapter doesn't need to associate to an AP or SSID, give a good overall picture of the RF characteristics of existing wireless networks.

During an active site survey, the survey WiFi adapter is associated to the AP(s) and exchanges packets. This allows very detailed information to be gathered: actual network traffic, throughput, packet loss, and physical-layer (PHY) rates can all be captured. Active surveys are commonly used for new WLAN deployments.

A predictive site survey is performed without any type of field measurements. It uses RF planning software tools that can predict wireless coverage of the APs. To perform this site survey, a floor-plan drawing (AutoCAD, JPEG, PDF) is a must-have. Predictive site surveys are used when the site or building is not yet built and are helpful for budgeting purposes.
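As a rough sketch of what such RF planning tools compute, here is the free-space path loss model in Python. The transmit power, distance, and channel frequency below are illustrative values only; real predictive tools also model walls, antenna gain, and multipath.

```python
import math

def fspl_db(distance_m: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (distance in metres, frequency in MHz)."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

def rssi_dbm(tx_power_dbm: float, distance_m: float, freq_mhz: float) -> float:
    """Predicted received signal strength, ignoring walls and antenna gain."""
    return tx_power_dbm - fspl_db(distance_m, freq_mhz)

# A 2.4 GHz AP (channel 6, 2437 MHz) transmitting at 20 dBm, client 30 m away:
print(round(rssi_dbm(20, 30, 2437), 1))  # about -49.7 dBm
```

In practice the planning software overlays predictions like this onto the floor-plan drawing, subtracting an attenuation figure for each wall the path crosses.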

The goal of all of these wireless site surveys is to provide detailed information that addresses the site’s radio frequency coverage. Before implementing or attempting to optimize a WLAN, you’ll want to understand all the possible areas of interference, AP placements, power considerations, and wiring requirements that are needed. A wireless site survey can provide all of this information and more, so you have the tools you need to design, implement, and optimize your wireless network.

Sunday, June 29, 2014

Which is faster: 40 MHz vs 2x20 MHz?

The question is whether we should use one AP on a 40 MHz channel, or two APs each configured for a 20 MHz channel width.

The answer is fairly clear: two APs provide better redundancy and easier management.

Points to Consider:

1. CPU load is shared across two APs, which is better than loading a single AP with all the clients, driving up its CPU utilization and degrading performance.

2. With fewer clients per channel, the contention mechanism gives each client a better chance at airtime.

3. The more degrees of freedom a network design offers, the more resilience and the better the quality of user experience it is likely to achieve.
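A back-of-the-envelope capacity model illustrates the trade-off. The PHY rates below assume a single 802.11n spatial stream with a short guard interval, and the model ignores protocol overhead and co-channel interference, so treat the numbers as indicative only:

```python
# Per-client share of the cell's PHY rate under fair airtime sharing.
RATE_20MHZ = 72.2   # Mbps, assumed 802.11n 1-stream rate at 20 MHz
RATE_40MHZ = 150.0  # Mbps, assumed 802.11n 1-stream rate at 40 MHz

def per_client_mbps(phy_rate: float, clients: int) -> float:
    return phy_rate / clients

clients = 30

one_ap_40 = per_client_mbps(RATE_40MHZ, clients)       # all 30 on one AP
two_ap_20 = per_client_mbps(RATE_20MHZ, clients // 2)  # 15 clients per AP

print(f"1 x 40 MHz AP : {one_ap_40:.1f} Mbps per client")
print(f"2 x 20 MHz APs: {two_ap_20:.1f} Mbps per client")
```

Raw per-client throughput comes out nearly the same, which is the point: the two-AP design gives comparable speed while halving the contention domain on each channel and adding redundancy.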



Thursday, June 26, 2014

VSS : Virtual Switching System

Combining two switches virtually so that they work as one is a redundancy feature called the Virtual Switching System (VSS).



VSS addresses a need that typically arises when there are redundant routing platforms on a network. A VSS is a network system virtualization technology that pools multiple Cisco switches into one virtual switch, increasing operational efficiency, boosting nonstop communications, and scaling system bandwidth capacity to 1.4 Tbps.

VSS actually removes the need for a first-hop redundancy protocol like HSRP or VRRP. These first-hop redundancy protocols are usually heavily tied to a fast-converging routing protocol like EIGRP, and still require that each device maintain its own control plane. Often, two switches are configured, and one responds to ARP requests while the other does not: an active/passive relationship. VSS takes this a step further and actually merges the two switches into one virtual "mega-switch", rather than wasting a perfectly good switch. There's still a master/slave relationship, but rather than placing one switch in standby while the other is active, this determines which switch maintains control over the other. The function of the supervisor module, as well as the configuration of both switches, becomes the responsibility of the primary switch.

VSS utilizes the port channel between the switches to merge them into one massive switch. As a result, redundant connections from the Access layer to the Core no longer need to be blocked: because they're both virtually connected to the same switch, they can be configured in a port-channel, as shown in the diagram. http://www.cisco.com/c/en/us/products/collateral/switches/catalyst-6500-virtual-switching-system-1440/prod_qas0900aecd806ed74b.html

Thursday, June 19, 2014

IoT : Internet of Things

The Internet of Things (IoT) is the network of physical objects accessed through the Internet, as defined by technology analysts and visionaries. These objects contain embedded technology to interact with internal states or the external environment. In other words, when objects can sense and communicate, it changes how and where decisions are made, and who makes them.

"or"

The Internet of Things (IoT) is a scenario in which objects, animals or people are provided with unique identifiers and the ability to automatically transfer data over a network without requiring human-to-human or human-to-computer interaction. IoT has evolved from the convergence of wireless technologies, micro-electromechanical systems (MEMS) and the Internet.

A thing, in the Internet of Things, can be a person with a heart monitor implant, a farm animal with a biochip transponder, an automobile that has built-in sensors to alert the driver when tire pressure is low -- or any other natural or man-made object that can be assigned an IP address and provided with the ability to transfer data over a network. So far, the Internet of Things has been most closely associated with machine-to-machine (M2M) communication in manufacturing and power, oil and gas utilities. Products built with M2M communication capabilities are often referred to as being smart.

IPv6’s huge increase in address space is an important factor in the development of the Internet of Things. According to Steve Leibson, who identifies himself as “occasional docent at the Computer History Museum,” the address space expansion means that we could “assign an IPV6 address to every atom on the surface of the earth, and still have enough addresses left to do another 100+ earths.” In other words, humans could easily assign an IP address to every "thing" on the planet. An increase in the number of smart nodes, as well as the amount of upstream data the nodes generate, is expected to raise new concerns about data privacy, data sovereignty and security.
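A quick sanity check of the scale involved is easy to do. The Earth surface area figure (about 5.1 x 10^14 m^2) is an outside assumption, not from the quote:

```python
# Back-of-the-envelope check of the IPv6 address-space claim.
ipv6_addresses = 2 ** 128
earth_surface_m2 = 5.1e14  # assumed approximate surface area of the Earth

per_m2 = ipv6_addresses / earth_surface_m2
print(f"{ipv6_addresses:.3e} addresses total")
print(f"{per_m2:.2e} addresses per square metre of Earth's surface")
```

That works out to roughly 6.7 x 10^23 addresses per square metre, which makes Leibson's atoms-on-the-surface framing plausible without needing his exact atom count.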
Although the concept wasn't named until 1999, the Internet of Things has been in development for decades. The first Internet appliance, for example, was a Coke machine at Carnegie Mellon University in the early 1980s. The programmers could connect to the machine over the Internet, check the status of the machine and determine whether or not there would be a cold drink awaiting them, should they decide to make the trip down to the machine. http://whatis.techtarget.com/definition/Internet-of-Things

Friday, June 13, 2014

3 Layer Hierarchical Internetworking Model

The Hierarchical internetworking model, or three-layer model, is a network design model first proposed by Cisco. The three-layer model divides enterprise networks into three layers: core, distribution, and access layer. Each layer provides different services to end-stations and servers.




Each layer in the three-tier hierarchical model has a unique role to perform: 

Access Layer—The primary function of the access layer is to provide network access to the end user. This layer often performs an OSI Layer-2 bridging function that interconnects logical Layer-2 broadcast domains and provides isolation to groups of users, applications, and other endpoints. The access layer interconnects to the distribution layer.

Distribution Layer—A multi-purpose system that interfaces between the access layer and the core layer. Some of the key functions of the distribution layer include the following:
Aggregate and terminate Layer-2 broadcast domains
Provide intelligent switching, routing, and network access policy functions to reach the rest of the network
Redundant distribution-layer switches provide high availability to the end user and equal-cost paths to the core. The distribution layer can also provide differentiated services to various classes of application at the edge of the network.

Core Layer—The core layer provides high-speed, scalable, reliable, low-latency connectivity. The core layer aggregates several distribution switches that may be in different buildings. Backbone core routers are a central hub point that provides a transit function to the internal and external network.
 http://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Education/SchoolsSRA_DG/SchoolsSRA-DG/SchoolsSRA_chap3.html

Thursday, June 12, 2014

DHCP : Dynamic Host Configuration Protocol

DHCP is not a confusing topic in itself, but people sometimes get confused when asked whether a DHCP OFFER or DHCP ACK is a unicast or broadcast packet.

Let's understand the DHCP DORA process.

D : DHCP Discover
O : DHCP Offer
R : DHCP Request
A : DHCP Acknowledgement

DHCP Discover : broadcast packet
DHCP Offer : broadcast or unicast packet (depends on the BROADCAST flag the client set in its DHCP Discover)
DHCP Request : broadcast packet
DHCP Acknowledgement : broadcast or unicast packet

Why can the DHCP Offer and Ack be either unicast or broadcast?

A client that cannot receive unicast IP datagrams until its protocol software has been configured with an IP address SHOULD set the BROADCAST bit in the 'flags' field to 1 in any DHCPDISCOVER or DHCPREQUEST messages that client sends. The BROADCAST bit will provide a hint to the DHCP server and BOOTP relay agent to broadcast any messages to the client on the client's subnet. A client that can receive unicast IP datagrams before its protocol software has been configured SHOULD clear the BROADCAST bit to 0.
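Per RFC 2131, the BROADCAST bit is the most significant bit of the 16-bit 'flags' field, which sits at byte offset 10 of the BOOTP header. A minimal Python sketch of how a server might test it (the packet here is a synthetic header, not a real capture):

```python
import struct

BROADCAST_FLAG = 0x8000  # most significant bit of the BOOTP 'flags' field

def wants_broadcast(dhcp_packet: bytes) -> bool:
    """Check the BROADCAST bit in a raw BOOTP/DHCP message.

    The flags field follows op, htype, hlen, hops (1 byte each),
    xid (4 bytes) and secs (2 bytes), i.e. byte offset 10.
    """
    (flags,) = struct.unpack_from("!H", dhcp_packet, 10)
    return bool(flags & BROADCAST_FLAG)

# Minimal fake DISCOVER headers: only the flags field differs.
header = bytearray(240)
header[10:12] = (0x8000).to_bytes(2, "big")  # client sets BROADCAST
print(wants_broadcast(bytes(header)))        # True  -> server broadcasts the OFFER
header[10:12] = (0x0000).to_bytes(2, "big")  # bit cleared
print(wants_broadcast(bytes(header)))        # False -> server may unicast
```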






Tuesday, June 10, 2014

PBR or VRF

(For the basics on PBR and VRF, please refer to the other posts on this blog.)


Most of the time, it seems PBR will do the job and there is no need to create VRF instances on a router. What VRF gives you, though, is completely decoupled routing tables between interfaces: for one ingress interface into the router you use routing table A, and for another ingress interface, routing table B.

All interfaces belong to *one* VRF only, so if you want to share an interface between traffic of "sort A" and "sort B", things with VRFs get tricky. You can do this with VRF select ("match an access-list, and depending on the result, go to VRF routing table A or B or C..."), but that's a lot of configuration stuff if all you need to do is sort incoming traffic on one interface.

PBR will give you a lever to sort incoming traffic according to some rules you define in a route-map, bypassing(!) normal routing tables. PBR is more powerful than VRFs, if the point is "sorting traffic coming in on *one* interface", but if you need to scale this to dozens of routers, and hundreds of interfaces, PBR will just be too complex to get right.

VRF : Virtual Routing & Forwarding

If you are keen on virtualization, then VRF (Virtual Routing and Forwarding) is a technology you should definitely have a look at.

VRF is also known as VPN routing and forwarding.

Virtual routing and forwarding (VRF) is a technology included in IP (Internet Protocol) network routers that allows multiple instances of a routing table to exist in a router and work simultaneously. This increases functionality by allowing network paths to be segmented without using multiple devices. Because traffic is automatically segregated, VRF also increases network security and can eliminate the need for encryption and authentication. Internet service providers (ISPs) often take advantage of VRF to create separate virtual private networks (VPNs) for customers.

Each VRF acts as a separate router, with its own interfaces and its own routing table. The routes in the routing table of one VRF are not visible in any other VRF, nor in the global routing table.

VRFs are to Layer 3 what VLANs are to Layer 2. They provide a fully isolated network path; nothing can map from one to the other without the administrator creating a link. VRFs are most common in service provider MPLS networks, where they isolate different customers. They can also have a role in corporate networks in the form of VRF-lite. Let's look at a sample deployment scenario: you have two internet connections, one for guest users and one for corporate users. Each is required to be completely isolated from the other. You have VLANs to separate these two classes of users within your network as well. Your network has grown to the point of needing routing inside the corporate network. Enter VRFs. Let's look at the design in the image below:
VRFs split a routed network while VLANs split a switched network
The Blue lines indicate our corporate network while Red is the guest network. Each has its own path to the internet despite the fact common hardware is in use.
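The "each VRF is its own routing table" idea can be sketched in a few lines of Python. The VRF names, prefixes, and next hops below are made up for illustration; a real router does this lookup in hardware, per ingress interface:

```python
import ipaddress

# Each VRF is simply its own routing table; the same destination can
# resolve to different next hops in different VRFs.
vrfs = {
    "CORP":  {"0.0.0.0/0": "203.0.113.1", "10.1.0.0/16": "10.255.0.2"},
    "GUEST": {"0.0.0.0/0": "198.51.100.1"},
}

def lookup(vrf: str, dst: str) -> str:
    """Longest-prefix match within a single VRF's table."""
    addr = ipaddress.ip_address(dst)
    candidates = [
        ipaddress.ip_network(p) for p in vrfs[vrf]
        if addr in ipaddress.ip_network(p)
    ]
    best = max(candidates, key=lambda n: n.prefixlen)
    return vrfs[vrf][str(best)]

print(lookup("CORP", "10.1.2.3"))   # 10.255.0.2 (corporate internal route)
print(lookup("GUEST", "10.1.2.3"))  # 198.51.100.1 (guest only sees its default)
```

The same destination address resolves differently depending on which table the ingress interface belongs to, which is exactly the isolation the blue/red split in the diagram relies on.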

PBR : Policy Based Routing

Policy-Based Routing (PBR) allows you to use ACLs and route maps to selectively modify and route IP packets in hardware. The ACLs classify the traffic, and route maps that match on the ACLs set routing attributes for the traffic.
A PBR policy specifies the next hop for traffic that matches the policy:
·         For standard ACLs with PBR, you can route IP packets based on their source IP address.
·         For extended ACLs with PBR, you can route IP packets based on all of the matching criteria in the extended ACL.

The problem that many network engineers find with typical routing systems and protocols is that they route traffic based on its destination. Under normal circumstances this is fine, but when the traffic on your network requires a more hands-on approach, policy-based routing takes over.
Destination-based routing systems make it quite hard to change the routing behavior of specific traffic. With PBR, a network engineer can dictate routing behavior based on a number of criteria other than the destination network, including source network, source or destination address, source or destination port, protocol, packet size, and packet classification, among others.
PBR also has the ability to implement QoS by classifying and marking traffic at the network edge and then using PBR throughout the network to route marked traffic along a specific path.
So why would you do this? Well, consider a company that has two links between locations: one a high-bandwidth, low-delay, expensive link, and the other a low-bandwidth, higher-delay, less expensive link.
Now using traditional routing protocols the higher bandwidth link would get most if not all of the traffic sent across it based on the metric savings obtained by the bandwidth and/or delay (using EIGRP or OSPF) characteristics of the link. PBR would give you the ability to route higher priority traffic over the high bandwidth/low delay link while sending all other traffic over the low bandwidth/high delay link.
This way the traffic which requires the characteristics of the high bandwidth/low delay link would be possible without sending all traffic over the link.
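The two-link scenario can be sketched as a route-map-style classifier in Python. The subnet, port number, and next-hop addresses below are hypothetical stand-ins for what would be an ACL plus `set ip next-hop` on a real router:

```python
import ipaddress

# PBR sketch: priority traffic takes the fast link, all else the cheap one.
FAST_LINK_NEXT_HOP = "192.0.2.1"   # high bandwidth, low delay (expensive)
CHEAP_LINK_NEXT_HOP = "192.0.2.5"  # low bandwidth, higher delay (cheap)

PRIORITY_SUBNET = ipaddress.ip_network("10.10.0.0/16")  # hypothetical voice VLAN

def pbr_next_hop(src_ip: str, dst_port: int) -> str:
    """Route-map stand-in: match on source prefix and destination port."""
    if ipaddress.ip_address(src_ip) in PRIORITY_SUBNET and dst_port == 5060:
        return FAST_LINK_NEXT_HOP   # matched: policy overrides normal routing
    return CHEAP_LINK_NEXT_HOP      # unmatched: falls through to the other link

print(pbr_next_hop("10.10.4.9", 5060))  # 192.0.2.1 (priority traffic, fast link)
print(pbr_next_hop("10.20.4.9", 80))    # 192.0.2.5 (everything else, cheap link)
```

Note the match is on source and port, not destination, which is precisely what destination-based routing protocols cannot express.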
http://blog.pluralsight.com/pbr-policy-based-routing