Tag Archives: Netapp

WHAT ARE NETAPP VIRTUAL INTERFACES (VIFs)?!

I heard the term and it threw me off. So this is me making it easier for myself and for all of you to understand what VIFs are all about.

vQuicky

> VIF = Virtual Interface

> VIF is a feature in Data ONTAP. Data ONTAP is the OS that NetApp storage devices run on. So basically VIF is a feature of NetApp storage devices.

> VIFs implement link aggregation – combining multiple network links to work as one.

> Other vendors may call VIFs virtual aggregations, link aggregations, trunks, or EtherChannels.

> There are three types of VIFs – single-mode VIF, static multimode VIF, and dynamic multimode VIF.

> VIFs give you higher throughput, fault tolerance, and no single point of failure, enabling HA features for your storage system.

inDepth

VIF stands for Virtual Interface, which is a feature of your NetApp storage system. The feature allows you to aggregate multiple network interfaces into one logical interface, which in turn gives you higher throughput, fault tolerance, and no single point of failure.
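
To make this concrete, here is a rough sketch of what bringing up a VIF looks like from the Data ONTAP 7-mode command line. The interface names (e0a, e0b), the VIF name (vif0) and the IP address are placeholders for this illustration, and on ONTAP 8.x 7-mode the same feature is driven through the ifgrp command instead of vif:

    vif create multi vif0 -b ip e0a e0b                    # aggregate two NICs into one logical interface
    ifconfig vif0 192.168.10.5 netmask 255.255.255.0 up    # give the VIF an IP address
    vif status vif0                                        # verify the member links and their state

For the VIF to survive a reboot, the same commands would also go into /etc/rc on the root volume.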

Now let's talk about why you would need or use a VIF on your NetApp storage system. The main bottleneck between compute and storage, among other things, is the network connectivity between them.

You may have your NetApp storage equipped with flash cards and SSDs, but all of that is still throttled by the 1 Gb link you have between your NetApp and your hypervisor. That means regardless of how fast your random reads and writes are on your storage system, it is only as good as that 1 Gb link. With a VIF you can aggregate the multiple NICs on your NetApp storage system – so that 1 Gb link now looks like a 4 Gb link by aggregating 4 NICs into 1 VIF.

It gets even better – if you lose one port in that aggregate, no problem! You lost 1 Gb of bandwidth, but you still have 3 Gb of throughput to your storage system, your virtual machines are still up, and there is no downtime!

You can create VIFs of three different types – single-mode VIF, static multimode VIF, and dynamic multimode VIF.

In a single-mode VIF, only one interface is active while the others are on standby, ready to take over if the active NIC fails. One thing to remember is that all interfaces share a common MAC address. If there is more than one interface on standby in a single-mode VIF configuration, the storage system picks the replacement interface randomly should the active NIC fail. Since link failover is monitored and controlled by the storage system, in a single-mode VIF you DO NOT need a switch that supports link aggregation.
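
As a minimal sketch (again assuming 7-mode syntax and placeholder interface names), a single-mode VIF is created by listing the links that back each other up, and vif favor can optionally mark the one you prefer to be active:

    vif create single vif0 e0a e0b    # one link active at a time, the other on standby
    vif favor e0a                     # optionally prefer e0a as the active link
    vif status vif0                   # shows which member is currently active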

In a static multimode VIF all the interfaces in the VIF are active and share a single MAC address. Unlike the single-mode VIF, where only one interface is active at a time, a static multimode VIF has all interfaces communicating at all times. This mode complies with the IEEE 802.3ad (static) standard for link aggregation. Any switch that supports static aggregates can be used for this mode. The switch does NOT have to control the packet flow or exchange, because that is taken care of by the storage system and the devices on the other end. It is important to remember that the static multimode VIF does NOT support IEEE 802.3ad (dynamic), also known as the Link Aggregation Control Protocol (LACP), or Cisco's proprietary Port Aggregation Protocol (PAgP).

In a static multimode VIF the failure tolerance is "n-1", where n is the number of interfaces participating in the aggregate. Flow control from a transmission perspective is handled by the NetApp, but it cannot control how inbound frames arrive. That control rests with the devices on the other side of the switch, presumably a hypervisor.
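
A sketch of a static multimode VIF under the same assumptions (7-mode CLI, placeholder names); the switch ports that e0a through e0d plug into would need to be configured as a static aggregate (for example a Cisco EtherChannel with channel-group mode "on", not LACP):

    vif create multi vif1 -b ip e0a e0b e0c e0d    # all four links active, IP-based load balancing
    ifconfig vif1 192.168.10.6 netmask 255.255.255.0 up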

Dynamic multimode VIFs can detect not only loss of link status but also loss of data flow. This is the mode most used in high-availability environments. This mode also complies with the IEEE 802.3ad (dynamic) standard, which is the Link Aggregation Control Protocol (LACP). However, there are some things to remember about dynamic multimode VIFs.

1. Dynamic multimode VIFs must be connected to a switch that supports LACP

2. The VIFs must be configured as first-level VIFs. A first-level VIF is simply one created directly over physical interfaces (as opposed to a second-level VIF, which is built on top of other VIFs).

3. They have to be configured to use a port-based or IP-based load balancing method.

It should be noted that in this mode all interfaces are active and share a single MAC address.
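
A dynamic multimode sketch under the same assumptions: on the storage side the only change is the lacp keyword, but the switch ports must be set up as an LACP port channel (for example channel-group mode "active" on a Cisco switch):

    vif create lacp vif1 -b ip e0a e0b e0c e0d    # LACP negotiates the aggregate with the switch
    ifconfig vif1 192.168.10.6 netmask 255.255.255.0 up
    vif status vif1                               # confirms whether LACP brought the members up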

Since we mentioned load balancing – it is worth noting that there are three load balancing methods for a multimode VIF. By default, when no method is specified, the IP address based load balancing method is used.

The three load balancing methods are IP address (or MAC address) based load balancing, round-robin load balancing, and port-based load balancing.
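
In the 7-mode CLI the load balancing method is simply the -b flag on vif create, so as a rough sketch (these are alternative invocations – you would pick one):

    vif create multi vif1 -b ip  e0a e0b    # IP address based (the default when -b is omitted)
    vif create multi vif1 -b mac e0a e0b    # MAC address based
    vif create multi vif1 -b rr  e0a e0b    # round robin

Port-based load balancing is selected the same way (-b port) on releases that support it, such as the ifgrp command in ONTAP 8.x 7-mode.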

Hope this helps 😉 Please comment for any clarifications.

 

NETAPP vs VMWARE FLOW CONTROL DILEMMA

vQuicky

> Performance improvements were seen in environments with ESXi 5.1 / NetApp / 10G switches that had flow control disabled.

> VMware recommends leaving flow control enabled, while NetApp best practice recommends disabling it when using 10G switches.

> VMware recommends investigating pause frames – if too many are found, it indicates an underlying problem.

inDepth

We had recently seen some random datastore drops and issues in our virtualized environment, which had NetApp storage on the backend. Upon investigation and some deep-diving, it was found that flow control was enabled across the entire stack.
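
For reference, this is roughly what disabling flow control looks like on each end of that stack; the port names (e0a, vmnic2) are placeholders. On a 7-mode NetApp controller flow control is an ifconfig option, and on ESXi 5.1 the pause settings can be changed with ethtool, keeping in mind that both changes need to be persisted (/etc/rc on the filer, a boot-time script such as /etc/rc.local.d/local.sh on the host) to survive a reboot:

    # NetApp 7-mode: turn flow control off on the 10G port
    ifconfig e0a flowcontrol none

    # ESXi 5.1: stop sending and honoring pause frames on the uplink
    ethtool --pause vmnic2 autoneg off rx off tx off
    ethtool --show-pause vmnic2    # verify the new pause settings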

This is another article worth looking at – it talks about NetApp sending too many pause frames.


SHIFT CLOUD PLATFORMS USING NETAPP SHIFT!

This is cool – more details are awaited, but Vaughn Stewart of NetApp talks about a project they have been working on called NetApp Shift (at least that's the temporary name). Shift is said to be all about moving virtual machines between different cloud platforms.

NetApp Shift is a tool used to move virtual machines from – let's say – VMware to Windows Hyper-V. It uses the underlying FlexClone technology and does this at lightning-fast speeds.

The only downtime for the VMs is a required reboot. And that's it! Currently it works for VMware, Hyper-V, and Citrix XenServer, but Vaughn says they will be opening it up to more hypervisor vendors soon.

I pinged him asking about support for migrating workloads between OpenStack and VMware, or any other combination involving OpenStack. I am sure it's on their roadmap!

A perfect scenario for this tool is to use a public cloud OpenStack footprint backed by NetApp (highly efficient due to dedupe disk savings) for the test environment, seamlessly replicated to a production NetApp and migrated into VMware for production deployment using NetApp Shift. That is a very realistic scenario in the enterprise segment, where dollar-conscious customers don't want to burn expensive VMware licenses on test environments and want to be able to test on commodity hardware. It would also be amazing if this tool (NetApp Shift) were extended to a NetApp virtual appliance as well, which would make the above use case much easier to achieve and implement.

On the flip side – customers may want to test applications on other platforms and move them to OpenStack footprints – however, I don't see this as likely. More likely is the case where customers test on one OpenStack footprint and migrate to another footprint of the same OpenStack distro – basically a test bed to a production environment. The catch is that migration becomes very easy with Shift!

Source