Monday, September 16, 2013

VMware DirectPath I/O

Introduction 

DirectPath I/O has been available since vSphere 4 and leverages the Intel VT-d and AMD IOMMU (AMD-Vi) CPU hardware features. With DirectPath I/O a VM can access a physical network card directly, bypassing both the emulated NICs (E1000, Vlance) and the paravirtualized NICs (VMXNET). This lets a VM sustain bandwidth beyond 10 Gb while also saving CPU cycles. VMware recommends using it only when a VM has a very high I/O load and the CPU savings benefit the overall infrastructure.

Paravirtualized NICs can deliver 9 Gb+ of throughput, but vSphere still handles all the network-related tasks: the host services physical NIC interrupts, processes packets, determines the recipient of each packet and copies it into the destination VM if needed. The vSphere host also mediates packet transmissions over the physical NIC. All of this consumes a significant amount of CPU.

DirectPath I/O bypasses the virtualized network layer and saves those CPU cycles, but in exchange it gives up virtualization features such as physical NIC sharing, vMotion and Network I/O Control. The VM also needs a full memory reservation, so that its memory cannot be swapped while the physical NIC is doing DMA into it.


vSphere Features 

Features that are not available for VMs configured with DirectPath I/O:

  • Hot adding and removing of virtual devices
  • Suspend and resume
  • Record and replay
  • Fault tolerance
  • High availability
  • DRS (limited availability. The virtual machine can be part of a cluster, but cannot migrate across hosts)
  • Snapshots
Cisco UCS and DirectPath I/O

The following features are available when using DirectPath I/O with Cisco UCS:

  • vMotion
  • Hot Adding and removing of virtual hardware
  • Suspend and resume
  • High availability
  • DRS
  • Snapshots
Configure Passthrough Devices on a Host 

vSphere Client
  1. Select a host from the inventory panel of the vSphere Client.
  2. On the Configuration tab, click Advanced Settings.
  3. The Passthrough Configuration page appears, listing all available passthrough devices. A green icon indicates that a device is enabled and active. An orange icon indicates that the state of the device has changed and the host must be rebooted before the device can be used.
  4. Click Edit, select the devices to be used for passthrough, and click OK.
Web Client 

  1. Browse to a host in the vSphere Web Client navigator.
  2. Click the Manage tab, click Settings.
  3. In the Hardware section, click PCI Devices.
  4. To add a PCI device to the host, click Edit.
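
To find the PCI address of a candidate device before enabling passthrough, the ESXi shell can also be used. A minimal sketch (the grep pattern below is an assumption, not from the original post):

      # List all PCI devices on the host; the address (e.g. 0000:0b:00.0)
      # matches what the passthrough configuration screens display.
      esxcli hardware pci list | less

      # Narrow the output to network controllers (BusyBox grep supports -i and -A)
      esxcli hardware pci list | grep -i -A 12 "ethernet"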


    Key Points : 

    1. An adapter can only be used by a single virtual machine when using DirectPath I/O.
    2. Only two devices can be passed through to a virtual machine.
    3. Virtual machine hardware version 7 must be used.
    4. Host requires a restart once the device has been added for passthrough.
    5. Check the VMware HCL (Hardware Compatibility Guide) to make sure the device is supported.
    6. It is typically used on virtual machines that have very high I/O requirements such as database servers that need direct access to a storage HBA (host bus adapter).
    7. It relies on Intel VT-d (Virtualization Technology for Directed I/O) or the AMD IOMMU (I/O Memory Management Unit), although the latter is experimental. Remember to enable this option in the BIOS!


    Friday, September 13, 2013

    HA configuration Parameters

     “das.maskCleanShutdownEnabled”: This is enabled by default from vSphere 5.0, i.e. HA will assume a virtual machine needs to be restarted when it is powered off and HA was not able to update the VM's config files. (The config files normally record the shutdown state, e.g. whether it was an admin-initiated shutdown.)

    disk.terminateVMOnPDLDefault: This is not enabled by default.
    If this setting is not explicitly enabled, the virtual machine will not be killed and HA won't be able to take action. In other words, if your storage admin changes the presentation of your LUNs and accidentally removes a host, the virtual machine will just sit there without access to its disk. The OS might fail at some point and your application will definitely not be happy, but that is all that happens. A minimal sketch of enabling the setting follows below.
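
    The setting is enabled per host. A minimal sketch, assuming the vSphere 5.0 U1 / 5.1 style configuration file (later releases expose this as a host advanced setting instead):

        # Append the option to /etc/vmware/settings on every host in the cluster,
        # then reboot the host for it to take effect.
        echo 'disk.terminateVMOnPDLDefault = "True"' >> /etc/vmware/settings

        # Verify the entry is present
        cat /etc/vmware/settings

    das.maskCleanShutdownEnabled, by contrast, is set as an HA advanced option on the cluster in the vSphere Client, not on the individual hosts.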


    Note: Big thanks to Yellow Bricks.

    Wednesday, September 11, 2013

    What's new in vSphere 5.5

    The vSphere 5.5 beta has already been released. Following are some of the new features introduced in the 5.5 release.


    1. Support for Reliable Memory Technology: This is a CPU hardware feature that reports a region of memory as reliable. Using it, the VMkernel and other critical ESXi components are placed in that reliable memory, which provides greater resiliency and protects against memory errors.
    2. Host-Level Configuration Maximums –   The maximum number of logical CPUs has doubled from 160 to 320, the number of NUMA nodes doubled from 8 to 16, the number of virtual CPUs has doubled from 2048 to 4096, and the amount of RAM has also doubled from 2TB to 4TB. There is virtually no workload that is too big for vSphere 5.5!
    3. Hot-pluggable PCIe SSD Devices – vSphere 5.5 provides the ability to perform hot-add and remove of SSD devices to/from a vSphere 5.5 host.  With the increased adoption of SSD, having the ability to perform both orderly as well as unplanned SSD hot-add/remove operations is essential to protecting against downtime and improving host resiliency.
    4. Improved Power Management – ESXi 5.5 provides additional power savings by leveraging deep processor power states (C-states).  By leveraging the deeper CPU sleep states ESXi can minimize the amount of power consumed by idle CPUs during periods of inactivity.  Along with the improved power savings comes an additional performance boost on Intel chipsets, as turbo mode frequencies can be reached more quickly when CPU cores are in a deep C-state.
    5. Virtual Machine Compatibility ESXi 5.5 (aka Virtual Hardware 10) – ESXi 5.5 provides a new virtual machine compatibility level that includes support for a new virtual SATA Advanced Host Controller Interface (AHCI) with support for up to 120 virtual disk and CD-ROM devices per virtual machine (previous versions supported up to 60).  This new controller is of particular benefit when virtualizing Mac OS X, as it allows you to present a SATA-based CD-ROM device to the guest.
    6. VM Latency Sensitivity – included with the new virtual machine compatibility level comes a new “Latency Sensitivity” setting that can be tuned to help reduce virtual machine latency.  When the Latency sensitivity is set to high the hypervisor will try to reduce latency in the virtual machine by reserving memory, dedicating CPU cores and disabling network features that are prone to high latency.
    7. Expanded vGPU Support – vSphere 5.5 extends VMware’s hardware-accelerated virtual 3D graphics support (vSGA) to include GPUs from AMD.  The multi-vendor approach provides customers with more flexibility in the data center for Horizon View virtual desktop workloads.  In addition, 5.5 enhances the “Automatic” rendering by enabling the migration of virtual machines with 3D graphics enabled between hosts running GPUs from different hardware vendors as well as between hosts that are limited to software-backed graphics rendering. This can be configured only via the Web Client.
    8. Graphics Acceleration for Linux Guests – vSphere 5.5 also provides out-of-the-box graphics acceleration for modern GNU/Linux distributions that include VMware’s guest driver stack, which was developed by VMware and made available to all Linux vendors at no additional cost. Ubuntu 12.04 and later, Fedora 17 and later, and RHEL 7 are supported.
    9. vCenter Single Sign-On (SSO) – in vSphere 5.5 SSO comes with many improvements.   There is no longer an external database required for the SSO server, which together with the vastly improved installation experience helps to simplify the deployment of SSO for both new installations as well as upgrades from earlier versions.   This latest release of SSO provides enhanced Active Directory integration, including support for multiple forests as well as one-way and two-way trusts.  In addition, a new multi-master architecture provides built-in availability that helps not only improve resiliency for the authentication service, but also helps to simplify the overall SSO architecture.
    10. vSphere Web Client – the web client in vSphere 5.5 also comes with several notable enhancements.  The web client is now supported on Mac OS X, including the ability to access virtual machine consoles, attach client devices and deploy OVF templates.  In addition there have been several usability improvements, including support for drag-and-drop operations, improved filters to help refine search criteria and make it easier to find objects, and a new “Recent Items” icon that makes it easier to navigate between commonly used views. The Web Client is now the primary client; the C# Client remains for backward compatibility. For instance, a 62 TB VMDK file can be created only via the Web Client.
    11. vCenter Server Appliance – with vSphere 5.5 the vCenter Server Appliance (VCSA) now uses a reengineered, embedded vPostgres database that offers improved scalability.  I wasn’t able to officially confirm the max number of hosts and VMs that will be supported with the embedded DB.  They are targeting 100 hosts and 3,000 VMs but we’ll need to wait until 5.5 releases to confirm these numbers.  However, regardless what the final numbers are, with this improved scalability the VCSA is a very attractive alternative for folks who may be looking to move away from a Windows based vCenter.
    12. vSphere App HA – App HA brings application awareness to vSphere HA, helping to further improve application uptime.  vSphere App HA works together with VMware vFabric Hyperic Server to monitor application services running inside the virtual machine, and when issues are detected it performs restart actions as defined by the administrator in the vSphere App HA policy. The application services are monitored through Hyperic agents running inside the virtual machines.
    13. vSphere HA Compatibility with DRS Anti-Affinity Rules – vSphere HA will now honor DRS anti-affinity rules when restarting virtual machines.  If you have anti-affinity rules defined in DRS that keep selected virtual machines on separate hosts, VMware HA will now honor those rules when restarting virtual machines following a host failure. In previous versions, HA would restart the VM and DRS (if fully automated) would then vMotion the VM according to the anti-affinity rules. With this new feature the VM is placed on the correct host without a subsequent migration.
    14.  vSphere Big Data Extensions(BDE) – Big Data Extensions is a new addition to the VMware vSphere Enterprise and Enterprise Plus editions.  BDE is a vSphere plug-in that enables administrators to deploy and manage Hadoop clusters on vSphere using the vSphere web client.
    15. Support for 62TB VMDK – vSphere 5.5 increases the maximum size of a virtual machine disk file (VMDK) to 62TB (note the maximum VMFS volume size is 64TB, whereas the max VMDK file size is 62TB).  The maximum size for a Raw Device Mapping (RDM) has also been increased to 62TB. The previous limit was 2 TB minus 512 bytes.
    16. Microsoft Cluster Server (MSCS) Updates – MSCS clusters running on vSphere 5.5 now support Microsoft Windows 2012, round-robin path policy for shared storage, and iSCSI and Fibre Channel over Ethernet (FCoE) for shared storage.
    17. 16Gb End-to-End Support – In vSphere 5.5, 16Gb end-to-end FC support is now available.  Both the HBAs and the array controllers can run at 16Gb as long as the FC switch between the initiator and target supports it.
    18. Auto Remove of Devices on PDL – This feature automatically removes a device from a host when it enters a Permanent Device Loss (PDL) state.  Each vSphere host is limited to 255 disk devices, removing devices that are in a PDL state prevents failed devices from occupying a device slot.
    19. VAAI UNMAP Improvements – vSphere 5.5 provides a new “esxcli storage vmfs unmap” command with the ability to specify the reclaim size in blocks, as opposed to just a percentage, along with the ability to reclaim space in increments rather than all at once (see the sketch after this list).
    20. VMFS Heap Improvements – vSphere 5.5 introduces a much improved heap eviction process, which eliminates the need for large heap sizes.  With vSphere 5.5 a maximum of 256MB of heap is needed to enable vSphere hosts to access the entire address space of a 64TB VMFS.
    21. vSphere Flash Read Cache – a new flash-based storage solution that enables the pooling of multiple flash-based devices into a single consumable vSphere construct called a vSphere Flash Resource, which can be used to enhance virtual machine performance by accelerating read-intensive workloads.
    22. Link Aggregation Control Protocol (LACP) Enhancements – with the vSphere Distributed Switch in vSphere 5.5 LACP now supports 22 new hashing algorithms, support for up to 64 Link Aggregation Groups (LAGs), and new workflows to help configure LACP across large numbers of hosts.
    23. Traffic Filtering Enhancements – the vSphere Distributed Switch now supports packet classification and filtering based on MAC SA and DA qualifiers, traffic type qualifiers (i.e. vMotion, Management, FT), and IP qualifiers (i.e. protocol, IP SA, IP DA, and port number).
    24. Quality of Service Tagging – vSphere 5.5 adds support for Differentiated Service Code Point (DCSP) marking.  DSCP marking support enables users to insert tags in the IP header which helps in layer 3 environments where physical routers function better with an IP header tag than with an Ethernet header tag.
    25. Single-Root I/O Virtualization (SR-IOV) Enhancements – vSphere 5.5 provides improved workflows for configuring SR-IOV as well as the ability to propagate port group properties to the virtual functions.
    26. Enhanced Host-Level Packet Capture – vSphere 5.5 provides an enhanced host-level packet capture tool that is equivalent to the command-line tcpdump tool available on the Linux platform (see the sketch after this list).
    27. 40Gb Bandwidth  Support – vSphere 5.5 provides support for 40Gb bandwidth.  In 5.5 the functionality is limited to the Mellanox ConnectX-3 VPI adapters configured in Ethernet mode.
    28. vSphere Data Protection (VDP) – VDP has also been updated in 5.5 with several great improvements to include the ability to replicate  backup data to EMC Avamar,  direct-to-host emergency restore, the ability to backup and restore of individual .vmdk files, more granular scheduling for backup and replication jobs, and the ability to mount existing VDP backup data partitions when deploying a new VDP appliance. 
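
    For item 19, a minimal sketch of the new UNMAP command run from the ESXi shell (the datastore name and reclaim unit below are placeholders, not from the post):

        # Reclaim dead space on the thin-provisioned LUN backing the VMFS
        # datastore "Datastore01", 200 blocks at a time.
        esxcli storage vmfs unmap --volume-label=Datastore01 --reclaim-unit=200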
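
    For item 26, the enhanced packet capture tool is pktcap-uw, also run from the ESXi shell. A minimal sketch (the uplink name and output path are assumptions):

        # Capture traffic on physical uplink vmnic0 and write it to a pcap file
        # that can later be opened in Wireshark.
        pktcap-uw --uplink vmnic0 -o /tmp/vmnic0.pcap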

    Monday, August 19, 2013

    VLAN Tagging

    VLAN tagging in ESX can be
    • VST - Virtual Switch Tagging 
    • EST - External Switch Tagging 
    • VGT - Virtual Guest Tagging 

    VST - Virtual Switch Tagging 

    • VLAN tagging for all packets is performed by the virtual switch before they leave the ESX/ESXi host.
    • Port groups on the virtual switch of the ESX server should be configured with a VLAN ID (1-4094); see the sketch after this list.
      Note: VLAN ID 0 (zero) disables VLAN tagging on the port group (EST mode).
      VLAN ID 4095 enables trunking on the port group (VGT mode).
    • Reduces the number of physical NICs on the ESX host by running all the VLANs over one physical NIC.
    • The physical switch port connecting the uplink from the ESX host should be configured as a trunk port, and all the VLANs defined on the vSwitch need to be allowed on it.
    • A virtual machine network packet is delivered to the vSwitch and, before it is sent to the physical switch, the packet is tagged with the VLAN ID according to the port group membership of the originating virtual machine.
    • Set the NIC teaming policy to Route based on originating virtual port ID (this is the default).
    • Physical  Switch Port Configuration :
      switch port needs to be set to TRUNK mode
      dot1q encapsulation should be enabled.
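
    A minimal sketch of assigning VLAN IDs to standard vSwitch port groups from the ESXi shell (the port group names are placeholders, not from the post):

        # VST: tag the port group "VM Network 100" with VLAN 100
        esxcli network vswitch standard portgroup set -p "VM Network 100" -v 100

        # EST: VLAN ID 0 means the port group sends untagged frames
        esxcli network vswitch standard portgroup set -p "Untagged PG" -v 0

        # VGT: VLAN ID 4095 passes all VLAN tags through to the guest
        esxcli network vswitch standard portgroup set -p "Guest Trunk PG" -v 4095

        # Verify the assignments
        esxcli network vswitch standard portgroup list
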
    EST - External Switch Tagging 
    • The ESX host does not see any VLAN tags; VLAN tagging is done by the physical switch.
    • Number of physical NICs = number of VLANs.
    • Port groups on the virtual switch of the ESX server do not need a VLAN ID, or can be configured with VLAN ID 0 (if it is not the native VLAN).
    • Physical switch port configuration:
      Port needs to be configured as an access port.
    VGT - Virtual Guest Tagging


    • Install an 802.1Q VLAN trunking driver inside the virtual machine guest operating system.
    • All VLAN tagging is performed by the virtual machine using the trunking driver in the guest. VLAN tags are preserved between the virtual machine and the external switch as frames pass to/from the virtual switch.
    • The virtual switch is not involved in, or aware of, this operation. The vSwitch only forwards the packets from the virtual machine to the physical switch and does not modify them.
    • Port group of the virtual machine should be configured with VLAN ID 4095
    • Physical  Switch Port Configuration :
      switch port need to be set to TRUNK mode

    Tuesday, June 4, 2013

    Restart Management Agents

    The ESXi management agents can be restarted in a couple of ways.

    DCUI


    • Connect to the ESXi host console.
    • Press F2 and provide the credentials (log in as root).
    • Go to Troubleshooting Options and select Restart Management Agents.

    Local Console or ssh 

    Method 1: No downtime for VMs

    • /sbin/services.sh restart

    This will restart all the management agents: hostd, ntpd, sfcbd, slpd, wsman and vobd (see the verification sketch at the end of this section).

    Method 2 :

    Run following commands,

    • /etc/init.d/hostd restart
    • /etc/init.d/vpxa restart


    Method 3 (classic ESX service console): 

    • service mgmt-vmware restart
    • service vmware-vpxa restart

    If automatic startup/shutdown of VMs is enabled, virtual machines may restart when the management agents are restarted.
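
    After any of these methods, a quick way to confirm the agents came back is a minimal sketch like the following, run from the ESXi shell (the status action may not be available on every build, so the ps checks are given as an alternative):

        # Check that hostd and vpxa are running again
        /etc/init.d/hostd status
        /etc/init.d/vpxa status

        # Alternatively, look for the processes directly
        ps | grep hostd
        ps | grep vpxa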


    Friday, May 31, 2013

    vCenter Roles and Privileges

    Role and Privileges

    vCenter privileges work quite differently from Active Directory's discretionary access control model. vCenter uses role-based access control (RBAC).

    There are three types of roles:

    • System
    • Sample
    • Custom 


    System Roles

    There are three system roles. These are built in and cannot be changed:

    • No Access - the user cannot see the object
    • Read Only - the user can see the object, but the right-click options are grayed out
    • Administrator - the user has all privileges on the object


    Sample Roles: the default sample roles are


    • Virtual machine power on 
    • Datastore consumer
    • Network consumer
    • Virtual Machine User
    • Resource Pool administrator
    • VMware Consolidated Backup user
    Note: It is advised not to change the sample roles; clone a role instead and apply the clone to the object.


    Custom Roles:

    Additional roles that you create in vCenter are called custom roles.

    How are permissions applied and inherited?

    • A permission applied directly on an object supersedes a permission that is inherited.
    • A permission applied directly on a user supersedes a permission inherited from group membership.
    Examples :

    • User_A has Administrator access on the Datacenter and No Access on VM1. Result: User_A can see and modify all objects under the Datacenter, but cannot see VM1.
    • Group_A: Power on VM; Group_B: Take snapshot
      User_A: member of Group_A and Group_B
      User_B: member of Group_A
      User_C: member of Group_B
      Result: User_A can power on and take snapshots of all the VMs; User_B can power on VMs but cannot take snapshots; User_C can take snapshots but cannot power on VMs.
    • Group_A: Administrator on the Datacenter
      Group_B: Read Only on VM2
      User_A: Group_A, Group_B
      User_B: Group_A
      User_C: Group_B
      Result: User_A can see everything and perform admin activity on all objects except VM2, where access is read only.
      User_B has administrative privileges on all objects, including VM2.
      User_C can see only VM2 and no other objects in the Datacenter.
    • Group_A: Power on VM; Group_B: Take snapshot
      User_A: member of Group_A and Group_B, with Read Only assigned directly on the Datacenter
      Result: Even though the user is part of both Group_A and Group_B, the permission assigned directly to the user supersedes the group permissions, so the user can see the objects but all the options are grayed out.

    Thursday, May 16, 2013

    Multiple Page files - single volume

    Using the GUI, you can define only one page file per volume. It is, however, possible to create multiple page files on a single volume by modifying the registry.


    To create multiple paging files on one volume 

    • On the drive or volume you want to hold the paging files, create folders for the number of paging files you want to create on the volume. For example, C:\Pagefile1, C:\Pagefile2, and C:\Pagefile3.
    • Click Start, Click Run, type regedit in the Open box, and then click OK.
    • In the left pane, locate and click the following registry subkey: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management
    • Find the PagingFiles value, and then double-click it to open it.
    • Remove any existing values, and add the following values:
      c:\pagefile1\pagefile.sys 3000 4000
      c:\pagefile2\pagefile.sys 3000 4000
      c:\pagefile3\pagefile.sys 3000 4000
    • Click OK, and then quit Registry Editor.
    • Restart the computer to cause the changes to take effect.