
Deploying Windows Server 2012 with SMB Direct (SMB over RDMA) and the Mellanox ConnectX-2/ConnectX-3 using InfiniBand – Step by Step


1) Introduction

We have covered the basics of SMB Direct and some of the use cases in previous blog posts and TechNet articles. You can find them at http://smb3.info.

However, I get a lot of questions about specifically which cards work with this new feature and how exactly you set those up. This is one in a series of blog posts that cover specific instructions for RDMA NICs. In this specific post, we’ll cover all the details to deploy the Mellanox ConnectX-2 and ConnectX-3 adapters, using the InfiniBand “flavor” of RDMA.

 

2) Hardware and Software

To implement and test this technology, you will need:

  • Two or more computers running Windows Server 2012 Release Candidate
  • One or more Mellanox ConnectX-2 or ConnectX-3 adapters for each server
  • One or more Mellanox InfiniBand switches
  • Two or more cables required for InfiniBand (typically using QSFP connectors)

Mellanox states support for Windows Server 2012 SMB Direct and Kernel-mode RDMA capabilities on the following adapter models:

  • Mellanox ConnectX-2. This card uses Quad Data Rate (QDR) InfiniBand at 32 Gbps data rate.
  • Mellanox ConnectX-3. This card uses Fourteen Data Rate (FDR) InfiniBand at 54 Gbps data rate.

You can find more information about these adapters on Mellanox’s web site.

Important note: The older Mellanox InfiniBand adapters (including the first generation of ConnectX adapters and the InfiniHost III adapters) won't work with SMB Direct in Windows Server 2012.

There are many options in terms of adapters, cables and switches. At the Mellanox web site you can find more information about these InfiniBand adapters (http://www.mellanox.com/content/pages.php?pg=infiniband_cards_overview&menu_section=41) and InfiniBand switches (http://www.mellanox.com/content/pages.php?pg=switch_systems_overview&menu_section=49). Here are some examples of configurations you can use to try Windows Server 2012:

2.1) Two computers using QDR

If you want to set up a simple pair of computers to test SMB Direct, you simply need two InfiniBand cards and a back-to-back cable. This could be used for simple testing, like one file server and one Hyper-V server. If you want the most affordable InfiniBand solution, you can use a single-port QDR card, which operates at a 32 Gbps data rate. Here are the parts you will need:

  • 2 x ConnectX-2, Single port, QSFP connector, QDR InfiniBand (part # MHQH19B-XTR)
  • 1 x QSFP to QSFP cable, 1m (part # MC2206130-001)

2.2) Eight computers using QDR

If you want to try a more realistic configuration with InfiniBand, you could set up a two-node file server cluster connected to a six-node Hyper-V cluster. In this setup, you will need 8 computers, each with an InfiniBand card. You will also need a switch with at least 8 ports (Mellanox offers an 8-port model). Using QDR speeds, you’ll need the following parts:

  • 8 x ConnectX-2, Single port, QSFP connector, QDR InfiniBand (part # MHQH19B-XTR)
  • 8 x QSFP to QSFP cables, 1m (part # MC2206130-001)
  • 1 x IS5022 InfiniBand Switch, 8 ports, QSFP, QDR (part # MIS5022Q-1BFR)

2.3) Two computers using FDR

You may also try the faster FDR speeds (54Gbps data rate). The minimum setup in this case would again be two cards and a cable. Please note that the QDR and FDR cables are different, although they use similar connectors. Here’s what you will need:

  • 2 x ConnectX-3 adapter, Single port, QSFP, FDR InfiniBand (part # MCX353A-FCBT)
  • 1 x QSFP to QSFP cable (FDR), 1m (part # MC2207130-001)

Please note that you will need a system with PCIe Gen3 slots to achieve the rated speed with this card. These slots are available on newer systems, like the ones equipped with an Intel Romley motherboard. If you use an older system, the card will be limited by the speed of the older PCIe Gen2 bus.

2.4) Ten computers using dual FDR cards

If you’re interested in experiencing great throughput in a private cloud setup, you could configure a two-node file server cluster plus an eight-node Hyper-V cluster. You could also use two InfiniBand cards in each system, for added performance and fault tolerance. In this setup, you would need 20 FDR cards and a 20-port FDR switch (Mellanox sells a model with 36 FDR ports). Here are the parts required:

  • 20 x ConnectX-3 adapter, Single port, QSFP, FDR InfiniBand (part # MCX353A-FCBT)
  • 20 x QSFP to QSFP cables (FDR), 1m (part # MC2207130-001)
  • 1 x SX6036 InfiniBand Switch, 36 ports, QSFP, FDR 

 

3) Download and update the drivers

Windows Server 2012 RC includes an inbox driver for the Mellanox ConnectX-2 and ConnectX-3 cards. However, Mellanox provides updated firmware and drivers for download. You should be able to use the inbox driver to access the Internet to download the updated driver.

The latest Mellanox drivers for Windows Server 2012 RC can be downloaded from the Windows Server 2012 tab on this page on the Mellanox web site: http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=32&menu_section=34.

The package is provided to you as a single executable file. Simply run the EXE file to update the firmware and driver. This package will also install Mellanox tools on the server. Please note that this package is different from the Windows Server 2012 Beta package. Make sure you grab the latest version.

After the download, simply run the executable file and choose one of the installation options (complete or custom). The installer will automatically detect if you have at least one card with an old firmware, offering to update it. You should always update to the latest firmware provided.


Note 1: This package does not update firmware for OEM cards. If you are using this type of card, contact your OEM for an update.

Note 2: Certain Intel Romley systems won't boot Windows Server 2012 when an old Mellanox firmware is present. You might need to update the firmware of the Mellanox card on another system before you can use that card on the Intel Romley system. That issue might also be addressed in certain cases by updating the firmware/BIOS of the Intel Romley system.

 

4) Configure a subnet manager

When using an InfiniBand network, you are required to have a subnet manager running. The best option is to use a managed InfiniBand switch (which runs a subnet manager), but you can also install a subnet manager on a computer connected to an unmanaged switch. Here are some details:

4.1) Best option – Using a managed switch with a built-in subnet manager

For this option, make sure you use managed switches. These switches come ready to run their own subnet manager and all you have to do is enable that option using the switch’s web interface.


4.2) Using OpenSM with a single unmanaged switch

If you don’t have a managed switch, you can use one of the computers running Windows Server 2012 to run your subnet manager. When you installed the Mellanox tools in step 3, you also installed the OpenSM.EXE tool, which is a subnet manager that runs on Windows Server. You want to make sure you install it as an auto-starting service.

Although the installation program configures OpenSM to run as a service, it misses the parameter to limit the log size. Here are a few commands to remove the default service and add a new one that has all the right parameters and starts automatically. Run them from a PowerShell prompt running as Administrator:

SC.EXE delete OpenSM
New-Service -Name "OpenSM" -BinaryPathName "`"C:\Program Files\Mellanox\MLNX_VPI\IB\Tools\opensm.exe`" --service -L 128" -DisplayName "OpenSM" -Description "OpenSM" -StartupType Automatic
Start-Service OpenSM

Note 1: This assumes that you installed the tools to the default location: C:\Program Files\Mellanox\MLNX_VPI

Note 2: For fault tolerance, make sure you have two computers on your network configured to run OpenSM. It is not recommended to run OpenSM on more than two computers connected to a switch.
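
Once the service is in place, you can do a quick sanity check that it was registered correctly and is running. This is just a sketch, using the service name "OpenSM" from the commands above:

# Confirm the service exists and is running
Get-Service OpenSM
# Confirm the service is set to start automatically
Get-WmiObject Win32_Service -Filter "Name='OpenSM'" | Select-Object Name, State, StartMode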

4.3) Using OpenSM with two unmanaged switches

For complete fault tolerance, you want to have two switches and have two cards (or a dual-ported card) per computer, one going to each switch. With SMB Multichannel, you get fault tolerance in case a single card, cable or switch has a problem. However, each instance of OpenSM can only handle a single switch. In this case, you need two instances of OpenSM.EXE running on the computer, one for each card, working as a subnet manager for each of the two unmanaged switches.

First, you need to identify the two ports on the system (either on a single dual-ported card or on two single-ported cards). To do this, run the IBSTAT tool from Mellanox, which will show you the identification for each InfiniBand port in your system (look for a line showing the port GUID). Here’s a sample with the two port GUIDs highlighted:

PS C:\> ibstat
CA 'ibv_device0'
        CA type:
        Number of ports: 2
        Firmware version: 0x20009209e
        Hardware version: 0xb0
        Node GUID: 0x0002c903000f9956
        System image GUID: 0x0002c903000f9959

        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 40
                Base lid: 1
                LMC: 0
                SM lid: 1
                Capability mask: 0x90580000
                Port GUID: 0x0002c903000f9957

        Port 2:
                State: Down
                Physical state: Polling
                Rate: 70
                Base lid: 0
                LMC: 0
                SM lid: 0
                Capability mask: 0x90580000
                Port GUID: 0x0002c903000f9958

Once you have identified the two port GUIDs, you can run the following commands from a PowerShell prompt running as Administrator:

SC.EXE delete OpenSM
New-Service -Name "OpenSM1" -BinaryPathName "`"C:\Program Files\Mellanox\MLNX_VPI\IB\Tools\opensm.exe`" --service -g 0x0002c903000f9957 -L 128" -DisplayName "OpenSM1" -Description "OpenSM for the first IB subnet" -StartupType Automatic
New-Service -Name "OpenSM2" -BinaryPathName "`"C:\Program Files\Mellanox\MLNX_VPI\IB\Tools\opensm.exe`" --service -g 0x0002c903000f9958 -L 128" -DisplayName "OpenSM2" -Description "OpenSM for the second IB subnet" -StartupType Automatic
Start-Service OpenSM1
Start-Service OpenSM2

Note 1: This assumes that you installed the tools to the default location: C:\Program Files\Mellanox\MLNX_VPI

Note 2: For fault tolerance, make sure you have two computers on your network, both configured to run two instances of OpenSM. It is not recommended to run OpenSM on more than two computers connected to a switch.

 

5) Configure IP Addresses

After you have the drivers in place, you should configure the IP address for your NIC. If you’re using DHCP, that should happen automatically, so just skip to the next step.

For those doing manual configuration, assign an IP address to your interface using either the GUI or something similar to the PowerShell below. This assumes that the interface is called RDMA1, that you’re assigning the IP address 192.168.1.10 to the interface and that your DNS server is at 192.168.1.2.

Set-NetIPInterface -InterfaceAlias RDMA1 -DHCP Disabled
Remove-NetIPAddress -InterfaceAlias RDMA1 -AddressFamily IPv4 -Confirm:$false
New-NetIPAddress -InterfaceAlias RDMA1 -IPAddress 192.168.1.10 -PrefixLength 24 -Type Unicast
Set-DnsClientServerAddress -InterfaceAlias RDMA1 -ServerAddresses 192.168.1.2
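
To double-check the results, you can query the interface afterwards. This sketch assumes the same RDMA1 alias used above:

# Confirm the address and DNS server assigned to the interface
Get-NetIPAddress -InterfaceAlias RDMA1 -AddressFamily IPv4
Get-DnsClientServerAddress -InterfaceAlias RDMA1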

 

6) Verify everything is working

Follow the steps below to confirm everything is working as expected:

6.1) Verify network adapter configuration

Use the following PowerShell cmdlets to verify Network Direct is globally enabled and that you have NICs with the RDMA capability. Run on both the SMB server and the SMB client.

Get-NetOffloadGlobalSetting | Select NetworkDirect
Get-NetAdapterRDMA
Get-NetAdapterHardwareInfo

6.2) Verify SMB configuration

Use the following PowerShell cmdlets to make sure SMB Multichannel is enabled, confirm the NICs are being properly recognized by SMB and that their RDMA capability is being properly identified.

On the SMB client, run the following PowerShell cmdlets:

Get-SmbClientConfiguration | Select EnableMultichannel
Get-SmbClientNetworkInterface

On the SMB server, run the following PowerShell cmdlets:

Get-SmbServerConfiguration | Select EnableMultichannel
Get-SmbServerNetworkInterface
netstat.exe -xan | ? {$_ -match "445"}

Note: The NETSTAT command confirms if the File Server is listening on the RDMA interfaces.

6.3) Verify the SMB connection

On the SMB client, start a long-running file copy to create a lasting session with the SMB Server. While the copy is ongoing, open a PowerShell window and run the following cmdlets to verify the connection is using the right SMB dialect and that SMB Direct is working:

Get-SmbConnection
Get-SmbMultichannelConnection
netstat.exe -xan | ? {$_ -match "445"}

Note: If you have no activity while you run the commands above, it’s possible you get an empty list. This is likely because your session has expired and there are no current connections.

 

7) Review Performance Counters

There are several performance counters that you can use to verify that the RDMA interfaces are being used and that the SMB Direct connections are being established. You can also use the regular SMB Server and SMB Client performance counters to verify the performance of SMB, including IOPs (data requests per second), Latency (average seconds per request) and Throughput (data bytes per second). Here's a short list of the relevant performance counters.

On the SMB Client, watch for the following performance counters:

  • RDMA Activity - One instance per RDMA interface
  • SMB Direct Connection - One instance per SMB Direct connection
  • SMB Client Shares - One instance per SMB share the client is currently using

On the SMB Server, watch for the following performance counters:

  • RDMA Activity - One instance per RDMA interface
  • SMB Direct Connection - One instance per SMB Direct connection
  • SMB Server Shares - One instance per SMB share the server is currently sharing
  • SMB Server Session - One instance per client SMB session established with the server
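
If you prefer PowerShell over Performance Monitor, you can also sample these counters with the Get-Counter cmdlet. Here’s a minimal sketch; the counter set names follow the list above, but you should confirm the exact paths available on your system using the -ListSet option:

# List the SMB and RDMA counter sets available on this system
Get-Counter -ListSet "SMB*", "RDMA*" | Select-Object CounterSetName
# Sample all RDMA Activity counters: 5 samples, 2 seconds apart
Get-Counter -Counter "\RDMA Activity(*)\*" -SampleInterval 2 -MaxSamples 5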

 

8) Review the connection log details (optional)

SMB 3.0 now offers an “Object State Diagnostic” event log that can be used to troubleshoot Multichannel (and therefore RDMA) connections. Keep in mind that this is a debug log, so it’s very verbose and requires a special procedure for gathering the events. You can follow the steps below:

First, enable the log in Event Viewer:

  • Open Event Viewer
  • On the menu, select “View” then “Show Analytic and Debug Logs”
  • Expand the tree on the left: Applications and Services Log, Microsoft, Windows, SMB Client, ObjectStateDiagnostic
  • On the “Actions” pane on the right, select “Enable Log”
  • Click OK to confirm the action.

After the log is enabled, perform the operation that requires an RDMA connection. For instance, copy a file or run a specific operation.
If you’re using mapped drives, be sure to map them after you enable the log, or else the connection events won’t be properly captured.

Next, disable the log in Event Viewer:

  • In Event Viewer, make sure you select Applications and Services Log, Microsoft, Windows, SMB Client, ObjectStateDiagnostic
  • On the “Actions” pane on the right, select “Disable Log”

Finally, review the events on the log in Event Viewer. You can filter the log to include only the SMB events that confirm that you have an SMB Direct connection or only error events.

The “Smb_MultiChannel” keyword will filter for connection, disconnection and error events related to SMB. You can also filter by event numbers 30700 to 30706.

  • Click on the “ObjectStateDiagnostic” item on the tree on the left.
  • On the “Actions” pane on the right, select “Filter Current Log…”
  • Select the appropriate filters

You can also use a PowerShell window to view the events. For instance, the following cmdlet lists any RDMA-related connection events and errors:

Get-WinEvent -LogName Microsoft-Windows-SMBClient/ObjectStateDiagnostic -Oldest |? Message -match "RDMA"

 

9) Conclusion

I hope this helps you with your testing of the Mellanox InfiniBand adapters. I wanted to cover all the different angles to make sure you don’t miss any relevant steps. I also wanted to have enough troubleshooting guidance here to get you covered for any known issues. Let us know how your experience was by posting a comment.


Deploying Windows Server 2012 with SMB Direct (SMB over RDMA) and the Intel NetEffect NE020 card using iWARP – Step by Step


1) Introduction

We have covered the basics of SMB Direct and some of the use cases in previous blog posts and TechNet articles. You can find them at http://smb3.info.

However, I get a lot of questions about specifically which cards work with this new feature and how exactly you set those up. This is one in a series of blog posts that cover specific instructions for RDMA NICs. In this specific post, we’ll cover all the details to deploy the Intel NE020 adapters, using the iWARP “flavor” of RDMA.

 

2) Hardware and Software

To implement and test this technology, you will need:

  • Two or more computers running Windows Server 2012 Release Candidate
  • One or more Intel NetEffect Ethernet Adapter (NE020) cards for each server
  • One or more 10GbE switches
  • Two or more cables required for the NE020 (typically using SFP+ connectors)

Intel states support for Windows Server 2012 SMB Direct and Kernel-mode RDMA capabilities on the following adapter models:

  • NetEffect™ Ethernet Server Cluster Adapter CX4 (Dover)
  • NetEffect™ Ethernet Server Cluster Adapter SFP+SR (Argus)
  • NetEffect™ Ethernet Server Cluster Adapter DA (Argus)

You can find more information about these adapters on Intel’s web site at http://www.intel.com/Products/Server/Adapters/Server-Cluster/Server-Cluster-overview.htm.

Important note: You should use NE020 cards with the chip version X2A1 (this is printed on the chip itself). If you have an old card with an X2A chip, it won't work with SMB Direct in Windows Server 2012.

 

3) Download and update the drivers

Windows Server 2012 includes an inbox driver for the Intel NE020. However, Intel provides an updated driver for download. You should be able to use the inbox driver to access the Internet to download the updated driver.

The latest Intel NE020 driver can be downloaded from: http://www.intel.com/support/go/network/adapter/ne020/win8. The driver is provided to you as a single ZIP file that you should extract to a specific folder. It will contain a few files, including one or more SYS files with the driver itself, an INF text file with the required information to install the driver and a few supporting files.

To update to the new driver, follow these steps:

  • Open “Device Manager” and find the NE020 device, under “Network Adapters”
  • Right click the device and select “Update Driver Software”
  • Click on “Browse my computer for driver software”
  • Point to the folder where you extracted the ZIP file you downloaded
  • Follow the wizard to complete the installation
  • Restart the computer to reload the drivers and services

Performance Note: The Intel NE020 driver has been found to benefit from using 4 SMB Direct connections per session, instead of the default 2 connections. While this increases the use of NIC resources, it provides a significant IOPS improvement. To implement this optimization, run the following PowerShell cmdlet on all computers using the NE020 adapters:

Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" -Name ConnectionCountPerRdmaNetworkInterface -Value 4 -Force
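
To confirm the change, you can read the value back with the cmdlet below. Keep in mind that you may need to restart the SMB client (or reboot) for the new setting to take effect:

# Read back the connection count setting
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" -Name ConnectionCountPerRdmaNetworkInterface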

 

4) Configure IP Addresses

After you have the drivers in place, you should configure the IP address for your NIC. If you’re using DHCP, that should happen automatically, so just skip to the next step.

For those doing manual configuration, assign an IP address to your interface using either the GUI or something similar to the PowerShell below. This assumes that the interface is called RDMA1, that you’re assigning the IP address 192.168.1.10 to the interface, that your default gateway is at 192.168.1.1 and that your DNS server is at 192.168.1.2.

Set-NetIPInterface -InterfaceAlias RDMA1 -DHCP Disabled
Remove-NetIPAddress -InterfaceAlias RDMA1 -AddressFamily IPv4 -Confirm:$false
New-NetIPAddress -InterfaceAlias RDMA1 -IPAddress 192.168.1.10 -PrefixLength 24 -Type Unicast -DefaultGateway 192.168.1.1
Set-DnsClientServerAddress -InterfaceAlias RDMA1 -ServerAddresses 192.168.1.2

 

5) Configure the firewall

iWARP uses TCP/IP for communications, so you need to configure the Firewall to allow that traffic. You essentially need to add a firewall rule to the SMB Server to allow incoming traffic from the SMB Direct clients. In Windows Server 2012, SMB Direct with iWARP uses TCP port 5445, in addition to the traditional 445 port used for SMB.

Here’s how you would enable the built-in firewall rule using a PowerShell cmdlet on the SMB server to allow access by the clients:

Enable-NetFirewallRule FPSSMBD-iWARP-In-TCP

Note that the FPSSMBD-iWARP-In-TCP rule is preconfigured on all Windows Servers and essentially allows incoming traffic on port 5445. It should only be enabled on systems with iWARP network interfaces that have the proper drivers for SMB Direct usage.

If you have multiple SMB servers, you will need to enable this firewall rule on every server that will use SMB Direct with iWARP.
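
To verify the state of the rule on a given server, you can run:

# Check whether the iWARP firewall rule is enabled
Get-NetFirewallRule FPSSMBD-iWARP-In-TCP | Format-List DisplayName, Enabled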

 

6) Allow cross-subnet access (optional)

One of the main advantages of iWARP RDMA technology is the ability to be routed across different subnets. While the most common setup is a single subnet (or maybe even single rack) deployment, you can use the Intel NE020 cards to connect computers across subnets. However, this capability is disabled by default on Windows Server 2012 since not everyone will require it.

To enable Network Direct (and therefore SMB Direct) in this fashion, you do need to configure every system (SMB Servers and SMB Clients) to allow routing RDMA across subnets. This is done using the following PowerShell cmdlets:

Set-NetOffloadGlobalSetting -NetworkDirectAcrossIPSubnets Allow
Disable-NetAdapter -InterfaceAlias RDMA1
Enable-NetAdapter -InterfaceAlias RDMA1

Note: Disabling and re-enabling the interface makes the settings change effective without a reboot.

We recommend that you apply the configuration change above before creating any shares. If you do happen to apply it (or make any other major network configuration changes), the SMB client will re-evaluate its connections when new interfaces are detected or every 10 minutes. You can also tell SMB to update its connections immediately by using the following cmdlet on the SMB clients:

Update-SmbMultichannelConnection

 

7) Verify everything is working

Follow the steps below to confirm everything is working as expected:

7.1) Verify network adapter configuration

Use the following PowerShell cmdlets to verify Network Direct is globally enabled and that you have NICs with the RDMA capability. Run on both the SMB server and the SMB client.

Get-NetOffloadGlobalSetting | Select NetworkDirect
Get-NetAdapterRDMA
Get-NetAdapterHardwareInfo

7.2) Verify SMB configuration

Use the following PowerShell cmdlets to make sure SMB Multichannel is enabled, confirm the NICs are being properly recognized by SMB and that their RDMA capability is being properly identified.

On the SMB client, run the following PowerShell cmdlets:

Get-SmbClientConfiguration | Select EnableMultichannel
Get-SmbClientNetworkInterface

On the SMB server, run the following PowerShell cmdlets:

Get-SmbServerConfiguration | Select EnableMultichannel
Get-SmbServerNetworkInterface
netstat.exe -xan | ? {$_ -match "445"}

Note: The NETSTAT command confirms if the File Server is listening on the RDMA interfaces.

7.3) Verify the SMB connection

On the SMB client, start a long-running file copy to create a lasting session with the SMB Server. While the copy is ongoing, open a PowerShell window and run the following cmdlets to verify the connection is using the right SMB dialect and that SMB Direct is working:

Get-SmbConnection
Get-SmbMultichannelConnection
netstat.exe -xan | ? {$_ -match "445"}

Note: If you have no activity while you run the commands above, it’s possible you get an empty list. This is likely because your session has expired and there are no current connections.

 

8) Review Performance Counters

There are several performance counters that you can use to verify that the RDMA interfaces are being used and that the SMB Direct connections are being established. You can also use the regular SMB Server and SMB Client performance counters to verify the performance of SMB, including IOPs (data requests per second), Latency (average seconds per request) and Throughput (data bytes per second). Here's a short list of the relevant performance counters.

On the SMB Client, watch for the following performance counters:

  • RDMA Activity - One instance per RDMA interface
  • SMB Direct Connection - One instance per SMB Direct connection
  • SMB Client Shares - One instance per SMB share the client is currently using

On the SMB Server, watch for the following performance counters:

  • RDMA Activity - One instance per RDMA interface
  • SMB Direct Connection - One instance per SMB Direct connection
  • SMB Server Shares - One instance per SMB share the server is currently sharing
  • SMB Server Session - One instance per client SMB session established with the server

 

9) Review the connection log details (optional)

SMB 3.0 now offers an “Object State Diagnostic” event log that can be used to troubleshoot Multichannel (and therefore RDMA) connections. Keep in mind that this is a debug log, so it’s very verbose and requires a special procedure for gathering the events. You can follow the steps below:

First, enable the log in Event Viewer:

  • Open Event Viewer
  • On the menu, select “View” then “Show Analytic and Debug Logs”
  • Expand the tree on the left: Applications and Services Log, Microsoft, Windows, SMB Client, ObjectStateDiagnostic
  • On the “Actions” pane on the right, select “Enable Log”
  • Click OK to confirm the action.

After the log is enabled, perform the operation that requires an RDMA connection. For instance, copy a file or run a specific operation.
If you’re using mapped drives, be sure to map them after you enable the log, or else the connection events won’t be properly captured.

Next, disable the log in Event Viewer:

  • In Event Viewer, make sure you select Applications and Services Log, Microsoft, Windows, SMB Client, ObjectStateDiagnostic
  • On the “Actions” pane on the right, select “Disable Log”

Finally, review the events on the log in Event Viewer. You can filter the log to include only the SMB events that confirm that you have an SMB Direct connection or only error events.

The “Smb_MultiChannel” keyword will filter for connection, disconnection and error events related to SMB. You can also filter by event numbers 30700 to 30706.

  • Click on the “ObjectStateDiagnostic” item on the tree on the left.
  • On the “Actions” pane on the right, select “Filter Current Log…”
  • Select the appropriate filters

You can also use a PowerShell window to view the events. For instance, the following cmdlet lists any RDMA-related connection events and errors:

Get-WinEvent -LogName Microsoft-Windows-SMBClient/ObjectStateDiagnostic -Oldest |? Message -match "RDMA"
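
Since the relevant events fall in the 30700 to 30706 range mentioned earlier, you can also filter by event ID. Here’s a sketch using a filter hash table:

# List only the Multichannel connect/disconnect/error events by ID
Get-WinEvent -FilterHashtable @{LogName="Microsoft-Windows-SMBClient/ObjectStateDiagnostic"; Id=30700..30706} -Oldest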

 

10) Conclusion

I hope this helps you with your testing of the Intel NE020 cards. I wanted to cover all the different angles to make sure you don’t miss any relevant steps. I also wanted to have enough troubleshooting guidance here to get you covered for any known issues. Let us know how your experience was by posting a comment.

Deploying Windows Server 2012 with SMB Direct (SMB over RDMA) and the Mellanox ConnectX-3 using 10GbE/40GbE RoCE – Step by Step


1) Introduction

We have covered the basics of SMB Direct and some of the use cases in previous blog posts and TechNet articles. You can find them at http://smb3.info.

However, I get a lot of questions about specifically which cards work with this new feature and how exactly you set those up. This is one in a series of blog posts that cover specific instructions for RDMA NICs. In this specific post, we’ll cover all the details to deploy the Mellanox ConnectX-3 adapters using the RoCE (RDMA over Converged Ethernet) “flavor” of RDMA.

 

2) Hardware and Software

To implement and test this technology, you will need:

  • Two or more computers running Windows Server 2012 Release Candidate
  • One or more Mellanox ConnectX-3 adapter for each server
  • One or more 10GbE or 40GbE Ethernet switches with the Priority Flow Control (PFC) capability
  • Two or more cables required for the ConnectX-3 card (typically using SFP+ connectors for 10GbE or QSFP connectors for 40GbE)

Note 1: The older Mellanox InfiniBand adapters (including the first generation of ConnectX adapters and the InfiniHost III adapters) won't work with SMB Direct in Windows Server 2012.

Note 2: Although the Mellanox ConnectX-2 adapters are supported for InfiniBand, they are not recommended with RoCE because they won’t fully support Priority Flow Control (PFC).

There are many options in terms of adapters, cables and switches. At the Mellanox web site you can find more information about these RoCE adapters (http://www.mellanox.com/content/pages.php?pg=ethernet_cards_overview&menu_section=28) and Ethernet switches (http://www.mellanox.com/content/pages.php?pg=ethernet_switch_overview&menu_section=71). Here are some examples of configurations you can use to try Windows Server 2012:

2.1) Two computers using 10GbE RoCE

If you want to set up a simple pair of computers to test SMB Direct, you simply need two adapters and a back-to-back cable. This could be used for simple testing, like one file server and one Hyper-V server. For 10Gbit Ethernet, you can use adapters with SFP+ connectors. Here are the parts you will need:

  • 2 x ConnectX-3 adapter, dual port, 10GbE, SFP+ connector (part # MCX312A-XCBT)
  • 1 x SFP+ to SFP+ cable, 10GbE, 1m (part # MC3309130-001)

2.2) Eight computers using dual 10GbE RoCE

If you want to try a more realistic configuration with RoCE, you could set up a two-node file server cluster connected to a six-node Hyper-V cluster. In this setup, you will need 8 computers, each with a dual port RoCE adapter. You will also need a 10GbE switch with at least 16 ports. Using 10GbE and SFP+ connectors, you’ll need the following parts:

  • 8 x ConnectX-3 adapter, dual port, 10GbE, SFP+ connector (part # MCX312A-XCBT)
  • 16 x SFP+ to SFP+ cable, 10GbE, 1m (part # MC3309130-001)
  • 1 x 10GbE Switch, 64 ports, SFP+ connectors, PFC capable (part # MSX1016X)

Note: You can also use a 10GbE switch from another vendor, as long as it provides support for Priority Flow Control (PFC). A common example is the Cisco Nexus 5000 series of switches.

2.3) Two computers using 40GbE RoCE

You may also try the faster 40GbE speeds. The minimum setup in this case would again be two cards and a cable. Please note that you need a cable with a specific type of QSFP connector for 40GbE. Here’s what you will need:

  • 2 x ConnectX-3 adapter, dual port, 40GbE, QSFP connector  (part # MCX314A-BCBT)
  • 1 x QSFP to QSFP cable, 40GbE, 1m  (part # MC2206130-001)

Note: You will need a system with PCIe Gen3 slots to achieve the rated speed with this card. These slots are available on newer systems, like the ones equipped with an Intel Romley motherboard. If you use an older system, the card will be limited by the speed of the older PCIe Gen2 bus.

2.4) Ten computers using dual 40GbE RoCE

If you’re interested in experiencing great throughput in a private cloud setup, you could configure a two-node file server cluster plus an eight-node Hyper-V cluster. You could also use two 40GbE RoCE adapters in each system, for added performance and fault tolerance. In this setup, you would need 20 adapters and a 20-port switch. Here are the parts required:

  • 20 x ConnectX-3 adapter, dual port, 40GbE, QSFP connector (part # MCX314A-BCBT)
  • 20 x QSFP to QSFP cable, 40GbE, 1m (part # MC2206130-001)
  • 1 x 40GbE Switch, 36 ports, QSFP connectors, PFC capable (part # MSX1036B)

Note: You will need a system with PCIe Gen3 slots to achieve the rated speed with this card. These slots are available on newer systems, like the ones equipped with an Intel Romley motherboard. If you use an older system, the card will be limited by the speed of the older PCIe Gen2 bus.

 

3) Download and update the drivers

Windows Server 2012 RC includes an inbox driver for the Mellanox ConnectX-3 cards. However, Mellanox provides updated firmware and drivers for download. You should be able to use the inbox driver to access the Internet to download the updated driver.

The latest Mellanox drivers for Windows Server 2012 RC can be downloaded from the Windows Server 2012 tab on this page on the Mellanox web site: http://www.mellanox.com/content/pages.php?pg=products_dyn&product_family=32&menu_section=34.

The package is provided to you as a single executable file. Simply run the EXE file to update the firmware and driver. This package will also install Mellanox tools on the server. Please note that this package is different from the Windows Server 2012 Beta package. Make sure you grab the latest version.

After the download, simply run the executable file and choose one of the installation options (complete or custom). The installer will automatically detect if you have at least one card with an old firmware, offering to update it. You should always update to the latest firmware provided.

 

 
 

Note 1: This package does not update firmware for OEM cards. If you are using this type of card, contact your OEM for an update.

Note 2: Certain Intel Romley systems won't boot Windows Server 2012 when an old Mellanox firmware is present. You might need to update the firmware of the Mellanox card on another system before you can use that card on the Intel Romley system. That issue might also be addressed in certain cases by updating the firmware/BIOS of the Intel Romley system.

 

4) Configure the cards for RoCE

The ConnectX-3 cards can be used for both InfiniBand and Ethernet, so you need to make sure the ports are set to the right protocol.

To do that using a GUI, follow the steps below:

  • Open the Device Manager
  • Right click on the "Mellanox ConnectX VPI" device under System Devices and click on Properties, then click on Port Protocol
  • Change the port types to be "ETH" instead of "Auto" or "IB" 

 


 

Using PowerShell, you can achieve the same results by running the following cmdlets:

Dir HKLM:'SYSTEM\CurrentControlSet\Control\Class\' -ErrorAction SilentlyContinue -Recurse | ? {
(Get-ItemProperty $_.pspath -Name 'DriverDesc' -ErrorAction SilentlyContinue) -match 'Mellanox ConnectX VPI' } | % {
Set-ItemProperty ($_.pspath+"\Parameters") -Name PortType -Value "eth,eth" }

Note: If the card you have supports only RoCE (this is true for specific cards with SFP+ connectors), Ethernet will be the only choice and the IB option will be greyed out.
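
To confirm the new setting, you can read the PortType value back using the same registry match. This is a sketch based on the command above:

# Read back the PortType value for each Mellanox ConnectX VPI device
Dir HKLM:'SYSTEM\CurrentControlSet\Control\Class\' -ErrorAction SilentlyContinue -Recurse | ? {
(Get-ItemProperty $_.pspath -Name 'DriverDesc' -ErrorAction SilentlyContinue) -match 'Mellanox ConnectX VPI' } | % {
Get-ItemProperty ($_.pspath+"\Parameters") -Name PortType }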

 

5) Configuring Priority Flow Control (PFC)

In order to function reliably, RoCE requires PFC (Priority Flow Control) to be configured on all nodes and on all switches in the flow path.

5.1) Configuring PFC on Windows

To configure PFC on the Windows Servers, you need to perform the following steps:

  • Enable the Data Center Bridging (DCB) feature on both client and server
  • Create a Quality of Service (QoS) policy to tag RoCE traffic on both client and server
  • Enable Priority Flow Control (PFC) on a specific priority (the example below uses priority 4)
  • Enable DCB-capable NICs on both client and server.
  • Plumb down the DCB settings to the NICs (the example below assumes the NIC is called RDMA1)
  • Optionally, you can limit the bandwidth used by the SMB traffic (the example below limits that to 60%)

Here are the cmdlets to perform all the steps above using PowerShell:

Install-WindowsFeature Data-Center-Bridging
New-NetQosPolicy "RoCE" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 4
Enable-NetQosFlowControl -Priority 4
Enable-NetAdapterQos -InterfaceAlias RDMA1
Set-NetQosDcbxSetting -Willing 0
New-NetQosTrafficClass "RoCE" -Priority 4 -Bandwidth 60 -Algorithm ETS
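
After running these, you can verify the resulting QoS configuration with the corresponding "Get" cmdlets from the built-in QoS modules (the RDMA1 alias below follows the example above):

# Review the QoS policy, flow control, adapter QoS state and traffic class
Get-NetQosPolicy
Get-NetQosFlowControl
Get-NetAdapterQos -InterfaceAlias RDMA1
Get-NetQosTrafficClass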

Note: When you have a Kernel Debugger attached to the computer (this is only applicable for developers), flow control is always disabled. In that case, you need to run the following PowerShell cmdlet to disable this behavior:

Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\NDIS\Parameters" -Name AllowFlowControlUnderDebugger -Type DWORD -Value 1 -Force

5.2) Configuring PFC on the Switch

You need to enable Priority Flow Control on the switch as well. This configuration will vary according to the switch you chose. Refer to your switch documentation for details.

 

6) Configure IP Addresses

After you have the drivers in place, you should configure the IP address for your NIC. If you’re using DHCP, that should happen automatically, so just skip to the next step.

For those doing manual configuration, assign an IP address to your interface using either the GUI or something similar to the PowerShell below. This assumes that the interface is called RDMA1, that you’re assigning the IP address 192.168.1.10 to the interface and that your DNS server is at 192.168.1.2.

Set-NetIPInterface -InterfaceAlias RDMA1 -DHCP Disabled
Remove-NetIPAddress -InterfaceAlias RDMA1 -AddressFamily IPv4 -Confirm:$false
New-NetIPAddress -InterfaceAlias RDMA1 -IPAddress 192.168.1.10 -PrefixLength 24 -Type Unicast
Set-DnsClientServerAddress -InterfaceAlias RDMA1 -ServerAddresses 192.168.1.2

 

7) Verify everything is working

Follow the steps below to confirm everything is working as expected:

7.1) Verify network adapter configuration

Use the following PowerShell cmdlets to verify Network Direct is globally enabled and that you have NICs with the RDMA capability. Run on both the SMB server and the SMB client.

Get-NetOffloadGlobalSetting | Select NetworkDirect
Get-NetAdapterRDMA
Get-NetAdapterHardwareInfo

7.2) Verify SMB configuration

Use the following PowerShell cmdlets to make sure SMB Multichannel is enabled, confirm the NICs are being properly recognized by SMB and that their RDMA capability is being properly identified.

On the SMB client, run the following PowerShell cmdlets:

Get-SmbClientConfiguration | Select EnableMultichannel
Get-SmbClientNetworkInterface

On the SMB server, run the following PowerShell cmdlets:

Get-SmbServerConfiguration | Select EnableMultichannel
Get-SmbServerNetworkInterface
netstat.exe -xan | ? {$_ -match "445"}

Note: The NETSTAT command confirms if the File Server is listening on the RDMA interfaces.

7.3) Verify the SMB connection

On the SMB client, start a long-running file copy to create a lasting session with the SMB Server. While the copy is ongoing, open a PowerShell window and run the following cmdlets to verify the connection is using the right SMB dialect and that SMB Direct is working:

Get-SmbConnection
Get-SmbMultichannelConnection
netstat.exe -xan | ? {$_ -match "445"}

Note: If you have no activity while you run the commands above, it’s possible you get an empty list. This is likely because your session has expired and there are no current connections.

 

8) Review Performance Counters

There are several performance counters that you can use to verify that the RDMA interfaces are being used and that the SMB Direct connections are being established. You can also use the regular SMB Server and SMB Client performance counters to verify the performance of SMB, including IOPs (data requests per second), Latency (average seconds per request) and Throughput (data bytes per second). Here's a short list of the relevant performance counters.

On the SMB Client, watch for the following performance counters:

  • RDMA Activity - One instance per RDMA interface
  • SMB Direct Connection - One instance per SMB Direct connection
  • SMB Client Shares - One instance per SMB share the client is currently using

On the SMB Server, watch for the following performance counters:

  • RDMA Activity - One instance per RDMA interface
  • SMB Direct Connection - One instance per SMB Direct connection
  • SMB Server Shares - One instance per SMB share the server is currently sharing
  • SMB Server Session - One instance per client SMB session established with the server

 

9) Review the connection log details (optional)

SMB 3.0 now offers an “Object State Diagnostic” event log that can be used to troubleshoot Multichannel (and therefore RDMA) connections. Keep in mind that this is a debug log, so it’s very verbose and requires a special procedure for gathering the events. You can follow the steps below:

First, enable the log in Event Viewer:

  • Open Event Viewer
  • On the menu, select “View” then “Show Analytic and Debug Logs”
  • Expand the tree on the left: Applications and Services Log, Microsoft, Windows, SMB Client, ObjectStateDiagnostic
  • On the “Actions” pane on the right, select “Enable Log”
  • Click OK to confirm the action.

After the log is enabled, perform the operation that requires an RDMA connection. For instance, copy a file or run a specific operation.
If you’re using mapped drives, be sure to map them after you enable the log, or else the connection events won’t be properly captured.

Next, disable the log in Event Viewer:

  • In Event Viewer, make sure you select Applications and Services Log, Microsoft, Windows, SMB Client, ObjectStateDiagnostic
  • On the “Actions” pane on the right, select “Disable Log”

Finally, review the events on the log in Event Viewer. You can filter the log to include only the SMB events that confirm that you have an SMB Direct connection or only error events.

The “Smb_MultiChannel” keyword will filter for connection, disconnection and error events related to SMB. You can also filter by event numbers 30700 to 30706.

  • Click on the “ObjectStateDiagnostic” item on the tree on the left.
  • On the “Actions” pane on the right, select “Filter Current Log…”
  • Select the appropriate filters

You can also use a PowerShell window to view the events. For instance, the following cmdlet lists any RDMA-related connection events and errors:

Get-WinEvent -LogName Microsoft-Windows-SMBClient/ObjectStateDiagnostic -Oldest |? Message -match "RDMA"

 

10) Conclusion

I hope this helps you with your testing of the Mellanox RoCE adapters. I wanted to cover all the different angles to make sure you don’t miss any relevant steps. I also wanted to have enough troubleshooting guidance here to get you covered for any known issues. Let us know how your experience was by posting a comment.

Deploying Windows Server 2012 with SMB Direct (SMB over RDMA) and the Chelsio T4 cards using iWARP – Step by Step


1) Introduction

We have covered the basics of SMB Direct and some of the use cases in previous blog posts and TechNet articles. You can find them at http://smb3.info.

However, I get a lot of questions about specifically which cards work with this new feature and how exactly you set those up. This is one in a series of blog posts that cover specific instructions for RDMA NICs. In this specific post, we’ll cover all the details to deploy the Chelsio T4 series of adapters, using the iWARP “flavor” of RDMA.

 

2) Hardware and Software

To implement and test this technology, you will need:

  • Two or more computers running Windows Server 2012 Release Candidate
  • One or more Chelsio T4 cards for each server
  • One or more 10GbE switches
  • Two or more cables as required for the Chelsio T4 cards (typically SFP+ connectors)

Chelsio states support for Windows Server 2012 SMB Direct and Kernel-mode RDMA capabilities on the following adapter models:

  • Chelsio T420-CR adapters (dual-port 10GbE card with iWARP support)
  • Chelsio T440-CR adapters (quad-port 10GbE card with iWARP support)

You can find more information about these adapters on Chelsio’s web site at http://www.chelsio.com/products/t4_unified_wire_adapters.

Note: The memory-free T4 adapters do not support SMB Direct. That includes the T420-SO-CR.

 

3) Download and update the drivers

Chelsio has drivers for Windows Server 2012 with RDMA support. That includes a WHQL certified driver available for public download.

The latest Chelsio T4 beta driver for Windows Server 2012 can be downloaded from: http://service.chelsio.com (look for the Windows Server 2012 driver under "Latest Releases"). 

The site also offers a README file, Release notes and a User Guide. These are specific for Windows Server 2012, and you should definitely review them before installing the driver.

To update to the new driver, run the executable file and follow the wizard. The software will copy the driver files and install them, prompting for a restart at the end of the process.

This driver should also be available via Windows Update, so you can get it by simply checking for updates on your Windows Server.
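
Once the driver is installed, one way to confirm the version in use is to query the adapter (the RDMA1 alias below is just an example):

# Show the driver version and date reported for the adapter
Get-NetAdapter -InterfaceAlias RDMA1 | Format-List Name, InterfaceDescription, DriverVersion, DriverDate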

 

4) Configure IP Addresses

After you have the drivers in place, you should configure the IP address for your NIC. If you’re using DHCP, that should happen automatically, so just skip to the next step.

For those doing manual configuration, assign an IP address to your interface using either the GUI or something similar to the PowerShell below. This assumes that the interface is called RDMA1, that you’re assigning the IP address 192.168.1.10 to the interface, that your default gateway is at 192.168.1.1 and that your DNS server is at 192.168.1.2.

Set-NetIPInterface -InterfaceAlias RDMA1 -DHCP Disabled
Remove-NetIPAddress -InterfaceAlias RDMA1 -AddressFamily IPv4 -Confirm:$false
New-NetIPAddress -InterfaceAlias RDMA1 -IPAddress 192.168.1.10 -PrefixLength 24 -Type Unicast -DefaultGateway 192.168.1.1
Set-DnsClientServerAddress -InterfaceAlias RDMA1 -ServerAddresses 192.168.1.2

 

5) Configure the firewall

iWARP uses TCP/IP for communications, so you need to configure the Firewall to allow that traffic. You essentially need to add a firewall rule to the SMB Server to allow incoming traffic from the SMB Direct clients. In Windows Server 2012, SMB Direct with iWARP uses TCP port 5445, in addition to the traditional 445 port used for SMB.

Here’s how you would enable the built-in firewall rule using a PowerShell cmdlet on the SMB server to allow access by the clients:

Enable-NetFirewallRule FPSSMBD-iWARP-In-TCP

Note that the FPSSMBD-iWARP-In-TCP rule is preconfigured on all Windows Servers and essentially allows incoming traffic on port 5445. It should only be enabled on systems with iWARP network interfaces that have the proper drivers for SMB Direct usage.

If you have multiple SMB servers, you will need to enable this firewall rule on every server that will use SMB Direct with iWARP.

 

6) Allow cross-subnet access (optional)

One of the main advantages of iWARP RDMA technology is the ability to be routed across different subnets. While the most common setup is a single subnet (or maybe even single rack) deployment, you can use the Chelsio T4 cards to connect computers across subnets. However, this capability is disabled by default on Windows Server 2012 since not everyone will require it.

To enable Network Direct (and therefore SMB Direct) in this fashion, you do need to configure every system (SMB Servers and SMB Clients) to allow routing RDMA across subnets. This is done using the following PowerShell cmdlets:

Set-NetOffloadGlobalSetting -NetworkDirectAcrossIPSubnets Allow
Disable-NetAdapter -InterfaceAlias RDMA1
Enable-NetAdapter -InterfaceAlias RDMA1

Note: Disabling and re-enabling the interface makes the settings change effective without a reboot.

We recommend that you apply the configuration change above before creating any shares. If you do happen to apply it (or make any other major network configuration changes), the SMB client will re-evaluate its connections when new interfaces are detected or every 10 minutes. You can also tell SMB to update its connections immediately by using the following cmdlet on the SMB clients:

Update-SmbMultichannelConnection

 

7) Verify everything is working

Follow the steps below to confirm everything is working as expected:

7.1) Verify network adapter configuration

Use the following PowerShell cmdlets to verify Network Direct is globally enabled and that you have NICs with the RDMA capability. Run on both the SMB server and the SMB client.

Get-NetOffloadGlobalSetting | Select NetworkDirect
Get-NetAdapterRDMA
Get-NetAdapterHardwareInfo

7.2) Verify SMB configuration

Use the following PowerShell cmdlets to make sure SMB Multichannel is enabled, confirm the NICs are being properly recognized by SMB and that their RDMA capability is being properly identified.

On the SMB client, run the following PowerShell cmdlets:

Get-SmbClientConfiguration | Select EnableMultichannel
Get-SmbClientNetworkInterface

On the SMB server, run the following PowerShell cmdlets:

Get-SmbServerConfiguration | Select EnableMultichannel
Get-SmbServerNetworkInterface
netstat.exe -xan | ? {$_ -match "445"}

Note: The NETSTAT command confirms if the File Server is listening on the RDMA interfaces.

7.3) Verify the SMB connection

On the SMB client, start a long-running file copy to create a lasting session with the SMB Server. While the copy is ongoing, open a PowerShell window and run the following cmdlets to verify the connection is using the right SMB dialect and that SMB Direct is working:

Get-SmbConnection
Get-SmbMultichannelConnection
netstat.exe -xan | ? {$_ -match "445"}

Note: If you have no activity while you run the commands above, it’s possible you get an empty list. This is likely because your session has expired and there are no current connections.

 

8) Review Performance Counters

There are several performance counters that you can use to verify that the RDMA interfaces are being used and that the SMB Direct connections are being established. You can also use the regular SMB Server and SMB Client performance counters to verify the performance of SMB, including IOPs (data requests per second), Latency (average seconds per request) and Throughput (data bytes per second). Here's a short list of the relevant performance counters.

On the SMB Client, watch for the following performance counters:

  • RDMA Activity - One instance per RDMA interface
  • SMB Direct Connection - One instance per SMB Direct connection
  • SMB Client Shares - One instance per SMB share the client is currently using

On the SMB Server, watch for the following performance counters:

  • RDMA Activity - One instance per RDMA interface
  • SMB Direct Connection - One instance per SMB Direct connection
  • SMB Server Shares - One instance per SMB share the server is currently sharing
  • SMB Server Session - One instance per client SMB session established with the server

 

9) Review the connection log details (optional)

SMB 3.0 now offers an “Object State Diagnostic” event log that can be used to troubleshoot Multichannel (and therefore RDMA) connections. Keep in mind that this is a debug log, so it’s very verbose and requires a special procedure for gathering the events. You can follow the steps below:

First, enable the log in Event Viewer:

  • Open Event Viewer
  • On the menu, select “View” then “Show Analytic and Debug Logs”
  • Expand the tree on the left: Applications and Services Log, Microsoft, Windows, SMB Client, ObjectStateDiagnostic
  • On the “Actions” pane on the right, select “Enable Log”
  • Click OK to confirm the action.

After the log is enabled, perform the operation that requires an RDMA connection. For instance, copy a file or run a specific operation.
If you’re using mapped drives, be sure to map them after you enable the log, or else the connection events won’t be properly captured.

Next, disable the log in Event Viewer:

  • In Event Viewer, make sure you select Applications and Services Log, Microsoft, Windows, SMB Client, ObjectStateDiagnostic
  • On the “Actions” pane on the right, select “Disable Log”

Finally, review the events on the log in Event Viewer. You can filter the log to include only the SMB events that confirm that you have an SMB Direct connection or only error events.

The “Smb_MultiChannel” keyword will filter for connection, disconnection and error events related to SMB. You can also filter by event numbers 30700 to 30706.

  • Click on the “ObjectStateDiagnostic” item on the tree on the left.
  • On the “Actions” pane on the right, select “Filter Current Log…”
  • Select the appropriate filters

You can also use a PowerShell window to view the events. For instance, the following cmdlet lists any RDMA-related connection events and errors:

Get-WinEvent -LogName Microsoft-Windows-SMBClient/ObjectStateDiagnostic -Oldest |? Message -match "RDMA"

 

10) Conclusion

I hope this helps you with your testing of the Chelsio T4 cards. I wanted to cover all the different angles to make sure you don’t miss any relevant steps. I also wanted to have enough troubleshooting guidance here to get you covered for any known issues. Let us know how your experience was by posting a comment.

Windows Server 2012 Scale-Out File Server for SQL Server 2012 - Step-by-step Installation


Outline

1. Introduction
1.1. Overview
1.2. Hardware
1.3. Software
1.4. Notes and disclaimers

2. Install Windows Server 2012
2.1. Preparations
2.2. Install the OS
2.3. Rename the computer
2.4. Enable Remote Desktop

3. Configure the Hyper-V Host
3.1. Install the Hyper-V role to the server
3.2. Create the VM switches
3.3. Rename the network adapters
3.4. Assign static IP addresses for the Hyper-V host

4. Create the Base VM
4.1. Preparations
4.2. Create a Base VM
4.3. Install Windows Server 2012 on the VM
4.4. Sysprep the VM
4.5. Remove the base VM

5. Configure the 4 VMs
5.1. Create 4 new differencing VHDs using the Base VHD
5.2. Create 4 similarly configured VMs
5.3. Start the 4 VMs
5.4. Complete the mini-setup for the 4 VMs
5.5. Change the computer name for each VM
5.6. For each VM, configure the networks
5.7. Review VM name and network configuration

6. Configure DNS and Active Directory
6.1. Install DNS and Active Directory Domain Services
6.2. Configure Active Directory
6.3. Join the other VMs to the domain
6.4. Create the SQL Service account

7. Configure iSCSI
7.1. Add the iSCSI Software Target
7.2. Create the LUNs and Target
7.3. Configure the iSCSI Initiators
7.4. Configure the disks

8. Configure the File Server
8.1 Install the required roles and features
8.2. Validate the Failover Cluster Configuration
8.3. Create a Failover Cluster
8.4. Configure the Cluster Networks
8.6. Create the Scale-Out File Server
8.7. Create the folders and shares

9. Configure the SQL Server
9.1. Mount the SQL Server ISO file
9.2. Run SQL Server Setup
9.3. Create a database using the clustered file share

10. Verify SMB features
10.1. Verify that SMB Multichannel is working
10.2. Query the SMB Sessions and Open Files
10.3. Planned move of a file server node
10.4. Unplanned failure of a file server node
10.5. Surviving the loss of a client NIC

11. Shut down, startup and install final notes

12. Conclusion

 


1. Introduction

1.1. Overview

In this document, I am sharing all the steps I used to create a Windows Server 2012 File Server demo or test environment, so you can experiment with some of the new technologies yourself. You only need a single computer (the specs are provided below) and the ISO file with the Windows Server 2012 evaluation version available as a free download. For the SQL part, you will need the SQL Server 2012 evaluation version, which is also available as a free download.

The demo setup includes 4 virtual machines: one domain controller and iSCSI target, two file servers and a SQL server. You need the iSCSI target and two file servers because we’re using Failover Clustering to showcase SMB Transparent Failover and SMB Scale-Out. We’ll also use multiple Hyper-V virtual networks, so we can showcase SMB Multichannel.


This will probably require a few hours of work end-to-end, but it is a great way to experiment with a large set of Microsoft technologies in Windows Server 2012, including:

  • Hyper-V
  • Networking
  • Domain Name Services (DNS)
  • Active Directory Domain Services (AD-DS)
  • iSCSI Target
  • iSCSI Initiator
  • Failover Clustering
  • File Servers
  • PowerShell

Follow the steps and let me know how it goes in the comment section. If you run into any issues or find anything particularly interesting, don’t forget to mention the number of the step.

1.2. Hardware

You will need the following hardware to perform the steps described here:

  • Computer capable of running Windows Server 2012 and Hyper-V (64-bit, virtualization technology) with at least 8GB of RAM
  • An 8GB USB stick, if you’re installing Windows Server from USB and copying the downloaded software around (you can also burn the software to a DVD)
  • Internet connection for downloading software and updates (DHCP enabled)

1.3. Software

You will need the following software to perform the steps described here:

  • Windows Server 2012 evaluation version (ISO file), available as a free download
  • SQL Server 2012 evaluation version, also available as a free download

1.4. Notes and disclaimers

  • The text for each step also focuses on the specific actions that deviate from the default or where a clear default is not provided. If you are asked a question or required to perform an action that you do not see described in these steps, go with the default option.
  • Obviously, a single-computer solution can never tolerate the failure of that one computer. So, the configuration described here is not really continuously available. It’s just a simulation.
  • The configuration described here is for demo, testing or learning. You would definitely need a different configuration for a production deployment.
  • A certain familiarity with Windows administration and configuration is assumed. If you're new to Windows, this document is not for you. Sorry...
  • There are usually several ways to perform a specific configuration or administration task. What I describe here is one of those many ways. It's not necessarily the best way, just the one I personally like best at the moment.
  • In theory, you could run this demo on a host computer with only 4GB of RAM, but you would need to configure each VM to run with only 512MB of RAM, which is the bare minimum required. It works, but it will run slower.
  • This blog post is an update on a previous blog post that provided similar steps for Windows Server 2012 Beta. This post supersedes the old one.

2. Install Windows Server 2012

2.1. Preparations

  • Format a USB disk using Windows 8 or Windows Server 2012.
  • Copy the contents of the Windows Server 2012 ISO file to the USB disk.
  • To read the files from within the ISO file using Windows 8 or Windows Server 2012, just double click the ISO file to mount it (or mount it with PowerShell, as shown below).
  • Make sure the computer BIOS is configured for Virtualization. Each computer BIOS is different, so you need to find the right settings.
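
If you prefer to script the mount instead of double-clicking, here’s a minimal sketch using the Storage module cmdlets (it assumes you also copied the ISO to C:\ISO, the folder used later in this document):

# Mount the ISO and show the drive letter it was assigned
$image = Mount-DiskImage -ImagePath C:\ISO\WindowsServer2012.ISO -PassThru
($image | Get-Volume).DriveLetter

# Unmount it when you're done
Dismount-DiskImage -ImagePath C:\ISO\WindowsServer2012.ISO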

2.2. Install the OS

  • Use your computer’s BIOS option to boot from the USB disk.
  • After “Windows Setup” starts from the USB disk, enter the required information to install the OS:
  • Select language, time and currency format and keyboard. Then click “Next”.
  • Click “Install Now”.
  • Select the “Windows Server Datacenter - Server with a GUI” option and click “Next”.  
  • Accept the license agreement and click “Next”.
  • Select the “Custom: Install Windows only” option.
  • Select the install location for Windows Server and click “Next”.
  • Wait for the installation to complete. This will take a few minutes.
  • After the installation is completed, the OS will boot.
  • Type the administrator password twice, then click on “Finish”.

2.3. Rename the computer

  • Login to the computer using the Administrator password and rename the computer.

2.3.PS. Using PowerShell

Rename-Computer DEMO-HV0 -Restart

2.3.GUI. Using Server Manager

  • In Server Manager, click on “Local Server” on the list on left.
  • Click on the default name next to “Computer Name”.
  • Click on “Change”.
  • Enter “DEMO-HV0” as the new Computer Name and click “OK”.
  • Accept the option to restart the computer.

2.4. Enable Remote Desktop

  • Log in using the Administrator account and enable Remote Desktop.
  • After completing this step, you will be able to work from a Remote Desktop connection.

2.4.PS. Using SCONFIG.EXE

  • Use a command prompt to start SCONFIG.EXE
  • Use option 7 in SCONFIG to enable remote desktop (a pure PowerShell alternative is sketched below)
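
If you’d rather not leave PowerShell, here’s a minimal sketch that flips the standard Remote Desktop settings directly (a registry value plus the built-in firewall rule group); treat it as an alternative to SCONFIG, not the official path:

# Allow Remote Desktop connections
Set-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server" -Name fDenyTSConnections -Value 0

# Open the Windows Firewall for Remote Desktop
Enable-NetFirewallRule -DisplayGroup "Remote Desktop"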

2.4.GUI. Using Server Manager

  • In Server Manager, click on “Local Server” on the list on left.
  • Click on the “Disabled” item next to “Remote Desktop”.
  • Select the option to “Allow connections from computers running any version…”
  • Click “OK” multiple times until you’re back to the “Local Server” screen.

3. Configure the Hyper-V Host

3.1. Install the Hyper-V role to the server

  • Install the Hyper-V role and the required management tools. The computer will restart.

3.1.PS. Using PowerShell

Install-WindowsFeature Hyper-V, Hyper-V-PowerShell, Hyper-V-Tools -Restart

3.1.OUT. Sample output

PS C:\> Get-WindowsFeature *Hyper*

Display Name                                            Name                 Install State
------------                                            ----                 -------------
[X] Hyper-V                                             Hyper-V              Installed
        [X] Hyper-V Management Tools                    RSAT-Hyper-V-Tools   Installed
            [X] Hyper-V GUI Management Tools            Hyper-V-Tools        Installed
            [X] Hyper-V Module for Windows PowerShell   Hyper-V-PowerShell   Installed

3.1.GUI. Using Server Manager

  • In Server Manager, click on “Dashboard” on the list on left.
  • Click on the “Add Roles and Features”, which is option 2 under “Configure this local server”.
  • On the “Before You Begin” page, just click “Next”.
  • On the “Installation Type” page, click “Role-based or feature-based installation” and click “Next”.
  • On the “Server Selection” page, select your server and click “Next”.
  • On the “Server Role” page, select “Hyper-V”.
  • On the “Add features that are required for Hyper-V” dialog, click “Add features”, then click “Next”.  
  • On the “Features” page, just click “Next”.
  • On the “Hyper-V” page, just click “Next”.
  • On the “Create Virtual Switches” page, just click “Next”.
  • On the “Virtual Machine Migration” page, just click “Next”.
  • On the “Default Stores” page, just click “Next”.
  • On the “Confirmation” page, click “Restart target machine automatically if needed”, click “Yes” to confirm and then click “Install”.
  • The role will be installed and the computer will restart in the end.

3.2. Create the VM switches

  • Create one external virtual network (VM switch that is connected to the external network interface).
  • Create three internal virtual networks (VM switches used just to communicate between the VMs).

3.2.PS. Using PowerShell

Get-NetAdapter
Rename-NetAdapter –InterfaceDescription *Gigabit* -NewName External
New-VMSwitch –Name External -NetAdapterName External
1..3 | % { New-VMSwitch -Name Internal$_ -SwitchType Internal }
Get-VMSwitch
Get-NetAdapter

  • Note: If you’re connected via “Remote Desktop” to the server, you might temporarily lose the connection when you create the External VMSwitch.
    If using DHCP on that interface, you will be able to reconnect. If you’re using static IP addresses, you should run this step locally, not via “Remote Desktop”.

3.2.GUI. Using Hyper-V Manager

  • In Server Manager, click on “Tools” in the upper left and select “Hyper-V Manager”
  • In Hyper-V Manager, click on the server name (DEMO-HV0) on the pane on the left
  • On the task pane on the right, click on the “Virtual Switch Manager”  
  • Use the “New virtual switch” option to add 1 network named “External”, select “External” for type and select your physical NIC. Click “Apply” to confirm.
  • Use the “New virtual switch” option to add 3 networks named “Internal1”, “Internal2” and “Internal3”, each of type “Internal”. 
  • After creating the four switches, you should see the four new virtual NICs in Server Manager, under Local Server
     

3.3. Rename the network adapters

  • You should now configure the 4 virtual network interfaces on the parent.
  • This includes renaming them to the names of the switches and configuring static IP addresses for the 3 internal NICs (the external NIC should be DHCP enabled, so it does not need IP address configuration).

3.3.PS. Using PowerShell

1..3 | % {Rename-NetAdapter *Internal$_* -NewName ParentInternal$_}
Rename-NetAdapter "vEthernet (External)" -NewName ParentExternal
Get-NetAdapter

3.3.OUT. Sample Output

PS C:\> Get-NetAdapter

Name               InterfaceDescription                    ifIndex Status  MacAddress          LinkSpeed
----               --------------------                    ------- ------  ----------          ---------
ParentInternal3    Hyper-V Virtual Ethernet Adapter #5          45 Up      00-15-5D-B5-AE-07     10 Gbps
ParentInternal2    Hyper-V Virtual Ethernet Adapter #4          36 Up      00-15-5D-B5-AE-06     10 Gbps
ParentInternal1    Hyper-V Virtual Ethernet Adapter #3          20 Up      00-15-5D-B5-AE-05     10 Gbps
ParentExternal     Hyper-V Virtual Ethernet Adapter #2          16 Up      00-21-9B-31-BA-15     10 Gbps
External           Intel(R) 82566DM-2 Gigabit Network C...      12 Up      00-21-9B-31-BA-15      1 Gbps

3.3.GUI. Using Server Manager

  • In Server Manager, click on “Local Server” on the list on left.
  • In the properties pane on the right, scroll to see the list of “Wired Internet Connections” (there will be 4 of them, as we showed in the previous Server Manager screenshot).
  • Click on the “Ipv4 address…” link next to one of the interfaces, the “Network Connections” window will show.
  • Select the interface that shows as “Enabled” and click on “Rename this connection” to rename it to “External”.
  • Rename the 3 interfaces on an “Unidentified network” to “Internal1”, “Internal2” and “Internal3”  
  • Close the “Network Connections” and refresh the “Local Server” view. 
     

3.4. Assign static IP addresses for the Hyper-V host

  • In this step, you will assign a static IP address to the 3 internal virtual NICs on the parent partition.
  • These NICs initially use the default setting (DHCP), but there is no DHCP server for the internal network.
  • The table below shows the desired configuration for each interface.

Machine    Parent External    Parent Internal1    Parent Internal2    Parent Internal3
Parent     DHCP               192.168.101.100     192.168.102.100     192.168.103.100

  • Note 1: The ParentExternal network does not need any further configuration, since the default is already to use DHCP.
  • Note 2: The preferred DNS IP address for all 3 internal interfaces should be set to 192.168.101.1 (that will be the IP address of the DNS server we will configure later).

3.4.PS. Using PowerShell

1..3 | % {
Set-NetIPInterface –InterfaceAlias ParentInternal$_ -DHCP Disabled
Remove-NetIPAddress –InterfaceAlias ParentInternal$_ -AddressFamily IPv4 -Confirm:$false
New-NetIPAddress –InterfaceAlias ParentInternal$_  -IPAddress "192.168.10$_.100" -PrefixLength 24 -Type Unicast
Set-DnsClientServerAddress –InterfaceAlias ParentInternal$_ -ServerAddresses 192.168.101.1
}

Get-NetIPAddress –AddressFamily Ipv4 | FT

3.4.OUT. Sample Output

PS C:\> Get-NetIPAddress –AddressFamily Ipv4 | Format-Table

ifIndex IPAddress           PrefixLength PrefixOrigin SuffixOrigin AddressState PolicyStore
------- ---------           ------------ ------------ ------------ ------------ -----------
45      192.168.103.100               24 Manual       Manual       Preferred    ActiveStore
36      192.168.102.100               24 Manual       Manual       Preferred    ActiveStore
20      192.168.101.100               24 Manual       Manual       Preferred    ActiveStore
16      10.123.181.174                23 Dhcp         Dhcp         Preferred    ActiveStore
1       127.0.0.1                      8 WellKnown    WellKnown    Preferred    ActiveStore
   

3.4.GUI. Using Server Manager

  • In Server Manager, click on “Local Server” on the list on left.
  • In the properties pane on the right, scroll to see the list of Ethernet interfaces (there will be 4 of them)
  • Click on the “Ipv4 address…” link next to one of the interfaces, the “Network Connections” window will show
  • On the list of network connections, right click the Internal1 interface and click “Properties”
  • On the “ParentInternal1 Properties” window, select “Internet Protocol Version 4 (TCP/IPv4)” and click “Properties”.
  • On TCP/IPv4 Properties window, select the option to “Use the following IP address”.
  • Enter the IP address 192.168.101.100 and the subnet mask 255.255.255.0 (as shown below).
  • Select the option “Use the following DNS server address”, enter 192.168.101.1 and click “OK”
  • Repeat this for the Internal2 and Internal3 networks, making sure to use the correct IP address (see table shown in item 3.4) and use the same Preferred DNS server IP address.
  • Close the “Network Connections” and refresh the “Local Server” view.

4. Create the Base VM

4.1. Preparations

  • Create a folder for your ISO files at C:\ISO and a folder for your VMs at C:\VMS
  • Copy the Windows Server 2012 ISO file to C:\ISO (I renamed the file to WindowsServer2012.ISO)

4.2. Create a Base VM

  • Create a new VM that will be used as the base image for our 4 VMs.
  • Store it in the C:\VMS folder and attach the Windows Server 2012 ISO file to it.

4.2.PS. Using PowerShell

MD C:\VMS
New-VHD -Path C:\VMS\BASE.VHDX -Dynamic -SizeBytes 127GB
New-VM -Name Base -VHDPath C:\VMS\BASE.VHDX -SwitchName External -Memory 1GB
Set-VMDvdDrive –VMName Base -Path C:\ISO\WindowsServer2012.ISO
Start-VM Base

4.2.OUT. Sample Output

PS C:\> Get-VM

Name State   CPUUsage(%) MemoryAssigned(M) Uptime   Status
---- -----   ----------- ----------------- ------   ------
Base Running 2           1024              00:01:49 Operating normally

4.2.GUI. Using Hyper-V Manager

  • In Windows Explorer, create a new C:\VMS folder.
  • In Server Manager, click on “Tools” in the upper left and select “Hyper-V Manager”.
  • In Hyper-V Manager, click on the server name on the pane on the left.
  • On the task pane on the right, click on “New”, then click on “Virtual Machine…”.
  • On the “Before you begin” page, just click “Next”.
  • On the “Specify Name and Location” page, use “Base” for the name, and “C:\VMS” for location. Click “Next”.  
  • On the “Assign Memory” page, use “1024” MB and click “Next”.
  • On the “Configure Networking” page, use “External”.
  • On the “Connect Virtual Disk” page, select the option to “Create a virtual hard disk”.
  • Use “Base.vhdx” for name, “C:\VMS” for location and “127” GB for size. Click “Next”. 
  • On the “Installation Options” page, select the option to install from DVD, select the option to use an ISO file and enter the path to the Windows Server 2012 ISO file on your C:\ISO folder. Click “Finish”.  
  • In Hyper-V Manager, right-click the VM and select “Start”

4.3. Install Windows Server 2012 on the VM

  • In Server Manager, click on “Tools” in the upper left and select “Hyper-V Manager”.
  • In Hyper-V Manager, click on the server name on the pane on the left.
  • On the list of VMs, right-click the VM called “Base” and click on “Connect…”
  • Follow the instructions on the screen, as you did in item 2.2.
  • Set a password, but don’t install any roles.
  • Don’t bother renaming the computer, since we’re sysprep’ing the VM anyway.

4.4. Sysprep the VM

  • After you have the OS installed on the VM, sign in and run C:\Windows\System32\Sysprep\Sysprep.exe
  • Select the options to run the OOBE, generalize and shutdown (the equivalent command line is shown below)
  • After Sysprep completes, the VM will be shut down.
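
If you want to skip the Sysprep dialog, the same options can be passed as command-line switches (these are the standard Sysprep parameters for generalizing the image, enabling the OOBE and shutting down):

C:\Windows\System32\Sysprep\Sysprep.exe /generalize /oobe /shutdown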

4.5. Remove the base VM

  • Remove the BASE VM, leaving just the BASE.VHDX.
  • You should have a new base VHD ready to use at C:\VMS\BASE.VHDX. Its size should be around 9GB.

4.5.PS. Using PowerShell

Remove-VM Base

4.5.GUI. Using Hyper-V Manager

  • In Hyper-V Manager, click on the server name on the pane on the left
  • On the list of VMs, right-click the VM called “Base” and click on “Delete”

5. Configure the 4 VMs

5.1. Create 4 new differencing VHDs using the Base VHD

5.1.PS. Using PowerShell

1..4 | % { New-VHD -ParentPath C:\VMS\BASE.VHDX –Path C:\VMS\VM$_.VHDX }

5.1.OUT. Sample Output

PS C:\> Dir C:\VMS

    Directory: C:\VMS

Mode                LastWriteTime     Length Name
----                -------------     ------ ----
-a---         8/17/2012  10:00 AM 9634316288 BASE.VHDX
-a---         8/17/2012  10:01 AM    4194304 VM1.VHDX
-a---         8/17/2012  10:01 AM    4194304 VM2.VHDX
-a---         8/17/2012  10:01 AM    4194304 VM3.VHDX
-a---         8/17/2012  10:01 AM    4194304 VM4.VHDX

5.1.GUI. Using Hyper-V Manager

  • In Hyper-V Manager, click on the server name on the pane on the left.
  • On the task pane on the right, click on “New”, then click on “Hard disk…”
  • On the “Before you begin” page, just click “Next”.
  • On the “Choose disk format” page, select “VHDX” and click “Next”.
  • On the “Choose disk type” page, select “Differencing”.  
  • On the “Specify Name and Location” page, use “VM1.VHDX” for name and “C:\VMS” for location. Click “Next”.
  • On the “Configure disk” page, use “C:\VMS\BASE.VHDX” for the location of the parent VHD.
  • After this, you will have a new differencing VHD at VM1.VHDX that’s 4MB in size.
  • Since we’re creating 4 VMs, copy that file into VM2.VHDX, VM3.VHDX and VM4.VHDX.

5.2. Create 4 similarly configured VMs

  • You should create four VMs as follows:

VM     Role                    Computer Name        External   Internal 1        Internal 2        Internal 3
VM1    DNS, DC, iSCSI Target   DEMO-DC.DEMO.TEST    DHCP       192.168.101.1     N/A               N/A
VM2    File Server 1           DEMO-F1.DEMO.TEST    DHCP       192.168.101.2     192.168.102.2     192.168.103.2
VM3    File Server 2           DEMO-F2.DEMO.TEST    DHCP       192.168.101.3     192.168.102.3     192.168.103.3
VM4    SQL Server              DEMO-DB.DEMO.TEST    DHCP       192.168.101.4     192.168.102.4     192.168.103.4

  • Note 1: Each VM will use one of the VHD files we created in the previous step.
  • Note 2: Each VM will use 1GB of memory.

5.2.PS. Using PowerShell

1..4 | % { New-VM -Name VM$_ -VHDPath C:\VMS\VM$_.VHDX -Memory 1GB -SwitchName External}
1..4 | % { Add-VMNetworkAdapter VM$_ –SwitchName Internal1 }
2..4 | % { Add-VMNetworkAdapter VM$_ –SwitchName Internal2 }
2..4 | % { Add-VMNetworkAdapter VM$_ –SwitchName Internal3 }

5.2.OUT. Sample Output

PS C:\> Get-VM | % { $_ ; $_ | Get-VMNetworkAdapter | FT }

Name State CPUUsage(%) MemoryAssigned(M) Uptime   Status
---- ----- ----------- ----------------- ------   ------
VM1  Off   0           0                 00:00:00 Operating normally

Name            IsManagementOs VMName SwitchName MacAddress   Status IPAddresses
----            -------------- ------ ---------- ----------   ------ -----------
Network Adapter False          VM1    External   000000000000        {}
Network Adapter False          VM1    Internal1  000000000000        {}

Name State CPUUsage(%) MemoryAssigned(M) Uptime   Status
---- ----- ----------- ----------------- ------   ------
VM2  Off   0           0                 00:00:00 Operating normally

Name            IsManagementOs VMName SwitchName MacAddress   Status IPAddresses
----            -------------- ------ ---------- ----------   ------ -----------
Network Adapter False          VM2    External   000000000000        {}
Network Adapter False          VM2    Internal1  000000000000        {}
Network Adapter False          VM2    Internal2  000000000000        {}
Network Adapter False          VM2    Internal3  000000000000        {}

Name State CPUUsage(%) MemoryAssigned(M) Uptime   Status
---- ----- ----------- ----------------- ------   ------
VM3  Off   0           0                 00:00:00 Operating normally

Name            IsManagementOs VMName SwitchName MacAddress   Status IPAddresses
----            -------------- ------ ---------- ----------   ------ -----------
Network Adapter False          VM3    External   000000000000        {}
Network Adapter False          VM3    Internal1  000000000000        {}
Network Adapter False          VM3    Internal2  000000000000        {}
Network Adapter False          VM3    Internal3  000000000000        {}

Name State CPUUsage(%) MemoryAssigned(M) Uptime   Status
---- ----- ----------- ----------------- ------   ------
VM4  Off   0           0                 00:00:00 Operating normally

Name            IsManagementOs VMName SwitchName MacAddress   Status IPAddresses
----            -------------- ------ ---------- ----------   ------ -----------
Network Adapter False          VM4    External   000000000000        {}
Network Adapter False          VM4    Internal1  000000000000        {}
Network Adapter False          VM4    Internal2  000000000000        {}
Network Adapter False          VM4    Internal3  000000000000        {}

5.2.GUI. Using Hyper-V Manager

  • In Hyper-V Manager, click on the server name on the pane on the left.
  • On the task pane on the right, click on “New”, then click on “Virtual Machine…”
  • On the “Before you begin” page, just click “Next”.
  • On the “Specify Name and Location” page, use “VM1” for the name, and “C:\VMS” for location. Click “Next”.
  • On the “Assign Memory” page, use “1024” MB and click “Next”.
  • On the “Configure Networking” page, use “External”.
  • On the “Connect Virtual Disk” page, select the option to “Use an existing virtual hard disk”.
  • Use “C:\VMS\VM1.vhdx” for name. Click on “Finish”.
  • In Hyper-V Manager, click on the server name on the pane on the left.
  • On the list of VMs, right-click the VM you just created (VM1) and click on “Settings…”
  • On the “Settings for VM1” window, select “Add Hardware”, then “Network Adapter”.
  • Select the “Internal1” interface and click OK.
  • Repeat the process for VMs 2 to 4.
  • For VMs 2 to 4, make sure to add networks Internal2 and Internal3 as well as Internal1.

5.3. Start the 4 VMs

5.3.PS. Using PowerShell

Start-VM VM*

5.3.GUI. Using Hyper-V Manager

  • In Hyper-V Manager, click on the server name on the pane on the left
  • In Hyper-V Manager, multi-select VMs 1 to 4, right click them and click on “Start”

5.4. Complete the mini-setup for the 4 VMs

  • Using Hyper-V manager, multi-select VMs 1 to 4, right click them and click on “Connect…”
  • Let the mini-setup complete, and configure each of the four VMs.
  • You will be prompted for the usual items, like license agreement, clock/language/region settings and a password.

5.5. Change the computer name for each VM

  • Change the computer name for each VM, using the names defined in item 5.2
  • The examples below are for VM1 (the DNS and Domain Controller).
  • You should repeat this for each of the 4 VMs.
  • At this point, you can also use this opportunity to enable Remote Desktop for each VM.
  • This would be done for each VM as you did for the host in item 2.4, if you want to access the VMs remotely.

5.5.PS. Using PowerShell (for VM1, for instance)

Rename-Computer DEMO-DC -Restart

5.5.GUI. Using Server Manager (for VM1, for instance)

  • In Server Manager, click on “Local Server” on the list on left.
  • Click on the default name next to “Computer Name”.
  • Click on “Change”.
  • Enter the new computer name as “DEMO-DC”.
  • Click “OK” to accept the changes.
  • Click “OK” to acknowledge that you need to restart to apply changes.
  • Click “Restart Now”.

5.6. For each VM, configure the networks

  • In this step you will configure the network for each VM as shown on the table in item 5.2
  • We first rename the Network Connections in each guest for easy identification.
  • The External network is identified as being the only one with a DHCP address.
  • The remaining networks are renamed to Internal1, Internal2 and Internal3.
  • For internal networks static IPs are configured, with mask 255.255.255.0 and DNS set to 192.168.101.1.
  • The Internal 1 network will be used for DNS, Active Directory and the iSCSI Target.
  • The External network is useful for downloading from the Internet and remotely connecting to the 4 VMs.
  • You could configure a DHCP server for the internal interfaces.
  • However, due to the risk of accidentally creating a rogue DHCP server in your network, fixed IPs are used.

5.6.PS. Using PowerShell (for VM1, for instance)

## External NIC is the only one with a DHCP server
 
Get-NetIPAddress -PrefixOrigin DHCP | % { 
     Rename-NetAdapter -InterfaceAlias $_.InterfaceAlias –NewName External
}
 
## $IC – Internal Count – Number of Internal networks
 
$IC=0
Get-NetAdapter Ethernet* | Sort MacAddress | % {
   $IC++
   Rename-NetAdapter -InterfaceAlias $_.InterfaceAlias –NewName Internal$IC
}
 
## $VM is the VM Number, between 1 and 4. Used as the last portion of the IP address.
 
$VM=1
1..$IC | % {
   Set-NetIPInterface –InterfaceAlias Internal$_ -DHCP Disabled
   Remove-NetIPAddress –InterfaceAlias Internal$_ -AddressFamily IPv4 –Confirm:$false
   New-NetIPAddress –InterfaceAlias Internal$_ -IPAddress "192.168.10$_.$VM" -PrefixLength 24 -Type Unicast 
   Set-DnsClientServerAddress –InterfaceAlias Internal$_ -ServerAddresses 192.168.101.1
}

5.6.GUI. Using Server Manager (For VM1, for instance)

  • This step is similar to step 3.4, but this time performed inside each of the 4 VMs.
  • Inside the VM, in Server Manager, click on “Local Server” on the list on left.
  • In the properties pane on the right, Click on the “Ipv4 address…” link next to one of the interfaces.
  • The “Network Connections” window will show.
  • As you did with the Hyper-V host, rename the NIC with the DHCP connection to “External”.
    This NIC needs no further configuration.
  • Rename the remaining NIC to “Internal1”. (Other VMs will also have Internal2 and Internal3).
  • For each internal NIC, right click the Internal1 interface and click “Properties”.
  • On the “Internal1 Properties” window, select “Internet Protocol Version 4 (TCP/IPv4)” and click “Properties”.
  • On TCP/IPv4 Properties window, select the option to “Use the following IP address”.
  • Enter the corresponding IP address (see table on item 5.2) and the subnet mask 255.255.255.0.
  • Select the option “Use the following DNS server address”, enter 192.168.101.1 and click “OK”.
  • Repeat this for the Internal2 and Internal3 networks using the corresponding IP address and the DNS above.
  • Close the “Network Connections” and refresh the “Local Server” view.

 

  • Note: If you can’t tell which Internal network is which inside the VMs with multiple Internal networks, you can temporarily set one of the adapters to “Not Connected” in the VM Settings and verify which one shows as “Network cable unplugged” (a scripted version of this trick is sketched below).
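
Here’s a minimal sketch of that trick, run from the Hyper-V host. It assumes you’re identifying the Internal2 adapter of VM2; adjust the VM and switch names as needed:

# Pick the VM2 adapter connected to Internal2 and unplug it temporarily
$nic = Get-VMNetworkAdapter -VMName VM2 | ? SwitchName -eq Internal2
$nic | Disconnect-VMNetworkAdapter

# ...inside the VM, note which connection shows "Network cable unplugged"...

# Then plug the adapter back into the same switch
$nic | Connect-VMNetworkAdapter -SwitchName Internal2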

5.7. Review VM name and network configuration

  • After renaming the computer, renaming the network and configuring IP addresses, review the configuration on each VM to make sure you did not miss any step. Examples are shown below for VM1 and VM2.

5.7.PS. Using PowerShell

Get-WmiObject Win32_ComputerSystem
Get-NetAdapter
Get-NetIPAddress -AddressFamily IPv4 | Sort IfIndex | FT

5.7.OUT. Sample Output (for VM2, a.k.a. DEMO-F1)

PS C:\> Get-WmiObject Win32_ComputerSystem

Domain              : WORKGROUP
Manufacturer        : Microsoft Corporation
Model               : Virtual Machine
Name                : DEMO-F1
PrimaryOwnerName    : Windows User
TotalPhysicalMemory : 1072799744

PS C:\> Get-NetAdapter

Name         InterfaceDescription                    ifIndex Status  MacAddress          LinkSpeed
----         --------------------                    ------- ------  ----------          ---------
External     Microsoft Hyper-V Network Adapter #4         15 Up      00-15-5D-B5-AE-12     10 Gbps
Internal3    Microsoft Hyper-V Network Adapter #3         14 Up      00-15-5D-B5-AE-15     10 Gbps
Internal1    Microsoft Hyper-V Network Adapter            12 Up      00-15-5D-B5-AE-13     10 Gbps
Internal2    Microsoft Hyper-V Network Adapter #2         13 Up      00-15-5D-B5-AE-14     10 Gbps

PS C:\> Get-NetIPAddress -AddressFamily IPv4 | Sort IfIndex | FT

ifIndex IPAddress         PrefixLength PrefixOrigin SuffixOrigin AddressState PolicyStore
------- ---------         ------------ ------------ ------------ ------------ -----------
1       127.0.0.1                    8 WellKnown    WellKnown    Preferred    ActiveStore
12      192.168.101.2               24 Manual       Manual       Preferred    ActiveStore
13      192.168.102.2               24 Manual       Manual       Preferred    ActiveStore
14      192.168.103.2               24 Manual       Manual       Preferred    ActiveStore
15      10.123.181.211              23 Dhcp         Dhcp         Preferred    ActiveStore

5.7.GUI. Using Server Manager

  • In Server Manager, click on “Local Server” on the list on left.
  • Verify the network configuration.

6. Configure DNS and Active Directory

6.1. Install DNS and Active Directory Domain Services

  • Install the required DNS and Active Directory Domain Services roles to VM1 (DEMO-DC)

6.1.PS. Using PowerShell

Install-WindowsFeature DNS, AD-Domain-Services, RSAT-AD-PowerShell, RSAT-ADDS-Tools

6.1.GUI. Using Server Manager

  • In Server Manager, click on “Dashboard” on the list on left.
  • Click on the “Add Roles and Features”, which is option 2 under “Configure this local server”.
  • On the “Before You Begin” page, just click “Next”.
  • On the “Installation Type” page, click “Role-based or feature-based installation” and click “Next”.
  • On the “Server Selection” page, select your server and click “Next”.
  • On the “Server Role” page, select “Active Directory Domain Services”.
  • On the dialog about adding required services, click “Add Features”.
  • On the “Server Role” page, select “DNS Server” and click “Next”.
  • On the dialog about adding required services, click “Add Features”.   
  • On the “Feature” page, just click “Next”.
  • On the “Active Directory Domain Services” page, just click “Next”.
  • On the “DNS Server” page, just click “Next”.
  • On the “Confirmation” page, click “Install”.
  • The roles will be installed.

6.2. Configure Active Directory

  • Create a new domain and forest for the DEMO.TEST domain.

6.2.PS. Using PowerShell

Import-Module ADDSDeployment
Install-ADDSForest `
-CreateDNSDelegation:$false `
-DatabasePath "C:\Windows\NTDS" `
-DomainMode "Win2008R2" `
-DomainName "DEMO.TEST" `
-DomainNetBIOSName "DEMO" `
-ForestMode "Win2008R2" `
-InstallDNS:$true `
-LogPath "C:\Windows\NTDS" `
-SafeModeAdministratorPassword (Read-Host -AsSecureString -Prompt "Enter Password") `
-SYSVOLPath "C:\Windows\SYSVOL"

6.2.GUI. Using Server Manager

  • Open Server Manager and click on the “AD DS” option on the right.
  • On the yellow band showing “Configuration Required for Active Directory...” click “More…”
  • On the “All Server Task Details”, click on the action to “Promote this server…”
  • The “Active Directory Domain Services Configuration Wizard” will start.
  • On the “Deployment Configuration” page, select “Add a new forest”.
  • Enter “DEMO.TEST” as the root domain name and click “Next”.
  • On the “Domain Controller Option”, enter the password twice and click “Next”.
  • On the “DNS Options” page, click “Next”.
  • On the “Additional Options” page, click “Next”. (NETBIOS name check takes a while)
  • On the “Paths” page, click “Next”.
  • On the “Review Options” page, click “Next”.
  • On the “Pre-requisites” page, click “Next”. (Pre-requisite checks take a while)
  • Click “Install”.

6.3. Join the other VMs to the domain

  • After the Domain Controller reboots, join each of the other 3 VMs to the domain
  • You will need to provide the domain name (DEMO.TEST) and the Administrator credentials
  • From now on, always log on to any of the VMs using the domain credentials: DEMO.TEST\Administrator

6.3.PS. Using PowerShell (for VM2 to VM4)

Add-Computer -DomainName DEMO.TEST -Restart

6.4. Create the SQL Service account

  • In the Domain Controller, use Active Directory Users and Computers to create a new user account for SQL.
  • The account should be called SQLService and should not be required to change its password at next logon.
  • Set a password for the SQLService account.

6.4.PS. Using PowerShell

New-ADUser -Name SQLService –Enabled $True -UserPrincipalName SQLService@DEMO.TEST `
-DisplayName SQLService -ChangePasswordAtLogon $False -PasswordNeverExpires $True `
-AccountPassword (Read-Host -AsSecureString "Enter password")

6.4.OUT. Sample Output

PS C:\> Get-ADUser -Filter {Name -like "SQL*"}

DistinguishedName : CN=SQLService,CN=Users,DC=DEMO,DC=TEST
Enabled           : True
GivenName         :
Name              : SQLService
ObjectClass       : user
ObjectGUID        : 7a02941d-10c7-45f8-b986-1b67a08ddd06
SamAccountName    : SQLService
SID               : S-1-5-21-3876617879-1076079722-1647216889-1107
Surname           :
UserPrincipalName : SQLService@DEMO.TEST

6.4.GUI. Using Server Manager

  • Open Server Manager
  • In the Tools menu on the upper right, select “Active Directory Users and Computers”
  • Right click the “Users” container on the tree on the left, then select “New”, then “User”
  • Enter “SQLService” as Full Name and User Logon Name, then click “Next”
  • Enter the password twice as required
  • Uncheck “user must change password at next logon” and check “Password never expires”
  • Click “Next”, then click “Finish”

7. Configure iSCSI

  • We’ll create a single Target with 3 Devices (LUNs or VHD files), used by 2 initiators (DEMO-F1 and DEMO-F2).
  • The devices will include a 1GB VHD for the Cluster Witness volume and two 20GB VHDs for the data volumes.
  • We’ll then configure the initiators and volumes from the File Server side.

7.1. Add the iSCSI Software Target

  • Add the iSCSI Software Target role service to VM1 (DEMO-DC.DEMO.TEST)

7.1.PS. Using PowerShell

Install-WindowsFeature FS-iSCSITarget-Server

7.1.GUI. Using Server Manager

  • In Server Manager, click on “Dashboard” on the list on left
  • Click on the “Add Roles and Features”, which is option 2 under “Configure this local server”
  • On the “Before You Begin” page, just click “Next”
  • On the “Installation Type” page, click “Role-based or feature-based installation” and click “Next”
  • On the “Server Selection” page, select your server and click “Next”
  • On the “Server Roles” page, expand “File and Storage Services”, then “File and iSCSI Services”, and select “iSCSI Target Server”
  • On the dialog about adding required services, click “Add Features”   
  • Click “Next”
  • On the “Feature” page, just click “Next”
  • On the “Confirmation” page, click “Install”
  • The role will be installed

7.2. Create the LUNs and Target

  • Create the 1st LUN with the file at C:\LUN0.VHD, 1GB in size, description “LUN0”.
  • Create the 2nd and 3rd LUNs at C:\LUN1.VHD and C:\LUN2.VHD, both with 20GB.
  • Add those to a single target, exposed to the two initiators by IP address (192.168.101.2 and 192.168.101.3)

7.2.PS. Using PowerShell

New-IscsiServerTarget -TargetName FileCluster -InitiatorID IPAddress:192.168.101.2, IPAddress:192.168.101.3
New-IscsiVirtualDisk -DevicePath C:\LUN0.VHD -Size 1GB
1..2 | % {New-IscsiVirtualDisk -DevicePath C:\LUN$_.VHD -Size 20GB}
0..2 | % {Add-iSCSIVirtualDiskTargetMapping -TargetName FileCluster -DevicePath C:\LUN$_.VHD}

7.2.OUT. Sample output

PS C:\> Get-IscsiServerTarget 

ChapUserName                :
ClusterGroupName            :
ComputerName                : DEMO-DC.DEMO.TEST
Description                 :
EnableChap                  : False
EnableReverseChap           : False
EnforceIdleTimeoutDetection : True
FirstBurstLength            : 65536
IdleDuration                : 00:00:21
InitiatorIds                : {IPAddress:192.168.101.2, IPAddress:192.168.101.3}
LastLogin                   :
LunMappings                 : {TargetName:FileCluster;VHD:"C:\LUN0.VHD";LUN:0,
                              TargetName:FileCluster;VHD:"C:\LUN1.VHD";LUN:1,
                              TargetName:FileCluster;VHD:"C:\LUN2.VHD";LUN:2}
MaxBurstLength              : 262144
MaxReceiveDataSegmentLength : 65536
ReceiveBufferCount          : 10
ReverseChapUserName         :
Sessions                    : {}
Status                      : NotConnected
TargetIqn                   : iqn.1991-05.com.microsoft:demo-dc-filecluster-target
TargetName                  : FileCluster 

PS C:\> Get-IscsiVirtualDisk 

ClusterGroupName   :
ComputerName       : DEMO-DC.DEMO.TEST
Description        :
DiskType           : Fixed
HostVolumeId       : {C4A5E065-E88F-11E1-93EB-806E6F6E6963}
LocalMountDeviceId :
OriginalPath       :
ParentPath         :
Path               : C:\LUN0.VHD
SerialNumber       : 3FDD6603-2F45-4E95-8C0F-0B6A574DA84A
Size               : 1073741824
SnapshotIds        :
Status             : NotConnected
VirtualDiskIndex   : 119718233

ClusterGroupName   :
ComputerName       : DEMO-DC.DEMO.TEST
Description        :
DiskType           : Fixed
HostVolumeId       : {C4A5E065-E88F-11E1-93EB-806E6F6E6963}
LocalMountDeviceId :
OriginalPath       :
ParentPath         :
Path               : C:\LUN2.VHD
SerialNumber       : 981545EA-32FF-4BA4-856D-C6F464FEC82F
Size               : 21474836480
SnapshotIds        :
Status             : NotConnected
VirtualDiskIndex   : 1469988013

ClusterGroupName   :
ComputerName       : DEMO-DC.DEMO.TEST
Description        :
DiskType           : Fixed
HostVolumeId       : {C4A5E065-E88F-11E1-93EB-806E6F6E6963}
LocalMountDeviceId :
OriginalPath       :
ParentPath         :
Path               : C:\LUN1.VHD
SerialNumber       : BBCB273F-74EF-4E50-AA07-EDCD2E955A3B
Size               : 21474836480
SnapshotIds        :
Status             : NotConnected
VirtualDiskIndex   : 1581769191

7.2.GUI. Using Server Manager

  • In Server Manager, click on “File and Storage Services” on the list on left
  • Click on “iSCSI Virtual Disks”
  • On the “Tasks” menu on the right, select “New Virtual Disk…”
  • The “New iSCSI Virtual Disk Wizard” will start
  • On the “Virtual Disk Location” page, with the DEMO-DC server and “C:” volume selected, click “Next”
  • On the “Virtual Disk Name” page, enter “LUN0” as the Name, then click “Next”
  • On the “Virtual Disk Size” page, enter 1GB as the size, then click “Next”
  • On the “iSCSI Target” page, with the “New iSCSI target” option selected, click “Next”
  • On the “iSCSI Target Name” page, enter “FileCluster” as the name, then click “Next”
  • On the “Access Servers” page, click on “Add…”
  • Select “Enter a value...”, select “IP Address”, enter “192.168.101.2”, then click “OK”
  • On the “Access Servers” page, click on “Add…” again
  • Select “Enter a value...”, select “IP Address”, enter “192.168.101.3”, then click “OK”
  • With the two iSCSI Initiators specified, click “Next”
  • On the “Enable Authentication” page, click “Next”
  • On the “Confirmation” page, click “Create”
  • When the wizard is done, click “Close”.
  • On the “Tasks” menu on the right, select “New Virtual Disk…”
  • The “New iSCSI Virtual Disk Wizard” will start
  • On the “Virtual Disk Location” page, with the DEMO-DC server and “C:” volume selected, click “Next”
  • On the “Virtual Disk Name” page, enter “LUN1” as the Name, then click “Next”
  • On the “Virtual Disk Size” page, enter 20GB as the size, then click “Next”
  • On the “iSCSI Target” page, with the “Select Existing iSCSI target” option selected, click “Next”
  • On the “Confirmation” page, click “Create”
  • When the wizard is done, click “Close”.
  • Repeat the steps above to create a LUN2 with 20GB and add it to the same target.

7.3. Configure the iSCSI Initiators

  • Now we shift to the two File Servers, which will run the iSCSI Initiator.
  • We’ll do this on VM2 and VM3 (or DEMO-F1 and DEMO-F2).
  • Make sure to log on using the domain administrator (DEMO\Administrator), not the local Administrator.
  • You will then start the iSCSI Initiator, configuring the service to start automatically.
  • You will then connect the initiator to the iSCSI Target we just configured on DEMO-DC

7.3.PS. Using PowerShell

Set-Service MSiSCSI -StartupType automatic
Start-Service MSiSCSI
New-iSCSITargetPortal -TargetPortalAddress 192.168.101.1
Get-iSCSITarget | Connect-iSCSITarget
Get-iSCSISession | Register-iSCSISession

7.3.OUT. Sample output

PS C:\> Get-IscsiTargetPortal 

InitiatorInstanceName  :
InitiatorPortalAddress :
IsDataDigest           : False
IsHeaderDigest         : False
TargetPortalAddress    : 192.168.101.1
TargetPortalPortNumber : 3260
PSComputerName         : 

PS C:\> Get-IscsiTarget | Format-List 

IsConnected    : True
NodeAddress    : iqn.1991-05.com.microsoft:demo-dc-filecluster-target
PSComputerName : 

PS C:\> Get-IscsiConnection 

ConnectionIdentifier : fffffa8002d12020-3
InitiatorAddress     : 0.0.0.0
InitiatorPortNumber  : 37119
TargetAddress        : 192.168.101.1
TargetPortNumber     : 3260
PSComputerName       :

PS C:\> Get-Disk

Number Friendly Name                            OperationalStatus      Total Size Partition Style
------ -------------                            -----------------      ---------- ---------------
0      Virtual HD ATA Device                    Online                     127 GB MBR
1      MSFT Virtual HD SCSI Disk Device         Offline                      1 GB RAW
2      MSFT Virtual HD SCSI Disk Device         Offline                     20 GB RAW
3      MSFT Virtual HD SCSI Disk Device         Offline                     20 GB RAW

7.3.GUI. Using Server Manager

  • Open Server Manager
  • In the Tools menu on the upper right, select “iSCSI Initator”
  • Click on “Yes” on the prompt about automatically starting the iSCSI Initiator.
  • Enter “192.168.101.1” on the Target field and click the “Quick Connect…” button next to it.
  • Verify the status shows as “Connected” and click on “Done”
  • Click on the “Volume and Devices” tab and click on the “Auto Configure” button.
  • Verify that three volumes show up on the Volume List.
  • Click “OK” to close the iSCSI Initiator.

7.4. Configure the disks

  • Execute this task only on the first of the two file servers (VM2, a.k.a. DEMO-F1).
  • This will configure the three disks exposed by the iSCSI Target (the iSCSI LUNs).
  • They first need to be brought online, initialized and partitioned (we’re using MBR partitions, since the disks are small).
  • Then you will format them and assign each one a drive letter (W:, X: and Y:).
  • Drive W: will be used as the witness disk, while X: and Y: will be data disks for the file server cluster.

7.4.PS. Using PowerShell

1..3 | % {
$d = "-WXY"[$_]   # maps disk number (1..3) to drive letter (W, X or Y)
Set-Disk -Number $_ -IsReadOnly 0
Set-Disk -Number $_ -IsOffline 0
Initialize-Disk -Number $_ -PartitionStyle MBR
New-Partition -DiskNumber $_ -DriveLetter $d -UseMaximumSize
Format-Volume -DriveLetter $d -FileSystem NTFS -Confirm:$false
}

7.4.OUT. Sample output

PS C:\> Get-Disk

Number Friendly Name                     OperationalStatus  Total Size Partition Style
------ -------------                     -----------------  ---------- ---------------
0      Virtual HD ATA Device             Online                 127 GB MBR
1      MSFT Virtual HD SCSI Disk Device  Online                   1 GB MBR
2      MSFT Virtual HD SCSI Disk Device  Online                  20 GB MBR
3      MSFT Virtual HD SCSI Disk Device  Online                  20 GB MBR


PS C:\> Get-Volume | Sort DriveLetter

DriveLetter   FileSystemLabel  FileSystem   DriveType   HealthStatus   SizeRemaining        Size
-----------   ---------------  ----------   ---------   ------------   -------------        ---- 
              System Reserved  NTFS         Fixed       Healthy             108.7 MB      350 MB
A                                           Removable   Healthy                  0 B         0 B
C                              NTFS         Fixed       Healthy            118.29 GB   126.66 GB
D                                           CD-ROM      Healthy                  0 B         0 B
W                              NTFS         Fixed       Healthy            981.06 MB  1022.93 MB
X                              NTFS         Fixed       Healthy              19.9 GB       20 GB
Y                              NTFS         Fixed       Healthy              19.9 GB       20 GB

7.4.GUI. Using Server Manager

  • Open the Disk Management tool
  • Online all three offline disks (the iSCSI LUNs)
  • Initialize them (you can use MBR partitions, since they are small)
  • Create a new Simple Volume on each one using all the disk space on the LUN
  • Quick-format them with NTFS as the file system
  • Assign each one a drive letter (W:, X: and Y:)

8. Configure the File Server

8.1. Install the required roles and features

  • Now we need to configure VM2 and VM3 as file servers and cluster nodes

8.1.PS. Using PowerShell (from both VM2 and VM3)

Install-WindowsFeature File-Services, FS-FileServer, Failover-Clustering
Install-WindowsFeature RSAT-Clustering -IncludeAllSubFeature
Install-WindowsFeature RSAT-File-Services -IncludeAllSubFeature

8.1.OUT. Sample output

PS C:\> Get-WindowsFeature *File*, *Cluster*

Display Name                                            Name                       Install State
------------                                            ----                       -------------
[X] File And Storage Services                           FileAndStorage-Services        Installed
    [X] File and iSCSI Services                         File-Services                  Installed
        [X] File Server                                 FS-FileServer                  Installed
[X] Failover Clustering                                 Failover-Clustering            Installed
        [X] Failover Clustering Tools                   RSAT-Clustering                Installed
            [X] Failover Cluster Management Tools       RSAT-Clustering-Mgmt           Installed
            [X] Failover Cluster Module for Windows ... RSAT-Clustering-Powe...        Installed
            [X] Failover Cluster Automation Server      RSAT-Clustering-Auto...        Installed
            [X] Failover Cluster Command Interface      RSAT-Clustering-CmdI...        Installed
        [X] File Services Tools                         RSAT-File-Services             Installed
            [X] Share and Storage Management Tool       RSAT-CoreFile-Mgmt             Installed

8.1.GUI. Using Server Manager

  • For both DEMO-F1 and DEMO-F2, in Server Manager, use “Add Roles and Features” to add the “File and Storage Services” role.
  • Next, on the “Features” page, check “Failover Clustering”

8.2. Validate the Failover Cluster Configuration

8.2.PS. Using PowerShell (From VM2, DEMO-F1)

Test-Cluster -Node DEMO-F1, DEMO-F2

8.2.OUT. Sample output

[Screenshot: Failover Cluster validation report]

8.2.GUI. Using Server Manager

  • On VM2 (DEMO-F1), open Server Manager.
  • On the Tools menu on the upper right, select “Failover Cluster Manager”
  • In Failover Cluster Manager, click on the option to “Validate a Configuration…”
  • The “Validate a Configuration Wizard” will start. Click “Next”
  • Add the two file servers: DEMO-F1 and DEMO-F2. Then click “Next”
  • Select the option to “Run all tests”. Click “Next”. Click “Next” again to confirm.
  • Let the validation process run. It will take a few minutes to complete.
  • Validation should not return any errors.
  • If it does, review the previous steps and make sure to address any issues listed in the validation report.

8.3. Create a Failover Cluster

8.3.PS. Using PowerShell (From VM2, DEMO-F1)

New-Cluster –Name DEMO-FC -Node DEMO-F1, DEMO-F2

8.3.GUI. Using Server Manager

  • On VM2 (DEMO-F1), open Server Manager.
  • On the Tools menu on the upper right, select “Failover Cluster Manager”
  • In Failover Cluster Manager, click on the option to “Create a Cluster…”
  • The “Create a Cluster Wizard” will start. Click “Next”
  • Add the two file servers: DEMO-F1 and DEMO-F2. Then click “Next”
  • Enter the Cluster Name: DEMO-FC. Then click “Next”
  • Click “Next” again to confirm.
  • Click “Finish” after the cluster is created.


8.4. Configure the Cluster Networks

  • For consistency, you should rename the Cluster networks to match the names used previously.
  • You should also allow client access on the Internal networks, while keeping the External network for cluster use only.

8.4.PS. Using PowerShell (From VM2, DEMO-F1)

(Get-ClusterNetwork | ? Address -like 192.168.101.* ).Name = "Internal1"
(Get-ClusterNetwork | ? Address -like 192.168.102.* ).Name = "Internal2"
(Get-ClusterNetwork | ? Address -like 192.168.103.* ).Name = "Internal3"
(Get-ClusterNetwork | ? Name -notlike Internal* ).Name = "External"
(Get-ClusterNetwork Internal1).Role = 3
(Get-ClusterNetwork Internal2).Role = 3
(Get-ClusterNetwork Internal3).Role = 3
(Get-ClusterNetwork External).Role = 1

8.4.OUT. Sample Output

PS C:\> Get-ClusterNetwork | Select *


Cluster           : DEMO-FC
State             : Up
Name              : External
Ipv6Addresses     : {2001:4898:2a:3::, 2001:4898:0:fff:0:5efe:10.123.180.0}
Ipv6PrefixLengths : {64, 119}
Ipv4Addresses     : {10.123.180.0}
Ipv4PrefixLengths : {23}
Address           : 10.123.180.0
AddressMask       : 255.255.254.0
Description       :
Role              : 1
AutoMetric        : True
Metric            : 30240
Id                : 14cab1e8-c16c-46fa-bf01-afc808d29368

Cluster           : DEMO-FC
State             : Up
Name              : Internal1
Ipv6Addresses     : {}
Ipv6PrefixLengths : {}
Ipv4Addresses     : {192.168.101.0}
Ipv4PrefixLengths : {24}
Address           : 192.168.101.0
AddressMask       : 255.255.255.0
Description       :
Role              : 3
AutoMetric        : True
Metric            : 70242
Id                : 16603183-5639-44a0-8e5e-3934280866cd

Cluster           : DEMO-FC
State             : Up
Name              : Internal2
Ipv6Addresses     : {}
Ipv6PrefixLengths : {}
Ipv4Addresses     : {192.168.102.0}
Ipv4PrefixLengths : {24}
Address           : 192.168.102.0
AddressMask       : 255.255.255.0
Description       :
Role              : 3
AutoMetric        : True
Metric            : 70241
Id                : 528c89bc-8704-4d1a-aa80-65bd5c25e3e5

Cluster           : DEMO-FC
State             : Up
Name              : Internal3
Ipv6Addresses     : {}
Ipv6PrefixLengths : {}
Ipv4Addresses     : {192.168.103.0}
Ipv4PrefixLengths : {24}
Address           : 192.168.103.0
AddressMask       : 255.255.255.0
Description       :
Role              : 3
AutoMetric        : True
Metric            : 70243
Id                : 0f59076d-5536-4d69-af43-271cc4f76723

8.4.GUI. Using Server Manager

  • In Failover Cluster Manager, expand the nodes until you find the “Networks” node.
  • For each network, right-click the name and click “Properties”.
  • Enter the name Internal1, Internal2, Internal3 or External, according to their IP addresses.
  • For the External network, make sure “Allow cluster…” is selected and “Allow clients…” is *not* checked.
  • For all Internal networks, select “Allow cluster…” and check the “Allow clients…” checkbox.


8.5. Add data disks to Cluster Shared Volumes (CSV)

  • Add the disks to the list of Cluster Shared Volumes.

8.5.PS. Using PowerShell (From VM2, DEMO-F1)

Get-ClusterResource | ? OwnerGroup -like Available* | Add-ClusterSharedVolume

8.5.OUT. Sample Output

PS C:\> Get-ClusterResource

Name                          State    OwnerGroup      ResourceType
----                          -----    ----------      ------------
Cluster Disk 3                Online   Cluster Group   Physical Disk
Cluster IP Address            Online   Cluster Group   IP Address
Cluster IP Address 2001:48... Online   Cluster Group   IPv6 Address
Cluster Name                  Online   Cluster Group   Network Name


PS C:\> Get-ClusterSharedVolume

Name              State      Node
----              -----      ----
Cluster Disk 1    Online     DEMO-F1
Cluster Disk 2    Online     DEMO-F2

8.5.GUI. Using Server Manager

  • In Failover Cluster Manager, expand the nodes until you find the “Storage” node.
  • Select the two disks currently assigned to “Available Storage”.
  • Right click the two selected disks and click on “Add to Cluster Shared Volumes”


8.6. Create the Scale-Out File Server

  • Create a Scale-Out File Server.

8.6.PS. Using PowerShell (From VM2, DEMO-F1)

Add-ClusterScaleOutFileServerRole -Name DEMO-FS

8.6.OUT. Sample Output

PS C:\> Get-ClusterGroup DEMO-FS

Name        OwnerNode    State
----        ---------    -----
DEMO-FS     DEMO-F1      Online


PS C:\> Get-ClusterGroup DEMO-FS | Get-ClusterResource

Name                          State     OwnerGroup     ResourceType
----                          -----     ----------     ------------
DEMO-FS                       Online    DEMO-FS        Distributed Network Name
Scale-Out File Server (\\D... Online    DEMO-FS        Scale Out File Server

8.6.GUI. Using Server Manager

  • On the Failover Cluster Manager, select the main node on the tree (with the cluster name)
  • On the actions menu on the right, select “Configure Role…”
  • The “High Availability Wizard” will start. Click “Next”
  • On the “Select Role” page, select “File Server” and click “Next”
  • On the “File Server Type” page, select “File Server for scale-out application data” and click “Next”
  • On the “Client Access Point” page, specify the name of the service as DEMO-FS
  • On the “Confirmation” page, click “Next”.
  • Click “Finish” after the configuration is completed.


8.7. Create the folders and shares

  • In this step, you will create two shares: one for database files and one for log files

8.7.PS. Using PowerShell (From VM2, DEMO-F1)

MD C:\ClusterStorage\Volume1\DATA
New-SmbShare -Name DATA -Path C:\ClusterStorage\Volume1\DATA -FullAccess DEMO.TEST\Administrator, DEMO.TEST\SQLService
(Get-SmbShare DATA).PresetPathAcl | Set-Acl

MD C:\ClusterStorage\Volume2\LOG
New-SmbShare -Name LOG -Path C:\ClusterStorage\Volume2\LOG -FullAccess DEMO.TEST\Administrator, DEMO.TEST\SQLService
(Get-SmbShare LOG).PresetPathAcl | Set-Acl

8.7.OUT. Sample Output

PS C:\> Get-SmbShare Data, Log

Name    ScopeName    Path                          Description
----    ---------    ----                          -----------
DATA    DEMO-FS      C:\ClusterStorage\Volume1\...
LOG     DEMO-FS      C:\ClusterStorage\Volume2\LOG

PS C:\> Get-SmbShare Data, Log | Select *

PresetPathAcl         : System.Security.AccessControl.DirectorySecurity
ShareState            : Online
AvailabilityType      : ScaleOut
ShareType             : FileSystemDirectory
FolderEnumerationMode : Unrestricted
CachingMode           : Manual
CATimeout             : 0
ConcurrentUserLimit   : 0
ContinuouslyAvailable : True
CurrentUsers          : 0
Description           :
EncryptData           : False
Name                  : DATA
Path                  : C:\ClusterStorage\Volume1\DATA
Scoped                : True
ScopeName             : DEMO-FS
SecurityDescriptor    : O:BAG:DUD:(A;;FA;;;S-1-5-21-3876617879-1076079722-1647216889-500)(A;;FA;;;S-1-5-21-3876617879-1
                        076079722-1647216889-1107)
ShadowCopy            : False
Special               : False
Temporary             : False
Volume                : \\?\Volume{4789973e-1f33-4d27-9bf1-2e9ec6da13a0}\
PSComputerName        :
CimClass              : ROOT/Microsoft/Windows/SMB:MSFT_SmbShare
CimInstanceProperties : {AvailabilityType, CachingMode, CATimeout, ConcurrentUserLimit...}
CimSystemProperties   : Microsoft.Management.Infrastructure.CimSystemProperties

PresetPathAcl         : System.Security.AccessControl.DirectorySecurity
ShareState            : Online
AvailabilityType      : ScaleOut
ShareType             : FileSystemDirectory
FolderEnumerationMode : Unrestricted
CachingMode           : Manual
CATimeout             : 0
ConcurrentUserLimit   : 0
ContinuouslyAvailable : True
CurrentUsers          : 0
Description           :
EncryptData           : False
Name                  : LOG
Path                  : C:\ClusterStorage\Volume2\LOG
Scoped                : True
ScopeName             : DEMO-FS
SecurityDescriptor    : O:BAG:DUD:(A;;FA;;;S-1-5-21-3876617879-1076079722-1647216889-500)(A;;FA;;;S-1-5-21-3876617879-1
                        076079722-1647216889-1107)
ShadowCopy            : False
Special               : False
Temporary             : False
Volume                : \\?\Volume{888f5e8c-c91c-4bcf-b4b2-cc4e427ee54c}\
PSComputerName        :
CimClass              : ROOT/Microsoft/Windows/SMB:MSFT_SmbShare
CimInstanceProperties : {AvailabilityType, CachingMode, CATimeout, ConcurrentUserLimit...}
CimSystemProperties   : Microsoft.Management.Infrastructure.CimSystemProperties

8.7.GUI. Using Server Manager

  • On the Failover Cluster Manager, select the Roles node on the tree on the left.
  • Click on the DEMO-FS role and then click on “Add Shared Folder” on the actions menu on the right.
  • The “New Share Wizard” will start.
  • On the “Select Profile” page, select “SMB Share – Server Application” and click “Next”
  • On the “Share Location” page, select “C:\ClusterStorage\Volume1” as the location. Click “Next”.
  • On the “Share Name” page, enter “Data” as the share name, click “Next”.
  • On the “Other Settings” page, just click “Next”
  • On the “Permissions” page, click on “Customize permissions…”
  • Click on “Add”, then click on “Select a Principal”.
  • Enter “DEMO\Administrator”, click on “Check Names” and then click “OK”.
  • Click “Full Control” and click “OK”.
  • Click on “Add”, then click on “Select a Principal”.
  • Enter “DEMO\SQLService”, click on “Check Names” and then click “OK”.
  • Click “Full Control” and click “OK”.
  • Click “OK”, then click on “Next”, then click on “Create”
  • Click on “Close” after the wizard finishes creating the share.
  • Repeat the process for a share called “LOG” on Volume2.



9. Configure the SQL Server

9.1. Mount the SQL Server ISO file

  • Copy the SQL Server 2012 ISO file to the C:\ISO folder.
  • Mount that in the DVD for VM4, DEMO-DB.

9.1.PS. Using PowerShell

Set-VMDvdDrive -VMName VM4 -Path C:\ISO\SQLFULL_ENU.iso

9.1.GUI. Using Hyper-V Manager

  • In Server Manager, click on “Tools” in the upper left and select “Hyper-V Manager”
  • In Hyper-V Manager, click on the server name on the pane on the left
  • Right-click VM4 and click on “Connect…”
  • In the “Media” menu, select “DVD Drive” and then “Insert Disk…”
  • Point to the SQL Server 2012 ISO file under the C:\ISO folder.

9.2. Run SQL Server Setup

  • From VM4 (DEMO-DB), run SQL Server 2012 setup from the DVD.
  • In the “SQL Server Installation Center”, click on “Installation”, then select “New SQL Server stand-alone…”
  • Let it verify the SQL Server Setup Support Rules pass and click “OK”
  • Select “Evaluation” under “Specify a free edition” and click “Next”
  • Review the licensing terms and click “Next”
  • “SQL Server 2012 Setup” will start. Let it verify Setup Support Rules pass and click “Next”.
  • In the “Setup Role” page, select “SQL Server Feature Installation” and click “Next”.
  • In the “Feature selection” page, select only the “Database Engine Services” and the “Management Tools”.
  • Use the default directories. Click “Next”.
  • In the “Installation Rules” page, click “Next”.
  • In the “Instance Configuration” page, click “Next”
  • In the “Disk Space Requirements” page, click “Next”.
  • In the “Server Configuration” page, enter “DEMO.TEST\SQLService” as the account name for the SQL Server Database Engine and the SQL Server Agent, and set them both to start automatically. Click “Next”.
  • In the “Database Engine Configuration” page, click on “Add Current User”.
  • Click on the “Data Directories” tab. Enter “\\DEMO-FS\DATA” as the “Data Root Directory”.
  • Fix the two paths for the Log directories to use “\\DEMO-FS\LOG” instead of the data folder.
  • You will be prompted to confirm the right permissions are assigned on the share. Click “Yes”.
  • On the “Error reporting” page, click “Next”
  • On the “Installation Configuration Rules” page, click “Next”
  • On the “Ready to Install” page, click “Install”
  • The installation will complete.
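
If you prefer to script this installation, SQL Server 2012 Setup also supports unattended command-line installs. Here is a minimal sketch (run on VM4 with the ISO mounted as drive D:), mirroring the wizard choices above; the password value is a placeholder, and you should double-check each parameter against the SQL Server 2012 Setup documentation before relying on it:

# Unattended SQL Server 2012 install (sketch): Database Engine + basic Management Tools,
# using the same service account and default directories as the wizard steps above
D:\Setup.exe /Q /ACTION=Install /IACCEPTSQLSERVERLICENSETERMS `
    /FEATURES=SQLENGINE,SSMS `
    /INSTANCENAME=MSSQLSERVER `
    /SQLSVCACCOUNT="DEMO.TEST\SQLService" /SQLSVCPASSWORD="<password>" `
    /AGTSVCACCOUNT="DEMO.TEST\SQLService" /AGTSVCPASSWORD="<password>" `
    /AGTSVCSTARTUPTYPE=Automatic `
    /SQLSYSADMINACCOUNTS="DEMO.TEST\Administrator" `
    /INSTALLSQLDATADIR="\\DEMO-FS\DATA" `
    /SQLUSERDBLOGDIR="\\DEMO-FS\LOG"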

9.3. Create a database using the clustered file share

  • On the SQL Server VM, open SQL Server Management Studio.
  • On the “Connect to Server” window, accept the default server name and authentication. Click “Connect”.
  • Right-click the server node, select “Properties” and click on the “Database Settings” page.
  • Verify that the “Database default locations” point to the file shares entered earlier.
  • Click “OK” to close the “Server Properties”.
  • Expand the tree on the left to find the Databases node.
  • Right-click “Databases” and select “New Database…”
  • Enter “Orders” as the database name and note the path pointing to the clustered file share.
  • Scroll the bar to the right to see the Path column.
  • Click “OK” to create the database.
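
If you’d rather create the database from a script, here is a minimal sketch using the Invoke-Sqlcmd cmdlet from the SQLPS module that ships with the SQL Server 2012 management tools; because the default data and log locations already point to the clustered shares, the new database files land there automatically:

# Create the Orders database, then confirm its files were placed on the clustered shares
Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance "DEMO-DB" -Query "CREATE DATABASE Orders"
Invoke-Sqlcmd -ServerInstance "DEMO-DB" -Query "SELECT name, physical_name FROM sys.master_files WHERE database_id = DB_ID('Orders')"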

10. Verify SMB features

10.1. Verify that SMB Multichannel is working

  • Use PowerShell to verify that SMB is indeed using multiple interfaces.

10.1.PS. Using PowerShell (from VM4, DEMO-DB)

Get-SmbConnection
Get-SmbMultichannelConnection

10.1.OUT. Sample output

PS C:\> Get-SmbConnection

ServerName   ShareName    UserName            Credential          Dialect     NumOpens
----------   ---------    --------            ----------          -------     --------
DEMO-FS      Data         DEMO\SQLService     DEMO.TEST\SQLSer... 3.00        11
DEMO-FS      Log          DEMO\SQLService     DEMO.TEST\SQLSer... 3.00        2

PS C:\> Get-SmbMultichannelConnection

Server Name  Selected   Client IP      Server IP      Client         Server         Client RSS     Client RDMA
                                                      Interface      Interface      Capable        Capable
                                                      Index          Index
-----------  --------   ---------      ---------      -------------- -------------- -------------- --------------
DEMO-FS      True       192.168.103.4  192.168.103.3  23             22             True           False
DEMO-FS      True       192.168.101.4  192.168.101.3  19             18             True           False
DEMO-FS      True       192.168.102.4  192.168.102.3  21             16             True           False

10.2. Query the SMB Sessions and Open Files

  • Use PowerShell to verify sessions and open files.

10.2.PS. Using PowerShell (from VM2, DEMO-F1 or VM3, DEMO-F2)

Get-SmbSession
Get-SmbOpenFile | Sort Path
Get-SmbOpenFile | Sort Path | FT Path

10.2.OUT. Sample output

PS C:\> Get-SmbSession

SessionId       ClientComputerName            ClientUserName                NumOpens
---------       ------------------            --------------                --------
154618822685    [fe80::a08a:1e3d:8e27:3288]   DEMO\DEMO-F2$                 0
154618822681    [fe80::407e:dd35:3c1c:bec5]   DEMO\DEMO-F1$                 0
8813272891477   192.168.101.4                 DEMO\SQLService               13

PS C:\> Get-SmbOpenFile | Sort Path

FileId          SessionId        Path                ShareRelativePath   ClientComputerName  ClientUserName
------          ---------        ----                -----------------   ------------------  --------------
8813272893453   8813272891477    C:\ClusterStorag... MSSQL11.MSSQLSER... 192.168.101.4       DEMO\SQLService
8813272893465   8813272891477    C:\ClusterStorag... MSSQL11.MSSQLSER... 192.168.101.4       DEMO\SQLService
8813272893577   8813272891477    C:\ClusterStorag... MSSQL11.MSSQLSER... 192.168.101.4       DEMO\SQLService
8813272893589   8813272891477    C:\ClusterStorag... MSSQL11.MSSQLSER... 192.168.101.4       DEMO\SQLService
8813272893545   8813272891477    C:\ClusterStorag... MSSQL11.MSSQLSER... 192.168.101.4       DEMO\SQLService
8813272893557   8813272891477    C:\ClusterStorag... MSSQL11.MSSQLSER... 192.168.101.4       DEMO\SQLService
8813272893993   8813272891477    C:\ClusterStorag... MSSQL11.MSSQLSER... 192.168.101.4       DEMO\SQLService
8813272893665   8813272891477    C:\ClusterStorag... MSSQL11.MSSQLSER... 192.168.101.4       DEMO\SQLService
8813272893417   8813272891477    C:\ClusterStorag... MSSQL11.MSSQLSER... 192.168.101.4       DEMO\SQLService
8813272893505   8813272891477    C:\ClusterStorag... MSSQL11.MSSQLSER... 192.168.101.4       DEMO\SQLService
8813272893497   8813272891477    C:\ClusterStorag... MSSQL11.MSSQLSER... 192.168.101.4       DEMO\SQLService
8813272894041   8813272891477    C:\ClusterStorag... MSSQL11.MSSQLSER... 192.168.101.4       DEMO\SQLService
8813272893673   8813272891477    C:\ClusterStorag... MSSQL11.MSSQLSER... 192.168.101.4       DEMO\SQLService

PS C:\> Get-SmbOpenFile | Sort Path | FT Path

Path
----
C:\ClusterStorage\Volume1\DATA\MSSQL11.MSSQLSERVER\MSSQL\DATA\master.mdf
C:\ClusterStorage\Volume1\DATA\MSSQL11.MSSQLSERVER\MSSQL\DATA\mastlog.ldf
C:\ClusterStorage\Volume1\DATA\MSSQL11.MSSQLSERVER\MSSQL\DATA\model.mdf
C:\ClusterStorage\Volume1\DATA\MSSQL11.MSSQLSERVER\MSSQL\DATA\modellog.ldf
C:\ClusterStorage\Volume1\DATA\MSSQL11.MSSQLSERVER\MSSQL\DATA\MSDBData.mdf
C:\ClusterStorage\Volume1\DATA\MSSQL11.MSSQLSERVER\MSSQL\DATA\MSDBLog.ldf
C:\ClusterStorage\Volume1\DATA\MSSQL11.MSSQLSERVER\MSSQL\DATA\Orders.mdf
C:\ClusterStorage\Volume1\DATA\MSSQL11.MSSQLSERVER\MSSQL\DATA\tempdb.mdf
C:\ClusterStorage\Volume1\DATA\MSSQL11.MSSQLSERVER\MSSQL\Log\ERRORLOG
C:\ClusterStorage\Volume1\DATA\MSSQL11.MSSQLSERVER\MSSQL\Log\log_1.trc
C:\ClusterStorage\Volume1\DATA\MSSQL11.MSSQLSERVER\MSSQL\Log\system_health_0_129897856369790000.xel
C:\ClusterStorage\Volume2\LOG\MSSQL11.MSSQLSERVER\MSSQL\Data\Orders_log.ldf
C:\ClusterStorage\Volume2\LOG\MSSQL11.MSSQLSERVER\MSSQL\Data\templog.ldf

10.3. Planned move of a file server node (with SMB Transparent Failover of SQL Client)

  • Use SMB Transparent Failover and Witness to move the SMB Client (SQL Server) between the two File Server cluster nodes.
  • Note: Moves can sometimes take a few seconds to complete, so you might need to repeat the command to see the witness client move.

10.3.PS. Using PowerShell (from VM2, DEMO-F1 or VM3, DEMO-F2)

Get-SmbWitnessClient | FT ClientName, NetworkName, FileServerNodeName
Move-SmbWitnessClient -ClientName DEMO-DB -DestinationNode DEMO-F1
1..10 | % {Start-Sleep 5; Get-SmbWitnessClient | FT ClientName, NetworkName, FileServerNodeName }

10.3.OUT. Sample output

PS C:\> Get-SmbWitnessClient | FT ClientName, NetworkName, FileServerNodeName

ClientName     NetworkName     FileServerNodeName
----------     -----------     ------------------
DEMO-DB        DEMO-FS         DEMO-F1

PS C:\> Move-SmbWitnessClient -ClientName DEMO-DB -DestinationNode DEMO-F2

Confirm
Are you sure you want to perform this action?
Performing operation 'Move' on Target 'DEMO-DB'.
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "Y"):

PS C:\> Get-SmbWitnessClient | FT ClientName, NetworkName, FileServerNodeName

ClientName     NetworkName     FileServerNodeName
----------     -----------     ------------------
DEMO-DB        DEMO-FS         DEMO-F2


PS C:\> Move-SmbWitnessClient -ClientName DEMO-DB -DestinationNode DEMO-F1

Confirm
Are you sure you want to perform this action?
Performing operation 'Move' on Target 'DEMO-DB'.
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "Y"):

PS C:\> Get-SmbWitnessClient | FT ClientName, NetworkName, FileServerNodeName

ClientName     NetworkName     FileServerNodeName
----------     -----------     ------------------
DEMO-DB        DEMO-FS         DEMO-F2


PS C:\> Get-SmbWitnessClient | FT ClientName, NetworkName, FileServerNodeName

ClientName     NetworkName     FileServerNodeName
----------     -----------     ------------------
DEMO-DB        DEMO-FS         DEMO-F1

10.4. Unplanned failure of a file server node (with SMB Transparent Failover of SQL Client)

  • Restart one of the servers to force an SMB Transparent Failover to move the SMB Client (SQL Server) between the two File Server cluster nodes.
  • Note: After the failover, the client moves to the surviving node transparently (in just a few seconds). However, with only one file server node left, there is no node available to act as a witness during that time. That’s fine: once the restarted file server becomes available again (which could take a minute or so), the client will register with it as a witness and we are back in a two-file-server configuration.

10.4.PS. Using PowerShell (from VM2, DEMO-F1)

Get-SmbWitnessClient | FT ClientName, NetworkName, FileServerNodeName
Restart-Computer -ComputerName DEMO-F2 -Force
1..100 | % { Start-Sleep 5; Get-SmbWitnessClient | FT ClientName, NetworkName, FileServerNodeName }

10.4.OUT. Sample output

PS C:\> Get-SmbWitnessClient | FT ClientName, NetworkName, FileServerNodeName

ClientName     NetworkName     FileServerNodeName
----------     -----------     ------------------
DEMO-DB        DEMO-FS         DEMO-F2

PS C:\> Restart-Computer -ComputerName DEMO-F2 -Force

PS C:\> Get-SmbWitnessClient | FT ClientName, NetworkName, FileServerNodeName

PS C:\> Get-SmbWitnessClient | FT ClientName, NetworkName, FileServerNodeName

ClientName     NetworkName     FileServerNodeName
----------     -----------     ------------------
DEMO-DB        DEMO-FS         DEMO-F1

10.5. Surviving the loss of a client NIC

  • Observe that SMB Multichannel will protect the SQL Server from the failure of a NIC.

10.5.PS. Using PowerShell (from VM4, DEMO-DB)

Get-SmbMultichannelConnection
Disable-NetAdapter -InterfaceAlias Internal3 -Confirm:$false ; Start-Sleep 20

Get-SmbMultichannelConnection

Enable-NetAdapter -InterfaceAlias Internal3 ; Start-Sleep 20
Get-SmbMultichannelConnection

10.5.OUT. Sample output

PS C:\> Get-SmbMultichannelConnection

Server Name  Selected   Client IP      Server IP      Client         Server         Client RSS     Client RDMA
                                                      Interface      Interface      Capable        Capable
                                                      Index          Index
-----------  --------   ---------      ---------      -------------- -------------- -------------- --------------
DEMO-FS      True       192.168.103.4  192.168.103.3  23             22             True           False
DEMO-FS      True       192.168.101.4  192.168.101.3  19             18             True           False
DEMO-FS      True       192.168.102.4  192.168.102.3  21             16             True           False


PS C:\> Disable-NetAdapter -InterfaceAlias Internal3 -Confirm:$false ; Start-Sleep 20
PS C:\> Get-SmbMultichannelConnection

Server Name  Selected   Client IP      Server IP      Client         Server         Client RSS     Client RDMA
                                                      Interface      Interface      Capable        Capable
                                                      Index          Index
-----------  --------   ---------      ---------      -------------- -------------- -------------- --------------
DEMO-FS      True       192.168.101.4  192.168.101.3  19             18             True           False
DEMO-FS      True       192.168.102.4  192.168.102.3  21             16             True           False


PS C:\> Enable-NetAdapter -InterfaceAlias Internal3 ; Start-Sleep 20
PS C:\> Get-SmbMultichannelConnection

Server Name  Selected   Client IP      Server IP      Client         Server         Client RSS     Client RDMA
                                                      Interface      Interface      Capable        Capable
                                                      Index          Index
-----------  --------   ---------      ---------      -------------- -------------- -------------- --------------
DEMO-FS      True       192.168.103.4  192.168.103.3  23             22             True           False
DEMO-FS      True       192.168.101.4  192.168.101.3  19             18             True           False
DEMO-FS      True       192.168.102.4  192.168.102.3  21             16             True           False


11. Shutdown, startup and final installation notes

  • Keep in mind that there are dependencies between the services running on each VM.
  • To shut them down, start with VM4 and end with VM1, waiting for each one to go down completely before moving to the next one (a PowerShell sketch of this sequence follows this list).
  • To bring the VMs up, go from VM1 to VM4, waiting for the previous one to be fully up (with low to no CPU usage) before starting the next one.
  • You might want to also take a snapshot of the VMs after you shut them down, just in case you want to bring them back to the original state after experimenting with them for a while.
  • If you do, you should always snapshot all of them, again due to dependencies between them. Just right-click the VM and select the “Snapshot” option.
  • As a last note, the total size of the VHDX files (base plus 4 diffs), after all the steps were performed, was around 19 GB. 
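
Here is a minimal PowerShell sketch of that shutdown/snapshot/startup sequence, run from the Hyper-V host and assuming the VM names VM1 through VM4 used throughout this post (the 120-second pause is an arbitrary settling time, not a measured value):

# Ordered shutdown: VM4 first, VM1 last (Stop-VM waits for the guest OS to shut down)
4..1 | % { Stop-VM -Name "VM$_" }

# Optional: snapshot all VMs while they are off, due to the dependencies between them
1..4 | % { Checkpoint-VM -Name "VM$_" -SnapshotName "Baseline" }

# Ordered startup: VM1 first, VM4 last, giving each VM time to settle
1..4 | % { Start-VM -Name "VM$_"; Start-Sleep 120 }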



12. Conclusion

I hope you enjoyed these step-by-step instructions. I strongly encourage you to try them out and perform the entire installation yourself. It’s a good learning experience.

 

Note: If you want to print this post, I have attached a printer-friendly PDF version below. Thanks for the suggestion, Keith!


Test Hyper-V over SMB configuration with Windows Server 2012 - Step-by-step Installation using PowerShell


1. Overview

In this post, I am sharing all the steps I used to create the Windows Server 2012 File Server test environment that I used for some of my Hyper-V over SMB demonstrations. My goal is to share some of the configuration details and the exact PowerShell scripts I used to configure the environment (if you look carefully, you might be able to spot a few PowerShell tricks and tips). For me, this is a convenient reference post that I will likely cut/paste from when configuring my demo systems in the future.

This setup uses 5 physical machines, since the scenario involves deploying Hyper-V hosts and you can’t virtualize Hyper-V itself (I have another post that covers SQL Server over SMB in a fully virtualized environment). I am also using RDMA interfaces with SMB Direct, and those also can’t be virtualized. The demo setup includes one domain controller (which also doubles as an iSCSI target), two file servers and two Hyper-V hosts.

This is probably the most basic fault-tolerant Hyper-V over SMB setup you can create that covers the entire spectrum of new SMB 3.0 capabilities (including SMB Transparent Failover, SMB Scale-Out, SMB Direct and SMB Multichannel). If you build a similar configuration, please share your experience in the comments below the post.

Please keep in mind that this is not a production-ready configuration. I built it entirely using 5-year-old desktop-class machines. To improve disk performance, I added 3 SSDs to one of the machines to use as storage for my cluster, which I configured using Storage Spaces and the Microsoft iSCSI Software Target included in Windows Server 2012. However, since I only had three small SSDs, I used a simple space, which cannot tolerate disk failures. In production, you should use mirrored spaces. Also keep in mind that the FST2-DC1 machine itself is a single point of failure, so you’re really only tolerant to the failure of one of the two Hyper-V hosts or one of the File Server nodes. In summary, this is a test-only configuration.

 

2. Environment details

The environment is deployed as 5 physical machines, all using the FST2.TEST domain.

 

Here are the details about the names, roles and IP addresses for each of the computers involved, including the cluster objects and VMs:

Computer   Role                                   External   Internal            RDMA 1              RDMA 2
fst2-dc1   DNS, Domain Controller, iSCSI Target   DHCP       192.168.100.10/24   192.168.101.10/24   N/A
fst2-fs1   File Server 1                          DHCP       192.168.100.11/24   192.168.101.11/24   192.168.102.11/24
fst2-fs2   File Server 2                          DHCP       192.168.100.12/24   192.168.101.12/24   192.168.102.12/24
fst2-hv1   Hyper-V Server 1                       DHCP       192.168.100.13/24   192.168.101.13/24   192.168.102.13/24
fst2-hv2   Hyper-V Server 2                       DHCP       192.168.100.14/24   N/A                 192.168.102.14/24
fst2-fsc   File Server Cluster Name Object        DHCP       N/A                 N/A                 N/A
fst2-fs    Classic File Server Cluster            N/A        192.168.100.22/24   192.168.101.22/24   192.168.102.22/24
fst2-so    Scale-Out File Server Cluster          N/A        N/A                 N/A                 N/A
fst2-hvc   Hyper-V Cluster Name Object            DHCP       N/A                 N/A                 N/A
fst2-vm*   Virtual Machine                        DHCP       N/A                 N/A                 N/A

 


 

3. Steps to configure FST2-DC1 (DNS, Domain Controller, iSCSI Target)

# Note 1: This post assumes you already installed Windows Server 2012 and configured the computer name. For details on how to do this, see this previous blog post.
# Note 2: This setup uses InfiniBand RDMA interfaces. For specific instructions on that part of the configuration (driver download, OpenSM subnet manager), see this previous blog post.
#

#
# Set power profile
#
POWERCFG.EXE /S SCHEME_MIN

#
# Configure all 4 interfaces (1 DHCP, 3 static)
#

#
# Rename External, no further action required, since this is DHCP
#
Get-NetAdapter -InterfaceDescription "*Intel*"   | Rename-NetAdapter -NewName "External"

#
# Rename Internal, set to manual IP address, configure IP Address, DNS
#
Get-NetAdapter -InterfaceDescription "*Realtek*" | Rename-NetAdapter -NewName "Internal"
Set-NetIPInterface -InterfaceAlias Internal -DHCP Disabled
Remove-NetIPAddress -InterfaceAlias Internal -Confirm:$false
New-NetIPAddress -InterfaceAlias Internal -IPAddress 192.168.100.10 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias Internal -ServerAddresses 192.168.100.10

#
# Rename RDMA1, set to manual IP address, configure IP Address, DNS
#
Get-NetAdapter -InterfaceDescription "*IPoIB*" | Select -Last 1 | Rename-NetAdapter -NewName RDMA1
Set-NetIPInterface -InterfaceAlias RDMA1 -DHCP Disabled
Remove-NetIPAddress -InterfaceAlias RDMA1 -Confirm:$false
New-NetIPAddress -InterfaceAlias RDMA1 -IPAddress 192.168.101.10 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias RDMA1 -ServerAddresses 192.168.100.10
Get-NetAdapter -InterfaceDescription "*IPoIB*" | ? {$_.Name -ne "RDMA1"} | Rename-NetAdapter -NewName RDMA2

#
# Disable RDMA2, since this system only uses one RDMA interface
#
Disable-NetAdapter -InterfaceAlias RDMA2 -Confirm:$false

#
# Configure Storage Spaces, create pool with 3 disks, single simple space
#
$s = Get-StorageSubSystem -FriendlyName *Spaces*
New-StoragePool -FriendlyName Pool1 -StorageSubSystemFriendlyName $s.FriendlyName -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
Set-ResiliencySetting -Name Simple -NumberofColumnsDefault 3 -StoragePool (Get-StoragePool -FriendlyName Pool1)

#
# Create Space (virtual disk)
#
New-VirtualDisk -FriendlyName Space1 -StoragePoolFriendlyName Pool1 -ResiliencySettingName Simple -UseMaximumSize

#
# Initialize Space, partition, create volume, format as X:
#
$c = Get-VirtualDisk -FriendlyName Space1 | Get-Disk
Set-Disk -Number $c.Number -IsReadOnly 0
Set-Disk -Number $c.Number -IsOffline 0
Initialize-Disk -Number $c.Number -PartitionStyle GPT
New-Partition -DiskNumber $c.Number -DriveLetter X -UseMaximumSize
Initialize-Volume -DriveLetter X -FileSystem NTFS -Confirm:$false

#
# Install iSCSI Software Target
#
Install-WindowsFeature FS-iSCSITarget-Server

#
# Create iSCSI target for two initiators (configured by IP address) with 5 LUNs (one 1GB witness disk, four 100GB data disks)
#
New-IscsiServerTarget -TargetName FSTarget -InitiatorID IPAddress:192.168.101.11, IPAddress:192.168.101.12
New-IscsiVirtualDisk -DevicePath X:\LUN0.VHD -size 1GB
1..4 | % {New-IscsiVirtualDisk -DevicePath X:\LUN$_.VHD -size 100GB}
Add-iSCSIVirtualDiskTargetMapping -TargetName FSTarget -DevicePath X:\LUN0.VHD
1..4 | % {Add-iSCSIVirtualDiskTargetMapping -TargetName FSTarget -DevicePath X:\LUN$_.VHD}

#
# Install Active Directory
#
Install-WindowsFeature AD-Domain-Services

#
# Create AD forest, reboots at the end
#
Install-ADDSForest `
-CreateDNSDelegation:$false `
-DatabasePath "C:\Windows\NTDS" `
-DomainMode "Win2008R2" `
-DomainName "FST2.TEST" `
-DomainNetBIOSName "FST2" `
-ForestMode "Win2008R2" `
-InstallDNS:$true `
-LogPath "C:\Windows\NTDS" `
-SafeModeAdministratorPassword (Read-Host -AsSecureString -Prompt "Enter Password") `
-SYSVOLPath "C:\Windows\SYSVOL"

 

4. Steps to configure FST2-FS1 (File Server 1)

#
# Set service power profile
#
POWERCFG.EXE /S SCHEME_MIN  
 
#
# Configure all 4 interfaces (1 DHCP, 3 static)
#

#
# Rename External, no further action required, since this is DHCP
#
Get-NetAdapter -InterfaceDescription "*Intel*" | Rename-NetAdapter -NewName "External"

#
# Rename Internal, set to manual IP address, configure IP Address, DNS
#
Get-NetAdapter -InterfaceDescription "*Realtek*" | Rename-NetAdapter -NewName "Internal"
Set-NetIPInterface -InterfaceAlias Internal -DHCP Disabled
Remove-NetIPAddress -InterfaceAlias Internal -Confirm:$false
New-NetIPAddress -InterfaceAlias Internal -IPAddress 192.168.100.11 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias Internal -ServerAddresses 192.168.100.10

#
# Rename RDMA1, set to manual IP address, configure IP Address, DNS
#
Get-NetAdapter -InterfaceDescription "*IPoIB*" | Select -Last 1 | Rename-NetAdapter -NewName RDMA1
Set-NetIPInterface -InterfaceAlias RDMA1 -DHCP Disabled
Remove-NetIPAddress -InterfaceAlias RDMA1 -Confirm:$false
New-NetIPAddress -InterfaceAlias RDMA1 -IPAddress 192.168.101.11 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias RDMA1 -ServerAddresses 192.168.100.10

#
# Rename RDMA2, set to manual IP address, configure IP Address, DNS
#
Get-NetAdapter -InterfaceDescription "*IPoIB*" | ? {$_.Name -ne "RDMA1"} | Rename-NetAdapter -NewName RDMA2
Set-NetIPInterface -InterfaceAlias RDMA2 -DHCP Disabled
Remove-NetIPAddress -InterfaceAlias RDMA2 -Confirm:$false
New-NetIPAddress -InterfaceAlias RDMA2 -IPAddress 192.168.102.11 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias RDMA2 -ServerAddresses 192.168.100.10

#
# Join Domain, restart the machine
#
Add-Computer -DomainName FST2.TEST -Credential (Get-Credential) -Restart

#
# Install File Server
#
Install-WindowsFeature File-Services, FS-FileServer, Failover-Clustering
Install-WindowsFeature RSAT-Clustering -IncludeAllSubFeature

#
# Start iSCSI Software Initiator
#
Set-Service MSiSCSI -StartupType automatic
Start-Service MSiSCSI

#
# Configure iSCSI Software Initiator
#

New-iSCSITargetPortal -TargetPortalAddress 192.168.101.10
Get-iSCSITarget | Connect-iSCSITarget
Get-iSCSISession | Register-iSCSISession

#
# Configure the five iSCSI LUNs (initialize, create partition and volume, format as drives J: to N:)
#
1..5 | % { 
    $Letter ="JKLMN"[($_-1)]
    Set-Disk -Number $_ -IsReadOnly 0
    Set-Disk -Number $_ -IsOffline 0
    Initialize-Disk -Number $_ -PartitionStyle MBR
    New-Partition -DiskNumber $_ -DriveLetter $Letter -UseMaximumSize 
    Initialize-Volume -DriveLetter $Letter -FileSystem NTFS -Confirm:$false
}

 

5. Steps to configure FST2-FS2 (File Server 2)

#
# Set service power profile
#
POWERCFG.EXE /S SCHEME_MIN 

#
# Configure all 4 interfaces (1 DHCP, 3 static)
#

#
# Rename External, no further action required, since this is DHCP
#
Get-NetAdapter -InterfaceDescription "*Intel*" | Rename-NetAdapter -NewName "External"

#
# Rename Internal, set to manual IP address, configure IP Address, DNS
#
Get-NetAdapter -InterfaceDescription "*Realtek*" | Rename-NetAdapter -NewName "Internal"
Set-NetIPInterface -InterfaceAlias Internal -DHCP Disabled
Remove-NetIPAddress -InterfaceAlias Internal -Confirm:$false
New-NetIPAddress -InterfaceAlias Internal -IPAddress 192.168.100.12 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias Internal -ServerAddresses 192.168.100.10

#
# Rename RDMA1, set to manual IP address, configure IP Address, DNS
#
Get-NetAdapter -InterfaceDescription "*IPoIB*" | Select -Last 1 | Rename-NetAdapter -NewName RDMA1
Set-NetIPInterface -InterfaceAlias RDMA1 -DHCP Disabled
Remove-NetIPAddress -InterfaceAlias RDMA1 -Confirm:$false
New-NetIPAddress -InterfaceAlias RDMA1 -IPAddress 192.168.101.12 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias RDMA1 -ServerAddresses 192.168.100.10

#
# Rename RDMA2, set to manual IP address, configure IP Address, DNS
#
Get-NetAdapter -InterfaceDescription "*IPoIB*" | ? {$_.Name -ne "RDMA1"} | Rename-NetAdapter -NewName RDMA2
Set-NetIPInterface -InterfaceAlias RDMA2 -DHCP Disabled
Remove-NetIPAddress -InterfaceAlias RDMA2 -Confirm:$false
New-NetIPAddress -InterfaceAlias RDMA2 -IPAddress 192.168.102.12 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias RDMA2 -ServerAddresses 192.168.100.10

#
# Join Domain
#
Add-Computer -DomainName FST2.TEST -Credential (Get-Credential) -Restart

#
# Install File Server
#
Install-WindowsFeature File-Services, FS-FileServer, Failover-Clustering
Install-WindowsFeature RSAT-Clustering -IncludeAllSubFeature

#
# Start iSCSI Software Initiator
#
Set-Service MSiSCSI -StartupType automatic
Start-Service MSiSCSI

#
# Configure iSCSI Software Initiator
#
New-iSCSITargetPortal -TargetPortalAddress 192.168.101.10
Get-iSCSITarget | Connect-iSCSITarget
Get-iSCSISession | Register-iSCSISession

#
# No need to configure LUNs here. In a cluster, this is done only from one of the nodes. We did it in FS1.
#

 

6. Steps to configure FST2-HV1 (Hyper-V host 1)

#
# Set service power profile
#
POWERCFG.EXE /S SCHEME_MIN 

#
# Configure all 4 interfaces (1 DHCP, 3 static)
#

#
# Rename External, no further action required, since this is DHCP
#
Get-NetAdapter -InterfaceDescription "*82566DM*" | Rename-NetAdapter -NewName "External"

#
# Rename Internal, set to manual IP address, configure IP Address, DNS
#
Get-NetAdapter -InterfaceDescription "*PRO/100*" | Rename-NetAdapter -NewName "Internal"
Set-NetIPInterface -InterfaceAlias Internal -DHCP Disabled
Remove-NetIPAddress -InterfaceAlias Internal -Confirm:$false
New-NetIPAddress -InterfaceAlias Internal -IPAddress 192.168.100.13 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias Internal -ServerAddresses 192.168.100.10

#
# Rename RDMA1, set to manual IP address, configure IP Address, DNS
#
Get-NetAdapter -InterfaceDescription "*IPoIB*" | Select -Last 1 | Rename-NetAdapter -NewName RDMA1
Set-NetIPInterface -InterfaceAlias RDMA1 -DHCP Disabled
Remove-NetIPAddress -InterfaceAlias RDMA1 -Confirm:$false
New-NetIPAddress -InterfaceAlias RDMA1 -IPAddress 192.168.101.13 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias RDMA1 -ServerAddresses 192.168.100.10

#
# Rename RDMA2, set to manual IP address, configure IP Address, DNS
#
Get-NetAdapter -InterfaceDescription "*IPoIB*" | ? {$_.Name -ne "RDMA1"} | Rename-NetAdapter -NewName RDMA2
Set-NetIPInterface -InterfaceAlias RDMA2 -DHCP Disabled
Remove-NetIPAddress -InterfaceAlias RDMA2 -Confirm:$false
New-NetIPAddress -InterfaceAlias RDMA2 -IPAddress 192.168.102.13 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias RDMA2 -ServerAddresses 192.168.100.10

#
# Install Hyper-V
#
Install-WindowsFeature Hyper-V, Hyper-V-PowerShell, Hyper-V-Tools, Failover-Clustering
Install-WindowsFeature RSAT-Clustering -IncludeAllSubFeature

#
# Join Domain, restart
#
Add-Computer -DomainName FST2.Test -Credential (Get-Credential) -Restart

 

7. Steps to configure FST2-HV2 (Hyper-V host 2)

#
# Set service power profile
#
POWERCFG.EXE /S SCHEME_MIN 

#
# Configure all 4 interfaces (1 DHCP, 3 static)
#

#
# Rename External, no further action required, since this is DHCP
#
Get-NetAdapter -InterfaceDescription "*82566DM*" | Rename-NetAdapter -NewName "External"

#
# Rename Internal, set to manual IP address, configure IP Address, DNS
#
Get-NetAdapter -InterfaceDescription "*PRO/100*" | Rename-NetAdapter -NewName "Internal"
Set-NetIPInterface -InterfaceAlias Internal -DHCP Disabled
Remove-NetIPAddress -InterfaceAlias Internal -Confirm:$false
New-NetIPAddress -InterfaceAlias Internal -IPAddress 192.168.100.14 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias Internal -ServerAddresses 192.168.100.10

#
# Rename RDMA1, set to manual IP address, configure IP Address, DNS
#
Get-NetAdapter -InterfaceDescription "*IPoIB*" | Select -Last 1 | Rename-NetAdapter -NewName RDMA1
Set-NetIPInterface -InterfaceAlias RDMA1 -DHCP Disabled
Remove-NetIPAddress -InterfaceAlias RDMA1 -Confirm:$false
New-NetIPAddress -InterfaceAlias RDMA1 -IPAddress 192.168.102.14 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias RDMA1 -ServerAddresses 192.168.100.10

#
# Disable RDMA2, since this system only uses one RDMA interface
#
Get-NetAdapter -InterfaceDescription "*IPoIB*" | ? {$_.Name -ne "RDMA1"} | Rename-NetAdapter -NewName RDMA2
Disable-NetAdapter -InterfaceAlias RDMA2 -Confirm:$false

#
# Install Hyper-V
#
Install-WindowsFeature Hyper-V, Hyper-V-PowerShell, Hyper-V-Tools, Failover-Clustering
Install-WindowsFeature RSAT-Clustering -IncludeAllSubFeature

#
# Join Domain, restart
#
Add-Computer -DomainName FST2.Test -Credential (Get-Credential) -Restart

 

8. Steps to configure the Cluster FST2-FSC (run from FST2-FS1)

#
# Run Failover Cluster Validation
#
Test-Cluster -Node FST2-FS1, FST2-FS2

#
# Create cluster
#
New-Cluster -Name FST2-FSC -Node FST2-FS1, FST2-FS2

#
# Rename Networks
#
(Get-ClusterNetwork | ? {$_.Address -like "192.168.100.*" }).Name = "Internal"
(Get-ClusterNetwork | ? {$_.Address -like "192.168.101.*" }).Name = "RDMA1"
(Get-ClusterNetwork | ? {$_.Address -like "192.168.102.*" }).Name = "RDMA2"
(Get-ClusterNetwork | ? {$_.Address -like "172.*" }).Name = "External"

#
# Configure Cluster Network Roles (0=Not used, 1=Cluster only, 3=Cluster+Clients)
#
(Get-ClusterNetwork Internal).Role = 3
(Get-ClusterNetwork RDMA1).Role = 3
(Get-ClusterNetwork RDMA2).Role = 3
(Get-ClusterNetwork External).Role = 1

#
# Rename Witness Disk
#
$w = Get-ClusterResource | ? { $_.OwnerGroup -eq "Cluster Group" -and $_.ResourceType -eq "Physical Disk"}
$w.Name = "WitnessDisk"

 

9. Steps to configure the Classic File Server Cluster FST2-FS (run from FST2-FS1):

#
# Move all disks to node one, rename Cluster Disks
#
Get-ClusterGroup | Move-ClusterGroup -Node FST2-FS1
# The 1GB witness LUN was formatted as J: earlier, so the first two data disks are K: and L:
(Get-Volume -DriveLetter K | Get-Partition | Get-Disk | Get-ClusterResource).Name = "FSDisk1"
(Get-Volume -DriveLetter L | Get-Partition | Get-Disk | Get-ClusterResource).Name = "FSDisk2"

#
# Create a classic file server resource group
#
Add-ClusterFileServerRole -Name FST2-FS -Storage FSDisk1, FSDisk2 -StaticAddress 192.168.100.22/24, 192.168.101.22/24, 192.168.102.22/24

#
# Create Folders
#
Move-ClusterGroup -Name FST2-FS -Node FST2-FS1
md K:\VMS
md L:\VMS

#
# Create File Shares
#
New-SmbShare -Name VMS1 -Path K:\VMS -FullAccess FST2.Test\Administrator, FST2.Test\FST2-HV1$, FST2.Test\FST2-HV2$
New-SmbShare -Name VMS2 -Path L:\VMS -FullAccess FST2.Test\Administrator, FST2.Test\FST2-HV1$, FST2.Test\FST2-HV2$

#
# Set NTFS permissions
#
(Get-SmbShare VMS1).PresetPathAcl | Set-Acl
(Get-SmbShare VMS2).PresetPathAcl | Set-Acl

 

10. Steps to configure the Scale-Out File Server Cluster FST2-SO (run from FST2-FS1):

#
# Add two remaining disks to Cluster Shared Volumes
#
Get-ClusterResource | ? OwnerGroup -eq "Available Storage" | Add-ClusterSharedVolume

#
# Create a scale out file server resource group
#
Add-ClusterScaleOutFileServerRole -Name FST2-SO

#
# Create Folders
#
MD C:\ClusterStorage\Volume1\VMS
MD C:\ClusterStorage\Volume2\VMS

#
# Create File Shares
#
New-SmbShare -Name VMS3 -Path C:\ClusterStorage\Volume1\VMS -FullAccess FST2.Test\Administrator, FST2.Test\FST2-HV1$, FST2.Test\FST2-HV2$
New-SmbShare -Name VMS4 -Path C:\ClusterStorage\Volume2\VMS -FullAccess FST2.Test\Administrator, FST2.Test\FST2-HV1$, FST2.Test\FST2-HV2$

#
# Set NTFS permissions
#
(Get-SmbShare VMS3).PresetPathAcl | Set-Acl
(Get-SmbShare VMS4).PresetPathAcl | Set-Acl

 

11. Steps to configure the VMs in FST2-HV1

#
# Create VM Switch (if doing this remotely, you will need to reconnect)
#
New-VMSwitch -NetAdapterName External -Name External
Get-NetAdapter -InterfaceDescription Hyper* | Rename-NetAdapter -NewName ExternalVirtual

#
# Create VHD files for two VMs
#
New-VHD -Path \\FST2-FS\VMS1\VM1.VHDX -Fixed -SizeBytes 20GB
New-VHD -Path \\FST2-SO\VMS3\VM3.VHDX -Fixed -SizeBytes 20GB

#
# Create two VMs
#
New-VM -Path \\FST2-FS\VMS1 -Name VM1 -VHDPath \\FST2-FS\VMS1\VM1.VHDX -SwitchName External -Memory 1GB
New-VM -Path \\FST2-SO\VMS3 -Name VM3 -VHDPath \\FST2-SO\VMS3\VM3.VHDX -SwitchName External -Memory 1GB
Set-VMDvdDrive -VMName VM1 -Path D:\WindowsServer2012.iso
Set-VMDvdDrive -VMName VM3 -Path D:\WindowsServer2012.iso
Start-VM VM1, VM3

 

12. Steps to configure the VMs in FST2-HV2:

#
# Create VM Switch (if doing this remotely, you will need to reconnect)
#
New-VMSwitch -NetAdapterName External -Name External
Get-NetAdapter -InterfaceDescription Hyper* | Rename-NetAdapter -NewName ExternalVirtual

#
# Create VHD files for two VMs
#
New-VHD -Path \\FST2-FS\VMS2\VM2.VHDX -Fixed -SizeBytes 20GB
New-VHD -Path \\FST2-SO\VMS4\VM4.VHDX -Fixed -SizeBytes 20GB

#
# Create and start two VMs
#
New-VM -Path \\FST2-FS\VMS2 -Name VM2 -VHDPath \\FST2-FS\VMS2\VM2.VHDX -SwitchName External -Memory 1GB
New-VM -Path \\FST2-SO\VMS4 -Name VM4 -VHDPath \\FST2-SO\VMS4\VM4.VHDX -SwitchName External -Memory 1GB
Set-VMDvdDrive -VMName VM2 -Path D:\WindowsServer2012.iso
Set-VMDvdDrive -VMName VM4 -Path D:\WindowsServer2012.iso
Start-VM VM2, VM4

 

13. Steps to create a Hyper-V Cluster using file share storage

#
# on FST2-HV1
#

#
# Create Hyper-V Cluster called FST2-HVC
#
New-Cluster -Name FST2-HVC -Node FST2-HV1, FST2-HV2

#
# on FST2-FS1
#

#
# Create Folder and File Share for File Share Witness
#
MD C:\ClusterStorage\Volume1\Witness
New-SmbShare -Name Witness -Path C:\ClusterStorage\Volume1\Witness -FullAccess FST2.Test\Administrator, FST2.Test\FST2-HVC$
(Get-SmbShare Witness).PresetPathAcl | Set-Acl

#
# on FST2-HV1
#

#
# Configure FST2-HVC Cluster with a File Share Witness
#
Set-ClusterQuorum -NodeAndFileShareMajority \\FST2-SO\Witness

#
# Make VMs in FST2-HV1 Highly available
#
Add-VMToCluster VM1
Add-VMToCluster VM3

#
# on FST2-HV2
#

#
# Make VMs in FST2-HV2 Highly available
#
Add-VMToCluster VM2
Add-VMToCluster VM4
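
To confirm that all four VMs are now clustered and see which node currently owns each one, you can run a quick check from either Hyper-V host. A small sketch (the clustered VM group names normally match the VM names):

#
# Verify the clustered VM roles and their current owner nodes
#
Get-ClusterGroup | ? GroupType -eq "VirtualMachine" | FT Name, OwnerNode, State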

 

14. Optional: Steps to create a nonclustered file share on FST2-FS1

#
# on FST2-FS1
#
MD D:\VMS
New-SmbShare -Name VMS5 -Path D:\VMS -FullAccess FST2.Test\Administrator, FST2.Test\FST2-HV1$, FST2.Test\FST2-HV2$
(Get-SmbShare VMS5).PresetPathAcl | Set-Acl

#
# on FST2-HV1
#
New-VHD -Path \\FST2-FS1\VMS5\VM5.VHDX -Fixed -SizeBytes 20GB
New-VM -Path \\FST2-FS1\VMS5 -Name VM5 -VHDPath \\FST2-FS1\VMS5\VM5.VHDX -SwitchName External -Memory 1GB
Set-VMDvdDrive -VMName VM5 -Path D:\WindowsServer2012.iso
Start-VM VM5

 

15. Conclusion

Sorry for the somewhat terse post, written mostly in PowerShell instead of English :-).
I hope you enjoy the PowerShell scripting and try at least some of it in your configurations.
For additional test scenarios you could try on a Hyper-V over SMB setup, see this previous blog post.

The built-in SMB PowerShell aliases in Windows Server 2012 and Windows 8


1 – Overview

 

Windows 8 and Windows Server 2012 introduced a new set of PowerShell cmdlets to manage File Servers and File Shares. If you're not familiar with them, I would recommend reviewing my blog on The Basics of SMB PowerShell.

Because cmdlet names have to be written in a way that makes their purpose obvious to the user, we sometimes end up with fairly lengthy cmdlets to type. There are two ways to overcome that: one is to use the <TAB> key while you type to autocomplete the cmdlet, and the other is to use an alias.

An alias is basically an abbreviated form that maps one-to-one to the complete form of a cmdlet. For instance, if you want to get a list of SMB Shares, you can type just “gsmbs” instead of “Get-SmbShare”. Or, for a more compelling example, you can use just “gsmbsn” instead of “Get-SmbServerNetworkInterface”. You can also use the alias with the same parameters you use with the full cmdlet.

 

2 – The List of Aliases

 

Now, as you would expect from a blog like this, here is a complete list of the SMB PowerShell aliases and matching cmdlets:

Alias     Cmdlet
gsmbs     Get-SmbShare
nsmbs     New-SmbShare
rsmbs     Remove-SmbShare
ssmbs     Set-SmbShare
blsmba    Block-SmbShareAccess
gsmba     Get-SmbShareAccess
grsmba    Grant-SmbShareAccess
rksmba    Revoke-SmbShareAccess
ulsmba    Unblock-SmbShareAccess
gsmbc     Get-SmbConnection
gsmbm     Get-SmbMapping
nsmbm     New-SmbMapping
rsmbm     Remove-SmbMapping
cssmbse   Close-SmbSession
gsmbse    Get-SmbSession
cssmbo    Close-SmbOpenFile
gsmbo     Get-SmbOpenFile
gsmbcc    Get-SmbClientConfiguration
ssmbcc    Set-SmbClientConfiguration
gsmbsc    Get-SmbServerConfiguration
ssmbsc    Set-SmbServerConfiguration
gsmbcn    Get-SmbClientNetworkInterface
gsmbsn    Get-SmbServerNetworkInterface
gsmbmc    Get-SmbMultichannelConnection
udsmbmc   Update-SmbMultichannelConnection
gsmbt     Get-SmbMultichannelConstraint
nsmbt     New-SmbMultichannelConstraint
rsmbt     Remove-SmbMultichannelConstraint
gsmbw     Get-SmbWitnessClient
msmbw     Move-SmbWitnessClient
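
You don’t have to memorize this table, by the way; you can generate it on any machine where the SMB cmdlets are present by asking PowerShell itself. A quick sketch:

# List every alias that resolves to an SMB cmdlet, with the cmdlet it maps to
Get-Alias | Where-Object { $_.Definition -like "*-Smb*" } | Sort-Object Definition | Format-Table Name, Definition -AutoSize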


 

3 – Don’t just Memorize. Learn!

 

Now, more important than memorizing the aliases is to understand the logic behind the way the aliases are created. While the SMB cmdlets are created with a verb, a dash, the term “Smb”  and a noun, the SMB aliases are created using a prefix (which is an abbreviation of a verb), the term “smb”  and a suffix (which is an abbreviation of a noun).

 

Here are the prefixes that we use in SMB aliases, in alphabetical order:

Prefix   Verb
Bl       Block
Cs       Close
G        Get
Gr       Grant
M        Move
N        New
R        Remove
Rk       Revoke
S        Set
Ud       Update
Ul       Unblock

 

Here are the suffixes that we use in SMB Aliases, also in alphabetical order:

Suffix   Noun
A        ShareAccess
C        Connection
CC       ClientConfiguration
CN       ClientNetworkInterface
M        Mapping
MC       MultichannelConnection
O        OpenFile
S        Share
SC       ServerConfiguration
Se       Session
SN       ServerNetworkInterface
T        MultichannelConstraint
W        WitnessClient


 

Now that you know the logic behind it, you can guess the aliases more easily. For instance, Get-SmbConnection is gsmbc (g=Get, smb, c=Connection). Also, Grant-SmbShareAccess is grsmba (gr=Grant, smb, a=ShareAccess).
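
And since an alias accepts exactly the same parameters as the cmdlet it maps to, the two commands in this small sketch are equivalent (using a hypothetical share named DATA):

# These two lines do exactly the same thing
Get-SmbShare -Name DATA
gsmbs -Name DATA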


 

4 – Conclusion

 

I have mixed feelings about aliases. If you’re writing a script that someone else will have to understand, the full form of the cmdlet is probably a better idea, since it makes your scripts more legible. In that case, using the <TAB> key can help with the typing. On the other hand, aliases can be a great time saver when typing a quick cmdlet here and there. Use with care… :-)


SNIA’s Storage Developers Conference 2012 is just a few weeks away


The Storage Networking Industry Association (SNIA) is hosting the 9th Storage Developer Conference (SDC) at the Hyatt Regency in beautiful Santa Clara, CA (Silicon Valley) during the week of September 17th. As usual, Microsoft is underwriting the SMB/SMB2/SMB3 Plugfest, which is co-located with the SDC event.

For developers working with storage-related technologies, this event gathers a unique crowd and includes a rich agenda that you can find at http://www.snia.org/events/storage-developer2012/agenda2012.  All the key industry players are represented. It lists presentations from Arista, Cleversafe, Dell, EMC, Fusion-io, HP, IBM, Intel, Mellanox, Micron, Microsoft, NEC, NetApp, Oracle, Pure Storage, Red Hat, Samba Team, Seagate, Spectra Logic, SwiftTest, Tata, Wipro and many others.

It’s always worth reminding you that the SDC presentations are usually delivered to developers by the actual product development teams and frequently the actual developer of the technology is either delivering the presentation or is in the room to take questions. That kind of deep insight is not common in every conference out there.

Presentations by Microsoft this year include:

  • Mon 10:35 – SMB 3.0 (Because 3 > 2) – David Kruse, Principal Software Development Lead
  • Mon 11:35 – Understanding Hyper-V over SMB 3.0 Through Specific Test Cases – Jose Barreto, Principal Program Manager
  • Mon 1:30 – Continuously Available SMB – Observations and Lessons Learned – David Kruse, Principal Software Development Lead, and Mathew George, Principal Software Developer
  • Mon 2:30 – “Storage Spaces” - Next Generation Virtualized Storage for Windows – Karan Mehra, Principal Software Development Engineer
  • Tue 10:40 – Windows File and Storage Directions – Surendra Verma, Partner Development Manager
  • Tue 1:00 – Hyper-V Storage Performance and Scaling – Joe Dai, Principal Software Design Engineer, and Liang Yang, Senior Performance Engineer
  • Tue 2:00 – NFSv4.1 Architecture and Tradeoffs in Windows Server 2012 – Roopesh Battepati, Principal Development Lead
  • Tue 2:00 – The Virtual Desktop Infrastructure Storage Behaviors and Requirements – Spencer Shepler, Performance Architect
  • Tue 3:05 – NoSQL in the Clouds with Windows Azure Table – Jai Haridas, Principal Development Manager
  • Tue 3:05 – NFSv4.1 Server Protocol Compliance, Security, Performance and Scalability Testing: Implement the RFC, Going Beyond POSIX Interop! – Raymond Wang, Senior Software Design Engineer in Test
  • Tue 3:05 – SQL Server: Understanding the Application/Data Workload, and Designing Storage Products to Match Desired Characteristics for Better Performance – Gunter Zink, Principal Program Manager, and Claus Joergensen, Principal Program Manager
  • Wed 1:15 – NAS Management using Microsoft Corporation System Center 2012 Virtual Machine Manager and SMI-S – Alex Naparu, Software Design Engineer, and Madhu Jujare, Senior Software Design Engineer
  • Wed 3:20 – Erasure Coding in Windows Azure Storage – Cheng Huang, Researcher
  • Wed 4:20 – ReFS - Next Generation File System for Windows – J.R. Tipton, Principal Software Development Engineer, and Malcolm Smith, Senior Software Design Engineer
  • Thu 9:30 – Primary Data Deduplication in Windows Server 8 – Sudipta Sengupta, Senior Research Scientist, and Jim Benton, Principal Software Design Engineer
  • Thu 10:30 – High Performance File Serving with SMB3 and RDMA via the SMBDirect Protocol – Tom Talpey, Software Architect, and Greg Kramer, Sr. Software Development Engineer
  • Thu 11:25 – SMB 3.0 Application End-to-End Performance – Dan Lovinger, Principal Software Architect

Registration is open at http://www.snia.org/events/storage-developer2012/registration and you should definitely plan to attend. If you are registered, leave a comment and let’s plan to meet when we get there!

Compilation of my live tweets from SNIA’s SDC 2012 (Storage Developer Conference)


Here is a compilation of my live tweets from SNIA’s SDC 2012 (Storage Developers Conference).
You can also read those directly from twitter at http://twitter.com/josebarreto (in reverse order)

Notes and disclaimers

  • These tweets were typed during the talks and they include typos and my own misinterpretations.
  • Text under each talk are quotes from the speaker or text from the speaker slides, not my personal opinion.
  • If you feel that I misquoted you or badly represented the content of a talk, please add a comment to the post.
  • I spent only limited time fixing typos or correcting the text after the event. There are only so many hours in a day...
  • I have not attended all sessions (since there are 4 or 5 at a time, that would actually not be possible :-)…
  • SNIA usually posts the actual PDF decks a few weeks after the event. Attendees have access immediately.

Linux CIFS/SMB2 Kernel Clients - A Year In Review by Steven French, IBM

  • SMB3 will be important for Linux, not just Windows #sdc2012
  • Linux kernel supports SMB. Kernel 3.7 (Q4-2012) includes 71 changes related to SMB (including SMB 2.1), 3.6 has 61, 3.5 has 42
  • SMB 2.1 kernel code in Linux enabled as experimental in 3.7. SMB 2.1 will replace CIFS as the default client when stable.
  • SMB3 client (CONFIG_EXPERIMENTAL) expected by Linux kernel 3.8.
  • While implementing Linux client for SMB3, focusing on strengths: clustering, RDMA. Take advantage of great protocol docs

Multiuser CIFS Mounts, Jeff Layton, Red Hat

  • I attended this session, but tweeted just the session title.

How Many IOPS is Enough by Thomas Coughlin, Coughlin Associates

  • 79% of surveyed people said they need between 1K and 1M IOPs. Capacity: from 1GB to 50TB with sweet spot on 500GB.
  • 78% of surveyed people said hardware delivers between 1K and 1M IOPs, with a sweet spot at 100K IOPs. Matches requirements  
  • Minimum latency system hardware (before other bottleneck) ranges between >1sec to <10ns. 35% at 10ms latency.
  • $/GB for SDD and HDD both declining in parallel paths. $/GB roughly follows IOPs.
  • Survey results will be available in October...

SMB 3.0 ( Because 3 > 2 ) - David Kruse, Microsoft

  • Fully packed room to hear David's SMB3 talk. Plus a few standing in the back... pic.twitter.com/TT5mRXiT
  • Time to ponder: When should we recommend disabling SMB1/CIFS by default?

Understanding Hyper-V over SMB 3.0 Through Specific Test Cases with Jose Barreto

  • No tweets during this session. Hard to talk and tweet at the same time :-)

Continuously Available SMB – Observations and Lessons Learned - David Kruse and Mathew George.

  • I attended this session, but tweeted just the session title.

Status of SMB2/SMB3 Development in Samba, Michael Adam, Samba Team

  • SMB 2.0 officially supported in Samba 3.6 (about a year ago, August 2011)
  • SMB 2.1 work done in Samba for Large MTU, multi-credit, dynamic re-authentication
  • Samba 4.0 will be the release to incorporate SMB 3.0 (encryption and secure negotiate already done)

The Solid State Storage (R-)Evolution, Michael Krause, Hewlett-Packard

  • Storage (especially SSD) performance constrained by SAS interconnects
  • Looking at serviceability from DIMM to PCIe to SATA to SAS. Easy to replace x perfor
  • No need to re-invent SCSI. All OS, hypervisors, file systems, PCIe storage support SCSI.
  • Talking SCSI Express. Potential to take advantage of PCIe capabilities.
  • PCIe has benefits but some challenges: Non optimal DMA "caching", non optimal MMIO performance
  • everything in the world of storage is about to radically change in a few years: SATA, SAS, PCIe, Memory
  • Downstream Port Containment. OS informed of async communications lost.
  • OCuLink: new PCIe cable technology
  • Hardware revolution: stacked media, MCM / On-die, DIMM. Main memory in 1 to 10 TB. Everything in memory?
  • Express Bay (SFF 8639 connector), PCIe CEM (MMIO based semantics), yet to be developed modules
  • Media is going to change. $/bit, power, durability, performance vs. persistence. NAND future bleak.
  • Will every memory become persistent memory? Not sic-fi, this could happen in a few years...
  • Revolutionary changes coming in media. New protocols, new hardware, new software. This is only the beginning

Block Storage and Fabric Management Using System Center 2012 Virtual Machine Manager and SMI-S, Madhu Jujare, Microsoft

  • Windows Server 2012 Storage Management APIs are used by VMM 2012. An abstraction of SMI-S APIs.
  • SMAPI Operations: Discovery, Provisioning, Replication, Monitoring, Pass-thru layer
  • Demo of storage discovery and mapping with Virtual Machine Manager 2012.SP1. Using Microsoft iSCSI Target!

Linux Filesystems: Details on Recent Developments in Linux Filesystems and Storage by Chris Mason, Fusion-io

  • Many journaled file systems introduced in Linux 2.4.x in the early 2000s.
  • Linux 2.6.x. Source control at last. Kernel development moved more rapidly. Specially after Git.
  • Backporting to Enterprise. Enterprise kernels are 2-3 years behind mainline. Some distros more than others.
  • Why are there so many filesystems? Why not pick one? Because it's easy and people need specific things.
  • Where Linux is now. Ext4, XFS (great for large files). Btrfs (snapshots, online maintenance). Device Mapper.
  • Where Linux is now. CF (Compact Flash). Block. SCSI (4K, unmap, trim, t10 pi, multipath, Cgroups).
  • NFS. Still THE filesystem for Linux. Revisions introduce new features and complexity. Interoperable.
  • Futures. Atomic writes. Copy offload (block range cloning or new token based standard). Shingled drives (hybrid)
  • Futures. Hinting (tiers, connect blocks, IO priorities). Flash (seems appropriate to end here :-)

Non-volatile Memory in the Storage Hierarchy: Opportunities and Challenges by Dhruva Chakrabarti, HP

  • Will cover a few technologies coming the near future. From disks to flash and beyond...
  • Flash is a huge leap, but NVRAM presents even bigger opportunities.
  • Comparing density/retention/endurance/latency/cost for hdd/sdd (nand flash)/dram/nvram
  • Talking SCM (Storage Class Memory). Access choices: block interface or byte-addressable model.
  • Architectural model for NVRAM. Coexist with DRAM. Buffers/caches still there. Updates may linger...
  • Failure models. Fail-stop. Byzantine. Arbitrary state corruption. Memory protection.
  • Store to memory must be failure-atomic.
  • NVRAM challenges. Keep persistent data consistent. Programming complexity. Models require flexibility.
  • Visibility ordering requirements. Crash can lead to pointers to uninitialized memory, wild pointers.
  • Potential inconsistencies like persistent memory leaks. There are analogs in multi-threading.
  • Insert a cache line flush to ensure visibility in NVRAM. Reminiscent of a disk cache flush.
  • Many flavors of cache flushes. Intended semantics must be honored. CPU instruction or API?
  • Fence-based programming has not been well accepted. Higher level abstractions? Wrap in transactions?
  • Conclusion. What is the right API for persistent memory? How much effort? What's the implementation cost?

Building Next Generation Cloud Networks for Big Data Applications by Jayshree Ullal, Arista Networks

  • Agenda: Big Data Trends, Data Analytics, Hadoop.
  • 64-bit CPUs trends, Data storage trends. Moore's law is alive and well.
  • Memory hierarchy is not changing. Hard drives not keeping up, but Flash...
  • Moore's law for Big Data, Digital data doubling every 2 years. DAS/NAS/SAN not keeping up.
  • Variety of data. Raw, unstructured. Not enough minds around to deal with all the issues here.
  • Hadoop means the return of DAS. Racks of servers, DAS, flash cache, non-blocking fabric.
  • Hadoop. 3 copies of the data, one in another rack. Protect you main node, single point of failure.
  • Hadoop. Minimum 10Gb. Shift from north-south communications to east-west. Servers talking to each other.
  • From mainframe, to client/server, to Hadoop clusters.
  • Hadoop pitfalls. Not a layer 2 thing. Highly redundant, many paths, routing. Rack locality. Data integrity
  • Hadoop. File transfers in chunks and blocks. Pipelines replication east-west. Map and Reduce.
  • showing sample 2-rack solution. East-west interconnect is very important. Non-blocking. Buffering.
  • Sample conf. 4000 nodes. 48 servers per cabinet. High speed network backbone. Fault tolerant main node
  • Automating cluster provisioning. Script using DHCP for zero touch provisioning.
  • Buffer challenges. Dynamic allocations, survive micro bursts.
  • Advanced diagnostics and management. Visibility to the queue depth and buffering. Graph historical latency.
  • my power is running out. I gotta speak fast. :-)

Windows File and Storage Directions by Surendra Verma, Microsoft

  • Landscape: pooled resources, self-service, elasticity, virtualization, usage-based, highly available
  • Industry-standard parts to build very high scale, performing systems. Greater number of less reliable parts.
  • Services influencing hardware. New technologies to address specific needs. Example: Hadoop.
  • OS storage built to address specific needs. Changing that requires significant effort.
  • You have to assume that disks and other parts will fail. Need to address that in software.
  • If you have 1000 disks in a system, some are always failing, you're always reconstructing.
  • ReFS: new file system in Windows 8, assumes that everything is unreliable underneath.
  • Other relevant features in Windows Server 2012: Storage Spaces, Clustered Shared Volumes, SMB Direct.
  • Storage Spaces provides resiliency to media failures. Mirror (2 or 3 way), parity, hot spares.
  • Shared Storage Spaces. Resiliency to node and path failures using shared SAS disks.
  • Storage Spaces is aware of enclosures, can tolerate failure of an entire enclosure.
  • ReFS provides resiliency to media failures. Never write metadata in place. Integrity streams checksum.
  • integrity Streams. User data checksum, validated on every read. Uses Storage Spaces to find a good copy.
  • You own application can use an API to talk to Storage Spaces, find all copies of the data, correct things.
  • Resiliency to latent media errors. Proactively detect and correct, keeping redundancy levels intact.
  • ReFS can detect/correct corrupted data even for data not frequently read. Do it on a regular basis.
  • What if all copies are lost? ReFS will keep the volume online, you can still read what's not corrupted.
  • example configuration with 4 Windows Server 2012 nodes connected to multiple JBODs.

Hyper-V Storage Performance and Scaling with Joe Dai & Liang Yang, Microsoft

Joe Dai:

  • New option in Windows Server 2012: Virtual Fibre Channel. FC to the guest. Uses NPIV. Live migration just works.
  • New in WS2012: SMB 3.0 support in Hyper-V. Enables Shared Nothing Live Migration, Cross-cluster Live Migration.
  • New in WS 2012: Storage Spaces. Pools, Spaces. Thin provisioning. Resiliency.
  • Clustered PCI RAID. Host hardware RAID in a cluster setup.
  • Improved VHD format used by Hyper-V. VHDX. Format specification at http://www.microsoft.com/en-us/download/details.aspx?id=29681 Currently v0.95. 1.0 soon
  • VHDX: Up to 64TB. Internal log for resiliency. MB aligned. Larger blocks for better perf. Custom metadata support. (A quick creation sketch follows after this list.)
  • Comparing performance. Pass thru, fixed, dynamic, differencing. VHDX dynamic ~= VHD fixed ~= physical disk.
  • Offloaded Data Transfers (ODX). Reduces times to merge, mirror and create VHD/VHDX. Also works for IO inside the VM.
  • Hyper-V support for UNMAP. Supported on VHDX, Pass-thru. Supported on VHDX Virtual SCSI, Virtual FC, Virtual IDE.
  • UNMAP in Windows Server 2012 can flow from virtual IDE in VM to VHDX to SMB share to block storage behind share.
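
Not from the talk, just for context: a minimal PowerShell sketch of creating one of these large VHDX files, assuming the Windows Server 2012 Hyper-V module is available (path and sizes are illustrative):

    # Create a 64TB dynamic VHDX with 4KB logical sectors.
    New-VHD -Path D:\VMs\Data.vhdx -SizeBytes 64TB -Dynamic -LogicalSectorSizeBytes 4096

    # Inspect the format, type and sector sizes of the new virtual disk.
    Get-VHD -Path D:\VMs\Data.vhdx | Format-List VhdFormat, VhdType, Size, LogicalSectorSize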

Liang Yang:

  • My job is to find storage bottlenecks in Hyper-V storage and hand over to Joe to fix them. :-)
  • Finding scale limits in Hyper-V synthetic SCSI IO path in WS2008R2. 1 VSP thread, 1 VMBus channel per VM, 256 queue depth per
  • WS2012: From 4 VPs per VM to 64 VP per VM. Multi-threaded IO model. 1 channel per 16 VPs. Breaks 1 million IOPs.
  • Huge performance jump in WS2012 Hyper-V. Really close to physical even with high performance storage.
  • Hyper-V Multichannel (not to be confused with SMB Multichannel) enables the jump on performance.
  • Built 1 million IOPs setup for about $10K (excluding server) using SSDs. Demo using IOmeter. Over 1.22M IOPs...

The Virtual Desktop Infrastructure Storage Behaviors and Requirements with Spencer Shepler, Microsoft

  • Storage for Hyper-V in Windows Server 2012: VHDX, NTFS, CSV, SMB 3.0.
  • Review of SMB 3.0 advantages for Hyper-V: active recovery, Multichannel, RDMA.
  • Showing results for SMB Multichannel with four traditional 10GbE. Line rate with 64KB IOs. CPU bound with 8KB.
  • Files used by Hyper-V. XML, BIN, CSV, VHD, VHDX, AVHDX. Gold, diff and snapshot disk relationships.
  • improvements in VHDX. Up to 64TB size. 4KB logical sector size, 1MB alignment for allocations. UNMAP. TRIM.
  • VDI: Personal desktops vs. Pooled desktops. Pros and cons.
  • Test environment. WS2012 servers. Win7 desktops. Login VSI http://www.loginvsi.com - 48 10K rpm HDD.
  • Workload. Copy, word, print pdf, find/replace, zip, outlook e-mail, ppt, browsing, freemind. Realistic!
  • Login VSI fairly complex to setup. Login frequency 30 seconds. Workload started "randomly" after login.
  • Example output from Login VSI. Showing VSI Max.
  • Reading of BIN file during VM restore is sequential. IO size varies.
  • Gold VHDX activity. 77GB over 1 hour. Only reads, 512 bytes to 1MB size IOs. 25KB average. 88% are <=32KB
  • Distribution for all IO. Reads are 90% 64KB or less. Writes mostly 20KB or less.
  • AVHD activity 1/10 read to write ratio. Flush/write is 1/10. Range 512 bytes to 1MB. 90% are 64KB or less.
  • At the end of test run for 1 hour with 85 desktops. 2000 IOPs from all 85 VMs, 2:1 read/write ratio.

SQL Server: Understanding the Data Workload by Gunter Zink, Microsoft (original title did not fit a tweet)

  • Looking at OLTP and data warehousing workloads. What's new in SQL Server 2012.
  • Understanding SQL Server. Store and retrieve structured data, Relational, ACID, using schema.
  • Data organized in tables. Tables have columns. Tables stored in 8KB pages. Page size fixed, not configurable.
  • SQL Server Datafile. Header, GAM page (bitmap for 4GB of pages), 4GB of pages, GAM page, 4GB of pages, etc...
  • SQL Server file space allocated in extents. An extent is 8 pages or 64KB. Parameter for larger extent size.
  • SQL Server log file: Header, log records (512 bytes to 60KB). Checkpoint markers. Truncated after backup.
  • If your storage reports 4KB sector size, minimum log write for SQL Server is 4KB. Records are padded.
  • 2/3 of SQL Servers run OLTP workloads. Many active users, lightweight transactions.
  • Going over what happens when you run OLTP. Read cache or read disk, write log to disk and mark page as dirty
  • Log buffer. Circular buffer, no fixed size. One buffer written to disk, another being filled with changes.
  • If storage is not fast enough, writing the log takes longer and the buffer of changes grows larger.
  • Lazy writer. Writes dirty pages to disk (memory pressure). Checkpoint: Writes pages, marks log file (time limit)
  • Checkpoint modes: Automatic, Indirect, Manual. Write rate reduced if latency reaches 20ms (can be configured)
  • Automatic SQL Checkpoint. Write intensity controlled by recovery interval. Default is 0 = every two minutes.
  • New in SQL Server 2012. Target_Recovery_Time. Makes checkpoint less spiky by constantly writing dirty pages.
  • SQL Server log file. Change records in sequence. Mostly just writes. Except in recovery or transaction rollback.
  • Data file IO. 8KB random reads, buffered (based on number of user queries). Can be done in 64KB at SQL start up.
  • Log file IO: unbuffered small sequential writes (depends on how many inserts/updates/deletes).
  • About 80% of SQL Server performance problems are storage performance problems. Not enough spindles or memory.
  • SQL Server problems. 20ms threshold too high for SSDs. Use -k parameter to limit (specified in MB/sec)
  • Issues. Checkpoint floods array cache (20ms). Cache de-staging hurts log drive write performance.
  • Log writes must go to disk, no buffering. Data writes can be buffered, since it can recover from the log.
  • SQL Server and Tiered Storage. We probably won't read what we've just written.
  • Data warehouse. Read large amounts of data, mostly no index, table scans. Hourly or daily updates (from OLTP).
  • Understanding a data warehouse query. Lots of large reads. Table scans and range scans. Reads: 64KB up to 512KB.
  • DW. Uses TempDB to handle intermediate results, sort. Mostly 64KB writes, 8KB reads. SSDs are good for this.
  • DW common problems: Not enough IO bandwidth. 2P server can ingest 10Gbytes/sec. Careful with TP, pooled LUNs.
  • DW common problems. Arrays don't read from multiple mirror copies.
  • SMB file server and SQL Server. Limited support in SQL Server 2008 R2. Fully supported with SQL Server 2012.
  • I got my fastest data warehouse performance using SMB 3.0 with RDMA. Also simpler to manage.
  • Comparing steps to update SQL Server with Fibre Channel and SMB 3.0 (many more steps using FC).
  • SQL Server - FC vs. SMB 3.0 connectivity cost comparison. Comparing $/MB/sec with 1GbE, 10GbE, QDR IB, 8G FC.

The Future of Protocol and SMB2/3 Analysis with Paul Long, Microsoft

  • We'll talk about Message Analyzer. David is helping.
  • Protocol Engineering Framework
  • Like Network Monitor. Modern message analysis tool built on the Protocol Engineering Framework
  • Source for Message Analyzer can be network packets, ETW events, text logs, other sources. Can validate messages.
  • Browse for message sources, Select a subset of messages, View using a viewer like a grid.
  • New way of viewing starting from the top down, instead of the bottom up in NetMon.
  • Unlike NetMon, you can group by any field or message property. Also payload rendering (like JPG)
  • Switching to demo mode...
  • Guidance shipped online. Starting with the "Capture/Trace" option.
  • Trace scenarios: NDIS, Firewall, Web Proxy, LAN, WLAN, Wi-Fi. Trace filter as well.
  • Doing a link layer capture (just like old NetMon). Start capture. Generate some web traffic.
  • Stop the trace. Group by module. Look at all protocols. Like HTTP. Drill in to see operations.
  • Looking at operations. HTTP GET. Look at the details. High level stack view.
  • Now grouping on both protocol and content type. Easily spots pictures over HTTP. Image preview.
  • Easier to see time elapsed per operation when you group messages. You can dig down to individual messages.
  • Now looking at SMB trace. Trace of a file copy. Group on the file name (search for the property)
  • Now grouped on SMB.Filename. You can see all SMB operations to copy a specific file.
  • Now looking at a trace of SMB file copy to an encrypted file share.
  • Built in traces to capture from the client side or server side. Can do full PDU or header only.
  • This can also be used to capture SMB Direct data, using the SMB client trace.
  • Showing the trace now with both network traffic and SMB client trace data (unencrypted).
  • Want to associate the wire capture with the SMB client ETW trace? Use the message ID
  • Showing mix of firewall trace and SMB client ETW trace. You see it both encrypted and not.
  • SMB team at Microsoft is the first to add native protocol unit tracing. Very useful...
  • Most providers have ETW debug logging but not the actual messages.
  • You can also get the trace with just NetSh or LogMan and load the trace in the tool later.
  • We also can deploy the tool and use PowerShell to start/stop capture.
  • If the event provider offers them, you can specify level and keywords during the capture.
  • Add some files (log file and Wireshark trace). Narrow down the time. Add selection filter.
  • Mixing Wireshark trace with a Samba text log file (pattern matching text log).
  • Audience: As a Samba hacker, Message Analyzer is one of the most interesting tools I have seen!
  • Jaws are dropping as Paul demos analyzing a trace from Wireshark + Samba taken on Linux.
  • Next demo: visualizations. Two separate file copies. Showing summary view for SMB reads/writes
  • Looking at a graph of bytes/second for SMB reads and writes. Zooming into a specific time.
  • From any viewer you should be able to do any kind of selection and then launch another viewer.
  • If you're a developer, you can create a very sophisticated viewer.
  • Next demo: showing the protocol dashboard viewer. Charts with protocol bars. Drills into HTTP.

Storage Systems for Shingled Disks, with Garth Gibson, Panasas

  • Talking about disk technology. Reaction of HDD to what's going on with SSDs.
  • Kryder's law for magnetic disks. Expectation is that disks will cost next to nothing.
  • High capacity disk. As bits get smaller, the bit might not hold its orientation 10 years later.
  • Heat assisted to make it possible to write, then keep it longer when cold. Need to aim that laser precisely.
  • New technology. Shingled writing. Write head is wider than read head. Density defined by read head, not write head.
  • As you write, you overwrite a portion of what you wrote before, but you can still read it.
  • Shingled can be done with today's heads with minor changes, no need to wait for heat assisted technology.
  • Shingled disks. Large sequential writes. Disk becomes tape!!
  • Hard to see just the one bit. Safe plan is to see the bit from slightly different angles and use signal processing.
  • if aiming at 3x the density: cross talk. Signal processing using 2 dimensions (TDMR). 3-5 revs to read a track.
  • Shingled disks. Initial multiplier will be a factor of 2. Seek 10nm instead of 30 nm. Wider band with sharp edges.
  • Write head edge needs to be sharp on one side, where the tracks will overlap. Looking at different widths.
  • Areal density favors large bands that overlap. Looking at some math that proves this.
  • You could have a special place in the disk with no shingles for good random write performance, mixed with shingled.
  • Lots of questions on shingled disks. How to handle performance, errors, etc.
  • Shingled disks. Same problem for Flash. Shingled disks - same algorithms as Flash.
  • modify software to avoid or minimize read, modify, write. Log structured file systems are 20 years old.
  • Key idea is that disk attribute says "sequential writing". T13 and T10 standards.
  • Shingled disks. Hadoop as initial target. Project with mix of shingled and unshingled disks. Could also be SSD+HDD.
  • Prototype banded disk API. Write forward or move back to 0. Showing test results with new file system.
  • future work. Move beyond Hadoop to general workloads, hurts with lots of small files. Large files ok.
  • future work. Pack metadata. All of the metadata into tables, backed on disk by large blob of changes.
  • Summary of status. Appropriate for Big Data. One file = one band. Hadoop is write once. Next steps: pack metadata.

The Big Deal of Big Data to Big Storage with Benjamin Woo, Neuralytix

  • Can't project to both screens because laptop does not have VGA. Ah, technology... Will use just right screen.
  • Even Batman is into big data. ?!
  • What's the big picture for big data. Eye chart with lots of companies, grouped into areas...
  • We have a problem with storage/data processing today. Way too many hops. (comparing to airline routes ?!)
  • Sample path: Oracle to Informatica to MicroStrategy and Hadoop. Bring them together. Single copy of "the truth".
  • Eliminate the process of ETL. Eliminate the need for exports. Help customers to find stuff in the single copy.
  • You are developers. You need to find a solution for this problem. Do you buy into this?
  • Multiple copies OK for redundancy or performance, but shouldn't it all be same source of truth?
  • Single copy of the truth better for discovery. Don't sample, don't summarize. You will find more than you expect.
  • We're always thinking about the infrastructure. Remove yourself from the hardware and think about the data!
  • The challenge is how to think about the data. Storage developers can map that to the hardware.
  • Send complaints to /dev/null. Tweet at @BenWooNY
  • Should we drop RDBMS altogether? Should we add more metadata to them? Maybe.
  • Our abstractions are already far removed from the hardware. Think virtual disks in VM to file system to SAN array.
  • Software Defined Storage is something we've been doing for years in silicon.
  • Remember what we're here for. It's about the data. Otherwise there is no point in doing storage.
  • Is there more complexity in having a single copy of the truth? Yes, but that is part of what we do! We thrive there!
  • Think about Hadoop. They take on all the complexity and use dumb hardware. That's how they create value!

Unified Storage for the Private Cloud with Dennis Chapman, NetApp

  • 10th anniversary of SMI-S. Also 10th anniversary of pirate day. Arghhh...
  • application silos to virtualization to private clouds (plus public and hybrid clouds)
  • Focusing on the network. Fundamentally clients talking to storage in some way...
  • storage choices for physical servers. Local (DAS) and remote (FC, iSCSI, SMB). Local for OS, remote for data.
  • Linux pretty much the same as Windows. Difference is NFS instead of SMB. Talking storage affinities.
  • Windows OS. Limited booting from iSCSI and FC. Mostly local.
  • Windows. Data mostly on FC and iSCSI, SMB still limited (NFS more well established on Linux).
  • shifting to virtualized workloads on Windows. Options for local and remote. More choices, storage to the guest.
  • Virtualized workloads are the #1 configuration we provide storage for.
  • Looking at options for Windows and Linux guests, hosted on both VMware and Hyper-V hosts. Table shows options
  • FC to the guest. Primary on Linux, secondary on Windows. Jose: FC to the guest new in WS2012.
  • File storage (NFS) primary on Linux, but secondary on Windows (SMB). Jose: again, SMB support new in WS2012.
  • iSCSI secondary for Linux guest, but primary for Windows guests.
  • SMB still limited right now, expect it to grow. Interested in how it will play, maybe as high as NFS on Linux.
  • Distributed workload state. Workload domain, hypervisors domain, storage domain.
  • Guest point in time consistency. Crash consistency or application consistency. OS easier, applications harder
  • Hibernation consistency. Put the guest to sleep and snapshot. Works well for Windows VMs. Costs time.
  • Application consistency. Specific APIs. VSS for Windows. I love this! Including remote VSS for SMB shares.
  • Application consistency for Linux. Missing VSS. We have to do specific things to make it work. Not easy.
  • hypervisors PIT consistency. VMware, cluster file system VMFS. Can store files on NFS as well.
  • Hypervisors PIT for Hyper-V. Similar choices with VHD on CSV. Also now option for SMB in WS2012.
  • Affinities and consistency. Workload domain, Hypervisors domain and Storage domain backups. Choices.
  • VSS is the major difference between Windows and Linux in terms of backup and consistency.
  • Moving to the Storage domain. Data ONTAP 8 Clustering. Showing 6-node filer cluster diagram.
  • NetApp Vservers own a set of FlexVols, which contain the objects (either LUN or file).
  • Sample workflow with NetApp with remote SMB storage. Using remote VSS to create a backup using clones.
  • Sample workflow. App consistent backup from a guest using an iSCSI LUN.
  • Showing eye charts with integration with VMware and Microsoft.
  • Talking up the use of PowerShell, SMB when integrating with Microsoft.
  • Talk multiple protocols, rich services, deep management integration, highly available and reliable.

SNIA SSSI PCIe SSD Round Table. Moderator + four members.

  • Introductions, overview of SSSI PCIe task force and committee.
  • 62 companies in the last conference. Presentations available for download. http://www.snia.org/forums/sssi/pcie
  • Covering standards, sites and tools available from the group. See link posted
  • PCIe SSDs look just like other drives, but there are differences. Bandwidth is one of them.
  • Looking at random 4KB write IOPs and response time for different types of disks: HDD, MLC, SLC, PCIe.
  • Different SSD tech offer similar response rates. Some high latencies due to garbage collection.
  • comparing now DRAM, PCIe, SAS and SATA. Lower latencies in first two.
  • Comparing CPU utilization. From less than 10% to over 50%. What CPU utilization to achieve IOPs...
  • Other system factors. Looking at CPU affinity effect on random 4KB writes... Wide variation.
  • Performance measurement. Response time is key when testing PCIe SSDs. Power mgmt? Heat mgmt? Protocol effect on perf?
  • Extending the SCSI platform for performance. SCSI is everywhere in storage.
  • Looking at server attached SSDs and how much is SATA, SAS, PCIe, boot drive. Power envelope is a consideration.
  • SCSI is everywhere. SCSI Express protocol for standard path to PCIe. SoP (SCSI over PCIe). Hardware and software.
  • SCSI Express: Controllers, Drive/Device, Drivers. Express bay connector. 25 watts of power.
  • Future: 12Gbps SAS in volume at the end of 2013. Extended copy feature. 25W devices. Atomic writes. Hinting. SCSI Express.
  • SAS controllers > 1 million IOPs and increased power for SAS. Reduces PCIe SSD differentiation. New form factors?
  • Flash drives: block storage or memory.
  • Block versus Memory access. Storage SSDs, PCIe SSDs, memory class SCM compared in a block diagram. Looking at app performance
  • optimization required for apps to realize the memory class benefits. Looking at ways to address this.
  • Open industry directions. Make all storage look like SCSI or offer apps other access models for storage?
  • Mapping NVMExpress capability to SCSI commands. User-level abstractions. Enabling SCM by making it easy.
  • Panel done with introductions. Moving to questions.
  • How is Linux support for this? NVMExpress driver is all that exists now.
  • How much of the latency is owned by the host and the PCIe device? Difficult to answer. Hardware, transport, driver.
  • Comparing to DRAM was excellent. That was very helpful.
  • How are form factors moving forward? 2.5" HDD format will be around for a long time. Serviceability.
  • Memory like access semantics - advantages over SSDs. Lower overhead, lots in the hardware.
  • Difference between NVMe and SOP/PQI? Capabilities voted down due to complexity.
  • What are the abstractions like? Something like a file? NVMe has a namespace. Atomic write is a good example. How to overlay?
  • It's easy to just use a malloc, but then you've cut out the block layer and are running with memory. However, how do you transition?

NAS Management using System Center 2012 Virtual Machine Manager and SMI-S with Alex Naparu and Madhu Jujare

  • VMM for Management of Virtualized Infrastructure: VMM 2012 SP1 covers block storage and SMB3 shares
  • Lots of SMB 3.0 sessions here at SDC...
  • VMM offers to manage your infrastructure. We'll be focusing on storage. Lots enabled by Windows Server 2012.
  • There's an entire layer in Windows Server 2012 dedicated to manage storage. Includes translation of WMI to SMI-S
  • All of this can be leveraged using PowerShell.
  • VMM NAS Management: Discovery (Servers, Systems, Shares), Creation/Removal (Systems, Shares), Share Permissions
  • How did we get there? With a lot of help from our partners. Kick-off with EMC and NetApp. More soon. Plugfests.
  • Pre-release providers. If you have any questions on the availability of providers, please ask EMC and NetApp.
  • Moving now into demo mode. Select provider type. Specify discovery scope. Provide credentials. Discovering...
  • Discovered some block storage and file storage. Some providers expose one of them, some expose both.
  • Looking at all the pools and all the shares. Shallow discovery at first. After selection, we do deep discovery.
  • Each pool is given a tag, called classification. Tagged some as Gold, some as Platinum. Finishing discovery.
  • Deep discovery completed. Looking at the Storage tree in VMM, with arrays, pools, LUNs, file shares.
  • Now using VMM to create a file share. Provide a name, description, file server, storage pool and size.
  • Creates a logical disk in the pool, format with a file system, then create a file share. All automated.
  • Now going to a Hyper-V host, add a file share to the host using VMM. Sets appropriate permissions for the share.
  • VMM also checks the file access is good from that host.
  • Now let's see how that works for Windows. Add a provider, abstracted. Using WMI, not SMI-S. Need credentials.
  • Again, shows all shares, select for deep discovery. Full management available after that.
  • Now we can assign Windows file share to the host, ACLs are set. Create a share. All very much the same as NAS.
  • VMM also verifies the right permissions are set. VMM can also repair permissions on the share if necessary.
  • Now using VMM to create a new VM on the Windows SMB 3.0 file share. Same as NAS device with SMB 3.0.
  • SMI-S support. Basic operations supported on SMI-S 1.4 and later. ACL management requires SMI-S 1.6.
  • SMI-S 1.4 profiles: File server, file share, file system discovery, file share creation, file share removal.
  • Listing profiles that are required for SMI-S support with VMM. Partial list: NAS Head, File System, File Export
  • SMI-S defines a number of namespaces. "Interop" namespace required. Associations are critical.
  • Details on Discovery. namespaces, protocol support. Filter to get only SMB 3.0 shares.
  • Discovery of File Systems. Reside on logical disks. That's the tie from file storage to block storage.
  • Different vendors have different way to handle File Systems. Creating a new one is not trivial. Another profile.
  • VMM creates the file system and file share in one step. Root of FS is the share. Keeping things simple.
  • Permissions management. Integrated with Active Directory. Shares "registered" with Hyper-V host. VMM adds ACLs.
  • Demo of VMM specific PowerShell walking the hierarchy from the array to the share and back.
  • For VMM, NAS device and SMI-S must be integrated with Active Directory. Simple Identity Management Subprofile.
  • CIM Passthrough API. WMI provider can be leveraged via code or PowerShell.
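
Not from the talk: as a toy illustration of that passthrough idea, the same SMB share objects VMM works with can be queried directly from PowerShell (this uses the standard Windows SMB CIM namespace, not VMM cmdlets):

    # Query the SMB WMI/CIM provider directly for the shares on this server.
    Get-CimInstance -Namespace root\Microsoft\Windows\SMB -ClassName MSFT_SmbShare |
        Format-Table Name, Path, ContinuouslyAvailable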

SMB 3, Hyper-V and ONTAP, Garrett Mueller, NetApp

  • Senior Engineer at NetApp focused on CIFS/SMB.
  • What we've done with over 30 developers: features, content for Windows Server 2012. SMB3, Witness, others.
  • Data ONTAP cluster-mode architecture. HA pairs with high speed interconnect. disk "blade" in each node.
  • Single SMB server spread across multiple nodes in the cluster. Each an SMB server with same configuration
  • Each instance of the SMB server in a node has access to the volumes.
  • Non-disruptive operations. Volume move (SMB1+). Logical Interface move (SMB2+). Move node/aggregate (SMB3).
  • We did not have a method to preserve the locks between nodes. That was disruptive before SMB3.
  • SMB 3 and Persistent Handles. Showing two nodes and how you can move a persistent SMB 3 handle.
  • Witness can be used in lots of different ways. Completely separate protocol. NetApp scoped it to an HA pair.
  • Diagram explaining how NetApp uses Witness protocol with SMB3 to discover, monitor, report failure.
  • Remote VSS. VSS is Microsoft's solution for app consistent snapshot. You need to back up your shares!
  • NetApp implemented a provider for Remote VSS for SMB shares using the documented protocol. Showing workflow.
  • All VMs within a share are SIS cloned. SnapManager does backup. After done, temp SIS clones are removed.
  • Can a fault occur during a backup? If there is a failure, the backup will fail. Not protected in that way.
  • Offloaded Data Transfer (ODX). Intra-volume: SIS clones. Inter-volume/inter-node: back-end copy engine.
  • ODX: The real benefit is in the fact that it's used by default in Windows Server 2012. It just works!
  • ODX implications for Hyper-V over SMB: Rapid provisioning, rapid storage migrations, even disk within a VM.
  • Hyper-V over SMB. Putting it all together. Non-disruptive operations, Witness, Remote VSS, ODX.
  • No NetApp support for SMB Multichannel or SMB Direct (RDMA) with SMB 3.

Design and Implementation of SMB Locking in a Clustered File System with Aravind Velamur Srinivasan, EMC - Isilon

  • Part of SMB team at EMC/Isilon. Talk agenda covers OneFS and its distributed locking mechanism.
  • Overview of OneFS. NAS file server, scalable, 8x mirror, +4 parity. 3 to 144 nodes, using commodity hardware.
  • Locking: avoid multiple writers to the same file. Potentially in different file server nodes.
  • DLM challenges: Performance, multiple protocols and requirements. Expose appropriate APIs.
  • Diagram explaining the goals and mechanism of the Distributed Locking Manager (DLM) in Isilon's OneFS.
  • Going over requirements of the DLM. Long list...

Scaling Storage to the Cloud and Beyond with Ceph with Sage Weil, Inktank

  • Trying to catch up with ongoing talk on ceph. Sage Weil talks really fast and uses dense slides...
  • Covering RADOS block device being used by virtualization, shared storage. http://ceph.com/category/rados/
  • Covering ceph-fs. Metadata and data paths. Metadata server components. Combined with the object store for data.
  • Legacy metadata storage: bad. Ceph-fs metadata does not use block lists or inode tables. Inode in directory.
  • Dynamic subtree partitioning very scalable. Hundreds of metadata servers. Adaptive. Preserves locality.
  • Challenge dealing with metadata IO. Use metadata server as cache, prefetch dir-inode. Large journal or log.
  • What is journaled? Lots of state. Sessions, metadata changes. Lazy flush.
  • Client protocol highly stateful. Metadata servers, direct access to OSDs.
  • explaining the ceph-fs workflow using ceph-mon, ceph-mds, ceph-osd.
  • Snapshots. Volume and subvolume unusable at petabyte scale. Snapshot arbitrary directory
  • client implementations. Linux kernel client. Use Samba to reexport as CIFS. Also NFS and Hadoop.
  • Current status of the project: most components: status=awesome. Ceph-fs nearly awesome :-)
  • Why do it? Limited options for scalable open source storage. Proprietary solutions expensive.
  • What to do with hard links? They are rare. Using auxiliary table, a little more expensive, but works.
  • How do you deal with running out of space? You don't. Make sure utilization on nodes balanced. Add nodes.

Introduction to the last day

  • Big Data is like crude oil, it needs a lot of refining and filtering...
  • Growing from 2.75 Zettabytes in 2012 to 8 ZB in 2015. Nice infographic showing projected growth...

The Evolving Apache Hadoop Eco System - What It Means for Big Data and Storage Developers, Sanjay Radia, Hortonworks

  • One of the surprising things about Hadoop is that it does not use RAID on the disks. It does surprise people.
  • Data is growing. Lots of companies developing custom solutions since nothing commercial could handle the volume.
  • web logs with terabytes of data. Video data is huge, sensors. Big Data = transactions + interactions + observations.
  • Hadoop is commodity servers, JBOD disks, horizontal scaling. Scale from small to clusters of thousands of servers.
  • Large table with use cases for Hadoop. Retail, intelligence, finance, ...
  • Going over classic processes with ETL, BI, Analytics. A single system cannot process huge amounts of data.
  • Big change is introducing a "big data refinery". But you need a platform that scales. That's why we need Hadoop.
  • Hadoop can use a SQL engine, or you can do key-value store, NoSQL. Big diagram with Enterprise data architecture.
  • Hadoop offers a lot of tools. Flexible metadata services across tools. Helps with the integration, format changes.
  • Moving to Hadoop and Storage. Looking at diagram showing racks, servers, 6k nodes, 120PB. Fault tolerant, disk or node
  • manageability. One operator managing 3000 nodes! Same boxes do both storage and computation.
  • Hadoop uses very high bandwidth. Ethernet or InfiniBand. Commonly uses 40GbE.
  • Namespace layer and Block storage layer. A block pool is a set of blocks, like a LUN. Dir/file abstraction on the namespace.
  • Data is normally accessed locally, but can pull from any other servers. Deals with failures automatically.
  • looking at HDFS. Goes back to 1978 paper on separating data from function in a DFS. Lustre, Google, pNFS.
  • I attribute the use of commodity hardware and replication to the GoogleFS. Circa 2003. Non-posix semantics.
  • Computation close to data is an old model. Map Reduce.
  • Significance of not using disk RAID. Replication factor of Hadoop is 3. Node can be fixed when convenient.
  • HDFS recovers at a rate of 12GB in minutes, done in parallel. Even faster for larger clusters. Recovers automatically.
  • Clearly there is an overhead. It's 3x instead of much less for RAID. Used only for some of the data.
  • Generic storage service opportunities for innovation. Federation, partitioned namespace, independent block pools.
  • Archival data. Where should it sit? Hadoop encourages keeping old data for future analysis. Hot/cold? Tiers? Tape?
  • Two versions of Hadoop. Hadoop 1 (GA) and Hadoop 2 (alpha). One is stable. Full stack HA work in progress.
  • Hadoop full stack HA architecture diagram. Slave nodes layer + HA Cluster layer. Improving performance, DR, upgrades.
  • upcoming features include snapshots, heterogeneous storage (flash drives), block grouping, other protocols (NFS).
  • Which Apache Hadoop distro should you use? Little marketing of Hortonworks. Most stable version of components.
  • It's a new product. At Yahoo we needed to make sure we did not lose any data. Need it to be stable.
  • Hadoop changes the game. Cost, storage and compute. Scales to very very large. Open, growing ecosystem, no lock in.
  • Question from the audience. What is Big Data? What is Hadoop? You don't need to know what it is, just buy it :-)
  • Sizing? The CPU performance and disk performance/capacity varies a lot. 90% of disk performance for sequential IO.
  • Question: Security? Uses Kerberos authentication, you can connect to Active Directory. There is a paper on this.
  • 1 name node to thousands of nodes, 200M files. Hadoop moving to more name nodes to match the capacity of working set.

Primary Data Deduplication in Windows Server 2012 with Sudipta Sengupta, Jim Benton

Sudipta Sengupta:

  • Growing file storage market. Dedup is the #1 feature customers are asking for. Lots of acquisitions in dedup space.
  • What is deduplication, how to do it. Content-based chunking using a sliding window, computing hashes. Rabin method.
  • Dedup for data at rest, data on the wire. Savings in your primary storage more valuable, more expensive disks...
  • Dimensions of the problem: Primary storage, locality, service data to components, commodity hardware.
  • Extending the envelope from backup scenarios only to primary deduplication.
  • Key design decisions: post-processing, granularity and chunking, scale slowly to data size, crash consistent
  • Large scale study of primary datasets. Table with different workloads, chunking.
  • Looking at whole-file vs. sub-file. Decided early on to do chunking. Looking at chunk size. Compress the chunks!
  • Compression is more efficient on larger chunk sizes. Decided to use larger chunk size, pays off in metadata size.
  • You don't want to compress unless there's a bang for the buck. 50% of chunks = 80% for compression savings.
  • Basic version of the Rabin fingerprinting based chunking. Large chunks, but more uniform chunk size distribution. (A toy chunking sketch follows after this list.)
  • In Windows average chunk size is 64KB. Jose: Really noticing this guy is in research :-) Math, diagrams, statistics
  • Chunk indexing problem. Metadata too big to fit in RAM. Solution via unique chunk index architecture. Locality.
  • Index very frugal on both memory usage and IOPs. 6 bytes of RAM per chunk. Data partitioning and reconciliation.
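
Not from the talk: a toy PowerShell sketch of the content-defined chunking idea, declaring a chunk boundary whenever a rolling hash hits a target pattern. Real Rabin fingerprinting uses polynomial arithmetic over GF(2) and removes the byte leaving the window; this simple multiplicative hash, and all parameters, are illustrative only:

    function Get-ChunkBoundaries {
        param(
            [byte[]] $Data,
            [int] $AvgChunkSize = 65536,    # ~64KB average, as in the talk
            [int] $MinChunkSize = 16384,
            [int] $MaxChunkSize = 131072
        )
        $boundaries = @()
        $start = 0
        $hash = [long]0
        for ($i = 0; $i -lt $Data.Length; $i++) {
            # Rolling-style hash update (toy; not a true sliding window).
            $hash = (($hash * 31) + $Data[$i]) -band 0x7FFFFFFF
            $len = $i - $start + 1
            # Boundary when the hash matches the target pattern (probability
            # roughly 1/AvgChunkSize), subject to min/max chunk sizes.
            if (($len -ge $MinChunkSize -and ($hash % $AvgChunkSize) -eq 0) -or
                ($len -ge $MaxChunkSize)) {
                $boundaries += $i + 1    # chunk covers $Data[$start..$i]
                $start = $i + 1
                $hash = [long]0
            }
        }
        return $boundaries
    }

The point of chunking this way rather than at fixed offsets: insert one byte at the front of the data and the early boundaries shift, but later boundaries realign, so most chunks still dedup against the old copy.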

Jim Benton:

  • Windows approach to data consistency and integrity. Mandatory block diagram with deduplication components.
  • Looking at deduplication on-disk structures. Identify duplicate data (chunks), optimize target files (stream map)
  • Chunk store file layout. Data container files: chunks and stream maps. Chunk ID has enough data to locate chunk
  • Look at the rehydration process. How to get the file back from the stream map and chunks.
  • Deduplicated file write path partial recall. Recall bitmap allows serving IO from file stream or chunk store.
  • Crash consistency state diagram. One example with partial recall. Generated a lot of these diagrams for confidence.
  • Used state diagrams to allow test team to induce failures and verify deduplication is indeed crash consistent.
  • Data scrubbing. Induce redundancy back in, but strategically. Popular chunks get more copies. Checksum verified.
  • Data scrubbing approach: Detection, containment, resiliency, scrubbing, repair, reporting. Lots of defensive code!
  • Deduplication uses Storage Spaces redundancy. Can use that level to recover the data from another copy if possible.
  • Performance for deduplication. Looking at a table with impact of dedup. Looking at options using less/more memory.
  • Looking at resource utilization for dedup. Focus on converging them.
  • Dedup performance varies depending on data access pattern. Time to open office file, almost no difference.
  • Dedup. Time to copy large VHD file. Lots of common chunks. Actually reduces copy time for those VHD files. Caching.
  • dedup write performance. Crash consistency hurts performance, so there is a hit. In a scenario, around 30% slower.
  • Deduplication among the top features in Windows Server 2012. Mentions at The Register, Ars Technica, Windows IT Pro
  • Lots of great questions being asked. Could not capture it all.

High Performance File Serving with SMB3 and RDMA via the SMBDirect Protocol with Tom Talpey and Greg Kramer

Tom Talpey:

  • Where we are with SMB Direct, where we are going, some pretty cool performance results.
  • Last year here at SDC we had our coming out party for SMB Direct. Review of what's SMB Direct.
  • Nice palindromic port for SMB Direct: 5455. Protocol documented at MS-SMBD. http://msdn.microsoft.com/en-us/library/hh536346(v=PROT.13).aspx
  • Covering the basics of SMB Direct. Only 3 message types. 2 way full duplex. Discovered via SMB Multichannel.
  • Relationship with the NDKPI in Windows. Provider interface implemented by adapter vendors.
  • Send/receive model. Possibly sent as train. Implements crediting. Direct placement (read/write). Scatter/gather list
  • Going over the details on SMB Direct send transfers. Reads and writes, how they map to SMB3. Looking at read transfer
  • looking at exactly how the RDMA reads and writes work. Actual offloaded transfers via RDMA. Also covering credits.
  • Just noticed we have a relatively packed room for such a technical talk...And it's the larger room here...
  • interesting corner cases for crediting. Last credit case. Async, cancels and errors. No reply, many/large replies
  • SMB Direct efficiency. Two pipes, one per direction, independent. Truly bidirectional. Server pull model. Options.
  • SMB Direct options for RDMA efficiency. FRMR, silent completions, coalescing, etc.
  • Server pull model allows for added efficiency, in addition to improved security. Server controls all RDMA operations.

Greg Kramer:

  • On to the main event. That's why you're here, right? Performance...
  • SDC 2011 results. 160K IOPs, 3.2 GBytes/sec.
  • New SDC 2012 results. Dual CX3 InfiniBand, Storage Spaces, two SAS HBAs, SSDs. SQLIO tool.
  • Examining the results. 7.3 Gbytes / sec with 512KB IOs at 8.6% CPU. 453K 8KB IOs at 60% CPU.
  • Taking it to 11. Three InfiniBand links. Six SAS HBAs. 48 SSDs. 16.253 GBytes/sec!!! Still low CPU utilization...
  • NUMA effects on performance. Looking at NUMA disabled versus enabled. 16% difference in CPU utilization.
  • That's great! Now what? Looking at potential techniques to reduce the cost of IOs, increase IOPs further.
  • Looking at improving how invalidation consumes CPU cycles, RNIC bus cycles. But you do need to invalidate aggressively.
  • Make invalidate cheaper. Using "send with invalidate". Invalidate done as early as possible, fewer round trips.
  • Send with invalidate: supported in InfiniBand, iWARP and RoCE. No changes to SMB Direct protocol. Not a committed plan.
  • Shout out to http://smb3.info  Thanks, Greg!
  • Question: RDMA and encryption? Yes, you can combine them. SMB Direct will use RDMA sends/receives in that case.
  • Question: How do you monitor at packet level? Use Message Analyzer. But careful drinking from the fire hose :-)
  • Question: Performance monitor? There are counters for RDMA, look out for stalls, hints on how to optimize.

SMB 3.0 Application End-to-End Performance with Dan Lovinger

  • Product is released now, unlike last year. We're now showing final results...
  • Scenarios with OLTP database, cluster motion, Multichannel. How we found issues during development.
  • Summary statistics. You can drown in a river with an average depth of six inches.
  • Starting point: Metric versus time. Averages are not enough, completely miss what's going on.
  • You should think about distribution. Looking at histogram. The classic Bell Curve. 34% to each side.
  • Standard deviation and median. Mid point of all data points. What makes sense for latency, bandwidth?
  • Looking at percentiles. Cumulative distributions. Remember that from College?
  • OLTP workload. Transaction rate, cumulative distribution. How we found and solved an issue that makes SMB ~= DAS
  • OLTP. Log file is small to midsize sequential IO, database file is small random IO.
  • Found 18-year-old perf bug that affects only SMB and only in an OLTP workload. Leftover from FAT implementation.
  • Found this "write bubble" performance bug by looking at average queue length. Once fixed, SMB =~ DAS.
  • back to OLTP hardware configuration. IOPs limited workload does not need fast interconnect.
  • Comparing SMB v. DAS transaction rate at ingest. 1GbE over SMB compared to 4GbFC. Obviously limited by bandwidth.
  • As soon as the ingest phase is done, then 1GbE is nearly identical to 4GbFC. IOPs limited on disks. SMB=~DAS.
  • This is just a sample of why workload matters, why we need these performance analysis to find what we can improve.
  • IOmeter and SQLIO are not enough. You need to look at a real workload to find these performance issues.
  • Fix for this issue in Windows Server 2012 and also back ported to Windows Server 2008 R2.
  • Another case: Looking at what happens when you move a cluster resource group from one node to another.
  • 3 file server cluster groups, 40 disk on each. How resource control manager handles the move. Needed visualization.
  • Looking at a neat visualization of how cluster disks are moved from one node to another. Long pole operations.
  • Found that every time we offline a disk, there was a long running operation that was not needed. We fixed that.
  • We also found a situation that took multiple TCP timeouts, leading to long delay in the overall move. Fixed!
  • Final result, dramatic reduction of cluster move time. Entire move time from 55 seconds to under 10 seconds.
  • Now we can do large cluster resource group moves with 120 disks in under 10 seconds. Not bad...
  • Last case study. SMB Multichannel performance. Looking at test hardware configuration. 24 SSDs, 2 SAS HBAs, IOmeter
  • Looking at local throughput at different IO sizes, as a baseline.
  • SMB Multichannel. We can achieve line rate saturation at about 16KB with four 40GBE interfaces.
  • Curve for small IOs matches between DAS and SMB at line rate.

Closing tweet

  • #SDConference is finished. Thanks for a great event! Meet you at SNW Fall 2012 in a month, right back here. On my way back to Redmond now...

Talks at SNW Fall 2012 in October will cover SMB Direct, Hyper-V over SMB and SMB 3.0


I have three presentations lined up for the ComputerWorld/SNIA SNW Fall 2012 Conference, scheduled for October 16-19, 2012 in Santa Clara, California. Here are the details for each one, taken from the official event web site.

Industry Perspective: High Throughput File Servers with SMB Direct, Using the Three Flavors of RDMA network adapters
Wednesday, 10/17/2012, 11:40 AM -12:25 PM

In Windows Server 2012, we introduce the “SMB Direct” protocol, which allows file servers to use high throughput/low latency RDMA network interfaces. However, there are three distinct flavors of RDMA, each with their own specific requirements and advantages, their own pros and cons. In this session, we'll look into iWARP, InfiniBand and RoCE, and outline the differences between them. We'll also list the specific vendors that offer each technology and provide step-by-step instructions for anyone planning to deploy them. The talk will also include an update on RDMA performance and a customer case study.

Industry Perspective: Hyper-V over SMB: Remote File Storage Support in Windows Server 2012 Hyper-V
Friday, 10/19/2012, 10:20 AM - 11:05 AM

In this session, we cover the Windows Server 2012 Hyper-V support for remote file storage using SMB 3.0. This introduces a new first-class storage option for Hyper-V that is a flexible, easy to use and cost-effective alternative to block storage. We detail the basic requirements for Hyper-V over SMB and outline the specific enhancements to SMB 3.0 to support server application storage, including SMB Transparent Failover, SMB Scale-Out, SMB Multichannel, SMB Direct (SMB over RDMA), SMB Encryption, SMB PowerShell, SMB performance counters and VSS for Remote File Shares. We conclude with a few suggested configurations for Hyper-V over SMB, including both standalone and clustered options. SMB 3.0 is an open protocol family, which is being implemented by several major vendors of enterprise NAS, and by the Samba open-source CIFS/SMB package in Linux and other operating systems.

SNIA Tutorial: SMB Remote File Protocol (including SMB 3.0)
Friday, 10/19/2012, 11:15 AM - 12:00 PM

The SMB protocol has evolved over time from CIFS to SMB1 to SMB2, with implementations by dozens of vendors including most major Operating Systems and NAS solutions. The SMB 3.0 protocol, announced at the SNIA SDC Conference in September 2011, is expected to have its first commercial implementations by Microsoft, NetApp and EMC by the end of 2012 (and potentially more later). This SNIA Tutorial describes the basic architecture of the SMB protocol and basic operations, including connecting to a share, negotiating a dialect, executing operations and disconnecting from a share. The second part of the talk will cover improvements in the version 2.0 of the protocol, including a reduced command set, support for asynchronous operations, compounding of operations, durable and resilient file handles, file leasing and large MTU support. The final part of the talk covers the latest changes in the SMB 3.0 version, including persistent handles (SMB Transparent Failover), active/active clusters (SMB Scale-Out), multiple connections per sessions (SMB Multichannel), support for RDMA protocols (SMB Direct), snapshot-based backups (VSS for Remote File Shares), opportunistic locking of folders (SMB Directory Leasing), and SMB encryption.

If you’re not registered yet, there’s still time. Visit the official web site at http://www.snwusa.com and click on the Register link. I look forward to seeing you there…

Windows Server 2012, File Servers and SMB 3.0 – Simpler and Easier by Design


1. Introduction

I have been presenting our solution in Windows Server 2012 for File Storage for Virtualization (Hyper-V over SMB) and Databases (SQL Server over SMB) for a while now. I always start the conversation talking about how simple and easy it is for an IT Administrator or an Application Developer to use SMB 3.0.

However, I am frequently challenged to detail exactly what that means. After all, both “simple” and “easy” are fairly subjective concepts. In this blog post, I will enumerate some design decisions regarding the SMB 3.0 protocol and its implementation in Windows to make this more concrete.

Please note that, while the topic here is simplicity, I will essentially be going into details behind the implementation, so this could get a little intricate. Not to scare anyone, but if you’re just getting started with SMB, you should probably start with a lighter topic. Or proceed at your own risk :-).

Here’s a summary of the items on this post:

    • 1. Introduction
    • 2. SMB Transparent Failover
      • 2.1. SMB Transparent Failover – No application failures, no application changes
      • 2.2. SMB Transparent Failover – Continuous Availability turned on by default
      • 2.3. SMB Transparent Failover – Witness Service
    • 3. SMB Scale-Out
      • 3.1. SMB Scale-Out – Volumes, not drive letters
      • 3.2. SMB Scale-Out – Single name, dynamic
      • 3.3. SMB Scale-Out – Using node IP addresses
    • 4. SMB Multichannel
      • 4.1. SMB Multichannel – Auto-discovery
      • 4.2. SMB Multichannel – Transparent failover
      • 4.3. SMB Multichannel – Interface arrival
      • 4.4. SMB Multichannel – Link-local
    • 5. SMB Direct
      • 5.1. SMB Direct – Discovery over TCP/IP
      • 5.2. SMB Direct – Fail back to TCP/IP
    • 6. VSS for SMB File Share
      • 6.1. VSS for SMB File Share – Same model as block VSS
      • 6.2. VSS for SMB File Shares – Stream snapshots from file server
    • 7. SMB Encryption
      • 7.1. SMB Encryption – No PKI or certificates required
      • 7.2. SMB Encryption – Hardware acceleration
    • 8. Server Manager
      • 8.1. Server Manager – Simple Wizards
      • 8.2. Server Manager – Fewer knobs
    • 9. SMB PowerShell
      • 9.1. SMB PowerShell – Permissions for a New Share
      • 9.2. SMB PowerShell – Permissions on the Folder
      • 9.3. SMB PowerShell – Cluster type based on disk type
      • 9.4. SMB PowerShell – Shares, Sessions and Open Files across Scale-Out nodes
      • 9.5. SMB PowerShell – Connection
    • 10. Conclusion

2. SMB Transparent Failover

SMB Transparent Failover is a feature that allows an SMB client to continue to work uninterrupted when there’s a failure in the SMB file server cluster node that it’s using. This includes preserving information on the server side plus allowing the client to automatically reconnect to the same share and files on a surviving file server cluster node.

More information: http://blogs.technet.com/b/clausjor/archive/2012/06/07/smb-transparent-failover-making-file-shares-continuously-available.aspx

2.1. SMB Transparent Failover – No application failures, no application changes

Over the years, we have improved SMB to make the protocol more fault tolerant. We’ve been taking some concrete steps in that direction, like introducing SMB Durability in SMB 2.0 (if you lose a connection, SMB will attempt to reconnect, but without guarantees) and SMB Resiliency in SMB 2.1 (guaranteed re-connection, but applications need to be changed to take advantage of it).

With SMB 3.0, we can now finally guarantee a transparent failover without any application changes, through the use of SMB Persistence (also known as Continuously Available file shares). This final step gives us the ability to simply create a share that supports persistence (a Continuously Available share), with applications accessing that share automatically enjoying the benefit, without any changes to the way files are opened and no perceived errors that the application needs to handle.

2.2. SMB Transparent Failover – Continuous Availability turned on by default

Once we had the option to create Continuously Available file shares, we weighed making it a default setting against leaving it up to the administrator to set it. In the end, we made Continuous Availability the default for any share in a file server cluster. In order to feel comfortable with that decision, we worked through all the different scenarios to make sure we would not break any existing functionality.


Expert Mode: There is a PowerShell setting to turn off Continuous Availability. You can also use PowerShell to tweak the default time that SMB 3.0 waits for a reconnection, which defaults to 60 seconds. These parameters are available for both New-SmbShare and Set-SmbShare. See http://blogs.technet.com/b/josebda/archive/2012/06/27/the-basics-of-smb-powershell-a-feature-of-windows-server-2012-and-smb-3-0.aspx
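
To make that concrete, here is a minimal sketch using the SMB cmdlets mentioned above (share name, path, account and timeout value are all illustrative; -CATimeout is the reconnect timeout, in seconds):

    # Shares on a clustered file server get Continuous Availability by default.
    New-SmbShare -Name VMStore -Path D:\Shares\VMStore -FullAccess "CONTOSO\HyperVHosts"

    # Expert mode: turn Continuous Availability off for a share...
    Set-SmbShare -Name VMStore -ContinuouslyAvailable $false -Force

    # ...or tweak the reconnection window from its 60-second default.
    Set-SmbShare -Name VMStore -CATimeout 120 -Force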

2.3. SMB Transparent Failover – Witness Service

Another problem we had with the Clustered File Server failover was TCP/IP timeouts. If the file server cluster node you were talking to had a hardware issue (say, someone pulled the plug on it), the SMB client would need to wait for the TCP/IP timeout. This was really bad if we had a requirement to make the failover process automatic and fast. To make that work better, we created a new (optional) protocol and service to simply detect these situations and act faster.

In Windows Server 2012, the client connects to one cluster node for file access and to another cluster node for the witness service. The witness service is automatically loaded on the server when necessary and the client is always ready for it, with no additional configuration required on either client or server. Another key aspect was making the client find a witness when connecting to a cluster, if available, without any manual steps. Also, there’s logic in place to find a new witness if the one chosen by the client for some reason is no longer there. Witness proved to be a key component, which is also used for coordinating moves to rebalance a Scale-Out cluster.

Regarding simplicity, no additional IT Administrator action is required on the file server cluster to enable the Witness service. No client configuration is required either.
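
For the curious, a small sketch of peeking at the Witness service from a file server cluster node (client and node names are illustrative):

    # List SMB clients registered with the Witness service on this cluster.
    Get-SmbWitnessClient | Format-Table ClientName, FileServerNodeName, WitnessNodeName

    # Move a client to another node, for example to rebalance a Scale-Out cluster.
    Move-SmbWitnessClient -ClientName HV01 -DestinationNode FS-NODE2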

3. SMB Scale-Out

SMB Scale-Out is the new ability in SMB 3.0, in a cluster configuration, to expose a share on all nodes of a cluster. This active/active configuration makes it possible to scale file server clusters further, without a complex configuration with multiple volumes, shares and cluster resources.

[Screenshot: File Server cluster role options, including “File Server for general use” and “Scale-Out File Server”]

Expert mode: There are specific capabilities that cannot be combined with the Scale-Out file servers, like DFS Replication, File Server quotas and a few others, as outlined in the screenshot above. If you have workloads that require those abilities, you will need to deploy Classic File Servers (called File Server for general use in the screenshot above) to handle them. The Hyper-V over SMB and SQL Server over SMB scenarios are a good fit for Scale-Out File Servers.

More information: http://technet.microsoft.com/en-us/library/hh831349.aspx

3.1. SMB Scale-Out – Volumes, not drive letters

This new Scale-Out capability is enabled by using a couple of features from Failover Clustering: Cluster Shared Volumes (CSV) and the Dynamic Network Name (DNN).

CSV is a special volume that shows on every cluster node simultaneously, so you don’t have to worry about which cluster node can access the data (they all do). These CSV volumes show under the C:\ClusterStorage path, which means you don’t need a drive letter for every cluster disk. Definitely simpler.
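
A quick sketch of what this looks like in practice (volume, share and account names are illustrative):

    # Every CSV volume appears as a folder on every node; no drive letters.
    Get-ChildItem C:\ClusterStorage

    # A Scale-Out share is then just a folder on one of these volumes.
    New-SmbShare -Name VMs -Path C:\ClusterStorage\Volume1\VMs -FullAccess "CONTOSO\HyperVHosts"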

3.2. SMB Scale-Out – Single name, dynamic

The DNN is also much easier to use, since you can have a single name that allows SMB clients to connect to the cluster nodes. A client will be automatically directed to one of the nodes in the cluster in a round-robin fashion. No extra configuration required. Also, there is really no good reason anymore to have multiple names in a single cluster for File Servers. One name to rule them all.

3.3. SMB Scale-Out – Using node IP addresses

In the past, File Server Cluster groups required specific IP addresses in addition to the regular IP addresses for each node (or a set of IP addresses per cluster group, if you have multiple networks). Combined with the fact that we used to need multiple cluster names to leverage multiple nodes, that made for a whole lot of IP addresses to assign. Even if you were using DHCP, that would still be complex.

With SMB Scale-Out, as we explained, you only need one name. In addition to that, we no longer need additional IP addresses besides the regular cluster node IP addresses. The DNN simply points to the existing node IP addresses of every node in the cluster. This means fewer IP addresses to configure and fewer Cluster resources to manage.

4. SMB Multichannel

We introduced the ability to use multiple network interfaces for aggregated throughput and fault tolerance in SMB 3.0, which we call SMB Multichannel.

More information: http://blogs.technet.com/b/josebda/archive/2012/06/28/the-basics-of-smb-multichannel-a-feature-of-windows-server-2012-and-smb-3-0.aspx

4.1. SMB Multichannel – Auto-discovery

We struggled with the best way to configure SMB Multichannel, since there are many possible configurations. In other solutions in the same space, you are sometimes required to specify, on each client, the multiple IP addresses of each server, requiring updates whenever the cluster or networking layout changes. We thought that was too complex.

In probably the best example of the commitment to keeping things simple and easy, we decided to change the SMB protocol to include the ability for the client to query the server network configuration. This allows the client to know the type, speed and IP address of every NIC on the server. With that information at hand, the client can then automatically select the best combination of paths to use. It was not an easy thing to implement, but we thought it was worth doing. The end result was better than what we expected. SMB 3.0 does all the hard work, and we got extensive feedback that this is one of the most impressive new abilities in this version.
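
You can see the result of that exchange from PowerShell; these cmdlets show the interface lists the client and server share with each other (output will obviously vary with your hardware):

    # Interfaces the server advertises to SMB clients (speed, RSS, RDMA).
    Get-SmbServerNetworkInterface

    # The client-side interface view used to pick the best paths.
    Get-SmbClientNetworkInterface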

Expert mode: There are PowerShell cmdlets to define SMB Multichannel Constraints, in case the configuration automatically calculated by SMB Multichannel is less than ideal. We strongly recommend against using this, since it will make SMB unable to adapt to changing configurations. It’s an option, though. See link above for details.
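
If you do need a constraint, the sketch below shows the general shape (server name and interface alias are illustrative):

    # Restrict connections to a given server to one specific interface.
    New-SmbMultichannelConstraint -ServerName FS1 -InterfaceAlias "RDMA-1"

    # Review existing constraints, and remove one when no longer needed.
    Get-SmbMultichannelConstraint
    Remove-SmbMultichannelConstraint -ServerName FS1 -InterfaceAlias "RDMA-1"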

4.2. SMB Multichannel – Transparent failover

With multiple network interfaces used by SMB Multichannel, we now have the ability to survive the failure of a network path, as long as there is at least one surviving path. To keep things simple and easy, this happens automatically and the user and applications do not see any interruption of their work.

For instance, if you have two 10GbE NICs in use on both server and client, SMB Multichannel will use them both to achieve up to 20Gbps throughput. If someone pulls one of the cables on the client, SMB will detect that situation instantly and move all the items queued on the failed NIC to the surviving one. Since SMB is already fully asynchronous, there is no problem with packet ordering. You obviously will work at only 10Gbps after this failure, but this will otherwise be completely transparent.

If an interface fails on the server side, the behavior is a bit different. It will take a TCP/IP timeout to figure out that server interface was lost and this means a few packets might be delayed a few seconds. Once the situation is detected, the behavior described for the client-side interface loss will take place. Again, this is all transparent to users and applications, but it will take a few more seconds to happen.

4.3. SMB Multichannel – Interface arrival

Beyond surviving the failure of an interface, SMB Multichannel can also re-establish the connections when a NIC “comes back”. Again, this happens automatically and typically within seconds, because the SMB client listens to the networking activity of the machine and is notified whenever a new network interface arrives on the system.

Using the example from the item above, if the cable for the second 10GbE interface is plugged back in, the SMB client will get a notification and re-evaluate its policy. This will lead to the second set of channels being brought back and the throughput going up to 20Gbps again, automatically.

If a new interface arrives on the server side, the behavior is slightly different. If you lost one of the server NICs and it comes back, the server will immediately accept connections on the new interface. However, clients with existing connections might take up to 10 minutes to readjust their policies, because they poll the server for configuration changes at 10-minute intervals. After 10 minutes, all clients will be back to full throughput automatically.

4.4. SMB Multichannel – Link-local

There is a class of IP addresses referred to as link-local addresses. These are the IPv4 addresses starting with 169.254 and IPv6 addresses starting with FE80. These special IP addresses are automatically assigned when no manual addresses are configured and no DHCP server is found on the network.  They are not routable (in these configurations, there is no default gateway) and we were tempted to drop support for them in SMB Multichannel, since it would complicate our automatic policy logic.

However, we heard clear feedback that these types of addresses can be useful and that they are the simplest and easiest form of addressing. For certain types of back-end networks, like a cluster internal network, they could be an interesting choice. Because of that, we put in the work to transparently support both IPv4 and IPv6 link-local addresses.

Expert mode: You should never mix static and link-local addresses on a single interface. If that happens, SMB will ignore link-local addresses in favor of the other address.

5. SMB Direct

SMB Direct introduced the ability to use RDMA network interfaces for high throughput with low latency and low CPU utilization.

More information: http://technet.microsoft.com/en-us/library/jj134210.aspx

5.1. SMB Direct – Discovery over TCP/IP

When using these RDMA network interfaces, you need specific connections to leverage their enhanced data transfer mode. However, since not every server out there will have this RDMA capability, the client has to somehow discover when it’s OK to use RDMA connections.

In order to keep things simple for the administrator, all RDMA network interfaces that support SMB Direct, regardless of technology, are required to behave as a regular NIC, with an IP address and a TCP/IP stack.  For the IT administrator, these RDMA-capable interfaces look exactly like regular network interfaces. Their configuration is also done in the same familiar fashion.

The initial SMB negotiation always happens over traditional TCP/IP connections, and SMB Multichannel is used to find out whether a network interface has this RDMA capability. The shift from TCP/IP to RDMA connections happens automatically, shortly after SMB discovers that the client and server are capable of using the 3.0 dialect, that they both support the SMB Multichannel capability and that there are RDMA NICs on both ends.

Expert mode: This process is so simple and transparent that IT Administrators sometimes have difficulty telling that RDMA is actually being used, beyond the fact that they enjoy better throughput and lower CPU usage. You can specifically find out by using PowerShell cmdlets (like Get-SmbMultichannelConnection) and performance counters (there are counters for the RDMA traffic and the SMB Direct connections).
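For instance, a quick check might look like this (a minimal sketch):

# List the current SMB connections per interface; the output includes
# flags indicating whether each side of a connection is RDMA-capable
Get-SmbMultichannelConnection

You can then cross-check with the SMB Direct performance counters to confirm that RDMA traffic is actually flowing.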

5.2. SMB Direct – Fail back to TCP/IP

Whenever SMB detects an RDMA-capable network, it will automatically try to use its RDMA capability. However, if for any reason the SMB client fails to connect using the RDMA path, it will simply continue to use TCP/IP connections instead. As we mentioned before, all RDMA interfaces compatible with SMB Direct are required to also implement a TCP/IP stack, and SMB Multichannel is aware of that.

Expert mode: If SMB is using the fallback TCP/IP connection instead of RDMA connections, you are essentially using a less efficient mode of your RDMA-capable network interface. As explained before, you can detect this condition by using specific PowerShell cmdlets and performance counters. There are also specific events logged when that occurs.

6. VSS for SMB File Shares

Backup and Restore are important capabilities in Storage. In Windows Server, the Volume Shadow Copy Service (VSS) is a key piece of infrastructure for creating application-consistent snapshots of data volumes.

More information: http://blogs.technet.com/b/clausjor/archive/2012/06/14/vss-for-smb-file-shares.aspx

6.1. VSS for SMB File Shares – Same model as block VSS

VSS has a well-established ecosystem for block solutions, and we decided that the best path for file shares was to follow the same model. By offering VSS for SMB File Shares, both the backup applications and the protected applications can leverage their investments in VSS. For IT Administrators already using VSS, the solution looks familiar and easy to understand.

6.2. VSS for SMB File Shares – Stream snapshots from file server

In most backup scenarios, after a snapshot is created, you need to stream the data from the application server to a backup server. That’s, for instance, the model used by Data Protection Manager (DPM). While there are technologies to move snapshots around in some cases, some types of snapshot can only be surfaced on the same host where the volume was originally snapped.

By using VSS for SMB File Shares, the resulting snapshot is represented as a UNC path pointing to the file server. That means you can rely on the file server to stream the backup data, without having to bother the application server for anything else after the snapshot is taken.

7. SMB Encryption

Encryption solutions are an important capability, but they are typically associated with Public Key Infrastructure (PKI) and the management of certificates. This leads many to conclude that encryption is complex and hard to deploy. While SMB can be used in conjunction with IPsec encryption, we thought we needed something much simpler to deploy.

More information: http://blogs.technet.com/b/josebda/archive/2012/06/26/episode-20-of-quot-from-end-to-edge-and-beyond-quot-covers-smb-encryption-in-windows-server-2012-and-smb-3-0.aspx

7.1. SMB Encryption – No PKI or certificates required

With SMB Encryption, the encryption key is derived from the existing session key, so you don’t need PKI or certificates. These keys are not shared on the wire and you can simply enjoy yet another benefit of having already deployed your Active Directory infrastructure. From an IT administrator perspective, enabling encryption simply means checking a box (or adding a single parameter to the PowerShell cmdlet that creates the share). Clients don’t need to do anything, other than support SMB 3.0.

[Screenshot: enabling encryption with a checkbox in the share settings]

Expert mode: The default is to configure encryption per share, but there is also an option to enable encryption for the entire server, configured via PowerShell. You simply need to use Set-SmbServerConfiguration –EncryptData $true.
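For the per-share case, a minimal sketch looks like this (the share name and path are placeholders):

# Create a new share with encryption required (hypothetical name and path)
New-SmbShare -Name Finance -Path D:\Shares\Finance -EncryptData $true

# Or turn on encryption for an existing share
Set-SmbShare -Name Finance -EncryptData $true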

7.2. SMB Encryption – Accelerated Encryption

Probably the second highest concern with encryption solutions is that they make everything run slower. For SMB Encryption, we decided to use a modern algorithm (128-bit AES-CCM) which can be accelerated significantly on CPUs supporting the AES-NI instruction set. That means that, for most modern CPUs like the Intel Core i5 and Intel Core i7, the cost of encryption becomes extremely low. That removes another potential blocker for the adoption of encryption: the complexity associated with planning additional processing capacity to handle it. It’s also nice to note that AES-CCM provides integrity, in addition to privacy.

There’s no need to configure any settings on the client or the server to benefit from this accelerated encryption.

8. Server Manager

For IT Administrators handling individual shares using a GUI, Windows Server 2012 provides a new Server Manager tool. This tool allows you to create your SMB file shares using a simple set of wizards.

More information: http://technet.microsoft.com/en-us/library/hh831456.aspx

8.1. Server Manager – Simple Wizards

In Server Manager, you get an “SMB Share – Quick” wizard that creates a simple share, an “SMB Share – Advanced” wizard that creates shares with additional options (like quotas and screening) and an “SMB Share – Applications” wizard specifically for creating shares intended for server-to-server workloads (like Hyper-V over SMB or SQL Server over SMB).

[Screenshot: the New Share Wizard profile selection in Server Manager]

These wizards are exactly what you would expect from Windows: a set of simple questions that are easy to follow, with plenty of explanatory text around them. It’s also important to mention that you can run Server Manager against remote servers. In fact, you can even run it on the IT Administrator’s Windows 8 PC or tablet.

8.2. Server Manager – Fewer knobs

When creating the new wizards, we tried as much as possible to simplify the workflow. The simple share wizard asks only a few questions: what disk the share will sit on, what’s the name of the share, what are the options you want to enable and who has access. For each page, we provided a simple selection to make the process as straightforward as possible.

For instance, in the selection of the location of the share, we ask you to select a server, then a volume (based on what’s available on the server you targeted). You are shown information on the type of server and on the capacity/free space of each volume to help with your selections. You don’t even have to create a folder structure. The wizard will automatically create a folder with the specified share name under the “Shares” folder on the selected volume. You can still specify a full path if you want.

[Screenshot: the share location page of the New Share Wizard]

When we started thinking about the options page, we had a huge list of knobs and switches. In the end, the options page was trimmed down to show only the four most commonly used options, graying them out if not relevant for that type of wizard. We also spent lots of time thinking of the right default selections for these options. For instance, in the “SMB Share – Applications” options page, only encryption is offered. In that case, Continuous Availability is always selected, while Access-based enumeration and BranchCache are always deselected.

9. SMB PowerShell

While most people looking for the most simplicity and ease of use will likely be using the Server Manager GUI, most IT Administrators managing a significant number of shares will be using PowerShell. PowerShell is also a tool that will let you go deeper into understanding the inner workings of SMB 3.0. However, we wanted to make SMB PowerShell simple enough that any IT Administrator could understand it.

More information: http://blogs.technet.com/b/josebda/archive/2012/06/27/the-basics-of-smb-powershell-a-feature-of-windows-server-2012-and-smb-3-0.aspx

9.1. SMB PowerShell – Permissions for a New Share

The New-SmbShare cmdlet, used to create new SMB file shares, is very straightforward. You provide a share name, a path and optionally some share settings like access-based enumeration or encryption. Then there’s permissions. At first, we considered requiring the administrator to use a second cmdlet (Grant-SmbShareAccess) to add the required permissions to the share. However, since some set of permissions is always required for a share, we thought of a way to add permissions in the same cmdlet you use to create the share. We did this by using the –FullAccess, –ChangeAccess, –ReadAccess and –NoAccess parameters. This makes the entire share creation a single line of PowerShell that is easy to write and understand.

For instance, to create a share for Hyper-V over SMB using a couple of Hyper-V hosts, you can use:
New-SmbShare –Name VMS –Path C:\ClusterStorage\Volume1\VMS –FullAccess Domain\HVAdmin, Domain\HV1$, Domain\HV2$

9.2. SMB PowerShell – Permissions on the Folder

While we believe we found a simple and easy way to declare the share permissions, anyone experienced with SMB knows that we also need to grant permissions at the file system level. Essentially, we need to grant the NTFS permissions on the folder behind the SMB share. In most cases, we want the NTFS folder permissions to match the SMB share permissions. However, we initially overlooked the fact that we were requiring the IT Administrator to declare those permissions twice. That was additional, repetitive work that we could avoid. And granting permissions at the NTFS level is a little more complex, because there are options for inheritance and a more granular permission model than the SMB permissions.

We introduced an option to address this. The basic idea is to provide a preconfigured set of NTFS permissions based on the SMB share permissions. The goal was to create the Access Control List (ACL) automatically (we called it the “Preset Path ACL”), providing the IT Administrator with a simple way to apply the SMB share ACL to the NTFS folder. It is an optional step, but we believe it is useful.

Here’s what that command line looks like, assuming your share is called “VMS”:
(Get-SmbShare –Name VMS).PresetPathAcl | Set-Acl

9.3. SMB PowerShell – Cluster type based on disk type

Another item that was high on our list was automatically detecting if the share was sitting on a Nonclustered File Server, on a Classic File Server cluster or on a Scale-Out File Server. This is not a trivial thing, since the same Windows Server is actually capable of hosting all three types of file shares at the same time. At first, we were considering requiring the IT Administrator to explicitly state what type of file share should be used and exactly what cluster name (scope name) should be used.

In the end, we decided that we could infer that information in almost all cases. A Scale-Out file share always uses a folder on a Cluster Shared Volume (CSV), while a Classic clustered file share always uses a traditional cluster disk, and a nonclustered file share uses a disk that is not part of the cluster. With that in mind, the New-SmbShare cmdlet (or, more precisely, the infrastructure behind it) is able to find the scope name for a given share without requiring the IT Administrator to specify it.

Type of Disk               | Type of Share                 | Scope Name
Local disk (non-clustered) | Nonclustered file share       | N/A
Traditional cluster disk   | Classic clustered file share  | Name resource on Cluster Group
Cluster Shared Volume      | Scale-Out file share          | Distributed Network Name (DNN)

Expert mode: The only situation where you must specify the Scope Name while creating a new SMB file share is when there are two DNNs in a cluster. We recommend that you use only one DNN per cluster. In fact, we don’t see any good reason to use more than one DNN per cluster.
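If you do end up with more than one DNN, a sketch of specifying the scope explicitly could look like this (the share, path and scope names below are placeholders):

# Hypothetical example: create a share scoped to the DNN named FS-SO2
New-SmbShare -Name VMS2 -Path C:\ClusterStorage\Volume2\VMS2 -ScopeName FS-SO2 -FullAccess Domain\HVAdmin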

9.4. SMB PowerShell – Shares, Sessions and Open Files across Scale-Out nodes

SMB Scale-Out introduces the concept of multiple file server cluster nodes hosting the same shares and spreading client connections across nodes. However, for an IT Administrator managing this kind of cluster, it might seem more complicated. To avoid that, we made sure that any node of a Scale-Out File Server answers consistently for the entire cluster. For the list of shares that was easy, since every node (by definition) has the full list of shares. For sessions and open files it was a bit more complex, and we actually had to make the node you’re talking to gather the information from the other nodes.

In the end, the IT Administrator can rest assured that, when using Get-SmbSession or Get-SmbOpenFile on a Scale-Out File Server, what is provided is a combined view of the cluster, and there’s no need to query every node and collate that information manually.
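For instance (a minimal sketch; run these on any node of the Scale-Out File Server):

# Both cmdlets return the combined view for the entire cluster
Get-SmbSession
Get-SmbOpenFile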

9.5. SMB PowerShell – Connection

With all these new capabilities in SMB 3.0 and the potential in the protocol to negotiate down to previous versions of SMB, it’s important to understand what dialect was negotiated. There is a simple “Get-SmbConnection” cmdlet, which you should run on an SMB client, that shows information on which SMB connections are currently established.

[Screenshot: Get-SmbConnection output]

This can tell you which shares you are connected to (for a given server in the example above, but you can also omit the -ServerName parameter to see all connections) and which version of SMB (dialect) you are using (SMB 3.0 in the example above). You can also see how many files you have opened and what credentials you are offering to the server. If you look at all the data returned (by piping the output to “Select *” as shown below), you get the details on whether the share is continuously available or if the connection is encrypted.

[Screenshot: Get-SmbConnection output piped to “Select *”]

You might argue this is not the simplest command output you have ever seen. However, it is much simpler than what we had before. To find out this type of information in previous releases, you would need to break out a network tracing tool like Network Monitor or Wireshark, start capturing network packets and analyze the traffic to find the negotiation messages.
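To recap, the two commands illustrated by the screenshots above would look roughly like this (the server name is a placeholder):

# Summary view of the current connections to a given server
Get-SmbConnection -ServerName FS1

# Full detail, including the continuous availability and encryption flags
Get-SmbConnection -ServerName FS1 | Select-Object *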

10. Conclusion

I hope this post clarifies our claims of simplicity and ease of use in SMB 3.0 and the Windows Server 2012 File Server. It is always harder to code a solution that “just works” with minimal administrator intervention, but we do believe this is something worth doing.

I would love to hear back from you on the progress we made in Windows Server 2012. Were these the right decisions? Did we miss anything important? Are there areas where things are still complex or hard to use? Let me know through the blog post comments…

SNIA Tutorials cover a wealth of Storage knowledge (SNW Fall 2012 Update)


I’m always amazed by the vast amount of knowledge being posted online for free download. If you are an IT Professional who deals with Storage (or if you manage one of them), check out the high-quality tutorials just posted online by the Storage Network Industry Association (SNIA).

These peer-reviewed, vendor-neutral tutorials, offered by the SNIA Education Committee, cover a variety of very current Storage-related topics. They were delivered live during the recent Storage Networking World conference in Santa Clara, CA (SNW Fall 2012). I delivered one of them and I was also able to attend a few of them in person.

 

Here’s a snapshot of what they covered, organized by track:

Track: Big Data

  • Introduction to Analytics and Big Data – Hadoop (Rob Peglar)
  • Protecting Data in the "Big Data" World (Thomas Rivera)
  • How to Cost Effectively Retain Reference Data for Analytics and Big Data (Molly Rector)
  • Big Data Storage Options for Hadoop (Dr. Sam Fineberg)
  • Massively Scalable File Storage (Philippe Nicolas)
  • Can You Manage at Petabyte Scale? (John Webster)

Track: Cloud and Emerging Technologies

  • Interoperable Cloud Storage with the CDMI Standard (Mark Carlson)
  • The Business Case for the Cloud (Alex McDonald)
  • Archiving and Preservation in the Cloud: Business Case, Challenges and Best Practices (Chad Thibodeau, Sebastian Zangaro)
  • Enterprise Architecture and Storage Clouds (Marty Stogsdill)

Track: Data Protection and Management

  • Introduction to Data Protection: Backup to Tape, Disk and Beyond (Jason Iehl)
  • Trends in Data Protection and Restoration Technologies (Michael Fishman)
  • Advanced Data Reduction Concepts (Gene Nagle, Thomas Rivera)

Track: File Systems and File Management

  • The File Systems Evolution (Thomas Rivera)
  • SMB Remote File Protocol, including SMB 3.0 (Jose Barreto)

Track: Green Storage

  • Green Storage and Energy Efficiency (Carlos Pratt)
  • Green Storage - the Big Picture ("Green is More Than kWh!") (SW Worth)

Track: Networking

  • Technical Overview of Data Center Networks (Dr. Joseph White)
  • Single and Multi Switch Designs with FCoE (Chad Hintz)
  • The Unintended Consequences of Converged Data Center Deployment Models (Simon Gordon)
  • FCoE Direct End-Node to End-Node (a/k/a FCoE VN2VN) (John Hufferd)
  • PCI Express IO Virtualization Overview (Ron Emerick)
  • How Infrastructure as a Service (IaaS) and Software Defined Networks (SDN) will Change the Data Center (Samir Sharma)
  • IP Storage Protocols: iSCSI (John Hufferd)
  • Fibre Channel Over Ethernet (FCoE) (John Hufferd)

Track: Professional Development

  • Social Media and the IT Professional - Are You a Match? (Marty Foltyn)
  • Reaction Management and Self-facilitation Techniques (David Deming)

Track: Security

  • Practical Storage Security with Key Management (Russ Fellows)
  • Unmasking Virtualization Security (Eric Hibbard)
  • Data Breaches and the Encryption Safe Harbor (Eric Hibbard)
  • Storage Security - the ISO/IEC Standard (Eric Hibbard)
  • A Hype-free Stroll Through Cloud Security (Eric Hibbard)
  • Got Lawyers? They've Got Storage and ESI in the Cross-hairs! (Eric Hibbard)

Track: Solid State Storage

  • The Benefits of Solid State in Enterprise Storage Systems (David Dale)
  • An In-depth Look at SNIA's Enterprise Solid State Storage Test Specification (PTS v1.1) (Easen Ho)
  • Realities of Solid State Storage (Luanne Dauber)
  • What Happens When Flash Moves to Triple Level Cell (TLC) (Luanne Dauber)
  • NVMe the nextGen Interface for Solid State Storage (Anil Vasudeva)
  • SCSI Express - Fast & Reliable Flash Storage for the Enterprise (Marty Czekalski)

Track: Storage and Storage Management

  • What's Old is New Again - Storage Tiering (Kevin Dudak)
  • Simplified Integration and Management in Multi-Vendor SAN Environments (Chauncey Schwartz)
  • SAS: The Emerging Storage Fabric (Marty Czekalski)

Track: Virtualization/Applications

  • Virtualization Practices: Providing a Complete Virtual Solution in a Box (Jyh-shing)
  • VDI Optimization - Real World Learnings (Russ Fellows) 

 

I would encourage all my industry colleagues to check these out!

To view the slides for each one of the tutorials listed above in PDF format, visit http://www.snia.org/education/tutorials/2012/fall. Be sure to check the Legal Notices on the documents about how you can use them.

For more information about SNIA, check http://www.snia.org/. It's also never too early to plan to check the next wave, to be delivered during the next SNW in the Spring.

Windows Server 2012 File Server Tip: Switch to the High Performance power profile


When you install a fresh copy of Windows Server 2012 and configure it with the File Server role, the default Power Setting balances power efficiency and performance.

For this reason, even if you have a few high speed network interfaces and the fastest SSD storage out there, you might not be getting the very best IOPS and throughput possible.

To get the absolute best performance from your file server, you can set the server to the "High Performance" power profile.

To configure this using a GUI, go to the Start Menu, search for "Choose a Power Plan" under Settings, then select the "High Performance" option.

[Screenshot: selecting the “High Performance” power plan]

To configure this from a command line, use "POWERCFG.EXE /S SCHEME_MIN".
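To verify which plan is active afterwards, you can run "POWERCFG.EXE /GETACTIVESCHEME".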

Windows Server 2012 File Server Tip: Make sure your network interfaces are RSS-capable


The new SMB Multichannel feature improves performance for network interfaces by using multiple TCP connections for a single network interface automatically. SMB will only do this if your network interface reports itself as RSS-capable, which means it can use Receive Side Scaling. You can check that with the Get-SmbServerNetworkInterface or the Get-SmbClientNetworkInterface PowerShell cmdlets. See below:

[Screenshot: Get-SmbServerNetworkInterface and Get-SmbClientNetworkInterface output]

Without this capability present on the NIC, using additional TCP connections is not helpful, so SMB will only use a single connection, as it did in previous versions. Virtually all server-class network interfaces should report themselves as RSS-capable these days, but some desktop-class NICs might still say they don’t support RSS. I’ve also seen a few cases where an old driver or a misconfiguration might lead even a 10GbE NIC to report itself as non-RSS-capable. Another possibility is that the NIC reports itself as RSS-capable but with only one queue, which effectively means that using multiple connections will not help. Those NICs are treated by SMB as non-RSS-capable, even if they are reported as RSS-capable by the networking stack. You can verify that using the Get-NetAdapterRSS cmdlet. See below:

[Screenshot: Get-NetAdapterRSS output]
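As a quick sketch, you can focus on the RSS state and queue count like this (assuming the default property names returned by Get-NetAdapterRss):

# Show the RSS state and number of receive queues for each adapter
Get-NetAdapterRss | Format-List Name, Enabled, NumberOfReceiveQueues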

In the example above, we have four adapters: two RDMA adapters (which also report themselves as RSS-capable), one 1Gbps adapter and one 100Mbps adapter. One of the two RDMA adapters has no cable connected (see its status showing as “Not Present”). The 100Mbps adapter is an old one that does not support RSS. The 1GbE adapter supports RSS, but only offers one queue, so SMB treats it as non-RSS.

SMB will always prefer RDMA-capable NICs, then RSS-capable NICs, then all other NICs. In general that works fine, but in certain configurations it can lead to unexpected behavior. For instance, SMB might prefer a 1GbE NIC over a 10GbE NIC if the 10GbE NIC is reported as non-RSS and the 1GbE NIC reports RSS support. In this case, first confirm that the 10GbE NIC has the latest drivers (you might want to check Windows Update or the manufacturer’s web site). Also, make sure that the NIC configuration wasn’t changed to disable RSS. This can be done using PowerShell cmdlets (Enable-NetAdapterRss and Disable-NetAdapterRss) and is also commonly found under the advanced properties pages for the NIC. See below:

[Screenshot: the NIC advanced properties page with the RSS setting]
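A sketch of re-enabling RSS from PowerShell (the adapter name is a placeholder):

# Re-enable RSS on a specific adapter, then confirm the new state
Enable-NetAdapterRss -Name "Ethernet 2"
Get-NetAdapterRss -Name "Ethernet 2"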

Getting the right type of NIC with the right RSS-capable driver is always your best choice. However, if you really can’t make your preferred NIC show up as RSS-capable, you might need to take some more drastic measures like configuring SMB Multichannel Constraints or disabling RSS on the other NICs.

For more information about SMB Multichannel, see The basics of SMB Multichannel, a feature of Windows Server 2012 and SMB 3.0.


Windows Server 2012 File Server Tip: Use multiple subnets when deploying SMB Multichannel in a cluster


SMB Multichannel will let you use multiple network interfaces at once for added throughput and network fault tolerance. When using it with non-clustered file servers, you have the most flexible options, including using multiple NICs on the same subnet. In fact, you can have multiple NICs on the same server configured automatically via DHCP on the same subnet.

However, when using a clustered file server, you must configure a separate subnet for every NIC for SMB Multichannel to use the multiple paths simultaneously. This is because Failover Clustering will only use one IP address per subnet, even if you have multiple NICs on that subnet. This is true for both classic file server clusters and the new Scale-Out file server clusters.

For a classic file server cluster, you must configure a cluster IP address for every network interface in every file server cluster group. This is not changed from previous releases, but it is more important now due to the new capabilities of SMB Multichannel. Here’s what the configuration should look like:

[Screenshot: classic file server cluster configuration with one IP address per subnet]

 

For a scale-out file server, you use the node IP addresses instead of specific IP addresses per file server cluster group. However, the same rule applies: you must have different subnets for the cluster to offer multiple IP addresses for a single cluster node. You can easily verify that by using the Get-SmbServerNetworkInterface cmdlet. See below:

 

[Screenshot: Get-SmbServerNetworkInterface output showing scopes and their IP addresses]

 

In the output above, you see that the “*” scope shows all IP addresses on all interfaces. That’s the scope for any non-clustered file shares on the machine. If the server is not in a Failover Cluster, that’s all you should see. Next you see a number of scopes, each corresponding to a Cluster Name and the IP addresses associated with it. The FST2-FS name, the classic file server cluster shown in the first picture, is shown with its three distinct IP addresses. The FST2-SO name is associated with a Scale-Out File Server cluster, which also uses three IP addresses (the node addresses).
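For instance (a minimal sketch, assuming the default property names; the scope and address values will vary in your environment):

# One row per scope/interface pair; each cluster name should list one IP address per subnet
Get-SmbServerNetworkInterface | Sort-Object ScopeName | Format-Table ScopeName, IpAddress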

Windows Server 2012 File Server Tip: Disable 8.3 Naming (and strip those short names too)


This has been a performance tip for File Servers for some time now: disable short names. There are big performance savings in disabling 8.3 naming and in removing existing short names on a volume. Here’s a diagram from a presentation I delivered last year:

[Chart: file server performance with and without 8.3 naming]

The old “8dot3 naming” convention has been obsolete for a while, and most people can’t even remember the time when 8 characters was the limit for a name. In fact, recent versions of Windows Server don’t even enable 8.3 naming when you format new data volumes. However, as you can see in the chart above, it takes time to generate these names and enumerate them.

But, if short names are no longer enabled on new data volumes, why even talk about 8dot3 naming anymore? Well, the main problem is that most people upgrade to new versions of Windows Server without reformatting their data volumes. In fact, a very common upgrade path is to simply install a new server with Windows Server 2012 and reconfigure your SAN to expose those old LUNs to the new server. When you do that, if a volume was formatted back in the Windows Server 2003 days, it carries over the short name setting.

You can use the FSUTIL.EXE tool to query the state of a volume. Just use “FSUTIL.EXE 8dot3name query D:”.

PS C:\Windows\system32> FSUTIL.EXE 8dot3name query D:
The volume state is: 0 (8dot3 name creation is enabled).
The registry state is: 2 (Per volume setting - the default).

Based on the above two settings, 8dot3 name creation is enabled on D:

As you can see in the output above, there is a per-volume setting and a system-wide setting. While the default for new volumes is to disable short names, the default system-wide setting is to let you control 8.3 naming for each volume individually. In the example above, you have a volume configured for 8.3 naming. You can easily disable it using FSUTIL. Here’s an example:

PS C:\Windows\system32> FSUTIL.EXE 8dot3name set D: 1
Successfully disabled 8dot3name generation on D:

PS C:\Windows\system32> FSUTIL.EXE 8dot3name query D:
The volume state is: 1 (8dot3 name creation is disabled).
The registry state is: 2 (Per volume setting - the default).

Based on the above two settings, 8dot3 name creation is disabled on D:

Simple, huh? But you’re not quite done yet. New files and folders won’t get a short name anymore, but you could have tons of old files and folders that were created with both the short name and the long name versions stored in the volume. You could also go ahead and strip the existing short names using a specific FSUTIL option. The only potential problem is that you might have some old software that still has references to those old 8.3 names (some old installers, for instance, do that). However, the FSUTIL tool is careful enough to actually check in the registry for any such issues. Here’s a full example of a series of commands that let you enable, disable and then strip short file names from a volume.

1. Volume is empty

C:\Users\jose>dir K:\

Volume in drive K is TestVHD
Volume Serial Number is F8F8-0E8D

Directory of K:\

File Not Found

2. Enable 8.3 name creation (default for Windows Server 2008 R2 and Windows Server 2012 is off)

C:\Users\jose>fsutil 8dot3name set K: 0
Successfully set 8dot3name behavior.

C:\Users\jose>fsutil 8dot3name query K:
The volume state for Disable8dot3 is 0 (8dot3 name creation is enabled).
The registry state of NtfsDisable8dot3NameCreation is 2, the default (Volume level setting).
Based on the above two settings, 8dot3 name creation is enabled on K:.

3. Create some files using both short names and long names.

C:\Users\jose>echo Test data  1>K:\Small.TXT

C:\Users\jose>for /L %a in (1 1 10) do copy K:\Small.TXT "K:\File Number %a.TXT"

C:\Users\jose>copy K:\Small.TXT "K:\File Number 1.TXT"
        1 file(s) copied.

C:\Users\jose>copy K:\Small.TXT "K:\File Number 2.TXT"
        1 file(s) copied.

C:\Users\jose>copy K:\Small.TXT "K:\File Number 3.TXT"
        1 file(s) copied.

C:\Users\jose>copy K:\Small.TXT "K:\File Number 4.TXT"
        1 file(s) copied.

C:\Users\jose>copy K:\Small.TXT "K:\File Number 5.TXT"
        1 file(s) copied.

C:\Users\jose>copy K:\Small.TXT "K:\File Number 6.TXT"
        1 file(s) copied.

C:\Users\jose>copy K:\Small.TXT "K:\File Number 7.TXT"
        1 file(s) copied.

C:\Users\jose>copy K:\Small.TXT "K:\File Number 8.TXT"
        1 file(s) copied.

C:\Users\jose>copy K:\Small.TXT "K:\File Number 9.TXT"
        1 file(s) copied.

C:\Users\jose>copy K:\Small.TXT "K:\File Number 10.TXT"
        1 file(s) copied.

4. Use “DIR /X” to view both short names and long names. Note that Small.TXT does not get a short name, since its name is already short.

C:\Users\jose>dir K:\ /x

Volume in drive K is TestVHD
Volume Serial Number is F8F8-0E8D

Directory of K:\

02/26/2011  04:37 PM                12 FILENU~1.TXT File Number 1.TXT
02/26/2011  04:37 PM                12 FI8FE5~1.TXT File Number 10.TXT
02/26/2011  04:37 PM                12 FILENU~2.TXT File Number 2.TXT
02/26/2011  04:37 PM                12 FILENU~3.TXT File Number 3.TXT
02/26/2011  04:37 PM                12 FILENU~4.TXT File Number 4.TXT
02/26/2011  04:37 PM                12 FICD2E~1.TXT File Number 5.TXT
02/26/2011  04:37 PM                12 FI9706~1.TXT File Number 6.TXT
02/26/2011  04:37 PM                12 FI7C9D~1.TXT File Number 7.TXT
02/26/2011  04:37 PM                12 FIC1D1~1.TXT File Number 8.TXT
02/26/2011  04:37 PM                12 FI1706~1.TXT File Number 9.TXT
02/26/2011  04:37 PM                12              Small.TXT

              11 File(s)            132 bytes
               0 Dir(s)  10,649,354,240 bytes free

5. We now disable short names for the volume

C:\Users\jose>fsutil 8dot3name set K: 1
Successfully set 8dot3name behavior.

C:\Users\jose>fsutil 8dot3name query K:
The volume state for Disable8dot3 is 1 (8dot3 name creation is disabled).
The registry state of NtfsDisable8dot3NameCreation is 2, the default (Volume level setting).
Based on the above two settings, 8dot3 name creation is disabled on K:.

6. Next, we create a few more files with long names.

C:\Users\jose>for /L %a in (11 1 20) do copy K:\Small.TXT "K:\File Number %a.TXT"

C:\Users\jose>copy K:\Small.TXT "K:\File Number 11.TXT"
        1 file(s) copied.

C:\Users\jose>copy K:\Small.TXT "K:\File Number 12.TXT"
        1 file(s) copied.

C:\Users\jose>copy K:\Small.TXT "K:\File Number 13.TXT"
        1 file(s) copied.

C:\Users\jose>copy K:\Small.TXT "K:\File Number 14.TXT"
        1 file(s) copied.

C:\Users\jose>copy K:\Small.TXT "K:\File Number 15.TXT"
        1 file(s) copied.

C:\Users\jose>copy K:\Small.TXT "K:\File Number 16.TXT"
        1 file(s) copied.

C:\Users\jose>copy K:\Small.TXT "K:\File Number 17.TXT"
        1 file(s) copied.

C:\Users\jose>copy K:\Small.TXT "K:\File Number 18.TXT"
        1 file(s) copied.

C:\Users\jose>copy K:\Small.TXT "K:\File Number 19.TXT"
        1 file(s) copied.

C:\Users\jose>copy K:\Small.TXT "K:\File Number 20.TXT"
        1 file(s) copied.

7. The volume now has a mix of files where short names are generated or not generated.

C:\Users\jose>dir K:\ /x

Volume in drive K is TestVHD
Volume Serial Number is F8F8-0E8D

Directory of K:\

02/26/2011  04:37 PM                12 FILENU~1.TXT File Number 1.TXT
02/26/2011  04:37 PM                12 FI8FE5~1.TXT File Number 10.TXT
02/26/2011  04:37 PM                12              File Number 11.TXT
02/26/2011  04:37 PM                12              File Number 12.TXT
02/26/2011  04:37 PM                12              File Number 13.TXT
02/26/2011  04:37 PM                12              File Number 14.TXT
02/26/2011  04:37 PM                12              File Number 15.TXT
02/26/2011  04:37 PM                12              File Number 16.TXT
02/26/2011  04:37 PM                12              File Number 17.TXT
02/26/2011  04:37 PM                12              File Number 18.TXT
02/26/2011  04:37 PM                12              File Number 19.TXT
02/26/2011  04:37 PM                12 FILENU~2.TXT File Number 2.TXT
02/26/2011  04:37 PM                12              File Number 20.TXT
02/26/2011  04:37 PM                12 FILENU~3.TXT File Number 3.TXT
02/26/2011  04:37 PM                12 FILENU~4.TXT File Number 4.TXT
02/26/2011  04:37 PM                12 FICD2E~1.TXT File Number 5.TXT
02/26/2011  04:37 PM                12 FI9706~1.TXT File Number 6.TXT
02/26/2011  04:37 PM                12 FI7C9D~1.TXT File Number 7.TXT
02/26/2011  04:37 PM                12 FIC1D1~1.TXT File Number 8.TXT
02/26/2011  04:37 PM                12 FI1706~1.TXT File Number 9.TXT
02/26/2011  04:37 PM                12              Small.TXT

              21 File(s)            252 bytes
               0 Dir(s)  10,649,354,240 bytes free

8. We now use the command to strip the short names on the volume

C:\Users\jose>fsutil 8dot3name strip /s /v K:\

Scanning registry...
Registry Data

  Registry Key Path
-----------------------------------------------------------------------
Total affected registry keys:                   0

Stripping 8dot3 names...

8dot3 Name      FileId                Full Path
-------------   -------------------   ---------------------------------
FILENU~1.TXT    0x7000000000024       "K:\File Number 1.TXT"
FI8FE5~1.TXT    0xa00000000002d       "K:\File Number 10.TXT"
FILENU~2.TXT    0xb000000000025       "K:\File Number 2.TXT"
FILENU~3.TXT    0xa000000000026       "K:\File Number 3.TXT"
FILENU~4.TXT    0xa000000000027       "K:\File Number 4.TXT"
FICD2E~1.TXT    0xa000000000028       "K:\File Number 5.TXT"
FI9706~1.TXT    0xa000000000029       "K:\File Number 6.TXT"
FI7C9D~1.TXT    0xa00000000002a       "K:\File Number 7.TXT"
FIC1D1~1.TXT    0xa00000000002b       "K:\File Number 8.TXT"
FI1706~1.TXT    0xa00000000002c       "K:\File Number 9.TXT"

Total files and directories scanned:           21
Total 8dot3 names found:                       10
Total 8dot3 names stripped:                    10

For details on the operations performed please see the log:
  "C:\Users\jose\AppData\Local\Temp\8dot3_removal_log @(GMT 2011-02-27 00-37-34).log"

9. Finally, all short names are gone…

C:\Users\jose>dir K:\ /x

Volume in drive K is TestVHD
Volume Serial Number is F8F8-0E8D

Directory of K:\

02/26/2011  04:37 PM                12              File Number 1.TXT
02/26/2011  04:37 PM                12              File Number 10.TXT
02/26/2011  04:37 PM                12              File Number 11.TXT
02/26/2011  04:37 PM                12              File Number 12.TXT
02/26/2011  04:37 PM                12              File Number 13.TXT
02/26/2011  04:37 PM                12              File Number 14.TXT
02/26/2011  04:37 PM                12              File Number 15.TXT
02/26/2011  04:37 PM                12              File Number 16.TXT
02/26/2011  04:37 PM                12              File Number 17.TXT
02/26/2011  04:37 PM                12              File Number 18.TXT
02/26/2011  04:37 PM                12              File Number 19.TXT
02/26/2011  04:37 PM                12              File Number 2.TXT
02/26/2011  04:37 PM                12              File Number 20.TXT
02/26/2011  04:37 PM                12              File Number 3.TXT
02/26/2011  04:37 PM                12              File Number 4.TXT
02/26/2011  04:37 PM                12              File Number 5.TXT
02/26/2011  04:37 PM                12              File Number 6.TXT
02/26/2011  04:37 PM                12              File Number 7.TXT
02/26/2011  04:37 PM                12              File Number 8.TXT
02/26/2011  04:37 PM                12              File Number 9.TXT
02/26/2011  04:37 PM                12              Small.TXT

              21 File(s)            252 bytes
               0 Dir(s)  10,649,354,240 bytes free

C:\Users\jose>


Windows Server 2012 File Server Tip: Continuous Availability does not work with volumes using 8.3 naming or NTFS compression


When deploying the Continuous Availability feature of the new File Server clusters in Windows Server 2012, be careful not to use volumes that have either 8.3 naming or NTFS compression enabled.

If you have these features enabled on the volume, the File Server won’t be able to properly track the ongoing operations on the volume using the Resume Key Filter and Continuous Availability won’t work.

You might see this issue in the SMB Server event log as event ID 1801: “CA failure - Failed to set continuously available property on a new or existing file share as Resume Key filter is not started.”

You will also see an associated error in the ResumeKeyFilter event log: event ID 1008: “The filter failed to attach to a volume because the volume supports short names but the filter does not support short names.”

To fix this issue, you need to disable 8.3 naming and NTFS compression on the volumes to be used on those File Server clusters.
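A sketch of the two fixes from an elevated prompt, using D: as a placeholder volume (test on a non-production volume first):

# Disable 8.3 name creation on the volume, then strip existing short names
fsutil 8dot3name set D: 1
fsutil 8dot3name strip /s /v D:

# Uncompress all existing files and folders on the volume
compact /u /s:D:\ /i /q *.*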


Windows Server 2012 File Server Tip: Enable CSV Caching on Scale-Out File Server Clusters


Cluster Shared Volumes (CSV) in Windows Server 2012 has a great new feature that allows using system memory as a write-through cache. Since Scale-Out File Server Clusters use CSV, enabling this CSV cache has a huge impact on the performance of this type of File Server. This directly benefits common scenarios like Hyper-V over SMB, especially for roles that use differencing disks, like Virtual Desktop Infrastructure (VDI). The base VHD file is read frequently in these scenarios and will typically end up cached in memory.

Even with a modest amount of memory dedicated to the cache, you will see significant performance improvements. You can start with just 512MB of RAM and do some testing. If you have more memory, you can dedicate more for the cache. In fact, you can use up to 20% of the total physical RAM for this cache. For instance, with a file server with 32GB of RAM, you could dedicate 6GB of memory for caching.

A recent TechEd presentation by File Server PM Claus Joergensen has shown the impact of using the CSV cache in a VDI environment:

[Chart: VDI boot times with and without the CSV cache, from the TechEd presentation]

As you can see above, when using 8GB of RAM for caching in a Hyper-V over SMB scenario with VDI, boot time for a set of 5,120 VMs (deployed using 16 Hyper-V hosts, 320 VMs per host) was dramatically improved. The average boot time for a VM went from 211 seconds without the CSV cache to just 29 seconds with the CSV cache, with 90% of VMs booting in less than 40 seconds.  You can see the full video for this TechEd presentation at http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/WSV410.

You can read more about the CSV cache, including how to enable it, in this blog post by Cluster PM Elden Christensen: http://blogs.msdn.com/b/clustering/archive/2012/03/22/10286676.aspx.
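Based on the steps described in that post, enabling a 512MB CSV cache in Windows Server 2012 is roughly a two-step process. This is a sketch; the property and parameter names below are taken from that post, the CSV name is a placeholder, and you should check the post for the authoritative details:

# Define the cache size, in MB, as a cluster-wide property
(Get-Cluster).SharedVolumeBlockCacheSizeInMB = 512

# Enable the block cache on a specific Cluster Shared Volume
Get-ClusterSharedVolume "Cluster Disk 1" | Set-ClusterParameter CsvEnableBlockCache 1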

Windows Server 2012 File Server Tip: Avoid loopback configurations for Hyper-V over SMB


When deploying Hyper-V over SMB (storing your live configuration and live VHD/VHDX files on an SMB 3.0 file share), make sure you don’t use a loopback configuration. A loopback configuration means that the Hyper-V role and the File Server role are on the same computer. While you can actually have both roles on the same machine, you should not use a UNC path pointing back to the same server. This is not a supported configuration.

The main reason for this restriction is the way permissions need to be configured for Hyper-V over SMB. You need to grant access on the file share to the computer account of the Hyper-V host. Well, when you use a loopback configuration, this permission model does not work (the System account used by Hyper-V only gets translated to a computer account when you’re accessing a remote file share). The end result will be an “Access Denied” error.

Loopback configurations also include deploying the Hyper-V role and the File Server role in the same Failover Cluster. While this can work when the VM is running on one node and the File Server is running on another node of the same cluster, it will fail if both roles happen to land on the same node at the same time. You could in theory make this work by configuring preferred nodes for each role, effectively making sure they never land on the same cluster node, but you really should configure two separate clusters: one for Hyper-V hosts and the other for File Servers.

If you really do need to have the Hyper-V role and the File Server role running on the same box, it’s really not a problem. Just use a local path with a drive letter (X:\Folder\File.VHDX) instead of a UNC path (\\server\share\folder\File.VHDX). The same goes for the cluster configuration: just use the local path to the cluster disk or cluster shared volume.
