Module 1: VPLEX Product and Technology Details

Objectives:
- Describe common VPLEX terminology and configurations
- Describe the VPLEX product hardware and software architecture
- Describe how VPLEX integrates into an existing environment
- List and describe various use cases for VPLEX
- Describe the life of an I/O operation in VPLEX

VPLEX Product and Technology Details
VPLEX Terminology, Capabilities, and Use Cases
- List and define common VPLEX terminology
- Describe VPLEX architecture, capabilities, Local and Distributed Federation, and VPLEX use cases

VPLEX Terminology
VPLEX Terminology (continued)
VPLEX Terminology (continued)
EMC Vision for Virtual Storage
Journey to the Private Cloud
VPLEX Product Family

VPLEX Architecture
- Next-generation data mobility and access
- Scale-out cluster architecture: start small and grow big with predictable service levels
- Advanced data caching: improve I/O performance and reduce storage array contention; prefetch operations
- Distributed cache coherence: automatic sharing, balancing, and failover of storage domains within and across VPLEX Engines
- Local and Distributed Federation

VPLEX Local Overview
- Simplify provisioning and volume management
- Centralize management of block storage in the data center: physical storage needs to be provisioned only once, to the virtualization layer
- Non-disruptive data mobility: optimize performance, redistribute and balance workloads among arrays
- Domain failure: can protect against failure domains such as an entire array going down
- Workload resiliency: improve reliability and scale-out performance
- Storage pooling: manage available capacity across multiple frames based on SLAs

VPLEX Local Configuration
- One rack, 1 to 4 engines
- Fully redundant hardware
- N+1 performance and scaling: more ports and more engines
- 1+N fault tolerance: all directors but one can fail without causing data unavailability or data loss
VPLEX Metro Overview
- AccessAnywhere: block storage access within, between, and across data centers
- Within synchronous distances: less than 5 ms latency required
- Connects two VPLEX storage clusters together over distance
- Enables virtual volumes to be shared by both clusters
- Provides unique distributed cache coherency for all reads and writes
- Both clusters maintain the same identity for a volume and preserve the same SCSI state for the logical unit
- Enables VMware VMotion over distance, faster than Storage VMotion

VPLEX Metro Configuration
- Two racks of fully redundant hardware
- 2 to 8 engines total in a VPLEX Metro; each rack can consist of 1 to 4 engines
- FC-WAN: 8 Gb Fibre Channel ports, 4 per engine / 2 per director

VPLEX Geo Overview
- AccessAnywhere: block storage access within, between, and across data centers
- Within asynchronous distances: maximum 50 ms round-trip latency
- Enables virtual volumes to be shared by both clusters
- Provides unique distributed cache coherency for all reads and writes
- Both clusters maintain the same identity for a volume and preserve the same SCSI state for the logical unit

VPLEX Geo Configuration
- Two racks of fully redundant hardware
- 2 to 8 engines total in a VPLEX Geo; each rack can consist of 1 to 4 engines
- IP-WAN: 10 Gb (or 1 Gb) Ethernet ports, 4 per engine / 2 per director

Typical VPLEX Use Cases
- Data center consolidation
- Zero-downtime data center relocation: move VMs and data to the new data center with no downtime, then decommission the original data center
- Distributed load balancing
- Pre-emptive disaster avoidance
- Zero-downtime maintenance
- Lease rollovers

VPLEX Local Target Use Cases
VPLEX Remote Target Use Cases
VPLEX Product and Technology Details
VPLEX 4.X Architecture
- List and define VPLEX 4.X hardware and software architecture
- Describe product packaging
- Define how VPLEX 4.X maps native array volumes to VPLEX virtual volumes

VPLEX Metro VPN
- Creates a secure connection between the two management servers
- Uses X.509 certificates and 2048-bit RSA encryption
- Configured and set up during installation when using the EZ-Setup Wizard

VPLEX Architecture

Hardware Components
- Engine
- I/O module
- I/O module carrier
- DAE and SSD card
- Power supply
- Fan
- Management server

VPLEX v4.0 Engine Overview
- Scale-out cluster architecture: start small and grow big with predictable service levels
- Advanced data caching: improve I/O performance and reduce storage array contention
- Distributed cache coherency: automatic balancing and failover of storage domains within and across data centers
- Federated information access: “AccessAnywhere” enables geographic data distribution

VPLEX v4.0 Engine
VPLEX v4.0 I/O Modules
VPLEX v4.0 I/O Module Carrier

VPLEX v4.0 I/O Module Types
- 4-port 8 Gbps Fibre Channel I/O module
- Used for FC COM and FC WAN connectivity with an I/O module carrier

VPLEX v4.0 Management and Power Modules
VPLEX v4.0 SPS LEDs

VPLEX v4.0 Engine Fans
- Fans are monitored through the power supplies
- Labeled A to D from left to right
- Loss of one fan does not impact the system
- The system remains on for three minutes if two fans fail

VPLEX V4.X Configurations at a Glance

VPLEX Product and Technology Details
VPLEX 5.X Architecture
- List and define VPLEX 5.X hardware and software architecture
- Describe product packaging
- Define how VPLEX 5.X maps native array volumes to VPLEX virtual volumes

VPLEX v5.0 Engine
VPLEX v5.0 I/O Modules
VPLEX v5.0 I/O Module Types
VPLEX v5.0 Management and Power/Fan Modules
VPLEX v5.0 SPS LEDs
VPLEX V5.X Configurations at a Glance
VPLEX Product and Technology Details
VPLEX Management and Packaging
- List and define VPLEX management hardware
- Describe product packaging

VPLEX Management Server
VPLEX Management IP Infrastructure
Switch and Management Server IP Formulas
VPLEX Director IP Formula

VPLEX Local IP Layout
Management Server
- Management port for the B-side network: 128.221.253.33
- Service port: 128.221.252.2
- Management port for the A-side network: 128.221.252.33
- Public LAN port: customer-assigned IP address
FC Switch 1 IP address: 128.221.252.34
Engine 4, Cluster 1: Director A 128.221.252.41 / 128.221.253.41; Director B 128.221.252.42 / 128.221.253.42
Engine 3, Cluster 1: Director A 128.221.252.39 / 128.221.253.39; Director B 128.221.252.40 / 128.221.253.40
Engine 2, Cluster 1: Director A 128.221.252.37 / 128.221.253.37; Director B 128.221.252.38 / 128.221.253.38
Engine 1, Cluster 1: Director A 128.221.252.35 / 128.221.253.35; Director B 128.221.252.36 / 128.221.253.36
FC Switch 2 IP address: 128.221.253.34
(A sketch of the director addressing pattern follows.)
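The director addresses above follow a simple pattern on the two private management networks (128.221.252.x for the A-side, 128.221.253.x for the B-side): engine n uses host .33+2n for director A and .34+2n for director B. The short Python sketch below only illustrates that pattern as read from the layout above; it is not a VPLEX tool.

# Sketch: derive the private management-network IPs for a director in cluster-1,
# assuming the layout shown above (A-side subnet 128.221.252.x, B-side 128.221.253.x).

A_SIDE, B_SIDE = "128.221.252", "128.221.253"

def director_ips(engine, director):
    """engine: 1-4, director: 'A' or 'B' -> (A-side IP, B-side IP)."""
    host = 33 + 2 * engine + (0 if director == "A" else 1)
    return f"{A_SIDE}.{host}", f"{B_SIDE}.{host}"

print(director_ips(1, "A"))   # ('128.221.252.35', '128.221.253.35')
print(director_ips(4, "B"))   # ('128.221.252.42', '128.221.253.42')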
Fibre Channel COM Switches
- Connectrix DS-300B
- Creates a redundant Fibre Channel network for COM
- Each director has two independent COM paths to every other director
- Required on all VPLEX Medium and Large systems
- Dual-engine systems use 4 ports per switch; quad-engine systems use 8 ports per switch
- 16 ports are unused/unlicensed (disabled)

VPLEX Supported Configurations

Volumes Required for Cluster Operations
Metadata volume
- Required to hold VPLEX volume configurations within a cluster
- Front-end ports cannot be enabled without this volume
- Creation of a metadata volume is part of installation
- Must be at least 78 GB
- Recommended to be mirrored across two arrays
- Follow stated best practices to back up metadata volumes when required
Logging volume (VPLEX Metro only)
- Required for creating distributed devices
- Keeps track of blocks written during a loss of connectivity between clusters
- 10 GB of logging volume can support 320 TB of distributed storage (see the sizing sketch below)
- Receives a lot of I/O during link outages
- Should be fast (host on Fibre Channel disks)
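The 10 GB per 320 TB figure above works out to roughly one bit of log per 4 KB block, so logging capacity scales linearly with protected distributed capacity. A rough sizing sketch under that assumption (the bitmap granularity is an inference from the ratio, not a documented number):

# Rough logging-volume sizing sketch, assuming ~1 bit of log per 4 KB block
# (consistent with the quoted 10 GB per 320 TB of distributed storage).

def logging_gb_needed(distributed_tb, block_kb=4):
    blocks = distributed_tb * 2**40 / (block_kb * 2**10)   # number of blocks to track
    return blocks / 8 / 2**30                               # bits -> bytes -> GiB

print(round(logging_gb_needed(320), 1))   # ~10.0 GB, matching the slide
print(round(logging_gb_needed(640), 1))   # ~20.0 GB for 640 TB of distributed storage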
Management Server User Accounts
Linux shell
- admin: performs administrative user-management actions; can SCP files onto the management server into the directories it has access to; can modify the public Ethernet settings
- service: starts and stops the necessary OS and VPLEX services
VPLEX Management Console (CLI and GUI)
- admin: access to the management server desktop, VPlexcli, and the Management Console GUI; ability to start and stop management server services; access to most files on the filesystem
- service: ability to create, modify, and delete VPLEX user accounts; access to the management server desktop, VPlexcli, and the GUI; ability to start and stop management server services

VPLEX Constructs
VPLEX Product and Technology Details
VPLEX I/O Operations
- Describe the life of an I/O operation within a single cluster and VPLEX Metro
- Define caching layers, roles, and interactions within a VPLEX system
- Describe system interaction during reads and writes
- Demonstrate path and system component redundancy

Cache Coherency (diagram: four director caches servicing "New Write: Block 3" and "Read: Block 3")

How a Read is Handled
Local cache hit
- The director searches its local cache
- The data exists and is delivered to the host from local cache
Global cache hit
- The director searches its local cache; the data does not exist (local cache miss)
- The director checks the global cache coherence table
- The data exists on a different director (global cache hit)
- The director receives an acknowledgement telling it which director to read from
- The director reads from the cache of the other director

I/O Flow of a Local Read Hit
I/O Flow of a Global Read Hit
I/O Flow of a Read Miss
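The read path above (local hit, global hit, miss) can be summarized in a short sketch. This is simplified illustrative Python, not VPLEX code; the class and attribute names are made up, and the coherence table is reduced to a shared dictionary.

# Illustrative sketch of the VPLEX-style read path described above.
# Each director has a local cache; a shared coherence table records which
# director currently holds each block in cache.

class Director:
    def __init__(self, name, coherence_table, backend):
        self.name = name
        self.local_cache = {}
        self.coherence_table = coherence_table   # block -> owning Director
        self.backend = backend                   # block -> data on the back-end array

    def read(self, block):
        if block in self.local_cache:            # local cache hit
            return self.local_cache[block]
        owner = self.coherence_table.get(block)
        if owner is not None:                    # global cache hit: fetch from peer director
            data = owner.local_cache[block]
        else:                                    # read miss: fetch from the back-end array
            data = self.backend[block]
        self.local_cache[block] = data
        self.coherence_table[block] = self       # record this director as a holder
        return data

backend = {"block-3": b"data"}
table = {}
a, b = Director("A", table, backend), Director("B", table, backend)
print(a.read("block-3"))   # read miss: served from the back-end array
print(b.read("block-3"))   # global cache hit: served from director A's cache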
How a Write is Handled
Operation
- The host sends a write to the director
- The director checks the global cache coherence table to ensure that another director does not have a lock on the location
- The director acquires global cache coherence lock(s) on the blocks
- The director notifies the other directors within the cluster of the global cache coherence change
- The director writes the I/O to disk
- The director sends an acknowledgement to the host
Protection
- Every write is acknowledged by the array before the ACK is sent to the host
- In VPLEX Metro, each leg is updated before the ACK is sent

I/O Flow of a Write Hit
I/O Flow of a Write Miss
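A similarly simplified sketch of the write path: check and acquire the coherence lock, notify peer directors, write through to every mirror leg (both clusters for a Metro distributed volume), then acknowledge the host. All names and data structures are illustrative, not VPLEX code.

# Illustrative write path following the steps above.

def handle_write(director, peer_caches, mirror_legs, block, data, coherence_locks):
    # 1. Check the global cache-coherence table for a conflicting lock.
    holder = coherence_locks.get(block)
    if holder is not None and holder != director:
        raise RuntimeError(f"{block} is locked by {holder}")
    # 2. Acquire the global coherence lock on the block.
    coherence_locks[block] = director
    # 3. Notify the other directors of the coherence change (drop their stale copies).
    for peer_cache in peer_caches:
        peer_cache.pop(block, None)
    # 4. Write through to disk; for a distributed (Metro) volume every leg
    #    must be updated and acknowledged before the host is acknowledged.
    for leg in mirror_legs:
        leg[block] = data
    # 5. Acknowledge the host.
    return "ACK"

locks, peer_a = {}, {"block-7": b"stale"}
leg_site_a, leg_site_b = {}, {}
print(handle_write("director-1-1-A", [peer_a], [leg_site_a, leg_site_b], "block-7", b"new", locks))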
I/O Flow of a Distributed Device Write (clustered application)
- The host in Data Center B writes data to the shared volume
- The data is written through cache to back-end storage on both legs of the mirrored volume
- The data is acknowledged by the back-end arrays
- The data is acknowledged to the host once it is safe

I/O Flow of Remote Access Reads and Writes
- The host in Data Center B writes data to the volume
- The host in Data Center A reads data from the volume
- The host in Data Center A writes data to the volume

Path Redundancy and Failure Handling
Path Redundancy Across Engines
VPLEX Metro Partition and Site Failures
- Consider a distributed system with two sites
- From Site A's perspective, the following two conditions are indistinguishable: the inter-cluster link has failed, or Site B itself has failed
- Addressing this is fundamental to the design of distributed applications

Detach Rules to Deal with Failure Situations
Cluster-1-detaches
- Reads and writes continue for the leg of the distributed device at cluster-1
- The logging volume tracks the changes to the leg at cluster-1
- Reads and writes are suspended for the leg at cluster-2
Cluster-2-detaches
- Reads and writes continue for the leg of the distributed device at cluster-2
- The logging volume tracks the changes to the leg at cluster-2
- Reads and writes are suspended for the leg at cluster-1
Manual detach
- Both legs of a distributed device are suspended
- The administrator must pick the winning side/leg
- Available with VPlexcli only

VPlexcli:/distributed-storage/distributed-devices> ll
Name     Status   Operational  Health  Auto    Rule Set Name       WOF    Transfer
                  Status       State   Resume                      Group  Size
-------  -------  -----------  ------  ------  ------------------  -----  --------
DS_1     running  ok           ok      true    cluster-1-detaches  -      2M
DS_2     running  ok           ok      true    cluster-1-detaches  -      2M
DS_DISK  running  ok           ok      true    cluster-2-detaches  -      2M

VPlexcli:/distributed-storage/distributed-devices> set DS_DISK::rule-set-name cluster1_Active
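As a rough illustration of the detach rules above, the Python sketch below maps a configured rule to the behavior of each leg after an inter-cluster link loss. The function and value names are made up for the sketch; they are not VPLEX APIs.

# Sketch only: models the detach-rule behavior described above.

def leg_behavior(rule, winner=None):
    """Return per-cluster behavior after an inter-cluster link loss.

    rule: 'cluster-1-detaches', 'cluster-2-detaches', or 'manual'
    winner: for 'manual', the cluster the administrator later picks
    """
    if rule == 'cluster-1-detaches':
        winner = 'cluster-1'
    elif rule == 'cluster-2-detaches':
        winner = 'cluster-2'
    elif rule == 'manual':
        if winner is None:
            # Both legs suspend until the administrator picks a winning leg.
            return {'cluster-1': 'suspended', 'cluster-2': 'suspended'}
    else:
        raise ValueError(rule)

    loser = 'cluster-2' if winner == 'cluster-1' else 'cluster-1'
    return {
        winner: 'I/O continues; logging volume tracks changed blocks',
        loser: 'I/O suspended until the link is restored',
    }

print(leg_behavior('cluster-2-detaches'))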
Detach Rule Timer
- Inter-cluster link loss: default timer of 5 seconds
- I/O is immediately suspended at both sites and the timer is started
- If connectivity between the two VPLEX clusters is restored within the given period, I/O is automatically resumed and the distributed mirror is kept intact
- If connectivity between the two clusters is not restored within the timeout period, I/O is resumed at the biased site and remains suspended at the non-biased site
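The timer behavior above can be sketched as a small loop. The 5-second default and the suspend/resume outcomes come from the slide; the function names and callback structure are illustrative only.

import time

DETACH_TIMEOUT_S = 5  # default detach timer from the slide

def handle_link_loss(link_restored, resume_winner, suspend_loser, suspend_all):
    """Illustrative detach-timer flow: suspend I/O, wait, then resume or detach."""
    suspend_all()                      # I/O suspends at both clusters immediately
    deadline = time.monotonic() + DETACH_TIMEOUT_S
    while time.monotonic() < deadline:
        if link_restored():
            return "resumed"           # mirror kept intact, I/O resumes at both sites
        time.sleep(0.1)
    resume_winner()                    # biased (winning) cluster resumes I/O
    suspend_loser()                    # non-biased cluster stays suspended
    return "detached"

print(handle_link_loss(lambda: True, lambda: None, lambda: None, lambda: None))  # 'resumed'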
Distributed Device Failure Scenarios
Distributed Device Failure Scenarios (continued)

VPLEX Product and Technology Details
VPLEX Consistency Group Operations
- Define Consistency Group operations
- List the components and configuration options
- Describe failure and recovery operations

Consistency Groups - Overview
- A set of virtual volumes that are grouped together because they require write-order consistency
- Same I/O behavior in the event of a link or site outage
- Example: the set of LUNs used by a database application
- Properties are set for the entire Consistency Group
- Data on disk is guaranteed to represent a consistent point in time
- Asynchronous Consistency Groups commit their deltas to disk in a coordinated fashion

Consistency Group Behaviors
- Storage-at-cluster and visibility must be set before the cache mode can be set to asynchronous: storage-at-cluster = cluster-1, cluster-2; visibility = cluster-1, cluster-2
- All volumes used by the same application and/or the same host should be grouped together in a Consistency Group
- Only volumes with storage at both clusters are allowed in asynchronous Consistency Groups
- Switching a Consistency Group between asynchronous and synchronous cache mode while I/O is active is supported, but may result in hosts experiencing a short data unavailability (DU) during the operation
- Consistency Groups containing remote volumes cannot be set to asynchronous mode
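The preconditions above reduce to a simple check: a group can only be switched to asynchronous cache mode if it has storage at both clusters, is visible at both clusters, and contains no remote volumes. The sketch below is illustrative Python, not VPlexcli syntax, and the parameter names are made up.

# Illustrative precondition check for asynchronous cache mode (not VPLEX code).

def can_set_async(storage_at_clusters, visibility, has_remote_volumes):
    both = {"cluster-1", "cluster-2"}
    return (set(storage_at_clusters) >= both     # storage at both clusters
            and set(visibility) >= both          # visible at both clusters
            and not has_remote_volumes)          # remote volumes cannot be asynchronous

print(can_set_async(["cluster-1", "cluster-2"], ["cluster-1", "cluster-2"], False))  # True
print(can_set_async(["cluster-1"], ["cluster-1", "cluster-2"], False))               # False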
Consistency Group I/O Behaviors
- Administrators can control what happens to a set of volumes after an inter-cluster link failure: stop I/O on both clusters (no rule), or continue I/O on one of the clusters
- If one site was recently actively writing, that site will continue I/O and the other will not (active-island-detach)
- All I/O to the volumes is coordinated across both clusters and all directors active in the Consistency Group
- Volumes can only be in one Consistency Group at a time

Consistency Group Restrictions

Asynchronous Consistency Groups Use Case
- Supports virtual volumes visible to both clusters with more than 5 ms and less than 50 ms of round-trip latency between clusters
- Allows I/O at both clusters while maintaining cache coherency
- For asynchronous Consistency Groups with latencies above 5 ms, performance for some applications will be better than with synchronous cache mode
- In the event of a link loss or site/cluster failure: maintains a crash-consistent data image across the group of volumes; a crash-consistent image of all virtual volumes will be available at the surviving cluster; applications with roll-back capabilities will be able to recover; potential for some data loss

Synchronous Consistency Groups
Local and global synchronous Consistency Groups
- Storage resides at one cluster
- Local or remote volumes can be added to the Consistency Group
- Visibility of local volumes changes to global: like remote volumes, volumes within the Consistency Group can be seen by the other cluster
- Data is written to the underlying storage at one cluster before an acknowledgement is sent to the host
Distributed synchronous Consistency Groups
- Data is written to the underlying storage at both clusters before an acknowledgement is sent to the host
- Similar to individual synchronous distributed volumes
- Dependent on the latency between clusters and the application's ability to tolerate that latency
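The synchronous/asynchronous distinction above comes down to when the host acknowledgement is sent. A minimal sketch, under the assumption that an asynchronous group acknowledges once the write is held in cache and commits its deltas to the back-end later (the synchronous behavior is stated in the bullets above); names are illustrative, not VPLEX code.

# Illustrative contrast between synchronous and asynchronous consistency-group writes.

def sync_write(legs, block, data):
    for leg in legs:            # synchronous: the underlying storage at the required
        leg[block] = data       # cluster(s) is updated first...
    return "ACK"                # ...then the host is acknowledged

def async_write(cache_delta, block, data):
    cache_delta[block] = data   # asynchronous (assumed): acknowledge once the write is in cache
    return "ACK"

def commit_delta(cache_delta, legs):
    """Deltas are later committed to disk in a coordinated fashion."""
    for block, data in cache_delta.items():
        for leg in legs:
            leg[block] = data
    cache_delta.clear()

delta, site_a, site_b = {}, {}, {}
print(sync_write([site_a, site_b], "blk-1", b"x"), async_write(delta, "blk-2", b"y"))
commit_delta(delta, [site_a, site_b])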