
Slide 1. UCS Network Solutions
Антон Погребняк
Instructor, Fast Lane RCIS

Slide 2. UCS LAN Deep Dive – Agenda
• High-level system overview
– Unified Ports
– I/O module
• Fabric Interconnect forwarding modes
– End-host mode (EHM) vs switch mode
– Dynamic and static pinning concepts
• Server connectivity options
– Cisco VIC 1200 series
• C-Series integration

Slide 3. UCS 6248: Unified Ports
Dynamic Port Allocation: Lossless Ethernet or Fibre Channel

Eth – Lossless Ethernet: 1/10GbE, FCoE, iSCSI, NAS
FC – Native Fibre Channel

Use cases
• Flexible LAN & storage convergence based on business needs
• Service can be adjusted based on the demand for specific traffic

Benefits
• Simplify switch purchase – remove port-ratio guesswork
• Increase design flexibility
• Remove protocol-specific bandwidth bottlenecks

Slide 4. UCS 6248: Unified Ports
Dynamic Port Allocation: Lossless Ethernet or Fibre Channel
Base card – 32 unified ports; GEM – 16 unified ports (each split into an Eth range followed by an FC range)

• Ports on the base card or the Unified Port GEM module can be either Ethernet or FC
• Only a contiguous set of ports can be configured as Ethernet or FC
• Ethernet ports have to be the first set of ports
• Port-type changes take effect after the next reboot of the switch for base-board ports, or after a power-off/on of the GEM for GEM unified ports
(A small validation sketch of the layout rules follows this slide.)
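As an illustration of the two layout rules above (contiguous ranges, Ethernet first), here is a hypothetical validator for a proposed port layout on the 32-port base card. It is only a sketch of the rule in Python, not a UCSM API.

# Minimal sketch: check that a proposed unified-port layout is valid,
# i.e. one contiguous Ethernet range starting at port 1, followed by a
# contiguous FC range covering the remaining ports.
def valid_unified_port_split(port_types):
    # port_types: list like ["eth", "eth", ..., "fc", "fc"] for ports 1..N
    first_fc = next((i for i, t in enumerate(port_types) if t == "fc"),
                    len(port_types))
    return (all(t == "eth" for t in port_types[:first_fc]) and
            all(t == "fc" for t in port_types[first_fc:]))

print(valid_unified_port_split(["eth"] * 24 + ["fc"] * 8))                # True
print(valid_unified_port_split(["eth"] * 8 + ["fc"] * 8 + ["eth"] * 16))  # False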

Slide 5. Configuring Unified Ports

Slide 6. Unified Port Screen
• Configured on a per-FI basis
• Slider-based configuration
• A reboot is required for the new port personality to take effect
• The recommendation is to configure unified ports on the GEM card, so that only the GEM needs to be rebooted

Slide 7. UCS Fabric Topologies – Chassis Bandwidth Options
• 2x 1 link – 20 Gbps per chassis
• 2x 2 links – 40 Gbps per chassis
• 2x 4 links – 80 Gbps per chassis
• 2x 8 links – 160 Gbps per chassis (2208XP only)
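The bandwidth figures above are simply the per-IOM link count times 10 Gbps per link, doubled because each chassis has two IOMs. A one-line check, purely illustrative:

# Chassis bandwidth = 2 IOMs x number of fabric links per IOM x 10 Gbps
for links in (1, 2, 4, 8):        # 8 links per IOM requires the 2208XP
    print(links, "links per IOM ->", 2 * links * 10, "Gbps per chassis")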

Slide 8. IOM Connections
• An IOM (sometimes called a 'Fabric Extender') provides:
– 1 channel for internal management
– 10G-KR server-facing links (HIF)
– Fabric links (NIF)
• The servers' mezzanine cards use those I/O channels for external connectivity
• Each IOM provides a separate dedicated I/O channel for internal management connectivity

Slide 9. UCS 2204 IO Module
Enables dual 20 Gbps to each blade server
UCS-IOM-2204XP
• Bandwidth increase for improved response, especially for bursty applications
– 40G to the network
– 160G to the host, redundant (2x10G per half-width slot; 4x10G per full-width slot)
• Latency lowered to 0.5 µs within the IOM
• Investment protection with backward and forward compatibility

Slide 10. UCS 2208 IO Module
Enables dual 40 Gbps to each blade server
UCS-IOM-2208XP
• Bandwidth increase for improved response, especially for bursty applications
– 80G to the network
– 320G to the host, redundant (4x10G per half-width slot; 8x10G per full-width slot)
• Latency lowered to 0.5 µs within the IOM
• Investment protection with backward and forward compatibility

Slide 11. 220x-XP Architecture
Block diagram: a Woodside ASIC switch with FLASH, EEPROM and DRAM, plus a Chassis Management Controller handling chassis signals and control I/O; internal backplane ports go to the blades, fabric ports go up to the FI.
• No local switching – ever! Traffic goes up to the FI.

Feature comparison:
Feature              2204-XP     2208-XP
ASIC                 Woodside    Woodside
Fabric Ports (NIF)   4           8
Host Ports (HIF)     16          32
CoS                  8           8
Latency              ~500 ns     ~500 ns

Slide 12. Blade Northbound Ports
• These interfaces (show interface brief in the NX-OS shell) are backplane traces
• Eth x/y/z, where:
– x = chassis number
– y = is always 1
– z = host interface port number
(A small parsing sketch of this naming convention follows this slide.)

Eth1/1/1  1 eth access up   none                  10G(D) --
Eth1/1/2  1 eth access down Administratively down 10G(D) --
Eth1/1/3  1 eth access up   none                  10G(D) --
Eth1/1/4  1 eth access down Administratively down 10G(D) --
Eth1/1/5  1 eth vntag  down Link not connected    10G(D) 1365
Eth1/1/6  1 eth access down Administratively down 10G(D) --
Eth1/1/7  1 eth vntag  down Link not connected    10G(D) 1369
Eth1/1/8  1 eth access down Administratively down 10G(D) --
Eth1/1/9  1 eth access down Administratively down 10G(D) --
Eth1/1/10 1 eth access down Administratively down 10G(D) --
Eth1/1/11 1 eth access down Administratively down 10G(D) --
Eth1/1/12 1 eth access down Administratively down 10G(D) --
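As a small illustration of the naming convention above, here is a hypothetical helper that splits a backplane interface name into its parts, assuming names always follow the Eth<chassis>/1/<host-port> pattern shown on this slide.

# Hypothetical helper: parse a UCS backplane interface name (Eth x/y/z)
# into chassis number, the constant middle value, and HIF port number.
def parse_backplane_if(name):
    assert name.startswith("Eth")
    chassis, middle, hif = (int(p) for p in name[3:].split("/"))
    return chassis, middle, hif

print(parse_backplane_if("Eth1/1/5"))   # (1, 1, 5): chassis 1, HIF port 5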

Slide 13. UCS Internal Block Diagram
Block diagram (server blade → midplane → IO modules → fabric interconnects): the blade's two CPUs and IOH connect to the mezzanine card over x16 Gen 2 PCIe; dual 2208XP IOMs connect across the midplane to two UCS 6248 Fabric Interconnects, each with 16x SFP+ on the base and 16x SFP+ on the expansion module.
• Double the fabric uplinks
• Quadruple the downlinks

Slide 14. IO Module HIF to NIF Pinning: 2208XP – 1 Link
Diagram: all eight blade slots (HIF groups 1-4, 5-8, 9-12, 13-16, 17-20, 21-24, 25-28, 29-32) are pinned through the FEX to a single fabric link to the Fabric Interconnect.

Slide 15. IO Module HIF to NIF Pinning: 2208XP – 4 Links
Diagram: with four fabric links, the eight blade slots (HIF groups 1-4 through 29-32) are distributed across the four FEX-to-Fabric Interconnect links. (A pinning sketch follows this slide.)
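The slot-to-link distribution shown on slides 14-15, together with the failure example on slide 17 (losing link 1 affects blades 1 and 5), is consistent with a simple round-robin rule. The sketch below assumes that rule; it is an illustration of the pinning concept, not the IOM's actual firmware logic.

# Minimal sketch of slot-based HIF-to-NIF pinning, assuming blade N is
# pinned to fabric link ((N-1) mod number_of_links) + 1 and that only
# 1, 2, 4 or 8 fabric links are valid (8 only with the 2208XP).
VALID_LINK_COUNTS = (1, 2, 4, 8)

def pinned_fabric_link(slot, active_links):
    if active_links not in VALID_LINK_COUNTS:
        raise ValueError("UCS supports 1, 2, 4 or 8 fabric links")
    return ((slot - 1) % active_links) + 1

# With 4 links, blades 1 and 5 share fabric link 1 (cf. slide 17):
print([pinned_fabric_link(s, 4) for s in range(1, 9)])  # [1, 2, 3, 4, 1, 2, 3, 4]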

Slide 16. IOM and Failover
• What happens in a 4-link topology when you lose one link?
– Servers' vNICs on that link will lose a data path
– The remaining 3 links will still pass traffic for the other blade servers
– To recover the failed servers' vNICs, re-acknowledgement of the chassis is required
– Since only 1, 2 and 4 link topologies are supported, UCS will fall back to 2 links with regard to blade-to-fabric-port mapping

Slide 17. IOM and Failover
Diagram: blades 1-8 connected to IOM 1 with 4 active links (1-4) up to the switch.
• Lose IOM link 1
• Connectivity is lost on the mezzanine port mapped to IOM 1 for blades 1 and 5

Slide 18. Increased Bandwidth Access to Blades
(Each option shows the FEX in the chassis, blade slots 1-8, and the links up to the Fabric Interconnect.)

4 links, Discrete – today
• Available bandwidth per blade – 10Gb
• Statically pinned to individual fabric links
• Deterministic path

8 links, Discrete
• Available bandwidth per blade – 20Gb
• Statically pinned to individual fabric links
• Deterministic path
• Guaranteed 10Gb to each blade

Up to 8 links, Port-channel
• Available bandwidth per blade – up to 160Gb
• Statically pinned to the port-channel
• Increased and shared bandwidth
• Higher availability

Slide 19. Port-Channel Pinning
Diagram: the IOM's fabric links are bundled into a port-channel, and vNICs are pinned to the port-channel (Po).
• No slot-based pinning
• No invalid link count for NIF ports
• VIC1200 adapter with DCE links in a port-channel
• Gen-1 adapter with a single 10G link

Slide 20. UCS FI and IOM Connectivity
Fabric Interconnect VIF calculation
Diagram: FI ports grouped into six Unified Port Controllers (labeled 1-6), eight ports each.
• Every 8 10GbE ports on the FI are controlled by the same Unified Port Controller (UPC)
• Connect the fabric links from the IOM to FI ports on the same UPC
• For fabric port-channeling, the Virtual Interface (VIF) namespace varies depending on how many fabric links are connected to which FI ports (see the sketch after this slide):
– When links connect to the same UPC (a set of eight ports), Cisco UCS Manager maximizes the number of VIFs available to service profiles deployed on the servers
– If uplink connections are distributed across UPCs, the VIF count is decreased. For example, if you connect seven (IOM) fabric links to (FI) ports 1-7 but the eighth fabric link to FI port 9, the number of available VIFs is based on 1 link – IOM port 8 to FI port 9.
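A small sketch of the UPC grouping rule above, assuming FI ports are numbered from 1 and every consecutive block of eight ports shares one UPC (the helper name is illustrative, not a UCSM API):

# Minimal sketch: which Unified Port Controller (UPC) owns an FI port,
# assuming consecutive blocks of 8 ports per UPC as stated on the slide.
def upc_for_port(fi_port):
    return (fi_port - 1) // 8 + 1

# Seven links on ports 1-7 plus one link on port 9 span two UPCs, so the
# usable VIF namespace is sized by the smaller group (the single link):
links = [1, 2, 3, 4, 5, 6, 7, 9]
groups = {}
for p in links:
    groups.setdefault(upc_for_port(p), []).append(p)
print(groups)   # {1: [1, 2, 3, 4, 5, 6, 7], 2: [9]}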

Slide 21. Abstracting the Logical Architecture
Physical: blade adapter → 10GE physical cable → IOM A → 6200-A (switch port Eth 1/1).
Logical: vNIC 1 and vHBA 1 on the blade → virtual cable (VN-Tag) → vEth 1 and vFC 1 on 6200-A; the service profile (server) ties the cable to vEth 1 and vFC 1.
• Dynamic, rapid provisioning
• State abstraction
• Location independence
• Blade or rack

Slide 22. VN-Tag: Instantiation of Virtual Interfaces
• Virtual interfaces (VIFs) help distinguish between FC and Eth interfaces
• They also identify the originating server
• VIFs are instantiated on the FI and correspond to frame-level tags assigned to blade mezzanine cards
• A 6-byte tag (VN-Tag) is prepended by Palo and Menlo as traffic leaves the server, to identify the interface
• VN-Tag associates frames with a VIF
• VIFs are 'spawned off' the server's EthX/Y/Z interfaces (examples follow)

Slide 23. VIFs
• Ethernet and FC are muxed on the same physical links, hence the concept of virtual interfaces (VIFs) to split Eth and FC
• Two types of VIFs: vEth and vFC
– vEth for Ethernet and FCoE; vFC for FC traffic
• Each EthX/Y/Z or Po interface typically has multiple VIFs attached to it to carry traffic to and from a server
• To find all VIFs associated with an EthX/Y/Z or Po interface, do this:

Slide 25. UCS Cisco 1280 VIC Adapter
Diagram: UCS 1280 VIC in the blade connected to UCS 2208 IOMs on side A and side B; 256 PCIe devices.

Customer benefits
• Dual 4x 10 GE (80 Gb per host)
• VM-FEX scale, up to 112 VM interfaces with ESX 5.0

Feature details
• Dual 4x 10 GE port-channels to a single server slot
• Host connectivity: PCIe Gen 2 x16 (the PCIe Gen 2 x16 bandwidth limit is 32 Gbps)
• HW capable of 256 PCIe devices (OS restrictions apply)
• PCIe virtualization is OS-independent (same as M81KR)
• Single OS driver image for both M81KR and 1280 VIC
• Fabric Failover supported
• Eth hash inputs: source MAC address, destination MAC address, source port, destination port, source IP address, destination IP address and VLAN (see the hashing sketch after this slide)
• FC hash inputs: source MAC address, destination MAC address, FC S_ID and FC D_ID
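To illustrate the flow-based hashing described above, here is a minimal sketch that maps the listed Ethernet hash inputs (the 7-tuple) onto one member of a 4x10G port-channel. The tuple fields come from this slide; the specific hash function is an illustrative stand-in, not the ASIC's actual algorithm.

# Minimal sketch: pick a port-channel member link from the 7-tuple hash
# inputs listed on this slide. The real VIC/IOM hash runs in hardware;
# Python's hash() is only a placeholder.
def member_link(src_mac, dst_mac, src_port, dst_port,
                src_ip, dst_ip, vlan, num_links=4):
    flow = (src_mac, dst_mac, src_port, dst_port, src_ip, dst_ip, vlan)
    return hash(flow) % num_links   # every packet of a flow uses one link

# One flow always lands on the same 10G member, so a single flow is
# limited to 10Gb even though the vNIC can reach up to 32 Gbps overall.
print(member_link("00:25:b5:00:00:01", "00:25:b5:00:00:02",
                  49152, 80, "10.0.0.1", "10.0.0.2", 10))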

Slide 26. Connectivity IOM to Adapter
Diagram: UCS 1280 VIC with vNIC1 port-channeled to the 2208 IOMs on side A and side B; VM flows: 1. 10 Gb FTP traffic, 2. 10 Gb UDP traffic.
• Implicit port-channel between the UCS 1280 VIC adapter and the UCS 2208 IOM
• 7-tuple flow-based hash
• A vNIC is active on side A or side B
• A vNIC has access to up to 32 Gbps of throughput per vNIC using the flow-based port-channel hash

Slide 27. Block Diagram: Next Gen UCS Fabric Details
Block diagram (server blade → mezzanine → midplane → IO modules → fabric interconnects): the 1280 VIC attaches to the blade's CPUs and IOH over x16 Gen 2 PCIe; dual 2208XP IOMs in the UCS blade chassis connect to two UCS 6248 FIs, each with 16x SFP+ base and 16x SFP+ expansion modules.
• 4x10 Gbps EtherChannel from the VIC 1280 to the 2208 IO modules
• No user configuration required
• vNIC flows are 7-tuple load-balanced across links
• Each individual flow is limited to 10Gb
• Fabric Failover available

Slide 28. Fabric Forwarding Modes of Operation
• End-host mode (EHM): default mode
– No Spanning Tree Protocol (STP); no blocked ports
– Admin differentiates between server and network ports
– Uses dynamic (or static) server-to-uplink pinning
– No MAC address learning except on the server ports; no unknown-unicast flooding
– Fabric failover (FF) for Ethernet vNICs (not available in switch mode)
• Switch mode: user configurable
– Fabric Interconnects behave like regular Ethernet switches
– STP parameters are locked

Slide 29. End Host Mode
• Completely transparent to the network
– Presents itself as a bunch of hosts to the network
• No STP – simplifies upstream connectivity
• All uplink ports are forwarding – never blocked

Diagram: FI A (Fabric A) with vEth 1 and vEth 3; MAC learning and L2 switching happen only on the server side (Server 1 and Server 2, VNIC 0, VLAN 10); spanning tree runs only in the upstream LAN.

Slide 30. End Host Mode – Unicast Forwarding
• MAC/VLAN plus policy-based forwarding
– Servers pinned to uplink ports
• Policies to prevent packet looping (sketched after this slide):
– Déjà-vu check
– RPF
– No uplink-to-uplink forwarding
• No unknown unicast or multicast flooding
– IGMP snooping can be disabled on a per-VLAN basis

Diagram: FI with uplink ports (RPF and déjà-vu checks applied) and vEth 1 / vEth 3 toward Server 1 and Server 2 (VNIC 0, VLAN 10); upstream LAN.
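A minimal sketch of the loop-prevention rules listed above, under a deliberately simplified model in which each frame is classified only by the type of port it arrives on and would leave on; the function and parameter names are illustrative, not part of any Cisco software.

# Minimal sketch of End-Host Mode forwarding checks: no uplink-to-uplink
# forwarding, deja-vu (a frame sourced from a local server MAC must not
# come back in from an uplink), and RPF (traffic for a server must arrive
# on the uplink that server is pinned to).
def ehm_allow(in_port, out_port, src_mac, local_server_macs,
              pinned_uplink_of_dst=None):
    in_type, in_name = in_port          # e.g. ("uplink", "Eth1/1")
    out_type, _ = out_port              # e.g. ("server", "vEth1")
    if in_type == "uplink" and out_type == "uplink":
        return False                    # no uplink-to-uplink forwarding
    if in_type == "uplink" and src_mac in local_server_macs:
        return False                    # deja-vu check: looped-back frame
    if (in_type == "uplink" and pinned_uplink_of_dst is not None
            and in_name != pinned_uplink_of_dst):
        return False                    # RPF: wrong uplink for this server
    return True

# A frame from the LAN toward Server 1 is dropped if it arrives on an
# uplink other than the one Server 1's vEth is pinned to:
print(ehm_allow(("uplink", "Eth1/2"), ("server", "vEth1"),
                "00:aa:bb:cc:dd:ee", {"00:25:b5:00:00:01"},
                pinned_uplink_of_dst="Eth1/1"))    # False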

Slide 31. End Host Mode – Multicast Forwarding
• Broadcast traffic for a VLAN is pinned to exactly one uplink port (or port-channel), i.e. it is dropped when received on other uplinks
• Server-to-server multicast traffic is locally switched
• RPF and déjà-vu checks also apply to multicast traffic

Diagram: FI with uplink ports toward the LAN; one uplink acts as the broadcast listener (B) per VLAN; vEth 1 / vEth 3 toward Server 1 and Server 2 (VNIC 0).

Slide 32. Switch Mode
• The Fabric Interconnect behaves like a normal L2 switch
• Rapid-STP+ is used to prevent loops
– STP parameters are not configurable
• Server vNIC traffic follows STP forwarding states
– Use vPC to get around blocked ports
• VTP is not supported
• MAC address learning on both uplinks and server links

Diagram: FI with vEth 1 and vEth 3 (VLAN 10); MAC learning and L2 switching on both the server and uplink sides; the STP root sits in the upstream LAN.

Slide 33. End Host Mode – Dynamic Pinning
• UCSM manages the pinning of vEths to the uplinks
• UCSM periodically checks the vEth distribution and redistributes the vEths across the uplinks (see the sketch after this slide)

Diagram: FI A with vEth 1, vEth 2 and vEth 3 pinned to uplinks toward the LAN; Server 1, Server 2 and Server 3 (VNIC 0, VLAN 10); switching happens below the pinning boundary.
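A minimal sketch of the dynamic-pinning idea, assuming vEths are spread evenly across whichever uplinks are currently up and that an uplink failure simply triggers redistribution; the actual UCSM balancing algorithm is not specified on this slide.

# Minimal sketch of dynamic pinning: spread vEths round-robin over the
# uplinks that are up; re-run after an uplink failure to re-pin.
def pin_veths(veths, uplinks_up):
    return {veth: uplinks_up[i % len(uplinks_up)]
            for i, veth in enumerate(veths)}

veths = ["vEth1", "vEth2", "vEth3"]
print(pin_veths(veths, ["Eth1/1", "Eth1/2"]))   # both uplinks carry vEths
print(pin_veths(veths, ["Eth1/2"]))             # after Eth1/1 fails, all re-pin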

Slide 34. End Host Mode – Individual Uplinks
Dynamic re-pinning of failed uplinks
Diagram: FI-A (Fabric A) with vEth 1 and vEth 3; ESX host 1 (vSwitch/N1K, VM 1 with MAC B, VM 2 with MAC C) behind VNIC 0 (MAC A); Server 2 with VNIC 0; VLAN 10, L2 switching. On an uplink failure the vEth is re-pinned in under a second and the VNIC stays up.
• All uplinks forwarding for all VLANs
• GARP-aided upstream convergence
• No STP
• Sub-second re-pinning
• No server NIC disruption

Slide 35. End Host Mode – Port-Channel Uplinks
Recommended: port-channel uplinks
Diagram: FI-A (Fabric A) with vEth 1 and vEth 3 pinned to a port-channel uplink; ESX host 1 (vSwitch/N1K, VM 1 MAC B, VM 2 MAC C) behind VNIC 0 (MAC A); Server 2 with VNIC 0; VLAN 10, L2 switching. On a member-link failure there is no disruption, no GARPs are needed, convergence is sub-second, and the NIC stays up.
• More bandwidth per uplink
• Per-flow uplink diversity
• No server NIC disruption
• Fewer GARPs needed
• Faster bi-directional convergence
• Fewer moving parts
RECOMMENDED

Slide 36. End Host Mode – Static Pinning
• Administrator controls the vEth pinning
• Deterministic traffic flow
• Pinning configuration is done under the LAN tab -> LAN Pin Groups and assigned under the vNIC
• No re-pinning within the same FI
• Static and dynamic pinning can co-exist
(A small sketch of the table-driven behavior follows this slide.)

Administrator pinning definition:
vEth Interfaces   Uplink
vEth 1            Blue
vEth 2            Blue
vEth 3            Purple

Diagram: FI A with vEth 1, vEth 2 and vEth 3 pinned to the blue and purple uplinks toward the LAN; Server 1, Server 2 and Server 3 (VNIC 0, VLAN 10).
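A small sketch contrasting static pinning with the dynamic case shown earlier, assuming a fixed administrator-defined table like the one above and no re-pinning within the same FI, so a vEth whose pinned uplink fails simply loses its path on that fabric; names are illustrative.

# Minimal sketch of static pinning: an administrator-defined table maps
# each vEth to a pin group; there is no re-pinning within the same FI.
PIN_TABLE = {"vEth1": "Blue", "vEth2": "Blue", "vEth3": "Purple"}

def static_uplink(veth, uplinks_up):
    group = PIN_TABLE[veth]
    # If the pinned uplink (pin group) is down, the vEth has no path on
    # this FI; recovery relies on fabric failover to the other fabric.
    return group if group in uplinks_up else None

print(static_uplink("vEth3", {"Blue"}))   # None: the Purple uplink is down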

Slide 37. Fabric Failover – End Host Mode (only)
• The fabric provides NIC failover capabilities, chosen when defining a service profile
• Traditionally done using a NIC bonding driver in the OS
• Provides failover for both unicast and multicast traffic
• Works for any OS on bare metal and for hypervisors

Diagram: two UCS Fabric Interconnects (LAN, SAN A, SAN B) above a chassis with two fabric extenders; each half-width blade has an adapter (vNICs) and a CIMC connected to both fabric extenders.

Slide 38. Recommended Topology for Upstream Connectivity
Diagram: Fabric Interconnect A and Fabric Interconnect B each dual-homed over forwarding Layer 2 links to a vPC/VSS pair at the access/aggregation layer.

Slide 39. C-Series UCSM Integration
Diagram: a C-Series rack server (CPU, memory, OS or hypervisor) with 2 LOM ports used exclusively for CIMC connectivity (management traffic over GE LOM) and a PCIe adapter for data traffic, connected to a pair of Nexus 2232 fabric extenders.
• Supported servers: C200M2, C210M2, C220M3, C240M3, C250M2, C260M2 or C460M2
• Mix of B & C Series is supported (no B Series required)
• Adapter support: Emulex CNA, QLogic CNA, Intel 10G NIC, Broadcom 10G NIC, Cisco VIC

Slide 40. C-Series UCSM Integration – Single-Wire Management with VIC1225
Diagram: a C-Series rack server (GE LOM, CIMC, CPU, memory, OS or hypervisor) where both management and data traffic share the VIC1225 PCIe adapter connected to the Nexus 2232 pair.
• Supported servers: C260M2, C460M2, C220M3, C240M3, C22M3, C24M3
• Mix of B & C Series is supported (no B Series required)

Slide 41. Thank you for your attention!
Антон Погребняк, a.pogrebnyak@flane.ru
