Tesla GPU Optimized Servers

GPU-optimized supercomputing servers offer massive processing power and HPC performance, considerably accelerating applications.

Deployed by some of the planet's largest supercomputing enterprises, NVIDIA Tesla is the world's leading platform for accelerating data centers. At the heart of this platform are the massively parallel GPU accelerators that provide dramatically higher throughput for compute-intensive workloads - without increasing the power budget or physical footprint.


NVIDIA Tesla Elite Partner 25% Discount.

CyberServe Xeon SP1-104S G4 GPU

Supports 1x double slot GPU card, 4th Gen Intel Xeon Scalable processor, dual 1Gb/s LAN ports, redundant power supply, 4 x 3.5" NVMe/SATA hot-swappable bays

Form Factor:
1U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
4
Drive Interface:
SATA, 12Gb/s SAS, M.2
Server Processor:
Intel Xeon Scalable Processor Gen 4
Memory DIMMS:
8x 4800MHz
GPU Slots:
1x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
1TB
Configure From: $3,130
Configure
CyberServe Xeon SP1-110S G3

Ideal for virtualisation, cloud computing and enterprise server workloads. 2x PCI-E 4.0 x16 slots. Intel® Ethernet Controller X550 2x 10GbE RJ45. Redundant power supplies.

Form Factor:
1U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
10
Drive Interface:
SATA, NVMe, M.2
Server Processor:
Xeon Scalable Processor - Gen 3
Memory DIMMS:
8x 3200MHz
GPU Support:
Tesla GPU Optimised
Features:
Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
1TB
Configure From: $3,320
Configure
CyberServe Xeon SP1-110S NVMe G4 GPU

Supports 1x double slot GPU card, 4th Gen Intel Xeon Scalable processor, dual 1Gb/s LAN ports, redundant power supply, 10 x 2.5" NVMe/SATA hot-swappable bays

Form Factor:
1U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
10
Drive Interface:
SATA, 12Gb/s SAS, NVMe, M.2
Server Processor:
Intel Xeon Scalable Processor Gen 4
Memory DIMMS:
8x 4800MHz
GPU Slots:
1x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
1TB
Configure From: $3,338
Configure
CyberServe Xeon SP1-208S NVMe G4 GPU

Supports 2x double slot GPU cards, 4th Gen Intel Xeon Scalable processor, dual 1Gb/s LAN ports, redundant power supply, 8 x 3.5" NVMe/SATA hot-swappable bays

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
8
Drive Interface:
SATA, NVMe, M.2
Server Processor:
Intel Xeon Scalable Processor Gen 4
Memory DIMMS:
8x 4800MHz
GPU Slots:
2x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
1TB
Configure From: $3,425
Configure
CyberServe Xeon SP1-102N G3

Edge Server – 1U 3rd Gen. Intel Xeon Scalable GPU server system, ideal for AI & Edge applications.

Form Factor:
1U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
2
Drive Interface:
SATA, NVMe, M.2
Server Processor:
Xeon Scalable Processor - Gen 3
Memory DIMMS:
16x 3200MHz
GPU Slots:
1x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
Full Height/Length Expansion, Redundant Power Supply - Standard, Short Depth, Front I/O Ports
Max RAM Capacity:
2TB
Configure From: $3,447
Configure
CyberServe Xeon SP1-202 G4 GPU

4th Gen Intel Xeon Scalable processor, single 1Gb/s LAN port, redundant power supply, 2 x 2.5" NVMe/SATA hot-swappable bays

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
2
Drive Interface:
SATA, 12Gb/s SAS, M.2
Server Processor:
Intel Xeon Scalable Processor Gen 4
Memory DIMMS:
16x 4800MHz
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard, Short Depth
Max RAM Capacity:
2TB
Configure From: $3,837
Configure
CyberServe Xeon SP1-212 G4 GPU

Supports up to 3 x double slot Gen5 GPU cards, single 1Gb/s LAN port, redundant power supply, 12 x 3.5/2.5" SATA/SAS hot-swappable bays, 4th Gen Intel Xeon Scalable processor

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
12
Drive Interface:
SATA, 12Gb/s SAS, M.2
Server Processor:
Intel Xeon Scalable Processor Gen 4
Memory DIMMS:
16x 4800MHz
GPU Slots:
3x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
2TB
Configure From: $4,363
Configure
3.5" Drives 
CyberServe EPYC EP1-G242-Z11

Up to 4x NVIDIA® PCIe Gen4 GPU cards. NVIDIA-Certified system for scalability, functionality, security, and performance. Dedicated management port. Redundant power.

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
4
Drive Interface:
SATA
Server Processor:
AMD EPYC 7003 Processor
Memory DIMMS:
8x 3200MHz
GPU Slots:
4x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, Extra Expansion Slots, Full Height/Length Expansion
Max RAM Capacity:
1TB
Configure From: $4,788
Configure
Short Depth 2.5" Drives NVMe Drives 
CyberServe EPYC EP1 202-NVMe-G G4

Short Depth Single AMD EPYC 9004 Series Edge Server with 2x GPU Slots, 2x 2.5" Gen4 NVMe/SATA Hot-Swappable bays

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
2
Drive Interface:
SATA, NVMe, M.2
Server Processor:
AMD EPYC 9004 Series
Memory DIMMS:
12x 4800MHz
GPU Slots:
2x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, Full Height/Length Expansion, Redundant Power Supply - Standard, Short Depth
Max RAM Capacity:
1.5TB
Configure From: $5,148
Configure
CyberServe EPYC EP1 212-8NVMe G4

Single AMD EPYC 9004 Series, Supports up to 2x FHFL PCIe Gen5 x16 slots - 12x 3.5" NVMe / SATA Drives.

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
12
Drive Interface:
SATA, 12Gb/s SAS, NVMe, M.2
Server Processor:
AMD EPYC 9004 Series
Memory DIMMS:
12x 4800MHz
GPU Slots:
2x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
1.5TB
Configure From: $5,577
Configure
3.5" Drives NVMe Drives 
CyberServe Xeon SP2-212NS G3

Supports 3x double slot GPU cards, dual 1Gb/s LAN ports, 5x PCIe Gen4 x16 slots, redundant power supply.

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
12
Drive Interface:
SATA, 12Gb/s SAS, NVMe, M.2
Server Processor:
Xeon Scalable Processor - Gen 3
Memory DIMMS:
32x 3200MHz
GPU Slots:
3x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
4.1TB
Configure From: $5,678
Configure
CyberServe Xeon SP2-408S NVMe G4 GPU

Dual 4th Gen Intel Xeon Scalable processors, GPU computing pedestal supercomputer server, supports 4x Tesla/RTX GPU cards

Form Factor:
Pedestal
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
8
Drive Interface:
SATA, 12Gb/s SAS, NVMe, M.2
Server Processor:
Intel Xeon Scalable Processor Gen 4
Memory DIMMS:
16x 4800MHz
GPU Slots:
4x Double Width GPU / 8x Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
2TB
Configure From: $6,163
Configure
2.5" Drives NVMe Drives 
CyberServe Xeon SP2-208-2S-SFF-GPU G3

2U GPU server powered by dual-socket 3rd Gen Intel Xeon Scalable processors, supporting up to 16 DIMMs, four dual-slot GPUs, 2x M.2, four NVMe drives (by SKU) and a total of eleven PCIe 4.0 slots

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
8
Drive Interface:
SATA, 12Gb/s SAS, NVMe, M.2
Server Processor:
Xeon Scalable Processor - Gen 3
Memory DIMMS:
16x 3200MHz
GPU Slots:
4x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
2TB
Configure From: $6,207
Configure
2.5" Drives NVMe Drives 
CyberServe EPYC EP1-G292-Z20 GPU Server

8x PCIe Gen4 expansion slots for GPUs, 2 x 10Gb/s SFP+ LAN ports (Mellanox® ConnectX-4 Lx controller), 2 x M.2 with PCIe Gen3 x4/x2 interface

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
8
Drive Interface:
SATA, 12Gb/s SAS, NVMe, M.2
Server Processor:
AMD EPYC 7003 Processor
Memory DIMMS:
8x 3200MHz
GPU Slots:
8x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
1TB
Configure From: $6,220
Configure
Ultra High-Performance 2.5" Drives NVMe Drives 10Gb Lan 
CyberServe SP2-104-2S-GPU G3

GPU server optimised for HPC, Scientific Virtualisation and AI. Powered by 3rd Gen Intel Xeon Scalable processors. 6x PCIe Gen 4.0 x16, 1x M.2

Form Factor:
1U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
4
Drive Interface:
SATA, 12Gb/s SAS, NVMe, M.2
Server Processor:
Xeon Scalable Processor - Gen 3
Memory DIMMS:
16x 3200MHz
GPU Slots:
4x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
2TB
Configure From: $6,708
Configure
Rackmount or Tower 3.5" Drives NVMe Drives 10Gb Lan 
Cyberserve Xeon SP2-408-4S GPU G3

Ideal for scientific virtualisation and HPC. 6x PCI-E 4.0 x16 slots. 2x M.2 NVMe or SATA supported. Redundant power supplies.

Form Factor:
Pedestal
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
8
Drive Interface:
SATA, 12Gb/s SAS, NVMe, M.2
Server Processor:
Xeon Scalable Processor - Gen 3
Memory DIMMS:
16x 3200MHz
GPU Slots:
4x Double Width GPU / 8x Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
2TB
Configure From: $6,864
Configure
Ultra High-Performance 3.5" Drives 2.5" Drives NVMe Drives 
CyberServe EPYC EP2-G292-Z42 GPU Server

8x PCIe Gen3 expansion slots for GPUs, 2x 10Gb/s BASE-T LAN ports (Intel® X550-AT2 controller), 4x NVMe and 4x SATA/SAS 2.5" hot-swappable HDD/SSD bays

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
8
Drive Interface:
SATA, 12Gb/s SAS, NVMe
Server Processor:
AMD EPYC 7003 Processor
Memory DIMMS:
16x 3200MHz
GPU Slots:
8x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
2TB
Configure From: $7,768
Configure
3.5" Drives 
CyberServe Xeon SP2-208-2S-GPU G3

2U dual-socket GPU server powered by 3rd Gen Intel Xeon Scalable processors, supporting up to 16 DIMMs, four dual-slot GPUs, 4x M.2, eight NVMe drives (by SKU) and a total of eleven PCIe 4.0 slots.

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
8
Drive Interface:
SATA, 12Gb/s SAS, NVMe, M.2
Server Processor:
Xeon Scalable Processor - Gen 3
Memory DIMMS:
16x 3200MHz
GPU Slots:
4x Double Width GPU / 8x Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
2TB
Configure From: $7,940
Configure
Ultra High-Performance 2.5" Drives NVMe Drives 
CyberServe SP2-G292-280 G3

GPU Server - 2U 8x GPU server for AI training, AI inference, visual computing and HPC. Dual 10Gb/s BASE-T LAN ports.

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
8
Drive Interface:
SATA, 12Gb/s SAS, NVMe
Server Processor:
Xeon Scalable Processor - Gen 3
Memory DIMMS:
24x 3200MHz
GPU Slots:
8x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
3.1TB
Configure From: $8,696
Configure
Ultra High-Performance 3.5" Drives 10Gb Lan 
CyberServe Xeon SP2-412G-GPU G3

Up to 8x PCIe Gen4 GPGPU cards, dual 10Gb/s LAN ports, redundant power option.

Form Factor:
4U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
12
Drive Interface:
SATA
Server Processor:
Xeon Scalable Processor - Gen 3
Memory DIMMS:
32x 3200MHz
GPU Slots:
8x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Full Height/Length Expansion, Redundant Power Supply - Standard, Front I/O Ports
Max RAM Capacity:
4.1TB
Configure From: $9,741
Configure
2.5" Drives NVMe Drives 
CyberServe EPYC EP2-G482-Z51 GPU Server

Up to 8 x PCIe Gen4 GPGPU cards, 2 x 10Gb/s BASE-T LAN ports (Intel® X550-AT2), 8-Channel RDIMM/LRDIMM DDR4 per processor, 32 x DIMMs

Form Factor:
4U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
10
Drive Interface:
SATA
Server Processor:
AMD EPYC 7003 Processor
Memory DIMMS:
32x 3200MHz
GPU Slots:
8x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
4.1TB
Configure From: $9,767
Configure
CyberServe EPYC EP2-4124GS-TNR GPU Server

8 PCI-E 4.0 x16 + 3 PCI-E 4.0 x8 slots, Up to 24 Hot-swap 2.5" drive bays, 2 GbE LAN ports (rear)

Form Factor:
4U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
24
Drive Interface:
SATA, NVMe
Server Processor:
AMD EPYC 7003 Processor
Memory DIMMS:
32x 3200MHz
GPU Slots:
8x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
4.1TB
Configure From: $9,850
Configure
CyberServe Xeon SP2-208-4G NVMe G4 GPU

Supports up to 8 x double slot Gen4 GPU cards, dual 10Gb/s BASE-T LAN ports, redundant power supply, 8 x 2.5" NVMe/SATA hot-swappable bays. Built for AI & HPC

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
8
Drive Interface:
SATA, NVMe
Server Processor:
Intel Xeon Scalable Processor Gen 4
Memory DIMMS:
24x 4800MHz
GPU Slots:
8x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
3.1TB
Configure From: $11,260
Configure
Ultra High-Performance 3.5" Drives NVMe Drives 
CyberServe EPYC EP2-G482-Z50 GPU Server

10 x FHFL Gen3 expansion slots for GPU cards, 2 x 10Gb/s BASE-T LAN ports (Intel® X550-AT2), 8 x 2.5" NVMe, 2 x SATA/SAS 2.5" hot-swappable HDD/SSD bays, 12 x 3.5" SATA/SAS hot-swappable HDD/SSD bays

Form Factor:
4U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
22
Drive Interface:
SATA, 12Gb/s SAS, NVMe
Server Processor:
AMD EPYC 7003 Processor
Memory DIMMS:
32x 3200MHz
GPU Slots:
10x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
4.1TB
Configure From: $11,543
Configure
CyberServe EPYC EP2 424-4NVMe-G GPU Server G4

Dual AMD EPYC 9004 Series 8x GPU Server - 24x 2.5" NVMe / SATA / SAS + 4x NVMe Dedicated Drives

Form Factor:
4U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
24
Drive Interface:
SATA, 12Gb/s SAS, NVMe, M.2
Server Processor:
AMD EPYC 9004 Series
Memory DIMMS:
24x 4800MHz
GPU Slots:
8x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
High RAM Capacity, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
3.1TB
Configure From: $11,561
Configure
Ultra High-Performance 3.5" Drives NVMe Drives 10Gb Lan 
CyberServe Xeon SP2 412-8G GPU G3

Up to 10x PCIe Gen4 GPGPU cards, dual 10Gb/s BASE-T LAN, redundant power supply.

Form Factor:
4U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
12
Drive Interface:
SATA, 12Gb/s SAS, NVMe
Server Processor:
Xeon Scalable Processor - Gen 3
Memory DIMMS:
32x 3200MHz
GPU Slots:
10x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard, Front I/O Ports
Max RAM Capacity:
4.1TB
Configure From: $11,583
Configure
CyberServe Xeon SP2-412T G3 GPU

Supports 10x double slot GPU cards, redundant power supply, 12 x 3.5/2.5" NVMe/SATA hot-swappable bays. Built for AI & HPC

Form Factor:
4U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
12
Drive Interface:
SATA, 12Gb/s SAS, NVMe, M.2
Server Processor:
Xeon Scalable Processor - Gen 3
Memory DIMMS:
32x 3200MHz
GPU Slots:
10x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard, Front I/O Ports
Max RAM Capacity:
4.1TB
Configure From: $13,957
Configure
CyberServe Xeon SP2-412 NVMe G4 GPU

Supports 10x double slot GPU cards, dual 10Gb/s BASE-T LAN ports, redundant power supply, 12 x 3.5/2.5" NVMe/SATA hot-swappable bays. Built for AI & HPC

Form Factor:
4U
Drive Bays:
Hot-Swap Drives
HDD Size:
3.5" Drives
Qty Drives:
12
Drive Interface:
SATA, 12Gb/s SAS, NVMe, M.2
Server Processor:
Intel Xeon Scalable Processor Gen 4
Memory DIMMS:
32x 4800MHz
GPU Slots:
10x Double / Single Width GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard, Front I/O Ports
Max RAM Capacity:
4.1TB
Configure From: $24,111
Configure
Promotion - Test Drive This Server 
CyberServe EPYC EP2-2124GQ-NART GPU Server

High Density 2U System with NVIDIA® HGX™ A100 4-GPU, Direct connect PCI-E Gen4 Platform with NVIDIA® NVLink™, IPMI 2.0 + KVM with dedicated 10G LAN

Form Factor:
2U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
4
Drive Interface:
SATA, 12Gb/s SAS, NVMe
Server Processor:
AMD EPYC 7003 Processor
Memory DIMMS:
32x 3200MHz
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Redundant Power Supply - Standard
Max RAM Capacity:
4.1TB
Configure From: $80,838
Configure
Promotion - Test Drive This Server 
CyberServe EPYC EP2-4124GO-NART GPU Server

8x NVIDIA A100 Gen4, 6x NVLink Switch Fabric, 2x M.2 on board and 4x hybrid SATA/NVMe bays, 8x PCIe x16 Gen4 slots

Form Factor:
4U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
6
Drive Interface:
NVMe, M.2
Server Processor:
AMD EPYC 7003 Processor
Memory DIMMS:
32x 3200MHz
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, High RAM Capacity, Extra Expansion Slots, Redundant Power Supply - Standard
Max RAM Capacity:
4.1TB
Configure From: $147,017
Configure
CyberServe Xeon SP2-308 NVMe G4 GPU

Supports 4x SXM5 GPU Modules, dual 10Gb/s BASE-T LAN ports, redundant power supply, 8 x 2.5" NVMe/SATA hot-swappable bays. Built for AI & HPC

Form Factor:
3U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
8
Drive Interface:
SATA, NVMe
Server Processor:
Intel Xeon Scalable Processor Gen 4
Memory DIMMS:
16x 4800MHz
GPU Slots:
4x SXM GPU
GPU Support:
Tesla GPU Optimised
Features:
VMware Compatible, Extra Expansion Slots, Redundant Power Supply - Standard, Front I/O Ports
Max RAM Capacity:
2TB
Configure From: $149,438
Configure
CyberServe Xeon SP2-824 NVMe G4 GPU

Supports 8x HGX H100 GPUs, dual 10Gb/s BASE-T LAN ports, redundant power supply, 16 x 2.5" NVMe, 8x SATA hot-swappable bays. Built for AI Training and Inferencing.

Form Factor:
8U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
24
Drive Interface:
SATA, NVMe, M.2
Server Processor:
Intel Xeon Scalable Processor Gen 4
Memory DIMMS:
32x 4800MHz
GPU Slots:
8x SXM GPU
GPU Support:
Tesla GPU Optimised
Features:
High RAM Capacity, Extra Expansion Slots, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
4.1TB
Configure From: $293,969
Configure
CyberServe EPYC EP2-824 NVMe G4 GPU Server

Supports 8x HGX H100 GPUs, Dual AMD EPYC 9004 Series 8x GPU Server - 16x 2.5" NVMe + 8x SATA Drives Hot-Swappable bays. Built for AI Training and Inferencing.

Form Factor:
8U
Drive Bays:
Hot-Swap Drives
HDD Size:
2.5" Drives
Qty Drives:
24
Drive Interface:
SATA, NVMe, M.2
Server Processor:
AMD EPYC 9004 Series
Memory DIMMS:
24x 4800MHz
GPU Slots:
8x SXM GPU
GPU Support:
Tesla GPU Optimised
Features:
High RAM Capacity, Full Height/Length Expansion, Redundant Power Supply - Standard
Max RAM Capacity:
3.1TB
Configure From: $295,065
Configure
NVIDIA DGX A100

NVIDIA DGX A100 with 8x NVIDIA A100 80GB Tensor Core GPUs, Dual AMD Rome 7742 Processors, 2TB Memory, 2x 1.92TB NVMe M.2 & 8x 3.84TB NVMe U.2.

Form Factor:
8U
Drive Bays:
Fixed Drives
HDD Size:
2.5" Drives
Qty Drives:
8
Drive Interface:
NVMe, M.2
Server Processor:
AMD EPYC 7003 Processor
GPU Slots:
8x A100 Tensor Core GPUs
GPU Support:
Tesla GPU Optimised
Features:
High RAM Capacity, Redundant Power Supply - Standard
Max RAM Capacity:
2TB
Configure From: $366,436
Configure
NVIDIA DGX H100

NVIDIA DGX H100 with 8x NVIDIA H100 Tensor Core GPUs, Dual Intel® Xeon® Platinum 8480C Processors, 2TB Memory, 2x 1.92TB NVMe M.2 & 8x 3.84TB NVMe U.2.

Form Factor:
8U
Drive Bays:
Fixed Drives
HDD Size:
2.5" Drives
Qty Drives:
8
Drive Interface:
NVMe, M.2
Server Processor:
Intel Xeon Scalable Processor Gen 4
GPU Slots:
8x H100 Tensor Core GPUs
GPU Support:
Tesla GPU Optimised
Features:
High RAM Capacity, Redundant Power Supply - Standard
Max RAM Capacity:
2TB
Configure From: $486,716
Configure
                      NVIDIA P40    NVIDIA P100 PCIe    NVIDIA V100S    NVIDIA Titan V    NVIDIA T4
Architecture          Pascal        Pascal              Volta           Volta             Turing
SMs                   30            56                  ...             80                72
CUDA Cores            3,840         3,584               5,120           5,120             2,560
Tensor Cores          N/A           N/A                 640             640               320
Frequency             1,303 MHz     1,126 MHz           1,267 MHz       850 MHz           1,590 MHz
TFLOPs (double)       -             4.7                 8.2             7.5               -
TFLOPs (single)       12            9.3                 16.4            15                8.1
TFLOPs (half/Tensor)  -             18.7                130             30                65
Cache                 3 MB L2       4 MB L2             6 MB L2         4.5 MB L2         4 MB L2
Max. Memory           24 GB         16 GB               32 GB           12 GB             16 GB
Memory B/W            346 GB/s      720 GB/s            1,134 GB/s      652 GB/s          350 GB/s


The NVIDIA Tesla P40 GPU accelerator works with NVIDIA Quadro vDWS software and is the first system to combine an enterprise-grade visual computing platform for simulation, HPC rendering, and design with virtual applications, desktops, and workstations. This gives organizations the freedom to virtualise both complex visualization and compute (CUDA and OpenCL) workloads.

The NVIDIA Tesla P40 taps into the industry-leading NVIDIA Pascal architecture to deliver up to twice the professional graphics performance of the NVIDIA Tesla M60 (refer to performance graph). With 24 GB of framebuffer and 24 NVENC encoder sessions, it supports 24 virtual desktops (1 GB profile) or 12 virtual workstations (2 GB profile), providing the best end-user scalability per GPU. This powerful GPU also supports eight different user profiles, so virtual GPU resources can be efficiently provisioned to meet the needs of the user. And it's available in a wide variety of industry-standard 2U servers.

With NVIDIA virtual GPU software and the NVIDIA Tesla P40, organizations can now virtualise high-end applications with large, complex datasets for rendering and simulations, as well as virtualizing modern business applications. Resource allocation ensures that users have the right GPU acceleration for the task at hand. NVIDIA software shares the power of Tesla P40 GPUs across multiple virtual workstations, desktops, and apps. This means you can deliver an immersive user experience for everyone from office workers to mobile professionals to designers through virtual workspaces with improved management, security, and productivity.

Exceptional User Experience

Get the ultimate user experience for any workload or vGPU profile. NVIDIA Quadro vDWS software with the Tesla P40 GPU supports compute workloads (CUDA and OpenCL) for every vGPU, enabling professional and design engineering workflows at peak performance. The Tesla P40 delivers up to 2X the graphics performance compared to the M60 (refer to performance graph). Users can count on consistent performance with the new resource scheduler, which provides deterministic QoS and eliminates the problem of a "noisy neighbor."

Optimal Management and Monitoring

Management tools give you vGPU visibility at the host or guest level, with application-level monitoring capabilities. This lets IT intelligently design, manage, and support their end users' experience. End-to-end management and monitoring also deliver real-time insight into GPU performance. And integration with VMware vRealize Operations (vROps), Citrix Director and XenCenter puts flexibility and control in the palm of your hand.

Flexible GPU Infrastructure

Support up to 50% more users per Pascal GPU relative to a single Maxwell GPU, for scaling high-performance virtual graphics and compute. More granular user profiles give you more precise provisioning of vGPU resources, and larger profile sizes - up to 3X larger GPU framebuffer than the M60 - support your most demanding users. The P40 brings greater utilization and flexibility to your NVIDIA Quadro vDWS solution, helping you drive down overall TCO.


NVIDIA Tesla P100 GPU accelerators are the world's first AI supercomputing data center GPUs. They tap into the NVIDIA Pascal GPU architecture to deliver a unified platform for accelerating both HPC and AI. With higher performance and fewer, lightning-fast nodes, the Tesla P100 enables data centers to dramatically increase throughput while also saving money.

With over 500 HPC applications accelerated - including 15 of the top 15 - as well as all deep learning frameworks, every HPC customer can deploy accelerators in their data centers.

Tesla P100 for PCIe enables mixed-workload HPC data centers to realise a dramatic jump in throughput while saving money. For example, a single GPU-accelerated node powered by four Tesla P100s interconnected with PCIe replaces up to 32 commodity CPU nodes for a variety of applications. Completing all the jobs with far fewer powerful nodes means that customers can save up to 70% in overall data center costs.

The Tesla P100 is reimagined from silicon to software, crafted with innovation at every level. Each groundbreaking technology delivers a dramatic jump in performance to inspire the creation of the world's fastest compute node.

Exponential Performance Leap with Pascal Architecture

The NVIDIA Pascal architecture enables the Tesla P100 to deliver superior performance for HPC and hyperscale workloads. With more than 21 teraflops of FP16 performance, Pascal is optimized to drive exciting new possibilities in deep learning applications. Pascal also delivers over 5 and 10 teraflops of double- and single-precision performance for HPC workloads.

Unprecedented Efficiency with CoWoS with HBM2

The Tesla P100 tightly integrates compute and data on the same package by adding CoWoS (Chip-on-Wafer-on-Substrate) with HBM2 technology to deliver 3X the memory performance over the NVIDIA Maxwell architecture. This provides a generational leap in time-to-solution for data-intensive applications.

Applications at Massive Scale with NVIDIA NVLink

Performance is often throttled by the interconnect. The revolutionary NVIDIA NVLink high-speed bidirectional interconnect is designed to scale applications across multiple GPUs by delivering 5X higher performance compared to today's best-in-class technology.
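
From the software side, applications typically reach a fast GPU-to-GPU interconnect such as NVLink through CUDA peer-to-peer memory access. The sketch below is a minimal illustration only - device IDs 0 and 1 and the 256 MB buffer size are assumptions, not tied to any specific server above - that checks whether two GPUs can address each other's memory, enables peer access and performs a direct device-to-device copy. Compiled with nvcc, the same code runs over PCIe or NVLink; the interconnect simply determines the achievable bandwidth.

    // Sketch: enable peer-to-peer access between two GPUs (e.g. over NVLink or PCIe).
    // Device IDs 0 and 1 and the 256 MB buffer size are illustrative assumptions.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int canAccess01 = 0, canAccess10 = 0;
        cudaDeviceCanAccessPeer(&canAccess01, 0, 1);   // can GPU 0 address GPU 1's memory?
        cudaDeviceCanAccessPeer(&canAccess10, 1, 0);
        if (!canAccess01 || !canAccess10) {
            printf("Peer access not supported between GPU 0 and GPU 1\n");
            return 0;
        }

        size_t bytes = 256ULL << 20;                   // 256 MB test buffer
        float *buf0 = nullptr, *buf1 = nullptr;

        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);              // second argument (flags) must be 0
        cudaMalloc(&buf0, bytes);

        cudaSetDevice(1);
        cudaDeviceEnablePeerAccess(0, 0);
        cudaMalloc(&buf1, bytes);

        // Direct GPU-to-GPU copy; with peer access enabled this avoids staging through host memory.
        cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
        cudaDeviceSynchronize();

        cudaFree(buf1);
        cudaSetDevice(0);
        cudaFree(buf0);
        printf("Peer-to-peer copy complete\n");
        return 0;
    }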

Simpler Programming with Page Migration Engine

Page Migration Engine frees developers to focus more on tuning for computing performance and less on managing data movement. Applications can now scale beyond the GPU's physical memory size to a virtually limitless amount of memory.
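
In CUDA this capability is exposed through Unified Memory: a single cudaMallocManaged allocation is visible to both the CPU and the GPU, and pages migrate on demand as either side touches them. The code below is a minimal sketch only; the array size and values are arbitrary assumptions.

    // Sketch: Unified Memory allocation with on-demand page migration.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale(float *x, size_t n, float a) {
        size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
        if (i < n) x[i] *= a;
    }

    int main() {
        size_t n = 1ULL << 28;                            // ~1 GB of floats (illustrative size)
        float *x = nullptr;
        cudaMallocManaged(&x, n * sizeof(float));         // one pointer, visible to CPU and GPU

        for (size_t i = 0; i < n; ++i) x[i] = 1.0f;       // first touched on the host

        scale<<<(unsigned)((n + 255) / 256), 256>>>(x, n, 2.0f);  // pages migrate to the GPU on demand
        cudaDeviceSynchronize();

        printf("x[0] = %f\n", x[0]);                      // pages migrate back when the host reads them
        cudaFree(x);
        return 0;
    }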


NVIDIA TITAN V is the most powerful graphics card ever created for the PC, driven by the world's most advanced architecture - NVIDIA Volta. NVIDIA's supercomputing GPU architecture is now here for your PC, and fueling breakthroughs in every industry.

AI is not defined by any one industry. It exists in fields of supercomputing, healthcare, financial services, big data analytics, and gaming. It is the future of every industry and market because every enterprise needs intelligence, and the engine of AI is the NVIDIA GPU computing platform.

NVIDIA Volta is the new driving force behind artificial intelligence. Volta will fuel breakthroughs in every industry. Humanity's moonshots like eradicating cancer, intelligent customer experiences, and self-driving vehicles are within reach of this next era of AI.

Every industry needs AI, and with this massive leap forward in speed, AI can now be applied to every industry. Equipped with 640 Tensor Cores, Volta delivers over 100 teraflops (TFLOPS) of deep learning performance, over a 5X increase compared to the prior-generation NVIDIA Pascal architecture.
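
For programmers, Tensor Cores are reachable directly through CUDA's warp matrix (WMMA) API, although most workloads use them indirectly via cuDNN, cuBLAS or a deep learning framework. The fragment below is only a sketch of the mixed-precision pattern those deep learning TFLOPS refer to - one warp multiplying a 16x16 FP16 tile and accumulating in FP32 - and assumes a GPU of compute capability 7.0 or newer.

    // Sketch: one warp multiplying a 16x16 FP16 tile on Tensor Cores (requires sm_70 or newer).
    #include <cuda_fp16.h>
    #include <mma.h>
    using namespace nvcuda;

    __global__ void tile_gemm(const half *A, const half *B, float *C) {
        // Fragments live in registers and are owned collectively by the warp.
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> c;

        wmma::fill_fragment(c, 0.0f);
        wmma::load_matrix_sync(a, A, 16);      // leading dimension 16
        wmma::load_matrix_sync(b, B, 16);
        wmma::mma_sync(c, a, b, c);            // C = A*B + C, executed on the Tensor Cores
        wmma::store_matrix_sync(C, c, 16, wmma::mem_row_major);
    }

    // Launch with a single warp: tile_gemm<<<1, 32>>>(dA, dB, dC);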

Humanity's greatest challenges will require the most powerful computing engine for both computational and data science. With over 21 billion transistors, Volta is the most powerful GPU architecture the world has ever seen. It pairs NVIDIA CUDA and Tensor Cores to deliver the performance of an AI supercomputer in a GPU.

Volta uses next-generation, revolutionary NVIDIA NVLink high-speed interconnect technology. This delivers 2X the throughput compared to the previous generation of NVLink. This enables more advanced model and data parallel approaches for strong scaling to achieve the absolute highest application performance.


NVIDIA T4 GPUs power the planet's most reliable mainstream servers. They fit easily into standard data center infrastructures. Designed into a low-profile, 70-watt package, T4 is powered by NVIDIA Turing Tensor Cores, supplying innovative multi-precision performance to accelerate a vast range of modern applications.

It is almost certain that we are heading towards a future where each of your customer interactions and every one of your products and services will be influenced and enhanced by artificial intelligence. AI is going to become the driving force behind all future business, and whoever adapts first to this change will hold the key to long-term business success. We realise the future will require a computing platform able to accelerate the full diversity of modern AI, allowing businesses to reimagine how they meet customer demands and to cost-effectively scale artificial intelligence-based services.

The NVIDIA T4 GPU accelerates diverse cloud workloads, including high-performance computing, data analytics, deep learning training and inference, graphics and machine learning. T4 features multi-precision Turing Tensor Cores and new RT Cores. It is based on the NVIDIA Turing architecture and comes in a very energy-efficient, small PCIe form factor. T4 delivers ground-breaking performance at scale.

T4 harnesses revolutionary Turing Tensor Core technology featuring multi-precision computing to deal with diverse workloads. Capable of truly blazing fast speeds, T4 delivers up to 40x higher performance than CPUs.
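
As a rough illustration of what multi-precision computing looks like in code - a sketch under assumed sizes and values, not a production inference path, which would normally go through TensorRT or cuDNN - the kernel below stores its data in FP16 using cuda_fp16.h while accumulating in FP32 for accuracy.

    // Sketch: FP16 storage with FP32 accumulation, the mixed-precision pattern Turing Tensor Cores accelerate.
    #include <cstdio>
    #include <cuda_fp16.h>
    #include <cuda_runtime.h>

    __global__ void dot_fp16(const __half *a, const __half *b, float *result, int n) {
        float acc = 0.0f;                                  // accumulate in FP32
        for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; i += gridDim.x * blockDim.x)
            acc += __half2float(a[i]) * __half2float(b[i]);
        atomicAdd(result, acc);                            // crude reduction, adequate for a sketch
    }

    int main() {
        const int n = 1 << 20;                             // illustrative size
        __half *a, *b; float *sum;
        cudaMallocManaged(&a, n * sizeof(__half));
        cudaMallocManaged(&b, n * sizeof(__half));
        cudaMallocManaged(&sum, sizeof(float));
        for (int i = 0; i < n; ++i) { a[i] = __float2half(0.5f); b[i] = __float2half(2.0f); }
        *sum = 0.0f;

        dot_fp16<<<256, 256>>>(a, b, sum, n);
        cudaDeviceSynchronize();
        printf("dot = %f (expected %d)\n", *sum, n);       // 0.5 * 2.0 * n = n

        cudaFree(a); cudaFree(b); cudaFree(sum);
        return 0;
    }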

User engagement will be a vital component of successful AI implementation, with responsiveness being one of the main keys. This will be especially apparent in services such as visual search, conversational AI and recommender systems. Over time, as models continue to advance and increase in complexity, ever-growing compute capability will be required. T4 provides up to 40x better throughput, allowing more requests to be served in real time.

The medium of online video is quite possibly the number one way of delivering information in the modern age. As we move forward into the future, the volume of online video will only continue to grow exponentially. Simultaneously, the demand for ways to efficiently search and gain insights from video continues to grow.

T4 provides ground-breaking performance for AI video applications, featuring dedicated hardware transcoding engines that deliver 2x the decoding performance of previous-generation GPUs. T4 can decode nearly 40 full-HD video streams, making it simple to integrate scalable deep learning into video pipelines to provide inventive, smart video services.


With 32 GB of HBM2 memory and powered by the NVIDIA Volta GPU architecture, the NVIDIA Tesla V100S delivers the performance of up to 100 CPUs within a single GPU, allowing data engineers, researchers and scientists to take on challenges once believed to be impossible.

The NVIDIA Tesla V100S is the most advanced data center GPU ever created to accelerate AI, graphics and HPC. The Tesla V100S is the crown jewel of the Tesla data center computing platform for deep learning, graphics and HPC. Over 450 HPC applications and every major deep learning framework can be accelerated by the Tesla platform, which is available everywhere from desktops to servers to cloud services, providing huge performance gains and cost-saving opportunities.

The previous Tesla V100 had been hailed as the most advanced data center graphics card, and this new GPU takes things up a notch. Designed for AI acceleration, high-performance computing, graphics and data science, the NVIDIA Tesla V100S is a real game changer.

The Tesla V100S is an upgrade over the Tesla V100. While both seem similar on the outside, featuring a dual-slot design and a cooler, the performance of the V100S goes above and beyond what was possible with the V100.

The main differences between the two are memory and clock speeds: the NVIDIA Tesla V100S is available only in a 32 GB HBM2 version and boasts higher boost clock speeds (1,601 MHz) and memory bandwidth (1,134 GB/s).

With this enhanced clock speed, the V100S delivers up to 17.1% higher single- and double-precision performance, at 16.4 TFLOPS and 8.2 TFLOPS respectively, compared to the original V100. Tensor performance has also been enhanced by 16.1%, now reaching 130 TFLOPS.
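
Those percentages follow from the peak figures, assuming the commonly quoted V100 PCIe baseline of roughly 14 TFLOPS single precision, 7 TFLOPS double precision and 112 TFLOPS Tensor performance:

\[
\frac{16.4}{14} \approx 1.171, \qquad \frac{8.2}{7} \approx 1.171, \qquad \frac{130}{112} \approx 1.161
\]

That is, roughly 17.1% higher single- and double-precision throughput and 16.1% higher Tensor throughput.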


Broadberry GPU Servers harness the processing power of NVIDIA Tesla graphics processing units for applications such as image and video processing, computational biology and chemistry, fluid dynamics simulation, CT image reconstruction, seismic analysis, ray tracing, and much more.

As computing evolves and processing moves from the CPU to co-processing between the CPU and GPU, NVIDIA invented the CUDA parallel computing architecture to harness these performance benefits.
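
As a concrete, minimal example of that CUDA model - a sketch only, independent of any particular server configuration above - the kernel below spreads an element-wise vector addition across thousands of GPU threads:

    // Sketch: the canonical CUDA vector addition - each GPU thread handles one element.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void vadd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *a, *b, *c;
        cudaMallocManaged(&a, bytes);
        cudaMallocManaged(&b, bytes);
        cudaMallocManaged(&c, bytes);
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        vadd<<<(n + 255) / 256, 256>>>(a, b, c, n);   // ~4,096 blocks of 256 threads
        cudaDeviceSynchronize();

        printf("c[0] = %f\n", c[0]);                  // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }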

Speak to Broadberry GPU computing experts to find out more.


Accelerating scientific discovery, visualizing big data for insights, and providing smart services to consumers are everyday challenges for researchers and engineers. Solving these challenges takes increasingly complex and precise simulations, the processing of tremendous amounts of data, or training sophisticated deep learning networks. These workloads also require accelerating data centers to meet the growing demand for exponential computing.

NVIDIA Tesla is the world's leading platform for accelerated data centers, deployed by some of the world's largest supercomputing centers and enterprises. It combines GPU accelerators, accelerated computing systems, interconnect technologies, development tools and applications to enable faster scientific discoveries and big data insights.

At the heart of the NVIDIA Tesla platform are the massively parallel GPU accelerators that provide dramatically higher throughput for compute-intensive workloads - without increasing the power budget and physical footprint of data centers.


Call a Broadberry Storage & Server Specialist Now: 1 800 496 9918




Extensive Testing

Before leaving our build and configuration facility, all of our server and storage solutions undergo an extensive 48-hour testing procedure. This, along with high-quality, industry-leading components, ensures all of our systems meet the strictest quality guidelines.


Customization Service

Our main objective is to offer great-value, high-quality server and storage solutions. We understand that every company has different requirements, and as such we offer a complete customization service to provide server and storage solutions that meet your individual needs.

Trusted by the World's Biggest Brands

We have established ourselves as one of the biggest storage providers in the US and, since 1989, have been trusted as the preferred supplier of server and storage solutions to some of the world's biggest brands, including: