Aetina NVIDIA MGX™ Short-Depth Server
Accelerate Your Enterprise AI and On-premises LLMs with Our Edge AI Server
The AEX-2UA1 NVIDIA MGX™ short-depth edge server is a single-socket server built on a modular architecture, integrating an Intel® Xeon® 6700-series processor with Performance-cores (P-cores) and dual high-performance double-width GPUs connected via NVIDIA NVLink™. It delivers leading edge-AI computing performance in a compact 2U form factor while offering maximum flexibility and compatibility with both current and future hardware.
AEX-2UA1 Front-Access x86 Short-Depth Server

- Single Intel® Xeon® 6700-series processor with P-cores.
- 2U server - 420mm (16.5") short-depth rackmount server.
- 2x PCIe Gen5 x16 slots (FHFL) and 1x PCIe Gen5 x8 slot (FHFL) – supporting up to 2 double-width PCIe GPUs.
- 1x PCIe Gen5 x16 slot (FHHL) - supporting NVIDIA BlueField®-3 or NVIDIA ConnectX®-7.
- Up to 4x Hot-Swap PCIe Gen5 E1.S and 2x PCIe Gen5 M.2 slots.
- 1+1 redundant Titanium-level power supplies.
Powered by a Single Intel Xeon 6700-series Processor with P-cores and Dual Double-width GPUs to Unleash Incredible AI Power
The AEX-2UA1 short-depth edge server is powered by the latest Intel Xeon 6700-series processors with P-cores, providing exceptional performance for compute-intensive AI and analytics workloads. With support for up to two double-width GPUs linked by an NVIDIA NVLink™ bridge, it delivers strong AI computing power and ultra-fast GPU-to-GPU interconnection. Designed to optimize AI training and inference within constrained spaces at the edge, it is well suited to enterprise server rooms and non-rack locations.
Designed with Modularized Architecture
Delivering greater scalability for enterprises in the present and future
The AEX-2UA1 NVIDIA MGX™ short-depth edge server is built on a modularized architecture, offering exceptional flexibility, expandability, and compatibility with current and future NVIDIA hardware. This makes it an ideal solution for enterprises seeking to optimize compute and accelerator hardware, maximizing the efficiency of AI-driven solutions while reducing time to market.

Accelerate On-premises Enterprise AI Deployment
As generative AI and large language models (LLMs) permeate business and consumer lifestyles, organizations seek reliable, compact, and cost-effective edge servers to accommodate their diverse accelerated computing and unique AI workloads.
The AEX-2UA1 accelerates the deployment of diverse AI-driven applications at the edge, empowering enterprises to handle sensitive data with private LLMs, vision language models (VLMs), and other enterprise AI applications. This is particularly valuable in sectors such as finance, healthcare, and 5G telco.

Standard Delivery | 5 to 7 business days
Express Delivery | 3 to 5 business days
*subject to stock availability
We work hard to meet the delivery times stated above. However, we are facing a global shortage of some components and manufacturers.
If you have any questions or concerns, please contact us to confirm the delivery time.
We can advise on the availability of alternative hardware and recommend in-stock components for faster shipping options.