Categories

Browse detailed information about models, CPUs, and GPUs used in LLM benchmarks.

Models

Llama 2 7B

Llama 2 7B is an open-source large language model developed by Meta AI. Released in July 2023, it's the smallest model in the Llama 2 family but offe...

CPUs

AMD EPYC 7000 Series (Rome - Milan)

Summary Table of Key Specifications CPU Model Manufacturer Architecture Process Node Cores/Threads Base / Boost Clock Supported ISA Cache (L1d/L2/...

AMD EPYC 9000 Series (Genoa - Turin)

Summary Table – AMD EPYC Genoa vs. Turin CPU Model AMD EPYC 9654 (Genoa, 4th Gen) AMD EPYC 9965 (Turin Dense, 5th Gen) Manufacturer AMD AMD ...

AMD Ryzen Threadripper PRO CPUs

Summary Table of Key Specifications CPU Model Manufacturer Architecture (µarch) Process Node Cores (P+E) Threads Base Clock Max Turbo Supported IS...

Apple M1 Series CPUs

1. Summary of Specifications (M1, M1 Pro, M1 Max) The Apple M1 series (M1, M1 Pro, M1 Max) are Arm-based system-on-chip (SoC) processors designed by ...

Apple M2 Series CPUs

Summary Table CPU (Model) Manufacturer Architecture Process Node Core Count (Perf + Eff) Thread Count Base Clock (GHz) Max Turbo (GHz) Supported I...

Apple M3 Series CPUs

Summary of Apple M3 CPU (for Local LLM Inference) Feature Apple M3 SoC CPU Specifications CPU Name & Model Apple M3 (M3 series SoC) (Appl...

Apple M4 Series CPUs

1. Summary Table Feature Apple M4 Apple M4 Pro Apple M4 Max Manufacturer Apple (SoC design); fab by TSMC Apple (SoC design); fab by TSMC Apple...

Mac Studio M3 Ultra (512 GB RAM) for Local LLM Inference

1. Summary Specifications CPU Model Manufacturer Architecture Process Node Cores (P+E) Threads Clock Speed Max Turbo Instruction Sets & Featu...

GPUs

AMD Radeon Instinct MI60 Technical Report – LLM Inference Capabilities

1. Summary Table GPU Name Manufacturer Architecture Process Node (nm) Stream Processors (SP) AI Accelerators Base Clock (MHz) Boost Clock (MHz) Me...

AMD Radeon RX 7900 series

1. Summary Table GPU Model Manufacturer Architecture Process Node Stream Processors AI Accelerators Base Clock Boost Clock Memory Type Memory Size...

NVIDIA A100 Series

Summary of NVIDIA A100 Series GPUs (for Local LLM Inference) GPU Model Manufacturer Architecture Process Node CUDA Cores Tensor Cores Base Clock B...

NVIDIA RTX A6000 GPUs

Summary of Key Specifications (NVIDIA RTX A6000) GPU NVIDIA RTX A6000 (Quadro/Workstation GPU) Manufacturer NVIDIA Architecture Ampere (CUDA ...

NVIDIA GB10 DIGITS and GB10 GPU Technical Analysis

NVIDIA’s Project DIGITS is a compact AI supercomputer for the desktop, powered by the new GB10 Grace-Blackwell Superchip. This device (the small gold...

NVIDIA H100 Series

Summary of NVIDIA H100 Series Specifications The table below summarizes key specifications of NVIDIA’s H100 series data-center GPUs, highlighting the...

NVIDIA Jetson

Jetson GPU Hardware Summary (for LLM Inference) Jetson Module (GPU) Manufacturer Architecture Process Node CUDA Cores Tensor / AI Cores Base Clock ...

NVIDIA P102-100

1. Summary Table Specification Details GPU Name (Model) NVIDIA P102-100 (NVIDIA to release a crypto-mining card based on the GP102-100 GPU) Manufacturer NVIDI...

NVIDIA RTX 3090 – Capabilities and Performance for Local LLM Inference

Summary Table Feature NVIDIA GeForce RTX 3090 Specification GPU Name / Model GeForce RTX 3090 (NVIDIA GeForce RTX 3090 specs) Manufacturer N...

NVIDIA RTX 5090 – Technical Analysis for Local LLM Inference

GPU Summary: The NVIDIA GeForce RTX 5090 (“Blackwell” architecture) is a flagship GPU built for extreme compute workloads, making it highly suitable ...
