    Categories

    Browse detailed information about models, CPUs, and GPUs used in LLM benchmarks.

    Models

    Llama 2 7B

    Llama 2 7B is an open-source large language model developed by Meta AI. Released in July 2023, it's the smallest model in the Llama 2 family but offe...

    View Details

    CPUs

    AMD EPYC 7000 Series (Rome - Milan)

    Summary Table of Key Specifications CPU Model Manufacturer Architecture Process Node Cores/Threads Base / Boost Clock Supported ISA Cache (L1d/L2/...

    View Details
    AMD EPYC 9000 Series (Genoa - Turin)

    Summary Table – AMD EPYC Genoa vs. Turin CPU Model AMD EPYC 9654 (Genoa, 4th Gen) AMD EPYC 9965 (Turin Dense, 5th Gen) Manufacturer AMD AMD ...

    View Details
    AMD Ryzen Threadripper PRO CPUs

    Summary Table of Key Specifications CPU Model Manufacturer Architecture (µarch) Process Node Cores (P+E) Threads Base Clock Max Turbo Supported IS...

    View Details
    Apple M1 Series CPUs

    1. Summary of Specifications (M1, M1 Pro, M1 Max) The Apple M1 series (M1, M1 Pro, M1 Max) are Arm-based system-on-chip (SoC) processors designed by ...

    View Details
    Apple M2 Series CPUs

    Summary Table CPU (Model) Manufacturer Architecture Process Node Core Count (Perf + Eff) Thread Count Base Clock (GHz) Max Turbo (GHz) Supported I...

    View Details
    Apple M3 Series CPUs

    Summary of Apple M3 CPU (for Local LLM Inference) Feature Apple M3 SoC CPU Specifications CPU Name & Model Apple M3 (M3 series SoC) (Appl...

    View Details
    Apple M4 Series CPUs

    1. Summary Table Feature Apple M4 Apple M4 Pro Apple M4 Max Manufacturer Apple (SoC design); fab by TSMC Apple (SoC design); fab by TSMC Apple...

    View Details
    Mac Studio M3 Ultra (512 GB RAM) for Local LLM Inference

    1. Summary Specifications CPU Model Manufacturer Architecture Process Node Cores (P+E) Threads Clock Speed Max Turbo Instruction Sets & Featu...

    View Details

    GPUs

    AMD Radeon Instinct MI60 Technical Report – LLM Inference Capabilities

    1. Summary Table GPU Name Manufacturer Architecture Process Node (nm) Stream Processors (SP) AI Accelerators Base Clock (MHz) Boost Clock (MHz) Me...

    View Details
    AMD Radeon RX 7900 Series

    1. Summary Table GPU Model Manufacturer Architecture Process Node Stream Processors AI Accelerators Base Clock Boost Clock Memory Type Memory Size...

    View Details
    NVIDIA A100 Series

    Summary of NVIDIA A100 Series GPUs (for Local LLM Inference) GPU Model Manufacturer Architecture Process Node CUDA Cores Tensor Cores Base Clock B...

    View Details
    NVIDIA RTX A6000 GPUs

    Summary of Key Specifications (NVIDIA RTX A6000) GPU NVIDIA RTX A6000 (Quadro/Workstation GPU) Manufacturer NVIDIA Architecture Ampere (CUDA ...

    View Details
    NVIDIA GB10 DIGITS and GB10 GPU Technical Analysis

    NVIDIA’s Project DIGITS is a compact AI supercomputer for the desktop, powered by the new GB10 Grace-Blackwell Superchip. This device (the small gold...

    View Details
    NVIDIA H100 Series

    Summary of NVIDIA H100 Series Specifications The table below summarizes key specifications of NVIDIA’s H100 series data-center GPUs, highlighting the...

    View Details
    NVIDIA Jetson

    Jetson GPU Hardware Summary (for LLM Inference) Jetson Module (GPU) Manufacturer Architecture Process Node CUDA Cores Tensor / AI Cores Base Clock ...

    View Details
    NVIDIA P102-100 Technical Report – LLM Inference Capabilities

    1. Summary Table Specification Details GPU Name (Model) NVIDIA P102-100 ([NVIDIA To Release A Crypto-Mining Card Based on The GP102-100 GPU M...

    View Details
    NVIDIA RTX 3090 – Capabilities and Performance for Local LLM Inference

    Summary Table Feature NVIDIA GeForce RTX 3090 Specification GPU Name / Model GeForce RTX 3090 ([NVIDIA GeForce RTX 3090 Specs Manufacturer N...

    View Details
    NVIDIA RTX 5090 – Technical Analysis for Local LLM Inference

    GPU Summary: The NVIDIA GeForce RTX 5090 (“Blackwell” architecture) is a flagship GPU built for extreme compute workloads, making it highly suitable ...

    View Details