Primate Labs introduces Geekbench AI: a new benchmarking tool

Primate Labs has officially released Geekbench AI, a benchmarking tool tailored for machine learning and AI-focused workloads.

Geekbench AI 1.0 represents the culmination of years of development and collaboration with customers, partners, and the AI engineering community. Formerly known as Geekbench ML during its preview phase, the benchmark has been rebranded to match industry terminology and clarify its purpose.

Now available on the Primate Labs website for Windows, macOS, and Linux, Geekbench AI can also be accessed via the Google Play Store and Apple App Store for mobile devices.

This new benchmarking tool aims to provide a standardized method for assessing and comparing AI capabilities across platforms and architectures. Its distinctive three-score system captures the varied precision levels and hardware optimizations found in modern AI implementations, giving developers, hardware vendors, and enthusiasts deeper insight into AI performance across different scenarios.

A key feature of Geekbench AI is the addition of accuracy measurements for each test, reflecting that AI performance involves both speed and result quality. By combining speed and accuracy metrics, the benchmark offers a comprehensive view of AI capabilities, highlighting the trade-offs between performance and precision.

Geekbench AI 1.0 supports a wide range of AI frameworks, including OpenVINO on Linux and Windows, and vendor-specific TensorFlow Lite delegates such as Samsung ENN, ArmNN, and Qualcomm QNN on Android. This extensive framework support ensures the benchmark remains relevant with the latest tools and methodologies.

The benchmark uses diverse and extensive datasets, improving the quality of its accuracy evaluations and better representing real-world AI use cases. Each workload in Geekbench AI 1.0 runs for at least one second, allowing devices to reach peak performance while still reflecting the short, bursty inference patterns typical of real-world applications.

Primate Labs has provided detailed technical descriptions of the workloads and models used in Geekbench AI 1.0, demonstrating its commitment to transparency and industry-standard testing. Integrated with the Geekbench Browser, the benchmark allows for easy cross-platform comparisons and result sharing.

Primate Labs plans regular updates to Geekbench AI to adapt to market changes and emerging AI features. The company believes the benchmark’s current reliability makes it suitable for professional workflows, with major tech companies like Samsung and Nvidia already using it.
