Microsoft.ML.OnnxRuntime.Gpu.Windows 1.20.0
About

ONNX Runtime is a cross-platform machine-learning inferencing accelerator.
ONNX Runtime can enable faster customer experiences and lower costs. It supports models from deep learning frameworks such as PyTorch and TensorFlow/Keras, as well as classical machine learning libraries such as scikit-learn, LightGBM, and XGBoost. ONNX Runtime is compatible with a range of hardware, drivers, and operating systems, and delivers optimal performance by leveraging hardware accelerators where applicable, alongside graph optimizations and transforms.
Learn more at https://onnxruntime.ai.
NuGet Packages
ONNX Runtime Native packages
Microsoft.ML.OnnxRuntime
- Native libraries for all supported platforms
- CPU Execution Provider
- CoreML Execution Provider on macOS/iOS
- XNNPACK Execution Provider on Android/iOS
Microsoft.ML.OnnxRuntime.Gpu
- Windows and Linux
- TensorRT Execution Provider
- CUDA Execution Provider
- CPU Execution Provider
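As a minimal sketch of how the GPU package is typically used from C# (the model path and the input shape below are placeholder assumptions, not part of this package), a session can be created with the CUDA Execution Provider enabled; operators the CUDA EP does not support fall back to the CPU EP automatically:

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

// Enable the CUDA Execution Provider on GPU device 0.
using var options = new SessionOptions();
options.AppendExecutionProvider_CUDA(deviceId: 0);

// "model.onnx" and the [1, 3, 224, 224] shape are placeholders for illustration.
using var session = new InferenceSession("model.onnx", options);

var input = new DenseTensor<float>(new[] { 1, 3, 224, 224 });
var inputs = new List<NamedOnnxValue>
{
    NamedOnnxValue.CreateFromTensor(session.InputMetadata.Keys.First(), input)
};

using var results = session.Run(inputs);
```

The TensorRT EP can be requested the same way via `options.AppendExecutionProvider_Tensorrt(0)`; EPs are tried in the order they are appended.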
Microsoft.ML.OnnxRuntime.DirectML
- Windows
- DirectML Execution Provider
- CPU Execution Provider
Microsoft.ML.OnnxRuntime.QNN
- 64-bit Windows
- QNN Execution Provider
- CPU Execution Provider
Other packages
Microsoft.ML.OnnxRuntime.Managed
- C# language bindings
Microsoft.ML.OnnxRuntime.Extensions
- Custom operators for pre/post processing on all supported platforms.
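The Extensions package hooks its custom pre/post-processing operators into a session through the session options. A brief sketch, assuming a model that uses such custom ops (the model path is a placeholder):

```csharp
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Extensions;

using var options = new SessionOptions();
// Register the ONNX Runtime Extensions custom operators (e.g. tokenizers and
// string ops) so models that reference them can be loaded.
options.RegisterOrtExtensions();

// "model_with_custom_ops.onnx" is a placeholder path.
using var session = new InferenceSession("model_with_custom_ops.onnx", options);
```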
Showing the top 20 packages that depend on Microsoft.ML.OnnxRuntime.Gpu.Windows.

| Packages | Downloads |
|---|---|
| Microsoft.ML.OnnxRuntime.Gpu: This package contains native shared library artifacts for all supported platforms of ONNX Runtime. | 6 |
Release Def:
Branch: refs/heads/rel-1.20.0
Commit: c4fb724e810bb496165b9015c77f402727392933
Build: https://aiinfra.visualstudio.com/Lotus/_build/results?buildId=593735
Dependencies
.NET Core 0.0
- Microsoft.ML.OnnxRuntime.Managed (>= 1.20.0)
.NET Standard 0.0
- Microsoft.ML.OnnxRuntime.Managed (>= 1.20.0)
.NET Framework 0.0
- Microsoft.ML.OnnxRuntime.Managed (>= 1.20.0)