Questions tagged [windows-machine-learning]

Windows ML is an API for evaluating trained machine learning models locally in Windows 10 applications.

Windows ML is an API for evaluating trained machine learning models locally in Windows apps (C#, C++, and JavaScript), with GPU acceleration when suitable hardware is available. Models are in ONNX format. Available for UWP apps since Windows 10 build 17723; see the Windows ML docs.

53 questions
3
votes
1 answer

WinML inference time on GPU 3 times slower than TensorFlow in Python

I am trying to use a TensorFlow model, trained in Python, with WinML. I successfully converted the protobuf to ONNX. The following performance results were obtained: WinML 43 s, OnnxRuntime 10 s, TensorFlow 12 s. Inference on CPU takes around 86 s. On performance…
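Timing comparisons like the one in this question are easy to skew with warm-up effects. A minimal, backend-agnostic timing harness sketch in Python; the `run_inference` callable is a hypothetical stand-in for whichever backend is being measured:

```python
import statistics
import time

def benchmark(run_inference, warmup=3, runs=10):
    """Time an inference callable, discarding warm-up runs.

    The first calls often include one-off costs (JIT, shader
    compilation, memory allocation), so they are excluded.
    """
    for _ in range(warmup):
        run_inference()
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        run_inference()
        timings.append(time.perf_counter() - start)
    # Median is less sensitive to outliers than the mean.
    return statistics.median(timings)

# Dummy workload standing in for a real backend call:
median_s = benchmark(lambda: sum(x * x for x in range(10_000)))
```

Discarding warm-up iterations matters for GPU backends in particular, where the first evaluation can include one-off costs such as shader compilation.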
3
votes
1 answer

How can one control the number of threads Windows ML uses for evaluation?

I'm trying to benchmark Windows ML against other backends and am seeing a weird distribution of inference times (see plot). This is with the CPU backend using the ARM64 architecture. On ARM there's no bimodal distribution. I don't have a good…
etarion
3
votes
0 answers

Loading an ONNX model in C# for Microsoft HoloLens

I have a problem loading and evaluating an ONNX model with the Windows Machine Learning API. I am trying to load the model for evaluation on a Microsoft HoloLens, but when I evaluate the model the code generates exceptions such as: "No suitable kernel definition…
ll_gzr
2
votes
0 answers

How do I convert a winrt::Microsoft::AI::MachineLearning::TensorFloat type back to ID3D12Resource

I am loading an image onto the GPU using the ID3D12Resource type. I found some documentation on how to convert an ID3D12Resource to a Microsoft::AI::MachineLearning::TensorFloat in the Microsoft documentation, but I can't seem to find how to…
2
votes
3 answers

Inference of an ONNX model (opset 11) in Windows 10 C++?

In order to use my custom TF model through WinML, I converted it to ONNX using the tf2onnx converter. The conversion finally worked using opset 11. Unfortunately I cannot load the model in the WinRT C++ library, so I am confused about the…
2
votes
1 answer

How does "half float support" in WinML terminology translate to DX capability?

For a project we use WinML to do inference with a fully convolutional network. We query all adapters on the platform and explicitly pass a D3D12 device to the learning session. For performance reasons we converted the weights to half float…
Vincent L
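For context on what half-float conversion costs in precision: Python's `struct` module can round-trip IEEE 754 half precision (binary16) via the `'e'` format, which makes the quantization easy to inspect. A small illustrative sketch, not WinML-specific:

```python
import struct

def to_half(x: float) -> float:
    """Round-trip a float through IEEE 754 half precision (binary16)."""
    return struct.unpack('e', struct.pack('e', x))[0]

# Half precision keeps roughly 3 decimal digits: small weights survive
# with a small rounding error, large values lose their low-order bits.
print(to_half(0.1))     # ~0.0999755859375
print(to_half(1000.1))  # 1000.0, the fractional part is lost
```

This is why half-float weights usually work well for normalized activations but can degrade accuracy when a network relies on large dynamic range.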
2
votes
1 answer

LearningModel.LoadFromStorageFileAsync faster_rcnn / mask_rcnn model "Unrecognized attribute: ceil_mode" exception

I have downloaded the SqueezeNetObjectDetection sample and got it running successfully. But then I tried loading the Faster R-CNN model and got an exception with the message: "Unspecified error\r\n\r\nUnrecognized attribute: ceil_mode". Same result…
Avrohom
2
votes
1 answer

Can Windows ML learning models be accessed by C# outside of UWP?

I have an ONNX model I wish to evaluate images against from a C# Windows service (non-UWP). I don't see any way to get to the Windows ML framework from C# outside of building a UWP app; is that correct? I found this posting which seems to indicate…
N8allan
2
votes
2 answers

Exception: 'The parameter is incorrect.' When attempting to run an ONNX model with convolution

I am seeing an exception from the WinML runtime 'The parameter is incorrect.' when running a single convolution ONNX model on DirectX devices. My model runs fine on Default and Cpu devices, and I am able to run the SqueezeNet.onnx model from the…
L. Hughes
1
vote
1 answer

CppWinRT won't generate headers for another NuGet package

I'm trying to get this tutorial to work: port-to-nuget. (Note that cppwinrt provides the Windows.AI.MachineLearning namespace, but for quicker releases a different NuGet package provides the Microsoft.AI.MachineLearning namespace.) Apparently the…
Tom Huntington
1
vote
1 answer

How to create a custom winrt::Microsoft::AI::MachineLearning::TensorFloat16Bit?

How do I create a TensorFloat16Bit when manually tensorizing the data? We tensorized our data based on this Microsoft example, where we scale the 0-255 byte values down to 0-1 and change the RGBA channel order. ... std::vector
Anna Maule
1
vote
0 answers

Microsoft ML can't use opset 11 despite the NuGet package being installed

I'm trying to build a simple object detection runner, really just following this MS Docs guide: https://learn.microsoft.com/en-us/windows/ai/windows-ml/tutorials/tensorflow-deploy-model I already figured out building the View and adding capabilities…
Squirrelkiller
1
vote
1 answer

Difference between WinML and OnnxRuntime for WPF in C#

To package trained ONNX models with a WPF .NET Core 3.1 app, I'm wondering if there are any differences between these two methods: Microsoft.ML.OnnxRuntime and Microsoft.AI.MachineLearning (WinML)? OnnxRuntime seems to be easier to implement in C#, while…
Lola
1
vote
1 answer

Custom Vision ONNX models stopped working with Windows 10 ML

I have trained a model using Custom Vision AI and exported it as an ONNX file. In my C# .NET Core console application I referenced the Windows 10 SDK as described here: accessing Windows ML from console apps. I am then creating a screenshot…
SBetzin
1
vote
2 answers

Unable to load ONNX model in a UWP project on Windows build 19041, but it works on Windows build 18363

I get an ArgumentException with the message "Failed to load model with error: Unknown model file format version." when trying to call LearningModel.LoadFromStreamAsync(stream) on Windows build 19041. It works fine with the same file on build 18363. ONNX opset…
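The "Unknown model file format version" error refers to the ONNX IR version stored at the start of the file, and different Windows builds ship WinML runtimes supporting different IR versions. An ONNX file is a protobuf message whose field 1 (`ir_version`) is normally the first thing serialized, so the leading bytes can be decoded by hand to check what a given file declares. A hedged sketch that assumes field 1 really does come first, which exporters normally ensure:

```python
def read_ir_version(data: bytes):
    """Decode ONNX ir_version from the leading protobuf bytes.

    Field 1 with wire type 0 (varint) is encoded as tag byte 0x08
    followed by the varint-encoded value. Returns None if the file
    does not start with that field.
    """
    if not data or data[0] != 0x08:
        return None
    value, shift = 0, 0
    for byte in data[1:]:
        value |= (byte & 0x7F) << shift
        if not byte & 0x80:  # high bit clear ends the varint
            return value
        shift += 7
    return None

# A model exported with IR version 7 starts with bytes 08 07:
print(read_ir_version(bytes([0x08, 0x07])))  # 7
```

If the decoded value is higher than what the older build's WinML supports, re-exporting the model at a lower opset/IR version is the usual fix.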