Active filters: quark
amd/gpt-oss-120b-w-mxfp4-a-fp8 • Updated • 1.56k • 4
playable/playable1-int4-bfloat16 • 1B • Updated • 1 • 1
amd/Qwen3-235B-A22B-Instruct-2507-MXFP4 • Text Generation • 118B • Updated • 1.37k • 2
185B • Updated • 192 • 1
amd/Kimi-K2-Instruct-0905-MXFP4 • 551B • Updated • 816 • 1
fxmarty/llama-tiny-testing-quark-indev • 1.03M • Updated
fxmarty/llama-tiny-int4-per-group-sym • 1.03M • Updated • 1
fxmarty/llama-tiny-w-fp8-a-fp8 • 1.03M • Updated • 2
fxmarty/llama-tiny-w-fp8-a-fp8-o-fp8 • 1.03M • Updated • 1
fxmarty/llama-tiny-w-int8-per-tensor • 1.03M • Updated • 1
fxmarty/llama-small-int4-per-group-sym-awq • 16.7M • Updated • 1
fxmarty/quark-legacy-int8 • 1.03M • Updated
fxmarty/llama-tiny-w-int8-b-int8-per-tensor • 1.03M • Updated • 2
fxmarty/llama-small-int4-per-group-sym-awq-old • 16.7M • Updated • 2
amd-quark/llama-tiny-w-int8-per-tensor • 1.03M • Updated • 261
amd-quark/llama-tiny-w-int8-b-int8-per-tensor • 1.03M • Updated • 257
amd-quark/llama-tiny-w-fp8-a-fp8 • 1.03M • Updated • 270
amd-quark/llama-tiny-w-fp8-a-fp8-o-fp8 • 1.03M • Updated • 272
amd-quark/llama-tiny-int4-per-group-sym • 1.03M • Updated • 261
amd-quark/llama-small-int4-per-group-sym-awq • 16.7M • Updated • 268
amd-quark/quark-legacy-int8 • 1.03M • Updated
amd/Llama-3.1-8B-Instruct-FP8-KV-Quark-test • 8B • Updated • 7.84k
amd/Llama-3.1-8B-Instruct-w-int8-a-int8-sym-test • 8B • Updated • 4.05k
EmbeddedLLM/Llama-3.1-8B-Instruct-w_fp8_per_channel_sym • Text Generation • 8B • Updated • 4
amd/DeepSeek-R1-Distill-Llama-8B-awq-asym-uint4-g128-lmhead • Text Generation • 2B • Updated • 4
amd-quark/llama-tiny-fp8-quark-quant-method • 17.1M • Updated • 3.94k
aigdat/Qwen2.5-Coder-7B-quantized-ppl-14 • 1B • Updated • 1
aigdat/Qwen2-7B-Instruct_quantized_int4_bfloat16 • 1B • Updated
aigdat/Qwen2.5-1.5B-Instruct-awq-uint4-bfloat16 • 0.4B • Updated
aigdat/Qwen2.5-0.5B-Instruct-awq-int4-asym-g128-fp16 • 0.2B • Updated • 1