
z-lab/Qwen3.5-27B-DFlash

Text Generation
Transformers
Safetensors
qwen3
feature-extraction
dflash
speculative-decoding
diffusion
efficiency
flash-decoding
qwen
diffusion-language-model
custom_code
text-generation-inference
Community (8)
Can we create a Qwen3.6-27B-DFlash version?

1
#8 opened about 17 hours ago by linyunfeng99512

Advice for a 5090

1
#7 opened 11 days ago by pramjana

Can I use this draft model with Q4, Q6, and Q8 27B models?

👍 4
#6 opened 11 days ago by hugypufy

DFlash with a quantized model

1
#5 opened 12 days ago by Shimon324

Does Qwen3.5-4B/9B DFlash support VL mode?

2
#4 opened 13 days ago by huzhua

Is there a public release planned for the Qwen3.5-122B-DFlash model?

1
#3 opened 23 days ago by wyc201314

Does FP8 work for the base model, or is 16-bit precision of the 27B required?

14
#2 opened 23 days ago by unoid