Instructions for using jkminder/Qwen3-8B-LF-EM_a0.2_aligned_1d8eaf67 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use jkminder/Qwen3-8B-LF-EM_a0.2_aligned_1d8eaf67 with Transformers:
```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "jkminder/Qwen3-8B-LF-EM_a0.2_aligned_1d8eaf67",
    dtype="auto",
)
```
- Notebooks
- Google Colab
- Kaggle
- Xet hash:
- c4c0ce69802e3b223b7e8566896709a1d770a3fc1ca421a8b61b8582d373b28a
- Size of remote file:
- 6.42 kB
- SHA256:
- 77cc662621f3cdd69041f4edf1a7bad0e3604ab7b6fff60d93083abbe9aee8da
Xet efficiently stores large files inside Git, intelligently splitting files into unique chunks and accelerating uploads and downloads.