Instructions to use internlm/internlm-xcomposer2-4khd-7b with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use internlm/internlm-xcomposer2-4khd-7b with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline(
    "visual-question-answering",
    model="internlm/internlm-xcomposer2-4khd-7b",
    trust_remote_code=True,
)

# Load the model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "internlm/internlm-xcomposer2-4khd-7b",
    trust_remote_code=True,
    dtype="auto",
)
```

- Notebooks
- Google Colab
- Kaggle
Discussions:
- Update build_mlp.py (#14, opened almost 2 years ago by unsubscribe)
- Faster mask_human_targets implementation (#13, opened almost 2 years ago by Godricly)
- getModel (#11, opened almost 2 years ago by lsqingfeng)
- fix bfloat mismatch bug when model loading using half() (#10, opened about 2 years ago by secularbird)
- Adding `safetensors` variant of this model (#9, opened about 2 years ago by SFconvertbot)
- Multi image query (#8, opened about 2 years ago by rjmehta)
- Multi GPU Inference (#7, opened about 2 years ago by rjmehta)
- try your code snippet (#6, opened about 2 years ago by yuanze1024)
- TypeError: forward() takes 2 positional arguments but 3 were given (#5, opened about 2 years ago by prabhatk579)
- Adding `safetensors` variant of this model (#4, opened about 2 years ago by SFconvertbot)
- Adding `safetensors` variant of this model (#1, opened about 2 years ago by SFconvertbot)