# Convert MHR TorchScript to CPU-safe file (Colab)

Run this in a GPU Colab (Runtime -> Change runtime type -> GPU). It loads the MHR TorchScript model saved in the original HF repo, moves it to CPU, and saves a CPU-safe file you can download.

1) Install dependencies

```bash
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
pip install huggingface_hub
```

2) Run this Python snippet (paste into a notebook cell)

```python
import os
import pathlib

import torch
from huggingface_hub import snapshot_download

# Read the token from the environment, or paste it directly (keep it secret).
HF_TOKEN = os.environ.get('HF_TOKEN') or 'paste_your_token_here'

repo_id = 'facebook/sam-3d-body-vith'
local_dir = snapshot_download(repo_id=repo_id, token=HF_TOKEN)
print('downloaded', local_dir)

mhr_path = pathlib.Path(local_dir) / 'assets' / 'mhr_model.pt'
print('mhr_path', mhr_path)

# Load on GPU if available: the original file contains CUDA device pointers.
m = torch.jit.load(str(mhr_path), map_location='cuda' if torch.cuda.is_available() else 'cpu')
print('loaded')

# Move the module to CPU so the saved file carries no CUDA references.
# If .to('cpu') fails for this module, fall back to saving it as-is.
try:
    m_cpu = m.to('cpu')
except Exception:
    m_cpu = m

out = '/content/mhr_model_cpu.pt'
torch.jit.save(m_cpu, out)
print('saved', out)
```

3) Download `mhr_model_cpu.pt` from the Colab files panel and copy it to your local machine. Then place it under the cached HF snapshot path, or set the `LOCAL_MHR_PATH` env var to point at it when running the helper locally.

Local usage example (on your Mac):

```bash
export LOCAL_MHR_PATH=/path/to/mhr_model_cpu.pt
export LOCAL_CKPT_PATH=/path/to/model.ckpt  # optional, if you downloaded the checkpoint
python app/helper/model_loader.py
```

This will let your local helper load the MHR model without CUDA device pointers.
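The CPU round-trip in step 2 can be sanity-checked in miniature with a toy scripted module before touching the real MHR file. The module and path below are illustrative stand-ins, not part of the helper; the pattern (`.to('cpu')`, `torch.jit.save`, then `torch.jit.load` with `map_location='cpu'`) is the same one the Colab cell uses:

```python
import torch

# Minimal stand-in for the MHR model: any scripted module round-trips the same way.
class Toy(torch.nn.Module):
    def forward(self, x):
        return x * 2

scripted = torch.jit.script(Toy())

# Same pattern as the Colab cell: move to CPU, save, reload with map_location='cpu'.
cpu_mod = scripted.to('cpu')
torch.jit.save(cpu_mod, '/tmp/toy_cpu.pt')
reloaded = torch.jit.load('/tmp/toy_cpu.pt', map_location='cpu')
print(reloaded(torch.tensor([3.0])))  # tensor([6.])
```

If this reload succeeds on a machine with no CUDA at all, the exported `mhr_model_cpu.pt` should load the same way in the local helper.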