Tags: Image Segmentation · Transformers · Transformers.js · PyTorch · ONNX · Safetensors · SegformerForSemanticSegmentation · background-removal · remove background · vision · legal liability · custom_code
Instructions to use briaai/RMBG-1.4 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

- Libraries
  - Transformers

    How to use briaai/RMBG-1.4 with Transformers:

    ```python
    # Use a pipeline as a high-level helper
    from transformers import pipeline

    pipe = pipeline("image-segmentation", model="briaai/RMBG-1.4", trust_remote_code=True)

    # Load model directly
    from transformers import AutoModelForImageSegmentation

    model = AutoModelForImageSegmentation.from_pretrained("briaai/RMBG-1.4", trust_remote_code=True, dtype="auto")
    ```

  - Transformers.js

    How to use briaai/RMBG-1.4 with Transformers.js:

    ```javascript
    // npm i @huggingface/transformers
    import { pipeline } from '@huggingface/transformers';

    // Allocate pipeline
    const pipe = await pipeline('image-segmentation', 'briaai/RMBG-1.4');
    ```

- Notebooks
  - Google Colab
  - Kaggle
Really inaccurate for logos and white backgrounds
#42
by inkityink - opened
Yes, the results really don't look very good. Our model was not trained on this type of data, so if you have relevant data, you can fine-tune the model to get better results for your case.
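For a rough idea of what fine-tuning on your own (image, mask) pairs would involve, here is a schematic PyTorch training step. The small convolutional network, random data, and BCE loss below are toy stand-ins for illustration only, not RMBG-1.4's actual architecture or training setup:

```python
import torch
import torch.nn as nn

# Toy stand-in for a segmentation model; the real model would be loaded
# from briaai/RMBG-1.4 with trust_remote_code=True
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),  # one-channel mask logits
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Hypothetical training batch: RGB images and binary foreground masks
images = torch.rand(4, 3, 64, 64)
masks = (torch.rand(4, 1, 64, 64) > 0.5).float()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), masks)
    loss.backward()
    optimizer.step()
```

In practice you would replace the random tensors with a DataLoader over your logo images and their ground-truth masks.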
Since your images are very simple, you can remove the background without deep learning.
Here is an example using the remove_background_bicolor function from the PyMatting library.
```python
from pymatting import load_image, save_image, remove_background_bicolor
import numpy as np
import urllib.request

url = "https://huggingface.co/proxy/cdn-uploads.huggingface.co/production/uploads/666bd373726e7ee7113c689a/2QsaGjluZS3BG4swfP2vW.png"

# Assuming that foreground color is white
# You may have to adjust this for other images
fg_color = np.array([1.0, 1.0, 1.0])

# Download image
image_path = url.split("/")[-1]
urllib.request.urlretrieve(url, image_path)
image = load_image(image_path, "RGB")
h, w = image.shape[:2]

# Assuming that background color is the median color of the top 5% of rows
# You may have to adjust this for other images
bg_color = np.median(image[:round(0.05 * h)], axis=(0, 1))

# Remove background
image = remove_background_bicolor(image, fg_color, bg_color)

# Assume that foreground color is white everywhere
image[:, :, :3] = fg_color

# Boost alpha a bit to fix errors due to the low-quality input image,
# clipping so alpha stays in [0, 1]
image[:, :, 3] = np.minimum(image[:, :, 3] * 1.1, 1.0)

print("Saving cutout.png")
save_image("cutout.png", image)
```
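To see the idea behind bicolor background removal without the library, the alpha for each pixel can be approximated by projecting its color onto the line between the background and foreground colors. This minimal NumPy sketch is my own illustration of that projection, not pymatting's actual implementation:

```python
import numpy as np

def bicolor_alpha(image, fg_color, bg_color):
    """Approximate per-pixel alpha by projecting each pixel's color onto
    the line from bg_color (alpha 0) to fg_color (alpha 1)."""
    direction = fg_color - bg_color
    denom = np.dot(direction, direction)
    alpha = ((image - bg_color) @ direction) / denom
    return np.clip(alpha, 0.0, 1.0)

# Tiny synthetic image: pure background, pure foreground, and a 50/50 blend
fg = np.array([1.0, 1.0, 1.0])
bg = np.array([0.2, 0.4, 0.6])
img = np.stack([bg, fg, 0.5 * fg + 0.5 * bg]).reshape(1, 3, 3)
print(bicolor_alpha(img, fg, bg))  # alpha of 0, 1, and 0.5 respectively
```

Pixels exactly on the background color get alpha 0, pixels on the foreground color get alpha 1, and mixtures fall in between, which is why the approach only works for simple, nearly bicolor images like flat logos.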
origubany changed discussion status to closed