Instructions for using byrLLCC/ChestX-Reasoner with libraries, inference providers, notebooks, and local apps. The sections below show how to get started.
- Libraries
- Transformers
How to use byrLLCC/ChestX-Reasoner with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="byrLLCC/ChestX-Reasoner")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("byrLLCC/ChestX-Reasoner")
model = AutoModelForImageTextToText.from_pretrained("byrLLCC/ChestX-Reasoner")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use byrLLCC/ChestX-Reasoner with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "byrLLCC/ChestX-Reasoner"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "byrLLCC/ChestX-Reasoner",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/byrLLCC/ChestX-Reasoner
```
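The curl example above targets an OpenAI-compatible endpoint, so the same request body can be built programmatically. A minimal sketch, assuming only the standard library; the helper name `build_chat_payload` is illustrative and not part of any library:

```python
import json


def build_chat_payload(model: str, prompt: str, image_url: str) -> dict:
    """Build an OpenAI-compatible chat-completions payload with
    one text part and one image part, matching the curl example."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }


payload = build_chat_payload(
    "byrLLCC/ChestX-Reasoner",
    "Describe this image in one sentence.",
    "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg",
)
# This JSON string is what curl sends with --data:
body = json.dumps(payload)
```

POST `body` to `http://localhost:8000/v1/chat/completions` (vLLM's default port) with a `Content-Type: application/json` header; any HTTP client will do.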
- SGLang
How to use byrLLCC/ChestX-Reasoner with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "byrLLCC/ChestX-Reasoner" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "byrLLCC/ChestX-Reasoner",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "byrLLCC/ChestX-Reasoner" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "byrLLCC/ChestX-Reasoner",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```

- Docker Model Runner
How to use byrLLCC/ChestX-Reasoner with Docker Model Runner:
```shell
docker model run hf.co/byrLLCC/ChestX-Reasoner
```
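Servers started with vLLM or SGLang as above return responses in the OpenAI chat-completions schema, so the generated text sits at the same path in the JSON either way. A minimal sketch of extracting it; the sample response below is illustrative, not real model output, and a real server returns additional fields (`id`, `usage`, ...):

```python
import json

# Illustrative response body in the OpenAI chat-completions shape.
sample_response = json.dumps({
    "choices": [
        {"message": {"role": "assistant", "content": "A statue stands in a bay."}}
    ]
})


def extract_text(response_body: str) -> str:
    """Pull the assistant message out of a chat-completions JSON body."""
    data = json.loads(response_body)
    return data["choices"][0]["message"]["content"]


print(extract_text(sample_response))
```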
ChestX-Reasoner
Access to this model is restricted and requires manual approval.
This repository contains the model weights for ChestX-Reasoner, a model developed for chest X-ray reasoning and report-related research.
Access Policy
This model is released as a gated model with manual review.
To request access, please:
- Click Request access on Hugging Face.
- Send an email to liangcheng@sjtu.edu.cn
- Include proof that you have authorized access to the MIMIC database (or the relevant MIMIC resource used in your research setting).
Access requests will only be approved after manual verification.
Required Materials for Access Request
When emailing liangcheng@sjtu.edu.cn, please include:
- Your full name
- Your institution / affiliation
- Your Hugging Face username
- Your intended use of the model
- Proof of authorized access to MIMIC
Examples of acceptable proof may include:
- A screenshot showing your authorized access status
- A confirmation email or credential page
- Other reasonable documentation demonstrating that you have valid MIMIC access rights
If your request does not include sufficient proof of authorized access, it may be declined.
Important Ethical Notice
This model is for research purposes only.
Because the training source involves access-controlled medical data, special care is required regarding:
- privacy
- memorization risk
- redistribution risk
- secondary release of potentially sensitive content
- compliance with source dataset governance
If you are unsure whether your intended use is permissible, please contact liangcheng@sjtu.edu.cn before using the model.
Contact
Liang Cheng
Email: liangcheng@sjtu.edu.cn
Model tree for byrLLCC/ChestX-Reasoner
Base model
Qwen/Qwen2.5-VL-7B-Instruct
```shell
# Gated model: log in with an HF token that has gated access permission
hf auth login
```