Dataset Viewer (auto-converted to Parquet)

| column | type | range / values |
| --- | --- | --- |
| repo | string | 1 value |
| number | int64 | 1 – 25.3k |
| state | string | 2 values |
| title | string | 1 – 487 chars |
| body | string | 0 – 234k chars (nullable) |
| created_at | string | 19 chars |
| closed_at | string | 19 chars |
| comments | string | 0 – 293k chars |
transformers
25,306
open
"Dynamic" Issue in LlamaDynamicNTKScalingRotaryEmbedding - Long context inference will impact short context inference.
### System Info - `transformers` version: 4.32.0.dev0 - Platform: Linux-5.15.109+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.22.0.dev0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tenso...
08-04-2023 00:31:00
08-04-2023 00:31:00
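The "dynamic" behavior this issue describes can be sketched numerically: once `seq_len` exceeds the pretrained context window, the rotary base is rescaled, so the cached cos/sin values that short contexts rely on change after a long-context forward pass. A minimal sketch of the scaling rule (parameter names follow `LlamaDynamicNTKScalingRotaryEmbedding`; the numbers below are illustrative, not taken from the issue):

```python
def dynamic_ntk_base(base, seq_len, max_position_embeddings, scaling_factor, dim):
    # Rescale the rotary base only when the current sequence exceeds the
    # pretrained context window; otherwise keep the original base.
    if seq_len <= max_position_embeddings:
        return base
    return base * (
        (scaling_factor * seq_len / max_position_embeddings) - (scaling_factor - 1)
    ) ** (dim / (dim - 2))

short = dynamic_ntk_base(10000.0, 2048, 4096, 1.0, 128)  # within the window: unchanged
long = dynamic_ntk_base(10000.0, 8192, 4096, 1.0, 128)   # beyond the window: base grows
```

Because the recomputed base depends on the longest sequence seen, a cache keyed only on position (not on base) would serve short prompts stale cos/sin values, which is the performance impact the reporter observes.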
transformers
25,305
open
Unable to change default cache folders despite setting environment variables
### System Info Collecting environment information... PyTorch version: 2.0.1+cu117 Is debug build: False CUDA used to build PyTorch: 11.7 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.2 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: Could no...
08-03-2023 23:42:20
08-03-2023 23:42:20
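For context on the report above: the cache-related environment variables are resolved when `transformers` is imported, so setting them afterwards has no effect. A minimal sketch, with placeholder paths:

```python
import os

# Must happen before `import transformers`; the cache location is read at import time.
os.environ["HF_HOME"] = "/tmp/hf_home"               # umbrella cache dir (placeholder path)
os.environ["TRANSFORMERS_CACHE"] = "/tmp/hf_models"  # model cache (placeholder path)

# import transformers  # only now would the new locations take effect
```

Setting the variables in the shell before launching Python avoids the ordering pitfall entirely.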
transformers
25,304
open
Tokenizer failing to encode chatml correctly
### System Info - `transformers` version: 4.31.0 - Platform: Linux-5.14.0-284.18.1.el9_2.x86_64-x86_64-with-glibc2.34 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (True)...
08-03-2023 23:13:33
08-03-2023 23:13:33
transformers
25,303
open
loss reduction for `Llama2ForCausalLM.forward`
### Feature request In `forward` method, it outputs `loss` when `labels` are provided. But the `loss` shape is always `(1,)` because `reduction='mean'` in CrossEntropy. I wonder if I could pass `reduction='none'` and get a `(batch_size,)` shaped loss tensor. https://github.com/huggingface/transformers/blob/641adca5...
08-03-2023 21:29:20
08-03-2023 21:29:20
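The `(batch_size,)` loss the request asks for can be recovered outside the model by recomputing from the logits with `reduction='none'` semantics. A numpy sketch, assuming the usual causal-LM shift and `-100` as the ignore index (this is a workaround sketch, not the library's API):

```python
import numpy as np

def per_example_loss(logits, labels, ignore_index=-100):
    """Per-example mean cross-entropy, shaped (batch_size,)."""
    # Shift so tokens < n predict token n, as in the causal-LM forward.
    logits, labels = logits[:, :-1, :], labels[:, 1:]
    # Numerically stable log-softmax over the vocab dimension.
    logits = logits - logits.max(axis=-1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    mask = labels != ignore_index
    safe = np.where(mask, labels, 0)
    token_loss = -np.take_along_axis(logp, safe[..., None], axis=-1)[..., 0]
    return (token_loss * mask).sum(axis=-1) / mask.sum(axis=-1)

logits = np.random.randn(2, 5, 11)
labels = np.array([[1, 2, 3, 4, 5], [6, 7, 8, -100, -100]])
loss = per_example_loss(logits, labels)  # shape (2,)
```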
transformers
25,302
closed
Fix typo: Roberta -> RoBERTa
# What does this PR do? Small typo in docs: "Roberta" should have the correct capitalization "RoBERTa". Fixes #25301 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). <!-- - [ ] Did you read the [contributor guideline](https://githu...
08-03-2023 20:04:27
08-03-2023 20:04:27
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,301
closed
Minor typo referencing RoBERTa
"Roberta" should use the correct capitalization: "RoBERTa" https://github.com/huggingface/transformers/blob/d27e4c18fe2970abcb9a48dcb8a824e48083b15f/docs/source/en/tokenizer_summary.md?plain=1#L144 Should be a simple fix.
08-03-2023 19:58:21
08-03-2023 19:58:21
transformers
25,300
open
Add zero-shot classification task for BLIP-2
### Feature request I would like to add the support for the zero-shot classification task using BLIP2, computing text-image similarities with the normalized embeddings, that would be accessed from BLIP2 feature extractor. The idea is to enable calling the zero-shot classification pipeline using BLIP2, by implement...
08-03-2023 19:53:46
08-03-2023 19:53:46
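The core computation this feature request describes - text-image similarity from normalized embeddings - can be sketched with numpy; the embedding values below are made up for illustration:

```python
import numpy as np

def zero_shot_scores(image_emb, text_embs):
    # L2-normalize, so cosine similarity reduces to a dot product of unit vectors.
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=-1, keepdims=True)
    sims = text_embs @ image_emb
    # Softmax over candidate labels to get pseudo-probabilities.
    exp = np.exp(sims - sims.max())
    return exp / exp.sum()

img = np.array([1.0, 0.0, 0.0])
texts = np.array([[0.9, 0.1, 0.0], [0.0, 1.0, 0.0]])
probs = zero_shot_scores(img, texts)
```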
transformers
25,299
open
cannot import name 'Module' from '_pytest.doctest'
### System Info transformers 4.32.0.dev0 torch 2.1.0.dev20230523+cu117 Error: Traceback (most recent call last): File "/workspace/transformers/examples/pytorch/language-modeling/run_clm.py", line 52, in <module> Traceback (most recent call last): File "/workspace/tran...
08-03-2023 19:05:56
08-03-2023 19:05:56
You might need a `pip install --upgrade pytest`.
transformers
25,298
open
[Whisper] Better error message for outdated generation config
# What does this PR do? Gives a better error message in the case that a user tries using an outdated generation config with the new generation arguments `language` and `task` (as described in https://github.com/huggingface/transformers/issues/25084#issuecomment-1653722724).
08-03-2023 17:57:18
08-03-2023 17:57:18
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25298). All of your documentation changes will be reflected on that endpoint.
transformers
25,297
open
MaskFormer, Mask2Former - replace einsum for tracing
# What does this PR do? Maskformer cannot currently be traced because of einsum operations. This PR replaces the einsum operations with standard matmuls. With this PR, the following now runs: ```python import torch from transformers import Mask2FormerForUniversalSegmentation device = torch.device("cuda...
08-03-2023 17:48:58
08-03-2023 17:48:58
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25297). All of your documentation changes will be reflected on that endpoint.
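The replacement this PR describes boils down to rewriting an einsum contraction as a batched matmul, which tracers handle. A sketch of one such equivalence (the index labels are illustrative, not the exact ones from the model):

```python
import numpy as np

q = np.random.randn(2, 4, 8)  # (batch, queries, channels)
k = np.random.randn(2, 6, 8)  # (batch, keys, channels)

via_einsum = np.einsum("bqc,bkc->bqk", q, k)
via_matmul = q @ k.transpose(0, 2, 1)  # standard batched matmul, traceable
```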
transformers
25,296
open
BertForSequenceClassification does not support 'device_map':"auto" yet
### System Info I have trained a model and am now trying to load and quantise it but getting the error: BertForSequenceClassification does not support 'device_map':"auto" yet Code for loading is simply: ` model = AutoModelForSequenceClassification.from_pretrained(model_dir, device_map='auto', load_in_8bit=T...
08-03-2023 17:00:09
08-03-2023 17:00:09
transformers
25,295
closed
[small] llama2.md typo
# What does this PR do? `groupe` -> `grouped` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. ...
08-03-2023 16:51:06
08-03-2023 16:51:06
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,294
open
Generate: remove Marian hack
# What does this PR do? WIP, let's see first if all tests pass
08-03-2023 16:48:40
08-03-2023 16:48:40
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25294). All of your documentation changes will be reflected on that endpoint.
transformers
25,293
open
MassFormer
### Model description We propose adding a new model, MassFormer, to predict tandem mass spectra accurately. MassFormer uses a graph transformer architecture to model long-distance relationships between atoms in the molecule. The transformer module is initialized with parameters obtained through a chemical pre-training...
08-03-2023 16:41:42
08-03-2023 16:41:42
transformers
25,292
open
Generate: get generation mode as a string
# What does this PR do? Currently, generate gets several `is_XXX_mode` flags to determine the generation mode. This was fine when there were a handful of generation modes, but now it means we have many variables. This PR replaces that part of the logic with a single variable -- a string containing the name of the gen...
08-03-2023 16:33:36
08-03-2023 16:33:36
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25292). All of your documentation changes will be reflected on that endpoint.
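The idea of collapsing many `is_XXX_mode` flags into one string can be sketched as follows (the helper name and dispatch rules here are hypothetical, not the PR's actual implementation):

```python
def get_generation_mode(num_beams: int, do_sample: bool, num_beam_groups: int = 1) -> str:
    # One string instead of several boolean is_*_mode flags.
    if num_beam_groups > 1:
        return "group_beam_search"
    if num_beams == 1:
        return "sample" if do_sample else "greedy_search"
    return "beam_sample" if do_sample else "beam_search"
```

Downstream code can then branch on (or log) a single value rather than inspecting several booleans.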
transformers
25,291
open
Document check copies
# What does this PR do? This PR documents a little better how our `Copied from` framework works, adds comments in the actual scripts, and reworks the test a bit to be better. In passing I added a requested feature: making sure `make fix-copies` takes the function definition or the superclass into account...
08-03-2023 15:59:52
08-03-2023 15:59:52
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25291). All of your documentation changes will be reflected on that endpoint.
transformers
25,290
open
Allow `bark` to have a tiny model
# What does this PR do? Allow `bark` to have a tiny model. This is mainly for #24952 cc @ylacombe
08-03-2023 15:35:40
08-03-2023 15:35:40
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25290). All of your documentation changes will be reflected on that endpoint.
transformers
25,289
open
Quantized models + PEFT + multi-gpu setup failing during training
### System Info - `transformers` version: 4.31.0 - Platform: Linux-5.10.178-162.673.amzn2.x86_64-x86_64-with-glibc2.26 - Python version: 3.10.8 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 ### Who can help? @younesbelkada ### Information - [] The offici...
08-03-2023 15:17:46
08-03-2023 15:17:46
@younesbelkada maybe you can have a look at it?
transformers
25,288
closed
device_map="auto" -> uninitialized parameters
### System Info - `transformers` version: 4.31.0 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.11.4 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (True) ### Who can help? @Arthur...
08-03-2023 13:54:40
08-03-2023 13:54:40
I think this should have been fixed by #25101 Could you try again with a source install? (Yes it is a false positive, just tied weights where the copies are not present in the state dict.)<|||||>Awesome, that works. Was afraid that I was messing something up with converting to safetensors. Glad that that is not the ca...
transformers
25,287
open
Transformers Agent suggesting it should use text_generator although it is not provided.
### System Info I am running a version of [your notebook on Transformers Agent](https://colab.research.google.com/drive/1c7MHD-T1forUPGcC_jlwsIptOzpG3hSj), where I have added a cell where I ask the StarCoder agent to generate a sentence for me. I am using StarCoder, as you can see: ``` #@title Agent init agent...
08-03-2023 13:08:51
08-03-2023 13:08:51
I'm not too sure why you are reporting a bug. The agent is an LLM which sometimes hallucinate content (in this case, a tool that does not exist). If your prompt does not work, you should try refining it. You should also try using another model and see if it performs better.
transformers
25,286
closed
[JAX] Bump min version
# What does this PR do? Bumps the minimum version of JAX to [0.4.1](https://jax.readthedocs.io/en/latest/changelog.html#jax-0-4-1-dec-13-2022), the earliest version where the new `jax.Array` API is introduced, replacing the deprecated `jax.numpy.DeviceArray` API. This allows compatibility with the latest JAX version...
08-03-2023 12:53:27
08-03-2023 12:53:27
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,284
open
Fix Llama's attention map handling for left padding which causes numerical instability and performance drops
Hi this PR is trying to address the performance drop and potential numerical instability caused by vanilla left padding in Llama. Here is the explanation: 1. If we initialize the tokenizer with left padding and call model.generate without passing in corresponding attention_mask, the code will run, but for the instanc...
08-03-2023 12:02:01
08-03-2023 12:02:01
cc @ArthurZucker
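The failure mode described above (left-padded inputs with no `attention_mask` passed to `model.generate`) is avoided by deriving the mask from the pad token. A numpy sketch, with a made-up pad id:

```python
import numpy as np

pad_token_id = 0  # made-up pad id
input_ids = np.array([
    [0, 0, 5, 6, 7],  # left-padded sequence
    [3, 4, 5, 6, 7],  # full-length sequence
])
# 1 for real tokens, 0 for padding -- what generate needs in order to ignore the pads.
attention_mask = (input_ids != pad_token_id).astype(np.int64)
```

Without this mask, attention attends to the pad positions on the left, which is the source of the numerical instability and quality drop the PR addresses.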
transformers
25,283
open
Use of logging.warn is deprecated in favour of logging.warning
There are a few places where `transformers` uses the deprecated `warn` method on a logger, while most of the library uses `warning`. While this works for now, it will presumably be removed at some point (calling it emits a `DeprecationWarning`) and it means that strict test runners (such as `pytest`) complain about som...
08-03-2023 11:38:29
08-03-2023 11:38:29
@PeterJCLaw Indeed! Happy to review a PR :)
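The change itself is mechanical: `Logger.warn` is a deprecated alias (it emits a `DeprecationWarning`), while `Logger.warning` is the supported spelling. A self-contained sketch that captures the emitted record to show the preferred call working:

```python
import logging

records = []

class ListHandler(logging.Handler):
    def emit(self, record):
        records.append(record.getMessage())

logger = logging.getLogger("example")
logger.addHandler(ListHandler())
logger.warning("use warning()")  # supported spelling
# logger.warn(...)  # deprecated alias: emits DeprecationWarning and may be removed
```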
transformers
25,282
open
Timm models Safetensor weights give 'NoneType' object has no attribute 'get', weight re-initialization and wrong num_labels
### System Info My env information: ``` - `transformers` version: 4.31.0 - Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.31 - Python version: 3.9.17 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.20.3 - Accelerate config: not found - PyTorch version (GPU?)...
08-03-2023 09:20:08
08-03-2023 09:20:08
@sawradip `timm` weights on the hub work in timm, unless I'm missing something (some automatic conversion was added that I'm not aware of). I don't think there is any expectation you can load them in `transformers`? I feel it's a bug that the pytorch native weights don't crash - it's probably not loading any keys... ...
transformers
25,281
closed
Docs: Update list of `report_to` logging integrations in docstring
# What does this PR do? ## Pull Request overview * Add missing `dagshub`, `codecarbon` and `flyte` integrations to `TrainingArguments` docstring. * Update `report_to` type hint to allow strings. ## Details I also converted the ordering back to alphabetical. I considered using a typing `Literal` as the type...
08-03-2023 08:52:32
08-03-2023 08:52:32
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,280
open
How to download files from HF spaces
### System Info google colab ### Who can help? @sanchit-gandhi @rock ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproductio...
08-03-2023 07:02:03
08-03-2023 07:02:03
Hi @andysingal, There is a typo in the repo_id. The correct command is: ``` model_path = hf_hub_download(repo_id="xinyu1205/recognize_anything_model", filename="tag2text_swin_14m.pth", local_dir = "/content") ``` If you receive an error that a repo doesn't exist, the best thing to do is check directly on...
transformers
25,279
closed
CI 🚀 even more
# What does this PR do? A follow-up of #25274: - `torch_job` currently reaches `95%` RAM --> with this PR, it reaches only `82%`. - Also smaller RAM usage for: `tf_job`: `60%` | `flax_job`: `86%` - Avoid the non-modeling files being tested redundantly - we save ~ 2 x 8 = 16 min of runtime. Now, ...
08-03-2023 06:03:20
08-03-2023 06:03:20
Well, requested a review too quickly, sorry, but just a few tiny things to fix ...<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>OK, fair point. At least a (closed) PR is in the history for reference if we ever need it in the future. Thanks!<|||||>(we will need to keep an eye on...
transformers
25,278
open
Llama tokenizer add_prefix_space
Hi @sgugger This PR enables the llama tokenizer to support `add_prefix_space`. Could you please help me review it? Thanks!
08-03-2023 03:36:00
08-03-2023 03:36:00
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25278). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @sgugger , I have the same request here. My problem is as follows: "\nObservation" is a substring of "!\nObservation", but in the encoded ...
transformers
25,277
open
Unable to quantize Meta's new AudioCraft MusicGen model
### System Info - Windows 11 64bit - Python 3.10.12 - Torch v2.0.1+cu117 - Transformers v4.31.0 - audiocraft v0.0.2 - bitsandbytes v0.41.0 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `exa...
08-03-2023 00:18:53
08-03-2023 00:18:53
I figured out a fix by adding the line ```python inputs_embeds = inputs_embeds.to(torch.float16) ``` right after line 776, but I noticed commit https://github.com/huggingface/transformers/commit/03f98f96836477f6f5b86957d3ce98778cad5d94 which also fixes this bug. So the second bug is fixed if you're using a version ...
transformers
25,276
open
vectorize PrefixConstrainedLogitsProcessor
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this w...
08-02-2023 20:56:57
08-02-2023 20:56:57
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25276). All of your documentation changes will be reflected on that endpoint.<|||||>There's a silly shape thing happening here which I'll try to debug ASAP (unless others are interested). Unfortunately testing locally is not worki...
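The vectorization this PR aims at can be sketched: instead of looping over batch rows in Python to mask disallowed tokens, build the additive mask with array indexing in one shot. A numpy sketch (the function shape mirrors what a logits processor does; the names are illustrative, not the PR's code):

```python
import numpy as np

def mask_disallowed(scores, allowed_per_row):
    """scores: (batch, vocab); allowed_per_row: list of allowed token-id lists."""
    mask = np.full_like(scores, -np.inf)
    rows = np.repeat(np.arange(len(allowed_per_row)),
                     [len(a) for a in allowed_per_row])
    cols = np.concatenate([np.asarray(a) for a in allowed_per_row])
    mask[rows, cols] = 0.0  # allowed positions keep their score
    return scores + mask

scores = np.zeros((2, 5))
out = mask_disallowed(scores, [[1, 2], [0]])
```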
transformers
25,275
open
Replace jnp.DeviceArray with jax.Array in FLAX models
## What does this PR do? Recent JAX versions have dropped support for jax.numpy.DeviceArray. Many FLAX models refer to jax.numpy.DeviceArray which causes a crash. This PR replaces all references to jax.numpy.DeviceArray with jax.Array. <!-- Congratulations! You've made it this far! You're not quite done yet thou...
08-02-2023 20:03:56
08-02-2023 20:03:56
Thanks for the fix @akhilgoe - believe this is a duplicate of #24875?<|||||> > Thanks for the fix @akhilgoe - believe this is a duplicate of #24875? Yes correct! <|||||>If it's okay with you can we give @mariecwhite the opportunity to finish their PR since they've worked on it since last week? (should be merged...
transformers
25,274
closed
CI with `pytest_num_workers=8` for torch/tf jobs
We set `pytest_num_workers` to `3` for `torch_job` and 6 for `tf_job` to avoid OOM. With the recent efforts of reducing model size in CI, we can actually set `pytest_num_workers=8`. - The full suite: all 3 jobs (PT/TF/Flax): `12-15 minutes` - On the latest nightly CI (without all PRs merged today): `PT: 37 min | TF...
08-02-2023 19:21:30
08-02-2023 19:21:30
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,273
closed
use `pytest_num_workers=8` for `torch_job` and `tf_job`
# What does this PR do? We set `pytest_num_workers` to `3` for `torch_job` and `6` for `tf_job` to avoid OOM. With the recent efforts of reducing model size in CI, we can actually set `pytest_num_workers=8`. The full suite: all 3 jobs (PT/TF/Flax) 12-15 minutes (on the latest nightly CI without all PRs merged to...
08-02-2023 19:17:59
08-02-2023 19:17:59
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25273). All of your documentation changes will be reflected on that endpoint.
transformers
25,272
closed
Question about generate method for AutoModelForCausalLM
Hi, I am trying to use the git model from the pretrained to pass to captum API for calculation of the attribution score. ` ### Initialize the attribution algorithm from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("microsoft/git-base") ig = IntegratedGradients(model...
08-02-2023 17:08:26
08-02-2023 17:08:26
Hi, thanks for raising an issue! This is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.
transformers
25,271
open
EncoderDecoder does not automatically create decoder_attention_mask to match decoder_input_ids
### System Info ``` - `transformers` version: 4.31.0 - Platform: Linux-4.15.0-192-generic-x86_64-with-glibc2.27 - Python version: 3.11.4 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - ...
08-02-2023 14:59:12
08-02-2023 14:59:12
somewhat related, it seems like in the notebook, neither the `decoder_input_ids` nor the `labels` are shifted; Patrick claims it's because: > `"labels"` are shifted automatically to the left for language modeling training. but I don't see any evidence of this in the implementation. Was this behavior changed at some point? ...
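The automatic shifting under discussion follows the usual `shift_tokens_right` pattern for seq2seq models; a numpy sketch of that behavior (a generic illustration, not code copied from the notebook or from `EncoderDecoder`):

```python
import numpy as np

def shift_tokens_right(labels, pad_token_id, decoder_start_token_id):
    # decoder_input_ids[t] = labels[t-1]; position 0 is the decoder start token.
    shifted = np.zeros_like(labels)
    shifted[:, 1:] = labels[:, :-1]
    shifted[:, 0] = decoder_start_token_id
    shifted[shifted == -100] = pad_token_id  # -100 is only a loss mask, not a real id
    return shifted

labels = np.array([[5, 6, 7, -100]])
decoder_input_ids = shift_tokens_right(labels, pad_token_id=0, decoder_start_token_id=2)
```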
transformers
25,270
open
Device errors when loading in 8 bit
### System Info Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.31.0 - Platform: Linux-5.10.178-162.673.amzn2.x86_64-x86_64-with-glibc2.26 - Python version: 3.10.10 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate ver...
08-02-2023 13:39:56
08-02-2023 13:39:56
You cannot re-dispatch a model that was loaded in 8bit. You need to pass along your `max_memory` or `device_map` to the call to `from_pretrained`.
transformers
25,269
open
run_clm_no_trainer.py example - problem with most recent checkpoint loading
The example has code for finding the latest checkpoint, but accelerator.load_state isn't called. https://github.com/huggingface/transformers/blob/1baeed5bdf3c58b723a6125632567f97bdf322c6/examples/pytorch/language-modeling/run_clm_no_trainer.py#L561C15-L561C15
08-02-2023 13:39:33
08-02-2023 13:39:33
Hi @TomerRonen34, thanks for raising this issue! Can you make sure to follow the issue template and include: * A reproducible code snippet * Details of the expected and observed behaviour including the full traceback if it exists * Information about the running environment: run `transformers-cli env` in the ter...
transformers
25,268
closed
recommend DeepSpeed's Argument Parsing documentation
# What does this PR do? Clarify how to properly set the arguments passed by `deepspeed` when running in CLI. For example the following errors might be raised when running something like `deepspeed --num_gpus=2 fine-tune.py google/flan-t5-xxl` due to args passed by `deepspeed`: ``` usage: fine-tune.py [-h] mod...
08-02-2023 13:32:15
08-02-2023 13:32:15
cc @pacman100 <|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
25,267
closed
[MMS] Fix mms
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this w...
08-02-2023 13:26:07
08-02-2023 13:26:07
_The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh ok to merge or should we run some more tests?<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25267). All of your documentation changes will be reflected on that endpoint.
transformers
25,266
closed
CI with layers=2
# What does this PR do? Running a (sub) set of 24315 tests (given by test fetcher) - only tests in `test_modeling_xxx.py`. (for a full run like the nightly run, it doesn't seem to change anything about running time - needs more investigation) Running time: - num_layers = mixed (2, 3, 4, 5, 6) - currently `main` - ...
08-02-2023 13:08:37
08-02-2023 13:08:37
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,265
open
[`Docs` / `BetterTransformer` ] Added more details about flash attention + SDPA
# What does this PR do? as discussed offline with @LysandreJik This PR clarifies to users how it is possible to use Flash Attention as a backend for most used models in transformers. As we have a seen some questions from users asking whether it is possible to integrate flash attention into HF models, whereas you...
08-02-2023 12:59:23
08-02-2023 12:59:23
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25265). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks a lot for the extensive review @stevhliu ! 🎉
transformers
25,264
open
[Question] How to load AutoFeatureExtractor on GPU?
Hi, I am following this guide to learn how to do audio classification with wav2vec2: https://huggingface.co/docs/transformers/main/tasks/audio_classification I intend to extract features of my data with the following codes ``` feature_extractor = AutoFeatureExtractor.from_pretrained("/workspace/models/wav2vec2-lar...
08-02-2023 12:26:20
08-02-2023 12:26:20
Hi @treya-lin, thanks for raising an issue! This is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports. You can move arrays prepared by the feature extractor to the GPU using the `to` method on its outputs: ``` de...
transformers
25,263
closed
Remove `pytest_options={"rA": None}` in CI
# What does this PR do? This option causes the (TF/Flax) jobs to spend 6-8 minutes (for a full set run) to prepare something for reporting after the actual tests are finished. Taking [this TF job (nightly run)](https://app.circleci.com/pipelines/github/huggingface/transformers/69562/workflows/8fd9db08-9730-4d57-9...
08-02-2023 11:36:03
08-02-2023 11:36:03
_The documentation is not available anymore as the PR was closed or merged._<|||||> > For reference, I think `-rA` generates a [detailed summary report for all groups](https://docs.pytest.org/en/6.2.x/usage.html#detailed-summary-report). Oh yes, my memory mixed the `--make-reports` and `-rA` things. Thanks! <|||||...
transformers
25,262
open
model.push_to_hub not working for gtr-large while loading with 8-bit using bnb
### System Info Issue :- I want to load gtr-large model in 8-bits using bitsandbytes and save it for future usage model = T5ForConditionalGeneration.from_pretrained('sentence-transformers/gtr-t5-large',load_in_8bit=True) model.push_to_hub("snigdhachandan/gtr_large_8bit") Error :- Traceback (most recen...
08-02-2023 11:18:38
08-02-2023 11:18:38
Hi @nss-programmer, thanks for raising this issue. There's been quite a few updates between bitsandbytes and transformers recently. Could you update your local transformers version to the most recent release `pip install --upgrade transformers` and try again? If that doesn't work, then could you try from source `pi...
transformers
25,261
open
Mask2Former broadcasting issue when running inference on model traced with GPU device
### System Info ``` - System information: x86_64 GNU/Linux - Ubuntu version: 18.04 - Python version: 3.8.12 - CUDA version: 11.1 - PyTorch version: 2.0.1 - transformers version: 4.31.0 ``` ### Who can help? @amyeroberts @sgugger @muellerzr ### Information - [ ] The official example scripts - [ ] My own...
08-02-2023 11:06:50
08-02-2023 11:06:50
Hi @matteot11, thanks for reporting this and for providing such a detailed and clean issue report ❤️ Looking into it 🔍 <|||||>@matteot11 I'm going to open up a PR soon to resolve this and remove the einsum operations. In the meantime, if you need to be able to run a compiled model now, it will run on torch nightly...
transformers
25,260
closed
โš ๏ธ [Wav2Vec2-MMS] `pipeline` and `from_pretrained` fail to load the Wav2Vec2 MMS checkpoints
### System Info - `transformers` version: 4.31.0 - Platform: Linux-5.15.109+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (False) - Tensor...
08-02-2023 10:22:16
08-02-2023 10:22:16
cc @patrickvonplaten <|||||>It looks like it's related to some recent changes and accelerate. If you checkout this commit: https://github.com/huggingface/transformers/commit/b0513b013b10939a2b47ab94933c2cca909716a2 and uninstall accelerate the code snippet works fine for me.<|||||>IIRC, fast loading with acceler...
transformers
25,259
closed
Update rescale tests - cast to float after rescaling to reflect #25229
# What does this PR do? In #25229 - the casting to float was moved back to after rescaling. This wasn't reflected in the specific rescaling tests for EfficientNet and ViVit, resulting in failing tests. This PR resolves this. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dis...
08-02-2023 10:01:18
08-02-2023 10:01:18
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,258
open
Why I cannot assign new parameter to the whisper pretrained config?
### System Info - `transformers` version: 4.31.0 - Platform: Linux-5.4.0-155-generic-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (True) -...
08-02-2023 09:29:35
08-02-2023 09:29:35
Hi @teinhonglo, thanks for raising this issue! The reason for not being able to assign through the `from_pretrained` call is a safety check. Unknown kwargs are not applied: their application is ambigious - should they control the `from_pretrained` behaviour or be set as a config attribute? You can see which kwargs ...
transformers
25,257
open
how to print out the data loaded by each epoch during trainer.train() training?
### Feature request please tell to me, how to print out the data loaded by each epoch during trainer.train() training? ### Motivation how to print out the data loaded by each epoch during trainer.train() training? ### Your contribution how to print out the data loaded by each epoch during trainer.train() train...
08-02-2023 09:13:55
08-02-2023 09:13:55
Hi @ahong007007, thanks for raising an issue! This is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.
transformers
25,256
open
Use 'transformers.BertModel.from_pretrained', The code is blocked
![52ae2d1edf2fa3044e6932d42c558f1](https://github.com/huggingface/transformers/assets/86940083/180c1033-375a-46b8-af7e-cda344e1e5ff) this is py-spy result: ![image](https://github.com/huggingface/transformers/assets/86940083/5d5aa094-fa16-452d-ab39-8700fa4d8d1e)
08-02-2023 08:56:36
08-02-2023 08:56:36
Hi, are you running the script/command in some particular setting? It looks like it's in a multiprocessing setting. Could you provide a self-contained code snippet instead of just uploading screenshots? Thanks in advance.<|||||>Without pyrocketmq it works fine, but with pyrocketmq it does not. The code is: ``` import jpype.impo...
transformers
25,255
open
fix bad URL to Llama 2
# What does this PR do? ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
08-02-2023 08:43:23
08-02-2023 08:43:23
@fangli80 Running`make fix-copies` and pushing the changes will resolve the failing quality CI checks
transformers
25,254
open
Add FlaxCLIPTextModelWithProjection
# What does this PR do? `FlaxCLIPTextModelWithProjection` is necessary to support the Flax port of Stable Diffusion XL: https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/fb6d705fb518524cabc79c77f13a0e7921bcab3a/text_encoder_2/config.json#L3 I can add some tests, if necessary, after this appr...
08-02-2023 08:25:27
08-02-2023 08:25:27
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25254). All of your documentation changes will be reflected on that endpoint.<|||||>Should we maybe for now just add it in a subfolder of sdxl in diffusers here: https://github.com/huggingface/diffusers/tree/main/src/diffusers/pip...
transformers
25,253
open
RWKV-WORLD-4
### Model description BlinkDL/rwkv-4-world is a repo present on Huggingface i want the model's tokenizer and the model to be added to the Transformers Lib. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation _No r...
08-02-2023 07:39:58
08-02-2023 07:39:58
Hi @CosmoLM, thanks for opening this model request! The RWKV-4 model already exists in transformers -- [PR](https://github.com/huggingface/transformers/pull/22797), [docs](https://huggingface.co/docs/transformers/v4.31.0/en/model_doc/rwkv#rwkv-attention-and-the-recurrent-formulas). To enable loading the model throu...
transformers
25,252
open
run_mae.py can not be used directly on own dir
### System Info ref: https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining python run_mae.py \ --model_type vit_mae \ --dataset_name nateraw/image-folder \ --train_dir <path-to-train-root> \ --output_dir ./outputs/ \ --remove_unused_columns False \ --...
08-02-2023 07:30:25
08-02-2023 07:30:25
The error > FileNotFoundError: Unable to find '/home/ana/data4/datasets/rvl_cdip/data/pretrain_images/train/' at / shows you don't have local datasets (or there is some issue to locate it). Could you verify this on your own side? Thanks.<|||||>Hi @CheungZeeCn, thanks for raising this issue! So that we can bes...
transformers
25,251
open
Defining top_k within pipeline changes output from list to nested list
### System Info ``` - `transformers` version: 4.30.2 - Platform: Linux-5.14.0-162.22.2.el9_1.x86_64-x86_64-with-glibc2.34 - Python version: 3.9.17 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 1.11.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Fla...
08-02-2023 05:12:29
08-02-2023 05:12:29
Hi @Harjas123 thank you for reporting! Our team will take a look.<|||||>also cc @Narsil <|||||>I agree that this is inconsistent but I don't think there is much to do about it now since this has been the case for the past three years, and making any change would break a lot of users code.<|||||>I understand. Would it a...
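The inconsistency discussed in the row above — passing `top_k` to a classification pipeline wraps each result in an extra list — can be smoothed over on the caller's side rather than in the library. A minimal, hypothetical normalizer, assuming the pipeline returned either a flat list of score dicts or a list of such lists:

```python
def normalize_pipeline_output(outputs):
    """Flatten single-input pipeline results that were nested by top_k.

    Assumes `outputs` is either a list of dicts (top_k unset) or a list
    of lists of dicts (top_k set). This is a caller-side workaround
    sketch, not part of the transformers API.
    """
    if outputs and isinstance(outputs[0], list) and len(outputs) == 1:
        return outputs[0]
    return outputs


# Nested single-input result is flattened; flat results pass through.
flat = normalize_pipeline_output([[{"label": "POSITIVE", "score": 0.99}]])
```

A wrapper like this keeps downstream code identical whether or not `top_k` was supplied, at the cost of ambiguity for genuinely batched single-element inputs.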
transformers
25,250
open
Ko perf train gpu one
<!-- PR์˜ ์ œ๋ชฉ์€ "๐ŸŒ [i18n-KO] Translated `<your_file>.md` to Korean" ์œผ๋กœ ๋ถ€ํƒ๋“œ๋ฆฝ๋‹ˆ๋‹ค! --> # What does this PR do? Translated the `<your_file>.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [ ] Chec...
08-02-2023 03:43:28
08-02-2023 03:43:28
transformers
25,249
closed
Bump cryptography from 41.0.2 to 41.0.3 in /examples/research_projects/decision_transformer
Bumps [cryptography](https://github.com/pyca/cryptography) from 41.0.2 to 41.0.3. <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst">cryptography's changelog</a>.</em></p> <blockquote> <p>41.0.3 - 2023-08-01</p> <pre><code> * Fixed performan...
08-02-2023 02:22:03
08-02-2023 02:22:03
_The documentation is not available anymore as the PR was closed or merged._<|||||>OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major vers...
transformers
25,248
open
Allow `trust_remote_code` in example scripts
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this w...
08-01-2023 20:31:51
08-01-2023 20:31:51
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25248). All of your documentation changes will be reflected on that endpoint.<|||||>Will do flax and tf tomorrow. I have a few questions though: 1. @ydshieh, this script is still using `use_auth_token`. Is this intended? https:...
transformers
25,247
open
Enable use of best epoch in Trial, with early stopping, during hyperparameter search
### Feature request When running a `Trainer.hyperparameter_search`, each trial's value is calculated from the last epoch's chosen metric. However, especially when using early stopping and `load_best_model_at_end`, it would be useful to use the best model instead. This could be a parameter of `Trainer.hyperparameter...
08-01-2023 19:36:07
08-01-2023 19:36:07
cc @sgugger <|||||>Yes this is not currently supported. Could be nice to add, but this is not high-priority on our side, so it would have to be a contribution :-) Happy to review a PR!
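The feature requested above — reporting a trial's objective from its best epoch rather than its last — can be approximated outside the `Trainer` by post-processing per-epoch eval metrics. A sketch with made-up numbers, assuming a higher-is-better metric:

```python
def trial_objective(epoch_metrics, greater_is_better=True):
    """Pick the best epoch's metric as the trial value.

    A caller-side workaround sketch, not a built-in option of
    Trainer.hyperparameter_search.
    """
    return max(epoch_metrics) if greater_is_better else min(epoch_metrics)


# Early stopping halted this hypothetical trial after epoch 4; the last
# value (0.70) would under-report the trial, while the best is 0.72.
metrics = [0.61, 0.68, 0.72, 0.70]
best = trial_objective(metrics)
```

Feeding a value like this back as the trial objective mirrors `load_best_model_at_end` semantics instead of penalizing trials that overshoot their best epoch.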
transformers
25,246
closed
Fix return_dict_in_generate bug in InstructBlip generate function
# What does this PR do? Previously, the postprocessing conducted on generated sequences in InstructBlip's generate function assumed these sequences were tensors (i.e. that `return_dict_in_generate == False`). This PR updates the InstructBlip generate function to check whether the result of the call to the wrapped...
08-01-2023 18:28:04
08-01-2023 18:28:04
_The documentation is not available anymore as the PR was closed or merged._
transformers
25,245
open
BLIP-2 request: If it's even possible, can you please provide an official example script of how to get the text(caption) features and image features into the same vector space (e.g. for cross-modal retrieval/search using BLIP-2 models, similar to what we can already do with CLIP.) Thanks in advance.
### System Info linux, python 3.8+, pytorch '1.13.0+cu116' ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ...
08-01-2023 18:21:07
08-01-2023 18:21:07
Hi @wingz1, thanks for raising an issue! This is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports. There are code examples of how to use [BLIP](https://huggingface.co/docs/transformers/v4.31.0/en/model_doc/blip#trans...
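The question in the row above concerns comparing text and image features in one vector space. Independent of any particular model, cross-modal retrieval typically reduces to cosine similarity over projected embeddings — a pure-Python sketch with made-up vectors (not BLIP-2's actual API):

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Identical directions score 1.0; orthogonal directions score 0.0.
score = cosine_similarity([1.0, 0.0, 1.0], [1.0, 0.0, 1.0])  # → 1.0
```

Whatever projection heads a given model provides, ranking candidates by this score is the usual final step of retrieval.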
transformers
25,244
open
VQA task guide
This PR adds a new Visual Question Answering task guide to the transformers docs: fine-tuning ViLT, based on @NielsRogge 's [notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/ViLT/Fine_tuning_ViLT_for_VQA.ipynb)
08-01-2023 17:57:58
08-01-2023 17:57:58
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25244). All of your documentation changes will be reflected on that endpoint.
transformers
25,243
closed
RetNet model support
### Model description RetNet / Retentive Networks is a new model *archetype* released by microsoft; the research paper is [here](https://arxiv.org/pdf/2307.08621.pdf). As of now, there is *one* model for retnet; [made by me](https://huggingface.co/parsee-mizuhashi/retnet-tiny-wikitext-undertrained); which is undertrai...
08-01-2023 17:35:07
08-01-2023 17:35:07
cc @ArthurZucker @younesbelkada <|||||>p.s. if google offered any bigger TPU's for TRC; i could train retnet-3b (the point at which retnet is better than regular transformers), but as of now; theres retnet_base (small) and retnet_medium (ill upload it when it gets good)<|||||>I am wondering if the original authors rele...

Dataset Card for "hf-repo-issues"

More Information needed
