Ecosyste.ms: Issues
An open API service providing issue and pull request metadata for open source projects.
GitHub / locuslab/tofu issues and pull requests
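Since this listing is served by the open API described above, the same records can be fetched programmatically. Below is a minimal sketch in Python; the endpoint layout (host name, then repository full name, then a paginated issues collection on issues.ecosyste.ms) and the response field names are assumptions inferred from this page, not confirmed API documentation.

    import requests

    # Assumed endpoint layout for the ecosyste.ms issues service;
    # verify against the live API documentation before relying on it.
    BASE = "https://issues.ecosyste.ms/api/v1"
    url = f"{BASE}/hosts/GitHub/repositories/locuslab/tofu/issues"

    resp = requests.get(url, params={"per_page": 100}, timeout=30)
    resp.raise_for_status()

    for issue in resp.json():
        # Field names (number, title, state, pull_request, comments_count)
        # are assumed from the listing below.
        kind = "Pull Request" if issue.get("pull_request") else "Issue"
        print(f"#{issue['number']} - {issue['title']} "
              f"({kind}, {issue['state']}, "
              f"{issue.get('comments_count', 0)} comments)")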
#47 - Request for the original author profiles of TOFU dataset
Issue - State: open - Opened by the-jb about 1 month ago
#46 - Upload Llama Unlearning Checkpoints
Issue - State: open - Opened by shariqahn about 2 months ago - 8 comments
#45 - Unusual Model Utility Gap: Gradient Difference vs Ascent (Llama2)
Issue - State: open - Opened by jeongjin0 2 months ago - 1 comment
#44 - Is the dpo loss wrong?
Issue - State: closed - Opened by HaomingX 2 months ago - 3 comments
#43 - Which dataset should we use for evaluate?
Issue - State: open - Opened by Yuda-Jin 5 months ago - 1 comment
#42 - RuntimeError: 'weight' must be 2-D During Fine-Tuning with Single GPU
Issue - State: open - Opened by ouerwt 5 months ago - 1 comment
#41 - Can not find 'adapter_config.json' in ckpt or huggingface
Issue - State: closed - Opened by Yuda-Jin 6 months ago - 2 comments
#40 - Raise error while evaluate.
Issue - State: closed - Opened by Yuda-Jin 6 months ago - 5 comments
#39 - Could you please provide finetuned model weight of phi-1.5 and lamma2, this will unify the basis of our research.
Issue - State: closed - Opened by Yuda-Jin 6 months ago - 1 comment
#38 - DeepSpeed Zero-3 is not compatible with `low_cpu_mem_usage=True` or with passing a `device_map`
Issue - State: open - Opened by zhmzm 6 months ago - 1 comment
#37 - Why finetuned model and retained model have similar model utility?
Issue - State: closed - Opened by Carol-gutianle 7 months ago - 2 comments
#36 - About the deepspeed
Issue - State: closed - Opened by LetheSec 8 months ago - 3 comments
#35 - Error loading Phi Finetuned model
Issue - State: closed - Opened by pomonam 8 months ago - 2 comments
#34 - Added support for LLaMa3
Pull Request - State: closed - Opened by mikeFore4 8 months ago - 1 comment
#33 - Inquiry about Constructing Datasets with Elaborate Prompt
Issue - State: closed - Opened by tbozhong 9 months ago - 1 comment
#32 - Inconsistent number of forget samples when evaluating the retain model (forget10 task)
Issue - State: closed - Opened by LetheSec 9 months ago - 2 comments
#31 - Breaking change in Huggingface Phi-1-5?
Issue - State: closed - Opened by ajyl 9 months ago - 1 comment
#30 - Bug in calculating loss when using DPO
Issue - State: closed - Opened by zeta-zl 9 months ago - 1 comment
#29 - Trying to get model parallelism and lower precision working
Pull Request - State: closed - Opened by molereddy 10 months ago
#28 - Finetuning with LORA causes DeepSpeed error
Pull Request - State: closed - Opened by mikeFore4 10 months ago - 1 comment
#27 - Fixing order of directory creation for forget.py to prevent exiting
Pull Request - State: closed - Opened by mikeFore4 10 months ago - 1 comment
#26 - The implementation of Truth Ratio and Probability is different from the definition in the paper
Issue - State: open - Opened by wzunknown 10 months ago - 13 comments
#25 - Support for different num_processes in interleave_eval_result_dict
Issue - State: closed - Opened by molereddy 10 months ago - 1 comment
#24 - question about retain_perturbed.json in datasets locuslab/TOFU
Issue - State: open - Opened by wtma1999 10 months ago - 1 comment
#23 - TOFU-finetuned Phi-1.5 is not on the huggingface page
Issue - State: closed - Opened by molereddy 10 months ago - 1 comment
#22 - The huggingface leaderboard page is showing Runtime Error
Issue - State: open - Opened by wzunknown 10 months ago
#21 - changed bf16 to fp16 and fixed some model paths
Pull Request - State: closed - Opened by akshayneema 10 months ago
#20 - One of the inputs missing for DPO loss
Issue - State: open - Opened by chrisliu298 10 months ago - 4 comments
#19 - Getting truth ratio always 1
Issue - State: open - Opened by shaswati1 10 months ago - 13 comments
#18 - Dataset contents issues
Issue - State: open - Opened by molereddy 10 months ago - 4 comments
#17 - Eval log file limiting examples
Issue - State: closed - Opened by molereddy 10 months ago - 1 comment
#16 - eval generates answer same as dataset
Issue - State: closed - Opened by shaswati1 11 months ago - 9 comments
#15 - Refactor eval
Pull Request - State: closed - Opened by zhilif 11 months ago
#14 - Issues introduced by refactoring and other miscellaneous
Issue - State: open - Opened by molereddy 11 months ago - 9 comments
#13 - Refactor eval
Pull Request - State: closed - Opened by pratyushmaini 11 months ago
#12 - End to generated text
Issue - State: closed - Opened by molereddy 11 months ago - 5 comments
#11 - Hyperparameter issues in configs
Issue - State: closed - Opened by molereddy 11 months ago - 3 comments
#10 - Unable to train fintuned LoRA on forget
Issue - State: open - Opened by shaswati1 11 months ago - 3 comments
#9 - Where is eval_log_aggregated.json generated?
Issue - State: closed - Opened by molereddy 11 months ago - 1 comment
#8 - Issues with deepspeed
Issue - State: closed - Opened by molereddy 11 months ago - 3 comments
#7 - Unable to save finetuned llama2
Issue - State: closed - Opened by shaswati1 11 months ago - 8 comments
#6 - requirements.txt needs a fix
Issue - State: closed - Opened by molereddy 12 months ago - 2 comments
#5 - Finetune LLAMA2 with LoRA
Issue - State: closed - Opened by petezone 12 months ago - 1 comment
#4 - Is anyone getting a problem with the command for forget.py?
Issue - State: closed - Opened by sriramvema 12 months ago - 3 comments
#3 - Where are the evals inside the data folder being generated?
Issue - State: closed - Opened by rthapa84 12 months ago - 5 comments
#2 - Command for evaluations
Issue - State: closed - Opened by yujianll about 1 year ago - 2 comments
#1 - Unable to load the dataset from HuggingFace hub, throws a ValueError
Issue - State: closed - Opened by archit31uniyal about 1 year ago - 1 comment