Ecosyste.ms: Issues

An open API service providing issue and pull request metadata for open source projects.

GitHub / ludwig-ai/ludwig issues and pull requests

#3606 - fix: Load 8-bit quantized models for eval after fine-tuning

Pull Request - State: closed - Opened by jeffkinnison over 1 year ago - 1 comment

#3605 - Code Llama Support not fully working yet

Issue - State: closed - Opened by DevHorn over 1 year ago - 3 comments

#3604 - Fix registration for char error rate.

Pull Request - State: closed - Opened by justinxzhao over 1 year ago - 1 comment

#3603 - Support merging LoRA/AdaLoRA weights into the base model.

Issue - State: closed - Opened by arnavgarg1 over 1 year ago - 3 comments
Labels: feature, help wanted, easy
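
For context, the merge requested in #3603 is the operation the Hugging Face PEFT library exposes as `merge_and_unload()`. A minimal sketch follows; the model name and adapter path are placeholders, and this is not Ludwig's internal implementation:

```python
# Minimal sketch of merging LoRA weights into a base model via PEFT.
# Model name and adapter path are placeholders; Ludwig's own
# implementation differs.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")

# Fold the low-rank deltas into the base weights so the merged model
# can be served without the peft runtime.
merged = model.merge_and_unload()
merged.save_pretrained("merged-model")
```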

#3602 - Updated characters, underscore and comma preprocessors to be TorchScriptable.

Pull Request - State: closed - Opened by martindavis over 1 year ago - 1 comment

#3601 - Store steps_per_epoch in Trainer

Pull Request - State: closed - Opened by hungcs over 1 year ago - 1 comment

#3600 - Eliminate short-circuiting for loading from local

Pull Request - State: closed - Opened by Infernaught over 1 year ago - 1 comment

#3599 - Set default eval batch size to 2 for LLM fine-tuning

Pull Request - State: closed - Opened by arnavgarg1 over 1 year ago - 1 comment

#3598 - Add codellama to tokenizer list for set_pad_token

Pull Request - State: closed - Opened by Infernaught over 1 year ago - 1 comment
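
The fix in #3598 concerns the common fallback for decoder-only tokenizers that ship without a pad token. A hedged sketch of the general pattern; Ludwig's `set_pad_token` maintains its own list of model families, so this is illustrative only:

```python
# Illustrative only: the usual EOS-as-PAD fallback for tokenizers that
# define no pad token, which batched generation and eval require.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
if tok.pad_token is None:
    tok.pad_token = tok.eos_token
```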

#3596 - FIX: Failure in TabTransformer Combiner Unit test

Pull Request - State: closed - Opened by jimthompson5802 over 1 year ago - 2 comments

#3594 - ludwig serve Internal error 500 (model has no max_length, division by zero)

Issue - State: closed - Opened by pkpro over 1 year ago - 3 comments

#3592 - Unpin Transformers for CodeLlama support

Pull Request - State: closed - Opened by arnavgarg1 over 1 year ago - 1 comment

#3589 - Load LoRA adapter for inference

Issue - State: closed - Opened by kv-gits over 1 year ago - 3 comments

#3587 - Export to CoreML fails

Issue - State: closed - Opened by saad-palapa over 1 year ago - 2 comments

#3586 - WandB: Add metric logging support on eval end and epoch end

Pull Request - State: closed - Opened by arnavgarg1 over 1 year ago - 1 comment

#3585 - Set default global_max_sequence_length to 512 for LLMs

Pull Request - State: closed - Opened by arnavgarg1 over 1 year ago - 6 comments
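
For reference, a sketch of where the #3585 default surfaces in an LLM fine-tuning config; the exact field placement is an assumption based on Ludwig 0.8-era docs, and the model and feature names are placeholders:

```python
# Sketch only: global_max_sequence_length in a Ludwig LLM config.
# Field placement assumed from Ludwig 0.8-era docs; names are placeholders.
from ludwig.api import LudwigModel

config = {
    "model_type": "llm",
    "base_model": "meta-llama/Llama-2-7b-hf",
    "input_features": [{"name": "instruction", "type": "text"}],
    "output_features": [{"name": "response", "type": "text"}],
    # Caps the combined prompt + target token length fed to the model.
    "preprocessing": {"global_max_sequence_length": 512},
}
model = LudwigModel(config)
```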

#3584 - Export to GPTQ

Issue - State: closed - Opened by tgaddair over 1 year ago
Labels: feature

#3583 - Out of Memory Error Running llama2_7b_finetuning_4bit Example

Issue - State: closed - Opened by charleslbryant over 1 year ago - 6 comments
Labels: bug

#3581 - Add AutoAugmentation to image classification training

Issue - State: closed - Opened by saad-palapa over 1 year ago - 7 comments
Labels: feature, help wanted

#3578 - Re-enable GPU tests

Pull Request - State: closed - Opened by tgaddair over 1 year ago - 2 comments

#3574 - LoRA Parameter Configuration

Issue - State: closed - Opened by msmmpts over 1 year ago - 2 comments

#3573 - Changes made for clarity and brevity

Issue - State: closed - Opened by DonMiller9294 over 1 year ago - 3 comments

#3572 - Allow user to specify huggingface link or local path to pretrained lora weights

Pull Request - State: closed - Opened by Infernaught over 1 year ago - 1 comment

#3571 - Unpin `transformers` when a newer version > 4.32.1 is released

Issue - State: closed - Opened by arnavgarg1 over 1 year ago - 1 comment

#3570 - Upload to HF fails for non-LLM trained models

Issue - State: closed - Opened by thelinuxkid over 1 year ago - 1 comment

#3569 - Pin Transformers to 4.31.0

Pull Request - State: closed - Opened by arnavgarg1 over 1 year ago - 3 comments

#3568 - local variable 'tokens' referenced before assignment

Issue - State: closed - Opened by randy-tsukemen over 1 year ago - 1 comment

#3567 - fix: Move target tensor to model output device in `check_module_parameters_updated`

Pull Request - State: closed - Opened by jeffkinnison over 1 year ago - 3 comments

#3566 - Move DDP model to device if it hasn't been wrapped yet

Pull Request - State: closed - Opened by tgaddair over 1 year ago
Labels: bug

#3565 - fix: Add predictor-specific device placement handling

Pull Request - State: closed - Opened by jeffkinnison over 1 year ago - 1 comment

#3564 - schema: Add `prompt` validation check

Pull Request - State: closed - Opened by ksbrar over 1 year ago - 2 comments

#3563 - ludwig_llama2_7b_finetuning_4bit.ipynb crashes

Issue - State: closed - Opened by silvacarl2 over 1 year ago - 3 comments

#3562 - Wrap each metric update in try/except.

Pull Request - State: closed - Opened by justinxzhao over 1 year ago

#3561 - Fully featured E2E computer vision workflow

Issue - State: closed - Opened by saad-palapa over 1 year ago

#3560 - ensure that there are enough colors to match the score index in visua…

Pull Request - State: closed - Opened by thelinuxkid over 1 year ago - 2 comments

#3559 - Report loss in tqdm to avoid log spam

Pull Request - State: closed - Opened by tgaddair over 1 year ago - 1 comment

#3558 - do not compare dicts in visualize compare_performance

Pull Request - State: closed - Opened by thelinuxkid over 1 year ago - 5 comments

#3557 - Fixed ground truth formats to include hdf5

Pull Request - State: closed - Opened by tgaddair over 1 year ago - 1 comment

#3556 - add missing ptitprince dependency

Pull Request - State: closed - Opened by thelinuxkid over 1 year ago - 4 comments

#3554 - fix: Move model to the correct device for eval

Pull Request - State: closed - Opened by jeffkinnison over 1 year ago - 5 comments

#3553 - Add backwards compatibility check for effective batch size

Pull Request - State: closed - Opened by arnavgarg1 over 1 year ago - 1 comment

#3552 - Add create_pr=True to `ludwig upload`.

Pull Request - State: closed - Opened by justinxzhao over 1 year ago - 2 comments
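
`create_pr` in #3552 is the standard `huggingface_hub` flag that opens a pull request on the Hub instead of pushing to `main`. A minimal sketch of the underlying call; the repo id and folder path are placeholders, and whether `ludwig upload` wraps exactly this call is an assumption:

```python
# Sketch: create_pr=True in huggingface_hub opens a PR on the Hub
# rather than committing directly to main. Paths are placeholders.
from huggingface_hub import upload_folder

upload_folder(
    repo_id="my-org/my-ludwig-model",
    folder_path="results/api_experiment_run/model",
    create_pr=True,
)
```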

#3551 - Model weights are not getting created after training completes

Issue - State: closed - Opened by SumanthDatta-Kony over 1 year ago - 6 comments

#3549 - Add reasonable LLM fine-tuning defaults

Pull Request - State: closed - Opened by tgaddair over 1 year ago - 1 comment

#3548 - Add test to show global_max_sequence_length can never exceed an LLM's context length

Pull Request - State: closed - Opened by arnavgarg1 over 1 year ago - 1 comment

#3547 - Set default max_sequence_length to None for LLM text input/output features

Pull Request - State: closed - Opened by arnavgarg1 over 1 year ago - 1 comment

#3546 - Fix sequence generator test.

Pull Request - State: closed - Opened by justinxzhao over 1 year ago

#3544 - Issue Running Ludwig AutoML on Modal.com Cloud Computing Service

Issue - State: closed - Opened by degschta over 1 year ago - 4 comments
Labels: bug, looking into it

#3543 - Add skip_all_evaluation as a mechanism to skip all evaluation.

Pull Request - State: closed - Opened by justinxzhao over 1 year ago - 1 comment

#3542 - Add Ludwig 0.8 notebook to the README

Pull Request - State: closed - Opened by arnavgarg1 over 1 year ago - 1 comment

#3541 - Fix pad + bos token issues for all models

Pull Request - State: closed - Opened by arnavgarg1 over 1 year ago - 2 comments

#3540 - Remove obsolete prompt tuning example.

Pull Request - State: closed - Opened by justinxzhao over 1 year ago - 1 comment

#3538 - Implement Sample Packing for Efficient LLM Training

Issue - State: closed - Opened by fire over 1 year ago - 4 comments

#3537 - [bug] Pin pydantic to < 2.0

Pull Request - State: closed - Opened by jeffkinnison over 1 year ago - 3 comments

#3536 - Improve observability during LLM inference

Pull Request - State: closed - Opened by arnavgarg1 over 1 year ago

#3535 - Update comment for predict to update Ludwig docs

Pull Request - State: closed - Opened by Infernaught over 1 year ago - 1 comment

#3534 - [bug] Support preprocessing `datetime.date` date features

Pull Request - State: closed - Opened by jeffkinnison over 1 year ago - 1 comment

#3533 - Add `effective_batch_size` to auto-adjust gradient accumulation

Pull Request - State: closed - Opened by tgaddair over 1 year ago - 2 comments
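
The auto-adjustment in #3533 amounts to solving for accumulation steps given the per-device batch size and worker count. A small illustration with a hypothetical helper; the function name and exact semantics are not Ludwig's:

```python
# Illustrative arithmetic only; helper name is hypothetical, and
# Ludwig's actual auto-adjust logic lives in its trainer.
def grad_accum_steps(effective_batch_size: int, batch_size: int,
                     num_workers: int = 1) -> int:
    # effective_batch_size == batch_size * num_workers * accumulation_steps
    assert effective_batch_size % (batch_size * num_workers) == 0
    return effective_batch_size // (batch_size * num_workers)

print(grad_accum_steps(128, batch_size=8, num_workers=2))  # -> 8
```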

#3531 - Revert "Ensure user sets backend to local w/ quantization (#3524)"

Pull Request - State: closed - Opened by tgaddair over 1 year ago
Labels: release-0.8

#3530 - README: Update LLM fine-tuning config

Pull Request - State: closed - Opened by arnavgarg1 over 1 year ago

#3529 - Llama2 training on a dataset downloaded from Hugging Face.

Issue - State: closed - Opened by sudhir2016 over 1 year ago - 13 comments

#3528 - Release 0.8.1 latest

Pull Request - State: closed - Opened by arnavgarg1 over 1 year ago

#3527 - Update ludwig version to 0.8.1

Pull Request - State: closed - Opened by arnavgarg1 over 1 year ago

#3526 - Add comment about batch size tuning

Pull Request - State: closed - Opened by arnavgarg1 over 1 year ago

#3524 - Ensure user sets backend to local w/ quantization

Pull Request - State: closed - Opened by Infernaught over 1 year ago - 3 comments

#3522 - Move loss metric to same device as inputs

Pull Request - State: closed - Opened by Infernaught over 1 year ago - 1 comment

#3521 - Add new synthesized `response` column for text output features during postprocessing

Pull Request - State: closed - Opened by arnavgarg1 over 1 year ago - 1 comment

#3520 - Add mechanism to override default values for generation during model.predict()

Pull Request - State: closed - Opened by justinxzhao over 1 year ago - 2 comments

#3519 - Move loss metric to same device as inputs

Pull Request - State: closed - Opened by Infernaught over 1 year ago - 1 comment

#3518 - Set default local backend when doing quantization

Pull Request - State: closed - Opened by Infernaught over 1 year ago - 2 comments

#3517 - [feat] Support for numeric date feature inputs

Pull Request - State: closed - Opened by jeffkinnison over 1 year ago - 1 comment

#3515 - Set backend to local instead of ray when using quantization

Pull Request - State: closed - Opened by Infernaught over 1 year ago - 1 comment

#3514 - [WIP] Enable strict schema enforcement

Pull Request - State: closed - Opened by ksbrar over 1 year ago - 1 comment

#3512 - Bump to v0.8

Pull Request - State: closed - Opened by tgaddair over 1 year ago

#3511 - Fix typo in function name for LR schedulers

Pull Request - State: closed - Opened by arnavgarg1 over 1 year ago

#3510 - Ludwig 0.7.2 codebase

Pull Request - State: closed - Opened by SumanthDatta-Kony over 1 year ago - 1 comment

#3509 - Fix temperature description

Pull Request - State: closed - Opened by arnavgarg1 over 1 year ago

#3508 - Check that LLMs have exactly one text input feature

Pull Request - State: closed - Opened by geoffreyangus over 1 year ago - 1 comment

#3507 - Add Cosine Annealing LR scheduler as a decay method

Pull Request - State: closed - Opened by arnavgarg1 over 1 year ago - 1 comment

#3506 - Update ludwig version to v0.7.5

Pull Request - State: closed - Opened by justinxzhao over 1 year ago

#3505 - Make Ludwig logo smaller in the README

Pull Request - State: closed - Opened by abidwael over 1 year ago

#3504 - Return sentences instead of individual tokens for text features during inference

Pull Request - State: closed - Opened by arnavgarg1 over 1 year ago - 1 comment

#3503 - Remove tables for Ludwig 0.7.

Pull Request - State: closed - Opened by justinxzhao over 1 year ago - 1 comment

#3502 - Readme TYPO

Pull Request - State: closed - Opened by arnavgarg1 over 1 year ago

#3501 - Improve description for generation config parameters

Pull Request - State: closed - Opened by arnavgarg1 over 1 year ago - 1 comment

#3500 - Readme updates for 0.8

Pull Request - State: closed - Opened by tgaddair over 1 year ago

#3499 - Updates for ludwig-docs

Pull Request - State: closed - Opened by tgaddair over 1 year ago

#3498 - Add use_pretrained attribute for AutoTransformers

Pull Request - State: closed - Opened by arnavgarg1 over 1 year ago - 3 comments

#3497 - Add parameter metadata for global_max_sequence_length

Pull Request - State: closed - Opened by arnavgarg1 over 1 year ago