Ecosyste.ms: Issues

An open API service for providing issue and pull request metadata for open source projects.

Issues and pull requests for the GitHub repository juncongmoo/pyllama

#114 - Question regarding EnergonAI repo

Issue - State: closed - Opened by philipp-fischer 4 months ago

#113 - torch.distributed.elastic.multiprocessing.errors.ChildFailedError

Issue - State: open - Opened by sido420 6 months ago - 1 comment

#112 - Quick Question

Issue - State: closed - Opened by ArgusK17 6 months ago

#111 - How to run an interactive mode in Jupyter?

Issue - State: open - Opened by myrainbowandsky 8 months ago

#109 - 12GB card

Issue - State: open - Opened by arthurwolf 11 months ago - 2 comments

#108 - no module named llama

Issue - State: open - Opened by Cooper-Ji 12 months ago - 1 comment

#107 - Added transformers to requirements.txt

Pull Request - State: open - Opened by HireTheHero about 1 year ago

#106 - NVMLError_NoPermission: Insufficient Permissions

Issue - State: open - Opened by mzdsk2 about 1 year ago

#105 - evaluating has an extremely large value when quantize to 4bit.

Issue - State: open - Opened by JiachuanDENG about 1 year ago - 1 comment

#104 - Download 7B model seems stuck

Issue - State: open - Opened by guanlinz about 1 year ago - 9 comments

#103 - Download watchdog kicking in? (M1 mac)

Issue - State: open - Opened by kryt about 1 year ago

#100 - shape mismatch error

Issue - State: open - Opened by Celppu about 1 year ago

#98 - parameter inncorrect when I run make command

Issue - State: open - Opened by GameDevKitY about 1 year ago

#97 - gptq github

Issue - State: open - Opened by austinmw about 1 year ago - 4 comments

#96 - Try Modular - Mojo

Issue - State: open - Opened by eznix86 about 1 year ago

#95 - Randomly get shape mismatch error

Issue - State: open - Opened by vedantroy about 1 year ago

#94 - Does this include the GPTQ quantization tricks?

Issue - State: open - Opened by vedantroy about 1 year ago

#93 - Why are params.json empty?

Issue - State: closed - Opened by ItsCRC about 1 year ago - 5 comments

#92 - Quantize issue

Issue - State: open - Opened by ZenekZombie about 1 year ago

#90 - RecursionError running llama.download

Issue - State: open - Opened by anyangpeng about 1 year ago - 4 comments

#89 - Adjust watchdog time interval from 30 seconds to 2 minutes.

Pull Request - State: closed - Opened by Jack-Moo about 1 year ago - 1 comment

#86 - Run 'inference.py' and 'model parallel group is not initialized'

Issue - State: open - Opened by ildartregulov about 1 year ago - 7 comments

#85 - Apply Delta failed

Issue - State: open - Opened by majidbhatti about 1 year ago - 1 comment

#84 - How to run 13B model in a single GPU just by inference.by?

Issue - State: open - Opened by statyui about 1 year ago

#83 - about rotary embedding in llama

Issue - State: closed - Opened by irasin about 1 year ago - 2 comments

#82 - Strange characters

Issue - State: open - Opened by webpolis about 1 year ago - 1 comment

#81 - Cannot run on Mac with Python 3.11.3

Issue - State: open - Opened by kornhill about 1 year ago - 6 comments

#79 - docs: reduce bit misspell README

Pull Request - State: closed - Opened by guspan-tanadi about 1 year ago

#78 - Quantized version link suspect

Issue - State: open - Opened by thistleknot about 1 year ago - 1 comment

#76 - Gave written examples to run 7B model on GPUs

Pull Request - State: closed - Opened by george-adams1 about 1 year ago

#75 - Can't Load Quantized Model with GPTQ-for-LLaMa

Issue - State: open - Opened by chigkim about 1 year ago - 2 comments

#74 - a questuon about the single GPU Inference

Issue - State: open - Opened by zsmmsz99 about 1 year ago - 1 comment

#72 - Readme Should Have Inference Command to use for Quantization in Text

Issue - State: open - Opened by chigkim about 1 year ago - 1 comment

#71 - rewrite download_community.sh

Pull Request - State: closed - Opened by llimllib about 1 year ago - 3 comments

#70 - add a shebang to all shell files

Pull Request - State: closed - Opened by llimllib about 1 year ago

#69 - Document if it works with CPU / Macos

Issue - State: open - Opened by ikamensh about 1 year ago

#67 - ModuleNotFoundError: No module named 'transformers'

Issue - State: open - Opened by tasteitslight about 1 year ago - 6 comments

#66 - Can't see progress bar

Issue - State: open - Opened by rahulvigneswaran about 1 year ago - 1 comment

#65 - Has black formatting been considered?

Issue - State: open - Opened by tanitna about 1 year ago

#63 - make download work behind proxy

Pull Request - State: closed - Opened by wanweilove about 1 year ago

#62 - Killed

Issue - State: open - Opened by javierp183 about 1 year ago - 6 comments

#61 - Any way to infer a quantized model on multi GPUs?

Issue - State: open - Opened by Imagium719 about 1 year ago - 1 comment

#60 - Quantize Original LLaMA Model Files

Issue - State: open - Opened by htcml about 1 year ago - 3 comments

#59 - Let it run under WSL

Pull Request - State: closed - Opened by daniel-kukiela about 1 year ago

#58 - Quantization with "groupsize" makes the results completely wrong.

Issue - State: open - Opened by daniel-kukiela about 1 year ago - 8 comments

#56 - Downloading get stuck in infinite loop

Issue - State: open - Opened by jarimustonen over 1 year ago - 13 comments

#55 - Error trying Quantize 7B model to 8-bit

Issue - State: closed - Opened by guoti777 over 1 year ago - 2 comments

#54 - AttributeError: module 'itree' has no attribute 'Node'

Issue - State: open - Opened by Tor101 over 1 year ago - 8 comments

#53 - Docker install

Issue - State: open - Opened by mgpai22 over 1 year ago

#52 - Meaningless Prediction in 13B 2bit

Issue - State: open - Opened by axenov over 1 year ago - 3 comments

#51 - error when installing

Issue - State: closed - Opened by zzzgit over 1 year ago - 1 comment

#50 - Error Downloading Models from Community on Winodws

Issue - State: open - Opened by mmortazavi over 1 year ago - 5 comments
Labels: bug

#49 - add suggestion for quantization and some bug fixes

Pull Request - State: closed - Opened by juncongmoo over 1 year ago

#47 - pyllama/downloads returns empty folders

Issue - State: open - Opened by flyjgh over 1 year ago - 34 comments
Labels: question

#46 - How can I input prompt when I use multi GPU?

Issue - State: open - Opened by liydxl over 1 year ago - 1 comment

#45 - Share your evaluate result

Issue - State: open - Opened by jeff3071 over 1 year ago - 3 comments

#44 - fix argument in convert_llama

Pull Request - State: closed - Opened by a1ex90 over 1 year ago

#43 - AttributeError: module 'numpy' has no attribute 'array'

Issue - State: open - Opened by jameswan over 1 year ago

#42 - watch downloading speed and restart downloading if it drops to very low

Pull Request - State: closed - Opened by gmlove over 1 year ago

#41 - Error trying Quantize 7B model to 2-bit

Issue - State: open - Opened by willintonmb over 1 year ago - 5 comments

#40 - Quantize 7B model to 8-bit --> "Killed"

Issue - State: closed - Opened by hex4def6 over 1 year ago - 1 comment

#39 - "KeyError: 'llama'"

Issue - State: closed - Opened by DirtyKnightForVi over 1 year ago

#37 - ModuleNotFoundError: No module named 'quant_cuda'

Issue - State: open - Opened by AceBeaker2 over 1 year ago - 15 comments

#36 - Unkown cuda error

Issue - State: closed - Opened by AceBeaker2 over 1 year ago - 3 comments

#34 - ModuleNotFoundError: No module named 'llama.hf'

Issue - State: closed - Opened by vetka925 over 1 year ago - 4 comments

#33 - No module named "transformers" error

Issue - State: closed - Opened by SimoGiuffrida over 1 year ago - 1 comment

#32 - example.py FAILED

Issue - State: closed - Opened by yangzhipeng1108 over 1 year ago - 1 comment

#31 - Model mismatch for 13B

Issue - State: open - Opened by BOB603049648 over 1 year ago - 3 comments

#30 - ModuleNotFoundError: No module named 'quant_cuda'

Issue - State: closed - Opened by WeissAzura over 1 year ago - 3 comments

#29 - Download takes forever

Issue - State: closed - Opened by puyuanliu over 1 year ago - 2 comments

#28 - Model does not split for 65B

Issue - State: open - Opened by YixinSong-e over 1 year ago - 5 comments

#27 - How to run llama_quant without downloading models from huggingface ?

Issue - State: open - Opened by B2F over 1 year ago - 1 comment
Labels: enhancement, good first issue

#26 - Error when download models

Issue - State: open - Opened by paulocoutinhox over 1 year ago - 5 comments

#25 - world size assertionerror

Issue - State: closed - Opened by sharlec over 1 year ago - 6 comments

#24 - M1 inference

Issue - State: open - Opened by zmactep over 1 year ago - 1 comment

#23 - multiple GPU support

Pull Request - State: closed - Opened by mldevorg over 1 year ago

#22 - Execuse me, How to use chat mode?

Issue - State: closed - Opened by baifachuan over 1 year ago
Labels: invalid

#21 - convert

Pull Request - State: closed - Opened by mldevorg over 1 year ago

#20 - add simple input loop to inference.py

Pull Request - State: closed - Opened by lucemia over 1 year ago

#19 - Bug fix3

Pull Request - State: closed - Opened by juncongmoo over 1 year ago

#18 - fix a bug

Pull Request - State: closed - Opened by mldevorg over 1 year ago

#17 - fix document

Pull Request - State: closed - Opened by mldevorg over 1 year ago

#16 - add quant and download info

Pull Request - State: closed - Opened by juncongmoo over 1 year ago

#15 - Vanilla pytorch LLaMA implementation

Issue - State: closed - Opened by galatolofederico over 1 year ago - 3 comments

#14 - Struggle with training LLaMA with a single GPU using both PT v1 and v2

Issue - State: closed - Opened by linhduongtuan over 1 year ago - 4 comments

#13 - Docker Playground With LLaMA And PyLLaMA

Issue - State: closed - Opened by soulteary over 1 year ago - 1 comment