Ecosyste.ms: Issues
An open API service for providing issue and pull request metadata for open source projects.
GitHub: google/gemma_pytorch issues and pull requests
#77 - Failed to cancel access request
Issue - State: open - Opened by lucassunalt 20 days ago - 1 comment
Labels: stat:awaiting response
#76 - Add required world_size and rank to GemmaDecodeLayer init
Pull Request - State: closed - Opened by DavidRV00 about 1 month ago - 2 comments
#75 - Bug: GemmaDecodeLayer __init__ is not passed required world_size, rank in model_xla
Issue - State: closed - Opened by DavidRV00 about 1 month ago
#74 - Question about Rotary Embedding Sequence in Model Code vs. Diagrams
Issue - State: open - Opened by littlepsilon 2 months ago
#73 - Non-causal sliding window mask?
Issue - State: closed - Opened by Optimox 3 months ago - 3 comments
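Note (issue #73): for context, a causal mask restricted to a fixed sliding window can be built as in the minimal sketch below. This is an illustration only, not this repository's implementation; the function name and window size are assumptions.

```python
import torch

def sliding_window_causal_mask(seq_len: int, window: int) -> torch.Tensor:
    # Position i may attend to position j only if j <= i (causal)
    # and i - j < window (sliding window).
    i = torch.arange(seq_len).unsqueeze(1)   # query positions, shape (seq_len, 1)
    j = torch.arange(seq_len).unsqueeze(0)   # key positions, shape (1, seq_len)
    return (j <= i) & (i - j < window)       # True where attention is allowed

mask = sliding_window_causal_mask(seq_len=8, window=4)
print(mask.int())
```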
#72 - How to solve the 'RESOURCE_EXHAUSTED' error when loading 'gemma2_instruct_2b_en' (the script is from kaggle and runs on colab with TPU)?
Issue - State: closed - Opened by nicewang 3 months ago - 4 comments
Labels: type:support
#71 - Inconsistent 'query_pre_attn_scalar' Setting Between 9B and 27B Models
Issue - State: open - Opened by kiddj 6 months ago - 2 comments
Labels: bug, stat:awaiting response
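Note (issue #71): query_pre_attn_scalar controls how query vectors are scaled before the attention dot product, which is why an inconsistent value between model sizes matters. The sketch below is a simplified, hedged illustration with assumed tensor shapes, not the repository's actual attention code.

```python
import torch

def scaled_attention_logits(q: torch.Tensor, k: torch.Tensor,
                            query_pre_attn_scalar: float) -> torch.Tensor:
    # q, k: (batch, num_heads, seq_len, head_dim).
    # The query is scaled by query_pre_attn_scalar ** -0.5 instead of the
    # usual head_dim ** -0.5, so the configured value directly changes
    # the magnitude of the attention logits.
    q = q * query_pre_attn_scalar ** -0.5
    return torch.matmul(q, k.transpose(-2, -1))
```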
#70 - Hope to See the Source Code of Gemma2 Version
Issue - State: closed - Opened by thefreeman007 7 months ago - 1 comment
Labels: type:support
#69 - Remove unused imports
Pull Request - State: closed - Opened by neurosnap 7 months ago - 1 comment
#68 - Fix downcasting and upcasting similar to https://github.com/google/ge…
Pull Request - State: closed - Opened by michaelmoynihan 7 months ago - 1 comment
#67 - Fix downcasting and upcasting
Pull Request - State: closed - Opened by danielhanchen 7 months ago - 1 comment
#66 - Supporting Gemma V2
Pull Request - State: closed - Opened by michaelmoynihan 7 months ago - 1 comment
#65 - Update run_xla.py
Pull Request - State: closed - Opened by michaelmoynihan 7 months ago
#64 - gemma-2b-it-pytorch on tpu v5p
Issue - State: closed - Opened by shungcp 7 months ago - 1 comment
#63 - Modify SentencePiece function calls.
Pull Request - State: closed - Opened by texasmichelle 8 months ago - 1 comment
#62 - Change return to raise in `get_model_config`.
Pull Request - State: closed - Opened by texasmichelle 8 months ago - 1 comment
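Note (PR #62): replacing a silent return with a raise in `get_model_config` makes unknown variants fail fast. A minimal sketch of the pattern, assuming a hypothetical mapping of variant names to configs:

```python
def get_model_config(variant: str):
    # Raising on an unknown variant fails immediately instead of returning None,
    # which would otherwise surface later as a confusing AttributeError.
    configs = {"2b": ..., "7b": ...}  # hypothetical variant -> config mapping
    if variant not in configs:
        raise ValueError(f"Invalid variant {variant!r}. Supported variants: {sorted(configs)}")
    return configs[variant]
```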
#61 - when to support RecurrentGemma?
Issue - State: closed - Opened by Mddct 9 months ago - 1 comment
Labels: enhancement
#60 - Gemma finetuning formatting
Issue - State: closed - Opened by mostafamdy 9 months ago - 3 comments
Labels: type:support
#59 - fix missing torch in requirement
Pull Request - State: closed - Opened by Mddct 9 months ago - 1 comment
#58 - Add CodeGemma and HF pointers
Pull Request - State: closed - Opened by osanseviero 10 months ago - 1 comment
#57 - Early stop when all sequences reach EOS
Pull Request - State: open - Opened by je1lee 10 months ago - 3 comments
#56 - Memory-saving weight loading for non-quant models
Pull Request - State: closed - Opened by KaneGreen 10 months ago - 5 comments
#55 - Prepare model for deployment to Private Vertex AI endpoint
Issue - State: closed - Opened by BriianPowell 10 months ago - 5 comments
Labels: type:support
#54 - Update xla_model_parallel.py
Pull Request - State: closed - Opened by ya0guang 10 months ago - 2 comments
#53 - Error when running docker/Dockerfile
Issue - State: closed - Opened by Cguanqin 10 months ago - 3 comments
Labels: type:support, stat:awaiting response
#52 - How to use gemma for multi-round conversations
Issue - State: closed - Opened by ranck626 10 months ago - 4 comments
Labels: type:support, stat:awaiting response
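Note (issue #52): Gemma instruction-tuned models delimit turns with <start_of_turn> and <end_of_turn> markers, which is how multi-round conversations are usually assembled. The sketch below is a hedged illustration of building such a prompt; the helper and variable names are assumptions, not this repository's API.

```python
def build_chat_prompt(turns: list[tuple[str, str]]) -> str:
    # turns: list of (role, text) pairs, where role is "user" or "model".
    # Each turn is wrapped in <start_of_turn>...<end_of_turn>, and the prompt
    # ends with an open model turn so generation continues as the model.
    prompt = ""
    for role, text in turns:
        prompt += f"<start_of_turn>{role}\n{text}<end_of_turn>\n"
    prompt += "<start_of_turn>model\n"
    return prompt

history = [("user", "What is a good name for a flower shop?"),
           ("model", "Bloom & Petal."),
           ("user", "Give three more options.")]
print(build_chat_prompt(history))
```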
#51 - How to save memory when loading weights?
Issue - State: closed - Opened by KaneGreen 10 months ago - 9 comments
Labels: bug
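Note (issue #51 and PR #56): a common way to reduce peak memory while loading weights is to memory-map the checkpoint on CPU. The sketch below is a hedged illustration, not this repository's loader; mmap=True requires a reasonably recent PyTorch (>= 2.1) and a checkpoint saved in the zipfile format, and "model.ckpt" is a placeholder path.

```python
import torch

# Memory-map the checkpoint instead of reading it fully into RAM, and keep
# tensors on CPU until they are moved to the target device.
state_dict = torch.load("model.ckpt", map_location="cpu", mmap=True, weights_only=True)
```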
#50 - Unable to reproduce MATH results
Issue - State: open - Opened by wenhuchen 10 months ago - 2 comments
Labels: type:support, stat:awaiting internal
#49 - fix: raise Exception
Pull Request - State: closed - Opened by leowzz 10 months ago - 2 comments
#48 - Is it possible to load 7b-it using quantization config
Issue - State: closed - Opened by aliasneo1 10 months ago - 1 comment
Labels: enhancement
#47 - Error when running Gemma inference on GPU
Issue - State: closed - Opened by LarryHawkingYoung 10 months ago - 3 comments
Labels: type:support, stat:awaiting response
#46 - rm fairscale
Pull Request - State: closed - Opened by Mon-ius 10 months ago - 7 comments
#45 - I got an empty result while using the 7b-it model
Issue - State: closed - Opened by egbertwong 11 months ago - 4 comments
Labels: type:support
#44 - Document the existence of 99 unused tokens in the tokenizer
Pull Request - State: closed - Opened by Qubitium 11 months ago - 1 comment
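Note (PR #44): one way to inspect the reserved/unused tokens documented here is to enumerate the SentencePiece vocabulary. A hedged sketch, assuming the unused pieces are named like <unusedNN> and that "tokenizer.model" points at the tokenizer file:

```python
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="tokenizer.model")
# Collect vocabulary pieces that look like reserved/unused slots.
unused = [sp.id_to_piece(i) for i in range(sp.get_piece_size())
          if sp.id_to_piece(i).startswith("<unused")]
print(len(unused), unused[:5])
```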
#43 - fix(temperature): allow passing 0 or None as the temperature parameter
Pull Request - State: closed - Opened by joselpart 11 months ago - 3 comments
#42 - Can't disable sampling
Issue - State: closed - Opened by joselpart 11 months ago
Labels: bug
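Note (issue #42 and PR #43): disabling sampling usually means decoding greedily when the temperature is 0 or None, and sampling from the temperature-scaled distribution otherwise. The sketch below illustrates that pattern; the function and variable names are assumptions, not this repository's API.

```python
import torch

def select_next_token(logits: torch.Tensor, temperature: float | None) -> torch.Tensor:
    # logits: (batch, vocab_size). With temperature 0/None, decode greedily;
    # otherwise sample from the temperature-scaled distribution.
    if not temperature:  # covers both None and 0
        return torch.argmax(logits, dim=-1)
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).squeeze(-1)
```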
#41 - Is max_position_embeddings=8096 necessary in 2b model?
Issue - State: closed - Opened by agiwave 11 months ago - 5 comments
Labels: type:support
#40 - Auto-labels 'Gemma' on 'gemma' issues/PRs.
Pull Request - State: closed - Opened by shmishra99 11 months ago - 1 comment
#39 - Objectivity
Issue - State: closed - Opened by o6uoq 11 months ago
Labels: type:support
#38 - How to fine-tune Gemma with pytorch?
Issue - State: closed - Opened by solitude-alive 11 months ago - 2 comments
Labels: duplicate
#37 - Gemma fixes - gelu
Pull Request - State: closed - Opened by danielhanchen 11 months ago - 4 comments
#36 - Torch implementation now same as JAX
Pull Request - State: closed - Opened by thebraingen 11 months ago - 1 comment
#35 - Implementation now equals JAX
Pull Request - State: closed - Opened by thebraingen 11 months ago - 1 comment
#34 - Add instructions to download from Hugging Face Hub
Pull Request - State: closed - Opened by osanseviero 11 months ago - 1 comment
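Note (PR #34): downloading checkpoints from the Hugging Face Hub is commonly done with huggingface_hub. A hedged sketch; the repo id shown is an example and may not match the exact checkpoint repository name, and gated Gemma repos require accepting the license and authenticating (for example via `huggingface-cli login`).

```python
from huggingface_hub import snapshot_download

# Download all files of the repository into a local directory.
local_dir = snapshot_download(repo_id="google/gemma-2b-pytorch", local_dir="gemma-2b")
print(local_dir)
```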
#33 - Inconsistency between PyTorch and JAX implementation
Issue - State: closed - Opened by aboros98 11 months ago - 2 comments
Labels: enhancement
#32 - "--output_len" argument ignored
Pull Request - State: closed - Opened by k-nar 11 months ago - 1 comment
#31 - Weight file not found
Issue - State: closed - Opened by Cguanqin 11 months ago - 5 comments
Labels: type:support, stat:awaiting response
#30 - is it possible to convert gemma_pytorch to onnx to tflite?
Issue - State: closed - Opened by nyadla-sys 11 months ago - 4 comments
Labels: type:support, stat:awaiting response
#29 - [Question] Embeddings normalization by sqrt(hidden_size)
Issue - State: closed - Opened by Andrei-Aksionov 11 months ago - 4 comments
Labels: type:support
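Note (issue #29): the question concerns multiplying token embeddings by sqrt(hidden_size), which keeps their magnitude comparable to the residual stream. The sketch below illustrates the idea with small, purely illustrative sizes; it is not the repository's exact code.

```python
import torch
import torch.nn as nn

# Illustrative sizes only; the real Gemma vocabulary and hidden size are much larger.
hidden_size = 16
embedder = nn.Embedding(num_embeddings=100, embedding_dim=hidden_size)

tokens = torch.tensor([[2, 51, 94]])        # example token ids
x = embedder(tokens) * hidden_size ** 0.5   # scale embeddings by sqrt(hidden_size)
```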
#26 - After deploying google/gemma-7b-it, there is always an error response.
Issue - State: closed - Opened by ydh10002023 11 months ago - 10 comments
Labels: bug
#25 - Cannot run on v4-16 worker 0 TPU VM: "Failed to get global TPU topology"
Issue - State: closed - Opened by markusheimerl 11 months ago - 6 comments
Labels: type:support
#24 - Loss is always NaN after a few fine-tuning steps, whether fp32 or fp16
Issue - State: closed - Opened by yongzhuo 11 months ago - 1 comment
Labels: type:support
#23 - keras finetuning and inference examples uploaded
Pull Request - State: closed - Opened by r-gheda 11 months ago - 2 comments
#22 - H
Issue - State: closed - Opened by ZainBinTariq7 11 months ago - 1 comment
Labels: type:support
#21 - Changed <2B or 7B> to <2b or 7b> in README
Pull Request - State: closed - Opened by r-gheda 11 months ago
#20 - Changes <2B or 7B> option to <2b or 7b> in README
Pull Request - State: closed - Opened by r-gheda 11 months ago - 1 comment
#19 - Output with higher max_length is repetition of base text
Issue - State: closed - Opened by azrael05 11 months ago - 9 comments
Labels: type:support, stat:awaiting response
#18 - Update config.py
Pull Request - State: closed - Opened by Khajaamee455 11 months ago - 2 comments
#17 - Updated ClassMD
Pull Request - State: closed - Opened by Masomabayat 11 months ago - 2 comments
#15 - Update xla_model_parallel.py
Pull Request - State: closed - Opened by eltociear 11 months ago - 3 comments
#13 - RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
Issue - State: closed - Opened by 2579356425 11 months ago - 6 comments
Labels: duplicate
#12 - Are there reserved/unused tokens for developers?
Issue - State: closed - Opened by Qubitium 11 months ago - 3 comments
Labels: type:support
#11 - MPS (Apple Silicon) Support
Issue - State: open - Opened by dsanmart 11 months ago - 3 comments
Labels: enhancement
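Note (issue #11): requesting MPS (Apple Silicon) support usually starts with device selection like the sketch below. Whether the model actually runs on MPS depends on operator coverage; the final model line is a placeholder, not code from this repository.

```python
import torch

# Prefer CUDA, then MPS (Apple Silicon), then CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

print(device)
# model = model.to(device)  # placeholder: move the constructed model to the chosen device
```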
#10 - Why some prompts don't work: hidden_states becomes NaN after GemmaModel.forward
Issue - State: closed - Opened by vupjing 11 months ago - 11 comments
Labels: bug, stat:awaiting response
#9 - Loading torch checkpoint with weights_only set to True
Pull Request - State: closed - Opened by michaelmoynihan 11 months ago
#8 - How to finetune with the Gemma model?
Issue - State: closed - Opened by runningabcd 11 months ago - 10 comments
Labels: type:support, stat:awaiting response
#7 - Quantised weights are bfloat16 not int8
Issue - State: closed - Opened by dsanmart 11 months ago - 3 comments
Labels: type:support
#6 - Add utility to convert string to boolean type to fix quant parse arg
Pull Request - State: closed - Opened by nakkapeddi 11 months ago - 3 comments
#5 - --quant always returns True
Issue - State: closed - Opened by nakkapeddi 11 months ago - 5 comments
Labels: bug
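Note (issue #5 and PR #6): --quant always parsing as True is the classic argparse pitfall where type=bool treats any non-empty string as truthy. A hedged sketch of a string-to-boolean helper (names are illustrative, not necessarily the PR's exact code):

```python
import argparse

def str2bool(value: str) -> bool:
    # argparse's type=bool returns True for any non-empty string ("False" included),
    # so map common spellings explicitly and reject everything else.
    if value.lower() in ("true", "1", "yes"):
        return True
    if value.lower() in ("false", "0", "no"):
        return False
    raise argparse.ArgumentTypeError(f"Expected a boolean, got {value!r}")

parser = argparse.ArgumentParser()
parser.add_argument("--quant", type=str2bool, default=False)
print(parser.parse_args(["--quant", "False"]).quant)  # False, as intended
```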
#4 - A web-runtime-supported version of Gemma is really needed and of high value
Issue - State: closed - Opened by Zwe1 11 months ago - 4 comments
Labels: enhancement
#3 - RuntimeError: at::cuda::blas::gemm: not implemented for struct c10::BFloat16
Issue - State: closed - Opened by dhchenx 11 months ago - 9 comments
Labels: bug
#2 - Inconsistencies in Reported Dimensions and Configuration Files
Issue - State: closed - Opened by fvarno 11 months ago - 2 comments
Labels: type:support
#1 - `torch.load` without `weights_only` parameter is unsafe
Issue - State: closed - Opened by kit1980 11 months ago - 2 comments
Labels: bug
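Note (issue #1 and PR #9): passing weights_only=True to torch.load restricts unpickling to tensors and other safe types instead of executing arbitrary pickled code. A hedged sketch; the checkpoint path is a placeholder.

```python
import torch

# Unsafe: a pickled checkpoint can execute arbitrary code on load.
# state = torch.load("checkpoint.ckpt")

# Safer: only plain tensors/containers are deserialized (PyTorch >= 1.13).
state = torch.load("checkpoint.ckpt", map_location="cpu", weights_only=True)
```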