llamamodel: add missing softmax to fix temperature (nomic-ai#3202)
Signed-off-by: Jared Van Bortel <[email protected]>
cebtenzzre authored Dec 4, 2024
1 parent ffd29ea · commit 0c70b5a
Showing 2 changed files with 3 additions and 1 deletion.
gpt4all-backend/src/llamamodel.cpp (2 additions & 1 deletion)
@@ -584,7 +584,8 @@ void LLamaModel::initSampler(const PromptContext &promptCtx)
     llama_sampler_init_top_p(promptCtx.top_p, 1),
     llama_sampler_init_min_p(promptCtx.min_p, 1),
     llama_sampler_init_temp(promptCtx.temp),
-    llama_sampler_init_dist(LLAMA_DEFAULT_SEED)
+    llama_sampler_init_softmax(),
+    llama_sampler_init_dist(LLAMA_DEFAULT_SEED),
 };
 for (auto *smpl : samplers)
     llama_sampler_chain_add(chain, smpl);
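For context, below is a minimal sketch of an equivalent sampler chain assembled with llama.cpp's sampling API as of this commit. It is an illustration under assumptions, not GPT4All's actual initSampler: the make_chain helper, the top_k stage, and the hard-coded min_keep and seed values are illustrative. The ordering is the point of the fix: temperature rescales the logits, the newly added softmax normalizes them into a probability distribution, and the final dist sampler draws the next token from it; per the changelog entry below, omitting the softmax caused crashes and effectively infinite temperature.

```cpp
// Sketch only: an equivalent chain built with llama.cpp's sampler API.
// The make_chain helper and the parameter values are illustrative assumptions.
#include <llama.h>

static llama_sampler *make_chain(int32_t top_k, float top_p, float min_p, float temp)
{
    llama_sampler *chain = llama_sampler_chain_init(llama_sampler_chain_default_params());

    // Truncation stages narrow the candidate token set.
    llama_sampler_chain_add(chain, llama_sampler_init_top_k(top_k));
    llama_sampler_chain_add(chain, llama_sampler_init_top_p(top_p, /*min_keep=*/1));
    llama_sampler_chain_add(chain, llama_sampler_init_min_p(min_p, /*min_keep=*/1));

    // Temperature rescales the surviving logits; softmax (the stage this
    // commit adds) turns them into a normalized probability distribution.
    llama_sampler_chain_add(chain, llama_sampler_init_temp(temp));
    llama_sampler_chain_add(chain, llama_sampler_init_softmax());

    // The dist sampler then draws the next token from that distribution.
    llama_sampler_chain_add(chain, llama_sampler_init_dist(LLAMA_DEFAULT_SEED));
    return chain;
}
```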
gpt4all-chat/CHANGELOG.md (1 addition & 0 deletions)
@@ -17,6 +17,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
 ### Fixed
 - Fix bug in GUI when localdocs encounters binary data ([#3137](https://github.com/nomic-ai/gpt4all/pull/3137))
 - Fix LocalDocs bugs that prevented some docx files from fully chunking ([#3140](https://github.com/nomic-ai/gpt4all/pull/3140))
+- Fix missing softmax that was causing crashes and effectively infinite temperature since 3.4.0 ([#3202](https://github.com/nomic-ai/gpt4all/pull/3202))
 
 ## [3.4.2] - 2024-10-16

