From cddd0f75078627ab55e6b8b2ee5fb2ea98de7b59 Mon Sep 17 00:00:00 2001 From: Jared Van Bortel Date: Fri, 6 Dec 2024 16:25:09 -0500 Subject: [PATCH] chat: run update_translations for v3.5.0 (#3230) Signed-off-by: Jared Van Bortel --- gpt4all-chat/translations/gpt4all_en_US.ts | 731 ++++++++++++-------- gpt4all-chat/translations/gpt4all_es_MX.ts | 742 +++++++++++++------- gpt4all-chat/translations/gpt4all_it_IT.ts | 740 +++++++++++++------- gpt4all-chat/translations/gpt4all_pt_BR.ts | 742 +++++++++++++------- gpt4all-chat/translations/gpt4all_ro_RO.ts | 743 ++++++++++++++------- gpt4all-chat/translations/gpt4all_zh_CN.ts | 741 +++++++++++++------- gpt4all-chat/translations/gpt4all_zh_TW.ts | 743 ++++++++++++++------- 7 files changed, 3561 insertions(+), 1621 deletions(-) diff --git a/gpt4all-chat/translations/gpt4all_en_US.ts b/gpt4all-chat/translations/gpt4all_en_US.ts index ea4b79b8bd32..b84517d54b29 100644 --- a/gpt4all-chat/translations/gpt4all_en_US.ts +++ b/gpt4all-chat/translations/gpt4all_en_US.ts @@ -548,12 +548,12 @@ - Save Chat Context + Enable System Tray - Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat. + The application will minimize to the system tray when the window is closed. @@ -595,13 +595,13 @@ Chat - - + + New Chat - + Server Chat @@ -609,12 +609,12 @@ ChatAPIWorker - + ERROR: Network error occurred while connecting to the API server - + ChatAPIWorker::handleFinished got HTTP Error %1 %2 @@ -682,35 +682,181 @@ + + ChatItemView + + + GPT4All + + + + + You + + + + + response stopped ... + + + + + retrieving localdocs: %1 ... + + + + + searching localdocs: %1 ... + + + + + processing ... + + + + + generating response ... + + + + + generating questions ... + + + + + + Copy + + + + + Copy Message + + + + + Disable markdown + + + + + Enable markdown + + + + + %n Source(s) + + %n Source + %n Sources + + + + + LocalDocs + + + + + Edit this message? + + + + + + All following messages will be permanently erased. 
+ + + + + Redo this response? + + + + + Cannot edit chat without a loaded model. + + + + + Cannot edit chat while the model is generating. + + + + + Edit + + + + + Cannot redo response without a loaded model. + + + + + Cannot redo response while the model is generating. + + + + + Redo + + + + + Like response + + + + + Dislike response + + + + + Suggested follow-ups + + + + + ChatLLM + + + Your message was too long and could not be processed (%1 > %2). Please try again with something shorter. + + + ChatListModel - + TODAY - + THIS WEEK - + THIS MONTH - + LAST SIX MONTHS - + THIS YEAR - + LAST YEAR @@ -718,349 +864,282 @@ ChatView - + <h3>Warning</h3><p>%1</p> - - Switch model dialog - - - - - Warn the user if they switch models, then context will be erased + + Conversation copied to clipboard. - - Conversation copied to clipboard. + + Code copied to clipboard. - - Code copied to clipboard. + + The entire chat will be erased. - + Chat panel - + Chat panel with options - + Reload the currently loaded model - + Eject the currently loaded model - + No model installed. - + Model loading error. - + Waiting for model... - + Switching context... - + Choose a model... - + Not found: %1 - + The top item is the current model - - + LocalDocs - + Add documents - + add collections of documents to the chat - + Load the default model - + Loads the default model which can be changed in settings - + No Model Installed - + GPT4All requires that you install at least one model to get started - + Install a Model - + Shows the add model view - + Conversation with the model - + prompt / response pairs from the conversation - - GPT4All - - - - - You - - - - - response stopped ... + + Legacy prompt template needs to be <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">updated</a> in Settings. - - processing ... + + No <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> configured. - - generating response ... 
+ + The <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> cannot be blank. - - generating questions ... + + Legacy system prompt needs to be <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">updated</a> in Settings. - - + Copy - - - Copy Message - - - - - Disable markdown - - - - - Enable markdown - - - - - Thumbs up - - - - - Gives a thumbs up to the response - - - - - Thumbs down - - - - - Opens thumbs down dialog - - - %n Source(s) - + %n Source %n Sources - - Suggested follow-ups - - - - + Erase and reset chat session - + Copy chat session to clipboard - - Redo last chat response - - - - + Add media - + Adds media to the prompt - + Stop generating - + Stop the current response generation - + Attach - + Single File - + Reloads the model - + <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help - - - Reload · %1 - - - - - Loading · %1 + + + Erase conversation? - - Load · %1 (default) → + + Changing the model will erase the current conversation. - - restoring from text ... + + + Reload · %1 - - retrieving localdocs: %1 ... + + Loading · %1 - - searching localdocs: %1 ... + + Load · %1 (default) → - + Send a message... - + Load a model to continue... 
- + Send messages/prompts to the model - + Cut - + Paste - + Select All - + Send message - + Sends the message/prompt contained in textfield to the model @@ -1104,40 +1183,53 @@ model to get started + + ConfirmationDialog + + + OK + + + + + Cancel + + + Download - + Model "%1" is installed successfully. - + ERROR: $MODEL_NAME is empty. - + ERROR: $API_KEY is empty. - + ERROR: $BASE_URL is invalid. - + ERROR: Model "%1 (%2)" is conflict. - + Model "%1 (%2)" is installed successfully. - + Model "%1" is removed. @@ -1303,52 +1395,52 @@ model to get started - + Display - + Show Sources - + Display the sources used for each response. - + Advanced - + Warning: Advanced usage only. - + Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>. - + Document snippet size (characters) - + Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation. - + Max document snippets per prompt - + Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation. 
@@ -1510,78 +1602,78 @@ model to get started ModelList - + <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li> - + <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1 - + <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2 - + <strong>Mistral Tiny model</strong><br> %1 - + <strong>Mistral Small model</strong><br> %1 - + <strong>Mistral Medium model</strong><br> %1 - + <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info. - - + + cannot open "%1": %2 - + cannot create "%1": %2 - + %1 (%2) - + <strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul> - + <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li> - + <ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li> - + <strong>Connect to OpenAI-compatible API server</strong><br> %1 - + <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul> @@ -1594,212 +1686,280 @@ model to get started - + + %1 system message? + + + + + + Clear + + + + + + Reset + + + + + The system message will be %1. 
+ + + + + removed + + + + + + reset to the default + + + + + %1 chat template? + + + + + The chat template will be %1. + + + + + erased + + + + Model Settings - + Clone - + Remove - + Name - + Model File - - System Prompt + + System Message + + + + + A message to set the context or guide the behavior of the model. Leave blank for none. NOTE: Since GPT4All 3.5, this should not contain control tokens. + + + + + System message is not <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">plain text</a>. + + + + + Chat Template - - Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens. + + This Jinja template turns the chat into input for the model. - - Prompt Template + + No <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> configured. - - The template that wraps every prompt. + + The <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> cannot be blank. - - Must contain the string "%1" to be replaced with the user's input. + + <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">Syntax error</a>: %1 - + + Chat template is not in <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">Jinja format</a>. + + + + Chat Name Prompt - + Prompt used to automatically generate chat names. - + Suggested FollowUp Prompt - + Prompt used to generate suggested follow-up questions. - + Context Length - + Number of input and output tokens the model sees. - + Maximum combined prompt/response tokens before information is lost. Using more context than the model was trained on will yield poor results. NOTE: Does not take effect until you reload the model. - + Temperature - + Randomness of model output. Higher -> more variation. - + Temperature increases the chances of choosing less likely tokens. NOTE: Higher temperature gives more creative but less predictable outputs. - + Top-P - + Nucleus Sampling factor. 
Lower -> more predictable. - + Only the most likely tokens up to a total probability of top_p can be chosen. NOTE: Prevents choosing highly unlikely tokens. - + Min-P - + Minimum token probability. Higher -> more predictable. - + Sets the minimum relative probability for a token to be considered. - + Top-K - + Size of selection pool for tokens. - + Only the top K most likely tokens will be chosen from. - + Max Length - + Maximum response length, in tokens. - + Prompt Batch Size - + The batch size used for prompt processing. - + Amount of prompt tokens to process at once. NOTE: Higher values can speed up reading prompts but will use more RAM. - + Repeat Penalty - + Repetition penalty factor. Set to 1 to disable. - + Repeat Penalty Tokens - + Number of previous tokens used for penalty. - + GPU Layers - + Number of model layers to load into VRAM. - + How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model. Lower values increase CPU load and RAM usage, and make inference slower. NOTE: Does not take effect until you reload the model. @@ -2053,15 +2213,38 @@ NOTE: Does not take effect until you reload the model. + + MySettingsLabel + + + Clear + + + + + Reset + + + MySettingsTab - + + Restore defaults? + + + + + This page of settings will be reset to the defaults. + + + + Restore Defaults - + Restores settings dialog to a default state @@ -2306,30 +2489,6 @@ model release that uses your data! - - SwitchModelDialog - - - <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue? - - - - - Continue - - - - - Continue with model loading - - - - - - Cancel - - - ThumbsDownDialog @@ -2366,125 +2525,135 @@ model release that uses your data! main - + <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. 
In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a> - + GPT4All v%1 - + + Restore + + + + + Quit + + + + <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help. - + Connection to datalake failed. - + Saving chats. - + Network dialog - + opt-in to share feedback/conversations - + Home view - + Home view of application - + Home - + Chat view - + Chat view to interact with models - + Chats - - + + Models - + Models view for installed models - - + + LocalDocs - + LocalDocs view to configure and use local docs - - + + Settings - + Settings view for application configuration - + The datalake is enabled - + Using a network model - + Server mode is enabled - + Installed models - + View of installed models diff --git a/gpt4all-chat/translations/gpt4all_es_MX.ts b/gpt4all-chat/translations/gpt4all_es_MX.ts index e7693bf23f0d..c392ddb33c26 100644 --- a/gpt4all-chat/translations/gpt4all_es_MX.ts +++ b/gpt4all-chat/translations/gpt4all_es_MX.ts @@ -536,13 +536,21 @@ - Save Chat Context - Guardar contexto del chat + Enable System Tray + + The application will minimize to the system tray when the window is closed. + + + + Save Chat Context + Guardar contexto del chat + + Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat. - Guardar el estado del modelo de chat en el disco para una carga más rápida. 
ADVERTENCIA: Usa ~2GB por chat. + Guardar el estado del modelo de chat en el disco para una carga más rápida. ADVERTENCIA: Usa ~2GB por chat. @@ -599,13 +607,13 @@ Chat - - + + New Chat Nuevo chat - + Server Chat Chat del servidor @@ -613,12 +621,12 @@ ChatAPIWorker - + ERROR: Network error occurred while connecting to the API server ERROR: Ocurrió un error de red al conectar con el servidor API - + ChatAPIWorker::handleFinished got HTTP Error %1 %2 ChatAPIWorker::handleFinished obtuvo Error HTTP %1 %2 @@ -686,35 +694,181 @@ Lista de chats en el diálogo del cajón + + ChatItemView + + + GPT4All + GPT4All + + + + You + + + + + response stopped ... + respuesta detenida ... + + + + retrieving localdocs: %1 ... + recuperando documentos locales: %1 ... + + + + searching localdocs: %1 ... + buscando en documentos locales: %1 ... + + + + processing ... + procesando ... + + + + generating response ... + generando respuesta ... + + + + generating questions ... + generando preguntas ... + + + + + Copy + Copiar + + + + Copy Message + Copiar mensaje + + + + Disable markdown + Desactivar markdown + + + + Enable markdown + Activar markdown + + + + %n Source(s) + + %n Fuente + %n Fuentes + + + + + LocalDocs + + + + + Edit this message? + + + + + + All following messages will be permanently erased. + + + + + Redo this response? + + + + + Cannot edit chat without a loaded model. + + + + + Cannot edit chat while the model is generating. + + + + + Edit + + + + + Cannot redo response without a loaded model. + + + + + Cannot redo response while the model is generating. + + + + + Redo + + + + + Like response + + + + + Dislike response + + + + + Suggested follow-ups + Seguimientos sugeridos + + + + ChatLLM + + + Your message was too long and could not be processed (%1 > %2). Please try again with something shorter. 
+ + + ChatListModel - + TODAY HOY - + THIS WEEK ESTA SEMANA - + THIS MONTH ESTE MES - + LAST SIX MONTHS ÚLTIMOS SEIS MESES - + THIS YEAR ESTE AÑO - + LAST YEAR AÑO PASADO @@ -722,343 +876,357 @@ ChatView - + <h3>Warning</h3><p>%1</p> <h3>Advertencia</h3><p>%1</p> - Switch model dialog - Diálogo para cambiar de modelo + Diálogo para cambiar de modelo - Warn the user if they switch models, then context will be erased - Advertir al usuario si cambia de modelo, entonces se borrará el contexto + Advertir al usuario si cambia de modelo, entonces se borrará el contexto - + Conversation copied to clipboard. Conversación copiada al portapapeles. - + Code copied to clipboard. Código copiado al portapapeles. - + + The entire chat will be erased. + + + + Chat panel Panel de chat - + Chat panel with options Panel de chat con opciones - + Reload the currently loaded model Recargar el modelo actualmente cargado - + Eject the currently loaded model Expulsar el modelo actualmente cargado - + No model installed. No hay modelo instalado. - + Model loading error. Error al cargar el modelo. - + Waiting for model... Esperando al modelo... - + Switching context... Cambiando contexto... - + Choose a model... Elige un modelo... - + Not found: %1 No encontrado: %1 - + The top item is the current model El elemento superior es el modelo actual - - + LocalDocs DocumentosLocales - + Add documents Agregar documentos - + add collections of documents to the chat agregar colecciones de documentos al chat - + Load the default model Cargar el modelo predeterminado - + Loads the default model which can be changed in settings Carga el modelo predeterminado que se puede cambiar en la configuración - + No Model Installed No hay modelo instalado - + + Legacy prompt template needs to be <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">updated</a> in Settings. + + + + + No <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> configured. 
+ + + + + The <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> cannot be blank. + + + + + Legacy system prompt needs to be <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">updated</a> in Settings. + + + + <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help <h3>Se encontró un error al cargar el modelo:</h3><br><i>"%1"</i><br><br>Los fallos en la carga de modelos pueden ocurrir por varias razones, pero las causas más comunes incluyen un formato de archivo incorrecto, una descarga incompleta o corrupta, un tipo de archivo equivocado, RAM del sistema insuficiente o un tipo de modelo incompatible. 
Aquí hay algunas sugerencias para resolver el problema:<br><ul><li>Asegúrate de que el archivo del modelo tenga un formato y tipo compatibles<li>Verifica que el archivo del modelo esté completo en la carpeta de descargas<li>Puedes encontrar la carpeta de descargas en el diálogo de configuración<li>Si has cargado el modelo manualmente, asegúrate de que el archivo no esté corrupto verificando el md5sum<li>Lee más sobre qué modelos son compatibles en nuestra <a href="https://docs.gpt4all.io/">documentación</a> para la interfaz gráfica<li>Visita nuestro <a href="https://discord.gg/4M2QFmTt2k">canal de discord</a> para obtener ayuda - + + + Erase conversation? + + + + + Changing the model will erase the current conversation. + + + + Install a Model Instalar un modelo - + Shows the add model view Muestra la vista de agregar modelo - + Conversation with the model Conversación con el modelo - + prompt / response pairs from the conversation pares de pregunta / respuesta de la conversación - GPT4All - GPT4All + GPT4All - You - + - response stopped ... - respuesta detenida ... + respuesta detenida ... - processing ... - procesando ... + procesando ... - generating response ... - generando respuesta ... + generando respuesta ... - generating questions ... - generando preguntas ... + generando preguntas ... 
- - + Copy Copiar - Copy Message - Copiar mensaje + Copiar mensaje - Disable markdown - Desactivar markdown + Desactivar markdown - Enable markdown - Activar markdown + Activar markdown - Thumbs up - Me gusta + Me gusta - Gives a thumbs up to the response - Da un me gusta a la respuesta + Da un me gusta a la respuesta - Thumbs down - No me gusta + No me gusta - Opens thumbs down dialog - Abre el diálogo de no me gusta + Abre el diálogo de no me gusta - Suggested follow-ups - Seguimientos sugeridos + Seguimientos sugeridos - + Erase and reset chat session Borrar y reiniciar sesión de chat - + Copy chat session to clipboard Copiar sesión de chat al portapapeles - Redo last chat response - Rehacer última respuesta del chat + Rehacer última respuesta del chat - + Add media Agregar medios - + Adds media to the prompt Agrega medios al mensaje - + Stop generating Detener generación - + Stop the current response generation Detener la generación de la respuesta actual - + Attach Adjuntar - + Single File Fila india - + Reloads the model Recarga el modelo - - + + Reload · %1 Recargar · %1 - + Loading · %1 Cargando · %1 - + Load · %1 (default) → Cargar · %1 (predeterminado) → - retrieving localdocs: %1 ... - recuperando documentos locales: %1 ... + recuperando documentos locales: %1 ... - searching localdocs: %1 ... - buscando en documentos locales: %1 ... + buscando en documentos locales: %1 ... - %n Source(s) - + %n Fuente %n Fuentes - + Send a message... Enviar un mensaje... - + Load a model to continue... Carga un modelo para continuar... 
- + Send messages/prompts to the model Enviar mensajes/indicaciones al modelo - + Cut Cortar - + Paste Pegar - + Select All Seleccionar todo - + Send message Enviar mensaje - + Sends the message/prompt contained in textfield to the model Envía el mensaje/indicación contenido en el campo de texto al modelo - + GPT4All requires that you install at least one model to get started GPT4All requiere que instale al menos un @@ -1066,9 +1234,8 @@ modelo para comenzar - restoring from text ... - restaurando desde texto ... + restaurando desde texto ... @@ -1110,40 +1277,53 @@ modelo para comenzar Seleccione una colección para hacerla disponible al modelo de chat. + + ConfirmationDialog + + + OK + + + + + Cancel + Cancelar + + Download - + Model "%1" is installed successfully. El modelo "%1" se ha instalado correctamente. - + ERROR: $MODEL_NAME is empty. ERROR: $MODEL_NAME está vacío. - + ERROR: $API_KEY is empty. ERROR: $API_KEY está vacía. - + ERROR: $BASE_URL is invalid. ERROR: $BASE_URL no es válida. - + ERROR: Model "%1 (%2)" is conflict. ERROR: El modelo "%1 (%2)" está en conflicto. - + Model "%1 (%2)" is installed successfully. El modelo "%1 (%2)" se ha instalado correctamente. - + Model "%1" is removed. El modelo "%1" ha sido eliminado. @@ -1309,52 +1489,52 @@ modelo para comenzar Predeterminado de la aplicación - + Display Visualización - + Show Sources Mostrar fuentes - + Display the sources used for each response. Mostrar las fuentes utilizadas para cada respuesta. - + Advanced Avanzado - + Warning: Advanced usage only. Advertencia: Solo para uso avanzado. - + Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>. Valores demasiado grandes pueden causar fallos en localdocs, respuestas extremadamente lentas o falta de respuesta. 
En términos generales, los {N caracteres x N fragmentos} se añaden a la ventana de contexto del modelo. Más información <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aquí</a>. - + Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation. Número de caracteres por fragmento de documento. Números más grandes aumentan la probabilidad de respuestas verídicas, pero también resultan en una generación más lenta. - + Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation. Máximo de N mejores coincidencias de fragmentos de documentos recuperados para añadir al contexto del prompt. Números más grandes aumentan la probabilidad de respuestas verídicas, pero también resultan en una generación más lenta. - + Document snippet size (characters) Tamaño del fragmento de documento (caracteres) - + Max document snippets per prompt Máximo de fragmentos de documento por indicación @@ -1516,78 +1696,78 @@ modelo para comenzar ModelList - + <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li> <ul><li>Requiere clave API personal de OpenAI.</li><li>ADVERTENCIA: ¡Enviará sus chats a OpenAI!</li><li>Su clave API se almacenará en el disco</li><li>Solo se usará para comunicarse con OpenAI</li><li>Puede solicitar una clave API <a href="https://platform.openai.com/account/api-keys">aquí.</a></li> - + <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1 <strong>Modelo ChatGPT GPT-3.5 Turbo de OpenAI</strong><br> %1 - + <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. 
Contact OpenAI for more info. <br><br><i>* Aunque pagues a OpenAI por ChatGPT-4, esto no garantiza el acceso a la clave API. Contacta a OpenAI para más información. - + <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2 <strong>Modelo ChatGPT GPT-4 de OpenAI</strong><br> %1 %2 - + <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li> <ul><li>Requiere una clave API personal de Mistral.</li><li>ADVERTENCIA: ¡Enviará tus chats a Mistral!</li><li>Tu clave API se almacenará en el disco</li><li>Solo se usará para comunicarse con Mistral</li><li>Puedes solicitar una clave API <a href="https://console.mistral.ai/user/api-keys">aquí</a>.</li> - + <strong>Mistral Tiny model</strong><br> %1 <strong>Modelo Mistral Tiny</strong><br> %1 - + <strong>Mistral Small model</strong><br> %1 <strong>Modelo Mistral Small</strong><br> %1 - + <strong>Mistral Medium model</strong><br> %1 <strong>Modelo Mistral Medium</strong><br> %1 - + <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul> <strong>Creado por %1.</strong><br><ul><li>Publicado el %2.<li>Este modelo tiene %3 me gusta.<li>Este modelo tiene %4 descargas.<li>Más información puede encontrarse <a href="https://huggingface.co/%5">aquí.</a></ul> - + %1 (%2) %1 (%2) - - + + cannot open "%1": %2 no se puede abrir "%1": %2 - + cannot create "%1": %2 no se puede crear "%1": %2 - + <strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul> <strong>Modelo de API compatible con OpenAI</strong><br><ul><li>Clave API: %1</li><li>URL base: %2</li><li>Nombre del modelo: %3</li></ul> - + <ul><li>Requires 
personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li> <ul><li>Requiere una clave API personal y la URL base de la API.</li><li>ADVERTENCIA: ¡Enviará sus chats al servidor de API compatible con OpenAI que especificó!</li><li>Su clave API se almacenará en el disco</li><li>Solo se utilizará para comunicarse con el servidor de API compatible con OpenAI</li> - + <strong>Connect to OpenAI-compatible API server</strong><br> %1 <strong>Conectar al servidor de API compatible con OpenAI</strong><br> %1 @@ -1600,208 +1780,251 @@ modelo para comenzar Modelo - + + %1 system message? + + + + + + Clear + + + + + + Reset + + + + + The system message will be %1. + + + + + removed + + + + + + reset to the default + + + + + %1 chat template? + + + + + The chat template will be %1. + + + + + erased + + + + Model Settings Configuración del modelo - + Clone Clonar - + Remove Eliminar - + Name Nombre - + Model File Archivo del modelo - System Prompt - Indicación del sistema + Indicación del sistema - Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens. - Prefijado al inicio de cada conversación. Debe contener los tokens de encuadre apropiados. + Prefijado al inicio de cada conversación. Debe contener los tokens de encuadre apropiados. - Prompt Template - Plantilla de indicación + Plantilla de indicación - The template that wraps every prompt. - La plantilla que envuelve cada indicación. + La plantilla que envuelve cada indicación. - Must contain the string "%1" to be replaced with the user's input. - Debe contener la cadena "%1" para ser reemplazada con la entrada del usuario. + Debe contener la cadena "%1" para ser reemplazada con la entrada del usuario. 
- + Chat Name Prompt Indicación para el nombre del chat - + Prompt used to automatically generate chat names. Indicación utilizada para generar automáticamente nombres de chat. - + Suggested FollowUp Prompt Indicación de seguimiento sugerida - + Prompt used to generate suggested follow-up questions. Indicación utilizada para generar preguntas de seguimiento sugeridas. - + Context Length Longitud del contexto - + Number of input and output tokens the model sees. Número de tokens de entrada y salida que el modelo ve. - + Temperature Temperatura - + Randomness of model output. Higher -> more variation. Aleatoriedad de la salida del modelo. Mayor -> más variación. - + Temperature increases the chances of choosing less likely tokens. NOTE: Higher temperature gives more creative but less predictable outputs. La temperatura aumenta las probabilidades de elegir tokens menos probables. NOTA: Una temperatura más alta da resultados más creativos pero menos predecibles. - + Top-P Top-P - + Nucleus Sampling factor. Lower -> more predictable. Factor de muestreo de núcleo. Menor -> más predecible. - + Only the most likely tokens up to a total probability of top_p can be chosen. NOTE: Prevents choosing highly unlikely tokens. Solo se pueden elegir los tokens más probables hasta una probabilidad total de top_p. NOTA: Evita elegir tokens altamente improbables. - + Min-P Min-P - + Minimum token probability. Higher -> more predictable. Probabilidad mínima del token. Mayor -> más predecible. - + Sets the minimum relative probability for a token to be considered. Establece la probabilidad relativa mínima para que un token sea considerado. - + Top-K Top-K - + Size of selection pool for tokens. Tamaño del grupo de selección para tokens. - + Only the top K most likely tokens will be chosen from. Solo se elegirán los K tokens más probables. - + Max Length Longitud máxima - + Maximum response length, in tokens. Longitud máxima de respuesta, en tokens. 
- + Prompt Batch Size Tamaño del lote de indicaciones - + The batch size used for prompt processing. El tamaño del lote utilizado para el procesamiento de indicaciones. - + Amount of prompt tokens to process at once. NOTE: Higher values can speed up reading prompts but will use more RAM. Cantidad de tokens de prompt a procesar de una vez. NOTA: Valores más altos pueden acelerar la lectura de prompts, pero usarán más RAM. - + Repeat Penalty Penalización por repetición - + Repetition penalty factor. Set to 1 to disable. Factor de penalización por repetición. Establecer a 1 para desactivar. - + Repeat Penalty Tokens Tokens de penalización por repetición - + Number of previous tokens used for penalty. Número de tokens anteriores utilizados para la penalización. - + GPU Layers Capas de GPU - + Number of model layers to load into VRAM. Número de capas del modelo a cargar en la VRAM. - + Maximum combined prompt/response tokens before information is lost. Using more context than the model was trained on will yield poor results. NOTE: Does not take effect until you reload the model. @@ -1810,7 +2033,52 @@ Usar más contexto del que el modelo fue entrenado producirá resultados deficie NOTA: No surtirá efecto hasta que recargue el modelo. - + + System Message + + + + + A message to set the context or guide the behavior of the model. Leave blank for none. NOTE: Since GPT4All 3.5, this should not contain control tokens. + + + + + System message is not <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">plain text</a>. + + + + + Chat Template + + + + + This Jinja template turns the chat into input for the model. + + + + + No <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> configured. + + + + + The <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> cannot be blank. 
+ + + + + <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">Syntax error</a>: %1 + + + + + Chat template is not in <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">Jinja format</a>. + + + + How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model. Lower values increase CPU load and RAM usage, and make inference slower. NOTE: Does not take effect until you reload the model. @@ -2066,6 +2334,19 @@ NOTA: No surte efecto hasta que recargue el modelo. Por favor, elija un directorio + + MySettingsLabel + + + Clear + + + + + Reset + + + MySettingsStack @@ -2076,12 +2357,22 @@ NOTA: No surte efecto hasta que recargue el modelo. MySettingsTab - + + Restore defaults? + + + + + This page of settings will be reset to the defaults. + + + + Restore Defaults Restaurar valores predeterminados - + Restores settings dialog to a default state Restaura el diálogo de configuración a su estado predeterminado @@ -2349,25 +2640,20 @@ lanzamiento de modelo GPT4All que utilice sus datos. SwitchModelDialog - <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue? - <b>Advertencia:</b> cambiar el modelo borrará la conversación actual. ¿Deseas continuar? + <b>Advertencia:</b> cambiar el modelo borrará la conversación actual. ¿Deseas continuar? - Continue - Continuar + Continuar - Continue with model loading - Continuar con la carga del modelo + Continuar con la carga del modelo - - Cancel - Cancelar + Cancelar @@ -2407,126 +2693,136 @@ lanzamiento de modelo GPT4All que utilice sus datos. main - + GPT4All v%1 GPT4All v%1 - + + Restore + + + + + Quit + + + + <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. 
The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a> <h3>Se encontró un error al iniciar:</h3><br><i>"Se detectó hardware incompatible."</i><br><br>Desafortunadamente, tu CPU no cumple con los requisitos mínimos para ejecutar este programa. En particular, no soporta instrucciones AVX, las cuales este programa requiere para ejecutar con éxito un modelo de lenguaje grande moderno. La única solución en este momento es actualizar tu hardware a una CPU más moderna.<br><br>Consulta aquí para más información: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a> - + <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help. <h3>Se encontró un error al iniciar:</h3><br><i>"No se puede acceder al archivo de configuración."</i><br><br>Desafortunadamente, algo está impidiendo que el programa acceda al archivo de configuración. Esto podría ser causado por permisos incorrectos en el directorio de configuración local de la aplicación donde se encuentra el archivo de configuración. Visita nuestro <a href="https://discord.gg/4M2QFmTt2k">canal de Discord</a> para obtener ayuda. - + Connection to datalake failed. La conexión al datalake falló. - + Saving chats. Guardando chats. 
- + Network dialog Diálogo de red - + opt-in to share feedback/conversations optar por compartir comentarios/conversaciones - + Home view Vista de inicio - + Home view of application Vista de inicio de la aplicación - + Home Inicio - + Chat view Vista de chat - + Chat view to interact with models Vista de chat para interactuar con modelos - + Chats Chats - - + + Models Modelos - + Models view for installed models Vista de modelos para modelos instalados - - + + LocalDocs Docs Locales - + LocalDocs view to configure and use local docs Vista de DocumentosLocales para configurar y usar documentos locales - - + + Settings Config. - + Settings view for application configuration Vista de configuración para la configuración de la aplicación - + The datalake is enabled El datalake está habilitado - + Using a network model Usando un modelo de red - + Server mode is enabled El modo servidor está habilitado - + Installed models Modelos instalados - + View of installed models Vista de modelos instalados diff --git a/gpt4all-chat/translations/gpt4all_it_IT.ts b/gpt4all-chat/translations/gpt4all_it_IT.ts index e2c810becc07..dddb44c067e2 100644 --- a/gpt4all-chat/translations/gpt4all_it_IT.ts +++ b/gpt4all-chat/translations/gpt4all_it_IT.ts @@ -557,13 +557,21 @@ - Save Chat Context - Salva il contesto della chat + Enable System Tray + + The application will minimize to the system tray when the window is closed. + + + + Save Chat Context + Salva il contesto della chat + + Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat. - Salva lo stato del modello di chat su disco per un caricamento più rapido. ATTENZIONE: utilizza circa 2 GB per chat. + Salva lo stato del modello di chat su disco per un caricamento più rapido. ATTENZIONE: utilizza circa 2 GB per chat. 
@@ -608,13 +616,13 @@
 Chat
 
 
- 
- 
+ 
+ 
 New Chat
 Nuova Chat
 
 
- 
+ 
 Server Chat
 Chat del server
 
 
 ChatAPIWorker
 
 
- 
+ 
 ERROR: Network error occurred while connecting to the API server
 ERRORE: si è verificato un errore di rete durante la connessione al server API
 
 
- 
+ 
 ChatAPIWorker::handleFinished got HTTP Error %1 %2
 ChatAPIWorker::handleFinished ha ricevuto l'errore HTTP %1 %2
@@ -695,35 +703,181 @@
 Elenco delle chat nella finestra di dialogo del cassetto
 
 
+ 
+ ChatItemView
+ 
+ 
+ GPT4All
+ 
+ 
+ 
+ You
+ Tu
+ 
+ 
+ 
+ response stopped ...
+ risposta interrotta ...
+ 
+ 
+ 
+ retrieving localdocs: %1 ...
+ recupero documenti locali: %1 ...
+ 
+ 
+ 
+ searching localdocs: %1 ...
+ ricerca in documenti locali: %1 ...
+ 
+ 
+ 
+ processing ...
+ elaborazione ...
+ 
+ 
+ 
+ generating response ...
+ generazione risposta ...
+ 
+ 
+ 
+ generating questions ...
+ generazione domande ...
+ 
+ 
+ 
+ 
+ Copy
+ Copia
+ 
+ 
+ 
+ Copy Message
+ Copia messaggio
+ 
+ 
+ 
+ Disable markdown
+ Disabilita Markdown
+ 
+ 
+ 
+ Enable markdown
+ Abilita Markdown
+ 
+ 
+ 
+ %n Source(s)
+ 
+ %n Fonte
+ %n Fonti
+ 
+ 
+ 
+ 
+ LocalDocs
+ 
+ 
+ 
+ Edit this message?
+ 
+ 
+ 
+ 
+ All following messages will be permanently erased.
+ 
+ 
+ 
+ Redo this response?
+ 
+ 
+ 
+ Cannot edit chat without a loaded model.
+ 
+ 
+ 
+ Cannot edit chat while the model is generating.
+ 
+ 
+ 
+ Edit
+ 
+ 
+ 
+ Cannot redo response without a loaded model.
+ 
+ 
+ 
+ Cannot redo response while the model is generating.
+ 
+ 
+ 
+ Redo
+ 
+ 
+ 
+ Like response
+ 
+ 
+ 
+ Dislike response
+ 
+ 
+ 
+ Suggested follow-ups
+ Approfondimenti suggeriti
+ 
+ 
+ 
+ ChatLLM
+ 
+ 
+ Your message was too long and could not be processed (%1 > %2). Please try again with something shorter. 
+ + + ChatListModel - + TODAY OGGI - + THIS WEEK QUESTA SETTIMANA - + THIS MONTH QUESTO MESE - + LAST SIX MONTHS ULTIMI SEI MESI - + THIS YEAR QUEST'ANNO - + LAST YEAR L'ANNO SCORSO @@ -731,350 +885,359 @@ ChatView - + <h3>Warning</h3><p>%1</p> <h3>Avviso</h3><p>%1</p> - Switch model dialog - Finestra di dialogo Cambia modello + Finestra di dialogo Cambia modello - Warn the user if they switch models, then context will be erased - Avvisa l'utente che se cambia modello, il contesto verrà cancellato + Avvisa l'utente che se cambia modello, il contesto verrà cancellato - + Conversation copied to clipboard. Conversazione copiata negli appunti. - + Code copied to clipboard. Codice copiato negli appunti. - + + The entire chat will be erased. + + + + Chat panel Pannello chat - + Chat panel with options Pannello chat con opzioni - + Reload the currently loaded model Ricarica il modello attualmente caricato - + Eject the currently loaded model Espelli il modello attualmente caricato - + No model installed. Nessun modello installato. - + Model loading error. Errore di caricamento del modello. - + Waiting for model... In attesa del modello... - + Switching context... Cambio contesto... - + Choose a model... Scegli un modello... 
- + Not found: %1 Non trovato: %1 - + The top item is the current model L'elemento in alto è il modello attuale - - + LocalDocs - + Add documents Aggiungi documenti - + add collections of documents to the chat aggiungi raccolte di documenti alla chat - + Load the default model Carica il modello predefinito - + Loads the default model which can be changed in settings Carica il modello predefinito che può essere modificato nei settaggi - + No Model Installed Nessun modello installato - + GPT4All requires that you install at least one model to get started GPT4All richiede l'installazione di almeno un modello per iniziare - + Install a Model Installa un modello - + Shows the add model view Mostra la vista aggiungi modello - + Conversation with the model Conversazione con il modello - + prompt / response pairs from the conversation coppie prompt/risposta dalla conversazione - - GPT4All - + + Legacy prompt template needs to be <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">updated</a> in Settings. + + + + + No <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> configured. + + + + + The <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> cannot be blank. + + + + + Legacy system prompt needs to be <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">updated</a> in Settings. + - You - Tu + Tu - response stopped ... - risposta interrotta ... + risposta interrotta ... - processing ... - elaborazione ... + elaborazione ... - generating response ... - generazione risposta ... + generazione risposta ... - generating questions ... - generarzione domande ... + generarzione domande ... 
- - + Copy Copia - Copy Message - Copia messaggio + Copia messaggio - Disable markdown - Disabilita Markdown + Disabilita Markdown - Enable markdown - Abilita Markdown + Abilita Markdown - Thumbs up - Mi piace + Mi piace - Gives a thumbs up to the response - Dà un mi piace alla risposta + Dà un mi piace alla risposta - Thumbs down - Non mi piace + Non mi piace - Opens thumbs down dialog - Apre la finestra di dialogo "Non mi piace" + Apre la finestra di dialogo "Non mi piace" - Suggested follow-ups - Approfondimenti suggeriti + Approfondimenti suggeriti - + Erase and reset chat session Cancella e ripristina la sessione di chat - + Copy chat session to clipboard Copia la sessione di chat negli appunti - Redo last chat response - Riesegui l'ultima risposta della chat + Riesegui l'ultima risposta della chat - + Add media Aggiungi contenuti multimediali - + Adds media to the prompt Aggiunge contenuti multimediali al prompt - + Stop generating Interrompi la generazione - + Stop the current response generation Arresta la generazione della risposta corrente - + Attach Allegare - + Single File File singolo - + Reloads the model Ricarica il modello - + <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. 
Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help
 <h3>Si è verificato un errore durante il caricamento del modello:</h3><br><i>"%1"</i><br><br>Gli errori di caricamento del modello possono verificarsi per diversi motivi, ma le cause più comuni includono un formato di file non valido, un download incompleto o danneggiato, il tipo di file sbagliato, RAM di sistema insufficiente o un tipo di modello incompatibile. Ecco alcuni suggerimenti per risolvere il problema:<br><ul><li>Assicurati che il file del modello abbia un formato e un tipo compatibili<li>Verifica che il file del modello sia completo nella cartella di download<li>Puoi trovare la cartella di download nella finestra di dialogo dei settaggi<li>Se hai scaricato manualmente il modello, assicurati che il file non sia danneggiato controllando md5sum<li>Leggi ulteriori informazioni su quali modelli sono supportati nella nostra <a href="https://docs.gpt4all.io/">documentazione</a> per la GUI<li>Consulta il nostro <a href="https://discord.gg/4M2QFmTt2k">canale Discord</a> per assistenza
 
 
- 
- 
+ 
+ 
+ Erase conversation?
+ 
+ 
+ 
+ Changing the model will erase the current conversation.
+ 
+ 
+ 
+ 
 Reload · %1
 Ricarica · %1
 
 
- 
+ 
 Loading · %1
 Caricamento · %1
 
 
- 
+ 
 Load · %1 (default) →
 Carica · %1 (predefinito) →
 
 
- restoring from text ...
- ripristino dal testo ...
 
 
- retrieving localdocs: %1 ...
- recupero documenti locali: %1 ...
 
 
- searching localdocs: %1 ...
- ricerca in documenti locali: %1 ...
+ ricerca in documenti locali: %1 ... - %n Source(s) - + %n Fonte %n Fonti - + Send a message... Manda un messaggio... - + Load a model to continue... Carica un modello per continuare... - + Send messages/prompts to the model Invia messaggi/prompt al modello - + Cut Taglia - + Paste Incolla - + Select All Seleziona tutto - + Send message Invia messaggio - + Sends the message/prompt contained in textfield to the model Invia il messaggio/prompt contenuto nel campo di testo al modello @@ -1118,40 +1281,53 @@ modello per iniziare Seleziona una raccolta per renderla disponibile al modello in chat. + + ConfirmationDialog + + + OK + + + + + Cancel + Annulla + + Download - + Model "%1" is installed successfully. Il modello "%1" è stato installato correttamente. - + ERROR: $MODEL_NAME is empty. ERRORE: $MODEL_NAME è vuoto. - + ERROR: $API_KEY is empty. ERRORE: $API_KEY è vuoto. - + ERROR: $BASE_URL is invalid. ERRORE: $BASE_URL non è valido. - + ERROR: Model "%1 (%2)" is conflict. ERRORE: il modello "%1 (%2)" è in conflitto. - + Model "%1 (%2)" is installed successfully. Il modello "%1 (%2)" è stato installato correttamente. - + Model "%1" is removed. Il modello "%1" è stato rimosso. @@ -1318,52 +1494,52 @@ modello per iniziare Applicazione predefinita - + Display Mostra - + Show Sources Mostra le fonti - + Display the sources used for each response. Visualizza le fonti utilizzate per ciascuna risposta. - + Advanced Avanzate - + Warning: Advanced usage only. Avvertenza: solo per uso avanzato. - + Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>. Valori troppo grandi possono causare errori di LocalDocs, risposte estremamente lente o l'impossibilità di rispondere. 
In parole povere, {N caratteri x N frammenti} vengono aggiunti alla finestra di contesto del modello. Maggiori informazioni <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">qui</a>. - + Document snippet size (characters) Dimensioni del frammento di documento (caratteri) - + Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation. Numero di caratteri per frammento di documento. Numeri più grandi aumentano la probabilità di risposte basate sui fatti, ma comportano anche una generazione più lenta. - + Max document snippets per prompt Numero massimo di frammenti di documento per prompt - + Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation. Il numero massimo di frammenti di documento recuperati che presentano le migliori corrispondenze, da includere nel contesto del prompt. Numeri più alti aumentano la probabilità di ricevere risposte basate sui fatti, ma comportano anche una generazione più lenta. 
@@ -1525,78 +1701,78 @@ modello per iniziare ModelList - + <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li> <ul><li>Richiede una chiave API OpenAI personale.</li><li>ATTENZIONE: invierà le tue chat a OpenAI!</li><li>La tua chiave API verrà archiviata su disco</li><li> Verrà utilizzato solo per comunicare con OpenAI</li><li>Puoi richiedere una chiave API <a href="https://platform.openai.com/account/api-keys">qui.</a> </li> - + <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1 - + <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2 - + <strong>Mistral Tiny model</strong><br> %1 - + <strong>Mistral Small model</strong><br> %1 - + <strong>Mistral Medium model</strong><br> %1 - + <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info. <br><br><i>* Anche se paghi OpenAI per ChatGPT-4 questo non garantisce l'accesso alla chiave API. Contatta OpenAI per maggiori informazioni. 
- - + + cannot open "%1": %2 impossibile aprire "%1": %2 - + cannot create "%1": %2 impossibile creare "%1": %2 - + %1 (%2) %1 (%2) - + <strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul> <strong>Modello API compatibile con OpenAI</strong><br><ul><li>Chiave API: %1</li><li>URL di base: %2</li><li>Nome modello: %3</li></ul> - + <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li> <ul><li>Richiede una chiave API Mistral personale.</li><li>ATTENZIONE: invierà le tue chat a Mistral!</li><li>La tua chiave API verrà archiviata su disco</li><li> Verrà utilizzato solo per comunicare con Mistral</li><li>Puoi richiedere una chiave API <a href="https://console.mistral.ai/user/api-keys">qui</a>. 
</li> - + <ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li> <ul><li>Richiede una chiave API personale e l'URL di base dell'API.</li><li>ATTENZIONE: invierà le tue chat al server API compatibile con OpenAI che hai specificato!</li><li>La tua chiave API verrà archiviata su disco</li><li>Verrà utilizzata solo per comunicare con il server API compatibile con OpenAI</li> - + <strong>Connect to OpenAI-compatible API server</strong><br> %1 <strong>Connetti al server API compatibile con OpenAI</strong><br> %1 - + <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul> <strong>Creato da %1.</strong><br><ul><li>Pubblicato il %2.<li>Questo modello ha %3 Mi piace.<li>Questo modello ha %4 download.<li>Altro informazioni possono essere trovate <a href="https://huggingface.co/%5">qui.</a></ul> @@ -1609,87 +1785,175 @@ modello per iniziare Modello - + + %1 system message? + + + + + + Clear + + + + + + Reset + + + + + The system message will be %1. + + + + + removed + + + + + + reset to the default + + + + + %1 chat template? + + + + + The chat template will be %1. + + + + + erased + + + + Model Settings Settaggi modello - + Clone Clona - + Remove Rimuovi - + Name Nome - + Model File File del modello - System Prompt - Prompt di sistema + Prompt di sistema - Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens. - Prefisso all'inizio di ogni conversazione. Deve contenere i token di inquadramento appropriati. + Prefisso all'inizio di ogni conversazione. Deve contenere i token di inquadramento appropriati. 
- Prompt Template - Schema del prompt + Schema del prompt - The template that wraps every prompt. - Lo schema che incorpora ogni prompt. + Lo schema che incorpora ogni prompt. - Must contain the string "%1" to be replaced with the user's input. - Deve contenere la stringa "%1" da sostituire con l'input dell'utente. + Deve contenere la stringa "%1" da sostituire con l'input dell'utente. - + + System Message + + + + + A message to set the context or guide the behavior of the model. Leave blank for none. NOTE: Since GPT4All 3.5, this should not contain control tokens. + + + + + System message is not <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">plain text</a>. + + + + + Chat Template + + + + + This Jinja template turns the chat into input for the model. + + + + + No <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> configured. + + + + + The <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> cannot be blank. + + + + + <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">Syntax error</a>: %1 + + + + + Chat template is not in <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">Jinja format</a>. + + + + Chat Name Prompt Prompt del nome della chat - + Prompt used to automatically generate chat names. Prompt utilizzato per generare automaticamente nomi di chat. - + Suggested FollowUp Prompt Prompt di approfondimento suggerito - + Prompt used to generate suggested follow-up questions. Prompt utilizzato per generare le domande di approfondimento suggerite. - + Context Length Lunghezza del contesto - + Number of input and output tokens the model sees. Numero di token di input e output visualizzati dal modello. - + Maximum combined prompt/response tokens before information is lost. Using more context than the model was trained on will yield poor results. NOTE: Does not take effect until you reload the model. 
@@ -1698,128 +1962,128 @@ L'utilizzo di un contesto maggiore rispetto a quello su cui è stato addest NOTA: non ha effetto finché non si ricarica il modello. - + Temperature Temperatura - + Randomness of model output. Higher -> more variation. Casualità dell'uscita del modello. Più alto -> più variazione. - + Temperature increases the chances of choosing less likely tokens. NOTE: Higher temperature gives more creative but less predictable outputs. La temperatura aumenta le possibilità di scegliere token meno probabili. NOTA: una temperatura più elevata offre risultati più creativi ma meno prevedibili. - + Top-P - + Nucleus Sampling factor. Lower -> more predictable. Fattore di campionamento del nucleo. Inferiore -> più prevedibile. - + Only the most likely tokens up to a total probability of top_p can be chosen. NOTE: Prevents choosing highly unlikely tokens. Solo i token più probabili, fino a un totale di probabilità di Top-P, possono essere scelti. NOTA: impedisce la scelta di token altamente improbabili. - + Min-P - + Minimum token probability. Higher -> more predictable. Probabilità minima del token. Più alto -> più prevedibile. - + Sets the minimum relative probability for a token to be considered. Imposta la probabilità relativa minima affinché un token venga considerato. - + Top-K - + Size of selection pool for tokens. Dimensione del lotto di selezione per i token. - + Only the top K most likely tokens will be chosen from. Saranno scelti solo i primi K token più probabili. - + Max Length Lunghezza massima - + Maximum response length, in tokens. Lunghezza massima della risposta, in token. - + Prompt Batch Size Dimensioni del lotto di prompt - + The batch size used for prompt processing. La dimensione del lotto usata per l'elaborazione dei prompt. - + Amount of prompt tokens to process at once. NOTE: Higher values can speed up reading prompts but will use more RAM. Numero di token del prompt da elaborare contemporaneamente. 
NOTA: valori più alti possono velocizzare la lettura dei prompt ma utilizzeranno più RAM.
 
 
- 
+ 
 Repeat Penalty
 Penalità di ripetizione
 
 
- 
+ 
 Repetition penalty factor. Set to 1 to disable.
 Fattore di penalità di ripetizione. Impostare su 1 per disabilitare.
 
 
- 
+ 
 Repeat Penalty Tokens
 Token di penalità ripetizione
 
 
- 
+ 
 Number of previous tokens used for penalty.
 Numero di token precedenti utilizzati per la penalità.
 
 
- 
+ 
 GPU Layers
 Livelli GPU
 
 
- 
+ 
 Number of model layers to load into VRAM.
 Numero di livelli del modello da caricare nella VRAM.
 
 
- 
+ 
 How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model.
Lower values increase CPU load and RAM usage, and make inference slower.
NOTE: Does not take effect until you reload the model.
@@ -2075,6 +2339,19 @@ NOTA: non ha effetto finché non si ricarica il modello.
 Scegli una cartella
 
+ 
+ MySettingsLabel
+ 
+ 
+ Clear
+ 
+ 
+ 
+ Reset
+ 
+ 
 
 MySettingsStack
 
@@ -2085,12 +2362,22 @@ NOTA: non ha effetto finché non si ricarica il modello.
 MySettingsTab
 
- 
+ 
+ Restore defaults?
+ 
+ 
+ 
+ This page of settings will be reset to the defaults.
+ 
+ 
+ 
 Restore Defaults
 Ripristina i valori predefiniti
 
- 
+ 
 Restores settings dialog to a default state
 Ripristina la finestra di dialogo dei settaggi a uno stato predefinito
@@ -2350,25 +2637,20 @@ NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di
 SwitchModelDialog
 
- 
 <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue?
 <b>Avviso:</b> la modifica del modello cancellerà la conversazione corrente. Vuoi continuare? 
- Continue
- Continua
 
 
- Continue with model loading
- Continuare con il caricamento del modello
 
 
- 
- Cancel
- Annulla
 
 
 
@@ -2407,125 +2689,135 @@ NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di
 main
 
 
- 
+ 
 <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model.
The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
 <h3>Si è verificato un errore all'avvio:</h3><br><i>"Rilevato hardware incompatibile."</i><br><br>Sfortunatamente, la tua CPU non soddisfa i requisiti minimi per eseguire questo programma. In particolare, non supporta gli elementi intrinseci AVX richiesti da questo programma per eseguire con successo un modello linguistico moderno e di grandi dimensioni. L'unica soluzione in questo momento è aggiornare il tuo hardware con una CPU più moderna.<br><br>Vedi qui per ulteriori informazioni: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
 
 
- 
+ 
 GPT4All v%1
 
 
+ 
+ Restore
+ 
+ 
+ 
+ Quit
+ 
+ 
+ 
 <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help. 
<h3>Si è verificato un errore all'avvio:</h3><br><i>"Impossibile accedere al file dei settaggi."</i><br><br>Sfortunatamente, qualcosa impedisce al programma di accedere al file dei settaggi. Ciò potrebbe essere causato da autorizzazioni errate nella cartella di configurazione locale dell'app in cui si trova il file dei settaggi. Dai un'occhiata al nostro <a href="https://discord.gg/4M2QFmTt2k">canale Discord</a> per ricevere assistenza. - + Connection to datalake failed. La connessione al Datalake non è riuscita. - + Saving chats. Salvataggio delle chat. - + Network dialog Dialogo di rete - + opt-in to share feedback/conversations aderisci per condividere feedback/conversazioni - + Home view Vista iniziale - + Home view of application Vista iniziale dell'applicazione - + Home Inizia - + Chat view Vista chat - + Chat view to interact with models Vista chat per interagire con i modelli - + Chats Chat - - + + Models Modelli - + Models view for installed models Vista modelli per i modelli installati - - + + LocalDocs - + LocalDocs view to configure and use local docs Vista LocalDocs per configurare e utilizzare i documenti locali - - + + Settings Settaggi - + Settings view for application configuration Vista dei settaggi per la configurazione dell'applicazione - + The datalake is enabled Il Datalake è abilitato - + Using a network model Utilizzando un modello di rete - + Server mode is enabled La modalità server è abilitata - + Installed models Modelli installati - + View of installed models Vista dei modelli installati diff --git a/gpt4all-chat/translations/gpt4all_pt_BR.ts b/gpt4all-chat/translations/gpt4all_pt_BR.ts index b82a025e4868..2a86309df04d 100644 --- a/gpt4all-chat/translations/gpt4all_pt_BR.ts +++ b/gpt4all-chat/translations/gpt4all_pt_BR.ts @@ -553,14 +553,22 @@ + Enable System Tray + + + + + The application will minimize to the system tray when the window is closed. 
+ + + Save Chat Context I used "Histórico do Chat" (Chat History) instead of "Contexto do Chat" (Chat Context) to clearly convey that it refers to saving past messages, making it more intuitive and avoiding potential confusion with abstract terms. - Salvar Histórico do Chat + Salvar Histórico do Chat - Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat. - Salvar histórico do chat para carregamento mais rápido. (Usa aprox. 2GB por chat). + Salvar histórico do chat para carregamento mais rápido. (Usa aprox. 2GB por chat). @@ -601,13 +609,13 @@ Chat - - + + New Chat Novo Chat - + Server Chat Chat com o Servidor @@ -615,12 +623,12 @@ ChatAPIWorker - + ERROR: Network error occurred while connecting to the API server ERRO: Ocorreu um erro de rede ao conectar-se ao servidor da API - + ChatAPIWorker::handleFinished got HTTP Error %1 %2 ChatAPIWorker::handleFinished recebeu erro HTTP %1 %2 @@ -688,35 +696,181 @@ Lista de chats na caixa de diálogo do menu lateral + + ChatItemView + + + GPT4All + GPT4All + + + + You + Você + + + + response stopped ... + resposta interrompida... + + + + retrieving localdocs: %1 ... + Recuperando dados em LocalDocs: %1 ... + + + + searching localdocs: %1 ... + Buscando em LocalDocs: %1 ... + + + + processing ... + processando... + + + + generating response ... + gerando resposta... + + + + generating questions ... + gerando perguntas... + + + + + Copy + Copiar + + + + Copy Message + Copiar Mensagem + + + + Disable markdown + Desativar markdown + + + + Enable markdown + Ativar markdown + + + + %n Source(s) + + %n Origem + %n Origens + + + + + LocalDocs + LocalDocs + + + + Edit this message? + + + + + + All following messages will be permanently erased. + + + + + Redo this response? + + + + + Cannot edit chat without a loaded model. + + + + + Cannot edit chat while the model is generating. + + + + + Edit + + + + + Cannot redo response without a loaded model. 
+ + + + + Cannot redo response while the model is generating. + + + + + Redo + + + + + Like response + + + + + Dislike response + + + + + Suggested follow-ups + Perguntas relacionadas + + + + ChatLLM + + + Your message was too long and could not be processed (%1 > %2). Please try again with something shorter. + + + ChatListModel - + TODAY HOJE - + THIS WEEK ESTA SEMANA - + THIS MONTH ESTE MÊS - + LAST SIX MONTHS ÚLTIMOS SEIS MESES - + THIS YEAR ESTE ANO - + LAST YEAR ANO PASSADO @@ -724,350 +878,363 @@ ChatView - + <h3>Warning</h3><p>%1</p> <h3>Aviso</h3><p>%1</p> - Switch model dialog - Mensagem ao troca de modelo + Mensagem ao troca de modelo - Warn the user if they switch models, then context will be erased - Ao trocar de modelo, o contexto da conversa será apagado + Ao trocar de modelo, o contexto da conversa será apagado - + Conversation copied to clipboard. Conversa copiada. - + Code copied to clipboard. Código copiado. - + + The entire chat will be erased. + + + + Chat panel Painel de chat - + Chat panel with options Painel de chat com opções - + Reload the currently loaded model Recarregar modelo atual - + Eject the currently loaded model Ejetar o modelo carregado atualmente - + No model installed. Nenhum modelo instalado. - + Model loading error. Erro ao carregar o modelo. - + Waiting for model... Aguardando modelo... - + Switching context... Mudando de contexto... - + Choose a model... Escolha um modelo... 
- + Not found: %1 Não encontrado: %1 - + The top item is the current model O modelo atual é exibido no topo - - + LocalDocs LocalDocs - + Add documents Adicionar documentos - + add collections of documents to the chat Adicionar Coleção de Documentos - + Load the default model Carregar o modelo padrão - + Loads the default model which can be changed in settings Carrega o modelo padrão (personalizável nas configurações) - + No Model Installed Nenhum Modelo Instalado - + GPT4All requires that you install at least one model to get started O GPT4All precisa de pelo menos um modelo modelo instalado para funcionar - + Install a Model Instalar um Modelo - + Shows the add model view Mostra a visualização para adicionar modelo - + Conversation with the model Conversa com o modelo - + prompt / response pairs from the conversation Pares de pergunta/resposta da conversa - + + Legacy prompt template needs to be <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">updated</a> in Settings. + + + + + No <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> configured. + + + + + The <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> cannot be blank. + + + + + Legacy system prompt needs to be <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">updated</a> in Settings. + + + GPT4All - GPT4All + GPT4All - You - Você + Você - response stopped ... - resposta interrompida... + resposta interrompida... - processing ... - processando... + processando... - generating response ... - gerando resposta... + gerando resposta... - generating questions ... - gerando perguntas... + gerando perguntas... 
- - + Copy Copiar - Copy Message - Copiar Mensagem + Copiar Mensagem - Disable markdown - Desativar markdown + Desativar markdown - Enable markdown - Ativar markdown + Ativar markdown - Thumbs up - Resposta boa + Resposta boa - Gives a thumbs up to the response - Curte a resposta + Curte a resposta - Thumbs down - Resposta ruim + Resposta ruim - Opens thumbs down dialog - Abrir diálogo de joinha para baixo + Abrir diálogo de joinha para baixo - Suggested follow-ups - Perguntas relacionadas + Perguntas relacionadas - + Erase and reset chat session Apagar e redefinir sessão de chat - + Copy chat session to clipboard Copiar histórico da conversa - Redo last chat response - Refazer última resposta + Refazer última resposta - + Add media Adicionar mídia - + Adds media to the prompt Adiciona mídia ao prompt - + Stop generating Parar de gerar - + Stop the current response generation Parar a geração da resposta atual - + Attach Anexar - + Single File Arquivo Único - + Reloads the model Recarrega modelo - + <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. 
Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help <h3>Ocorreu um erro ao carregar o modelo:</h3><br><i>"%1"</i><br><br>Falhas no carregamento do modelo podem acontecer por vários motivos, mas as causas mais comuns incluem um formato de arquivo incorreto, um download incompleto ou corrompido, o tipo de arquivo errado, memória RAM do sistema insuficiente ou um tipo de modelo incompatível. Aqui estão algumas sugestões para resolver o problema:<br><ul><li>Certifique-se de que o arquivo do modelo tenha um formato e tipo compatíveis<li>Verifique se o arquivo do modelo está completo na pasta de download<li>Você pode encontrar a pasta de download na caixa de diálogo de configurações<li>Se você carregou o modelo, certifique-se de que o arquivo não esteja corrompido verificando o md5sum<li>Leia mais sobre quais modelos são suportados em nossa <a href="https://docs.gpt4all.io/">documentação</a> para a interface gráfica<li>Confira nosso <a href="https://discord.gg/4M2QFmTt2k">canal do Discord</a> para obter ajuda - - + + + Erase conversation? + + + + + Changing the model will erase the current conversation. + + + + + Reload · %1 Recarregar · %1 - + Loading · %1 Carregando · %1 - + Load · %1 (default) → Carregar · %1 (padrão) → - restoring from text ... - Recuperando do texto... + Recuperando do texto... - retrieving localdocs: %1 ... - Recuperando dados em LocalDocs: %1 ... + Recuperando dados em LocalDocs: %1 ... - searching localdocs: %1 ... - Buscando em LocalDocs: %1 ... + Buscando em LocalDocs: %1 ... 
- %n Source(s) - + %n Origem %n Origens - + Send a message... Enviar uma mensagem... - + Load a model to continue... Carregue um modelo para continuar... - + Send messages/prompts to the model Enviar mensagens/prompts para o modelo - + Cut Recortar - + Paste Colar - + Select All Selecionar tudo - + Send message Enviar mensagem - + Sends the message/prompt contained in textfield to the model Envia a mensagem/prompt contida no campo de texto para o modelo @@ -1111,40 +1278,53 @@ modelo instalado para funcionar Selecione uma coleção para disponibilizá-la ao modelo de chat. + + ConfirmationDialog + + + OK + + + + + Cancel + Cancelar + + Download - + Model "%1" is installed successfully. Modelo "%1" instalado com sucesso. - + ERROR: $MODEL_NAME is empty. ERRO: O nome do modelo ($MODEL_NAME) está vazio. - + ERROR: $API_KEY is empty. ERRO: A chave da API ($API_KEY) está vazia. - + ERROR: $BASE_URL is invalid. ERRO: A URL base ($BASE_URL) é inválida. - + ERROR: Model "%1 (%2)" is conflict. ERRO: Conflito com o modelo "%1 (%2)". - + Model "%1 (%2)" is installed successfully. Modelo "%1 (%2)" instalado com sucesso. - + Model "%1" is removed. Modelo "%1" removido. @@ -1310,53 +1490,53 @@ modelo instalado para funcionar Aplicativo padrão - + Display Exibir - + Show Sources Mostrar Fontes - + Display the sources used for each response. Mostra as fontes usadas para cada resposta. - + Advanced Apenas para usuários avançados - + Warning: Advanced usage only. Atenção: Apenas para usuários avançados. - + Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>. Valores muito altos podem causar falhas no LocalDocs, respostas extremamente lentas ou até mesmo nenhuma resposta. 
De forma geral, o valor {Número de Caracteres x Número de Trechos} é adicionado à janela de contexto do modelo. Clique <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aqui</a> para mais informações. - + Document snippet size (characters) I translated "snippet" as "trecho" to make the term feel more natural and understandable in Portuguese. "Trecho" effectively conveys the idea of a portion or section of a document, fitting well within the context, whereas a more literal translation might sound less intuitive or awkward for users. Tamanho do trecho de documento (caracteres) - + Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation. Número de caracteres por trecho de documento. Valores maiores aumentam a chance de respostas factuais, mas também tornam a geração mais lenta. - + Max document snippets per prompt Máximo de Trechos de Documento por Prompt - + Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation. Número máximo de trechos de documentos a serem adicionados ao contexto do prompt. Valores maiores aumentam a chance de respostas factuais, mas também tornam a geração mais lenta. 
@@ -1518,78 +1698,78 @@ modelo instalado para funcionar ModelList - + <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li> <ul><li>É necessária uma chave de API da OpenAI.</li><li>AVISO: Seus chats serão enviados para a OpenAI!</li><li>Sua chave de API será armazenada localmente</li><li>Ela será usada apenas para comunicação com a OpenAI</li><li>Você pode solicitar uma chave de API <a href="https://platform.openai.com/account/api-keys">aqui.</a></li> - + <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1 <strong>Modelo ChatGPT GPT-3.5 Turbo da OpenAI</strong><br> %1 - + <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2 <strong>Modelo ChatGPT GPT-4 da OpenAI</strong><br> %1 %2 - + <strong>Mistral Tiny model</strong><br> %1 <strong>Modelo Mistral Tiny</strong><br> %1 - + <strong>Mistral Small model</strong><br> %1 <strong>Modelo Mistral Small</strong><br> %1 - + <strong>Mistral Medium model</strong><br> %1 <strong>Modelo Mistral Medium</strong><br> %1 - + <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info. <br><br><i>* Mesmo que você pague pelo ChatGPT-4 da OpenAI, isso não garante acesso à chave de API. Contate a OpenAI para mais informações. 
- - + + cannot open "%1": %2 não é possível abrir "%1": %2 - + cannot create "%1": %2 não é possível criar "%1": %2 - + %1 (%2) %1 (%2) - + <strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul> <strong>Modelo de API Compatível com OpenAI</strong><br><ul><li>Chave da API: %1</li><li>URL Base: %2</li><li>Nome do Modelo: %3</li></ul> - + <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li> <ul><li>É necessária uma chave de API da Mistral.</li><li>AVISO: Seus chats serão enviados para a Mistral!</li><li>Sua chave de API será armazenada localmente</li><li>Ela será usada apenas para comunicação com a Mistral</li><li>Você pode solicitar uma chave de API <a href="https://console.mistral.ai/user/api-keys">aqui</a>.</li> - + <ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li> <ul><li>É necessária uma chave de API e a URL da API.</li><li>AVISO: Seus chats serão enviados para o servidor de API compatível com OpenAI que você especificou!</li><li>Sua chave de API será armazenada no disco</li><li>Será usada apenas para comunicação com o servidor de API compatível com OpenAI</li> - + <strong>Connect to OpenAI-compatible API server</strong><br> %1 <strong>Conectar a um servidor de API compatível com OpenAI</strong><br> %1 - + <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul> <strong>Criado por 
%1.</strong><br><ul><li>Publicado em %2.<li>Este modelo tem %3 curtidas.<li>Este modelo tem %4 downloads.<li>Mais informações podem ser encontradas <a href="https://huggingface.co/%5">aqui.</a></ul> @@ -1602,87 +1782,175 @@ modelo instalado para funcionar Modelo - + + %1 system message? + + + + + + Clear + + + + + + Reset + + + + + The system message will be %1. + + + + + removed + + + + + + reset to the default + + + + + %1 chat template? + + + + + The chat template will be %1. + + + + + erased + + + + Model Settings Configurações do Modelo - + Clone Clonar - + Remove Remover - + Name Nome - + Model File Arquivo do Modelo - System Prompt - Prompt do Sistema + Prompt do Sistema - Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens. - Prefixado no início de cada conversa. Deve conter os tokens de enquadramento apropriados. + Prefixado no início de cada conversa. Deve conter os tokens de enquadramento apropriados. - Prompt Template - Modelo de Prompt + Modelo de Prompt - The template that wraps every prompt. - Modelo para cada prompt. + Modelo para cada prompt. - Must contain the string "%1" to be replaced with the user's input. - Deve incluir "%1" para a entrada do usuário. + Deve incluir "%1" para a entrada do usuário. - + + System Message + + + + + A message to set the context or guide the behavior of the model. Leave blank for none. NOTE: Since GPT4All 3.5, this should not contain control tokens. + + + + + System message is not <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">plain text</a>. + + + + + Chat Template + + + + + This Jinja template turns the chat into input for the model. + + + + + No <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> configured. + + + + + The <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> cannot be blank. 
+ + + + + <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">Syntax error</a>: %1 + + + + + Chat template is not in <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">Jinja format</a>. + + + + Chat Name Prompt Prompt para Nome do Chat - + Prompt used to automatically generate chat names. Prompt usado para gerar automaticamente nomes de chats. - + Suggested FollowUp Prompt Prompt de Sugestão de Acompanhamento - + Prompt used to generate suggested follow-up questions. Prompt usado para gerar sugestões de perguntas. - + Context Length Tamanho do Contexto - + Number of input and output tokens the model sees. Tamanho da Janela de Contexto. - + Maximum combined prompt/response tokens before information is lost. Using more context than the model was trained on will yield poor results. NOTE: Does not take effect until you reload the model. @@ -1691,128 +1959,128 @@ Usar mais contexto do que o modelo foi treinado pode gerar resultados ruins. Obs.: Só entrará em vigor após recarregar o modelo. - + Temperature Temperatura - + Randomness of model output. Higher -> more variation. Aleatoriedade das respostas. Quanto maior, mais variadas. - + Temperature increases the chances of choosing less likely tokens. NOTE: Higher temperature gives more creative but less predictable outputs. Aumenta a chance de escolher tokens menos prováveis. Obs.: Uma temperatura mais alta gera resultados mais criativos, mas menos previsíveis. - + Top-P Top-P - + Nucleus Sampling factor. Lower -> more predictable. Amostragem por núcleo. Menor valor, respostas mais previsíveis. - + Only the most likely tokens up to a total probability of top_p can be chosen. NOTE: Prevents choosing highly unlikely tokens. Apenas tokens com probabilidade total até o valor de top_p serão escolhidos. Obs.: Evita tokens muito improváveis. - + Min-P Min-P - + Minimum token probability. Higher -> more predictable. Probabilidade mínima do token. Quanto maior -> mais previsível. 
- + Sets the minimum relative probability for a token to be considered. Define a probabilidade relativa mínima para um token ser considerado. - + Top-K Top-K - + Size of selection pool for tokens. Número de tokens considerados na amostragem. - + Only the top K most likely tokens will be chosen from. Serão escolhidos apenas os K tokens mais prováveis. - + Max Length Comprimento Máximo - + Maximum response length, in tokens. Comprimento máximo da resposta, em tokens. - + Prompt Batch Size Tamanho do Lote de Processamento - + The batch size used for prompt processing. Tokens processados por lote. - + Amount of prompt tokens to process at once. NOTE: Higher values can speed up reading prompts but will use more RAM. Quantidade de tokens de prompt para processar de uma vez. OBS.: Valores mais altos podem acelerar a leitura dos prompts, mas usarão mais RAM. - + Repeat Penalty Penalidade de Repetição - + Repetition penalty factor. Set to 1 to disable. Penalidade de Repetição (1 para desativar). - + Repeat Penalty Tokens Tokens para penalizar repetição - + Number of previous tokens used for penalty. Número de tokens anteriores usados para penalidade. - + GPU Layers Camadas na GPU - + Number of model layers to load into VRAM. Camadas Carregadas na GPU. - + How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model. Lower values increase CPU load and RAM usage, and make inference slower. NOTE: Does not take effect until you reload the model. @@ -2068,6 +2336,19 @@ Obs.: Só entrará em vigor após recarregar o modelo. Escolha um diretório + + MySettingsLabel + + + Clear + + + + + Reset + + + MySettingsStack @@ -2078,12 +2359,22 @@ Obs.: Só entrará em vigor após recarregar o modelo. MySettingsTab - + + Restore defaults? + + + + + This page of settings will be reset to the defaults. 
+ + + + Restore Defaults Restaurar Configurações Padrão - + Restores settings dialog to a default state Restaura as configurações para o estado padrão @@ -2353,25 +2644,20 @@ versão do modelo GPT4All que utilize seus dados! SwitchModelDialog - <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue? - <b>Atenção:</b> Ao trocar o modelo a conversa atual será perdida. Continuar? + <b>Atenção:</b> Ao trocar o modelo a conversa atual será perdida. Continuar? - Continue - Continuar + Continuar - Continue with model loading - Confirma a troca do modelo + Confirma a troca do modelo - - Cancel - Cancelar + Cancelar @@ -2410,125 +2696,135 @@ versão do modelo GPT4All que utilize seus dados! main - + <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a> <h3>Ocorreu um erro ao iniciar:</h3><br><i>"Hardware incompatível detectado."</i><br><br>Infelizmente, seu processador não atende aos requisitos mínimos para executar este programa. Especificamente, ele não possui suporte às instruções AVX, que são necessárias para executar modelos de linguagem grandes e modernos. 
A única solução, no momento, é atualizar seu hardware para um processador mais recente.<br><br>Para mais informações, consulte: <a href="https://pt.wikipedia.org/wiki/Advanced_Vector_Extensions">https://pt.wikipedia.org/wiki/Advanced_Vector_Extensions</a> - + GPT4All v%1 GPT4All v%1 - + + Restore + + + + + Quit + + + + <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help. <h3>Ocorreu um erro ao iniciar:</h3><br><i>"Não foi possível acessar o arquivo de configurações."</i><br><br>Infelizmente, algo está impedindo o programa de acessar o arquivo de configurações. Isso pode acontecer devido a permissões incorretas na pasta de configurações do aplicativo. Para obter ajuda, acesse nosso <a href="https://discord.gg/4M2QFmTt2k">canal no Discord</a>. - + Connection to datalake failed. Falha na conexão com o datalake. - + Saving chats. Salvando chats. 
- + Network dialog Avisos de rede - + opt-in to share feedback/conversations permitir compartilhamento de feedback/conversas - + Home view Tela inicial - + Home view of application Tela inicial do aplicativo - + Home Início - + Chat view Visualização do Chat - + Chat view to interact with models Visualização do chat para interagir com os modelos - + Chats Chats - - + + Models Modelos - + Models view for installed models Tela de modelos instalados - - + + LocalDocs LocalDocs - + LocalDocs view to configure and use local docs Tela de configuração e uso de documentos locais do LocalDocs - - + + Settings Config - + Settings view for application configuration Tela de configurações do aplicativo - + The datalake is enabled O datalake está ativado - + Using a network model Usando um modelo de rede - + Server mode is enabled Modo servidor ativado - + Installed models Modelos instalados - + View of installed models Exibe os modelos instalados diff --git a/gpt4all-chat/translations/gpt4all_ro_RO.ts b/gpt4all-chat/translations/gpt4all_ro_RO.ts index 0c9227c99ade..c20cb23c6dab 100644 --- a/gpt4all-chat/translations/gpt4all_ro_RO.ts +++ b/gpt4all-chat/translations/gpt4all_ro_RO.ts @@ -557,13 +557,21 @@ - Save Chat Context - Salvarea contextului conversaţiei + Enable System Tray + + The application will minimize to the system tray when the window is closed. + + + + Save Chat Context + Salvarea contextului conversaţiei + + Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat. - Salvează pe disc starea modelului pentru încărcare mai rapidă. ATENŢIE: Consumă ~2GB/conversaţie. + Salvează pe disc starea modelului pentru încărcare mai rapidă. ATENŢIE: Consumă ~2GB/conversaţie. 
@@ -604,13 +612,13 @@ Chat - - + + New Chat Conversaţie Nouă - + Server Chat Conversaţie cu Serverul @@ -618,12 +626,12 @@ ChatAPIWorker - + ERROR: Network error occurred while connecting to the API server EROARE: Eroare de reţea - conectarea la serverul API - + ChatAPIWorker::handleFinished got HTTP Error %1 %2 ChatAPIWorker::handleFinished - eroare: HTTP Error %1 %2 @@ -691,35 +699,182 @@ Lista conversaţiilor în secţiunea-sertar + + ChatItemView + + + GPT4All + GPT4All + + + + You + Tu + + + + response stopped ... + replică întreruptă... + + + + retrieving localdocs: %1 ... + se preia din LocalDocs: %1 ... + + + + searching localdocs: %1 ... + se caută în LocalDocs: %1 ... + + + + processing ... + procesare... + + + + generating response ... + se generează replica... + + + + generating questions ... + se generează întrebări... + + + + + Copy + Copiere + + + + Copy Message + Copiez mesajul + + + + Disable markdown + Dezactivez markdown + + + + Enable markdown + Activez markdown + + + + %n Source(s) + + %n Sursa + %n Surse + %n de Surse + + + + + LocalDocs + LocalDocs + + + + Edit this message? + + + + + + All following messages will be permanently erased. + + + + + Redo this response? + + + + + Cannot edit chat without a loaded model. + + + + + Cannot edit chat while the model is generating. + + + + + Edit + + + + + Cannot redo response without a loaded model. + + + + + Cannot redo response while the model is generating. + + + + + Redo + + + + + Like response + + + + + Dislike response + + + + + Suggested follow-ups + Continuări sugerate + + + + ChatLLM + + + Your message was too long and could not be processed (%1 > %2). Please try again with something shorter. 
+ + + ChatListModel - + TODAY ASTĂZI - + THIS WEEK SĂPTĂMÂNA ACEASTA - + THIS MONTH LUNA ACEASTA - + LAST SIX MONTHS ULTIMELE ŞASE LUNI - + THIS YEAR ANUL ACESTA - + LAST YEAR ANUL TRECUT @@ -727,118 +882,140 @@ ChatView - + <h3>Warning</h3><p>%1</p> <h3>Atenţie</h3><p>%1</p> - Switch model dialog - Schimbarea modelului + Schimbarea modelului - Warn the user if they switch models, then context will be erased - Avertizează utilizatorul că la schimbarea modelului va fi şters contextul + Avertizează utilizatorul că la schimbarea modelului va fi şters contextul - + Conversation copied to clipboard. Conversaţia a fost plasată în Clipboard. - + Code copied to clipboard. Codul a fost plasat în Clipboard. - + + The entire chat will be erased. + + + + Chat panel Secţiunea de chat - + Chat panel with options Secţiunea de chat cu opţiuni - + Reload the currently loaded model Reîncarcă modelul curent - + Eject the currently loaded model Ejectează modelul curent - + No model installed. Niciun model instalat. - + Model loading error. Eroare la încărcarea modelului. - + Waiting for model... Se aşteaptă modelul... - + Switching context... Se schimbă contextul... - + Choose a model... Selectează un model... - + Not found: %1 Absent: %1 - + The top item is the current model Primul element e modelul curent - - + LocalDocs LocalDocs - + Add documents Adaug documente - + add collections of documents to the chat adaugă Colecţii de documente la conversaţie - + Load the default model Încarcă modelul implicit - + Loads the default model which can be changed in settings Încarcă modelul implicit care poate fi stabilit în Configurare - + No Model Installed Niciun model instalat - + + Legacy prompt template needs to be <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">updated</a> in Settings. + + + + + No <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> configured. 
+ + + + + The <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> cannot be blank. + + + + + Legacy system prompt needs to be <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">updated</a> in Settings. + + + + <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help <h3>EROARE la încărcarea modelului:</h3><br><i>"%1"</i><br><br>Astfel @@ -857,154 +1034,149 @@ se oferă ajutor - + + + Erase conversation? + + + + + Changing the model will erase the current conversation. + + + + GPT4All requires that you install at least one model to get started GPT4All necesită cel puţin un model pentru a putea rula - + Install a Model Instalează un model - + Shows the add model view Afişează secţiunea de adăugare a unui model - + Conversation with the model Conversaţie cu modelul - + prompt / response pairs from the conversation perechi prompt/replică din conversaţie - GPT4All - GPT4All + GPT4All - You - Tu + Tu - response stopped ... - replică întreruptă... + replică întreruptă... - processing ... - procesare... + procesare... - generating response ... - se generează replica... + se generează replica... - generating questions ... - se generează întrebări... + se generează întrebări... 
- - + Copy Copiere - Copy Message - Copiez mesajul + Copiez mesajul - Disable markdown - Dezactivez markdown + Dezactivez markdown - Enable markdown - Activez markdown + Activez markdown - Thumbs up - Bravo + Bravo - Gives a thumbs up to the response - Dă un Bravo acestei replici + Dă un Bravo acestei replici - Thumbs down - Aiurea + Aiurea - Opens thumbs down dialog - Deschide reacţia Aiurea + Deschide reacţia Aiurea - Suggested follow-ups - Continuări sugerate + Continuări sugerate - + Erase and reset chat session Şterge şi resetează sesiunea de chat - + Copy chat session to clipboard Copiez sesiunea de chat (conversaţia) în Clipboard - Redo last chat response - Reface ultima replică + Reface ultima replică - + Add media Adaugă media (un fişier) - + Adds media to the prompt Adaugă media (un fişier) la prompt - + Stop generating Opreşte generarea - + Stop the current response generation Opreşte generarea replicii curente - + Attach Ataşează - + Single File Un singur fişier - + Reloads the model Reîncarc modelul @@ -1039,82 +1211,78 @@ model to get started se oferă ajutor - - + + Reload · %1 Reîncărcare · %1 - + Loading · %1 Încărcare · %1 - + Load · %1 (default) → Încarcă · %1 (implicit) → - restoring from text ... - restaurare din text... + restaurare din text... - retrieving localdocs: %1 ... - se preia din LocalDocs: %1 ... + se preia din LocalDocs: %1 ... - searching localdocs: %1 ... - se caută în LocalDocs: %1 ... + se caută în LocalDocs: %1 ... - %n Source(s) - + %n Sursa %n Surse %n de Surse - + Send a message... Trimite un mesaj... - + Load a model to continue... Încarcă un model pentru a continua... 
- + Send messages/prompts to the model Trimite mesaje/prompt-uri către model - + Cut Decupare (Cut) - + Paste Alipire (Paste) - + Select All Selectez tot - + Send message Trimit mesajul - + Sends the message/prompt contained in textfield to the model Trimite modelului mesajul/prompt-ul din câmpul-text @@ -1160,40 +1328,53 @@ model to get started Selectează o Colecţie pentru ca modelul să o poată accesa. + + ConfirmationDialog + + + OK + + + + + Cancel + Anulare + + Download - + Model "%1" is installed successfully. Modelul "%1" - instalat cu succes. - + ERROR: $MODEL_NAME is empty. EROARE: $MODEL_NAME absent. - + ERROR: $API_KEY is empty. EROARE: $API_KEY absentă - + ERROR: $BASE_URL is invalid. EROARE: $API_KEY incorecta - + ERROR: Model "%1 (%2)" is conflict. EROARE: Model "%1 (%2)" conflictual. - + Model "%1 (%2)" is installed successfully. Modelul "%1 (%2)" - instalat cu succes. - + Model "%1" is removed. Modelul "%1" - îndepărtat @@ -1359,52 +1540,52 @@ model to get started Implicit - + Display Vizualizare - + Show Sources Afişarea Surselor - + Display the sources used for each response. Afişează Sursele utilizate pentru fiecare replică. - + Advanced Avansate - + Warning: Advanced usage only. Atenţie: Numai pentru utilizare avansată. - + Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>. Valori prea mari pot cauza erori cu LocalDocs, replici foarte lente sau chiar absenţa lor. În mare, numărul {N caractere x N citate} este adăugat la Context Window/Size/Length a modelului. Mai multe informaţii: <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aici</a>. - + Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation. 
Numărul caracterelor din fiecare citat. Numere mari amplifică probabilitatea unor replici corecte, dar de asemenea cauzează generare lentă. - + Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation. Numărul maxim al citatelor ce corespund şi care vor fi adăugate la contextul pentru prompt. Numere mari amplifică probabilitatea unor replici corecte, dar de asemenea cauzează generare lentă. - + Document snippet size (characters) Lungimea (în caractere) a citatelor din documente - + Max document snippets per prompt Numărul maxim de citate per prompt @@ -1568,78 +1749,78 @@ model to get started ModelList - - + + cannot open "%1": %2 nu se poate deschide „%1”: %2 - + cannot create "%1": %2 nu se poate crea „%1”: %2 - + %1 (%2) %1 (%2) - + <strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul> <strong>Model API compatibil cu OpenAI</strong><br><ul><li>Cheia API: %1</li><li>Base URL: %2</li><li>Numele modelului: %3</li></ul> - + <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li> <ul><li>Necesită o cheie API OpenAI personală. </li><li>ATENŢIE: Conversaţiile tale vor fi trimise la OpenAI!</li><li>Cheia ta API va fi stocată pe disc (local) </li><li>Va fi utilizată numai pentru comunicarea cu OpenAI</li><li>Poţi solicita o cheie API aici: <a href="https://platform.openai.com/account/api-keys">aici.</a></li> - + <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1 <strong>Modelul OpenAI's ChatGPT GPT-3.5 Turbo</strong><br> %1 - + <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. 
Contact OpenAI for more info. <br><br><i>* Chiar dacă plăteşti la OpenAI pentru ChatGPT-4, aceasta nu garantează accesul la cheia API. Contactează OpenAI pentru mai multe informaţii. - + <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2 <strong>Modelul ChatGPT GPT-4 al OpenAI</strong><br> %1 %2 - + <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li> <ul><li>Necesită cheia personală Mistral API. </li><li>ATENŢIE: Conversaţiile tale vor fi trimise la Mistral!</li><li>Cheia ta API va fi stocată pe disc (local)</li><li>Va fi utilizată numai pentru comunicarea cu Mistral</li><li>Poţi solicita o cheie API aici: <a href="https://console.mistral.ai/user/api-keys">aici</a>.</li> - + <strong>Mistral Tiny model</strong><br> %1 <strong>Modelul Mistral Tiny</strong><br> %1 - + <strong>Mistral Small model</strong><br> %1 <strong>Modelul Mistral Small</strong><br> %1 - + <strong>Mistral Medium model</strong><br> %1 <strong>Modelul Mistral Medium</strong><br> %1 - + <ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li> <ul><li>Necesită cheia personală API si base-URL a API.</li><li>ATENŢIE: Conversaţiile tale vor fi trimise la serverul API compatibil cu OpenAI specificat!</li><li>Cheia ta API va fi stocată pe disc (local)</li><li>Va fi utilizată numai pentru comunicarea cu serverul API compatibil cu OpenAI</li> - + <strong>Connect to OpenAI-compatible API server</strong><br> %1 <strong>Conectare la un server API compatibil cu OpenAI</strong><br> %1 - + <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This 
model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul> <strong>Creat de către %1.</strong><br><ul><li>Publicat in: %2.<li>Acest model are %3 Likes.<li>Acest model are %4 download-uri.<li>Mai multe informaţii pot fi găsite la: <a href="https://huggingface.co/%5">aici.</a></ul> @@ -1652,214 +1833,302 @@ model to get started Model - + + %1 system message? + + + + + + Clear + + + + + + Reset + + + + + The system message will be %1. + + + + + removed + + + + + + reset to the default + + + + + %1 chat template? + + + + + The chat template will be %1. + + + + + erased + + + + Model Settings Configurez modelul - + Clone Clonez - + Remove Şterg - + Name Denumire - + Model File Fişierul modelului - System Prompt - System Prompt + System Prompt - Prompt Template - Prompt Template + Prompt Template - The template that wraps every prompt. - Standardul de formulare a fiecărui prompt. + Standardul de formulare a fiecărui prompt. - + Chat Name Prompt Denumirea conversaţiei - + Prompt used to automatically generate chat names. Standardul de formulare a denumirii conversaţiilor. - + Suggested FollowUp Prompt Prompt-ul sugerat pentru a continua - + Prompt used to generate suggested follow-up questions. Prompt-ul folosit pentru generarea întrebărilor de continuare. - + Context Length Lungimea Contextului - + Number of input and output tokens the model sees. Numărul token-urilor de input şi de output văzute de model. - + Temperature Temperatura - + Randomness of model output. Higher -> more variation. Libertatea/Confuzia din replica modelului. Mai mare -> mai multă libertate. - + Top-P Top-P - + Nucleus Sampling factor. Lower -> more predictable. Factorul de Nucleus Sampling. Mai mic -> predictibilitate mai mare. - Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens. - Plasat la începutul fiecărei conversaţii. Trebuie să conţină token-uri(le) adecvate de încadrare. 
+ Plasat la începutul fiecărei conversaţii. Trebuie să conţină token-uri(le) adecvate de încadrare. - Must contain the string "%1" to be replaced with the user's input. - Trebuie să conţină textul "%1" care va fi înlocuit cu ceea ce scrie utilizatorul. + Trebuie să conţină textul "%1" care va fi înlocuit cu ceea ce scrie utilizatorul. - + Maximum combined prompt/response tokens before information is lost. Using more context than the model was trained on will yield poor results. NOTE: Does not take effect until you reload the model. Numărul maxim combinat al token-urilor în prompt+replică înainte de a se pierde informaţie. Utilizarea unui context mai mare decât cel cu care a fost instruit modelul va întoarce rezultate mai slabe. NOTĂ: Nu are efect până la reîncărcarea modelului. - + Temperature increases the chances of choosing less likely tokens. NOTE: Higher temperature gives more creative but less predictable outputs. Temperatura creşte probabilitatea de alegere a unor token-uri puţin probabile. NOTĂ: O temperatură tot mai înaltă determină replici tot mai creative şi mai puţin predictibile. - + Only the most likely tokens up to a total probability of top_p can be chosen. NOTE: Prevents choosing highly unlikely tokens. Pot fi alese numai cele mai probabile token-uri a căror probabilitate totală este Top-P. NOTĂ: Se evită selectarea token-urilor foarte improbabile. - + Min-P Min-P - + Minimum token probability. Higher -> more predictable. Probabilitatea mínimă a unui token. Mai mare -> mai predictibil. - + Sets the minimum relative probability for a token to be considered. Stabileşte probabilitatea minimă relativă a unui token de luat în considerare. - + Top-K Top-K - + Size of selection pool for tokens. Dimensiunea setului de token-uri. - + Only the top K most likely tokens will be chosen from. Se va alege numai din cele mai probabile K token-uri. - + Max Length Lungimea maximă - + Maximum response length, in tokens. Lungimea maximă - în token-uri - a replicii. 
- + Prompt Batch Size Prompt Batch Size - + The batch size used for prompt processing. Dimensiunea setului de token-uri citite simultan din prompt. - + Amount of prompt tokens to process at once. NOTE: Higher values can speed up reading prompts but will use more RAM. Numărul token-urilor procesate simultan. NOTĂ: Valori tot mai mari pot accelera citirea prompt-urilor, dar şi utiliza mai multă RAM. - + How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model. Lower values increase CPU load and RAM usage, and make inference slower. NOTE: Does not take effect until you reload the model. Cât de multe layere ale modelului să fie încărcate în VRAM. Valori mici trebuie folosite dacă GPT4All rămâne fără VRAM în timp ce încarcă modelul. Valorile tot mai mici cresc utilizarea CPU şi a RAM şi încetinesc inferenţa. NOTĂ: Nu are efect până la reîncărcarea modelului. - + Repeat Penalty Penalizarea pentru repetare - + + System Message + + + + + A message to set the context or guide the behavior of the model. Leave blank for none. NOTE: Since GPT4All 3.5, this should not contain control tokens. + + + + + System message is not <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">plain text</a>. + + + + + Chat Template + + + + + This Jinja template turns the chat into input for the model. + + + + + No <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> configured. + + + + + The <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> cannot be blank. + + + + + <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">Syntax error</a>: %1 + + + + + Chat template is not in <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">Jinja format</a>. + + + + Repetition penalty factor. Set to 1 to disable. Factorul de penalizare a repetării ce se dezactivează cu valoarea 1. 
- + Repeat Penalty Tokens Token-uri pentru penalizare a repetării - + Number of previous tokens used for penalty. Numărul token-urilor anterioare considerate pentru penalizare. - + GPU Layers Layere în GPU - + Number of model layers to load into VRAM. Numărul layerelor modelului ce vor fi Încărcate în VRAM. @@ -2111,6 +2380,19 @@ NOTE: Does not take effect until you reload the model. Selectează un director (folder) + + MySettingsLabel + + + Clear + + + + + Reset + + + MySettingsStack @@ -2121,12 +2403,22 @@ NOTE: Does not take effect until you reload the model. MySettingsTab - + + Restore defaults? + + + + + This page of settings will be reset to the defaults. + + + + Restore Defaults Restaurez valorile implicite - + Restores settings dialog to a default state Restaurez secţiunea de configurare la starea sa implicită @@ -2446,25 +2738,20 @@ care foloseşte datele tale! SwitchModelDialog - <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue? - <b>Atenţie:</b> schimbarea modelului va şterge conversaţia curentă. Confirmi aceasta? + <b>Atenţie:</b> schimbarea modelului va şterge conversaţia curentă. Confirmi aceasta? - Continue - Continuă + Continuă - Continue with model loading - Continuă încărcarea modelului + Continuă încărcarea modelului - - Cancel - Anulare + Anulare @@ -2503,125 +2790,135 @@ care foloseşte datele tale! main - + GPT4All v%1 GPT4All v%1 - + + Restore + + + + + Quit + + + + <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. 
The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a> <h3>A apărut o eroare la iniţializare:; </h3><br><i>"Hardware incompatibil. "</i><br><br>Din păcate, procesorul (CPU) nu întruneşte condiţiile minime pentru a rula acest program. În particular, nu suportă instrucţiunile AVX pe care programul le necesită pentru a integra un model conversaţional modern. În acest moment, unica soluţie este să îţi aduci la zi sistemul hardware cu un CPU mai recent.<br><br>Aici sunt mai multe informaţii: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a> - + <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help. <h3>A apărut o eroare la iniţializare:; </h3><br><i>"Nu poate fi accesat fişierul de configurare a programului."</i><br><br>Din păcate, ceva împiedică programul în a accesa acel fişier. Cauza poate fi un set de permisiuni incorecte pe directorul/folderul local de configurare unde se află acel fişier. Poţi parcurge canalul nostru <a href="https://discord.gg/4M2QFmTt2k">Discord</a> unde vei putea primi asistenţă. - + Connection to datalake failed. Conectarea la DataLake a eşuat. - + Saving chats. Se salvează conversaţiile. 
- + Network dialog Dialogul despre reţea - + opt-in to share feedback/conversations acceptă partajarea (share) de comentarii/conversaţii - + Home view Secţiunea de Început - + Home view of application Secţiunea de Început a programului - + Home Prima<br>pagină - + Chat view Secţiunea conversaţiilor - + Chat view to interact with models Secţiunea de chat pentru interacţiune cu modele - + Chats Conversaţii - - + + Models Modele - + Models view for installed models Secţiunea modelelor instalate - - + + LocalDocs LocalDocs - + LocalDocs view to configure and use local docs Secţiunea LocalDocs de configurare şi folosire a Documentelor Locale - - + + Settings Configurare - + Settings view for application configuration Secţiunea de configurare a programului - + The datalake is enabled DataLake: ACTIV - + Using a network model Se foloseşte un model pe reţea - + Server mode is enabled Modul Server: ACTIV - + Installed models Modele instalate - + View of installed models Secţiunea modelelor instalate diff --git a/gpt4all-chat/translations/gpt4all_zh_CN.ts b/gpt4all-chat/translations/gpt4all_zh_CN.ts index f6c87a58018d..7fc06d38c418 100644 --- a/gpt4all-chat/translations/gpt4all_zh_CN.ts +++ b/gpt4all-chat/translations/gpt4all_zh_CN.ts @@ -556,13 +556,21 @@ - Save Chat Context - 保存对话上下文 + Enable System Tray + + The application will minimize to the system tray when the window is closed. + + + + Save Chat Context + 保存对话上下文 + + Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat. - 保存模型's 状态以提供更快加载速度. 警告: 需用 ~2GB 每个对话. + 保存模型's 状态以提供更快加载速度. 警告: 需用 ~2GB 每个对话. 
@@ -603,13 +611,13 @@ Chat - - + + New Chat 新对话 - + Server Chat 服务器对话 @@ -617,12 +625,12 @@ ChatAPIWorker - + ERROR: Network error occurred while connecting to the API server 错误:连接到 API 服务器时发生网络错误 - + ChatAPIWorker::handleFinished got HTTP Error %1 %2 ChatAPIWorker::handleFinished 收到 HTTP 错误 %1 %2 @@ -690,35 +698,180 @@ 对话框中的聊天列表 + + ChatItemView + + + GPT4All + GPT4All + + + + You + + + + + response stopped ... + 响应停止... + + + + retrieving localdocs: %1 ... + 检索本地文档: %1 ... + + + + searching localdocs: %1 ... + 搜索本地文档: %1 ... + + + + processing ... + 处理中 + + + + generating response ... + 响应中... + + + + generating questions ... + 生成响应 + + + + + Copy + 复制 + + + + Copy Message + 复制内容 + + + + Disable markdown + 不允许markdown + + + + Enable markdown + 允许markdown + + + + %n Source(s) + + %n 资源 + + + + + LocalDocs + 本地文档 + + + + Edit this message? + + + + + + All following messages will be permanently erased. + + + + + Redo this response? + + + + + Cannot edit chat without a loaded model. + + + + + Cannot edit chat while the model is generating. + + + + + Edit + + + + + Cannot redo response without a loaded model. + + + + + Cannot redo response while the model is generating. + + + + + Redo + + + + + Like response + + + + + Dislike response + + + + + Suggested follow-ups + 建议的后续行动 + + + + ChatLLM + + + Your message was too long and could not be processed (%1 > %2). Please try again with something shorter. + + + ChatListModel - + TODAY 今天 - + THIS WEEK 本周 - + THIS MONTH 本月 - + LAST SIX MONTHS 半年内 - + THIS YEAR 今年内 - + LAST YEAR 去年 @@ -726,348 +879,361 @@ ChatView - + <h3>Warning</h3><p>%1</p> <h3>警告</h3><p>%1</p> - Switch model dialog - 切换模型对话 + 切换模型对话 - Warn the user if they switch models, then context will be erased - 如果用户切换模型,则警告用户,然后上下文将被删除 + 如果用户切换模型,则警告用户,然后上下文将被删除 - + Conversation copied to clipboard. 复制对话到剪切板 - + Code copied to clipboard. 复制代码到剪切板 - + + The entire chat will be erased. 
+ + + + Chat panel 对话面板 - + Chat panel with options 对话面板选项 - + Reload the currently loaded model 重载当前模型 - + Eject the currently loaded model 弹出当前加载的模型 - + No model installed. 没有安装模型 - + Model loading error. 模型加载错误 - + Waiting for model... 稍等片刻 - + Switching context... 切换上下文 - + Choose a model... 选择模型 - + Not found: %1 没找到: %1 - + The top item is the current model 当前模型的最佳选项 - - + LocalDocs 本地文档 - + Add documents 添加文档 - + add collections of documents to the chat 将文档集合添加到聊天中 - + Load the default model 载入默认模型 - + Loads the default model which can be changed in settings 加载默认模型,可以在设置中更改 - + No Model Installed 没有下载模型 - + GPT4All requires that you install at least one model to get started GPT4All要求您至少安装一个模型才能开始 - + Install a Model 下载模型 - + Shows the add model view 查看添加的模型 - + Conversation with the model 使用此模型对话 - + prompt / response pairs from the conversation 对话中的提示/响应对 - + + Legacy prompt template needs to be <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">updated</a> in Settings. + + + + + No <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> configured. + + + + + The <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> cannot be blank. + + + + + Legacy system prompt needs to be <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">updated</a> in Settings. + + + GPT4All - GPT4All + GPT4All - You - + - response stopped ... - 响应停止... + 响应停止... - processing ... - 处理中 + 处理中 - generating response ... - 响应中... + 响应中... - generating questions ... 
- 生成响应 + 生成响应 - - + Copy 复制 - Copy Message - 复制内容 + 复制内容 - Disable markdown - 不允许markdown + 不允许markdown - Enable markdown - 允许markdown + 允许markdown - Thumbs up - 点赞 + 点赞 - Gives a thumbs up to the response - 点赞响应 + 点赞响应 - Thumbs down - 点踩 + 点踩 - Opens thumbs down dialog - 打开点踩对话框 + 打开点踩对话框 - Suggested follow-ups - 建议的后续行动 + 建议的后续行动 - + Erase and reset chat session 擦除并重置聊天会话 - + Copy chat session to clipboard 复制对话到剪切板 - Redo last chat response - 重新生成上个响应 + 重新生成上个响应 - + Add media 新增媒體 - + Adds media to the prompt 將媒體加入提示中 - + Stop generating 停止生成 - + Stop the current response generation 停止当前响应 - + Attach - + Single File 單一文件 - + Reloads the model 重载模型 - + <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help <h3>加载模型时遇到错误:</h3><br><i><%1></i><br><br>模型加载失败可能由多种原因引起,但最常见的原因包括文件格式错误、下载不完整或损坏、文件类型错误、系统 RAM 不足或模型类型不兼容。以下是一些解决问题的建议:<br><ul><li>确保模型文件具有兼容的格式和类型<li>检查下载文件夹中的模型文件是否完整<li>您可以在设置对话框中找到下载文件夹<li>如果您已侧载模型,请通过检查 md5sum 确保文件未损坏<li>在我们的 <a href="https://docs.gpt4all.io/">文档</a> 中了解有关 gui 支持哪些模型的更多信息<li>查看我们的 <a href="https://discord.gg/4M2QFmTt2k">discord 频道</a> 以获取帮助 - - + + + Erase conversation? + + + + + Changing the model will erase the current conversation. 
+ + + + + Reload · %1 重载 · %1 - + Loading · %1 载入中 · %1 - + Load · %1 (default) → 载入 · %1 (默认) → - restoring from text ... - 从文本恢复中 + 从文本恢复中 - retrieving localdocs: %1 ... - 检索本地文档: %1 ... + 检索本地文档: %1 ... - searching localdocs: %1 ... - 搜索本地文档: %1 ... + 搜索本地文档: %1 ... - %n Source(s) - + %n 资源 - + Send a message... 发送消息... - + Load a model to continue... 选择模型并继续 - + Send messages/prompts to the model 发送消息/提示词给模型 - + Cut 剪切 - + Paste 粘贴 - + Select All 全选 - + Send message 发送消息 - + Sends the message/prompt contained in textfield to the model 将文本框中包含的消息/提示发送给模型 @@ -1109,40 +1275,53 @@ model to get started 选择一个集合,使其可用于聊天模型。 + + ConfirmationDialog + + + OK + + + + + Cancel + 取消 + + Download - + Model "%1" is installed successfully. 模型 "%1" 安装成功 - + ERROR: $MODEL_NAME is empty. 错误:$MODEL_NAME 为空 - + ERROR: $API_KEY is empty. 错误:$API_KEY为空 - + ERROR: $BASE_URL is invalid. 错误:$BASE_URL 非法 - + ERROR: Model "%1 (%2)" is conflict. 错误: 模型 "%1 (%2)" 有冲突. - + Model "%1 (%2)" is installed successfully. 模型 "%1 (%2)" 安装成功. - + Model "%1" is removed. 模型 "%1" 已删除. @@ -1308,52 +1487,52 @@ model to get started 程序默认 - + Display 显示 - + Show Sources 查看源码 - + Display the sources used for each response. 显示每个响应所使用的源。 - + Advanced 高级 - + Warning: Advanced usage only. 提示: 仅限高级使用。 - + Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>. 值过大可能会导致 localdocs 失败、响应速度极慢或根本无法响应。粗略地说,{N 个字符 x N 个片段} 被添加到模型的上下文窗口中。更多信息请见<a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">此处</a>。 - + Document snippet size (characters) 文档粘贴大小 (字符) - + Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation. 
每个文档片段的字符数。较大的数值增加了事实性响应的可能性,但也会导致生成速度变慢。 - + Max document snippets per prompt 每个提示的最大文档片段数 - + Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation. 检索到的文档片段最多添加到提示上下文中的前 N 个最佳匹配项。较大的数值增加了事实性响应的可能性,但也会导致生成速度变慢。 @@ -1519,78 +1698,78 @@ model to get started ModelList - - + + cannot open "%1": %2 无法打开“%1”:%2 - + cannot create "%1": %2 无法创建“%1”:%2 - + %1 (%2) %1 (%2) - + <strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul> <strong>与 OpenAI 兼容的 API 模型</strong><br><ul><li>API 密钥:%1</li><li>基本 URL:%2</li><li>模型名称:%3</li></ul> - + <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li> <ul><li>需要个人 OpenAI API 密钥。</li><li>警告:将把您的聊天内容发送给 OpenAI!</li><li>您的 API 密钥将存储在磁盘上</li><li>仅用于与 OpenAI 通信</li><li>您可以在此处<a href="https://platform.openai.com/account/api-keys">申请 API 密钥。</a></li> - + <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1 <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1 - + <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2 <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2 - + <strong>Mistral Tiny model</strong><br> %1 <strong>Mistral Tiny model</strong><br> %1 - + <strong>Mistral Small model</strong><br> %1 <strong>Mistral Small model</strong><br> %1 - + <strong>Mistral Medium model</strong><br> %1 <strong>Mistral Medium model</strong><br> %1 - + <ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API 
Server</li> <ul><li>需要个人 API 密钥和 API 基本 URL。</li><li>警告:将把您的聊天内容发送到您指定的与 OpenAI 兼容的 API 服务器!</li><li>您的 API 密钥将存储在磁盘上</li><li>仅用于与与 OpenAI 兼容的 API 服务器通信</li> - + <strong>Connect to OpenAI-compatible API server</strong><br> %1 <strong>连接到与 OpenAI 兼容的 API 服务器</strong><br> %1 - + <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info. <br><br><i>* 即使您为ChatGPT-4向OpenAI付款,这也不能保证API密钥访问。联系OpenAI获取更多信息。 - + <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li> <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li> - + <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul> <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul> @@ -1603,87 +1782,175 @@ model to get started 模型 - + + %1 system message? + + + + + + Clear + + + + + + Reset + + + + + The system message will be %1. + + + + + removed + + + + + + reset to the default + + + + + %1 chat template? + + + + + The chat template will be %1. + + + + + erased + + + + Model Settings 模型设置 - + Clone 克隆 - + Remove 删除 - + Name 名称 - + Model File 模型文件 - System Prompt - 系统提示词 + 系统提示词 - Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens. 
- 每次对话开始时的前缀 + 每次对话开始时的前缀 - Prompt Template - 提示词模版 + 提示词模版 - The template that wraps every prompt. - 包装每个提示的模板 + 包装每个提示的模板 - Must contain the string "%1" to be replaced with the user's input. - 必须包含字符串 "%1" 替换为用户的's 输入. + 必须包含字符串 "%1" 替换为用户的's 输入. + + + + System Message + + + + + A message to set the context or guide the behavior of the model. Leave blank for none. NOTE: Since GPT4All 3.5, this should not contain control tokens. + + + + + System message is not <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">plain text</a>. + + + + + Chat Template + + + + + This Jinja template turns the chat into input for the model. + + + + + No <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> configured. + + + + + The <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> cannot be blank. + + + + + <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">Syntax error</a>: %1 + - + + Chat template is not in <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">Jinja format</a>. + + + + Chat Name Prompt 聊天名称提示 - + Prompt used to automatically generate chat names. 用于自动生成聊天名称的提示。 - + Suggested FollowUp Prompt 建议的后续提示 - + Prompt used to generate suggested follow-up questions. 用于生成建议的后续问题的提示。 - + Context Length 上下文长度 - + Number of input and output tokens the model sees. 模型看到的输入和输出令牌的数量。 - + Maximum combined prompt/response tokens before information is lost. Using more context than the model was trained on will yield poor results. NOTE: Does not take effect until you reload the model. @@ -1692,128 +1959,128 @@ NOTE: Does not take effect until you reload the model. 注意:在重新加载模型之前不会生效。 - + Temperature 温度 - + Randomness of model output. Higher -> more variation. 模型输出的随机性。更高->更多的变化。 - + Temperature increases the chances of choosing less likely tokens. NOTE: Higher temperature gives more creative but less predictable outputs. 
温度增加了选择不太可能的token的机会。 注:温度越高,输出越有创意,但预测性越低。 - + Top-P Top-P - + Nucleus Sampling factor. Lower -> more predictable. 核子取样系数。较低->更具可预测性。 - + Only the most likely tokens up to a total probability of top_p can be chosen. NOTE: Prevents choosing highly unlikely tokens. 只能选择总概率高达top_p的最有可能的令牌。 注意:防止选择极不可能的token。 - + Min-P Min-P - + Minimum token probability. Higher -> more predictable. 最小令牌概率。更高 -> 更可预测。 - + Sets the minimum relative probability for a token to be considered. 设置被考虑的标记的最小相对概率。 - + Top-K Top-K - + Size of selection pool for tokens. 令牌选择池的大小。 - + Only the top K most likely tokens will be chosen from. 仅从最可能的前 K 个标记中选择 - + Max Length 最大长度 - + Maximum response length, in tokens. 最大响应长度(以令牌为单位) - + Prompt Batch Size 提示词大小 - + The batch size used for prompt processing. 用于快速处理的批量大小。 - + Amount of prompt tokens to process at once. NOTE: Higher values can speed up reading prompts but will use more RAM. 一次要处理的提示令牌数量。 注意:较高的值可以加快读取提示,但会使用更多的RAM。 - + Repeat Penalty 重复惩罚 - + Repetition penalty factor. Set to 1 to disable. 重复处罚系数。设置为1可禁用。 - + Repeat Penalty Tokens 重复惩罚数 - + Number of previous tokens used for penalty. 用于惩罚的先前令牌数量。 - + GPU Layers GPU 层 - + Number of model layers to load into VRAM. 要加载到VRAM中的模型层数。 - + How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model. Lower values increase CPU load and RAM usage, and make inference slower. NOTE: Does not take effect until you reload the model. @@ -2069,6 +2336,19 @@ NOTE: Does not take effect until you reload the model. 請選擇目錄 + + MySettingsLabel + + + Clear + + + + + Reset + + + MySettingsStack @@ -2079,12 +2359,22 @@ NOTE: Does not take effect until you reload the model. MySettingsTab - + + Restore defaults? + + + + + This page of settings will be reset to the defaults. + + + + Restore Defaults 恢复初始化 - + Restores settings dialog to a default state 将设置对话框恢复为默认状态 @@ -2346,25 +2636,20 @@ model release that uses your data! 
SwitchModelDialog


- <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue?
- <b>警告:</b> 更改模型将删除当前对话。您想继续吗?


- Continue
- 继续


- Continue with model loading
- 模型载入时继续


-
- Cancel
- 取消



@@ -2403,125 +2688,135 @@
main


-
+
GPT4All v%1
GPT4All v%1


+
+ Restore
+
+
+
+ Quit
+
+

-
+
<h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
<h3>启动时遇到错误:</h3><br><i>“检测到不兼容的硬件。”</i><br><br>很遗憾,您的 CPU 不满足运行此程序的最低要求。特别是,它不支持此程序成功运行现代大型语言模型所需的 AVX 内在函数。目前唯一的解决方案是将您的硬件升级到更现代的 CPU。<br><br>有关更多信息,请参阅此处:<a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>


-
+
<h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help.
<h3>启动时遇到错误:</h3><br><i>“无法访问设置文件。”</i><br><br>不幸的是,某些东西阻止程序访问设置文件。这可能是由于设置文件所在的本地应用程序配置目录中的权限不正确造成的。请查看我们的<a href="https://discord.gg/4M2QFmTt2k">discord 频道</a> 以获取帮助。


-
+
Connection to datalake failed.
连接数据湖失败。


-
+
Saving chats.
保存对话。


-
+
Network dialog
网络对话


-
+
opt-in to share feedback/conversations
选择加入以共享反馈/对话


-
+
Home view
主页视图


-
+
Home view of application
应用程序的主页视图


-
+
Home
主页


-
+
Chat view
对话视图


-
+
Chat view to interact with models
聊天视图可与模型互动


-
+
Chats
对话


-
-
+
+
Models
模型


-
+
Models view for installed models
已安装模型的页面


-
-
+
+
LocalDocs
本地文档


-
+
LocalDocs view to configure and use local docs
LocalDocs视图可配置和使用本地文档


-
-
+
+
Settings
设置


-
+
Settings view for application configuration
设置页面


-
+
The datalake is enabled
数据湖已开启


-
+
Using a network model
使用联网模型


-
+
Server mode is enabled
服务器模式已开启


-
+
Installed models
已安装的模型


-
+
View of installed models
查看已安装模型
diff --git a/gpt4all-chat/translations/gpt4all_zh_TW.ts b/gpt4all-chat/translations/gpt4all_zh_TW.ts
index 5ba14d749e56..75325cdbea3f 100644
--- a/gpt4all-chat/translations/gpt4all_zh_TW.ts
+++ b/gpt4all-chat/translations/gpt4all_zh_TW.ts
@@ -481,6 +481,16 @@
Never
永不
+
+
+ Enable System Tray
+
+
+
+ The application will minimize to the system tray when the window is closed.
+
+

Enable Local API Server

@@ -553,14 +563,12 @@
用於推理與嵌入的中央處理器執行緒數。

- Save Chat Context
- 儲存交談語境

- Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat.
- 將交談模型的狀態儲存到磁碟以加快載入速度。警告:每次交談使用約 2GB。


@@ -596,13 +604,13 @@
Chat

-
-
+
+
New Chat
新的交談

-
+
Server Chat
伺服器交談


ChatAPIWorker

-
+
ERROR: Network error occurred while connecting to the API server
錯誤:網路錯誤,無法連線到目標 API 伺服器

-
+
ChatAPIWorker::handleFinished got HTTP Error %1 %2
ChatAPIWorker::handleFinished 遇到一個 HTTP 錯誤 %1 %2
@@ -683,35 +691,180 @@
側邊欄對話視窗的交談列表

+
+ ChatItemView
+
+
+ GPT4All
+ GPT4All
+
+
+
+ You
+
+
+
+
+ response stopped ...
+ 回覆停止......
+
+
+
+ retrieving localdocs: %1 ...
+ 檢索本機文件中:%1 ......
+
+
+
+ searching localdocs: %1 ...
+ 搜尋本機文件中:%1 ......
+
+
+
+ processing ...
+ 處理中......
+
+
+
+ generating response ...
+ 生成回覆......
+
+
+
+ generating questions ...
+ 生成問題......
+ + + + + Copy + 複製 + + + + Copy Message + 複製訊息 + + + + Disable markdown + 停用 Markdown + + + + Enable markdown + 啟用 Markdown + + + + %n Source(s) + + %n 來源 + + + + + LocalDocs + 我的文件 + + + + Edit this message? + + + + + + All following messages will be permanently erased. + + + + + Redo this response? + + + + + Cannot edit chat without a loaded model. + + + + + Cannot edit chat while the model is generating. + + + + + Edit + + + + + Cannot redo response without a loaded model. + + + + + Cannot redo response while the model is generating. + + + + + Redo + + + + + Like response + + + + + Dislike response + + + + + Suggested follow-ups + 後續建議 + + + + ChatLLM + + + Your message was too long and could not be processed (%1 > %2). Please try again with something shorter. + + + ChatListModel - + TODAY 今天 - + THIS WEEK 這星期 - + THIS MONTH 這個月 - + LAST SIX MONTHS 前六個月 - + THIS YEAR 今年 - + LAST YEAR 去年 @@ -719,349 +872,362 @@ ChatView - + <h3>Warning</h3><p>%1</p> <h3>警告</h3><p>%1</p> - Switch model dialog - 切換模型對話視窗 + 切換模型對話視窗 - Warn the user if they switch models, then context will be erased - 警告使用者如果切換模型,則語境將被刪除 + 警告使用者如果切換模型,則語境將被刪除 - + Conversation copied to clipboard. 對話已複製到剪貼簿。 - + Code copied to clipboard. 程式碼已複製到剪貼簿。 - + + The entire chat will be erased. + + + + Chat panel 交談面板 - + Chat panel with options 具有選項的交談面板 - + Reload the currently loaded model 重新載入目前已載入的模型 - + Eject the currently loaded model 彈出目前載入的模型 - + No model installed. 沒有已安裝的模型。 - + Model loading error. 模型載入時發生錯誤。 - + Waiting for model... 等待模型中...... - + Switching context... 切換語境中...... - + Choose a model... 選擇一個模型...... - + Not found: %1 不存在:%1 - - + + Reload · %1 重新載入 · %1 - + Loading · %1 載入中 · %1 - + Load · %1 (default) → 載入 · %1 (預設) → - + + Legacy prompt template needs to be <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">updated</a> in Settings. + + + + + No <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> configured. 
+ + + + + The <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> cannot be blank. + + + + + Legacy system prompt needs to be <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">updated</a> in Settings. + + + + The top item is the current model 最上面的那項是目前使用的模型 - - + + + Erase conversation? + + + + + Changing the model will erase the current conversation. + + + + LocalDocs 我的文件 - + Add documents 新增文件 - + add collections of documents to the chat 將文件集合新增至交談中 - + Load the default model 載入預設模型 - + Loads the default model which can be changed in settings 預設模型可於設定中變更 - + No Model Installed 沒有已安裝的模型 - + GPT4All requires that you install at least one model to get started GPT4All 要求您至少安裝一個 模型開始 - + Install a Model 安裝一個模型 - + Shows the add model view 顯示新增模型視圖 - + Conversation with the model 與模型對話 - + prompt / response pairs from the conversation 對話中的提示詞 / 回覆組合 - GPT4All - GPT4All + GPT4All - You - + - response stopped ... - 回覆停止...... + 回覆停止...... - retrieving localdocs: %1 ... - 檢索本機文件中:%1 ...... + 檢索本機文件中:%1 ...... - searching localdocs: %1 ... - 搜尋本機文件中:%1 ...... + 搜尋本機文件中:%1 ...... - processing ... - 處理中...... + 處理中...... - generating response ... - 生成回覆...... + 生成回覆...... - generating questions ... - 生成問題...... + 生成問題...... 
-
+
Copy
複製

- Copy Message
- 複製訊息


- Disable markdown
- 停用 Markdown


- Enable markdown
- 啟用 Markdown


- Thumbs up
-

- Gives a thumbs up to the response
- 對這則回覆比讚


- Thumbs down
- 倒讚


- Opens thumbs down dialog
- 開啟倒讚對話視窗


- Suggested follow-ups
- 後續建議


-
+
Erase and reset chat session
刪除並重置交談會話


-
+
Copy chat session to clipboard
複製交談會話到剪貼簿


- Redo last chat response
- 復原上一個交談回覆


-
+
Add media
附加媒體檔案


-
+
Adds media to the prompt
附加媒體檔案到提示詞


-
+
Stop generating
停止生成


-
+
Stop the current response generation
停止當前回覆生成


-
+
Attach
附加


-
+
Single File
單一檔案


-
+
Reloads the model
重新載入模型


-
+
<h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help
<h3>載入模型時發生錯誤:</h3><br><i>"%1"</i><br><br>導致模型載入失敗的原因可能有很多種,但絕大多數的原因是檔案格式損毀、下載的檔案不完整、檔案類型錯誤、系統RAM空間不足或不相容的模型類型。這裡有些建議可供疑難排解:<br><ul><li>確保使用的模型是相容的格式與類型<li>檢查位於下載資料夾的檔案是否完整<li>您可以從設定中找到您所設定的「下載資料夾路徑」<li>如果您有側載模型,請利用 md5sum 等工具確保您的檔案是完整的<li>想了解更多關於我們所支援的模型資訊,煩請詳閱<a href="https://docs.gpt4all.io/">本文件</a>。<li>歡迎洽詢我們的 <a href="https://discord.gg/4M2QFmTt2k">Discord 伺服器</a> 以尋求幫助


- restoring from text ...
- 從文字中恢復......


- %n Source(s)
-
- %n 來源
-


-
+
Send a message...
傳送一則訊息......
- + Load a model to continue... 載入模型以繼續...... - + Send messages/prompts to the model 向模型傳送訊息/提示詞 - + Cut 剪下 - + Paste 貼上 - + Select All 全選 - + Send message 傳送訊息 - + Sends the message/prompt contained in textfield to the model 將文字欄位中包含的訊息/提示詞傳送到模型 @@ -1103,40 +1269,53 @@ model to get started 選擇一個收藏以使其可供交談模型使用。 + + ConfirmationDialog + + + OK + + + + + Cancel + 取消 + + Download - + Model "%1" is installed successfully. 模型「%1」已安裝成功。 - + ERROR: $MODEL_NAME is empty. 錯誤:$MODEL_NAME 未填寫。 - + ERROR: $API_KEY is empty. 錯誤:$API_KEY 未填寫。 - + ERROR: $BASE_URL is invalid. 錯誤:$BASE_URL 無效。 - + ERROR: Model "%1 (%2)" is conflict. 錯誤:模型「%1 (%2)」發生衝突。 - + Model "%1 (%2)" is installed successfully. 模型「%1(%2)」已安裝成功。 - + Model "%1" is removed. 模型「%1」已移除。 @@ -1302,52 +1481,52 @@ model to get started 應用程式預設值 - + Display 顯示 - + Show Sources 查看來源 - + Display the sources used for each response. 顯示每則回覆所使用的來源。 - + Advanced 進階 - + Warning: Advanced usage only. 警告:僅限進階使用。 - + Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>. 設定太大的數值可能會導致「我的文件」處理失敗、反應速度極慢或根本無法回覆。簡單地說,這會將 {N 個字元 x N 個片段} 被添加到模型的語境視窗中。更多資訊<a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">此處</a>。 - + Document snippet size (characters) 文件片段大小(字元) - + Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation. 每個文件片段的字元數。較大的數字會增加實際反應的可能性,但也會導致生成速度變慢。 - + Max document snippets per prompt 每個提示詞的最大文件片段 - + Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation. 
新增至提示詞語境中的檢索到的文件片段的最大 N 個符合的項目。較大的數字會增加實際反應的可能性,但也會導致生成速度變慢。 @@ -1507,78 +1686,78 @@ model to get started ModelList - - + + cannot open "%1": %2 無法開啟“%1”:%2 - + cannot create "%1": %2 無法建立“%1”:%2 - + %1 (%2) %1(%2) - + <strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul> <strong>OpenAI API 相容模型</strong><br><ul><li>API 金鑰:%1</li><li>基底 URL:%2</li><li>模型名稱:%3</li></ul> - + <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li> <ul><li>需要個人的 OpenAI API 金鑰。</li><li>警告:這將會傳送您的交談紀錄到 OpenAI</li><li>您的 API 金鑰將被儲存在硬碟上</li><li>它只被用於與 OpenAI 進行通訊</li><li>您可以在<a href="https://platform.openai.com/account/api-keys">此處</a>申請一個 API 金鑰。</li> - + <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1 <strong>OpenAI 的 ChatGPT 模型 GPT-3.5 Turbo</strong><br> %1 - + <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info. 
<br><br><i>* 即使您已向 OpenAI 付費購買了 ChatGPT 的 GPT-4 模型使用權,但這也不能保證您能擁有 API 金鑰的使用權限。請聯繫 OpenAI 以查閱更多資訊。 - + <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2 <strong>OpenAI 的 ChatGPT 模型 GPT-4</strong><br> %1 %2 - + <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li> <ul><li>需要個人的 Mistral API 金鑰。</li><li>警告:這將會傳送您的交談紀錄到 Mistral!</li><li>您的 API 金鑰將被儲存在硬碟上</li><li>它只被用於與 Mistral 進行通訊</li><li>您可以在<a href="https://console.mistral.ai/user/api-keys">此處</a>申請一個 API 金鑰。</li> - + <strong>Mistral Tiny model</strong><br> %1 <strong>Mistral 迷你模型</strong><br> %1 - + <strong>Mistral Small model</strong><br> %1 <strong>Mistral 小型模型</strong><br> %1 - + <strong>Mistral Medium model</strong><br> %1 <strong>Mistral 中型模型</strong><br> %1 - + <ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li> <ul><li>需要個人的 API 金鑰和 API 的基底 URL(Base URL)。</li><li>警告:這將會傳送您的交談紀錄到您所指定的 OpenAI API 相容伺服器</li><li>您的 API 金鑰將被儲存在硬碟上</li><li>它只被用於與其 OpenAI API 相容伺服器進行通訊</li> - + <strong>Connect to OpenAI-compatible API server</strong><br> %1 <strong>連線到 OpenAI API 相容伺服器</strong><br> %1 - + <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul> <strong>模型作者:%1</strong><br><ul><li>發佈日期:%2<li>累積讚數:%3 個讚<li>下載次數:%4 次<li>更多資訊請查閱<a href="https://huggingface.co/%5">此處</a>。</ul> @@ -1591,87 +1770,175 @@ model to get started 模型 - + + %1 system message? + + + + + + Clear + + + + + + Reset + + + + + The system message will be %1. 
+ + + + + removed + + + + + + reset to the default + + + + + %1 chat template? + + + + + The chat template will be %1. + + + + + erased + + + + Model Settings 模型設定 - + Clone 複製 - + Remove 移除 - + Name 名稱 - + Model File 模型檔案 - System Prompt - 系統提示詞 + 系統提示詞 - Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens. - 在每個對話的開頭加上前綴。必須包含適當的構建符元(framing tokens)。 + 在每個對話的開頭加上前綴。必須包含適當的構建符元(framing tokens)。 - Prompt Template - 提示詞模板 + 提示詞模板 - The template that wraps every prompt. - 包裝每個提示詞的模板。 + 包裝每個提示詞的模板。 - Must contain the string "%1" to be replaced with the user's input. - 必須包含要替換為使用者輸入的字串「%1」。 + 必須包含要替換為使用者輸入的字串「%1」。 - + + System Message + + + + + A message to set the context or guide the behavior of the model. Leave blank for none. NOTE: Since GPT4All 3.5, this should not contain control tokens. + + + + + System message is not <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">plain text</a>. + + + + + Chat Template + + + + + This Jinja template turns the chat into input for the model. + + + + + No <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> configured. + + + + + The <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">chat template</a> cannot be blank. + + + + + <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">Syntax error</a>: %1 + + + + + Chat template is not in <a href="https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html">Jinja format</a>. + + + + Chat Name Prompt 交談名稱提示詞 - + Prompt used to automatically generate chat names. 用於自動生成交談名稱的提示詞。 - + Suggested FollowUp Prompt 後續建議提示詞 - + Prompt used to generate suggested follow-up questions. 用於生成後續建議問題的提示詞。 - + Context Length 語境長度 - + Number of input and output tokens the model sees. 模型看見的輸入與輸出的符元數量。 - + Maximum combined prompt/response tokens before information is lost. Using more context than the model was trained on will yield poor results. 
NOTE: Does not take effect until you reload the model. @@ -1680,128 +1947,128 @@ NOTE: Does not take effect until you reload the model. 注意:重新載入模型後才會生效。 - + Temperature 語境溫度 - + Randomness of model output. Higher -> more variation. 模型輸出的隨機性。更高 -> 更多變化。 - + Temperature increases the chances of choosing less likely tokens. NOTE: Higher temperature gives more creative but less predictable outputs. 語境溫度會提高選擇不容易出現的符元機率。(Temperature) 注意:較高的語境溫度會生成更多創意,但輸出的可預測性會相對較差。 - + Top-P 核心採樣 - + Nucleus Sampling factor. Lower -> more predictable. 核心採樣因子。更低 -> 更可預測。 - + Only the most likely tokens up to a total probability of top_p can be chosen. NOTE: Prevents choosing highly unlikely tokens. 只選擇總機率約為核心採樣,最有可能性的符元。(Top-P) 注意:用於避免選擇不容易出現的符元。 - + Min-P 最小符元機率 - + Minimum token probability. Higher -> more predictable. 最小符元機率。更高 -> 更可預測。 - + Sets the minimum relative probability for a token to be considered. 設定要考慮的符元的最小相對機率。(Min-P) - + Top-K 高頻率採樣機率 - + Size of selection pool for tokens. 符元選擇池的大小。 - + Only the top K most likely tokens will be chosen from. 只選擇前 K 個最有可能性的符元。(Top-K) - + Max Length 最大長度 - + Maximum response length, in tokens. 最大響應長度(以符元為單位)。 - + Prompt Batch Size 提示詞批次大小 - + The batch size used for prompt processing. 用於即時處理的批量大小。 - + Amount of prompt tokens to process at once. NOTE: Higher values can speed up reading prompts but will use more RAM. 一次處理的提示詞符元數量。(Prompt Batch Size) 注意:較高的值可以加快讀取提示詞的速度,但會使用比較多的記憶體。 - + Repeat Penalty 重複處罰 - + Repetition penalty factor. Set to 1 to disable. 重複懲罰因子。設定為 1 以停用。 - + Repeat Penalty Tokens 重複懲罰符元 - + Number of previous tokens used for penalty. 之前用於懲罰的符元數量。 - + GPU Layers 圖形處理器負載層 - + Number of model layers to load into VRAM. 要載入到顯示記憶體中的模型層數。 - + How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model. Lower values increase CPU load and RAM usage, and make inference slower. NOTE: Does not take effect until you reload the model. 
@@ -2058,15 +2325,38 @@ NOTE: Does not take effect until you reload the model. 請選擇一個資料夾 + + MySettingsLabel + + + Clear + + + + + Reset + + + MySettingsTab - + + Restore defaults? + + + + + This page of settings will be reset to the defaults. + + + + Restore Defaults 恢復預設值 - + Restores settings dialog to a default state 恢復設定對話視窗到預設狀態 @@ -2337,25 +2627,20 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將 SwitchModelDialog - <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue? - <b>警告:</b> 變更模型將會清除目前對話內容。您真的想要繼續嗎? + <b>警告:</b> 變更模型將會清除目前對話內容。您真的想要繼續嗎? - Continue - 繼續 + 繼續 - Continue with model loading - 繼續載入模型 + 繼續載入模型 - - Cancel - 取消 + 取消 @@ -2394,125 +2679,135 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將 main - + GPT4All v%1 GPT4All v%1 - + + Restore + + + + + Quit + + + + <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a> <h3>啟動時發生錯誤:</h3><br><i>「偵測到不相容的硬體。」</i><br><br>糟糕!您的中央處理器不符合運行所需的最低需求。尤其,它不支援本程式運行現代大型語言模型所需的 AVX 指令集。目前唯一的解決方案,只有更新您的中央處理器及其相關硬體裝置。<br><br>更多資訊請查閱:<a href="https://zh.wikipedia.org/wiki/AVX指令集">AVX 指令集 - 維基百科</a> - + <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help. 
<h3>啟動時發生錯誤:</h3><br><i>「無法存取設定檔。」</i><br><br>糟糕!有些東西正在阻止程式存取設定檔。這極為可能是由於設定檔所在的本機應用程式設定資料夾中的權限設定不正確所造成的。煩請洽詢我們的 <a href="https://discord.gg/4M2QFmTt2k">Discord 伺服器</a> 以尋求協助。 - + Connection to datalake failed. 連線資料湖泊失敗。 - + Saving chats. 儲存交談。 - + Network dialog 資料湖泊計畫對話視窗 - + opt-in to share feedback/conversations 分享回饋/對話計畫 - + Home view 首頁視圖 - + Home view of application 應用程式首頁視圖 - + Home 首頁 - + Chat view 查看交談 - + Chat view to interact with models 模型互動交談視圖 - + Chats 交談 - - + + Models 模型 - + Models view for installed models 已安裝模型的模型視圖 - - + + LocalDocs 我的文件 - + LocalDocs view to configure and use local docs 用於設定與使用我的文件的「我的文件」視圖 - - + + Settings 設定 - + Settings view for application configuration 應用程式設定視圖 - + The datalake is enabled 資料湖泊已啟用 - + Using a network model 使用一個網路模型 - + Server mode is enabled 伺服器模式已啟用 - + Installed models 已安裝的模型 - + View of installed models 已安裝的模型視圖