From 21c422236c87493c34ba0885f5f04869d3fdf067 Mon Sep 17 00:00:00 2001
From: raycastbot
Date: Wed, 18 Sep 2024 11:08:32 +0000
Subject: [PATCH] Docs: update for the new API release

---
 docs/.config.json           | 2 +-
 docs/api-reference/ai.md    | 3 +--
 docs/api-reference/oauth.md | 6 +++++-
 3 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/docs/.config.json b/docs/.config.json
index c153d5f30fd..5d4e6a68aec 100644
--- a/docs/.config.json
+++ b/docs/.config.json
@@ -1,3 +1,3 @@
 {
-  "version": "1.82.5"
+  "version": "1.83.0"
 }
diff --git a/docs/api-reference/ai.md b/docs/api-reference/ai.md
index 0cc4a9814fe..08efd9b5fd6 100644
--- a/docs/api-reference/ai.md
+++ b/docs/api-reference/ai.md
@@ -152,7 +152,6 @@ The AI model to use to answer to the prompt. Defaults to `AI.Model["OpenAI_GPT3.
 
 #### Enumeration members
 
-| OpenAI_GPT3.5-turbo | GPT-3.5 Turbo is OpenAI’s fastest model, making it ideal for tasks that require quick response times with basic language processing capabilities. |
 | OpenAI_GPT4 | GPT-4 is OpenAI’s most capable model with broad general knowledge, allowing it to follow complex instructions and solve difficult problems. |
 | OpenAI_GPT4-turbo | GPT-4 Turbo from OpenAI has a big context window that fits hundreds of pages of text, making it a great choice for workloads that involve longer prompts. |
 | OpenAI_GPT4o | GPT-4o is the most advanced and fastest model from OpenAI, making it a great choice for complex everyday problems and deeper conversations. |
@@ -174,7 +173,7 @@ The AI model to use to answer to the prompt. Defaults to `AI.Model["OpenAI_GPT3.
 If a model isn't available to the user, Raycast will fallback to a similar one:
 
 - `AI.Model.Anthropic_Claude_Opus` and `AI.Model.Anthropic_Claude_Sonnet` -> `AI.Model.Anthropic_Claude_Haiku`
-- `AI.Model.OpenAI_GPT4` and `AI.Model["OpenAI_GPT4-turbo"]` -> `AI.Model["OpenAI_GPT3.5-turbo"]`
+- `AI.Model.OpenAI_GPT4` and `AI.Model["OpenAI_GPT4-turbo"]` -> `AI.Model["OpenAI_GPT4o-mini"]`
 - `AI.Model["Perplexity_Llama3.1_Sonar_Large"]` and `AI.Model["Perplexity_Llama3.1_Sonar_Huge"]` -> `AI.Model["Perplexity_Llama3.1_Sonar_Small"]`
 - `AI.Model.Mistral_Large2` -> `AI.Model.Mistral_Nemo`
 
diff --git a/docs/api-reference/oauth.md b/docs/api-reference/oauth.md
index 034b3e5ca8a..77df1e7d6b1 100644
--- a/docs/api-reference/oauth.md
+++ b/docs/api-reference/oauth.md
@@ -4,7 +4,11 @@
 
 A Raycast extension can use OAuth for authorizing access to a provider's resources on the user's behalf. Since Raycast is a desktop app and the extensions are considered "public", we only support the [PKCE flow](https://datatracker.ietf.org/doc/html/rfc7636) (Proof Key for Code Exchange, pronounced “pixy”). This flow is the official recommendation for native clients that cannot keep a client secret. With PKCE, the client dynamically creates a secret and uses the secret again during code exchange, ensuring that only the client that performed the initial request can exchange the code for the access token (”proof of possession”).
 
-Before considering OAuth, first check if your provider supports PKCE. You can usually find this in the provider's OAuth docs by looking for `code_challenge` and `code_verifier` parameters. Providers such as Google, Twitter, GitLab, Spotify, Zoom, Asana or Dropbox are all PKCE-ready. If you find that your provider does not yet support PKCE, you can often use other forms of authorization such as personal access tokens (usable with Raycast password preferences), or open-source PKCE proxies that "adapt" an OAuth flow to be PKCE-compatible (you would need to operate your own backend service in this case, so this is only recommended for very advanced use cases.)
+{% hint style="info" %}
+Providers such as Google, Twitter, GitLab, Spotify, Zoom, Asana or Dropbox are all PKCE-ready.
+
+However, if your provider doesn't support PKCE, you can use our [PKCE proxy](https://oauth.raycast.com). It allows extensions to securely use an OAuth flow without exposing any secret.
+{% endhint %}
 
 ## OAuth Flow
 
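For reference, the PKCE flow that the updated `oauth.md` hint builds on looks roughly like this from an extension's side. This is a minimal sketch, not part of the patch: the provider name, endpoints, client ID, and scope are hypothetical placeholders, while `OAuth.PKCEClient` and its methods come from `@raycast/api`.

```typescript
import { OAuth } from "@raycast/api";
import fetch from "node-fetch";

// "Example" is a stand-in provider; endpoints, client ID, and scope are hypothetical.
const client = new OAuth.PKCEClient({
  redirectMethod: OAuth.RedirectMethod.Web,
  providerName: "Example",
  description: "Connect your Example account",
});

export async function authorize(): Promise<string> {
  // Reuse an existing, still-valid token if one is stored.
  const tokenSet = await client.getTokens();
  if (tokenSet?.accessToken && !tokenSet.isExpired()) {
    return tokenSet.accessToken;
  }

  // The PKCE client generates the code_verifier/code_challenge pair for us.
  const authRequest = await client.authorizationRequest({
    endpoint: "https://example.com/oauth/authorize",
    clientId: "example-client-id",
    scope: "read",
  });
  const { authorizationCode } = await client.authorize(authRequest);

  // Exchange the code for tokens; only the holder of the code_verifier can complete this step.
  const body = new URLSearchParams({
    client_id: "example-client-id",
    grant_type: "authorization_code",
    code: authorizationCode,
    code_verifier: authRequest.codeVerifier,
    redirect_uri: authRequest.redirectURI,
  });
  const response = await fetch("https://example.com/oauth/token", { method: "POST", body });
  const tokens = (await response.json()) as OAuth.TokenResponse;
  await client.setTokens(tokens);
  return tokens.access_token;
}
```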