Title: | Asking GPT About R Stuff |
---|---|
Description: | A chat package connecting to API endpoints from 'OpenAI' (<https://platform.openai.com/>) to answer questions (about R). |
Authors: | Johannes Gruber [aut, cre] |
Maintainer: | Johannes Gruber <[email protected]> |
License: | GPL (>= 3) |
Version: | 0.1.3.9000 |
Built: | 2024-11-01 04:39:39 UTC |
Source: | https://github.com/jbgruber/askgpt |
Annotate R code with inline comments
annotate_code(code, ...)
code |
A character vector of R code. If missing, the code currently selected in RStudio is annotated (requires RStudio). |
... |
passed on to askgpt(). |
A character vector.
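A minimal illustrative call (the code string is an invented example, not from the package documentation):

## Not run: 
annotate_code("fit <- lm(mpg ~ cyl, data = mtcars)")
## End(Not run)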
Ask OpenAI's GPT models a question
askgpt(prompt, chat = TRUE, progress = TRUE, return_answer = FALSE, ...)
prompt |
What you want to ask |
chat |
whether to use the chat API (i.e., the same model as ChatGPT) or the completions API. |
progress |
Show a progress spinner while the API request is pending. |
return_answer |
Should the answer be returned as an object instead of printed to the screen? |
... |
additional options forwarded to chat_api() (if chat = TRUE) or completions_api(). |
either an httr2 response object from one of the APIs or a character vector (if return_answer = TRUE).
## Not run: 
askgpt("What is an R function?")
askgpt("What is wrong with my last command?")
askgpt("Can you help me with the function aes() from ggplot2?")
## End(Not run)
Request an answer from OpenAI's chat API
chat_api(prompt, model = NULL, config = NULL, max_tokens = NULL, api_key = NULL, ...)
prompt |
character string of the prompt to be completed. |
model |
character string of the model to be used (defaults to "gpt-3.5-turbo"). |
config |
a configuration prompt to tell the model how it should behave. |
max_tokens |
The maximum number of tokens to generate in the completion. 2048L is the maximum the models accept. |
api_key |
set the API key. If NULL, looks for the environment variable OPENAI_API_KEY. |
... |
additional parameters to be passed to the API (see [the API documentation](https://platform.openai.com/docs/api-reference/completions)). |
an httr2 response object
## Not run: chat_api("Hi, how are you?", config = "answer as a friendly chat bot") ## End(Not run)
Mostly used under the hood for askgpt.
completions_api(prompt, model = NULL, temperature = NULL, max_tokens = NULL, api_key = NULL, ...)
prompt |
character string of the prompt to be completed. |
model |
character string of the model to be used (defaults to "gpt-3.5-turbo-instruct"). |
temperature |
numeric value between 0 and 1 to control the randomness of the output (defaults to 0.2; lower values like 0.2 will make answers more focused and deterministic). |
max_tokens |
The maximum number of tokens to generate in the completion. 2048L is the maximum the models accept. |
api_key |
set the API key. If NULL, looks for the environment variable OPENAI_API_KEY. |
... |
additional parameters to be passed to the API (see [the API documentation](https://platform.openai.com/docs/api-reference/completions)). |
Only a few parameters are implemented by name. Most can be sent through the ... argument. For example, you could use the n parameter like this: completions_api("The quick brown fox", n = 2).
A couple of defaults are used by the package:
the model used by default is "gpt-3.5-turbo-instruct"
the default temperature is 0.2
the default for max_tokens is 2048L
You can configure how askgpt makes requests by setting options that start with askgpt_*. For example, to use a different model, use options(askgpt_model = "text-curie-001"). It does not matter whether the API parameter is listed in the function or not; all are used.
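As an illustrative sketch of this option mechanism (the option names below assume they mirror the API parameter names, as described above):

## Not run: 
# set package-wide defaults via options
options(askgpt_model = "gpt-3.5-turbo-instruct")
options(askgpt_max_tokens = 100L)
# subsequent calls pick up these options
completions_api("The quick brown fox")
## End(Not run)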
an httr2 response object
## Not run: completions_api("The quick brown fox") ## End(Not run)
Document R Code
document_code(code, ...)
code |
A character vector of R code. If missing, the code currently selected in RStudio is documented (requires RStudio). |
... |
passed on to askgpt(). |
A character vector.
## Not run: document_code() ## End(Not run)
Estimate token count
estimate_token(x, mult = 1.6)
x |
character vector |
mult |
the multiplier used to convert the word count to an estimated token count. |
This function estimates how many tokens the API will make of the input words. For these models, one word usually corresponds to more than one token. The default multiplier value resulted from testing the API. See <https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them> for more information.
an integer vector of token counts
estimate_token("this is a test")
Explain R code
explain_code(code, ...)
code |
A character vector of R code. If missing, the code currently selected in RStudio is explained (requires RStudio). |
... |
passed on to askgpt(). |
A character vector.
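A minimal illustrative call (the code string is an invented example):

## Not run: 
explain_code("z <- sapply(1:10, function(x) x^2)")
## End(Not run)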
'improve_addin()' opens an [RStudio gadget](https://shiny.rstudio.com/articles/gadgets.html) and [addin](http://rstudio.github.io/rstudioaddins/) that can be used to improve existing code, documentation, or writing.
improve_addin()
No return value, opens a new file in RStudio
List the models available in the API. You can refer to the [Models documentation](https://platform.openai.com/docs/models) to understand what models are available and the differences between them.
list_models(api_key = NULL)
api_key |
set the API key. If NULL, looks for the environment variable OPENAI_API_KEY. |
A tibble with available models
## Not run: list_models() ## End(Not run)
Initiate error logging
log_init(...)
... |
forwarded to rlang::global_entrace(). |
Just an alias for rlang::global_entrace() with a more fitting name (for the purpose here).
No return value, called to enable rlang error logging
Log in to OpenAI
login(api_key, force_refresh = FALSE, cache_dir = NULL, no_cache = FALSE)
api_key |
API key to use for authentication. If not provided, the function looks for a cached key or guides the user to obtain one. |
force_refresh |
Log in again even if an API key is already cached. |
cache_dir |
directory location to save keys on disk. |
no_cache |
Don't cache the API key, only load it into the environment. |
a character vector with an API key
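Illustrative usage (the key shown is a placeholder, not a real credential):

## Not run: 
# interactive setup; the key is cached for future sessions
login()
# or pass a key directly without writing it to disk
login(api_key = "sk-...", no_cache = TRUE)
## End(Not run)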
Deletes the local prompt and response history to start a new conversation.
new_conversation()
Does not return a value
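For example (illustrative):

## Not run: 
askgpt("What is an R function?")
# wipe the prompt/response history before changing topic
new_conversation()
## End(Not run)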
Parse response from API functions
parse_response(response)
response |
a response object from chat_api() or completions_api(). |
a character vector
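An illustrative sketch of how this pairs with the API functions documented above:

## Not run: 
resp <- chat_api("Hi, how are you?")
parse_response(resp)
## End(Not run)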
Return the prompt/response history
prompt_history(n = Inf)
n |
number of prompts/responses to return. |
a character vector
Return the prompt/response history
response_history(n = Inf)
n |
number of prompts/responses to return. |
a character vector
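Illustrative usage of both history functions:

## Not run: 
askgpt("What is an R function?")
prompt_history(n = 1)   # most recent prompt
response_history(n = 1) # most recent response
## End(Not run)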
Test R code
test_function(code, ...)
code |
A character vector of R code. If missing, the code currently selected in RStudio is used (requires RStudio). |
... |
passed on to askgpt(). |
A character vector.
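A minimal illustrative call (the function definition is an invented example):

## Not run: 
test_function("add_one <- function(x) x + 1")
## End(Not run)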
OpenAI's token limits for different models.
token_limits
An object of class data.frame
with 6 rows and 2 columns.
<https://platform.openai.com/docs/models/overview>
'tutorialise_addin()' opens an [RStudio gadget](https://shiny.rstudio.com/articles/gadgets.html) and [addin](http://rstudio.github.io/rstudioaddins/) that turns selected code into an R Markdown/Quarto Tutorial.
tutorialise_addin()
No return value, opens a new file in RStudio