Chat with AI in RStudio
chattr
is a package that enables interaction with Large Language Models (LLMs),
such as GitHub Copilot Chat, and OpenAI's GPT 3.5 and 4. The main vehicle is a
Shiny app that runs inside the RStudio IDE. Here is an example of what it looks
like running inside the Viewer pane:
Even though this article highlights chattr's integration with the RStudio IDE,
it is worth mentioning that it works outside RStudio as well, for example in the terminal.
Getting started
To get started, install the package from GitHub, and call the Shiny app
using the chattr_app() function:
# Install from GitHub
remotes::install_github("mlverse/chattr")

# Run the app
chattr::chattr_app()
#> ── chattr - Available models
#> Select the number of the model you would like to use:
#>
#> 1: GitHub - Copilot Chat - (copilot)
#>
#> 2: OpenAI - Chat Completions - gpt-3.5-turbo (gpt35)
#>
#> 3: OpenAI - Chat Completions - gpt-4 (gpt4)
#>
#> 4: LlamaGPT - ~/ggml-gpt4all-j-v1.3-groovy.bin (llamagpt)
#>
#>
#> Selection:
>
After you select the model you wish to interact with, the app will open. The
following screenshot provides an overview of the different buttons and
keyboard shortcuts you can use with the app:
You can start writing your requests in the main text box at the top left of the
app. Then submit your question by either clicking on the ‘Submit’ button, or
by pressing Shift+Enter.
chattr
parses the output of the LLM, and displays the code inside chunks. It
also places three buttons at the top of each chunk: one to copy the code to the
clipboard, another to copy it directly to your active script in RStudio, and
one to copy the code to a new script. To close the app, press the 'Escape' key.
Pressing the ‘Settings’ button will open the defaults that the chat session
is using. These can be changed as you see fit. The ‘Prompt’ text box is
the additional text being sent to the LLM as part of your question.
Personalized setup
chattr will try to identify which models you have set up,
and will include only those in the selection menu. For Copilot and OpenAI,
chattr confirms that there is an available authentication token in order to
display them in the menu. For example, if you only have
OpenAI set up, then the prompt will look something like this:
chattr::chattr_app()
#> ── chattr - Available models
#> Select the number of the model you would like to use:
#>
#> 2: OpenAI - Chat Completions - gpt-3.5-turbo (gpt35)
#>
#> 3: OpenAI - Chat Completions - gpt-4 (gpt4)
#>
#> Selection:
>
If you wish to avoid the menu, use the chattr_use()
function. Here is an example
of setting GPT 4 as the default:
library(chattr)
chattr_use("gpt4")
chattr_app()
You can also select a model by setting the CHATTR_USE
environment
variable.
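As a sketch of how that might look (the value "gpt4" is just one of the model labels shown in the menu above), you can set the variable from R before chattr loads, or place the equivalent line in your .Renviron file:

```r
# Set the environment variable before chattr is used; "gpt4" matches
# one of the model labels from the selection menu
Sys.setenv(CHATTR_USE = "gpt4")

# Confirm the value is set for this session
Sys.getenv("CHATTR_USE")
```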
Advanced customization
It is possible to customize many aspects of your interaction with the LLM. To do
this, use the chattr_defaults()
function. This function displays and sets the
additional prompt sent to the LLM, the model to be used, determines if the
history of the chat is to be sent to the LLM, and model specific arguments.
For example, you may wish to change the maximum number of tokens used per response;
for OpenAI, you can use this:
# Default for max_tokens is 1,000
library(chattr)
chattr_use("gpt4")
chattr_defaults(model_arguments = list("max_tokens" = 100))
#>
#> ── chattr ──────────────────────────────────────────────────────────────────────
#>
#> ── Defaults for: Default ──
#>
#> ── Prompt:
#> • {{readLines(system.file('prompt/base.txt', package = 'chattr'))}}
#>
#> ── Model
#> • Provider: OpenAI - Chat Completions
#> • Path/URL: https://api.openai.com/v1/chat/completions
#> • Model: gpt-4
#> • Label: GPT 4 (OpenAI)
#>
#> ── Model Arguments:
#> • max_tokens: 100
#> • temperature: 0.01
#> • stream: TRUE
#>
#> ── Context:
#> Max Data Files: 0
#> Max Data Frames: 0
#> ✔ Chat History
#> ✖ Document contents
If you wish to persist your changes to the defaults, use the chattr_defaults_save()
function. This will create a YAML file, named 'chattr.yml' by default. If found,
chattr will use this file to load all of the defaults, including the selected
model.
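As a rough illustration of what that persisted file captures, here is a sketch based on the defaults displayed above. The key names below are illustrative assumptions, not the package's exact schema; inspect the generated 'chattr.yml' for the authoritative structure:

```yaml
# Illustrative sketch only -- the actual keys written by
# chattr_defaults_save() may differ
default:
  provider: OpenAI - Chat Completions
  model: gpt-4
  model_arguments:
    max_tokens: 100
    temperature: 0.01
    stream: true
```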
A more extensive description of this feature is available on the chattr
website, under Modify prompt enhancements.
Beyond the app
In addition to the Shiny app, chattr offers a couple of other ways to interact
with the LLM:
- Use the chattr() function
- Highlight a question in your script, and use it as your prompt
> chattr("how do I remove the legend from a ggplot?")
#> You can remove the legend from a ggplot by adding
#> `theme(legend.position = "none")` to your ggplot code.
A more detailed article is available on the chattr
website
here.
RStudio Add-ins
chattr comes with two RStudio add-ins:
You can bind these add-in calls to keyboard shortcuts, making it easy to open the app without having to write
the command every time. To learn how to do that, see the Keyboard Shortcut section on the
chattr official website.
Works with local LLMs
Open-source, trained models that are able to run on your laptop are widely
available today. Instead of integrating with each model individually, chattr
works with LlamaGPTJ-chat. This is a lightweight application that communicates
with a variety of local models. Currently, LlamaGPTJ-chat integrates with the
following families of models:
- GPT-J (ggml and gpt4all models)
- LLaMA (ggml Vicuna models from Meta)
- Mosaic Pretrained Transformers (MPT)
LlamaGPTJ-chat works right off the terminal. chattr integrates with the
application by starting a 'hidden' terminal session. There it initializes the
selected model, and makes it available to start chatting with.
To get started, you need to install LlamaGPTJ-chat, and download a compatible
model. More detailed instructions are found
here.
chattr looks for the location of the LlamaGPTJ-chat program, and the installed model,
in a specific folder location on your machine. If your installation paths do
not match the locations expected by chattr, then LlamaGPT will not show
up in the menu. But that's OK, you can still access it with chattr_use():
library(chattr)
chattr_use(
  "llamagpt",
  path = "[path to compiled program]",
  model = "[path to model]"
)
#>
#> ── chattr
#> • Provider: LlamaGPT
#> • Path/URL: [path to compiled program]
#> • Model: [path to model]
#> • Label: GPT4ALL 1.3 (LlamaGPT)
Extending chattr
chattr aims to make it easy for new LLM APIs to be added. chattr
has two components: the user interface (the Shiny app and the chattr()
function), and the included back-ends (GPT, Copilot, LLamaGPT).
New back-ends do not need to be added directly in chattr.
If you are a package developer and would like to take advantage of the chattr
UI, all you need to do is define a ch_submit() method in your package.
The two output requirements for ch_submit() are:
- As the final return value, send the full response from the model you are
integrating into chattr.
- If streaming (stream is TRUE), output the current output as it is occurring.
Generally through a cat() function call.
Here is a simple toy example that shows how to create a custom method for
chattr:
library(chattr)

ch_submit.ch_my_llm <- function(defaults,
                                prompt = NULL,
                                stream = NULL,
                                prompt_build = TRUE,
                                preview = FALSE,
                                ...) {
  # Use `prompt_build` to prepend the prompt
  if (prompt_build) prompt <- paste0("Use the tidyverse\n", prompt)
  # If `preview` is true, return the resulting prompt back
  if (preview) return(prompt)
  llm_response <- paste0("You said this: \n", prompt)
  if (stream) {
    cat(">> Streaming:\n")
    for (i in seq_len(nchar(llm_response))) {
      # If `stream` is true, make sure to `cat()` the current output
      cat(substr(llm_response, i, i))
      Sys.sleep(0.1)
    }
  }
  # Make sure to return the entire output from the LLM at the end
  llm_response
}
chattr_defaults("console", provider = "my llm")
#>
chattr("hello")
#> >> Streaming:
#> You said this:
#> Use the tidyverse
#> hello
chattr("I can use it right from RStudio", prompt_build = FALSE)
#> >> Streaming:
#> You said this:
#> I can use it right from RStudio
For more detail, please visit the function's reference page, linked
here.
Feedback welcome
After trying it out, feel free to submit your thoughts or issues in
chattr's GitHub repository.