Combining 'LocalAI' + 'Continue' to Create a Private Co-Pilot Coding Assistant!
Hello everyone!
I am working on better workflows to bring back a more consistent posting schedule. In the meantime, I'd like to leave you with a new update from LocalAI & Continue.
Check these projects out! More info from the Continue & LocalAI teams below:
Continue
The open-source autopilot for software development: a VS Code extension that brings the power of ChatGPT to your IDE.
LocalAI
LocalAI is a drop-in replacement REST API that's compatible with the OpenAI API specifications for local inferencing. It allows you to run LLMs (and more) locally or on-prem with consumer-grade hardware, supporting multiple model families that are compatible with the ggml format. It does not require a GPU.
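Because LocalAI speaks the same REST API as OpenAI, any OpenAI-compatible client can simply point at it. As a quick illustration, here is a minimal sketch of a chat completion request against a local instance, assuming LocalAI is listening on port 8080 and a model named "gpt-3.5-turbo" has been configured (both assumptions match the example later in this post):

```bash
# Ask a locally hosted model a question through LocalAI's
# OpenAI-compatible /v1/chat/completions endpoint.
# Assumes LocalAI is running on localhost:8080 with a model named "gpt-3.5-turbo".
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Write a Go function that reverses a string."}],
    "temperature": 0.7
  }'
```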
Combining the Power of Continue + LocalAI!
Note
Starting with this release, the llama backend supports only gguf files (see 943). LocalAI, however, still supports ggml files. We ship a version of llama.cpp from before that change in a separate backend, named llama-stable, so that ggml files can still be loaded. If you were specifying the llama backend manually to load ggml files, from this release on you should use llama-stable instead, or omit the backend entirely (LocalAI will handle this automatically).
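If you need to keep serving an existing ggml model, the backend can be pinned per model in its YAML definition. The sketch below (written as a shell heredoc) shows roughly what that might look like; the model name and file are placeholders, and the exact fields should be checked against LocalAI's model configuration documentation:

```bash
# Hypothetical example: pin an existing ggml model to the llama-stable backend.
# "my-ggml-model" and the .bin filename are placeholders; adjust to your setup.
mkdir -p models
cat > models/my-ggml-model.yaml <<'EOF'
name: my-ggml-model
backend: llama-stable   # use the pre-gguf llama.cpp build for ggml files
parameters:
  model: my-ggml-model.bin
EOF
```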
Continue
This document presents an example of integration with continuedev/continue.
For a live demonstration, please click on the link below:
Integration Setup Walkthrough
- As outlined in `continue`'s documentation, install the Visual Studio Code extension from the marketplace and open it.
- In this example, LocalAI will download the gpt4all model and set it up as "gpt-3.5-turbo". Refer to the `docker-compose.yaml` file for details.
```bash
# Clone LocalAI
git clone https://github.com/go-skynet/LocalAI

cd LocalAI/examples/continue

# Start with docker-compose
docker-compose up --build -d
```
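Once the containers are up, you can sanity-check that the model was set up by listing the models LocalAI exposes (assuming the default port 8080 from this example):

```bash
# List the models LocalAI currently serves; "gpt-3.5-turbo" should appear
# once the gpt4all model has finished downloading and loading.
curl http://localhost:8080/v1/models
```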
- Type `/config` within Continue's VSCode extension, or edit the file located at `~/.continue/config.py` on your system with the following configuration:
```py
from continuedev.src.continuedev.libs.llm.openai import OpenAI, OpenAIServerInfo

config = ContinueConfig(
    ...
    models=Models(
        default=OpenAI(
            api_key="my-api-key",
            model="gpt-3.5-turbo",
            openai_server_info=OpenAIServerInfo(
                api_base="http://localhost:8080",
                model="gpt-3.5-turbo"
            )
        )
    ),
)
```
This setup enables you to make queries directly to your model running in the Docker container. Note that the `api_key` does not need to be properly set up; it is included here as a placeholder.
If editing the configuration seems confusing, you may copy and paste the provided default `config.py` file over the existing one in `~/.continue/config.py` after initializing the extension in the VSCode IDE.
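For reference, that copy step amounts to something like the following sketch; the source path is an assumption about where the example's default `config.py` lives in the cloned repository, so adjust it to your checkout:

```bash
# Overwrite Continue's generated config with the example's default one.
# Source path is an assumption based on the example layout; adjust as needed.
cp LocalAI/examples/continue/config.py ~/.continue/config.py
```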