* first commit of the ChatGPT selector
* stashing changes as checkpoint
* pending changes for chrome extension
* committing a working version
* converting conversation into messages object
* working version of the extension
* working version with fixed styling and better tested
* fixed the issue where the drop-down was too small, and the issue where the route was not displayed on the screen
* updating folder with README.md
* fixes for the default model, and updated the manifest.json file
* made changes to dark mode; improved styles
* fix installation bug
* added dark mode
* fixed default model selection
* fixed the scrolling issue
* Update README.md
* updated content.js to update the labels even when the default model is selected
* fixed readme
* updated the title of the package
* removing the unnecessary permissions
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-329.local>
Co-authored-by: cotran <cotran2@utexas.edu>
Co-authored-by: Shuguang Chen <54548843+nehcgs@users.noreply.github.com>
* local support for Arch-Router via Ollama
* fixed issue with the non-local YAML config
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-329.local>
We were using the same port for both chatui and app_server, which was causing a conflict. This code change updates the host port for app_server to 18083 and updates arch_config accordingly.
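A minimal sketch of the kind of host-port remapping described above, assuming a Compose-style service definition (the service's container port and chatui's mapping are illustrative assumptions, not taken from this repo):

```yaml
# docker-compose.yml (sketch; container ports are assumptions)
services:
  chatui:
    ports:
      - "18080:8080"   # hypothetical chatui mapping
  app_server:
    ports:
      - "18083:8080"   # host port moved to 18083 so it no longer clashes with chatui
```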
* fixed the issue with Groq LLMs that require openai in the /v1/chat/completions path. My first change
* updated the GH actions with keys for Groq
* adding missing Groq API keys
* add llama-3.2-3b-preview to the models after adding Groq to the demo
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-261.local>
* stashing changes on my local branch
* updated the Java demo with debug points and Jaeger tracing
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-261.local>
* updated the Spotify bearer authorization README and fixed main README links
* minor fixes to the Spotify README
---------
Co-authored-by: Salman Paracha <salmanparacha@MacBook-Pro-261.local>
* add support for custom LLMs with SSL support
Add support for using custom LLMs that are served over HTTPS.
* add instructions on how to add custom inference endpoint
* fix formatting
* add more details
* Apply suggestions from code review
Co-authored-by: Salman Paracha <salman.paracha@gmail.com>
* Apply suggestions from code review
* fix precommit
---------
Co-authored-by: Salman Paracha <salman.paracha@gmail.com>
* Fix llm_routing provider element
We replaced provider with provider_interface to make it clearer to developers which provider API/backend is being used. During that upgrade we removed support for mistral under provider to encourage developers to move to provider_interface, but this demo was not updated and still used mistral with provider. This code change fixes it by replacing provider with provider_interface.
Signed-off-by: Adil Hafeez <adil.hafeez@gmail.com>
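A sketch of the config change described above, assuming an arch_config-style YAML with a provider list (the key layout and model name are assumptions for illustration, not taken from this repo):

```yaml
# arch_config.yaml (sketch; surrounding keys and model id are assumptions)
llm_providers:
  - name: mistral
    # was: provider: mistral   (no longer supported)
    provider_interface: mistral
    model: mistral-large-latest   # hypothetical model id
```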
* fix the path
* move
* add more details
* fix
* Apply suggestions from code review
* fix
* fix
---------
Signed-off-by: Adil Hafeez <adil.hafeez@gmail.com>