diff --git a/README.md b/README.md
index 4aba5b5c..469f25f2 100644
--- a/README.md
+++ b/README.md
@@ -60,11 +60,13 @@ package installed can also run the entire architecture.
   chunking algorithm to produce smaller text chunks.
 - `embeddings-hf` - A service which analyses text and returns a vector
   embedding using one of the HuggingFace embeddings models.
+- `embeddings-ollama` - A service which analyses text and returns a vector
+  embedding using an Ollama embeddings model.
 - `embeddings-vectorize` - Uses an embeddings service to get a vector
   embedding which is added to the processor payload.
 - `graph-rag` - A query service which applies a Graph RAG algorithm to
   provide a response to a text prompt.
-- `graph-write-cassandra` - Takes knowledge graph edges and writes them to
+- `triples-write-cassandra` - Takes knowledge graph edges and writes them to
   a Cassandra store.
 - `kg-extract-definitions` - knowledge extractor - examines text and
   produces graph edges.
@@ -80,15 +82,15 @@ package installed can also run the entire architecture.
   format. For instance, the wrapping of text between lines in a PDF
   document is not semantically encoded, so the decoder will see wrapped
   lines as space-separated.
-- `vector-write-milvus` - Takes vector-entity mappings and records them
+- `ge-write-milvus` - Takes graph embedding mappings and records them
   in the vector embeddings store.
 
 ## LM Specific Modules
 
-- `llm-azure-text` - Sends request to AzureAI serverless endpoint
-- `llm-claude-text` - Sends request to Anthropic's API
-- `llm-ollama-text` - Sends request to LM running using Ollama
-- `llm-vertexai-text` - Sends request to model available through VertexAI API
+- `text-completion-azure` - Sends request to AzureAI serverless endpoint
+- `text-completion-claude` - Sends request to Anthropic's API
+- `text-completion-ollama` - Sends request to LM running using Ollama
+- `text-completion-vertexai` - Sends request to model available through VertexAI API
 
 ## Quickstart Guide
 
diff --git a/docs/README.quickstart-docker-compose.md b/docs/README.quickstart-docker-compose.md
index 9845d8f0..37e965af 100644
--- a/docs/README.quickstart-docker-compose.md
+++ b/docs/README.quickstart-docker-compose.md
@@ -48,7 +48,6 @@ following `Docker Compose` files:
 **NOTE**: All tokens, paths, and authentication files must be set **PRIOR**
 to launching a `Docker Compose` file.
-
 
 #### AzureAI Serverless Model Deployment
 
 ```
@@ -159,7 +158,7 @@ Similar output to above processes, except many entries instead.
 `Language Model Inference`:
 
 ```
-docker logs trustgraph-llm-1
+docker logs trustgraph-text-completion-1
 ```
 
 Output should be a sequence of entries:
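
The other renamed services can be inspected the same way as the `text-completion` container shown in the quickstart change; a minimal sketch, assuming the Compose project is named `trustgraph` and the containers follow Compose's usual `<project>-<service>-<index>` naming. Only `trustgraph-text-completion-1` appears in the diff itself; the remaining container names are assumptions derived from the new service names.

```
# Follow the renamed text-completion service's logs
docker logs -f trustgraph-text-completion-1

# Assumed container names for the other renamed services,
# following the same <project>-<service>-1 pattern
docker logs trustgraph-triples-write-cassandra-1
docker logs trustgraph-ge-write-milvus-1
```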