Mirror of https://github.com/trustgraph-ai/trustgraph.git, synced 2026-04-25 08:26:21 +02:00.

Update docs for API/CLI changes in 1.0 (#420): update some API basics for the 0.23/1.0 API change.

# Getting Started

## Preparation

> [!TIP]
> Before launching `TrustGraph`, be sure to have the `Docker Engine` or `Podman Machine` installed and running on the host machine.

> [!TIP]
> If using `Podman`, the only change is to substitute `podman` for `docker` in all commands.

## Create the configuration

All `TrustGraph` components are deployed through a `Docker Compose` file. There are **16** `Docker Compose` files to choose from, depending on the desired model deployment and the choice of graph store (`Cassandra`, `Neo4j`, or `FalkorDB`). The model deployment options are:

- `AzureAI` serverless endpoint for deployed models in Azure
- `Bedrock` API for models deployed in AWS Bedrock
- `Claude` through Anthropic's API
- `Cohere` through Cohere's API
- `Mix` for mixed model deployments
- `Ollama` for local model deployments
- `OpenAI` for OpenAI's API
- `VertexAI` for models deployed in Google Cloud

This guide talks you through the Compose file launch, which is the easiest way to launch on a standalone machine or a single cloud instance. See [README](README.md) for links to other deployment mechanisms.

`Docker Compose` enables the following functions:

- Run the required components for a full end-to-end `Graph RAG` knowledge pipeline
- Inspect processing logs
- Load a text corpus and begin knowledge extraction
- Verify extracted graph edges
- Model-agnostic `Graph RAG`

To create the deployment configuration, go to the [deployment portal](https://config-ui.demo.trustgraph.ai/) and follow the instructions.

- Select Docker Compose or Podman Compose as the deployment mechanism.
- Use Cassandra for the graph store; it's the easiest and most tested.
- Use Qdrant for the vector store; it's the easiest and most tested.
- Chunker: Recursive, with a chunk size of 1000 and an overlap of 50, should be fine.
- Pick your favourite LLM model:
  - If you have enough horsepower in a local GPU, LMStudio is an easy starting point for a local model deployment. Ollama is also fairly easy.
  - VertexAI on Google is relatively straightforward for a cloud model-as-a-service LLM, and you can get some free credits.
- Set max output tokens as per the model; 2048 is safe.
- Customisation: check LLM Prompt Manager and Agent Tools.
- Finish deployment, then generate and download the deployment bundle. Read the extra deploy steps on that page.

## Preparing TrustGraph

Below is a step-by-step guide to deploy `TrustGraph` and extract knowledge from a PDF.
```
python3 -m venv env
. env/bin/activate
pip3 install pulsar-client
pip3 install cassandra-driver
export PYTHONPATH=.
pip3 install trustgraph-cli
```

### Clone the GitHub Repo

```
git clone https://github.com/trustgraph-ai/trustgraph trustgraph
cd trustgraph
```

## TrustGraph as Docker Compose Files

Launching `TrustGraph` is as simple as running a single `Docker Compose` file. There are `Docker Compose` files for each possible model deployment and graph store configuration. Depending on your chosen model and graph store deployment, choose one of the following launch files:

| Model Deployment | Graph Store | Launch File |
| ---------------- | ----------- | ----------- |
| AWS Bedrock | Cassandra | `tg-launch-bedrock-cassandra.yaml` |
| AWS Bedrock | Neo4j | `tg-launch-bedrock-neo4j.yaml` |
| AzureAI Serverless Endpoint | Cassandra | `tg-launch-azure-cassandra.yaml` |
| AzureAI Serverless Endpoint | Neo4j | `tg-launch-azure-neo4j.yaml` |
| Anthropic API | Cassandra | `tg-launch-claude-cassandra.yaml` |
| Anthropic API | Neo4j | `tg-launch-claude-neo4j.yaml` |
| Cohere API | Cassandra | `tg-launch-cohere-cassandra.yaml` |
| Cohere API | Neo4j | `tg-launch-cohere-neo4j.yaml` |
| Mixed Deployment | Cassandra | `tg-launch-mix-cassandra.yaml` |
| Mixed Deployment | Neo4j | `tg-launch-mix-neo4j.yaml` |
| Ollama | Cassandra | `tg-launch-ollama-cassandra.yaml` |
| Ollama | Neo4j | `tg-launch-ollama-neo4j.yaml` |
| OpenAI | Cassandra | `tg-launch-openai-cassandra.yaml` |
| OpenAI | Neo4j | `tg-launch-openai-neo4j.yaml` |
| VertexAI | Cassandra | `tg-launch-vertexai-cassandra.yaml` |
| VertexAI | Neo4j | `tg-launch-vertexai-neo4j.yaml` |

> [!CAUTION]
> All tokens, paths, and authentication files must be set **PRIOR** to launching a `Docker Compose` file.

## Chunking

Extraction performance can vary significantly with chunk size. The default chunk size is `2000` characters using a recursive method. Decreasing the chunk size may increase the number of extracted graph edges, at the cost of a longer extraction process. The chunking method and sizes can be adjusted in the selected `YAML` file: find the section for `chunker` and, under the command list, modify the following parameters:

```
- "chunker-recursive" # recursive text splitter in characters
- "chunker-token" # recursive style token splitter
- "--chunk-size"
- "<number-of-characters/tokens-per-chunk>"
- "--chunk-overlap"
- "<number-of-characters/tokens-to-overlap-per-chunk>"
```
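
To build intuition for how chunk size and overlap trade off, here is a minimal sliding-window sketch (illustrative only: the real `chunker-recursive` splitter prefers natural boundaries such as paragraphs before falling back to characters):

```python
def chunk_text(text: str, chunk_size: int = 2000, overlap: int = 100) -> list[str]:
    """Split text into fixed-size character chunks with the given overlap."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # each chunk starts this far after the last
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# Halving the chunk size roughly doubles the number of chunks (and LLM calls).
doc = "x" * 6000
print(len(chunk_text(doc, chunk_size=2000, overlap=100)))  # 4
print(len(chunk_text(doc, chunk_size=1000, overlap=50)))   # 7
```

Smaller chunks tend to surface more graph edges per unit of text, but they multiply the number of model calls, which is why extraction takes longer.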

## Model Parameters

Most configurations allow adjusting some model parameters. For configurations with adjustable parameters, the `temperature` and `max_output` tokens can be set in the selected `YAML` file:

```
- "-x"
- <max_model_output_tokens>
- "-t"
- <model_temperature>
```

> [!TIP]
> The default `temperature` in `TrustGraph` is set to `0.0`. Even for models with long input contexts, the max output might only be 2048 tokens (as with some instances of Llama3.1). Make sure `max_output` is not set higher than allowed for a given model.

## Choose a TrustGraph Configuration

Choose one of the `Docker Compose` files that matches your preferred model and graph store deployment. Each deployment requires setting some environment variables and commands in the chosen `YAML` file. All variables and commands must be set prior to running the chosen `Docker Compose` file.

### AWS Bedrock API

```
export AWS_ACCESS_KEY_ID=<ID-KEY-HERE>
export AWS_SECRET_ACCESS_KEY=<TOKEN-GOES-HERE>
export AWS_DEFAULT_REGION=<REGION-HERE>
docker compose -f tg-launch-bedrock-cassandra.yaml up -d # Using Cassandra as the graph store
docker compose -f tg-launch-bedrock-neo4j.yaml up -d # Using Neo4j as the graph store
```

> [!NOTE]
> The current defaults for `AWS Bedrock` are `Mistral Large 2 (24.07)` in `US-West-2`.
To change the model and region, go the sections for `text-completion` and `text-completion-rag` in the `tg-launch-bedrock.yaml` file. Add the following lines under the `command` section:
|
||||
|
||||
```
|
||||
- "-r"
|
||||
- "<"us-east-1" or "us-west-2">
|
||||
- "-m"
|
||||
- "<bedrock-api-model-name-here>
|
||||
```
|
||||
|
||||
> [!TIP]
|
||||
> Having two separate modules for `text-completion` and `text-completion-rag` allows for using one model for extraction and a different model for RAG.
|
||||
|
||||

### AzureAI Serverless Model Deployment

```
export AZURE_ENDPOINT=<https://ENDPOINT.HOST.GOES.HERE/>
export AZURE_TOKEN=<TOKEN-GOES-HERE>
docker compose -f tg-launch-azure-cassandra.yaml up -d # Using Cassandra as the graph store
docker compose -f tg-launch-azure-neo4j.yaml up -d # Using Neo4j as the graph store
```

### Claude through Anthropic API

```
export CLAUDE_KEY=<TOKEN-GOES-HERE>
docker compose -f tg-launch-claude-cassandra.yaml up -d # Using Cassandra as the graph store
docker compose -f tg-launch-claude-neo4j.yaml up -d # Using Neo4j as the graph store
```

### Cohere API

```
export COHERE_KEY=<TOKEN-GOES-HERE>
docker compose -f tg-launch-cohere-cassandra.yaml up -d # Using Cassandra as the graph store
docker compose -f tg-launch-cohere-neo4j.yaml up -d # Using Neo4j as the graph store
```

### Ollama Hosted Model Deployment

> [!TIP]
> The power of `Ollama` is the flexibility it provides in Language Model deployments. Being able to run LMs with `Ollama` enables fully secure `TrustGraph` AI pipelines that don't rely on any external APIs: no data leaves the host environment or network. More information on `Ollama` deployments can be found [here](https://trustgraph.ai/docs/deploy/localnetwork).

> [!NOTE]
> The current default model for an `Ollama` deployment is `Gemma2:9B`.

```
export OLLAMA_HOST=<hostname> # Set to the location of the machine running Ollama, such as http://localhost:11434
docker compose -f tg-launch-ollama-cassandra.yaml up -d # Using Cassandra as the graph store
docker compose -f tg-launch-ollama-neo4j.yaml up -d # Using Neo4j as the graph store
```

> [!NOTE]
> On `MacOS`, if running `Ollama` locally, set `OLLAMA_HOST=http://host.docker.internal:11434`.

To change the `Ollama` model, first make sure the desired model has been pulled and fully downloaded. In the `YAML` file, go to the sections for `text-completion` and `text-completion-rag`. Under `command`, add the following two lines:

```
- "-m"
- "<model-name-here>"
```

### OpenAI API

```
export OPENAI_TOKEN=<TOKEN-GOES-HERE>
docker compose -f tg-launch-openai-cassandra.yaml up -d # Using Cassandra as the graph store
docker compose -f tg-launch-openai-neo4j.yaml up -d # Using Neo4j as the graph store
```

### VertexAI through GCP

```
mkdir -p vertexai
cp <your config> vertexai/private.json
docker compose -f tg-launch-vertexai-cassandra.yaml up -d # Using Cassandra as the graph store
docker compose -f tg-launch-vertexai-neo4j.yaml up -d # Using Neo4j as the graph store
```

> [!TIP]
> If you're running `SELinux` on Linux, you may need to set the permissions on the `vertexai` directory so that the key file can be mounted in a Docker container, using the following command:
>
> ```
> chcon -Rt svirt_sandbox_file_t vertexai/
> ```

## Mixing Models

One of the most powerful features of `TrustGraph` is the ability to use one model deployment for the `Naive Extraction` process and a different model for `RAG`. Since `Naive Extraction` can be a one-time process, it makes sense to use a more performant model to generate the most comprehensive set of graph edges and embeddings possible. With a high-quality extraction, it's possible to use a much smaller model for `RAG` and still achieve "big" model performance.

A "split" model deployment uses `tg-launch-mix.yaml`. There are two modules: `text-completion` and `text-completion-rag`. The `text-completion` module is called only for extraction, while `text-completion-rag` is called only for RAG.

### Choosing Model Deployments

Before launching the `Docker Compose` file, the desired model deployments must be specified. The options are:

- `text-completion-azure`
- `text-completion-bedrock`
- `text-completion-claude`
- `text-completion-cohere`
- `text-completion-ollama`
- `text-completion-openai`
- `text-completion-vertexai`

For the `text-completion` and `text-completion-rag` modules in the `tg-launch-mix.yaml` file, choose one of the above deployment options and enter it as the first line under `command` for each of the `text-completion` and `text-completion-rag` modules. Depending on the model deployment, other variables such as endpoints, keys, and model names must be specified under the `command` section as well. Once all variables and commands have been set, the `mix` deployment can be launched with:

```
docker compose -f tg-launch-mix-cassandra.yaml up -d # Using Cassandra as the graph store
docker compose -f tg-launch-mix-neo4j.yaml up -d # Using Neo4j as the graph store
```

> [!TIP]
> Any of the `YAML` files can be modified for a "split" deployment by adding the `text-completion-rag` module.
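
To make the split concrete, here is a hypothetical fragment of what the two module definitions might look like in a mix launch file (module names are taken from the list above; the exact service layout and arguments in your generated file may differ):

```
  text-completion:
    command:
      - "text-completion-vertexai"   # larger model for extraction
      - "-x"
      - "4096"

  text-completion-rag:
    command:
      - "text-completion-ollama"     # smaller local model for RAG
      - "-m"
      - "<model-name-here>"
```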

## Running TrustGraph

```
docker compose -f docker-compose.yaml up -d
```

After running the chosen `Docker Compose` file, all `TrustGraph` services will launch and be ready to run `Naive Extraction` jobs and provide `RAG` responses using the extracted knowledge.

### Verify TrustGraph Containers

On first running a `Docker Compose` file, it may take a while (depending on your network connection) to pull all the necessary components. Once all of the components have been pulled, a quick check that the TrustGraph processors have started is:

```
tg-show-processor-state
```

Processors start quickly, but it can take a while (~60 seconds) for Pulsar and Cassandra to start.

If you have any concerns, check that the TrustGraph containers are running:

```
docker ps
docker ps -a
```

> [!TIP]
> Before proceeding, allow the system to stabilize. A safe warm-up period is `120 seconds`. If services seem to be "stuck", it could be because they did not have time to initialize correctly and are trying to restart. Waiting `120 seconds` before launching any scripts should provide much more reliable operation.

### Everything running

An easy way to check that all the main startup is complete:

```
tg-show-flows
```

You should see a default flow. If you see an error, leave it a moment and try again.

### Load some sample documents

```
tg-load-sample-documents
```

### Workbench

A UI is launched on port 8888; see if you can reach it at
[http://localhost:8888/](http://localhost:8888/)

Verify things are working:

- Go to the prompts page and check that you can see some prompts.
- Go to the library page and check that you can see the sample documents you just loaded.

### Load a document

- On the library page, select a document. "Beyond State Vigilance" is a smallish doc to work with.
- Select the doc by clicking on it.
- Select Submit at the bottom of the screen on the action bar.
- Select a processing flow; use the default.
- Click Submit.

### Look in Grafana

A Grafana instance is launched on port 3000; see if you can reach it at
[http://localhost:3000/](http://localhost:3000/)

- Log in as admin, password admin.
- Skip the password change screen, or change the password.
- Verify things are working by selecting the TrustGraph dashboard.
- After a short while, you should see the backlog rise to a few hundred document chunks.

Once some chunks are loaded, you can start to work with the document.

### Graph Parsing

To check that the knowledge graph is successfully parsing data:

```
tg-show-graph
```

The output should be a set of semantic triples in [N-Triples](https://www.w3.org/TR/rdf12-n-triples/) format.

For example:

```
http://trustgraph.ai/e/enterprise http://www.w3.org/2000/01/rdf-schema#label Enterprise
http://trustgraph.ai/e/enterprise http://www.w3.org/2004/02/skos/core#definition A prototype space shuttle orbiter used for atmospheric flight testing.
```
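
Because the triple output is line-oriented, it is easy to post-process. Here is a small sketch (assuming whitespace-separated subject / predicate / object lines, as in the simplified output above) that counts edges and distinct subjects; on a live system, piping `tg-show-graph` through `wc -l` gives a similar quick edge count:

```python
def triple_stats(lines: str) -> tuple[int, int]:
    """Return (edge count, distinct subject count) for line-oriented triples."""
    edges = 0
    subjects = set()
    for line in lines.splitlines():
        parts = line.split(maxsplit=2)  # subject, predicate, object (rest of line)
        if len(parts) == 3:
            edges += 1
            subjects.add(parts[0])
    return edges, len(subjects)

sample = (
    "http://trustgraph.ai/e/enterprise http://www.w3.org/2000/01/rdf-schema#label Enterprise\n"
    "http://trustgraph.ai/e/enterprise http://www.w3.org/2004/02/skos/core#definition A prototype orbiter."
)
print(triple_stats(sample))  # (2, 1)
```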

### Work with the document

Back on the workbench, click on the 'Vector search' tab and search for something, e.g. "state". You should see some search results. Click on results to start exploring the knowledge graph.

Click on Graph view on an explored page to visualize the graph.
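
Under the hood, vector search ranks stored chunk embeddings by similarity to the query embedding. A toy sketch of cosine-similarity ranking (illustrative only; in the real deployment, Qdrant performs this search over the stored embeddings):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Rank stored vectors against a query vector, most similar first.
store = {"doc-1": [1.0, 0.0], "doc-2": [0.7, 0.7], "doc-3": [0.0, 1.0]}
query = [1.0, 0.1]
ranked = sorted(store, key=lambda k: cosine(store[k], query), reverse=True)
print(ranked)
```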

### Queries over the document

On the workbench, click Graph RAG and enter a question, e.g. "What is this document about?"

### Shutting Down TrustGraph

When shutting down `TrustGraph`, it's best to shut down all Docker containers and volumes:

```
docker compose -f docker-compose.yaml down -v -t 0
```

> [!TIP]
> To check for any remaining volumes, list the Docker volumes:
>
> ```
> docker volume ls
> ```