

{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "TCh9BTedHJK1"
},
"source": [
"![pageindex_banner](https://pageindex.ai/static/images/pageindex_banner.jpg)\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "nD0hb4TFHWTt"
},
"source": [
"<div align=\"center\">\n",
"<p><i>Reasoning-based RAG&nbsp; ◦ &nbsp;No Vector DB&nbsp; ◦ &nbsp;No Chunking&nbsp; ◦ &nbsp;Human-like Retrieval</i></p>\n",
"</div>\n",
"\n",
"<div align=\"center\">\n",
"<p>\n",
" <a href=\"https://vectify.ai\">🏠 Homepage</a>&nbsp; • &nbsp;\n",
" <a href=\"https://chat.pageindex.ai\">💻 Chat</a>&nbsp; • &nbsp;\n",
" <a href=\"https://pageindex.ai/mcp\">🔌 MCP</a>&nbsp; • &nbsp;\n",
" <a href=\"https://docs.pageindex.ai/quickstart\">📚 API</a>&nbsp; • &nbsp;\n",
" <a href=\"https://github.com/VectifyAI/PageIndex\">📦 GitHub</a>&nbsp; • &nbsp;\n",
" <a href=\"https://discord.com/invite/VuXuf29EUj\">💬 Discord</a>&nbsp; • &nbsp;\n",
" <a href=\"https://ii2abc2jejf.typeform.com/to/tK3AXl8T\">✉️ Contact</a>&nbsp;\n",
"</p>\n",
"</div>\n",
"\n",
"<div align=\"center\">\n",
"\n",
"[![Star us on GitHub](https://img.shields.io/github/stars/VectifyAI/PageIndex?style=for-the-badge&logo=github&label=⭐️%20Star%20Us)](https://github.com/VectifyAI/PageIndex) &nbsp;&nbsp; [![Follow us on X](https://img.shields.io/badge/Follow%20Us-000000?style=for-the-badge&logo=x&logoColor=white)](https://twitter.com/VectifyAI)\n",
"\n",
"</div>\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> Check out our blog post, \"[Do We Still Need OCR?](https://pageindex.ai/blog/do-we-need-ocr)\", for a more detailed discussion."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Ebvn5qfpcG1K"
},
"source": [
"# A Vision-based, Vectorless RAG System for Long Documents\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In modern document question answering (QA) systems, Optical Character Recognition (OCR) plays an important role by converting PDF pages into text that Large Language Models (LLMs) can process. The extracted text provides the contextual input that enables an LLM to answer questions about the document's content.\n",
"\n",
"Traditional OCR systems typically use a two-stage process: they first detect the layout of a PDF, dividing it into text, tables, and images, and then recognize and convert these elements into plain text. With the rise of vision-language models (VLMs) such as [Qwen-VL](https://github.com/QwenLM/Qwen3-VL) and [GPT-4.1](https://openai.com/index/gpt-4-1/), new end-to-end OCR models like [DeepSeek-OCR](https://github.com/deepseek-ai/DeepSeek-OCR) have emerged. These models jointly understand visual and textual information, enabling direct interpretation of PDFs without an explicit layout-detection step.\n",
"\n",
"However, this paradigm shift raises an important question: \n",
"\n",
"\n",
"> **If a VLM can already process both the document images and the query to produce an answer directly, do we still need the intermediate OCR step?**\n",
"\n",
"In this notebook, we present a practical implementation of a vision-based question-answering system for long documents that does not rely on OCR. Specifically, we use PageIndex as a reasoning-based retrieval layer and OpenAI's multimodal GPT-4.1 as the VLM for visual reasoning and answer generation.\n",
"\n",
"See the original [blog post](https://pageindex.ai/blog/do-we-need-ocr) for a more detailed discussion of how VLMs can replace traditional OCR pipelines in document question answering."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 📝 Notebook Overview\n",
"\n",
"This notebook demonstrates a *minimal*, **vision-based vectorless RAG** pipeline for long documents with PageIndex, using only visual context from PDF pages. You will learn how to:\n",
"- [x] Build a PageIndex tree structure of a document\n",
"- [x] Perform reasoning-based retrieval with tree search\n",
"- [x] Extract PDF page images of retrieved tree nodes for visual context\n",
"- [x] Generate answers with a VLM using PDF page images only (no OCR required)\n",
"\n",
"> ⚡ Note: This example uses PageIndex's reasoning-based retrieval with OpenAI's multimodal GPT-4.1 model for both tree search and visual context reasoning.\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7ziuTbbWcG1L"
},
"source": [
"## Step 0: Preparation\n",
"\n",
"This notebook demonstrates **Vision-based RAG** with PageIndex, using PDF page images as visual context for retrieval and answer generation.\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "edTfrizMFK4c"
},
"source": [
"#### 0.1 Install PageIndex"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true,
"id": "LaoB58wQFNDh"
},
"outputs": [],
"source": [
"%pip install -q --upgrade pageindex requests openai PyMuPDF"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "WVEWzPKGcG1M"
},
"source": [
"#### 0.2 Setup PageIndex"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "StvqfcK4cG1M"
},
"outputs": [],
"source": [
"from pageindex import PageIndexClient\n",
"import pageindex.utils as utils\n",
"\n",
"# Get your PageIndex API key from https://dash.pageindex.ai/api-keys\n",
"PAGEINDEX_API_KEY = \"YOUR_PAGEINDEX_API_KEY\"\n",
"pi_client = PageIndexClient(api_key=PAGEINDEX_API_KEY)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 0.3 Setup VLM\n",
"\n",
"Choose your preferred VLM. In this notebook, we use OpenAI's multimodal GPT-4.1."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import openai, fitz, base64, os\n",
"\n",
"# Setup OpenAI client\n",
"OPENAI_API_KEY = \"YOUR_OPENAI_API_KEY\"\n",
"\n",
"async def call_vlm(prompt, image_paths=None, model=\"gpt-4.1\"):\n",
" client = openai.AsyncOpenAI(api_key=OPENAI_API_KEY)\n",
" messages = [{\"role\": \"user\", \"content\": prompt}]\n",
" if image_paths:\n",
" content = [{\"type\": \"text\", \"text\": prompt}]\n",
" for image in image_paths:\n",
" if os.path.exists(image):\n",
" with open(image, \"rb\") as image_file:\n",
" image_data = base64.b64encode(image_file.read()).decode('utf-8')\n",
" content.append({\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\n",
" \"url\": f\"data:image/jpeg;base64,{image_data}\"\n",
" }\n",
" })\n",
" messages[0][\"content\"] = content\n",
" response = await client.chat.completions.create(model=model, messages=messages, temperature=0)\n",
" return response.choices[0].message.content.strip()"
]
},
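{
"cell_type": "markdown",
"metadata": {},
"source": [
"Optional sanity check: a text-only call to the `call_vlm` helper (no images). This assumes a valid `OPENAI_API_KEY` has been set above; notebooks support top-level `await`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Text-only smoke test of the VLM helper defined above\n",
"print(await call_vlm('Reply with the single word: ready'))"
]
},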
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 0.4 PDF Image Extraction Helper Functions\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def extract_pdf_page_images(pdf_path, output_dir=\"pdf_images\"):\n",
" os.makedirs(output_dir, exist_ok=True)\n",
" pdf_document = fitz.open(pdf_path)\n",
" page_images = {}\n",
" total_pages = len(pdf_document)\n",
" for page_number in range(len(pdf_document)):\n",
" page = pdf_document.load_page(page_number)\n",
" # Convert page to image\n",
" mat = fitz.Matrix(2.0, 2.0) # 2x zoom for better quality\n",
" pix = page.get_pixmap(matrix=mat)\n",
" img_data = pix.tobytes(\"jpeg\")\n",
" image_path = os.path.join(output_dir, f\"page_{page_number + 1}.jpg\")\n",
" with open(image_path, \"wb\") as image_file:\n",
" image_file.write(img_data)\n",
" page_images[page_number + 1] = image_path\n",
" print(f\"Saved page {page_number + 1} image: {image_path}\")\n",
" pdf_document.close()\n",
" return page_images, total_pages\n",
"\n",
"def get_page_images_for_nodes(node_list, node_map, page_images):\n",
" # Get PDF page images for retrieved nodes\n",
" image_paths = []\n",
" seen_pages = set()\n",
" for node_id in node_list:\n",
" node_info = node_map[node_id]\n",
" for page_num in range(node_info['start_index'], node_info['end_index'] + 1):\n",
" if page_num not in seen_pages:\n",
" image_paths.append(page_images[page_num])\n",
" seen_pages.add(page_num)\n",
" return image_paths\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "heGtIMOVcG1N"
},
"source": [
"## Step 1: PageIndex Tree Generation"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Mzd1VWjwMUJL"
},
"source": [
"#### 1.1 Submit a document for generating PageIndex tree"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "f6--eZPLcG1N",
"outputId": "ca688cfd-6c4b-4a57-dac2-f3c2604c4112"
},
"outputs": [],
"source": [
"import os, requests\n",
"\n",
"# You can also use our GitHub repo to generate PageIndex tree\n",
"# https://github.com/VectifyAI/PageIndex\n",
"\n",
"pdf_url = \"https://arxiv.org/pdf/1706.03762.pdf\" # the \"Attention Is All You Need\" paper\n",
"pdf_path = os.path.join(\"../data\", pdf_url.split('/')[-1])\n",
"os.makedirs(os.path.dirname(pdf_path), exist_ok=True)\n",
"\n",
"response = requests.get(pdf_url, timeout=60)\n",
"response.raise_for_status()\n",
"with open(pdf_path, \"wb\") as f:\n",
" f.write(response.content)\n",
"print(f\"Downloaded {pdf_url}\\n\")\n",
"\n",
"# Extract page images from PDF\n",
"print(\"Extracting page images...\")\n",
"page_images, total_pages = extract_pdf_page_images(pdf_path)\n",
"print(f\"Extracted {len(page_images)} page images from {total_pages} total pages.\\n\")\n",
"\n",
"doc_id = pi_client.submit_document(pdf_path)[\"doc_id\"]\n",
"print('Document Submitted:', doc_id)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "4-Hrh0azcG1N"
},
"source": [
"#### 1.2 Get the generated PageIndex tree structure"
]
},
{
"cell_type": "code",
"execution_count": 65,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
},
"id": "b1Q1g6vrcG1O",
"outputId": "dc944660-38ad-47ea-d358-be422edbae53"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Simplified Tree Structure of the Document:\n",
"[{'title': 'Attention Is All You Need',\n",
" 'node_id': '0000',\n",
" 'page_index': 1,\n",
" 'prefix_summary': '# Attention Is All You Need\\n\\nAshish Vasw...',\n",
" 'nodes': [{'title': 'Abstract',\n",
" 'node_id': '0001',\n",
" 'page_index': 1,\n",
" 'summary': 'The text introduces the Transformer, a n...'},\n",
" {'title': '1 Introduction',\n",
" 'node_id': '0002',\n",
" 'page_index': 2,\n",
" 'summary': 'The text introduces the Transformer, a n...'},\n",
" {'title': '2 Background',\n",
" 'node_id': '0003',\n",
" 'page_index': 2,\n",
" 'summary': 'This section discusses the Transformer m...'},\n",
" {'title': '3 Model Architecture',\n",
" 'node_id': '0004',\n",
" 'page_index': 2,\n",
" 'prefix_summary': 'The text describes the encoder-decoder a...',\n",
" 'nodes': [{'title': '3.1 Encoder and Decoder Stacks',\n",
" 'node_id': '0005',\n",
" 'page_index': 3,\n",
" 'summary': 'The text describes the encoder and decod...'},\n",
" {'title': '3.2 Attention',\n",
" 'node_id': '0006',\n",
" 'page_index': 3,\n",
" 'prefix_summary': '### 3.2 Attention\\n\\nAn attention function...',\n",
" 'nodes': [{'title': '3.2.1 Scaled Dot-Product Attention',\n",
" 'node_id': '0007',\n",
" 'page_index': 4,\n",
" 'summary': 'The text describes Scaled Dot-Product At...'},\n",
" {'title': '3.2.2 Multi-Head Attention',\n",
" 'node_id': '0008',\n",
" 'page_index': 4,\n",
" 'summary': 'The text describes Multi-Head Attention,...'},\n",
" {'title': '3.2.3 Applications of Attention in our M...',\n",
" 'node_id': '0009',\n",
" 'page_index': 5,\n",
" 'summary': 'The text describes the three application...'}]},\n",
" {'title': '3.3 Position-wise Feed-Forward Networks',\n",
" 'node_id': '0010',\n",
" 'page_index': 5,\n",
" 'summary': '### 3.3 Position-wise Feed-Forward Netwo...'},\n",
" {'title': '3.4 Embeddings and Softmax',\n",
" 'node_id': '0011',\n",
" 'page_index': 5,\n",
" 'summary': 'The text describes the use of learned em...'},\n",
" {'title': '3.5 Positional Encoding',\n",
" 'node_id': '0012',\n",
" 'page_index': 6,\n",
" 'summary': 'This section explains the necessity of p...'}]},\n",
" {'title': '4 Why Self-Attention',\n",
" 'node_id': '0013',\n",
" 'page_index': 6,\n",
" 'summary': 'This text compares self-attention layers...'},\n",
" {'title': '5 Training',\n",
" 'node_id': '0014',\n",
" 'page_index': 7,\n",
" 'prefix_summary': '## 5 Training\\n\\nThis section describes th...',\n",
" 'nodes': [{'title': '5.1 Training Data and Batching',\n",
" 'node_id': '0015',\n",
" 'page_index': 7,\n",
" 'summary': '### 5.1 Training Data and Batching\\n\\nWe t...'},\n",
" {'title': '5.2 Hardware and Schedule',\n",
" 'node_id': '0016',\n",
" 'page_index': 7,\n",
" 'summary': '### 5.2 Hardware and Schedule\\n\\nWe traine...'},\n",
" {'title': '5.3 Optimizer',\n",
" 'node_id': '0017',\n",
" 'page_index': 7,\n",
" 'summary': '### 5.3 Optimizer\\n\\nWe used the Adam opti...'},\n",
" {'title': '5.4 Regularization',\n",
" 'node_id': '0018',\n",
" 'page_index': 7,\n",
" 'summary': 'The text details three regularization te...'}]},\n",
" {'title': '6 Results',\n",
" 'node_id': '0019',\n",
" 'page_index': 8,\n",
" 'prefix_summary': '## 6 Results\\n',\n",
" 'nodes': [{'title': '6.1 Machine Translation',\n",
" 'node_id': '0020',\n",
" 'page_index': 8,\n",
" 'summary': 'The text details the performance of a Tr...'},\n",
" {'title': '6.2 Model Variations',\n",
" 'node_id': '0021',\n",
" 'page_index': 8,\n",
" 'summary': 'This text details experiments varying co...'},\n",
" {'title': '6.3 English Constituency Parsing',\n",
" 'node_id': '0022',\n",
" 'page_index': 9,\n",
" 'summary': 'The text describes experiments evaluatin...'}]},\n",
" {'title': '7 Conclusion',\n",
" 'node_id': '0023',\n",
" 'page_index': 10,\n",
" 'summary': 'This text concludes by presenting the Tr...'},\n",
" {'title': 'References',\n",
" 'node_id': '0024',\n",
" 'page_index': 10,\n",
" 'summary': 'The provided text is a collection of ref...'},\n",
" {'title': 'Attention Visualizations',\n",
" 'node_id': '0025',\n",
" 'page_index': 13,\n",
" 'summary': 'The text provides examples of attention ...'}]}]\n"
]
}
],
"source": [
"if pi_client.is_retrieval_ready(doc_id):\n",
" tree = pi_client.get_tree(doc_id, node_summary=True)['result']\n",
" print('Simplified Tree Structure of the Document:')\n",
" utils.print_tree(tree, exclude_fields=['text'])\n",
"else:\n",
" print(\"Processing document, please try again later...\")"
]
},
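{
"cell_type": "markdown",
"metadata": {},
"source": [
"> 💡 Tip: instead of re-running the cell above by hand, you can poll until retrieval is ready. The minimal sketch below reuses `is_retrieval_ready` and `get_tree` exactly as above; the 5-second interval and 2-minute cap are arbitrary choices, not part of the PageIndex API."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"\n",
"# Poll until PageIndex has finished processing the document, then fetch the tree\n",
"deadline = time.time() + 120  # arbitrary 2-minute cap\n",
"while not pi_client.is_retrieval_ready(doc_id):\n",
"    if time.time() > deadline:\n",
"        raise TimeoutError('Document processing took too long; please try again later.')\n",
"    time.sleep(5)\n",
"tree = pi_client.get_tree(doc_id, node_summary=True)['result']\n",
"print('Retrieval ready.')"
]
},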
{
"cell_type": "markdown",
"metadata": {
"id": "USoCLOiQcG1O"
},
"source": [
"## Step 2: Reasoning-Based Retrieval with Tree Search"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 2.1 Reasoning-based retrieval with PageIndex to identify nodes that might contain relevant context"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "LLHNJAtTcG1O"
},
"outputs": [],
"source": [
"import json, copy\n",
"\n",
"query = \"What is the last operation in the Scaled Dot-Product Attention figure?\"\n",
"\n",
"# Use a deep copy so remove_fields cannot mutate the nested nodes of the original tree\n",
"tree_without_text = utils.remove_fields(copy.deepcopy(tree), fields=['text'])\n",
"\n",
"search_prompt = f\"\"\"\n",
"You are given a question and a tree structure of a document.\n",
"Each node contains a node id, node title, and a corresponding summary.\n",
"Your task is to find all tree nodes that are likely to contain the answer to the question.\n",
"\n",
"Question: {query}\n",
"\n",
"Document tree structure:\n",
"{json.dumps(tree_without_text, indent=2)}\n",
"\n",
"Please reply in the following JSON format:\n",
"{{\n",
" \"thinking\": \"<Your thinking process on which nodes are relevant to the question>\",\n",
" \"node_list\": [\"node_id_1\", \"node_id_2\", ..., \"node_id_n\"]\n",
"}}\n",
"Directly return the final JSON structure. Do not output anything else.\n",
"\"\"\"\n",
"\n",
"tree_search_result = await call_vlm(search_prompt)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 2.2 Print retrieved nodes and reasoning process"
]
},
{
"cell_type": "code",
"execution_count": 87,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 206
},
"id": "P8DVUOuAen5u",
"outputId": "6bb6d052-ef30-4716-f88e-be98bcb7ebdb"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Reasoning Process:\n",
"\n",
"The question asks about the last operation in the Scaled Dot-Product Attention figure. The most\n",
"relevant section is the one that describes Scaled Dot-Product Attention in detail, including its\n",
"computation and the figure itself. This is likely found in section 3.2.1 'Scaled Dot-Product\n",
"Attention' (node_id: 0007), which is a subsection of 3.2 'Attention' (node_id: 0006). The parent\n",
"section 3.2 may also contain the figure and its caption, as the summary mentions Figure 2 (which is\n",
"the Scaled Dot-Product Attention figure). Therefore, both node 0006 and node 0007 are likely to\n",
"contain the answer.\n",
"\n",
"Retrieved Nodes:\n",
"\n",
"Node ID: 0006\t Pages: 3-4\t Title: 3.2 Attention\n",
"Node ID: 0007\t Pages: 4\t Title: 3.2.1 Scaled Dot-Product Attention\n"
]
}
],
"source": [
"node_map = utils.create_node_mapping(tree, include_page_ranges=True, max_page=total_pages)\n",
"\n",
"# Defensive: strip Markdown code fences in case the model wraps its JSON reply in one\n",
"raw_result = tree_search_result.strip().removeprefix('```json').removesuffix('```').strip()\n",
"tree_search_result_json = json.loads(raw_result)\n",
"\n",
"print('Reasoning Process:\\n')\n",
"utils.print_wrapped(tree_search_result_json['thinking'])\n",
"\n",
"print('\\nRetrieved Nodes:\\n')\n",
"for node_id in tree_search_result_json[\"node_list\"]:\n",
" node_info = node_map[node_id]\n",
" node = node_info['node']\n",
" start_page = node_info['start_index']\n",
" end_page = node_info['end_index']\n",
" page_range = start_page if start_page == end_page else f\"{start_page}-{end_page}\"\n",
" print(f\"Node ID: {node['node_id']}\\t Pages: {page_range}\\t Title: {node['title']}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 2.3 Get corresponding PDF page images of retrieved nodes"
]
},
{
"cell_type": "code",
"execution_count": 81,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"Retrieved 2 PDF page image(s) for visual context.\n"
]
}
],
"source": [
"retrieved_nodes = tree_search_result_json[\"node_list\"]\n",
"retrieved_page_images = get_page_images_for_nodes(retrieved_nodes, node_map, page_images)\n",
"print(f'\\nRetrieved {len(retrieved_page_images)} PDF page image(s) for visual context.')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "10wOZDG_cG1O"
},
"source": [
"## Step 3: Answer Generation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 3.1 Generate answer using VLM with visual context"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 210
},
"id": "tcp_PhHzcG1O",
"outputId": "187ff116-9bb0-4ab4-bacb-13944460b5ff"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Generated answer using VLM with retrieved PDF page images as visual context:\n",
"\n",
"The last operation in the **Scaled Dot-Product Attention** figure is a **MatMul** (matrix\n",
"multiplication). This operation multiplies the attention weights (after softmax) by the value matrix\n",
"\\( V \\).\n"
]
}
],
"source": [
"# Generate answer using VLM with only PDF page images as visual context\n",
"answer_prompt = f\"\"\"\n",
"Answer the question based on the images of the document pages as context.\n",
"\n",
"Question: {query}\n",
"\n",
"Provide a clear, concise answer based only on the context provided.\n",
"\"\"\"\n",
"\n",
"print('Generated answer using VLM with retrieved PDF page images as visual context:\\n')\n",
"answer = await call_vlm(answer_prompt, retrieved_page_images)\n",
"utils.print_wrapped(answer)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Conclusion\n",
"\n",
"In this notebook, we demonstrated a *minimal* **vision-based, vectorless RAG pipeline** using PageIndex and a VLM. The system retrieves relevant pages by reasoning over the document's hierarchical tree index and answers questions directly from PDF images — no OCR required.\n",
"\n",
"If you're interested in building your own **reasoning-based document QA system**, try [PageIndex Chat](https://chat.pageindex.ai), or integrate via [PageIndex MCP](https://pageindex.ai/mcp) and the [API](https://docs.pageindex.ai/quickstart). You can also explore the [GitHub repo](https://github.com/VectifyAI/PageIndex) for open-source implementations and additional examples."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"\n",
"© 2025 [Vectify AI](https://vectify.ai)"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 0
}