PageIndex/run_pageindex.py


import argparse
import os
import json

from pageindex.index.page_index import *
from pageindex.index.page_index_md import md_to_tree
from pageindex.config import IndexConfig

if __name__ == "__main__":
    # Set up argument parser
    parser = argparse.ArgumentParser(description='Process a PDF or Markdown document and generate its tree structure')
    parser.add_argument('--pdf_path', type=str, help='Path to the PDF file')
    parser.add_argument('--md_path', type=str, help='Path to the Markdown file')
    parser.add_argument('--model', type=str, default=None, help='Model to use')
    parser.add_argument('--toc-check-pages', type=int, default=None,
                        help='Number of pages to check for a table of contents (PDF only)')
    parser.add_argument('--max-pages-per-node', type=int, default=None,
                        help='Maximum number of pages per node (PDF only)')
    parser.add_argument('--max-tokens-per-node', type=int, default=None,
                        help='Maximum number of tokens per node (PDF only)')
    parser.add_argument('--if-add-node-id', action='store_true', default=None,
                        help='Add a node id to each node')
    parser.add_argument('--if-add-node-summary', action='store_true', default=None,
                        help='Add a summary to each node')
    parser.add_argument('--if-add-doc-description', action='store_true', default=None,
                        help='Add a description to the document')
    parser.add_argument('--if-add-node-text', action='store_true', default=None,
                        help='Add the raw text to each node')

    # Markdown-specific arguments
    parser.add_argument('--if-thinning', type=str, default='no',
                        help='Whether to apply tree thinning for markdown (markdown only)')
    parser.add_argument('--thinning-threshold', type=int, default=5000,
                        help='Minimum token threshold for thinning (markdown only)')
    parser.add_argument('--summary-token-threshold', type=int, default=200,
                        help='Token threshold for generating summaries (markdown only)')

    args = parser.parse_args()

    # Validate that exactly one file type is specified
    if not args.pdf_path and not args.md_path:
        raise ValueError("Either --pdf_path or --md_path must be specified")
    if args.pdf_path and args.md_path:
        raise ValueError("Only one of --pdf_path or --md_path can be specified")

    # Build IndexConfig from CLI args (None values fall back to defaults)
    config_overrides = {
        k: v for k, v in {
            "model": args.model,
            "toc_check_page_num": args.toc_check_pages,
            "max_page_num_each_node": args.max_pages_per_node,
            "max_token_num_each_node": args.max_tokens_per_node,
            "if_add_node_id": args.if_add_node_id,
            "if_add_node_summary": args.if_add_node_summary,
            "if_add_doc_description": args.if_add_doc_description,
            "if_add_node_text": args.if_add_node_text,
        }.items() if v is not None
    }
    opt = IndexConfig(**config_overrides)

    if args.pdf_path:
        # Validate PDF file
        if not args.pdf_path.lower().endswith('.pdf'):
            raise ValueError("PDF file must have .pdf extension")
        if not os.path.isfile(args.pdf_path):
            raise ValueError(f"PDF file not found: {args.pdf_path}")

        # Process the PDF
        toc_with_page_number = page_index_main(args.pdf_path, opt)
        print('Parsing done, saving to file...')

        # Save results
        pdf_name = os.path.splitext(os.path.basename(args.pdf_path))[0]
        output_dir = './results'
        output_file = f'{output_dir}/{pdf_name}_structure.json'
        os.makedirs(output_dir, exist_ok=True)

        with open(output_file, 'w', encoding='utf-8') as f:
            # ensure_ascii=False keeps non-ASCII headings readable, matching the markdown branch
            json.dump(toc_with_page_number, f, indent=2, ensure_ascii=False)
        print(f'Tree structure saved to: {output_file}')

    elif args.md_path:
        # Validate Markdown file
        if not args.md_path.lower().endswith(('.md', '.markdown')):
            raise ValueError("Markdown file must have .md or .markdown extension")
        if not os.path.isfile(args.md_path):
            raise ValueError(f"Markdown file not found: {args.md_path}")

        # Process the markdown file
        print('Processing markdown file...')
        import asyncio
        toc_with_page_number = asyncio.run(md_to_tree(
            md_path=args.md_path,
            if_thinning=args.if_thinning.lower() == 'yes',
            min_token_threshold=args.thinning_threshold,
            if_add_node_summary=opt.if_add_node_summary,
            summary_token_threshold=args.summary_token_threshold,
            model=opt.model,
            if_add_doc_description=opt.if_add_doc_description,
            if_add_node_text=opt.if_add_node_text,
            if_add_node_id=opt.if_add_node_id,
        ))
        print('Parsing done, saving to file...')

        # Save results
        md_name = os.path.splitext(os.path.basename(args.md_path))[0]
        output_dir = './results'
        output_file = f'{output_dir}/{md_name}_structure.json'
        os.makedirs(output_dir, exist_ok=True)

        with open(output_file, 'w', encoding='utf-8') as f:
            json.dump(toc_with_page_number, f, indent=2, ensure_ascii=False)
        print(f'Tree structure saved to: {output_file}')
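
# Example invocations (paths and values below are illustrative, not part of the repo;
# unset flags fall back to IndexConfig defaults):
#   python run_pageindex.py --pdf_path ./docs/report.pdf --if-add-node-id --if-add-node-summary
#   python run_pageindex.py --md_path ./docs/notes.md --if-thinning yes --thinning-threshold 3000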