Mirror of https://github.com/VectifyAI/PageIndex.git, synced 2026-04-24 23:56:21 +02:00
fix output
This commit is contained in:
parent
985d8f064f
commit
3277f16ae1
1 changed file with 23 additions and 22 deletions
@@ -211,7 +211,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 6,
+   "execution_count": null,
    "metadata": {
     "colab": {
      "base_uri": "https://localhost:8080/"
@@ -257,7 +257,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 41,
+   "execution_count": null,
    "metadata": {
     "colab": {
      "base_uri": "https://localhost:8080/",
@@ -369,7 +369,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 25,
+   "execution_count": 21,
    "metadata": {
     "id": "LLHNJAtTcG1O"
    },
@@ -409,7 +409,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 26,
+   "execution_count": 57,
    "metadata": {
     "colab": {
      "base_uri": "https://localhost:8080/",
@@ -424,12 +424,13 @@
      "output_type": "stream",
      "text": [
       "Reasoning Process:\n",
-      "The question asks for the conclusions in the document. The most direct and relevant node is '5.\n",
-      "Conclusion, Limitations, and Future Work' (node_id: 0019), as it is specifically dedicated to the\n",
-      "conclusion and related topics. Other nodes, such as the Abstract (0001), Introduction (0003), and\n",
-      "Discussion (0018), may contain summary statements or high-level findings, but the explicit\n",
-      "conclusions are most likely found in node 0019. Therefore, node 0019 is the primary node likely to\n",
-      "contain the answer.\n",
+      "The question asks for the conclusions in the document. Typically, conclusions are found in sections\n",
+      "explicitly titled 'Conclusion' or in sections summarizing the findings and implications of the work.\n",
+      "In this document tree, node 0019 ('5. Conclusion, Limitations, and Future Work') is the most\n",
+      "directly relevant, as it is dedicated to the conclusion and related topics. Additionally, the\n",
+      "'Abstract' (node 0001) may contain a high-level summary that sometimes includes concluding remarks,\n",
+      "but it is less likely to contain the full conclusions. Other sections like 'Discussion' (node 0018)\n",
+      "may discuss implications but are not explicitly conclusions. Therefore, the primary node is 0019.\n",
       "\n",
       "Retrieved Nodes:\n",
       "Node ID: 0019\t Page: 16\t Title: 5. Conclusion, Limitations, and Future Work\n"
@@ -467,7 +468,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 27,
+   "execution_count": 58,
    "metadata": {
     "colab": {
      "base_uri": "https://localhost:8080/",
@@ -519,7 +520,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 28,
+   "execution_count": 59,
    "metadata": {
     "colab": {
      "base_uri": "https://localhost:8080/",
@@ -535,18 +536,18 @@
      "text": [
       "Generated Answer:\n",
       "\n",
-      "**Conclusions in this document:**\n",
+      "The conclusions in this document are:\n",
       "\n",
-      "- DeepSeek-R1-Zero, a pure reinforcement learning (RL) model without cold-start data, achieves\n",
+      "- DeepSeek-R1-Zero, a pure reinforcement learning (RL) approach without cold-start data, achieves\n",
       "strong performance across various tasks.\n",
-      "- DeepSeek-R1, which combines cold-start data with iterative RL fine-tuning, is even more powerful\n",
-      "and achieves performance comparable to OpenAI-o1-1217 on a range of tasks.\n",
-      "- The reasoning capabilities of DeepSeek-R1 can be successfully distilled into smaller dense models,\n",
-      "with DeepSeek-R1-Distill-Qwen-1.5B outperforming GPT-4o and Claude-3.5-Sonnet on math benchmarks.\n",
-      "- Other small dense models fine-tuned with DeepSeek-R1 data also significantly outperform other\n",
-      "instruction-tuned models based on the same checkpoints.\n",
-      "- Overall, the approaches described demonstrate promising results in enhancing model reasoning\n",
-      "abilities through RL and distillation.\n"
+      "- DeepSeek-R1, which combines cold-start data with iterative RL fine-tuning, is more powerful and\n",
+      "achieves performance comparable to OpenAI-o1-1217 on a range of tasks.\n",
+      "- Distilling DeepSeek-R1’s reasoning capabilities into smaller dense models is promising; for\n",
+      "example, DeepSeek-R1-Distill-Qwen-1.5B outperforms GPT-4o and Claude-3.5-Sonnet on math benchmarks,\n",
+      "and other dense models also show significant improvements over similar instruction-tuned models.\n",
+      "\n",
+      "These results demonstrate the effectiveness of the RL-based approach and the potential for\n",
+      "distilling reasoning abilities into smaller models.\n"
      ]
     }
    ],