Fix mock handlers to match gateway routing behavior

- OpenAI client → Claude model: gateway routes to /v1/chat/completions
  (not /v1/messages), so use setup_openai_chat_mock
- Responses API: gateway translates all requests to /v1/chat/completions
  on upstream with base_url providers, so use setup_openai_chat_mock
  (a sketch of one possible such helper follows this message)
- Remove unused imports (json, pytest, setup_responses_api_mock)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
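
A minimal sketch of what a helper like setup_openai_chat_mock could look like, assuming pytest-httpserver (which the tests' HTTPServer fixture suggests); the real helper's signature and response shape in this repo may differ:

```python
# Hypothetical sketch only -- the repo's actual setup_openai_chat_mock may differ.
import json

from pytest_httpserver import HTTPServer
from werkzeug.wrappers import Request, Response


def setup_openai_chat_mock(httpserver: HTTPServer, content: str) -> dict:
    """Mock the upstream /v1/chat/completions endpoint and capture the request."""
    captured: dict = {}

    def handler(request: Request) -> Response:
        # Record what the gateway actually forwarded upstream.
        captured["json"] = request.get_json()
        body = {
            "id": "chatcmpl-mock",
            "object": "chat.completion",
            "model": captured["json"].get("model", "unknown"),
            "choices": [
                {
                    "index": 0,
                    "message": {"role": "assistant", "content": content},
                    "finish_reason": "stop",
                }
            ],
        }
        return Response(json.dumps(body), content_type="application/json")

    # The gateway sends OpenAI-format requests to /v1/chat/completions on the
    # upstream, even for Claude models, so that is the path to register.
    httpserver.expect_request(
        "/v1/chat/completions", method="POST"
    ).respond_with_handler(handler)
    return captured
```

Capturing into a dict lets the test assert on the forwarded payload after the client call, which matches how `captured` is used in the diff below.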
Adil Hafeez 2026-02-18 23:54:57 +00:00
parent aeef0c33a8
commit d8e5e48f4a
3 changed files with 71 additions and 142 deletions


@@ -130,8 +130,10 @@ def test_anthropic_client_with_alias_streaming(httpserver: HTTPServer):
 def test_openai_client_with_claude_model(httpserver: HTTPServer):
-    """OpenAI client → Claude model → gateway routes to Anthropic upstream → transforms response to OpenAI format"""
-    captured = setup_anthropic_mock(
+    """OpenAI client → Claude model → gateway proxies via /v1/chat/completions → transforms response"""
+    # Gateway routes OpenAI-format requests to /v1/chat/completions on upstream
+    # even for Anthropic models, so we need the OpenAI chat mock
+    captured = setup_openai_chat_mock(
         httpserver, content="Hello from Claude via OpenAI client!"
     )
@@ -150,8 +152,9 @@ def test_openai_client_with_claude_model(httpserver: HTTPServer):
 def test_openai_client_with_claude_model_streaming(httpserver: HTTPServer):
-    """OpenAI client streaming → Claude model → Anthropic SSE → transformed to OpenAI SSE"""
-    setup_anthropic_mock(httpserver, content="Streaming from Claude!")
+    """OpenAI client streaming → Claude model → proxied via /v1/chat/completions"""
+    # Gateway routes OpenAI-format requests to /v1/chat/completions on upstream
+    setup_openai_chat_mock(httpserver, content="Streaming from Claude!")
     client = openai.OpenAI(api_key="test-key", base_url=f"{LLM_GATEWAY_BASE}/v1")
     stream = client.chat.completions.create(