Confirm this is a feature request for the Python library and not the underlying OpenAI API.
Describe the feature or improvement you're requesting
The SDK exposes `ReasoningEffort` as a named `TypeAlias` in `openai.types`, making it easy to reference in user code:

```python
from openai.types import ReasoningEffort

cast(ReasoningEffort, reasoning_effort.value if reasoning_effort else "minimal")
```
There's no equivalent for the image detail level. The detail field on image input types uses an inline literal across several generated files:
- `types/responses/response_input_image_content.py` — `detail: Optional[Literal["low", "high", "auto"]]`
- `types/responses/response_input_image_content_param.py` — `detail: Optional[Literal["low", "high", "auto"]]`
- `types/chat/chat_completion_content_part_image.py` — `detail: Optional[Literal["auto", "low", "high"]]`
Users who need to type or cast this value are forced to repeat the inline literal or define their own alias:
```python
cast(Literal["low", "high", "auto"], image_quality.value if image_quality else "high")
```
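Until then, the user-side workaround is to define the alias once locally. A minimal sketch (this `ImageDetail` name lives in user code, not in `openai.types`; the `coerce_detail` helper is illustrative):

```python
from typing import Literal, Optional, cast

# User-defined stand-in until the SDK exports a named alias
# (local to user code, not an openai.types import).
ImageDetail = Optional[Literal["low", "high", "auto"]]

def coerce_detail(value: Optional[str]) -> ImageDetail:
    # cast() is a no-op at runtime; it only informs the type checker.
    return cast(ImageDetail, value if value else "high")
```

This works, but every project ends up re-declaring the same three-member literal, which is exactly the duplication a shared export would remove.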
Proposed change
Add ImageDetail as a named type alias following the same pattern as ReasoningEffort:
```python
# openai/types/shared/image_detail.py
from typing import Optional
from typing_extensions import Literal, TypeAlias

__all__ = ["ImageDetail"]

ImageDetail: TypeAlias = Optional[Literal["low", "high", "auto"]]
```
Export it from openai.types so users can import it directly:
```python
from openai.types import ImageDetail
```
Before:

```python
cast(Literal["low", "high", "auto"], image_quality.value if image_quality else "high")
```

After:

```python
cast(ImageDetail, image_quality.value if image_quality else "high")
```
And use it as the field type across the relevant generated models instead of the inline literal.
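For illustration, the change to a generated model's field would look roughly like this (class name taken from the files listed above; the simplified class omits the SDK's `BaseModel` machinery, so this is a sketch of the annotation only):

```python
from typing import Literal, Optional, get_args, get_type_hints

# Mirrors the proposed shared alias; in the SDK it would live in
# openai/types/shared/image_detail.py per the proposal above.
ImageDetail = Optional[Literal["low", "high", "auto"]]

# Illustrative model: the real generated class subclasses the SDK's
# BaseModel; only the field annotation is the point here.
class ResponseInputImageContent:
    detail: ImageDetail = None

# The named alias resolves to the same runtime type as the inline
# literal, so swapping it in changes readability, not behavior.
hints = get_type_hints(ResponseInputImageContent)
```

Because the alias and the inline literal are the same object at runtime, this substitution is backward compatible for existing callers.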
Why this matters
It would add consistency with the existing `ReasoningEffort` pattern and enable cleaner user code when casting or annotating image detail values.
Additional context
No response