graphai.api.voice.schemas module
- class graphai.api.voice.schemas.AudioFingerprintRequest(*, token: str, force: bool = False)
Bases: BaseModel
- token: str
- force: bool
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, which should be a dictionary conforming to pydantic.config.ConfigDict.
- class graphai.api.voice.schemas.AudioFingerprintTaskResponse(*, result: str | None = None, fresh: bool, closest_token: str | None = None, closest_token_origin: str | None = None, duration: float, successful: bool)
Bases: BaseModel
- result: str | None
- fresh: bool
- closest_token: str | None
- closest_token_origin: str | None
- duration: float
- successful: bool
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, which should be a dictionary conforming to pydantic.config.ConfigDict.
- class graphai.api.voice.schemas.AudioFingerprintResponse(*, task_id: str, task_name: str | None = None, task_status: str, task_result: AudioFingerprintTaskResponse | None)
Bases: TaskStatusResponse
- task_result: AudioFingerprintTaskResponse | None
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, which should be a dictionary conforming to pydantic.config.ConfigDict.
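A minimal usage sketch for the fingerprinting schemas, assuming Pydantic v2 (`model_dump` / `model_validate`); the token and task values below are placeholders, not real API output:

```python
from graphai.api.voice.schemas import (
    AudioFingerprintRequest,
    AudioFingerprintResponse,
)

# Build a request body; the token is a placeholder standing in for an audio
# token obtained from an earlier upload/extraction step.
req = AudioFingerprintRequest(token="audio-token-placeholder", force=False)
body = req.model_dump()  # dict suitable for a JSON request body

# Parse a hypothetical task-status payload into the typed response model.
raw = {
    "task_id": "abc123",
    "task_status": "SUCCESS",
    "task_result": {
        "result": "fingerprint-token-placeholder",
        "fresh": True,
        "closest_token": None,
        "closest_token_origin": None,
        "duration": 12.5,
        "successful": True,
    },
}
resp = AudioFingerprintResponse.model_validate(raw)
if resp.task_result is not None and resp.task_result.successful:
    print(resp.task_result.result)
```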
- class graphai.api.voice.schemas.AudioDetectLanguageRequest(*, token: str, force: bool = False)
Bases: BaseModel
- token: str
- force: bool
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, which should be a dictionary conforming to pydantic.config.ConfigDict.
- class graphai.api.voice.schemas.AudioDetectLanguageTaskResponse(*, language: str | None = None, fresh: bool)
Bases: BaseModel
- language: str | None
- fresh: bool
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, which should be a dictionary conforming to pydantic.config.ConfigDict.
- class graphai.api.voice.schemas.AudioDetectLanguageResponse(*, task_id: str, task_name: str | None = None, task_status: str, task_result: AudioDetectLanguageTaskResponse | None)
Bases: TaskStatusResponse
- task_result: AudioDetectLanguageTaskResponse | None
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, which should be a dictionary conforming to pydantic.config.ConfigDict.
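A similar sketch for the language-detection schemas (again assuming Pydantic v2; all values are placeholders):

```python
from graphai.api.voice.schemas import (
    AudioDetectLanguageRequest,
    AudioDetectLanguageResponse,
)

# force defaults to False; only the audio token is required.
req = AudioDetectLanguageRequest(token="audio-token-placeholder")

# Validate a hypothetical completed-task payload.
raw = {
    "task_id": "def456",
    "task_name": "detect_language",  # placeholder task name
    "task_status": "SUCCESS",
    "task_result": {"language": "en", "fresh": True},
}
resp = AudioDetectLanguageResponse.model_validate(raw)
language = resp.task_result.language if resp.task_result else None
```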
- class graphai.api.voice.schemas.AudioTranscriptionRequest(*, token: str, force: bool = False, force_lang: str | None = None, strict: bool = False)
Bases: BaseModel
- token: str
- force: bool
- force_lang: str | None
- strict: bool
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, which should be a dictionary conforming to pydantic.config.ConfigDict.
- class graphai.api.voice.schemas.AudioTranscriptionTaskResponse(*, transcript_results: str | None = None, subtitle_results: Any | None = None, language: str | None = None, fresh: bool)
Bases: BaseModel
- transcript_results: str | None
- subtitle_results: Any | None
- language: str | None
- fresh: bool
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, which should be a dictionary conforming to pydantic.config.ConfigDict.
- class graphai.api.voice.schemas.AudioTranscriptionResponse(*, task_id: str, task_name: str | None = None, task_status: str, task_result: AudioTranscriptionTaskResponse | None)
Bases: TaskStatusResponse
- task_result: AudioTranscriptionTaskResponse | None
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, which should be a dictionary conforming to pydantic.config.ConfigDict.
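And a sketch for the transcription schemas. The interpretation of force_lang (pin the transcription language) and strict is inferred from the field names, not from the source; Pydantic v2 and placeholder values are assumed:

```python
from graphai.api.voice.schemas import (
    AudioTranscriptionRequest,
    AudioTranscriptionResponse,
)

# Request transcription of a cached audio file, pinning the language to French
# (assumed semantics of force_lang; strict is left at its default).
req = AudioTranscriptionRequest(token="audio-token-placeholder", force_lang="fr")

# Validate a hypothetical completed-task payload; subtitle_results is typed Any,
# so a list of cue dicts is just one possible shape.
raw = {
    "task_id": "ghi789",
    "task_status": "SUCCESS",
    "task_result": {
        "transcript_results": "Bonjour tout le monde.",
        "subtitle_results": [{"start": 0.0, "end": 2.1, "text": "Bonjour tout le monde."}],
        "language": "fr",
        "fresh": False,
    },
}
resp = AudioTranscriptionResponse.model_validate(raw)
transcript = resp.task_result.transcript_results if resp.task_result else None
```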