api
The deadline.client.api module contains functions to complement usage of the boto3 deadline SDK.
These functions mostly match the interface style of boto3 APIs, using dictionary objects and plain values instead of wrapping them in dataclasses. This approach helps keep a consistent style for code calling AWS APIs with boto3 and also using these helpers.
The create_job_from_job_bundle function provides the
full capabilities of job bundle submission, and is the basis for the CLI commands deadline bundle submit
and deadline bundle gui-submit.
You can call get_boto3_client("deadline") to get a boto3 deadline client based on the Deadline client configuration in ~/.deadline/config, the same configuration that Deadline Cloud Monitor login and the Deadline CLI use. You can use this boto3 deadline client directly; a few helper functions like list_farms are also provided here that adjust their call arguments depending on whether the credentials come from a Deadline Cloud Monitor login or a different credentials provider.
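The following is a minimal sketch of that pattern; it assumes ~/.deadline/config is already set up, and that the merged list_farms response mirrors the deadline:ListFarms response shape.

```python
# A minimal sketch, assuming ~/.deadline/config is already configured
# (for example by the Deadline CLI or Deadline Cloud Monitor login).
from deadline.client import api

# A boto3 "deadline" client that uses the Deadline Cloud configuration;
# it can be used like any other boto3 client.
deadline_client = api.get_boto3_client("deadline")

# The list_farms helper adjusts its call arguments based on the credentials
# source and follows pagination; the "farms" key below assumes the merged
# response mirrors the deadline:ListFarms response shape.
response = api.list_farms()
for farm in response.get("farms", []):
    print(farm["farmId"], farm["displayName"])
```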
FailedTask (dataclass)
Represents a failed task in a job.
JobCompletionResult (dataclass)
Result of waiting for a job to complete.
LogEvent (dataclass)
Represents a single log event from CloudWatch Logs.
SessionLogResult (dataclass)
Result of retrieving logs for a session.
TelemetryClient
Sends telemetry events periodically to the Deadline Cloud telemetry service.
This client holds a queue of events that is written to synchronously and processed asynchronously, so events are sent in the background and do not slow down user interactivity.
Telemetry events contain non-personally-identifiable information that helps us understand how users interact with our software, so we know which features our customers use and what the existing pain points are.
Data is aggregated across a session ID (a UUID created at runtime and used to mark every telemetry event for the lifetime of the application) and a 'telemetry identifier' (a UUID recorded in the configuration file), so that data can be aggregated across multiple application lifetimes on the same machine.
You can opt out of telemetry collection by running 'deadline config set "telemetry.opt_out" true' or by setting the environment variable 'DEADLINE_CLOUD_TELEMETRY_OPT_OUT=true'.
get_account_id(boto3_session) (cached)
Retrieves the AWS account ID for the current user.
If the user is not authenticated, an error message is printed.
initialize(config=None)
Starts up the telemetry background thread after getting settings from the boto3 client. Note that if this is called before boto3 is successfully configured / initialized, an error can be raised. In that case we silently fail and don't mark the client as initialized.
set_opt_out(config=None)
Checks whether telemetry has been opted out by checking the DEADLINE_CLOUD_TELEMETRY_OPT_OUT environment variable and the 'telemetry.opt_out' config file setting. Note the environment variable supersedes the config file setting.
update_common_details(details)
Updates the dict of common data that is included in every telemetry request.
assume_queue_role_for_read(farmId, queueId, *, config=None)
Assumes the read role for a queue and returns temporary credentials.
These credentials can be used to perform read-only operations on the queue, such as viewing job status and queue information.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| farmId | str | The ID of the farm containing the queue. | required |
| queueId | str | The ID of the queue to assume the role for. | required |
| config | Optional[ConfigParser] | Optional configuration to use. If not provided, the default configuration is used. | None |

Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | A dictionary containing the temporary credentials in the following format: { "credentials": { "accessKeyId": str, "secretAccessKey": str, "sessionToken": str, "expiration": datetime } } |

Raises:
| Type | Description |
|---|---|
| ClientError | If there is an error assuming the role. |
assume_queue_role_for_user(farmId, queueId, *, config=None)
Assumes the user role for a queue and returns temporary credentials.
These credentials can be used to perform user-level operations on the queue, such as submitting jobs and monitoring job status.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| farmId | str | The ID of the farm containing the queue. | required |
| queueId | str | The ID of the queue to assume the role for. | required |
| config | Optional[ConfigParser] | Optional configuration to use. If not provided, the default configuration is used. | None |

Returns:
| Type | Description |
|---|---|
| Dict[str, Any] | A dictionary containing the temporary credentials in the following format: { "credentials": { "accessKeyId": str, "secretAccessKey": str, "sessionToken": str, "expiration": datetime } } |

Raises:
| Type | Description |
|---|---|
| ClientError | If there is an error assuming the role. |
check_authentication_status(config=None)
Checks the status of the provided session by calling the sts:GetCallerIdentity API.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | ConfigParser | The AWS Deadline Cloud configuration object to use instead of the config file. | None |
Returns an AwsAuthenticationStatus enum value:
- CONFIGURATION_ERROR if there is an unexpected error accessing credentials.
- AUTHENTICATED if the credentials are fine.
- NEEDS_LOGIN if a Deadline Cloud monitor login is required.
check_deadline_api_available(config=None)
Returns True if AWS Deadline Cloud APIs are authorized in the session, False otherwise. This only checks the deadline:ListFarms API by performing one call that requests one result.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | ConfigParser | The AWS Deadline Cloud configuration object to use instead of the config file. | None |
create_job_from_job_bundle(job_bundle_dir, job_parameters=[], *, name=None, queue_parameter_definitions=None, job_attachments_file_system=None, config=None, priority=None, max_failed_tasks_count=None, max_retries_per_task=None, max_worker_count=None, target_task_run_status=None, require_paths_exist=False, submitter_name=None, known_asset_paths=[], debug_snapshot_dir=None, from_gui=False, print_function_callback=print, interactive_confirmation_callback=None, hashing_progress_callback=None, upload_progress_callback=None, create_job_result_callback=None)
Creates a Deadline Cloud job in the queue configured as default for the workstation from the job bundle in the provided directory.
The return value is the submitted job id except when debug_snapshot_dir is provided. When creating a debug snapshot, no job is submitted.
To customize the submission with a different farm, queue, or other setting from the local configuration, load the configuration by calling config_file.read_config, modify the config values with calls like set_setting("defaults.farm_id", farm_id, config=config), and pass the temporarily modified config object when creating the job (see the second example below).
Example

To submit the CLI job sample from the Deadline Cloud samples GitHub repo, first clone it using git or download a zip snapshot of the samples, then run the following Python code from the deadline-cloud-samples directory.

```python
from deadline.client.api import create_job_from_job_bundle

job_parameters = [
    {"name": "BashScript", "value": "echo 'Hello from this example submission.'"},
    {"name": "DataDir", "value": "./queue_environments"},
]
create_job_from_job_bundle(
    "./job_bundles/cli_job",
    job_parameters=job_parameters,
    name="Sample Python job submission",
)
```
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| job_bundle_dir | str | The directory containing the job bundle. | required |
| job_parameters | List[Dict[str, Any]] | A list of job parameters in the following format: [{"name": "<name>", "value": "<value>"}, ...] | [] |
| name | str | The name of the job to submit, replacing the name defined in the job bundle. | None |
| queue_parameter_definitions | list[JobParameter] | A list of queue parameters to use instead of retrieving them from the queue with get_queue_parameter_definitions. | None |
| job_attachments_file_system | str | Which file system mode to use (valid values: "COPIED", "VIRTUAL") instead of the value in the config file. | None |
| config | ConfigParser | The AWS Deadline Cloud configuration object to use instead of the config file. | None |
| priority | int | Explicit value for the priority of the job. | None |
| max_failed_tasks_count | int | Explicit value for the maximum allowed failed tasks. | None |
| max_retries_per_task | int | Explicit value for the maximum retries per task. | None |
| max_worker_count | int | Explicit value for the max worker count of the job. | None |
| target_task_run_status | str | Explicit value for the target task run status of the job. Valid values are "READY" or "SUSPENDED". | None |
| require_paths_exist | bool | Whether to require that all input paths exist. | False |
| submitter_name | str | Name of the application submitting the bundle. | None |
| known_asset_paths | list[str] | A list of paths that should not generate warnings when outside storage profile locations. Defaults to an empty list. | [] |
| debug_snapshot_dir | str | EXPERIMENTAL - A directory in which to save a debug snapshot of the data and commands needed to exactly replicate the deadline:CreateJob service API call. | None |
| print_function_callback | Callable str -> None | Callback to print messages produced in this function. By default calls print(); can be replaced by click.echo or a logging function of choice. | print |
| interactive_confirmation_callback | Callable [str, bool] -> bool | Callback arguments are (confirmation_message, default_response). This function should present the provided prompt, using default_response as the default value to respond with if the user does not make an explicit choice, and return True if the user wants to continue, False to cancel. | None |
| hashing_progress_callback | Callable -> bool | Callback periodically called while hashing during job creation. If it returns False, the operation is cancelled; if it returns True, the operation continues. The default behavior is to not cancel the operation. hashing_progress_callback and upload_progress_callback both receive a ProgressReport as a parameter, which can be used for projecting remaining time, as is done in the CLI. | None |
| upload_progress_callback | Callable -> bool | Callback periodically called while uploading during job creation. See hashing_progress_callback for more details. | None |
| create_job_result_callback | Callable -> bool | Callback periodically called while waiting for the deadline.create_job result. See hashing_progress_callback for more details. | None |
Returns:
| Type | Description |
|---|---|
| Optional[str] | Returns the submitted job id. If debug_snapshot_dir is provided, no job is submitted and None is returned. |
get_boto3_client(service_name, config=None)
Gets a client from the boto3 session returned by get_boto3_session.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| service_name | str | The AWS service to get the client for, e.g. "deadline". | required |
| config | ConfigParser | If provided, the AWS Deadline Cloud config to use. | None |
get_boto3_session(force_refresh=False, config=None)
Gets a boto3 session for the AWS Deadline Cloud aws profile from the local
configuration ~/.deadline/config. This may either use a named profile
or the default credentials provider chain.
This implementation caches the session object for use across multiple calls
unless force_refresh is set to True.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| force_refresh | bool | If set to True, forces a cache refresh. | False |
| config | ConfigParser | If provided, the AWS Deadline Cloud config to use. | None |
get_credentials_source(config=None)
Returns DEADLINE_CLOUD_MONITOR_LOGIN if Deadline Cloud monitor wrote the credentials, HOST_PROVIDED otherwise.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | ConfigParser | The AWS Deadline Cloud configuration object to use instead of the config file. | None |
get_deadline_cloud_library_telemetry_client(config=None)
Retrieves the cached telemetry client, specifying the Deadline Cloud Client Library's package information. If config is not provided, the default configuration is used. Returns the telemetry client to make requests with.
get_queue_parameter_definitions(*, farmId, queueId, config=None)
This gets all the queue parameter definitions for the specified Deadline Cloud queue.
It does so by retrieving the full templates for the queue's environments and then combining them equivalently to the Deadline Cloud service logic, as in the sketch below.
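A minimal sketch of calling it; the farm and queue IDs are placeholders, and the "name" key is assumed from the Open Job Description parameter definition schema.

```python
# A minimal sketch; the IDs are placeholders, and the "name" key below is
# assumed from the Open Job Description parameter definition schema.
from deadline.client import api

param_defs = api.get_queue_parameter_definitions(
    farmId="farm-REPLACE_WITH_YOUR_FARM_ID",
    queueId="queue-REPLACE_WITH_YOUR_QUEUE_ID",
)
for definition in param_defs:
    print(definition["name"])
```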
get_queue_user_boto3_session(deadline, config=None, farm_id=None, queue_id=None, queue_display_name=None, force_refresh=False)
Calls the AssumeQueueRoleForUser API to obtain the role configured in a Queue, and then creates and returns a boto3 session with those credentials.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| deadline | BaseClient | A Deadline client. | required |
| config | ConfigParser | If provided, the AWS Deadline Cloud config to use. | None |
| farm_id | str | The ID of the farm to use. | None |
| queue_id | str | The ID of the queue to use. | None |
| queue_display_name | str | The display name of the queue. | None |
| force_refresh | bool | If True, forces a cache refresh. | False |
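A sketch of creating a queue-user session and using it with S3; the farm and queue IDs are placeholders.

```python
# A sketch of obtaining queue-user credentials and creating an S3 client from
# them; the farm and queue IDs are placeholders.
from deadline.client import api

deadline_client = api.get_boto3_client("deadline")
queue_session = api.get_queue_user_boto3_session(
    deadline_client,
    farm_id="farm-REPLACE_WITH_YOUR_FARM_ID",
    queue_id="queue-REPLACE_WITH_YOUR_QUEUE_ID",
)
# Clients created from this session use the queue role's temporary credentials.
s3_client = queue_session.client("s3")
```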
get_session_logs(farm_id, queue_id, session_id, limit=100, start_time=None, end_time=None, next_token=None, config=None)
Get CloudWatch logs for a specific session.
This function retrieves logs from CloudWatch for the specified session ID. By default, it returns the most recent 100 log lines, but this can be adjusted using the limit parameter.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| farm_id | str | The ID of the farm containing the session. | required |
| queue_id | str | The ID of the queue containing the session. | required |
| session_id | str | The ID of the session to get logs for. | required |
| limit | int | Maximum number of log lines to return. | 100 |
| start_time | Optional[datetime] | Optional start time for logs as a datetime object. | None |
| end_time | Optional[datetime] | Optional end time for logs as a datetime object. | None |
| next_token | Optional[str] | Optional token for pagination of results. | None |
| config | Optional[ConfigParser] | Optional configuration object. | None |

Returns:
| Type | Description |
|---|---|
| SessionLogResult | A SessionLogResult object containing the log events and metadata. |

Raises:
| Type | Description |
|---|---|
| DeadlineOperationError | If there's an error retrieving the logs. |
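A sketch of fetching recent log lines for a session; the IDs are placeholders, and the result is printed whole rather than assuming the SessionLogResult field names.

```python
# A sketch of fetching the most recent log lines for a session; the IDs are
# placeholders, and the result is printed whole rather than assuming the
# SessionLogResult field names.
from deadline.client import api

result = api.get_session_logs(
    farm_id="farm-REPLACE_WITH_YOUR_FARM_ID",
    queue_id="queue-REPLACE_WITH_YOUR_QUEUE_ID",
    session_id="session-REPLACE_WITH_YOUR_SESSION_ID",
    limit=50,
)
print(result)  # SessionLogResult containing the log events and metadata
```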
get_telemetry_client(package_name, package_ver, config=None)
Retrieves the cached telemetry client, lazy-loading it the first time this is called. package_name and package_ver are the base package name and version to associate data by. If config is not provided, the default configuration is used. Returns the telemetry client to make requests with.
list_farms(config=None, **kwargs)
Calls the deadline:ListFarms API, applying the filter for user membership depending on the configuration. If the response is paginated, it repeatedly calls the API to get all the farms.
list_fleets(config=None, **kwargs)
Calls the deadline:ListFleets API, applying the filter for user membership depending on the configuration. If the response is paginated, it repeatedly calls the API to get all the fleets.
list_jobs(config=None, **kwargs)
Calls the deadline:ListJobs API, applying the filter for user membership depending on the configuration. If the response is paginated, it repeatedly calls the API to get all the jobs.
list_queues(config=None, **kwargs)
Calls the deadline:ListQueues API, applying the filter for user membership depending on the configuration. If the response is paginated, it repeatedly calls the API to get all the queues.
list_storage_profiles_for_queue(config=None, **kwargs)
Calls the deadline:ListStorageProfilesForQueue API. If the response is paginated, it repeatedly calls the API to get all the storage profiles.
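A sketch combining the list helpers; it assumes the merged, depaginated responses keep the boto3 response shape (for example a "farms" or "queues" key) and that extra keyword arguments such as farmId are forwarded to the underlying API call.

```python
# A sketch combining the list helpers, assuming the merged responses keep the
# boto3 response shape and that kwargs like farmId are forwarded to the API.
from deadline.client import api

for farm in api.list_farms().get("farms", []):
    queues = api.list_queues(farmId=farm["farmId"]).get("queues", [])
    print(farm["displayName"], "-", len(queues), "queues")
```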
login(on_pending_authorization, on_cancellation_check, config=None)
For AWS profiles created by Deadline Cloud monitor, logs in to provide access to Deadline Cloud.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| on_pending_authorization | Callable | A callback that receives method-specific information to continue login. All methods receive a 'credentials_source' parameter of type AwsCredentialsSource. For Deadline Cloud monitor there are no additional parameters. | required |
| on_cancellation_check | Callable | A callback that allows the operation to cancel before login completes. | required |
| config | ConfigParser | The AWS Deadline Cloud configuration object to use instead of the config file. | None |
logout(config=None)
For AWS profiles created by Deadline Cloud monitor, logs out of Deadline Cloud.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| config | ConfigParser | The AWS Deadline Cloud configuration object to use instead of the config file. | None |
precache_clients(deadline=None, config=None, farm_id=None, queue_id=None, queue_display_name=None)
Initialize an S3 client (and optionally a Deadline client) with queue user credentials to pre-warm the client cache.
This function creates an S3 client using queue user credentials, which triggers the expensive service discovery process once. Subsequent client creations using the same session object should then use the cached client, improving performance. This function is designed to be called in a background thread at application startup.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| deadline | BaseClient | An existing deadline client. If None, one will be created. | None |
| config | Optional[ConfigParser] | Optional configuration parser. If None, the default configuration will be used. | None |
| farm_id | Optional[str] | The farm ID. If None, it will be retrieved from settings. | None |
| queue_id | Optional[str] | The queue ID. If None, it will be retrieved from settings. | None |
| queue_display_name | Optional[str] | The queue display name. If None, it will be retrieved from the queue. | None |

Returns:
| Type | Description |
|---|---|
| Tuple[BaseClient, BaseClient] | Created (or current) s3 client for the given queue_role_session. |
Example

```python
# Fire-and-forget initialization in a background thread
import threading

threading.Thread(
    target=precache_clients,
    daemon=True,
    name="S3ClientInit",
).start()
```
record_function_latency_telemetry_event(**decorator_kwargs)
Decorator that times a function and sends a latency telemetry event. **decorator_kwargs are Python variable keyword arguments (see https://docs.python.org/3/glossary.html#term-parameter).
record_success_fail_telemetry_event(**decorator_kwargs)
Decorator that wraps a function in a try/except and sends a success / fail telemetry event. **decorator_kwargs are Python variable keyword arguments (see https://docs.python.org/3/glossary.html#term-parameter).
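A minimal sketch of applying the two decorators; no decorator keyword arguments are assumed here beyond what the signatures above show.

```python
# A minimal sketch of applying the telemetry decorators; no decorator keyword
# arguments are assumed beyond what the signatures above show.
from deadline.client.api import (
    record_function_latency_telemetry_event,
    record_success_fail_telemetry_event,
)

@record_function_latency_telemetry_event()
def hash_job_attachments():
    ...  # timed; a latency telemetry event is sent when it returns

@record_success_fail_telemetry_event()
def submit_bundle():
    ...  # wrapped in try/except; a success or fail telemetry event is sent
```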
wait_for_create_job_to_complete(farm_id, queue_id, job_id, deadline_client, continue_callback)
Wait until a job exits the CREATE_IN_PROGRESS state.
wait_for_job_completion(farm_id, queue_id, job_id, max_poll_interval=120, timeout=0, config=None, status_callback=None, job_callback=None)
Wait for a job to complete and return information about its status and any failed tasks.
This function blocks until the job's taskRunStatus reaches a terminal state (SUCCEEDED, FAILED, CANCELED, SUSPENDED, or NOT_COMPATIBLE), then returns a JobCompletionResult object containing the final status and any failed tasks.
The function uses exponential backoff for polling, starting at 0.5 seconds and doubling the interval after each check until it reaches the maximum polling interval.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| farm_id | str | The ID of the farm containing the job. | required |
| queue_id | str | The ID of the queue containing the job. | required |
| job_id | str | The ID of the job to wait for. | required |
| max_poll_interval | int | Maximum time in seconds between status checks (default: 120). | 120 |
| timeout | int | Maximum time in seconds to wait (0 for no timeout). | 0 |
| config | Optional[ConfigParser] | Optional configuration object. | None |
| status_callback | Optional[Callable] | Optional callback function that receives the current status during polling. | None |
| job_callback | Optional[Callable] | Optional callback function that receives the job resource during polling. | None |

Returns:
| Type | Description |
|---|---|
| JobCompletionResult | A JobCompletionResult object containing the job's final status and any failed tasks. |

Raises:
| Type | Description |
|---|---|
| DeadlineOperationError | If the timeout is reached or there's an error retrieving job information. |
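A sketch of submitting a bundle and blocking until the job finishes. The bundle path is a placeholder, the "defaults.queue_id" setting name is assumed to parallel "defaults.farm_id", and the JobCompletionResult is printed whole rather than assuming its field names.

```python
# A sketch of submitting a bundle and waiting for it to finish; the bundle path
# is a placeholder, and "defaults.queue_id" is assumed to parallel the
# "defaults.farm_id" setting name used earlier on this page.
from deadline.client import api
from deadline.client.config import config_file

job_id = api.create_job_from_job_bundle("./my_job_bundle")
if job_id is not None:  # None is returned only when a debug snapshot is requested
    result = api.wait_for_job_completion(
        farm_id=config_file.get_setting("defaults.farm_id"),
        queue_id=config_file.get_setting("defaults.queue_id"),
        job_id=job_id,
        timeout=3600,  # give up after an hour
    )
    print(result)  # JobCompletionResult with the final status and failed tasks
```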