create_model_invocation_job
- Bedrock.Client.create_model_invocation_job(**kwargs)
Creates a batch inference job to invoke a model on multiple prompts. Format your inference data as described in Format your inference data and upload it to an Amazon S3 bucket. For more information, see Process multiple prompts with batch inference.
The response returns a jobArn that you can use to stop or get details about the job.
See also: AWS API Documentation
Request Syntax
response = client.create_model_invocation_job(
    jobName='string',
    roleArn='string',
    clientRequestToken='string',
    modelId='string',
    inputDataConfig={
        's3InputDataConfig': {
            's3InputFormat': 'JSONL',
            's3Uri': 'string',
            's3BucketOwner': 'string'
        }
    },
    outputDataConfig={
        's3OutputDataConfig': {
            's3Uri': 'string',
            's3EncryptionKeyId': 'string',
            's3BucketOwner': 'string'
        }
    },
    vpcConfig={
        'subnetIds': [
            'string',
        ],
        'securityGroupIds': [
            'string',
        ]
    },
    timeoutDurationInHours=123,
    tags=[
        {
            'key': 'string',
            'value': 'string'
        },
    ]
)
- Parameters:
jobName (string) –
[REQUIRED]
A name to give the batch inference job.
roleArn (string) –
[REQUIRED]
The Amazon Resource Name (ARN) of the service role with permissions to carry out and manage batch inference. You can use the console to create a default service role or follow the steps at Create a service role for batch inference.
clientRequestToken (string) –
A unique, case-sensitive identifier to ensure that the API request completes no more than one time. If this token matches a previous request, Amazon Bedrock ignores the request, but does not return an error. For more information, see Ensuring idempotency.
This field is autopopulated if not provided.
modelId (string) –
[REQUIRED]
The unique identifier of the foundation model to use for the batch inference job.
inputDataConfig (dict) –
[REQUIRED]
Details about the location of the input to the batch inference job.
Note
This is a Tagged Union structure. Only one of the following top level keys can be set:
s3InputDataConfig
s3InputDataConfig (dict) –
Contains the configuration of the S3 location of the input data.
s3InputFormat (string) –
The format of the input data. The only valid value is JSONL.
s3Uri (string) – [REQUIRED]
The S3 location of the input data.
s3BucketOwner (string) –
The ID of the Amazon Web Services account that owns the S3 bucket containing the input data.
outputDataConfig (dict) –
[REQUIRED]
Details about the location of the output of the batch inference job.
Note
This is a Tagged Union structure. Only one of the following top level keys can be set:
s3OutputDataConfig
s3OutputDataConfig (dict) –
Contains the configuration of the S3 location of the output data.
s3Uri (string) – [REQUIRED]
The S3 location of the output data.
s3EncryptionKeyId (string) –
The unique identifier of the key that encrypts the S3 location of the output data.
s3BucketOwner (string) –
The ID of the Amazon Web Services account that owns the S3 bucket containing the output data.
vpcConfig (dict) –
The configuration of the Virtual Private Cloud (VPC) for the data in the batch inference job. For more information, see Protect batch inference jobs using a VPC.
subnetIds (list) – [REQUIRED]
An array of IDs for each subnet in the VPC to use.
(string) –
securityGroupIds (list) – [REQUIRED]
An array of IDs for each security group in the VPC to use.
(string) –
timeoutDurationInHours (integer) – The number of hours after which to force the batch inference job to time out.
tags (list) –
Any tags to associate with the batch inference job. For more information, see Tagging Amazon Bedrock resources.
(dict) –
Definition of the key/value pair for a tag.
key (string) – [REQUIRED]
Key for the tag.
value (string) – [REQUIRED]
Value for the tag.
- Return type:
dict
- Returns:
Response Syntax
{
    'jobArn': 'string'
}
Response Structure
(dict) –
jobArn (string) –
The Amazon Resource Name (ARN) of the batch inference job.
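As a minimal sketch, the request described above might be assembled like this. All ARNs, bucket names, and the model ID below are hypothetical placeholders, not real resources, and the actual client call is shown commented out because it requires AWS credentials with Bedrock access:

```python
# Build the kwargs for create_model_invocation_job.
# Every identifier below is a placeholder for illustration only.
request = {
    "jobName": "my-batch-inference-job",
    # Hypothetical service role with batch inference permissions:
    "roleArn": "arn:aws:iam::111122223333:role/MyBatchInferenceRole",
    # Hypothetical foundation model identifier:
    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
    "inputDataConfig": {
        "s3InputDataConfig": {
            "s3InputFormat": "JSONL",
            "s3Uri": "s3://amzn-s3-demo-bucket/input/records.jsonl",
        }
    },
    "outputDataConfig": {
        "s3OutputDataConfig": {
            "s3Uri": "s3://amzn-s3-demo-bucket/output/",
        }
    },
    # Force the job to time out after 24 hours:
    "timeoutDurationInHours": 24,
}

# Submitting the job requires boto3 and valid AWS credentials:
# import boto3
# client = boto3.client("bedrock", region_name="us-east-1")
# response = client.create_model_invocation_job(**request)
# job_arn = response["jobArn"]  # use this ARN to stop or query the job
```

Note that `clientRequestToken` is omitted here; it is autopopulated when not provided, so supplying one is only needed when you manage idempotency yourself.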
Exceptions