describe_batch_inference_job

- Personalize.Client.describe_batch_inference_job(**kwargs)
- Gets the properties of a batch inference job, including its name, Amazon Resource Name (ARN), status, input and output configurations, and the ARN of the solution version used to generate the recommendations.
- See also: AWS API Documentation
- Request Syntax

      response = client.describe_batch_inference_job(
          batchInferenceJobArn='string'
      )

- Parameters:
- batchInferenceJobArn (string) – [REQUIRED] The ARN of the batch inference job to describe.
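A minimal usage sketch follows; the region and job ARN in it are illustrative assumptions, not values from this reference:

      import boto3

      # Hypothetical region and batch inference job ARN, for illustration only.
      personalize = boto3.client('personalize', region_name='us-east-1')

      response = personalize.describe_batch_inference_job(
          batchInferenceJobArn='arn:aws:personalize:us-east-1:123456789012:batch-inference-job/example-job'
      )

      job = response['batchInferenceJob']
      print(job['jobName'], job['status'])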
- Return type:
- dict 
- Returns:
- Response Syntax

      {
          'batchInferenceJob': {
              'jobName': 'string',
              'batchInferenceJobArn': 'string',
              'filterArn': 'string',
              'failureReason': 'string',
              'solutionVersionArn': 'string',
              'numResults': 123,
              'jobInput': {
                  's3DataSource': {
                      'path': 'string',
                      'kmsKeyArn': 'string'
                  }
              },
              'jobOutput': {
                  's3DataDestination': {
                      'path': 'string',
                      'kmsKeyArn': 'string'
                  }
              },
              'batchInferenceJobConfig': {
                  'itemExplorationConfig': {
                      'string': 'string'
                  }
              },
              'roleArn': 'string',
              'batchInferenceJobMode': 'BATCH_INFERENCE'|'THEME_GENERATION',
              'themeGenerationConfig': {
                  'fieldsForThemeGeneration': {
                      'itemName': 'string'
                  }
              },
              'status': 'string',
              'creationDateTime': datetime(2015, 1, 1),
              'lastUpdatedDateTime': datetime(2015, 1, 1)
          }
      }

- Response Structure
  - (dict) –
    - batchInferenceJob (dict) – Information on the specified batch inference job.
      - jobName (string) – The name of the batch inference job.
      - batchInferenceJobArn (string) – The Amazon Resource Name (ARN) of the batch inference job.
      - filterArn (string) – The ARN of the filter used on the batch inference job.
      - failureReason (string) – If the batch inference job failed, the reason for the failure.
      - solutionVersionArn (string) – The Amazon Resource Name (ARN) of the solution version from which the batch inference job was created.
      - numResults (integer) – The number of recommendations generated by the batch inference job. This number includes the error messages generated for failed input records.
      - jobInput (dict) – The Amazon S3 path that leads to the input data used to generate the batch inference job.
        - s3DataSource (dict) – The URI of the Amazon S3 location that contains your input data. The Amazon S3 bucket must be in the same region as the API endpoint you are calling.
          - path (string) – The file path of the Amazon S3 bucket.
          - kmsKeyArn (string) – The Amazon Resource Name (ARN) of the Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files.
      - jobOutput (dict) – The Amazon S3 bucket that contains the output data generated by the batch inference job.
        - s3DataDestination (dict) – Information on the Amazon S3 bucket in which the batch inference job's output is stored.
          - path (string) – The file path of the Amazon S3 bucket.
          - kmsKeyArn (string) – The Amazon Resource Name (ARN) of the Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files.
      - batchInferenceJobConfig (dict) – A string to string map of the configuration details of a batch inference job.
        - itemExplorationConfig (dict) – A string to string map specifying the exploration configuration hyperparameters, including explorationWeight and explorationItemAgeCutOff, you want to use to configure the amount of item exploration Amazon Personalize uses when recommending items. See User-Personalization.
          - (string) –
            - (string) –
      - roleArn (string) – The ARN of the Amazon Identity and Access Management (IAM) role that requested the batch inference job.
      - batchInferenceJobMode (string) – The job's mode.
      - themeGenerationConfig (dict) – The job's theme generation settings.
        - fieldsForThemeGeneration (dict) – Fields used to generate descriptive themes for a batch inference job.
          - itemName (string) – The name of the Items dataset column that stores the name of each item in the dataset.
      - status (string) – The status of the batch inference job. The status is one of the following values:
        - PENDING
        - IN PROGRESS
        - ACTIVE
        - CREATE FAILED
      - creationDateTime (datetime) – The time at which the batch inference job was created.
      - lastUpdatedDateTime (datetime) – The time at which the batch inference job was last updated.
 
 
 
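As a sketch of how the returned fields might be consumed, the loop below polls describe_batch_inference_job until the job reaches a terminal status; the job ARN and polling interval are illustrative assumptions, and the status field is checked directly rather than relying on a waiter:

      import time

      import boto3

      personalize = boto3.client('personalize')

      # Hypothetical batch inference job ARN, for illustration only.
      job_arn = 'arn:aws:personalize:us-east-1:123456789012:batch-inference-job/example-job'

      while True:
          job = personalize.describe_batch_inference_job(
              batchInferenceJobArn=job_arn
          )['batchInferenceJob']

          status = job['status']
          print(f'Batch inference job status: {status}')

          if status == 'ACTIVE':
              # The output location comes from the jobOutput structure described above.
              print('Output written to:', job['jobOutput']['s3DataDestination']['path'])
              break
          if status == 'CREATE FAILED':
              print('Job failed:', job.get('failureReason'))
              break

          time.sleep(60)  # job is still PENDING or IN PROGRESS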
 - Exceptions