AWS Certified AI
Practitioner Exam
Version: Demo
[ Total Questions: 10]
Web: www.certsout.com
Email: support@certsout.com
Amazon Web Services
AIF-C01
https://www.certsout.com
https://www.certsout.com/AIF-C01-test.html
IMPORTANT NOTICE
Feedback
We have developed a quality product and state-of-the-art service to protect our customers' interests. If you have any suggestions, please feel free to contact us at feedback@certsout.com
Support
If you have any questions about our product, please provide the following items:
exam code
screenshot of the question
login id/email
Then contact us at support@certsout.com and our technical experts will provide support within 24 hours.
Copyright
The product of each order has its own encryption code, so you should use it independently. Any unauthorized modification may result in legal action. We reserve the right of final interpretation of this statement.
Amazon Web Services - AIF-C01 Certs Exam
Pass with Valid Exam Questions Pool
Category Breakdown
Category                         Number of Questions
AI and ML Concepts               7
Responsible AI and Governance    1
AWS AI/ML Services and Tools     2
TOTAL                            10
Question #:1 - [AI and ML Concepts]
A company needs to build its own large language model (LLM) based on only the company's private data. 
The company is concerned about the environmental effect of the training process.
Which Amazon EC2 instance type has the LEAST environmental effect when training LLMs?
A. Amazon EC2 C series
B. Amazon EC2 G series
C. Amazon EC2 P series
D. Amazon EC2 Trn series
Answer: D
Explanation
The Amazon EC2 Trn series (Trainium) instances are designed for high-performance, cost-effective machine 
learning training while being energy-efficient. AWS Trainium-powered instances are optimized for deep 
learning models and have been developed to minimize environmental impact by maximizing energy 
efficiency.
Option D (Correct): "Amazon EC2 Trn series": This is the correct answer because the Trn series is purpose-
built for training deep learning models with lower energy consumption, which aligns with the company's 
concern about environmental effects.
Option A: "Amazon EC2 C series" is incorrect because it is intended for compute-intensive tasks but not 
specifically optimized for ML training with environmental considerations.
Option B: "Amazon EC2 G series" (Graphics Processing Unit instances) is optimized for graphics-intensive 
applications but does not focus on minimizing environmental impact for training.
Option C: "Amazon EC2 P series" is designed for ML training but does not offer the same level of energy 
efficiency as the Trn series.
AWS AI Practitioner References:
AWS Trainium Overview: AWS promotes Trainium instances as their most energy-efficient and cost-
effective solution for ML model training.
Question #:2 - [AI and ML Concepts]
In which stage of the generative AI model lifecycle are tests performed to examine the model's accuracy?
A. Deployment
B. Data selection
C. Fine-tuning
D. Evaluation
Answer: D
Explanation
The evaluation stage of the generative AI model lifecycle involves testing the model to assess its 
performance, including accuracy, coherence, and other metrics. This stage ensures the model meets the 
desired quality standards before deployment.
Exact Extract from AWS AI Documents:
From the AWS AI Practitioner Learning Path:
"The evaluation phase in the machine learning lifecycle involves testing the model against validation or test 
datasets to measure its performance metrics, such as accuracy, precision, recall, or task-specific metrics for 
generative AI models."
(Source: AWS AI Practitioner Learning Path, Module on Machine Learning Lifecycle)
Detailed Explanation:
Option A: Deployment. Deployment involves making the model available for use in production. While monitoring occurs post-deployment, accuracy testing is performed earlier, in the evaluation stage.
Option B: Data selection. Data selection involves choosing and preparing data for training, not testing the model's accuracy.
Option C: Fine-tuning. Fine-tuning adjusts a pre-trained model to improve performance on a specific task, but it is not the stage where accuracy is formally tested.
Option D: Evaluation. This is the correct answer. The evaluation stage is where tests are conducted to examine the model's accuracy and other performance metrics, ensuring it meets requirements.
References:
AWS AI Practitioner Learning Path: Module on Machine Learning Lifecycle
Amazon SageMaker Developer Guide: Model Evaluation (https://docs.aws.amazon.com/sagemaker/latest/dg/model-evaluation.html)
AWS Documentation: Generative AI Lifecycle (https://aws.amazon.com/machine-learning/)
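The evaluation stage described above can be sketched in a few lines: score the model's outputs against held-out ground-truth labels. A minimal illustrative example (the labels and predictions below are made up):

```python
# Illustrative sketch of the evaluation stage: scoring a model's
# predictions against a held-out test set.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    assert len(y_true) == len(y_pred)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

y_true = ["cat", "dog", "cat", "bird", "dog"]   # held-out ground truth
y_pred = ["cat", "dog", "bird", "bird", "dog"]  # model outputs

print(f"accuracy = {accuracy(y_true, y_pred):.2f}")  # 4 of 5 correct -> 0.80
```

In practice this comparison is run over a full validation or test dataset, alongside other metrics such as precision and recall.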
Question #:3 - [AI and ML Concepts]
A company needs to choose a model from Amazon Bedrock to use internally. The company must identify a 
model that generates responses in a style that the company's employees prefer.
What should the company do to meet these requirements?
A. Evaluate the models by using built-in prompt datasets.
B. Evaluate the models by using a human workforce and custom prompt datasets.
C. Use public model leaderboards to identify the model.
D. Use the model InvocationLatency runtime metrics in Amazon CloudWatch when trying models.
Answer: B
Explanation
To determine which model generates responses in a style that the company's employees prefer, the best 
approach is to use a human workforce to evaluate the models with custom prompt datasets. This method 
allows for subjective evaluation based on the specific stylistic preferences of the company's employees, which 
cannot be effectively assessed through automated methods or pre-built datasets.
Option B (Correct): "Evaluate the models by using a human workforce and custom prompt datasets": This is 
the correct answer as it directly involves human judgment to evaluate the style and quality of the responses, 
aligning with employee preferences.
Option A: "Evaluate the models by using built-in prompt datasets" is incorrect because built-in datasets may 
not capture the company's specific stylistic requirements.
Option C: "Use public model leaderboards to identify the model" is incorrect as leaderboards typically 
measure model performance on standard benchmarks, not on stylistic preferences.
Option D: "Use the model InvocationLatency runtime metrics in Amazon CloudWatch" is incorrect because 
latency metrics do not provide any information about the style of the model's responses.
AWS AI Practitioner References:
Model Evaluation Techniques on AWS: AWS suggests using human evaluators to assess qualitative aspects 
of model outputs, such as style and tone, to ensure alignment with organizational preferences
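As a rough illustration of how such a human evaluation might be aggregated, the sketch below tallies which model evaluators preferred across custom prompts. The model names and votes are invented for the example:

```python
# Illustrative aggregation of a human-workforce evaluation: each evaluator
# picks the model whose response style they prefer for a custom prompt.
from collections import Counter

def preferred_model(votes):
    """Return the model that received the most human preference votes."""
    return Counter(votes).most_common(1)[0][0]

# Hypothetical votes collected from five evaluators
votes = ["model-a", "model-b", "model-b", "model-a", "model-b"]
print(preferred_model(votes))  # model-b
```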
Question #:4 - [AI and ML Concepts]
A company has developed an ML model for image classification. The company wants to deploy the model to 
production so that a web application can use the model.
The company needs to implement a solution to host the model and serve predictions without managing any of 
the underlying infrastructure.
Which solution will meet these requirements?
A. Use Amazon SageMaker Serverless Inference to deploy the model.
B. Use Amazon CloudFront to deploy the model.
C. Use Amazon API Gateway to host the model and serve predictions.
D. Use AWS Batch to host the model and serve predictions.
Answer: A
Explanation
Amazon SageMaker Serverless Inference is the correct solution for deploying an ML model to production in a 
way that allows a web application to use the model without the need to manage the underlying infrastructure.
Amazon SageMaker Serverless Inference provides a fully managed environment for deploying machine learning models. It automatically provisions, scales, and manages the infrastructure required to host the model, removing the need for the company to manage servers or other underlying infrastructure.
Why Option A is Correct:
No Infrastructure Management: SageMaker Serverless Inference handles the infrastructure management for 
deploying and serving ML models. The company can simply provide the model and specify the required 
compute capacity, and SageMaker will handle the rest.
Cost-Effectiveness: The serverless inference option is ideal for applications with intermittent or unpredictable 
traffic, as the company only pays for the compute time consumed while handling requests.
Integration with Web Applications: This solution allows the model to be easily accessed by web applications 
via RESTful APIs, making it an ideal choice for hosting the model and serving predictions.
Why Other Options are Incorrect:
B. Use Amazon CloudFront to deploy the model: CloudFront is a content delivery network (CDN) service for 
distributing content, not for deploying ML models or serving predictions.
C. Use Amazon API Gateway to host the model and serve predictions: API Gateway is used for creating, 
deploying, and managing APIs, but it does not provide the infrastructure or the required environment to host 
and run ML models.
D. Use AWS Batch to host the model and serve predictions: AWS Batch is designed for running batch 
computing workloads and is not optimized for real-time inference or hosting machine learning models.
Thus, A is the correct answer, as it aligns with the requirement of deploying an ML model without managing 
any underlying infrastructure.
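As a hedged sketch of what option A looks like in practice, the snippet below builds the request an application might pass to SageMaker's create_endpoint_config API; the ServerlessConfig block is what tells SageMaker to manage the infrastructure. The names and sizing values are illustrative, and in practice the dict would be submitted via boto3 (sagemaker.create_endpoint_config(**request)):

```python
# Sketch: build a SageMaker endpoint-config request for serverless
# inference. Model and config names are placeholders.

def serverless_endpoint_config(config_name, model_name,
                               memory_mb=2048, max_concurrency=5):
    """Build a create_endpoint_config request whose ServerlessConfig
    makes SageMaker provision and scale the infrastructure itself."""
    return {
        "EndpointConfigName": config_name,
        "ProductionVariants": [{
            "VariantName": "AllTraffic",
            "ModelName": model_name,
            "ServerlessConfig": {
                "MemorySizeInMB": memory_mb,       # memory per instance
                "MaxConcurrency": max_concurrency  # concurrent invocations
            },
        }],
    }

request = serverless_endpoint_config("image-clf-config", "image-clf-model")
print(request["ProductionVariants"][0]["ServerlessConfig"])
```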
Question #:5 - [Responsible AI and Governance]
Which phase of the ML lifecycle determines compliance and regulatory requirements?
A. Feature engineering
B. Model training
C. Data collection
D. Business goal identification
Answer: D
Explanation
The business goal identification phase of the ML lifecycle involves defining the objectives of the project and 
understanding the requirements, including compliance and regulatory considerations. This phase ensures the 
ML solution aligns with legal and organizational standards before proceeding to technical stages like data 
collection or model training.
Exact Extract from AWS AI Documents:
From the AWS AI Practitioner Learning Path:
"The business goal identification phase involves defining the problem to be solved, identifying success 
metrics, and determining compliance and regulatory requirements to ensure the ML solution adheres to legal 
and organizational standards."
(Source: AWS AI Practitioner Learning Path, Module on Machine Learning Lifecycle)
Detailed Explanation:
Option A: Feature engineering. Feature engineering involves creating or selecting features for model training, which occurs after compliance requirements are identified. It does not address regulatory concerns.
Option B: Model training. Model training focuses on building the ML model using data, not on determining compliance or regulatory requirements.
Option C: Data collection. Data collection involves gathering data for training, but compliance and regulatory requirements (e.g., data privacy laws) are defined earlier, in the business goal identification phase.
Option D: Business goal identification. This is the correct answer. This phase ensures that compliance and regulatory requirements are considered at the outset, shaping the entire ML project.
References:
AWS AI Practitioner Learning Path: Module on Machine Learning Lifecycle
Amazon SageMaker Developer Guide: ML Workflow (https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-mlconcepts.html)
AWS Well-Architected Framework: Machine Learning Lens (https://docs.aws.amazon.com/wellarchitected/latest/machine-learning-lens/)
Question #:6 - [AWS AI/ML Services and Tools]
Which functionality does Amazon SageMaker Clarify provide?
A. Integrates a Retrieval Augmented Generation (RAG) workflow
B. Monitors the quality of ML models in production
C. Documents critical details about ML models
D. Identifies potential bias during data preparation
Answer: D
Explanation
Amazon SageMaker Clarify detects potential bias in data and models and helps explain model predictions. During data preparation, Clarify computes pre-training bias metrics (for example, class imbalance across demographic groups); after training, it can measure post-training bias and generate feature-attribution explanations for predictions.
Option D (Correct): "Identifies potential bias during data preparation": Clarify's pre-training bias detection runs against the dataset before model training, which is exactly the functionality described.
Option A: "Integrates a Retrieval Augmented Generation (RAG) workflow" is incorrect because RAG workflows are provided by services such as Amazon Bedrock Knowledge Bases, not SageMaker Clarify.
Option B: "Monitors the quality of ML models in production" is incorrect because that is the role of Amazon SageMaker Model Monitor.
Option C: "Documents critical details about ML models" is incorrect because that describes Amazon SageMaker Model Cards.
AWS AI Practitioner References:
Amazon SageMaker Clarify documentation: covers pre-training and post-training bias detection and model explainability (https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-bias.html)
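One concrete pre-training bias metric that SageMaker Clarify reports is class imbalance between facet groups. The illustrative sketch below computes it by hand on a made-up dataset; in practice Clarify calculates this (and many other metrics) for you during data preparation:

```python
# Illustrative computation of a pre-training bias metric: class imbalance
# (CI) between two facet groups. The data below is made up.

def class_imbalance(facet_values, advantaged="group_a"):
    """CI = (n_a - n_d) / (n_a + n_d); 0 means balanced, +/-1 means
    one group is entirely absent from the data."""
    n_a = sum(1 for v in facet_values if v == advantaged)
    n_d = len(facet_values) - n_a
    return (n_a - n_d) / (n_a + n_d)

rows = ["group_a"] * 80 + ["group_b"] * 20   # skewed sample
print(f"class imbalance = {class_imbalance(rows):+.2f}")  # +0.60
```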
Question #:7 - [AI and ML Concepts]
A retail company wants to build an ML model to recommend products to customers. The company wants to 
build the model based on responsible practices. Which practice should the company apply when collecting 
data to decrease model bias?
A. Use data from only customers who match the demography of the company's overall customer base.
B. Collect data from customers who have a past purchase history.
C. Ensure that the data is balanced and collected from a diverse group.
D. Ensure that the data is from a publicly available dataset.
Answer: C
Explanation
The retail company wants to build an ML model for product recommendations using responsible practices to 
decrease model bias. Collecting balanced and diverse data ensures the model does not favor specific groups, 
reducing bias and promoting fairness, a key responsible AI practice.
Exact Extract from AWS AI Documents:
From the AWS AI Practitioner Learning Path:
"To reduce model bias, it is critical to collect balanced and diverse data that represents various 
demographics and user groups. This practice ensures fairness and prevents the model from 
disproportionately favoring certain populations."
(Source: AWS AI Practitioner Learning Path, Module on Responsible AI)
Detailed Explanation:
Option A: Use data from only customers who match the demography of the company's overall customer base. Limiting data to a specific demographic may reinforce existing biases, failing to address underrepresented groups.
Option B: Collect data from customers who have a past purchase history. Focusing only on customers with purchase history may exclude new users, potentially introducing bias, and does not address diversity.
Option C: Ensure that the data is balanced and collected from a diverse group. This is the correct answer. A balanced and diverse dataset reduces bias by ensuring the model learns from a representative sample, aligning with responsible AI practices.
Option D: Ensure that the data is from a publicly available dataset. Public datasets may not be diverse or representative of the company's customer base and could introduce unrelated biases, failing to address fairness.
References:
AWS AI Practitioner Learning Path: Module on Responsible AI
Amazon SageMaker Developer Guide: Bias and Fairness in ML (https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-bias.html)
AWS Documentation: Responsible AI Practices (https://aws.amazon.com/machine-learning/responsible-ai/)
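A simple pre-collection sanity check in the spirit of option C: compare each demographic group's share of the sample against a minimum threshold and flag under-represented groups. The group names and threshold below are illustrative:

```python
# Sketch of a data-balance check: flag groups whose share of the
# sample falls below a minimum threshold. Values are made up.
from collections import Counter

def underrepresented(groups, min_share=0.15):
    """Return the groups whose share of the sample is below min_share."""
    counts = Counter(groups)
    total = len(groups)
    return sorted(g for g, n in counts.items() if n / total < min_share)

sample = ["18-25"] * 50 + ["26-40"] * 40 + ["41-65"] * 8 + ["65+"] * 2
print(underrepresented(sample))  # ['41-65', '65+']
```

A check like this would prompt the team to collect more data from the flagged groups before training.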
Question #:8 - [AI and ML Concepts]
A company built an AI-powered resume screening system. The company used a large dataset to train the 
model. The dataset contained resumes that were not representative of all demographics. Which core 
dimension of responsible AI does this scenario present?
A. Fairness
B. Explainability
C. Privacy and security
D. Transparency
Answer: A
Explanation
Fairness refers to the absence of bias in AI models. Using non-representative datasets leads to biased 
predictions, affecting specific demographics unfairly. Explainability, privacy, and transparency are important 
but not directly related to this scenario. References: AWS Responsible AI Framework.
Question #:9 - [AI and ML Concepts]
A company is building an ML model to analyze archived data. The company must perform inference on large 
datasets that are multiple GBs in size. The company does not need to access the model predictions 
immediately.
Which Amazon SageMaker inference option will meet these requirements?
A. Batch transform
B. Real-time inference
C. Serverless inference
D. Asynchronous inference
Answer: A
Explanation
Batch transform in Amazon SageMaker is designed for offline processing of large datasets. It is ideal for 
scenarios where immediate predictions are not required, and the inference can be done on large datasets that 
are multiple gigabytes in size. This method processes data in batches, making it suitable for analyzing 
archived data without the need for real-time access to predictions.
Option A (Correct): "Batch transform": This is the correct answer because batch transform is optimized for 
handling large datasets and is suitable when immediate access to predictions is not required.
Option B: "Real-time inference" is incorrect because it is used for low-latency, real-time prediction needs, 
which is not required in this case.
Option C: "Serverless inference" is incorrect because it is designed for small-scale, intermittent inference 
requests, not for large batch processing.
Option D: "Asynchronous inference" is incorrect because it is used when immediate predictions are required, 
but with high throughput, whereas batch transform is more suitable for very large datasets.
AWS AI Practitioner References:
Batch Transform on AWS SageMaker: AWS recommends using batch transform for large datasets when real-
time processing is not needed, ensuring cost-effectiveness and scalability.
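As a hedged sketch of option A, the snippet below builds the shape of a SageMaker create_transform_job request for offline inference over an S3 dataset. The model, bucket, and job names are placeholders, and in practice the dict would be submitted with boto3 (sagemaker.create_transform_job(**request)):

```python
# Sketch: build a batch transform request that scores an entire
# archived dataset offline, with no persistent real-time endpoint.

def batch_transform_request(job_name, model_name, input_s3, output_s3):
    """Build a create_transform_job request for offline inference."""
    return {
        "TransformJobName": job_name,
        "ModelName": model_name,
        "TransformInput": {
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": input_s3,
            }},
            "SplitType": "Line",  # split large files record by record
        },
        "TransformOutput": {"S3OutputPath": output_s3},
        "TransformResources": {
            "InstanceType": "ml.m5.xlarge",  # example instance type
            "InstanceCount": 1,
        },
    }

request = batch_transform_request(
    "archive-scoring-job", "image-clf-model",
    "s3://example-bucket/archive/", "s3://example-bucket/predictions/")
print(request["TransformInput"]["SplitType"])
```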
Question #:10 - [AWS AI/ML Services and Tools]
A financial institution is using Amazon Bedrock to develop an AI application. The application is hosted in a 
VPC. To meet regulatory compliance standards, the VPC is not allowed access to any internet traffic.
Which AWS service or feature will meet these requirements?
AWS PrivateLink
Amazon Macie
Amazon CloudFront
Internet gateway
Answer: A
Explanation
AWS PrivateLink enables private connectivity between VPCs and AWS services without exposing traffic to 
the public internet. This feature is critical for meeting regulatory compliance standards that require isolation 
from public internet traffic.
Option A (Correct): "AWS PrivateLink": This is the correct answer because it allows secure access to 
Amazon Bedrock and other AWS services from a VPC without internet access, ensuring compliance with 
regulatory standards.
Option B: "Amazon Macie" is incorrect because it is a security service for data classification and protection, 
not for managing private network traffic.
Option C: "Amazon CloudFront" is incorrect because it is a content delivery network service and does not 
provide private network connectivity.
Option D: "Internet gateway" is incorrect as it enables internet access, which violates the VPC's no-internet-
traffic policy.
AWS AI Practitioner References:
AWS PrivateLink Documentation: AWS highlights PrivateLink as a solution for connecting VPCs to AWS 
services privately, which is essential for organizations with strict regulatory requirements.
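As a hedged sketch, the snippet below assembles the parameters for an interface VPC endpoint (PrivateLink) to the Bedrock runtime API. The IDs are placeholders, and the service name follows the usual com.amazonaws.&lt;region&gt;.&lt;service&gt; pattern, which should be verified for the target region; in practice the dict would be passed to boto3's ec2.create_vpc_endpoint(**params):

```python
# Sketch: parameters for an interface VPC endpoint (AWS PrivateLink)
# so a VPC can reach Amazon Bedrock without an internet gateway.
# All IDs below are placeholders.

def bedrock_endpoint_params(vpc_id, subnet_ids, sg_ids, region="us-east-1"):
    """Build create_vpc_endpoint parameters for Bedrock model invocation."""
    return {
        "VpcEndpointType": "Interface",
        "VpcId": vpc_id,
        # PrivateLink service name for the Bedrock runtime API
        "ServiceName": f"com.amazonaws.{region}.bedrock-runtime",
        "SubnetIds": subnet_ids,
        "SecurityGroupIds": sg_ids,
        "PrivateDnsEnabled": True,  # resolve the public API name privately
    }

params = bedrock_endpoint_params(
    "vpc-0123example", ["subnet-0aaaexample"], ["sg-0bbbexample"])
print(params["ServiceName"])
```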
About certsout.com
certsout.com was founded in 2007. We provide the latest high-quality IT/Business certification training exam questions, study guides, and practice tests.
We help you pass any IT/Business certification exam with a 100% pass guarantee or a full refund, covering Cisco, CompTIA, Citrix, EMC, HP, Oracle, VMware, Juniper, Check Point, LPI, Nortel, EXIN, and more.
View list of all certification exams: All vendors
We prepare state-of-the art practice tests for certification exams. You can reach us at any of the email addresses 
listed below.
Sales: sales@certsout.com
Feedback: feedback@certsout.com
Support: support@certsout.com
If you have any problems with IT certifications or our products, write to us and we will get back to you within 24 hours.