ARA-C01
Exam Name: SnowPro Advanced Architect Certification
Full version: 377 Q&As
Share some ARA-C01 exam dumps below.
1. Who can view account-level Credit and Storage Usage?
A. ACCOUNTADMIN
B. A role which has been granted the MONITOR USAGE global privilege
C. STORAGEADMIN
D. STORAGEMONITOR
Answer: A,B
Explanation:
If you have the necessary administrator privileges, you can use the Snowflake web interface to
view details about the number of warehouse credits, the number of cloud services credits, and
the amount of storage your account uses on a monthly and daily basis.
Usage information for your account is displayed in the Billing & Usage tab available from the
Account page in the Snowflake web interface.
Note
The tab and icon are only displayed in the interface if you are using the appropriate role in the
interface (ACCOUNTADMIN or a role granted the MONITOR USAGE global privilege).
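As a quick hands-on sketch (the role name here is hypothetical), granting the privilege looks like this:
USE ROLE ACCOUNTADMIN;
GRANT MONITOR USAGE ON ACCOUNT TO ROLE reporting_role; -- reporting_role is an example role
Any user with that role can then see the account-level credit and storage usage.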
2. VALIDATION_MODE does not support COPY statements that transform data during a load
A. TRUE
B. FALSE
Answer: A
Explanation:
VALIDATION_MODE does not support COPY statements that transform data during a load. If
the parameter is specified, the COPY statement returns an error.
Use the VALIDATE table function to view all errors encountered during a previous load. Note
that this function also does not support COPY statements that transform data during a load.
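As a minimal sketch (the table name is hypothetical), you can inspect errors from the most recent load with the VALIDATE table function:
-- '_last' refers to the most recent COPY executed in the current session
SELECT * FROM TABLE(VALIDATE(mytable, JOB_ID => '_last'));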
3. select metadata$filename, metadata$file_row_number from @filestage/data1.json.gz;
Please select the correct statements for the above-mentioned query.
A. FILESTAGE is the stage name, METADATA$FILE_ROW_NUMBER will give the row number
for each record in the container staged data file
B. FILESTAGE is the file name, METADATA$FILE_ROW_NUMBER will give the path to the
data file in the stage
C. FILESTAGE is the stage name, METADATA$FILE_ROW_NUMBER will give the path to the
data
file in the stage
Answer: A
Explanation:
Metadata Columns
Currently, the following metadata columns can be queried or copied into tables:
METADATA$FILENAME
Name of the staged data file the current row belongs to. Includes the path to the data file in the stage.
METADATA$FILE_ROW_NUMBER
Row number for each record in the container staged data file.
Please also go through this link thoroughly: https://docs.snowflake.com/en/user-guide/querying-metadata.html#metadata-columns
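To try this yourself, you can select the metadata columns alongside the file contents; the named file format below is an assumption you would create first:
SELECT metadata$filename, metadata$file_row_number, t.$1
FROM @filestage/data1.json.gz (FILE_FORMAT => 'my_json_format') t; -- my_json_format is hypothetical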
4. Please choose all the securable objects from the given options.
A. SCHEMA
B. STORED PROCEDURE
C. EXTERNAL TABLE
D. DATABASE
E. UDF
Answer: A,B,C,D,E
Explanation:
This is an important question; refer to the Snowflake documentation on this topic for more details: https://docs.snowflake.com/en/user-guide/security-access-control-overview.html#securable-objects
5. A company has an inbound share set up with eight tables and five secure views. The
company plans to make the share part of its production data pipelines.
Which actions can the company take with the inbound share? (Choose two.)
A. Clone a table from a share.
B. Grant modify permissions on the share.
C. Create a table from the shared database.
D. Create additional views inside the shared database.
E. Create a table stream on the shared table.
Answer: C,E
6. Column concatenation.
7. A company is storing large numbers of small JSON files (ranging from 1-4 bytes) that are
received from IoT devices and sent to a cloud provider. In any given hour, 100,000 files are
added to the cloud provider.
What is the MOST cost-effective way to bring this data into a Snowflake table?
A. An external table
B. A pipe
C. A stream
D. A copy command at regular intervals
Answer: B
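A minimal sketch of the pipe approach (stage, table, and pipe names are hypothetical). With AUTO_INGEST, cloud event notifications trigger serverless loading per file, which avoids keeping a warehouse running for 100,000 small files per hour:
CREATE OR REPLACE PIPE iot_pipe AUTO_INGEST = TRUE AS
COPY INTO iot_events -- hypothetical target table
FROM @iot_stage -- hypothetical external stage receiving the JSON files
FILE_FORMAT = (TYPE = 'JSON');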
8. To convert JSON null value to SQL null value, you will use
A. STRIP_NULL_VALUE
B. IS_NULL_VALUE
C. NULL_IF
Answer: A
Explanation:
STRIP_NULL_VALUE converts a JSON "null" value to a SQL NULL value. All other VARIANT values are passed through unchanged.
Please remember this is a semi-structured data function and is different from the STRIP_NULL_VALUES = TRUE | FALSE file format option used when loading data into a table from a stage.
Also, please try the below hands-on exercise
create or replace table mytable (src variant);

insert into mytable
select parse_json(column1)
from values
('{ "a": "1", "b": "2", "c": null }'),
('{ "a": "1", "b": "2", "c": "3" }');

select strip_null_value(src:c) from mytable;
https://docs.snowflake.com/en/sql-reference/functions/strip_null_value.html#strip-null-value
9. You are creating a TASK to query table streams created on a raw table and insert subsets of rows into multiple tables. You followed the steps below, but when you reached the step to resume the task, you received an error message indicating a missing privilege.
Why is this error thrown, and who can grant you the required privilege?
Steps followed to reproduce this error:
-- Create a landing table to store raw JSON data.
-- Snowpipe could load data into this table.
create or replace table raw (var variant);

-- Create a stream to capture inserts to the landing table.
-- A task will consume a set of columns from this stream.
create or replace stream rawstream1 on table raw;

-- Create a second stream to capture inserts to the landing table.
-- A second task will consume another set of columns from this stream.
create or replace stream rawstream2 on table raw;

-- Create a table that stores the names of office visitors identified in the raw data.
create or replace table names (id int, first_name string, last_name string);

-- Create a table that stores the visitation dates of office visitors identified in the raw data.
create or replace table visits (id int, dt date);

-- Create a task that inserts new name records from the rawstream1 stream into the names table
-- every minute when the stream contains records.
-- Replace the 'etl_wh' warehouse with a warehouse that your role has USAGE privilege on.
create or replace task raw_to_names
warehouse = etl_wh
schedule = '1 minute'
when
system$stream_has_data('rawstream1')
as
merge into names n
using (select var:id id, var:fname fname, var:lname lname from rawstream1) r1
on n.id = to_number(r1.id)
when matched then update set n.first_name = r1.fname, n.last_name = r1.lname
when not matched then insert (id, first_name, last_name) values (r1.id, r1.fname, r1.lname);

-- Create another task that merges visitation records from the rawstream2 stream into the visits table
-- every minute when the stream contains records.
-- Records with new IDs are inserted into the visits table;
-- records with IDs that already exist in the visits table update the DT column.
-- Replace the 'etl_wh' warehouse with a warehouse that your role has USAGE privilege on.
create or replace task raw_to_visits
warehouse = etl_wh
schedule = '1 minute'
when
system$stream_has_data('rawstream2')
as
merge into visits v
using (select var:id id, var:visit_dt visit_dt from rawstream2) r2
on v.id = to_number(r2.id)
when matched then update set v.dt = r2.visit_dt
when not matched then insert (id, dt) values (r2.id, r2.visit_dt);

-- Resume both tasks.
alter task raw_to_names resume;
A. The role used to resume the task does not have EXECUTE TASK privilege. Only
ACCOUNTADMIN can provide that privilege to the role.
B. The role used to resume the task does not have EXECUTE TASK privilege. Both
SECURITYADMIN and ACCOUNTADMIN can provide that privilege to the role.
C. The role used to resume the task does not have EXECUTE TASK privilege. Only TASK
OWNER
can provide that privilege to the role.
Answer: A
Explanation:
Please try this out yourself rather than memorizing the concept.
The EXECUTE TASK privilege can be granted only by ACCOUNTADMIN, not even by the task owner.
To test this, when you get this error, run the below commands in a new worksheet.
USE ROLE SECURITYADMIN;
GRANT execute task on account TO role <role name>; -- Replace with the role you are using for the task
You will get an error as below:
Grant not executed: Insufficient privileges.
Now try it again as below:
USE ROLE ACCOUNTADMIN;
GRANT execute task on account TO role <role name>; -- Replace with the role you are using for the task
The grant should work now. Go back and execute the resume command (alter task raw_to_names resume;). It should work like a charm.
You have learnt two important things here, and you will be tested on both in the exam.
10. A user has the appropriate privilege to see unmasked data in a column.
If the user loads this column data into another column that does not have a masking policy,
what will occur?
A. Unmasked data will be loaded in the new column.
B. Masked data will be loaded into the new column.
C. Unmasked data will be loaded into the new column but only users with the appropriate
privileges will be able to see the unmasked data.
D. Unmasked data will be loaded into the new column and no users will be able to see the
unmasked data.
Answer: A
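A small sketch to illustrate the behaviour (policy, table, and role names are hypothetical):
-- Mask email for everyone except the PII_READER role
CREATE OR REPLACE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
CASE WHEN CURRENT_ROLE() IN ('PII_READER') THEN val ELSE '*** MASKED ***' END;
ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY email_mask;
-- Run as PII_READER: the new column has no policy, so the copied values are stored unmasked (answer A)
CREATE TABLE customers_copy AS SELECT email FROM customers;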
11. A query you initiated is taking too long. You've gone into the History area to check whether
this query (which usually runs every hour) is supposed to take a long time. Check all true
statements.
A. Information in the History area can be filtered to show a single Session
B. Information in the History area can help you visually compare query times
C. Information in the History area can be filtered to show a single Warehouse
D. If you decide to end the query, you must return to the worksheet to Abort the query
E. Information in the History area can be filtered to show a single User
Answer: A,B,C,E
Explanation:
Snowflake provides many filter criteria, such as Status, User, Warehouse, Duration, End Time, Session ID, SQL Text, Query ID, Statement Type, and Query Tag.
12. An Architect on a new project has been asked to design an architecture that meets
Snowflake security, compliance, and governance requirements as follows:
1) Use Tri-Secret Secure in Snowflake
2) Share some information stored in a view with another Snowflake customer
3) Hide portions of sensitive information from some columns
4) Use zero-copy cloning to refresh the non-production environment from the production
environment
To meet these requirements, which design elements must be implemented? (Choose three.)
A. Define row access policies.
B. Use the Business Critical edition of Snowflake.
C. Create a secure view.
D. Use the Enterprise edition of Snowflake.
E. Use Dynamic Data Masking.
F. Create a materialized view.
Answer: B,C,E
13. In which of the following circumstances is refreshing a secondary database not allowed?
A. Materialized views
B. Primary database contains transient tables
C. Databases created from shares
D. Primary database has external table
Answer: C,D
Explanation:
Creating or refreshing a secondary database is blocked if an external table exists in the primary database. Support is planned for a future version of database replication. https://docs.snowflake.com/en/user-guide/database-replication-intro.html#replicated-database-objects
14. Running EXPLAIN on a query does not require a running warehouse
A. TRUE
B. FALSE
Answer: A
Explanation:
EXPLAIN compiles the SQL statement, but does not execute it, so EXPLAIN does not require a
running
warehouse.
Although EXPLAIN does not consume any compute credits, the compilation of the query does
consume
Cloud Service credits, just as other metadata operations do.
To post-process the output of this command, you can:
Use the RESULT_SCAN function, which treats the output as a table that can be queried.
Generate the output in JSON format and insert the JSON-formatted output into a table for
analysis later. If you store the output in JSON format, you can use the function
SYSTEM$EXPLAIN_JSON_TO_TEXT or EXPLAIN_JSON to convert the JSON to a more
human readable format (either tabular or formatted text).
The assignedPartitions and assignedBytes values are upper bound estimates for query
execution. Runtime optimizations such as join pruning can reduce the number of partitions and
bytes scanned during query execution.
The EXPLAIN plan is the "logical" explain plan. It shows the operations that will be performed,
and their logical relationship to each other. The actual execution order of the operations in the
plan does not necessarily match the logical order shown by the plan.
https://docs.snowflake.com/en/sql-reference/sql/explain.html#usage-notes
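As a quick hands-on sketch (the queried table is hypothetical):
EXPLAIN SELECT * FROM mytable WHERE id = 42; -- compiles only; the statement is not executed
SELECT * FROM TABLE(RESULT_SCAN(LAST_QUERY_ID())); -- post-process the plan output as a table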
15. A company is using a Snowflake account in Azure. The account has SAML SSO set up
using ADFS as a SCIM identity provider.
To validate Private Link connectivity, an Architect performed the following steps:
* Confirmed Private Link URLs are working by logging in with a username/password account
* Verified DNS resolution by running nslookups against Private Link URLs
* Validated connectivity using SnowCD
* Disabled public access using a network policy set to use the company’s IP address range
However, the following error message is received when using SSO to log into the company
account:
IP XX.XXX.XX.XX is not allowed to access snowflake. Contact your local security administrator.
What steps should the Architect take to resolve this error and ensure that the account is
accessed using only Private Link? (Choose two.)
A. Alter the Azure security integration to use the Private Link URLs.
B. Add the IP address in the error message to the allowed list in the network policy.
C. Generate a new SCIM access token using system$generate_scim_access_token and save it
to Azure AD.
D. Update the configuration of the Azure AD SSO to use the Private Link URLs.
E. Open a case with Snowflake Support to authorize the Private Link URLs’ access to the
account.
Answer: B,C
16. Which command do you run to remove files from stage?
A. REMOVE
B. DELETE
C. PURGE
D. CLEAN
Answer: A
Explanation:
REMOVE
Removes files that have been staged (i.e. uploaded from a local file system or unloaded from a
table) in
one of the following Snowflake internal stages:
Named internal stage.
Stage for a specified table.
Stage for the current user.
Note that using the command to remove files from an external stage might work but is not
officially supported.
REMOVE can be abbreviated to RM. https://docs.snowflake.com/en/sql-reference/sql/remove.html#remove
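A few example invocations (stage and table names are hypothetical):
REMOVE @my_int_stage/path1/; -- named internal stage
RM @%mytable; -- table stage, using the RM abbreviation
RM @~ PATTERN='.*jun.*'; -- user stage, removing files matching a pattern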
17. If an account has federated authentication enabled, can Snowflake admins still maintain user IDs and passwords in Snowflake?
A. No
B. Yes
Answer: B
Explanation:
With federated authentication enabled for your account, Snowflake still allows maintaining and
using Snowflake user credentials (login name and password).
In other words:
- Account and security administrators can still create users with passwords maintained in
Snowflake.
- Users can still log into Snowflake using their Snowflake credentials.
However, if federated authentication is enabled for your account, Snowflake does not
recommend maintaining user passwords in Snowflake. Instead, user passwords should be
maintained solely in your IdP.
18. What is the recommended data file size for Snowpipe continuous loading?
A. 1 GB Compressed
B. if file taking more than a minute, then split the files into more files
C. Same as of Bulk Loading (10 MB - 100 MB compressed)
D. Same as of Bulk Loading (10 MB - 100 MB uncompressed)
Answer: B,C
Explanation:
Snowpipe is designed to load new data typically within a minute after a file notification is sent.
Follow the same file-sizing best practices as bulk loading (10 MB - 100 MB compressed), but split the data into more files if loading takes more than a minute.
19. What type of privilege should a data consumer have to administer shares?
A. SHOW SHARE
B. EXPORT SHARE
C. IMPORT SHARES
D. LIST SHARE
Answer: C
Explanation:
Data Consumer must use ACCOUNTADMIN role or a role granted the IMPORT SHARES global
privilege to administer shares.
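A minimal sketch of the grant (the role name is hypothetical; in SQL the global privilege is written IMPORT SHARE):
USE ROLE ACCOUNTADMIN;
GRANT IMPORT SHARE ON ACCOUNT TO ROLE data_engineer; -- data_engineer is an example role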
20. Files stored in a Snowflake internal stage are automatically encrypted using either AES-128 or AES-256 strong encryption.
A. TRUE
B. FALSE
Answer: A
Explanation:
All files stored in internal stages (for data loading/unloading) are automatically encrypted (using either AES-128 standard or AES-256 strong encryption).
https://docs.snowflake.com/en/user-guide/admin-security.html
Please note that if you use an external stage, it is the customer's responsibility to encrypt the data in the stage.
21. An hour ago, you ran a complex query. You then ran several simple queries from the same
worksheet. You want to export the results from the complex query but they are no longer loaded
in the Results pane of the worksheet.
What is the least costly way to download the results?
A. Type the command SELECT RESULTS(-3) into the Worksheet and click "Run"
B. Click on History -> Locate the Query -> Click the QueryID -> Use the "Export Result" button
C. Click on History -> Locate the Query -> Click "Download Results" in column 3
D. Type the command Select RESULTS(<Account>,<User>,<QueryID>) in the Worksheet and
click
"Run"
Answer: B
Explanation:
The History page displays queries executed in the last 14 days, starting with the most recent ones. You can use the End Time filter to display queries based on a specified date; however, if you specify a date earlier than the last 14 days, no results are returned.
You can export results only for queries for which you can view the results (i.e. queries you've executed). If you didn't execute a query or the query result is no longer available, the Export Result button is not displayed for the query.
The web interface only supports exporting results up to 100 MB in size. If a query result exceeds this limit, you are prompted whether to proceed with the export. The export prompts may differ depending on your browser. For example, in Safari, you are prompted only for an export format (CSV or TSV). After the export completes, you are prompted to download the exported result to a new window, in which you can use the Save Page As… browser option to save the result to a file.
22. You are running a large join on Snowflake. You ran it on a medium warehouse and it took almost an hour. You then tried the join on a large warehouse, but the performance still did not improve.
What is the most likely cause of this?
A. There may be skew in your data, where one value of the join column occurs significantly more often than the rest of the values in the column
B. Your warehouses do not have enough memory
C. Since you have configured the warehouse with a low auto-suspend value, your warehouse is suspending frequently
Answer: A
Explanation:
In the SnowPro Advanced Architect exam, 40% of the questions are based on work experience, and this is one such question. You need to have a very good hold on the concepts of Snowflake. So, what may be happening here? With heavy skew, most of the rows for the overrepresented value are processed by a single worker, so adding a larger warehouse does not parallelize that work and the join stays slow.
23. {"stuId":2000, "stuName":"Amy"}
24. Snowflake recommends starting slowly with SEARCH OPTIMIZATION (i.e. adding search optimization to only a few tables at first) and closely monitoring the costs and benefits.
A. TRUE
B. FALSE
Answer: A
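As a quick hands-on sketch (the table name is hypothetical):
ALTER TABLE sales ADD SEARCH OPTIMIZATION;
SHOW TABLES LIKE 'sales'; -- check the search_optimization and search_optimization_progress columns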
25. Which DML operations will result in re-clustering?
A. INSERT
B. COPY
C. DELETE
D. MERGE
E. UPDATE
Answer: A,B,C,D,E
Explanation:
All of these DML operations can result in re-clustering. If you truncate a column's values, such as truncating nanoseconds from timestamps, that can also result in re-clustering.
26. Which parameter helps in loading files whose load metadata has expired?
A. set LAST_MODIFIED_DATE to within 64 days
B. Set LAST_MODIFIED_DATE to more than 64 days
C. set LOAD_EXPIRED_FILES to TRUE
D. set LOAD_UNCERTAIN_FILES to TRUE
Answer: D
Explanation:
To load files whose metadata has expired, set the LOAD_UNCERTAIN_FILES copy option to
true.
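A minimal sketch (stage and table names are hypothetical):
COPY INTO mytable
FROM @mystage
LOAD_UNCERTAIN_FILES = TRUE; -- also attempts files whose load metadata has expired (older than 64 days)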
27. Consider the following COPY command, which is loading data in CSV format into a Snowflake table from an internal stage through a data transformation query.
This command results in the following error:
SQL compilation error: invalid parameter 'validation_mode'
Assuming the syntax is correct, what is the cause of this error?
A. The VALIDATION_MODE parameter supports COPY statements that load data from external
stages only.
B. The VALIDATION_MODE parameter does not support COPY statements with CSV file
formats.
C. The VALIDATION_MODE parameter does not support COPY statements that transform data
during a load.
D. The value return_all_errors of the option VALIDATION_MODE is causing a compilation error.
Answer: C
28. Which security, governance, and data protection features require, at a MINIMUM, the
Business Critical edition of Snowflake? (Choose two.)
A. Extended Time Travel (up to 90 days)
B. Customer-managed encryption keys through Tri-Secret Secure
C. Periodic rekeying of encrypted data
D. AWS, Azure, or Google Cloud private connectivity to Snowflake
E. Federated authentication and SSO
Answer: B,D
29. When using the Snowflake Connector for Kafka, what data formats are supported for the
messages? (Choose two.)
A. CSV
B. XML
C. Avro
D. JSON
E. Parquet
Answer: C,D
30. An Architect runs the following SQL query:
How can this query be interpreted?
A. FILEROWS is a stage. FILE_ROW_NUMBER is line number in file.
B. FILEROWS is the table. FILE_ROW_NUMBER is the line number in the table.
C. FILEROWS is a file. FILE_ROW_NUMBER is the file format location.
D. FILEROWS is the file format location. FILE_ROW_NUMBER is a stage.
Answer: A
31. Which of the following objects can be cloned in Snowflake?
A. Permanent table
B. Transient table
C. Temporary table
D. External tables
E. Internal stages
Answer: A,B,C
Explanation:
For tables, Snowflake supports cloning permanent and transient tables; temporary tables can
be cloned only to a temporary table or a transient table.
For databases and schemas, cloning is recursive:
Cloning a database clones all the schemas and other objects in the database.
Cloning a schema clones all the contained objects in the schema.
However, the following object types are not cloned:
External tables
Internal (Snowflake) stages
32. Although this hands-on example does not demonstrate it, permanent tables can also be cloned.
In the exam, you may be asked which table types can be cloned. If you learn it this way, is there any possibility of making a mistake?
33. Where can you define the file format settings?
A. While creating named file formats
B. In the table definition
C. In the named stage definition
D. Directly in the COPY INTO TABLE statement when loading data
Answer: A,B,C,D
Explanation:
Snowflake supports creating named file formats, which are database objects that encapsulate
all of the required format information. Named file formats can then be used as input in all the
same places where you can specify individual file format options, thereby helping to streamline
the data loading process for similarly-formatted data.
Named file formats are optional, but are recommended when you plan to regularly load similarly-
formatted data.
Creating a Named File Format
You can create a file format using either the web interface or SQL:
Web Interface
Click on Databases » <db_name> » File Formats
SQL
CREATE FILE FORMAT
For descriptions of all file format options and the default values, see CREATE FILE FORMAT.
Overriding Default File Format Options
You can define the file format settings for your staged data (i.e. override the default settings) in
any of the following locations:
In the table definition
Explicitly set the options using the FILE_FORMAT parameter. For more information, see CREATE TABLE.
In the named stage definition
Explicitly set the options using the FILE_FORMAT parameter. The stage is then referenced in
the COPY INTO TABLE statement. For more information, see CREATE STAGE.
Directly in the COPY INTO TABLE statement when loading data
Explicitly set the options separately. For more information, see COPY INTO <table>.
If file format options are specified in multiple locations, the load operation applies the options in
the
following order of precedence:
COPY INTO TABLE statement.
Stage definition.
Table definition.
34. Let's say that you have two JSONs as below:
{"stuId":2000, "stuName":"Amy"}
{"stuId":2000,"stuCourse":"Snowflake"}
How will you write a query that checks whether stuId in JSON #1 is also present in JSON #2?
A. with stu_demography as (select parse_json(column1) as src, src:stuId as ID from values('{"stuId":2000, "stuName":"Amy"}')),
B. stu_course as (select parse_json(column1) as src, src:stuId as ID from values('{"stuId":2000,"stuCourse":"Snowflake"}')) select case when stdemo.ID in (select ID from stu_course) then 'True' else 'False' end as result from stu_demography stdemo;
C. with stu_demography as (select parse_json(column1) as src, src['stuId'] as ID from values('{"stuId":2000, "stuName":"Amy"}')), stu_course as (select parse_json(column1) as src, src['stuId'] as ID from values('{"stuId":2000,"stuCourse":"Snowflake"}')) select case when stdemo.ID in (select ID from stu_course) then 'True' else 'False' end as result from stu_demography stdemo;
D. SELECT CONTAINS('{"stuId":2000, "stuName":"Amy"}','{"stuId":2000,"stuCourse":"Snowflake"}');
E. with stu_demography as (select parse_json(column1) as src, src['STUID'] as ID from values('{"stuId":2000, "stuName":"Amy"}')), stu_course as (select parse_json(column1) as src, src['stuId'] as ID from values('{"stuId":2000,"stuCourse":"Snowflake"}')) select case when stdemo.ID in (select ID from stu_course) then 'True' else 'False' end as result from stu_demography stdemo;
Answer: B,C
Explanation:
I would like you to try this out in your Snowflake instance and find the answer yourself. Please note that the question may not appear in exactly this form in the certification exam, but working through it builds the hands-on understanding of semi-structured data access (colon vs. bracket notation, and case-sensitive element names) that the exam tests.
36. A user can change object parameters using which of the following roles?
A. ACCOUNTADMIN, SECURITYADMIN
B. SYSADMIN, SECURITYADMIN
C. ACCOUNTADMIN, USER with PRIVILEGE
D. SECURITYADMIN, USER with PRIVILEGE
Answer: A
37. Mike has set up a process to load a specific set of files using both bulk loading and Snowpipe. This is a best practice to avoid any files being missed by either bulk loading or Snowpipe. (TRUE / FALSE)
A. TRUE
B. FALSE
Answer: B
Explanation:
This is not a best practice; it may cause reloading issues. To avoid reloading files (and duplicating data), Snowflake recommends loading data from a specific set of files using either bulk data loading or Snowpipe, but not both.
38. What will the below query return?
SELECT TOP 10 GRADES FROM STUDENT;
A. The top 10 highest grades
B. The 10 lowest grades
C. Non-deterministic list of 10 grades
Answer: C
Explanation:
An ORDER BY clause is not required; however, without an ORDER BY clause, the results are
non-deterministic because results within a result set are not necessarily in any particular order.
To control the results returned, use an ORDER BY clause.
n must be a non-negative integer constant. https://docs.snowflake.com/en/sql-reference/constructs/top_n.html#usage-notes
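To make the result deterministic, add an ORDER BY, as in this sketch:
SELECT TOP 10 GRADES FROM STUDENT ORDER BY GRADES DESC; -- now returns the 10 highest grades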
39. Which actions are not supported with shared data?
A. Creating a clone of a shared database or any schemas/tables in the database
B. Can be re-shared by data consumer
C. Time Travel for a shared database or any schemas/tables in the database
D. Editing the comments for a shared database
Answer: A,B,C,D
Explanation:
Shared data is read-only: a consumer cannot clone it, query it with Time Travel, re-share it, or edit it.
40. Materialized View Vs External table
41. If you find a data-related tool that is not listed as part of the Snowflake ecosystem, what
industry standard options could you check for as a way to easily connect to Snowflake? (Select
2)
A. Check to see if the tool can connect to other solutions via JDBC
B. Check to see if the tool can connect to other solutions via ODBC
C. Check to see if there is a petition in the community to create a driver
D. Check to see if you can develop a driver and put it on GitHub
Answer: A,B
Explanation:
ODBC (Open Database Connectivity) and JDBC (Java Database Connectivity) are the industry-standard options for connecting tools to Snowflake easily.
42. Create a table EMPLOYEE as below
CREATE TABLE EMPLOYEE(EMPLOYEE_ID NUMBER, EMPLOYEE_NAME VARCHAR);
43. At which object type level can the APPLY MASKING POLICY, APPLY ROW ACCESS
POLICY and APPLY SESSION POLICY privileges be granted?
A. Global
B. Database
C. Schema
D. Table
Answer: A
44. Processing data in smaller batches.
https://docs.snowflake.com/en/user-guide/ui-query-profile.html#queries-too-large-to-fit-in-memory
45. You have created a task as below:
CREATE TASK mytask1
WAREHOUSE = mywh
SCHEDULE = '5 minute'
WHEN
SYSTEM$STREAM_HAS_DATA('MYSTREAM')
AS
INSERT INTO mytable1(id,name) SELECT id, name FROM mystream WHERE
METADATA$ACTION = 'INSERT';
Which statement is true below?
A. If SYSTEM$STREAM_HAS_DATA returns false, the task will be skipped
B. If SYSTEM$STREAM_HAS_DATA returns false, the task will still run
C. If SYSTEM$STREAM_HAS_DATA returns false, the task will go to suspended mode
Answer: A
Explanation:
SYSTEM$STREAM_HAS_DATA
Indicates whether a specified stream contains change tracking data. Used to skip the current
task run if the stream contains no change data.
If the result is FALSE, then the task does not run. https://docs.snowflake.com/en/sql-reference/sql/create-task.html#create-task
46. You create a Sequence in Snowflake with an initial value of 3 and an interval of 2.
Which series of numbers do you expect to see?
A. 3,4,5,6,7
B. 3,6,9,12,15
C. 3,5,7,9,11
D. 2,5,8,11,14
Answer: C
Explanation:
It starts at the initial value and then adds the interval to get each subsequent value: 3, 5, 7, 9, 11.
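A quick hands-on sketch (the sequence name is hypothetical):
CREATE OR REPLACE SEQUENCE seq_demo START = 3 INCREMENT = 2;
SELECT seq_demo.NEXTVAL FROM TABLE(GENERATOR(ROWCOUNT => 5)); -- 3, 5, 7, 9, 11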
47. Bytes spilled to remote storage in query profile indicates volume of data spilled to remote
disk
A. TRUE
B. FALSE
Answer: A
Explanation:
This question may appear in various forms in the exam, so let us not memorize it; let us understand what it means.
When you run large aggregations or sorts in Snowflake, the processing usually happens in the memory of the virtual warehouse. But if the virtual warehouse is not properly sized and does not have enough memory, the intermediate results start spilling to remote disk (in AWS, this will be S3). When this happens, it impacts query performance because you are now retrieving your results from remote disk instead of memory. If this is a regular occurrence, you may need to move to a bigger warehouse.
Also read the section referred to in this link:
https://docs.snowflake.com/en/user-guide/ui-query-profile.html#queries-too-large-to-fit-in-memory
48. Which conditions should be true for a table to be considered for search optimization?
A. The table size is at least 100 GB
B. The table is not clustered OR The table is frequently queried on columns other than the
primary cluster key
C. The table can be of any size
Answer: A,B
Explanation:
Search optimization works best to improve the performance of a query when the following conditions are true for the table being queried: the table is at least 100 GB in size, and the table is either not clustered or is frequently queried on columns other than the primary cluster key.
49. Which approach would result in improved performance through linear scaling of data
ingestion workload?
A. Consider the practice of splitting input file batch within the recommended range of 10MB to
100MB
B. All of the above
C. Consider the practice of organizing data by granular path
D. Resize virtual warehouse
Answer: B
Explanation:
Resizing (scaling up) the virtual warehouse improves performance. It is also better to organize data by granular path; this helps Snowflake find files easily without wasting resources identifying the data files. As per best practice, data files should be in the range of 10 MB to 100 MB compressed. Each server can process 8 files in parallel, so breaking a large file into smaller files is good practice.
50. If a multi-cluster warehouse is resized, the new size applies to
A. Clusters that are currently running
B. Clusters that are started after the warehouse is resized
C. All of the above
Answer: C
Explanation:
Multi-cluster Size and Credit Usage
The number of servers in each cluster is determined by warehouse size:
The total number of servers for the warehouse is calculated by multiplying the warehouse size
by the maximum number of clusters. This also indicates the maximum number of credits
consumed by the warehouse per full hour of usage (i.e. if all clusters run during the hour).
For example, the maximum number of credits consumed per hour for a Medium-size warehouse (4 servers per cluster) with 3 clusters is 12 credits.
If a multi-cluster warehouse is resized, the new size applies to all the clusters for the
warehouse, including clusters that are currently running and any clusters that are started after
the warehouse is resized.
51. When unloading data into multiple files, use the MAX_FILE_SIZE copy option to specify the
maximum size of each file created. (TRUE / FALSE)
A. FALSE
B. TRUE
Answer: B
Explanation:
True. When unloading data into multiple files, use the MAX_FILE_SIZE copy option to specify the maximum size of each file created. If you want to unload into just one file, set SINGLE = TRUE.
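A minimal sketch of both options (stage and table names are hypothetical):
COPY INTO @my_stage/unload/ FROM mytable MAX_FILE_SIZE = 104857600; -- cap each output file at ~100 MB
COPY INTO @my_stage/single/ FROM mytable SINGLE = TRUE; -- produce one output file instead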
52. A healthcare company is deploying a Snowflake account that may include Personal Health
Information (PHI). The company must ensure compliance with all relevant privacy standards.
Which best practice recommendations will meet data protection and compliance requirements?
(Choose three.)
A. Use, at minimum, the Business Critical edition of Snowflake.
B. Create Dynamic Data Masking policies and apply them to columns that contain PHI.
C. Use the Internal Tokenization feature to obfuscate sensitive data.
D. Use the External Tokenization feature to obfuscate sensitive data.
E. Rewrite SQL queries to eliminate projections of PHI data based on current_role().
F. Avoid sharing data with partner organizations.
Answer: A,B,D
53. How do you validate data that is unloaded using the COPY INTO command?
A. After unloading, load the data into a relational table and validate the rows
B. Load the data into a CSV file to validate the rows
C. Use validation_mode='RETURN_ROWS'; with COPY command
Answer: C
Explanation:
Validating Data to be Unloaded (from a Query)
Execute COPY in validation mode to return the result of a query and view the data that will be unloaded from the orderstiny table if COPY is executed in normal mode:
copy into @my_stage
from (select * from orderstiny limit 5)
validation_mode = 'RETURN_ROWS';
https://docs.snowflake.com/en/sql-reference/sql/copy-into-location.html#validating-data-to-be-unloaded-from-a-query
54. At what places can you specify File Format?
A. As part of a table definition
B. In the COPY INTO command
C. All of these
D. As part of Internal Table stage
E. As part of a named stage or pipe definition
Answer: A,B,E
Explanation:
File format options can be specified in the COPY INTO command, as part of a named stage or pipe definition, and as part of a table definition.
If file format options are specified in multiple locations, the load operation applies the options in the following order of precedence: COPY INTO TABLE statement, then stage definition, then table definition.
55. Which objects can not be replicated?
A. Users
B. Shares
C. Roles
D. Databases
E. Warehouses
F. Resource monitors
Answer: A,B,C,E,F
Explanation:
Currently, replication is supported for databases only. Other types of objects in an account cannot be replicated. This list includes: users, roles, warehouses, resource monitors, and shares.
56. Which type of view has an extra layer of protection to hide the SQL code from unauthorized
viewing?
A. Standard
B. Materialized
C. Secure
D. Permanent
Answer: C
Explanation:
Some of the internal optimizations for views require access to the underlying data in the base
tables for the view. This access might allow data that is hidden from users of the view to be
exposed through user code, such as user-defined functions, or other programmatic methods.
Secure views do not utilize these optimizations, ensuring that users have no access to the
underlying data.
57. Jessica mistakenly dropped a table T1 last week. The database has its Time Travel retention period set to 90 days.
How can Jessica recover the table she dropped last week?
A. Jessica can execute UNDROP TABLE T1 command after setting up the right context for
Database and schema
B. Jessica can't recover it from Time Travel, as the table T1 has moved to Fail-safe
C. Jessica should contact Salesforce support to get it done
D. Jessica can use the UI > ACCOUNT menu to recover the table T1
Answer: A
Explanation:
UNDROP TABLE command helps recover the TABLE which is still in TIME-TRAVEL. Since,
Jessica dropped last week so it is just a week old drop which will be available in the Time Travel
with retention period of 90 days.
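A quick sketch of the recovery (database and schema names are hypothetical):
USE SCHEMA mydb.myschema; -- set the right context first
UNDROP TABLE T1;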
58. Mike wants to create a multi-cluster warehouse and wants to make sure that whenever new
queries are queued, additional clusters should start immediately.
How should he configure the Warehouse?
A. Snowflake takes care of this automatically so, Mike does not have to worry about it
B. Set the SCALING POLICY as ECONOMY
C. Set the SCALING POLICY as STANDARD
D. Configure as SCALE-MAX so that the warehouse always uses the maximum number of specified clusters
Answer: C
Explanation:
If a multi-cluster warehouse is configured with SCALING policy as STANDARD it immediately
when either a query is queued or the system detects that there’s one more query than the
currently-running clusters can execute
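A minimal sketch of such a warehouse (the name and sizes are illustrative):
CREATE OR REPLACE WAREHOUSE etl_mcw WITH
WAREHOUSE_SIZE = 'MEDIUM'
MIN_CLUSTER_COUNT = 1
MAX_CLUSTER_COUNT = 3
SCALING_POLICY = 'STANDARD';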
59. A user who has SELECT privilege on a view does not also need SELECT privilege on the
tables that the view uses
A. TRUE
B. FALSE
Answer: A
Explanation:
A user who has SELECT privilege on a view does not also need SELECT privilege on the tables
that the
view uses. This means that you can use a view to give a role access to only a subset of a table.
For example, you can create a view that accesses medical billing information but not medical
diagnosis information in the same table, and you can then grant privileges on that view to the
"accountant" role so that the accountants can look at the billing information without seeing the
patient’s diagnosis. Operating on a view also requires the USAGE privilege on the parent
database and schema.
60. You need the EXECUTE TASK privilege to run a TASK.
61. Stages
62. Which of the below objects cannot be replicated?
A. Resource Monitors
B. Warehouses
C. Users
D. Databases
E. Shares
F. Roles
Answer: A,B,C,E,F
Explanation:
As of today (28-Nov-2020), only database replication is supported for an account. Other objects in the account cannot be replicated.
63. As of today snowflake supports replication for databases only
A. TRUE
B. FALSE
Answer: A
Explanation:
Other Objects in an Account
Currently, replication is supported for databases only. Other types of objects in an account cannot be replicated. This list includes:
Users
Roles
Warehouses
Resource monitors
Shares
64. To connect with Snowflake, Kafka Snowflake connector supports:
A. User Name and Password Authentication
B. Both
C. Key Pair Authentication
Answer: C
Explanation:
The Kafka connector relies on key pair authentication rather than the typical username/password authentication. This authentication method requires a 2048-bit (minimum) RSA key pair. Generate the public-private key pair using OpenSSL. The public key is assigned to the Snowflake user defined in the configuration file.
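A minimal sketch of assigning the public key (the user name is hypothetical, and the key value is a placeholder for the output of OpenSSL):
ALTER USER kafka_connector_user SET RSA_PUBLIC_KEY = '<public_key_from_openssl>';
DESCRIBE USER kafka_connector_user; -- verify the RSA_PUBLIC_KEY_FP property is set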
 