Test Online Free Snowflake ARA-C01 Exam Questions and Answers

The questions for ARA-C01 were last updated On Apr.19 2024


Question No : 1
JSON and PARQUET files can be loaded into columns of the same table

Answer:
Explanation:
Let us reason our way to the answer. Both JSON and Parquet are semi-structured formats that Snowflake loads into VARIANT columns, and a single table can contain multiple VARIANT columns alongside structured columns. So files of both formats can indeed be loaded into columns of the same table.

Question No : 2
Running EXPLAIN on a query does not require a running warehouse

Answer:
Explanation:
EXPLAIN compiles the SQL statement, but does not execute it, so EXPLAIN does not require a running
warehouse.
Although EXPLAIN does not consume any compute credits, compiling the query does consume Cloud Services credits, just as other metadata operations do.
To post-process the output of this command, you can:
Use the RESULT_SCAN function, which treats the output as a table that can be queried.
Generate the output in JSON format and insert the JSON-formatted output into a table for analysis later. If you store the output in JSON format, you can use the function SYSTEM$EXPLAIN_JSON_TO_TEXT or EXPLAIN_JSON to convert the JSON to a more human readable format (either tabular or formatted text).
The assignedPartitions and assignedBytes values are upper bound estimates for query execution. Runtime optimizations such as join pruning can reduce the number of partitions and bytes scanned during query execution.
The EXPLAIN plan is the "logical" explain plan. It shows the operations that will be performed, and their logical relationship to each other. The actual execution order of the operations in the plan does not necessarily match the logical order shown by the plan. https://docs.snowflake.com/en/sql-reference/sql/explain.html#usage-notes
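The two post-processing options above can be sketched as follows (the table name and filter are placeholders borrowed from Question 9; RESULT_SCAN and SYSTEM$EXPLAIN_JSON_TO_TEXT are documented Snowflake functions):

```sql
-- EXPLAIN compiles but does not execute; no running warehouse is required
EXPLAIN SELECT * FROM INVENTORY WHERE BIBNUMBER = 511784;

-- Option 1: treat the output of the previous command as a table
SELECT * FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()));

-- Option 2: produce the plan as JSON, store it, and convert it to text later
EXPLAIN USING JSON SELECT * FROM INVENTORY WHERE BIBNUMBER = 511784;
-- ... after inserting the JSON output into a table:
-- SELECT SYSTEM$EXPLAIN_JSON_TO_TEXT(plan_column) FROM plan_table;
```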

Question No : 3
With default settings, how long can a query run on Snowflake before the system cancels it?

Answer:
Explanation:
STATEMENT_TIMEOUT_IN_SECONDS
This parameter tells Snowflake how long a SQL statement can run before the system cancels it. The default value is 172,800 seconds (48 hours).
This is both a session and object type parameter. As a session type, it can be applied to the account, a user, or a session. As an object type, it can be applied to warehouses. If set at both levels, the lowest value is used.
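A quick sketch of inspecting and setting this parameter at both levels (the warehouse name is a placeholder):

```sql
-- Check the current value at the session level (default: 172800 seconds = 48 hours)
SHOW PARAMETERS LIKE 'STATEMENT_TIMEOUT_IN_SECONDS';

-- Set a 1-hour limit for the current session
ALTER SESSION SET STATEMENT_TIMEOUT_IN_SECONDS = 3600;

-- Set a 30-minute limit on a warehouse; if both are set, the lower value wins
ALTER WAREHOUSE MY_WH SET STATEMENT_TIMEOUT_IN_SECONDS = 1800;
```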

Question No : 4
You are a Snowflake architect in an organization. The business team came to you to deploy a use case which requires you to load some data that they can visualize through Tableau. Every day new data comes in and the old data is no longer required.
What type of table will you use in this case to optimize cost?

Answer:
Explanation:
Let us see why.
Storage fees are incurred for maintaining historical data during both the Time Travel and Fail-safe periods. The fees are calculated for each 24-hour period (i.e. 1 day) from the time the data changed. The number of days historical data is maintained is based on the table type and the Time Travel retention period for the table.
If you create a permanent table, it will by default have a Fail-safe period of 7 days. That means Snowflake needs to allocate space to keep historical data for 7 days. A transient table, on the other hand, has no Fail-safe period. Hence a transient table is the most cost-effective choice here. A temporary table cannot be used because temporary tables expire as soon as the session ends.
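A minimal sketch of the cost-optimized table for this use case (table and column names are placeholders):

```sql
-- A transient table has no Fail-safe period and Time Travel of at most 1 day,
-- minimizing historical-storage cost for data that is replaced every day
CREATE TRANSIENT TABLE DAILY_FEED (
    FEED_ID NUMBER,
    PAYLOAD VARCHAR
) DATA_RETENTION_TIME_IN_DAYS = 0;  -- optionally disable Time Travel as well
```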

Question No : 5
Loading data using the Snowpipe REST API is supported for external stages only

Answer:
Explanation:
The Snowpipe REST API supports loading from both named internal stages and external stages (user and table stages are not supported). Since internal named stages are also supported, the statement is false.

Question No : 6
Removing files from a stage after you are done loading the files improves performance when subsequently loading data

Answer:
Explanation:
Managing staged data files
Staged files can be deleted from a Snowflake stage using the REMOVE command once you are finished with them.
Removing files improves performance when loading data, because it reduces the number of files that the COPY INTO <table> command must scan to verify whether existing files in a stage were already loaded.
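A sketch of the two common clean-up approaches (table and stage names are placeholders):

```sql
-- Load, then explicitly remove the staged files you no longer need
COPY INTO MY_TABLE FROM @MY_STAGE;
REMOVE @MY_STAGE PATTERN = '.*[.]csv[.]gz';

-- Alternatively, have COPY delete files automatically after a successful load
COPY INTO MY_TABLE FROM @MY_STAGE PURGE = TRUE;
```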

Question No : 7
You have a large warehouse with auto suspend configured for 10 minutes. You submitted a query and it is going to run for more than 10 minutes.
What will happen to the query?

Answer:
Explanation:
Note that a warehouse suspends itself only in the case of inactivity. A running query keeps the warehouse active, so the query will continue past the 10-minute mark and run to completion.
A warehouse can be set to automatically resume or suspend, based on activity:
By default, auto-suspend is enabled. Snowflake automatically suspends the warehouse if it is inactive for the specified period of time.
By default, auto-resume is enabled. Snowflake automatically resumes the warehouse when any statement that requires a warehouse is submitted and the warehouse is the current warehouse for the session.
These properties can be used to simplify and automate your monitoring and usage of warehouses to match your workload. Auto-suspend ensures that you do not leave a warehouse running (and consuming credits) when there are no incoming queries. Similarly, auto-resume ensures that the warehouse starts up again as soon as it is needed.
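The scenario above can be reproduced with the following sketch (the warehouse name is a placeholder; AUTO_SUSPEND is expressed in seconds, so 600 = 10 minutes):

```sql
-- Large warehouse with a 10-minute auto-suspend window
CREATE WAREHOUSE IF NOT EXISTS LARGE_WH
    WAREHOUSE_SIZE = 'LARGE'
    AUTO_SUSPEND = 600
    AUTO_RESUME = TRUE;
-- A query running longer than 10 minutes keeps the warehouse active;
-- the inactivity timer only starts once no statements are running.
```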

Question No : 8
You have created a table as below
CREATE TABLE SNOWFLAKE_BLOG(BLOG_ID NUMBER, BLOG_CONTENT VARCHAR,BLOG_AUTHOR VARCHAR,BLOG_CREATE_DATE TIMESTAMP );
Now you want to cluster it by BLOG_AUTHOR and BLOG_CREATE_DATE.
Which command below will you choose?

Answer:
Explanation:
The question may not come this way, but please try this out by following the steps below.
1. Create the table:
CREATE TABLE SNOWFLAKE_BLOG(BLOG_ID NUMBER, BLOG_CONTENT VARCHAR, BLOG_AUTHOR VARCHAR, BLOG_CREATE_DATE TIMESTAMP);
2. Set the cluster key by running the following two statements:
ALTER TABLE SNOWFLAKE_BLOG CLUSTER BY (BLOG_AUTHOR);
ALTER TABLE SNOWFLAKE_BLOG CLUSTER BY (BLOG_CREATE_DATE);
Did you notice that the second command replaced the cluster key BLOG_AUTHOR with BLOG_CREATE_DATE? CLUSTER BY does not add keys cumulatively. To cluster by both columns, list them together in a single CLUSTER BY clause.
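The correct approach is a single statement naming both columns:

```sql
-- Cluster by both columns in one CLUSTER BY clause
ALTER TABLE SNOWFLAKE_BLOG CLUSTER BY (BLOG_AUTHOR, BLOG_CREATE_DATE);

-- Verify the cluster key (see the cluster_by column in the output)
SHOW TABLES LIKE 'SNOWFLAKE_BLOG';
```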

Question No : 9
You have an inventory table. You created two views on this table. The views look as below.
CREATE VIEW NON_SECURE_INVENTORY AS
SELECT BIBNUMBER, TITLE, AUTHOR,ISBN
FROM INVENTORY
WHERE BIBNUMBER IN(511784,511805,511988,512044,512052,512063);
CREATE SECURE VIEW SECURE_INVENTORY AS
SELECT BIBNUMBER, TITLE, AUTHOR,ISBN
FROM INVENTORY
WHERE BIBNUMBER IN(511784,511805,511988,512044,512052,512063);
You ran the below queries
ALTER SESSION SET USE_CACHED_RESULT=FALSE;--This is to ensure that we do not retrieve from query cache
SELECT * FROM NON_SECURE_INVENTORY WHERE BIBNUMBER = 511784;
SELECT * FROM SECURE_INVENTORY WHERE BIBNUMBER = 511784;
[Screenshot: query profile for the first (non-secure view) query]
[Screenshot: query profile for the second (secure view) query]
Both views use the same columns from the same underlying table. So why is there a difference in the query profiles?

Answer:
Explanation:
Again, this is a question to cement your understanding of secure views. You can expect questions on secure views in the exam, but they will not be asked in the above-mentioned way. If you practice this, you can answer any question on secure views.
The profiles differ because, for a secure view, Snowflake hides the view definition and bypasses certain query optimizations that could indirectly expose the underlying data. As a result, the internals of the secure view do not appear in the query profile.
To load the inventory table in the above hands-on exercise, use the CSV that can be downloaded from Kaggle, then load it with the SnowSQL PUT command:

Question No : 10
Which of the below commands will use warehouse credits?

Answer:
Explanation:
Try this yourself:
CREATE TABLE SNOWFLAKE (FLAKE_ID INTEGER, UDEMY_COURSE VARCHAR);
INSERT INTO SNOWFLAKE VALUES(1111, 'SNOWFLAKE');
INSERT INTO SNOWFLAKE VALUES(2222, 'SNOWFLAKE');
SHOW TABLES LIKE 'SNOWFL%';
SELECT MAX(FLAKE_ID) FROM SNOWFLAKE;
SELECT COUNT(*) FROM SNOWFLAKE;
SELECT COUNT(FLAKE_ID) FROM SNOWFLAKE GROUP BY FLAKE_ID;
After running these, go to the query profile for each query. You can reach the query profile from HISTORY by clicking on the relevant query ID.
You will see that all the queries except the one using GROUP BY used the metadata repository to retrieve their results. Any query answered from the metadata repository does not consume any compute credits.
