You manage a database that includes the tables shown in the exhibit (Click the Exhibit button.)
You plan to create a DML trigger that reads the value of the LineTotal column for each row in the PurchaseOrderDetail table. The trigger must add the value obtained to the value in the SubTotal column of the PurchaseOrderHeader table.
You need to organize the list to form the appropriate Transact-SQL statement.
Which five Transact-SQL segments should you use to develop the solution? To answer, move the appropriate Transact-SQL segments from the list of Transact-SQL segments to the answer area and arrange them in the correct order.
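A trigger meeting these requirements could be sketched as follows. This is a hedged example only: the join column PurchaseOrderID and the trigger name are assumptions, since the exhibit with the table definitions is not shown here. The `inserted` pseudo-table is aggregated per order so that multiple detail rows for the same header are summed correctly.

```sql
CREATE TRIGGER trg_PurchaseOrderDetail_Insert
ON PurchaseOrderDetail
AFTER INSERT
AS
BEGIN
    -- Add the new LineTotal values to the matching header's SubTotal.
    UPDATE h
    SET h.SubTotal = h.SubTotal + d.TotalAdded
    FROM PurchaseOrderHeader AS h
    INNER JOIN (
        SELECT PurchaseOrderID, SUM(LineTotal) AS TotalAdded
        FROM inserted
        GROUP BY PurchaseOrderID
    ) AS d
        ON h.PurchaseOrderID = d.PurchaseOrderID;
END;
```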
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution. Determine whether the solution meets the stated goals.
You have a database named DB1 that includes a table named sales.Orders. You grant a user named User1 SELECT permissions on the sales schema.
You need to ensure that User1 can select data from the sales.Orders table without specifying the schema name in any Transact-SQL statements.
Solution: You move the sales.orders table to the dbo schema. Does the solution meet the goal?
Correct Answer: B
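For context, a common way to meet this goal without moving the table is to change the user's default schema, so that unqualified names resolve to the sales schema. A minimal sketch, assuming the objects named in the question:

```sql
-- Grant SELECT on the whole schema (as stated in the scenario).
GRANT SELECT ON SCHEMA::sales TO User1;

-- Make sales the default schema for User1, so that
-- "SELECT * FROM Orders" resolves to sales.Orders.
ALTER USER User1 WITH DEFAULT_SCHEMA = sales;
```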
You are optimizing the performance of a batch update process. You have tables and indexes that were created by running the following Transact-SQL statements:
The following query runs nightly to update the isCreditValidated field:
You review the database and make the following observations:
Most of the IsCreditValidated values in the Invoices table are set to a value of 1. There are many unique InvoiceDate values.
The CreditValidation table does not have an index.
Statistics for the index IX_invoices_CustomerID_Filter_IsCreditValidated indicate there are no individual seeks but multiple individual updates.
You need to ensure that any indexes added can be used by the update query. If the IX_invoices_CustomerId_Filter_IsCreditValidated index cannot be used by the query, it must be removed. Otherwise, the query must be modified to use the index.
Which three actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
Add a filtered nonclustered index to Invoices on InvoiceDate that selects where IsCreditNote = 1 and IsCreditValidated = 0.
Rewrite the update query so that the condition for IsCreditValidated = 0 precedes the condition for IsCreditNote = 1.
Create a nonclustered index on Invoices on (IsCreditValidated, InvoiceDate) with an INCLUDE clause for IsCreditNote and CustomerID.
Add a nonclustered index for CreditValidation on CustomerID.
Drop the IX_invoices_CustomerId_Filter_IsCreditValidated index.
Correct Answer: ABE
A filtered index is an optimized nonclustered index especially suited to cover queries that select from a well-defined subset of data. It uses a filter predicate to index a portion of rows in the table. A well-designed filtered index can improve query performance as well as reduce index maintenance and storage costs compared with full-table indexes.
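The filtered index described in the first answer option could look like the following sketch. The index name is an assumption; the key column and filter predicate come from the option text:

```sql
-- Filtered index covering only the rows the nightly update touches:
-- credit notes that have not yet been validated.
CREATE NONCLUSTERED INDEX IX_Invoices_InvoiceDate_NotValidated
ON Invoices (InvoiceDate)
WHERE IsCreditNote = 1 AND IsCreditValidated = 0;
```

Because most rows have IsCreditValidated = 1, the filter keeps the index small and cheap to maintain while still matching the update query's predicate.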
You are planning a set of stored procedures that must be able to access memory-optimized tables. You need to optimize the performance of the stored procedures. Which statement should you include in the stored procedure definitions?
WITH EXECUTE AS SELF
WITH NO INFOMSGS
Correct Answer: D
You manage a database that supports an Internet of Things (IoT) solution. The database records metrics from over 100 million devices every minute. The database requires 99.995% uptime.
The database uses a table named Checkins that is 100 gigabytes (GB) in size. The Checkins table stores metrics from the devices. The database also has a table named Archive that stores four terabytes (TB) of data.
You use stored procedures for all access to the tables.
You observe that the wait type PAGELATCH_IO causes large amounts of blocking. You need to resolve the blocking issues while minimizing downtime for the database.
Which two actions should you perform? Each correct answer presents part of the solution.
Convert all stored procedures that access the Checkins table to natively compiled procedures.
Convert the Checkins table to an In-Memory OLTP table.
Convert all tables to clustered columnstore indexes.
Convert the Checkins table to a clustered columnstore index.
Correct Answer: AB
Natively compiled stored procedures are Transact-SQL stored procedures compiled to native code that access memory-optimized tables. Natively compiled stored procedures allow for efficient execution of the queries and business logic in the stored procedure.
SQL Server In-Memory OLTP helps improve performance of OLTP applications through efficient, memory-optimized data access, native compilation of business logic, and lock- and latch free algorithms. The In-Memory OLTP feature includes memory-optimized tables and table types, as well as native compilation of Transact-SQL stored procedures for efficient access to these tables.
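A natively compiled procedure over a memory-optimized table could be sketched as below. The procedure and column names are assumptions; the required elements are WITH NATIVE_COMPILATION, SCHEMABINDING, and an atomic block with its mandatory isolation and language options. The referenced table would need to have been created with MEMORY_OPTIMIZED = ON.

```sql
CREATE PROCEDURE dbo.usp_GetDeviceCheckins
    @DeviceId INT
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- Column names here are illustrative assumptions.
    SELECT DeviceId, MetricValue, RecordedAt
    FROM dbo.Checkins
    WHERE DeviceId = @DeviceId;
END;
```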
You need to create a view that can be indexed. You write the following statement.
What should you add at line 02?
WITH VIEW_METADATA
Correct Answer: D
The following steps are required to create an indexed view and are critical to the successful implementation of the indexed view:
Verify the SET options are correct for all existing tables that will be referenced in the view.
Verify that the SET options for the session are set correctly before you create any tables and the view.
Verify that the view definition is deterministic.
Create the view by using the WITH SCHEMABINDING option.
Create the unique clustered index on the view.
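The steps above can be sketched as follows. The table, view, and column names are assumptions for illustration; the structural requirements are real: SCHEMABINDING, two-part table names, COUNT_BIG(*) when GROUP BY is used, and a unique clustered index as the first index on the view.

```sql
CREATE VIEW dbo.vw_SalesTotals
WITH SCHEMABINDING
AS
SELECT CustomerID,
       SUM(ISNULL(Amount, 0)) AS TotalAmount,  -- SUM must be over a non-nullable expression
       COUNT_BIG(*) AS RowCnt                  -- required when the view uses GROUP BY
FROM dbo.Sales                                 -- two-part name required with SCHEMABINDING
GROUP BY CustomerID;
GO

-- The first index on the view must be a unique clustered index.
CREATE UNIQUE CLUSTERED INDEX IX_vw_SalesTotals
ON dbo.vw_SalesTotals (CustomerID);
```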
You have multiple queries that take a long time to complete.
You need to identify the cause by using detailed information about the Transact-SQL in the queries. The Transact-SQL statements must not run as part of the analysis.
Which Transact-SQL statement should you run?
SET STATISTICS IO ON
SET SHOWPLAN_TEXT ON
SET STATISTICS XML ON
SET SHOWPLAN_ALL OFF
Correct Answer: B
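SET SHOWPLAN_TEXT ON returns the estimated execution plan for each subsequent statement without executing it, which satisfies the requirement that the statements must not run. A minimal usage sketch (the query itself is an illustrative assumption):

```sql
SET SHOWPLAN_TEXT ON;
GO
-- The plan for this statement is returned; the statement is NOT executed.
SELECT CustomerID, OrderDate
FROM sales.Orders
WHERE OrderDate >= '2024-01-01';
GO
SET SHOWPLAN_TEXT OFF;
GO
```

By contrast, SET STATISTICS XML ON and SET STATISTICS IO ON execute the statements and then report on them, which violates the stated constraint.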
You have a Microsoft Azure SQL Database. You enable Query Store for the database and configure the store to use the following settings:
SIZE_BASED_CLEANUP_MODE = OFF
STALE_QUERY_THRESHOLD_DAYS = 60
MAX_STORAGE_SIZE_MB = 100
QUERY_CAPTURE_MODE = ALL
You use Query Performance Insight to review queries. You observe that new queries are not displayed after 15 days and that the Query Store is set to read-only mode.
If the Query Store runs low on storage space, the store must prioritize queries that run regularly or that consume significant resources.
You need to set the Query Store to read-write mode and review the performance of queries from the past 60 days.
Which three actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
In the Azure portal navigate to Query performance insight. Use the Custom tab to select a period of 2 months.
Set the value of the QUERY_CAPTURE_MODE setting to AUTO.
Increase the value for the MAX_STORAGE_SIZE_MB setting.
Set the value of the CLEANUP_POLICY setting to (STALE_QUERY_THRESHOLD_DAYS = 75).
Set the value of the SIZE_BASED_CLEANUP_MODE setting to AUTO.
Correct Answer: BCD
B: Capture mode:
All – Captures all queries. This is the default option.
Auto – Infrequent queries and queries with insignificant cost are ignored. (Recommended for ad hoc workloads.)
None – Query Store stops capturing new queries.
C: Max Size (MB): Specifies the limit for the data space that Query Store can consume within the database. This is the most important setting that directly affects operation mode of the Query Store.
While Query Store collects queries, execution plans, and statistics, its size in the database grows until this limit is reached. When that happens, Query Store automatically changes the operation mode to read-only and stops collecting new data. You should monitor this closely to make sure you have sized the store appropriately to contain the full history you'd like to retain.
D: Size Based Cleanup Mode: Specifies whether automatic data cleanup will take place when Query Store data size approaches the limit.
It is strongly recommended to activate size-based cleanup to make sure that Query Store always runs in read-write mode and collects the latest data.
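The recommended settings can be applied in a single ALTER DATABASE statement. A sketch, assuming the database is named DB1 and illustrative values for the sizes:

```sql
ALTER DATABASE DB1
SET QUERY_STORE = ON
(
    OPERATION_MODE = READ_WRITE,
    QUERY_CAPTURE_MODE = AUTO,               -- skip infrequent, insignificant queries
    MAX_STORAGE_SIZE_MB = 1024,              -- enough space for 60+ days of history
    SIZE_BASED_CLEANUP_MODE = AUTO,          -- purge old data before hitting the limit
    CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 75)
);
```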
You have a database that users query frequently.
The users report that during peak business hours, the queries take longer than expected to execute.
A junior database administrator uses Microsoft SQL Server Profiler on the database server to trace the session activities.
While performing the trace, the performance of the database server worsens, and the server crashes.
You need to recommend a solution to collect the query run times. The solution must minimize the impact on the resources of the database server.
What should you recommend?
Increase the free space on the system drive of the database server, and then use SQL Server Profiler on the server to trace the session activities.
Collect session activity data by using SQL Server Extended Events.
Clean up tempdb, and then use SQL Server Profiler on the database server to trace the session activities.
Collect performance data by using a Data Collector Set (DCS) in Performance Monitor.
Correct Answer: B
SQL Server Extended Events has a highly scalable and highly configurable architecture that allows users to collect as much or as little information as is necessary to troubleshoot or identify a performance problem.
Extended Events is a lightweight performance monitoring system that consumes very few resources. Extended Events provides two graphical user interfaces (New Session Wizard and New Session) to create, modify, display, and analyze your session data.
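An Extended Events session that collects query run times with minimal overhead could be defined as follows. The session name and file path are illustrative assumptions; the event and actions are standard parts of the sqlserver package.

```sql
CREATE EVENT SESSION QueryRuntimes ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(
    -- Capture the statement text and session for each completed statement;
    -- duration is included in the event payload.
    ACTION (sqlserver.sql_text, sqlserver.session_id)
)
ADD TARGET package0.event_file
    (SET filename = N'QueryRuntimes.xel');
GO

ALTER EVENT SESSION QueryRuntimes ON SERVER STATE = START;
```

Writing to an event_file target keeps the collection asynchronous and avoids the heavy tracing overhead that caused the Profiler-based approach to crash the server.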
You are designing a stored procedure for a database named DB1.
The following requirements must be met during the entire execution of the stored procedure:
The stored procedure must only read changes that are persisted to the database.
SELECT statements within the stored procedure should only show changes to the data that are made by the stored procedure.
You need to configure the transaction isolation level for the stored procedure. Which Transact-SQL statement or statements should you run?
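One isolation level that satisfies both requirements is SNAPSHOT: reads see only data that was committed before the transaction began (never dirty, unpersisted changes), plus the modifications made by the transaction itself. A hedged sketch, assuming the database is named DB1 and using hypothetical procedure and table names:

```sql
-- Snapshot isolation must first be enabled at the database level.
ALTER DATABASE DB1 SET ALLOW_SNAPSHOT_ISOLATION ON;
GO

CREATE PROCEDURE dbo.usp_ProcessOrders  -- hypothetical name
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRANSACTION;
        -- Sees only committed data as of transaction start...
        UPDATE dbo.Orders SET Status = 'Processed' WHERE Status = 'Pending';
        -- ...plus this transaction's own changes.
        SELECT OrderID, Status FROM dbo.Orders WHERE Status = 'Processed';
    COMMIT;
END;
```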