Microsoft 70-767 Exam Questions 2019

Real success guaranteed for your Microsoft certification with updated questions. 100% pass the 70-767 Implementing a SQL Data Warehouse (beta) exam today!

Check 70-767 free dumps before getting the full version:

NEW QUESTION 1
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a data warehouse that stores information about products, sales, and orders for a manufacturing company. The instance contains a database that has two tables named SalesOrderHeader and SalesOrderDetail. SalesOrderHeader has 500,000 rows and SalesOrderDetail has 3,000,000 rows.
Users report performance degradation when they run the following stored procedure:
[Exhibit]
You need to optimize performance.
Solution: You run the following Transact-SQL statement:
[Exhibit]
Does the solution meet the goal?

  • A. Yes
  • B. No

Answer: A

Explanation: You can specify the sample size as a percent. A 5% statistics sample size would be helpful.
References: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-tables-statistics
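The exhibits are not reproduced above, so the exact statement is unknown. As a minimal sketch, assuming a hypothetical column on the large detail table, a 5 percent sampled statistics object would be created like this:

CREATE STATISTICS stat_SalesOrderDetail_ProductID
ON dbo.SalesOrderDetail (ProductID)   -- hypothetical column
WITH SAMPLE 5 PERCENT;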

NEW QUESTION 2
You have a database named DB1 that contains millions of rows. You plan to perform a weekly audit of the changes to the rows.
You need to ensure that you can view which rows were modified and the hour that the modification occurred. What should you do?

  • A. Enable Policy-Based Management
  • B. Configure Stretch Database.
  • C. Configure an SSIS database.
  • D. Enable change data capture.

Answer: D

Explanation: SQL Server 2017 provides two features that track changes to data in a database: change data capture and change tracking.
Change data capture provides historical change information for a user table by capturing both the fact that DML changes were made and the actual data that was changed. Changes are captured by using an asynchronous process that reads the transaction log and has a low impact on the system.
References:
https://docs.microsoft.com/en-us/sql/relational-databases/track-changes/track-data-changes-sql-server
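As a minimal sketch (the audited table name here is an assumption), change data capture is enabled first for the database and then for each table to audit; the captured log sequence numbers can later be converted to times with sys.fn_cdc_map_lsn_to_time to determine the hour each modification occurred:

USE DB1;
EXEC sys.sp_cdc_enable_db;            -- enable CDC for the database

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Orders',       -- hypothetical table name
    @role_name     = NULL;            -- no gating role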

NEW QUESTION 3
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have a Microsoft Azure SQL Data Warehouse instance. You run the following Transact-SQL statement:
[Exhibit]
The query fails to return results.
You need to determine why the query fails.
Solution: You run the following Transact-SQL statement:
[Exhibit]
Does the solution meet the goal?

  • A. Yes
  • B. No

Answer: B

Explanation: To use submit_time, we must query the sys.dm_pdw_exec_requests dynamic management view.
References:
https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-exec
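The attempted diagnostic statement is only in the exhibit; a minimal sketch of the correct approach, querying the dynamic management view that exposes submit_time, is:

SELECT TOP 20 request_id, session_id, status, submit_time, total_elapsed_time, command
FROM sys.dm_pdw_exec_requests
ORDER BY submit_time DESC;            -- most recently submitted requests first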

NEW QUESTION 4
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have the following line-of-business solutions:
• ERP system
• Online WebStore
• Partner extranet
One or more Microsoft SQL Server instances support each solution. Each solution has its own product catalog. You have an additional server that hosts SQL Server Integration Services (SSIS) and a data warehouse. You populate the data warehouse with data from each of the line-of-business solutions. The data warehouse does not store primary key values from the individual source tables.
The database for each solution has a table named Products that stores product information. The Products table in each database uses a separate and unique key for product records. Each table shares a column named ReferenceNr between the databases. This column is used to create queries that involve more than one solution.
You need to load data from the individual solutions into the data warehouse nightly. The following requirements must be met:
• If a change is made to the ReferenceNr column in any of the sources, set the value of IsDisabled to True and create a new row in the Products table.
• If a row is deleted in any of the sources, set the value of IsDisabled to True in the data warehouse.
Solution: Perform the following actions:
• Enable Change Tracking for the Products table in the source databases.
• Query the CHANGETABLE function from the sources for the updated rows.
• Set the IsDisabled column to True for the listed rows that have the old ReferenceNr value.
• Create a new row in the data warehouse Products table with the new ReferenceNr value.
Does the solution meet the goal?

  • A. Yes
  • B. No

Answer: B

Explanation: The solution checks only for updated rows; it must also check for deleted rows.
References: https://www.timmitchell.net/post/2016/01/18/getting-started-with-change-tracking-in-sql-server/
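For contrast, here is a hedged sketch of reading deletes as well as updates from Change Tracking; the key column name and the handling of @last_sync_version are assumptions. Rows with SYS_CHANGE_OPERATION = 'D' are the ones the stated solution misses.

DECLARE @last_sync_version bigint = 0;   -- in practice, persisted from the previous load

SELECT ct.ProductID,                     -- hypothetical key column
       ct.SYS_CHANGE_OPERATION,          -- 'I' = insert, 'U' = update, 'D' = delete
       ct.SYS_CHANGE_VERSION
FROM CHANGETABLE(CHANGES dbo.Products, @last_sync_version) AS ct;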

NEW QUESTION 5
You are a data warehouse developer.
You need to create a Microsoft SQL Server Integration Services (SSIS) catalog on a production SQL Server instance.
Which features are needed? To answer, select the appropriate options in the answer area.
[Exhibit]

    Answer:

Explanation:
Box 1: Yes. "Enable CLR Integration" must be selected because the catalog uses CLR stored procedures.
Box 2: Yes. Once you have selected the "Enable CLR Integration" option, another check box named "Enable automatic execution of Integration Services stored procedure at SQL Server startup" is enabled. Select this check box so that the catalog startup stored procedure runs each time the SSIS server instance is restarted.
[Exhibit]
Box 3: No.
References:
    https://www.mssqltips.com/sqlservertip/4097/understanding-the-sql-server-integration-services-catalog-and-crea
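The answer area itself is not shown above, but as a minimal sketch the CLR requirement can also be met from Transact-SQL, which is the equivalent of selecting the "Enable CLR Integration" check box:

EXEC sp_configure 'clr enabled', 1;   -- required by the SSIS catalog's CLR stored procedures
RECONFIGURE;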

    NEW QUESTION 6
    Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
    After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
    You plan to deploy a Microsoft SQL server that will host a data warehouse named DB1. The server will contain four SATA drives configured as a RAID 10 array.
You need to minimize write contention on the transaction log when data is being loaded to the database.
Solution: You add more data files to DB1.
    Does this meet the goal?

    • A. Yes
    • B. No

    Answer: B

    Explanation: There is no performance gain, in terms of log throughput, from multiple log files. SQL Server does not write log records in parallel to multiple log files.
Instead, you should place the log file on a separate drive.
References:
    https://www.red-gate.com/simple-talk/sql/database-administration/optimizing-transaction-log-throughput/ https://docs.microsoft.com/en-us/sql/relational-databases/policy-based-management/place-data-and-log-files-on-
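As a hedged illustration of the recommended fix (the logical file name and target path are assumptions, not values from the scenario), the log file can be pointed at a dedicated drive and physically moved while the database is offline:

ALTER DATABASE DB1
MODIFY FILE (NAME = DB1_log, FILENAME = 'L:\SQLLogs\DB1_log.ldf');   -- takes effect at next startup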

    NEW QUESTION 7
    You have a data warehouse.
    You need to move a table named Fact.ErrorLog to a new filegroup named LowCost.
    Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
[Exhibit]

      Answer:

      Explanation: Step 1: Add a filegroup named LowCost to the database. First create a new filegroup.
      Step 2:
The next stage is to go to the 'Files' page in the same Properties window and add a file to the filegroup (a filegroup always contains one or more files).
Step 3:
Moving a table to a different filegroup means moving the table's clustered index to the new filegroup. This may seem strange at first, but it is not surprising when you remember that the leaf level of the clustered index actually contains the table data. Moving the clustered index can be done in a single statement using the DROP_EXISTING clause, as follows (using one of the AdventureWorks2008R2 tables as an example):
      CREATE UNIQUE CLUSTERED INDEX PK_Department_DepartmentID ON HumanResources.Department(DepartmentID)
      WITH (DROP_EXISTING=ON,ONLINE=ON) ON SECONDARY
      This recreates the same index but on the SECONDARY filegroup.
      References:
      http://www.sqlmatters.com/Articles/Moving%20a%20Table%20to%20a%20Different%20Filegroup.aspx
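Applied to this scenario, the three steps might look like the following Transact-SQL sketch. The database name, file path, index name, and key column are assumptions; WITH (DROP_EXISTING = ON) requires that the index name matches the table's existing clustered index.

-- Step 1: add the new filegroup.
ALTER DATABASE MyDW ADD FILEGROUP LowCost;

-- Step 2: add at least one file to the filegroup.
ALTER DATABASE MyDW
ADD FILE (NAME = LowCost_01, FILENAME = 'E:\SQLData\LowCost_01.ndf', SIZE = 512MB)
TO FILEGROUP LowCost;

-- Step 3: rebuild the existing clustered index on the new filegroup, which moves the data.
CREATE CLUSTERED INDEX CIX_FactErrorLog
ON Fact.ErrorLog (ErrorLogID)
WITH (DROP_EXISTING = ON)
ON LowCost;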

      NEW QUESTION 8
      You have a Microsoft SQL Server Integration Services (SSIS) package that loads data into a data warehouse each night from a transactional system. The package also loads data from a set of Comma-Separated Values (CSV) files that are provided by your company’s finance department.
      The SSIS package processes each CSV file in a folder. The package reads the file name for the current file into a variable and uses that value to write a log entry to a database table.
      You need to debug the package and determine the value of the variable before each file is processed.
      Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
[Exhibit]

        Answer:

        Explanation: You debug control flows.
        The Foreach Loop container is used for looping through a group of files. Put the breakpoint on it.
The Locals window displays the variables that are currently in scope, including the package variable that holds the file name, while execution is suspended at the breakpoint.
        References: https://docs.microsoft.com/en-us/sql/integration-services/troubleshooting/debugging-control-flow
        http://blog.pragmaticworks.com/looping-through-a-result-set-with-the-foreach-loop

        NEW QUESTION 9
        Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
        You have a data warehouse that stores information about products, sales, and orders for a manufacturing company. The instance contains a database that has two tables named SalesOrderHeader and SalesOrderDetail. SalesOrderHeader has 500,000 rows and SalesOrderDetail has 3,000,000 rows.
        Users report performance degradation when they run the following stored procedure:
[Exhibit]
        You need to optimize performance.
        Solution: You run the following Transact-SQL statement:
[Exhibit]
        Does the solution meet the goal?

        • A. Yes
        • B. No

        Answer: B

Explanation: A sample of 100 rows out of 500,000 is too small to produce useful statistics.
        References: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-tables-statistics

        NEW QUESTION 10
        Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
You are implementing a Microsoft SQL Server data warehouse with a multi-dimensional data model. When testing a pilot version of the data warehouse, business users observe that the number of products in stock is inaccurate. The number of products in stock always increases and represents the total number of products that have ever been in stock.
        You need to correct the existing model and ensure that it reflects the number of in-stock products. You must not change the overall structure of the data model.
        What should you do?

        • A. star schema
        • B. snowflake schema
        • C. conformed dimension
        • D. slowly changing dimension (SCD)
        • E. fact table
        • F. semi-additive measure
        • G. non-additive measure
        • H. dimension table reference relationship

Answer: F

Explanation: The in-stock quantity is a snapshot-style measure: it can be summed across products, but not across time. Summing it over the date dimension produces the ever-increasing total that users observe. Modeling it as a semi-additive measure (for example, aggregated with LastNonEmpty over time) corrects the result without changing the overall structure of the model.

        NEW QUESTION 11
        Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
        After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
        You configure a new matching policy Master Data Services (MDS) as shown in the following exhibit.
[Exhibit]
        You review the Matching Results of the policy and find that the number of new values matches the new values.
        You verify that the data contains multiple records that have similar address values, and you expect some of the records to match.
You need to increase the likelihood that the records will match when they have similar address values.
Solution: You decrease the minimum matching score of the matching policy.
        Does this meet the goal?

        • A. Yes
• B. No

        Answer: A

        Explanation: We decrease the Min. matching score.
        A data matching project consists of a computer-assisted process and an interactive process. The matching project applies the matching rules in the matching policy to the data source to be assessed. This process assesses the likelihood that any two rows are matches in a matching score. Only those records with a probability of a match greater than a value set by the data steward in the matching policy will be considered a match.
        References: https://docs.microsoft.com/en-us/sql/data-quality-services/data-matching

        NEW QUESTION 12
        Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
        You are designing a data warehouse and the load process for the data warehouse.
        You have a source system that contains two tables named Table1 and Table2. All the rows in each table have a corresponding row in the other table.
        The primary key for Table1 is named Key1. The primary key for Table2 is named Key2.
        You need to combine both tables into a single table named Table3 in the data warehouse. The solution must ensure that all the nonkey columns in Table1 and Table2 exist in Table3. Which component should you use to load the data to the data warehouse?

        • A. the Slowly Changing Dimension transformation
        • B. the Conditional Split transformation
        • C. the Merge transformation
        • D. the Data Conversion transformation
        • E. an Execute SQL task
        • F. the Aggregate transformation
        • G. the Lookup transformation

        Answer: G

        Explanation: The Lookup transformation performs lookups by joining data in input columns with columns in a reference dataset. You use the lookup to access additional information in a related table that is based on values in common columns.
You can configure the Lookup transformation in the following ways:
• Specify joins between the input and the reference dataset.
• Add columns from the reference dataset to the Lookup transformation output.
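A rough Transact-SQL equivalent of what the Lookup-based load accomplishes, assuming purely for illustration that Table2 carries the Key1 value and that the nonkey column names are hypothetical:

INSERT INTO dw.Table3 (Key1, Key2, ColumnA, ColumnB, ColumnC, ColumnD)
SELECT t1.Key1, t2.Key2,
       t1.ColumnA, t1.ColumnB,        -- nonkey columns from Table1
       t2.ColumnC, t2.ColumnD         -- nonkey columns from Table2
FROM dbo.Table1 AS t1
JOIN dbo.Table2 AS t2
    ON t2.Key1 = t1.Key1;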

        NEW QUESTION 13
        You are developing a data warehouse. You run the following Transact-SQL statement:
[Exhibit]
        Use the drop-down menus to select the answer choice that answers each question based on the information presented in the graphic.
        NOTE: Each correct selection is worth one point.
[Exhibit]

          Answer:

Explanation: [Exhibit]

          NEW QUESTION 14
          Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
          You have a data warehouse that stores information about products, sales, and orders for a manufacturing company. The instance contains a database that has two tables named SalesOrderHeader and SalesOrderDetail. SalesOrderHeader has 500,000 rows and SalesOrderDetail has 3,000,000 rows.
          Users report performance degradation when they run the following stored procedure:
[Exhibit]
          You need to optimize performance.
          Solution: You run the following Transact-SQL statement:
[Exhibit]
          Does the solution meet the goal?

          • A. Yes
          • B. No

          Answer: B

Explanation: Microsoft recommends against specifying 0 PERCENT or 0 ROWS in a CREATE STATISTICS ... WITH SAMPLE statement. When 0 PERCENT or 0 ROWS is specified, the statistics object is created but does not contain statistics data.
          References: https://docs.microsoft.com/en-us/sql/t-sql/statements/create-statistics-transact-sql
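As a hedged illustration (the table and column names are assumptions), contrast the discouraged zero-row sample with a usable alternative:

-- Discouraged: the statistics object is created but contains no data.
CREATE STATISTICS stat_Detail_Empty
ON dbo.SalesOrderDetail (ProductID) WITH SAMPLE 0 ROWS;

-- Better: sample a meaningful fraction, or scan the whole table.
CREATE STATISTICS stat_Detail_ProductID
ON dbo.SalesOrderDetail (ProductID) WITH FULLSCAN;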

          NEW QUESTION 15
          You have a data warehouse that contains a fact table named Table1 and a Product table named Dim1. Dim1 is configured as shown in the following table.
[Exhibit]
          You are adding a second OLTP system to the data warehouse as a new fact table named Table2. The Product table of the OLTP system is configured as shown in the following table
[Exhibit]
          You need to modify Dim1 to ensure that the table can be used for both fact tables.
          Which two actions should you perform? Each correct answer presents part of the solution.
          NOTE: Each correct selection is worth one point.

          • A. Modify the data type of the Weight column in Dim1 to decimal (19, 2).
          • B. Add the SalesUnit column to Dim1.
          • C. Modify the data type of the Name column in Dim1 to varchar (85).
          • D. Drop the ProductKey column from Dim1 and replace the column with the ProductIdentifier column.
          • E. Drop the Color column from Dim1.
          • F. Modify the data type of the ProductKey column in Dim1 to char (18).

          Answer: AD
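The table definitions appear only in the exhibits, but the column change in option A would be a simple widening of the existing column, sketched here with nullability assumed:

ALTER TABLE dbo.Dim1 ALTER COLUMN Weight decimal(19, 2) NULL;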

          NEW QUESTION 16
          Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
          You are a database administrator for an e-commerce company that runs an online store. The company has the databases described in the following table.
[Exhibit]
          Each week, you import a product catalog from a partner company to a staging table in DB2.
          You need to create a stored procedure that will update the staging table by inserting new products and deleting discontinued products.
          What should you use?

          • A. Lookup transformation
          • B. Merge transformation
          • C. Merge Join transformation
          • D. MERGE statement
          • E. Union All transformation
          • F. Balanced Data Distributor transformation
          • G. Sequential container
          • H. Foreach Loop container

Answer: D

Explanation: The requirement is a stored procedure, so SSIS data flow components and containers do not apply. The Transact-SQL MERGE statement inserts, updates, or deletes rows in a target table based on a join with a source table, so a single MERGE can insert the new products and delete the discontinued ones.
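A minimal sketch of such a procedure, with table and column names as assumptions, that inserts new products and deletes discontinued ones in a single MERGE statement:

CREATE PROCEDURE dbo.usp_SyncStagingProducts
AS
BEGIN
    MERGE dbo.StagingProducts AS tgt
    USING dbo.PartnerCatalogImport AS src
        ON tgt.ProductID = src.ProductID
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (ProductID, ProductName, ListPrice)
        VALUES (src.ProductID, src.ProductName, src.ListPrice)
    WHEN NOT MATCHED BY SOURCE THEN
        DELETE;
END;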

          NEW QUESTION 17
          Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in the series.
          Start of repeated scenario
Contoso, Ltd. has a Microsoft SQL Server environment that includes SQL Server Integration Services (SSIS), a data warehouse, and SQL Server Analysis Services (SSAS) Tabular and multidimensional models.
The data warehouse stores data related to the company's sales, financial transactions, and financial budgets. All data for the data warehouse originates from the company's business financial system.
          The data warehouse includes the following tables:
[Exhibit]
          The company plans to use Microsoft Azure to store older records from the data warehouse. You must modify the database to enable the Stretch Database capability.
Users report that they are becoming confused about which city table to use for various queries. You plan to create a new schema named Dimension and change the name of the dbo.du_city table to Dimension.city. Data loss is not permissible, and you must not leave traces of the old table in the data warehouse.
You plan to create a measure that calculates the profit margin based on the existing measures.
          You must improve performance for queries against the fact.Transaction table. You must implement appropriate indexes and enable the Stretch Database capability.
          End of repeated scenario
You need to resolve the problems reported about the dbo.du_city table.
          How should you complete the Transact-SQL statement? To answer, drag the appropriate Transact-SQL segments to the correct locations. Each Transact-SQL segment may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
[Exhibit]

            Answer:

Explanation: [Exhibit]
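The answer exhibit is not reproduced, so the following is only a plausible sketch of the required statements given the scenario: create the Dimension schema, move the table into it without copying data, and rename it.

CREATE SCHEMA Dimension;
GO
ALTER SCHEMA Dimension TRANSFER dbo.du_city;   -- moves the table; no data is copied or lost
GO
EXEC sp_rename 'Dimension.du_city', 'city';    -- final name: Dimension.city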

            NEW QUESTION 18
            Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
            You are developing a Microsoft SQL Server Integration Services (SSIS) package.
            You are importing data from databases at retail stores into a central data warehouse. All stores use the same database schema.
            The query being executed against the retail stores is shown below:
[Exhibit]
            The data source property named IsSorted is set to True. The output of the transform must be sorted.
            You need to add a component to the data flow. Which SSIS Toolbox item should you use?

            • A. CDC Control task
            • B. CDC Splitter
            • C. Union All
            • D. XML task
            • E. Fuzzy Grouping
            • F. Merge
            • G. Merge Join

Answer: F

Explanation: The Merge transformation combines two sorted inputs into a single sorted output, which is why the sources must have the IsSorted property set to True. Union All can combine the inputs but does not guarantee that the output remains sorted.

Thanks for reading the newest 70-767 exam dumps! We recommend that you try the PREMIUM 2passeasy 70-767 dumps in VCE and PDF here: https://www.2passeasy.com/dumps/70-767/ (109 Q&As Dumps)