Microsoft 70-767 Exam Questions and Answers 2019


Here are free 70-767 dumps questions for you:

NEW QUESTION 1
You need to build a knowledge base in Data Quality Services (DQS).
You need to ensure that the data is validated by using a third-party data source before DQS processes the data. Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of
actions to the answer area and arrange them in the correct order.
[Exhibit]

    Answer:

    Explanation: Building a DQS knowledge base involves the following processes and components:
    Step 1: Perform Knowledge Discovery
    A computer-assisted process that builds knowledge into a knowledge base by processing a data sample.
    Step 2: Perform Domain Management
    An interactive process that enables the data steward to verify and modify the knowledge that is in knowledge base domains, each of which is associated with a data field. This can include setting field-wide properties, creating rules, changing specific values, using reference data services, or setting up term-based or cross-field relationships.
    Step 3: Configure Reference Data Services
    A process of domain management that enables you to validate your data against data maintained and guaranteed by a reference data provider.
    Step 4: Configure a Matching Policy
    A policy that defines how DQS processes records to identify potential duplicates and non-matches, built into the knowledge base in a computer-assisted and interactive process.
    References: https://docs.microsoft.com/en-us/sql/data-quality-services/dqs-knowledge-bases-and-domains

    NEW QUESTION 2
    Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
    You are developing a Microsoft SQL Server Integration Services (SSIS) package.
    You need to ensure that the package records the current Log Sequence Number (LSN) in the source database before the package begins reading source tables.
    Which SSIS Toolbox item should you use?

    • A. CDC Control task
    • B. CDC Splitter
    • C. Union All
    • D. XML task
    • E. Fuzzy Grouping
    • F. Merge
    • G. Merge Join

    Answer: A

    Explanation: The CDC Control task is used to control the life cycle of change data capture (CDC) packages. It handles CDC package synchronization with the initial load package, the management of Log Sequence Number (LSN) ranges that are processed in a run of a CDC package.
    References: https://docs.microsoft.com/en-us/sql/integration-services/control-flow/cdc-control-task
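    For context only, the LSN values that the CDC Control task records come from the change data capture functions in the source database. A minimal sketch of inspecting that value, assuming CDC is already enabled on the source database (this is illustrative and is not the statement the task itself issues):
    -- Return the highest LSN currently available in the CDC log tables,
    -- i.e. the upper bound the CDC Control task would record before the
    -- data flow starts reading the source tables.
    DECLARE @max_lsn binary(10) = sys.fn_cdc_get_max_lsn();
    SELECT @max_lsn AS CurrentMaxLsn,
           sys.fn_cdc_map_lsn_to_time(@max_lsn) AS CurrentMaxLsnTime;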

    NEW QUESTION 3
    Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
    After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
    You are configuring a Microsoft SQL Server named ow1 for a new data warehouse. The server contains eight drives and eight processor cores. Each drive uses a separate physical disk.
    You need to configure storage for the tempdb database. The solution must minimize the amount of time it takes to process daily ETL jobs.
    Solution: You configure eight files for the tempdb database. You place the files on a drive that contains the operating system files.
    Does this meet the goal?

    • A. Yes
    • B. No

    Answer: B
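    Placing all tempdb files on the drive that holds the operating system does not spread the I/O across the available physical disks. A minimal sketch of the intended layout, one data file per core on separate dedicated drives (drive letters, file names, and sizes below are hypothetical):
    -- Illustrative only: add tempdb data files on dedicated physical drives.
    ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev2, FILENAME = 'E:\tempdb\tempdev2.ndf', SIZE = 8GB, FILEGROWTH = 512MB);
    ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev3, FILENAME = 'F:\tempdb\tempdev3.ndf', SIZE = 8GB, FILEGROWTH = 512MB);
    -- ...repeat for the remaining drives so that eight equally sized files exist.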

    NEW QUESTION 4
    Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
    You are a database administrator for an e-commerce company that runs an online store. The company has the databases described in the following table.
    [Exhibit]
    Product prices are updated and are stored in a table named Products on DB1. The Products table is deleted and refreshed each night from MDS by using a Microsoft SQL Server Integration Services (SSIS) package. None of the data sources are sorted.
    You need to update the SSIS package to add current prices to the Products table. What should you use?

    • A. Lookup transformation
    • B. Merge transformation
    • C. Merge Join transformation
    • D. MERGE statement
    • E. Union All transformation
    • F. Balanced Data Distributor transformation
    • G. Sequential container
    • H. Foreach Loop container

    Answer: D

    Explanation: In the current release of SQL Server Integration Services, the SQL statement in an Execute SQL task can contain a MERGE statement. This MERGE statement enables you to accomplish multiple INSERT, UPDATE, and DELETE operations in a single statement.
    References:
    https://docs.microsoft.com/en-us/sql/integration-services/control-flow/merge-in-integration-services-packages
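    As an illustrative sketch only (the staging source and the column names below are hypothetical, not taken from the exhibit), the MERGE run from an Execute SQL task could upsert current prices into the Products table like this:
    MERGE dbo.Products AS tgt
    USING staging.ProductPrices AS src
        ON tgt.ProductID = src.ProductID
    WHEN MATCHED THEN
        UPDATE SET tgt.CurrentPrice = src.CurrentPrice
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (ProductID, ProductName, CurrentPrice)
        VALUES (src.ProductID, src.ProductName, src.CurrentPrice);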

    NEW QUESTION 5
    Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in the series.
    Start of repeated scenario
    Contoso, Ltd. has a Microsoft SQL Server environment that includes SQL Server Integration Services (SSIS), a data warehouse, and SQL Server Analysis Services (SSAS) Tabular and multi-dimensional models.
    The data warehouse stores data related to your company's sales, financial transactions, and financial budgets. All data for the data warehouse originates from the company's business financial system.
    The data warehouse includes the following tables:
    [Exhibit]
    You must implement a partitioning scheme for the fact.Transaction table to move older data to less expensive storage. Each partition will store data for a single calendar year, as shown in the exhibit. (Click the Exhibit button.) You must align the partitions.
    [Exhibit]
    The company plans to use Microsoft Azure to store older records from the data warehouse. You must modify the database to enable the Stretch Database capability.
    End of repeated scenario
    You need to perform the first step to partition the fact.Transaction table.
    How should you complete the Transact-SQL statement? To answer, select the appropriate Transact-SQL segments in the answer area.
    [Exhibit]

      Answer:

      Explanation: CREATE PARTITION FUNCTION
      [DateRange] (INT) AS RANGE LEFT
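      For reference, a complete first step might look like the following sketch (the boundary values and filegroup names are hypothetical; the real values come from the exhibit):
      -- One partition per calendar year; RANGE LEFT places each boundary value
      -- in the partition to its left.
      CREATE PARTITION FUNCTION [DateRange] (int)
          AS RANGE LEFT FOR VALUES (20141231, 20151231, 20161231);
      -- The matching partition scheme maps partitions to filegroups
      -- (one more filegroup than boundary values).
      CREATE PARTITION SCHEME [DateScheme]
          AS PARTITION [DateRange] TO (FG2014, FG2015, FG2016, FG2017);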

      NEW QUESTION 6
      Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
      You have a Microsoft SQL Server data warehouse instance that supports several client applications. The data warehouse includes the following tables: Dimension.SalesTerritory, Dimension.Customer,
      Dimension.Date, Fact.Ticket, and Fact.Order. The Dimension.SalesTerritory and Dimension.Customer tables are frequently updated. The Fact.Order table is optimized for weekly reporting, but the company wants to change it to daily. The Fact.Order table is loaded by using an ETL process. Indexes have been added to the table over time, but the presence of these indexes slows data loading.
      All data in the data warehouse is stored on a shared SAN. All tables are in a database named DB1. You have a second database named DB2 that contains copies of production data for a development environment. The data warehouse has grown and the cost of storage has increased. Data older than one year is accessed infrequently and is considered historical.
      You have the following requirements:
      • Implement table partitioning to improve the manageability of the data warehouse and to avoid the need to repopulate all transactional data each night. Use a partitioning strategy that is as granular as possible.
      • Partition the Fact.Order table and retain a total of seven years of data.
      • Partition the Fact.Ticket table and retain seven years of data. At the end of each month, the partition structure must apply a sliding window strategy to ensure that a new partition is available for the upcoming month, and that the oldest month of data is archived and removed.
      • Optimize data loading for the Dimension.SalesTerritory, Dimension.Customer, and Dimension.Date tables.
      • Maximize the performance during the data loading process for the Fact.Order partition.
      • Ensure that historical data remains online and available for querying.
      • Reduce ongoing storage costs while maintaining query performance for current data.
      You are not permitted to make changes to the client applications.
      You need to implement the data partitioning strategy. How should you partition the Fact.Order table?

      • A. Create 17,520 partitions.
      • B. Use a granularity of two days.
      • C. Create 2,557 partitions.
      • D. Create 730 partitions.

      Answer: C

      Explanation: We create one partition for each day. 7 years times 365 days is 2,555. Make that 2,557 to provide for leap years.
      From scenario: Partition the Fact.Order table and retain a total of seven years of data. Maximize the performance during the data loading process for the Fact.Order partition.
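      As a quick sanity check, assuming purely for illustration a seven-year window from 2010-01-01 through 2016-12-31 (which contains the two leap years 2012 and 2016):
      -- Count the days in the illustrative seven-year window.
      SELECT DATEDIFF(DAY, '20100101', '20161231') + 1 AS DailyPartitionCount;  -- 2,557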

      NEW QUESTION 7
      Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
      After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
      Each night you receive a comma separated values (CSV) file that contains different types of rows. Each row type has a different structure. Each row in the CSV file is unique. The first column in every row is named Type. This column identifies the data type.
      For each data type, you need to load data from the CSV file to a target table. A separate table must contain the number of rows loaded for each data type.
      Solution: You create a SQL Server Integration Services (SSIS) package as shown in the exhibit. (Click the
      Exhibit tab.)
      [Exhibit]
      Does the solution meet the goal?

      • A. Yes
      • B. No

      Answer: B

      Explanation: The conditional split must be before the count.

      NEW QUESTION 8
      Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
      After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
      You are the administrator of a Microsoft SQL Server Master Data Services (MDS) instance. The instance contains a model named Geography and a model named Customer. The Geography model contains an entity named CountryRegion.
      You need to ensure that the CountryRegion entity members are available in the Customer model.
      Solution: Configure an entity sync relationship to replicate the CountryRegion entity.
      Does the solution meet the goal?

      • A. Yes
      • B. No

      Answer: B

      NEW QUESTION 9
      Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
      After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
      Each night you receive a comma separated values (CSV) file that contains different types of rows. Each row type has a different structure. Each row in the CSV file is unique. The first column in every row is named Type. This column identifies the data type.
      For each data type, you need to load data from the CSV file to a target table. A separate table must contain the number of rows loaded for each data type.
      Solution: You create a SQL Server Integration Services (SSIS) package as shown in the exhibit. (Click the
      Exhibit tab.)
      [Exhibit]
      Does the solution meet the goal?

      • A. Yes
      • B. No

      Answer: B

      Explanation: The conditional split must be before the count.

      NEW QUESTION 10
      Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
      After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
      You have a data warehouse that stores information about products, sales, and orders for a manufacturing company. The instance contains a database that has two tables named SalesOrderHeader and SalesOrderDetail. SalesOrderHeader has 500,000 rows and SalesOrderDetail has 3,000,000 rows.
      Users report performance degradation when they run the following stored procedure:
      [Exhibit]
      You need to optimize performance.
      Solution: You run the following Transact-SQL statement:
      [Exhibit]
      Does the solution meet the goal?

      • A. Yes
      • B. No

      Answer: A

      Explanation: UPDATE STATISTICS updates query optimization statistics on a table or indexed view. FULLSCAN computes statistics by scanning all rows in the table or indexed view. FULLSCAN and SAMPLE 100 PERCENT have the same results.
      References:
      https://docs.microsoft.com/en-us/sql/t-sql/statements/update-statistics-transact-sql?view=sql-server-2017
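      A minimal sketch of the statement pattern the explanation describes (the exhibit is not reproduced here, so the table name is only illustrative):
      -- Rebuild optimizer statistics by scanning every row in the table.
      UPDATE STATISTICS dbo.SalesOrderDetail WITH FULLSCAN;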

      NEW QUESTION 11
      Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
      You are implementing a Microsoft SQL Server data warehouse with a multi-dimensional data model. You have a fact table that includes sales data for all products. The model includes a dimension named Geography that stores all geographies. You create a dimension that has a foreign key and provides the ability to analyze sales by the following sales channels: Internet or retail store.
      You need to update the data model to allow business users to analyze Internet sales by geography without changing the overall structure of the data model.
      What should you do?

      • A. star schema
      • B. snowflake schema
      • C. conformed dimension
      • D. slowly changing dimension (SCD)
      • E. fact table
      • F. semi-additive measure
      • G. non-additive measure
      • H. dimension table reference relationship

      Answer: D

      NEW QUESTION 12
      You manage an inventory system that has a table named Products. The Products table has several hundred columns.
      You generate a report that relates two columns named ProductReference and ProductName from the Products table. The result is sorted by a column named QuantityInStock from largest to smallest.
      You need to create an index that the report can use.
      How should you complete the Transact-SQL statement? To answer, select the appropriate Transact-SQL segments in the answer area.
      [Exhibit]

        Answer:

        Explanation: [Exhibit]
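        The selected segments are not reproduced above. A statement consistent with the stated requirement (sort by QuantityInStock from largest to smallest, covering the two reported columns) would look something like this sketch; the index name is hypothetical:
        CREATE NONCLUSTERED INDEX IX_Products_QuantityInStock
            ON dbo.Products (QuantityInStock DESC)
            INCLUDE (ProductReference, ProductName);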

        NEW QUESTION 13
        Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
        After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
        Your company uses Microsoft SQL Server to deploy a data warehouse to an environment that has a SQL Server Analysis Services (SSAS) instance. The data warehouse includes the Fact.Order table as shown in the following table definition. The table has no indexes.
        [Exhibit]
        You must minimize the amount of space that indexes for the Fact.Order table consume. You run the following queries frequently. Both queries must be able to use a columnstore index:
        [Exhibit]
        You need to ensure that the queries complete as quickly as possible.
        Solution: You create two nonclustered indexes. The first includes the [Order Date Key] and [Tax Amount] columns. The second includes the [Order Date Key] and [Total Excluding Tax] columns.
        Does the solution meet the goal?

        • A. Yes
        • B. No

        Answer: B
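        Two rowstore nonclustered indexes cannot satisfy the requirement that both queries be able to use a columnstore index. A minimal sketch of the usual alternative, a single nonclustered columnstore index over the columns the two queries touch (the column list is assumed from the solution text; the index name is hypothetical):
        -- One nonclustered columnstore index can serve both aggregate queries
        -- while keeping index storage small.
        CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Fact_Order
            ON [Fact].[Order] ([Order Date Key], [Tax Amount], [Total Excluding Tax]);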

        NEW QUESTION 14
        Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
        You are a database administrator for an e-commerce company that runs an online store. The company has three databases as described in the following table.
        [Exhibit]
        You plan to load at least one million rows of data each night from DB1 into the OnlineOrder table. You must load data into the correct partitions using a parallel process.
        You create 24 Data Flow tasks. You must place the tasks into a component to allow parallel load. After all of the load processes complete, the process must proceed to the next task.
        You need to load the data for the OnlineOrder table. What should you use?

        • A. Lookup transformation
        • B. Merge transformation
        • C. Merge Join transformation
        • D. MERGE statement
        • E. Union All transformation
        • F. Balanced Data Distributor transformation
        • G. Sequential container
        • H. Foreach Loop container

        Answer: H

        Explanation: The Parallel Loop Task is an SSIS Control Flow task, which can execute multiple iterations of the standard Foreach Loop Container concurrently.
        References:
        http://www.cozyroc.com/ssis/parallel-loop-task

        NEW QUESTION 15
        Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in the series.
        Start of repeated scenario
        Contoso, Ltd. has a Microsoft SQL Server environment that includes SQL Server Integration Services (SSIS), a data warehouse, and SQL Server Analysis Services (SSAS) Tabular and multi-dimensional models.
        The data warehouse stores data related to your company's sales, financial transactions, and financial budgets. All data for the data warehouse originates from the company's business financial system.
        The data warehouse includes the following tables:
        [Exhibit]
        The company plans to use Microsoft Azure to store older records from the data warehouse. You must modify the database to enable the Stretch Database capability.
        Users report that they are becoming confused about which city table to use for various queries. You plan to create a new schema named Dimension and change the name of the dbo.dimension_city table to Dimension.City. Data loss is not permissible, and you must not leave traces of the old table in the data warehouse.
        The fact.Transaction table has measures named RawCost and TotalSale that calculate the wholesale cost of materials. You plan to create a measure that calculates the profit margin based on the two existing measures.
        You must implement a partitioning scheme for the fact.Transaction table to move older data to less expensive storage. Each partition will store data for a single calendar year, as shown in the exhibit. (Click the Exhibit button.) You must align the partitions.
        [Exhibit]
        You must improve performance for queries against the fact.Transaction table. You must implement appropriate indexes and enable the Stretch Database capability.
        End of repeated scenario
        You need to create the ProfitMargin measure for the fact.Transaction table.
        How should you complete the MDX statement? To answer, select the appropriate MDX segments in the answer area.
        [Exhibit]

          Answer:

          Explanation: [Exhibit]

          NEW QUESTION 16
          Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
          After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
          You have a Microsoft Azure SQL Data Warehouse instance. You run the following Transact-SQL statement:
          [Exhibit]
          The query fails to return results.
          You need to determine why the query fails.
          Solution: You run the following Transact-SQL statements:
          [Exhibit]
          Does the solution meet the goal?

          • A. Yes
          • B. No

          Answer: B

          Explanation: We must use Label, not QueryID, in the WHERE clause.
          References:
          https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-exec
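          A minimal sketch of the pattern the explanation refers to; the label text is hypothetical and must match the OPTION (LABEL = '...') hint used when the original query was submitted:
          -- Locate the failed request by its query label, then inspect its status and error.
          SELECT request_id, [status], command, error_id
          FROM sys.dm_pdw_exec_requests
          WHERE [label] = 'My query label';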

          NEW QUESTION 17
          Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
          After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
          Each night you receive a comma separated values (CSV) file that contains different types of rows. Each row type has a different structure. Each row in the CSV file is unique. The first column in every row is named Type. This column identifies the data type.
          For each data type, you need to load data from the CSV file to a target table. A separate table must contain the number of rows loaded for each data type.
          Solution: You create a SQL Server Integration Services (SSIS) package as shown in the exhibit. (Click the
          Exhibit tab.)
          [Exhibit]
          Does the solution meet the goal?

          • A. Yes
          • B. No

          Answer: A

          Explanation: The conditional split is correctly placed before the count.

          NEW QUESTION 18
          You manage Master Data Services (MDS). You plan to create entities and attributes and load them with the data. You also plan to match data before loading it into Data Quality Services (DQS).
          You need to recommend a solution to perform the actions.
          What should you recommend?

          • A. MDS Add-in for Microsoft Excel
          • B. MDS Configuration Manager
          • C. Data Quality Matching
          • D. MDS repository

          Answer: A

          Explanation: In the Master Data Services Add-in for Excel, matching functionality is provided by Data Quality Services (DQS). This functionality must be enabled to be used.
          To enable Data Quality Services integration:
          1. Open Master Data Services Configuration Manager.
          2. In the left pane, click Web Configuration.
          3. On the Web Configuration page, select the website and web application.
          4. In the Enable DQS Integration section, click Enable integration with Data Quality Services.
          5. On the confirmation dialog box, click OK.
          References:
          https://docs.microsoft.com/en-us/sql/master-data-services/install-windows/enable-data-quality-services-integrati

          P.S. Certleader is now offering 100% pass-guaranteed 70-767 dumps! All 70-767 exam questions have been updated with correct answers: https://www.certleader.com/70-767-dumps.html (109 New Questions)