Vivid 70-767 Study Guides 2019

Our 70-767 study guides are updated and verified by experts. Once you have completely prepared with them, you will be ready for the real 70-767 exam without a problem. "PASSED first attempt! Here is what I did."

We also have free 70-767 dumps questions for you:

NEW QUESTION 1
You are designing a data transformation process using Microsoft SQL Server Integration Services (SSIS). You need to ensure that every row is compared with every other row during transformation.
What should you configure? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
[Exhibit]

    Answer:

    Explanation: When you configure the Fuzzy Grouping transformation, you can specify the comparison algorithm that the transformation uses to compare rows in the transformation input. If you set the Exhaustive property to true, the transformation compares every row in the input to every other row in the input. This comparison algorithm may produce more accurate results, but it is likely to make the transformation perform more slowly unless the number of rows in the input is small.
    References:
    https://docs.microsoft.com/en-us/sql/integration-services/data-flow/transformations/fuzzy-grouping-transformati

    NEW QUESTION 2
    Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
    You have a Microsoft SQL Server data warehouse instance that supports several client applications. The data warehouse includes the following tables: Dimension.SalesTerritory, Dimension.Customer,
Dimension.Date, Fact.Ticket, and Fact.Order. The Dimension.SalesTerritory and Dimension.Customer tables are frequently updated. The Fact.Order table is optimized for weekly reporting, but the company wants to change it to daily reporting. The Fact.Order table is loaded by using an ETL process. Indexes have been added to the table over time, but the presence of these indexes slows data loading.
    All data in the data warehouse is stored on a shared SAN. All tables are in a database named DB1. You have a second database named DB2 that contains copies of production data for a development environment. The data warehouse has grown and the cost of storage has increased. Data older than one year is accessed infrequently and is considered historical.
    You have the following requirements:
• Implement table partitioning to improve the manageability of the data warehouse and to avoid the need to repopulate all transactional data each night. Use a partitioning strategy that is as granular as possible.
• Partition the Fact.Order table and retain a total of seven years of data.
• Partition the Fact.Ticket table and retain seven years of data. At the end of each month, the partition structure must apply a sliding window strategy to ensure that a new partition is available for the upcoming month, and that the oldest month of data is archived and removed.
• Optimize data loading for the Dimension.SalesTerritory, Dimension.Customer, and Dimension.Date tables.
• Maximize the performance during the data loading process for the Fact.Order partition.
• Ensure that historical data remains online and available for querying.
• Reduce ongoing storage costs while maintaining query performance for current data.
    You are not permitted to make changes to the client applications. You need to implement partitioning for the Fact.Ticket table.
    Which three actions should you perform in sequence? To answer, drag the appropriate actions to the correct locations. Each action may be used once, more than once or not at all. You may need to drag the split bar between panes or scroll to view content.
    NOTE: More than one combination of answer choices is correct. You will receive credit for any of the correct combinations you select.
[Exhibit]

      Answer:

      Explanation: From scenario: - Partition the Fact.Ticket table and retain seven years of data. At the end of each month, the partition structure must apply a sliding window strategy to ensure that a new partition is available for the upcoming month, and that the oldest month of data is archived and removed.
The detailed steps for the recurring partition maintenance tasks are covered in the reference below.
References:
      https://docs.microsoft.com/en-us/sql/relational-databases/tables/manage-retention-of-historical-data-in-system-v
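The answer exhibit is not reproduced here. As a sketch only, a monthly sliding window for Fact.Ticket typically combines SWITCH, MERGE RANGE, and SPLIT RANGE; the function, scheme, staging-table, and boundary values below are assumptions:

-- Assumed objects: pfTicketMonthly (partition function), psTicketMonthly (partition scheme),
-- Fact.Ticket_Archive (empty table with the same structure, used to switch data out).

-- 1. Switch the oldest month out, then remove its boundary so the window slides forward.
ALTER TABLE Fact.Ticket SWITCH PARTITION 1 TO Fact.Ticket_Archive;
ALTER PARTITION FUNCTION pfTicketMonthly() MERGE RANGE ('20120201');

-- 2. Point the scheme at the filegroup for the upcoming month, then add its boundary.
ALTER PARTITION SCHEME psTicketMonthly NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pfTicketMonthly() SPLIT RANGE ('20190301');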

      NEW QUESTION 3
You are implementing a Microsoft SQL Server data warehouse with a multi-dimensional data model. Orders are stored in a table named FactOrder. The addresses that are associated with all orders are stored in a fact table named FactAddress. A key in the FactAddress table specifies the type of address for an order.
      You need to ensure that business users can examine the address data by either of the following:
      • shipping address and billing address
• shipping address or billing address type
Which data model should you use?

      • A. star schema
      • B. snowflake schema
      • C. conformed dimension
      • D. slowly changing dimension (SCD)
      • E. fact table
      • F. semi-additive measure
      • G. non-additive measure
      • H. dimension table reference relationship

      Answer: H

      NEW QUESTION 4
      Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
      You have a Microsoft Azure SQL Data Warehouse instance that must be available six months a day for reporting.
You need to pause the compute resources when the instance is not being used.
Solution: You use SQL Server Configuration Manager.
      Does the solution meet the goal?

      • A. Yes
      • B. No

      Answer: B

Explanation: To pause a SQL Data Warehouse database, use any of these individual methods:
Pause compute with the Azure portal
Pause compute with PowerShell
Pause compute with REST APIs
References:
      https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-manage-compute-overview

      NEW QUESTION 5
      You are the administrator of a Microsoft SQL Server Master Data Services (MDS) model. The model was developed to provide consistent and validated snapshots of master data to the ETL processes by using subscription views. A new model version has been created.
      You need to ensure that the ETL processes retrieve the latest snapshot of master data. What should you do?

      • A. Add a version flag to the last committed version, and create new subscription views that use this version flag.
      • B. Update the subscription views to use the last committed version.
      • C. Add a version flag to the new version, and update the subscription views to use this version flag.
      • D. Add a version flag to the new version, and create new subscription views that use this version flag.

      Answer: B

      NEW QUESTION 6
      Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You are the administrator of a Microsoft SQL Server Master Data Services (MDS) instance. The instance contains a model named Geography and a model named Customer. The Geography model contains an entity named CountryRegion.
You need to ensure that the CountryRegion entity members are available in the Customer model.
      Solution: In the Customer model, add a domain-based attribute to reference the CountryRegion entity in the Geography model.
      Does the solution meet the goal?

      • A. Yes
      • B. No

      Answer: A

      NEW QUESTION 7
You have a fact table in a data warehouse that stores financial data. The table contains eight columns configured as shown in the following table.
[Exhibit]
      You need to identify a column that can be aggregated across all dimensions. Which column should you identify?

      • A. OpeningPrice
      • B. StockID
      • C. NumberOfTrades
      • D. MarketID

      Answer: C

Explanation: NumberOfTrades is a count, which is a fully additive measure that can be summed across every dimension. Prices such as OpeningPrice are not additive across the time dimension, and StockID and MarketID are keys rather than measures. Aggregates are sometimes referred to as pre-calculated summary data, since aggregations are usually precomputed, partially summarized data that are stored in new aggregated tables.
      References: https://en.wikipedia.org/wiki/Aggregate_(data_warehouse)
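A minimal sketch of the idea, assuming a fact table named dbo.FactFinance with the columns from the question: a fully additive measure can be summed under any grouping of dimension keys.

-- NumberOfTrades can be rolled up by any combination of dimensions.
SELECT StockID, MarketID, SUM(NumberOfTrades) AS TotalTrades
FROM dbo.FactFinance          -- hypothetical table name; not given in the question
GROUP BY StockID, MarketID;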

      NEW QUESTION 8
      Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
      You are implementing the data load process for a data warehouse.
      The data warehouse uses daily partitions to store data added or modified during the last 60 days. Older data is stored in monthly partitions.
      You need to ensure that the ETL process can modify the partition scheme during the data load process. Which component should you use to load the data to the data warehouse?

      • A. the Slowly Changing Dimension transformation
      • B. the Conditional Split transformation
      • C. the Merge transformation
      • D. the Data Conversion transformation
      • E. an Execute SQL task

      Answer: E
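None of the listed data flow transformations can issue partition DDL, while an Execute SQL task can. A minimal sketch of the kind of statement such a task might run before or after the load (the function, scheme, and boundary values are assumptions):

-- Add a boundary for the new daily partition before loading it.
ALTER PARTITION SCHEME psSalesDaily NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pfSalesDaily() SPLIT RANGE ('20190401');

-- After 60 days, merge an old daily boundary into its monthly partition.
ALTER PARTITION FUNCTION pfSalesDaily() MERGE RANGE ('20190201');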

      NEW QUESTION 9
      You have a database named OnlineSales that contains a table named Customers. You plan to copy incremental changes from the Customers table to a data warehouse every hour.
      You need to enable change tracking for the Customers table.
      How should you complete the Transact-SQL statements? To answer, drag the appropriate Transact-SQL segments to the correct locations. Each Transact-SQL segment may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
[Exhibit]

        Answer:

Explanation: Box 1: DATABASE [OnlineSales]
Box 2: CHANGE_TRACKING = ON
Before you can use change tracking, you must enable change tracking at the database level. The following example shows how to enable change tracking by using ALTER DATABASE:
ALTER DATABASE AdventureWorks2012
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON)
Box 3: ALTER TABLE [dbo].[Customers]
Box 4: ENABLE CHANGE_TRACKING
Change tracking must be enabled for each table that you want tracked. When change tracking is enabled, change tracking information is maintained for all rows in the table that are affected by a DML operation. The following example shows how to enable change tracking for a table by using ALTER TABLE:
ALTER TABLE Person.Contact
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = ON)
        References:
        https://docs.microsoft.com/en-us/sql/relational-databases/track-changes/enable-and-disable-change-tracking-sql-
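Assembled for the objects in this question, a minimal sketch (the retention value is kept from the documentation example above; adjust it to suit the hourly load schedule):

ALTER DATABASE [OnlineSales]
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE [dbo].[Customers]
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = ON);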

        NEW QUESTION 10
        You are designing the data warehouse to import data from three different environments. The sources for the data warehouse will be loaded every hour.
        Scenario A includes tables in a Microsoft Azure SQL Database:
• Millions of updates and inserts occur per hour.
• A periodic query of the current state of rows that have changed is needed.
• The change detection method needs to be able to ignore changes to some columns in a table.
• The source database is a member of an Always On availability group.
Scenario B includes tables with status update changes:
• Tracking the duration between workflow statuses.
• All transactions must be captured, including before/after values for UPDATE statements.
• To minimize impact to performance, the change strategy adopted should be asynchronous.
Scenario C includes an external source database:
• Updates and inserts occur regularly.
• No changes to the database should require code changes to any reports or applications.
• Columns are added to and dropped from tables in the database periodically. These schema changes should not require any interruption or reconfiguration of the change detection method chosen.
• Data is frequently queried as the entire row appeared at a past point in time. All tables have primary keys.
        You need to load each data source. You must minimize complexity, disk storage, and disruption to the data sources and the existing data warehouse.
        Which change detection method should you use for each scenario? To answer, drag the appropriate loading methods to the correct scenarios. Each source may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
        NOTE: Each correct selection is worth one point.
[Exhibit]

          Answer:

Explanation: [Exhibit]
Box A: System-Versioned Temporal Table
System-versioned temporal tables are designed to allow users to transparently keep the full history of changes for later analysis, separately from the current data, with the minimal impact on the main OLTP workload.
Box B: Change Tracking
Box C: Change Data Capture
Change data capture supports tracking of historical data, while that is not supported by change tracking.
References:
https://docs.microsoft.com/en-us/sql/relational-databases/track-changes/track-data-changes-sql-server
https://docs.microsoft.com/en-us/sql/relational-databases/tables/temporal-table-usage-scenarios
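A minimal sketch of a system-versioned temporal table (the table, column, and history-table names are assumptions, not taken from the exhibit):

CREATE TABLE dbo.SourceRows
(
    Id INT NOT NULL PRIMARY KEY CLUSTERED,
    SomeValue NVARCHAR(100) NULL,
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.SourceRowsHistory));

-- Query an entire row as it appeared at a past point in time.
SELECT *
FROM dbo.SourceRows
FOR SYSTEM_TIME AS OF '2019-01-01T00:00:00';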

          NEW QUESTION 11
          Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
          You have a database named DB1 that has change data capture enabled.
          A Microsoft SQL Server Integration Services (SSIS) job runs once weekly. The job loads changes from DB1 to a data warehouse by querying the change data capture tables.
          Users report that an application that uses DB1 is suddenly unresponsive.
You discover that the Integration Services job causes severe blocking issues in the application. You need to ensure that the users can run the application as quickly as possible. Your SQL Server login is a member of only the ssis_admin database role.
          Which stored procedure should you execute?

          • A. catalog.deploy_project
          • B. catalog.restore_project
• C. catalog.stop_operation
• D. sys.sp_cdc_add_job
• E. sys.sp_cdc_change_job
• F. sys.sp_cdc_disable_db
• G. sys.sp_cdc_enable_db
• H. sys.sp_cdc_stop_job

          Answer: E

          Explanation: sys.sp_cdc_change_job modifies the configuration of a change data capture cleanup or capture job in the current database.
          References:
          https://docs.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sys-sp-cdc-change-job-trans
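A usage sketch of sys.sp_cdc_change_job; the parameter values are illustrative assumptions for tuning how the capture job scans the log:

EXEC sys.sp_cdc_change_job
    @job_type = N'capture',
    @maxtrans = 500,          -- transactions processed per scan cycle
    @maxscans = 10,           -- scan cycles per run
    @continuous = 1,
    @pollinginterval = 10;    -- seconds between log scan cycles
-- The new settings take effect after the capture job is stopped and restarted.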

          NEW QUESTION 12
          Note: This question is part of a series of questions that use the same or similar answer choices. An answer choice may be correct for more than one question in the series. Each question is independent of the other questions in this series. Information and details provided in a question apply only to that question.
          You have a database named DB1 that has change data capture enabled.
          A Microsoft SQL Server Integration Services (SSIS) job runs once weekly. The job loads changes from DB1 to a data warehouse by querying the change data capture tables.
You discover that the job loads changes from the previous three days only. You need to ensure that the job loads changes from the previous week. Which stored procedure should you execute?

          • A. catalog.deploy_project
          • B. catalog.restore_project
• C. catalog.stop_operation
• D. sys.sp_cdc_add_job
• E. sys.sp_cdc_change_job
• F. sys.sp_cdc_disable_db
• G. sys.sp_cdc_enable_db
• H. sys.sp_cdc_stop_job

          Answer: A

          Explanation: catalog.deploy_project deploys a project to a folder in the Integration Services catalog or updates an existing project that has been deployed previously.
          References:
          https://docs.microsoft.com/en-us/sql/integration-services/system-stored-procedures/catalog-deploy-project-ssisd
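A usage sketch of catalog.deploy_project; the folder, project, and .ispac path are assumptions:

-- Read the .ispac project file into a variable, then deploy it to the SSISDB catalog.
DECLARE @ProjectBinary VARBINARY(MAX) =
    (SELECT BulkColumn FROM OPENROWSET(BULK N'C:\Deploy\LoadDW.ispac', SINGLE_BLOB) AS ispac);
DECLARE @operation_id BIGINT;

EXEC SSISDB.catalog.deploy_project
    @folder_name    = N'DWLoads',       -- hypothetical catalog folder
    @project_name   = N'LoadDW',        -- hypothetical project name
    @project_stream = @ProjectBinary,
    @operation_id   = @operation_id OUTPUT;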

          NEW QUESTION 13
          You are designing a warehouse named DW1.
          A table named Table1 is partitioned by using the following partitioning scheme and function.
[Exhibit]
          Reports are generated from the data in Table1.
You need to ensure that queries to DW1 return results as quickly as possible. Which column should appear in the WHERE clause of the query?

          • A. AccountNumber
          • B. MyId
          • C. DueDate
          • D. OrderDate

          Answer: D
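The partition function and scheme from the exhibit are not reproduced here. As a sketch under the assumption that Table1 is partitioned on OrderDate, a predicate on the partitioning column is what allows the optimizer to eliminate partitions:

-- Assumed definitions standing in for the missing exhibit.
CREATE PARTITION FUNCTION pfOrderDate (DATETIME)
    AS RANGE RIGHT FOR VALUES ('20170101', '20180101', '20190101');
CREATE PARTITION SCHEME psOrderDate
    AS PARTITION pfOrderDate ALL TO ([PRIMARY]);

-- Filtering on OrderDate lets the query touch only the matching partition(s).
SELECT COUNT(*)
FROM dbo.Table1
WHERE OrderDate >= '20180101' AND OrderDate < '20190101';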

          NEW QUESTION 14
          You are developing a Microsoft SQL Server Master Data Services (MDS) solution.
The model contains an entity named Product. The Product entity has three user-defined attributes named Category, Subcategory, and Price.
          You need to ensure that combinations of values stored in the category and subcategory attributes are unique. What should you do?

• A. Create a derived hierarchy based on the Category and Subcategory attributes.
• B. Use the Category attribute as the top level for the hierarchy.
          • C. Publish two business rules, one for each of the Category and Subcategory attributes.
          • D. Set the value of the Attribute Type property for the Category and Subcategory attributes to Domain-based.
          • E. Create a custom index that will be used by the Product entity.

          Answer: D

          NEW QUESTION 15
          Your company has a Microsoft SQL Server data warehouse instance. The human resources department assigns all employees a unique identifier. You plan to store this identifier in a new table named Employee.
          You create a new dimension to store information about employees by running the following Transact-SQL statement:
[Exhibit]
          You have not added data to the dimension yet. You need to modify the dimension to implement a new column named [EmployeeKey]. The new column must use unique values.
          How should you complete the Transact-SQL statements? To answer, select the appropriate Transact-SQL segments in the answer area.
[Exhibit]

            Answer:

Explanation: [Exhibit]
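The answer exhibit is not reproduced here. As one hedged way to meet the requirement (the table name, data type, and constraint choice are assumptions), the new column can be added with an IDENTITY seed and a UNIQUE constraint, since the dimension holds no data yet:

ALTER TABLE dbo.Employee               -- assumed dimension table name
    ADD EmployeeKey INT IDENTITY(1, 1) NOT NULL;

-- Enforce uniqueness of the new key values.
ALTER TABLE dbo.Employee
    ADD CONSTRAINT UQ_Employee_EmployeeKey UNIQUE (EmployeeKey);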

            NEW QUESTION 16
            Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have the following line-of-business solutions:
One or more Microsoft SQL Server instances support each solution. Each solution has its own product catalog. You have an additional server that hosts SQL Server Integration Services (SSIS) and a data warehouse. You populate the data warehouse with data from each of the line-of-business solutions. The data warehouse does not store primary key values from the individual source tables.
The database for each solution has a table named Products that stores product information. The Products table in each database uses a separate and unique key for product records. Each table shares a column named ReferenceNr between the databases. This column is used to create queries that involve more than one solution.
You need to load data from the individual solutions into the data warehouse nightly. The following requirements must be met:
• If a change is made to the ReferenceNr column in any of the sources, set the value of IsDisabled to True and create a new row in the Products table.
• If a row is deleted in any of the sources, set the value of IsDisabled to True in the data warehouse.
Solution: Perform the following actions:
• Enable Change Tracking for the Product table in the source databases.
• Query the cdc.fn_cdc_get_all_changes_capture_dbo_products function from the sources for updated rows.
• Set the IsDisabled column to True for rows with the old ReferenceNr value.
• Create a new row in the data warehouse Products table with the new ReferenceNr value.
Does the solution meet the goal?

            • A. Yes
            • B. No

            Answer: B

            Explanation: We must also handle the deleted rows, not just the updated rows.
            References: https://solutioncenter.apexsql.com/enable-use-sql-server-change-data-capture/
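A sketch of how deleted rows could also be retrieved from the capture instance named in the question (the LSN-range handling is simplified; everything except the function name is an assumption):

DECLARE @from_lsn BINARY(10) = sys.fn_cdc_get_min_lsn('capture_dbo_products');
DECLARE @to_lsn   BINARY(10) = sys.fn_cdc_get_max_lsn();

-- __$operation = 1 marks deletes; 4 marks the after image of updates.
SELECT *
FROM cdc.fn_cdc_get_all_changes_capture_dbo_products(@from_lsn, @to_lsn, N'all')
WHERE __$operation = 1;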

            NEW QUESTION 17
            Note: This question is part of a series of questions that use the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
            You have a Microsoft SQL Server data warehouse instance that supports several client applications. The data warehouse includes the following tables: Dimension.SalesTerritory, Dimension.Customer,
Dimension.Date, Fact.Ticket, and Fact.Order. The Dimension.SalesTerritory and Dimension.Customer tables are frequently updated. The Fact.Order table is optimized for weekly reporting, but the company wants to change it to daily reporting. The Fact.Order table is loaded by using an ETL process. Indexes have been added to the table over time, but the presence of these indexes slows data loading.
            All data in the data warehouse is stored on a shared SAN. All tables are in a database named DB1. You have a second database named DB2 that contains copies of production data for a development environment. The data warehouse has grown and the cost of storage has increased. Data older than one year is accessed infrequently and is considered historical.
            You have the following requirements:
• Implement table partitioning to improve the manageability of the data warehouse and to avoid the need to repopulate all transactional data each night. Use a partitioning strategy that is as granular as possible.
• Partition the Fact.Order table and retain a total of seven years of data.
• Partition the Fact.Ticket table and retain seven years of data. At the end of each month, the partition structure must apply a sliding window strategy to ensure that a new partition is available for the upcoming month, and that the oldest month of data is archived and removed.
• Optimize data loading for the Dimension.SalesTerritory, Dimension.Customer, and Dimension.Date tables.
• Maximize the performance during the data loading process for the Fact.Order partition.
• Ensure that historical data remains online and available for querying.
• Reduce ongoing storage costs while maintaining query performance for current data.
You are not permitted to make changes to the client applications.
            You need to configure the Fact.Order table.
            Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
[Exhibit]

              Answer:

              Explanation: From scenario: Partition the Fact.Order table and retain a total of seven years of data. Maximize the performance during the data loading process for the Fact.Order partition.
Step 1: Create a partition function. Using CREATE PARTITION FUNCTION is the first step in creating a partitioned table or index.
Step 2: Create a partition scheme based on the partition function.
Step 3: Execute an ALTER TABLE command to specify the partition function.
              References: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-tables-partition
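A sketch of the three steps for Fact.Order (the boundary values and filegroups are assumptions, and Step 3 is read here as a partition SWITCH from a staging table, which is a common way to maximize load performance; other readings are possible):

-- Step 1: partition function (illustrative yearly boundaries for the retained years).
CREATE PARTITION FUNCTION pfOrderYear (DATE)
    AS RANGE RIGHT FOR VALUES ('20130101', '20140101', '20150101', '20160101',
                               '20170101', '20180101', '20190101');

-- Step 2: partition scheme based on the function.
CREATE PARTITION SCHEME psOrderYear
    AS PARTITION pfOrderYear ALL TO ([PRIMARY]);

-- Step 3: ALTER TABLE ... SWITCH moves fully loaded staging data into the current partition.
ALTER TABLE Fact.Order_Staging SWITCH TO Fact.[Order] PARTITION 8;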

              NEW QUESTION 18
You have a database that includes a table named dbo.sales. The table contains two billion rows. You created the table by running the following Transact-SQL statement:
[Exhibit]
              You run the following queries against the dbo.sales table. All of the queries perform poorly.
[Exhibit]
              The ETL process that populates the table uses bulk insert to load 10 million rows each day. The process currently takes six hours to load the records.
              The value of the Refund column is equal to 1 for only 0.01 percent of the rows in the table. For all other rows, the value of the Refund column is equal to 0.
              You need to maximize the performance of queries and the ETL process.
              Which index type should you use for each query? To answer, select the appropriate index types in the answer area.
              NOTE: Each correct selection is worth one point.
[Exhibit]

                Answer:

Explanation: [Exhibit]
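The answer exhibit is not reproduced here. As a hedged sketch of the trade-off this scenario usually turns on, a clustered columnstore index serves large scans and aggregations, while a filtered nonclustered index cheaply targets the 0.01 percent of rows where Refund = 1 (the key and included column names are assumptions):

-- Columnstore for analytic scans over the two-billion-row table.
CREATE CLUSTERED COLUMNSTORE INDEX cci_sales ON dbo.sales;

-- Filtered rowstore index for the rare Refund = 1 rows; adds little overhead to the bulk insert.
CREATE NONCLUSTERED INDEX ix_sales_refund
    ON dbo.sales (SalesDate)       -- assumed column
    INCLUDE (SalesAmount)          -- assumed column
    WHERE Refund = 1;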

P.S. Easily pass the 70-767 exam with the 109 Q&As in the 2passeasy dumps and PDF version. Welcome to download the newest 2passeasy 70-767 dumps: https://www.2passeasy.com/dumps/70-767/ (109 New Questions)