
1z0-067 (Upgrade Oracle9i/10g/11g OCA to Oracle Database 12c OCP): Questions 41-50

Question No: 41

You want to capture column group usage and gather extended statistics for better cardinality estimates for the customers table in the SH schema.

Examine the following steps:

1. Issue the SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SH', 'CUSTOMERS') FROM dual statement.

2. Execute the DBMS_STATS.SEED_COL_USAGE(null, 'SH', 500) procedure.

3. Execute the required queries on the customers table.

4. Issue the SELECT DBMS_STATS.REPORT_COL_USAGE('SH', 'CUSTOMERS') FROM dual statement.

Identify the correct sequence of steps.

A. 3, 2, 1, 4

B. 2, 3, 4, 1

C. 4, 1, 3, 2

D. 3, 2, 4, 1

Answer: B

Explanation:

Step 1 (list step 2): Seed column usage.

Oracle must observe a representative workload in order to determine the appropriate column groups. With the procedure DBMS_STATS.SEED_COL_USAGE, you tell Oracle how long it should observe the workload.

Step 2 (list step 3): Run the workload.

You do not need to execute every query in your workload during this window. You can simply run EXPLAIN PLAN for some of your longer-running queries to ensure that column group information is recorded for them.

Step 3 (list step 4): Report column usage.

DBMS_STATS.REPORT_COL_USAGE shows the column usage recorded during the monitoring window, which you can review before creating the column groups.

Step 4 (list step 1): Create the column groups.

At this point you can have Oracle automatically create the column groups for each table based on the usage information captured during the monitoring window. You simply call the DBMS_STATS.CREATE_EXTENDED_STATS function for each table. This function requires just two arguments, the schema name and the table name. From then on, statistics are maintained for each column group whenever statistics are gathered on the table.

Note:

  • DBMS_STATS.REPORT_COL_USAGE reports column usage information and records all the SQL operations the database has processed for a given object.

  • The Oracle SQL optimizer has historically been unaware of implied relationships between data columns within the same table. While the optimizer has traditionally analyzed the distribution of values within a column, it does not collect value-based relationships between columns.

  • Creating extended statistics

Here are the steps to create extended statistics for related table columns with dbms_stats.create_extended_stats:

  1. First, create column histograms for the related columns.

  2. Next, run dbms_stats.create_extended_stats to relate the columns together. Unlike a traditional procedure that is invoked via an EXECUTE ("exec") statement, extended statistics are created via a SELECT statement.
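A minimal SQL*Plus sketch of the correct sequence (2, 3, 4, 1); the 500-second monitoring window follows the question, and the sample predicate on cust_city and cust_state_province is illustrative:

SQL> EXEC DBMS_STATS.SEED_COL_USAGE(NULL, 'SH', 500);
SQL> EXPLAIN PLAN FOR SELECT COUNT(*) FROM sh.customers WHERE cust_city = 'Los Angeles' AND cust_state_province = 'CA';
SQL> SELECT DBMS_STATS.REPORT_COL_USAGE('SH', 'CUSTOMERS') FROM dual;
SQL> SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SH', 'CUSTOMERS') FROM dual;

The last call returns the names of the extended statistics it created; a subsequent DBMS_STATS.GATHER_TABLE_STATS then collects statistics for the new column groups.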

Question No: 42

You want to back up a database such that only formatted blocks are backed up. Which statement is true about this backup operation?

A. The backup must be performed in mount state.

B. The tablespace must be taken offline.

C. All files must be backed up as backup sets.

D. The database must be backed up as an image copy.

Answer: C
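Unused block compression, which skips blocks that have never been formatted, applies only when RMAN writes backup sets; an image copy duplicates every block of the data file. A minimal RMAN sketch:

RMAN> BACKUP AS BACKUPSET DATABASE;

AS BACKUPSET is the default backup type, but stating it explicitly makes the contrast with BACKUP AS COPY clear.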

Question No: 43

Which statement is true about the loss or damage of a temp file that belongs to the temporary tablespace of a pluggable database (PDB)?

A. The PDB is closed and the temp file is re-created automatically when the PDB is opened.

B. The PDB is closed and requires media recovery at the PDB level.

C. The PDB does not close and the temp file is re-created automatically whenever the container database (CDB) is opened.

D. The PDB does not close and starts by using the default temporary tablespace defined for the CDB.

Answer: A
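Temp files need no media recovery because they hold no permanent data; besides the automatic re-creation at open, a temp file can also be added back by hand. A hedged sketch (the PDB name, file path, and size are illustrative):

SQL> ALTER SESSION SET CONTAINER = pdb1;
SQL> ALTER TABLESPACE temp ADD TEMPFILE '/disk1/oracle/pdb1/temp01.dbf' SIZE 100M;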

Question No: 44

Which two statements are true about dropping a pluggable database (PDB)?

A. A PDB must be in mount state or it must be unplugged.

B. The data files associated with a PDB are automatically removed from disk.

C. A dropped and unplugged PDB can be plugged back into the same multitenant container database (CDB) or other CDBs.

D. A PDB must be in closed state.

E. The backups associated with a PDB are removed.

F. A PDB must have been opened at least once after creation.

Answer: A,D

Reference: http://docs.oracle.com/database/121/ADMIN/cdb_plug.htm#ADMIN13858
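A minimal sketch of both true conditions: the PDB is closed first, and unplugging before the drop (with KEEP DATAFILES) leaves it pluggable into the same or another CDB. The PDB name and XML path are illustrative:

SQL> ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
SQL> ALTER PLUGGABLE DATABASE pdb1 UNPLUG INTO '/disk1/oracle/pdb1.xml';
SQL> DROP PLUGGABLE DATABASE pdb1 KEEP DATAFILES;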

Question No: 45

You wish to create jobs to satisfy these requirements:

1. Automatically bulk load data from a flat file.

2. Rebuild indexes on the SALES table after completion of the bulk load.

How would you create these jobs?

A. Create both jobs by using Scheduler raised events.

B. Create both jobs using application raised events.

C. Create one job to rebuild indexes using application raised events and another job to perform bulk load using Scheduler raised events.

D. Create one job to rebuild indexes using Scheduler raised events and another job to perform bulk load by using events raised by the application.

Answer: A
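A hedged sketch of a Scheduler-event-driven job: the index-rebuild job starts when the bulk-load job raises its JOB_SUCCEEDED event. The job names, the procedure app.rebuild_sales_indexes, and the agent name rebuild_agent are illustrative; the agent must be subscribed to the Scheduler event queue (e.g. via DBMS_SCHEDULER.ADD_EVENT_QUEUE_SUBSCRIBER).

BEGIN
  -- Assumption: a job named BULK_LOAD_JOB exists; make it raise an event on success.
  DBMS_SCHEDULER.SET_ATTRIBUTE('bulk_load_job', 'raise_events', DBMS_SCHEDULER.JOB_SUCCEEDED);

  -- Consumer job: fires only when BULK_LOAD_JOB succeeds.
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'rebuild_sales_indexes',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'app.rebuild_sales_indexes',
    event_condition => 'tab.user_data.object_name = ''BULK_LOAD_JOB''' ||
                       ' AND tab.user_data.event_type = ''JOB_SUCCEEDED''',
    queue_spec      => 'sys.scheduler$_event_queue, rebuild_agent',
    enabled         => TRUE);
END;
/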

Question No: 46

Identify three reasons for using a recovery catalog with Recovery Manager (RMAN).

A. to store backup information of multiple databases in one place

B. to restrict the amount of space that is used by backups

C. to maintain a backup for an indefinite period of time by using the KEEP FOREVER clause

D. to store RMAN scripts that are available to any RMAN client that can connect to target databases registered in the recovery catalog

E. to automatically delete obsolete backups after a specified period of time

Answer: A,C,D
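A minimal sketch of the three catalog benefits; the catalog connection rco@catdb and the script name are illustrative:

$ rman TARGET / CATALOG rco@catdb
RMAN> REGISTER DATABASE;                                   # central metadata for many targets (A)
RMAN> BACKUP DATABASE KEEP FOREVER;                        # KEEP FOREVER requires a catalog (C)
RMAN> CREATE GLOBAL SCRIPT backup_db { BACKUP DATABASE; }  # usable by any registered target (D)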

Question No: 47

Examine the command to create a pluggable database (PDB):

SQL> CREATE PLUGGABLE DATABASE pdb2 FROM pdb1
  FILE_NAME_CONVERT = ('/disk1/oracle/pdb1/', '/disk2/oracle/pdb2/')
  PATH_PREFIX = '/disk2/oracle/pdb2';

Which two statements are true?

A. The pluggable database pdb2 is created by cloning pdb1 and is in mount state.

B. Details about the metadata describing pdb2 are stored in an XML file in the '/disk2/oracle/pdb2/' directory.

C. The tablespace specifications of pdb2 are the same as pdb1.

D. All database objects belonging to common users in pdb1 are cloned in pdb2.

E. pdb2 is created with its own private undo and temp tablespaces.

Answer: A,C

Reference: http://oracle-info.com/2013/07/27/12c-database-create-pdbs-plug-unplug/ (see the table, 4th row)
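The clone is left in mount state (answer A); a quick check and first open, as a sketch:

SQL> SELECT name, open_mode FROM v$pdbs WHERE name = 'PDB2';
SQL> ALTER PLUGGABLE DATABASE pdb2 OPEN;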

Question No: 48

You are administering a database that supports a data warehousing workload and is running in noarchivelog mode. You use RMAN to perform a level 0 backup on Sundays and level 1 incremental backups on all the other days of the week.

One of the data files is corrupted and the current online redo log file is lost because of a media failure.

Which action must you take for recovery?

A. Restore the data file, recover it by using the recover datafile noredo command, and use the resetlogs option to open the database.

B. Restore the control file and all the data files, recover them by using the recover database noredo command, and use the resetlogs option to open the database.

C. Restore all the data files, recover them by using the recover database command, and open the database.

D. Restore all the data files, recover them by using the recover database noredo command, and use the resetlogs option to open the database.

Answer: B
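A minimal RMAN sketch of option B; restoring the control file from autobackup is an assumption about how the control file was backed up:

RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> ALTER DATABASE MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE NOREDO;   # apply incrementals only; no archived redo exists in noarchivelog mode
RMAN> ALTER DATABASE OPEN RESETLOGS;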

Question No: 49

View the Exhibit showing steps to create a database resource manager plan.

SQL> exec dbms_resource_manager.create_pending_area();

PL/SQL procedure successfully completed.

SQL> exec dbms_resource_manager.create_consumer_group(consumer_group=>'OLTP', comment=>'online user');

PL/SQL procedure successfully completed.

SQL> exec dbms_resource_manager.create_plan(plan=>'PRIUSERS', comment=>'dss prio');
SQL> exec dbms_resource_manager.create_plan_directive(plan=>'PRIUSERS', group_or_subplan=>'OLTP', comment=>'online grp', CPU_P1=>60);

PL/SQL procedure successfully completed.

After executing the steps in the exhibit, you execute this procedure, which results in an error:

SQL> EXECUTE dbms_resource_manager.validate_pending_area();

What is the reason for the error?

A. The pending area is automatically submitted when creating plan directives.

B. The procedure must be executed before creating any plan directive.

C. The sys_group group is not included in the resource plan.

D. The other_groups group is not included in the resource plan.

E. Pending areas cannot be validated until submitted.

Answer: D

Explanation: OTHER_GROUPS is the default consumer group for all sessions that do not have an explicit initial consumer group, are not mapped to a consumer group by session-to-consumer-group mapping rules, or are mapped to a consumer group that is not in the currently active resource plan.

OTHER_GROUPS must have a resource plan directive specified in every plan. It cannot be assigned explicitly to sessions through mapping rules.
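A hedged sketch of the fix, reusing the exhibit's plan (plan name reconstructed from the garbled listing, CPU percentage illustrative): give OTHER_GROUPS a directive, then validate and submit.

SQL> exec dbms_resource_manager.create_plan_directive(plan=>'PRIUSERS', group_or_subplan=>'OTHER_GROUPS', comment=>'catch-all', CPU_P1=>40);
SQL> exec dbms_resource_manager.validate_pending_area();
SQL> exec dbms_resource_manager.submit_pending_area();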

Question No: 50

Consider the following scenario for your database:

- Backup optimization is enabled in RMAN.

- The recovery window is set to seven days in RMAN.

- The most recent backup to disk for the tools tablespace was taken on March 1, 2013.

- The tools tablespace has been read-only since March 2, 2013.

On March 15, 2013, you issue the RMAN command to back up the database to disk. Which statement is true about the backup of the tools tablespace?

A. The RMAN backup fails because the tools tablespace is read-only.

B. RMAN skips the backup of the tools tablespace because backup optimization is enabled.

C. RMAN creates a backup of the tools tablespace because backup optimization is applicable only for the backups written to media.

D. RMAN creates a backup of the tools tablespace because no backup of the tablespace exists within the seven-day recovery window.

Answer: D
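A minimal sketch of the scenario's RMAN configuration; the final backup command is illustrative:

RMAN> CONFIGURE BACKUP OPTIMIZATION ON;
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
RMAN> BACKUP DATABASE;

Because the only backup of the read-only tools tablespace (March 1) falls outside the seven-day window on March 15, backup optimization does not skip the tablespace and RMAN backs it up again.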
