[Free] 2018(Jan) EnsurePass Testinsides Oracle 1z0-060 Dumps with VCE and PDF 91-100


Upgrade to Oracle Database 12c

Question No: 91

Which three statements are true when the listener handles connection requests to an Oracle 12c database instance with multithreaded architecture enabled on UNIX?

A. Thread creation must be routed through a dispatcher process.

B. The local listener may spawn a new process and have that new process create a thread.

C. Each Oracle process runs an SCMN thread.

D. Each multithreaded Oracle process has an SCMN thread.

E. The local listener may pass the request to an existing process, which in turn will create a thread.

Answer: A,D,E
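The mechanics behind these answers can be sketched in SQL*Plus. The parameter names below are real, but the listener name and the placement in listener.ora are illustrative assumptions:

```sql
-- Enable the multithreaded Oracle architecture (restart required):
ALTER SYSTEM SET THREADED_EXECUTION = TRUE SCOPE = SPFILE;

-- In listener.ora (LISTENER is an assumed listener name), allow the
-- listener to hand connections to the broker so new sessions become
-- threads rather than dedicated OS processes:
-- DEDICATED_THROUGH_BROKER_LISTENER = ON
```

With THREADED_EXECUTION enabled, each multithreaded Oracle process runs an SCMN thread, and the listener either passes the request to an existing process or spawns a new process that creates the thread.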

Question No: 92

Which two statements are true concerning a multitenant container database with three pluggable databases?

A. All administration tasks must be done to a specific pluggable database.

B. The pluggable databases increase patching time.

C. The pluggable databases reduce administration effort.

D. The pluggable databases are patched together.

E. Pluggable databases are only used for database consolidation.

Answer: C,D

Explanation: The benefits of Oracle Multitenant come from a pure deployment choice. The following list calls out the most compelling examples.

  • High consolidation density.

    The many pluggable databases in a single multitenant container database share its memory and background processes, letting you run many more pluggable databases on a given platform than you could run single databases using the old architecture. This is the same benefit that schema-based consolidation brings.

  • Rapid provisioning and cloning using SQL.

  • New paradigms for rapid patching and upgrades. (D, not B)

    The investment of time and effort to patch one multitenant container database results in patching all of its many pluggable databases. To patch a single pluggable database, you simply unplug/plug to a multitenant container database at a different Oracle Database software version.

  • (C, not A) Manage many databases as one.

    By consolidating existing databases as pluggable databases, administrators can manage many databases as one. For example, tasks like backup and disaster recovery are performed at the multitenant container database level.

  • Dynamic resource management between pluggable databases. In Oracle Database 12c, Resource Manager is extended with specific functionality to control the competition for resources between the pluggable databases within a multitenant container database.


  • Oracle Multitenant is a new option for Oracle Database 12c Enterprise Edition that helps customers reduce IT costs by simplifying consolidation, provisioning, upgrades, and more. It is supported by a new architecture that allows a multitenant container database to hold many pluggable databases. And it fully complements other options, including Oracle Real Application Clusters and Oracle Active Data Guard. An existing database can be simply adopted, with no change, as a pluggable database; and no changes are needed in the other tiers of the application.

  • Reference: 12c Oracle Multitenant
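The unplug/plug patching paradigm described above can be sketched as follows; the PDB name and the XML descriptor path are hypothetical:

```sql
-- In the source CDB (older Oracle Database software version):
ALTER PLUGGABLE DATABASE hr_pdb CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE hr_pdb UNPLUG INTO '/tmp/hr_pdb.xml';

-- In the destination CDB (newer Oracle Database software version):
CREATE PLUGGABLE DATABASE hr_pdb USING '/tmp/hr_pdb.xml' NOCOPY;
ALTER PLUGGABLE DATABASE hr_pdb OPEN;
```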

    Question No: 93

    You create a new pluggable database, HR_PDB, from the seed database. Which three tablespaces are created by default in HR_PDB?

    A. SYSTEM

    B. SYSAUX

    C. EXAMPLE

    D. UNDO

    E. TEMP

    F. USERS

    Answer: A,B,E

    Explanation: * A PDB has its own SYSTEM, SYSAUX, and TEMP tablespaces. It can also contain other user-created tablespaces.


    • Oracle Database creates both the SYSTEM and SYSAUX tablespaces as part of every database.

    • tablespace_datafile_clauses

    Use these clauses to specify attributes for all data files comprising the SYSTEM and SYSAUX tablespaces in the seed PDB.


    Not D: A PDB cannot have its own undo tablespace. Instead, it uses the undo tablespace belonging to the CDB.


    * Example:

    CONN pdb_admin@pdb1

    SELECT tablespace_name FROM dba_tablespaces;

    TABLESPACE_NAME
    ------------------------------
    SYSTEM
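A minimal sketch of creating a PDB from the seed and verifying its default tablespaces; the PDB name, admin user, password, and file paths are assumptions:

```sql
CREATE PLUGGABLE DATABASE hr_pdb
  ADMIN USER pdb_admin IDENTIFIED BY secret
  FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/',
                       '/u01/oradata/cdb1/hr_pdb/');

ALTER PLUGGABLE DATABASE hr_pdb OPEN;
ALTER SESSION SET CONTAINER = hr_pdb;

-- Only SYSTEM, SYSAUX, and TEMP should appear by default:
SELECT tablespace_name FROM dba_tablespaces;
```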



    Question No: 94

    Which three statements are true concerning unplugging a pluggable database (PDB)?

    A. The PDB must be open in read only mode.

    B. The PDB must be closed.

    C. The unplugged PDB becomes a non-CDB.

    D. The unplugged PDB can be plugged into the same multitenant container database (CDB).

    E. The unplugged PDB can be plugged into another CDB.

    F. The PDB data files are automatically removed from disk.

    Answer: B,D,E

    Explanation:

    D: An unplugged PDB contains data dictionary tables, and some of the columns in these encode information in an endianness-sensitive way. There is no supported way to handle the conversion of such columns automatically. This means, quite simply, that an unplugged PDB cannot be moved across an endianness difference.

    E (not F): To exploit the new unplug/plug paradigm for patching the Oracle version most effectively, the source and destination CDBs should share a filesystem so that the PDB’s datafiles can remain in place.
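Before plugging an unplugged PDB into another CDB, compatibility can be checked from the destination CDB; the descriptor path below is hypothetical:

```sql
SET SERVEROUTPUT ON
DECLARE
  is_compatible BOOLEAN;
BEGIN
  -- Check the unplugged PDB's XML descriptor against this CDB:
  is_compatible := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
                     pdb_descr_file => '/tmp/hr_pdb.xml');
  DBMS_OUTPUT.PUT_LINE(CASE WHEN is_compatible THEN 'YES' ELSE 'NO' END);
END;
/
```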

    Reference: Oracle White Paper, Oracle Multitenant

    Question No: 95

    The tnsnames.ora file has an entry for the service alias ORCL as follows:

    (The tnsnames.ora entry is shown in an exhibit that is not reproduced here.)

    The tnsping command executes successfully when tested with ORCL; however, from the same OS user session, you are not able to connect to the database instance with the following command:

    SQL> CONNECT scott/tiger@orcl

    What could be the reason for this?

    A. The listener is not running on the database node.

    B. The TNS_ADMIN environment variable is set to the wrong value.

    C. The orcl.oracle.com database service is not registered with the listener.

    D. The DEFAULT_DOMAIN parameter is set to the wrong value in the sqlnet.ora file.

    E. The listener is running on a different port.

    Answer: C

    Explanation: Service registration enables the listener to determine whether a database service and its service handlers are available. A service handler is a dedicated server process or dispatcher that acts as a connection point to a database. During registration, the LREG process provides the listener with the instance name, database service names, and the type and addresses of service handlers. This information enables the listener to start a service handler when a client request arrives.
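A hedged troubleshooting sketch: confirm what the listener knows about, then force registration rather than waiting for the periodic LREG update:

```sql
-- From the OS shell, list the services the listener currently knows:
--   lsnrctl services

-- From SQL*Plus on the instance, force immediate service registration:
ALTER SYSTEM REGISTER;
```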

    Question No: 96

    A senior DBA asked you to execute the following command to improve performance:

    SQL> ALTER TABLE subscribe_log STORAGE (BUFFER_POOL RECYCLE);

    You checked the data in the SUBSCRIBE_LOG table and found that it is a large table containing one million rows.

    What could be a reason for this recommendation?

    A. The keep pool is not configured.

    B. Automatic Workarea Management is not configured.

    C. Automatic Shared Memory Management is not enabled.

    D. The data blocks in the SUBSCRIBE_LOG table are rarely accessed.

    E. All the queries on the SUBSCRIBE_LOG table are rewritten to a materialized view.

      Answer: D

      Explanation: Most of the rows in the SUBSCRIBE_LOG table are accessed only about once a week. The RECYCLE buffer pool is intended for blocks that are unlikely to be reused, so assigning the table to it lets its blocks age out of memory quickly without displacing frequently accessed blocks in the default pool.
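A sketch of the recycle pool setup this recommendation assumes; the cache size is an illustrative example value:

```sql
-- Allocate a recycle buffer pool (64M is an assumed example size):
ALTER SYSTEM SET DB_RECYCLE_CACHE_SIZE = 64M SCOPE = BOTH;

-- Direct the rarely reused blocks of SUBSCRIBE_LOG to it so they age
-- out quickly without evicting hot blocks from the default pool:
ALTER TABLE subscribe_log STORAGE (BUFFER_POOL RECYCLE);
```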

      Question No: 97

      You want to capture column group usage and gather extended statistics for better cardinality estimates for the CUSTOMERS table in the SH schema.

      Examine the following steps:


      1. Issue the SELECT DBMS_STATS.CREATE_EXTENDED_STATS (‘SH’, ‘CUSTOMERS’) FROM dual statement.

      2. Execute the DBMS_STATS.SEED_COL_USAGE (null, ‘SH’, 500) procedure.

      3. Execute the required queries on the CUSTOMERS table.

      4. Issue the SELECT DBMS_STATS.REPORT_COL_USAGE (‘SH’, ‘CUSTOMERS’) FROM dual statement.

    Identify the correct sequence of steps.

    A. 3, 2, 1, 4

    B. 2, 3, 4, 1

    C. 4, 1, 3, 2

    D. 3, 2, 4, 1

    Answer: B

    Explanation: Step 1 (2). Seed column usage

    Oracle must observe a representative workload, in order to determine the appropriate column groups. Using the new procedure DBMS_STATS.SEED_COL_USAGE, you tell Oracle how long it should observe the workload.

    Step 2 (3): You don't need to execute all of the queries in your workload during this window. You can simply run EXPLAIN PLAN for some of your longer-running queries to ensure column group information is recorded for them.

    Step 3. (1) Create the column groups

    At this point you can get Oracle to automatically create the column groups for each of the tables based on the usage information captured during the monitoring window. You simply have to call the DBMS_STATS.CREATE_EXTENDED_STATS function for each table. This function requires just two arguments, the schema name and the table name. From then on,

    statistics will be maintained for each column group whenever statistics are gathered on the table.


    • DBMS_STATS.REPORT_COL_USAGE reports column usage information and records all the SQL operations the database has processed for a given object.

    • The Oracle SQL optimizer has always been ignorant of the implied relationships between data columns within the same table. While the optimizer has traditionally analyzed the distribution of values within a column, it does not collect value-based relationships between columns.

    • Creating extended statistics. Here are the steps to create extended statistics for related table columns with DBMS_STATS.CREATE_EXTENDED_STATS:

    1 – The first step is to create column histograms for the related columns.

    2 – Next, we run DBMS_STATS.CREATE_EXTENDED_STATS to relate the columns together.

    Unlike a traditional procedure that is invoked via an EXECUTE ("exec") statement, Oracle extended statistics are created via a SELECT statement.
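Putting the sequence from answer B together as a sketch (the schema and table come from the question; the workload query is an assumed example):

```sql
-- Step 2: monitor column usage for the next 500 seconds.
EXEC DBMS_STATS.SEED_COL_USAGE(NULL, 'SH', 500);

-- Step 3: run (or explain) representative workload queries.
EXPLAIN PLAN FOR
  SELECT * FROM sh.customers
  WHERE cust_state_province = 'CA' AND country_id = 52790;

-- Step 4: report the column usage that was captured.
SELECT DBMS_STATS.REPORT_COL_USAGE('SH', 'CUSTOMERS') FROM dual;

-- Step 1: create the column groups based on that usage.
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SH', 'CUSTOMERS') FROM dual;
```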

    Question No: 98

    You administer an online transaction processing (OLTP) system whose database is stored in Automatic Storage Management (ASM) and whose disk groups use normal redundancy.

    One of the ASM disks goes offline, and is then dropped because it was not brought online before DISK_REPAIR_TIME elapsed.

    When the disk is replaced and added back to the disk group, the ensuing rebalance operation is too slow.

    Which two recommendations should you make to speed up the rebalance operation if this type of failure happens again?

    A. Increase the value of the ASM_POWER_LIMIT parameter.

    B. Set the DISK_REPAIR_TIME disk attribute to a lower value.

    C. Specify the statement that adds the disk back to the disk group.

    D. Increase the number of ASMB processes.

    E. Increase the number of DBWR_IO_SLAVES in the ASM instance.

    Answer: A,C

    Explanation: ASM_POWER_LIMIT specifies the maximum power on an Automatic Storage Management instance for disk rebalancing. The higher the limit, the faster rebalancing will complete. Lower values will take longer, but consume fewer processing and I/O resources.

    Grouping operations in a single ALTER DISKGROUP statement can reduce rebalancing operations.

    Reference: http://docs.oracle.com/cd/E11882_01/server.112/e18951/asmdiskgrps.htm#OSTMG10070
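Both recommendations can be combined in one statement; the disk group name and disk path below are assumptions:

```sql
-- Add the replacement disk and drive the rebalance at a higher power
-- in a single ALTER DISKGROUP, so only one rebalance pass runs:
ALTER DISKGROUP data
  ADD DISK '/dev/sdd1'
  REBALANCE POWER 8;
```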

    Question No: 99

    You upgrade your Oracle database in a multiprocessor environment. As recommended, you execute the following script:

    SQL> @utlrp.sql

    Which two actions does the script perform?

    A. Parallel compilation of only the stored PL/SQL code

    B. Sequential recompilation of only the stored PL/SQL code

    C. Parallel recompilation of any stored PL/SQL code

    D. Sequential recompilation of any stored PL/SQL code

    E. Parallel recompilation of Java code

    F. Sequential recompilation of Java code

    Answer: C,E

    Explanation: utlrp.sql and utlprp.sql

    The utlrp.sql and utlprp.sql scripts are provided by Oracle to recompile all invalid objects in the database. They are typically run after major database changes such as upgrades or patches. They are located in the $ORACLE_HOME/rdbms/admin directory and provide a wrapper on the UTL_RECOMP package. The utlrp.sql script simply calls the utlprp.sql script with a command line parameter of "0". The utlprp.sql script accepts a single integer parameter that indicates the level of parallelism, as follows:

    0 – The level of parallelism is derived from the CPU_COUNT parameter.

    1 – The recompilation is run serially, one object at a time.

    N – The recompilation is run in parallel with "N" threads.

    Both scripts must be run as the SYS user, or another user with the SYSDBA privilege, to work correctly.

    Reference: Recompiling Invalid Schema Objects
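For illustration, the two invocations differ only in how parallelism is chosen (`?` is the standard SQL*Plus shorthand for ORACLE_HOME; the parallelism value 4 is an assumed example):

```sql
-- Recompile all invalid objects; parallelism derived from CPU_COUNT:
@?/rdbms/admin/utlrp.sql

-- Equivalent call with an explicit degree of parallelism (4 threads):
@?/rdbms/admin/utlprp.sql 4
```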

    Question No: 100

    You plan to migrate your database from a file system to Automatic Storage Management (ASM) on the same platform.

    Which two methods or commands would you use to accomplish this task?

    A. RMAN CONVERT command

    B. Data Pump Export and Import

    C. Conventional Export and Import

    D. The BACKUP AS COPY DATABASE . . . command of RMAN

    E. DBMS_FILE_TRANSFER with transportable tablespace

      Answer: A,D

      Explanation:

      A:

      1. Get the list of all datafiles.

      2. Use the RMAN CONVERT DATAFILE command to convert the datafiles from the file system to ASM.

    Note: RMAN Backup of ASM Storage

    There is often a need to move files from the file system to ASM storage and vice versa. This can come in handy when one file system is corrupted and its files need to be moved to other storage.

    D: Migrating a Database into ASM

    • To take advantage of Automatic Storage Management with an existing database you must migrate that database into ASM. This migration is performed using Recovery Manager (RMAN) even if you are not using RMAN for your primary backup and recovery strategy.

    • Example:

    Back up your database files as copies to the ASM disk group.
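A hedged sketch of the copy-based migration in an RMAN session (the +DATA disk group name is an assumption, and the control files and spfile are assumed to have been relocated to ASM before the switch):

```sql
-- In RMAN, connected to the target database:
BACKUP AS COPY DATABASE FORMAT '+DATA';

-- After restarting the database MOUNTED from ASM-based control files:
SWITCH DATABASE TO COPY;
```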


    Reference: Migrating Databases To and From ASM with Recovery Manager

