[Free] 2018(Jan) EnsurePass Testking Oracle 1z0-060 Dumps with VCE and PDF 51-60

Ensurepass.com : Ensure you pass the IT Exams
2018 Jan Oracle Official New Released 1z0-060
100% Free Download! 100% Pass Guaranteed!

Upgrade to Oracle Database 12c

Question No: 51

Your multitenant container database (CDB) contains two pluggable databases (PDBs), HR_PDB and ACCOUNTS_PDB, both of which use the CDB tablespace. The temp file is called temp01.tmp.

A user issues a query on a table in one of the PDBs and receives the following error:

ERROR at line 1:
ORA-01565: error in identifying file '/u01/app/oracle/oradata/CDB1/temp01.tmp'
ORA-27037: unable to obtain file status

Identify two ways to rectify the error.

A. Add a new temp file to the temporary tablespace and drop the temp file that produced the error.

B. Shut down the database instance, restore the temp01.tmp file from the backup, and then restart the database.

C. Take the temporary tablespace offline, recover the missing temp file by applying redo logs, and then bring the temporary tablespace online.

D. Shut down the database instance, restore and recover the temp file from the backup, and then open the database with RESETLOGS.

E. Shut down the database instance and then restart the CDB and PDBs.

Answer: A,E

Explanation: Because temp files cannot be backed up and because no redo is ever generated for them, RMAN never restores or recovers temp files. RMAN does track the names of temp files, but only so that it can automatically re-create them when needed.

  • If you use RMAN in a Data Guard environment, then RMAN transparently converts primary control files to standby control files and vice versa. RMAN automatically updates file names for data files, online redo logs, standby redo logs, and temp files when you issue RESTORE and RECOVER.
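The fix in answer A can be sketched with standard ALTER TABLESPACE syntax; the tablespace name TEMP, the new file path, and the sizing clause are assumptions for illustration:

```sql
-- Add a fresh temp file to the temporary tablespace (path and size are illustrative)
ALTER TABLESPACE temp ADD TEMPFILE '/u01/app/oracle/oradata/CDB1/temp02.tmp'
  SIZE 100M AUTOEXTEND ON;

-- Drop the temp file that produced the error
ALTER TABLESPACE temp DROP TEMPFILE '/u01/app/oracle/oradata/CDB1/temp01.tmp';
```

Because no redo is generated for temp files, dropping and re-adding them is safe and requires no recovery, which is why answers A and E both work.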

Question No: 52

You upgraded from a previous Oracle Database version to Oracle Database 12c. Your database supports a mixed workload. During the day, lots of insert, update, and delete operations are performed. At night, Extract, Transform, Load (ETL) and batch reporting jobs are run. The ETL jobs perform certain database operations using two or more concurrent sessions.

After the upgrade, you notice that the performance of ETL jobs has degraded. To ascertain the cause of the performance degradation, you want to collect basic statistics such as the level of parallelism, total database time, and the number of I/O requests for the ETL jobs.

How do you accomplish this?

A. Examine the Active Session History (ASH) reports for the time period of the ETL or batch reporting runs.

B. Enable SQL tracing for the queries in the ETL and batch reporting jobs and gather diagnostic data from the trace file.

C. Enable real-time SQL monitoring for ETL jobs and gather diagnostic data from the V$SQL_MONITOR view.

D. Enable real-time database operation monitoring using the DBMS_SQL_MONITOR.BEGIN_OPERATION function, and then use the DBMS_SQL_MONITOR.REPORT_SQL_MONITOR function to view the required information.

Answer: D

Explanation: Monitoring database operations

Real-Time Database Operations Monitoring enables you to monitor long-running database tasks such as batch jobs, scheduler jobs, and Extraction, Transformation, and Loading (ETL) jobs as a composite business operation. This feature tracks the progress of SQL and PL/SQL queries associated with the business operation being monitored. As a DBA or developer, you can define business operations for monitoring by explicitly specifying the start and end of the operation, or implicitly with tags that identify the operation.
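The explicit start/end marking described above can be sketched in PL/SQL as follows; the operation name nightly_etl is an assumption, and the ETL statements themselves are elided:

```sql
DECLARE
  l_eid NUMBER;
BEGIN
  -- Explicitly mark the start of the composite database operation
  l_eid := DBMS_SQL_MONITOR.BEGIN_OPERATION(dbop_name => 'nightly_etl');

  -- ... run the ETL statements here ...

  -- Explicitly mark the end of the operation
  DBMS_SQL_MONITOR.END_OPERATION(dbop_name => 'nightly_etl', dbop_eid => l_eid);
END;
/
```

The collected statistics (parallelism, database time, I/O requests) can then be rendered with the REPORT_SQL_MONITOR reporting function named in answer D.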

Question No: 53

Which Oracle Database component is audited by default if the Unified Auditing option is enabled?

A. Oracle Data Pump

B. Oracle Recovery Manager (RMAN)

C. Oracle Label Security

D. Oracle Database Vault

E. Oracle Real Application Security

Answer: B

Question No: 54

You use a recovery catalog for maintaining your database backups. You execute the following command:

$ rman TARGET / CATALOG rman/cat@catdb

Which two statements are true?

A. Corrupted blocks, if any, are repaired.

B. Checks are performed for physical corruptions.

C. Checks are performed for logical corruptions.

D. Checks are performed to confirm whether all database files exist in the correct locations.

E. Backup sets containing both data files and archive logs are created.

Answer: B,D

Explanation: B (not C): You can validate that all database files and archived redo logs can be backed up by running a BACKUP VALIDATE command. By default, this form of the command checks only for physical corruption; checking for logical corruption as well requires adding the CHECK LOGICAL option.

D: You can use the VALIDATE keyword of the BACKUP command to do the following:

Check data files for physical and logical corruption.

Confirm that all database files exist and are in the correct locations.

You can use the VALIDATE option of the BACKUP command to verify that database files exist and are in the correct locations (D), and have no physical or logical corruptions that would prevent RMAN from creating backups of them. When performing a BACKUP…VALIDATE, RMAN reads the files to be backed up in their entirety, as it would during a real backup. It does not, however, actually produce any backup sets or image copies (not A, not E).
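A minimal sketch of the RMAN commands the explanation refers to, using standard syntax from the RMAN reference (not necessarily the exact command shown in the original question, which is not reproduced in this extract):

```
RMAN> BACKUP VALIDATE DATABASE ARCHIVELOG ALL;
# Checks for physical corruption only; no backup pieces are produced.

RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE ARCHIVELOG ALL;
# CHECK LOGICAL additionally tests blocks for logical corruption.
```

In both forms RMAN reads every file in full, confirming existence and location, but writes no backup sets or image copies.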

Question No: 55

You create a table with the PERIOD FOR clause to enable the use of the Temporal Validity feature of Oracle Database 12c.

Examine the table definition:

[table definition image not reproduced in this extract]

Which three statements are true concerning the use of the Valid Time Temporal feature for the EMPLOYEES table?

A. The valid time columns employee_time_start and employee_time_end are automatically created.

B. The same statement may filter on both transaction time and valid temporal time by using the AS OF TIMESTAMP and PERIOD FOR clauses.

C. The valid time columns are not populated by the Oracle Server automatically.

D. The valid time columns are visible by default when the table is described.

E. Setting the session valid time using DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME sets the visibility for data manipulation language (DML), data definition language (DDL), and queries performed by the session.

Answer: A,B,E

Explanation: A: To implement Temporal Validity, Oracle Database 12c lets a table carry two date/timestamp columns that bound each row's valid time, declared with the new PERIOD FOR clause in CREATE TABLE for new tables or in ALTER TABLE for existing ones.

The columns used by the PERIOD FOR clause can be defined explicitly in the table definition, or they can be omitted, in which case the PERIOD FOR clause creates them internally.

E: The DBMS_FLASHBACK_ARCHIVE.ENABLE_AT_VALID_TIME procedure enables session-level valid time flashback.
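A minimal sketch of a table using the PERIOD FOR clause; the table and period names are illustrative, not the exact definition elided from the question:

```sql
-- Declaring PERIOD FOR without naming start/end columns makes Oracle
-- create them internally as hidden columns (here employee_time_start
-- and employee_time_end), invisible to a plain DESCRIBE.
CREATE TABLE employees (
  employee_id NUMBER PRIMARY KEY,
  name        VARCHAR2(50),
  PERIOD FOR employee_time
);
```

Because the columns are hidden, they do not appear when the table is described (ruling out answer D), and the application, not the server, populates them (ruling out the automatic-population reading of answer C's opposite).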

Question No: 56

Your database supports an online transaction processing (OLTP) application. The application is undergoing some major schema changes, such as the addition of new indexes and materialized views. You want to check the impact of these changes on workload performance.

What should you use to achieve this?

A. Database Replay

B. SQL Tuning Advisor

C. SQL Access Advisor

D. SQL Performance Analyzer

E. Automatic Workload Repository compare reports

Answer: D

Explanation: You can use the SQL Performance Analyzer to analyze the SQL performance impact of any type of system change. Examples of common system changes include:

• Database upgrades

• Configuration changes to the operating system, hardware, or database

• Database initialization parameter changes

• Schema changes, such as adding new indexes or materialized views

• Gathering optimizer statistics

• SQL tuning actions, such as creating SQL profiles

http://docs.oracle.com/cd/B28359_01/server.111/b28318/intro.htm#CNCPT961

Question No: 57

In order to exploit some new storage tiers that have been provisioned by a storage administrator, the partitions of a large heap table must be moved to other tablespaces in your Oracle 12c database.

Both local and global partitioned B-tree indexes are defined on the table.

A high volume of transactions access the table during the day, and a medium volume of transactions access it at night and during weekends.

Minimal disruption to availability is required.

Which three statements are true about this requirement?

A. The partitions can be moved online to new tablespaces.

B. Global indexes must be rebuilt manually after moving the partitions.

C. The partitions can be compressed in the same tablespaces.

D. The partitions can be compressed in the new tablespaces.

E. Local indexes must be rebuilt manually after moving the partitions.

Answer: A,C,D

Explanation: A: You can create and rebuild indexes online. Therefore, you can update base tables at the same time you are building or rebuilding indexes on those tables. You can perform DML operations while the index build is taking place, but DDL operations are not allowed. Parallel execution is not supported when creating or rebuilding an index online.

D: Moving (Rebuilding) Index-Organized Tables

Because index-organized tables are primarily stored in a B-tree index, you can encounter fragmentation as a consequence of incremental updates. However, you can use the ALTER TABLE…MOVE statement to rebuild the index and reduce this fragmentation.

C: If a table can be compressed in a new tablespace, it can equally be compressed in its current tablespace.

Not B, not E: Both local and global indexes can be rebuilt automatically by specifying UPDATE INDEXES when you move the table.
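The online move with automatic index maintenance can be sketched as follows; the table, partition, and tablespace names are assumptions for illustration:

```sql
-- 12c: move (and compress) a partition with minimal disruption.
-- UPDATE INDEXES maintains both local and global indexes automatically,
-- and ONLINE permits concurrent DML during the move.
ALTER TABLE sales MOVE PARTITION sales_q1
  TABLESPACE new_tier_ts COMPRESS
  UPDATE INDEXES ONLINE;
```

This single statement covers answers A and D together; omitting the TABLESPACE clause gives the in-place compression of answer C.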

Question No: 58

Oracle Grid Infrastructure for a standalone server is installed on your production host before installing the Oracle Database server. The database and listener are configured by using Oracle Restart.

Examine the following command and its output:

$ crsctl config has
CRS-4622: Oracle High Availability Services auto start is enabled.

What does this imply?

A. When you start an instance on the host with SQL*Plus, dependent listeners and ASM disk groups are automatically started.

B. When a database instance is started by using the SRVCTL utility and listener startup fails, the instance is still started.

C. When a database is created by using SQL*Plus, it is automatically added to the Oracle Restart configuration.

D. When you create a database service by modifying the SERVICE_NAMES initialization parameter, it is automatically added to the Oracle Restart configuration.

Answer: B

Explanation: About Startup Dependencies

Oracle Restart ensures that Oracle components are started in the proper order, in accordance with component dependencies. For example, if database files are stored in Oracle ASM disk groups, then before starting the database instance, Oracle Restart ensures that the Oracle ASM instance is started and the required disk groups are mounted. Likewise, if a component must be shut down, Oracle Restart ensures that dependent components are cleanly shut down first.

Oracle Restart also manages the weak dependency between database instances and the Oracle Net listener (the listener): When a database instance is started, Oracle Restart attempts to start the listener. If the listener startup fails, then the database is still started. If the listener later fails, Oracle Restart does not shut down and restart any database instances.

http://docs.oracle.com/cd/E16655_01/server.121/e17636/restart.htm#ADMIN12710
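The weak-dependency behavior described above can be exercised with SRVCTL; the database unique name orcl is an assumption:

```
$ srvctl start database -d orcl
# Oracle Restart first brings up hard dependencies (ASM instance,
# required disk groups), then attempts to start the dependent listener.
# If the listener fails to start, the database instance starts anyway.

$ srvctl status database -d orcl
$ srvctl status listener
```

Starting the instance directly with SQL*Plus bypasses Oracle Restart's dependency handling, which is why answer A is wrong and answer B is correct.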

Question No: 59

Which two statements are true concerning the Resource Manager plans for individual pluggable databases (PDB plans) in a multitenant container database (CDB)?

A. If no PDB plan is enabled for a pluggable database, then all sessions for that PDB are treated to an equal degree of the resource share of that PDB.

B. In a PDB plan, subplans may be used with up to eight consumer groups.

C. If a PDB plan is enabled for a pluggable database, then resources are allocated to consumer groups across all PDBs in the CDB.

D. If no PDB plan is enabled for a pluggable database, then the PDB share in the CDB plan is dynamically calculated.

E. If a PDB plan is enabled for a pluggable database, then resources are allocated to consumer groups based on the shares provided to the PDB in the CDB plan and the shares provided to the consumer groups in the PDB plan.

Answer: A,E

Explanation: A: Setting a PDB resource plan is optional. If not specified, all sessions within the PDB are treated equally.

In a non-CDB database, workloads within a database are managed with resource plans. In a PDB, workloads are also managed with resource plans, also called PDB resource plans. The functionality is similar except for the following differences:

Non-CDB database: multi-level resource plans; up to 32 consumer groups; subplans.

PDB: single-level resource plans only; up to 8 consumer groups; no subplans (not B).
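A PDB resource plan of the kind answer E describes can be sketched with the DBMS_RESOURCE_MANAGER package, run inside the PDB; the plan and consumer group names are assumptions:

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  -- A PDB plan is single-level with no subplans
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'pdb_plan',
    comment => 'Single-level PDB resource plan');

  -- Allocate shares to a consumer group (group name is illustrative)
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'pdb_plan',
    group_or_subplan => 'ETL_GROUP',
    comment          => 'ETL workload',
    shares           => 3);

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```

The shares here divide only the slice of resources that the CDB plan grants to this PDB, which is the two-level allocation answer E describes.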

Question No: 60

You execute the following commands to audit database activities:

[audit commands not reproduced in this extract]

Which statement is true about the audit record that is generated?
A. One audit record is created for every successful execution of a SELECT, INSERT, or DELETE command on a table, and contains the SQL text for the SQL statements.

B. One audit record is created for every successful execution of a SELECT, INSERT, or DELETE command, and contains the execution plan for the SQL statements.

C. One audit record is created for the whole session if JOHN successfully executes a SELECT, INSERT, or DELETE command, and contains the execution plan for the SQL statements.

D. One audit record is created for the whole session if JOHN successfully executes a SELECT command, and contains the SQL text and bind variables used.

E. One audit record is created for the whole session if JOHN successfully executes a SELECT, INSERT, or DELETE command on a table, and contains the execution plan, SQL text, and bind variables used.

Answer: D

Explanation: BY SESSION means: for any type of audit (schema object, statement, or privilege), BY SESSION inserts only one audit record in the audit trail, for each user and schema object, during the session that includes an audited action.

AUDIT_TRAIL=db, extended means: performs all actions of AUDIT_TRAIL=db, and also populates the SQL bind and SQL text CLOB-type columns of the SYS.AUD$ table, when available. These two columns are populated only when this parameter is specified.
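The behavior the explanation describes can be reproduced with traditional statement auditing of this general shape; this is a hedged illustration of standard syntax, not the exact commands elided from the question:

```sql
-- Record SQL text and bind values in SYS.AUD$ (takes effect after an
-- instance restart, since AUDIT_TRAIL is not dynamically modifiable)
ALTER SYSTEM SET audit_trail='db,extended' SCOPE=SPFILE;

-- One record per session per audited object, for successful statements only
AUDIT SELECT TABLE, INSERT TABLE, DELETE TABLE
  BY john BY SESSION WHENEVER SUCCESSFUL;
```

BY SESSION collapses repeated audited actions in a session into a single record, and the extended trail is what adds the SQL text and bind variables that answer D names.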
