Oracle Clone Tutorial: How to Clone a Large Oracle Database, Really, Really Fast!

Take the pain out of cloning! Take the time out of cloning!

Learn how to duplicate (copy) an Oracle database in minutes – no matter how large – with Oracle DNFS.

Do you need to provide read-write clone databases for testing or development? Do you need to do it fast, simply, and with minimal disk space? This free Oracle Database Clone Tutorial will show you how.

Cloning databases can be slow and painful, and it requires lots of disk space. Not anymore. We can use copy-on-write technology to create as many independent clones as you want, all based on one backup of the source database. Creating a clone takes minutes, no matter how big the database is. The space used by each clone is only the space needed to record changes made within that clone (so initially, zero). As far as your users are concerned, each clone is an independent database; they will have no idea that they are all sharing a common source.

This facility (based on Oracle’s Direct NFS technology) was introduced in release 11.2.0.2 and formally documented in 12.1.x.

Presented by Oracle Certified Master John Watson, SkillBuilders’ Director of Oracle Database Services.

This free tutorial is segmented into several separate lessons:

  1. Introduction to John Watson, SkillBuilders Director of Oracle Database Services (1:32)
  2. Agenda (1:19)
  3. Clones, Clones and More Clones. Too Many Clones? (6:19)
    John explains the reasons for creating clones and demonstrates DBMS_WM, the Oracle-supplied Workspace Manager package, in the hope of demonstrating a technique for creating *fewer clones*.
  4. Cloning the Old Way (1:36)
    John explains the three traditional cloning techniques: manual scripting, Data Guard and RMAN.
  5. Cloning Issues and Sample Script (3:41)
    John demonstrates, with examples, cloning via a script we wrote to clone databases daily.
  6. Introducing DNFS Copy on Update to Clone (8:16)
    John explains a new technique for cloning: using Direct Network File System (DNFS) Copy-on-Update.
  7. DNFS Cloning Technique and Demonstration (19:06)
    John demonstrates, with examples, the new technique for cloning: using Direct Network File System (DNFS) Copy-on-Update.
  8. Demo Creating Additional Clones (In 2 Minutes!) (3:40)
    In 2 minutes and 11 seconds, John demonstrates creating an additional clone – of any size!
  9. Review Technique and Limitations (3:23)

Date: Aug 14, 2013


NOTE: Some corporate firewalls will not allow videos hosted by YouTube.

Transcript

Review Technique and Limitations

Lightning Fast Cloning

 

Session 9 – Review Technique and Limitations

 

>> John:  We create the NFS share at the operating system level, and I'm not using real networking; I'm doing this over loopback addresses. Take a backup as image copies, create a parameter file and a control file. We've now got scripts available to do that, which is harder with 11g, I can assure you. But we can do it for you with 11g, and we can in effect write the clonedb script.
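The steps John runs through can be sketched roughly as follows. This is an illustrative outline only, not a complete script: the paths (/backup, /clone), the database name CLONE1, and the file names are assumptions, and the exact procedure for your release is in Oracle's clonedb documentation. The rename call, DBMS_DNFS.CLONEDB_RENAMEFILE, is the Oracle-supplied package procedure being referred to here.

```
-- 1. On the source: take an image-copy backup with RMAN.
RMAN> BACKUP AS COPY DATABASE FORMAT '/backup/%U';

-- 2. Create a parameter file for the clone, with its own control
--    file location, and set the clonedb parameter:
--      clonedb=TRUE

-- 3. On the clone: build a control file listing the backup copies
--    as data files, then rename each one onto a sparse file on NFS.
SQL> STARTUP NOMOUNT PFILE='/clone/initclone.ora';
SQL> CREATE CONTROLFILE REUSE SET DATABASE "CLONE1" RESETLOGS ...;
SQL> EXEC DBMS_DNFS.CLONEDB_RENAMEFILE('/backup/data_file.dbf', -
                                       '/clone/clone1_01.dbf');

-- 4. Open the clone; changed blocks now go to the sparse files.
SQL> ALTER DATABASE OPEN RESETLOGS;
```

The key point is step 3: the clone's data files are sparse files that redirect unchanged-block reads back to the one shared backup copy.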

 

[pause]

 

Rename the files using that package (DBMS_DNFS), open resetlogs, done. Of course, you have to monitor the space on your file system, because as time goes by your clones are going to take up more space. You would never guess it when you do a simple ls -l or ls -lh, because UNIX is lying to us: the clone's data files are created as what are called sparse files.

Only if you check with a tool such as du do you see the actual space being occupied. So never forget that the clones are going to grow in the background, and you may well hit file-system-full problems that you are not expecting, so monitor the space usage. You can create as many clones as you wish, all running off the same backup.
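The ls-versus-du effect John describes can be seen with any sparse file. A quick illustration on Linux (/tmp/sparse_demo is just a throwaway name): ls reports the apparent size, while du reports the blocks actually allocated.

```shell
# Create a 100 MB sparse file: seek past 100 MB and write nothing.
dd if=/dev/zero of=/tmp/sparse_demo bs=1 count=0 seek=100M 2>/dev/null

# ls shows the apparent size -- 100 MB -- "lying" about real usage.
ls -lh /tmp/sparse_demo

# du shows the blocks actually allocated: effectively zero.
du -h /tmp/sparse_demo

rm -f /tmp/sparse_demo
```

Clone data files behave the same way: they report the full size of the source database but occupy only the changed blocks, which is why du, not ls, is the tool for monitoring clone growth.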

 

[pause]

 

Lastly, just a few limitations. 11.2.0.2 is when this came in, and with 11.2.0.2, I'd have to say, it was clunky and manual. The Direct NFS ODM library must be enabled; you simply copy it in. I have to say, I wonder why it isn't enabled by default: I can see no downside to using the DNFS library. You don't have to have your files on an NFS device; you can have files on local devices and the DNFS driver will still function, no problem at all. The DNFS library can read both local storage and NFS storage, whereas the standard ODM library cannot read NFS devices.
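Enabling the Direct NFS ODM library is a one-off operation per Oracle home. On 11.2 and later the documented route is the dnfs_on make target (older notes describe manually copying the libnfsodm library over the default ODM library, which is the "copy it in" John mentions). A sketch, assuming a standard $ORACLE_HOME layout; shut down all databases using that home first:

```
# Shut down all databases using this Oracle home first.
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on    # relink with the Direct NFS ODM library
# (make -f ins_rdbms.mk dnfs_off reverts to the standard ODM library)
```

Once the database is doing I/O through Direct NFS, you can confirm it is active by querying V$DNFS_SERVERS.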

 

All your clones must be able to see the backup. The backup, by the way, does not have to be on NFS; it can be on any form of storage that has enough space available, except ASM. Clones can run on different machines, or on the same machine as the source, as I'm doing here. But if you damage that one backup, all the clones will be broken. It does become a single point of failure for all your clones, because there is only one master copy of the data; the data private to each clone is only the changed blocks.

 

[pause]

 

A point at the bottom here that I do want to highlight: performance tuning. Tuning SQL is no problem at all. The clone database is perfect for tuning SQL. You can run the statements, get your execution plans out, do everything you want for tuning SQL on the clone, no problem at all. Actually benchmarking a workload, though, would not be a fair test, because there could be many clones hitting the original copy of the data.

 

But for tuning SQL, it's not an issue. If, however, you're using the Real Application Testing option, it would not be fair to run Database Replay against a clone created in this fashion, whereas SQL Performance Analyzer will be no problem at all.

 

Copyright SkillBuilders.com 2017
