Oracle Clone Tutorial: How to Clone a Large Oracle Database, Really, Really Fast!

Oracle Clone Tutorial

Take the pain out of cloning! Take the time out of cloning!

Learn how to duplicate (copy) an Oracle database in minutes – no matter how large – with Oracle DNFS.

Do you need to provide read-write clone databases for testing or development? Do you need to do it fast, simply, and with minimal disk space? This free Oracle Database Clone Tutorial will show you how.

Cloning databases can be slow and painful, and it requires lots of disk space. Not any more. We can use copy-on-write technology to create as many independent clones as you want, all based on one backup of the source database. Creating a clone takes minutes, no matter how big the database is. The space used by each clone is only the space needed to record changes made within the clone (so initially, zero). As far as your users are concerned, each clone is an independent database. They will have no idea that they are all sharing a common source.

This facility (based on Oracle’s Direct NFS technology) was introduced in release 11.2 and formally documented in 12.1.

Presented by Oracle Certified Master John Watson, SkillBuilders’ Director of Oracle Database Services.

This free tutorial is segmented into several separate lessons:

  1. Introduction to John Watson, SkillBuilders Director of Oracle Database Services (1:32)
  2. Agenda (1:19)
  3. Clones, Clones and More Clones. Too Many Clones? (6:19)
    John explains the reasons for creating clones and demonstrates DBMS_WM, the Oracle-supplied Workspace Manager package, in the hopes of demonstrating a technique for creating *fewer clones*.
  4. Cloning the Old Way (1:36)
    John explains the three traditional cloning techniques: manual scripting, Data Guard and RMAN.
  5. Cloning Issues and Sample Script (3:41)
    John demonstrates cloning via a script we wrote to clone databases daily.
  6. Introducing DNFS Copy on Update to Clone (8:16)
    John explains a new technique for cloning: using Direct Network File System (DNFS) Copy-on-Update.
  7. DNFS Cloning Technique and Demonstration (19:06) (click on video below)
    John demonstrates the new technique for cloning: using Direct Network File System (DNFS) Copy-on-Update.
  8. Demo Creating Additional Clones (In 2 Minutes!) (3:40)
    In 2 minutes and 11 seconds, John demonstrates creating an additional clone – of any size!
  9. Review Technique and Limitations (3:23)

Date: Aug 14, 2013

NOTE: Some corporate firewalls will not allow videos hosted by YouTube.


DNFS Cloning Technique and Demonstration

Lightning Fast Cloning


Session 7 – DNFS Cloning Technique and Demonstration




>> John:  What I’m going to run through now is – and I hope you all appreciate how brave it is to do this sort of live demonstration – the technique for rapid cloning with direct NFS. We need to configure NFS shares. I’m going to do it locally so I’m not actually using NFS in any meaningful way. It’s just that to get the access to the driver I do have to configure it.




We back up the source database, that one-off copy, and it must be an image copy. It doesn’t in fact have to be an RMAN image copy: if you have the ability to split mirrors or take read-only snapshots on the SAN, they’ll be perfectly acceptable as some sort of image copy of the data files.


Then we create a parameter file and a control file. With 12c that’s very, very easy because Oracle provides the scripts to do that. It was harder work with release 11.2.




Then a quick use of a package to set up the dual sets of data files: the read-only full copy and the read-write copy that’s local to each clone. Then we open resetlogs and we’re done.




That’s how we go with it then. To begin with, I need to configure NFS. This is another example of how the line between database administration and system administration is getting so blurred in the Oracle product set nowadays, and you could argue a lot about who’s meant to be doing this.


I’m working on Oracle Enterprise Linux here, by the way. Let’s start with a basic check: is NFS actually running?




And, yes, my daemon is running. Now, to configure NFS – first off, I need to create a directory where I shall share out the files. Create a directory clonedb/clone1.
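The two steps just described might look something like this – a sketch with hypothetical paths matching the demo; the service name and check command vary by distribution:

```shell
# Check that the NFS daemon is running (on newer distros:
# systemctl status nfs-server), then create the directory to share out.
service nfs status
mkdir -p /u01/nfs_shares/clonedb/clone1
```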


Then I need to export that directory, so I go to my exports file.




And I set a line up there already. I’m going to export the directory u01/nfs_shares/clonedb, and there are a bunch of options that need to be specified when configuring NFS for use by an Oracle environment – certain options that must be set. Nothing special, but they have to be right.
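The exports line could look roughly like this – a sketch, since the exact options John uses aren’t shown; `insecure` is commonly needed because the Direct NFS client connects from non-privileged ports, and `no_root_squash` preserves root permissions:

```shell
# /etc/exports – exporting to any client, as in the demo
/u01/nfs_shares/clonedb  *(rw,no_root_squash,insecure)
```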




So having set that up, do we actually have it working? Well, let’s try: exportfs -a, then exportfs, and there we go. I’m now exporting that directory to the entire world. Not the best security, but don’t worry about that for now.


Then I need to create the directory that I’m going to mount things on. So I shall create another directory which will be




on the same machine. So I’m exporting – I’m actually creating one more directory, and we’re ready to do that. I’m exporting u01/nfs_shares/clonedb and I’m going to mount it on u01/nfs_mount/clonedb. So my NFS traffic is never actually leaving the system.




Give it a mount command – for those of you who are not familiar with this, mount -t nfs, again with -o and a string of options. There’s no choice, or very little choice, about the options; they’re well documented and they are required for it to function.
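As a sketch, the mount command would look something like this – the option string follows the mount options Oracle documents for database files on Linux NFS, and the host and paths match the demo’s loopback setup:

```shell
# Client-side mount of the exported share on the local machine
mount -t nfs \
  -o rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 \
  localhost:/u01/nfs_shares/clonedb /u01/nfs_mount/clonedb
```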


We look at /etc/fstab. I’ll use the mount I’ve already got there – I’ve already configured this in my fstab file rather than trying it all dynamically. What I’m doing here is taking that export and mounting it on that path there. So I’ll do it through the fstab file instead: mount -a, df -h, and there we are.


I’ve now mounted the NFS directory on that. So what have I done so far? Not very much, except making a couple of typing errors. I have exported one directory and mounted it on another. Just to review: exportfs shows I’m exporting u01/nfs_shares/clonedb, and I’m mounting it on u01/nfs_mount/clonedb.




I’ll just make some ownership changes to make sure that Oracle has permission to see everything.




Now that’s configured NFS at the operating system level. This server is now exporting and mounting a file system.


Moving on then to the Oracle side, the configuration within the Oracle environment: first, in your Oracle home’s dbs directory, we create a small configuration file called oranfstab. Strictly speaking this is not necessary – it is optional – but it is generally considered best practice.


Within this file you specify a list of all the NFS servers you’re likely to use – in this case, looping back to my local machine – and the path to get to each NFS server from your NFS client. You can see I’m using loopback addresses, so I’m not really using NFS at all; it will simply give me access to the driver.


Then the mount we’re going to use, which reflects the mount already made at the operating system level. This file is optional, but in a complex environment where you have multipathing to your servers, using these directives will allow you to isolate the NFS traffic to, perhaps, a certain [6:30 inaudible], rather than having it interfere with and be interfered with by other traffic in the environment.
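For the loopback setup in the demo, the oranfstab file would be roughly this sketch – the directive names (server, path, export, mount) are those described in the Direct NFS Client documentation:

```shell
# $ORACLE_HOME/dbs/oranfstab
server: localhost
path: 127.0.0.1
export: /u01/nfs_shares/clonedb mount: /u01/nfs_mount/clonedb
```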




Having done that, we need to enable the use of the NFS driver. That is done by copying it into place in your libraries directory.




You will find from release 11 onwards that you have two drivers for getting to disk systems. One is called libodm12.so, the shared object library for reading and writing files on conventional storage. And then there’s libnfsodm12.so – or under 11g, libnfsodm11.so – which is the NFS driver. All you do is copy one over the other: copy the NFS driver over the standard ODM driver. It’s exactly the same routine under Windows, except they’re called DLLs.
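Sketched out for 12c on Linux (use the 11g library names on 11g), with a backup copy so the swap is reversible:

```shell
# Swap in the Direct NFS ODM driver
cd $ORACLE_HOME/lib
cp libodm12.so libodm12.so.orig    # keep the standard driver
cp libnfsodm12.so libodm12.so      # the NFS driver takes its place
```

Recent releases also offer a make target for this (dnfs_on in ins_rdbms.mk); check the documentation for your platform and version.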


From then on we are NFS-enabled, and whenever we start an instance, if you look at the [7:34 inaudible] log you’ll see a message at startup stating that we are using the NFS driver – version 3 nowadays, version 2 before.




Let’s configure it then. What do we do next? Having NFS running, I now need to start the clone process. It begins with a backup. So I’ll create a directory for my backup, u01/backups. My source database is called orclz, Oracle Z. I create the directory for the backup, connect with RMAN – because I’m going to do it with RMAN – and make a backup of the files.

There’s no reason why it shouldn’t be a hot backup, by the way – hot or cold makes no difference – but what it must be is a backup as copy. BACKUP AS COPY DATABASE, and I’ll send the backup to the directory I just created using a standard format string.
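The backup step, as a sketch (the format string is illustrative; %U generates unique file names):

```shell
# One-off image copy of the source database
rman target / <<'EOF'
BACKUP AS COPY DATABASE FORMAT '/u01/backups/%U';
EOF
```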


So there’s my backup. I backed up the entire database as a copy. You could speed this up with parallelism, of course, if you have the license, and the files will be generated over there. That’s going to take a while, so while that’s going on perhaps I can do a bit more work in the background to make sure I’m connected to the database.




I need a copy.


>> Dave:  [9:09 inaudible]


>> John:  Thank you. I need a parameter file. I’ll get there eventually.




I need a parameter file, which is simply copied from the live system. So I CREATE PFILE, give it a name, FROM SPFILE. That parameter file will then be edited for creating every clone database. So it’s a one-off copy of the data files and a one-off copy of the parameter file, and from then on we generate the clones based on that parameter file and on the copy, which will be finished in a couple of minutes.
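That step might look like this – the pfile name is illustrative:

```shell
# Export the source database's parameters to a text pfile
sqlplus / as sysdba <<'EOF'
CREATE PFILE='/u01/backups/initorclz.ora' FROM SPFILE;
EXIT
EOF
```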




The copy will have gone to this directory here.




And there it is. I’m going to remove the temporary backup of the spfile that RMAN made automatically, and that one will be the backup of the control file that RMAN made for me. I need to get rid of them because the clone, of course, will have its own parameter file and its own control file, but will be using these image copies of the data files.


So where is my clone going to be created? I’ll give it a directory in which to create it.




It already exists. Okay, no problem. So I’ve got a directory there waiting for me.




Now, the next step. We set four variables. The variables are quite nice because they fit into a script that Oracle provides in 12c – with 11g we had to write the scripts ourselves. The first variable, MASTER_COPY_DIR: where is my master backup? In my case that’s the backup created with RMAN, but it could be a split mirror – the copy I’m going to use repeatedly.


Where am I going to create my database? I’m going to create it in u01/nfs_mount/clonedb/clone1.


Name of the database? My clone database will be called clone1. SID of the database will be clone1.
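Putting the four variables together – the names follow Oracle’s documentation for the clonedb script, and the values are the ones used in the demo:

```shell
# Inputs for the 12c clonedb script
export MASTER_COPY_DIR=/u01/backups                      # where the image copy lives
export CLONE_FILE_CREATE_DEST=/u01/nfs_mount/clonedb/clone1  # where the clone goes
export CLONEDB_NAME=CLONE1                               # name of the clone database
export ORACLE_SID=clone1                                 # SID of the clone instance
echo "$CLONEDB_NAME will be created in $CLONE_FILE_CREATE_DEST"
```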




Then I need to generate the control file and the parameter file. To do that, Oracle provides a script. The script exists in – well, I’ll run the script and then we’ll walk through it –

perl ORACLE_HOME/rdbms/install/


I pointed it towards the copy of the parameter file that I just created. What that script has done is read that file, made a couple of edits to it, and written out a brand-new parameter file. So where is that parameter file and what’s in it? We’ll take a quick look at it.
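The invocation would look roughly like this – the script is clonedb.pl per Oracle’s documentation of this feature, and it takes the source pfile plus the names of the two SQL scripts it should generate (file names here are the demo’s; treat the exact argument order as an assumption to verify against your release):

```shell
# Generate the clone's pfile and the two SQL scripts
perl $ORACLE_HOME/rdbms/install/clonedb.pl \
    /u01/backups/initorclz.ora crtdb.sql rename.sql
```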




It would have been created in nfs_mount/clonedb/clone1. That’s the file that was generated, and if we look at it, there are just a very few changes from the original. Those are the parameters – two critical changes: the db_name of the database will be clone1, and then we see clonedb = true. You’ll be wanting to look up that parameter – I looked it up for you.


And there we have the clonedb parameter, which can be true or false. CLONEDB is set on a Direct NFS Client CloneDB database; when it’s set, the CloneDB database uses the database backup as its backing store. So that parameter needs to be set.




What else did the script do? Apart from that, it created crtdb.sql and rename.sql – both generated by the perl script that I ran. If we have a quick look, we’ll see that crtdb.sql does a STARTUP NOMOUNT with that parameter file, and then a CREATE CONTROLFILE command for our database called clone1, pointing towards the backup.
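Roughly what the generated crtdb.sql contains – a sketch with hypothetical file names, not the script’s exact text:

```shell
sqlplus / as sysdba <<'EOF'
STARTUP NOMOUNT PFILE='/u01/nfs_mount/clonedb/clone1/initCLONE1.ora'
CREATE CONTROLFILE REUSE SET DATABASE CLONE1 RESETLOGS
  LOGFILE GROUP 1 ('/u01/nfs_mount/clonedb/clone1/CLONE1_log1.log') SIZE 100M
  DATAFILE '/u01/backups/data_D-ORCLZ_TS-SYSTEM_FNO-1_0abc';
EOF
```

Note that the DATAFILE clause points at the image copies in the backup directory, not at the clone’s own location – the renaming comes next.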


I shall run that now, make sure I’m connected to the right instance name, clone1. Yes. So sqlplus / as sysdba.




And run that first script.




Let’s startup nomount – that was done, control file created. That was reasonably painless. It just creates the control file pointing to the data files in the backup. So then we move on to the next script, which was rename.sql, generated by the perl program. What rename.sql does is make a series of calls to DBMS_DNFS.CLONEDB_RENAMEFILE.




We want to look that up as well, and here it is. DBMS_DNFS is a very simple package: it has just the CLONEDB_RENAMEFILE procedure, and that’s all there is to it.


What it does is rename the data files so that, instead of pointing at our backup, the control file points at the actual file names of our clone database. Two arguments: the source file (srcfile) is the data file in the backup; the destination file (destfile) must point to an NFS volume, though it can be local – it doesn’t have to be on a remote NFS server – and that’s where the clone files will be created. So we’ll try to run that rename.
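A sketch of one rename.sql entry plus the final open – the file names are hypothetical, and the real generated script has one call per data file:

```shell
sqlplus / as sysdba <<'EOF'
BEGIN
  dbms_dnfs.clonedb_renamefile(
    srcfile  => '/u01/backups/data_D-ORCLZ_TS-SYSTEM_FNO-1_0abc',
    destfile => '/u01/nfs_mount/clonedb/clone1/system01.dbf');
END;
/
ALTER DATABASE OPEN RESETLOGS;
EOF
```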




That would help. @rename.




It’s done the renaming, open reset logs, and when that goes through that will be it. The clone will have been created.




There we go. Now select open_mode from v$database.




It’s open read-write. It’s an open read-write database. If we select name from v$datafile, we see the data files are pointing to the NFS-mounted directory.


That was a clone that took just less than a minute, wasn’t it? But let’s see what’s actually happening there, and it is quite interesting. First, we look at the original backup. If I go to my backup of orclz, we can see the data files. The data files – let’s just use a different version of ls: ls -lsh.


Look at these data files: the example tablespace data file is 324 megabytes, and note the size column from -s showing the space actually allocated. It is indeed 324 megabytes, as we would expect. So in the backup, the apparent size of each file and the actual size match, and the entire backup is 2.1 gigabytes. That’s the same size as the source database.


But if we look at what’s happening in the cloned environment – I go to u01/nfs_mount/clonedb/clone1, ls -lsh – there are the files, and on the face of it they are the same size: 324 megabytes, same as the source; 771 megabytes, same as the source. But look at this here.




That file is not 324 megabytes: it’s 16 kilobytes, and the entire clone database is occupying just 210 megabytes, most of which is the online logs. So my entire database is taking up just a few K. What happens when I do some work? If I, for example, create a table – create table t1 as select * from all_users – if I actually do some work in an absolutely normal, open, read-write database, what has happened? I rerun the command, and we see that whereas the 324 megabyte file was occupying 16K before, it is now occupying 32K. So clearly we’ve had to copy out two blocks to represent the change that we actually made.
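You can see the same sparse-file behaviour outside Oracle with ordinary tools – a sketch, file name illustrative: truncate creates a file with a large apparent size but no allocated blocks, and writing into it allocates only the blocks actually touched, just as the clone allocates only changed blocks.

```shell
# Apparent size vs. allocated size on a sparse file
truncate -s 100M sparse_demo.dat
ls -l sparse_demo.dat     # apparent size: 104857600 bytes
du -k sparse_demo.dat     # allocated: ~0 KB
printf 'hello' | dd of=sparse_demo.dat conv=notrunc status=none
sync
du -k sparse_demo.dat     # now a block or so is allocated
rm sparse_demo.dat
```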




That shows you how space-economical the database is. A 2 gigabyte database is in fact occupying 210 megabytes.


Copyright 2017
