Oracle Database Instance Recovery Tutorial

Is It Possible to Corrupt an Oracle Database?

No, You Cannot – Not If It Is Properly Administered

Why? Because the mechanism of redo and undo is the best yet developed on this planet.

In this tutorial, Oracle Certified Master John Watson will explain redo, undo, commit, instance recovery and demonstrate why your database can never lose one row of committed data.

Learn what actually happens, in memory and on disc, when DML is issued. John will demonstrate that instance recovery is automatic, and unstoppable.

This is critical and essential knowledge for all Oracle Administrators and DBA students.

Presenter: John Watson, Oracle Certified Master

This free tutorial is segmented into fourteen separate lessons:

  1. Introduction to Oracle Certified Master John Watson and SkillBuilders (1:44)
  2. Tutorial Agenda (2:40)
  3. The Database ACID Test (3:09)
  4. Physical and Logical Corruption (1:37)
  5. Demonstration – What Does Oracle do when DML is Executed? (9:47)
  6. Commit and the Log File Sync Wait Event (1:18)
  7. When Why and How Does Oracle Write to Disk (1:58)
  8. What About Checkpoints? (1:16)
  9. Discussion about COMMIT NOWAIT (3:46)
  10. Lecture – What Oracle Database does when DML is Executed (7:22)
  11. The Oracle Instant Crash Recovery Mechanism (6:24)
  12. Principles to Remember (0:52)
  13. Oracle – The Incorruptible Database (1:54)
  14. Next Steps (0:47)

Date: Jun 4, 2013



Transcript

Discussion about COMMIT NOWAIT

Oracle Instance Recovery Tutorial

 

Module 9 – Commit Nowait

 

>> Dave:  What is this business about commit nowait?

 

>> John:  Alright. That's one of the few bits of tuning you can do for redo generation. There are in fact three parameters, one of which is now deprecated. We have commit_logging, commit_wait and commit_write.

 

The default – alter system, though I'll do it at the session level actually: alter session set commit_logging. I'm working from memory here, but the default would be 'immediate,wait', if I remember correctly.

 

[pause]

 

Nope. Let me try commit_wait or commit_write. That’s the one. Right.

 

[pause]

 

This is the default situation: commit_write = 'immediate,wait'. What this means is that when you issue a commit, log writer immediately flushes the log buffer to disc and your session waits. That's what I was describing earlier – the log file sync wait event. That's the default situation.

 

[pause]

 

If you are feeling very brave, you can go to the other extreme. We can set that to ‘batch,nowait’.
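The two extremes John demonstrates can be written as follows. This is a sketch using the deprecated COMMIT_WRITE parameter he shows on screen (available from 10g; it requires the ALTER SESSION privilege):

```sql
-- Default: log writer flushes the log buffer on every commit and
-- the session waits for confirmation (the log file sync wait event).
ALTER SESSION SET COMMIT_WRITE = 'IMMEDIATE,WAIT';

-- The other extreme: commits are batched and the session does not
-- wait for the redo to reach disc. Much faster, but committed work
-- can be lost if the instance fails before the write completes.
ALTER SESSION SET COMMIT_WRITE = 'BATCH,NOWAIT';
```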

 

[pause]

 

This may improve the performance of redo generation dramatically. What does "batch" mean? Rather than forcing a write to disc on every commit – and if you're committing thousands of transactions a second, that's an awful lot of triggers on log writer – we batch the commits up and write them when we get around to it.

 

Nowait means your session doesn't hang. You issue a commit and just carry on, and hope that log writer will get the change vectors to disc appropriately. If you go to this, you may find that your log file sync waits disappear and database performance flies, but you're taking a risk: it is now possible to commit transactions and then, if something goes wrong before the log file write completes, lose that committed work – which breaks all the rules.

 

As for the other two parameters: commit_write has in fact now been replaced by commit_wait and commit_logging, which divide its function in two. Instead of setting commit_write to 'batch,nowait', we set commit_wait to wait or nowait, and commit_logging to batch or immediate. These are dangerous parameters. The default is the guarantee of no corruption and no lost data.
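Using the replacement parameters, the same two extremes look like this. This is a sketch against 11g or later, where COMMIT_WAIT and COMMIT_LOGGING supersede COMMIT_WRITE:

```sql
-- Equivalent of 'immediate,wait' (the safe default):
ALTER SESSION SET COMMIT_WAIT    = WAIT;
ALTER SESSION SET COMMIT_LOGGING = IMMEDIATE;

-- Equivalent of 'batch,nowait' (fast, but risks losing
-- committed transactions on instance failure):
ALTER SESSION SET COMMIT_WAIT    = NOWAIT;
ALTER SESSION SET COMMIT_LOGGING = BATCH;
```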

 

[pause]

 

>> Dave:  I used that with great success at a customer site that had a repeatable load process doing too many commits. If it failed – if there was a problem – we could simply rerun the job. I think that was a good scenario for it.

 

>> John:  That would be one of the few places where we would definitely recommend it. You've got a batch process that's generating vast amounts of redo, the redo generation is slowing things down, and if anything were to go wrong you can repeat the work. That would be a good case for using it. You'd be a brave man to set it up in a standard OLTP environment, say, and it would have to be agreed and the implications discussed with the customer first.
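For a repeatable batch job like the one Dave describes, the relaxed behaviour can also be requested per statement rather than per session, using the WRITE clause of the COMMIT statement itself. A minimal PL/SQL sketch, in which the table names staging_rows and target_table are hypothetical:

```sql
BEGIN
  FOR r IN (SELECT * FROM staging_rows) LOOP
    INSERT INTO target_table VALUES r;
    -- Batched, non-waiting commit: acceptable here only because
    -- the whole job can simply be rerun if anything goes wrong.
    COMMIT WRITE BATCH NOWAIT;
  END LOOP;
END;
/
```

Committing row by row like this is itself poor practice for throughput; the point is only that where a job already over-commits and is fully repeatable, the WRITE clause confines the risk to that one statement instead of the whole session.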

 

Copyright SkillBuilders.com 2017
