Free Exadata Tutorials and Demonstrations

Let Oracle Certified Master DBA John Watson teach you what Exadata is and demonstrate how it works!

Oracle Exadata Database Machine

A Database Machine is a massively scalable and fault-tolerant piece of hardware (CPUs, RAM, disks, and networking, all perfectly balanced) combined with the Exadata smart storage software. The result is a really powerful database server with a disk subsystem that understands the database and takes over a huge proportion of the data processing workload. This is the “offload processing” feature: no other platform can do this.

Experience shows that porting a database to a DB Machine is easy – but actually getting the benefits is harder. Before making the investment (a huge investment) in a DB Machine, you will do a POC that shows the applications work. That doesn’t mean they work as well as they could. Offload processing is more elusive than one might think, and to achieve it you may need to reconfigure your data structures and adjust your code. This is a non-trivial task that goes far beyond the usual SQL and segment tuning activities: it is a whole new layer of optimization.

We believe that the real ROI will come only from extensive work after the migration, as the application and the database are tuned to the capabilities of the Exadata platform. Yes, you will get an immediate performance boost – but you should be getting much more.

Let us help you optimize your applications for Exadata!

Contact us at 1-401-783-6172 or email us to see how we can help you.


Oracle Exadata Database Machine Tutorials
(Login Required for sessions 2 and above)

  1. About the Exadata Hardware
  2. What Makes the DB Machine Special
  3. Smart Scan in Theory
  4. How Smart Scan Functions – Demo
  5. Smart Scan in Practice
  6. Making Smart Scan Work – Demo
  7. HCC in Theory
  8. HCC performance and Compression Ratio – Demo
  9. HCC Limitations and Best Practices
  10. HCC Compression Degradation Issue – Demo
  11. Exadata is Good But Not Easy

Let us help you optimize your applications for Exadata! Contact us at 1-401-783-6172 or email us.


NOTE: Some corporate firewalls will not allow videos hosted by YouTube.


Transcript

HCC Limitations and Best Practices

HCC is undoubtedly good technology, but there are some limitations, some issues that I think we need to be aware of.

First off, it can work only for direct loads. I believe that this is a technology limitation. Because of the way the compression is done in units of four blocks, it’s hard to see how that type of operation could ever be done in the database buffer cache. So you need direct loads or it can’t work at all.
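
As an editor’s illustration of that point (the SALES and SALES_HCC table names are hypothetical), this is the kind of direct-path load HCC needs; a conventional insert into the same table stores the new rows without HCC compression:

    -- Create an empty table with HCC compression enabled
    -- (11.2 syntax; later releases also accept COLUMN STORE COMPRESS FOR QUERY HIGH)
    CREATE TABLE sales_hcc
      COMPRESS FOR QUERY HIGH
      AS SELECT * FROM sales WHERE 1 = 0;

    -- Direct-path load: the rows are HCC compressed as they are written
    INSERT /*+ APPEND */ INTO sales_hcc SELECT * FROM sales;
    COMMIT;

    -- Conventional-path insert: the new rows go through the buffer cache
    -- and are not HCC compressed
    INSERT INTO sales_hcc SELECT * FROM sales;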

The four-block compression unit: I can’t see why that would be a technology limitation, and it wouldn’t surprise me if in a later release it becomes something that’s tunable. But certainly with the current release, four blocks is what you have to work with, and that can be significant when it comes to thinking about the structure of your objects, the clustering, the way your data is actually stored within the tables, and that may require some tuning to suit the four-block compression unit size.
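
One common way to work with, rather than against, the compression unit (a sketch only; the table and column names are assumptions, not taken from the demo) is to rebuild the segment with the rows sorted, so that similar values land in the same compression unit:

    -- Rebuild the table sorted on low-cardinality columns so that similar
    -- values are stored together, giving each compression unit more to work with
    CREATE TABLE sales_hcc_sorted
      COMPRESS FOR QUERY HIGH
      AS SELECT * FROM sales_hcc
         ORDER BY product_id, sale_date;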

The compression ratio itself: DML against HCC objects is definitely not a good idea. The way it’s actually implemented, if you do an update against a compressed row, the data is decompressed, the DML is then executed, and when the data is saved back into the segment, it gets compressed with basic compression, or deduplicated compression.

So what you end up with is a table that’s part HCC compressed and part deduplicated. Inevitably, as DML occurs against an HCC table, you’ll find your compression ratio degrades, and there may also be an impact on the time it takes to perform the DML because of the extra work involved.
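
You can see this degradation for yourself by asking the database what compression type individual rows currently have. A minimal sketch, assuming a SALES_HCC table owned by SCOTT (the numeric codes returned, and their names, vary by release, so check the DBMS_COMPRESSION documentation for your version):

    -- Count rows by their current compression type; after heavy DML you will
    -- see codes other than the original HCC level start to appear
    SELECT DBMS_COMPRESSION.GET_COMPRESSION_TYPE('SCOTT', 'SALES_HCC', rowid) AS comp_type,
           COUNT(*) AS rows_with_type
    FROM   scott.sales_hcc
    GROUP  BY DBMS_COMPRESSION.GET_COMPRESSION_TYPE('SCOTT', 'SALES_HCC', rowid);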

Finally, one point that probably isn’t important to many people. The cells do the compression and the decompression, and they serve decompressed data, just rows or blocks, back to the compute nodes. However, if the cell nodes are working flat out and CPU usage is running at 100%, under those circumstances the Exadata software can decide to serve complete compression units back to the compute nodes, and the compute nodes then have to take the hit of doing the decompression.
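
There is a system statistic that records this passthrough behaviour. A hedged sketch (the exact statistic name differs slightly between releases, so the query simply lists the cell physical I/O statistics rather than spelling one out):

    -- List the cell physical I/O statistics; the one describing bytes "sent
    -- directly to DB node to balance CPU" (wording varies by release) shows
    -- data shipped back still compressed because the cells were too busy
    SELECT name, value
    FROM   v$sysstat
    WHERE  name LIKE 'cell physical IO%'
    ORDER  BY name;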

I would say that if we manage to drive our Exadata systems up to the level where you’re running short of CPU on the cell nodes, we’ll be doing very well indeed, but it’s just something I want to highlight. There is no way I’m going to be able to demonstrate that, not on the systems I’ve got here, but I will try to demonstrate one or two of the other issues that may be significant to you.
