Installing Oracle VM Server for SPARC (formerly Solaris Logical Domains) on SPARC T-series Servers

This free tutorial and demonstration covers:

  • Installation and initial configuration of Oracle VM Server for SPARC v.2 software on an Oracle T-series server.
  • Configuration of the Control Domain, which controls the logical domain environment and provides services to guest domains.
  • A brief example of creating a guest domain.


This free training is segmented into several separate lessons:

  1. Intro to VM Server (Solaris Logical Domains) and Chip Multi-Threading (CMT) UltraSPARC (10:10)
  2. Demonstration: System Firmware, Network Configuration, Oracle Integrated Lights Out Manager, Solaris Installation (9:25)
  3. FAQs and Demonstration: Installing LDOM Software, Creating Domains (10:17)
  4. Demonstration: Configuring the Control Domain (8:51)
  5. Demonstration: Configuring the Control Domain (continued) and Using ZFS Disk Space (10:31)
  6. Demonstration: Creating the Guest Domain (10:29)

Date: Oct 5, 2011

NOTE: Some corporate firewalls will not allow videos hosted by YouTube.


Demonstration: Configuring the Control Domain (continued) and Using ZFS Disk Space

Installing Oracle VM Server (formerly Solaris Logical Domains) on SPARC T-Series Servers, Part 5


>> Mick:  Thankfully, for our demo purposes, the machine has come back fine.




There’s our ldm list. If you remember, it was at 64 CPUs and 16 GB of RAM; now it’s only got 8 VCPUs and 2 GB of memory. All the other resources are now available for us to assign to guest domains using ldm commands, so the control domain is now up and running.
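On LDoms 2.x the listing looks roughly like this; the UTIL and UPTIME figures below are illustrative, not from the demo:

```shell
# List all logical domains; after trimming the control domain,
# "primary" shows only the resources we left it
ldm list
# NAME      STATE    FLAGS   CONS  VCPU  MEMORY  UTIL  UPTIME
# primary   active   -n-cv-  SP    8     2G      0.3%  22m
```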




There are some issues to be aware of when you use an ldm command. The configuration is ultimately stored within the firmware, but there’s an intermediate phase where it’s recorded only within the operating system, so you have to make sure that information is correctly written to the firmware.


There are commands such as ldm add-config to save a configuration to the firmware. That takes whatever the current configuration is and writes it away under a name, here called primary-initial. The only one listed by default is something called factory-default.
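Saving and listing firmware configurations looks roughly like this; the name primary-initial is just the one used in this demo:

```shell
# Save the current configuration to the service processor
ldm add-config primary-initial

# Show the configurations stored in firmware; the one marked
# [current] is what the system will use at the next power cycle
ldm list-config
# factory-default
# primary-initial [current]
```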


So if you’re experimenting and you make a mess, you’ll want to clear out what you’ve done. There are a couple of ways of doing it. You can do it from the operating system with ldm set-config factory-default, or you can do it at the system controller, the Lights Out Manager, by issuing the command shown at the bottom of the page here.


The one above would be used on an older T1000 or T2000. I’m not going to bother doing any of that, because I’m purely demonstrating.
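As a hedged sketch, since the exact system-controller syntax depends on the platform, resetting back to the factory configuration typically looks like this:

```shell
# From the control domain's operating system:
ldm set-config factory-default

# Or from the ALOM system controller on an older T1000/T2000,
# at the sc> prompt:
#   bootmode config="factory-default"

# Or from ILOM on newer T-series servers, at the -> prompt:
#   set /HOST/bootmode config=factory-default

# Either way, the change takes effect at the next power cycle.
```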


One thing we now need to do is make the virtual switch we created accessible to our physical network, which it currently isn’t. That way, any guest domains that have virtual network interfaces connected to that switch can access the outside world. If you want the complete data center in a box, without any access to the outside world, you can skip this next step.




Let’s find out what network interfaces I have.




It takes a little bit of time to come back. There’s my virtual switch. What I’m going to do is configure that as my main network interface. Normally I would do that here, but in the interest of time, I’ll just talk you through it.
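On Solaris 10 you can see the virtual switch device alongside the physical interfaces with commands like these:

```shell
# Show the interfaces that are currently plumbed
ifconfig -a

# Show all network devices the system knows about,
# including the vsw0 virtual switch device
dladm show-dev
```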


Currently, I’ve got an e1000g0 interface through which I’m communicating. What I would do is replace that with exactly the same configuration, but on the vsw0 interface instead. The procedure for doing this is shown on this page here, page 32.




Take down the e1000g0 interface and unplumb it, which effectively, if you like, removes the driver from the kernel; then plumb the vsw0 interface instead and bring it up. Then create the necessary file, /etc/hostname.vsw0, in place of /etc/hostname.e1000g0, so that the next time we reboot, it will use that interface. In that way, the switch now has access to the physical network.
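As a sketch, with a made-up example address, and run from the system console since it drops the network connection mid-way:

```shell
# Move the control domain's IP configuration from the
# physical NIC to the virtual switch interface
ifconfig e1000g0 down unplumb
ifconfig vsw0 plumb
ifconfig vsw0 192.168.1.10 netmask 255.255.255.0 broadcast + up

# Make the change persistent across reboots
mv /etc/hostname.e1000g0 /etc/hostname.vsw0
```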




Rather than doing a list-bindings and displaying everything, we can investigate specific properties of the network settings. The ldm command has some pretty good ways of displaying specific information; it’s just a matter of remembering what the commands are. But there’s an example to look at the network.
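For example, restricting the output to just the network resources of the control domain:

```shell
# Show only the network configuration of the primary domain
ldm list -o network primary
```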


When we set up networks within our guest domains, there’ll be vnet0, vnet1 and so forth, and we can configure them like normal physical interfaces.




One thing we also have to do is enable the vntsd daemon, which will then give us access to the [4:43 inaudible] ports so we can access the OK prompts of any logical domains we create.
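The virtual network terminal server daemon is managed as an SMF service:

```shell
# Enable the virtual network terminal server daemon
svcadm enable vntsd

# Confirm the service is online
svcs vntsd
```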




When we install guest domains, we will provide resources to the domains, and also disks. The disk can be a physical hard drive, a [5:13 inaudible] file, or it can be a ZFS volume, which is what we’re going to use.


This system is using a ZFS root disk, as you can see from the df listing there. If you’ve never seen ZFS before, it looks a little bit different than normal.




First of all, to find out what resources we have available, you can do an ldm list-devices, and that will show the actual hardware that’s currently available across the system.
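By default, list-devices shows only unallocated resources; add -a to see everything:

```shell
# Show resources not currently bound to any domain
ldm list-devices

# Show all resources, bound and unbound
ldm list-devices -a
```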




I’ve explained that you can use a number of different disk backends, and you can also extend a domain with more disks later. You can of course use SAN devices, iSCSI, whatever you like, just like you can for any system. We have a ZFS file system, and what we’re going to do is a zfs list. We have a pool called rpool.
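Listing the datasets in the root pool; the names and sizes below are illustrative, not taken from the demo system:

```shell
# Show the ZFS datasets and the space they use
zfs list
# NAME          USED  AVAIL  REFER  MOUNTPOINT
# rpool        12.1G   115G    94K  /rpool
# rpool/ROOT   8.20G   115G    18K  legacy
# ...
```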




What I’m going to do is create a little dataset within it to store our Logical Domains volumes. ZFS has this nice little facility of being able to clone file systems. If we create a volume within ZFS and install an operating system into it, we can then snapshot it and clone it, and automatically be able to roll out another copy of the operating system. If we sys-unconfig it before we halt it, then when the clone boots, it will go through an identification process asking us the host name, the IP address, and so forth. Every time we want to create a domain, we simply clone the snapshot, assign the clone to the domain, boot it, and answer the questions.
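The clone-per-domain workflow just described looks roughly like this; goldvol is the volume name used in this demo, while guest1vol and the snapshot name are invented for illustration:

```shell
# Inside the installed golden image, before halting it:
#   sys-unconfig

# Snapshot the installed golden volume
zfs snapshot rpool/ldomvols/goldvol@golden

# Clone the snapshot once per new guest domain
zfs clone rpool/ldomvols/goldvol@golden rpool/ldomvols/guest1vol
```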


We’ve got ourselves a brand new operating system that takes up very little space, so it’s very economical as well as being very quick. It’s not always the best way to do things, but if you’re going to roll out a lot of similar Logical Domains, it’s certainly a great way of doing it.




It’s often referred to as the “golden image,” as shown on the slide here. Let’s have a look.


First of all, we’ve got to create the holding area, if you like: the dataset that’s going to hold our volumes. The pool in this example is called pod, but mine is called rpool. Now I’m going to create an emulated volume.
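Creating the dataset and an emulated volume within it; the 10 GB size is an assumption, so pick whatever suits your image:

```shell
# Dataset to hold all the logical domain volumes
zfs create rpool/ldomvols

# A 10 GB ZFS volume (zvol) to serve as the golden image disk
zfs create -V 10g rpool/ldomvols/goldvol
```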




This is, by the way (sorry, it’s [8:37 inaudible] these terms around), an emulated volume. What I’m effectively doing here is creating a disk device sitting on ZFS that will look like…




Where are we? We’ll call it goldvol; we’re going to use it as an image, and shortly we’ll create a domain using it as the disk. Creating the volume also creates associated disk device names that we can now use and assign, like the example there at the bottom. So we can do ldm add-vdsdev /dev/zvol/dsk/rpool/ldomvols/goldvol golddisk@primary-vds0.


Remember primary-vds0 is our disk controller so we’ve attached a disk to our disk controller and we’ll use that very shortly. Let’s quickly go through adding a domain.
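The next segment walks through this in full, but as a hedged sketch, creating a guest domain that uses the golddisk volume typically involves commands like these. The domain name ldom1, the resource sizes, the primary-vsw0 switch name, and the console port are all illustrative:

```shell
# Export the zvol through the virtual disk service
ldm add-vdsdev /dev/zvol/dsk/rpool/ldomvols/goldvol golddisk@primary-vds0

# Create a guest domain and give it CPU, memory, network and disk
ldm add-domain ldom1
ldm add-vcpu 8 ldom1
ldm add-memory 2g ldom1
ldm add-vnet vnet0 primary-vsw0 ldom1
ldm add-vdisk vdisk0 golddisk@primary-vds0 ldom1

# Bind the resources and start the domain
ldm bind-domain ldom1
ldm start-domain ldom1

# Connect to its console; the port number comes from ldm list
telnet localhost 5000
```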


Copyright 2017
