Demonstration: Creating the Guest Domain
Installing Oracle VM Server for SPARC (formerly Solaris Logical Domains) on SPARC T-Series Servers, Part 6
>> Mick: Another five minutes and we should be pretty well there.
[pause]
I’m just going to use my copy and pasting again.
[pause]
I’m going to call the domain hpf1. It could be called anything; the pasted commands just get the hpf initials substituted in.
Here I’m adding the MAU (crypto) unit and adding the virtual CPUs. This is a repetitive process, and the only thing that varies between domains is the actual resources you configure, which can be changed dynamically while the domain is running. So if you can’t quite get it right, don’t panic: you can add or subtract any of these resources on the fly.
Here I’m adding a virtual network device, vnet0, so that when the domain runs we’ll have a vnet0 we can configure with ifconfig. Here we are assigning the disk and making sure the domain boots: we just copy that and set the default boot device, which is the disk we’ve just assigned.
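[Editor's note: the sequence being pasted looks roughly like this. The domain name hpf1 comes from the demo; the CPU, memory, and MAU counts and the volume and service names are illustrative assumptions, and the disk backend is assumed to have been exported with ldm add-vdsdev in an earlier part of the series:]

    ldm add-domain hpf1                                 # create the domain
    ldm set-mau 1 hpf1                                  # one crypto (MAU) unit
    ldm add-vcpu 8 hpf1                                 # eight virtual CPUs
    ldm add-memory 4G hpf1                              # 4 GB of RAM
    ldm add-vnet vnet0 primary-vsw0 hpf1                # virtual network device vnet0
    ldm add-vdisk vdisk0 hpf1-vol@primary-vds0 hpf1     # the assigned disk
    ldm set-var auto-boot\?=true hpf1                   # boot automatically when started
    ldm set-var boot-device=vdisk0 hpf1                 # default boot device = that disk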
[pause]
And then, fingers crossed, we bind the domain, which assigns all the resources. It would have given us an error had I made any mistakes with all that typing; that was the trickiest part of the presentation. Again, as I’ve shown you, you can do list-bindings to find out what those bindings actually are.
[pause]
I can do a list-domain, and there’s our list-bindings, which shows the resources we’ve actually allocated. Then I can write my configuration away, and then I can start the domain. Actually, I’ll do it the other way around.
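[Editor's note: roughly, the bind-and-start sequence; the saved configuration name is an assumption:]

    ldm bind-domain hpf1          # bind: allocate all the configured resources
    ldm list-bindings hpf1        # show exactly what was allocated
    ldm start-domain hpf1         # start the domain
    ldm add-config hpf1-config    # write the configuration away to the service processor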
[pause]
If I do an ldm list, I can see the port to which I can telnet to get the console. I say, “telnet localhost” with that port. Don’t forget that this is done in the control domain to that particular… And there we go. We’ve got our new Logical Domain, and there’s the OK prompt, which responds to all the normal SPARC OpenBoot PROM commands that you’re familiar with.
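[Editor's note: a rough illustration; the console port (here 5000) varies per guest and appears in the CONS column of the listing:]

    ldm list
    # NAME      STATE     CONS    VCPU  MEMORY  ...
    # primary   active    SP      8     4G
    # hpf1      active    5000    8     4G
    telnet localhost 5000         # attach to the guest console -> the OK prompt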
Now we can do a boot net; we might have to set up boot support on a boot server. And by the way, in the same way that we associated the actual disk device, we can associate a CD or ISO image to build from as well.
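[Editor's note: attaching an ISO works the same way as the disk; a sketch, with assumed paths and names:]

    # in the control domain: export the ISO read-only, then give it to the guest
    ldm add-vdsdev options=ro /export/isos/sol10.iso iso-vol@primary-vds0
    ldm add-vdisk vdisk_iso iso-vol@primary-vds0 hpf1

    # at the guest's OK prompt: network install, or boot from the ISO
    {0} ok boot net - install
    {0} ok boot vdisk_iso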
[pause]
>> Dave: Hey Mick, we’ve got a few questions in the queue, if I can throw them out to you.
>> Mick: Yeah. We’re just about finished actually, Dave. I’ll just put the last slide up.
>> Dave: Perfect. I’ll hold them. Thank you.
>> Mick: Just to remind people about the SkillBuilders capability in this area. But thanks, everybody, for listening. Let’s have some questions.
>> Dave: Thanks, Mick. A couple of questions straight away. Can you recap the difference between a Logical Domain and a Container – maybe the benefits and disadvantages, the pros and cons?
>> Mick: A Logical Domain is a distinct hardware feature: you are effectively creating a separate physical system, and you install and maintain the operating system within it separately. Your admin load will increase a little with Logical Domains, but you can have completely different operating system instances and patches, at completely different levels.
A Container, on the other hand, is a software facility. So if you had a Solaris box with Containers, you’d be running an initial instance of the operating system called the global zone, and then you would create further zones, or Containers, within that, but they’d all be using the same kernel.
So the virtualization is done through kernel facilities and daemons that maintain the separation between the different zones. It’s virtually impossible, for example, to have those zones running at different patch levels within the OS.
Containers are the free technology that comes with any Solaris box, and they’re highly suitable for certain types of application. But when you want full separation on distinct hardware, for the extra stability of one environment never being able to impact another, then a Logical Domain would be the obvious choice.
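[Editor's note: for comparison, Containers are created with the zone tools inside one running Solaris instance rather than with ldm; a minimal sketch, with the zone name and path assumed:]

    zonecfg -z web1 "create; set zonepath=/zones/web1"   # define the Container
    zoneadm -z web1 install                              # install it from the global zone
    zoneadm -z web1 boot                                 # boot it
    zlogin web1                                          # log in: same kernel as the global zone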
[pause]
>> Dave: That’s great. Another question. Can I use SSH to access the LDom?
>> Mick: You can, Dave, but not until the LDom actually has Solaris installed. You can’t use SSH to access the OK prompt like I’m doing here; at that stage you’re talking to the firmware.
To get access to the OK prompt of a Logical Domain, I have to log in to the control domain first and then use telnet. But once the Logical Domain is built and you’ve configured SSH access, you can log into it just like any other system. That’s the answer to that one, Dave.
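[Editor's note: in other words, roughly – the hostname, port, and user here are assumptions:]

    # at the OK prompt: console access only, from the control domain
    telnet localhost 5000

    # once Solaris is installed in the guest and SSH is enabled
    ssh admin@hpf1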
[pause]
>> Dave: Great. Mick, can you also recap the critical domains – for example, the I/O domain? There’s also the control domain and the guest domain. What are the memory requirements associated with these domains?
>> Mick: They are service domains, if you like, but the main ones are your guest domains. So if you took the T3 as an example, the guest domains there are running Oracle databases, and therefore each domain is sized according to the requirements of the applications you run, just like you would size any other Solaris system.
The control and the I/O domains are maintained separately. Although they can run applications – in the particular case of the client we know about, they are purely there to service the guest domains – they only need enough resources to run the operating system. A single core is absolutely fine, and a couple of gigabytes of RAM. But just like any other domain, if we find that allocation is not quite enough, we can dynamically change it while the machine is running.
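[Editor's note: resizing a running domain uses the same kind of command; the sizes here are illustrative, and note that changes to the control domain itself may enter a delayed reconfiguration and only take effect after it reboots:]

    ldm add-vcpu 4 hpf1        # grow a running guest by four virtual CPUs
    ldm set-memory 8G hpf1     # resize its memory to 8 GB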
Now if we go back through the notes, I can show you – going back to page 35. In my current project we’ve created a control domain and an I/O domain: we created a Logical Domain and gave it ownership of a PCI bus, so we can physically apportion the hardware to an I/O domain. The control domain owns one PCI bus and the I/O domain owns another, each of which has an HBA controller connected to the SAN. The guest domains are configured so that they use one or the other, and if one of those domains fails they switch over to the other.
Having said that, not all T-Series servers have the ability to split the PCI buses, which is [9:17 inaudible]; the T3 server that we’re working on does. You can also split the network interfaces across both domains, and you can therefore create multipathing such that if one domain fails, the guests automatically use the other.
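[Editor's note: the bus split is done with the I/O subcommands; a sketch, where the bus name pci@400 and the domain name io-dom are assumptions (the real bus names appear in ldm list-bindings primary), and removing a bus from the running control domain typically takes effect only after it reboots:]

    ldm remove-io pci@400 primary   # release one PCI bus from the control domain
    ldm add-io pci@400 io-dom       # assign it to the second (I/O) domain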
So it’s surprising the level of resilience you can build into a single physical machine. In our case here, the control domain acts as an I/O domain and as the service domain as well. The service domain has the virtual services associated with it rather than the physical ones.
[music]
Copyright SkillBuilders.com 2017