Just found out today that very, very small Nutanix environments do NOT like network appliances with 8 cores, 16GB RAM, and 2 NICs running on them. One group installed one of our NAC boxes on a Nutanix, and another group noticed 6 weeks later and threw a red flag on us.
He was particularly upset about the cores, since the box itself only has 6...
:doh:
Firstly, Nutanix?
Secondly, how were they able to provision 8 cores when the hardware only has 6?
You can give a VM as many virtual cores as you want
Dieselboy: Nutanix is cheaper than Vblock. Therefore, it is more scalable from a financial perspective.
This is me using up all the resources on the littlest, cheapest Nutanix installed at that site and then the VM team's response:
:mssql:
I'm the guy on the right, this time. :'(
Hyperconverged and/or distributed storage (Ceph) is the future. Vblock and FlexPod are so 2014. [emoji14]
Mythos
Nutanix, EVO with vSAN, etc. The tide is coming.
We had Nutanix do an on-site class last Friday. We've had a demo appliance for a while, but no real time to learn it. It was pretty nice. I actually like their hypervisor. I didn't know you could get boxes that small.
-Otanx
Quote from: wintermute000 on July 21, 2016, 02:58:37 AM
You can give a VM as many virtual cores as you want
But what's the point if the physical hardware only has X cores? Surely the rule for virtual cores is 0 < n <= X, where X is the hardware core count. Is there any benefit to giving a VM more virtual cores than the physical hardware has? I can only think it would hurt performance?
Yes, it kills performance. But the question was whether you can, not whether you'd want to.
:zomgwtfbbq:
Quote from: Dieselboy on July 22, 2016, 02:08:09 AM
:zomgwtfbbq:
My thoughts exactly... but if one has three servers that ask for 4 processors each while really only running on 2 most of the time, they can fit nicely on a 6-processor box. If, however, it's a server that EXPECTS those resources 100% of the time because it doesn't know it's not real hardware, then it can sometimes delay or fail because the resources are not released to it.
My NAC box demands those resources... the Windows servers on that Nutanix are not getting the resources that they want.
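To put my example above in numbers, here's a quick sketch; the VM names and figures are all made up for illustration:
[code]
# Three Windows servers, each allocated 4 vCPUs but typically
# keeping only ~2 cores busy. (Illustrative numbers, not real VMs.)
physical_cores = 6

vms = [
    ("win-srv-1", 4, 2),   # (name, vCPUs allocated, cores typically busy)
    ("win-srv-2", 4, 2),
    ("win-srv-3", 4, 2),
]

allocated = sum(v[1] for v in vms)   # 12 vCPUs promised
demand = sum(v[2] for v in vms)      # ~6 cores actually busy

print(f"Allocated {allocated} vCPUs on {physical_cores} cores "
      f"({allocated / physical_cores:.0f}:1 oversubscribed)")
if demand <= physical_cores:
    print("Steady-state demand fits; nobody notices.")
else:
    print("Contention: VMs queue on the scheduler.")
[/code]
The NAC box is the VM that breaks the assumption in that sketch: its "typically busy" number equals its allocation.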
Are you guys saying that people do this in production?
Yes. All the time. It's basically "thin provisioning", the computational equivalent of fractional-reserve banking.
Quote from: Dieselboy on July 22, 2016, 07:47:18 AM
Are you guys saying that people do this in production?
By "this" do you mean assigning more virtual cores to a single VM than the hardware has? No, that's not usually done unless someone goofs. However, if you mean assigning more virtual cores than the hardware has across multiple VMs, with no single VM exceeding the hardware, then yes, all the time. As Dean said, most systems don't use all their cores all the time, so I can get away with oversubscribing my physical cores, knowing that not all the systems will try to use all the cores at the same time. This is also done with RAM and storage. One of our customers that tracks their oversubscription has (IIRC) 4:1 oversubscription on cores and RAM, and something massive like 300:1 for disk space. Oversubscription is one of the big benefits of virtualization.
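The ratio math is just allocated over physical. A toy sketch, with invented figures in the same ballpark as the ones above:
[code]
# Illustrative oversubscription ratios -- every figure here is
# made up for the example, not from a real cluster.
resources = {
    # resource:  (allocated to VMs, physically present)
    "cores":     (256, 64),     # 4:1
    "ram_gb":    (4096, 1024),  # 4:1
    "disk_tb":   (300, 1),      # 300:1, thin provisioned
}

for name, (allocated, physical) in resources.items():
    print(f"{name}: {allocated}/{physical} = {allocated / physical:.0f}:1")
[/code]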
-Otanx
I was referring to giving a single VM more CPU than exists in hardware.
On our RHEV environment we generally run up to 90 VMs on 2 physical hosts, each with 128GB RAM and 24 CPU threads (two Xeon CPUs with 6 cores each, hyperthreaded). RAM is overallocated by 150%, but we have been discussing reducing this to 100%. Storage was in the 1000% range when I first joined here, but I've got it down to 92% on one volume and 150% on the other.
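Rough math on those numbers, assuming "overallocated by 150%" means the guests have been promised 1.5x the physical RAM:
[code]
# Back-of-the-envelope check on the cluster described above.
hosts = 2
ram_per_host_gb = 128
threads_per_host = 24    # 2 sockets x 6 cores x 2 (hyperthreading)
vm_count = 90

physical_ram_gb = hosts * ram_per_host_gb   # 256 GB
allocated_ram_gb = physical_ram_gb * 1.5    # 384 GB promised to guests
total_threads = hosts * threads_per_host    # 48 schedulable threads

print(f"{vm_count} VMs across {hosts} hosts (~{vm_count // hosts} per host)")
print(f"RAM: {allocated_ram_gb:.0f} GB allocated on {physical_ram_gb} GB physical")
print(f"CPU: {vm_count} VMs sharing {total_threads} threads")
[/code]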
Quote from: Dieselboy on July 24, 2016, 08:40:29 PM
I was referring to giving a single VM more CPU than exists in hardware.
That's still a no-no, but the software won't stop you from doing it, and it will 'work', just badly.
Quote from: Dieselboy on July 21, 2016, 02:52:57 AM
Firstly, Nutanix?
Secondly, how were they able to provision 8 cores when the hardware only has 6?
You can provision 32 cores if you would really like :D
But when things really start cooking CPU-wise, like you're describing, you'll hit CPU-ready situations, and things start to break given the CPU latency that comes with virtualization and the way ESXi schedules vCPUs.
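If you want to put a number on "CPU ready", this is the commonly cited back-of-the-napkin conversion for vCenter's real-time charts; the 2000 ms figure below is just an example:
[code]
# vCenter's "CPU Ready" summation counter reports milliseconds of
# ready time per sample; real-time charts sample every 20 seconds.

def cpu_ready_pct(ready_ms: float, interval_s: float = 20.0) -> float:
    """Share of the interval a vCPU sat runnable but unscheduled."""
    return ready_ms / (interval_s * 1000.0) * 100.0

# Rule of thumb: sustained ready time above ~5% per vCPU is trouble,
# and a VM with more vCPUs than the host has cores gets there quickly.
print(f"{cpu_ready_pct(2000.0):.0f}% ready")   # 2000 ms of 20 s -> 10%
[/code]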
Yes, I was expecting that, so I still don't know why anyone would bother with such a config, even in a personal environment. Not even for a lab; it's going to break or cause problems. So in production? Seems like crazy talk.
:glitch:
This is why you're a network guy and not a VM guy.
It works very well for servers, since many of them run at 1-10% utilization of their resources all the time and really don't need more than one processor for what they're doing. The problems start when a hardware appliance is virtualized and its OS expects to have all the resources, all the time, or it refuses to work.
Thanks, Dean.
Sorry to this thread for my repeated comments/questions, but I still don't understand why anyone would even consider this.
To me, it's like: yes, you can physically fit 20" alloy rims on your hatchback, but it will drive like crap, look like crap, steer like crap, and slow your 2.0-litre engine down.
What's the benefit of fooling the top-level system? Is it a licensing thing? I did skim through the posts again to see if this was explained already. Apologies if so.
The analogy is more like how a bank with $1 billion in deposits can loan out $10 billion because the chance that its depositors will withdraw all of the $1 billion deposits is slim to none, most days. So, instead of just loaning out a billion, it loans out ten times that amount... because it can...
But does the analogy hold? My point is: what's the point of loaning out $10 billion if it's just going to sit in people's deep freezers? I don't see any advantage.
This is why you are also not a banker. :)
It's there just in case.
Thanks Dean - guess I'll just file this one in my repositories under "Stuff I just don't understand".
Thanks all for the explanations above.