KVM:
A virtual CPU is a thread in the qemu-kvm process; qemu-kvm is, of course, multithreaded.
Unless you pin the vCPU threads to specific CPUs, the system scheduler allocates them CPU time from whatever cores are available, so any vCPU can end up getting cycles from any physical core.
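As a minimal sketch of pinning via the libvirt Python bindings (the domain name "myvm" and the 4-core host layout are assumptions for illustration):

```python
import libvirt

# Connect read-write to the local qemu-kvm hypervisor.
conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("myvm")  # "myvm" is a hypothetical domain name

# Pin vCPU 0 to physical core 2 on an assumed 4-core host.
# The cpumap holds one boolean per host CPU; True means the vCPU
# is allowed to run on that core.
dom.pinVcpu(0, (False, False, True, False))

conn.close()
```

The equivalent one-liner from the shell is `virsh vcpupin myvm 0 2`.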
A vCPU equates to one physical core, but when your VM processes something, it can run on any core that happens to be available at that moment. The scheduler handles this transparently and the VM is not aware of it. You can assign multiple vCPUs to a VM, which allows it to run concurrently across several cores.
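Setting the vCPU count can be done the same way; a sketch, again assuming a domain named "myvm" whose configured maximum vCPU count allows it:

```python
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("myvm")  # hypothetical domain name

# Give the running VM 2 vCPUs; each is backed by a qemu-kvm thread
# that the host scheduler can place on any free core.
dom.setVcpusFlags(2, libvirt.VIR_DOMAIN_VCPU_LIVE)

conn.close()
```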
Cores are shared between all VMs as needed, so you could have a 4-core system with 10 VMs running on it and 2 vCPUs assigned to each. VMs share all the cores in your system quite efficiently, as determined by the scheduler. This is one of the main benefits of virtualization: making the most of under-subscribed resources to power multiple OS instances.
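To make the arithmetic of that scenario explicit:

```python
# Overcommit math for the scenario above: a 4-core host
# running 10 VMs with 2 vCPUs each.
physical_cores = 4
vms = 10
vcpus_per_vm = 2

total_vcpus = vms * vcpus_per_vm                 # 20 vCPUs
overcommit_ratio = total_vcpus / physical_cores  # 5.0 vCPUs per core

print(f"{total_vcpus} vCPUs on {physical_cores} cores "
      f"= {overcommit_ratio:.0f}:1 overcommit")
```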
VMware:
A vCPU is a virtual processor. You can assign multiple vCPUs to a virtual machine (up to 4 on older ESX versions, though that limit likely no longer exists), but you should never exceed the number of physical sockets you have; for example, if you have a 2-CPU server you should assign a maximum of 2 vCPUs to a VM.
The number of virtual CPUs you run per core depends on the workload of the VMs and the amount of resources you expect them to use on your ESX host: the more VMs running off the server, the lower the performance. It's all down to doing your maths beforehand and working out what you can safely configure on each ESX host.
4-8 VMs per core is the norm. Stick closer to 4 if you are looking for performance; if maximizing the number of VMs per ESX host matters more than performance, you can move closer to 8.
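As a sketch of that maths, assuming a hypothetical 8-core ESX host:

```python
# Rough capacity range from the 4-8 VMs-per-core rule of thumb.
cores = 8  # assumed core count for an example ESX host

performance_oriented = cores * 4  # favour per-VM performance
density_oriented = cores * 8      # favour VM count

print(f"An {cores}-core host supports roughly "
      f"{performance_oriented}-{density_oriented} VMs, "
      f"depending on workload")
```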