Whole house technology solution
<blockquote data-quote="fujistick" data-source="post: 556680" data-attributes="member: 65261"><p>Hi pnyberg,</p><p></p><p>I had considered running everything on one machine using ESXi. I don't think DirectX support would be a problem; with the magic of PCI passthrough, one could map a physical graphics card and sound card to the frontend client VM, so it would behave almost like a separate machine, but with the benefits of ESXi. USB might be a little trickier (I'm not sure why, but only up to two PCI devices can be mapped to a single VM); however, I'm pretty sure you can get USB ports you can connect to over a network, which should solve that problem and be fine for low-bandwidth things like an IR receiver.</p><p></p><p>However (and this is the real showstopper), I don't have enough PCIe slots on the server's motherboard to do this. Already used are: a wasted slot for a useless graphics card (I don't think this one can be mapped to a VM, as ESXi is using it to show a text screen explaining how to connect remotely), 2 x TV tuner cards, a RAID card, and network card(s).</p><p></p><p>With the frontend running on a separate PC, at least the power-hungry gaming graphics card (and the PC itself) will only be awake and running when in use, which is still a better situation power-consumption-wise than what I have now, with everything turned on all the time.</p><p></p><p>Another benefit is that the damage is limited when mates come over, get drunk, and decide that running executable attachments from spam email on the big screen is a good idea. Blowing that image away and restoring a recent backup should be pretty quick. With a single server/frontend, the frontend OS is effectively the "host" OS, which would mean all the VMs might go down if something went wrong with it.</p><p></p><p>Also, I forgot to add the NICs to the server specs in my original post. Most onboard NICs aren't supported by ESXi out of the box, so I bought an Intel Pro 1000 CT NIC to test whether my current HTPC's Asus P5Q-Pro motherboard had VT-d support. It doesn't. If the onboard NIC doesn't work on the new server with ESXi, I'll probably buy an Intel dual-port gigabit NIC, which I think only comes in server versions, and use the Intel test NIC I bought in the HTPC instead of the onboard Realtek one.</p></blockquote><p></p>
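For what it's worth, the GPU/sound passthrough idea described above ends up as a couple of entries in the frontend VM's .vmx file once the devices are flagged for passthrough on the host. A rough sketch only: the PCI addresses below are placeholders, and the exact keys can differ between ESXi versions, so substitute the addresses your own host reports (e.g. via `esxcli hardware pci list` or the vSphere client's passthrough page):

```ini
; Hypothetical excerpt from the frontend VM's .vmx configuration after the
; graphics card and its audio function have been marked for passthrough.
; "01:00.0" / "01:00.1" are placeholder PCI addresses - use the ones your
; ESXi host actually lists for the cards you want to hand to the VM.
pciPassthru0.present = "TRUE"
pciPassthru0.id = "01:00.0"
pciPassthru1.present = "TRUE"
pciPassthru1.id = "01:00.1"
```

Note this only works at all if the board/CPU expose VT-d, which is exactly what the Intel NIC test on the P5Q-Pro ruled out.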