Hi,
Not really a 'problem' as such, just looking for people's thoughts on the topic.
In the past I've been using basic Realtek AC97 motherboard audio, where everything was resampled to 48000 Hz; I didn't have a choice, so I just lived with it. I've recently upgraded to a discrete soundcard (Asus Xonar HDAV1.3 Deluxe) which lets the user choose the sample rate at which audio is output - 44100, 48000, 96000, or 192000 Hz. Unfortunately, it doesn't auto-select a sample rate based on the media playing; it just resamples anything that doesn't match the chosen rate.
My music library consists primarily of CDs ripped to FLAC - these are 16-bit @ 44100 Hz; ideally these would be output from the soundcard at 44100 Hz.
Many of my .avi files have audio encoded at 48000 Hz, and the digital TV channels here in New Zealand that don't have an AC3 soundtrack broadcast their audio as AAC at 48000 Hz; ideally these would be output at 48000 Hz.
So, to avoid getting up and changing the output sample rate every time I change media, I'm wondering what people think would be the preferred rate to set and forget.
If I set it to 44100 Hz I get the correct sample rate for my music, but video and TV get downsampled; if I set it to 48000 Hz, all my music gets upsampled. Ordinarily I'd say upsampling is preferable to downsampling, and on that basis I should set it to 48000 Hz. But my music is 'lossless CD quality', whereas the audio on my .avi files and the AAC soundtrack on the TV broadcasts is all lossy compressed - so do I want to 'damage' a CD track by upsampling from 44100 Hz to 48000 Hz, or further 'damage' an already compressed audio signal by downsampling from 48000 Hz to 44100 Hz?
How much 'damage' is done to a CD track by upsampling to 48000 Hz?
Would it be better to upsample a CD track from 44100 Hz to 48000 Hz, or to 96000 Hz?
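As an aside, part of why 44100→48000 conversion is considered a hard case is that the two rates don't divide evenly: a rational resampler has to interpolate by 160 and decimate by 147 (their ratio after removing the common factor of 300), which demands a good filter to do cleanly. A tiny pure-Python sketch of that arithmetic (illustration only, not a real resampler):

```python
# Work out the integer up/down factors a rational (polyphase) resampler
# would use to convert CD audio (44100 Hz) to 48000 Hz.
from math import gcd

src, dst = 44100, 48000
g = gcd(src, dst)               # common factor: 300
up, down = dst // g, src // g   # interpolate by 160, decimate by 147

print(f"resample ratio: up {up} / down {down}")  # up 160 / down 147
```

By contrast, 44100→88200 or 48000→96000 is a clean 2:1 ratio, which is why integer-multiple conversions are generally regarded as the easy case.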
Are there any other solutions? I output any AC3 soundtracks via bitstreaming to my receiver, which does the decoding - the AAC TV broadcasts are handled by the Monogram AAC decoder, and my .avi audio is dealt with by ffdshow (if I remember correctly). Is it possible to 'capture' the audio from my video files/TV broadcasts and encode it into AC3 to be output to my receiver? That way I could set the soundcard output to 44100 Hz, my music would be played 'perfect', and all video/TV audio would be output as an AC3 data stream to be decoded by the receiver (which can auto-switch between sample rates depending on the source material - why can't soundcards do this??!!).
Anyways, thanks for listening - any thoughts on the topic would be greatly appreciated.