
Re: [cdi-devel] [PATCH] CDI.audio

On Thu, Nov 18, 2010 at 8:51 AM, Kevin Wolf <kevin@xxxxxxxxxx> wrote:
> On Wed, Nov 17, 2010 at 11:02:11PM +0100, Max Reitz wrote:
>> Kevin Wolf wrote:
>> >> +struct cdi_audio_device {
>> >> +    int record;
>> >
>> > Hm, shouldn't that be a property of a stream/channel? It seems odd to
>> > have two devices appearing in the system for one physical sound card,
>> > just to have one for playback and one for recording. It's like having
>> > different devices for reading from a disk and for writing to it.
>> Well, actually I wanted to use multiple streams per device only if there's a hardware mixer, thus I thought it to be more appropriate here.
>> And, as far as I'm concerned, I do think a microphone and loud speakers actually are different devices. ;-)
> But you don't have a driver for the microphone but for the sound card.
> Anyway, I think this is a point where the text needs to be clearer: Matt
> seems to be talking about multiple interfaces that address different
> output devices whereas you are talking about hardware mixing.
> Maybe Matt's point that you can have multiple output devices attached to
> only one card means that treating each output/input device (such as
> microphones or speakers) as a different cdi_audio_device actually makes
> most sense.
> In this case, let's just keep what you have to support hardware mixing.

Windows and Linux both show me a heap of output and input choices when
I do sound-related stuff. All these outputs and inputs are linked to a
specific sound device, which also has properties that can be set. It
seems they also create multiple "objects" (not "devices") that
reference back to the root "device" but provide different views of it
to applications.

In other words...

You don't have a driver for the microphone but the sound card does
essentially offer multiple devices. The fact that they are
"interfaces" rather than "pure devices" is a technicality - I would
consider a sound card's "interfaces" much like USB interfaces: if a
USB device has two interfaces (one that is Mass Storage, the other a
camera perhaps) you don't expose both interfaces as one device, right?
You expose them as a disk and a camera - two separate devices that
stem from the same piece of hardware.

(I should note Pedigree does not yet support multi-interface devices,
so I can't comment on how badly it screws up the driver interface.)

The distinction between, say, the "line out" jack rather than the
"center speaker" jack should IMHO be clear. Whether that is done by
implementing each input/output offered by sound hardware as a device
(my preferred option - upper layers outside the driver can
provide APIs that mix multiple outputs to create surround sound and
such) or by some other mechanism in CDI is up for discussion I think.

That is the point I'm making - I believe hardware mixing itself should
be done on a "per-interface" or "per-input/output" level. That way, if
one application is outputting sound to all speakers in a surround
sound environment, another application can jump in on the center
speaker, because the mix is separate per output. I hope that's a bit
clearer now - early mornings don't do well for my explaining skills,
it seems!

On Thu, Nov 18, 2010 at 8:57 AM, Max Reitz <max@xxxxxxxxxx> wrote:
> Kevin Wolf wrote:
>> True. But then Kanal is just as wrong.
> That's why I used "Channel" in German when referring to channels. *g*
>>>>> +struct cdi_audio_driver {
>>>>> +    int (*transfer_data)(struct cdi_audio_stream* stream, size_t buffer,
>>>>> +        void* memory);
>>>> Would it make sense to use a cdi_mem_area instead of size_t/void*?
>>> May be true, though I don't think it's a big difference whether the driver receives the structure or just the vaddr, because I don't think any driver will use this buffer directly without copying its content.
>> Well, you never know. And even if it converts it to a linear buffer this
>> might mean that we can save an additional copy in the OS.
> OK.

I agree with the use of a cdi_mem_area here.

>>>> How is success/error defined? You say "The amount of data transmitted
>>>> equals the buffer's size" (I assume this refers to stream->buffer_size).
>>>> So is buffer != stream->buffer_size an error or would it result in
>>>> something like a short write?
>>> I guess it would be considered an error (though I don't know why it should partially fail).
>> Assume stream->buffer_size = 42 and I provide buffer = 1337. What
>> happens? Does it copy 42 bytes (this is what your documentation
>> suggests) or does it fail immediately?
> It would copy 42 bytes, though I don't know what you mean by "buffer = 1337". I guess no driver will ever support 1338 buffers, the AC97 e.g. will probably use 32.

I would presume the driver should sanitise the input (and possibly
create a set of "in-advance" streams for buffers bigger than those
that the hardware can handle, ready to be used once the current stream
completes).

>>>> Is there a way to update the data later? I'm thinking of a mixer that
>>>> sent 30 seconds in advance to keep the CPU load low, but then the user
>>>> hit the stop button.
>>> The stop button is handled by the application, thus it must make sure the mixer won't get the data 30 seconds in advance (I think; I've never written such an application).
>> You'll always want to provide some data in advance. The shorter your
>> buffers are, the higher are the chances that you get dropouts. So the
>> application can't make it too short. On the other hand, a stop button that
>> works only with a delay isn't really acceptable either.
> Hm... Anyone who knows how this is handled on Linux etc.?

To be honest, I wouldn't be looking at how Linux does audio as a
reference ;-). Jokes aside - it might be worth delving into the source
code of an audio-based application, as well as the source for something
such as PulseAudio, to determine how that kind of stuff works - whether
"stop" is passed all the way down to the driver, or whether the
userspace audio layer handles it.