Re: [cdi-devel] [PATCH] CDI.audio
Matthew Iselin wrote:
> I would presume the driver should sanitise the input (and possibly
> create a set of "in-advance" streams for buffers bigger than those
> that the hardware can handle, ready to be used once the current stream
Hm, I don't really understand what you want to say... Both the number of streams (or channels, as Kevin proposed) and the number of buffers, as well as their size, may be predefined by the hardware and are then fixed from the driver's point of view. Hence a stream cannot simply complete, nor can a CDI.audio driver create a new one out of the blue with an arbitrary buffer size.
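To make the point concrete, here is a minimal sketch of what "sanitising the input" against hardware-fixed limits could look like. The struct and function names are hypothetical, not the actual CDI.audio types: the driver merely reports the stream count, buffer count and buffer size the hardware dictates, and clips any oversized request to what one buffer can hold.

```c
#include <stddef.h>

/* Hypothetical device description -- the real CDI.audio structures may
 * differ.  The point: stream count, buffer count and buffer size are
 * fixed by the hardware; the driver only reports them. */
struct audio_device {
    size_t stream_count;   /* hardware streams/channels */
    size_t buffer_count;   /* buffers per stream, fixed by hardware */
    size_t buffer_size;    /* bytes per buffer, fixed by hardware */
};

/* Sanitise a requested write: clip it so it fits into one hardware
 * buffer instead of letting oversized input run past the end. */
static size_t clip_to_buffer(const struct audio_device* dev,
                             size_t offset, size_t length)
{
    if (offset >= dev->buffer_size) {
        return 0;                       /* nothing fits */
    }
    size_t room = dev->buffer_size - offset;
    return (length < room) ? length : room;
}
```

A caller that wants to push more data than this would have to split it across several buffers (or several transfers), rather than expecting the driver to conjure up a bigger stream.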
> it might be worth delving into the source
> code of an audio-based application as well as the source for something
> such as pulseaudio to determine how that kind of stuff works - whether
> "stop" is passed all the way down to the driver or not or if the
> userspace audio layer does it.
Well, today I had a look at the mplayer code. It apparently caches up to 64 kB (which equals several hundred milliseconds of sound). If it wants to stop the current playback, it calls a function which forces the output device (e.g. ALSA) to drop all the data it has sent. But I have absolutely no clue how this might be implemented. Maybe one would have to keep all the input data around, and if one of the sources drops its samples, the mixer would remix everything without that source and replace the existing data in the sound card's buffer. Anyway, I think that's the mixer's problem and not really related to the sound card interface. Maybe we should add an offset parameter to the transfer_data function, so the mixer doesn't have to replace the buffer's entire content in that case.
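Just to illustrate the offset idea, a sketch of what such a transfer_data could look like -- this is a hypothetical signature, not the actual CDI.audio prototype, and the fixed 4 KB buffer is an assumption. With the offset, the mixer can overwrite only the not-yet-played tail of a card buffer after remixing, instead of resending the whole buffer:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stream with one hardware buffer of fixed size. */
struct stream {
    uint8_t buffer[4096];
};

/* transfer_data with an extra offset argument, as proposed.  The
 * driver sanitises the request and copies the data into the buffer
 * starting at the given offset. */
static int transfer_data(struct stream* s, size_t offset,
                         const void* data, size_t length)
{
    if (offset > sizeof(s->buffer) ||
        length > sizeof(s->buffer) - offset) {
        return -1;                      /* request exceeds the buffer */
    }
    memcpy(s->buffer + offset, data, length);
    return 0;
}
```

After a source drops out, the mixer would remix only the samples that haven't been played yet and call transfer_data with the offset of the current playback position; everything before the offset stays untouched.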
However, it might be a good idea for me to write a simple sample AC97 driver to show how this interface is used...