ASoC: SOF: ipc4-topology: Change DeepBuffer from static to dynamic mode #5673

Open

ujfalusi wants to merge 2 commits into thesofproject:topic/sof-dev from ujfalusi:peter/sof/pr/dynamic_deepbuffer_ipc4
Conversation

@ujfalusi
Collaborator

Currently the DeepBuffer results in a static host DMA buffer and thus a static
minimum ALSA period size, which can be a limiting factor for user space.
A DeepBuffer with a static 100ms host DMA buffer can only be opened with at
least a 110ms ALSA period size; if the same endpoint needs a smaller (or
larger) buffer, a new PCM device must be created with a different DeepBuffer
configuration.
This does not scale in real life.

With Dynamic DeepBuffer the host DMA buffer size is calculated based on the
requested ALSA period size (with a headroom between the two), using the
DEEP_BUFFER token as a maximum limit for the host DMA buffer.
This way applications can use the same DeepBuffer-enabled PCM for different
use cases and still benefit from the power saving of a bigger host DMA
buffer.
As an example, the DEEP_BUFFER in topology is set to 100ms (interpreted as a
maximum size with this patch):
An ALSA period time of 20ms will result in a 10ms host DMA buffer
 - before the patch, if a 10ms host DMA buffer was desired, the minimum ALSA
   period size was 20ms
An ALSA period size of 50ms will result in a 40ms host DMA buffer
 - before the patch, if a 40ms host DMA buffer was desired, the minimum ALSA
   period size was 50ms
An ALSA period size of 110ms will result in a 100ms host DMA buffer
 - before the patch, if a 100ms host DMA buffer was desired, the minimum ALSA
   period size was 110ms
An ALSA period size of 500ms will result in a 100ms host DMA buffer
 - like before this patch: a 500ms ALSA period would use a 100ms host DMA
   buffer
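
The numbers above follow a simple rule, sketched below as a standalone
illustration. This is not the driver code: the helper name is made up, and
the 10ms headroom value is inferred from the example figures, not taken from
the patch itself.

```c
#include <stdio.h>

/*
 * Hypothetical sketch of the dynamic DeepBuffer sizing rule described
 * above: the host DMA buffer follows the requested ALSA period time
 * minus a fixed headroom, capped by the DEEP_BUFFER maximum from
 * topology.
 */
static unsigned int dyn_deep_buffer_ms(unsigned int period_ms,
				       unsigned int deep_buffer_max_ms,
				       unsigned int headroom_ms)
{
	unsigned int buf_ms;

	if (period_ms <= headroom_ms)
		return 0; /* period too small for deep buffering */

	buf_ms = period_ms - headroom_ms;

	return buf_ms < deep_buffer_max_ms ? buf_ms : deep_buffer_max_ms;
}

int main(void)
{
	/* Reproduces the example: DEEP_BUFFER = 100ms, assumed 10ms headroom */
	unsigned int periods[] = { 20, 50, 110, 500 };
	unsigned int i;

	for (i = 0; i < 4; i++)
		printf("%u ms ALSA period -> %u ms host DMA buffer\n",
		       periods[i], dyn_deep_buffer_ms(periods[i], 100, 10));

	return 0;
}
```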

The Dynamic DeepBuffer gives applications the means to choose, on the same
device, between lower latency (small host DMA buffer) and higher power saving
(big host DMA buffer, at higher latency), with the topology providing a
meaningful upper limit for the buffer size.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
…st_size_in_ms

The meaning of the variable has changed with the Dynamic DeepBuffer: it now
reflects the smallest burst that the host DMA does.
This can be used to set a minimum period time constraint to avoid the DMA
burst overshooting the period.

Change the name and update the related code in the Intel hda_dsp_pcm_open()
to reflect the revised meaning.

Signed-off-by: Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
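
A minimal sketch of what such a period time constraint could look like in an
open() callback such as hda_dsp_pcm_open(). The helper and parameter names
are placeholders (the renamed field itself is not reproduced here); only
snd_pcm_hw_constraint_minmax() and SNDRV_PCM_HW_PARAM_PERIOD_TIME are
existing ALSA kernel APIs.

```c
#include <linux/limits.h>
#include <linux/time.h>
#include <sound/pcm.h>

/*
 * Hypothetical sketch: turn the smallest host DMA burst size (in ms)
 * into a minimum ALSA period time constraint, so that a period can
 * never be shorter than one DMA burst.
 */
static int sof_limit_period_to_dma_burst(struct snd_pcm_substream *substream,
					 unsigned int burst_size_in_ms)
{
	if (!burst_size_in_ms)
		return 0;

	/* SNDRV_PCM_HW_PARAM_PERIOD_TIME is expressed in microseconds */
	return snd_pcm_hw_constraint_minmax(substream->runtime,
					    SNDRV_PCM_HW_PARAM_PERIOD_TIME,
					    burst_size_in_ms * USEC_PER_MSEC,
					    UINT_MAX);
}
```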
@lgirdwood
Member

LGTM @ujfalusi. I assume the rule applies to both directions and works well with latency/position reporting granularity? i.e. we don't need to update the granularity of position reporting coming from FW/driver?

@ujfalusi
Collaborator Author

LGTM @ujfalusi. I assume the rule applies to both directions and works well with latency/position reporting granularity? i.e. we don't need to update the granularity of position reporting coming from FW/driver?

Position and delay reporting are not affected by this change; everything works as it did before.
