Initial Commit
4
thirdparty/miniaudio-0.11.24/.github/CODE_OF_CONDUCT.md
vendored
Normal file
@@ -0,0 +1,4 @@
Code of Conduct
===============
I don't believe we need a document telling fully grown adults how to conduct themselves within an open source
community. All I ask is that you just don't be unpleasant and keep everything on topic.
1
thirdparty/miniaudio-0.11.24/.github/FUNDING.yml
vendored
Normal file
@@ -0,0 +1 @@
github: mackron
12
thirdparty/miniaudio-0.11.24/.github/ISSUE_TEMPLATE/bug_report.md
vendored
Normal file
@@ -0,0 +1,12 @@
---
name: Bug report
about: Submit a bug report.
title: ''
labels: ''
assignees: ''

---

**DELETE ALL OF THIS TEXT BEFORE SUBMITTING**

If you think you've found a bug, it will be helpful to compile with `#define MA_DEBUG_OUTPUT`. If you are having issues with playback, please run the simple_playback_sine example and report whether or not it's consistent with what's happening in your program.
12
thirdparty/miniaudio-0.11.24/.github/ISSUE_TEMPLATE/feature_request.md
vendored
Normal file
@@ -0,0 +1,12 @@
---
name: Feature request
about: Submit a feature request.
title: ''
labels: feature request
assignees: ''

---

**DELETE ALL OF THIS TEXT BEFORE SUBMITTING**

Thanks for your suggestion! Please make sure the feature doesn't already exist and that it's within the scope of the goals of the project. Otherwise go into as much detail as possible and we'll consider it!
14
thirdparty/miniaudio-0.11.24/.github/ISSUE_TEMPLATE/general-issue.md
vendored
Normal file
@@ -0,0 +1,14 @@
---
name: General issue
about: Submit a general issue.
title: ''
labels: ''
assignees: ''

---

**DELETE ALL OF THIS TEXT BEFORE SUBMITTING**

If you have a question about how to use the library, please read the documentation at the top of miniaudio.h and take a look at the examples. If that still doesn't answer your question, consider posting in the Discussions section here on GitHub instead. Otherwise, feel free to post your issue and we'll get to it as soon as possible.

If you have an issue with playback, please run the simple_playback_sine example first and check whether or not that is working. Likewise for capture, please run the simple_capture example. If these examples work, it probably (but not always) means you're doing something wrong and a question in the Discussions section is more appropriate.
4
thirdparty/miniaudio-0.11.24/.github/SECURITY.md
vendored
Normal file
@@ -0,0 +1,4 @@
I deal with all security-related issues publicly and transparently, and it can sometimes take a while before I
get a chance to address them. If this is an issue for you, you need to use another library. The fastest way to get
a bug fixed is to submit a pull request, but if this is impractical for you please post a ticket to the public
GitHub issue tracker.
16
thirdparty/miniaudio-0.11.24/.github/pull_request_template.md
vendored
Normal file
@@ -0,0 +1,16 @@
|
**DELETE ALL OF THIS TEXT BEFORE SUBMITTING**

Thanks for your contribution! Before submitting this pull request, make sure you're okay with your code
being put into the public domain. By submitting this pull request we will assume you're agreeing to this.

In addition, please ensure you're respecting the goals of the project. Your pull request will be rejected
if you do any of the following:

* Split your code into multiple files
* Include a dependency on an external library
* Include copyrighted code

Your pull request may still be rejected for reasons other than those listed above, but you will be informed
of the reasons.

Please base your branch off the `dev` branch.
49
thirdparty/miniaudio-0.11.24/.gitignore
vendored
Normal file
@@ -0,0 +1,49 @@
\#docs/
/_private/
/build/
/debugging/
/evaluations/
/examples/build/bin/
/examples/build/codelite/
/examples/build/vc6/
/examples/build/vc15/
/examples/build/vc17/
/examples/simple_playback_sine.cpp
/external/ogg/
/external/vorbis/
/external/opus/
/external/opusfile/
/extras/osaudio/tests/build/bin/
/extras/osaudio/tests/build/vc17/
/extras/osaudio/tests/build/watcom-dos/
/extras/backends/pipewire/a.out
/extras/decoders/litewav/
/research/_build/
/tests/_build/bin/
/tests/_build/res/output/
/tests/_build/cmake-emcc/
/tests/_build/tcc/
/tests/_build/vc6/
/tests/_build/vc15/
/tests/_build/vc17/
/tests/_build/watcom/
/tests/_build/capture.wav
/tests/_build/a.out
/tests/_build/a.exe
/tests/debugging/archive/
/tests/*.c
/tests/*.cpp
/website/docs/
*.vcxproj.user
.vs/
.idea/
.vscode/

# Below are individual files that I may start version controlling later or delete outright.
/examples/build/COSMO.txt
/research/ma_fft.c
/research/ma_hrtf.c
/research/ma_atomic.c
/research/miniaudio_engine.c
/tests/stress/
/tools/hrtfgen/
0
thirdparty/miniaudio-0.11.24/.gitmodules
vendored
Normal file
1196
thirdparty/miniaudio-0.11.24/CHANGES.md
vendored
Normal file
878
thirdparty/miniaudio-0.11.24/CMakeLists.txt
vendored
Normal file
@@ -0,0 +1,878 @@
cmake_minimum_required(VERSION 3.10)

# Extract version from miniaudio.h
file(READ "${CMAKE_CURRENT_SOURCE_DIR}/miniaudio.h" MINIAUDIO_HEADER_CONTENTS)
string(REGEX MATCH "#define MA_VERSION_MAJOR[ \t]+([0-9]+)" _major_match "${MINIAUDIO_HEADER_CONTENTS}")
set(MA_VERSION_MAJOR "${CMAKE_MATCH_1}")
string(REGEX MATCH "#define MA_VERSION_MINOR[ \t]+([0-9]+)" _minor_match "${MINIAUDIO_HEADER_CONTENTS}")
set(MA_VERSION_MINOR "${CMAKE_MATCH_1}")
string(REGEX MATCH "#define MA_VERSION_REVISION[ \t]+([0-9]+)" _revision_match "${MINIAUDIO_HEADER_CONTENTS}")
set(MA_VERSION_REVISION "${CMAKE_MATCH_1}")
set(MINIAUDIO_VERSION "${MA_VERSION_MAJOR}.${MA_VERSION_MINOR}.${MA_VERSION_REVISION}")

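The extraction above can be sanity-checked outside CMake; a minimal Python sketch of the same regex pattern (the header excerpt is a hypothetical stand-in for the real miniaudio.h, whose values may differ):

```python
import re

# Hypothetical excerpt standing in for miniaudio.h.
header = """
#define MA_VERSION_MAJOR    0
#define MA_VERSION_MINOR    11
#define MA_VERSION_REVISION 24
"""

def extract(component):
    # Mirrors: string(REGEX MATCH "#define MA_VERSION_<X>[ \t]+([0-9]+)" ...)
    match = re.search(r"#define MA_VERSION_%s[ \t]+([0-9]+)" % component, header)
    return match.group(1)

version = ".".join(extract(c) for c in ("MAJOR", "MINOR", "REVISION"))
print(version)  # 0.11.24
```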
project(miniaudio VERSION ${MINIAUDIO_VERSION})


# Options
option(MINIAUDIO_BUILD_EXAMPLES "Build miniaudio examples" OFF)
option(MINIAUDIO_BUILD_TESTS "Build miniaudio tests" OFF)
option(MINIAUDIO_BUILD_TOOLS "Build miniaudio development tools. Leave this disabled unless you know what you're doing. If you enable this and you get build errors, you clearly do not know what you're doing and yet you still enabled this option. Why would you do that?" OFF)
option(MINIAUDIO_FORCE_CXX "Force compilation as C++" OFF)
option(MINIAUDIO_FORCE_C89 "Force compilation as C89" OFF)
option(MINIAUDIO_NO_EXTRA_NODES "Do not build extra node graph nodes" OFF)
option(MINIAUDIO_NO_LIBVORBIS "Disable miniaudio_libvorbis" OFF)
option(MINIAUDIO_NO_LIBOPUS "Disable miniaudio_libopus" OFF)
option(MINIAUDIO_NO_WASAPI "Disable the WASAPI backend" OFF)
option(MINIAUDIO_NO_DSOUND "Disable the DirectSound backend" OFF)
option(MINIAUDIO_NO_WINMM "Disable the WinMM backend" OFF)
option(MINIAUDIO_NO_ALSA "Disable the ALSA backend" OFF)
option(MINIAUDIO_NO_PULSEAUDIO "Disable the PulseAudio backend" OFF)
option(MINIAUDIO_NO_JACK "Disable the JACK backend" OFF)
option(MINIAUDIO_NO_COREAUDIO "Disable the CoreAudio backend" OFF)
option(MINIAUDIO_NO_SNDIO "Disable the sndio backend" OFF)
option(MINIAUDIO_NO_AUDIO4 "Disable the audio(4) backend" OFF)
option(MINIAUDIO_NO_OSS "Disable the OSS backend" OFF)
option(MINIAUDIO_NO_AAUDIO "Disable the AAudio backend" OFF)
option(MINIAUDIO_NO_OPENSL "Disable the OpenSL|ES backend" OFF)
option(MINIAUDIO_NO_WEBAUDIO "Disable the Web Audio backend" OFF)
option(MINIAUDIO_NO_CUSTOM "Disable support for custom backends" OFF)
option(MINIAUDIO_NO_NULL "Disable the null backend" OFF)
option(MINIAUDIO_ENABLE_ONLY_SPECIFIC_BACKENDS "Only enable specific backends. Backends can be enabled with MINIAUDIO_ENABLE_[BACKEND]." OFF)
option(MINIAUDIO_ENABLE_WASAPI "Enable the WASAPI backend" OFF)
option(MINIAUDIO_ENABLE_DSOUND "Enable the DirectSound backend" OFF)
option(MINIAUDIO_ENABLE_WINMM "Enable the WinMM backend" OFF)
option(MINIAUDIO_ENABLE_ALSA "Enable the ALSA backend" OFF)
option(MINIAUDIO_ENABLE_PULSEAUDIO "Enable the PulseAudio backend" OFF)
option(MINIAUDIO_ENABLE_JACK "Enable the JACK backend" OFF)
option(MINIAUDIO_ENABLE_COREAUDIO "Enable the CoreAudio backend" OFF)
option(MINIAUDIO_ENABLE_SNDIO "Enable the sndio backend" OFF)
option(MINIAUDIO_ENABLE_AUDIO4 "Enable the audio(4) backend" OFF)
option(MINIAUDIO_ENABLE_OSS "Enable the OSS backend" OFF)
option(MINIAUDIO_ENABLE_AAUDIO "Enable the AAudio backend" OFF)
option(MINIAUDIO_ENABLE_OPENSL "Enable the OpenSL|ES backend" OFF)
option(MINIAUDIO_ENABLE_WEBAUDIO "Enable the Web Audio backend" OFF)
option(MINIAUDIO_ENABLE_CUSTOM "Enable support for custom backends" OFF)
option(MINIAUDIO_ENABLE_NULL "Enable the null backend" OFF)
option(MINIAUDIO_NO_DECODING "Disable decoding APIs" OFF)
option(MINIAUDIO_NO_ENCODING "Disable encoding APIs" OFF)
option(MINIAUDIO_NO_WAV "Disable the built-in WAV decoder" OFF)
option(MINIAUDIO_NO_FLAC "Disable the built-in FLAC decoder" OFF)
option(MINIAUDIO_NO_MP3 "Disable the built-in MP3 decoder" OFF)
option(MINIAUDIO_NO_DEVICEIO "Disable audio playback and capture" OFF)
option(MINIAUDIO_NO_RESOURCE_MANAGER "Disable the resource manager API" OFF)
option(MINIAUDIO_NO_NODE_GRAPH "Disable the node graph API" OFF)
option(MINIAUDIO_NO_ENGINE "Disable the high-level engine API" OFF)
option(MINIAUDIO_NO_THREADING "Disable threading. Must be used with MINIAUDIO_NO_DEVICEIO." OFF)
option(MINIAUDIO_NO_GENERATION "Disable generation APIs such as ma_waveform and ma_noise" OFF)
option(MINIAUDIO_NO_SSE2 "Disable SSE2 optimizations" OFF)
option(MINIAUDIO_NO_AVX2 "Disable AVX2 optimizations" OFF)
option(MINIAUDIO_NO_NEON "Disable NEON optimizations" OFF)
option(MINIAUDIO_NO_RUNTIME_LINKING "Disable runtime linking" OFF)
option(MINIAUDIO_USE_STDINT "Use <stdint.h> for sized types" OFF)
option(MINIAUDIO_DEBUG_OUTPUT "Enable stdout debug output" OFF)
option(MINIAUDIO_INSTALL "Enable installation targets" ON)

include(GNUInstallDirs)

# Construct compiler options.
set(COMPILE_OPTIONS)

# Store libraries to install
# When installing any header that imports miniaudio.h from a relative path, we
# need to maintain its place in the directory tree so it can find Miniaudio
set(LIBS_TO_INSTALL)


# Special rules for Emscripten.
#
# - MINIAUDIO_FORCE_C89 is not supported.
# - MINIAUDIO_NO_RUNTIME_LINKING must be enabled.
if(CMAKE_SYSTEM_NAME STREQUAL "Emscripten")
    set(MINIAUDIO_FORCE_C89 OFF)
    set(MINIAUDIO_NO_RUNTIME_LINKING ON)

    # This is a hack to work around some errors relating to generation of the pkg-config file.
    set(MINIAUDIO_ENABLE_ONLY_SPECIFIC_BACKENDS ON)
    set(MINIAUDIO_ENABLE_WEBAUDIO ON)
endif()


if(MINIAUDIO_FORCE_CXX AND MINIAUDIO_FORCE_C89)
    message(FATAL_ERROR "MINIAUDIO_FORCE_CXX and MINIAUDIO_FORCE_C89 cannot be enabled at the same time.")
endif()

if(MINIAUDIO_FORCE_CXX)
    if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU" OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
        message(STATUS "Compiling as C++ (GNU/Clang)")
        list(APPEND COMPILE_OPTIONS -x c++)
    elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
        message(STATUS "Compiling as C++ (MSVC)")
        list(APPEND COMPILE_OPTIONS /TP)
    else()
        message(WARNING "MINIAUDIO_FORCE_CXX is enabled but the compiler does not support it. Ignoring.")
    endif()
endif()

if(MINIAUDIO_FORCE_C89)
    if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU" OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
        message(STATUS "Compiling as C89")
        list(APPEND COMPILE_OPTIONS -std=c89)
    elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
        message(WARNING "MSVC does not support forcing C89. MINIAUDIO_FORCE_C89 ignored.")
    else()
        message(WARNING "MINIAUDIO_FORCE_C89 is enabled but the compiler does not support it. Ignoring.")
    endif()
endif()

# Warnings
if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU" OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
    list(APPEND COMPILE_OPTIONS -Wall -Wextra -Wpedantic)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
    #list(APPEND COMPILE_OPTIONS /W4)
endif()


# Construct compiler defines
set(COMPILE_DEFINES)

if(MINIAUDIO_NO_WASAPI)
    list(APPEND COMPILE_DEFINES MA_NO_WASAPI)
endif()
if(MINIAUDIO_NO_DSOUND)
    list(APPEND COMPILE_DEFINES MA_NO_DSOUND)
endif()
if(MINIAUDIO_NO_WINMM)
    list(APPEND COMPILE_DEFINES MA_NO_WINMM)
endif()
if(MINIAUDIO_NO_ALSA)
    list(APPEND COMPILE_DEFINES MA_NO_ALSA)
endif()
if(MINIAUDIO_NO_PULSEAUDIO)
    list(APPEND COMPILE_DEFINES MA_NO_PULSEAUDIO)
endif()
if(MINIAUDIO_NO_JACK)
    list(APPEND COMPILE_DEFINES MA_NO_JACK)
endif()
if(MINIAUDIO_NO_COREAUDIO)
    list(APPEND COMPILE_DEFINES MA_NO_COREAUDIO)
endif()
if(MINIAUDIO_NO_SNDIO)
    list(APPEND COMPILE_DEFINES MA_NO_SNDIO)
endif()
if(MINIAUDIO_NO_AUDIO4)
    list(APPEND COMPILE_DEFINES MA_NO_AUDIO4)
endif()
if(MINIAUDIO_NO_OSS)
    list(APPEND COMPILE_DEFINES MA_NO_OSS)
endif()
if(MINIAUDIO_NO_AAUDIO)
    list(APPEND COMPILE_DEFINES MA_NO_AAUDIO)
endif()
if(MINIAUDIO_NO_OPENSL)
    list(APPEND COMPILE_DEFINES MA_NO_OPENSL)
endif()
if(MINIAUDIO_NO_WEBAUDIO)
    list(APPEND COMPILE_DEFINES MA_NO_WEBAUDIO)
endif()
if(MINIAUDIO_NO_CUSTOM)
    list(APPEND COMPILE_DEFINES MA_NO_CUSTOM)
endif()
if(MINIAUDIO_NO_NULL)
    list(APPEND COMPILE_DEFINES MA_NO_NULL)
endif()
if(MINIAUDIO_ENABLE_ONLY_SPECIFIC_BACKENDS)
    list(APPEND COMPILE_DEFINES MA_ENABLE_ONLY_SPECIFIC_BACKENDS)

    if(MINIAUDIO_ENABLE_WASAPI)
        list(APPEND COMPILE_DEFINES MA_ENABLE_WASAPI)
    endif()
    if(MINIAUDIO_ENABLE_DSOUND)
        list(APPEND COMPILE_DEFINES MA_ENABLE_DSOUND)
    endif()
    if(MINIAUDIO_ENABLE_WINMM)
        list(APPEND COMPILE_DEFINES MA_ENABLE_WINMM)
    endif()
    if(MINIAUDIO_ENABLE_ALSA)
        list(APPEND COMPILE_DEFINES MA_ENABLE_ALSA)
    endif()
    if(MINIAUDIO_ENABLE_PULSEAUDIO)
        list(APPEND COMPILE_DEFINES MA_ENABLE_PULSEAUDIO)
    endif()
    if(MINIAUDIO_ENABLE_JACK)
        list(APPEND COMPILE_DEFINES MA_ENABLE_JACK)
    endif()
    if(MINIAUDIO_ENABLE_COREAUDIO)
        list(APPEND COMPILE_DEFINES MA_ENABLE_COREAUDIO)
    endif()
    if(MINIAUDIO_ENABLE_SNDIO)
        list(APPEND COMPILE_DEFINES MA_ENABLE_SNDIO)
    endif()
    if(MINIAUDIO_ENABLE_AUDIO4)
        list(APPEND COMPILE_DEFINES MA_ENABLE_AUDIO4)
    endif()
    if(MINIAUDIO_ENABLE_OSS)
        list(APPEND COMPILE_DEFINES MA_ENABLE_OSS)
    endif()
    if(MINIAUDIO_ENABLE_AAUDIO)
        list(APPEND COMPILE_DEFINES MA_ENABLE_AAUDIO)
    endif()
    if(MINIAUDIO_ENABLE_OPENSL)
        list(APPEND COMPILE_DEFINES MA_ENABLE_OPENSL)
    endif()
    if(MINIAUDIO_ENABLE_WEBAUDIO)
        list(APPEND COMPILE_DEFINES MA_ENABLE_WEBAUDIO)
    endif()
    if(MINIAUDIO_ENABLE_CUSTOM)
        list(APPEND COMPILE_DEFINES MA_ENABLE_CUSTOM)
    endif()
    if(MINIAUDIO_ENABLE_NULL)
        list(APPEND COMPILE_DEFINES MA_ENABLE_NULL)
    endif()
endif()
if(MINIAUDIO_NO_DECODING)
    list(APPEND COMPILE_DEFINES MA_NO_DECODING)
endif()
if(MINIAUDIO_NO_ENCODING)
    list(APPEND COMPILE_DEFINES MA_NO_ENCODING)
endif()
if(MINIAUDIO_NO_WAV)
    list(APPEND COMPILE_DEFINES MA_NO_WAV)
endif()
if(MINIAUDIO_NO_FLAC)
    list(APPEND COMPILE_DEFINES MA_NO_FLAC)
endif()
if(MINIAUDIO_NO_MP3)
    list(APPEND COMPILE_DEFINES MA_NO_MP3)
endif()
if(MINIAUDIO_NO_DEVICEIO)
    list(APPEND COMPILE_DEFINES MA_NO_DEVICE_IO)
endif()
if(MINIAUDIO_NO_RESOURCE_MANAGER)
    list(APPEND COMPILE_DEFINES MA_NO_RESOURCE_MANAGER)
endif()
if(MINIAUDIO_NO_NODE_GRAPH)
    list(APPEND COMPILE_DEFINES MA_NO_NODE_GRAPH)
endif()
if(MINIAUDIO_NO_ENGINE)
    list(APPEND COMPILE_DEFINES MA_NO_ENGINE)
endif()
if(MINIAUDIO_NO_THREADING)
    list(APPEND COMPILE_DEFINES MA_NO_THREADING)
endif()
if(MINIAUDIO_NO_GENERATION)
    list(APPEND COMPILE_DEFINES MA_NO_GENERATION)
endif()
if(MINIAUDIO_NO_SSE2)
    list(APPEND COMPILE_DEFINES MA_NO_SSE2)
endif()
if(MINIAUDIO_NO_AVX2)
    list(APPEND COMPILE_DEFINES MA_NO_AVX2)
endif()
if(MINIAUDIO_NO_NEON)
    list(APPEND COMPILE_DEFINES MA_NO_NEON)
endif()
if(MINIAUDIO_NO_RUNTIME_LINKING)
    list(APPEND COMPILE_DEFINES MA_NO_RUNTIME_LINKING)
endif()
if(MINIAUDIO_USE_STDINT)
    list(APPEND COMPILE_DEFINES MA_USE_STDINT)
endif()
if(MINIAUDIO_DEBUG_OUTPUT)
    list(APPEND COMPILE_DEFINES MA_DEBUG_OUTPUT)
endif()


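The if/endif ladder above is a mostly mechanical mapping from MINIAUDIO_NO_* options to MA_NO_* compile definitions (the one irregular case being MINIAUDIO_NO_DEVICEIO, which maps to MA_NO_DEVICE_IO). A condensed Python sketch of the regular part, with an illustrative rather than exhaustive option list:

```python
# Illustrative subset of the options whose define name follows mechanically.
SIMPLE_NO_OPTIONS = ["WASAPI", "DSOUND", "WINMM", "ALSA", "PULSEAUDIO",
                     "JACK", "COREAUDIO", "SNDIO", "AUDIO4", "OSS"]

def compile_defines(enabled_options):
    # enabled_options: set of MINIAUDIO_* option names that are ON.
    defines = []
    for name in SIMPLE_NO_OPTIONS:
        if "MINIAUDIO_NO_%s" % name in enabled_options:
            defines.append("MA_NO_%s" % name)
    return defines

print(compile_defines({"MINIAUDIO_NO_ALSA", "MINIAUDIO_NO_JACK"}))  # ['MA_NO_ALSA', 'MA_NO_JACK']
```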
# External Libraries
function(add_libogg_subdirectory)
    if(NOT TARGET ogg)
        if(EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/external/ogg/CMakeLists.txt)
            message(STATUS "Building libogg from source.")
            add_subdirectory(external/ogg)
        else()
            message(STATUS "libogg not found.")
        endif()
    endif()
endfunction()

function(add_libvorbis_subdirectory)
    if(NOT TARGET vorbis)
        if(EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/external/vorbis/CMakeLists.txt)
            add_libogg_subdirectory()
            if(TARGET ogg)
                message(STATUS "Building libvorbis from source.")
                add_subdirectory(external/vorbis)
            else()
                message(STATUS "libogg not found. miniaudio_libvorbis will be excluded.")
            endif()
        endif()
    endif()
endfunction()

function(add_libopus_subdirectory)
    if(NOT TARGET opus)
        if(EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/external/opus/CMakeLists.txt)
            message(STATUS "Building libopus from source.")
            set(OPUS_BUILD_TESTING OFF)
            add_subdirectory(external/opus)
        else()
            message(STATUS "libopus not found. miniaudio_libopus will be excluded.")
        endif()
    endif()
endfunction()

function(add_libopusfile_subdirectory)
    if(NOT TARGET opusfile)
        if(EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/external/opusfile/CMakeLists.txt)
            add_libogg_subdirectory()
            if(TARGET ogg)
                add_libopus_subdirectory()
                if(TARGET opus)
                    message(STATUS "Building libopusfile from source.")
                    set(OP_DISABLE_HTTP TRUE)
                    set(OP_DISABLE_DOCS TRUE)
                    set(OP_DISABLE_EXAMPLES TRUE)
                    add_subdirectory(external/opusfile)
                else()
                    message(STATUS "libopus not found. miniaudio_libopus will be excluded.")
                endif()
            else()
                message(STATUS "libogg not found. miniaudio_libopus will be excluded.")
            endif()
        endif()
    endif()
endfunction()


# vorbisfile
#
# The vorbisfile target is required for miniaudio_libvorbis. If the vorbisfile target has already been
# defined we'll just use that. Otherwise we'll try to use pkg-config. If that fails, as a last resort
# we'll allow building it from source from the external/vorbis directory.
if(NOT MINIAUDIO_NO_LIBVORBIS)
    if(NOT TARGET vorbisfile)
        # Try pkg-config first
        find_package(PkgConfig QUIET)
        if(PKG_CONFIG_FOUND)
            pkg_check_modules(PC_VORBISFILE vorbisfile)
        endif()

        if(PC_VORBISFILE_FOUND)
            message(STATUS "Found vorbisfile via pkg-config: ${PC_VORBISFILE_LIBRARIES}")
            set(HAS_LIBVORBIS TRUE)
        else()
            # Fallback to building from source.
            add_libvorbis_subdirectory()
            if(NOT TARGET vorbisfile)
                message(STATUS "libvorbisfile not found. miniaudio_libvorbis will be excluded.")
            else()
                set(HAS_LIBVORBIS TRUE)
            endif()
        endif()
    else()
        message(STATUS "libvorbisfile already found.")
        set(HAS_LIBVORBIS TRUE)
    endif()
endif()

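The resolution order used above (pre-existing target, then pkg-config, then the vendored external/vorbis tree) is a simple fallback chain; a Python sketch with hypothetical names, which omits the additional requirement that the vendored build also needs libogg:

```python
def resolve_vorbisfile(has_target, pkg_config_found, source_tree_present):
    # Mirrors the order tried by the CMake logic above.
    if has_target:
        return "existing-target"
    if pkg_config_found:
        return "pkg-config"
    if source_tree_present:
        return "vendored-source"
    return None  # miniaudio_libvorbis gets excluded

print(resolve_vorbisfile(False, True, True))  # pkg-config
```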
# opusfile
#
# This is the same as vorbisfile above, but for opusfile.
if(NOT MINIAUDIO_NO_LIBOPUS)
    if(NOT TARGET opusfile)
        # Try pkg-config first
        find_package(PkgConfig QUIET)
        if(PKG_CONFIG_FOUND)
            pkg_check_modules(PC_OPUSFILE opusfile)
        endif()

        if(PC_OPUSFILE_FOUND)
            message(STATUS "Found opusfile via pkg-config: ${PC_OPUSFILE_LIBRARIES}")
            set(HAS_LIBOPUS TRUE)
        else()
            # Fallback to building from source.
            add_libopusfile_subdirectory()
            if(NOT TARGET opusfile)
                message(STATUS "libopusfile not found. miniaudio_libopus will be excluded.")
            else()
                set(HAS_LIBOPUS TRUE)
            endif()
        endif()
    else()
        message(STATUS "libopusfile already found.")
        set(HAS_LIBOPUS TRUE)
    endif()
endif()


find_library(SDL2_LIBRARY NAMES SDL2)
if(SDL2_LIBRARY)
    message(STATUS "Found SDL2: ${SDL2_LIBRARY}")
    set(HAS_SDL2 TRUE)
else()
    message(STATUS "SDL2 not found. SDL2 examples will be excluded.")
endif()

# SteamAudio has an annoying SDK setup. In the lib folder there is a folder for each platform. We need to specify the
# platform we're compiling for.
if(CMAKE_SYSTEM_PROCESSOR MATCHES "x86_64")
    # Assume 64-bit. Now we need to check if it's for Windows or Linux.
    if(WIN32)
        set(STEAMAUDIO_ARCH windows-x64)
    else()
        set(STEAMAUDIO_ARCH linux-x64)
    endif()
else()
    # Assume 32-bit. Now we need to check if it's for Windows or Linux.
    if(WIN32)
        set(STEAMAUDIO_ARCH windows-x86)
    else()
        set(STEAMAUDIO_ARCH linux-x86)
    endif()
endif()
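The platform-folder selection above is a two-axis lookup (word size × OS); a small Python model of the same decision, where the returned names mirror the SDK lib/ subdirectory names quoted in the CMake:

```python
def steamaudio_arch(processor, is_windows):
    # Mirrors the CMake: x86_64 is treated as 64-bit, everything else is
    # assumed 32-bit, and only Windows and Linux are distinguished.
    bits = "x64" if "x86_64" in processor else "x86"
    system = "windows" if is_windows else "linux"
    return "%s-%s" % (system, bits)

print(steamaudio_arch("x86_64", False))  # linux-x64
```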

# When searching for SteamAudio, we'll support installing it in the external/steamaudio directory.
set(STEAMAUDIO_FIND_LIBRARY_HINTS)
list(APPEND STEAMAUDIO_FIND_LIBRARY_HINTS ${CMAKE_CURRENT_SOURCE_DIR}/external/steamaudio/lib/${STEAMAUDIO_ARCH})

if(WIN32)
else()
    list(APPEND STEAMAUDIO_FIND_LIBRARY_HINTS /opt/steamaudio/lib/${STEAMAUDIO_ARCH})
    list(APPEND STEAMAUDIO_FIND_LIBRARY_HINTS /usr/local/steamaudio/lib/${STEAMAUDIO_ARCH})
endif()

set(STEAMAUDIO_FIND_HEADER_HINTS)
list(APPEND STEAMAUDIO_FIND_HEADER_HINTS ${CMAKE_CURRENT_SOURCE_DIR}/external/steamaudio/include)

if(WIN32)
else()
    list(APPEND STEAMAUDIO_FIND_HEADER_HINTS /opt/steamaudio/include)
    list(APPEND STEAMAUDIO_FIND_HEADER_HINTS /usr/local/steamaudio/include)
endif()


find_library(STEAMAUDIO_LIBRARY NAMES phonon HINTS ${STEAMAUDIO_FIND_LIBRARY_HINTS})
if(STEAMAUDIO_LIBRARY)
    message(STATUS "Found SteamAudio: ${STEAMAUDIO_LIBRARY}")

    find_path(STEAMAUDIO_INCLUDE_DIR
        NAMES phonon.h
        HINTS ${STEAMAUDIO_FIND_HEADER_HINTS}
    )
    if(STEAMAUDIO_INCLUDE_DIR)
        message(STATUS "Found phonon.h in ${STEAMAUDIO_INCLUDE_DIR}")
        set(HAS_STEAMAUDIO TRUE)
    else()
        message(STATUS "Could not find phonon.h. miniaudio_engine_steamaudio will be excluded.")
    endif()
else()
    message(STATUS "SteamAudio not found. miniaudio_engine_steamaudio will be excluded.")
endif()


# Link libraries
set(COMMON_LINK_LIBRARIES)

if (UNIX)
    if(NOT MINIAUDIO_NO_RUNTIME_LINKING)
        # Not all platforms actually use a separate "dl" library, notably NetBSD and OpenBSD.
        find_library(LIB_DL NAMES dl)
        if(LIB_DL)
            list(APPEND COMMON_LINK_LIBRARIES ${LIB_DL}) # For dlopen(), etc. Most compilers will link to this by default, but some may not.
        endif()
    endif()

    find_library(LIB_PTHREAD NAMES pthread)
    if(LIB_PTHREAD)
        list(APPEND COMMON_LINK_LIBRARIES ${LIB_PTHREAD}) # Some compilers will not link to pthread by default so list it here just in case.
    endif()

    find_library(LIB_M NAMES m)
    if(LIB_M)
        list(APPEND COMMON_LINK_LIBRARIES ${LIB_M})
    endif()

    # If we're compiling for 32-bit ARM we need to link to -latomic.
    if(CMAKE_SYSTEM_PROCESSOR MATCHES "^arm" AND NOT CMAKE_SYSTEM_PROCESSOR MATCHES "aarch64")
        find_library(LIB_ATOMIC NAMES atomic)
        if(LIB_ATOMIC)
            list(APPEND COMMON_LINK_LIBRARIES ${LIB_ATOMIC})
        endif()
    endif()
endif()


# Static Libraries
add_library(miniaudio
    miniaudio.c
    miniaudio.h
)

list(APPEND LIBS_TO_INSTALL miniaudio)
if(MINIAUDIO_INSTALL)
    install(FILES miniaudio.h DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/miniaudio)
endif()

target_include_directories(miniaudio PUBLIC ${CMAKE_CURRENT_SOURCE_DIR})
target_compile_options    (miniaudio PRIVATE ${COMPILE_OPTIONS})
target_compile_definitions(miniaudio PRIVATE ${COMPILE_DEFINES})


add_library(libvorbis_interface INTERFACE)
if(HAS_LIBVORBIS)
    if(TARGET vorbisfile)
        target_link_libraries(libvorbis_interface INTERFACE vorbisfile)
    elseif(PC_VORBISFILE_FOUND)
        target_link_libraries     (libvorbis_interface INTERFACE ${PC_VORBISFILE_LIBRARIES})
        target_include_directories(libvorbis_interface INTERFACE ${PC_VORBISFILE_INCLUDE_DIRS})
        target_link_directories   (libvorbis_interface INTERFACE ${PC_VORBISFILE_LIBRARY_DIRS})
        target_compile_options    (libvorbis_interface INTERFACE ${PC_VORBISFILE_CFLAGS_OTHER})
    endif()
endif()

if(HAS_LIBVORBIS)
    add_library(miniaudio_libvorbis
        extras/decoders/libvorbis/miniaudio_libvorbis.c
        extras/decoders/libvorbis/miniaudio_libvorbis.h
    )

    list(APPEND LIBS_TO_INSTALL miniaudio_libvorbis)
    if(MINIAUDIO_INSTALL)
        install(FILES extras/decoders/libvorbis/miniaudio_libvorbis.h DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/miniaudio/extras/decoders/libvorbis)
    endif()

    target_compile_options    (miniaudio_libvorbis PRIVATE ${COMPILE_OPTIONS})
    target_compile_definitions(miniaudio_libvorbis PRIVATE ${COMPILE_DEFINES})
    target_link_libraries     (miniaudio_libvorbis PRIVATE libvorbis_interface)
    target_include_directories(miniaudio_libvorbis PUBLIC extras/decoders/libvorbis/)
endif()


add_library(libopus_interface INTERFACE)
if(HAS_LIBOPUS)
    if(TARGET opusfile)
        target_link_libraries     (libopus_interface INTERFACE opusfile)
    elseif(PC_OPUSFILE_FOUND)
        target_link_libraries     (libopus_interface INTERFACE ${PC_OPUSFILE_LIBRARIES})
        target_include_directories(libopus_interface INTERFACE ${PC_OPUSFILE_INCLUDE_DIRS})
        target_link_directories   (libopus_interface INTERFACE ${PC_OPUSFILE_LIBRARY_DIRS})
        target_compile_options    (libopus_interface INTERFACE ${PC_OPUSFILE_CFLAGS_OTHER})
    endif()
endif()

if(HAS_LIBOPUS)
    add_library(miniaudio_libopus
        extras/decoders/libopus/miniaudio_libopus.c
        extras/decoders/libopus/miniaudio_libopus.h
    )


    list(APPEND LIBS_TO_INSTALL miniaudio_libopus)
    if(MINIAUDIO_INSTALL)
        install(FILES extras/decoders/libopus/miniaudio_libopus.h DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/miniaudio/extras/decoders/libopus)
    endif()

    target_compile_options    (miniaudio_libopus PRIVATE ${COMPILE_OPTIONS})
    target_compile_definitions(miniaudio_libopus PRIVATE ${COMPILE_DEFINES})
    target_link_libraries     (miniaudio_libopus PRIVATE libopus_interface)
    target_include_directories(miniaudio_libopus PUBLIC extras/decoders/libopus/)
endif()


if (NOT MINIAUDIO_NO_EXTRA_NODES)
    function(add_extra_node name)
        add_library(miniaudio_${name}_node
            extras/nodes/ma_${name}_node/ma_${name}_node.c
            extras/nodes/ma_${name}_node/ma_${name}_node.h
        )

        set(libs "${LIBS_TO_INSTALL}")

        list(APPEND libs miniaudio_${name}_node)
        set(LIBS_TO_INSTALL "${libs}" PARENT_SCOPE) # without PARENT_SCOPE, any changes are lost
        if(MINIAUDIO_INSTALL)
            install(FILES extras/nodes/ma_${name}_node/ma_${name}_node.h DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/miniaudio/extras/nodes/ma_${name}_node)
        endif()

        target_include_directories(miniaudio_${name}_node PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/extras/nodes/ma_${name}_node)
        target_compile_options    (miniaudio_${name}_node PRIVATE ${COMPILE_OPTIONS})
        target_compile_definitions(miniaudio_${name}_node PRIVATE ${COMPILE_DEFINES})

        if(MINIAUDIO_BUILD_EXAMPLES)
            add_executable(miniaudio_${name}_node_example extras/nodes/ma_${name}_node/ma_${name}_node_example.c)
            target_link_libraries(miniaudio_${name}_node_example PRIVATE miniaudio_common_options)
        endif()
    endfunction()

    add_extra_node(channel_combiner)
    add_extra_node(channel_separator)
    add_extra_node(ltrim)
    add_extra_node(reverb)
    add_extra_node(vocoder)
endif()


# Interface with common options to simplify the setup of tests and examples. Note that we don't pass
# in COMPILE_DEFINES here because we want to allow the tests and examples to define their own defines. If
# we were to use COMPILE_DEFINES here, many of the tests and examples would not compile.
add_library(miniaudio_common_options INTERFACE)
target_compile_options(miniaudio_common_options INTERFACE ${COMPILE_OPTIONS})
target_link_libraries (miniaudio_common_options INTERFACE ${COMMON_LINK_LIBRARIES})

function(is_backend_enabled NAME)
    if (NOT MINIAUDIO_NO_${NAME} AND (NOT MINIAUDIO_ENABLE_ONLY_SPECIFIC_BACKENDS OR MINIAUDIO_ENABLE_${NAME}))
        set(${NAME}_ENABLED TRUE PARENT_SCOPE)
    else()
        set(${NAME}_ENABLED FALSE PARENT_SCOPE)
    endif()
endfunction()
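The predicate computed by is_backend_enabled() can be stated compactly; a hedged Python model in which the MINIAUDIO_* cache variables are represented by a plain dict (keys shown without the MINIAUDIO_ prefix):

```python
def backend_enabled(name, opts):
    # A backend is enabled unless explicitly disabled; in "only specific
    # backends" mode it must additionally be opted in.
    if opts.get("NO_%s" % name, False):
        return False
    if opts.get("ENABLE_ONLY_SPECIFIC_BACKENDS", False):
        return opts.get("ENABLE_%s" % name, False)
    return True

print(backend_enabled("ALSA", {}))  # True
```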
|
||||
set(LINKED_LIBS)
|
||||
|
||||
if(MINIAUDIO_NO_RUNTIME_LINKING)
|
||||
is_backend_enabled(PULSEAUDIO)
|
||||
if (PULSEAUDIO_ENABLED)
|
||||
find_package(PulseAudio)
|
||||
|
||||
if (PulseAudio_FOUND)
|
||||
target_link_libraries(miniaudio PRIVATE ${PULSEAUDIO_LIBRARY})
|
||||
target_include_directories(miniaudio SYSTEM PRIVATE ${PULSEAUDIO_INCLUDE_DIR})
|
||||
list(APPEND LINKED_LIBS libpulse)
|
||||
endif()
|
||||
endif()
|
||||
|
||||
is_backend_enabled(ALSA)
|
||||
if (ALSA_ENABLED)
|
||||
find_package(PkgConfig QUIET)
|
||||
|
||||
if(PKG_CONFIG_FOUND)
|
||||
pkg_check_modules(PC_ALSA alsa)
|
||||
endif()
|
||||
|
||||
find_library(ALSA_LIBRARY
|
||||
NAMES asound
|
||||
HINTS ${PC_ALSA_LIBRARY_DIRS}
|
||||
)
|
||||
|
||||
if (ALSA_LIBRARY)
|
||||
find_path(ALSA_INCLUDE_DIR
|
||||
NAMES alsa/asoundlib.h
|
||||
HINTS ${PC_ALSA_INCLUDE_DIRS}
|
||||
)
|
||||
|
||||
target_link_libraries(miniaudio PRIVATE ${ALSA_LIBRARY})
|
||||
target_include_directories(miniaudio PRIVATE ${ALSA_INCLUDE_DIR})
|
||||
list(APPEND LINKED_LIBS alsa)
|
||||
endif()
|
||||
endif()
|
||||
|
||||
is_backend_enabled(SNDIO)
|
||||
if (SNDIO_ENABLED)
|
||||
find_package(PkgConfig QUIET)
|
||||
|
||||
if(PKG_CONFIG_FOUND)
|
||||
pkg_check_modules(PC_SNDIO sndio)
|
||||
endif()
|
||||
|
||||
find_library(SNDIO_LIBRARY
|
||||
NAMES sndio
|
||||
HINTS ${PC_SNDIO_LIBRARY_DIRS}
|
||||
)
|
||||
|
||||
if (SNDIO_LIBRARY)
|
||||
target_link_libraries(miniaudio PRIVATE ${SNDIO_LIBRARY})
|
||||
list(APPEND LINKED_LIBS sndio)
|
||||
endif()
|
||||
endif()
|
||||
|
||||
is_backend_enabled(JACK)
|
||||
if (JACK_ENABLED)
|
||||
find_package(PkgConfig QUIET)
|
||||
|
||||
if(PKG_CONFIG_FOUND)
|
||||
pkg_check_modules(PC_JACK jack)
|
||||
endif()
|
||||
|
||||
find_library(JACK_LIBRARY
|
||||
NAMES jack
|
||||
HINTS ${PC_JACK_LIBRARY_DIRS}
|
||||
)
|
||||
|
||||
if (JACK_LIBRARY)
|
||||
find_path(JACK_INCLUDE_DIR
|
||||
NAMES jack/jack.h
|
||||
HINTS ${PC_JACK_INCLUDE_DIRS}
|
||||
)
|
||||
|
||||
target_link_libraries(miniaudio PRIVATE ${JACK_LIBRARY})
|
||||
target_include_directories(miniaudio PRIVATE ${JACK_INCLUDE_DIR})
|
||||
list(APPEND LINKED_LIBS jack)
|
||||
endif()
|
||||
endif()
|
||||
endif()
|
||||
|
||||
# Tests
#
# All tests are compiled as a single translation unit. There is no need to add miniaudio as a link library.
if(MINIAUDIO_BUILD_TESTS)
    enable_testing()

    set(TESTS_DIR ${CMAKE_CURRENT_SOURCE_DIR}/tests)

    function(add_miniaudio_test name source)
        add_executable(${name} ${TESTS_DIR}/${source})
        target_link_libraries(${name} PRIVATE miniaudio_common_options)
    endfunction()

    # Disable C++ tests when forcing C89. This is needed because we'll be passing -std=c89, which will cause errors when trying to compile a C++ file.
    if(NOT MINIAUDIO_FORCE_C89)
        # The debugging test is only used for debugging miniaudio itself. Don't do add_test() for this, and do not include it in any automated testing.
        add_miniaudio_test(miniaudio_debugging debugging/debugging.cpp)

        add_miniaudio_test(miniaudio_cpp cpp/cpp.cpp)
        add_test(NAME miniaudio_cpp COMMAND miniaudio_cpp --auto) # This is just the deviceio test.
    endif()

    add_miniaudio_test(miniaudio_deviceio deviceio/deviceio.c)
    add_test(NAME miniaudio_deviceio COMMAND miniaudio_deviceio --auto)

    add_miniaudio_test(miniaudio_conversion conversion/conversion.c)
    add_test(NAME miniaudio_conversion COMMAND miniaudio_conversion)

    add_miniaudio_test(miniaudio_filtering filtering/filtering.c)
    add_test(NAME miniaudio_filtering COMMAND miniaudio_filtering ${CMAKE_CURRENT_SOURCE_DIR}/data/16-44100-stereo.flac)

    add_miniaudio_test(miniaudio_generation generation/generation.c)
    add_test(NAME miniaudio_generation COMMAND miniaudio_generation)
endif()

# Examples
#
# Like tests, all examples are compiled as a single translation unit. There is no need to add miniaudio as a link library.
if (MINIAUDIO_BUILD_EXAMPLES)
    set(EXAMPLES_DIR ${CMAKE_CURRENT_SOURCE_DIR}/examples)

    function(add_miniaudio_example name source)
        add_executable(${name} ${EXAMPLES_DIR}/${source})
        target_link_libraries(${name} PRIVATE miniaudio_common_options)
    endfunction()

    add_miniaudio_example(miniaudio_custom_backend custom_backend.c)

    add_miniaudio_example(miniaudio_custom_decoder_engine custom_decoder_engine.c)
    if(HAS_LIBVORBIS)
        target_link_libraries(miniaudio_custom_decoder_engine PRIVATE libvorbis_interface)
    else()
        target_compile_definitions(miniaudio_custom_decoder_engine PRIVATE MA_NO_LIBVORBIS)
        message(STATUS "miniaudio_libvorbis is disabled. Vorbis support is disabled in miniaudio_custom_decoder_engine.")
    endif()
    if(HAS_LIBOPUS)
        target_link_libraries(miniaudio_custom_decoder_engine PRIVATE libopus_interface)
    else()
        target_compile_definitions(miniaudio_custom_decoder_engine PRIVATE MA_NO_LIBOPUS)
        message(STATUS "miniaudio_libopus is disabled. Opus support is disabled in miniaudio_custom_decoder_engine.")
    endif()

    add_miniaudio_example(miniaudio_custom_decoder custom_decoder.c)
    if(HAS_LIBVORBIS)
        target_link_libraries(miniaudio_custom_decoder PRIVATE libvorbis_interface)
    else()
        target_compile_definitions(miniaudio_custom_decoder PRIVATE MA_NO_LIBVORBIS)
        message(STATUS "miniaudio_libvorbis is disabled. Vorbis support is disabled in miniaudio_custom_decoder.")
    endif()
    if(HAS_LIBOPUS)
        target_link_libraries(miniaudio_custom_decoder PRIVATE libopus_interface)
    else()
        target_compile_definitions(miniaudio_custom_decoder PRIVATE MA_NO_LIBOPUS)
        message(STATUS "miniaudio_libopus is disabled. Opus support is disabled in miniaudio_custom_decoder.")
    endif()

    add_miniaudio_example(miniaudio_data_source_chaining data_source_chaining.c)
    add_miniaudio_example(miniaudio_duplex_effect duplex_effect.c)
    add_miniaudio_example(miniaudio_engine_advanced engine_advanced.c)
    add_miniaudio_example(miniaudio_engine_effects engine_effects.c)
    add_miniaudio_example(miniaudio_engine_hello_world engine_hello_world.c)

    if(HAS_SDL2)
        add_miniaudio_example(miniaudio_engine_sdl engine_sdl.c)
        target_link_libraries(miniaudio_engine_sdl PRIVATE ${SDL2_LIBRARY})
    else()
        message(STATUS "SDL2 could not be found. miniaudio_engine_sdl has been excluded.")
    endif()

    if(HAS_STEAMAUDIO)
        add_miniaudio_example(miniaudio_engine_steamaudio engine_steamaudio.c)
        target_include_directories(miniaudio_engine_steamaudio PRIVATE ${STEAMAUDIO_INCLUDE_DIR})
        target_link_libraries (miniaudio_engine_steamaudio PRIVATE ${STEAMAUDIO_LIBRARY})
    else()
        message(STATUS "SteamAudio could not be found. miniaudio_engine_steamaudio has been excluded.")
    endif()

    add_miniaudio_example(miniaudio_hilo_interop hilo_interop.c)
    add_miniaudio_example(miniaudio_node_graph node_graph.c)
    add_miniaudio_example(miniaudio_resource_manager_advanced resource_manager_advanced.c)
    add_miniaudio_example(miniaudio_resource_manager resource_manager.c)
    add_miniaudio_example(miniaudio_simple_capture simple_capture.c)
    add_miniaudio_example(miniaudio_simple_duplex simple_duplex.c)
    add_miniaudio_example(miniaudio_simple_enumeration simple_enumeration.c)
    add_miniaudio_example(miniaudio_simple_loopback simple_loopback.c)
    add_miniaudio_example(miniaudio_simple_looping simple_looping.c)
    add_miniaudio_example(miniaudio_simple_mixing simple_mixing.c)
    add_miniaudio_example(miniaudio_simple_playback_sine simple_playback_sine.c)
    add_miniaudio_example(miniaudio_simple_playback simple_playback.c)
    add_miniaudio_example(miniaudio_simple_spatialization simple_spatialization.c)
endif()


# Tools
if (MINIAUDIO_BUILD_TOOLS)
    set(TOOLS_DIR ${CMAKE_CURRENT_SOURCE_DIR}/tools)

    add_executable(madoc ${TOOLS_DIR}/madoc/madoc.c)
endif()


if(IS_ABSOLUTE "${CMAKE_INSTALL_INCLUDEDIR}")
    set(MINIAUDIO_PC_INCLUDEDIR "${CMAKE_INSTALL_INCLUDEDIR}")
else()
    set(MINIAUDIO_PC_INCLUDEDIR "\${prefix}/${CMAKE_INSTALL_INCLUDEDIR}")
endif()
if(IS_ABSOLUTE "${CMAKE_INSTALL_LIBDIR}")
    set(MINIAUDIO_PC_LIBDIR "${CMAKE_INSTALL_LIBDIR}")
else()
    set(MINIAUDIO_PC_LIBDIR "\${exec_prefix}/${CMAKE_INSTALL_LIBDIR}")
endif()

string(JOIN ", " MINIAUDIO_PC_REQUIRES_PRIVATE ${LINKED_LIBS})

# Add vorbisfile and opusfile to pkg-config dependencies if found via pkg-config.
set(PC_REQUIRES_PRIVATE_LIST)
if(PC_VORBISFILE_FOUND AND HAS_LIBVORBIS)
    list(APPEND PC_REQUIRES_PRIVATE_LIST "vorbisfile")
endif()
if(PC_OPUSFILE_FOUND AND HAS_LIBOPUS)
    list(APPEND PC_REQUIRES_PRIVATE_LIST "opusfile")
endif()
if(PC_REQUIRES_PRIVATE_LIST)
    if(MINIAUDIO_PC_REQUIRES_PRIVATE)
        string(APPEND MINIAUDIO_PC_REQUIRES_PRIVATE ", ")
    endif()
    string(JOIN ", " PC_REQUIRES_STR ${PC_REQUIRES_PRIVATE_LIST})
    string(APPEND MINIAUDIO_PC_REQUIRES_PRIVATE "${PC_REQUIRES_STR}")
endif()
list(TRANSFORM COMMON_LINK_LIBRARIES PREPEND "-l")
string(JOIN " " MINIAUDIO_PC_LIBS_PRIVATE ${COMMON_LINK_LIBRARIES})
list(TRANSFORM COMPILE_DEFINES PREPEND "-D")
string(JOIN " " MINIAUDIO_PC_CFLAGS ${COMPILE_DEFINES})

configure_file("${CMAKE_CURRENT_SOURCE_DIR}/miniaudio.pc.in" "${CMAKE_CURRENT_BINARY_DIR}/miniaudio.pc" @ONLY)
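# For reference, a minimal sketch of what the miniaudio.pc.in template consumed above could
# look like. The @MINIAUDIO_PC_*@ names match the variables computed here; the remaining
# fields (Name, Version, -lminiaudio, etc.) are assumptions, not taken from the real template:
#
#   prefix=@CMAKE_INSTALL_PREFIX@
#   exec_prefix=${prefix}
#   includedir=@MINIAUDIO_PC_INCLUDEDIR@
#   libdir=@MINIAUDIO_PC_LIBDIR@
#
#   Name: miniaudio
#   Description: An audio playback and capture library in a single source file.
#   Version: @PROJECT_VERSION@
#   Requires.private: @MINIAUDIO_PC_REQUIRES_PRIVATE@
#   Cflags: -I${includedir} @MINIAUDIO_PC_CFLAGS@
#   Libs: -L${libdir} -lminiaudio
#   Libs.private: @MINIAUDIO_PC_LIBS_PRIVATE@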

if(MINIAUDIO_INSTALL)
    install(FILES "${CMAKE_CURRENT_BINARY_DIR}/miniaudio.pc" DESTINATION "${CMAKE_INSTALL_LIBDIR}/pkgconfig")

    message(STATUS "Library list: ${LIBS_TO_INSTALL}")
    install(TARGETS ${LIBS_TO_INSTALL}
        ARCHIVE DESTINATION ${CMAKE_INSTALL_LIBDIR}
        LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR}
        RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR}
    )
endif()
88
thirdparty/miniaudio-0.11.24/CONTRIBUTING.md
vendored
Normal file
@@ -0,0 +1,88 @@
Contributing to miniaudio
=========================
First of all, thanks for stopping by! This document will explain a few things to consider when contributing to
miniaudio.


Found a Bug?
------------
If you've found a bug you can create a bug report [here on GitHub](https://github.com/mackron/miniaudio/issues).
The more information you can provide, the quicker I'll be able to get it fixed. Sample programs and files help a
lot, as does a detailed list of steps I can follow to reproduce the problem.

You can also submit a pull request which, provided your fix is correct and well written, is the quickest way to
get the bug fixed. See the next section for guidance on pull requests.


Pull Requests
-------------
If you want to do actual development on miniaudio, pull requests are the best place to start. Just don't do any
significant work without talking to me first. If I don't like it, it won't be merged. You can find me via email,
[Discord](https://discord.gg/9vpqbjU) and [Twitter](https://twitter.com/mackron).

Always base your pull request branch on the "dev" branch. The master branch contains the latest release, which
means your pull request may not include the latest in-development changes, which may result in unnecessary
conflicts.

I need to review your pull requests before merging. If your pull request is non-trivial, try to break it up into
logical bite-sized commits to make it easier to review, but make sure every commit compiles. Obviously this is
not always easy to do in practice, but just keep it in mind.

When it comes to coding style I'm fairly relaxed, but be professional and respect the existing coding style,
regardless of whether or not you like it. It's no big deal if something slips, but try to keep it in mind. Some
things in particular:
* C89. `/*...*/` style comments and variables declared at the top of the code block are the main things.
* Spaces instead of tabs. 4 spaces per tab.
* Don't add a third party dependency. If you do this I'll immediately reject your pull request.
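
Concretely, a minimal sketch of those C89 conventions (the function and values below are made up
purely for illustration and are not part of miniaudio):

```c
/* C89 style: block comments only, and every variable declared at the top of its scope. */
#include <assert.h>
#include <stddef.h>

static size_t ma_example_count_nonzero(const int* pValues, size_t count)
{
    size_t result;  /* Declarations come first... */
    size_t i;

    result = 0;     /* ...statements come after. */
    for (i = 0; i < count; i += 1) {
        if (pValues[i] != 0) {
            result += 1;
        }
    }

    return result;
}

int main(void)
{
    int values[4];

    values[0] = 1; values[1] = 0; values[2] = 3; values[3] = 0;
    assert(ma_example_count_nonzero(values, 4) == 2);

    return 0;
}
```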

I'm not going to outline specific coding styles - just look at the existing code and use common sense.

If you want to submit a pull request for any of the dr_* libraries in the "extras" folder, please submit the pull
request to the [dr_libs repository](https://github.com/mackron/dr_libs).


Respect the Goals of the Project
--------------------------------
When making a contribution, please respect the primary goals of the project. These are the points of difference
that make miniaudio unique, and it's important they're maintained and respected.

* miniaudio is *single file*. Do not split your work into multiple files thinking I'll be impressed with your
  modular design - I won't, and your contribution will be immediately rejected.
* miniaudio has *no external dependencies*. You might think you're helping by adding some cool new feature via
  some external library. You're not helping, and your contribution will be immediately rejected.
* miniaudio is *public domain*. Don't add any code that's taken directly from licensed code.


Licensing and Credits
---------------------
miniaudio is dual licensed as a choice of public domain or MIT-0 (No Attribution), so you need to agree to release
your contributions as such. I also do not maintain a credit/contributions list. If you don't like this you should
not contribute to this project.


Predictable Questions
---------------------
### "Would you consider splitting out [some section of code] into its own file?"
No, the idea is to keep everything in one place. It would be nice in specific cases to split out specific sections
of the code, such as the resampler, for example. However, this would completely violate one of the major goals of
the project - to have a complete audio library contained within a single file.

### "Would you consider adding support for CMake [or my favourite build system]?"
No, the whole point of having the code contained entirely within a single file, without any external dependencies,
is to make it easy to add to your source tree without needing any extra build system integration. There is no need
to incur the cost of maintaining build systems in miniaudio.

### "Would you consider feature XYZ? It requires C11, but don't worry, all compilers support it."
One of the philosophies of miniaudio is that it should just work, and that includes the compilation environment.
There's no real reason not to support older compilers. Newer versions of C will not add anything of any
significance that cannot already be done in C89.

### "Will you consider adding a third license option such as [my favourite license]?"
No, the idea is to keep licensing simple. That's why miniaudio is public domain - to avoid as much license friction
as possible. However, some regions do not recognize public domain, which is why an alternative license, MIT No
Attribution, is included as an added option. There is no need to make the licensing situation any more confusing.

### "Is there a list of contributors? Will my name be added to any kind of list?"
No, there's no credit list as it just adds extra maintenance work, and it's too easy to accidentally and unfairly
forget to add a contributor. A list of contributors can be retrieved from the Git log and from GitHub itself.
47
thirdparty/miniaudio-0.11.24/LICENSE
vendored
Normal file
@@ -0,0 +1,47 @@
This software is available as a choice of the following licenses. Choose
whichever you prefer.

===============================================================================
ALTERNATIVE 1 - Public Domain (www.unlicense.org)
===============================================================================
This is free and unencumbered software released into the public domain.

Anyone is free to copy, modify, publish, use, compile, sell, or distribute this
software, either in source code form or as a compiled binary, for any purpose,
commercial or non-commercial, and by any means.

In jurisdictions that recognize copyright laws, the author or authors of this
software dedicate any and all copyright interest in the software to the public
domain. We make this dedication for the benefit of the public at large and to
the detriment of our heirs and successors. We intend this dedication to be an
overt act of relinquishment in perpetuity of all present and future rights to
this software under copyright law.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

For more information, please refer to <http://unlicense.org/>

===============================================================================
ALTERNATIVE 2 - MIT No Attribution
===============================================================================
Copyright 2025 David Reid

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
213
thirdparty/miniaudio-0.11.24/README.md
vendored
Normal file
@@ -0,0 +1,213 @@
<h1 align="center">
    <a href="https://miniaud.io"><img src="https://miniaud.io/img/miniaudio_wide.png" alt="miniaudio" width="1280"></a>
    <br>
</h1>

<h4 align="center">An audio playback and capture library in a single source file.</h4>

<p align="center">
    <a href="https://discord.gg/9vpqbjU"><img src="https://img.shields.io/discord/712952679415939085?label=discord&logo=discord&style=flat-square" alt="discord"></a>
    <a href="https://x.com/mackron"><img alt="x" src="https://img.shields.io/twitter/url?url=https%3A%2F%2Fx.com%2Fmackron&style=flat-square&logo=x&label=%40mackron"></a>
</p>

<p align="center">
    <a href="#features">Features</a> -
    <a href="#examples">Examples</a> -
    <a href="#building">Building</a> -
    <a href="#documentation">Documentation</a> -
    <a href="#supported-platforms">Supported Platforms</a> -
    <a href="#security">Security</a> -
    <a href="#license">License</a>
</p>

miniaudio is written in C with no dependencies except the standard library and should compile clean on all major
compilers without the need to install any additional development packages. All major desktop and mobile platforms
are supported.


Features
========
- Simple build system with no external dependencies.
- Simple and flexible API.
- Low-level API for direct access to raw audio data.
- High-level API for sound management, mixing, effects and optional 3D spatialization.
- Flexible node graph system for advanced mixing and effect processing.
- Resource management for loading sound files.
- Decoding, with built-in support for WAV, FLAC, and MP3, in addition to being able to plug in custom decoders.
- Encoding (WAV only).
- Data conversion.
- Resampling, including custom resamplers.
- Channel mapping.
- Basic generation of waveforms and noise.
- Basic effects and filters.

Refer to the [Programming Manual](https://miniaud.io/docs/manual/) for a more complete description of the
features available in miniaudio.


Examples
========

This example shows one way to play a sound using the high-level API.

```c
#include "miniaudio/miniaudio.h"

#include <stdio.h>

int main()
{
    ma_result result;
    ma_engine engine;

    result = ma_engine_init(NULL, &engine);
    if (result != MA_SUCCESS) {
        return -1;
    }

    ma_engine_play_sound(&engine, "sound.wav", NULL);

    printf("Press Enter to quit...");
    getchar();

    ma_engine_uninit(&engine);

    return 0;
}
```

This example shows how to decode and play a sound using the low-level API.

```c
#include "miniaudio/miniaudio.h"

#include <stdio.h>

void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    ma_decoder* pDecoder = (ma_decoder*)pDevice->pUserData;
    if (pDecoder == NULL) {
        return;
    }

    ma_decoder_read_pcm_frames(pDecoder, pOutput, frameCount, NULL);

    (void)pInput;
}

int main(int argc, char** argv)
{
    ma_result result;
    ma_decoder decoder;
    ma_device_config deviceConfig;
    ma_device device;

    if (argc < 2) {
        printf("No input file.\n");
        return -1;
    }

    result = ma_decoder_init_file(argv[1], NULL, &decoder);
    if (result != MA_SUCCESS) {
        return -2;
    }

    deviceConfig = ma_device_config_init(ma_device_type_playback);
    deviceConfig.playback.format   = decoder.outputFormat;
    deviceConfig.playback.channels = decoder.outputChannels;
    deviceConfig.sampleRate        = decoder.outputSampleRate;
    deviceConfig.dataCallback      = data_callback;
    deviceConfig.pUserData         = &decoder;

    if (ma_device_init(NULL, &deviceConfig, &device) != MA_SUCCESS) {
        printf("Failed to open playback device.\n");
        ma_decoder_uninit(&decoder);
        return -3;
    }

    if (ma_device_start(&device) != MA_SUCCESS) {
        printf("Failed to start playback device.\n");
        ma_device_uninit(&device);
        ma_decoder_uninit(&decoder);
        return -4;
    }

    printf("Press Enter to quit...");
    getchar();

    ma_device_uninit(&device);
    ma_decoder_uninit(&decoder);

    return 0;
}
```

More examples can be found in the [examples](examples) folder or online here: https://miniaud.io/docs/examples/


Building
========
Just compile miniaudio.c like any other source file and include miniaudio.h like a normal header. There's no need
to install any dependencies. On Windows and macOS there's no need to link to anything. On Linux and BSD, just link
to `-lpthread` and `-lm`. On iOS you need to compile as Objective-C. Link to `-ldl` if you get errors about
`dlopen()`, etc.

If you get errors about undefined references to `__sync_val_compare_and_swap_8`, `__atomic_load_8`, etc., you
need to link with `-latomic`.

ABI compatibility is not guaranteed between versions, so take care if compiling as a DLL/SO. The suggested way
to integrate miniaudio is by adding it directly to your source tree.

You can also use CMake if that's your preference.


Documentation
=============
Online documentation can be found here: https://miniaud.io/docs/

Documentation can also be found at the top of [miniaudio.h](https://raw.githubusercontent.com/mackron/miniaudio/master/miniaudio.h),
which is always the most up-to-date and authoritative source of information on how to use miniaudio. All other
documentation is generated from this in-code documentation.


Supported Platforms
===================
- Windows
- macOS, iOS
- Linux
- FreeBSD / OpenBSD / NetBSD
- Android
- Raspberry Pi
- Emscripten / HTML5

miniaudio should compile clean on other platforms, but it will not include any support for playback or capture
by default. To support that, you would need to implement a custom backend. You can do this without needing to
modify the miniaudio source code. See the [custom_backend](examples/custom_backend.c) example.

Backends
--------
- WASAPI
- DirectSound
- WinMM
- Core Audio (Apple)
- ALSA
- PulseAudio
- JACK
- sndio (OpenBSD)
- audio(4) (NetBSD and OpenBSD)
- OSS (FreeBSD)
- AAudio (Android 8.0+)
- OpenSL|ES (Android only)
- Web Audio (Emscripten)
- Null (Silence)
- Custom


Security
========
See the miniaudio [security policy](.github/SECURITY.md).


License
=======
Your choice of either public domain or [MIT No Attribution](https://github.com/aws/mit-0).
13
thirdparty/miniaudio-0.11.24/camal/cleanup.camal
vendored
Normal file
@@ -0,0 +1,13 @@
miniaudio_h := <../miniaudio.h>;
miniaudio_c := <../miniaudio.c>;

cleanup :: function(src:string) string
{
    return @(src)
        ["\r\n"] <= "\n"  // Normalize line endings to "\n". Needed for very old versions of GCC.
        ["\t"]   <= " "   // Tabs to spaces.
        ;
}

miniaudio_h = cleanup(@(miniaudio_h));
miniaudio_c = cleanup(@(miniaudio_c));
289
thirdparty/miniaudio-0.11.24/camal/miniaudio.camal
vendored
Normal file
@@ -0,0 +1,289 @@
miniaudio_h := <../miniaudio.h>;
miniaudio_c := <../miniaudio.c>;
dr_wav_h    :: <../../dr_libs/dr_wav.h>;
dr_flac_h   :: <../../dr_libs/dr_flac.h>;
dr_mp3_h    :: <../../dr_libs/dr_mp3.h>;
c89atomic_h :: <../../c89atomic/c89atomic.h>;
c89atomic_c :: <../../c89atomic/c89atomic.c>;

minify :: function(src:string) string
{
    return @(src)
        ["/\*[^*]*\*+(?:[^/*][^*]*\*+)*/"] <= ""  // Remove all block comments to keep things clean.
        ["(?m)^\s*\R"]                     <= ""  // Remove all empty lines to compress it all down.
        ["[ \t]+(?=(?:\R|$))"]             <= ""  // Remove trailing whitespace.
        ;
}
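
// Note: the block-comment pattern above is the classic "match a full C comment" regex. For
// reference only (this is not part of the build), the same substitution expressed in Python
// would be:
//
//     re.sub(r"/\*[^*]*\*+(?:[^/*][^*]*\*+)*/", "", src)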
|
||||
|
||||
|
||||
// dr_wav
|
||||
rename_wav_namespace :: function(src:string) string
|
||||
{
|
||||
return @(src)
|
||||
["\bdrwav"] <= "ma_dr_wav"
|
||||
["\bDRWAV"] <= "MA_DR_WAV"
|
||||
["\bdr_wav"] <= "ma_dr_wav"
|
||||
["\bDR_WAV"] <= "MA_DR_WAV"
|
||||
["\bg_drwav"] <= "ma_dr_wav_g"
|
||||
|
||||
// Some common tokens will be namespaced as "ma_dr_wav" when we really want them to be "ma_".
|
||||
["\bma_dr_wav_int"] <= "ma_int"
|
||||
["\bma_dr_wav_uint"] <= "ma_uint"
|
||||
["\bma_dr_wav_bool"] <= "ma_bool"
|
||||
["\bma_dr_wav_uintptr"] <= "ma_uintptr"
|
||||
["\bMA_DR_WAV_TRUE"] <= "MA_TRUE"
|
||||
["\bMA_DR_WAV_FALSE"] <= "MA_FALSE"
|
||||
["\bMA_DR_WAV_UINT64_MAX"] <= "MA_UINT64_MAX"
|
||||
["\bMA_DR_WAV_32BIT"] <= "MA_32BIT"
|
||||
["\bMA_DR_WAV_64BIT"] <= "MA_64BIT"
|
||||
["\bMA_DR_WAV_ARM32"] <= "MA_ARM32"
|
||||
["\bMA_DR_WAV_ARM64"] <= "MA_ARM64"
|
||||
["\bMA_DR_WAV_X64"] <= "MA_X64"
|
||||
["\bMA_DR_WAV_X86"] <= "MA_X86"
|
||||
["\bMA_DR_WAV_ARM"] <= "MA_ARM"
|
||||
["\bMA_DR_WAV_API"] <= "MA_API"
|
||||
["\bMA_DR_WAV_PRIVATE"] <= "MA_PRIVATE"
|
||||
["\bMA_DR_WAV_DLL"] <= "MA_DLL"
|
||||
["\bMA_DR_WAV_DLL_IMPORT"] <= "MA_DLL_IMPORT"
|
||||
["\bMA_DR_WAV_DLL_EXPORT"] <= "MA_DLL_EXPORT"
|
||||
["\bMA_DR_WAV_DLL_PRIVATE"] <= "MA_DLL_PRIVATE"
|
||||
["\bma_dr_wav_result"] <= "ma_result"
|
||||
["\bma_dr_wav_allocation_callbacks"] <= "ma_allocation_callbacks"
|
||||
["\bMA_DR_WAV_INLINE"] <= "MA_INLINE"
|
||||
["\bMA_DR_WAV_SIZE_MAX"] <= "MA_SIZE_MAX"
|
||||
["\bma_dr_wav_result_from_errno"] <= "ma_result_from_errno"
|
||||
["\bma_dr_wav_fopen"] <= "ma_fopen"
|
||||
["\bma_dr_wav_wfopen"] <= "ma_wfopen"
|
||||
|
||||
// Result codes.
|
||||
["MA_DR_WAV_SUCCESS"] <= "MA_SUCCESS"
|
||||
["MA_DR_WAV_INVALID_ARGS"] <= "MA_INVALID_ARGS"
|
||||
["MA_DR_WAV_OUT_OF_MEMORY"] <= "MA_OUT_OF_MEMORY"
|
||||
["MA_DR_WAV_INVALID_FILE"] <= "MA_INVALID_FILE"
|
||||
["MA_DR_WAV_AT_END"] <= "MA_AT_END"
|
||||
["MA_DR_WAV_BAD_SEEK"] <= "MA_BAD_SEEK"
|
||||
;
|
||||
}
|
||||
|
||||
convert_wav_h :: function(src:string) string
|
||||
{
|
||||
stripped := @(src);
|
||||
stripped["/\* Sized Types \*/\R" : "\R/\* End Sized Types \*/" ] = "";
|
||||
stripped["/\* Decorations \*/\R" : "\R/\* End Decorations \*/" ] = "";
|
||||
stripped["/\* Result Codes \*/\R" : "\R/\* End Result Codes \*/" ] = "";
|
||||
stripped["/\* Allocation Callbacks \*/\R" : "\R/\* End Allocation Callbacks \*/" ] = "";
|
||||
|
||||
return minify(rename_wav_namespace(stripped));
|
||||
}
|
||||
|
||||
convert_wav_c :: function(src:string) string
|
||||
{
|
||||
stripped := @(src);
|
||||
stripped["/\* Architecture Detection \*/\R" : "\R/\* End Architecture Detection \*/"] = "";
|
||||
stripped["/\* Inline \*/\R" : "\R/\* End Inline \*/" ] = "";
|
||||
stripped["/\* SIZE_MAX \*/\R" : "\R/\* End SIZE_MAX \*/" ] = "";
|
||||
stripped["/\* Errno \*/\R" : "\R/\* End Errno \*/" ] = "";
|
||||
stripped["/\* fopen \*/\R" : "\R/\* End fopen \*/" ] = "";
|
||||
|
||||
return minify(rename_wav_namespace(stripped));
|
||||
}
|
||||
|
||||
miniaudio_h("/\* dr_wav_h begin \*/\R":"\R/\* dr_wav_h end \*/") = convert_wav_h(@(dr_wav_h["#ifndef dr_wav_h\R":"\R#endif /\* dr_wav_h \*/"]));
|
||||
miniaudio_h("/\* dr_wav_c begin \*/\R":"\R/\* dr_wav_c end \*/") = convert_wav_c(@(dr_wav_h["#ifndef dr_wav_c\R":"\R#endif /\* dr_wav_c \*/"]));
|
||||
|
||||
|
||||
// dr_flac
rename_flac_namespace :: function(src:string) string
{
    return @(src)
        ["\bdrflac"] <= "ma_dr_flac"
        ["\bDRFLAC"] <= "MA_DR_FLAC"
        ["\bdr_flac"] <= "ma_dr_flac"
        ["\bDR_FLAC"] <= "MA_DR_FLAC"
        ["\bg_drflac"] <= "ma_dr_flac_g"

        // Some common tokens will be namespaced as "ma_dr_flac" when we really want them to be "ma_".
        ["\bma_dr_flac_int"] <= "ma_int"
        ["\bma_dr_flac_uint"] <= "ma_uint"
        ["\bma_dr_flac_bool"] <= "ma_bool"
        ["\bma_dr_flac_uintptr"] <= "ma_uintptr"
        ["\bMA_DR_FLAC_TRUE"] <= "MA_TRUE"
        ["\bMA_DR_FLAC_FALSE"] <= "MA_FALSE"
        ["\bMA_DR_FLAC_UINT64_MAX"] <= "MA_UINT64_MAX"
        ["\bMA_DR_FLAC_32BIT"] <= "MA_32BIT"
        ["\bMA_DR_FLAC_64BIT"] <= "MA_64BIT"
        ["\bMA_DR_FLAC_ARM32"] <= "MA_ARM32"
        ["\bMA_DR_FLAC_ARM64"] <= "MA_ARM64"
        ["\bMA_DR_FLAC_X64"] <= "MA_X64"
        ["\bMA_DR_FLAC_X86"] <= "MA_X86"
        ["\bMA_DR_FLAC_ARM"] <= "MA_ARM"
        ["\bMA_DR_FLAC_API"] <= "MA_API"
        ["\bMA_DR_FLAC_PRIVATE"] <= "MA_PRIVATE"
        ["\bMA_DR_FLAC_DLL"] <= "MA_DLL"
        ["\bMA_DR_FLAC_DLL_IMPORT"] <= "MA_DLL_IMPORT"
        ["\bMA_DR_FLAC_DLL_EXPORT"] <= "MA_DLL_EXPORT"
        ["\bMA_DR_FLAC_DLL_PRIVATE"] <= "MA_DLL_PRIVATE"
        ["\bma_dr_flac_result"] <= "ma_result"
        ["\bma_dr_flac_allocation_callbacks"] <= "ma_allocation_callbacks"
        ["\bMA_DR_FLAC_INLINE"] <= "MA_INLINE"
        ["\bMA_DR_FLAC_SIZE_MAX"] <= "MA_SIZE_MAX"
        ["\bma_dr_flac_result_from_errno"] <= "ma_result_from_errno"
        ["\bma_dr_flac_fopen"] <= "ma_fopen"
        ["\bma_dr_flac_wfopen"] <= "ma_wfopen"

        // Result codes.
        ["MA_DR_FLAC_SUCCESS"] <= "MA_SUCCESS"
        ["MA_DR_FLAC_ERROR"] <= "MA_ERROR"
        ["MA_DR_FLAC_AT_END"] <= "MA_AT_END"
        ["MA_DR_FLAC_CRC_MISMATCH"] <= "MA_CRC_MISMATCH"
        ;
}

convert_flac_h :: function(src:string) string
{
    stripped := @(src);
    stripped["/\* Sized Types \*/\R" : "\R/\* End Sized Types \*/" ] = "";
    stripped["/\* Architecture Detection \*/\R" : "\R/\* End Architecture Detection \*/"] = "";
    stripped["/\* Decorations \*/\R" : "\R/\* End Decorations \*/" ] = "";
    stripped["/\* Allocation Callbacks \*/\R" : "\R/\* End Allocation Callbacks \*/" ] = "";

    return minify(rename_flac_namespace(stripped));
}

convert_flac_c :: function(src:string) string
{
    stripped := @(src);
    stripped["/\* Result Codes \*/\R" : "\R/\* End Result Codes \*/" ] = "";
    stripped["/\* Inline \*/\R" : "\R/\* End Inline \*/" ] = "";
    stripped["/\* SIZE_MAX \*/\R" : "\R/\* End SIZE_MAX \*/" ] = "";
    stripped["/\* Errno \*/\R" : "\R/\* End Errno \*/" ] = "";
    stripped["/\* fopen \*/\R" : "\R/\* End fopen \*/" ] = "";

    return minify(rename_flac_namespace(stripped));
}

miniaudio_h("/\* dr_flac_h begin \*/\R":"\R/\* dr_flac_h end \*/") = convert_flac_h(@(dr_flac_h["#ifndef dr_flac_h\R":"\R#endif /\* dr_flac_h \*/"]));
miniaudio_h("/\* dr_flac_c begin \*/\R":"\R/\* dr_flac_c end \*/") = convert_flac_c(@(dr_flac_h["#ifndef dr_flac_c\R":"\R#endif /\* dr_flac_c \*/"]));


// dr_mp3
rename_mp3_namespace :: function(src:string) string
{
    return @(src)
        ["\bdrmp3"] <= "ma_dr_mp3"
        ["\bDRMP3"] <= "MA_DR_MP3"
        ["\bdr_mp3"] <= "ma_dr_mp3"
        ["\bDR_MP3"] <= "MA_DR_MP3"
        ["\bg_drmp3"] <= "ma_dr_mp3_g"

        // Some common tokens will be namespaced as "ma_dr_mp3" when we really want them to be "ma_".
        ["\bma_dr_mp3_int"] <= "ma_int"
        ["\bma_dr_mp3_uint"] <= "ma_uint"
        ["\bma_dr_mp3_bool"] <= "ma_bool"
        ["\bma_dr_mp3_uintptr"] <= "ma_uintptr"
        ["\bMA_DR_MP3_TRUE"] <= "MA_TRUE"
        ["\bMA_DR_MP3_FALSE"] <= "MA_FALSE"
        ["\bMA_DR_MP3_UINT64_MAX"] <= "MA_UINT64_MAX"
        ["\bMA_DR_MP3_32BIT"] <= "MA_32BIT"
        ["\bMA_DR_MP3_64BIT"] <= "MA_64BIT"
        ["\bMA_DR_MP3_ARM32"] <= "MA_ARM32"
        ["\bMA_DR_MP3_ARM64"] <= "MA_ARM64"
        ["\bMA_DR_MP3_X64"] <= "MA_X64"
        ["\bMA_DR_MP3_X86"] <= "MA_X86"
        ["\bMA_DR_MP3_ARM"] <= "MA_ARM"
        ["\bMA_DR_MP3_API"] <= "MA_API"
        ["\bMA_DR_MP3_PRIVATE"] <= "MA_PRIVATE"
        ["\bMA_DR_MP3_DLL"] <= "MA_DLL"
        ["\bMA_DR_MP3_DLL_IMPORT"] <= "MA_DLL_IMPORT"
        ["\bMA_DR_MP3_DLL_EXPORT"] <= "MA_DLL_EXPORT"
        ["\bMA_DR_MP3_DLL_PRIVATE"] <= "MA_DLL_PRIVATE"
        ["\bma_dr_mp3_result"] <= "ma_result"
        ["\bma_dr_mp3_allocation_callbacks"] <= "ma_allocation_callbacks"
        ["\bMA_DR_MP3_INLINE"] <= "MA_INLINE"
        ["\bMA_DR_MP3_SIZE_MAX"] <= "MA_SIZE_MAX"
        ["\bma_dr_mp3_result_from_errno"] <= "ma_result_from_errno"
        ["\bma_dr_mp3_fopen"] <= "ma_fopen"
        ["\bma_dr_mp3_wfopen"] <= "ma_wfopen"

        // Result codes.
        ["MA_DR_MP3_SUCCESS"] <= "MA_SUCCESS"
        ;
}

convert_mp3_h :: function(src:string) string
{
    stripped := @(src);
    stripped["/\* Sized Types \*/\R" : "\R/\* End Sized Types \*/" ] = "";
    stripped["/\* Decorations \*/\R" : "\R/\* End Decorations \*/" ] = "";
    stripped["/\* Result Codes \*/\R" : "\R/\* End Result Codes \*/" ] = "";
    stripped["/\* Inline \*/\R" : "\R/\* End Inline \*/" ] = "";
    stripped["/\* Allocation Callbacks \*/\R" : "\R/\* End Allocation Callbacks \*/" ] = "";

    return minify(rename_mp3_namespace(stripped));
}

convert_mp3_c :: function(src:string) string
{
    stripped := @(src);
    stripped["/\* SIZE_MAX \*/\R" : "\R/\* End SIZE_MAX \*/" ] = "";
    stripped["/\* Errno \*/\R" : "\R/\* End Errno \*/" ] = "";
    stripped["/\* fopen \*/\R" : "\R/\* End fopen \*/" ] = "";

    return minify(rename_mp3_namespace(stripped));
}

miniaudio_h("/\* dr_mp3_h begin \*/\R":"\R/\* dr_mp3_h end \*/") = convert_mp3_h(@(dr_mp3_h["#ifndef dr_mp3_h\R":"\R#endif /\* dr_mp3_h \*/"]));
miniaudio_h("/\* dr_mp3_c begin \*/\R":"\R/\* dr_mp3_c end \*/") = convert_mp3_c(@(dr_mp3_h["#ifndef dr_mp3_c\R":"\R#endif /\* dr_mp3_c \*/"]));

// c89atomic
rename_c89atomic_namespace :: function(src:string) string
{
    return @(src)
        ["\bc89atomic"] <= "ma_atomic"
        ["\bC89ATOMIC"] <= "MA_ATOMIC"

        // Some common tokens will be namespaced as "ma_atomic" when we really want them to be "ma_".
        ["\bma_atomic_int"] <= "ma_int"
        ["\bma_atomic_uint"] <= "ma_uint"
        ["\bma_atomic_bool"] <= "ma_bool32"
        ["\bMA_ATOMIC_32BIT"] <= "MA_32BIT"
        ["\bMA_ATOMIC_64BIT"] <= "MA_64BIT"
        ["\bMA_ATOMIC_ARM32"] <= "MA_ARM32"
        ["\bMA_ATOMIC_ARM64"] <= "MA_ARM64"
        ["\bMA_ATOMIC_X64"] <= "MA_X64"
        ["\bMA_ATOMIC_X86"] <= "MA_X86"
        ["\bMA_ATOMIC_ARM"] <= "MA_ARM"
        ["\bMA_ATOMIC_INLINE"] <= "MA_INLINE"

        // We have an "extern c89atomic_spinlock" in c89atomic.h, but since we're putting this into the implementation section we can just
        // drop the extern and not bother importing anything from c89atomic.c.
        ["\bextern ma_atomic_spinlock"] <= "ma_atomic_spinlock"
        ;
}

convert_c89atomic_h :: function(src:string) string
{
    stripped := @(src);
    stripped["/\* Sized Types \*/\R" : "\R/\* End Sized Types \*/" ] = "";
    stripped["/\* Architecture Detection \*/\R" : "\R/\* End Architecture Detection \*/"] = "";
    stripped["/\* Inline \*/\R" : "\R/\* End Inline \*/" ] = "";

    return minify(rename_c89atomic_namespace(stripped));
}

miniaudio_h("/\* c89atomic.h begin \*/\R":"\R/\* c89atomic.h end \*/") = convert_c89atomic_h(@(c89atomic_h["#ifndef c89atomic_h\R":"\R#endif /\* c89atomic_h \*/"]));
// Cleanup. If we don't normalize line endings we'll fail to compile on old versions of GCC.
cleanup :: function(src:string) string
{
    return @(src)
        ["\r\n"] <= "\n"    // Normalize line endings to "\n". Needed for very old versions of GCC.
        ["\t"] <= " "       // Tabs to spaces.
        ;
}

miniaudio_h = cleanup(@(miniaudio_h));
miniaudio_c = cleanup(@(miniaudio_c));
26
thirdparty/miniaudio-0.11.24/camal/split.camal
vendored
Normal file
@@ -0,0 +1,26 @@
miniaudio_h :: <../miniaudio.h>;
miniaudio_split_h := <../extras/miniaudio_split/miniaudio.h>;
miniaudio_split_c := <../extras/miniaudio_split/miniaudio.c>;

header := @(miniaudio_h["/\*" : "\*/"]);
footer := @(miniaudio_h["/\*\RThis software" : "\*/"]);

content_h : string;
content_h["$"] = header;
content_h["$"] = "\n";
content_h["$"] = @(miniaudio_h["#ifndef miniaudio_h" : "#endif /\* miniaudio_h \*/"]);
content_h["$"] = "\n\n";
content_h["$"] = footer;
content_h["$"] = "\n";

content_c : string;
content_c["$"] = header;
content_c["$"] = "\n";
content_c["$"] = '#include "miniaudio.h"\n\n';
content_c["$"] = @(miniaudio_h["#ifndef miniaudio_c" : "#endif /\* miniaudio_c \*/"]);
content_c["$"] = "\n\n";
content_c["$"] = footer;
content_c["$"] = "\n";

miniaudio_split_h = content_h;
miniaudio_split_c = content_c;
BIN
thirdparty/miniaudio-0.11.24/data/16-44100-stereo.flac
vendored
Normal file
BIN
thirdparty/miniaudio-0.11.24/data/48000-stereo.ogg
vendored
Normal file
BIN
thirdparty/miniaudio-0.11.24/data/48000-stereo.opus
vendored
Normal file
6
thirdparty/miniaudio-0.11.24/data/README
vendored
Normal file
@@ -0,0 +1,6 @@
Sounds in this folder are used for testing purposes. They are all in the public domain. Below is a
list of all the places I pulled these sounds from.

---

https://freesound.org/people/josefpres/sounds/788664/
23
thirdparty/miniaudio-0.11.24/examples/build/README.md
vendored
Normal file
@@ -0,0 +1,23 @@
Examples
--------
    gcc ../simple_playback.c -o bin/simple_playback -ldl -lm -lpthread
    gcc ../simple_playback.c -o bin/simple_playback -ldl -lm -lpthread -Wall -Wextra -Wpedantic -std=c89

Emscripten
----------
On Windows, you need to move into the build directory and run emsdk_env.bat from a command prompt using an absolute
path like "C:\emsdk\emsdk_env.bat". Note that PowerShell doesn't work for me for some reason. Examples:

    emcc ../simple_playback_sine.c -o bin/simple_playback_sine.html
    emcc ../simple_playback_sine.c -o bin/simple_playback_sine.html -s WASM=0 -Wall -Wextra

To compile with support for Audio Worklets:

    emcc ../simple_playback_sine.c -o bin/simple_playback_sine.html -DMA_ENABLE_AUDIO_WORKLETS -sAUDIO_WORKLET=1 -sWASM_WORKERS=1 -sASYNCIFY

If you output WASM it may not work when running the web page locally. To test, you can run it with something
like this:

    emrun ./bin/simple_playback_sine.html

If you want to see stdout on the command line when running from emrun, add `--emrun` to your emcc command.
708
thirdparty/miniaudio-0.11.24/examples/custom_backend.c
vendored
Normal file
@@ -0,0 +1,708 @@
/*
This example shows how a custom backend can be implemented.

This implements a full-featured SDL2 backend. It's intentionally built using the same paradigms as the built-in backends in order to make
it suitable as a solid basis for a custom implementation. The SDL2 backend can be disabled with MA_NO_SDL, exactly like the built-in
backends. It supports both runtime and compile-time linking and respects the MA_NO_RUNTIME_LINKING option. It also works on Emscripten
which requires the `-s USE_SDL=2` option.

There may be times where you want to support more than one custom backend. This example has been designed to make it easy to plug in extra
custom backends without needing to modify any of the base miniaudio initialization code. A custom context structure is declared called
`ma_context_ex`. The first member of this structure is a `ma_context` object which allows it to be cast between the two. The same is done
for devices, via a structure called `ma_device_ex`. In these structures there is a section for each custom backend, which in this example is
just SDL. These are only enabled at compile time if `MA_SUPPORT_SDL` is defined, which it always is in this example (you may want to have
some logic which more intelligently enables or disables SDL support).

To use a custom backend, at a minimum you must set the `custom.onContextInit()` callback in the context config. You do not need to set the
other callbacks, but if you don't, you must set them in the implementation of the `onContextInit()` callback which is done via an output
parameter. This is the approach taken by this example because it's the simplest way to support multiple custom backends. The idea is that
the `onContextInit()` callback is set to a generic "loader", which then calls out to a backend-specific implementation which then sets the
remaining callbacks if it is successfully initialized.

Custom backends are identified with the `ma_backend_custom` backend type. For the purpose of demonstration, this example only uses the
`ma_backend_custom` backend type because otherwise the built-in backends would always get chosen first and none of the code for the custom
backends would actually get hit. By default, the `ma_backend_custom` backend is the lowest priority backend, except for `ma_backend_null`.
*/
#include "../miniaudio.c"

#ifdef __EMSCRIPTEN__
#include <emscripten.h>

void main_loop__em()
{
}
#endif

/* Support SDL on everything. */
#define MA_SUPPORT_SDL

/*
Only enable SDL if it hasn't been explicitly disabled (MA_NO_SDL), it has been enabled when MA_ENABLE_ONLY_SPECIFIC_BACKENDS
is used (MA_ENABLE_SDL), and it's supported at compile time (MA_SUPPORT_SDL).
*/
#if defined(MA_SUPPORT_SDL) && !defined(MA_NO_SDL) && (!defined(MA_ENABLE_ONLY_SPECIFIC_BACKENDS) || defined(MA_ENABLE_SDL))
#define MA_HAS_SDL
#endif


typedef struct
{
    ma_context context; /* Make this the first member so we can cast between ma_context and ma_context_ex. */
#if defined(MA_SUPPORT_SDL)
    struct
    {
        ma_handle hSDL; /* A handle to the SDL2 shared object. We dynamically load function pointers at runtime so we can avoid linking. */
        ma_proc SDL_InitSubSystem;
        ma_proc SDL_QuitSubSystem;
        ma_proc SDL_GetNumAudioDevices;
        ma_proc SDL_GetAudioDeviceName;
        ma_proc SDL_CloseAudioDevice;
        ma_proc SDL_OpenAudioDevice;
        ma_proc SDL_PauseAudioDevice;
    } sdl;
#endif
} ma_context_ex;

typedef struct
{
    ma_device device; /* Make this the first member so we can cast between ma_device and ma_device_ex. */
#if defined(MA_SUPPORT_SDL)
    struct
    {
        int deviceIDPlayback;
        int deviceIDCapture;
    } sdl;
#endif
} ma_device_ex;


#if defined(MA_HAS_SDL)
/* SDL headers are necessary if using compile-time linking. */
#ifdef MA_NO_RUNTIME_LINKING
    #ifdef __has_include
        #ifdef MA_EMSCRIPTEN
            #if !__has_include(<SDL/SDL_audio.h>)
                #undef MA_HAS_SDL
            #endif
        #else
            #if !__has_include(<SDL2/SDL_audio.h>)
                #undef MA_HAS_SDL
            #endif
        #endif
    #endif
#endif
#endif

#if defined(MA_HAS_SDL)
#define MA_SDL_INIT_AUDIO 0x00000010
#define MA_AUDIO_U8 0x0008
#define MA_AUDIO_S16 0x8010
#define MA_AUDIO_S32 0x8020
#define MA_AUDIO_F32 0x8120
#define MA_SDL_AUDIO_ALLOW_FREQUENCY_CHANGE 0x00000001
#define MA_SDL_AUDIO_ALLOW_FORMAT_CHANGE 0x00000002
#define MA_SDL_AUDIO_ALLOW_CHANNELS_CHANGE 0x00000004
#define MA_SDL_AUDIO_ALLOW_ANY_CHANGE (MA_SDL_AUDIO_ALLOW_FREQUENCY_CHANGE | MA_SDL_AUDIO_ALLOW_FORMAT_CHANGE | MA_SDL_AUDIO_ALLOW_CHANNELS_CHANGE)

/* If we are linking at compile time we'll just #include SDL.h. Otherwise we can just redeclare some stuff to avoid the need for development packages to be installed. */
#ifdef MA_NO_RUNTIME_LINKING
#define SDL_MAIN_HANDLED
    #ifdef MA_EMSCRIPTEN
        #include <SDL/SDL.h>
    #else
        #include <SDL2/SDL.h>
    #endif

typedef SDL_AudioCallback MA_SDL_AudioCallback;
typedef SDL_AudioSpec MA_SDL_AudioSpec;
typedef SDL_AudioFormat MA_SDL_AudioFormat;
typedef SDL_AudioDeviceID MA_SDL_AudioDeviceID;
#else
typedef void (* MA_SDL_AudioCallback)(void* userdata, ma_uint8* stream, int len);
typedef ma_uint16 MA_SDL_AudioFormat;
typedef ma_uint32 MA_SDL_AudioDeviceID;

typedef struct MA_SDL_AudioSpec
{
    int freq;
    MA_SDL_AudioFormat format;
    ma_uint8 channels;
    ma_uint8 silence;
    ma_uint16 samples;
    ma_uint16 padding;
    ma_uint32 size;
    MA_SDL_AudioCallback callback;
    void* userdata;
} MA_SDL_AudioSpec;
#endif

typedef int (* MA_PFN_SDL_InitSubSystem)(ma_uint32 flags);
typedef void (* MA_PFN_SDL_QuitSubSystem)(ma_uint32 flags);
typedef int (* MA_PFN_SDL_GetNumAudioDevices)(int iscapture);
typedef const char* (* MA_PFN_SDL_GetAudioDeviceName)(int index, int iscapture);
typedef void (* MA_PFN_SDL_CloseAudioDevice)(MA_SDL_AudioDeviceID dev);
typedef MA_SDL_AudioDeviceID (* MA_PFN_SDL_OpenAudioDevice)(const char* device, int iscapture, const MA_SDL_AudioSpec* desired, MA_SDL_AudioSpec* obtained, int allowed_changes);
typedef void (* MA_PFN_SDL_PauseAudioDevice)(MA_SDL_AudioDeviceID dev, int pause_on);

MA_SDL_AudioFormat ma_format_to_sdl(ma_format format)
{
    switch (format)
    {
        case ma_format_unknown: return 0;
        case ma_format_u8:      return MA_AUDIO_U8;
        case ma_format_s16:     return MA_AUDIO_S16;
        case ma_format_s24:     return MA_AUDIO_S32;  /* Closest match. */
        case ma_format_s32:     return MA_AUDIO_S32;
        case ma_format_f32:     return MA_AUDIO_F32;
        default:                return 0;
    }
}

ma_format ma_format_from_sdl(MA_SDL_AudioFormat format)
{
    switch (format)
    {
        case MA_AUDIO_U8:  return ma_format_u8;
        case MA_AUDIO_S16: return ma_format_s16;
        case MA_AUDIO_S32: return ma_format_s32;
        case MA_AUDIO_F32: return ma_format_f32;
        default:           return ma_format_unknown;
    }
}

static ma_result ma_context_enumerate_devices__sdl(ma_context* pContext, ma_enum_devices_callback_proc callback, void* pUserData)
{
    ma_context_ex* pContextEx = (ma_context_ex*)pContext;
    ma_bool32 isTerminated = MA_FALSE;
    ma_bool32 cbResult;
    int iDevice;

    /* Playback */
    if (!isTerminated) {
        int deviceCount = ((MA_PFN_SDL_GetNumAudioDevices)pContextEx->sdl.SDL_GetNumAudioDevices)(0);
        for (iDevice = 0; iDevice < deviceCount; ++iDevice) {
            ma_device_info deviceInfo;
            MA_ZERO_OBJECT(&deviceInfo);

            deviceInfo.id.custom.i = iDevice;
            ma_strncpy_s(deviceInfo.name, sizeof(deviceInfo.name), ((MA_PFN_SDL_GetAudioDeviceName)pContextEx->sdl.SDL_GetAudioDeviceName)(iDevice, 0), (size_t)-1);

            if (iDevice == 0) {
                deviceInfo.isDefault = MA_TRUE;
            }

            cbResult = callback(pContext, ma_device_type_playback, &deviceInfo, pUserData);
            if (cbResult == MA_FALSE) {
                isTerminated = MA_TRUE;
                break;
            }
        }
    }

    /* Capture */
    if (!isTerminated) {
        int deviceCount = ((MA_PFN_SDL_GetNumAudioDevices)pContextEx->sdl.SDL_GetNumAudioDevices)(1);
        for (iDevice = 0; iDevice < deviceCount; ++iDevice) {
            ma_device_info deviceInfo;
            MA_ZERO_OBJECT(&deviceInfo);

            deviceInfo.id.custom.i = iDevice;
            ma_strncpy_s(deviceInfo.name, sizeof(deviceInfo.name), ((MA_PFN_SDL_GetAudioDeviceName)pContextEx->sdl.SDL_GetAudioDeviceName)(iDevice, 1), (size_t)-1);

            if (iDevice == 0) {
                deviceInfo.isDefault = MA_TRUE;
            }

            cbResult = callback(pContext, ma_device_type_capture, &deviceInfo, pUserData);
            if (cbResult == MA_FALSE) {
                isTerminated = MA_TRUE;
                break;
            }
        }
    }

    return MA_SUCCESS;
}

static ma_result ma_context_get_device_info__sdl(ma_context* pContext, ma_device_type deviceType, const ma_device_id* pDeviceID, ma_device_info* pDeviceInfo)
{
    ma_context_ex* pContextEx = (ma_context_ex*)pContext;

#if !defined(__EMSCRIPTEN__)
    MA_SDL_AudioSpec desiredSpec;
    MA_SDL_AudioSpec obtainedSpec;
    MA_SDL_AudioDeviceID tempDeviceID;
    const char* pDeviceName;
#endif

    if (pDeviceID == NULL) {
        if (deviceType == ma_device_type_playback) {
            pDeviceInfo->id.custom.i = 0;
            ma_strncpy_s(pDeviceInfo->name, sizeof(pDeviceInfo->name), MA_DEFAULT_PLAYBACK_DEVICE_NAME, (size_t)-1);
        } else {
            pDeviceInfo->id.custom.i = 0;
            ma_strncpy_s(pDeviceInfo->name, sizeof(pDeviceInfo->name), MA_DEFAULT_CAPTURE_DEVICE_NAME, (size_t)-1);
        }
    } else {
        pDeviceInfo->id.custom.i = pDeviceID->custom.i;
        ma_strncpy_s(pDeviceInfo->name, sizeof(pDeviceInfo->name), ((MA_PFN_SDL_GetAudioDeviceName)pContextEx->sdl.SDL_GetAudioDeviceName)(pDeviceID->custom.i, (deviceType == ma_device_type_playback) ? 0 : 1), (size_t)-1);
    }

    if (pDeviceInfo->id.custom.i == 0) {
        pDeviceInfo->isDefault = MA_TRUE;
    }

    /*
    To get an accurate idea of the backend's native format we need to open the device. Not ideal, but it's the only way. An
    alternative to this is to report all channel counts, sample rates and formats, but that doesn't offer a good representation
    of the device's _actual_ ideal format.

    Note: With Emscripten, it looks like non-zero values need to be specified for desiredSpec. Whatever is specified in
    desiredSpec will be used by SDL since it just does its own format conversion internally. Therefore, from what
    I can tell, there's no real way to know the device's actual format which means I'm just going to fall back to the full
    range of channels and sample rates on Emscripten builds.
    */
#if defined(__EMSCRIPTEN__)
    /* Good practice to prioritize the best format first so that the application can use the first data format as their chosen one if desired. */
    pDeviceInfo->nativeDataFormatCount = 3;
    pDeviceInfo->nativeDataFormats[0].format     = ma_format_s16;
    pDeviceInfo->nativeDataFormats[0].channels   = 0;   /* All channel counts supported. */
    pDeviceInfo->nativeDataFormats[0].sampleRate = 0;   /* All sample rates supported. */
    pDeviceInfo->nativeDataFormats[0].flags      = 0;
    pDeviceInfo->nativeDataFormats[1].format     = ma_format_s32;
    pDeviceInfo->nativeDataFormats[1].channels   = 0;   /* All channel counts supported. */
    pDeviceInfo->nativeDataFormats[1].sampleRate = 0;   /* All sample rates supported. */
    pDeviceInfo->nativeDataFormats[1].flags      = 0;
    pDeviceInfo->nativeDataFormats[2].format     = ma_format_u8;
    pDeviceInfo->nativeDataFormats[2].channels   = 0;   /* All channel counts supported. */
    pDeviceInfo->nativeDataFormats[2].sampleRate = 0;   /* All sample rates supported. */
    pDeviceInfo->nativeDataFormats[2].flags      = 0;
#else
    MA_ZERO_MEMORY(&desiredSpec, sizeof(desiredSpec));

    pDeviceName = NULL;
    if (pDeviceID != NULL) {
        pDeviceName = ((MA_PFN_SDL_GetAudioDeviceName)pContextEx->sdl.SDL_GetAudioDeviceName)(pDeviceID->custom.i, (deviceType == ma_device_type_playback) ? 0 : 1);
    }

    tempDeviceID = ((MA_PFN_SDL_OpenAudioDevice)pContextEx->sdl.SDL_OpenAudioDevice)(pDeviceName, (deviceType == ma_device_type_playback) ? 0 : 1, &desiredSpec, &obtainedSpec, MA_SDL_AUDIO_ALLOW_ANY_CHANGE);
    if (tempDeviceID == 0) {
        ma_log_postf(ma_context_get_log(pContext), MA_LOG_LEVEL_ERROR, "Failed to open SDL device.");
        return MA_FAILED_TO_OPEN_BACKEND_DEVICE;
    }

    ((MA_PFN_SDL_CloseAudioDevice)pContextEx->sdl.SDL_CloseAudioDevice)(tempDeviceID);

    /* Only reporting a single native data format. It'll be whatever SDL decides is the best. */
    pDeviceInfo->nativeDataFormatCount = 1;
    pDeviceInfo->nativeDataFormats[0].format     = ma_format_from_sdl(obtainedSpec.format);
    pDeviceInfo->nativeDataFormats[0].channels   = obtainedSpec.channels;
    pDeviceInfo->nativeDataFormats[0].sampleRate = obtainedSpec.freq;
    pDeviceInfo->nativeDataFormats[0].flags      = 0;

    /* If miniaudio does not support the format, just use f32 as the native format (SDL will do the necessary conversions for us). */
    if (pDeviceInfo->nativeDataFormats[0].format == ma_format_unknown) {
        pDeviceInfo->nativeDataFormats[0].format = ma_format_f32;
    }
#endif  /* __EMSCRIPTEN__ */

    return MA_SUCCESS;
}


void ma_audio_callback_capture__sdl(void* pUserData, ma_uint8* pBuffer, int bufferSizeInBytes)
{
    ma_device_ex* pDeviceEx = (ma_device_ex*)pUserData;

    ma_device_handle_backend_data_callback((ma_device*)pDeviceEx, NULL, pBuffer, (ma_uint32)bufferSizeInBytes / ma_get_bytes_per_frame(pDeviceEx->device.capture.internalFormat, pDeviceEx->device.capture.internalChannels));
}

void ma_audio_callback_playback__sdl(void* pUserData, ma_uint8* pBuffer, int bufferSizeInBytes)
{
    ma_device_ex* pDeviceEx = (ma_device_ex*)pUserData;

    ma_device_handle_backend_data_callback((ma_device*)pDeviceEx, pBuffer, NULL, (ma_uint32)bufferSizeInBytes / ma_get_bytes_per_frame(pDeviceEx->device.playback.internalFormat, pDeviceEx->device.playback.internalChannels));
}

static ma_result ma_device_init_internal__sdl(ma_device_ex* pDeviceEx, const ma_device_config* pConfig, ma_device_descriptor* pDescriptor)
{
    ma_context_ex* pContextEx = (ma_context_ex*)pDeviceEx->device.pContext;
    MA_SDL_AudioSpec desiredSpec;
    MA_SDL_AudioSpec obtainedSpec;
    const char* pDeviceName;
    int deviceID;

    /*
    SDL is a little bit awkward with specifying the buffer size. You need to specify the size of the buffer in frames, but since we may
    have requested a period size in milliseconds we'll need to convert, which depends on the sample rate. But there's a possibility that
    the sample rate is just set to 0, which indicates that the native sample rate should be used. There's no practical way to calculate this
    that I can think of right now so I'm just using MA_DEFAULT_SAMPLE_RATE.
    */
    if (pDescriptor->sampleRate == 0) {
        pDescriptor->sampleRate = MA_DEFAULT_SAMPLE_RATE;
    }

    /*
    When determining the period size, you need to take defaults into account. This is how the size of the period should be determined.

        1) If periodSizeInFrames is not 0, use periodSizeInFrames; else
        2) If periodSizeInMilliseconds is not 0, use periodSizeInMilliseconds; else
        3) If both periodSizeInFrames and periodSizeInMilliseconds are 0, use the backend's default. If the backend does not allow a default
           buffer size, use a default value of MA_DEFAULT_PERIOD_SIZE_IN_MILLISECONDS_LOW_LATENCY or
           MA_DEFAULT_PERIOD_SIZE_IN_MILLISECONDS_CONSERVATIVE depending on the value of pConfig->performanceProfile.

    Note that options 2 and 3 require knowledge of the sample rate in order to convert it to a frame count. You should try to keep the
    calculation of the period size as accurate as possible, but sometimes it's just not practical so just use whatever you can.

    A helper function called ma_calculate_buffer_size_in_frames_from_descriptor() is available to do all of this for you which is what
    we'll be using here.
    */
    pDescriptor->periodSizeInFrames = ma_calculate_buffer_size_in_frames_from_descriptor(pDescriptor, pDescriptor->sampleRate, pConfig->performanceProfile);

    /* SDL wants the buffer size to be a power of 2 for some reason. */
    if (pDescriptor->periodSizeInFrames > 32768) {
        pDescriptor->periodSizeInFrames = 32768;
    } else {
        pDescriptor->periodSizeInFrames = ma_next_power_of_2(pDescriptor->periodSizeInFrames);
    }

    /* We now have enough information to set up the device. */
    MA_ZERO_OBJECT(&desiredSpec);
    desiredSpec.freq     = (int)pDescriptor->sampleRate;
    desiredSpec.format   = ma_format_to_sdl(pDescriptor->format);
    desiredSpec.channels = (ma_uint8)pDescriptor->channels;
    desiredSpec.samples  = (ma_uint16)pDescriptor->periodSizeInFrames;
    desiredSpec.callback = (pConfig->deviceType == ma_device_type_capture) ? ma_audio_callback_capture__sdl : ma_audio_callback_playback__sdl;
    desiredSpec.userdata = pDeviceEx;

    /* We'll fall back to f32 if we don't have an appropriate mapping between SDL and miniaudio. */
    if (desiredSpec.format == 0) {
        desiredSpec.format = MA_AUDIO_F32;
    }

    pDeviceName = NULL;
    if (pDescriptor->pDeviceID != NULL) {
        pDeviceName = ((MA_PFN_SDL_GetAudioDeviceName)pContextEx->sdl.SDL_GetAudioDeviceName)(pDescriptor->pDeviceID->custom.i, (pConfig->deviceType == ma_device_type_playback) ? 0 : 1);
    }

    deviceID = ((MA_PFN_SDL_OpenAudioDevice)pContextEx->sdl.SDL_OpenAudioDevice)(pDeviceName, (pConfig->deviceType == ma_device_type_playback) ? 0 : 1, &desiredSpec, &obtainedSpec, MA_SDL_AUDIO_ALLOW_ANY_CHANGE);
    if (deviceID == 0) {
        ma_log_postf(ma_device_get_log((ma_device*)pDeviceEx), MA_LOG_LEVEL_ERROR, "Failed to open SDL2 device.");
        return MA_FAILED_TO_OPEN_BACKEND_DEVICE;
    }

    if (pConfig->deviceType == ma_device_type_playback) {
        pDeviceEx->sdl.deviceIDPlayback = deviceID;
    } else {
        pDeviceEx->sdl.deviceIDCapture = deviceID;
    }

    /* The descriptor needs to be updated with our actual settings. */
    pDescriptor->format             = ma_format_from_sdl(obtainedSpec.format);
    pDescriptor->channels           = obtainedSpec.channels;
    pDescriptor->sampleRate         = (ma_uint32)obtainedSpec.freq;
    ma_channel_map_init_standard(ma_standard_channel_map_default, pDescriptor->channelMap, ma_countof(pDescriptor->channelMap), pDescriptor->channels);
    pDescriptor->periodSizeInFrames = obtainedSpec.samples;
    pDescriptor->periodCount        = 1;    /* SDL doesn't use the notion of period counts, so just set to 1. */

    return MA_SUCCESS;
}

static ma_result ma_device_init__sdl(ma_device* pDevice, const ma_device_config* pConfig, ma_device_descriptor* pDescriptorPlayback, ma_device_descriptor* pDescriptorCapture)
{
    ma_device_ex* pDeviceEx = (ma_device_ex*)pDevice;
    ma_context_ex* pContextEx = (ma_context_ex*)pDevice->pContext;
    ma_result result;

    /* SDL does not support loopback mode, so we must return MA_DEVICE_TYPE_NOT_SUPPORTED if it's requested. */
    if (pConfig->deviceType == ma_device_type_loopback) {
        return MA_DEVICE_TYPE_NOT_SUPPORTED;
    }

    if (pConfig->deviceType == ma_device_type_capture || pConfig->deviceType == ma_device_type_duplex) {
        result = ma_device_init_internal__sdl(pDeviceEx, pConfig, pDescriptorCapture);
        if (result != MA_SUCCESS) {
            return result;
        }
    }

    if (pConfig->deviceType == ma_device_type_playback || pConfig->deviceType == ma_device_type_duplex) {
        result = ma_device_init_internal__sdl(pDeviceEx, pConfig, pDescriptorPlayback);
        if (result != MA_SUCCESS) {
            if (pConfig->deviceType == ma_device_type_duplex) {
                ((MA_PFN_SDL_CloseAudioDevice)pContextEx->sdl.SDL_CloseAudioDevice)(pDeviceEx->sdl.deviceIDCapture);
            }

            return result;
        }
    }

    return MA_SUCCESS;
}

static ma_result ma_device_uninit__sdl(ma_device* pDevice)
{
    ma_device_ex* pDeviceEx = (ma_device_ex*)pDevice;
    ma_context_ex* pContextEx = (ma_context_ex*)pDevice->pContext;

    if (pDevice->type == ma_device_type_capture || pDevice->type == ma_device_type_duplex) {
        ((MA_PFN_SDL_CloseAudioDevice)pContextEx->sdl.SDL_CloseAudioDevice)(pDeviceEx->sdl.deviceIDCapture);
    }

    if (pDevice->type == ma_device_type_playback || pDevice->type == ma_device_type_duplex) {
        ((MA_PFN_SDL_CloseAudioDevice)pContextEx->sdl.SDL_CloseAudioDevice)(pDeviceEx->sdl.deviceIDPlayback);
    }

    return MA_SUCCESS;
}

static ma_result ma_device_start__sdl(ma_device* pDevice)
{
    ma_device_ex* pDeviceEx = (ma_device_ex*)pDevice;
    ma_context_ex* pContextEx = (ma_context_ex*)pDevice->pContext;

    if (pDevice->type == ma_device_type_capture || pDevice->type == ma_device_type_duplex) {
        ((MA_PFN_SDL_PauseAudioDevice)pContextEx->sdl.SDL_PauseAudioDevice)(pDeviceEx->sdl.deviceIDCapture, 0);
    }

    if (pDevice->type == ma_device_type_playback || pDevice->type == ma_device_type_duplex) {
        ((MA_PFN_SDL_PauseAudioDevice)pContextEx->sdl.SDL_PauseAudioDevice)(pDeviceEx->sdl.deviceIDPlayback, 0);
    }

    return MA_SUCCESS;
}

static ma_result ma_device_stop__sdl(ma_device* pDevice)
{
    ma_device_ex* pDeviceEx = (ma_device_ex*)pDevice;
    ma_context_ex* pContextEx = (ma_context_ex*)pDevice->pContext;

    if (pDevice->type == ma_device_type_capture || pDevice->type == ma_device_type_duplex) {
        ((MA_PFN_SDL_PauseAudioDevice)pContextEx->sdl.SDL_PauseAudioDevice)(pDeviceEx->sdl.deviceIDCapture, 1);
    }

    if (pDevice->type == ma_device_type_playback || pDevice->type == ma_device_type_duplex) {
        ((MA_PFN_SDL_PauseAudioDevice)pContextEx->sdl.SDL_PauseAudioDevice)(pDeviceEx->sdl.deviceIDPlayback, 1);
    }

    return MA_SUCCESS;
}

static ma_result ma_context_uninit__sdl(ma_context* pContext)
{
    ma_context_ex* pContextEx = (ma_context_ex*)pContext;

    ((MA_PFN_SDL_QuitSubSystem)pContextEx->sdl.SDL_QuitSubSystem)(MA_SDL_INIT_AUDIO);

    /* Close the handle to the SDL shared object last. */
    ma_dlclose(ma_context_get_log(pContext), pContextEx->sdl.hSDL);
    pContextEx->sdl.hSDL = NULL;

    return MA_SUCCESS;
}

static ma_result ma_context_init__sdl(ma_context* pContext, const ma_context_config* pConfig, ma_backend_callbacks* pCallbacks)
{
    ma_context_ex* pContextEx = (ma_context_ex*)pContext;
    int resultSDL;

#ifndef MA_NO_RUNTIME_LINKING
    /* We'll use a list of possible shared object names for easier extensibility. */
    size_t iName;
    const char* pSDLNames[] = {
#if defined(_WIN32)
        "SDL2.dll"
#elif defined(__APPLE__)
        "SDL2.framework/SDL2"
#else
        "libSDL2-2.0.so.0"
#endif
    };

    (void)pConfig;

    /* Check if we have SDL2 installed somewhere. If not it's not usable and we need to abort. */
    for (iName = 0; iName < ma_countof(pSDLNames); iName += 1) {
        pContextEx->sdl.hSDL = ma_dlopen(ma_context_get_log(pContext), pSDLNames[iName]);
        if (pContextEx->sdl.hSDL != NULL) {
            break;
        }
    }

    if (pContextEx->sdl.hSDL == NULL) {
        return MA_NO_BACKEND;   /* SDL2 could not be loaded. */
    }

    /* Now that we have the handle to the shared object we can go ahead and load some function pointers. */
    pContextEx->sdl.SDL_InitSubSystem      = ma_dlsym(ma_context_get_log(pContext), pContextEx->sdl.hSDL, "SDL_InitSubSystem");
    pContextEx->sdl.SDL_QuitSubSystem      = ma_dlsym(ma_context_get_log(pContext), pContextEx->sdl.hSDL, "SDL_QuitSubSystem");
    pContextEx->sdl.SDL_GetNumAudioDevices = ma_dlsym(ma_context_get_log(pContext), pContextEx->sdl.hSDL, "SDL_GetNumAudioDevices");
    pContextEx->sdl.SDL_GetAudioDeviceName = ma_dlsym(ma_context_get_log(pContext), pContextEx->sdl.hSDL, "SDL_GetAudioDeviceName");
    pContextEx->sdl.SDL_CloseAudioDevice   = ma_dlsym(ma_context_get_log(pContext), pContextEx->sdl.hSDL, "SDL_CloseAudioDevice");
    pContextEx->sdl.SDL_OpenAudioDevice    = ma_dlsym(ma_context_get_log(pContext), pContextEx->sdl.hSDL, "SDL_OpenAudioDevice");
    pContextEx->sdl.SDL_PauseAudioDevice   = ma_dlsym(ma_context_get_log(pContext), pContextEx->sdl.hSDL, "SDL_PauseAudioDevice");
#else
    pContextEx->sdl.SDL_InitSubSystem      = (ma_proc)SDL_InitSubSystem;
    pContextEx->sdl.SDL_QuitSubSystem      = (ma_proc)SDL_QuitSubSystem;
    pContextEx->sdl.SDL_GetNumAudioDevices = (ma_proc)SDL_GetNumAudioDevices;
    pContextEx->sdl.SDL_GetAudioDeviceName = (ma_proc)SDL_GetAudioDeviceName;
    pContextEx->sdl.SDL_CloseAudioDevice   = (ma_proc)SDL_CloseAudioDevice;
    pContextEx->sdl.SDL_OpenAudioDevice    = (ma_proc)SDL_OpenAudioDevice;
    pContextEx->sdl.SDL_PauseAudioDevice   = (ma_proc)SDL_PauseAudioDevice;
#endif  /* MA_NO_RUNTIME_LINKING */

    resultSDL = ((MA_PFN_SDL_InitSubSystem)pContextEx->sdl.SDL_InitSubSystem)(MA_SDL_INIT_AUDIO);
    if (resultSDL != 0) {
        ma_dlclose(ma_context_get_log(pContext), pContextEx->sdl.hSDL);
        return MA_ERROR;
    }

    /*
    The last step is to make sure the callbacks are set properly in `pCallbacks`. Internally, miniaudio will copy these callbacks into the
    context object and then use them from then on for calling into our custom backend.
    */
    pCallbacks->onContextInit             = ma_context_init__sdl;
    pCallbacks->onContextUninit           = ma_context_uninit__sdl;
    pCallbacks->onContextEnumerateDevices = ma_context_enumerate_devices__sdl;
    pCallbacks->onContextGetDeviceInfo    = ma_context_get_device_info__sdl;
    pCallbacks->onDeviceInit              = ma_device_init__sdl;
    pCallbacks->onDeviceUninit            = ma_device_uninit__sdl;
    pCallbacks->onDeviceStart             = ma_device_start__sdl;
    pCallbacks->onDeviceStop              = ma_device_stop__sdl;

    return MA_SUCCESS;
}
#endif  /* MA_HAS_SDL */


/*
This is our custom backend "loader". All this does is attempt to initialize our custom backends in the order they are listed. The first
one to successfully initialize is the one that's chosen. In this example we're just listing them statically, but you can use whatever logic
you want to handle backend selection.

This is used as the onContextInit() callback in the context config.
*/
static ma_result ma_context_init__custom_loader(ma_context* pContext, const ma_context_config* pConfig, ma_backend_callbacks* pCallbacks)
{
    ma_result result = MA_NO_BACKEND;

    /* Silence some unused parameter warnings just in case no custom backends are enabled. */
    (void)pContext;
    (void)pCallbacks;

    /* SDL. */
#if !defined(MA_NO_SDL)
    if (result != MA_SUCCESS) {
        result = ma_context_init__sdl(pContext, pConfig, pCallbacks);
    }
#endif

    /* ... plug in any other custom backends here ... */

    /* If we have a success result we have initialized a backend. Otherwise we need to tell miniaudio about the error so it can skip over our custom backends. */
    return result;
}


/*
Main program starts here.
*/
#define DEVICE_FORMAT       ma_format_f32
#define DEVICE_CHANNELS     2
#define DEVICE_SAMPLE_RATE  48000

void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    if (pDevice->type == ma_device_type_playback) {
        ma_waveform_read_pcm_frames((ma_waveform*)pDevice->pUserData, pOutput, frameCount, NULL);
    }

    if (pDevice->type == ma_device_type_duplex) {
        ma_copy_pcm_frames(pOutput, pInput, frameCount, pDevice->playback.format, pDevice->playback.channels);
    }
}

int main(int argc, char** argv)
{
    ma_result result;
    ma_context_config contextConfig;
    ma_context_ex context;
    ma_device_config deviceConfig;
    ma_device_ex device;
    ma_waveform_config sineWaveConfig;
    ma_waveform sineWave;

    /*
    We're just using ma_backend_custom in this example for demonstration purposes, but a more realistic use case would probably want to include
    other backends as well for robustness.
    */
    ma_backend backends[] = {
        ma_backend_custom
    };

    /*
    To implement a custom backend you need to implement the callbacks in the "custom" member of the context config. The only mandatory
    callback required at this point is the onContextInit() callback. If you do not set the other callbacks, you must set them in
    onContextInit() by setting them on the `pCallbacks` parameter.

    The way we're doing it in this example enables us to easily plug in multiple custom backends. What we do is set the onContextInit()
    callback to a generic "loader" function (ma_context_init__custom_loader() in this example), which then calls out to backend-specific
    context initialization routines, one of which will be for SDL. That way, if for example we wanted to add support for another backend,
    we don't need to touch this part of the code. Instead we add logic to ma_context_init__custom_loader() to choose the most appropriate
    custom backend. That will then fill out the other callbacks appropriately.
    */
    contextConfig = ma_context_config_init();
    contextConfig.custom.onContextInit = ma_context_init__custom_loader;

    result = ma_context_init(backends, sizeof(backends)/sizeof(backends[0]), &contextConfig, (ma_context*)&context);
    if (result != MA_SUCCESS) {
        return -1;
    }

    /* In playback mode we're just going to play a sine wave. */
    sineWaveConfig = ma_waveform_config_init(DEVICE_FORMAT, DEVICE_CHANNELS, DEVICE_SAMPLE_RATE, ma_waveform_type_sine, 0.2, 220);
    ma_waveform_init(&sineWaveConfig, &sineWave);

    /* The device is created exactly as per normal. */
    deviceConfig = ma_device_config_init(ma_device_type_playback);
    deviceConfig.playback.format   = DEVICE_FORMAT;
    deviceConfig.playback.channels = DEVICE_CHANNELS;
    deviceConfig.capture.format    = DEVICE_FORMAT;
    deviceConfig.capture.channels  = DEVICE_CHANNELS;
    deviceConfig.sampleRate        = DEVICE_SAMPLE_RATE;
    deviceConfig.dataCallback      = data_callback;
    deviceConfig.pUserData         = &sineWave;

    result = ma_device_init((ma_context*)&context, &deviceConfig, (ma_device*)&device);
    if (result != MA_SUCCESS) {
        ma_context_uninit((ma_context*)&context);
        return -1;
    }

    printf("Device Name: %s\n", ((ma_device*)&device)->playback.name);

    if (ma_device_start((ma_device*)&device) != MA_SUCCESS) {
        ma_device_uninit((ma_device*)&device);
        ma_context_uninit((ma_context*)&context);
        return -5;
    }

#ifdef __EMSCRIPTEN__
    emscripten_set_main_loop(main_loop__em, 0, 1);
#else
    printf("Press Enter to quit...\n");
    getchar();
#endif

    ma_device_uninit((ma_device*)&device);
    ma_context_uninit((ma_context*)&context);

    (void)argc;
    (void)argv;

    return 0;
}
115
thirdparty/miniaudio-0.11.24/examples/custom_decoder.c
vendored
Normal file
@@ -0,0 +1,115 @@
/*
Demonstrates how to implement a custom decoder.

This example implements two custom decoders:

  * Vorbis via libvorbis
  * Opus via libopus

A custom decoder must implement a data source. In this example, the libvorbis data source is called
`ma_libvorbis` and the Opus data source is called `ma_libopus`. These two objects are compatible
with the `ma_data_source` APIs and can be taken straight from this example and used in real code.

The custom decoding data sources (`ma_libvorbis` and `ma_libopus` in this example) are connected to
the decoder via the decoder config (`ma_decoder_config`). You need to implement a vtable for each
of your custom decoders. See `ma_decoding_backend_vtable` for the functions you need to implement.
The `onInitFile`, `onInitFileW` and `onInitMemory` functions are optional.
*/
#include "../miniaudio.c"
#include "../extras/decoders/libvorbis/miniaudio_libvorbis.c"
#include "../extras/decoders/libopus/miniaudio_libopus.c"

#include <stdio.h>


void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    ma_data_source* pDataSource = (ma_data_source*)pDevice->pUserData;
    if (pDataSource == NULL) {
        return;
    }

    ma_data_source_read_pcm_frames(pDataSource, pOutput, frameCount, NULL);

    (void)pInput;
}

int main(int argc, char** argv)
{
    ma_result result;
    ma_decoder_config decoderConfig;
    ma_decoder decoder;
    ma_device_config deviceConfig;
    ma_device device;
    ma_format format;
    ma_uint32 channels;
    ma_uint32 sampleRate;

    /*
    Add your custom backend vtables here. The order in the array defines the order of priority. The
    vtables will be passed in via the decoder config.
    */
    ma_decoding_backend_vtable* pCustomBackendVTables[] =
    {
        ma_decoding_backend_libvorbis,
        ma_decoding_backend_libopus
    };


    if (argc < 2) {
        printf("No input file.\n");
        return -1;
    }


    /* Initialize the decoder. */
    decoderConfig = ma_decoder_config_init_default();
    decoderConfig.pCustomBackendUserData = NULL;    /* None of our decoders require user data, so this can be set to null. */
    decoderConfig.ppCustomBackendVTables = pCustomBackendVTables;
    decoderConfig.customBackendCount     = sizeof(pCustomBackendVTables) / sizeof(pCustomBackendVTables[0]);

    result = ma_decoder_init_file(argv[1], &decoderConfig, &decoder);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize decoder.");
        return -1;
    }

    ma_data_source_set_looping(&decoder, MA_TRUE);


    /* Initialize the device. */
    result = ma_data_source_get_data_format(&decoder, &format, &channels, &sampleRate, NULL, 0);
    if (result != MA_SUCCESS) {
        printf("Failed to retrieve decoder data format.");
        ma_decoder_uninit(&decoder);
        return -1;
    }

    deviceConfig = ma_device_config_init(ma_device_type_playback);
    deviceConfig.playback.format   = format;
    deviceConfig.playback.channels = channels;
    deviceConfig.sampleRate        = sampleRate;
    deviceConfig.dataCallback      = data_callback;
    deviceConfig.pUserData         = &decoder;

    if (ma_device_init(NULL, &deviceConfig, &device) != MA_SUCCESS) {
        printf("Failed to open playback device.\n");
        ma_decoder_uninit(&decoder);
        return -1;
    }

    if (ma_device_start(&device) != MA_SUCCESS) {
        printf("Failed to start playback device.\n");
        ma_device_uninit(&device);
        ma_decoder_uninit(&decoder);
        return -1;
    }

    printf("Press Enter to quit...");
    getchar();

    ma_device_uninit(&device);
    ma_decoder_uninit(&decoder);

    return 0;
}
79
thirdparty/miniaudio-0.11.24/examples/custom_decoder_engine.c
vendored
Normal file
@@ -0,0 +1,79 @@
/*
Demonstrates how to implement a custom decoder and use it with the high level API.

This is the same as the custom_decoder example, only it's used with the high level engine API
rather than the low level decoding API. You can use this to add support for Opus to your games, for
example (via libopus).
*/
#include "../miniaudio.c"
#include "../extras/decoders/libvorbis/miniaudio_libvorbis.c"
#include "../extras/decoders/libopus/miniaudio_libopus.c"

#include <stdio.h>


int main(int argc, char** argv)
{
    ma_result result;
    ma_resource_manager_config resourceManagerConfig;
    ma_resource_manager resourceManager;
    ma_engine_config engineConfig;
    ma_engine engine;

    /*
    Add your custom backend vtables here. The order in the array defines the order of priority. The
    vtables will be passed in to the resource manager config.
    */
    ma_decoding_backend_vtable* pCustomBackendVTables[] =
    {
        ma_decoding_backend_libvorbis,
        ma_decoding_backend_libopus
    };


    if (argc < 2) {
        printf("No input file.\n");
        return -1;
    }


    /* Using custom decoding backends requires a resource manager. */
    resourceManagerConfig = ma_resource_manager_config_init();
    resourceManagerConfig.ppCustomDecodingBackendVTables = pCustomBackendVTables;
    resourceManagerConfig.customDecodingBackendCount     = sizeof(pCustomBackendVTables) / sizeof(pCustomBackendVTables[0]);
    resourceManagerConfig.pCustomDecodingBackendUserData = NULL;    /* <-- This will be passed in to the pUserData parameter of each function in the decoding backend vtables. */

    result = ma_resource_manager_init(&resourceManagerConfig, &resourceManager);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize resource manager.");
        return -1;
    }


    /* Once we have a resource manager we can create the engine. */
    engineConfig = ma_engine_config_init();
    engineConfig.pResourceManager = &resourceManager;

    result = ma_engine_init(&engineConfig, &engine);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize engine.");
        return -1;
    }


    /* Now we can play our sound. */
    result = ma_engine_play_sound(&engine, argv[1], NULL);
    if (result != MA_SUCCESS) {
        printf("Failed to play sound.");
        return -1;
    }


    printf("Press Enter to quit...");
    getchar();

    ma_engine_uninit(&engine);
    ma_resource_manager_uninit(&resourceManager);

    return 0;
}
161
thirdparty/miniaudio-0.11.24/examples/data_source_chaining.c
vendored
Normal file
@@ -0,0 +1,161 @@
/*
Demonstrates one way to chain together a number of data sources so they play back seamlessly
without gaps.

This example uses the chaining system built into the `ma_data_source` API. It will take every sound
passed onto the command line in order, and then loop back and start again. When looping a chain of
data sources, you need only link the last data source back to the first one.

To play a chain of data sources, you first need to set up your chain. To set the data source that
should be played after another, you have two options:

  * Set a pointer to a specific data source
  * Set a callback that will fire when the next data source needs to be retrieved

The first option is good for simple scenarios. The second option is useful if you need to perform
some action when the end of a sound is reached. This example will be using both.

When reading data from a chain, you always read from the head data source. Internally miniaudio
will track a pointer to the data source in the chain that is currently playing. If you don't
consistently read from the head data source this state will become inconsistent and things won't
work correctly. When using a chain, this pointer needs to be reset if you need to play the
chain again from the start:

    ```c
    ma_data_source_set_current(&headDataSource, &headDataSource);
    ma_data_source_seek_to_pcm_frame(&headDataSource, 0);
    ```

The code above is setting the "current" data source in the chain to the head data source, thereby
starting the chain from the start again. It is also seeking the head data source back to the start
so that playback starts from the start as expected. You do not need to seek non-head items back to
the start as miniaudio will do that for you internally.
*/
#include "../miniaudio.c"

#include <stdio.h>

/*
For simplicity, this example requires the device to use floating point samples.
*/
#define SAMPLE_FORMAT   ma_format_f32
#define CHANNEL_COUNT   2
#define SAMPLE_RATE     48000

ma_uint32   g_decoderCount;
ma_decoder* g_pDecoders;

static ma_data_source* next_callback_tail(ma_data_source* pDataSource)
{
    (void)pDataSource;  /* Unused. */

    if (g_decoderCount == 0) {  /* <-- We check for this in main() so should never happen. */
        return NULL;
    }

    /*
    This will be fired when the last item in the chain has reached the end. In this example we want
    to loop back to the start, so we need only return a pointer back to the head.
    */
    return &g_pDecoders[0];
}

static void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    /*
    We can just read from the first decoder and miniaudio will resolve the chain for us. Note that
    if you want to loop the chain, like we're doing in this example, you need to set the `loop`
    parameter to false, or else only the current data source will be looped.
    */
    ma_data_source_read_pcm_frames(&g_pDecoders[0], pOutput, frameCount, NULL);

    /* Unused in this example. */
    (void)pDevice;
    (void)pInput;
}

int main(int argc, char** argv)
{
    ma_result result = MA_SUCCESS;
    ma_uint32 iDecoder;
    ma_decoder_config decoderConfig;
    ma_device_config deviceConfig;
    ma_device device;

    if (argc < 2) {
        printf("No input files.\n");
        return -1;
    }

    g_decoderCount = argc-1;
    g_pDecoders    = (ma_decoder*)malloc(sizeof(*g_pDecoders) * g_decoderCount);

    /* In this example, all decoders need to have the same output format. */
    decoderConfig = ma_decoder_config_init(SAMPLE_FORMAT, CHANNEL_COUNT, SAMPLE_RATE);
    for (iDecoder = 0; iDecoder < g_decoderCount; ++iDecoder) {
        result = ma_decoder_init_file(argv[1+iDecoder], &decoderConfig, &g_pDecoders[iDecoder]);
        if (result != MA_SUCCESS) {
            ma_uint32 iDecoder2;
            for (iDecoder2 = 0; iDecoder2 < iDecoder; ++iDecoder2) {
                ma_decoder_uninit(&g_pDecoders[iDecoder2]);
            }
            free(g_pDecoders);

            printf("Failed to load %s.\n", argv[1+iDecoder]);
            return -1;
        }
    }

    /*
    We're going to set up our decoders to run one after the other, but then have the last one loop back
    to the first one. For demonstration purposes we're going to use the callback method for the last
    data source.
    */
    for (iDecoder = 0; iDecoder < g_decoderCount-1; iDecoder += 1) {
        ma_data_source_set_next(&g_pDecoders[iDecoder], &g_pDecoders[iDecoder+1]);
    }

    /*
    For the last data source we'll loop back to the start, but for demonstration purposes we'll use a
    callback to determine the next data source in the chain.
    */
    ma_data_source_set_next_callback(&g_pDecoders[g_decoderCount-1], next_callback_tail);


    /*
    The data source chain has been established so now we can get the device up and running so we
    can listen to it.
    */
    deviceConfig = ma_device_config_init(ma_device_type_playback);
    deviceConfig.playback.format   = SAMPLE_FORMAT;
    deviceConfig.playback.channels = CHANNEL_COUNT;
    deviceConfig.sampleRate        = SAMPLE_RATE;
    deviceConfig.dataCallback      = data_callback;
    deviceConfig.pUserData         = NULL;

    result = ma_device_init(NULL, &deviceConfig, &device);
    if (result != MA_SUCCESS) {
        printf("Failed to open playback device.\n");
        goto done_decoders;
    }

    result = ma_device_start(&device);
    if (result != MA_SUCCESS) {
        printf("Failed to start playback device.\n");
        goto done;
    }

    printf("Press Enter to quit...");
    getchar();

done:
    ma_device_uninit(&device);

done_decoders:
    for (iDecoder = 0; iDecoder < g_decoderCount; ++iDecoder) {
        ma_decoder_uninit(&g_pDecoders[iDecoder]);
    }
    free(g_pDecoders);

    return 0;
}
152
thirdparty/miniaudio-0.11.24/examples/duplex_effect.c
vendored
Normal file
@@ -0,0 +1,152 @@
/*
Demonstrates how to apply an effect to a duplex stream using the node graph system.

This example applies a vocoder effect to the input stream before outputting it. A custom node
called `ma_vocoder_node` is used to achieve the effect which can be found in the extras folder in
the miniaudio repository. The vocoder node uses https://github.com/blastbay/voclib to achieve the
effect.
*/
#include "../miniaudio.c"
#include "../extras/nodes/ma_vocoder_node/ma_vocoder_node.c"

#include <stdio.h>

#define DEVICE_FORMAT   ma_format_f32   /* Must always be f32 for this example because the node graph system only works with this. */
#define DEVICE_CHANNELS 1               /* For this example, always set to 1. */

static ma_waveform g_sourceData;            /* The underlying data source of the source node. */
static ma_audio_buffer_ref g_exciteData;    /* The underlying data source of the excite node. */
static ma_data_source_node g_sourceNode;    /* A data source node containing the source data we'll be sending through to the vocoder. This will be routed into the first bus of the vocoder node. */
static ma_data_source_node g_exciteNode;    /* A data source node containing the excite data we'll be sending through to the vocoder. This will be routed into the second bus of the vocoder node. */
static ma_vocoder_node g_vocoderNode;       /* The vocoder node. */
static ma_node_graph g_nodeGraph;

void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    /*
    This example assumes the playback and capture sides use the same format and channel count. The
    format must be f32.
    */
    if (pDevice->capture.format != DEVICE_FORMAT || pDevice->playback.format != DEVICE_FORMAT || pDevice->capture.channels != pDevice->playback.channels) {
        return;
    }

    /*
    The node graph system is a pulling style of API. At the lowest level of the chain will be a
    node acting as a data source for the purpose of delivering the initial audio data. In our case,
    the data source is our `pInput` buffer. We need to update the underlying data source so that it
    reads data from `pInput`.
    */
    ma_audio_buffer_ref_set_data(&g_exciteData, pInput, frameCount);

    /* With the source buffer configured we can now read directly from the node graph. */
    ma_node_graph_read_pcm_frames(&g_nodeGraph, pOutput, frameCount, NULL);
}

int main(int argc, char** argv)
{
    ma_result result;
    ma_device_config deviceConfig;
    ma_device device;
    ma_node_graph_config nodeGraphConfig;
    ma_vocoder_node_config vocoderNodeConfig;
    ma_data_source_node_config sourceNodeConfig;
    ma_data_source_node_config exciteNodeConfig;
    ma_waveform_config waveformConfig;

    deviceConfig = ma_device_config_init(ma_device_type_duplex);
    deviceConfig.capture.pDeviceID  = NULL;
    deviceConfig.capture.format     = DEVICE_FORMAT;
    deviceConfig.capture.channels   = DEVICE_CHANNELS;
    deviceConfig.capture.shareMode  = ma_share_mode_shared;
    deviceConfig.playback.pDeviceID = NULL;
    deviceConfig.playback.format    = DEVICE_FORMAT;
    deviceConfig.playback.channels  = DEVICE_CHANNELS;
    deviceConfig.dataCallback       = data_callback;
    result = ma_device_init(NULL, &deviceConfig, &device);
    if (result != MA_SUCCESS) {
        return result;
    }


    /* Now we can setup our node graph. */
    nodeGraphConfig = ma_node_graph_config_init(device.capture.channels);

    result = ma_node_graph_init(&nodeGraphConfig, NULL, &g_nodeGraph);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize node graph.");
        goto done0;
    }


    /* Vocoder. Attached straight to the endpoint. */
    vocoderNodeConfig = ma_vocoder_node_config_init(device.capture.channels, device.sampleRate);

    result = ma_vocoder_node_init(&g_nodeGraph, &vocoderNodeConfig, NULL, &g_vocoderNode);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize vocoder node.");
        goto done1;
    }

    ma_node_attach_output_bus(&g_vocoderNode, 0, ma_node_graph_get_endpoint(&g_nodeGraph), 0);

    /* Amplify the volume of the vocoder output because in my testing it is a bit quiet. */
    ma_node_set_output_bus_volume(&g_vocoderNode, 0, 4);


    /* Source/carrier. Attached to input bus 0 of the vocoder node. */
    waveformConfig = ma_waveform_config_init(device.capture.format, device.capture.channels, device.sampleRate, ma_waveform_type_sawtooth, 1.0, 50);

    result = ma_waveform_init(&waveformConfig, &g_sourceData);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize waveform for source node.");
        goto done2;     /* The source node hasn't been initialized yet, so only the vocoder node and below need cleaning up. */
    }

    sourceNodeConfig = ma_data_source_node_config_init(&g_sourceData);

    result = ma_data_source_node_init(&g_nodeGraph, &sourceNodeConfig, NULL, &g_sourceNode);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize source node.");
        goto done2;
    }

    ma_node_attach_output_bus(&g_sourceNode, 0, &g_vocoderNode, 0);


    /* Excite/modulator. Attached to input bus 1 of the vocoder node. */
    result = ma_audio_buffer_ref_init(device.capture.format, device.capture.channels, NULL, 0, &g_exciteData);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize audio buffer for excite node.");
        goto done3;     /* The source node has been initialized by this point, so it needs cleaning up too. */
    }

    exciteNodeConfig = ma_data_source_node_config_init(&g_exciteData);

    result = ma_data_source_node_init(&g_nodeGraph, &exciteNodeConfig, NULL, &g_exciteNode);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize excite node.");
        goto done3;
    }

    ma_node_attach_output_bus(&g_exciteNode, 0, &g_vocoderNode, 1);


    ma_device_start(&device);

    printf("Press Enter to quit...\n");
    getchar();

    /* It's important that we stop the device first or else we'll uninitialize the graph from under the device. */
    ma_device_stop(&device);

    /*done4:*/ ma_data_source_node_uninit(&g_exciteNode, NULL);
done3: ma_data_source_node_uninit(&g_sourceNode, NULL);
done2: ma_vocoder_node_uninit(&g_vocoderNode, NULL);
done1: ma_node_graph_uninit(&g_nodeGraph, NULL);
done0: ma_device_uninit(&device);

    (void)argc;
    (void)argv;
    return 0;
}
250
thirdparty/miniaudio-0.11.24/examples/engine_advanced.c
vendored
Normal file
@@ -0,0 +1,250 @@
/*
This example demonstrates some of the advanced features of the high level engine API.

The following features are demonstrated:

  * Initialization of the engine from a pre-initialized device.
  * Self-managed resource managers.
  * Multiple engines with a shared resource manager.
  * Creation and management of `ma_sound` objects.

This example will play the sound that's passed in on the command line.

Using a shared resource manager, as we do in this example, is useful for when you want to use
multiple engines so that you can output to multiple playback devices simultaneously. An example
might be a local co-op multiplayer game where each player has their own headphones.
*/
#include "../miniaudio.c"

#define MAX_DEVICES 2
#define MAX_SOUNDS  32

void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
|
||||
{
|
||||
(void)pInput;
|
||||
|
||||
/*
|
||||
Since we're managing the underlying device ourselves, we need to read from the engine directly.
|
||||
To do this we need access to the `ma_engine` object which we passed in to the user data. One
|
||||
advantage of this is that you could do your own audio processing in addition to the engine's
|
||||
standard processing.
|
||||
*/
|
||||
ma_engine_read_pcm_frames((ma_engine*)pDevice->pUserData, pOutput, frameCount, NULL);
|
||||
}
|
||||
|
||||
int main(int argc, char** argv)
|
||||
{
|
||||
ma_result result;
|
||||
ma_context context;
|
||||
ma_resource_manager_config resourceManagerConfig;
|
||||
ma_resource_manager resourceManager;
|
||||
ma_engine engines[MAX_DEVICES];
|
||||
ma_device devices[MAX_DEVICES];
|
||||
ma_uint32 engineCount = 0;
|
||||
ma_uint32 iEngine;
|
||||
ma_device_info* pPlaybackDeviceInfos;
|
||||
ma_uint32 playbackDeviceCount;
|
||||
ma_uint32 iAvailableDevice;
|
||||
ma_uint32 iChosenDevice;
|
||||
ma_sound sounds[MAX_SOUNDS];
|
||||
ma_uint32 soundCount;
|
||||
ma_uint32 iSound;
|
||||
|
||||
if (argc < 2) {
|
||||
printf("No input file.");
|
||||
return -1;
|
||||
}
|
||||
|
||||
|
||||
/*
|
||||
We are going to be initializing multiple engines. In order to save on memory usage we can use a self managed
|
||||
resource manager so we can share a single resource manager across multiple engines.
|
||||
*/
|
||||
resourceManagerConfig = ma_resource_manager_config_init();
|
||||
resourceManagerConfig.decodedFormat = ma_format_f32; /* ma_format_f32 should almost always be used as that's what the engine (and most everything else) uses for mixing. */
|
||||
resourceManagerConfig.decodedChannels = 0; /* Setting the channel count to 0 will cause sounds to use their native channel count. */
|
||||
resourceManagerConfig.decodedSampleRate = 48000; /* Using a consistent sample rate is useful for avoiding expensive resampling in the audio thread. This will result in resampling being performed by the loading thread(s). */
|
||||
|
||||
result = ma_resource_manager_init(&resourceManagerConfig, &resourceManager);
|
||||
if (result != MA_SUCCESS) {
|
||||
printf("Failed to initialize resource manager.");
|
||||
return -1;
|
||||
}
|
||||
|
||||
|
||||
/* We're going to want a context so we can enumerate our playback devices. */
|
||||
result = ma_context_init(NULL, 0, NULL, &context);
|
||||
if (result != MA_SUCCESS) {
|
||||
printf("Failed to initialize context.");
|
||||
return -1;
|
||||
}
|
||||
|
||||
/*
|
||||
Now that we have a context we will want to enumerate over each device so we can display them to the user and give
|
||||
them a chance to select the output devices they want to use.
|
||||
*/
|
||||
result = ma_context_get_devices(&context, &pPlaybackDeviceInfos, &playbackDeviceCount, NULL, NULL);
|
||||
if (result != MA_SUCCESS) {
|
||||
printf("Failed to enumerate playback devices.");
|
||||
ma_context_uninit(&context);
|
||||
return -1;
|
||||
}
|
||||
|
||||
|
||||
/* We have our devices, so now we want to get the user to select the devices they want to output to. */
|
||||
engineCount = 0;
|
||||
|
||||
for (iChosenDevice = 0; iChosenDevice < MAX_DEVICES; iChosenDevice += 1) {
|
||||
int c = 0;
|
||||
for (;;) {
|
||||
printf("Select playback device %d ([%d - %d], Q to quit):\n", iChosenDevice+1, 0, ma_min((int)playbackDeviceCount, 9));
|
||||
|
||||
for (iAvailableDevice = 0; iAvailableDevice < playbackDeviceCount; iAvailableDevice += 1) {
|
||||
printf(" %d: %s\n", iAvailableDevice, pPlaybackDeviceInfos[iAvailableDevice].name);
|
||||
}
|
||||
|
||||
for (;;) {
|
||||
c = getchar();
|
||||
if (c != '\n') {
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (c == 'q' || c == 'Q') {
|
||||
return 0; /* User aborted. */
|
||||
}
|
||||
|
||||
if (c >= '0' && c <= '9') {
|
||||
c -= '0';
|
||||
|
||||
if (c < (int)playbackDeviceCount) {
|
||||
ma_device_config deviceConfig;
|
||||
ma_engine_config engineConfig;
|
||||
|
||||
/*
|
||||
Create the device first before the engine. We'll specify the device in the engine's config. This is optional. When a device is
|
||||
not pre-initialized the engine will create one for you internally. The device does not need to be started here - the engine will
|
||||
do that for us in `ma_engine_start()`. The device's format is derived from the resource manager, but can be whatever you want.
|
||||
It's useful to keep the format consistent with the resource manager to avoid data conversions costs in the audio callback. In
|
||||
this example we're using the resource manager's sample format and sample rate, but leaving the channel count set to the device's
|
||||
native channels. You can use whatever format/channels/rate you like.
|
||||
*/
|
||||
deviceConfig = ma_device_config_init(ma_device_type_playback);
|
||||
deviceConfig.playback.pDeviceID = &pPlaybackDeviceInfos[c].id;
|
||||
deviceConfig.playback.format = resourceManager.config.decodedFormat;
|
||||
deviceConfig.playback.channels = 0;
|
||||
deviceConfig.sampleRate = resourceManager.config.decodedSampleRate;
|
||||
deviceConfig.dataCallback = data_callback;
|
||||
deviceConfig.pUserData = &engines[engineCount];
|
||||
|
||||
result = ma_device_init(&context, &deviceConfig, &devices[engineCount]);
|
||||
if (result != MA_SUCCESS) {
|
||||
printf("Failed to initialize device for %s.\n", pPlaybackDeviceInfos[c].name);
|
||||
return -1;
|
||||
}
|
||||
|
||||
/* Now that we have the device we can initialize the engine. The device is passed into the engine's config. */
|
||||
engineConfig = ma_engine_config_init();
|
||||
engineConfig.pDevice = &devices[engineCount];
|
||||
engineConfig.pResourceManager = &resourceManager;
|
||||
engineConfig.noAutoStart = MA_TRUE; /* Don't start the engine by default - we'll do that manually below. */
|
||||
|
||||
result = ma_engine_init(&engineConfig, &engines[engineCount]);
|
||||
if (result != MA_SUCCESS) {
|
||||
printf("Failed to initialize engine for %s.\n", pPlaybackDeviceInfos[c].name);
|
||||
ma_device_uninit(&devices[engineCount]);
|
||||
return -1;
|
||||
}
|
||||
|
||||
engineCount += 1;
|
||||
break;
|
||||
} else {
|
||||
printf("Invalid device number.\n");
|
||||
}
|
||||
} else {
|
||||
printf("Invalid device number.\n");
|
||||
}
|
||||
}
|
||||
|
||||
printf("Device %d: %s\n", iChosenDevice+1, pPlaybackDeviceInfos[c].name);
|
||||
}
|
||||
|
||||
|
||||
/* We should now have our engine's initialized. We can now start them. */
|
||||
for (iEngine = 0; iEngine < engineCount; iEngine += 1) {
|
||||
result = ma_engine_start(&engines[iEngine]);
|
||||
if (result != MA_SUCCESS) {
|
||||
printf("WARNING: Failed to start engine %d.\n", iEngine);
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
/*
|
||||
At this point our engine's are running and outputting nothing but silence. To get them playing something we'll need
|
||||
some sounds. In this example we're just using one sound per engine, but you can create as many as you like. Since
|
||||
we're using a shared resource manager, the sound data will only be loaded once. This is how you would implement
|
||||
multiple listeners.
|
||||
*/
|
||||
soundCount = 0;
|
||||
for (iEngine = 0; iEngine < engineCount; iEngine += 1) {
|
||||
/* Just one sound per engine in this example. We're going to be loading this asynchronously. */
|
||||
result = ma_sound_init_from_file(&engines[iEngine], argv[1], MA_RESOURCE_MANAGER_DATA_SOURCE_FLAG_DECODE | MA_RESOURCE_MANAGER_DATA_SOURCE_FLAG_ASYNC | MA_RESOURCE_MANAGER_DATA_SOURCE_FLAG_STREAM, NULL, NULL, &sounds[iEngine]);
|
||||
if (result != MA_SUCCESS) {
|
||||
printf("WARNING: Failed to load sound \"%s\"", argv[1]);
|
||||
break;
|
||||
}
|
||||
|
||||
/*
|
||||
The sound can be started as soon as ma_sound_init_from_file() returns, even for sounds that are initialized
|
||||
with MA_RESOURCE_MANAGER_DATA_SOURCE_FLAG_ASYNC. The sound will start playing while it's being loaded. Note that if the
|
||||
asynchronous loading process cannot keep up with the rate at which you try reading you'll end up glitching.
|
||||
If this is an issue, you need to not load sounds asynchronously.
|
||||
*/
|
||||
result = ma_sound_start(&sounds[iEngine]);
|
||||
if (result != MA_SUCCESS) {
|
||||
printf("WARNING: Failed to start sound.");
|
||||
}
|
||||
|
||||
soundCount += 1;
|
||||
}
|
||||
|
||||
|
||||
printf("Press Enter to quit...");
|
||||
getchar();
|
||||
|
||||
for (;;) {
|
||||
int c = getchar();
|
||||
if (c == '\n') {
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
/* Teardown. */
|
||||
|
||||
/* The application owns the `ma_sound` object which means you're responsible for uninitializing them. */
|
||||
for (iSound = 0; iSound < soundCount; iSound += 1) {
|
||||
ma_sound_uninit(&sounds[iSound]);
|
||||
}
|
||||
|
||||
/* We can now uninitialize each engine. */
|
||||
for (iEngine = 0; iEngine < engineCount; iEngine += 1) {
|
||||
ma_engine_uninit(&engines[iEngine]);
|
||||
|
||||
/*
|
||||
The engine has been uninitialized so now lets uninitialize the device. Do this first to ensure we don't
|
||||
uninitialize the resource manager from under the device while the data callback is running.
|
||||
*/
|
||||
ma_device_uninit(&devices[iEngine]);
|
||||
}
|
||||
|
||||
/* The context can only be uninitialized after the devices. */
|
||||
ma_context_uninit(&context);
|
||||
|
||||
/*
|
||||
Do the resource manager last. This way we can guarantee the data callbacks of each device aren't trying to access
|
||||
and data managed by the resource manager.
|
||||
*/
|
||||
ma_resource_manager_uninit(&resourceManager);
|
||||
|
||||
return 0;
|
||||
}
|
||||
103
thirdparty/miniaudio-0.11.24/examples/engine_effects.c
vendored
Normal file
@@ -0,0 +1,103 @@
/*
Demonstrates how to apply an effect to sounds using the high level engine API.

This example will load a file from the command line and apply an echo/delay effect to it. It will
show you how to manage `ma_sound` objects and how to insert an effect into the graph.

The `ma_engine` object is a node graph and is compatible with the `ma_node_graph` API. The
`ma_sound` object is a node within the node graph and is compatible with the `ma_node` API. This means
that applying an effect is as simple as inserting an effect node into the graph and plugging the
sound's output into the effect's input. See the Node Graph example for how to use the node graph.

This example is playing only a single sound at a time which means only a single `ma_sound` object
is being used. If you want to play multiple sounds at the same time, even if they're for the same
sound file, you need multiple `ma_sound` objects.
*/
#include "../miniaudio.c"

#define DELAY_IN_SECONDS    0.2f
#define DECAY               0.25f   /* Volume falloff for each echo. */

static ma_engine g_engine;
static ma_sound g_sound;            /* This example will play only a single sound at once, so we only need one `ma_sound` object. */
static ma_delay_node g_delayNode;   /* The echo effect is achieved using a delay node. */

int main(int argc, char** argv)
{
    ma_result result;

    if (argc < 2) {
        printf("No input file.");
        return -1;
    }

    /* The engine needs to be initialized first. */
    result = ma_engine_init(NULL, &g_engine);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize audio engine.");
        return -1;
    }


    /*
    We'll build our graph starting from the end, so we'll initialize the delay node now. The output of
    this node will be connected straight to the output. You could also attach it to a sound group
    or any other node that accepts an input.

    Creating a node requires a pointer to the node graph that owns it. The engine itself is a node
    graph. In the code below we can get a pointer to the node graph with `ma_engine_get_node_graph()`
    or we could simply cast the engine to a ma_node_graph* like so:

        (ma_node_graph*)&g_engine

    The endpoint of the graph can be retrieved with `ma_engine_get_endpoint()`.
    */
    {
        ma_delay_node_config delayNodeConfig;
        ma_uint32 channels;
        ma_uint32 sampleRate;

        channels   = ma_engine_get_channels(&g_engine);
        sampleRate = ma_engine_get_sample_rate(&g_engine);

        delayNodeConfig = ma_delay_node_config_init(channels, sampleRate, (ma_uint32)(sampleRate * DELAY_IN_SECONDS), DECAY);

        result = ma_delay_node_init(ma_engine_get_node_graph(&g_engine), &delayNodeConfig, NULL, &g_delayNode);
        if (result != MA_SUCCESS) {
            printf("Failed to initialize delay node.");
            return -1;
        }

        /* Connect the output of the delay node to the input of the endpoint. */
        ma_node_attach_output_bus(&g_delayNode, 0, ma_engine_get_endpoint(&g_engine), 0);
    }


    /* Now we can load the sound and connect it to the delay node. */
    {
        result = ma_sound_init_from_file(&g_engine, argv[1], 0, NULL, NULL, &g_sound);
        if (result != MA_SUCCESS) {
            printf("Failed to initialize sound \"%s\".", argv[1]);
            return -1;
        }

        /* Connect the output of the sound to the input of the effect. */
        ma_node_attach_output_bus(&g_sound, 0, &g_delayNode, 0);

        /*
        Start the sound only after it's attached to the effect. Otherwise there could be a scenario where
        the very first part of it is read before the attachment to the effect is made.
        */
        ma_sound_start(&g_sound);
    }


    printf("Press Enter to quit...");
    getchar();

    ma_sound_uninit(&g_sound);
    ma_delay_node_uninit(&g_delayNode, NULL);
    ma_engine_uninit(&g_engine);

    return 0;
}
34
thirdparty/miniaudio-0.11.24/examples/engine_hello_world.c
vendored
Normal file
@@ -0,0 +1,34 @@
/*
This example demonstrates how to initialize an audio engine and play a sound.

This will play the sound specified on the command line.
*/
#include "../miniaudio.c"

#include <stdio.h>

int main(int argc, char** argv)
{
    ma_result result;
    ma_engine engine;

    if (argc < 2) {
        printf("No input file.");
        return -1;
    }

    result = ma_engine_init(NULL, &engine);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize audio engine.");
        return -1;
    }

    ma_engine_play_sound(&engine, argv[1], NULL);

    printf("Press Enter to quit...");
    getchar();

    ma_engine_uninit(&engine);

    return 0;
}
137
thirdparty/miniaudio-0.11.24/examples/engine_sdl.c
vendored
Normal file
@@ -0,0 +1,137 @@
/*
Shows how to use the high level engine API with SDL.

By default, miniaudio's engine API will initialize a device internally for audio output. You can
instead use the engine independently of a device. To show this off, this example will use SDL for
audio output instead of miniaudio.

This example will load the sound specified on the command line and rotate it around the listener's
head.
*/
#define MA_NO_DEVICE_IO /* <-- Disables the `ma_device` API. We don't need that in this example since SDL will be doing that part for us. */
#include "../miniaudio.c"

#define SDL_MAIN_HANDLED
#include <SDL2/SDL.h>   /* Change this to your include location. Might be <SDL.h>. */

#define CHANNELS    2   /* Must be stereo for this example. */
#define SAMPLE_RATE 48000

static ma_engine g_engine;
static ma_sound g_sound;    /* This example will play only a single sound at once, so we only need one `ma_sound` object. */

void data_callback(void* pUserData, ma_uint8* pBuffer, int bufferSizeInBytes)
{
    ma_uint32 bufferSizeInFrames;

    (void)pUserData;

    /* Reading is just a matter of reading straight from the engine. */
    bufferSizeInFrames = (ma_uint32)bufferSizeInBytes / ma_get_bytes_per_frame(ma_format_f32, ma_engine_get_channels(&g_engine));
    ma_engine_read_pcm_frames(&g_engine, pBuffer, bufferSizeInFrames, NULL);
}

int main(int argc, char** argv)
{
    ma_result result;
    ma_engine_config engineConfig;
    SDL_AudioSpec desiredSpec;
    SDL_AudioSpec obtainedSpec;
    SDL_AudioDeviceID deviceID;

    if (argc < 2) {
        printf("No input file.");
        return -1;
    }

    /*
    We'll initialize the engine first for the purpose of the example, but since the engine and SDL
    are independent of each other you can initialize them in any order. You need only make sure the
    channel count and sample rate are consistent between the two.

    When initializing the engine it's important to make sure we don't initialize a device
    internally because we want SDL to be dealing with that for us instead.
    */
    engineConfig = ma_engine_config_init();
    engineConfig.noDevice   = MA_TRUE;  /* <-- Make sure this is set so that no device is created (we'll deal with that ourselves). */
    engineConfig.channels   = CHANNELS;
    engineConfig.sampleRate = SAMPLE_RATE;

    result = ma_engine_init(&engineConfig, &g_engine);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize audio engine.");
        return -1;
    }

    /* Now load our sound. */
    result = ma_sound_init_from_file(&g_engine, argv[1], 0, NULL, NULL, &g_sound);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize sound.");
        return -1;
    }

    /* Loop the sound so we can continuously hear it. */
    ma_sound_set_looping(&g_sound, MA_TRUE);

    /*
    The sound will not be started by default, so start it now. We won't hear anything until the SDL
    audio device has been opened and started.
    */
    ma_sound_start(&g_sound);


    /*
    Now that we have the engine and sound we can initialize SDL. This could also have been done
    first, before the engine and sound.
    */
    if (SDL_InitSubSystem(SDL_INIT_AUDIO) != 0) {
        printf("Failed to initialize SDL sub-system.");
        return -1;
    }

    MA_ZERO_OBJECT(&desiredSpec);
    desiredSpec.freq     = ma_engine_get_sample_rate(&g_engine);
    desiredSpec.format   = AUDIO_F32;
    desiredSpec.channels = ma_engine_get_channels(&g_engine);
    desiredSpec.samples  = 512;
    desiredSpec.callback = data_callback;
    desiredSpec.userdata = NULL;

    deviceID = SDL_OpenAudioDevice(NULL, 0, &desiredSpec, &obtainedSpec, SDL_AUDIO_ALLOW_ANY_CHANGE);
    if (deviceID == 0) {
        printf("Failed to open SDL audio device.");
        return -1;
    }

    /* Start playback. */
    SDL_PauseAudioDevice(deviceID, 0);

#if 1
    {
        /* We'll move the sound around the listener which we'll leave at the origin. */
        float stepAngle = 0.002f;
        float angle     = 0;
        float distance  = 2;

        for (;;) {
            double x = ma_cosd(angle) - ma_sind(angle);
            double y = ma_sind(angle) + ma_cosd(angle);

            ma_sound_set_position(&g_sound, (float)x * distance, 0, (float)y * distance);

            angle += stepAngle;
            ma_sleep(1);
        }
    }
#else
    printf("Press Enter to quit...");
    getchar();
#endif

    ma_sound_uninit(&g_sound);
    ma_engine_uninit(&g_engine);
    SDL_CloseAudioDevice(deviceID);
    SDL_QuitSubSystem(SDL_INIT_AUDIO);

    return 0;
}
448
thirdparty/miniaudio-0.11.24/examples/engine_steamaudio.c
vendored
Normal file
@@ -0,0 +1,448 @@
|
||||
/*
|
||||
Demonstrates integration of Steam Audio with miniaudio's engine API.
|
||||
|
||||
In this example a HRTF effect from Steam Audio will be applied. To do this a custom node will be
|
||||
implemented which uses Steam Audio's IPLBinauralEffect and IPLHRTF objects.
|
||||
|
||||
By implementing this as a node, it can be plugged into any position within the graph. The output
|
||||
channel count of this node is always stereo.
|
||||
|
||||
Steam Audio requires fixed sized processing, the size of which must be specified at initialization
|
||||
time of the IPLBinauralEffect and IPLHRTF objects. To ensure miniaudio and Steam Audio are
|
||||
consistent, you must set the period size in the engine config to be consistent with the frame size
|
||||
you specify in your IPLAudioSettings object. If for some reason you want the period size of the
|
||||
engine to be different to that of your Steam Audio configuration, you'll need to implement a sort
|
||||
of buffering solution to your node.
|
||||
*/
|
||||
#include "../miniaudio.c"
|
||||
|
||||
#include <stdint.h> /* Required for uint32_t which is used by STEAMAUDIO_VERSION, and a random use of uint8_t. If there's a Steam Audio maintainer reading this, that needs to be fixed to use IPLuint32 and IPLuint8. */
|
||||
|
||||
/* Need to silence some warnings from the Steam Audio headers. */
|
||||
#if defined(__clang__) || (defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)))
|
||||
#pragma GCC diagnostic push
|
||||
#pragma GCC diagnostic ignored "-Wlong-long"
|
||||
#pragma GCC diagnostic ignored "-Wpedantic"
|
||||
#endif
|
||||
#include <phonon.h> /* Steam Audio */
|
||||
#if defined(__clang__) || (defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)))
|
||||
#pragma GCC diagnostic pop
|
||||
#endif
|
||||
|
||||
#define FORMAT ma_format_f32 /* Must be floating point. */
|
||||
#define CHANNELS 2 /* Must be stereo for this example. */
|
||||
#define SAMPLE_RATE 48000
|
||||
|
||||
|
||||
static ma_result ma_result_from_IPLerror(IPLerror error)
|
||||
{
|
||||
switch (error)
|
||||
{
|
||||
case IPL_STATUS_SUCCESS: return MA_SUCCESS;
|
||||
case IPL_STATUS_OUTOFMEMORY: return MA_OUT_OF_MEMORY;
|
||||
case IPL_STATUS_INITIALIZATION:
|
||||
case IPL_STATUS_FAILURE:
|
||||
default: return MA_ERROR;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
typedef struct
|
||||
{
|
||||
ma_node_config nodeConfig;
|
||||
ma_uint32 channelsIn;
|
||||
IPLAudioSettings iplAudioSettings;
|
||||
IPLContext iplContext;
|
||||
IPLHRTF iplHRTF; /* There is one HRTF object to many binaural effect objects. */
|
||||
} ma_steamaudio_binaural_node_config;
|
||||
|
||||
MA_API ma_steamaudio_binaural_node_config ma_steamaudio_binaural_node_config_init(ma_uint32 channelsIn, IPLAudioSettings iplAudioSettings, IPLContext iplContext, IPLHRTF iplHRTF);
|
||||
|
||||
|
||||
typedef struct
|
||||
{
|
||||
ma_node_base baseNode;
|
||||
IPLAudioSettings iplAudioSettings;
|
||||
IPLContext iplContext;
|
||||
IPLHRTF iplHRTF;
|
||||
IPLBinauralEffect iplEffect;
|
||||
ma_vec3f direction;
|
||||
float* ppBuffersIn[2]; /* Each buffer is an offset of _pHeap. */
|
||||
float* ppBuffersOut[2]; /* Each buffer is an offset of _pHeap. */
|
||||
void* _pHeap;
|
||||
} ma_steamaudio_binaural_node;
|
||||
|
||||
MA_API ma_result ma_steamaudio_binaural_node_init(ma_node_graph* pNodeGraph, const ma_steamaudio_binaural_node_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_steamaudio_binaural_node* pBinauralNode);
|
||||
MA_API void ma_steamaudio_binaural_node_uninit(ma_steamaudio_binaural_node* pBinauralNode, const ma_allocation_callbacks* pAllocationCallbacks);
|
||||
MA_API ma_result ma_steamaudio_binaural_node_set_direction(ma_steamaudio_binaural_node* pBinauralNode, float x, float y, float z);
|
||||
|
||||
|
||||
MA_API ma_steamaudio_binaural_node_config ma_steamaudio_binaural_node_config_init(ma_uint32 channelsIn, IPLAudioSettings iplAudioSettings, IPLContext iplContext, IPLHRTF iplHRTF)
|
||||
{
|
||||
ma_steamaudio_binaural_node_config config;
|
||||
|
||||
MA_ZERO_OBJECT(&config);
|
||||
config.nodeConfig = ma_node_config_init();
|
||||
config.channelsIn = channelsIn;
|
||||
config.iplAudioSettings = iplAudioSettings;
|
||||
config.iplContext = iplContext;
|
||||
config.iplHRTF = iplHRTF;
|
||||
|
||||
return config;
|
||||
}
|
||||
|
||||
|
||||
static void ma_steamaudio_binaural_node_process_pcm_frames(ma_node* pNode, const float** ppFramesIn, ma_uint32* pFrameCountIn, float** ppFramesOut, ma_uint32* pFrameCountOut)
|
||||
{
|
||||
ma_steamaudio_binaural_node* pBinauralNode = (ma_steamaudio_binaural_node*)pNode;
|
||||
IPLBinauralEffectParams binauralParams;
|
||||
IPLAudioBuffer inputBufferDesc;
|
||||
IPLAudioBuffer outputBufferDesc;
|
||||
ma_uint32 totalFramesToProcess = *pFrameCountOut;
|
||||
ma_uint32 totalFramesProcessed = 0;
|
||||
|
||||
MA_ZERO_OBJECT(&binauralParams);
|
||||
binauralParams.direction.x = pBinauralNode->direction.x;
|
||||
binauralParams.direction.y = pBinauralNode->direction.y;
|
||||
binauralParams.direction.z = pBinauralNode->direction.z;
|
||||
binauralParams.interpolation = IPL_HRTFINTERPOLATION_NEAREST;
|
||||
binauralParams.spatialBlend = 1.0f;
|
||||
binauralParams.hrtf = pBinauralNode->iplHRTF;
|
||||
|
||||
inputBufferDesc.numChannels = (IPLint32)ma_node_get_input_channels(pNode, 0);
|
||||
|
||||
/* We'll run this in a loop just in case our deinterleaved buffers are too small. */
|
||||
outputBufferDesc.numSamples = pBinauralNode->iplAudioSettings.frameSize;
|
||||
outputBufferDesc.numChannels = 2;
|
||||
outputBufferDesc.data = pBinauralNode->ppBuffersOut;
|
||||
|
||||
while (totalFramesProcessed < totalFramesToProcess) {
|
||||
ma_uint32 framesToProcessThisIteration = totalFramesToProcess - totalFramesProcessed;
|
||||
if (framesToProcessThisIteration > (ma_uint32)pBinauralNode->iplAudioSettings.frameSize) {
|
||||
framesToProcessThisIteration = (ma_uint32)pBinauralNode->iplAudioSettings.frameSize;
|
||||
}
|
||||
|
||||
if (inputBufferDesc.numChannels == 1) {
|
||||
/* Fast path. No need for deinterleaving since it's a mono stream. */
|
||||
pBinauralNode->ppBuffersIn[0] = (float*)ma_offset_pcm_frames_const_ptr_f32(ppFramesIn[0], totalFramesProcessed, 1);
|
||||
} else {
|
||||
/* Slow path. Need to deinterleave the input data. */
|
||||
ma_deinterleave_pcm_frames(ma_format_f32, inputBufferDesc.numChannels, framesToProcessThisIteration, ma_offset_pcm_frames_const_ptr_f32(ppFramesIn[0], totalFramesProcessed, inputBufferDesc.numChannels), (void**)&pBinauralNode->ppBuffersIn[0]);
|
||||
}
|
||||
|
||||
inputBufferDesc.data = pBinauralNode->ppBuffersIn;
|
||||
inputBufferDesc.numSamples = (IPLint32)framesToProcessThisIteration;
|
||||
|
||||
/* Apply the effect. */
|
||||
iplBinauralEffectApply(pBinauralNode->iplEffect, &binauralParams, &inputBufferDesc, &outputBufferDesc);
|
||||
|
||||
/* Interleave straight into the output buffer. */
|
||||
ma_interleave_pcm_frames(ma_format_f32, 2, framesToProcessThisIteration, (const void**)&pBinauralNode->ppBuffersOut[0], ma_offset_pcm_frames_ptr_f32(ppFramesOut[0], totalFramesProcessed, 2));
|
||||
|
||||
/* Advance. */
|
||||
totalFramesProcessed += framesToProcessThisIteration;
|
||||
}
|
||||
|
||||
(void)pFrameCountIn; /* Unused. */
|
||||
}
|
||||
|
||||
static ma_node_vtable g_ma_steamaudio_binaural_node_vtable =
|
||||
{
|
||||
ma_steamaudio_binaural_node_process_pcm_frames,
|
||||
NULL,
|
||||
1, /* 1 input channel. */
|
||||
1, /* 1 output channel. */
|
||||
0
|
||||
};
|
||||
|
||||
MA_API ma_result ma_steamaudio_binaural_node_init(ma_node_graph* pNodeGraph, const ma_steamaudio_binaural_node_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_steamaudio_binaural_node* pBinauralNode)
|
||||
{
|
||||
ma_result result;
|
||||
ma_node_config baseConfig;
|
||||
ma_uint32 channelsIn;
|
||||
ma_uint32 channelsOut;
|
||||
IPLBinauralEffectSettings iplBinauralEffectSettings;
|
||||
size_t heapSizeInBytes;
|
||||
|
||||
if (pBinauralNode == NULL) {
|
||||
return MA_INVALID_ARGS;
|
||||
}
|
||||
|
||||
MA_ZERO_OBJECT(pBinauralNode);
|
||||
|
||||
if (pConfig == NULL || pConfig->iplAudioSettings.frameSize == 0 || pConfig->iplContext == NULL || pConfig->iplHRTF == NULL) {
|
||||
return MA_INVALID_ARGS;
|
||||
}
|
||||
|
||||
/* Steam Audio only supports mono and stereo input. */
|
||||
if (pConfig->channelsIn < 1 || pConfig->channelsIn > 2) {
|
||||
return MA_INVALID_ARGS;
|
||||
}
|
||||
|
||||
channelsIn = pConfig->channelsIn;
|
||||
channelsOut = 2; /* Always stereo output. */
|
||||
|
||||
baseConfig = ma_node_config_init();
|
||||
baseConfig.vtable = &g_ma_steamaudio_binaural_node_vtable;
|
||||
baseConfig.pInputChannels = &channelsIn;
|
||||
baseConfig.pOutputChannels = &channelsOut;
|
||||
result = ma_node_init(pNodeGraph, &baseConfig, pAllocationCallbacks, &pBinauralNode->baseNode);
|
||||
if (result != MA_SUCCESS) {
|
||||
return result;
|
||||
}
|
||||
|
||||
    pBinauralNode->iplAudioSettings = pConfig->iplAudioSettings;
    pBinauralNode->iplContext       = pConfig->iplContext;
    pBinauralNode->iplHRTF          = pConfig->iplHRTF;

    MA_ZERO_OBJECT(&iplBinauralEffectSettings);
    iplBinauralEffectSettings.hrtf = pBinauralNode->iplHRTF;

    result = ma_result_from_IPLerror(iplBinauralEffectCreate(pBinauralNode->iplContext, &pBinauralNode->iplAudioSettings, &iplBinauralEffectSettings, &pBinauralNode->iplEffect));
    if (result != MA_SUCCESS) {
        ma_node_uninit(&pBinauralNode->baseNode, pAllocationCallbacks);
        return result;
    }

    heapSizeInBytes = 0;

    /*
    Unfortunately Steam Audio uses deinterleaved buffers for everything so we'll need to use some
    intermediary buffers. We'll allocate one big buffer on the heap and then use offsets. We'll
    use the frame size from the IPLAudioSettings structure as a basis for the size of the buffer.
    */
    heapSizeInBytes += sizeof(float) * channelsOut * pBinauralNode->iplAudioSettings.frameSize; /* Output buffer. */
    heapSizeInBytes += sizeof(float) * channelsIn  * pBinauralNode->iplAudioSettings.frameSize; /* Input buffer. */

    pBinauralNode->_pHeap = ma_malloc(heapSizeInBytes, pAllocationCallbacks);
    if (pBinauralNode->_pHeap == NULL) {
        iplBinauralEffectRelease(&pBinauralNode->iplEffect);
        ma_node_uninit(&pBinauralNode->baseNode, pAllocationCallbacks);
        return MA_OUT_OF_MEMORY;
    }

    pBinauralNode->ppBuffersOut[0] = (float*)pBinauralNode->_pHeap;
    pBinauralNode->ppBuffersOut[1] = (float*)ma_offset_ptr(pBinauralNode->_pHeap, sizeof(float) * pBinauralNode->iplAudioSettings.frameSize);

    {
        ma_uint32 iChannelIn;
        for (iChannelIn = 0; iChannelIn < channelsIn; iChannelIn += 1) {
            pBinauralNode->ppBuffersIn[iChannelIn] = (float*)ma_offset_ptr(pBinauralNode->_pHeap, sizeof(float) * pBinauralNode->iplAudioSettings.frameSize * (channelsOut + iChannelIn));
        }
    }

    return MA_SUCCESS;
}

MA_API void ma_steamaudio_binaural_node_uninit(ma_steamaudio_binaural_node* pBinauralNode, const ma_allocation_callbacks* pAllocationCallbacks)
{
    if (pBinauralNode == NULL) {
        return;
    }

    /* The base node is always uninitialized first. */
    ma_node_uninit(&pBinauralNode->baseNode, pAllocationCallbacks);

    /*
    The Steam Audio objects are deleted after the base node. This ensures the base node is removed
    from the graph first so that these objects aren't still being used by the audio thread.
    */
    iplBinauralEffectRelease(&pBinauralNode->iplEffect);
    ma_free(pBinauralNode->_pHeap, pAllocationCallbacks);
}

MA_API ma_result ma_steamaudio_binaural_node_set_direction(ma_steamaudio_binaural_node* pBinauralNode, float x, float y, float z)
{
    if (pBinauralNode == NULL) {
        return MA_INVALID_ARGS;
    }

    pBinauralNode->direction.x = x;
    pBinauralNode->direction.y = y;
    pBinauralNode->direction.z = z;

    return MA_SUCCESS;
}


static ma_engine g_engine;
static ma_sound  g_sound;   /* This example will play only a single sound at once, so we only need one `ma_sound` object. */
static ma_steamaudio_binaural_node g_binauralNode;  /* The node that applies the Steam Audio binaural effect. */

int main(int argc, char** argv)
{
    ma_result result;
    ma_engine_config engineConfig;
    IPLAudioSettings iplAudioSettings;
    IPLContextSettings iplContextSettings;
    IPLContext iplContext;
    IPLHRTFSettings iplHRTFSettings;
    IPLHRTF iplHRTF;

    if (argc < 2) {
        printf("No input file.");
        return -1;
    }

    /* The engine needs to be initialized first. */
    engineConfig = ma_engine_config_init();
    engineConfig.channels   = CHANNELS;
    engineConfig.sampleRate = SAMPLE_RATE;

    /*
    Steam Audio requires processing in fixed sized chunks. Setting the period size in the engine config will
    ensure our updates happen in predictably sized chunks as required by Steam Audio.

    Note that the configuration of Steam Audio below (IPLAudioSettings) will use this variable to specify the
    update size to ensure it remains consistent.
    */
    engineConfig.periodSizeInFrames = 256;

    result = ma_engine_init(&engineConfig, &g_engine);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize audio engine.");
        return -1;
    }

    /*
    Now that we have the engine we can initialize the Steam Audio objects.
    */
    MA_ZERO_OBJECT(&iplAudioSettings);
    iplAudioSettings.samplingRate = ma_engine_get_sample_rate(&g_engine);

    /*
    If there are any Steam Audio developers reading this, why is the frame size needed? This needs to
    be documented. If this is for some kind of buffer management with FFT or something, then this
    need not be exposed to the public API. There should be no need for the public API to require a
    fixed-size update.

    It's important that this be set to the periodSizeInFrames specified in the engine config above.
    This ensures updates on both the miniaudio side and the Steam Audio side are consistent.
    */
    iplAudioSettings.frameSize = engineConfig.periodSizeInFrames;


    /* IPLContext */
    MA_ZERO_OBJECT(&iplContextSettings);
    iplContextSettings.version = STEAMAUDIO_VERSION;

    result = ma_result_from_IPLerror(iplContextCreate(&iplContextSettings, &iplContext));
    if (result != MA_SUCCESS) {
        ma_engine_uninit(&g_engine);
        return result;
    }


    /* IPLHRTF */
    MA_ZERO_OBJECT(&iplHRTFSettings);
    iplHRTFSettings.type   = IPL_HRTFTYPE_DEFAULT;
    iplHRTFSettings.volume = 1;

    result = ma_result_from_IPLerror(iplHRTFCreate(iplContext, &iplAudioSettings, &iplHRTFSettings, &iplHRTF));
    if (result != MA_SUCCESS) {
        iplContextRelease(&iplContext);
        ma_engine_uninit(&g_engine);
        return result;
    }


    /*
    The binaural node will need to know the input channel count of the sound so we'll need to load
    the sound first. We'll initialize this such that it'll be initially detached from the graph.
    It will be attached to the graph after the binaural node is initialized.
    */
    {
        ma_sound_config soundConfig;

        soundConfig = ma_sound_config_init();
        soundConfig.pFilePath = argv[1];
        soundConfig.flags     = MA_SOUND_FLAG_NO_DEFAULT_ATTACHMENT;    /* We'll attach this to the graph later. */

        result = ma_sound_init_ex(&g_engine, &soundConfig, &g_sound);
        if (result != MA_SUCCESS) {
            return result;
        }

        /* We'll let the Steam Audio binaural effect do the directional attenuation for us. */
        ma_sound_set_directional_attenuation_factor(&g_sound, 0);

        /* Loop the sound so we can get a continuous sound. */
        ma_sound_set_looping(&g_sound, MA_TRUE);
    }


    /*
    We'll build our graph starting from the end so initialize the binaural node now. The output of
    this node will be connected straight to the output. You could also attach it to a sound group
    or any other node that accepts an input.

    Creating a node requires a pointer to the node graph that owns it. The engine itself is a node
    graph. In the code below we can get a pointer to the node graph with `ma_engine_get_node_graph()`
    or we could simply cast the engine to a ma_node_graph* like so:

        (ma_node_graph*)&g_engine

    The endpoint of the graph can be retrieved with `ma_engine_get_endpoint()`.
    */
    {
        ma_steamaudio_binaural_node_config binauralNodeConfig;

        /*
        For this example we're just using the engine's channel count, but a more optimal solution
        might be to set this to mono if the source data is also mono.
        */
        binauralNodeConfig = ma_steamaudio_binaural_node_config_init(CHANNELS, iplAudioSettings, iplContext, iplHRTF);

        result = ma_steamaudio_binaural_node_init(ma_engine_get_node_graph(&g_engine), &binauralNodeConfig, NULL, &g_binauralNode);
        if (result != MA_SUCCESS) {
            printf("Failed to initialize binaural node.");
            return -1;
        }

        /* Connect the output of the binaural node to the input of the endpoint. */
        ma_node_attach_output_bus(&g_binauralNode, 0, ma_engine_get_endpoint(&g_engine), 0);
    }


    /* We can now wire up the sound to the binaural node and start it. */
    ma_node_attach_output_bus(&g_sound, 0, &g_binauralNode, 0);
    ma_sound_start(&g_sound);

#if 1
    {
        /*
        We'll move the sound around the listener which we'll leave at the origin. We'll then get
        the direction to the listener and update the binaural node appropriately.
        */
        float stepAngle = 0.002f;
        float angle     = 0;
        float distance  = 2;

        for (;;) {
            double x = ma_cosd(angle) - ma_sind(angle);
            double y = ma_sind(angle) + ma_cosd(angle);
            ma_vec3f direction;

            ma_sound_set_position(&g_sound, (float)x * distance, 0, (float)y * distance);
            direction = ma_sound_get_direction_to_listener(&g_sound);

            /* Update the direction of the sound. */
            ma_steamaudio_binaural_node_set_direction(&g_binauralNode, direction.x, direction.y, direction.z);
            angle += stepAngle;

            ma_sleep(1);
        }
    }
#else
    printf("Press Enter to quit...");
    getchar();
#endif

    ma_sound_uninit(&g_sound);
    ma_steamaudio_binaural_node_uninit(&g_binauralNode, NULL);
    ma_engine_uninit(&g_engine);

    return 0;
}

146
thirdparty/miniaudio-0.11.24/examples/hilo_interop.c
vendored
Normal file
@@ -0,0 +1,146 @@
/*
Demonstrates interop between the high-level and the low-level API.

In this example we are using `ma_device` (the low-level API) to capture data from the microphone
which we then play back through the engine as a sound. We use a ring buffer to act as the data
source for the sound.

This is just a very basic example to show the general idea of how this might be achieved. In
this example a ring buffer is being used as the intermediary data source, but you can use anything
that works best for your situation. So long as the data is captured from the microphone, and then
delivered to the sound (via a data source), you should be good to go.

A more robust example would probably not want to use a ring buffer directly as the data source.
Instead you would probably want to implement a custom data source that handles underruns and overruns
of the ring buffer and deals with desyncs between capture and playback. In the future this example
may be updated to make use of a more advanced data source that handles all of this.
*/
#include "../miniaudio.c"

static ma_pcm_rb rb;
static ma_device device;
static ma_engine engine;
static ma_sound  sound;     /* The sound will be the playback of the capture side. */

void capture_data_callback(ma_device* pDevice, void* pFramesOut, const void* pFramesIn, ma_uint32 frameCount)
{
    ma_result result;
    ma_uint32 framesWritten;

    (void)pFramesOut;

    /* We need to write to the ring buffer. Need to do this in a loop. */
    framesWritten = 0;
    while (framesWritten < frameCount) {
        void* pMappedBuffer;
        ma_uint32 framesToWrite = frameCount - framesWritten;

        result = ma_pcm_rb_acquire_write(&rb, &framesToWrite, &pMappedBuffer);
        if (result != MA_SUCCESS) {
            break;
        }

        if (framesToWrite == 0) {
            break;
        }

        /* Copy the data from the capture buffer to the ring buffer. */
        ma_copy_pcm_frames(pMappedBuffer, ma_offset_pcm_frames_const_ptr_f32((const float*)pFramesIn, framesWritten, pDevice->capture.channels), framesToWrite, pDevice->capture.format, pDevice->capture.channels);

        result = ma_pcm_rb_commit_write(&rb, framesToWrite);
        if (result != MA_SUCCESS) {
            break;
        }

        framesWritten += framesToWrite;
    }
}

int main(int argc, char** argv)
{
    ma_result result;
    ma_device_config deviceConfig;

    /*
    The first thing we'll do is set up the capture side. There are two parts to this. The first is
    the device itself, and the other is the ring buffer. It doesn't matter what order we initialize
    these in, so long as the ring buffer is created before the device is started so that the
    callback can be guaranteed to have a valid destination. We'll initialize the device first, and
    then use the format, channels and sample rate to initialize the ring buffer.

    It's important that the sample format of the device is set to f32 because that's what the engine
    uses internally.
    */

    /* Initialize the capture device. */
    deviceConfig = ma_device_config_init(ma_device_type_capture);
    deviceConfig.capture.format = ma_format_f32;
    deviceConfig.dataCallback   = capture_data_callback;

    result = ma_device_init(NULL, &deviceConfig, &device);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize capture device.");
        return -1;
    }

    /* Initialize the ring buffer. */
    result = ma_pcm_rb_init(device.capture.format, device.capture.channels, device.capture.internalPeriodSizeInFrames * 5, NULL, NULL, &rb);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize the ring buffer.");
        return -1;
    }

    /*
    Ring buffers don't require a sample rate for their normal operation, but we can associate one
    with the buffer. We'll want to do this so the engine can resample if necessary.
    */
    ma_pcm_rb_set_sample_rate(&rb, device.sampleRate);


    /*
    At this point the capture side is set up and we can now set up the playback side. Here we are
    using `ma_engine` and linking the captured data to a sound so it can be manipulated just like
    any other sound in the world.

    Note that we have not yet started the capture device. Since the captured data is tied to a
    sound, we'll link the starting and stopping of the capture device to the starting and stopping
    of the sound.
    */

    /* We'll get the engine up and running before we start the capture device. */
    result = ma_engine_init(NULL, &engine);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize the engine.");
        return -1;
    }

    /*
    We can now create our sound. This is created from a data source, which in this example is a
    ring buffer. The capture side will be writing data into the ring buffer, whereas the sound
    will be reading from it.
    */
    result = ma_sound_init_from_data_source(&engine, &rb, 0, NULL, &sound);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize the sound.");
        return -1;
    }

    /* Link the starting of the device and sound together. */
    ma_device_start(&device);
    ma_sound_start(&sound);


    printf("Press Enter to quit...\n");
    getchar();

    ma_sound_uninit(&sound);
    ma_engine_uninit(&engine);
    ma_device_uninit(&device);
    ma_pcm_rb_uninit(&rb);

    (void)argc;
    (void)argv;
    return 0;
}

248
thirdparty/miniaudio-0.11.24/examples/node_graph.c
vendored
Normal file
@@ -0,0 +1,248 @@
/*
This example shows how to use the node graph system.

The node graph system can be used for doing complex mixing and effect processing. The idea is that
you have a number of nodes that are connected to each other to form a graph. At the end of the
graph is an endpoint which all nodes eventually connect to.

A node is used to do some kind of processing on zero or more input streams and produce one or more
output streams. Each node can have a number of inputs and outputs. Each of these is called a bus in
miniaudio. Some nodes, particularly data source nodes, have no inputs and instead generate their
outputs dynamically. All nodes must have at least one output, otherwise they'll be disconnected from
the graph and will never get processed. Each output bus of a node will be connected to an input bus of
another node, but they don't all need to connect to the same input node. For example, a splitter
node has 1 input bus and 2 output buses and is used to duplicate a signal. You could then branch
off and have one output bus connected to one input node and the other connected to a different
input node, and then have a different effect process each of the duplicated branches.

Any number of output buses can be connected to an input bus in which case the output buses will be
mixed before processing by the input node. This is how you would achieve the mixing part of the
node graph.

This example will be using the following node graph setup:

    ```
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Data flows left to right >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

    +---------------+                              +-----------------+
    | Data Source 1 =----+    +----------+    +----= Low Pass Filter =----+
    +---------------+    |    |          =----+    +-----------------+    |    +----------+
                         +----= Splitter |                                +----= ENDPOINT |
    +---------------+    |    |          =----+    +-----------------+    |    +----------+
    | Data Source 2 =----+    +----------+    +----= Echo / Delay    =----+
    +---------------+                              +-----------------+
    ```

This does not represent a realistic real-world scenario, but it demonstrates how to make use of
mixing, multiple outputs and multiple effects.

The data source nodes are connected to the input of the splitter. They'll be mixed before being
processed by the splitter. The splitter has two output buses. In the graph above, one bus will be
routed to a low pass filter, whereas the other bus will be routed to an echo effect. Then, the
outputs of these two effects will be connected to the input bus of the endpoint. Because both of
the outputs are connected to the same input bus, they'll be mixed at that point.

The two data sources at the start of the graph have no inputs. They'll instead generate their
output by reading from a data source. The data source in this case will be one `ma_decoder` for
each input file specified on the command line.

You can also control the volume of an output bus. In this example, we set the volumes of the low
pass and echo effects so that one of them becomes more obvious than the other.

When you want to read from the graph, you simply call `ma_node_graph_read_pcm_frames()`.
*/
#include "../miniaudio.c"

/* Data Format */
#define FORMAT      ma_format_f32   /* Must always be f32. */
#define CHANNELS    2
#define SAMPLE_RATE 48000

/* Effect Properties */
#define LPF_BIAS            0.9f    /* Higher values mean more bias towards the low pass filter (the low pass filter will be more audible). Lower values mean more bias towards the echo. Must be between 0 and 1. */
#define LPF_CUTOFF_FACTOR   80      /* High values = more filter. */
#define LPF_ORDER           8
#define DELAY_IN_SECONDS    0.2f
#define DECAY               0.5f    /* Volume falloff for each echo. */

typedef struct
{
    ma_data_source_node node;   /* If you make this the first member, you can pass a pointer to this struct into any `ma_node_*` API and it will "Just Work". */
    ma_decoder decoder;
} sound_node;

static ma_node_graph     g_nodeGraph;
static ma_lpf_node       g_lpfNode;
static ma_delay_node     g_delayNode;
static ma_splitter_node  g_splitterNode;
static sound_node*       g_pSoundNodes;
static int               g_soundNodeCount;

void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    /*
    Hearing the output of the node graph is as easy as reading straight into the output buffer. You just need to
    make sure you use a consistent data format or else you'll need to do your own conversion.
    */
    ma_node_graph_read_pcm_frames(&g_nodeGraph, pOutput, frameCount, NULL);

    (void)pInput;   /* Unused. */
    (void)pDevice;  /* Unused. */
}

int main(int argc, char** argv)
{
    int iarg;
    ma_result result;

    /* We'll set up our nodes starting from the end and working our way back to the start. We'll need to set up the graph first. */
    {
        ma_node_graph_config nodeGraphConfig = ma_node_graph_config_init(CHANNELS);

        result = ma_node_graph_init(&nodeGraphConfig, NULL, &g_nodeGraph);
        if (result != MA_SUCCESS) {
            printf("ERROR: Failed to initialize node graph.");
            return -1;
        }
    }


    /* Low Pass Filter. */
    {
        ma_lpf_node_config lpfNodeConfig = ma_lpf_node_config_init(CHANNELS, SAMPLE_RATE, SAMPLE_RATE / LPF_CUTOFF_FACTOR, LPF_ORDER);

        result = ma_lpf_node_init(&g_nodeGraph, &lpfNodeConfig, NULL, &g_lpfNode);
        if (result != MA_SUCCESS) {
            printf("ERROR: Failed to initialize low pass filter node.");
            return -1;
        }

        /* Connect the output bus of the low pass filter node to the input bus of the endpoint. */
        ma_node_attach_output_bus(&g_lpfNode, 0, ma_node_graph_get_endpoint(&g_nodeGraph), 0);

        /* Set the volume of the low pass filter to make it more or less impactful. */
        ma_node_set_output_bus_volume(&g_lpfNode, 0, LPF_BIAS);
    }


    /* Echo / Delay. */
    {
        ma_delay_node_config delayNodeConfig = ma_delay_node_config_init(CHANNELS, SAMPLE_RATE, (ma_uint32)(SAMPLE_RATE * DELAY_IN_SECONDS), DECAY);

        result = ma_delay_node_init(&g_nodeGraph, &delayNodeConfig, NULL, &g_delayNode);
        if (result != MA_SUCCESS) {
            printf("ERROR: Failed to initialize delay node.");
            return -1;
        }

        /* Connect the output bus of the delay node to the input bus of the endpoint. */
        ma_node_attach_output_bus(&g_delayNode, 0, ma_node_graph_get_endpoint(&g_nodeGraph), 0);

        /* Set the volume of the delay node to make it more or less impactful. */
        ma_node_set_output_bus_volume(&g_delayNode, 0, 1 - LPF_BIAS);
    }


    /* Splitter. */
    {
        ma_splitter_node_config splitterNodeConfig = ma_splitter_node_config_init(CHANNELS);

        result = ma_splitter_node_init(&g_nodeGraph, &splitterNodeConfig, NULL, &g_splitterNode);
        if (result != MA_SUCCESS) {
            printf("ERROR: Failed to initialize splitter node.");
            return -1;
        }

        /* Connect output bus 0 to the input bus of the low pass filter node, and output bus 1 to the input bus of the delay node. */
        ma_node_attach_output_bus(&g_splitterNode, 0, &g_lpfNode, 0);
        ma_node_attach_output_bus(&g_splitterNode, 1, &g_delayNode, 0);
    }


    /* Data sources. Ignore any that cannot be loaded. */
    g_pSoundNodes = (sound_node*)ma_malloc(sizeof(*g_pSoundNodes) * (argc - 1), NULL);
    if (g_pSoundNodes == NULL) {
        printf("Failed to allocate memory for sounds.");
        return -1;
    }

    g_soundNodeCount = 0;
    for (iarg = 1; iarg < argc; iarg += 1) {
        ma_decoder_config decoderConfig = ma_decoder_config_init(FORMAT, CHANNELS, SAMPLE_RATE);

        result = ma_decoder_init_file(argv[iarg], &decoderConfig, &g_pSoundNodes[g_soundNodeCount].decoder);
        if (result == MA_SUCCESS) {
            ma_data_source_node_config dataSourceNodeConfig = ma_data_source_node_config_init(&g_pSoundNodes[g_soundNodeCount].decoder);

            result = ma_data_source_node_init(&g_nodeGraph, &dataSourceNodeConfig, NULL, &g_pSoundNodes[g_soundNodeCount].node);
            if (result == MA_SUCCESS) {
                /* The data source node has been created successfully. Attach it to the splitter. */
                ma_node_attach_output_bus(&g_pSoundNodes[g_soundNodeCount].node, 0, &g_splitterNode, 0);
                g_soundNodeCount += 1;
            } else {
                printf("WARNING: Failed to init data source node for sound \"%s\". Ignoring.", argv[iarg]);
                ma_decoder_uninit(&g_pSoundNodes[g_soundNodeCount].decoder);
            }
        } else {
            printf("WARNING: Failed to load sound \"%s\". Ignoring.", argv[iarg]);
        }
    }

    /* Everything has been initialized successfully so now we can set up a playback device so we can listen to the result. */
    {
        ma_device_config deviceConfig;
        ma_device device;

        deviceConfig = ma_device_config_init(ma_device_type_playback);
        deviceConfig.playback.format   = FORMAT;
        deviceConfig.playback.channels = CHANNELS;
        deviceConfig.sampleRate        = SAMPLE_RATE;
        deviceConfig.dataCallback      = data_callback;
        deviceConfig.pUserData         = NULL;

        result = ma_device_init(NULL, &deviceConfig, &device);
        if (result != MA_SUCCESS) {
            printf("ERROR: Failed to initialize device.");
            goto cleanup_graph;
        }

        result = ma_device_start(&device);
        if (result != MA_SUCCESS) {
            ma_device_uninit(&device);
            goto cleanup_graph;
        }

        printf("Press Enter to quit...\n");
        getchar();

        /* We're done. Clean up the device. */
        ma_device_uninit(&device);
    }


cleanup_graph:
    {
        /* It's good practice to tear down the graph from the lowest level nodes first. */
        int iSound;

        /* Sounds. */
        for (iSound = 0; iSound < g_soundNodeCount; iSound += 1) {
            ma_data_source_node_uninit(&g_pSoundNodes[iSound].node, NULL);
            ma_decoder_uninit(&g_pSoundNodes[iSound].decoder);
        }

        /* Splitter. */
        ma_splitter_node_uninit(&g_splitterNode, NULL);

        /* Echo / Delay */
        ma_delay_node_uninit(&g_delayNode, NULL);

        /* Low Pass Filter */
        ma_lpf_node_uninit(&g_lpfNode, NULL);

        /* Node Graph */
        ma_node_graph_uninit(&g_nodeGraph, NULL);
    }

    return 0;
}

149
thirdparty/miniaudio-0.11.24/examples/resource_manager.c
vendored
Normal file
@@ -0,0 +1,149 @@
/*
Demonstrates how you can use the resource manager to manage loaded sounds.

This example loads the first sound specified on the command line via the resource manager and then plays it using the
low level API.

You can control whether or not you want to load the sound asynchronously and whether or not you want to store the data
in-memory or stream it. When storing the sound in-memory you can also control whether or not it is decoded. To do this,
specify a combination of the following options in `ma_resource_manager_data_source_init()`:

    * MA_RESOURCE_MANAGER_DATA_SOURCE_FLAG_ASYNC  - Load asynchronously.
    * MA_RESOURCE_MANAGER_DATA_SOURCE_FLAG_DECODE - Store the sound in-memory in uncompressed/decoded format.
    * MA_RESOURCE_MANAGER_DATA_SOURCE_FLAG_STREAM - Stream the sound from disk rather than storing entirely in memory. Useful for music.

The object returned by the resource manager is just a standard data source which means it can be plugged into any of
the `ma_data_source_*()` APIs just like any other data source and it should just work.

Internally, there's a background thread that's used to process jobs and enable asynchronicity. By default there is only
a single job thread, but this can be configured in the resource manager config. You can also implement your own threads
for processing jobs. That is more advanced, and beyond the scope of this example.

When you initialize a resource manager you can specify the sample format, channels and sample rate to use when reading
data from the data source. This means the resource manager will ensure all sounds have a standard format. When not
set, each sound will have its own format and you'll need to do the necessary data conversion yourself.
*/
#define MA_NO_ENGINE    /* We're intentionally not using the ma_engine API here. */
#include "../miniaudio.c"

#ifdef __EMSCRIPTEN__
#include <emscripten.h>

void main_loop__em(void* pUserData)
{
    ma_resource_manager* pResourceManager = (ma_resource_manager*)pUserData;

    /*
    The Emscripten build does not support threading which means we need to process jobs manually. If
    there are no jobs needing to be processed this will return immediately with MA_NO_DATA_AVAILABLE.
    */
    ma_resource_manager_process_next_job(pResourceManager);
}
#endif

void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    ma_data_source_read_pcm_frames((ma_data_source*)pDevice->pUserData, pOutput, frameCount, NULL);

    (void)pInput;
}

int main(int argc, char** argv)
{
    ma_result result;
    ma_device_config deviceConfig;
    ma_device device;
    ma_resource_manager_config resourceManagerConfig;
    ma_resource_manager resourceManager;
    ma_resource_manager_data_source dataSource;

    if (argc < 2) {
        printf("No input file.");
        return -1;
    }


    /* We'll initialize the device first. */
    deviceConfig = ma_device_config_init(ma_device_type_playback);
    deviceConfig.dataCallback = data_callback;
    deviceConfig.pUserData    = &dataSource;    /* <-- We'll be reading from this in the data callback. */

    result = ma_device_init(NULL, &deviceConfig, &device);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize device.");
        return -1;
    }


    /*
    We have the device so now we want to initialize the resource manager. We'll use the resource manager to load a
    sound based on the command line.
    */
    resourceManagerConfig = ma_resource_manager_config_init();
    resourceManagerConfig.decodedFormat     = device.playback.format;
    resourceManagerConfig.decodedChannels   = device.playback.channels;
    resourceManagerConfig.decodedSampleRate = device.sampleRate;

    /*
    We're not supporting threading with Emscripten so go ahead and disable threading. It's important
    that we set the appropriate flag and also the job thread count to 0.
    */
#ifdef __EMSCRIPTEN__
    resourceManagerConfig.flags |= MA_RESOURCE_MANAGER_FLAG_NO_THREADING;
    resourceManagerConfig.jobThreadCount = 0;
#endif

    result = ma_resource_manager_init(&resourceManagerConfig, &resourceManager);
    if (result != MA_SUCCESS) {
        ma_device_uninit(&device);
        printf("Failed to initialize the resource manager.");
        return -1;
    }

    /* Now that we have a resource manager we can load a sound. */
    result = ma_resource_manager_data_source_init(
        &resourceManager,
        argv[1],
        MA_RESOURCE_MANAGER_DATA_SOURCE_FLAG_DECODE | MA_RESOURCE_MANAGER_DATA_SOURCE_FLAG_ASYNC | MA_RESOURCE_MANAGER_DATA_SOURCE_FLAG_STREAM,
        NULL,   /* Async notification. */
        &dataSource);
    if (result != MA_SUCCESS) {
        printf("Failed to load sound \"%s\".", argv[1]);
        return -1;
    }

    /* In this example we'll enable looping. */
    ma_data_source_set_looping(&dataSource, MA_TRUE);


    /* Now that we have a sound we can start the device. */
    result = ma_device_start(&device);
    if (result != MA_SUCCESS) {
        ma_device_uninit(&device);
        printf("Failed to start device.");
        return -1;
    }

#ifdef __EMSCRIPTEN__
    emscripten_set_main_loop_arg(main_loop__em, &resourceManager, 0, 1);
#else
    printf("Press Enter to quit...\n");
    getchar();
#endif

    /* Teardown. */

    /* Uninitialize the device first to ensure the data callback is stopped and doesn't try to access any data. */
    ma_device_uninit(&device);

    /*
    Before uninitializing the resource manager we need to uninitialize every data source. The data source is owned by
    the caller which means you're responsible for uninitializing it.
    */
    ma_resource_manager_data_source_uninit(&dataSource);

    /* Uninitialize the resource manager after each data source. */
    ma_resource_manager_uninit(&resourceManager);

    return 0;
}

336
thirdparty/miniaudio-0.11.24/examples/resource_manager_advanced.c
vendored
Normal file
@@ -0,0 +1,336 @@
/*
Demonstrates how you can use the resource manager to manage loaded sounds.

The resource manager can be used to create a data source whose resources are managed internally by miniaudio. The data
sources can then be read just like any other data source such as decoders and audio buffers.

In this example we use the resource manager independently of the `ma_engine` API so that we can demonstrate how it can
be used by itself without getting it confused with `ma_engine`.

The main feature of the resource manager is the ability to decode and stream audio data asynchronously. Asynchronicity
is achieved with a job system. The resource manager will issue jobs which are processed by a configurable number of job
threads. You can also implement your own custom job threads, which this example also demonstrates.

In this example we show how you can create data sources, mix them with other data sources, configure the number of job
threads to manage internally and how to implement your own custom job thread.
*/
#define MA_NO_ENGINE        /* We're intentionally not using the ma_engine API here. */
#include "../miniaudio.c"

static ma_resource_manager_data_source g_dataSources[16];
static ma_uint32 g_dataSourceCount;


/*
TODO: Consider putting these public functions in miniaudio.h. Will depend on ma_mix_pcm_frames_f32()
being merged into miniaudio.h (it's currently in miniaudio_engine.h).
*/
static ma_result ma_data_source_read_pcm_frames_f32_ex(ma_data_source* pDataSource, float* pFramesOut, ma_uint64 frameCount, ma_uint64* pFramesRead, ma_format dataSourceFormat, ma_uint32 dataSourceChannels)
{
    /*
    This function is intended to be used when the format and channel count of the data source is
    known beforehand. The idea is to avoid overhead due to redundant calls to ma_data_source_get_data_format().
    */
    if (dataSourceFormat == ma_format_f32) {
        /* Fast path. No conversion necessary. */
        return ma_data_source_read_pcm_frames(pDataSource, pFramesOut, frameCount, pFramesRead);
    } else {
        /* Slow path. Conversion necessary. */
        ma_result result;
        ma_uint64 totalFramesRead;
        ma_uint8 temp[MA_DATA_CONVERTER_STACK_BUFFER_SIZE];
        ma_uint64 tempCapInFrames = sizeof(temp) / ma_get_bytes_per_frame(dataSourceFormat, dataSourceChannels);

        if (pFramesRead != NULL) {
            *pFramesRead = 0;
        }

        totalFramesRead = 0;
        while (totalFramesRead < frameCount) {
            ma_uint64 framesJustRead;
            ma_uint64 framesToRead = frameCount - totalFramesRead;
            if (framesToRead > tempCapInFrames) {
                framesToRead = tempCapInFrames;
            }

            result = ma_data_source_read_pcm_frames(pDataSource, temp, framesToRead, &framesJustRead);
            if (result != MA_SUCCESS) {
                break;
            }

            ma_convert_pcm_frames_format(ma_offset_pcm_frames_ptr_f32(pFramesOut, totalFramesRead, dataSourceChannels), ma_format_f32, temp, dataSourceFormat, framesJustRead, dataSourceChannels, ma_dither_mode_none);
            totalFramesRead += framesJustRead;

            if (result != MA_SUCCESS) {
                break;
            }
        }

        if (pFramesRead != NULL) {
            *pFramesRead = totalFramesRead;
        }

        return MA_SUCCESS;
    }
}

MA_API ma_result ma_data_source_read_pcm_frames_f32(ma_data_source* pDataSource, float* pFramesOut, ma_uint64 frameCount, ma_uint64* pFramesRead)
{
    ma_result result;
    ma_format format;
    ma_uint32 channels;

    result = ma_data_source_get_data_format(pDataSource, &format, &channels, NULL, NULL, 0);
    if (result != MA_SUCCESS) {
        return result;  /* Failed to retrieve the data format of the data source. */
    }

    return ma_data_source_read_pcm_frames_f32_ex(pDataSource, pFramesOut, frameCount, pFramesRead, format, channels);
}

MA_API ma_result ma_data_source_read_pcm_frames_and_mix_f32(ma_data_source* pDataSource, float* pFramesOut, ma_uint64 frameCount, ma_uint64* pFramesRead, float volume)
{
    ma_result result;
    ma_format format;
    ma_uint32 channels;
    ma_uint64 totalFramesRead;

    if (pFramesRead != NULL) {
        *pFramesRead = 0;
    }

    if (pDataSource == NULL) {
        return MA_INVALID_ARGS;
    }

    result = ma_data_source_get_data_format(pDataSource, &format, &channels, NULL, NULL, 0);
    if (result != MA_SUCCESS) {
        return result;  /* Failed to retrieve the data format of the data source. */
    }

    totalFramesRead = 0;
    while (totalFramesRead < frameCount) {
        float temp[MA_DATA_CONVERTER_STACK_BUFFER_SIZE/sizeof(float)];
        ma_uint64 tempCapInFrames = ma_countof(temp) / channels;
        ma_uint64 framesJustRead;
        ma_uint64 framesToRead = frameCount - totalFramesRead;
        if (framesToRead > tempCapInFrames) {
            framesToRead = tempCapInFrames;
        }

        result = ma_data_source_read_pcm_frames_f32_ex(pDataSource, temp, framesToRead, &framesJustRead, format, channels);

        ma_mix_pcm_frames_f32(ma_offset_pcm_frames_ptr_f32(pFramesOut, totalFramesRead, channels), temp, framesJustRead, channels, volume);
        totalFramesRead += framesJustRead;

        if (result != MA_SUCCESS) {
            break;
        }
    }

    if (pFramesRead != NULL) {
        *pFramesRead = totalFramesRead;
    }

    return MA_SUCCESS;
}


void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    /*
    In this example we're just going to play our data sources layered on top of each other. This
    assumes the device's format is f32 and that the buffer is not pre-silenced.
    */
    ma_uint32 iDataSource;

    /*
    If the device was configured with noPreSilencedOutputBuffer then you would need to silence the
    buffer here, or make sure the first data source to be mixed is copied rather than mixed.
    */
    /*ma_silence_pcm_frames(pOutput, frameCount, ma_format_f32, pDevice->playback.channels);*/

    /* For each sound, mix as much data as we can. */
    for (iDataSource = 0; iDataSource < g_dataSourceCount; iDataSource += 1) {
        ma_data_source_read_pcm_frames_and_mix_f32(&g_dataSources[iDataSource], (float*)pOutput, frameCount, NULL, /* volume = */1);
    }

    /* Unused. */
    (void)pInput;
    (void)pDevice;
}

static ma_thread_result MA_THREADCALL custom_job_thread(void* pUserData)
{
    ma_resource_manager* pResourceManager = (ma_resource_manager*)pUserData;

    for (;;) {
        ma_result result;
        ma_resource_manager_job job;

        /*
        Retrieve a job from the queue first. This defines what it is you're about to do. By default this will be
        blocking. You can initialize the resource manager with MA_RESOURCE_MANAGER_FLAG_NON_BLOCKING to not block, in
        which case MA_NO_DATA_AVAILABLE will be returned if no jobs are available.

        When the quit job is returned (MA_JOB_TYPE_QUIT), the return value will always be MA_CANCELLED. If you don't
        want to check the return value (you should), you can instead check if the job code is MA_JOB_TYPE_QUIT and use
        that instead.
        */
        result = ma_resource_manager_next_job(pResourceManager, &job);
        if (result != MA_SUCCESS) {
            if (result == MA_CANCELLED) {
                printf("CUSTOM JOB THREAD TERMINATING VIA MA_CANCELLED... ");
            } else {
                printf("CUSTOM JOB THREAD ERROR: %s. TERMINATING... ", ma_result_description(result));
            }

            break;
        }

        /*
        Terminate if we got a quit message. You don't need to terminate like this, but it's a bit more robust. You can
        just use a global variable or something similar if it's easier for your particular situation. The quit job
        remains in the queue and will continue to be returned by future calls to ma_resource_manager_next_job(). The
        reason for this is to give every job thread visibility to the quit job so they have a chance to exit.

        We won't actually be hitting this code because the call above will return MA_CANCELLED when the MA_JOB_TYPE_QUIT
        event is received, which means the `result != MA_SUCCESS` logic above will catch it. If you do not check the
        return value of ma_resource_manager_next_job() you will want to check for MA_JOB_TYPE_QUIT like the code below.
        */
        if (job.toc.breakup.code == MA_JOB_TYPE_QUIT) {
            printf("CUSTOM JOB THREAD TERMINATING VIA MA_JOB_TYPE_QUIT... ");
            break;
        }

        /* Call ma_resource_manager_process_job() to actually do the work to process the job. */
        printf("PROCESSING IN CUSTOM JOB THREAD: %d\n", job.toc.breakup.code);
        ma_resource_manager_process_job(pResourceManager, &job);
    }

    printf("TERMINATED\n");
    return (ma_thread_result)0;
}

int main(int argc, char** argv)
{
    ma_result result;
    ma_device_config deviceConfig;
    ma_device device;
    ma_resource_manager_config resourceManagerConfig;
    ma_resource_manager resourceManager;
    ma_thread jobThread;
    int iFile;

    deviceConfig = ma_device_config_init(ma_device_type_playback);
    deviceConfig.playback.format = ma_format_f32;
    deviceConfig.dataCallback    = data_callback;
    deviceConfig.pUserData       = NULL;

    result = ma_device_init(NULL, &deviceConfig, &device);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize device.");
        return -1;
    }


    /* We can start the device before loading any sounds. We'll just end up outputting silence. */
    result = ma_device_start(&device);
    if (result != MA_SUCCESS) {
        ma_device_uninit(&device);
        printf("Failed to start device.");
        return -1;
    }


    /*
    We have the device so now we want to initialize the resource manager. We'll use the resource manager to load some
    sounds based on the command line.
    */
    resourceManagerConfig = ma_resource_manager_config_init();

    /*
    We'll set a standard decoding format to save us some processing time at mixing time. If you're wanting to use
    spatialization with your decoded sounds, you may want to consider leaving this as 0 to ensure the file's native
    channel count is used so you can do proper spatialization.
    */
    resourceManagerConfig.decodedFormat     = device.playback.format;
    resourceManagerConfig.decodedChannels   = device.playback.channels;
    resourceManagerConfig.decodedSampleRate = device.sampleRate;

    /* The number of job threads to be managed internally. Set this to 0 if you want to self-manage your job threads. */
    resourceManagerConfig.jobThreadCount = 4;

    result = ma_resource_manager_init(&resourceManagerConfig, &resourceManager);
    if (result != MA_SUCCESS) {
        ma_device_uninit(&device);
        printf("Failed to initialize the resource manager.");
        return -1;
    }

    /*
    Now that we have a resource manager we can set up our custom job thread. This is optional. Normally when doing
    self-managed job threads you would set the internal job thread count to zero. We're doing both internal and
    self-managed job threads in this example just for demonstration purposes.
    */
    ma_thread_create(&jobThread, ma_thread_priority_default, 0, custom_job_thread, &resourceManager, NULL);

    /* Create each data source from the resource manager. Note that the caller is the owner. */
    for (iFile = 0; iFile < (int)ma_countof(g_dataSources) && iFile < argc-1; iFile += 1) {
        result = ma_resource_manager_data_source_init(
            &resourceManager,
            argv[iFile+1],
            MA_RESOURCE_MANAGER_DATA_SOURCE_FLAG_DECODE | MA_RESOURCE_MANAGER_DATA_SOURCE_FLAG_ASYNC /*| MA_RESOURCE_MANAGER_DATA_SOURCE_FLAG_STREAM*/,
            NULL,   /* Async notification. */
            &g_dataSources[iFile]);

        if (result != MA_SUCCESS) {
            break;
        }

        /* Use looping in this example. */
        ma_data_source_set_looping(&g_dataSources[iFile], MA_TRUE);

        g_dataSourceCount += 1;
    }

    printf("Press Enter to quit...");
    getchar();


    /* Teardown. */

    /*
    Uninitialize the device first to ensure the data callback is stopped and doesn't try to access
    any data.
    */
    ma_device_uninit(&device);

    /*
    Our data sources need to be explicitly uninitialized. ma_resource_manager_uninit() will not do
    it for us. This needs to be done before posting the quit event and uninitializing the resource
    manager or else we'll get stuck in a deadlock because ma_resource_manager_data_source_uninit()
    will be waiting for the job thread(s) to finish work, which will never happen because they were
    just terminated.
    */
    for (iFile = 0; (size_t)iFile < g_dataSourceCount; iFile += 1) {
        ma_resource_manager_data_source_uninit(&g_dataSources[iFile]);
    }

    /*
    Before uninitializing the resource manager we need to make sure a quit event has been posted to
    ensure we can get out of our custom thread. The call to ma_resource_manager_uninit() will also
    do this, but we need to call it explicitly so that our self-managed thread can exit naturally.
    You only need to post a quit job if you're using that as the exit indicator. You can instead
    use whatever variable you want to terminate your job thread, but since this example is using a
    quit job we need to post one. Note that you don't need to do this if you're not managing your
    own threads - ma_resource_manager_uninit() alone will suffice in that case.
    */
    ma_resource_manager_post_job_quit(&resourceManager);
    ma_thread_wait(&jobThread);   /* Wait for the custom job thread to finish so it doesn't try to access any data. */

    /* Uninitialize the resource manager after each data source. */
    ma_resource_manager_uninit(&resourceManager);

    return 0;
}
70
thirdparty/miniaudio-0.11.24/examples/simple_capture.c
vendored
Normal file
@@ -0,0 +1,70 @@
/*
Demonstrates how to capture data from a microphone using the low-level API.

This example simply captures data from your default microphone until you press Enter. The output is saved to the file
specified on the command line.

Capturing works in a very similar way to playback. The only difference is the direction of data movement. Instead of
the application sending data to the device, the device will send data to the application. This example just writes the
data received by the microphone straight to a WAV file.
*/
#include "../miniaudio.c"

#include <stdlib.h>
#include <stdio.h>

void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    ma_encoder_write_pcm_frames((ma_encoder*)pDevice->pUserData, pInput, frameCount, NULL);

    (void)pOutput;
}

int main(int argc, char** argv)
{
    ma_result result;
    ma_encoder_config encoderConfig;
    ma_encoder encoder;
    ma_device_config deviceConfig;
    ma_device device;

    if (argc < 2) {
        printf("No output file.\n");
        return -1;
    }

    encoderConfig = ma_encoder_config_init(ma_encoding_format_wav, ma_format_f32, 2, 44100);

    if (ma_encoder_init_file(argv[1], &encoderConfig, &encoder) != MA_SUCCESS) {
        printf("Failed to initialize output file.\n");
        return -1;
    }

    deviceConfig = ma_device_config_init(ma_device_type_capture);
    deviceConfig.capture.format   = encoder.config.format;
    deviceConfig.capture.channels = encoder.config.channels;
    deviceConfig.sampleRate       = encoder.config.sampleRate;
    deviceConfig.dataCallback     = data_callback;
    deviceConfig.pUserData        = &encoder;

    result = ma_device_init(NULL, &deviceConfig, &device);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize capture device.\n");
        return -2;
    }

    result = ma_device_start(&device);
    if (result != MA_SUCCESS) {
        ma_device_uninit(&device);
        printf("Failed to start device.\n");
        return -3;
    }

    printf("Press Enter to stop recording...\n");
    getchar();

    ma_device_uninit(&device);
    ma_encoder_uninit(&encoder);

    return 0;
}
72
thirdparty/miniaudio-0.11.24/examples/simple_duplex.c
vendored
Normal file
@@ -0,0 +1,72 @@
/*
Demonstrates duplex mode, which is where data is captured from a microphone and then output to a speaker device.

This example captures audio from the default microphone and then outputs it straight to the default playback device
without any kind of modification. If you wanted to, you could also apply filters and effects to the input stream
before outputting to the playback device.

Note that the microphone and playback device must run in lockstep. Any kind of timing deviation will result in audible
glitching which the backend may not be able to recover from. For this reason, miniaudio forces you to use the same
sample rate for both capture and playback. If internally the native sample rates differ, miniaudio will perform the
sample rate conversion for you automatically.
*/
#include "../miniaudio.c"

#include <stdio.h>

#ifdef __EMSCRIPTEN__
void main_loop__em()
{
}
#endif

void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    /* This example assumes the playback and capture sides use the same format and channel count. */
    if (pDevice->capture.format != pDevice->playback.format || pDevice->capture.channels != pDevice->playback.channels) {
        return;
    }

    /* In this example the format and channel count are the same for both input and output which means we can just memcpy(). */
    MA_COPY_MEMORY(pOutput, pInput, frameCount * ma_get_bytes_per_frame(pDevice->capture.format, pDevice->capture.channels));
}

int main(int argc, char** argv)
{
    ma_result result;
    ma_device_config deviceConfig;
    ma_device device;

    deviceConfig = ma_device_config_init(ma_device_type_duplex);
    deviceConfig.capture.pDeviceID  = NULL;
    deviceConfig.capture.format     = ma_format_s16;
    deviceConfig.capture.channels   = 2;
    deviceConfig.capture.shareMode  = ma_share_mode_shared;
    deviceConfig.playback.pDeviceID = NULL;
    deviceConfig.playback.format    = ma_format_s16;
    deviceConfig.playback.channels  = 2;
    deviceConfig.dataCallback       = data_callback;
    result = ma_device_init(NULL, &deviceConfig, &device);
    if (result != MA_SUCCESS) {
        return result;
    }

#ifdef __EMSCRIPTEN__
    getchar();
#endif

    ma_device_start(&device);

#ifdef __EMSCRIPTEN__
    emscripten_set_main_loop(main_loop__em, 0, 1);
#else
    printf("Press Enter to quit...\n");
    getchar();
#endif

    ma_device_uninit(&device);

    (void)argc;
    (void)argv;
    return 0;
}
53
thirdparty/miniaudio-0.11.24/examples/simple_enumeration.c
vendored
Normal file
@@ -0,0 +1,53 @@
/*
Demonstrates how to enumerate over devices.

Device enumeration requires a `ma_context` object which is initialized with `ma_context_init()`. Conceptually, the
context sits above a device. You can have many devices to one context.

If you use device enumeration, you should explicitly specify the same context you used for enumeration in the call to
`ma_device_init()` when you initialize your devices.
*/
#include "../miniaudio.c"

#include <stdio.h>

int main(int argc, char** argv)
{
    ma_result result;
    ma_context context;
    ma_device_info* pPlaybackDeviceInfos;
    ma_uint32 playbackDeviceCount;
    ma_device_info* pCaptureDeviceInfos;
    ma_uint32 captureDeviceCount;
    ma_uint32 iDevice;

    if (ma_context_init(NULL, 0, NULL, &context) != MA_SUCCESS) {
        printf("Failed to initialize context.\n");
        return -2;
    }

    result = ma_context_get_devices(&context, &pPlaybackDeviceInfos, &playbackDeviceCount, &pCaptureDeviceInfos, &captureDeviceCount);
    if (result != MA_SUCCESS) {
        printf("Failed to retrieve device information.\n");
        return -3;
    }

    printf("Playback Devices\n");
    for (iDevice = 0; iDevice < playbackDeviceCount; ++iDevice) {
        printf("    %u: %s\n", iDevice, pPlaybackDeviceInfos[iDevice].name);
    }

    printf("\n");

    printf("Capture Devices\n");
    for (iDevice = 0; iDevice < captureDeviceCount; ++iDevice) {
        printf("    %u: %s\n", iDevice, pCaptureDeviceInfos[iDevice].name);
    }


    ma_context_uninit(&context);

    (void)argc;
    (void)argv;
    return 0;
}
78
thirdparty/miniaudio-0.11.24/examples/simple_loopback.c
vendored
Normal file
@@ -0,0 +1,78 @@
/*
Demonstrates how to implement loopback recording.

This example simply captures data from your default playback device until you press Enter. The output is saved to the
file specified on the command line.

Loopback mode is when you record audio that is played from a given speaker. It is only supported on WASAPI, but can be
used indirectly with PulseAudio by choosing the appropriate loopback device after enumeration.

To use loopback mode you just need to set the device type to ma_device_type_loopback and set the capture device config
properties. The output buffer in the callback will be null whereas the input buffer will be valid.
*/
#include "../miniaudio.c"

#include <stdlib.h>
#include <stdio.h>

void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    ma_encoder_write_pcm_frames((ma_encoder*)pDevice->pUserData, pInput, frameCount, NULL);

    (void)pOutput;
}

int main(int argc, char** argv)
{
    ma_result result;
    ma_encoder_config encoderConfig;
    ma_encoder encoder;
    ma_device_config deviceConfig;
    ma_device device;

    /* Loopback mode is currently only supported on WASAPI. */
    ma_backend backends[] = {
        ma_backend_wasapi
    };

    if (argc < 2) {
        printf("No output file.\n");
        return -1;
    }

    encoderConfig = ma_encoder_config_init(ma_encoding_format_wav, ma_format_f32, 2, 44100);

    if (ma_encoder_init_file(argv[1], &encoderConfig, &encoder) != MA_SUCCESS) {
        printf("Failed to initialize output file.\n");
        return -1;
    }

    deviceConfig = ma_device_config_init(ma_device_type_loopback);
    deviceConfig.capture.pDeviceID = NULL; /* Use default device for this example. Set this to the ID of a _playback_ device if you want to capture from a specific device. */
    deviceConfig.capture.format    = encoder.config.format;
    deviceConfig.capture.channels  = encoder.config.channels;
    deviceConfig.sampleRate        = encoder.config.sampleRate;
    deviceConfig.dataCallback      = data_callback;
    deviceConfig.pUserData         = &encoder;

    result = ma_device_init_ex(backends, sizeof(backends)/sizeof(backends[0]), NULL, &deviceConfig, &device);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize loopback device.\n");
        return -2;
    }

    result = ma_device_start(&device);
    if (result != MA_SUCCESS) {
        ma_device_uninit(&device);
        printf("Failed to start device.\n");
        return -3;
    }

    printf("Press Enter to stop recording...\n");
    getchar();

    ma_device_uninit(&device);
    ma_encoder_uninit(&encoder);

    return 0;
}
75
thirdparty/miniaudio-0.11.24/examples/simple_looping.c
vendored
Normal file
@@ -0,0 +1,75 @@
/*
Shows one way to handle looping of a sound.

This example uses a decoder as the data source. Decoders can be used with the `ma_data_source` API which, conveniently,
supports looping via the `ma_data_source_read_pcm_frames()` API. To use it, all you need to do is pass a pointer to the
decoder straight into `ma_data_source_read_pcm_frames()` and it will just work.
*/
#include "../miniaudio.c"

#include <stdio.h>

void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    ma_decoder* pDecoder = (ma_decoder*)pDevice->pUserData;
    if (pDecoder == NULL) {
        return;
    }

    /* Reading PCM frames will loop based on what we specified when we called ma_data_source_set_looping(). */
    ma_data_source_read_pcm_frames(pDecoder, pOutput, frameCount, NULL);

    (void)pInput;
}

int main(int argc, char** argv)
{
    ma_result result;
    ma_decoder decoder;
    ma_device_config deviceConfig;
    ma_device device;

    if (argc < 2) {
        printf("No input file.\n");
        return -1;
    }

    result = ma_decoder_init_file(argv[1], NULL, &decoder);
    if (result != MA_SUCCESS) {
        return -2;
    }

    /*
    A decoder is a data source which means we just use ma_data_source_set_looping() to set the
    looping state. We will read data using ma_data_source_read_pcm_frames() in the data callback.
    */
    ma_data_source_set_looping(&decoder, MA_TRUE);

    deviceConfig = ma_device_config_init(ma_device_type_playback);
    deviceConfig.playback.format   = decoder.outputFormat;
    deviceConfig.playback.channels = decoder.outputChannels;
    deviceConfig.sampleRate        = decoder.outputSampleRate;
    deviceConfig.dataCallback      = data_callback;
    deviceConfig.pUserData         = &decoder;

    if (ma_device_init(NULL, &deviceConfig, &device) != MA_SUCCESS) {
        printf("Failed to open playback device.\n");
        ma_decoder_uninit(&decoder);
        return -3;
    }

    if (ma_device_start(&device) != MA_SUCCESS) {
        printf("Failed to start playback device.\n");
        ma_device_uninit(&device);
        ma_decoder_uninit(&decoder);
        return -4;
    }

    printf("Press Enter to quit...");
    getchar();

    ma_device_uninit(&device);
    ma_decoder_uninit(&decoder);

    return 0;
}
198
thirdparty/miniaudio-0.11.24/examples/simple_mixing.c
vendored
Normal file
@@ -0,0 +1,198 @@
/*
Demonstrates one way to load multiple files and play them all back at the same time.

When mixing multiple sounds together, you should not create multiple devices. Instead you should create only a single
device and then mix your sounds together, which you can do by simply summing their samples together. The simplest way
to do this is to use floating point samples and use miniaudio's built-in clipper to handle clipping for you. (Clipping
is when samples are clamped to their minimum and maximum range, which for floating point is -1..1.)

```
Usage:   simple_mixing [input file 0] [input file 1] ... [input file n]
Example: simple_mixing file1.wav file2.flac
```
*/
#include "../miniaudio.c"

#include <stdio.h>

/*
For simplicity, this example requires the device to use floating point samples.
*/
#define SAMPLE_FORMAT   ma_format_f32
#define CHANNEL_COUNT   2
#define SAMPLE_RATE     48000

ma_uint32   g_decoderCount;
ma_decoder* g_pDecoders;
ma_bool32*  g_pDecodersAtEnd;

ma_event g_stopEvent; /* <-- Signaled by the audio thread, waited on by the main thread. */

ma_bool32 are_all_decoders_at_end(void)
{
    ma_uint32 iDecoder;
    for (iDecoder = 0; iDecoder < g_decoderCount; ++iDecoder) {
        if (g_pDecodersAtEnd[iDecoder] == MA_FALSE) {
            return MA_FALSE;
        }
    }

    return MA_TRUE;
}

ma_uint32 read_and_mix_pcm_frames_f32(ma_decoder* pDecoder, float* pOutputF32, ma_uint32 frameCount)
{
    /*
    The way mixing works is that we just read into a temporary buffer, then take the contents of that buffer and mix it with the
    contents of the output buffer by simply adding the samples together. You could also clip the samples to -1..+1, but I'm not
    doing that in this example.
    */
    ma_result result;
    float temp[4096];
    ma_uint32 tempCapInFrames = ma_countof(temp) / CHANNEL_COUNT;
    ma_uint32 totalFramesRead = 0;

    while (totalFramesRead < frameCount) {
        ma_uint64 iSample;
        ma_uint64 framesReadThisIteration;
        ma_uint32 totalFramesRemaining = frameCount - totalFramesRead;
        ma_uint32 framesToReadThisIteration = tempCapInFrames;
        if (framesToReadThisIteration > totalFramesRemaining) {
            framesToReadThisIteration = totalFramesRemaining;
        }

        result = ma_decoder_read_pcm_frames(pDecoder, temp, framesToReadThisIteration, &framesReadThisIteration);
        if (result != MA_SUCCESS || framesReadThisIteration == 0) {
            break;
        }

        /* Mix the frames together. */
        for (iSample = 0; iSample < framesReadThisIteration*CHANNEL_COUNT; ++iSample) {
            pOutputF32[totalFramesRead*CHANNEL_COUNT + iSample] += temp[iSample];
        }

        totalFramesRead += (ma_uint32)framesReadThisIteration;

        if (framesReadThisIteration < (ma_uint32)framesToReadThisIteration) {
            break;  /* Reached EOF. */
        }
    }

    return totalFramesRead;
}

void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    float* pOutputF32 = (float*)pOutput;
    ma_uint32 iDecoder;

    /* This example assumes the device was configured to use ma_format_f32. */
    for (iDecoder = 0; iDecoder < g_decoderCount; ++iDecoder) {
        if (!g_pDecodersAtEnd[iDecoder]) {
            ma_uint32 framesRead = read_and_mix_pcm_frames_f32(&g_pDecoders[iDecoder], pOutputF32, frameCount);
            if (framesRead < frameCount) {
                g_pDecodersAtEnd[iDecoder] = MA_TRUE;
            }
        }
    }

    /*
    If at the end all of our decoders are at the end we need to stop. We cannot stop the device in the callback. Instead we need to
    signal an event to indicate that it's stopped. The main thread will be waiting on the event, after which it will stop the device.
    */
    if (are_all_decoders_at_end()) {
        ma_event_signal(&g_stopEvent);
    }

    (void)pInput;
    (void)pDevice;
}

int main(int argc, char** argv)
{
    ma_result result;
    ma_decoder_config decoderConfig;
    ma_device_config deviceConfig;
    ma_device device;
    ma_uint32 iDecoder;

    if (argc < 2) {
        printf("No input files.\n");
        return -1;
    }

    g_decoderCount   = argc-1;
    g_pDecoders      = (ma_decoder*)malloc(sizeof(*g_pDecoders)      * g_decoderCount);
    g_pDecodersAtEnd = (ma_bool32*) malloc(sizeof(*g_pDecodersAtEnd) * g_decoderCount);

    /* In this example, all decoders need to have the same output format. */
    decoderConfig = ma_decoder_config_init(SAMPLE_FORMAT, CHANNEL_COUNT, SAMPLE_RATE);
    for (iDecoder = 0; iDecoder < g_decoderCount; ++iDecoder) {
        result = ma_decoder_init_file(argv[1+iDecoder], &decoderConfig, &g_pDecoders[iDecoder]);
        if (result != MA_SUCCESS) {
            ma_uint32 iDecoder2;
            for (iDecoder2 = 0; iDecoder2 < iDecoder; ++iDecoder2) {
                ma_decoder_uninit(&g_pDecoders[iDecoder2]);
            }
            free(g_pDecoders);
            free(g_pDecodersAtEnd);

            printf("Failed to load %s.\n", argv[1+iDecoder]);
            return -3;
        }
        g_pDecodersAtEnd[iDecoder] = MA_FALSE;
    }

    /* Create only a single device. The decoders will be mixed together in the callback. In this example the data format needs to be the same as the decoders. */
    deviceConfig = ma_device_config_init(ma_device_type_playback);
    deviceConfig.playback.format   = SAMPLE_FORMAT;
    deviceConfig.playback.channels = CHANNEL_COUNT;
    deviceConfig.sampleRate        = SAMPLE_RATE;
    deviceConfig.dataCallback      = data_callback;
    deviceConfig.pUserData         = NULL;

    if (ma_device_init(NULL, &deviceConfig, &device) != MA_SUCCESS) {
        for (iDecoder = 0; iDecoder < g_decoderCount; ++iDecoder) {
            ma_decoder_uninit(&g_pDecoders[iDecoder]);
}
|
||||
free(g_pDecoders);
|
||||
free(g_pDecodersAtEnd);
|
||||
|
||||
printf("Failed to open playback device.\n");
|
||||
return -3;
|
||||
}
|
||||
|
||||
/*
|
||||
We can't stop in the audio thread so we instead need to use an event. We wait on this thread in the main thread, and signal it in the audio thread. This
|
||||
needs to be done before starting the device. We need a context to initialize the event, which we can get from the device. Alternatively you can initialize
|
||||
a context separately, but we don't need to do that for this example.
|
||||
*/
|
||||
ma_event_init(&g_stopEvent);
|
||||
|
||||
/* Now we start playback and wait for the audio thread to tell us to stop. */
|
||||
if (ma_device_start(&device) != MA_SUCCESS) {
|
||||
ma_device_uninit(&device);
|
||||
for (iDecoder = 0; iDecoder < g_decoderCount; ++iDecoder) {
|
||||
ma_decoder_uninit(&g_pDecoders[iDecoder]);
|
||||
}
|
||||
free(g_pDecoders);
|
||||
free(g_pDecodersAtEnd);
|
||||
|
||||
printf("Failed to start playback device.\n");
|
||||
return -4;
|
||||
}
|
||||
|
||||
printf("Waiting for playback to complete...\n");
|
||||
ma_event_wait(&g_stopEvent);
|
||||
|
||||
/* Getting here means the audio thread has signaled that the device should be stopped. */
|
||||
ma_device_uninit(&device);
|
||||
|
||||
for (iDecoder = 0; iDecoder < g_decoderCount; ++iDecoder) {
|
||||
ma_decoder_uninit(&g_pDecoders[iDecoder]);
|
||||
}
|
||||
free(g_pDecoders);
|
||||
free(g_pDecodersAtEnd);
|
||||
|
||||
return 0;
|
||||
}
|
||||
74
thirdparty/miniaudio-0.11.24/examples/simple_playback.c
vendored
Normal file
@@ -0,0 +1,74 @@
/*
Demonstrates how to load a sound file and play it back using the low-level API.

The low-level API uses a callback to deliver audio between the application and miniaudio for playback or recording. When
in playback mode, as in this example, the application sends raw audio data to miniaudio which is then played back through
the default playback device as defined by the operating system.

This example uses the `ma_decoder` API to load a sound and play it back. The decoder is entirely decoupled from the
device and can be used independently of it. This example only plays back a single sound file, but it's possible to play
back multiple files by simply loading multiple decoders and mixing them (do not create multiple devices to do this). See
the simple_mixing example for how best to do this.
*/
#include "../miniaudio.c"

#include <stdio.h>

void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    ma_decoder* pDecoder = (ma_decoder*)pDevice->pUserData;
    if (pDecoder == NULL) {
        return;
    }

    ma_decoder_read_pcm_frames(pDecoder, pOutput, frameCount, NULL);

    (void)pInput;
}

int main(int argc, char** argv)
{
    ma_result result;
    ma_decoder decoder;
    ma_device_config deviceConfig;
    ma_device device;

    if (argc < 2) {
        printf("No input file.\n");
        return -1;
    }

    result = ma_decoder_init_file(argv[1], NULL, &decoder);
    if (result != MA_SUCCESS) {
        printf("Could not load file: %s\n", argv[1]);
        return -2;
    }

    deviceConfig = ma_device_config_init(ma_device_type_playback);
    deviceConfig.playback.format   = decoder.outputFormat;
    deviceConfig.playback.channels = decoder.outputChannels;
    deviceConfig.sampleRate        = decoder.outputSampleRate;
    deviceConfig.dataCallback      = data_callback;
    deviceConfig.pUserData         = &decoder;

    if (ma_device_init(NULL, &deviceConfig, &device) != MA_SUCCESS) {
        printf("Failed to open playback device.\n");
        ma_decoder_uninit(&decoder);
        return -3;
    }

    if (ma_device_start(&device) != MA_SUCCESS) {
        printf("Failed to start playback device.\n");
        ma_device_uninit(&device);
        ma_decoder_uninit(&decoder);
        return -4;
    }

    printf("Press Enter to quit...");
    getchar();

    ma_device_uninit(&device);
    ma_decoder_uninit(&decoder);

    return 0;
}
83
thirdparty/miniaudio-0.11.24/examples/simple_playback_sine.c
vendored
Normal file
@@ -0,0 +1,83 @@
/*
Demonstrates playback of a sine wave.

Since all this example is doing is playing back a sine wave, we can disable decoding (and encoding) which will slightly
reduce the size of the executable. This is done with the `MA_NO_DECODING` and `MA_NO_ENCODING` options.

The generation of the sine wave is achieved via the `ma_waveform` API. A waveform is a data source which means it can be
seamlessly plugged into the `ma_data_source_*()` family of APIs as well.

A waveform is initialized using the standard config/init pattern used throughout all of miniaudio. Frames are read via
the `ma_waveform_read_pcm_frames()` API.

This example works with Emscripten.
*/
#define MA_NO_DECODING
#define MA_NO_ENCODING
#include "../miniaudio.c"

#include <stdio.h>

#ifdef __EMSCRIPTEN__
#include <emscripten.h>

void main_loop__em()
{
}
#endif

#define DEVICE_FORMAT       ma_format_f32
#define DEVICE_CHANNELS     2
#define DEVICE_SAMPLE_RATE  48000

void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    ma_waveform_read_pcm_frames((ma_waveform*)pDevice->pUserData, pOutput, frameCount, NULL);

    (void)pInput;   /* Unused. */
}

int main(int argc, char** argv)
{
    ma_waveform sineWave;
    ma_device_config deviceConfig;
    ma_device device;
    ma_waveform_config sineWaveConfig;

    deviceConfig = ma_device_config_init(ma_device_type_playback);
    deviceConfig.playback.format   = DEVICE_FORMAT;
    deviceConfig.playback.channels = DEVICE_CHANNELS;
    deviceConfig.sampleRate        = DEVICE_SAMPLE_RATE;
    deviceConfig.dataCallback      = data_callback;
    deviceConfig.pUserData         = &sineWave;

    if (ma_device_init(NULL, &deviceConfig, &device) != MA_SUCCESS) {
        printf("Failed to open playback device.\n");
        return -4;
    }

    printf("Device Name: %s\n", device.playback.name);

    sineWaveConfig = ma_waveform_config_init(device.playback.format, device.playback.channels, device.sampleRate, ma_waveform_type_sine, 0.2, 220);
    ma_waveform_init(&sineWaveConfig, &sineWave);

    if (ma_device_start(&device) != MA_SUCCESS) {
        printf("Failed to start playback device.\n");
        ma_device_uninit(&device);
        return -5;
    }

#ifdef __EMSCRIPTEN__
    emscripten_set_main_loop(main_loop__em, 0, 1);
#else
    printf("Press Enter to quit...\n");
    getchar();
#endif

    ma_device_uninit(&device);
    ma_waveform_uninit(&sineWave);  /* Uninitialize the waveform after the device so we don't pull it out from under the device while it's being referenced in the data callback. */

    (void)argc;
    (void)argv;
    return 0;
}
85
thirdparty/miniaudio-0.11.24/examples/simple_spatialization.c
vendored
Normal file
@@ -0,0 +1,85 @@
/*
Demonstrates how to do basic spatialization via the high level API.

You can position and orientate sounds to create a simple spatialization effect. This example shows
how to do this.

In addition to positioning sounds, there is the concept of a listener. This can also be positioned
and orientated to help with spatialization.

This example only covers the basics to get you started. See the documentation for more detailed
information on the available features.

To use this example, pass in the path of a sound as the first argument. The sound will be
positioned in front of the listener, while the listener rotates on the spot to create an
orbiting effect. Terminate the program with Ctrl+C.
*/
#include "../miniaudio.c"

#include <stdio.h>
#include <math.h>   /* For sinf() and cosf() */

/* Silence warning about unreachable code for MSVC. */
#ifdef _MSC_VER
#pragma warning(push)
#pragma warning(disable: 4702)
#endif

int main(int argc, char** argv)
{
    ma_result result;
    ma_engine engine;
    ma_sound sound;
    float listenerAngle = 0;

    if (argc < 2) {
        printf("No input file.\n");
        return -1;
    }

    result = ma_engine_init(NULL, &engine);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize engine.\n");
        return -1;
    }

    result = ma_sound_init_from_file(&engine, argv[1], 0, NULL, NULL, &sound);
    if (result != MA_SUCCESS) {
        printf("Failed to load sound: %s\n", argv[1]);
        ma_engine_uninit(&engine);
        return -1;
    }

    /* This sets the position of the sound. miniaudio follows the same coordinate system as OpenGL, where -Z is forward. */
    ma_sound_set_position(&sound, 0, 0, -1);

    /*
    This sets the position of the listener. The second parameter is the listener index. If you have only a single listener, which is
    most likely, just use 0. The position defaults to (0,0,0).
    */
    ma_engine_listener_set_position(&engine, 0, 0, 0, 0);


    /* Sounds are stopped by default. We'll start it once the initial parameters have been set up. */
    ma_sound_start(&sound);


    /* Rotate the listener on the spot to create an orbiting effect. */
    for (;;) {
        listenerAngle += 0.01f;
        ma_engine_listener_set_direction(&engine, 0, (float)sin(listenerAngle), 0, (float)cos(listenerAngle));

        ma_sleep(1);
    }


    /* Won't actually get here, but do this to tear down. */
    ma_sound_uninit(&sound);
    ma_engine_uninit(&engine);

    return 0;
}

#ifdef _MSC_VER
#pragma warning(pop)
#endif
10936
thirdparty/miniaudio-0.11.24/external/fs/fs.c
vendored
Normal file
3757
thirdparty/miniaudio-0.11.24/external/fs/fs.h
vendored
Normal file
536
thirdparty/miniaudio-0.11.24/extras/decoders/libopus/miniaudio_libopus.c
vendored
Normal file
@@ -0,0 +1,536 @@
#ifndef miniaudio_libopus_c
#define miniaudio_libopus_c

#include "miniaudio_libopus.h"

#if !defined(MA_NO_LIBOPUS)
#include <opusfile.h>
#endif

#include <string.h> /* For memset(). */
#include <assert.h>

static ma_result ma_libopus_ds_read(ma_data_source* pDataSource, void* pFramesOut, ma_uint64 frameCount, ma_uint64* pFramesRead)
{
    return ma_libopus_read_pcm_frames((ma_libopus*)pDataSource, pFramesOut, frameCount, pFramesRead);
}

static ma_result ma_libopus_ds_seek(ma_data_source* pDataSource, ma_uint64 frameIndex)
{
    return ma_libopus_seek_to_pcm_frame((ma_libopus*)pDataSource, frameIndex);
}

static ma_result ma_libopus_ds_get_data_format(ma_data_source* pDataSource, ma_format* pFormat, ma_uint32* pChannels, ma_uint32* pSampleRate, ma_channel* pChannelMap, size_t channelMapCap)
{
    return ma_libopus_get_data_format((ma_libopus*)pDataSource, pFormat, pChannels, pSampleRate, pChannelMap, channelMapCap);
}

static ma_result ma_libopus_ds_get_cursor(ma_data_source* pDataSource, ma_uint64* pCursor)
{
    return ma_libopus_get_cursor_in_pcm_frames((ma_libopus*)pDataSource, pCursor);
}

static ma_result ma_libopus_ds_get_length(ma_data_source* pDataSource, ma_uint64* pLength)
{
    return ma_libopus_get_length_in_pcm_frames((ma_libopus*)pDataSource, pLength);
}

static ma_data_source_vtable g_ma_libopus_ds_vtable =
{
    ma_libopus_ds_read,
    ma_libopus_ds_seek,
    ma_libopus_ds_get_data_format,
    ma_libopus_ds_get_cursor,
    ma_libopus_ds_get_length,
    NULL,   /* onSetLooping */
    0       /* flags */
};


#if !defined(MA_NO_LIBOPUS)
static int ma_libopus_of_callback__read(void* pUserData, unsigned char* pBufferOut, int bytesToRead)
{
    ma_libopus* pOpus = (ma_libopus*)pUserData;
    ma_result result;
    size_t bytesRead;

    result = pOpus->onRead(pOpus->pReadSeekTellUserData, (void*)pBufferOut, bytesToRead, &bytesRead);

    if (result != MA_SUCCESS) {
        return -1;
    }

    return (int)bytesRead;
}

static int ma_libopus_of_callback__seek(void* pUserData, ogg_int64_t offset, int whence)
{
    ma_libopus* pOpus = (ma_libopus*)pUserData;
    ma_result result;
    ma_seek_origin origin;

    if (whence == SEEK_SET) {
        origin = ma_seek_origin_start;
    } else if (whence == SEEK_END) {
        origin = ma_seek_origin_end;
    } else {
        origin = ma_seek_origin_current;
    }

    result = pOpus->onSeek(pOpus->pReadSeekTellUserData, offset, origin);
    if (result != MA_SUCCESS) {
        return -1;
    }

    return 0;
}

static opus_int64 ma_libopus_of_callback__tell(void* pUserData)
{
    ma_libopus* pOpus = (ma_libopus*)pUserData;
    ma_result result;
    ma_int64 cursor;

    if (pOpus->onTell == NULL) {
        return -1;
    }

    result = pOpus->onTell(pOpus->pReadSeekTellUserData, &cursor);
    if (result != MA_SUCCESS) {
        return -1;
    }

    return cursor;
}
#endif

static ma_result ma_libopus_init_internal(const ma_decoding_backend_config* pConfig, ma_libopus* pOpus)
{
    ma_result result;
    ma_data_source_config dataSourceConfig;

    if (pOpus == NULL) {
        return MA_INVALID_ARGS;
    }

    memset(pOpus, 0, sizeof(*pOpus));
    pOpus->format = ma_format_f32;  /* f32 by default. */

    if (pConfig != NULL && (pConfig->preferredFormat == ma_format_f32 || pConfig->preferredFormat == ma_format_s16)) {
        pOpus->format = pConfig->preferredFormat;
    } else {
        /* Getting here means something other than f32 and s16 was specified. Just leave this unset to use the default format. */
    }

    dataSourceConfig = ma_data_source_config_init();
    dataSourceConfig.vtable = &g_ma_libopus_ds_vtable;

    result = ma_data_source_init(&dataSourceConfig, &pOpus->ds);
    if (result != MA_SUCCESS) {
        return result;  /* Failed to initialize the base data source. */
    }

    return MA_SUCCESS;
}

MA_API ma_result ma_libopus_init(ma_read_proc onRead, ma_seek_proc onSeek, ma_tell_proc onTell, void* pReadSeekTellUserData, const ma_decoding_backend_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_libopus* pOpus)
{
    ma_result result;

    (void)pAllocationCallbacks; /* Can't seem to find a way to configure memory allocations in libopus. */

    result = ma_libopus_init_internal(pConfig, pOpus);
    if (result != MA_SUCCESS) {
        return result;
    }

    if (onRead == NULL || onSeek == NULL) {
        return MA_INVALID_ARGS; /* onRead and onSeek are mandatory. */
    }

    pOpus->onRead = onRead;
    pOpus->onSeek = onSeek;
    pOpus->onTell = onTell;
    pOpus->pReadSeekTellUserData = pReadSeekTellUserData;

    #if !defined(MA_NO_LIBOPUS)
    {
        int libopusResult;
        OpusFileCallbacks libopusCallbacks;

        /* We can now initialize the Opus decoder. This must be done after we've set up the callbacks. */
        libopusCallbacks.read  = ma_libopus_of_callback__read;
        libopusCallbacks.seek  = ma_libopus_of_callback__seek;
        libopusCallbacks.close = NULL;
        libopusCallbacks.tell  = ma_libopus_of_callback__tell;

        pOpus->of = op_open_callbacks(pOpus, &libopusCallbacks, NULL, 0, &libopusResult);
        if (pOpus->of == NULL) {
            return MA_INVALID_FILE;
        }

        return MA_SUCCESS;
    }
    #else
    {
        /* libopus is disabled. */
        return MA_NOT_IMPLEMENTED;
    }
    #endif
}

MA_API ma_result ma_libopus_init_file(const char* pFilePath, const ma_decoding_backend_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_libopus* pOpus)
{
    ma_result result;

    (void)pAllocationCallbacks; /* Can't seem to find a way to configure memory allocations in libopus. */

    result = ma_libopus_init_internal(pConfig, pOpus);
    if (result != MA_SUCCESS) {
        return result;
    }

    #if !defined(MA_NO_LIBOPUS)
    {
        int libopusResult;

        pOpus->of = op_open_file(pFilePath, &libopusResult);
        if (pOpus->of == NULL) {
            return MA_INVALID_FILE;
        }

        return MA_SUCCESS;
    }
    #else
    {
        /* libopus is disabled. */
        (void)pFilePath;
        return MA_NOT_IMPLEMENTED;
    }
    #endif
}

MA_API void ma_libopus_uninit(ma_libopus* pOpus, const ma_allocation_callbacks* pAllocationCallbacks)
{
    if (pOpus == NULL) {
        return;
    }

    (void)pAllocationCallbacks;

    #if !defined(MA_NO_LIBOPUS)
    {
        op_free((OggOpusFile*)pOpus->of);
    }
    #else
    {
        /* libopus is disabled. Should never hit this since initialization would have failed. */
        assert(MA_FALSE);
    }
    #endif

    ma_data_source_uninit(&pOpus->ds);
}

MA_API ma_result ma_libopus_read_pcm_frames(ma_libopus* pOpus, void* pFramesOut, ma_uint64 frameCount, ma_uint64* pFramesRead)
{
    if (pFramesRead != NULL) {
        *pFramesRead = 0;
    }

    if (frameCount == 0) {
        return MA_INVALID_ARGS;
    }

    if (pOpus == NULL) {
        return MA_INVALID_ARGS;
    }

    #if !defined(MA_NO_LIBOPUS)
    {
        /* We always use floating point format. */
        ma_result result = MA_SUCCESS;  /* Must be initialized to MA_SUCCESS. */
        ma_uint64 totalFramesRead;
        ma_format format;
        ma_uint32 channels;

        ma_libopus_get_data_format(pOpus, &format, &channels, NULL, NULL, 0);

        totalFramesRead = 0;
        while (totalFramesRead < frameCount) {
            long libopusResult;
            ma_uint64 framesToRead;
            ma_uint64 framesRemaining;

            framesRemaining = (frameCount - totalFramesRead);
            framesToRead = 1024;
            if (framesToRead > framesRemaining) {
                framesToRead = framesRemaining;
            }

            if (format == ma_format_f32) {
                libopusResult = op_read_float((OggOpusFile*)pOpus->of, (float*     )ma_offset_pcm_frames_ptr(pFramesOut, totalFramesRead, format, channels), (int)(framesToRead * channels), NULL);
            } else {
                libopusResult = op_read      ((OggOpusFile*)pOpus->of, (opus_int16*)ma_offset_pcm_frames_ptr(pFramesOut, totalFramesRead, format, channels), (int)(framesToRead * channels), NULL);
            }

            if (libopusResult < 0) {
                result = MA_ERROR;  /* Error while decoding. */
                break;
            } else {
                totalFramesRead += libopusResult;

                if (libopusResult == 0) {
                    result = MA_AT_END;
                    break;
                }
            }
        }

        if (pFramesRead != NULL) {
            *pFramesRead = totalFramesRead;
        }

        if (result == MA_SUCCESS && totalFramesRead == 0) {
            result = MA_AT_END;
        }

        return result;
    }
    #else
    {
        /* libopus is disabled. Should never hit this since initialization would have failed. */
        assert(MA_FALSE);

        (void)pFramesOut;
        (void)frameCount;
        (void)pFramesRead;

        return MA_NOT_IMPLEMENTED;
    }
    #endif
}

MA_API ma_result ma_libopus_seek_to_pcm_frame(ma_libopus* pOpus, ma_uint64 frameIndex)
{
    if (pOpus == NULL) {
        return MA_INVALID_ARGS;
    }

    #if !defined(MA_NO_LIBOPUS)
    {
        int libopusResult = op_pcm_seek((OggOpusFile*)pOpus->of, (ogg_int64_t)frameIndex);
        if (libopusResult != 0) {
            if (libopusResult == OP_ENOSEEK) {
                return MA_INVALID_OPERATION;    /* Not seekable. */
            } else if (libopusResult == OP_EINVAL) {
                return MA_INVALID_ARGS;
            } else {
                return MA_ERROR;
            }
        }

        return MA_SUCCESS;
    }
    #else
    {
        /* libopus is disabled. Should never hit this since initialization would have failed. */
        assert(MA_FALSE);

        (void)frameIndex;

        return MA_NOT_IMPLEMENTED;
    }
    #endif
}

MA_API ma_result ma_libopus_get_data_format(ma_libopus* pOpus, ma_format* pFormat, ma_uint32* pChannels, ma_uint32* pSampleRate, ma_channel* pChannelMap, size_t channelMapCap)
{
    /* Defaults for safety. */
    if (pFormat != NULL) {
        *pFormat = ma_format_unknown;
    }
    if (pChannels != NULL) {
        *pChannels = 0;
    }
    if (pSampleRate != NULL) {
        *pSampleRate = 0;
    }
    if (pChannelMap != NULL) {
        memset(pChannelMap, 0, sizeof(*pChannelMap) * channelMapCap);
    }

    if (pOpus == NULL) {
        return MA_INVALID_OPERATION;
    }

    if (pFormat != NULL) {
        *pFormat = pOpus->format;
    }

    #if !defined(MA_NO_LIBOPUS)
    {
        ma_uint32 channels = op_channel_count((OggOpusFile*)pOpus->of, -1);

        if (pChannels != NULL) {
            *pChannels = channels;
        }

        if (pSampleRate != NULL) {
            *pSampleRate = 48000;
        }

        if (pChannelMap != NULL) {
            ma_channel_map_init_standard(ma_standard_channel_map_vorbis, pChannelMap, channelMapCap, channels);
        }

        return MA_SUCCESS;
    }
    #else
    {
        /* libopus is disabled. Should never hit this since initialization would have failed. */
        assert(MA_FALSE);
        return MA_NOT_IMPLEMENTED;
    }
    #endif
}

MA_API ma_result ma_libopus_get_cursor_in_pcm_frames(ma_libopus* pOpus, ma_uint64* pCursor)
{
    if (pCursor == NULL) {
        return MA_INVALID_ARGS;
    }

    *pCursor = 0;   /* Safety. */

    if (pOpus == NULL) {
        return MA_INVALID_ARGS;
    }

    #if !defined(MA_NO_LIBOPUS)
    {
        ogg_int64_t offset = op_pcm_tell((OggOpusFile*)pOpus->of);
        if (offset < 0) {
            return MA_INVALID_FILE;
        }

        *pCursor = (ma_uint64)offset;

        return MA_SUCCESS;
    }
    #else
    {
        /* libopus is disabled. Should never hit this since initialization would have failed. */
        assert(MA_FALSE);
        return MA_NOT_IMPLEMENTED;
    }
    #endif
}

MA_API ma_result ma_libopus_get_length_in_pcm_frames(ma_libopus* pOpus, ma_uint64* pLength)
{
    if (pLength == NULL) {
        return MA_INVALID_ARGS;
    }

    *pLength = 0;   /* Safety. */

    if (pOpus == NULL) {
        return MA_INVALID_ARGS;
    }

    #if !defined(MA_NO_LIBOPUS)
    {
        ogg_int64_t length = op_pcm_total((OggOpusFile*)pOpus->of, -1);
        if (length < 0) {
            return MA_ERROR;
        }

        *pLength = (ma_uint64)length;

        return MA_SUCCESS;
    }
    #else
    {
        /* libopus is disabled. Should never hit this since initialization would have failed. */
        assert(MA_FALSE);
        return MA_NOT_IMPLEMENTED;
    }
    #endif
}


/*
The code below defines the vtable that you'll plug into your `ma_decoder_config` object.
*/
#if !defined(MA_NO_LIBOPUS)
static ma_result ma_decoding_backend_init__libopus(void* pUserData, ma_read_proc onRead, ma_seek_proc onSeek, ma_tell_proc onTell, void* pReadSeekTellUserData, const ma_decoding_backend_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_data_source** ppBackend)
{
    ma_result result;
    ma_libopus* pOpus;

    (void)pUserData;

    pOpus = (ma_libopus*)ma_malloc(sizeof(*pOpus), pAllocationCallbacks);
    if (pOpus == NULL) {
        return MA_OUT_OF_MEMORY;
    }

    result = ma_libopus_init(onRead, onSeek, onTell, pReadSeekTellUserData, pConfig, pAllocationCallbacks, pOpus);
    if (result != MA_SUCCESS) {
        ma_free(pOpus, pAllocationCallbacks);
        return result;
    }

    *ppBackend = pOpus;

    return MA_SUCCESS;
}

static ma_result ma_decoding_backend_init_file__libopus(void* pUserData, const char* pFilePath, const ma_decoding_backend_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_data_source** ppBackend)
{
    ma_result result;
    ma_libopus* pOpus;

    (void)pUserData;

    pOpus = (ma_libopus*)ma_malloc(sizeof(*pOpus), pAllocationCallbacks);
    if (pOpus == NULL) {
        return MA_OUT_OF_MEMORY;
    }

    result = ma_libopus_init_file(pFilePath, pConfig, pAllocationCallbacks, pOpus);
    if (result != MA_SUCCESS) {
        ma_free(pOpus, pAllocationCallbacks);
        return result;
    }

    *ppBackend = pOpus;

    return MA_SUCCESS;
}

static void ma_decoding_backend_uninit__libopus(void* pUserData, ma_data_source* pBackend, const ma_allocation_callbacks* pAllocationCallbacks)
{
    ma_libopus* pOpus = (ma_libopus*)pBackend;

    (void)pUserData;

    ma_libopus_uninit(pOpus, pAllocationCallbacks);
    ma_free(pOpus, pAllocationCallbacks);
}

static ma_decoding_backend_vtable ma_gDecodingBackendVTable_libopus =
{
    ma_decoding_backend_init__libopus,
    ma_decoding_backend_init_file__libopus,
    NULL,   /* onInitFileW() */
    NULL,   /* onInitMemory() */
    ma_decoding_backend_uninit__libopus
};
ma_decoding_backend_vtable* ma_decoding_backend_libopus = &ma_gDecodingBackendVTable_libopus;
#else
ma_decoding_backend_vtable* ma_decoding_backend_libopus = NULL;
#endif

#endif  /* miniaudio_libopus_c */
43
thirdparty/miniaudio-0.11.24/extras/decoders/libopus/miniaudio_libopus.h
vendored
Normal file
@@ -0,0 +1,43 @@
/*
This implements a data source that decodes Opus streams via libopus + libopusfile.

This object can be plugged into any `ma_data_source_*()` API and can also be used as a custom
decoding backend. See the custom_decoder example.
*/
#ifndef miniaudio_libopus_h
#define miniaudio_libopus_h

#ifdef __cplusplus
extern "C" {
#endif

#include "../../../miniaudio.h"

typedef struct
{
    ma_data_source_base ds;     /* The libopus decoder can be used independently as a data source. */
    ma_read_proc onRead;
    ma_seek_proc onSeek;
    ma_tell_proc onTell;
    void* pReadSeekTellUserData;
    ma_format format;           /* Will be either f32 or s16. */
    /*OggOpusFile**/ void* of;  /* Typed as void* so we can avoid a dependency on opusfile in the header section. */
} ma_libopus;

MA_API ma_result ma_libopus_init(ma_read_proc onRead, ma_seek_proc onSeek, ma_tell_proc onTell, void* pReadSeekTellUserData, const ma_decoding_backend_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_libopus* pOpus);
MA_API ma_result ma_libopus_init_file(const char* pFilePath, const ma_decoding_backend_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_libopus* pOpus);
MA_API void ma_libopus_uninit(ma_libopus* pOpus, const ma_allocation_callbacks* pAllocationCallbacks);
MA_API ma_result ma_libopus_read_pcm_frames(ma_libopus* pOpus, void* pFramesOut, ma_uint64 frameCount, ma_uint64* pFramesRead);
MA_API ma_result ma_libopus_seek_to_pcm_frame(ma_libopus* pOpus, ma_uint64 frameIndex);
MA_API ma_result ma_libopus_get_data_format(ma_libopus* pOpus, ma_format* pFormat, ma_uint32* pChannels, ma_uint32* pSampleRate, ma_channel* pChannelMap, size_t channelMapCap);
MA_API ma_result ma_libopus_get_cursor_in_pcm_frames(ma_libopus* pOpus, ma_uint64* pCursor);
MA_API ma_result ma_libopus_get_length_in_pcm_frames(ma_libopus* pOpus, ma_uint64* pLength);

/* Decoding backend vtable. This is what you'll plug into ma_decoder_config.pBackendVTables. No user data required. */
extern ma_decoding_backend_vtable* ma_decoding_backend_libopus;

#ifdef __cplusplus
}
#endif
#endif  /* miniaudio_libopus_h */
591
thirdparty/miniaudio-0.11.24/extras/decoders/libvorbis/miniaudio_libvorbis.c
vendored
Normal file
@@ -0,0 +1,591 @@
#ifndef miniaudio_libvorbis_c
#define miniaudio_libvorbis_c

#include "miniaudio_libvorbis.h"

#if !defined(MA_NO_LIBVORBIS)
#ifndef OV_EXCLUDE_STATIC_CALLBACKS
#define OV_EXCLUDE_STATIC_CALLBACKS
#endif
#include <vorbis/vorbisfile.h>
#endif

#include <string.h> /* For memset(). */
#include <assert.h>

static ma_result ma_libvorbis_ds_read(ma_data_source* pDataSource, void* pFramesOut, ma_uint64 frameCount, ma_uint64* pFramesRead)
{
    return ma_libvorbis_read_pcm_frames((ma_libvorbis*)pDataSource, pFramesOut, frameCount, pFramesRead);
}

static ma_result ma_libvorbis_ds_seek(ma_data_source* pDataSource, ma_uint64 frameIndex)
{
    return ma_libvorbis_seek_to_pcm_frame((ma_libvorbis*)pDataSource, frameIndex);
}

static ma_result ma_libvorbis_ds_get_data_format(ma_data_source* pDataSource, ma_format* pFormat, ma_uint32* pChannels, ma_uint32* pSampleRate, ma_channel* pChannelMap, size_t channelMapCap)
{
    return ma_libvorbis_get_data_format((ma_libvorbis*)pDataSource, pFormat, pChannels, pSampleRate, pChannelMap, channelMapCap);
}

static ma_result ma_libvorbis_ds_get_cursor(ma_data_source* pDataSource, ma_uint64* pCursor)
{
    return ma_libvorbis_get_cursor_in_pcm_frames((ma_libvorbis*)pDataSource, pCursor);
}

static ma_result ma_libvorbis_ds_get_length(ma_data_source* pDataSource, ma_uint64* pLength)
{
    return ma_libvorbis_get_length_in_pcm_frames((ma_libvorbis*)pDataSource, pLength);
}

static ma_data_source_vtable g_ma_libvorbis_ds_vtable =
{
    ma_libvorbis_ds_read,
    ma_libvorbis_ds_seek,
    ma_libvorbis_ds_get_data_format,
    ma_libvorbis_ds_get_cursor,
    ma_libvorbis_ds_get_length,
    NULL, /* onSetLooping */
    0     /* flags */
};


#if !defined(MA_NO_LIBVORBIS)
static size_t ma_libvorbis_vf_callback__read(void* pBufferOut, size_t size, size_t count, void* pUserData)
{
    ma_libvorbis* pVorbis = (ma_libvorbis*)pUserData;
    ma_result result;
    size_t bytesToRead;
    size_t bytesRead;

    /* For consistency with fread(). If `size` or `count` is 0, return 0 immediately without changing anything. */
    if (size == 0 || count == 0) {
        return 0;
    }

    bytesToRead = size * count;
    result = pVorbis->onRead(pVorbis->pReadSeekTellUserData, pBufferOut, bytesToRead, &bytesRead);
    if (result != MA_SUCCESS) {
        /* Not entirely sure what to return here. What if an error occurs, but some data was read and bytesRead is > 0? */
        return 0;
    }

    return bytesRead / size;
}

static int ma_libvorbis_vf_callback__seek(void* pUserData, ogg_int64_t offset, int whence)
{
    ma_libvorbis* pVorbis = (ma_libvorbis*)pUserData;
    ma_result result;
    ma_seek_origin origin;

    if (whence == SEEK_SET) {
        origin = ma_seek_origin_start;
    } else if (whence == SEEK_END) {
        origin = ma_seek_origin_end;
    } else {
        origin = ma_seek_origin_current;
    }

    result = pVorbis->onSeek(pVorbis->pReadSeekTellUserData, offset, origin);
    if (result != MA_SUCCESS) {
        return -1;
    }

    return 0;
}

static long ma_libvorbis_vf_callback__tell(void* pUserData)
{
    ma_libvorbis* pVorbis = (ma_libvorbis*)pUserData;
    ma_result result;
    ma_int64 cursor;

    result = pVorbis->onTell(pVorbis->pReadSeekTellUserData, &cursor);
    if (result != MA_SUCCESS) {
        return -1;
    }

    return (long)cursor;
}
#endif
static ma_result ma_libvorbis_init_internal(const ma_decoding_backend_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_libvorbis* pVorbis)
{
    if (pVorbis == NULL) {
        return MA_INVALID_ARGS;
    }

    memset(pVorbis, 0, sizeof(*pVorbis));
    pVorbis->format = ma_format_f32; /* f32 by default. */

    if (pConfig != NULL && (pConfig->preferredFormat == ma_format_f32 || pConfig->preferredFormat == ma_format_s16)) {
        pVorbis->format = pConfig->preferredFormat;
    } else {
        /* Getting here means something other than f32 and s16 was specified. Just leave this unset to use the default format. */
    }

    #if !defined(MA_NO_LIBVORBIS)
    {
        ma_result result;
        ma_data_source_config dataSourceConfig;

        dataSourceConfig = ma_data_source_config_init();
        dataSourceConfig.vtable = &g_ma_libvorbis_ds_vtable;

        result = ma_data_source_init(&dataSourceConfig, &pVorbis->ds);
        if (result != MA_SUCCESS) {
            return result; /* Failed to initialize the base data source. */
        }

        pVorbis->vf = (OggVorbis_File*)ma_malloc(sizeof(OggVorbis_File), pAllocationCallbacks);
        if (pVorbis->vf == NULL) {
            ma_data_source_uninit(&pVorbis->ds);
            return MA_OUT_OF_MEMORY;
        }

        return MA_SUCCESS;
    }
    #else
    {
        /* libvorbis is disabled. */
        (void)pAllocationCallbacks;
        return MA_NOT_IMPLEMENTED;
    }
    #endif
}

MA_API ma_result ma_libvorbis_init(ma_read_proc onRead, ma_seek_proc onSeek, ma_tell_proc onTell, void* pReadSeekTellUserData, const ma_decoding_backend_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_libvorbis* pVorbis)
{
    ma_result result;

    if (onRead == NULL || onSeek == NULL) {
        return MA_INVALID_ARGS; /* onRead and onSeek are mandatory. */
    }

    result = ma_libvorbis_init_internal(pConfig, pAllocationCallbacks, pVorbis);
    if (result != MA_SUCCESS) {
        return result;
    }

    pVorbis->onRead = onRead;
    pVorbis->onSeek = onSeek;
    pVorbis->onTell = onTell;
    pVorbis->pReadSeekTellUserData = pReadSeekTellUserData;

    #if !defined(MA_NO_LIBVORBIS)
    {
        int libvorbisResult;
        ov_callbacks libvorbisCallbacks;

        /* We can now initialize the vorbis decoder. This must be done after we've set up the callbacks. */
        libvorbisCallbacks.read_func  = ma_libvorbis_vf_callback__read;
        libvorbisCallbacks.seek_func  = ma_libvorbis_vf_callback__seek;
        libvorbisCallbacks.close_func = NULL;
        libvorbisCallbacks.tell_func  = ma_libvorbis_vf_callback__tell;

        libvorbisResult = ov_open_callbacks(pVorbis, (OggVorbis_File*)pVorbis->vf, NULL, 0, libvorbisCallbacks);
        if (libvorbisResult < 0) {
            ma_data_source_uninit(&pVorbis->ds);
            ma_free(pVorbis->vf, pAllocationCallbacks);
            return MA_INVALID_FILE;
        }

        return MA_SUCCESS;
    }
    #else
    {
        /* libvorbis is disabled. */
        return MA_NOT_IMPLEMENTED;
    }
    #endif
}

MA_API ma_result ma_libvorbis_init_file(const char* pFilePath, const ma_decoding_backend_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_libvorbis* pVorbis)
{
    ma_result result;

    result = ma_libvorbis_init_internal(pConfig, pAllocationCallbacks, pVorbis);
    if (result != MA_SUCCESS) {
        return result;
    }

    #if !defined(MA_NO_LIBVORBIS)
    {
        int libvorbisResult;

        libvorbisResult = ov_fopen(pFilePath, (OggVorbis_File*)pVorbis->vf);
        if (libvorbisResult < 0) {
            ma_data_source_uninit(&pVorbis->ds);
            ma_free(pVorbis->vf, pAllocationCallbacks);
            return MA_INVALID_FILE;
        }

        return MA_SUCCESS;
    }
    #else
    {
        /* libvorbis is disabled. */
        (void)pFilePath;
        return MA_NOT_IMPLEMENTED;
    }
    #endif
}

MA_API void ma_libvorbis_uninit(ma_libvorbis* pVorbis, const ma_allocation_callbacks* pAllocationCallbacks)
{
    if (pVorbis == NULL) {
        return;
    }

    #if !defined(MA_NO_LIBVORBIS)
    {
        ov_clear((OggVorbis_File*)pVorbis->vf);
    }
    #else
    {
        /* libvorbis is disabled. Should never hit this since initialization would have failed. */
        assert(MA_FALSE);
    }
    #endif

    ma_data_source_uninit(&pVorbis->ds);
    ma_free(pVorbis->vf, pAllocationCallbacks);
}
MA_API ma_result ma_libvorbis_read_pcm_frames(ma_libvorbis* pVorbis, void* pFramesOut, ma_uint64 frameCount, ma_uint64* pFramesRead)
{
    if (pFramesRead != NULL) {
        *pFramesRead = 0;
    }

    if (frameCount == 0) {
        return MA_INVALID_ARGS;
    }

    if (pVorbis == NULL) {
        return MA_INVALID_ARGS;
    }

    #if !defined(MA_NO_LIBVORBIS)
    {
        ma_result result = MA_SUCCESS; /* Must be initialized to MA_SUCCESS. */
        ma_uint64 totalFramesRead;
        ma_format format;
        ma_uint32 channels;

        ma_libvorbis_get_data_format(pVorbis, &format, &channels, NULL, NULL, 0);

        totalFramesRead = 0;
        while (totalFramesRead < frameCount) {
            long libvorbisResult;
            ma_uint64 framesToRead;
            ma_uint64 framesRemaining;

            framesRemaining = (frameCount - totalFramesRead);
            framesToRead = 1024;
            if (framesToRead > framesRemaining) {
                framesToRead = framesRemaining;
            }

            if (format == ma_format_f32) {
                float** ppFramesF32;

                libvorbisResult = ov_read_float((OggVorbis_File*)pVorbis->vf, &ppFramesF32, (int)framesToRead, NULL);
                if (libvorbisResult < 0) {
                    result = MA_ERROR; /* Error while decoding. */
                    break;
                } else {
                    /* Frames need to be interleaved. */
                    ma_interleave_pcm_frames(format, channels, libvorbisResult, (const void**)ppFramesF32, ma_offset_pcm_frames_ptr(pFramesOut, totalFramesRead, format, channels));
                    totalFramesRead += libvorbisResult;

                    if (libvorbisResult == 0) {
                        result = MA_AT_END;
                        break;
                    }
                }
            } else {
                libvorbisResult = ov_read((OggVorbis_File*)pVorbis->vf, (char*)ma_offset_pcm_frames_ptr(pFramesOut, totalFramesRead, format, channels), (int)(framesToRead * ma_get_bytes_per_frame(format, channels)), 0, 2, 1, NULL);
                if (libvorbisResult < 0) {
                    result = MA_ERROR; /* Error while decoding. */
                    break;
                } else {
                    /* Conveniently, there's no need for interleaving when using ov_read(). I'm not sure why ov_read_float() is different in that regard... */
                    totalFramesRead += libvorbisResult / ma_get_bytes_per_frame(format, channels);

                    if (libvorbisResult == 0) {
                        result = MA_AT_END;
                        break;
                    }
                }
            }
        }

        if (pFramesRead != NULL) {
            *pFramesRead = totalFramesRead;
        }

        if (result == MA_SUCCESS && totalFramesRead == 0) {
            result = MA_AT_END;
        }

        return result;
    }
    #else
    {
        /* libvorbis is disabled. Should never hit this since initialization would have failed. */
        assert(MA_FALSE);

        (void)pFramesOut;
        (void)frameCount;
        (void)pFramesRead;

        return MA_NOT_IMPLEMENTED;
    }
    #endif
}

MA_API ma_result ma_libvorbis_seek_to_pcm_frame(ma_libvorbis* pVorbis, ma_uint64 frameIndex)
{
    if (pVorbis == NULL) {
        return MA_INVALID_ARGS;
    }

    #if !defined(MA_NO_LIBVORBIS)
    {
        int libvorbisResult = ov_pcm_seek((OggVorbis_File*)pVorbis->vf, (ogg_int64_t)frameIndex);
        if (libvorbisResult != 0) {
            if (libvorbisResult == OV_ENOSEEK) {
                return MA_INVALID_OPERATION; /* Not seekable. */
            } else if (libvorbisResult == OV_EINVAL) {
                return MA_INVALID_ARGS;
            } else {
                return MA_ERROR;
            }
        }

        return MA_SUCCESS;
    }
    #else
    {
        /* libvorbis is disabled. Should never hit this since initialization would have failed. */
        assert(MA_FALSE);

        (void)frameIndex;

        return MA_NOT_IMPLEMENTED;
    }
    #endif
}
MA_API ma_result ma_libvorbis_get_data_format(ma_libvorbis* pVorbis, ma_format* pFormat, ma_uint32* pChannels, ma_uint32* pSampleRate, ma_channel* pChannelMap, size_t channelMapCap)
{
    /* Defaults for safety. */
    if (pFormat != NULL) {
        *pFormat = ma_format_unknown;
    }
    if (pChannels != NULL) {
        *pChannels = 0;
    }
    if (pSampleRate != NULL) {
        *pSampleRate = 0;
    }
    if (pChannelMap != NULL) {
        memset(pChannelMap, 0, sizeof(*pChannelMap) * channelMapCap);
    }

    if (pVorbis == NULL) {
        return MA_INVALID_OPERATION;
    }

    if (pFormat != NULL) {
        *pFormat = pVorbis->format;
    }

    #if !defined(MA_NO_LIBVORBIS)
    {
        vorbis_info* pInfo = ov_info((OggVorbis_File*)pVorbis->vf, 0);
        if (pInfo == NULL) {
            return MA_INVALID_OPERATION;
        }

        if (pChannels != NULL) {
            *pChannels = pInfo->channels;
        }

        if (pSampleRate != NULL) {
            *pSampleRate = pInfo->rate;
        }

        if (pChannelMap != NULL) {
            ma_channel_map_init_standard(ma_standard_channel_map_vorbis, pChannelMap, channelMapCap, pInfo->channels);
        }

        return MA_SUCCESS;
    }
    #else
    {
        /* libvorbis is disabled. Should never hit this since initialization would have failed. */
        assert(MA_FALSE);
        return MA_NOT_IMPLEMENTED;
    }
    #endif
}

MA_API ma_result ma_libvorbis_get_cursor_in_pcm_frames(ma_libvorbis* pVorbis, ma_uint64* pCursor)
{
    if (pCursor == NULL) {
        return MA_INVALID_ARGS;
    }

    *pCursor = 0; /* Safety. */

    if (pVorbis == NULL) {
        return MA_INVALID_ARGS;
    }

    #if !defined(MA_NO_LIBVORBIS)
    {
        ogg_int64_t offset = ov_pcm_tell((OggVorbis_File*)pVorbis->vf);
        if (offset < 0) {
            return MA_INVALID_FILE;
        }

        *pCursor = (ma_uint64)offset;

        return MA_SUCCESS;
    }
    #else
    {
        /* libvorbis is disabled. Should never hit this since initialization would have failed. */
        assert(MA_FALSE);
        return MA_NOT_IMPLEMENTED;
    }
    #endif
}

MA_API ma_result ma_libvorbis_get_length_in_pcm_frames(ma_libvorbis* pVorbis, ma_uint64* pLength)
{
    if (pLength == NULL) {
        return MA_INVALID_ARGS;
    }

    *pLength = 0; /* Safety. */

    if (pVorbis == NULL) {
        return MA_INVALID_ARGS;
    }

    #if !defined(MA_NO_LIBVORBIS)
    {
        /*
        This will work in the supermajority of cases, where a file has a single logical bitstream. Concatenated
        streams are much harder to determine the length of since they can have sample rate changes, but they
        should be extremely rare outside of unseekable livestreams anyway.
        */
        if (ov_streams((OggVorbis_File*)pVorbis->vf) == 1) {
            ogg_int64_t length = ov_pcm_total((OggVorbis_File*)pVorbis->vf, 0);
            if (length != OV_EINVAL) {
                *pLength = (ma_uint64)length;
            } else {
                /* Unseekable. */
            }
        } else {
            /* Concatenated stream. */
        }

        return MA_SUCCESS;
    }
    #else
    {
        /* libvorbis is disabled. Should never hit this since initialization would have failed. */
        assert(MA_FALSE);
        return MA_NOT_IMPLEMENTED;
    }
    #endif
}
/*
The code below defines the vtable that you'll plug into your `ma_decoder_config` object.
*/
#if !defined(MA_NO_LIBVORBIS)
static ma_result ma_decoding_backend_init__libvorbis(void* pUserData, ma_read_proc onRead, ma_seek_proc onSeek, ma_tell_proc onTell, void* pReadSeekTellUserData, const ma_decoding_backend_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_data_source** ppBackend)
{
    ma_result result;
    ma_libvorbis* pVorbis;

    (void)pUserData;

    pVorbis = (ma_libvorbis*)ma_malloc(sizeof(*pVorbis), pAllocationCallbacks);
    if (pVorbis == NULL) {
        return MA_OUT_OF_MEMORY;
    }

    result = ma_libvorbis_init(onRead, onSeek, onTell, pReadSeekTellUserData, pConfig, pAllocationCallbacks, pVorbis);
    if (result != MA_SUCCESS) {
        ma_free(pVorbis, pAllocationCallbacks);
        return result;
    }

    *ppBackend = pVorbis;

    return MA_SUCCESS;
}

static ma_result ma_decoding_backend_init_file__libvorbis(void* pUserData, const char* pFilePath, const ma_decoding_backend_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_data_source** ppBackend)
{
    ma_result result;
    ma_libvorbis* pVorbis;

    (void)pUserData;

    pVorbis = (ma_libvorbis*)ma_malloc(sizeof(*pVorbis), pAllocationCallbacks);
    if (pVorbis == NULL) {
        return MA_OUT_OF_MEMORY;
    }

    result = ma_libvorbis_init_file(pFilePath, pConfig, pAllocationCallbacks, pVorbis);
    if (result != MA_SUCCESS) {
        ma_free(pVorbis, pAllocationCallbacks);
        return result;
    }

    *ppBackend = pVorbis;

    return MA_SUCCESS;
}

static void ma_decoding_backend_uninit__libvorbis(void* pUserData, ma_data_source* pBackend, const ma_allocation_callbacks* pAllocationCallbacks)
{
    ma_libvorbis* pVorbis = (ma_libvorbis*)pBackend;

    (void)pUserData;

    ma_libvorbis_uninit(pVorbis, pAllocationCallbacks);
    ma_free(pVorbis, pAllocationCallbacks);
}


static ma_decoding_backend_vtable ma_gDecodingBackendVTable_libvorbis =
{
    ma_decoding_backend_init__libvorbis,
    ma_decoding_backend_init_file__libvorbis,
    NULL, /* onInitFileW() */
    NULL, /* onInitMemory() */
    ma_decoding_backend_uninit__libvorbis
};
ma_decoding_backend_vtable* ma_decoding_backend_libvorbis = &ma_gDecodingBackendVTable_libvorbis;
#else
ma_decoding_backend_vtable* ma_decoding_backend_libvorbis = NULL;
#endif

#endif /* miniaudio_libvorbis_c */
42
thirdparty/miniaudio-0.11.24/extras/decoders/libvorbis/miniaudio_libvorbis.h
vendored
Normal file
@@ -0,0 +1,42 @@
/*
This implements a data source that decodes Vorbis streams via libvorbis + libvorbisfile.

This object can be plugged into any `ma_data_source_*()` API and can also be used as a custom
decoding backend. See the custom_decoder example.
*/
#ifndef miniaudio_libvorbis_h
#define miniaudio_libvorbis_h

#ifdef __cplusplus
extern "C" {
#endif

#include "../../../miniaudio.h"

typedef struct
{
    ma_data_source_base ds; /* The libvorbis decoder can be used independently as a data source. */
    ma_read_proc onRead;
    ma_seek_proc onSeek;
    ma_tell_proc onTell;
    void* pReadSeekTellUserData;
    ma_format format;       /* Will be either f32 or s16. */
    /*OggVorbis_File**/ void* vf; /* Typed as void* so we can avoid a dependency on libvorbis in the header section. */
} ma_libvorbis;

MA_API ma_result ma_libvorbis_init(ma_read_proc onRead, ma_seek_proc onSeek, ma_tell_proc onTell, void* pReadSeekTellUserData, const ma_decoding_backend_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_libvorbis* pVorbis);
MA_API ma_result ma_libvorbis_init_file(const char* pFilePath, const ma_decoding_backend_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_libvorbis* pVorbis);
MA_API void ma_libvorbis_uninit(ma_libvorbis* pVorbis, const ma_allocation_callbacks* pAllocationCallbacks);
MA_API ma_result ma_libvorbis_read_pcm_frames(ma_libvorbis* pVorbis, void* pFramesOut, ma_uint64 frameCount, ma_uint64* pFramesRead);
MA_API ma_result ma_libvorbis_seek_to_pcm_frame(ma_libvorbis* pVorbis, ma_uint64 frameIndex);
MA_API ma_result ma_libvorbis_get_data_format(ma_libvorbis* pVorbis, ma_format* pFormat, ma_uint32* pChannels, ma_uint32* pSampleRate, ma_channel* pChannelMap, size_t channelMapCap);
MA_API ma_result ma_libvorbis_get_cursor_in_pcm_frames(ma_libvorbis* pVorbis, ma_uint64* pCursor);
MA_API ma_result ma_libvorbis_get_length_in_pcm_frames(ma_libvorbis* pVorbis, ma_uint64* pLength);

/* Decoding backend vtable. This is what you'll plug into ma_decoder_config.pBackendVTables. No user data required. */
extern ma_decoding_backend_vtable* ma_decoding_backend_libvorbis;

#ifdef __cplusplus
}
#endif
#endif /* miniaudio_libvorbis_h */
498
thirdparty/miniaudio-0.11.24/extras/miniaudio_libopus.h
vendored
Normal file
@@ -0,0 +1,498 @@
/* THIS HAS BEEN DEPRECATED! Use the libopus decoder in extras/decoders/libopus instead. */

/*
This implements a data source that decodes Opus streams via libopus + libopusfile.

This object can be plugged into any `ma_data_source_*()` API and can also be used as a custom
decoding backend. See the custom_decoder example.

You need to include this file after miniaudio.h.
*/
#ifndef miniaudio_libopus_h
#define miniaudio_libopus_h

#ifdef __cplusplus
extern "C" {
#endif

#if !defined(MA_NO_LIBOPUS)
#include <opusfile.h>
#endif

typedef struct
{
    ma_data_source_base ds; /* The libopus decoder can be used independently as a data source. */
    ma_read_proc onRead;
    ma_seek_proc onSeek;
    ma_tell_proc onTell;
    void* pReadSeekTellUserData;
    ma_format format;       /* Will be either f32 or s16. */
#if !defined(MA_NO_LIBOPUS)
    OggOpusFile* of;
#endif
} ma_libopus;

MA_API ma_result ma_libopus_init(ma_read_proc onRead, ma_seek_proc onSeek, ma_tell_proc onTell, void* pReadSeekTellUserData, const ma_decoding_backend_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_libopus* pOpus);
MA_API ma_result ma_libopus_init_file(const char* pFilePath, const ma_decoding_backend_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_libopus* pOpus);
MA_API void ma_libopus_uninit(ma_libopus* pOpus, const ma_allocation_callbacks* pAllocationCallbacks);
MA_API ma_result ma_libopus_read_pcm_frames(ma_libopus* pOpus, void* pFramesOut, ma_uint64 frameCount, ma_uint64* pFramesRead);
MA_API ma_result ma_libopus_seek_to_pcm_frame(ma_libopus* pOpus, ma_uint64 frameIndex);
MA_API ma_result ma_libopus_get_data_format(ma_libopus* pOpus, ma_format* pFormat, ma_uint32* pChannels, ma_uint32* pSampleRate, ma_channel* pChannelMap, size_t channelMapCap);
MA_API ma_result ma_libopus_get_cursor_in_pcm_frames(ma_libopus* pOpus, ma_uint64* pCursor);
MA_API ma_result ma_libopus_get_length_in_pcm_frames(ma_libopus* pOpus, ma_uint64* pLength);

#ifdef __cplusplus
}
#endif
#endif

#if defined(MINIAUDIO_IMPLEMENTATION) || defined(MA_IMPLEMENTATION)

static ma_result ma_libopus_ds_read(ma_data_source* pDataSource, void* pFramesOut, ma_uint64 frameCount, ma_uint64* pFramesRead)
{
    return ma_libopus_read_pcm_frames((ma_libopus*)pDataSource, pFramesOut, frameCount, pFramesRead);
}

static ma_result ma_libopus_ds_seek(ma_data_source* pDataSource, ma_uint64 frameIndex)
{
    return ma_libopus_seek_to_pcm_frame((ma_libopus*)pDataSource, frameIndex);
}

static ma_result ma_libopus_ds_get_data_format(ma_data_source* pDataSource, ma_format* pFormat, ma_uint32* pChannels, ma_uint32* pSampleRate, ma_channel* pChannelMap, size_t channelMapCap)
{
    return ma_libopus_get_data_format((ma_libopus*)pDataSource, pFormat, pChannels, pSampleRate, pChannelMap, channelMapCap);
}

static ma_result ma_libopus_ds_get_cursor(ma_data_source* pDataSource, ma_uint64* pCursor)
{
    return ma_libopus_get_cursor_in_pcm_frames((ma_libopus*)pDataSource, pCursor);
}

static ma_result ma_libopus_ds_get_length(ma_data_source* pDataSource, ma_uint64* pLength)
{
    return ma_libopus_get_length_in_pcm_frames((ma_libopus*)pDataSource, pLength);
}

static ma_data_source_vtable g_ma_libopus_ds_vtable =
{
    ma_libopus_ds_read,
    ma_libopus_ds_seek,
    ma_libopus_ds_get_data_format,
    ma_libopus_ds_get_cursor,
    ma_libopus_ds_get_length
};


#if !defined(MA_NO_LIBOPUS)
static int ma_libopus_of_callback__read(void* pUserData, unsigned char* pBufferOut, int bytesToRead)
{
    ma_libopus* pOpus = (ma_libopus*)pUserData;
    ma_result result;
    size_t bytesRead;

    result = pOpus->onRead(pOpus->pReadSeekTellUserData, (void*)pBufferOut, bytesToRead, &bytesRead);

    if (result != MA_SUCCESS) {
        return -1;
    }

    return (int)bytesRead;
}

static int ma_libopus_of_callback__seek(void* pUserData, ogg_int64_t offset, int whence)
{
    ma_libopus* pOpus = (ma_libopus*)pUserData;
    ma_result result;
    ma_seek_origin origin;

    if (whence == SEEK_SET) {
        origin = ma_seek_origin_start;
    } else if (whence == SEEK_END) {
        origin = ma_seek_origin_end;
    } else {
        origin = ma_seek_origin_current;
    }

    result = pOpus->onSeek(pOpus->pReadSeekTellUserData, offset, origin);
    if (result != MA_SUCCESS) {
        return -1;
    }

    return 0;
}

static opus_int64 ma_libopus_of_callback__tell(void* pUserData)
{
    ma_libopus* pOpus = (ma_libopus*)pUserData;
    ma_result result;
    ma_int64 cursor;

    if (pOpus->onTell == NULL) {
        return -1;
    }

    result = pOpus->onTell(pOpus->pReadSeekTellUserData, &cursor);
    if (result != MA_SUCCESS) {
        return -1;
    }

    return cursor;
}
#endif
static ma_result ma_libopus_init_internal(const ma_decoding_backend_config* pConfig, ma_libopus* pOpus)
{
    ma_result result;
    ma_data_source_config dataSourceConfig;

    if (pOpus == NULL) {
        return MA_INVALID_ARGS;
    }

    MA_ZERO_OBJECT(pOpus);
    pOpus->format = ma_format_f32; /* f32 by default. */

    if (pConfig != NULL && (pConfig->preferredFormat == ma_format_f32 || pConfig->preferredFormat == ma_format_s16)) {
        pOpus->format = pConfig->preferredFormat;
    } else {
        /* Getting here means something other than f32 and s16 was specified. Just leave this unset to use the default format. */
    }

    dataSourceConfig = ma_data_source_config_init();
    dataSourceConfig.vtable = &g_ma_libopus_ds_vtable;

    result = ma_data_source_init(&dataSourceConfig, &pOpus->ds);
    if (result != MA_SUCCESS) {
        return result; /* Failed to initialize the base data source. */
    }

    return MA_SUCCESS;
}

MA_API ma_result ma_libopus_init(ma_read_proc onRead, ma_seek_proc onSeek, ma_tell_proc onTell, void* pReadSeekTellUserData, const ma_decoding_backend_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_libopus* pOpus)
{
    ma_result result;

    (void)pAllocationCallbacks; /* Can't seem to find a way to configure memory allocations in libopus. */

    result = ma_libopus_init_internal(pConfig, pOpus);
    if (result != MA_SUCCESS) {
        return result;
    }

    if (onRead == NULL || onSeek == NULL) {
        return MA_INVALID_ARGS; /* onRead and onSeek are mandatory. */
    }

    pOpus->onRead = onRead;
    pOpus->onSeek = onSeek;
    pOpus->onTell = onTell;
    pOpus->pReadSeekTellUserData = pReadSeekTellUserData;

    #if !defined(MA_NO_LIBOPUS)
    {
        int libopusResult;
        OpusFileCallbacks libopusCallbacks;

        /* We can now initialize the Opus decoder. This must be done after we've set up the callbacks. */
        libopusCallbacks.read  = ma_libopus_of_callback__read;
        libopusCallbacks.seek  = ma_libopus_of_callback__seek;
        libopusCallbacks.close = NULL;
        libopusCallbacks.tell  = ma_libopus_of_callback__tell;

        pOpus->of = op_open_callbacks(pOpus, &libopusCallbacks, NULL, 0, &libopusResult);
        if (pOpus->of == NULL) {
            return MA_INVALID_FILE;
        }

        return MA_SUCCESS;
    }
    #else
    {
        /* libopus is disabled. */
        return MA_NOT_IMPLEMENTED;
    }
    #endif
}

MA_API ma_result ma_libopus_init_file(const char* pFilePath, const ma_decoding_backend_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_libopus* pOpus)
{
    ma_result result;

    (void)pAllocationCallbacks; /* Can't seem to find a way to configure memory allocations in libopus. */

    result = ma_libopus_init_internal(pConfig, pOpus);
    if (result != MA_SUCCESS) {
        return result;
    }

    #if !defined(MA_NO_LIBOPUS)
    {
        int libopusResult;

        pOpus->of = op_open_file(pFilePath, &libopusResult);
        if (pOpus->of == NULL) {
            return MA_INVALID_FILE;
        }

        return MA_SUCCESS;
    }
    #else
    {
        /* libopus is disabled. */
        (void)pFilePath;
        return MA_NOT_IMPLEMENTED;
    }
    #endif
}

MA_API void ma_libopus_uninit(ma_libopus* pOpus, const ma_allocation_callbacks* pAllocationCallbacks)
{
    if (pOpus == NULL) {
        return;
    }

    (void)pAllocationCallbacks;

    #if !defined(MA_NO_LIBOPUS)
    {
        op_free(pOpus->of);
    }
    #else
    {
        /* libopus is disabled. Should never hit this since initialization would have failed. */
        MA_ASSERT(MA_FALSE);
    }
    #endif

    ma_data_source_uninit(&pOpus->ds);
}
MA_API ma_result ma_libopus_read_pcm_frames(ma_libopus* pOpus, void* pFramesOut, ma_uint64 frameCount, ma_uint64* pFramesRead)
{
    if (pFramesRead != NULL) {
        *pFramesRead = 0;
    }

    if (frameCount == 0) {
        return MA_INVALID_ARGS;
    }

    if (pOpus == NULL) {
        return MA_INVALID_ARGS;
    }

    #if !defined(MA_NO_LIBOPUS)
    {
        ma_result result = MA_SUCCESS; /* Must be initialized to MA_SUCCESS. */
        ma_uint64 totalFramesRead;
        ma_format format;
        ma_uint32 channels;

        ma_libopus_get_data_format(pOpus, &format, &channels, NULL, NULL, 0);

        totalFramesRead = 0;
        while (totalFramesRead < frameCount) {
            long libopusResult;
            int framesToRead;
            ma_uint64 framesRemaining;

            framesRemaining = (frameCount - totalFramesRead);
            framesToRead = 1024;
            if (framesToRead > framesRemaining) {
                framesToRead = (int)framesRemaining;
            }

            if (format == ma_format_f32) {
                libopusResult = op_read_float(pOpus->of, (float*)ma_offset_pcm_frames_ptr(pFramesOut, totalFramesRead, format, channels), framesToRead * channels, NULL);
            } else {
                libopusResult = op_read      (pOpus->of, (opus_int16*)ma_offset_pcm_frames_ptr(pFramesOut, totalFramesRead, format, channels), framesToRead * channels, NULL);
            }

            if (libopusResult < 0) {
                result = MA_ERROR; /* Error while decoding. */
                break;
            } else {
                totalFramesRead += libopusResult;

                if (libopusResult == 0) {
                    result = MA_AT_END;
                    break;
                }
            }
        }

        if (pFramesRead != NULL) {
            *pFramesRead = totalFramesRead;
        }

        if (result == MA_SUCCESS && totalFramesRead == 0) {
            result = MA_AT_END;
        }

        return result;
    }
    #else
    {
        /* libopus is disabled. Should never hit this since initialization would have failed. */
        MA_ASSERT(MA_FALSE);

        (void)pFramesOut;
        (void)frameCount;
        (void)pFramesRead;

        return MA_NOT_IMPLEMENTED;
    }
    #endif
}

MA_API ma_result ma_libopus_seek_to_pcm_frame(ma_libopus* pOpus, ma_uint64 frameIndex)
{
    if (pOpus == NULL) {
        return MA_INVALID_ARGS;
    }

    #if !defined(MA_NO_LIBOPUS)
    {
        int libopusResult = op_pcm_seek(pOpus->of, (ogg_int64_t)frameIndex);
        if (libopusResult != 0) {
            if (libopusResult == OP_ENOSEEK) {
                return MA_INVALID_OPERATION; /* Not seekable. */
            } else if (libopusResult == OP_EINVAL) {
                return MA_INVALID_ARGS;
            } else {
                return MA_ERROR;
            }
        }

        return MA_SUCCESS;
    }
    #else
    {
        /* libopus is disabled. Should never hit this since initialization would have failed. */
        MA_ASSERT(MA_FALSE);

        (void)frameIndex;

        return MA_NOT_IMPLEMENTED;
    }
#endif
|
||||
}
|
||||
|
||||
MA_API ma_result ma_libopus_get_data_format(ma_libopus* pOpus, ma_format* pFormat, ma_uint32* pChannels, ma_uint32* pSampleRate, ma_channel* pChannelMap, size_t channelMapCap)
|
||||
{
|
||||
/* Defaults for safety. */
|
||||
if (pFormat != NULL) {
|
||||
*pFormat = ma_format_unknown;
|
||||
}
|
||||
if (pChannels != NULL) {
|
||||
*pChannels = 0;
|
||||
}
|
||||
if (pSampleRate != NULL) {
|
||||
*pSampleRate = 0;
|
||||
}
|
||||
if (pChannelMap != NULL) {
|
||||
MA_ZERO_MEMORY(pChannelMap, sizeof(*pChannelMap) * channelMapCap);
|
||||
}
|
||||
|
||||
if (pOpus == NULL) {
|
||||
return MA_INVALID_OPERATION;
|
||||
}
|
||||
|
||||
if (pFormat != NULL) {
|
||||
*pFormat = pOpus->format;
|
||||
}
|
||||
|
||||
#if !defined(MA_NO_LIBOPUS)
|
||||
{
|
||||
ma_uint32 channels = op_channel_count(pOpus->of, -1);
|
||||
|
||||
if (pChannels != NULL) {
|
||||
*pChannels = channels;
|
||||
}
|
||||
|
||||
if (pSampleRate != NULL) {
|
||||
*pSampleRate = 48000;
|
||||
}
|
||||
|
||||
if (pChannelMap != NULL) {
|
||||
ma_channel_map_init_standard(ma_standard_channel_map_vorbis, pChannelMap, channelMapCap, channels);
|
||||
}
|
||||
|
||||
return MA_SUCCESS;
|
||||
}
|
||||
#else
|
||||
{
|
||||
/* libopus is disabled. Should never hit this since initialization would have failed. */
|
||||
MA_ASSERT(MA_FALSE);
|
||||
return MA_NOT_IMPLEMENTED;
|
||||
}
|
||||
#endif
|
||||
}
|
||||
|
||||
MA_API ma_result ma_libopus_get_cursor_in_pcm_frames(ma_libopus* pOpus, ma_uint64* pCursor)
|
||||
{
|
||||
if (pCursor == NULL) {
|
||||
return MA_INVALID_ARGS;
|
||||
}
|
||||
|
||||
*pCursor = 0; /* Safety. */
|
||||
|
||||
if (pOpus == NULL) {
|
||||
return MA_INVALID_ARGS;
|
||||
}
|
||||
|
||||
#if !defined(MA_NO_LIBOPUS)
|
||||
{
|
||||
ogg_int64_t offset = op_pcm_tell(pOpus->of);
|
||||
if (offset < 0) {
|
||||
return MA_INVALID_FILE;
|
||||
}
|
||||
|
||||
*pCursor = (ma_uint64)offset;
|
||||
|
||||
return MA_SUCCESS;
|
||||
}
|
||||
#else
|
||||
{
|
||||
/* libopus is disabled. Should never hit this since initialization would have failed. */
|
||||
MA_ASSERT(MA_FALSE);
|
||||
return MA_NOT_IMPLEMENTED;
|
||||
}
|
||||
#endif
|
||||
}
|
||||
|
||||
MA_API ma_result ma_libopus_get_length_in_pcm_frames(ma_libopus* pOpus, ma_uint64* pLength)
|
||||
{
|
||||
if (pLength == NULL) {
|
||||
return MA_INVALID_ARGS;
|
||||
}
|
||||
|
||||
*pLength = 0; /* Safety. */
|
||||
|
||||
if (pOpus == NULL) {
|
||||
return MA_INVALID_ARGS;
|
||||
}
|
||||
|
||||
#if !defined(MA_NO_LIBOPUS)
|
||||
{
|
||||
ogg_int64_t length = op_pcm_total(pOpus->of, -1);
|
||||
if (length < 0) {
|
||||
return MA_ERROR;
|
||||
}
|
||||
|
||||
*pLength = (ma_uint64)length;
|
||||
|
||||
return MA_SUCCESS;
|
||||
}
|
||||
#else
|
||||
{
|
||||
/* libopus is disabled. Should never hit this since initialization would have failed. */
|
||||
MA_ASSERT(MA_FALSE);
|
||||
return MA_NOT_IMPLEMENTED;
|
||||
}
|
||||
#endif
|
||||
}
|
||||
|
||||
#endif
|
||||
518
thirdparty/miniaudio-0.11.24/extras/miniaudio_libvorbis.h
vendored
Normal file
@@ -0,0 +1,518 @@
/* THIS HAS BEEN DEPRECATED! Use the libvorbis decoder in extras/decoders/libvorbis instead. */

/*
This implements a data source that decodes Vorbis streams via libvorbis + libvorbisfile.

This object can be plugged into any `ma_data_source_*()` API and can also be used as a custom
decoding backend. See the custom_decoder example.

You need to include this file after miniaudio.h.
*/
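As a rough sketch of the standalone data-source use described above (not compilable on its own; it assumes miniaudio.h and this header have been included with the implementation enabled, and "my_file.ogg" is a placeholder path):

```c
/* Minimal standalone-use sketch. Error handling is abbreviated. */
ma_libvorbis vorbis;
if (ma_libvorbis_init_file("my_file.ogg", NULL, NULL, &vorbis) == MA_SUCCESS) {
    float frames[4096];     /* Interleaved f32 by default; sized for up to 4 channels x 1024 frames. */
    ma_uint64 framesRead;
    ma_libvorbis_read_pcm_frames(&vorbis, frames, 1024, &framesRead);
    ma_libvorbis_uninit(&vorbis, NULL);
}
```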
#ifndef miniaudio_libvorbis_h
#define miniaudio_libvorbis_h

#ifdef __cplusplus
extern "C" {
#endif

#if !defined(MA_NO_LIBVORBIS)
#ifndef OV_EXCLUDE_STATIC_CALLBACKS
#define OV_EXCLUDE_STATIC_CALLBACKS
#endif
#include <vorbis/vorbisfile.h>
#endif

typedef struct
{
    ma_data_source_base ds;     /* The libvorbis decoder can be used independently as a data source. */
    ma_read_proc onRead;
    ma_seek_proc onSeek;
    ma_tell_proc onTell;
    void* pReadSeekTellUserData;
    ma_format format;           /* Will be either f32 or s16. */
#if !defined(MA_NO_LIBVORBIS)
    OggVorbis_File vf;
#endif
} ma_libvorbis;

MA_API ma_result ma_libvorbis_init(ma_read_proc onRead, ma_seek_proc onSeek, ma_tell_proc onTell, void* pReadSeekTellUserData, const ma_decoding_backend_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_libvorbis* pVorbis);
MA_API ma_result ma_libvorbis_init_file(const char* pFilePath, const ma_decoding_backend_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_libvorbis* pVorbis);
MA_API void ma_libvorbis_uninit(ma_libvorbis* pVorbis, const ma_allocation_callbacks* pAllocationCallbacks);
MA_API ma_result ma_libvorbis_read_pcm_frames(ma_libvorbis* pVorbis, void* pFramesOut, ma_uint64 frameCount, ma_uint64* pFramesRead);
MA_API ma_result ma_libvorbis_seek_to_pcm_frame(ma_libvorbis* pVorbis, ma_uint64 frameIndex);
MA_API ma_result ma_libvorbis_get_data_format(ma_libvorbis* pVorbis, ma_format* pFormat, ma_uint32* pChannels, ma_uint32* pSampleRate, ma_channel* pChannelMap, size_t channelMapCap);
MA_API ma_result ma_libvorbis_get_cursor_in_pcm_frames(ma_libvorbis* pVorbis, ma_uint64* pCursor);
MA_API ma_result ma_libvorbis_get_length_in_pcm_frames(ma_libvorbis* pVorbis, ma_uint64* pLength);

#ifdef __cplusplus
}
#endif
#endif

#if defined(MINIAUDIO_IMPLEMENTATION) || defined(MA_IMPLEMENTATION)

static ma_result ma_libvorbis_ds_read(ma_data_source* pDataSource, void* pFramesOut, ma_uint64 frameCount, ma_uint64* pFramesRead)
{
    return ma_libvorbis_read_pcm_frames((ma_libvorbis*)pDataSource, pFramesOut, frameCount, pFramesRead);
}

static ma_result ma_libvorbis_ds_seek(ma_data_source* pDataSource, ma_uint64 frameIndex)
{
    return ma_libvorbis_seek_to_pcm_frame((ma_libvorbis*)pDataSource, frameIndex);
}

static ma_result ma_libvorbis_ds_get_data_format(ma_data_source* pDataSource, ma_format* pFormat, ma_uint32* pChannels, ma_uint32* pSampleRate, ma_channel* pChannelMap, size_t channelMapCap)
{
    return ma_libvorbis_get_data_format((ma_libvorbis*)pDataSource, pFormat, pChannels, pSampleRate, pChannelMap, channelMapCap);
}

static ma_result ma_libvorbis_ds_get_cursor(ma_data_source* pDataSource, ma_uint64* pCursor)
{
    return ma_libvorbis_get_cursor_in_pcm_frames((ma_libvorbis*)pDataSource, pCursor);
}

static ma_result ma_libvorbis_ds_get_length(ma_data_source* pDataSource, ma_uint64* pLength)
{
    return ma_libvorbis_get_length_in_pcm_frames((ma_libvorbis*)pDataSource, pLength);
}

static ma_data_source_vtable g_ma_libvorbis_ds_vtable =
{
    ma_libvorbis_ds_read,
    ma_libvorbis_ds_seek,
    ma_libvorbis_ds_get_data_format,
    ma_libvorbis_ds_get_cursor,
    ma_libvorbis_ds_get_length
};


#if !defined(MA_NO_LIBVORBIS)
static size_t ma_libvorbis_vf_callback__read(void* pBufferOut, size_t size, size_t count, void* pUserData)
{
    ma_libvorbis* pVorbis = (ma_libvorbis*)pUserData;
    ma_result result;
    size_t bytesToRead;
    size_t bytesRead;

    /* For consistency with fread(). If `size` or `count` is 0, return 0 immediately without changing anything. */
    if (size == 0 || count == 0) {
        return 0;
    }

    bytesToRead = size * count;
    result = pVorbis->onRead(pVorbis->pReadSeekTellUserData, pBufferOut, bytesToRead, &bytesRead);
    if (result != MA_SUCCESS) {
        /* Not entirely sure what to return here. What if an error occurs, but some data was read and bytesRead is > 0? */
        return 0;
    }

    return bytesRead / size;
}

static int ma_libvorbis_vf_callback__seek(void* pUserData, ogg_int64_t offset, int whence)
{
    ma_libvorbis* pVorbis = (ma_libvorbis*)pUserData;
    ma_result result;
    ma_seek_origin origin;

    if (whence == SEEK_SET) {
        origin = ma_seek_origin_start;
    } else if (whence == SEEK_END) {
        origin = ma_seek_origin_end;
    } else {
        origin = ma_seek_origin_current;
    }

    result = pVorbis->onSeek(pVorbis->pReadSeekTellUserData, offset, origin);
    if (result != MA_SUCCESS) {
        return -1;
    }

    return 0;
}

static long ma_libvorbis_vf_callback__tell(void* pUserData)
{
    ma_libvorbis* pVorbis = (ma_libvorbis*)pUserData;
    ma_result result;
    ma_int64 cursor;

    result = pVorbis->onTell(pVorbis->pReadSeekTellUserData, &cursor);
    if (result != MA_SUCCESS) {
        return -1;
    }

    return (long)cursor;
}
#endif

static ma_result ma_libvorbis_init_internal(const ma_decoding_backend_config* pConfig, ma_libvorbis* pVorbis)
{
    ma_result result;
    ma_data_source_config dataSourceConfig;

    if (pVorbis == NULL) {
        return MA_INVALID_ARGS;
    }

    MA_ZERO_OBJECT(pVorbis);
    pVorbis->format = ma_format_f32;    /* f32 by default. */

    if (pConfig != NULL && (pConfig->preferredFormat == ma_format_f32 || pConfig->preferredFormat == ma_format_s16)) {
        pVorbis->format = pConfig->preferredFormat;
    } else {
        /* Getting here means something other than f32 and s16 was specified. Just leave this unset to use the default format. */
    }

    dataSourceConfig = ma_data_source_config_init();
    dataSourceConfig.vtable = &g_ma_libvorbis_ds_vtable;

    result = ma_data_source_init(&dataSourceConfig, &pVorbis->ds);
    if (result != MA_SUCCESS) {
        return result;  /* Failed to initialize the base data source. */
    }

    return MA_SUCCESS;
}

MA_API ma_result ma_libvorbis_init(ma_read_proc onRead, ma_seek_proc onSeek, ma_tell_proc onTell, void* pReadSeekTellUserData, const ma_decoding_backend_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_libvorbis* pVorbis)
{
    ma_result result;

    (void)pAllocationCallbacks; /* Can't seem to find a way to configure memory allocations in libvorbis. */

    result = ma_libvorbis_init_internal(pConfig, pVorbis);
    if (result != MA_SUCCESS) {
        return result;
    }

    if (onRead == NULL || onSeek == NULL) {
        return MA_INVALID_ARGS; /* onRead and onSeek are mandatory. */
    }

    pVorbis->onRead = onRead;
    pVorbis->onSeek = onSeek;
    pVorbis->onTell = onTell;
    pVorbis->pReadSeekTellUserData = pReadSeekTellUserData;

    #if !defined(MA_NO_LIBVORBIS)
    {
        int libvorbisResult;
        ov_callbacks libvorbisCallbacks;

        /* We can now initialize the vorbis decoder. This must be done after we've set up the callbacks. */
        libvorbisCallbacks.read_func  = ma_libvorbis_vf_callback__read;
        libvorbisCallbacks.seek_func  = ma_libvorbis_vf_callback__seek;
        libvorbisCallbacks.close_func = NULL;
        libvorbisCallbacks.tell_func  = ma_libvorbis_vf_callback__tell;

        libvorbisResult = ov_open_callbacks(pVorbis, &pVorbis->vf, NULL, 0, libvorbisCallbacks);
        if (libvorbisResult < 0) {
            return MA_INVALID_FILE;
        }

        return MA_SUCCESS;
    }
    #else
    {
        /* libvorbis is disabled. */
        return MA_NOT_IMPLEMENTED;
    }
    #endif
}

MA_API ma_result ma_libvorbis_init_file(const char* pFilePath, const ma_decoding_backend_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_libvorbis* pVorbis)
{
    ma_result result;

    (void)pAllocationCallbacks; /* Can't seem to find a way to configure memory allocations in libvorbis. */

    result = ma_libvorbis_init_internal(pConfig, pVorbis);
    if (result != MA_SUCCESS) {
        return result;
    }

    #if !defined(MA_NO_LIBVORBIS)
    {
        int libvorbisResult;

        libvorbisResult = ov_fopen(pFilePath, &pVorbis->vf);
        if (libvorbisResult < 0) {
            return MA_INVALID_FILE;
        }

        return MA_SUCCESS;
    }
    #else
    {
        /* libvorbis is disabled. */
        (void)pFilePath;
        return MA_NOT_IMPLEMENTED;
    }
    #endif
}

MA_API void ma_libvorbis_uninit(ma_libvorbis* pVorbis, const ma_allocation_callbacks* pAllocationCallbacks)
{
    if (pVorbis == NULL) {
        return;
    }

    (void)pAllocationCallbacks;

    #if !defined(MA_NO_LIBVORBIS)
    {
        ov_clear(&pVorbis->vf);
    }
    #else
    {
        /* libvorbis is disabled. Should never hit this since initialization would have failed. */
        MA_ASSERT(MA_FALSE);
    }
    #endif

    ma_data_source_uninit(&pVorbis->ds);
}

MA_API ma_result ma_libvorbis_read_pcm_frames(ma_libvorbis* pVorbis, void* pFramesOut, ma_uint64 frameCount, ma_uint64* pFramesRead)
{
    if (pFramesRead != NULL) {
        *pFramesRead = 0;
    }

    if (frameCount == 0) {
        return MA_INVALID_ARGS;
    }

    if (pVorbis == NULL) {
        return MA_INVALID_ARGS;
    }

    #if !defined(MA_NO_LIBVORBIS)
    {
        /* The output format is either f32 or s16, depending on what was requested at initialization time. */
        ma_result result = MA_SUCCESS;  /* Must be initialized to MA_SUCCESS. */
        ma_uint64 totalFramesRead;
        ma_format format;
        ma_uint32 channels;

        ma_libvorbis_get_data_format(pVorbis, &format, &channels, NULL, NULL, 0);

        totalFramesRead = 0;
        while (totalFramesRead < frameCount) {
            long libvorbisResult;
            int framesToRead;
            ma_uint64 framesRemaining;

            framesRemaining = (frameCount - totalFramesRead);
            framesToRead = 1024;
            if (framesToRead > framesRemaining) {
                framesToRead = (int)framesRemaining;
            }

            if (format == ma_format_f32) {
                float** ppFramesF32;

                libvorbisResult = ov_read_float(&pVorbis->vf, &ppFramesF32, framesToRead, NULL);
                if (libvorbisResult < 0) {
                    result = MA_ERROR;  /* Error while decoding. */
                    break;
                } else {
                    /* Frames need to be interleaved. */
                    ma_interleave_pcm_frames(format, channels, libvorbisResult, (const void**)ppFramesF32, ma_offset_pcm_frames_ptr(pFramesOut, totalFramesRead, format, channels));
                    totalFramesRead += libvorbisResult;

                    if (libvorbisResult == 0) {
                        result = MA_AT_END;
                        break;
                    }
                }
            } else {
                libvorbisResult = ov_read(&pVorbis->vf, (char*)ma_offset_pcm_frames_ptr(pFramesOut, totalFramesRead, format, channels), framesToRead * ma_get_bytes_per_frame(format, channels), 0, 2, 1, NULL);
                if (libvorbisResult < 0) {
                    result = MA_ERROR;  /* Error while decoding. */
                    break;
                } else {
                    /* Conveniently, there's no need to interleave when using ov_read(). I'm not sure why ov_read_float() is different in that regard... */
                    totalFramesRead += libvorbisResult / ma_get_bytes_per_frame(format, channels);

                    if (libvorbisResult == 0) {
                        result = MA_AT_END;
                        break;
                    }
                }
            }
        }

        if (pFramesRead != NULL) {
            *pFramesRead = totalFramesRead;
        }

        if (result == MA_SUCCESS && totalFramesRead == 0) {
            result = MA_AT_END;
        }

        return result;
    }
    #else
    {
        /* libvorbis is disabled. Should never hit this since initialization would have failed. */
        MA_ASSERT(MA_FALSE);

        (void)pFramesOut;
        (void)frameCount;
        (void)pFramesRead;

        return MA_NOT_IMPLEMENTED;
    }
    #endif
}

MA_API ma_result ma_libvorbis_seek_to_pcm_frame(ma_libvorbis* pVorbis, ma_uint64 frameIndex)
{
    if (pVorbis == NULL) {
        return MA_INVALID_ARGS;
    }

    #if !defined(MA_NO_LIBVORBIS)
    {
        int libvorbisResult = ov_pcm_seek(&pVorbis->vf, (ogg_int64_t)frameIndex);
        if (libvorbisResult != 0) {
            if (libvorbisResult == OV_ENOSEEK) {
                return MA_INVALID_OPERATION;    /* Not seekable. */
            } else if (libvorbisResult == OV_EINVAL) {
                return MA_INVALID_ARGS;
            } else {
                return MA_ERROR;
            }
        }

        return MA_SUCCESS;
    }
    #else
    {
        /* libvorbis is disabled. Should never hit this since initialization would have failed. */
        MA_ASSERT(MA_FALSE);

        (void)frameIndex;

        return MA_NOT_IMPLEMENTED;
    }
    #endif
}

MA_API ma_result ma_libvorbis_get_data_format(ma_libvorbis* pVorbis, ma_format* pFormat, ma_uint32* pChannels, ma_uint32* pSampleRate, ma_channel* pChannelMap, size_t channelMapCap)
{
    /* Defaults for safety. */
    if (pFormat != NULL) {
        *pFormat = ma_format_unknown;
    }
    if (pChannels != NULL) {
        *pChannels = 0;
    }
    if (pSampleRate != NULL) {
        *pSampleRate = 0;
    }
    if (pChannelMap != NULL) {
        MA_ZERO_MEMORY(pChannelMap, sizeof(*pChannelMap) * channelMapCap);
    }

    if (pVorbis == NULL) {
        return MA_INVALID_OPERATION;
    }

    if (pFormat != NULL) {
        *pFormat = pVorbis->format;
    }

    #if !defined(MA_NO_LIBVORBIS)
    {
        vorbis_info* pInfo = ov_info(&pVorbis->vf, 0);
        if (pInfo == NULL) {
            return MA_INVALID_OPERATION;
        }

        if (pChannels != NULL) {
            *pChannels = pInfo->channels;
        }

        if (pSampleRate != NULL) {
            *pSampleRate = pInfo->rate;
        }

        if (pChannelMap != NULL) {
            ma_channel_map_init_standard(ma_standard_channel_map_vorbis, pChannelMap, channelMapCap, pInfo->channels);
        }

        return MA_SUCCESS;
    }
    #else
    {
        /* libvorbis is disabled. Should never hit this since initialization would have failed. */
        MA_ASSERT(MA_FALSE);
        return MA_NOT_IMPLEMENTED;
    }
    #endif
}

MA_API ma_result ma_libvorbis_get_cursor_in_pcm_frames(ma_libvorbis* pVorbis, ma_uint64* pCursor)
{
    if (pCursor == NULL) {
        return MA_INVALID_ARGS;
    }

    *pCursor = 0;   /* Safety. */

    if (pVorbis == NULL) {
        return MA_INVALID_ARGS;
    }

    #if !defined(MA_NO_LIBVORBIS)
    {
        ogg_int64_t offset = ov_pcm_tell(&pVorbis->vf);
        if (offset < 0) {
            return MA_INVALID_FILE;
        }

        *pCursor = (ma_uint64)offset;

        return MA_SUCCESS;
    }
    #else
    {
        /* libvorbis is disabled. Should never hit this since initialization would have failed. */
        MA_ASSERT(MA_FALSE);
        return MA_NOT_IMPLEMENTED;
    }
    #endif
}

MA_API ma_result ma_libvorbis_get_length_in_pcm_frames(ma_libvorbis* pVorbis, ma_uint64* pLength)
{
    if (pLength == NULL) {
        return MA_INVALID_ARGS;
    }

    *pLength = 0;   /* Safety. */

    if (pVorbis == NULL) {
        return MA_INVALID_ARGS;
    }

    #if !defined(MA_NO_LIBVORBIS)
    {
        /* I don't know how to reliably retrieve the length in frames using libvorbis, so returning 0 for now. */
        *pLength = 0;

        return MA_SUCCESS;
    }
    #else
    {
        /* libvorbis is disabled. Should never hit this since initialization would have failed. */
        MA_ASSERT(MA_FALSE);
        return MA_NOT_IMPLEMENTED;
    }
    #endif
}

#endif
7
thirdparty/miniaudio-0.11.24/extras/miniaudio_split/README.md
vendored
Normal file
@@ -0,0 +1,7 @@
These files split the main library into separate .h and .c files. This is intended for those who prefer separate files
or whose build environment better suits this configuration. The files here are generated by a tool from the
content of the main miniaudio.h file. Do not edit these files directly. If you want to contribute, please make the
contribution in the main file.

This is not always up to date with the most recent commit in the dev branch, but will usually be up to date with the
master branch.
84121
thirdparty/miniaudio-0.11.24/extras/miniaudio_split/miniaudio.c
vendored
Normal file
7844
thirdparty/miniaudio-0.11.24/extras/miniaudio_split/miniaudio.h
vendored
Normal file
83
thirdparty/miniaudio-0.11.24/extras/nodes/ma_channel_combiner_node/ma_channel_combiner_node.c
vendored
Normal file
@@ -0,0 +1,83 @@
#ifndef miniaudio_channel_combiner_node_c
#define miniaudio_channel_combiner_node_c

#include "ma_channel_combiner_node.h"

#include <string.h> /* For memset(). */

MA_API ma_channel_combiner_node_config ma_channel_combiner_node_config_init(ma_uint32 channels)
{
    ma_channel_combiner_node_config config;

    memset(&config, 0, sizeof(config));
    config.nodeConfig = ma_node_config_init();  /* Input and output channels will be set in ma_channel_combiner_node_init(). */
    config.channels   = channels;

    return config;
}


static void ma_channel_combiner_node_process_pcm_frames(ma_node* pNode, const float** ppFramesIn, ma_uint32* pFrameCountIn, float** ppFramesOut, ma_uint32* pFrameCountOut)
{
    ma_channel_combiner_node* pCombinerNode = (ma_channel_combiner_node*)pNode;

    (void)pFrameCountIn;

    ma_interleave_pcm_frames(ma_format_f32, ma_node_get_output_channels(pCombinerNode, 0), *pFrameCountOut, (const void**)ppFramesIn, (void*)ppFramesOut[0]);
}

static ma_node_vtable g_ma_channel_combiner_node_vtable =
{
    ma_channel_combiner_node_process_pcm_frames,
    NULL,
    MA_NODE_BUS_COUNT_UNKNOWN,  /* Input bus count is determined by the channel count and is unknown until the node instance is initialized. */
    1,                          /* 1 output bus. */
    0                           /* Default flags. */
};

MA_API ma_result ma_channel_combiner_node_init(ma_node_graph* pNodeGraph, const ma_channel_combiner_node_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_channel_combiner_node* pCombinerNode)
{
    ma_result result;
    ma_node_config baseConfig;
    ma_uint32 inputChannels[MA_MAX_NODE_BUS_COUNT];
    ma_uint32 outputChannels[1];
    ma_uint32 iChannel;

    if (pCombinerNode == NULL) {
        return MA_INVALID_ARGS;
    }

    memset(pCombinerNode, 0, sizeof(*pCombinerNode));

    if (pConfig == NULL) {
        return MA_INVALID_ARGS;
    }

    /* All input channels are mono. */
    for (iChannel = 0; iChannel < pConfig->channels; iChannel += 1) {
        inputChannels[iChannel] = 1;
    }

    outputChannels[0] = pConfig->channels;

    baseConfig = pConfig->nodeConfig;
    baseConfig.vtable          = &g_ma_channel_combiner_node_vtable;
    baseConfig.inputBusCount   = pConfig->channels; /* The vtable has an unknown channel count, so must specify it here. */
    baseConfig.pInputChannels  = inputChannels;
    baseConfig.pOutputChannels = outputChannels;

    result = ma_node_init(pNodeGraph, &baseConfig, pAllocationCallbacks, &pCombinerNode->baseNode);
    if (result != MA_SUCCESS) {
        return result;
    }

    return MA_SUCCESS;
}

MA_API void ma_channel_combiner_node_uninit(ma_channel_combiner_node* pCombinerNode, const ma_allocation_callbacks* pAllocationCallbacks)
{
    /* The base node is always uninitialized first. */
    ma_node_uninit(pCombinerNode, pAllocationCallbacks);
}

#endif  /* miniaudio_channel_combiner_node_c */
32
thirdparty/miniaudio-0.11.24/extras/nodes/ma_channel_combiner_node/ma_channel_combiner_node.h
vendored
Normal file
@@ -0,0 +1,32 @@
/* Include ma_channel_combiner_node.h after miniaudio.h */
#ifndef miniaudio_channel_combiner_node_h
#define miniaudio_channel_combiner_node_h

#include "../../../miniaudio.h"

#ifdef __cplusplus
extern "C" {
#endif

typedef struct
{
    ma_node_config nodeConfig;
    ma_uint32 channels;
} ma_channel_combiner_node_config;

MA_API ma_channel_combiner_node_config ma_channel_combiner_node_config_init(ma_uint32 channels);


typedef struct
{
    ma_node_base baseNode;
} ma_channel_combiner_node;

MA_API ma_result ma_channel_combiner_node_init(ma_node_graph* pNodeGraph, const ma_channel_combiner_node_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_channel_combiner_node* pCombinerNode);
MA_API void ma_channel_combiner_node_uninit(ma_channel_combiner_node* pCombinerNode, const ma_allocation_callbacks* pAllocationCallbacks);


#ifdef __cplusplus
}
#endif
#endif  /* miniaudio_channel_combiner_node_h */
@@ -0,0 +1,2 @@
/* The channel separator example also demonstrates how to use the combiner. */
#include "../ma_channel_separator_node/ma_channel_separator_node_example.c"
87
thirdparty/miniaudio-0.11.24/extras/nodes/ma_channel_separator_node/ma_channel_separator_node.c
vendored
Normal file
@@ -0,0 +1,87 @@
#ifndef miniaudio_channel_separator_node_c
#define miniaudio_channel_separator_node_c

#include "ma_channel_separator_node.h"

#include <string.h> /* For memset(). */

MA_API ma_channel_separator_node_config ma_channel_separator_node_config_init(ma_uint32 channels)
{
    ma_channel_separator_node_config config;

    memset(&config, 0, sizeof(config));
    config.nodeConfig = ma_node_config_init();  /* Input and output channels will be set in ma_channel_separator_node_init(). */
    config.channels   = channels;

    return config;
}


static void ma_channel_separator_node_process_pcm_frames(ma_node* pNode, const float** ppFramesIn, ma_uint32* pFrameCountIn, float** ppFramesOut, ma_uint32* pFrameCountOut)
{
    ma_channel_separator_node* pSplitterNode = (ma_channel_separator_node*)pNode;

    (void)pFrameCountIn;

    ma_deinterleave_pcm_frames(ma_format_f32, ma_node_get_input_channels(pSplitterNode, 0), *pFrameCountOut, (const void*)ppFramesIn[0], (void**)ppFramesOut);
}

static ma_node_vtable g_ma_channel_separator_node_vtable =
{
    ma_channel_separator_node_process_pcm_frames,
    NULL,
    1,                          /* 1 input bus. */
    MA_NODE_BUS_COUNT_UNKNOWN,  /* Output bus count is determined by the channel count and is unknown until the node instance is initialized. */
    0                           /* Default flags. */
};

MA_API ma_result ma_channel_separator_node_init(ma_node_graph* pNodeGraph, const ma_channel_separator_node_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_channel_separator_node* pSeparatorNode)
{
    ma_result result;
    ma_node_config baseConfig;
    ma_uint32 inputChannels[1];
    ma_uint32 outputChannels[MA_MAX_NODE_BUS_COUNT];
    ma_uint32 iChannel;

    if (pSeparatorNode == NULL) {
        return MA_INVALID_ARGS;
    }

    memset(pSeparatorNode, 0, sizeof(*pSeparatorNode));

    if (pConfig == NULL) {
        return MA_INVALID_ARGS;
    }

    if (pConfig->channels > MA_MAX_NODE_BUS_COUNT) {
        return MA_INVALID_ARGS; /* Channel count cannot exceed the maximum number of buses. */
    }

    inputChannels[0] = pConfig->channels;

    /* All output channels are mono. */
    for (iChannel = 0; iChannel < pConfig->channels; iChannel += 1) {
        outputChannels[iChannel] = 1;
    }

    baseConfig = pConfig->nodeConfig;
    baseConfig.vtable          = &g_ma_channel_separator_node_vtable;
    baseConfig.outputBusCount  = pConfig->channels; /* The vtable has an unknown channel count, so must specify it here. */
    baseConfig.pInputChannels  = inputChannels;
    baseConfig.pOutputChannels = outputChannels;

    result = ma_node_init(pNodeGraph, &baseConfig, pAllocationCallbacks, &pSeparatorNode->baseNode);
    if (result != MA_SUCCESS) {
        return result;
    }

    return MA_SUCCESS;
}

MA_API void ma_channel_separator_node_uninit(ma_channel_separator_node* pSeparatorNode, const ma_allocation_callbacks* pAllocationCallbacks)
{
    /* The base node is always uninitialized first. */
    ma_node_uninit(pSeparatorNode, pAllocationCallbacks);
}

#endif  /* miniaudio_channel_separator_node_c */
|
||||
31
thirdparty/miniaudio-0.11.24/extras/nodes/ma_channel_separator_node/ma_channel_separator_node.h
vendored
Normal file
@@ -0,0 +1,31 @@
/* Include ma_channel_separator_node.h after miniaudio.h */
#ifndef miniaudio_channel_separator_node_h
#define miniaudio_channel_separator_node_h

#include "../../../miniaudio.h"

#ifdef __cplusplus
extern "C" {
#endif

typedef struct
{
    ma_node_config nodeConfig;
    ma_uint32 channels;
} ma_channel_separator_node_config;

MA_API ma_channel_separator_node_config ma_channel_separator_node_config_init(ma_uint32 channels);


typedef struct
{
    ma_node_base baseNode;
} ma_channel_separator_node;

MA_API ma_result ma_channel_separator_node_init(ma_node_graph* pNodeGraph, const ma_channel_separator_node_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_channel_separator_node* pSeparatorNode);
MA_API void ma_channel_separator_node_uninit(ma_channel_separator_node* pSeparatorNode, const ma_allocation_callbacks* pAllocationCallbacks);

#ifdef __cplusplus
}
#endif
#endif  /* miniaudio_channel_separator_node_h */
@@ -0,0 +1,148 @@
#include "../../../miniaudio.c"
#include "ma_channel_separator_node.c"
#include "../ma_channel_combiner_node/ma_channel_combiner_node.c"

#include <stdio.h>

#define DEVICE_FORMAT       ma_format_f32   /* Must always be f32 for this example because the node graph system only works with this. */
#define DEVICE_CHANNELS     0               /* The input file will determine the channel count. */
#define DEVICE_SAMPLE_RATE  48000

/*
In this example we're just separating out the channels with a `ma_channel_separator_node` and then
combining them back together with a `ma_channel_combiner_node` before playing them back.
*/
static ma_decoder g_decoder;                        /* The decoder that we'll read data from. */
static ma_data_source_node g_dataSupplyNode;        /* The node that will sit at the root level. Will be reading data from g_decoder. */
static ma_channel_separator_node g_separatorNode;   /* The separator node. */
static ma_channel_combiner_node g_combinerNode;     /* The combiner node. */
static ma_node_graph g_nodeGraph;                   /* The main node graph that we'll be feeding data through. */

void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    (void)pInput;
    (void)pDevice;

    /* All we need to do is read from the node graph. */
    ma_node_graph_read_pcm_frames(&g_nodeGraph, pOutput, frameCount, NULL);
}

int main(int argc, char** argv)
{
    ma_result result;
    ma_decoder_config decoderConfig;
    ma_device_config deviceConfig;
    ma_device device;
    ma_node_graph_config nodeGraphConfig;
    ma_channel_separator_node_config separatorNodeConfig;
    ma_channel_combiner_node_config combinerNodeConfig;
    ma_data_source_node_config dataSupplyNodeConfig;
    ma_uint32 iChannel;

    if (argc < 2) { /* argv[1] is the input file, so we need at least two arguments. */
        printf("No input file.\n");
        return -1;
    }


    /* Decoder. */
    decoderConfig = ma_decoder_config_init(DEVICE_FORMAT, DEVICE_CHANNELS, DEVICE_SAMPLE_RATE);

    result = ma_decoder_init_file(argv[1], &decoderConfig, &g_decoder);
    if (result != MA_SUCCESS) {
        printf("Failed to load decoder.\n");
        return -1;
    }


    /* Device. */
    deviceConfig = ma_device_config_init(ma_device_type_playback);
    deviceConfig.playback.pDeviceID = NULL;
    deviceConfig.playback.format    = g_decoder.outputFormat;
    deviceConfig.playback.channels  = g_decoder.outputChannels;
    deviceConfig.sampleRate         = g_decoder.outputSampleRate;
    deviceConfig.dataCallback       = data_callback;
    result = ma_device_init(NULL, &deviceConfig, &device);
    if (result != MA_SUCCESS) {
        return result;
    }


    /* Node graph. */
    nodeGraphConfig = ma_node_graph_config_init(device.playback.channels);

    result = ma_node_graph_init(&nodeGraphConfig, NULL, &g_nodeGraph);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize node graph.\n");
        goto done0;
    }


    /* Combiner. Attached straight to the endpoint. Input will be the separator node. */
    combinerNodeConfig = ma_channel_combiner_node_config_init(device.playback.channels);

    result = ma_channel_combiner_node_init(&g_nodeGraph, &combinerNodeConfig, NULL, &g_combinerNode);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize channel combiner node.\n");
        goto done1;
    }

    ma_node_attach_output_bus(&g_combinerNode, 0, ma_node_graph_get_endpoint(&g_nodeGraph), 0);


    /*
    Separator. Attached to the combiner. We need to attach each of the outputs of the
    separator to each of the inputs of the combiner.
    */
    separatorNodeConfig = ma_channel_separator_node_config_init(device.playback.channels);

    result = ma_channel_separator_node_init(&g_nodeGraph, &separatorNodeConfig, NULL, &g_separatorNode);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize channel separator node.\n");
        goto done2;
    }

    /* The separator and combiner must have the same number of output and input buses respectively. */
    MA_ASSERT(ma_node_get_output_bus_count(&g_separatorNode) == ma_node_get_input_bus_count(&g_combinerNode));

    /* Each of the separator's outputs needs to be attached to the corresponding input of the combiner. */
    for (iChannel = 0; iChannel < ma_node_get_output_bus_count(&g_separatorNode); iChannel += 1) {
        ma_node_attach_output_bus(&g_separatorNode, iChannel, &g_combinerNode, iChannel);
    }


    /* Data supply. Attached to input bus 0 of the separator node. */
    dataSupplyNodeConfig = ma_data_source_node_config_init(&g_decoder);

    result = ma_data_source_node_init(&g_nodeGraph, &dataSupplyNodeConfig, NULL, &g_dataSupplyNode);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize source node.\n");
        goto done3;
    }

    ma_node_attach_output_bus(&g_dataSupplyNode, 0, &g_separatorNode, 0);


    /* Now we just start the device and wait for the user to terminate the program. */
    ma_device_start(&device);

    printf("Press Enter to quit...\n");
    getchar();

    /* It's important that we stop the device first or else we'll uninitialize the graph from under the device. */
    ma_device_stop(&device);


    /*done4:*/ ma_data_source_node_uninit(&g_dataSupplyNode, NULL);
    done3: ma_channel_separator_node_uninit(&g_separatorNode, NULL);
    done2: ma_channel_combiner_node_uninit(&g_combinerNode, NULL);
    done1: ma_node_graph_uninit(&g_nodeGraph, NULL);
    done0: ma_device_uninit(&device);

    return 0;
}
112
thirdparty/miniaudio-0.11.24/extras/nodes/ma_ltrim_node/ma_ltrim_node.c
vendored
Normal file
@@ -0,0 +1,112 @@
#ifndef miniaudio_ltrim_node_c
#define miniaudio_ltrim_node_c

#include "ma_ltrim_node.h"

#include <string.h> /* For memset(). */

#ifndef ma_min
#define ma_min(a, b) (((a) < (b)) ? (a) : (b))
#endif

MA_API ma_ltrim_node_config ma_ltrim_node_config_init(ma_uint32 channels, float threshold)
{
    ma_ltrim_node_config config;

    memset(&config, 0, sizeof(config));
    config.nodeConfig = ma_node_config_init(); /* Input and output channels will be set in ma_ltrim_node_init(). */
    config.channels   = channels;
    config.threshold  = threshold;

    return config;
}


static void ma_ltrim_node_process_pcm_frames(ma_node* pNode, const float** ppFramesIn, ma_uint32* pFrameCountIn, float** ppFramesOut, ma_uint32* pFrameCountOut)
{
    ma_ltrim_node* pTrimNode = (ma_ltrim_node*)pNode;
    ma_uint32 framesProcessedIn  = 0;
    ma_uint32 framesProcessedOut = 0;
    ma_uint32 channelCount = ma_node_get_input_channels(pNode, 0);

    /*
    If we haven't yet found the start, skip over input frames until we find a sample outside
    of the threshold.
    */
    if (pTrimNode->foundStart == MA_FALSE) {
        while (framesProcessedIn < *pFrameCountIn) {
            ma_uint32 iChannel = 0;
            for (iChannel = 0; iChannel < channelCount; iChannel += 1) {
                float sample = ppFramesIn[0][framesProcessedIn*channelCount + iChannel];
                if (sample < -pTrimNode->threshold || sample > pTrimNode->threshold) {
                    pTrimNode->foundStart = MA_TRUE;
                    break;
                }
            }

            if (pTrimNode->foundStart) {
                break; /* The start has been found. Get out of this loop and finish off processing. */
            } else {
                framesProcessedIn += 1;
            }
        }
    }

    /* If there's anything left, just copy it over. */
    framesProcessedOut = ma_min(*pFrameCountOut, *pFrameCountIn - framesProcessedIn);
    ma_copy_pcm_frames(ppFramesOut[0], &ppFramesIn[0][framesProcessedIn], framesProcessedOut, ma_format_f32, channelCount);

    framesProcessedIn += framesProcessedOut;

    /* We always "process" every input frame, but we may have produced only partial output. */
    *pFrameCountIn  = framesProcessedIn;
    *pFrameCountOut = framesProcessedOut;
}

static ma_node_vtable g_ma_ltrim_node_vtable =
{
    ma_ltrim_node_process_pcm_frames,
    NULL,
    1, /* 1 input bus. */
    1, /* 1 output bus. */
    MA_NODE_FLAG_DIFFERENT_PROCESSING_RATES
};

MA_API ma_result ma_ltrim_node_init(ma_node_graph* pNodeGraph, const ma_ltrim_node_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_ltrim_node* pTrimNode)
{
    ma_result result;
    ma_node_config baseConfig;

    if (pTrimNode == NULL) {
        return MA_INVALID_ARGS;
    }

    memset(pTrimNode, 0, sizeof(*pTrimNode));

    if (pConfig == NULL) {
        return MA_INVALID_ARGS;
    }

    pTrimNode->threshold  = pConfig->threshold;
    pTrimNode->foundStart = MA_FALSE;

    baseConfig = pConfig->nodeConfig;
    baseConfig.vtable          = &g_ma_ltrim_node_vtable;
    baseConfig.pInputChannels  = &pConfig->channels;
    baseConfig.pOutputChannels = &pConfig->channels;

    result = ma_node_init(pNodeGraph, &baseConfig, pAllocationCallbacks, &pTrimNode->baseNode);
    if (result != MA_SUCCESS) {
        return result;
    }

    return MA_SUCCESS;
}

MA_API void ma_ltrim_node_uninit(ma_ltrim_node* pTrimNode, const ma_allocation_callbacks* pAllocationCallbacks)
{
    /* The base node is always uninitialized first. */
    ma_node_uninit(pTrimNode, pAllocationCallbacks);
}

#endif  /* miniaudio_ltrim_node_c */
37
thirdparty/miniaudio-0.11.24/extras/nodes/ma_ltrim_node/ma_ltrim_node.h
vendored
Normal file
@@ -0,0 +1,37 @@
/* Include ma_ltrim_node.h after miniaudio.h */
#ifndef miniaudio_ltrim_node_h
#define miniaudio_ltrim_node_h

#include "../../../miniaudio.h"

#ifdef __cplusplus
extern "C" {
#endif

/*
The trim node has one input and one output.
*/
typedef struct
{
    ma_node_config nodeConfig;
    ma_uint32 channels;
    float threshold;
} ma_ltrim_node_config;

MA_API ma_ltrim_node_config ma_ltrim_node_config_init(ma_uint32 channels, float threshold);


typedef struct
{
    ma_node_base baseNode;
    float threshold;
    ma_bool32 foundStart;
} ma_ltrim_node;

MA_API ma_result ma_ltrim_node_init(ma_node_graph* pNodeGraph, const ma_ltrim_node_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_ltrim_node* pTrimNode);
MA_API void ma_ltrim_node_uninit(ma_ltrim_node* pTrimNode, const ma_allocation_callbacks* pAllocationCallbacks);

#ifdef __cplusplus
}
#endif
#endif  /* miniaudio_ltrim_node_h */
114
thirdparty/miniaudio-0.11.24/extras/nodes/ma_ltrim_node/ma_ltrim_node_example.c
vendored
Normal file
@@ -0,0 +1,114 @@
#include "../../../miniaudio.c"
#include "ma_ltrim_node.c"

#include <stdio.h>

#define DEVICE_FORMAT       ma_format_f32   /* Must always be f32 for this example because the node graph system only works with this. */
#define DEVICE_CHANNELS     0               /* The input file will determine the channel count. */
#define DEVICE_SAMPLE_RATE  0               /* The input file will determine the sample rate. */

static ma_decoder g_decoder;                 /* The decoder that we'll read data from. */
static ma_data_source_node g_dataSupplyNode; /* The node that will sit at the root level. Will be reading data from g_decoder. */
static ma_ltrim_node g_trimNode;             /* The trim node. */
static ma_node_graph g_nodeGraph;            /* The main node graph that we'll be feeding data through. */

void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    (void)pInput;
    (void)pDevice;

    /* All we need to do is read from the node graph. */
    ma_node_graph_read_pcm_frames(&g_nodeGraph, pOutput, frameCount, NULL);
}

int main(int argc, char** argv)
{
    ma_result result;
    ma_decoder_config decoderConfig;
    ma_device_config deviceConfig;
    ma_device device;
    ma_node_graph_config nodeGraphConfig;
    ma_ltrim_node_config trimNodeConfig;
    ma_data_source_node_config dataSupplyNodeConfig;

    if (argc < 2) { /* argv[1] is the input file, so we need at least two arguments. */
        printf("No input file.\n");
        return -1;
    }


    /* Decoder. */
    decoderConfig = ma_decoder_config_init(DEVICE_FORMAT, DEVICE_CHANNELS, DEVICE_SAMPLE_RATE);

    result = ma_decoder_init_file(argv[1], &decoderConfig, &g_decoder);
    if (result != MA_SUCCESS) {
        printf("Failed to load decoder.\n");
        return -1;
    }


    /* Device. */
    deviceConfig = ma_device_config_init(ma_device_type_playback);
    deviceConfig.playback.pDeviceID = NULL;
    deviceConfig.playback.format    = g_decoder.outputFormat;
    deviceConfig.playback.channels  = g_decoder.outputChannels;
    deviceConfig.sampleRate         = g_decoder.outputSampleRate;
    deviceConfig.dataCallback       = data_callback;
    result = ma_device_init(NULL, &deviceConfig, &device);
    if (result != MA_SUCCESS) {
        return result;
    }


    /* Node graph. */
    nodeGraphConfig = ma_node_graph_config_init(device.playback.channels);

    result = ma_node_graph_init(&nodeGraphConfig, NULL, &g_nodeGraph);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize node graph.\n");
        goto done0;
    }


    /* Trimmer. Attached straight to the endpoint. Input will be the data source node. */
    trimNodeConfig = ma_ltrim_node_config_init(device.playback.channels, 0);

    result = ma_ltrim_node_init(&g_nodeGraph, &trimNodeConfig, NULL, &g_trimNode);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize ltrim node.\n");
        goto done1;
    }

    ma_node_attach_output_bus(&g_trimNode, 0, ma_node_graph_get_endpoint(&g_nodeGraph), 0);


    /* Data supply. */
    dataSupplyNodeConfig = ma_data_source_node_config_init(&g_decoder);

    result = ma_data_source_node_init(&g_nodeGraph, &dataSupplyNodeConfig, NULL, &g_dataSupplyNode);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize data source node.\n");
        goto done2;
    }

    ma_node_attach_output_bus(&g_dataSupplyNode, 0, &g_trimNode, 0);


    /* Now we just start the device and wait for the user to terminate the program. */
    ma_device_start(&device);

    printf("Press Enter to quit...\n");
    getchar();

    /* It's important that we stop the device first or else we'll uninitialize the graph from under the device. */
    ma_device_stop(&device);


    /*done3:*/ ma_data_source_node_uninit(&g_dataSupplyNode, NULL);
    done2: ma_ltrim_node_uninit(&g_trimNode, NULL);
    done1: ma_node_graph_uninit(&g_nodeGraph, NULL);
    done0: ma_device_uninit(&device);

    return 0;
}
84
thirdparty/miniaudio-0.11.24/extras/nodes/ma_reverb_node/ma_reverb_node.c
vendored
Normal file
@@ -0,0 +1,84 @@
#ifndef miniaudio_reverb_node_c
#define miniaudio_reverb_node_c

#define VERBLIB_IMPLEMENTATION
#include "ma_reverb_node.h"

#include <string.h> /* For memset(). */

MA_API ma_reverb_node_config ma_reverb_node_config_init(ma_uint32 channels, ma_uint32 sampleRate)
{
    ma_reverb_node_config config;

    memset(&config, 0, sizeof(config));
    config.nodeConfig = ma_node_config_init(); /* Input and output channels will be set in ma_reverb_node_init(). */
    config.channels   = channels;
    config.sampleRate = sampleRate;
    config.roomSize   = verblib_initialroom;
    config.damping    = verblib_initialdamp;
    config.width      = verblib_initialwidth;
    config.wetVolume  = verblib_initialwet;
    config.dryVolume  = verblib_initialdry;
    config.mode       = verblib_initialmode;

    return config;
}


static void ma_reverb_node_process_pcm_frames(ma_node* pNode, const float** ppFramesIn, ma_uint32* pFrameCountIn, float** ppFramesOut, ma_uint32* pFrameCountOut)
{
    ma_reverb_node* pReverbNode = (ma_reverb_node*)pNode;

    (void)pFrameCountIn;

    verblib_process(&pReverbNode->reverb, ppFramesIn[0], ppFramesOut[0], *pFrameCountOut);
}

static ma_node_vtable g_ma_reverb_node_vtable =
{
    ma_reverb_node_process_pcm_frames,
    NULL,
    1, /* 1 input bus. */
    1, /* 1 output bus. */
    MA_NODE_FLAG_CONTINUOUS_PROCESSING /* Reverb requires continuous processing to ensure the tail gets processed. */
};

MA_API ma_result ma_reverb_node_init(ma_node_graph* pNodeGraph, const ma_reverb_node_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_reverb_node* pReverbNode)
{
    ma_result result;
    ma_node_config baseConfig;

    if (pReverbNode == NULL) {
        return MA_INVALID_ARGS;
    }

    memset(pReverbNode, 0, sizeof(*pReverbNode));

    if (pConfig == NULL) {
        return MA_INVALID_ARGS;
    }

    if (verblib_initialize(&pReverbNode->reverb, (unsigned long)pConfig->sampleRate, (unsigned int)pConfig->channels) == 0) {
        return MA_INVALID_ARGS;
    }

    baseConfig = pConfig->nodeConfig;
    baseConfig.vtable          = &g_ma_reverb_node_vtable;
    baseConfig.pInputChannels  = &pConfig->channels;
    baseConfig.pOutputChannels = &pConfig->channels;

    result = ma_node_init(pNodeGraph, &baseConfig, pAllocationCallbacks, &pReverbNode->baseNode);
    if (result != MA_SUCCESS) {
        return result;
    }

    return MA_SUCCESS;
}

MA_API void ma_reverb_node_uninit(ma_reverb_node* pReverbNode, const ma_allocation_callbacks* pAllocationCallbacks)
{
    /* The base node is always uninitialized first. */
    ma_node_uninit(pReverbNode, pAllocationCallbacks);
}

#endif  /* miniaudio_reverb_node_c */
43
thirdparty/miniaudio-0.11.24/extras/nodes/ma_reverb_node/ma_reverb_node.h
vendored
Normal file
@@ -0,0 +1,43 @@
/* Include ma_reverb_node.h after miniaudio.h */
#ifndef miniaudio_reverb_node_h
#define miniaudio_reverb_node_h

#include "../../../miniaudio.h"
#include "verblib.h"

#ifdef __cplusplus
extern "C" {
#endif

/*
The reverb node has one input and one output.
*/
typedef struct
{
    ma_node_config nodeConfig;
    ma_uint32 channels;   /* The number of channels of the source, which will be the same as the output. Must be 1 or 2. */
    ma_uint32 sampleRate;
    float roomSize;
    float damping;
    float width;
    float wetVolume;
    float dryVolume;
    float mode;
} ma_reverb_node_config;

MA_API ma_reverb_node_config ma_reverb_node_config_init(ma_uint32 channels, ma_uint32 sampleRate);


typedef struct
{
    ma_node_base baseNode;
    verblib reverb;
} ma_reverb_node;

MA_API ma_result ma_reverb_node_init(ma_node_graph* pNodeGraph, const ma_reverb_node_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_reverb_node* pReverbNode);
MA_API void ma_reverb_node_uninit(ma_reverb_node* pReverbNode, const ma_allocation_callbacks* pAllocationCallbacks);

#ifdef __cplusplus
}
#endif
#endif  /* miniaudio_reverb_node_h */
122
thirdparty/miniaudio-0.11.24/extras/nodes/ma_reverb_node/ma_reverb_node_example.c
vendored
Normal file
@@ -0,0 +1,122 @@
#include "../../../miniaudio.c"
#include "ma_reverb_node.c"

#include <stdio.h>

#define DEVICE_FORMAT       ma_format_f32   /* Must always be f32 for this example because the node graph system only works with this. */
#define DEVICE_CHANNELS     1               /* For this example, always set to 1. */
#define DEVICE_SAMPLE_RATE  48000           /* Cannot be less than 22050 for this example. */

static ma_audio_buffer_ref g_dataSupply;     /* The underlying data source of the source node. */
static ma_data_source_node g_dataSupplyNode; /* The node that will sit at the root level. Will be reading data from g_dataSupply. */
static ma_reverb_node g_reverbNode;          /* The reverb node. */
static ma_node_graph g_nodeGraph;            /* The main node graph that we'll be feeding data through. */

void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    /*
    This example assumes the playback and capture sides use the same format and channel count. The
    format must be f32.
    */
    if (pDevice->capture.format != DEVICE_FORMAT || pDevice->playback.format != DEVICE_FORMAT || pDevice->capture.channels != pDevice->playback.channels) {
        return;
    }

    /*
    The node graph system is a pulling style of API. At the lowest level of the chain will be a
    node acting as a data source for the purpose of delivering the initial audio data. In our case,
    the data source is our `pInput` buffer. We need to update the underlying data source so that it
    reads data from `pInput`.
    */
    ma_audio_buffer_ref_set_data(&g_dataSupply, pInput, frameCount);

    /* With the source buffer configured we can now read directly from the node graph. */
    ma_node_graph_read_pcm_frames(&g_nodeGraph, pOutput, frameCount, NULL);
}

int main(int argc, char** argv)
{
    ma_result result;
    ma_device_config deviceConfig;
    ma_device device;
    ma_node_graph_config nodeGraphConfig;
    ma_reverb_node_config reverbNodeConfig;
    ma_data_source_node_config dataSupplyNodeConfig;

    deviceConfig = ma_device_config_init(ma_device_type_duplex);
    deviceConfig.capture.pDeviceID  = NULL;
    deviceConfig.capture.format     = DEVICE_FORMAT;
    deviceConfig.capture.channels   = DEVICE_CHANNELS;
    deviceConfig.capture.shareMode  = ma_share_mode_shared;
    deviceConfig.playback.pDeviceID = NULL;
    deviceConfig.playback.format    = DEVICE_FORMAT;
    deviceConfig.playback.channels  = DEVICE_CHANNELS;
    deviceConfig.sampleRate         = DEVICE_SAMPLE_RATE;
    deviceConfig.dataCallback       = data_callback;
    result = ma_device_init(NULL, &deviceConfig, &device);
    if (result != MA_SUCCESS) {
        return result;
    }


    /* Node graph. */
    nodeGraphConfig = ma_node_graph_config_init(device.capture.channels);

    result = ma_node_graph_init(&nodeGraphConfig, NULL, &g_nodeGraph);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize node graph.\n");
        goto done0;
    }


    /* Reverb. Attached straight to the endpoint. */
    reverbNodeConfig = ma_reverb_node_config_init(device.capture.channels, device.sampleRate);

    result = ma_reverb_node_init(&g_nodeGraph, &reverbNodeConfig, NULL, &g_reverbNode);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize reverb node.\n");
        goto done1;
    }

    ma_node_attach_output_bus(&g_reverbNode, 0, ma_node_graph_get_endpoint(&g_nodeGraph), 0);


    /* Data supply. Attached to input bus 0 of the reverb node. */
    result = ma_audio_buffer_ref_init(device.capture.format, device.capture.channels, NULL, 0, &g_dataSupply);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize audio buffer for source.\n");
        goto done2;
    }

    dataSupplyNodeConfig = ma_data_source_node_config_init(&g_dataSupply);

    result = ma_data_source_node_init(&g_nodeGraph, &dataSupplyNodeConfig, NULL, &g_dataSupplyNode);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize source node.\n");
        goto done2;
    }

    ma_node_attach_output_bus(&g_dataSupplyNode, 0, &g_reverbNode, 0);


    /* Now we just start the device and wait for the user to terminate the program. */
    ma_device_start(&device);

    printf("Press Enter to quit...\n");
    getchar();

    /* It's important that we stop the device first or else we'll uninitialize the graph from under the device. */
    ma_device_stop(&device);


    /*done3:*/ ma_data_source_node_uninit(&g_dataSupplyNode, NULL);
    done2: ma_reverb_node_uninit(&g_reverbNode, NULL);
    done1: ma_node_graph_uninit(&g_nodeGraph, NULL);
    done0: ma_device_uninit(&device);

    (void)argc;
    (void)argv;

    return 0;
}
753
thirdparty/miniaudio-0.11.24/extras/nodes/ma_reverb_node/verblib.h
vendored
Normal file
@@ -0,0 +1,753 @@
|
||||
/* Reverb Library
|
||||
* Verblib version 0.5 - 2022-10-25
|
||||
*
|
||||
* Philip Bennefall - philip@blastbay.com
|
||||
*
|
||||
* See the end of this file for licensing terms.
|
||||
* This reverb is based on Freeverb, a public domain reverb written by Jezar at Dreampoint.
|
||||
*
|
||||
* IMPORTANT: The reverb currently only works with 1 or 2 channels, at sample rates of 22050 HZ and above.
|
||||
* These restrictions may be lifted in a future version.
|
||||
*
|
||||
* USAGE
|
||||
*
|
||||
* This is a single-file library. To use it, do something like the following in one .c file.
|
||||
* #define VERBLIB_IMPLEMENTATION
|
||||
* #include "verblib.h"
|
||||
*
|
||||
* You can then #include this file in other parts of the program as you would with any other header file.
|
||||
*/
|
||||
|
||||
#ifndef VERBLIB_H
|
||||
#define VERBLIB_H
|
||||
|
||||
#ifdef __cplusplus
|
||||
extern "C" {
|
||||
#endif
|
||||
|
||||
/* COMPILE-TIME OPTIONS */
|
||||
|
||||
/* The maximum sample rate that should be supported, specified as a multiple of 44100. */
|
||||
#ifndef verblib_max_sample_rate_multiplier
|
||||
#define verblib_max_sample_rate_multiplier 4
|
||||
#endif
|
||||
|
||||
/* The silence threshold which is used when calculating decay time. */
|
||||
#ifndef verblib_silence_threshold
|
||||
#define verblib_silence_threshold 80.0 /* In dB (absolute). */
|
||||
#endif
|
||||
|
||||
/* PUBLIC API */
|
||||
|
||||
typedef struct verblib verblib;
|
||||
|
||||
/* Initialize a verblib structure.
|
||||
*
|
||||
* Call this function to initialize the verblib structure.
|
||||
* Returns nonzero (true) on success or 0 (false) on failure.
|
||||
* The function will only fail if one or more of the parameters are invalid.
|
||||
*/
|
||||
int verblib_initialize ( verblib* verb, unsigned long sample_rate, unsigned int channels );
|
||||
|
||||
/* Run the reverb.
|
||||
*
|
||||
* Call this function continuously to generate your output.
|
||||
* output_buffer may be the same pointer as input_buffer if in place processing is desired.
|
||||
* frames specifies the number of sample frames that should be processed.
|
||||
*/
|
||||
void verblib_process ( verblib* verb, const float* input_buffer, float* output_buffer, unsigned long frames );
|
||||
|
||||
/* Set the size of the room, between 0.0 and 1.0. */
|
||||
void verblib_set_room_size ( verblib* verb, float value );
|
||||
|
||||
/* Get the size of the room. */
|
||||
float verblib_get_room_size ( const verblib* verb );
|
||||
|
||||
/* Set the amount of damping, between 0.0 and 1.0. */
|
||||
void verblib_set_damping ( verblib* verb, float value );
|
||||
|
||||
/* Get the amount of damping. */
|
||||
float verblib_get_damping ( const verblib* verb );
|
||||
|
||||
/* Set the stereo width of the reverb, between 0.0 and 1.0. */
|
||||
void verblib_set_width ( verblib* verb, float value );
|
||||
|
||||
/* Get the stereo width of the reverb. */
|
||||
float verblib_get_width ( const verblib* verb );
|
||||
|
||||
/* Set the volume of the wet signal, between 0.0 and 1.0. */
|
||||
void verblib_set_wet ( verblib* verb, float value );
|
||||
|
||||
/* Get the volume of the wet signal. */
|
||||
float verblib_get_wet ( const verblib* verb );
|
||||
|
||||
/* Set the volume of the dry signal, between 0.0 and 1.0. */
|
||||
void verblib_set_dry ( verblib* verb, float value );
|
||||
|
||||
/* Get the volume of the dry signal. */
|
||||
float verblib_get_dry ( const verblib* verb );
|
||||
|
||||
/* Set the stereo width of the input signal sent to the reverb, 0.0 or greater.
 * Values less than 1.0 narrow the signal, 1.0 sends the input signal unmodified, values greater than 1.0 widen the signal.
 */
void verblib_set_input_width ( verblib* verb, float value );

/* Get the stereo width of the input signal sent to the reverb. */
float verblib_get_input_width ( const verblib* verb );

/* Set the mode of the reverb, where values below 0.5 mean normal and values above mean frozen. */
void verblib_set_mode ( verblib* verb, float value );

/* Get the mode of the reverb. */
float verblib_get_mode ( const verblib* verb );

/* Get the decay time in sample frames based on the current room size setting. */
/* If freeze mode is active, the decay time is infinite and this function returns 0. */
unsigned long verblib_get_decay_time_in_frames ( const verblib* verb );

/* INTERNAL STRUCTURES */

/* Allpass filter */
typedef struct verblib_allpass verblib_allpass;
struct verblib_allpass
{
    float* buffer;
    float feedback;
    int bufsize;
    int bufidx;
};

/* Comb filter */
typedef struct verblib_comb verblib_comb;
struct verblib_comb
{
    float* buffer;
    float feedback;
    float filterstore;
    float damp1;
    float damp2;
    int bufsize;
    int bufidx;
};

/* Reverb model tuning values */
#define verblib_numcombs 8
#define verblib_numallpasses 4
#define verblib_muted 0.0f
#define verblib_fixedgain 0.015f
#define verblib_scalewet 3.0f
#define verblib_scaledry 2.0f
#define verblib_scaledamp 0.8f
#define verblib_scaleroom 0.28f
#define verblib_offsetroom 0.7f
#define verblib_initialroom 0.5f
#define verblib_initialdamp 0.25f
#define verblib_initialwet 1.0f/verblib_scalewet
#define verblib_initialdry 0.0f
#define verblib_initialwidth 1.0f
#define verblib_initialinputwidth 0.0f
#define verblib_initialmode 0.0f
#define verblib_freezemode 0.5f
#define verblib_stereospread 23
/*
 * These values assume a 44.1kHz sample rate, but will be scaled appropriately for other sample rates.
 * The values were obtained by listening tests.
 */
#define verblib_combtuningL1 1116
#define verblib_combtuningR1 (1116+verblib_stereospread)
#define verblib_combtuningL2 1188
#define verblib_combtuningR2 (1188+verblib_stereospread)
#define verblib_combtuningL3 1277
#define verblib_combtuningR3 (1277+verblib_stereospread)
#define verblib_combtuningL4 1356
#define verblib_combtuningR4 (1356+verblib_stereospread)
#define verblib_combtuningL5 1422
#define verblib_combtuningR5 (1422+verblib_stereospread)
#define verblib_combtuningL6 1491
#define verblib_combtuningR6 (1491+verblib_stereospread)
#define verblib_combtuningL7 1557
#define verblib_combtuningR7 (1557+verblib_stereospread)
#define verblib_combtuningL8 1617
#define verblib_combtuningR8 (1617+verblib_stereospread)
#define verblib_allpasstuningL1 556
#define verblib_allpasstuningR1 (556+verblib_stereospread)
#define verblib_allpasstuningL2 441
#define verblib_allpasstuningR2 (441+verblib_stereospread)
#define verblib_allpasstuningL3 341
#define verblib_allpasstuningR3 (341+verblib_stereospread)
#define verblib_allpasstuningL4 225
#define verblib_allpasstuningR4 (225+verblib_stereospread)

/* The main reverb structure. This is the structure that you will create an instance of when using the reverb. */
struct verblib
{
    unsigned int channels;
    float gain;
    float roomsize, roomsize1;
    float damp, damp1;
    float wet, wet1, wet2;
    float dry;
    float width;
    float input_width;
    float mode;

    /*
     * The following are all declared inline
     * to remove the need for dynamic allocation.
     */

    /* Comb filters */
    verblib_comb combL[verblib_numcombs];
    verblib_comb combR[verblib_numcombs];

    /* Allpass filters */
    verblib_allpass allpassL[verblib_numallpasses];
    verblib_allpass allpassR[verblib_numallpasses];

    /* Buffers for the combs */
    float bufcombL1[verblib_combtuningL1 * verblib_max_sample_rate_multiplier];
    float bufcombR1[verblib_combtuningR1 * verblib_max_sample_rate_multiplier];
    float bufcombL2[verblib_combtuningL2 * verblib_max_sample_rate_multiplier];
    float bufcombR2[verblib_combtuningR2 * verblib_max_sample_rate_multiplier];
    float bufcombL3[verblib_combtuningL3 * verblib_max_sample_rate_multiplier];
    float bufcombR3[verblib_combtuningR3 * verblib_max_sample_rate_multiplier];
    float bufcombL4[verblib_combtuningL4 * verblib_max_sample_rate_multiplier];
    float bufcombR4[verblib_combtuningR4 * verblib_max_sample_rate_multiplier];
    float bufcombL5[verblib_combtuningL5 * verblib_max_sample_rate_multiplier];
    float bufcombR5[verblib_combtuningR5 * verblib_max_sample_rate_multiplier];
    float bufcombL6[verblib_combtuningL6 * verblib_max_sample_rate_multiplier];
    float bufcombR6[verblib_combtuningR6 * verblib_max_sample_rate_multiplier];
    float bufcombL7[verblib_combtuningL7 * verblib_max_sample_rate_multiplier];
    float bufcombR7[verblib_combtuningR7 * verblib_max_sample_rate_multiplier];
    float bufcombL8[verblib_combtuningL8 * verblib_max_sample_rate_multiplier];
    float bufcombR8[verblib_combtuningR8 * verblib_max_sample_rate_multiplier];

    /* Buffers for the allpasses */
    float bufallpassL1[verblib_allpasstuningL1 * verblib_max_sample_rate_multiplier];
    float bufallpassR1[verblib_allpasstuningR1 * verblib_max_sample_rate_multiplier];
    float bufallpassL2[verblib_allpasstuningL2 * verblib_max_sample_rate_multiplier];
    float bufallpassR2[verblib_allpasstuningR2 * verblib_max_sample_rate_multiplier];
    float bufallpassL3[verblib_allpasstuningL3 * verblib_max_sample_rate_multiplier];
    float bufallpassR3[verblib_allpasstuningR3 * verblib_max_sample_rate_multiplier];
    float bufallpassL4[verblib_allpasstuningL4 * verblib_max_sample_rate_multiplier];
    float bufallpassR4[verblib_allpasstuningR4 * verblib_max_sample_rate_multiplier];
};

#ifdef __cplusplus
}
#endif

#endif /* VERBLIB_H */

/* IMPLEMENTATION */

#ifdef VERBLIB_IMPLEMENTATION

#include <stddef.h>
#include <math.h>

#ifdef _MSC_VER
#define VERBLIB_INLINE __forceinline
#elif defined(__GNUC__)
#if defined(__STRICT_ANSI__)
#define VERBLIB_GNUC_INLINE_HINT __inline__
#else
#define VERBLIB_GNUC_INLINE_HINT inline
#endif

#if (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 2)) || defined(__clang__)
#define VERBLIB_INLINE VERBLIB_GNUC_INLINE_HINT __attribute__((always_inline))
#else
#define VERBLIB_INLINE VERBLIB_GNUC_INLINE_HINT
#endif
#elif defined(__WATCOMC__)
#define VERBLIB_INLINE __inline
#else
#define VERBLIB_INLINE
#endif

#define verblib_max(x, y) (((x) > (y)) ? (x) : (y))

#define undenormalise(sample) sample+=1.0f; sample-=1.0f;

/* Allpass filter */
static void verblib_allpass_initialize ( verblib_allpass* allpass, float* buf, int size )
{
    allpass->buffer = buf;
    allpass->bufsize = size;
    allpass->bufidx = 0;
}

static VERBLIB_INLINE float verblib_allpass_process ( verblib_allpass* allpass, float input )
{
    float output;
    float bufout;

    bufout = allpass->buffer[allpass->bufidx];
    undenormalise ( bufout );

    output = -input + bufout;
    allpass->buffer[allpass->bufidx] = input + ( bufout * allpass->feedback );

    if ( ++allpass->bufidx >= allpass->bufsize )
    {
        allpass->bufidx = 0;
    }

    return output;
}

static void verblib_allpass_mute ( verblib_allpass* allpass )
{
    int i;
    for ( i = 0; i < allpass->bufsize; i++ )
    {
        allpass->buffer[i] = 0.0f;
    }
}

/* Comb filter */
static void verblib_comb_initialize ( verblib_comb* comb, float* buf, int size )
{
    comb->buffer = buf;
    comb->bufsize = size;
    comb->filterstore = 0.0f;
    comb->bufidx = 0;
}

static void verblib_comb_mute ( verblib_comb* comb )
{
    int i;
    for ( i = 0; i < comb->bufsize; i++ )
    {
        comb->buffer[i] = 0.0f;
    }
}

static void verblib_comb_set_damp ( verblib_comb* comb, float val )
{
    comb->damp1 = val;
    comb->damp2 = 1.0f - val;
}

static VERBLIB_INLINE float verblib_comb_process ( verblib_comb* comb, float input )
{
    float output;

    output = comb->buffer[comb->bufidx];
    undenormalise ( output );

    comb->filterstore = ( output * comb->damp2 ) + ( comb->filterstore * comb->damp1 );
    undenormalise ( comb->filterstore );

    comb->buffer[comb->bufidx] = input + ( comb->filterstore * comb->feedback );

    if ( ++comb->bufidx >= comb->bufsize )
    {
        comb->bufidx = 0;
    }

    return output;
}

static void verblib_update ( verblib* verb )
{
    /* Recalculate internal values after parameter change. */

    int i;

    verb->wet1 = verb->wet * ( verb->width / 2.0f + 0.5f );
    verb->wet2 = verb->wet * ( ( 1.0f - verb->width ) / 2.0f );

    if ( verb->mode >= verblib_freezemode )
    {
        verb->roomsize1 = 1.0f;
        verb->damp1 = 0.0f;
        verb->gain = verblib_muted;
    }
    else
    {
        verb->roomsize1 = verb->roomsize;
        verb->damp1 = verb->damp;
        verb->gain = verblib_fixedgain;
    }

    for ( i = 0; i < verblib_numcombs; i++ )
    {
        verb->combL[i].feedback = verb->roomsize1;
        verb->combR[i].feedback = verb->roomsize1;
        verblib_comb_set_damp ( &verb->combL[i], verb->damp1 );
        verblib_comb_set_damp ( &verb->combR[i], verb->damp1 );
    }
}

static void verblib_mute ( verblib* verb )
{
    int i;
    if ( verblib_get_mode ( verb ) >= verblib_freezemode )
    {
        return;
    }

    for ( i = 0; i < verblib_numcombs; i++ )
    {
        verblib_comb_mute ( &verb->combL[i] );
        verblib_comb_mute ( &verb->combR[i] );
    }
    for ( i = 0; i < verblib_numallpasses; i++ )
    {
        verblib_allpass_mute ( &verb->allpassL[i] );
        verblib_allpass_mute ( &verb->allpassR[i] );
    }
}

static int verblib_get_verblib_scaled_buffer_size ( unsigned long sample_rate, unsigned long value )
{
    long double result = ( long double ) sample_rate;
    result /= 44100.0;
    result = ( ( long double ) value ) * result;
    if ( result < 1.0 )
    {
        result = 1.0;
    }
    return ( int ) result;
}

int verblib_initialize ( verblib* verb, unsigned long sample_rate, unsigned int channels )
{
    int i;

    if ( channels != 1 && channels != 2 )
    {
        return 0; /* Currently supports only 1 or 2 channels. */
    }
    if ( sample_rate < 22050 )
    {
        return 0; /* The minimum supported sample rate is 22050 Hz. */
    }
    else if ( sample_rate > 44100 * verblib_max_sample_rate_multiplier )
    {
        return 0; /* The sample rate is too high. */
    }

    verb->channels = channels;

    /* Tie the components to their buffers. */
    verblib_comb_initialize ( &verb->combL[0], verb->bufcombL1, verblib_get_verblib_scaled_buffer_size ( sample_rate, verblib_combtuningL1 ) );
    verblib_comb_initialize ( &verb->combR[0], verb->bufcombR1, verblib_get_verblib_scaled_buffer_size ( sample_rate, verblib_combtuningR1 ) );
    verblib_comb_initialize ( &verb->combL[1], verb->bufcombL2, verblib_get_verblib_scaled_buffer_size ( sample_rate, verblib_combtuningL2 ) );
    verblib_comb_initialize ( &verb->combR[1], verb->bufcombR2, verblib_get_verblib_scaled_buffer_size ( sample_rate, verblib_combtuningR2 ) );
    verblib_comb_initialize ( &verb->combL[2], verb->bufcombL3, verblib_get_verblib_scaled_buffer_size ( sample_rate, verblib_combtuningL3 ) );
    verblib_comb_initialize ( &verb->combR[2], verb->bufcombR3, verblib_get_verblib_scaled_buffer_size ( sample_rate, verblib_combtuningR3 ) );
    verblib_comb_initialize ( &verb->combL[3], verb->bufcombL4, verblib_get_verblib_scaled_buffer_size ( sample_rate, verblib_combtuningL4 ) );
    verblib_comb_initialize ( &verb->combR[3], verb->bufcombR4, verblib_get_verblib_scaled_buffer_size ( sample_rate, verblib_combtuningR4 ) );
    verblib_comb_initialize ( &verb->combL[4], verb->bufcombL5, verblib_get_verblib_scaled_buffer_size ( sample_rate, verblib_combtuningL5 ) );
    verblib_comb_initialize ( &verb->combR[4], verb->bufcombR5, verblib_get_verblib_scaled_buffer_size ( sample_rate, verblib_combtuningR5 ) );
    verblib_comb_initialize ( &verb->combL[5], verb->bufcombL6, verblib_get_verblib_scaled_buffer_size ( sample_rate, verblib_combtuningL6 ) );
    verblib_comb_initialize ( &verb->combR[5], verb->bufcombR6, verblib_get_verblib_scaled_buffer_size ( sample_rate, verblib_combtuningR6 ) );
    verblib_comb_initialize ( &verb->combL[6], verb->bufcombL7, verblib_get_verblib_scaled_buffer_size ( sample_rate, verblib_combtuningL7 ) );
    verblib_comb_initialize ( &verb->combR[6], verb->bufcombR7, verblib_get_verblib_scaled_buffer_size ( sample_rate, verblib_combtuningR7 ) );
    verblib_comb_initialize ( &verb->combL[7], verb->bufcombL8, verblib_get_verblib_scaled_buffer_size ( sample_rate, verblib_combtuningL8 ) );
    verblib_comb_initialize ( &verb->combR[7], verb->bufcombR8, verblib_get_verblib_scaled_buffer_size ( sample_rate, verblib_combtuningR8 ) );

    verblib_allpass_initialize ( &verb->allpassL[0], verb->bufallpassL1, verblib_get_verblib_scaled_buffer_size ( sample_rate, verblib_allpasstuningL1 ) );
    verblib_allpass_initialize ( &verb->allpassR[0], verb->bufallpassR1, verblib_get_verblib_scaled_buffer_size ( sample_rate, verblib_allpasstuningR1 ) );
    verblib_allpass_initialize ( &verb->allpassL[1], verb->bufallpassL2, verblib_get_verblib_scaled_buffer_size ( sample_rate, verblib_allpasstuningL2 ) );
    verblib_allpass_initialize ( &verb->allpassR[1], verb->bufallpassR2, verblib_get_verblib_scaled_buffer_size ( sample_rate, verblib_allpasstuningR2 ) );
    verblib_allpass_initialize ( &verb->allpassL[2], verb->bufallpassL3, verblib_get_verblib_scaled_buffer_size ( sample_rate, verblib_allpasstuningL3 ) );
    verblib_allpass_initialize ( &verb->allpassR[2], verb->bufallpassR3, verblib_get_verblib_scaled_buffer_size ( sample_rate, verblib_allpasstuningR3 ) );
    verblib_allpass_initialize ( &verb->allpassL[3], verb->bufallpassL4, verblib_get_verblib_scaled_buffer_size ( sample_rate, verblib_allpasstuningL4 ) );
    verblib_allpass_initialize ( &verb->allpassR[3], verb->bufallpassR4, verblib_get_verblib_scaled_buffer_size ( sample_rate, verblib_allpasstuningR4 ) );

    /* Set default values. */
    for ( i = 0; i < verblib_numallpasses; i++ )
    {
        verb->allpassL[i].feedback = 0.5f;
        verb->allpassR[i].feedback = 0.5f;
    }

    verblib_set_wet ( verb, verblib_initialwet );
    verblib_set_room_size ( verb, verblib_initialroom );
    verblib_set_dry ( verb, verblib_initialdry );
    verblib_set_damping ( verb, verblib_initialdamp );
    verblib_set_width ( verb, verblib_initialwidth );
    verblib_set_input_width ( verb, verblib_initialinputwidth );
    verblib_set_mode ( verb, verblib_initialmode );

    /* The buffers will be full of rubbish - so we MUST mute them. */
    verblib_mute ( verb );

    return 1;
}

void verblib_process ( verblib* verb, const float* input_buffer, float* output_buffer, unsigned long frames )
{
    int i;
    float outL, outR, input;

    if ( verb->channels == 1 )
    {
        while ( frames-- > 0 )
        {
            outL = 0.0f;
            input = ( input_buffer[0] * 2.0f ) * verb->gain;

            /* Accumulate comb filters in parallel. */
            for ( i = 0; i < verblib_numcombs; i++ )
            {
                outL += verblib_comb_process ( &verb->combL[i], input );
            }

            /* Feed through allpasses in series. */
            for ( i = 0; i < verblib_numallpasses; i++ )
            {
                outL = verblib_allpass_process ( &verb->allpassL[i], outL );
            }

            /* Calculate output REPLACING anything already there. */
            output_buffer[0] = outL * verb->wet1 + input_buffer[0] * verb->dry;

            /* Increment sample pointers. */
            ++input_buffer;
            ++output_buffer;
        }
    }
    else if ( verb->channels == 2 )
    {
        if ( verb->input_width > 0.0f ) /* Stereo input is widened or narrowed. */
        {
            /*
             * The stereo mid/side code is derived from:
             * https://www.musicdsp.org/en/latest/Effects/256-stereo-width-control-obtained-via-transfromation-matrix.html
             * The description of the code on the above page says:
             *
             * This work is hereby placed in the public domain for all purposes, including
             * use in commercial applications.
             */

            const float tmp = 1 / verblib_max ( 1 + verb->input_width, 2 );
            const float coef_mid = 1 * tmp;
            const float coef_side = verb->input_width * tmp;
            while ( frames-- > 0 )
            {
                const float mid = ( input_buffer[0] + input_buffer[1] ) * coef_mid;
                const float side = ( input_buffer[1] - input_buffer[0] ) * coef_side;
                const float input_left = ( mid - side ) * ( verb->gain * 2.0f );
                const float input_right = ( mid + side ) * ( verb->gain * 2.0f );

                outL = outR = 0.0f;

                /* Accumulate comb filters in parallel. */
                for ( i = 0; i < verblib_numcombs; i++ )
                {
                    outL += verblib_comb_process ( &verb->combL[i], input_left );
                    outR += verblib_comb_process ( &verb->combR[i], input_right );
                }

                /* Feed through allpasses in series. */
                for ( i = 0; i < verblib_numallpasses; i++ )
                {
                    outL = verblib_allpass_process ( &verb->allpassL[i], outL );
                    outR = verblib_allpass_process ( &verb->allpassR[i], outR );
                }

                /* Calculate output REPLACING anything already there. */
                output_buffer[0] = outL * verb->wet1 + outR * verb->wet2 + input_buffer[0] * verb->dry;
                output_buffer[1] = outR * verb->wet1 + outL * verb->wet2 + input_buffer[1] * verb->dry;

                /* Increment sample pointers. */
                input_buffer += 2;
                output_buffer += 2;
            }
        }
        else /* Stereo input is summed to mono. */
        {
            while ( frames-- > 0 )
            {
                outL = outR = 0.0f;
                input = ( input_buffer[0] + input_buffer[1] ) * verb->gain;

                /* Accumulate comb filters in parallel. */
                for ( i = 0; i < verblib_numcombs; i++ )
                {
                    outL += verblib_comb_process ( &verb->combL[i], input );
                    outR += verblib_comb_process ( &verb->combR[i], input );
                }

                /* Feed through allpasses in series. */
                for ( i = 0; i < verblib_numallpasses; i++ )
                {
                    outL = verblib_allpass_process ( &verb->allpassL[i], outL );
                    outR = verblib_allpass_process ( &verb->allpassR[i], outR );
                }

                /* Calculate output REPLACING anything already there. */
                output_buffer[0] = outL * verb->wet1 + outR * verb->wet2 + input_buffer[0] * verb->dry;
                output_buffer[1] = outR * verb->wet1 + outL * verb->wet2 + input_buffer[1] * verb->dry;

                /* Increment sample pointers. */
                input_buffer += 2;
                output_buffer += 2;
            }
        }
    }
}

void verblib_set_room_size ( verblib* verb, float value )
{
    verb->roomsize = ( value * verblib_scaleroom ) + verblib_offsetroom;
    verblib_update ( verb );
}

float verblib_get_room_size ( const verblib* verb )
{
    return ( verb->roomsize - verblib_offsetroom ) / verblib_scaleroom;
}

void verblib_set_damping ( verblib* verb, float value )
{
    verb->damp = value * verblib_scaledamp;
    verblib_update ( verb );
}

float verblib_get_damping ( const verblib* verb )
{
    return verb->damp / verblib_scaledamp;
}

void verblib_set_wet ( verblib* verb, float value )
{
    verb->wet = value * verblib_scalewet;
    verblib_update ( verb );
}

float verblib_get_wet ( const verblib* verb )
{
    return verb->wet / verblib_scalewet;
}

void verblib_set_dry ( verblib* verb, float value )
{
    verb->dry = value * verblib_scaledry;
}

float verblib_get_dry ( const verblib* verb )
{
    return verb->dry / verblib_scaledry;
}

void verblib_set_width ( verblib* verb, float value )
{
    verb->width = value;
    verblib_update ( verb );
}

float verblib_get_width ( const verblib* verb )
{
    return verb->width;
}

void verblib_set_input_width ( verblib* verb, float value )
{
    verb->input_width = value;
}

float verblib_get_input_width ( const verblib* verb )
{
    return verb->input_width;
}

void verblib_set_mode ( verblib* verb, float value )
{
    verb->mode = value;
    verblib_update ( verb );
}

float verblib_get_mode ( const verblib* verb )
{
    if ( verb->mode >= verblib_freezemode )
    {
        return 1.0f;
    }
    return 0.0f;
}

unsigned long verblib_get_decay_time_in_frames ( const verblib* verb )
{
    double decay;

    if ( verb->mode >= verblib_freezemode )
    {
        return 0; /* Freeze mode creates an infinite decay. */
    }

    decay = verblib_silence_threshold / fabs ( -20.0 * log ( 1.0 / verb->roomsize1 ) );
    decay *= ( double ) ( verb->combR[7].bufsize * 2 );
    return ( unsigned long ) decay;
}

#endif /* VERBLIB_IMPLEMENTATION */

/* REVISION HISTORY
 *
 * Version 0.5 - 2022-10-25
 * Added two functions called verblib_set_input_width and verblib_get_input_width.
 *
 * Version 0.4 - 2021-01-23
 * Added a function called verblib_get_decay_time_in_frames.
 *
 * Version 0.3 - 2021-01-18
 * Added support for sample rates of 22050 and above.
 *
 * Version 0.2 - 2021-01-17
 * Added support for processing mono audio.
 *
 * Version 0.1 - 2021-01-17
 * Initial release.
 */

/* LICENSE

This software is available under 2 licenses -- choose whichever you prefer.
------------------------------------------------------------------------------
ALTERNATIVE A - MIT No Attribution License
Copyright (c) 2022 Philip Bennefall

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
------------------------------------------------------------------------------
ALTERNATIVE B - Public Domain (www.unlicense.org)
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or distribute this
software, either in source code form or as a compiled binary, for any purpose,
commercial or non-commercial, and by any means.

In jurisdictions that recognize copyright laws, the author or authors of this
software dedicate any and all copyright interest in the software to the public
domain. We make this dedication for the benefit of the public at large and to
the detriment of our heirs and successors. We intend this dedication to be an
overt act of relinquishment in perpetuity of all present and future rights to
this software under copyright law.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
------------------------------------------------------------------------------
*/
86
thirdparty/miniaudio-0.11.24/extras/nodes/ma_vocoder_node/ma_vocoder_node.c
vendored
Normal file
@@ -0,0 +1,86 @@
#ifndef miniaudio_vocoder_node_c
#define miniaudio_vocoder_node_c

#define VOCLIB_IMPLEMENTATION
#include "ma_vocoder_node.h"

#include <string.h> /* For memset(). */

MA_API ma_vocoder_node_config ma_vocoder_node_config_init(ma_uint32 channels, ma_uint32 sampleRate)
{
    ma_vocoder_node_config config;

    memset(&config, 0, sizeof(config));
    config.nodeConfig = ma_node_config_init(); /* Input and output channels will be set in ma_vocoder_node_init(). */
    config.channels = channels;
    config.sampleRate = sampleRate;
    config.bands = 16;
    config.filtersPerBand = 6;

    return config;
}


static void ma_vocoder_node_process_pcm_frames(ma_node* pNode, const float** ppFramesIn, ma_uint32* pFrameCountIn, float** ppFramesOut, ma_uint32* pFrameCountOut)
{
    ma_vocoder_node* pVocoderNode = (ma_vocoder_node*)pNode;

    (void)pFrameCountIn;

    voclib_process(&pVocoderNode->voclib, ppFramesIn[0], ppFramesIn[1], ppFramesOut[0], *pFrameCountOut);
}

static ma_node_vtable g_ma_vocoder_node_vtable =
{
    ma_vocoder_node_process_pcm_frames,
    NULL,
    2,  /* 2 input buses. */
    1,  /* 1 output bus. */
    0
};

MA_API ma_result ma_vocoder_node_init(ma_node_graph* pNodeGraph, const ma_vocoder_node_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_vocoder_node* pVocoderNode)
{
    ma_result result;
    ma_node_config baseConfig;
    ma_uint32 inputChannels[2];
    ma_uint32 outputChannels[1];

    if (pVocoderNode == NULL) {
        return MA_INVALID_ARGS;
    }

    memset(pVocoderNode, 0, sizeof(*pVocoderNode));

    if (pConfig == NULL) {
        return MA_INVALID_ARGS;
    }

    if (voclib_initialize(&pVocoderNode->voclib, (unsigned char)pConfig->bands, (unsigned char)pConfig->filtersPerBand, (unsigned int)pConfig->sampleRate, (unsigned char)pConfig->channels) == 0) {
        return MA_INVALID_ARGS;
    }

    inputChannels [0] = pConfig->channels;  /* Source/carrier. */
    inputChannels [1] = 1;                  /* Excite/modulator. Must always be single channel. */
    outputChannels[0] = pConfig->channels;  /* The output channel count is always the same as the source/carrier. */

    baseConfig = pConfig->nodeConfig;
    baseConfig.vtable = &g_ma_vocoder_node_vtable;
    baseConfig.pInputChannels = inputChannels;
    baseConfig.pOutputChannels = outputChannels;

    result = ma_node_init(pNodeGraph, &baseConfig, pAllocationCallbacks, &pVocoderNode->baseNode);
    if (result != MA_SUCCESS) {
        return result;
    }

    return MA_SUCCESS;
}

MA_API void ma_vocoder_node_uninit(ma_vocoder_node* pVocoderNode, const ma_allocation_callbacks* pAllocationCallbacks)
{
    /* The base node must always be initialized first. */
    ma_node_uninit(pVocoderNode, pAllocationCallbacks);
}

#endif /* miniaudio_vocoder_node_c */
45
thirdparty/miniaudio-0.11.24/extras/nodes/ma_vocoder_node/ma_vocoder_node.h
vendored
Normal file
@@ -0,0 +1,45 @@
/* Include ma_vocoder_node.h after miniaudio.h */
#ifndef miniaudio_vocoder_node_h
#define miniaudio_vocoder_node_h

#include "../../../miniaudio.h"
#include "voclib.h"

#ifdef __cplusplus
extern "C" {
#endif

/*
The vocoder node has two inputs and one output. Inputs:

    Input Bus 0: The source/carrier stream.
    Input Bus 1: The excite/modulator stream.

The source (input bus 0) and the output must have the same channel count, which is restricted to 1 or 2.
The excite (input bus 1) is restricted to 1 channel.
*/
typedef struct
{
    ma_node_config nodeConfig;
    ma_uint32 channels;         /* The number of channels of the source, which will be the same as the output. Must be 1 or 2. The excite bus must always have one channel. */
    ma_uint32 sampleRate;
    ma_uint32 bands;            /* Defaults to 16. */
    ma_uint32 filtersPerBand;   /* Defaults to 6. */
} ma_vocoder_node_config;

MA_API ma_vocoder_node_config ma_vocoder_node_config_init(ma_uint32 channels, ma_uint32 sampleRate);


typedef struct
{
    ma_node_base baseNode;
    voclib_instance voclib;
} ma_vocoder_node;

MA_API ma_result ma_vocoder_node_init(ma_node_graph* pNodeGraph, const ma_vocoder_node_config* pConfig, const ma_allocation_callbacks* pAllocationCallbacks, ma_vocoder_node* pVocoderNode);
MA_API void ma_vocoder_node_uninit(ma_vocoder_node* pVocoderNode, const ma_allocation_callbacks* pAllocationCallbacks);

#ifdef __cplusplus
}
#endif
#endif /* miniaudio_vocoder_node_h */
152
thirdparty/miniaudio-0.11.24/extras/nodes/ma_vocoder_node/ma_vocoder_node_example.c
vendored
Normal file
@@ -0,0 +1,152 @@
|
||||
/*
|
||||
Demonstrates how to apply an effect to a duplex stream using the node graph system.
|
||||
|
||||
This example applies a vocoder effect to the input stream before outputting it. A custom node
|
||||
called `ma_vocoder_node` is used to achieve the effect which can be found in the extras folder in
the miniaudio repository. The vocoder node uses https://github.com/blastbay/voclib to achieve the
effect.
*/
#include "../../../miniaudio.c"
#include "ma_vocoder_node.c"

#include <stdio.h>

#define DEVICE_FORMAT       ma_format_f32   /* Must always be f32 for this example because the node graph system only works with this. */
#define DEVICE_CHANNELS     1               /* For this example, always set to 1. */

static ma_waveform g_sourceData;            /* The underlying data source of the source node. */
static ma_audio_buffer_ref g_exciteData;    /* The underlying data source of the excite node. */
static ma_data_source_node g_sourceNode;    /* A data source node containing the source data we'll be sending through to the vocoder. This will be routed into the first bus of the vocoder node. */
static ma_data_source_node g_exciteNode;    /* A data source node containing the excite data we'll be sending through to the vocoder. This will be routed into the second bus of the vocoder node. */
static ma_vocoder_node g_vocoderNode;       /* The vocoder node. */
static ma_node_graph g_nodeGraph;

void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    /*
    This example assumes the playback and capture sides use the same format and channel count. The
    format must be f32.
    */
    if (pDevice->capture.format != DEVICE_FORMAT || pDevice->playback.format != DEVICE_FORMAT || pDevice->capture.channels != pDevice->playback.channels) {
        return;
    }

    /*
    The node graph system is a pulling style of API. At the lowest level of the chain will be a
    node acting as a data source for the purpose of delivering the initial audio data. In our case,
    the data source is our `pInput` buffer. We need to update the underlying data source so that it
    reads data from `pInput`.
    */
    ma_audio_buffer_ref_set_data(&g_exciteData, pInput, frameCount);

    /* With the source buffer configured we can now read directly from the node graph. */
    ma_node_graph_read_pcm_frames(&g_nodeGraph, pOutput, frameCount, NULL);
}

int main(int argc, char** argv)
{
    ma_result result;
    ma_device_config deviceConfig;
    ma_device device;
    ma_node_graph_config nodeGraphConfig;
    ma_vocoder_node_config vocoderNodeConfig;
    ma_data_source_node_config sourceNodeConfig;
    ma_data_source_node_config exciteNodeConfig;
    ma_waveform_config waveformConfig;

    deviceConfig = ma_device_config_init(ma_device_type_duplex);
    deviceConfig.capture.pDeviceID  = NULL;
    deviceConfig.capture.format     = DEVICE_FORMAT;
    deviceConfig.capture.channels   = DEVICE_CHANNELS;
    deviceConfig.capture.shareMode  = ma_share_mode_shared;
    deviceConfig.playback.pDeviceID = NULL;
    deviceConfig.playback.format    = DEVICE_FORMAT;
    deviceConfig.playback.channels  = DEVICE_CHANNELS;
    deviceConfig.dataCallback       = data_callback;
    result = ma_device_init(NULL, &deviceConfig, &device);
    if (result != MA_SUCCESS) {
        return result;
    }


    /* Now we can set up our node graph. */
    nodeGraphConfig = ma_node_graph_config_init(device.capture.channels);

    result = ma_node_graph_init(&nodeGraphConfig, NULL, &g_nodeGraph);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize node graph.");
        goto done0;
    }


    /* Vocoder. Attached straight to the endpoint. */
    vocoderNodeConfig = ma_vocoder_node_config_init(device.capture.channels, device.sampleRate);

    result = ma_vocoder_node_init(&g_nodeGraph, &vocoderNodeConfig, NULL, &g_vocoderNode);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize vocoder node.");
        goto done1;
    }

    ma_node_attach_output_bus(&g_vocoderNode, 0, ma_node_graph_get_endpoint(&g_nodeGraph), 0);

    /* Amplify the volume of the vocoder output because in my testing it is a bit quiet. */
    ma_node_set_output_bus_volume(&g_vocoderNode, 0, 4);


    /* Source/carrier. Attached to input bus 0 of the vocoder node. */
    waveformConfig = ma_waveform_config_init(device.capture.format, device.capture.channels, device.sampleRate, ma_waveform_type_sawtooth, 1.0, 50);

    result = ma_waveform_init(&waveformConfig, &g_sourceData);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize waveform for source node.");
        goto done2;
    }

    sourceNodeConfig = ma_data_source_node_config_init(&g_sourceData);

    result = ma_data_source_node_init(&g_nodeGraph, &sourceNodeConfig, NULL, &g_sourceNode);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize source node.");
        goto done2;
    }

    ma_node_attach_output_bus(&g_sourceNode, 0, &g_vocoderNode, 0);


    /* Excite/modulator. Attached to input bus 1 of the vocoder node. */
    result = ma_audio_buffer_ref_init(device.capture.format, device.capture.channels, NULL, 0, &g_exciteData);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize audio buffer for excite node.");
        goto done3;
    }

    exciteNodeConfig = ma_data_source_node_config_init(&g_exciteData);

    result = ma_data_source_node_init(&g_nodeGraph, &exciteNodeConfig, NULL, &g_exciteNode);
    if (result != MA_SUCCESS) {
        printf("Failed to initialize excite node.");
        goto done3;
    }

    ma_node_attach_output_bus(&g_exciteNode, 0, &g_vocoderNode, 1);


    ma_device_start(&device);

    printf("Press Enter to quit...\n");
    getchar();

    /* It's important that we stop the device first or else we'll uninitialize the graph from under the device. */
    ma_device_stop(&device);

    /*done4:*/ ma_data_source_node_uninit(&g_exciteNode, NULL);
    done3: ma_data_source_node_uninit(&g_sourceNode, NULL);
    done2: ma_vocoder_node_uninit(&g_vocoderNode, NULL);
    done1: ma_node_graph_uninit(&g_nodeGraph, NULL);
    done0: ma_device_uninit(&device);

    (void)argc;
    (void)argv;
    return 0;
}

682
thirdparty/miniaudio-0.11.24/extras/nodes/ma_vocoder_node/voclib.h
vendored
Normal file
@@ -0,0 +1,682 @@
/* Vocoder Library
 * Voclib version 1.1 - 2019-02-16
 *
 * Philip Bennefall - philip@blastbay.com
 *
 * See the end of this file for licensing terms.
 * The filter implementation was derived from public domain code found on musicdsp.org (see the section called "Filters" for more details).
 *
 * USAGE
 *
 * This is a single-file library. To use it, do something like the following in one .c file.
 *     #define VOCLIB_IMPLEMENTATION
 *     #include "voclib.h"
 *
 * You can then #include this file in other parts of the program as you would with any other header file.
 */

#ifndef VOCLIB_H
#define VOCLIB_H

#ifdef __cplusplus
extern "C" {
#endif

/* COMPILE-TIME OPTIONS */

/* The maximum number of bands that the vocoder can be initialized with (lower this number to save memory). */
#define VOCLIB_MAX_BANDS 96

/* The maximum number of filters per vocoder band (lower this number to save memory). */
#define VOCLIB_MAX_FILTERS_PER_BAND 8

/* PUBLIC API */

typedef struct voclib_instance voclib_instance;

/* Initialize a voclib_instance structure.
 *
 * Call this function to initialize the voclib_instance structure.
 * bands is the number of bands that the vocoder should use; recommended values are between 12 and 64.
 * bands must be between 4 and VOCLIB_MAX_BANDS (inclusive).
 * filters_per_band determines the steepness with which the filterbank divides the signal; a value of 6 is recommended.
 * filters_per_band must be between 1 and VOCLIB_MAX_FILTERS_PER_BAND (inclusive).
 * sample_rate is the number of samples per second in hertz, and should be between 8000 and 192000 (inclusive).
 * carrier_channels is the number of channels that the carrier has, and should be between 1 and 2 (inclusive).
 * Note: The modulator must always have only one channel.
 * Returns nonzero (true) on success or 0 (false) on failure.
 * The function will only fail if one or more of the parameters are invalid.
 */
int voclib_initialize ( voclib_instance* instance, unsigned char bands, unsigned char filters_per_band, unsigned int sample_rate, unsigned char carrier_channels );

/* Run the vocoder.
 *
 * Call this function continuously to generate your output.
 * carrier_buffer and modulator_buffer should contain the carrier and modulator signals respectively.
 * The modulator must always have one channel.
 * If the carrier has two channels, the samples in carrier_buffer must be interleaved.
 * output_buffer will be filled with the result, and must be able to hold as many channels as the carrier.
 * If the carrier has two channels, the output buffer will be filled with interleaved samples.
 * output_buffer may be the same pointer as either carrier_buffer or modulator_buffer as long as it can hold the same number of channels as the carrier.
 * The processing is performed in place.
 * frames specifies the number of sample frames that should be processed.
 * Returns nonzero (true) on success or 0 (false) on failure.
 * The function will only fail if one or more of the parameters are invalid.
 */
int voclib_process ( voclib_instance* instance, const float* carrier_buffer, const float* modulator_buffer, float* output_buffer, unsigned int frames );

/* Reset the vocoder sample history.
 *
 * In order to run smoothly, the vocoder needs to store a few recent samples internally.
 * This function resets that internal history. This should only be done if you are processing a new stream.
 * Resetting the history in the middle of a stream will cause clicks.
 */
void voclib_reset_history ( voclib_instance* instance );

/* Set the reaction time of the vocoder in seconds.
 *
 * The reaction time is the time it takes for the vocoder to respond to a volume change in the modulator.
 * A value of 0.03 (AKA 30 milliseconds) is recommended for intelligible speech.
 * Values lower than about 0.02 will make the output sound raspy and unpleasant.
 * Values above 0.2 or so will make the speech hard to understand, but can be used for special effects.
 * The value must be between 0.002 and 2.0 (inclusive).
 * Returns nonzero (true) on success or 0 (false) on failure.
 * The function will only fail if the parameter is invalid.
 */
int voclib_set_reaction_time ( voclib_instance* instance, float reaction_time );

/* Get the current reaction time of the vocoder in seconds. */
float voclib_get_reaction_time ( const voclib_instance* instance );

/* Set the formant shift of the vocoder in octaves.
 *
 * Formant shifting changes the size of the speaker's head.
 * A value of 1.0 leaves the head size unmodified.
 * Values lower than 1.0 make the head larger, and values above 1.0 make it smaller.
 * The value must be between 0.25 and 4.0 (inclusive).
 * Returns nonzero (true) on success or 0 (false) on failure.
 * The function will only fail if the parameter is invalid.
 */
int voclib_set_formant_shift ( voclib_instance* instance, float formant_shift );

/* Get the current formant shift of the vocoder in octaves. */
float voclib_get_formant_shift ( const voclib_instance* instance );

/* INTERNAL STRUCTURES */

/* This holds the state required to run samples through a filter. */
typedef struct
{
    float a0, a1, a2, a3, a4;
    float x1, x2, y1, y2;
} voclib_biquad;

/* Stores the state required for our envelope follower. */
typedef struct
{
    float coef;
    float history[4];
} voclib_envelope;

/* Holds a set of filters required for one vocoder band. */
typedef struct
{
    voclib_biquad filters[VOCLIB_MAX_FILTERS_PER_BAND];
} voclib_band;

/* The main instance structure. This is the structure that you will create an instance of when using the vocoder. */
struct voclib_instance
{
    voclib_band analysis_bands[VOCLIB_MAX_BANDS];         /* The filterbank used for analysis (these are applied to the modulator). */
    voclib_envelope analysis_envelopes[VOCLIB_MAX_BANDS]; /* The envelopes used to smooth the analysis bands. */
    voclib_band synthesis_bands[VOCLIB_MAX_BANDS * 2];    /* The filterbank used for synthesis (these are applied to the carrier). The second half of the array is only used for stereo carriers. */
    float reaction_time;                                  /* In seconds. Higher values make the vocoder respond more slowly to changes in the modulator. */
    float formant_shift;                                  /* In octaves. 1.0 is unchanged. */
    unsigned int sample_rate;                             /* In hertz. */
    unsigned char bands;
    unsigned char filters_per_band;
    unsigned char carrier_channels;
};

#ifdef __cplusplus
}
#endif
#endif /* VOCLIB_H */

/* IMPLEMENTATION */

#ifdef VOCLIB_IMPLEMENTATION

#include <math.h>
#include <assert.h>

#ifdef _MSC_VER
    #define VOCLIB_INLINE __forceinline
#elif defined(__GNUC__)
    #if defined(__STRICT_ANSI__)
        #define VOCLIB_GNUC_INLINE_HINT __inline__
    #else
        #define VOCLIB_GNUC_INLINE_HINT inline
    #endif

    #if (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 2)) || defined(__clang__)
        #define VOCLIB_INLINE VOCLIB_GNUC_INLINE_HINT __attribute__((always_inline))
    #else
        #define VOCLIB_INLINE VOCLIB_GNUC_INLINE_HINT
    #endif
#elif defined(__WATCOMC__)
    #define VOCLIB_INLINE __inline
#else
    #define VOCLIB_INLINE
#endif

/* Filters
 *
 * The filter code below was derived from http://www.musicdsp.org/files/biquad.c. The comment at the top of the biquad.c file reads:
 *
 * Simple implementation of Biquad filters -- Tom St Denis
 *
 * Based on the work
 *
 *     Cookbook formulae for audio EQ biquad filter coefficients
 *     ---------------------------------------------------------
 *     by Robert Bristow-Johnson, pbjrbj@viconet.com a.k.a. robert@audioheads.com
 *
 * Available on the web at
 *
 *     http://www.smartelectronix.com/musicdsp/text/filters005.txt
 *
 * Enjoy.
 *
 * This work is hereby placed in the public domain for all purposes, whether
 * commercial, free [as in speech] or educational, etc. Use the code and please
 * give me credit if you wish.
 *
 * Tom St Denis -- http://tomstdenis.home.dhs.org
 */

#ifndef VOCLIB_M_LN2
    #define VOCLIB_M_LN2 0.69314718055994530942
#endif

#ifndef VOCLIB_M_PI
    #define VOCLIB_M_PI 3.14159265358979323846
#endif

/* Computes a BiQuad filter on a sample. */
static VOCLIB_INLINE float voclib_BiQuad ( float sample, voclib_biquad* b )
{
    float result;

    /* compute the result. */
    result = b->a0 * sample + b->a1 * b->x1 + b->a2 * b->x2 -
             b->a3 * b->y1 - b->a4 * b->y2;

    /* shift x1 to x2, sample to x1. */
    b->x2 = b->x1;
    b->x1 = sample;

    /* shift y1 to y2, result to y1. */
    b->y2 = b->y1;
    b->y1 = result;

    return result;
}

/* filter types. */
enum
{
    VOCLIB_LPF,   /* low pass filter */
    VOCLIB_HPF,   /* high pass filter */
    VOCLIB_BPF,   /* band pass filter */
    VOCLIB_NOTCH, /* notch filter */
    VOCLIB_PEQ,   /* peaking band EQ filter */
    VOCLIB_LSH,   /* low shelf filter */
    VOCLIB_HSH    /* high shelf filter */
};

/* sets up a BiQuad Filter. */
static void voclib_BiQuad_new ( voclib_biquad* b, int type, float dbGain, /* gain of filter */
                                float freq,       /* center frequency */
                                float srate,      /* sampling rate */
                                float bandwidth ) /* bandwidth in octaves */
{
    float A, omega, sn, cs, alpha, beta;
    float a0, a1, a2, b0, b1, b2;

    /* setup variables. */
    A = ( float ) pow ( 10, dbGain / 40.0f );
    omega = ( float ) ( 2.0 * VOCLIB_M_PI * freq / srate );
    sn = ( float ) sin ( omega );
    cs = ( float ) cos ( omega );
    alpha = sn * ( float ) sinh ( VOCLIB_M_LN2 / 2 * bandwidth * omega / sn );
    beta = ( float ) sqrt ( A + A );

    switch ( type )
    {
        case VOCLIB_LPF:
            b0 = ( 1 - cs ) / 2;
            b1 = 1 - cs;
            b2 = ( 1 - cs ) / 2;
            a0 = 1 + alpha;
            a1 = -2 * cs;
            a2 = 1 - alpha;
            break;
        case VOCLIB_HPF:
            b0 = ( 1 + cs ) / 2;
            b1 = - ( 1 + cs );
            b2 = ( 1 + cs ) / 2;
            a0 = 1 + alpha;
            a1 = -2 * cs;
            a2 = 1 - alpha;
            break;
        case VOCLIB_BPF:
            b0 = alpha;
            b1 = 0;
            b2 = -alpha;
            a0 = 1 + alpha;
            a1 = -2 * cs;
            a2 = 1 - alpha;
            break;
        case VOCLIB_NOTCH:
            b0 = 1;
            b1 = -2 * cs;
            b2 = 1;
            a0 = 1 + alpha;
            a1 = -2 * cs;
            a2 = 1 - alpha;
            break;
        case VOCLIB_PEQ:
            b0 = 1 + ( alpha * A );
            b1 = -2 * cs;
            b2 = 1 - ( alpha * A );
            a0 = 1 + ( alpha / A );
            a1 = -2 * cs;
            a2 = 1 - ( alpha / A );
            break;
        case VOCLIB_LSH:
            b0 = A * ( ( A + 1 ) - ( A - 1 ) * cs + beta * sn );
            b1 = 2 * A * ( ( A - 1 ) - ( A + 1 ) * cs );
            b2 = A * ( ( A + 1 ) - ( A - 1 ) * cs - beta * sn );
            a0 = ( A + 1 ) + ( A - 1 ) * cs + beta * sn;
            a1 = -2 * ( ( A - 1 ) + ( A + 1 ) * cs );
            a2 = ( A + 1 ) + ( A - 1 ) * cs - beta * sn;
            break;
        case VOCLIB_HSH:
            b0 = A * ( ( A + 1 ) + ( A - 1 ) * cs + beta * sn );
            b1 = -2 * A * ( ( A - 1 ) + ( A + 1 ) * cs );
            b2 = A * ( ( A + 1 ) + ( A - 1 ) * cs - beta * sn );
            a0 = ( A + 1 ) - ( A - 1 ) * cs + beta * sn;
            a1 = 2 * ( ( A - 1 ) - ( A + 1 ) * cs );
            a2 = ( A + 1 ) - ( A - 1 ) * cs - beta * sn;
            break;
        default:
            assert ( 0 ); /* Misuse. */
            return;
    }

    /* precompute the coefficients. */
    b->a0 = b0 / a0;
    b->a1 = b1 / a0;
    b->a2 = b2 / a0;
    b->a3 = a1 / a0;
    b->a4 = a2 / a0;
}

/* Reset the filter history. */
static void voclib_BiQuad_reset ( voclib_biquad* b )
{
    b->x1 = b->x2 = 0.0f;
    b->y1 = b->y2 = 0.0f;
}

/* Envelope follower. */

static void voclib_envelope_configure ( voclib_envelope* envelope, double time_in_seconds, double sample_rate )
{
    envelope->coef = ( float ) ( pow ( 0.01, 1.0 / ( time_in_seconds * sample_rate ) ) );
}

/* Reset the envelope history. */
static void voclib_envelope_reset ( voclib_envelope* envelope )
{
    envelope->history[0] = 0.0f;
    envelope->history[1] = 0.0f;
    envelope->history[2] = 0.0f;
    envelope->history[3] = 0.0f;
}

static VOCLIB_INLINE float voclib_envelope_tick ( voclib_envelope* envelope, float sample )
{
    const float coef = envelope->coef;
    envelope->history[0] = ( float ) ( ( 1.0f - coef ) * fabs ( sample ) ) + ( coef * envelope->history[0] );
    envelope->history[1] = ( ( 1.0f - coef ) * envelope->history[0] ) + ( coef * envelope->history[1] );
    envelope->history[2] = ( ( 1.0f - coef ) * envelope->history[1] ) + ( coef * envelope->history[2] );
    envelope->history[3] = ( ( 1.0f - coef ) * envelope->history[2] ) + ( coef * envelope->history[3] );
    return envelope->history[3];
}

/* Initialize the vocoder filterbank. */
static void voclib_initialize_filterbank ( voclib_instance* instance, int carrier_only )
{
    unsigned char i;
    double step;
    double lastfreq = 0.0;
    double minfreq = 80.0;
    double maxfreq = instance->sample_rate;
    if ( maxfreq > 12000.0 )
    {
        maxfreq = 12000.0;
    }
    step = pow ( ( maxfreq / minfreq ), ( 1.0 / instance->bands ) );

    for ( i = 0; i < instance->bands; ++i )
    {
        unsigned char i2;
        double bandwidth, nextfreq;
        double priorfreq = lastfreq;
        if ( lastfreq > 0.0 )
        {
            lastfreq *= step;
        }
        else
        {
            lastfreq = minfreq;
        }
        nextfreq = lastfreq * step;
        bandwidth = ( nextfreq - priorfreq ) / lastfreq;

        if ( !carrier_only )
        {
            voclib_BiQuad_new ( &instance->analysis_bands[i].filters[0], VOCLIB_BPF, 0.0f, ( float ) lastfreq, ( float ) instance->sample_rate, ( float ) bandwidth );
            for ( i2 = 1; i2 < instance->filters_per_band; ++i2 )
            {
                instance->analysis_bands[i].filters[i2].a0 = instance->analysis_bands[i].filters[0].a0;
                instance->analysis_bands[i].filters[i2].a1 = instance->analysis_bands[i].filters[0].a1;
                instance->analysis_bands[i].filters[i2].a2 = instance->analysis_bands[i].filters[0].a2;
                instance->analysis_bands[i].filters[i2].a3 = instance->analysis_bands[i].filters[0].a3;
                instance->analysis_bands[i].filters[i2].a4 = instance->analysis_bands[i].filters[0].a4;
            }
        }

        if ( instance->formant_shift != 1.0f )
        {
            voclib_BiQuad_new ( &instance->synthesis_bands[i].filters[0], VOCLIB_BPF, 0.0f, ( float ) ( lastfreq * instance->formant_shift ), ( float ) instance->sample_rate, ( float ) bandwidth );
        }
        else
        {
            instance->synthesis_bands[i].filters[0].a0 = instance->analysis_bands[i].filters[0].a0;
            instance->synthesis_bands[i].filters[0].a1 = instance->analysis_bands[i].filters[0].a1;
            instance->synthesis_bands[i].filters[0].a2 = instance->analysis_bands[i].filters[0].a2;
            instance->synthesis_bands[i].filters[0].a3 = instance->analysis_bands[i].filters[0].a3;
            instance->synthesis_bands[i].filters[0].a4 = instance->analysis_bands[i].filters[0].a4;
        }

        instance->synthesis_bands[i + VOCLIB_MAX_BANDS].filters[0].a0 = instance->synthesis_bands[i].filters[0].a0;
        instance->synthesis_bands[i + VOCLIB_MAX_BANDS].filters[0].a1 = instance->synthesis_bands[i].filters[0].a1;
        instance->synthesis_bands[i + VOCLIB_MAX_BANDS].filters[0].a2 = instance->synthesis_bands[i].filters[0].a2;
        instance->synthesis_bands[i + VOCLIB_MAX_BANDS].filters[0].a3 = instance->synthesis_bands[i].filters[0].a3;
        instance->synthesis_bands[i + VOCLIB_MAX_BANDS].filters[0].a4 = instance->synthesis_bands[i].filters[0].a4;

        for ( i2 = 1; i2 < instance->filters_per_band; ++i2 )
        {
            instance->synthesis_bands[i].filters[i2].a0 = instance->synthesis_bands[i].filters[0].a0;
            instance->synthesis_bands[i].filters[i2].a1 = instance->synthesis_bands[i].filters[0].a1;
            instance->synthesis_bands[i].filters[i2].a2 = instance->synthesis_bands[i].filters[0].a2;
            instance->synthesis_bands[i].filters[i2].a3 = instance->synthesis_bands[i].filters[0].a3;
            instance->synthesis_bands[i].filters[i2].a4 = instance->synthesis_bands[i].filters[0].a4;

            instance->synthesis_bands[i + VOCLIB_MAX_BANDS].filters[i2].a0 = instance->synthesis_bands[i].filters[0].a0;
            instance->synthesis_bands[i + VOCLIB_MAX_BANDS].filters[i2].a1 = instance->synthesis_bands[i].filters[0].a1;
            instance->synthesis_bands[i + VOCLIB_MAX_BANDS].filters[i2].a2 = instance->synthesis_bands[i].filters[0].a2;
            instance->synthesis_bands[i + VOCLIB_MAX_BANDS].filters[i2].a3 = instance->synthesis_bands[i].filters[0].a3;
            instance->synthesis_bands[i + VOCLIB_MAX_BANDS].filters[i2].a4 = instance->synthesis_bands[i].filters[0].a4;
        }
    }
}

/* Initialize the vocoder envelopes. */
static void voclib_initialize_envelopes ( voclib_instance* instance )
{
    unsigned char i;

    voclib_envelope_configure ( &instance->analysis_envelopes[0], instance->reaction_time, ( double ) instance->sample_rate );
    for ( i = 1; i < instance->bands; ++i )
    {
        instance->analysis_envelopes[i].coef = instance->analysis_envelopes[0].coef;
    }
}

int voclib_initialize ( voclib_instance* instance, unsigned char bands, unsigned char filters_per_band, unsigned int sample_rate, unsigned char carrier_channels )
{
    if ( !instance )
    {
        return 0;
    }
    if ( bands < 4 || bands > VOCLIB_MAX_BANDS )
    {
        return 0;
    }
    if ( filters_per_band < 1 || filters_per_band > VOCLIB_MAX_FILTERS_PER_BAND )
    {
        return 0;
    }
    if ( sample_rate < 8000 || sample_rate > 192000 )
    {
        return 0;
    }
    if ( carrier_channels < 1 || carrier_channels > 2 )
    {
        return 0;
    }

    instance->reaction_time = 0.03f;
    instance->formant_shift = 1.0f;
    instance->sample_rate = sample_rate;
    instance->bands = bands;
    instance->filters_per_band = filters_per_band;
    instance->carrier_channels = carrier_channels;

    voclib_reset_history ( instance );
    voclib_initialize_filterbank ( instance, 0 );
    voclib_initialize_envelopes ( instance );

    return 1;
}

void voclib_reset_history ( voclib_instance* instance )
{
    unsigned char i;

    for ( i = 0; i < instance->bands; ++i )
    {
        unsigned char i2;

        for ( i2 = 0; i2 < instance->filters_per_band; ++i2 )
        {
            voclib_BiQuad_reset ( &instance->analysis_bands[i].filters[i2] );
            voclib_BiQuad_reset ( &instance->synthesis_bands[i].filters[i2] );
            voclib_BiQuad_reset ( &instance->synthesis_bands[i + VOCLIB_MAX_BANDS].filters[i2] );
        }
        voclib_envelope_reset ( &instance->analysis_envelopes[i] );
    }
}

int voclib_process ( voclib_instance* instance, const float* carrier_buffer, const float* modulator_buffer, float* output_buffer, unsigned int frames )
{
    unsigned int i;
    const unsigned char bands = instance->bands;
    const unsigned char filters_per_band = instance->filters_per_band;

    if ( !carrier_buffer )
    {
        return 0;
    }
    if ( !modulator_buffer )
    {
        return 0;
    }
    if ( !output_buffer )
    {
        return 0;
    }
    if ( frames == 0 )
    {
        return 0;
    }

    if ( instance->carrier_channels == 2 )
    {

        /* The carrier has two channels and the modulator has 1. */
        for ( i = 0; i < frames * 2; i += 2, ++modulator_buffer )
        {
            unsigned char i2;
            float out_left = 0.0f;
            float out_right = 0.0f;

            /* Run the bands in parallel and accumulate the output. */
            for ( i2 = 0; i2 < bands; ++i2 )
            {
                unsigned char i3;
                float analysis_band = voclib_BiQuad ( *modulator_buffer, &instance->analysis_bands[i2].filters[0] );
                float synthesis_band_left = voclib_BiQuad ( carrier_buffer[i], &instance->synthesis_bands[i2].filters[0] );
                float synthesis_band_right = voclib_BiQuad ( carrier_buffer[i + 1], &instance->synthesis_bands[i2 + VOCLIB_MAX_BANDS].filters[0] );

                for ( i3 = 1; i3 < filters_per_band; ++i3 )
                {
                    analysis_band = voclib_BiQuad ( analysis_band, &instance->analysis_bands[i2].filters[i3] );
                    synthesis_band_left = voclib_BiQuad ( synthesis_band_left, &instance->synthesis_bands[i2].filters[i3] );
                    synthesis_band_right = voclib_BiQuad ( synthesis_band_right, &instance->synthesis_bands[i2 + VOCLIB_MAX_BANDS].filters[i3] );
                }
                analysis_band = voclib_envelope_tick ( &instance->analysis_envelopes[i2], analysis_band );
                out_left += synthesis_band_left * analysis_band;
                out_right += synthesis_band_right * analysis_band;
            }
            output_buffer[i] = out_left;
            output_buffer[i + 1] = out_right;
        }

    }
    else
    {

        /* Both the carrier and the modulator have a single channel. */
        for ( i = 0; i < frames; ++i )
        {
            unsigned char i2;
            float out = 0.0f;

            /* Run the bands in parallel and accumulate the output. */
            for ( i2 = 0; i2 < bands; ++i2 )
            {
                unsigned char i3;
                float analysis_band = voclib_BiQuad ( modulator_buffer[i], &instance->analysis_bands[i2].filters[0] );
                float synthesis_band = voclib_BiQuad ( carrier_buffer[i], &instance->synthesis_bands[i2].filters[0] );

                for ( i3 = 1; i3 < filters_per_band; ++i3 )
                {
                    analysis_band = voclib_BiQuad ( analysis_band, &instance->analysis_bands[i2].filters[i3] );
                    synthesis_band = voclib_BiQuad ( synthesis_band, &instance->synthesis_bands[i2].filters[i3] );
                }
                analysis_band = voclib_envelope_tick ( &instance->analysis_envelopes[i2], analysis_band );
                out += synthesis_band * analysis_band;
            }
            output_buffer[i] = out;
        }
    }

    return 1;
}

int voclib_set_reaction_time ( voclib_instance* instance, float reaction_time )
{
    if ( reaction_time < 0.002f || reaction_time > 2.0f )
    {
        return 0;
    }

    instance->reaction_time = reaction_time;
    voclib_initialize_envelopes ( instance );
    return 1;
}

float voclib_get_reaction_time ( const voclib_instance* instance )
{
    return instance->reaction_time;
}

int voclib_set_formant_shift ( voclib_instance* instance, float formant_shift )
{
    if ( formant_shift < 0.25f || formant_shift > 4.0f )
    {
        return 0;
    }

    instance->formant_shift = formant_shift;
    voclib_initialize_filterbank ( instance, 1 );
    return 1;
}

float voclib_get_formant_shift ( const voclib_instance* instance )
{
    return instance->formant_shift;
}

#endif /* VOCLIB_IMPLEMENTATION */

/* REVISION HISTORY
 *
 * Version 1.1 - 2019-02-16
 * Breaking change: Introduced a new argument to voclib_initialize called carrier_channels. This allows the vocoder to output stereo natively.
 * Better assignment of band frequencies when using lower sample rates.
 * The shell now automatically normalizes the output file to match the peak amplitude in the carrier.
 * Fixed a memory corruption bug in the shell which would occur in response to an error condition.
 *
 * Version 1.0 - 2019-01-27
 * Initial release.
 */

/* LICENSE

This software is available under 2 licenses -- choose whichever you prefer.
------------------------------------------------------------------------------
ALTERNATIVE A - MIT No Attribution License
Copyright (c) 2019 Philip Bennefall

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
------------------------------------------------------------------------------
ALTERNATIVE B - Public Domain (www.unlicense.org)
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or distribute this
software, either in source code form or as a compiled binary, for any purpose,
commercial or non-commercial, and by any means.

In jurisdictions that recognize copyright laws, the author or authors of this
software dedicate any and all copyright interest in the software to the public
domain. We make this dedication for the benefit of the public at large and to
the detriment of our heirs and successors. We intend this dedication to be an
overt act of relinquishment in perpetuity of all present and future rights to
this software under copyright law.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
------------------------------------------------------------------------------
*/
|
||||
49
thirdparty/miniaudio-0.11.24/extras/osaudio/README.md
vendored
Normal file
@@ -0,0 +1,49 @@
This is just a little experiment to explore some ideas for the kind of API that I would build if I
was building my own operating system. The name "osaudio" means Operating System Audio. Or maybe you
can think of it as Open Source Audio. It's whatever you want it to be.

The idea behind this project came about after considering the absurd complexity of audio APIs on
various platforms after years of working on miniaudio. This project aims to disprove the idea that
complete and flexible audio solutions and simple APIs are mutually exclusive, and to show that it's
possible to have both. I challenge anybody to prove me wrong.

In addition to the above, I also wanted to explore some ideas for a different API design to
miniaudio. miniaudio uses a callback model for data transfer, whereas osaudio uses a blocking
read/write model.

This project is essentially just a header file with a reference implementation that uses miniaudio
under the hood. You can compile this very easily - just compile osaudio_miniaudio.c, and use
osaudio.h just like any other header. There are no dependencies for the header, and the miniaudio
implementation obviously requires miniaudio. Adjust the include path in osaudio_miniaudio.c if need
be.

See osaudio.h for full documentation. Below is an example to get you started:

```c
#include "osaudio.h"

...

osaudio_t audio;
osaudio_config_t config;

osaudio_config_init(&config, OSAUDIO_OUTPUT);
config.format   = OSAUDIO_FORMAT_F32;
config.channels = 2;
config.rate     = 48000;

osaudio_open(&audio, &config);

osaudio_write(audio, myAudioData, frameCount); // <-- This will block until all of the data has been sent to the device.

osaudio_close(audio);
```

Compare the code above with the likes of other APIs like Core Audio and PipeWire. I challenge
anybody to argue that their APIs are cleaner and easier to use than this when it comes to simple
audio playback.

If you have any feedback on this I'd be interested to hear it. In particular, I'd really like to
hear from people who believe the likes of Core Audio (Apple), PipeWire, PulseAudio or any other
audio API actually have good APIs (they don't!) and what makes theirs better and/or worse than
this project.
604
thirdparty/miniaudio-0.11.24/extras/osaudio/osaudio.h
vendored
Normal file
@@ -0,0 +1,604 @@
/*
This is a simple API for low-level audio playback and capture. A reference implementation using
miniaudio is provided in osaudio.c which can be found alongside this file. Consider all code
public domain.

The idea behind this project came about after considering the absurd complexity of audio APIs on
various platforms after years of working on miniaudio. This project aims to disprove the idea that
complete and flexible audio solutions and simple APIs are mutually exclusive, and to show that it's
possible to have both. The idea of reliability through simplicity is the first and foremost goal of
this project. The difference between this project and miniaudio is that this project is designed
around the idea of what I would build if I was building an audio API for an operating system, such
as at the level of WASAPI or ALSA. A cross-platform and cross-backend library like miniaudio is
necessarily different in design, but there are indeed things that I would have done differently if
given my time again, some of which I'm expressing in this project.

---

The concept of low-level audio is simple - you have a device, such as a speaker system or a
microphone system, and then you write or read audio data to/from it. So in the case of playback,
you need only write your raw audio data to the device, which then emits it from the speakers when
it's ready. Likewise, for capture you simply read audio data from the device, which is filled with
data by the microphone.

A complete low-level audio solution requires the following:

    1) The ability to enumerate devices that are connected to the system.
    2) The ability to open and close a connection to a device.
    3) The ability to start and stop the device.
    4) The ability to write and read audio data to/from the device.
    5) The ability to query the device for its data configuration.
    6) The ability to notify the application when certain events occur, such as the device being
       stopped or rerouted.

The API presented here aims to meet all of the above requirements. It uses a single-threaded
blocking read/write model for data delivery instead of a callback model. This makes it a bit more
flexible since it gives the application full control over the audio thread. It might also make it
more feasible to use this API on single-threaded systems.

Device enumeration is achieved with a single function: osaudio_enumerate(). This function returns
an array of osaudio_info_t structures which contain information about each device. The array is
allocated by the implementation and must be freed with free(). Contained within the osaudio_info_t
struct is, most importantly, the device ID, which is used to open a connection to the device, and
the name of the device, which can be displayed to the user. For advanced users, it also includes
information about the device's native data configuration.

Opening and closing a connection to a device is achieved with osaudio_open() and osaudio_close().
An important concept is that of the ability to configure the device. This is achieved with the
osaudio_config_t structure which is passed to osaudio_open(). In addition to the ID of the device,
this structure includes information about the desired format, channel count and sample rate. You
can also configure the latency of the device, or the buffer size, which is specified in frames. A
flags member is used for specifying additional options, such as whether or not to disable automatic
rerouting. Finally, a callback can be specified for notifications. When osaudio_open() returns, the
config structure will be filled with the device's actual configuration. You can inspect the channel
map from this structure to know how to arrange the channels in your audio data.

This API uses a blocking write/read model for pushing and pulling data to/from the device. This
is done with the osaudio_write() and osaudio_read() functions. These functions will block until
the requested number of frames have been processed or the device is drained or flushed with
osaudio_drain() or osaudio_flush() respectively. It is from these functions that the device is
started. As soon as you start writing data with osaudio_write() or reading data with
osaudio_read(), the device will start. When the device is drained or flushed with osaudio_drain()
or osaudio_flush(), the device will be stopped. osaudio_drain() will block until the device has
been drained, whereas osaudio_flush() will stop playback immediately and return. You can also pause
and resume the device with osaudio_pause() and osaudio_resume(). Since reading and writing is
blocking, it can be useful to know how many frames can be written/read without blocking. This is
achieved with osaudio_get_avail().
Querying the device's configuration is achieved with osaudio_get_info(). This function will return
a pointer to an osaudio_info_t structure which contains information about the device, most
importantly its name and data configuration. The name is important for displaying on a UI, and
the data configuration is important for knowing how to format your audio data. The osaudio_info_t
structure will contain an array of osaudio_config_t structures. This will contain one entry, which
will contain the exact information that was returned in the config structure that was passed to
osaudio_open().

A common requirement is to open a device that represents the operating system's default device.
This is done easily by simply passing in NULL for the device ID. Below is an example for opening a
default device:

    int result;
    osaudio_t audio;
    osaudio_config_t config;

    osaudio_config_init(&config, OSAUDIO_OUTPUT);
    config.format   = OSAUDIO_FORMAT_F32;
    config.channels = 2;
    config.rate     = 48000;

    result = osaudio_open(&audio, &config);
    if (result != OSAUDIO_SUCCESS) {
        printf("Failed to open device.");
        return -1;
    }

    ...

    osaudio_close(audio);

In the above example, the default device is opened for playback (OSAUDIO_OUTPUT). The format is
set to 32-bit floating point (OSAUDIO_FORMAT_F32), the channel count is set to stereo (2), and the
sample rate is set to 48kHz. The device is then closed when we're done with it.

If instead we wanted to open a specific device, we can do that by passing in the device ID. Below
is an example for how to do this:

    int result;
    osaudio_t audio;
    osaudio_config_t config;
    unsigned int infoCount;
    osaudio_info_t* info;

    result = osaudio_enumerate(&infoCount, &info);
    if (result != OSAUDIO_SUCCESS) {
        printf("Failed to enumerate devices.\n");
        return -1;
    }

    // ... Iterate over the `info` array and find the device you want to open. Use the `direction` member to discriminate between input and output ...

    osaudio_config_init(&config, OSAUDIO_OUTPUT);
    config.device_id = &info[indexOfYourChosenDevice].id;
    config.format    = OSAUDIO_FORMAT_F32;
    config.channels  = 2;
    config.rate      = 48000;

    osaudio_open(&audio, &config);

    ...

    osaudio_close(audio);
    free(info); // The pointer returned by osaudio_enumerate() must be freed with free().

The id structure is just a 256 byte array that uniquely identifies the device. Implementations may
have different representations for device IDs, and a 256 byte array should accommodate all device
ID representations. Implementations are required to zero-fill unused bytes. The osaudio_id_t
structure can be trivially copied, which makes it suitable for serialization and deserialization in
situations where you may want to save the device ID to permanent storage, such as a config file.
Implementations need to do their own data conversion between the device's native data configuration
and the requested configuration. In this case, when the format, channels and rate are specified in
the config, they should be unchanged when osaudio_open() returns. If this is not possible,
osaudio_open() will return OSAUDIO_FORMAT_NOT_SUPPORTED. However, there are cases where it's useful
for a program to use the device's native configuration instead of some fixed configuration. This is
achieved by setting the format, channels and rate to 0. Below is an example:

    int result;
    osaudio_t audio;
    osaudio_config_t config;

    osaudio_config_init(&config, OSAUDIO_OUTPUT);

    result = osaudio_open(&audio, &config);
    if (result != OSAUDIO_SUCCESS) {
        printf("Failed to open device.");
        return -1;
    }

    // ... `config` will have been updated by osaudio_open() to contain the *actual* format/channels/rate ...

    osaudio_close(audio);

In addition to the code above, you can explicitly call osaudio_get_info() to retrieve the format
configuration. If you need to know the native configuration before opening the device, you can use
enumeration. The format, channels and rate will be contained in the first item in the configs
array.

The examples above all use playback, but the same applies for capture. The only difference is that
the direction is set to OSAUDIO_INPUT instead of OSAUDIO_OUTPUT.

To output audio from the speakers you need to call osaudio_write(). Likewise, to capture audio from
a microphone you need to call osaudio_read(). These functions will block until the requested number
of frames have been written or read. The device will start automatically. Below is an example for
writing some data to a device:

    int result = osaudio_write(audio, myAudioData, myAudioDataFrameCount);
    if (result == OSAUDIO_SUCCESS) {
        printf("Successfully wrote %d frames of audio data.\n", myAudioDataFrameCount);
    } else {
        printf("Failed to write audio data.\n");
    }

osaudio_write() and osaudio_read() will return OSAUDIO_SUCCESS if the requested number of frames
were written or read. You cannot call osaudio_close() while a write or read operation is in
progress.

If you want to write or read audio data without blocking, you can use osaudio_get_avail() to
determine how many frames are available for writing or reading. Below is an example:

    unsigned int framesAvailable = osaudio_get_avail(audio);
    if (framesAvailable > 0) {
        printf("There are %u frames available for writing.\n", framesAvailable);
    } else {
        printf("There are no frames available for writing.\n");
    }

If you want to abort a blocking write or read, you can use osaudio_flush(). This will result in any
pending write or read operation being aborted.

There are several ways of pausing a device. The first is to just drain or flush the device and then
simply not do any more read/write operations. A drain or flush will put the device into a stopped
state until the next call to either read or write, depending on the device's direction. If,
however, this does not suit your requirements, you can use osaudio_pause() and osaudio_resume().
Take note, however, that calling osaudio_drain() while the device is paused will never return,
because the device being in a stopped state means the buffer is never read and therefore never
drained.

Everything is thread safe, with a few minor exceptions which have no practical consequences for the
client:

    * You cannot call any function while osaudio_open() is still in progress.
    * You cannot call osaudio_close() while any other function is still in progress.
    * You can only call osaudio_write() and osaudio_read() from one thread at a time.

None of these issues should be a problem for the client in practice. You won't have a valid
osaudio_t object until osaudio_open() has returned. For osaudio_close(), it makes no sense to
destroy the object while it's still in use, and doing so would mean the client is using very poor
form. For osaudio_write() and osaudio_read(), you wouldn't ever want to call these simultaneously
across multiple threads anyway because otherwise you'd end up with garbage audio.

The rules above only apply when working with a single osaudio_t object. You can have multiple
osaudio_t objects open at the same time, and you can call any function on different osaudio_t
objects simultaneously from different threads.

---

# Feedback

I'm looking for feedback on the following:

    * Are the supported formats enough? If not, what other formats are needed, and what is the
      justification for including them? Just because a format is the native format on one
      particular piece of hardware is not enough. Big-endian and little-endian will never be
      supported. All formats are native-endian.
    * Are the available channel positions enough? What other positions are needed?
    * Just some general criticism would be appreciated.

*/
#ifndef osaudio_h
#define osaudio_h

#ifdef __cplusplus
extern "C" {
#endif

/*
Support far pointers on relevant platforms (DOS, in particular). The version of this file
distributed with an operating system wouldn't need this because it would just have an
OS-specific version of this file, but as a reference it's useful to use far pointers here.
*/
#if defined(__MSDOS__) || defined(_MSDOS) || defined(__DOS__)
#define OSAUDIO_FAR far
#else
#define OSAUDIO_FAR
#endif

typedef struct _osaudio_t* osaudio_t;
typedef struct osaudio_config_t osaudio_config_t;
typedef struct osaudio_id_t osaudio_id_t;
typedef struct osaudio_info_t osaudio_info_t;
typedef struct osaudio_notification_t osaudio_notification_t;

/* Result codes. */
typedef int osaudio_result_t;
#define OSAUDIO_SUCCESS               0
#define OSAUDIO_ERROR                -1
#define OSAUDIO_INVALID_ARGS         -2
#define OSAUDIO_INVALID_OPERATION    -3
#define OSAUDIO_OUT_OF_MEMORY        -4
#define OSAUDIO_FORMAT_NOT_SUPPORTED -101   /* The requested format is not supported. */
#define OSAUDIO_XRUN                 -102   /* An underrun or overrun occurred. Can be returned by osaudio_read() or osaudio_write(). */
#define OSAUDIO_DEVICE_STOPPED       -103   /* The device is stopped. Can be returned by osaudio_drain(). It is invalid to call osaudio_drain() on a device that is not running because otherwise it'll get stuck. */

/* Directions. Cannot be combined. Use separate osaudio_t objects for bidirectional setups. */
typedef int osaudio_direction_t;
#define OSAUDIO_INPUT  1
#define OSAUDIO_OUTPUT 2

/* All formats are native endian and interleaved. */
typedef int osaudio_format_t;
#define OSAUDIO_FORMAT_UNKNOWN 0
#define OSAUDIO_FORMAT_F32     1
#define OSAUDIO_FORMAT_U8      2
#define OSAUDIO_FORMAT_S16     3
#define OSAUDIO_FORMAT_S24     4    /* Tightly packed. */
#define OSAUDIO_FORMAT_S32     5
/* Channel positions. */
typedef unsigned char osaudio_channel_t;
#define OSAUDIO_CHANNEL_NONE  0
#define OSAUDIO_CHANNEL_MONO  1
#define OSAUDIO_CHANNEL_FL    2
#define OSAUDIO_CHANNEL_FR    3
#define OSAUDIO_CHANNEL_FC    4
#define OSAUDIO_CHANNEL_LFE   5
#define OSAUDIO_CHANNEL_BL    6
#define OSAUDIO_CHANNEL_BR    7
#define OSAUDIO_CHANNEL_FLC   8
#define OSAUDIO_CHANNEL_FRC   9
#define OSAUDIO_CHANNEL_BC    10
#define OSAUDIO_CHANNEL_SL    11
#define OSAUDIO_CHANNEL_SR    12
#define OSAUDIO_CHANNEL_TC    13
#define OSAUDIO_CHANNEL_TFL   14
#define OSAUDIO_CHANNEL_TFC   15
#define OSAUDIO_CHANNEL_TFR   16
#define OSAUDIO_CHANNEL_TBL   17
#define OSAUDIO_CHANNEL_TBC   18
#define OSAUDIO_CHANNEL_TBR   19
#define OSAUDIO_CHANNEL_AUX0  20
#define OSAUDIO_CHANNEL_AUX1  21
#define OSAUDIO_CHANNEL_AUX2  22
#define OSAUDIO_CHANNEL_AUX3  23
#define OSAUDIO_CHANNEL_AUX4  24
#define OSAUDIO_CHANNEL_AUX5  25
#define OSAUDIO_CHANNEL_AUX6  26
#define OSAUDIO_CHANNEL_AUX7  27
#define OSAUDIO_CHANNEL_AUX8  28
#define OSAUDIO_CHANNEL_AUX9  29
#define OSAUDIO_CHANNEL_AUX10 30
#define OSAUDIO_CHANNEL_AUX11 31
#define OSAUDIO_CHANNEL_AUX12 32
#define OSAUDIO_CHANNEL_AUX13 33
#define OSAUDIO_CHANNEL_AUX14 34
#define OSAUDIO_CHANNEL_AUX15 35
#define OSAUDIO_CHANNEL_AUX16 36
#define OSAUDIO_CHANNEL_AUX17 37
#define OSAUDIO_CHANNEL_AUX18 38
#define OSAUDIO_CHANNEL_AUX19 39
#define OSAUDIO_CHANNEL_AUX20 40
#define OSAUDIO_CHANNEL_AUX21 41
#define OSAUDIO_CHANNEL_AUX22 42
#define OSAUDIO_CHANNEL_AUX23 43
#define OSAUDIO_CHANNEL_AUX24 44
#define OSAUDIO_CHANNEL_AUX25 45
#define OSAUDIO_CHANNEL_AUX26 46
#define OSAUDIO_CHANNEL_AUX27 47
#define OSAUDIO_CHANNEL_AUX28 48
#define OSAUDIO_CHANNEL_AUX29 49
#define OSAUDIO_CHANNEL_AUX30 50
#define OSAUDIO_CHANNEL_AUX31 51

/* The maximum number of channels supported. */
#define OSAUDIO_MAX_CHANNELS 64

/* Notification types. */
typedef int osaudio_notification_type_t;
#define OSAUDIO_NOTIFICATION_STARTED            0   /* The device was started in response to a call to osaudio_write() or osaudio_read(). */
#define OSAUDIO_NOTIFICATION_STOPPED            1   /* The device was stopped in response to a call to osaudio_drain() or osaudio_flush(). */
#define OSAUDIO_NOTIFICATION_REROUTED           2   /* The device was rerouted. Not all implementations need to support rerouting. */
#define OSAUDIO_NOTIFICATION_INTERRUPTION_BEGIN 3   /* The device was interrupted due to something like a phone call. */
#define OSAUDIO_NOTIFICATION_INTERRUPTION_END   4   /* The interruption has been ended. */

/* Flags. */
#define OSAUDIO_FLAG_NO_REROUTING 1 /* When set, will tell the implementation to disable automatic rerouting if possible. This is a hint and may be ignored by the implementation. */
#define OSAUDIO_FLAG_REPORT_XRUN  2 /* When set, will tell the implementation to report underruns and overruns via osaudio_write() and osaudio_read() by aborting and returning OSAUDIO_XRUN. */

struct osaudio_notification_t
{
    osaudio_notification_type_t type;   /* OSAUDIO_NOTIFICATION_* */
    union
    {
        struct
        {
            int _unused;
        } started;
        struct
        {
            int _unused;
        } stopped;
        struct
        {
            int _unused;
        } rerouted;
        struct
        {
            int _unused;
        } interruption;
    } data;
};

struct osaudio_id_t
{
    char data[256];
};

struct osaudio_config_t
{
    osaudio_id_t* device_id;            /* Set to NULL to use default device. When non-null, automatic routing will be disabled. */
    osaudio_direction_t direction;      /* OSAUDIO_INPUT or OSAUDIO_OUTPUT. Cannot be combined. Use separate osaudio_t objects for bidirectional setups. */
    osaudio_format_t format;            /* OSAUDIO_FORMAT_* */
    unsigned int channels;              /* Number of channels. */
    unsigned int rate;                  /* Sample rate in hertz (frames per second). */
    osaudio_channel_t channel_map[OSAUDIO_MAX_CHANNELS];   /* Leave all items set to 0 for defaults. */
    unsigned int buffer_size;           /* In frames. Set to 0 to use the system default. */
    unsigned int flags;                 /* A combination of OSAUDIO_FLAG_* */
    void (* notification)(void* user_data, const osaudio_notification_t* notification);  /* Called when some kind of event occurs, such as the device being stopped or rerouted. Never called from the audio thread. */
    void* user_data;                    /* Passed to notification(). */
};

struct osaudio_info_t
{
    osaudio_id_t id;
    char name[256];
    osaudio_direction_t direction;      /* OSAUDIO_INPUT or OSAUDIO_OUTPUT. */
    unsigned int config_count;
    osaudio_config_t* configs;
};


/*
Enumerates the available devices.

On output, `count` will contain the number of items in the `info` array. The array must be freed
with free() when it's no longer needed.

Use the `direction` member to discriminate between input and output devices. Below is an example:

    unsigned int count;
    osaudio_info_t* info;
    osaudio_enumerate(&count, &info);

    for (unsigned int i = 0; i < count; ++i) {
        if (info[i].direction == OSAUDIO_OUTPUT) {
            printf("Output device: %s\n", info[i].name);
        } else {
            printf("Input device: %s\n", info[i].name);
        }
    }

You can use the `id` member to open a specific device with osaudio_open(). You do not need to do
device enumeration if you only want to open the default device.
*/
osaudio_result_t osaudio_enumerate(unsigned int* count, osaudio_info_t** info);

/*
Initializes a default config.

The config object will be cleared to zero, with the direction set to `direction`. This will result
in a configuration that uses the device's native format, channels and rate.

osaudio_config_t is a transparent struct. Just set the relevant fields to the desired values after
calling this function. Example:

    osaudio_config_t config;
    osaudio_config_init(&config, OSAUDIO_OUTPUT);
    config.format   = OSAUDIO_FORMAT_F32;
    config.channels = 2;
    config.rate     = 48000;
*/
void osaudio_config_init(osaudio_config_t* config, osaudio_direction_t direction);

/*
Opens a connection to a device.

On input, config must be filled with the desired configuration. On output, it will be filled with
the actual configuration.

Initialize the config with osaudio_config_init() and then fill in the desired configuration. Below
is an example:

    osaudio_config_t config;
    osaudio_config_init(&config, OSAUDIO_OUTPUT);
    config.format   = OSAUDIO_FORMAT_F32;
    config.channels = 2;
    config.rate     = 48000;

When the format, channels and rate are left at their default values, or set to 0 (or
OSAUDIO_FORMAT_UNKNOWN for the format), the device's native configuration will be used:

    osaudio_config_t config;
    osaudio_config_init(&config, OSAUDIO_OUTPUT);
    config.format   = OSAUDIO_FORMAT_UNKNOWN;
    config.channels = 0;
    config.rate     = 0;

The code above is equivalent to this:

    osaudio_config_t config;
    osaudio_config_init(&config, OSAUDIO_OUTPUT);

On output the config will be filled with the actual configuration. The implementation will perform
any necessary data conversion between the requested data configuration and the device's native
configuration. If it cannot, the function will return an OSAUDIO_FORMAT_NOT_SUPPORTED error. In
this case the caller can decide to reinitialize the device to use its native configuration and do
its own data conversion, or abort if it cannot do so. Use the channel map to determine the ordering
of your channels. Automatic channel map conversion is not performed - that must be done manually by
the caller when transferring data to/from the device.

Close the device with osaudio_close().

Returns 0 on success, any other error code on failure.
*/
osaudio_result_t osaudio_open(osaudio_t* audio, osaudio_config_t* config);
/*
Closes a connection to a device.

As soon as this function is called, the device should be considered invalid and unusable. Do not
attempt to use the audio object once this function has been called.

It's invalid to call this while any other function is still running. You can use osaudio_flush() to
quickly abort any pending writes or reads. You can also use osaudio_drain() to wait for all pending
writes or reads to complete.

Returns 0 on success, < 0 on failure.
*/
osaudio_result_t osaudio_close(osaudio_t audio);

/*
Writes audio data to the device.

This will block until all data has been written or the device is closed.

You can only write from a single thread at any given time. If you want to write from multiple
threads, you need to use your own synchronization mechanism.

This will automatically start the device if frame_count is > 0 and it's not in a paused state.

Use osaudio_get_avail() to determine how much data can be written without blocking.

Returns 0 on success, < 0 on failure.
*/
osaudio_result_t osaudio_write(osaudio_t audio, const void OSAUDIO_FAR* data, unsigned int frame_count);

/*
Reads audio data from the device.

This will block until the requested number of frames has been read or the device is closed.

You can only read from a single thread at any given time. If you want to read from multiple
threads, you need to use your own synchronization mechanism.

This will automatically start the device if frame_count is > 0 and it's not in a paused state.

Use osaudio_get_avail() to determine how much data can be read without blocking.

Returns 0 on success, < 0 on failure.
*/
osaudio_result_t osaudio_read(osaudio_t audio, void OSAUDIO_FAR* data, unsigned int frame_count);

/*
Drains the device.

This will block until all pending reads or writes have completed.

If after calling this function another call to osaudio_write() or osaudio_read() is made, the
device will be resumed like normal.

It is invalid to call this while the device is paused.

Returns 0 on success, < 0 on failure.
*/
osaudio_result_t osaudio_drain(osaudio_t audio);

/*
Flushes the device.

This will immediately flush any pending reads or writes. It will not block. Any in-progress reads
or writes will return immediately.

If after calling this function another thread starts reading or writing, the device will be resumed
like normal.

Returns 0 on success, < 0 on failure.
*/
osaudio_result_t osaudio_flush(osaudio_t audio);

/*
Pauses the device.

Pausing a device will trigger an OSAUDIO_NOTIFICATION_STOPPED notification. Resuming a device will
trigger an OSAUDIO_NOTIFICATION_STARTED notification.

Returns 0 on success, < 0 on failure.
*/
osaudio_result_t osaudio_pause(osaudio_t audio);

/*
Resumes the device.

Returns 0 on success, < 0 on failure.
*/
osaudio_result_t osaudio_resume(osaudio_t audio);

/*
Returns the number of frames that can be read or written without blocking.
*/
unsigned int osaudio_get_avail(osaudio_t audio);

/*
Gets information about the device.

There will be one item in the configs array which will contain the device's current configuration,
the contents of which will match that of the config that was returned by osaudio_open().

Returns NULL on failure. Do not free the returned pointer. It's up to the implementation to manage
the memory of this object.
*/
const osaudio_info_t* osaudio_get_info(osaudio_t audio);


#ifdef __cplusplus
}
#endif
#endif  /* osaudio_h */
1141
thirdparty/miniaudio-0.11.24/extras/osaudio/osaudio_dos_soundblaster.c
vendored
Normal file
948
thirdparty/miniaudio-0.11.24/extras/osaudio/osaudio_miniaudio.c
vendored
Normal file
@@ -0,0 +1,948 @@
/*
Consider this a reference implementation of osaudio. It uses miniaudio under the hood. You can add
this file directly to your source tree, but you may need to update the miniaudio path.

This will use a mutex in osaudio_read() and osaudio_write(). It's a low-contention lock that's only
used for the purpose of osaudio_drain(), but it's still a lock nonetheless. I'm not worrying about
this too much right now because this is just an example implementation, but I might improve on this
at a later date.
*/
#ifndef osaudio_miniaudio_c
#define osaudio_miniaudio_c

#include "osaudio.h"

/*
If you would rather define your own implementation of miniaudio, define OSAUDIO_NO_MINIAUDIO_IMPLEMENTATION. If you do this,
you need to make sure you include the implementation before osaudio.c. This would only really be useful if you want
to do a unity build which uses other parts of miniaudio that this file is currently excluding.
*/
#ifndef OSAUDIO_NO_MINIAUDIO_IMPLEMENTATION
#define MA_API static
#define MA_NO_DECODING
#define MA_NO_ENCODING
#define MA_NO_RESOURCE_MANAGER
#define MA_NO_NODE_GRAPH
#define MA_NO_ENGINE
#define MA_NO_GENERATION
#define MINIAUDIO_IMPLEMENTATION
#include "../../miniaudio.h"
#endif

struct _osaudio_t
{
    ma_device device;
    osaudio_info_t info;
    osaudio_config_t config;        /* info.configs will point to this. */
    ma_pcm_rb buffer;
    ma_semaphore bufferSemaphore;   /* The semaphore for controlling access to the buffer. The audio thread will release the semaphore. The read and write functions will wait on it. */
    ma_atomic_bool32 isActive;      /* Starts off as false. Set to true when config.buffer_size data has been written in the case of playback, or as soon as osaudio_read() is called in the case of capture. */
    ma_atomic_bool32 isPaused;
    ma_atomic_bool32 isFlushed;     /* When set, activation of the device will flush any data that's currently in the buffer. Defaults to false, and will be set to true in osaudio_drain() and osaudio_flush(). */
    ma_atomic_bool32 xrunDetected;  /* Used for detecting when an xrun has occurred and returning from osaudio_read/write() when OSAUDIO_FLAG_REPORT_XRUN is enabled. */
    ma_spinlock activateLock;       /* Used for starting and stopping the device. Needed because two variables control this - isActive and isPaused. */
    ma_mutex drainLock;             /* Used for osaudio_drain(). For mutual exclusion between drain() and read()/write(). Technically results in a lock in read()/write(), but not overthinking that since this is just a reference for now. */
};


static ma_bool32 osaudio_g_is_backend_known = MA_FALSE;
static ma_backend osaudio_g_backend = ma_backend_wasapi;
static ma_context osaudio_g_context;
static ma_mutex osaudio_g_context_lock; /* Only used for device enumeration. Created and destroyed with our context. */
static ma_uint32 osaudio_g_refcount = 0;
static ma_spinlock osaudio_g_lock = 0;


static osaudio_result_t osaudio_result_from_miniaudio(ma_result result)
{
    switch (result)
    {
        case MA_SUCCESS:           return OSAUDIO_SUCCESS;
        case MA_INVALID_ARGS:      return OSAUDIO_INVALID_ARGS;
        case MA_INVALID_OPERATION: return OSAUDIO_INVALID_OPERATION;
        case MA_OUT_OF_MEMORY:     return OSAUDIO_OUT_OF_MEMORY;
        default:                   return OSAUDIO_ERROR;
    }
}

static ma_format osaudio_format_to_miniaudio(osaudio_format_t format)
{
    switch (format)
    {
        case OSAUDIO_FORMAT_F32: return ma_format_f32;
        case OSAUDIO_FORMAT_U8:  return ma_format_u8;
        case OSAUDIO_FORMAT_S16: return ma_format_s16;
        case OSAUDIO_FORMAT_S24: return ma_format_s24;
        case OSAUDIO_FORMAT_S32: return ma_format_s32;
        default:                 return ma_format_unknown;
    }
}

static osaudio_format_t osaudio_format_from_miniaudio(ma_format format)
{
    switch (format)
    {
        case ma_format_f32: return OSAUDIO_FORMAT_F32;
        case ma_format_u8:  return OSAUDIO_FORMAT_U8;
        case ma_format_s16: return OSAUDIO_FORMAT_S16;
        case ma_format_s24: return OSAUDIO_FORMAT_S24;
        case ma_format_s32: return OSAUDIO_FORMAT_S32;
        default:            return OSAUDIO_FORMAT_UNKNOWN;
    }
}


static osaudio_channel_t osaudio_channel_from_miniaudio(ma_channel channel)
{
    /* Channel positions between here and miniaudio will remain in sync. */
    return (osaudio_channel_t)channel;
}

static ma_channel osaudio_channel_to_miniaudio(osaudio_channel_t channel)
{
    /* Channel positions between here and miniaudio will remain in sync. */
    return (ma_channel)channel;
}


static void osaudio_dummy_data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    (void)pDevice;
    (void)pOutput;
    (void)pInput;
    (void)frameCount;
}

static osaudio_result_t osaudio_determine_miniaudio_backend(ma_backend* pBackend, ma_device* pDummyDevice)
{
    ma_device dummyDevice;
    ma_device_config dummyDeviceConfig;
    ma_result result;

    /*
    To do this we initialize a dummy device. We allow the caller to make use of this device as an optimization. This is
    only used by osaudio_enumerate_devices() because that can make use of the context from the dummy device rather than
    having to create its own. pDummyDevice can be null.
    */
    if (pDummyDevice == NULL) {
        pDummyDevice = &dummyDevice;
    }

    dummyDeviceConfig = ma_device_config_init(ma_device_type_playback);
    dummyDeviceConfig.dataCallback = osaudio_dummy_data_callback;

    result = ma_device_init(NULL, &dummyDeviceConfig, pDummyDevice);
    if (result != MA_SUCCESS || pDummyDevice->pContext->backend == ma_backend_null) {
        /* Failed to open a default playback device. Try capture. */
        if (result == MA_SUCCESS) {
            /* This means we successfully initialized a device, but its backend is null. It could be that there are no playback devices attached. Try capture. */
            ma_device_uninit(pDummyDevice);
        }

        dummyDeviceConfig = ma_device_config_init(ma_device_type_capture);
        result = ma_device_init(NULL, &dummyDeviceConfig, pDummyDevice);
    }

    if (result != MA_SUCCESS) {
        return osaudio_result_from_miniaudio(result);
    }

    *pBackend = pDummyDevice->pContext->backend;

    /* We're done. */
    if (pDummyDevice == &dummyDevice) {
        ma_device_uninit(&dummyDevice);
    }

    return OSAUDIO_SUCCESS;
}

static osaudio_result_t osaudio_ref_context_nolock()
{
    /* Initialize the global context if necessary. */
    if (osaudio_g_refcount == 0) {
        osaudio_result_t result;

        /* If we haven't got a known backend, we'll need to determine it here. */
        if (osaudio_g_is_backend_known == MA_FALSE) {
            result = osaudio_determine_miniaudio_backend(&osaudio_g_backend, NULL);
            if (result != OSAUDIO_SUCCESS) {
                return result;
            }
        }

        result = osaudio_result_from_miniaudio(ma_context_init(&osaudio_g_backend, 1, NULL, &osaudio_g_context));
        if (result != OSAUDIO_SUCCESS) {
            return result;
        }

        /* Need a mutex for device enumeration. */
        ma_mutex_init(&osaudio_g_context_lock);
    }

    osaudio_g_refcount += 1;

    return OSAUDIO_SUCCESS;
}

static osaudio_result_t osaudio_unref_context_nolock()
{
    if (osaudio_g_refcount == 0) {
        return OSAUDIO_INVALID_OPERATION;
    }

    osaudio_g_refcount -= 1;

    /* Uninitialize the context if we don't have any more references. */
    if (osaudio_g_refcount == 0) {
        ma_context_uninit(&osaudio_g_context);
        ma_mutex_uninit(&osaudio_g_context_lock);
    }

    return OSAUDIO_SUCCESS;
}

static ma_context* osaudio_ref_context()
{
    osaudio_result_t result;

    ma_spinlock_lock(&osaudio_g_lock);
    {
        result = osaudio_ref_context_nolock();
    }
    ma_spinlock_unlock(&osaudio_g_lock);

    if (result != OSAUDIO_SUCCESS) {
        return NULL;
    }

    return &osaudio_g_context;
}

static osaudio_result_t osaudio_unref_context()
{
    osaudio_result_t result;

    ma_spinlock_lock(&osaudio_g_lock);
    {
        result = osaudio_unref_context_nolock();
    }
    ma_spinlock_unlock(&osaudio_g_lock);

    return result;
}


static void osaudio_info_from_miniaudio(osaudio_info_t* info, const ma_device_info* infoMA)
{
    unsigned int iNativeConfig;

    /* It just so happens, by absolutely total coincidence, that the size of the ID and name are the same between here and miniaudio. What are the odds?! */
    memcpy(info->id.data, &infoMA->id, sizeof(info->id.data));
    memcpy(info->name, infoMA->name, sizeof(info->name));

    info->config_count = (unsigned int)infoMA->nativeDataFormatCount;
    for (iNativeConfig = 0; iNativeConfig < info->config_count; iNativeConfig += 1) {
        unsigned int iChannel;

        info->configs[iNativeConfig].device_id = &info->id;
        info->configs[iNativeConfig].direction = info->direction;
        info->configs[iNativeConfig].format    = osaudio_format_from_miniaudio(infoMA->nativeDataFormats[iNativeConfig].format);
        info->configs[iNativeConfig].channels  = (unsigned int)infoMA->nativeDataFormats[iNativeConfig].channels;
        info->configs[iNativeConfig].rate      = (unsigned int)infoMA->nativeDataFormats[iNativeConfig].sampleRate;

        /* miniaudio does not report channel positions in its device info, so leave the channel map unspecified. */
        for (iChannel = 0; iChannel < info->configs[iNativeConfig].channels; iChannel += 1) {
            info->configs[iNativeConfig].channel_map[iChannel] = OSAUDIO_CHANNEL_NONE;
        }
    }
}

static osaudio_result_t osaudio_enumerate_nolock(unsigned int* count, osaudio_info_t** info, ma_context* pContext)
{
    osaudio_result_t result;
    ma_device_info* pPlaybackInfos;
    ma_uint32 playbackCount;
    ma_device_info* pCaptureInfos;
    ma_uint32 captureCount;
    ma_uint32 iInfo;
    size_t allocSize;
    osaudio_info_t* pRunningInfo;
    osaudio_config_t* pRunningConfig;

    /* We now need to retrieve the device information from miniaudio. */
    result = osaudio_result_from_miniaudio(ma_context_get_devices(pContext, &pPlaybackInfos, &playbackCount, &pCaptureInfos, &captureCount));
    if (result != OSAUDIO_SUCCESS) {
        return result;  /* The caller holds the context reference and is responsible for unreferencing it. */
    }

    /*
    Because the caller needs to free the returned pointer it's important that we keep it all in one allocation. Because there can be
    a variable number of native configs we'll have to compute the size of the allocation first, and then do a second pass to fill
    out the data.
    */
    allocSize = ((size_t)playbackCount + (size_t)captureCount) * sizeof(osaudio_info_t);

    /* Now we need to iterate over each playback and capture device and add up the number of native configs. */
    for (iInfo = 0; iInfo < playbackCount; iInfo += 1) {
        ma_context_get_device_info(pContext, ma_device_type_playback, &pPlaybackInfos[iInfo].id, &pPlaybackInfos[iInfo]);
        allocSize += pPlaybackInfos[iInfo].nativeDataFormatCount * sizeof(osaudio_config_t);
    }
    for (iInfo = 0; iInfo < captureCount; iInfo += 1) {
        ma_context_get_device_info(pContext, ma_device_type_capture, &pCaptureInfos[iInfo].id, &pCaptureInfos[iInfo]);
        allocSize += pCaptureInfos[iInfo].nativeDataFormatCount * sizeof(osaudio_config_t);
    }

    /* Now that we know the size of the allocation we can allocate it. */
    *info = (osaudio_info_t*)calloc(1, allocSize);
    if (*info == NULL) {
        return OSAUDIO_OUT_OF_MEMORY;
    }

    pRunningInfo   = *info;
    pRunningConfig = (osaudio_config_t*)(((unsigned char*)*info) + (((size_t)playbackCount + (size_t)captureCount) * sizeof(osaudio_info_t)));

    for (iInfo = 0; iInfo < playbackCount; iInfo += 1) {
        pRunningInfo->direction = OSAUDIO_OUTPUT;
        pRunningInfo->configs   = pRunningConfig;
        osaudio_info_from_miniaudio(pRunningInfo, &pPlaybackInfos[iInfo]);

        pRunningConfig += pRunningInfo->config_count;
        pRunningInfo   += 1;
    }

    for (iInfo = 0; iInfo < captureCount; iInfo += 1) {
        pRunningInfo->direction = OSAUDIO_INPUT;
        pRunningInfo->configs   = pRunningConfig;
        osaudio_info_from_miniaudio(pRunningInfo, &pCaptureInfos[iInfo]);

        pRunningConfig += pRunningInfo->config_count;
        pRunningInfo   += 1;
    }

    *count = (unsigned int)(playbackCount + captureCount);

    return OSAUDIO_SUCCESS;
}

osaudio_result_t osaudio_enumerate(unsigned int* count, osaudio_info_t** info)
{
    osaudio_result_t result;
    ma_context* pContext = NULL;

    if (count != NULL) {
        *count = 0;
    }
    if (info != NULL) {
        *info = NULL;
    }

    if (count == NULL || info == NULL) {
        return OSAUDIO_INVALID_ARGS;
    }

    pContext = osaudio_ref_context();
    if (pContext == NULL) {
        return OSAUDIO_ERROR;
    }

    ma_mutex_lock(&osaudio_g_context_lock);
    {
        result = osaudio_enumerate_nolock(count, info, pContext);
    }
    ma_mutex_unlock(&osaudio_g_context_lock);

    /* We're done. We can now return. */
    osaudio_unref_context();
    return result;
}


void osaudio_config_init(osaudio_config_t* config, osaudio_direction_t direction)
{
    if (config == NULL) {
        return;
    }

    memset(config, 0, sizeof(*config));
    config->direction = direction;
}


static void osaudio_data_callback_playback(osaudio_t audio, void* pOutput, ma_uint32 frameCount)
{
    /*
    If there's content in the buffer, read from it and release the semaphore. There needs to be a whole frameCount chunk
    in the buffer so we can keep everything in nice clean chunks. When we read from the buffer, we release a semaphore
    which will allow the main thread to write more data to the buffer.
    */
    ma_uint32 framesToRead;
    ma_uint32 framesProcessed;
    void* pBuffer;

    framesToRead = ma_pcm_rb_available_read(&audio->buffer);
    if (framesToRead > frameCount) {
        framesToRead = frameCount;
    }

    framesProcessed = framesToRead;

    /* For robustness we should run this in a loop in case the buffer wraps around. */
    while (frameCount > 0) {
        framesToRead = frameCount;

        ma_pcm_rb_acquire_read(&audio->buffer, &framesToRead, &pBuffer);
        if (framesToRead == 0) {
            break;
        }

        memcpy(pOutput, pBuffer, framesToRead * ma_get_bytes_per_frame(audio->device.playback.format, audio->device.playback.channels));
        ma_pcm_rb_commit_read(&audio->buffer, framesToRead);

        frameCount -= framesToRead;
        pOutput = ((unsigned char*)pOutput) + (framesToRead * ma_get_bytes_per_frame(audio->device.playback.format, audio->device.playback.channels));
    }

    /* Make sure we release the semaphore if we ended up reading anything. */
    if (framesProcessed > 0) {
        ma_semaphore_release(&audio->bufferSemaphore);
    }

    if (frameCount > 0) {
        /* Underrun. Pad with silence. */
        ma_silence_pcm_frames(pOutput, frameCount, audio->device.playback.format, audio->device.playback.channels);
        ma_atomic_bool32_set(&audio->xrunDetected, MA_TRUE);
    }
}

static void osaudio_data_callback_capture(osaudio_t audio, const void* pInput, ma_uint32 frameCount)
{
    /* If there's space in the buffer, write to it and release the semaphore. The semaphore is only released on full-chunk boundaries. */
    ma_uint32 framesToWrite;
    ma_uint32 framesProcessed;
    void* pBuffer;

    framesToWrite = ma_pcm_rb_available_write(&audio->buffer);
    if (framesToWrite > frameCount) {
        framesToWrite = frameCount;
    }

    framesProcessed = framesToWrite;

    while (frameCount > 0) {
        framesToWrite = frameCount;

        ma_pcm_rb_acquire_write(&audio->buffer, &framesToWrite, &pBuffer);
        if (framesToWrite == 0) {
            break;
        }

        memcpy(pBuffer, pInput, framesToWrite * ma_get_bytes_per_frame(audio->device.capture.format, audio->device.capture.channels));
        ma_pcm_rb_commit_write(&audio->buffer, framesToWrite);

        frameCount -= framesToWrite;
        pInput = ((const unsigned char*)pInput) + (framesToWrite * ma_get_bytes_per_frame(audio->device.capture.format, audio->device.capture.channels));
    }

    /* Make sure we release the semaphore if we ended up writing anything. */
    if (framesProcessed > 0) {
        ma_semaphore_release(&audio->bufferSemaphore);
    }

    if (frameCount > 0) {
        /* Overrun. Not enough room to move our input data into the buffer. */
        ma_atomic_bool32_set(&audio->xrunDetected, MA_TRUE);
    }
}

static void osaudio_nofication_callback(const ma_device_notification* pNotification)
{
    osaudio_t audio = (osaudio_t)pNotification->pDevice->pUserData;

    if (audio->config.notification != NULL) {
        osaudio_notification_t notification;

        switch (pNotification->type)
        {
            case ma_device_notification_type_started:
            {
                notification.type = OSAUDIO_NOTIFICATION_STARTED;
            } break;
            case ma_device_notification_type_stopped:
            {
                notification.type = OSAUDIO_NOTIFICATION_STOPPED;
            } break;
            case ma_device_notification_type_rerouted:
            {
                notification.type = OSAUDIO_NOTIFICATION_REROUTED;
            } break;
            case ma_device_notification_type_interruption_began:
            {
                notification.type = OSAUDIO_NOTIFICATION_INTERRUPTION_BEGIN;
            } break;
            case ma_device_notification_type_interruption_ended:
            {
                notification.type = OSAUDIO_NOTIFICATION_INTERRUPTION_END;
            } break;
        }

        audio->config.notification(audio->config.user_data, &notification);
    }
}

static void osaudio_data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
    osaudio_t audio = (osaudio_t)pDevice->pUserData;

    if (audio->info.direction == OSAUDIO_OUTPUT) {
        osaudio_data_callback_playback(audio, pOutput, frameCount);
    } else {
        osaudio_data_callback_capture(audio, pInput, frameCount);
    }
}

osaudio_result_t osaudio_open(osaudio_t* audio, osaudio_config_t* config)
{
    osaudio_result_t result;
    ma_context* pContext = NULL;
    ma_device_config deviceConfig;
    ma_device_info deviceInfo;
    int periodCount = 2;
    unsigned int iChannel;

    if (audio != NULL) {
        *audio = NULL;  /* Safety. */
    }

    if (audio == NULL || config == NULL) {
        return OSAUDIO_INVALID_ARGS;
    }

    pContext = osaudio_ref_context();   /* Will be unreferenced in osaudio_close(). */
    if (pContext == NULL) {
        return OSAUDIO_ERROR;
    }

    *audio = (osaudio_t)calloc(1, sizeof(**audio));
    if (*audio == NULL) {
        osaudio_unref_context();
        return OSAUDIO_OUT_OF_MEMORY;
    }

    if (config->direction == OSAUDIO_OUTPUT) {
        deviceConfig = ma_device_config_init(ma_device_type_playback);
        deviceConfig.playback.format   = osaudio_format_to_miniaudio(config->format);
        deviceConfig.playback.channels = (ma_uint32)config->channels;

        if (config->channel_map[0] != OSAUDIO_CHANNEL_NONE) {
            for (iChannel = 0; iChannel < config->channels; iChannel += 1) {
                deviceConfig.playback.pChannelMap[iChannel] = osaudio_channel_to_miniaudio(config->channel_map[iChannel]);
            }
        }
    } else {
        deviceConfig = ma_device_config_init(ma_device_type_capture);
        deviceConfig.capture.format   = osaudio_format_to_miniaudio(config->format);
        deviceConfig.capture.channels = (ma_uint32)config->channels;

        if (config->channel_map[0] != OSAUDIO_CHANNEL_NONE) {
            for (iChannel = 0; iChannel < config->channels; iChannel += 1) {
                deviceConfig.capture.pChannelMap[iChannel] = osaudio_channel_to_miniaudio(config->channel_map[iChannel]);
            }
        }
    }

    deviceConfig.sampleRate = (ma_uint32)config->rate;

    /* If the buffer size is 0, we'll default to 10ms. */
    deviceConfig.periodSizeInFrames = (ma_uint32)config->buffer_size;
    if (deviceConfig.periodSizeInFrames == 0) {
        deviceConfig.periodSizeInMilliseconds = 10;
    }

    deviceConfig.dataCallback = osaudio_data_callback;
    deviceConfig.pUserData    = *audio;

    if ((config->flags & OSAUDIO_FLAG_NO_REROUTING) != 0) {
        deviceConfig.wasapi.noAutoStreamRouting = MA_TRUE;
    }

    if (config->notification != NULL) {
        deviceConfig.notificationCallback = osaudio_nofication_callback;
    }

    result = osaudio_result_from_miniaudio(ma_device_init(pContext, &deviceConfig, &((*audio)->device)));
    if (result != OSAUDIO_SUCCESS) {
        free(*audio);
        osaudio_unref_context();
        return result;
    }

    /* The input config needs to be updated with actual values. */
    if (config->direction == OSAUDIO_OUTPUT) {
        config->format   = osaudio_format_from_miniaudio((*audio)->device.playback.format);
        config->channels = (unsigned int)(*audio)->device.playback.channels;

        for (iChannel = 0; iChannel < config->channels; iChannel += 1) {
            config->channel_map[iChannel] = osaudio_channel_from_miniaudio((*audio)->device.playback.channelMap[iChannel]);
        }
    } else {
        config->format   = osaudio_format_from_miniaudio((*audio)->device.capture.format);
        config->channels = (unsigned int)(*audio)->device.capture.channels;

        for (iChannel = 0; iChannel < config->channels; iChannel += 1) {
            config->channel_map[iChannel] = osaudio_channel_from_miniaudio((*audio)->device.capture.channelMap[iChannel]);
        }
    }

    config->rate = (unsigned int)(*audio)->device.sampleRate;

    if (deviceConfig.periodSizeInFrames == 0) {
        if (config->direction == OSAUDIO_OUTPUT) {
            config->buffer_size = (int)(*audio)->device.playback.internalPeriodSizeInFrames;
        } else {
            config->buffer_size = (int)(*audio)->device.capture.internalPeriodSizeInFrames;
        }
    }


    /* The device object needs to have its local info built. We can get the ID and name from miniaudio. */
    result = osaudio_result_from_miniaudio(ma_device_get_info(&(*audio)->device, (*audio)->device.type, &deviceInfo));
    if (result == OSAUDIO_SUCCESS) {
        memcpy((*audio)->info.id.data, &deviceInfo.id, sizeof((*audio)->info.id.data));
        memcpy((*audio)->info.name, deviceInfo.name, sizeof((*audio)->info.name));
    }

    (*audio)->info.direction    = config->direction;
    (*audio)->info.config_count = 1;
    (*audio)->info.configs      = &(*audio)->config;
    (*audio)->config            = *config;
    (*audio)->config.device_id  = &(*audio)->info.id;


    /* We need a ring buffer. */
    result = osaudio_result_from_miniaudio(ma_pcm_rb_init(osaudio_format_to_miniaudio(config->format), (ma_uint32)config->channels, (ma_uint32)config->buffer_size * periodCount, NULL, NULL, &(*audio)->buffer));
    if (result != OSAUDIO_SUCCESS) {
        ma_device_uninit(&(*audio)->device);
        free(*audio);
        osaudio_unref_context();
        return result;
    }

    /* Now we need a semaphore to control access to the ring buffer to block read/write when necessary. */
    result = osaudio_result_from_miniaudio(ma_semaphore_init((config->direction == OSAUDIO_OUTPUT) ? periodCount : 0, &(*audio)->bufferSemaphore));
    if (result != OSAUDIO_SUCCESS) {
        ma_pcm_rb_uninit(&(*audio)->buffer);
        ma_device_uninit(&(*audio)->device);
        free(*audio);
        osaudio_unref_context();
        return result;
    }

    return OSAUDIO_SUCCESS;
}

osaudio_result_t osaudio_close(osaudio_t audio)
{
    if (audio == NULL) {
        return OSAUDIO_INVALID_ARGS;
    }

    ma_device_uninit(&audio->device);
    ma_semaphore_uninit(&audio->bufferSemaphore);
    ma_pcm_rb_uninit(&audio->buffer);
    free(audio);

    osaudio_unref_context();

    return OSAUDIO_SUCCESS;
}

static void osaudio_activate(osaudio_t audio)
{
    ma_spinlock_lock(&audio->activateLock);
    {
        if (ma_atomic_bool32_get(&audio->isActive) == MA_FALSE) {
            ma_atomic_bool32_set(&audio->isActive, MA_TRUE);

            /* If we need to flush, do so now before starting the device. */
            if (ma_atomic_bool32_get(&audio->isFlushed) == MA_TRUE) {
                ma_pcm_rb_reset(&audio->buffer);
                ma_atomic_bool32_set(&audio->isFlushed, MA_FALSE);
            }

            /* If we're not paused, start the device. */
            if (ma_atomic_bool32_get(&audio->isPaused) == MA_FALSE) {
                ma_device_start(&audio->device);
            }
        }
    }
    ma_spinlock_unlock(&audio->activateLock);
}

osaudio_result_t osaudio_write(osaudio_t audio, const void* data, unsigned int frame_count)
{
    if (audio == NULL) {
        return OSAUDIO_INVALID_ARGS;
    }

    ma_mutex_lock(&audio->drainLock);
    {
        /* Don't return until everything has been written. */
        while (frame_count > 0) {
            ma_uint32 framesToWrite = frame_count;
            ma_uint32 framesAvailableInBuffer;

            /* There should be enough space available in the buffer now, but check anyway. */
            framesAvailableInBuffer = ma_pcm_rb_available_write(&audio->buffer);
            if (framesAvailableInBuffer > 0) {
                void* pBuffer;

                if (framesToWrite > framesAvailableInBuffer) {
                    framesToWrite = framesAvailableInBuffer;
                }

                ma_pcm_rb_acquire_write(&audio->buffer, &framesToWrite, &pBuffer);
                {
                    ma_copy_pcm_frames(pBuffer, data, framesToWrite, audio->device.playback.format, audio->device.playback.channels);
                }
                ma_pcm_rb_commit_write(&audio->buffer, framesToWrite);

                frame_count -= (unsigned int)framesToWrite;
                data = (const void*)((const unsigned char*)data + (framesToWrite * ma_get_bytes_per_frame(audio->device.playback.format, audio->device.playback.channels)));

                if (framesToWrite > 0) {
                    osaudio_activate(audio);
                }
            } else {
                /* If we get here it means there's no space available in the buffer. We need to wait for some to free up. */
                ma_semaphore_wait(&audio->bufferSemaphore);

                /* If we're not active it probably means we've flushed. This write needs to be aborted. */
                if (ma_atomic_bool32_get(&audio->isActive) == MA_FALSE) {
                    break;
                }
            }
        }
    }
    ma_mutex_unlock(&audio->drainLock);

    if ((audio->config.flags & OSAUDIO_FLAG_REPORT_XRUN) != 0) {
        if (ma_atomic_bool32_get(&audio->xrunDetected)) {
            ma_atomic_bool32_set(&audio->xrunDetected, MA_FALSE);
            return OSAUDIO_XRUN;
        }
    }

    return OSAUDIO_SUCCESS;
}

osaudio_result_t osaudio_read(osaudio_t audio, void* data, unsigned int frame_count)
{
    if (audio == NULL) {
        return OSAUDIO_INVALID_ARGS;
    }

    ma_mutex_lock(&audio->drainLock);
    {
        while (frame_count > 0) {
            ma_uint32 framesToRead = frame_count;
            ma_uint32 framesAvailableInBuffer;

            /* There should be enough data available in the buffer now, but check anyway. */
            framesAvailableInBuffer = ma_pcm_rb_available_read(&audio->buffer);
            if (framesAvailableInBuffer > 0) {
                void* pBuffer;

                if (framesToRead > framesAvailableInBuffer) {
                    framesToRead = framesAvailableInBuffer;
                }

                ma_pcm_rb_acquire_read(&audio->buffer, &framesToRead, &pBuffer);
                {
                    ma_copy_pcm_frames(data, pBuffer, framesToRead, audio->device.capture.format, audio->device.capture.channels);
                }
                ma_pcm_rb_commit_read(&audio->buffer, framesToRead);

                frame_count -= (unsigned int)framesToRead;
                data = (void*)((unsigned char*)data + (framesToRead * ma_get_bytes_per_frame(audio->device.capture.format, audio->device.capture.channels)));
            } else {
                /* Activate the device from the get go or else we'll never end up capturing anything. */
                osaudio_activate(audio);

                /* If we get here it means there's not enough data available in the buffer. We need to wait for more. */
                ma_semaphore_wait(&audio->bufferSemaphore);

                /* If we're not active it probably means we've flushed. This read needs to be aborted. */
                if (ma_atomic_bool32_get(&audio->isActive) == MA_FALSE) {
                    break;
                }
            }
        }
    }
    ma_mutex_unlock(&audio->drainLock);

    if ((audio->config.flags & OSAUDIO_FLAG_REPORT_XRUN) != 0) {
        if (ma_atomic_bool32_get(&audio->xrunDetected)) {
            ma_atomic_bool32_set(&audio->xrunDetected, MA_FALSE);
            return OSAUDIO_XRUN;
        }
    }

    return OSAUDIO_SUCCESS;
}

osaudio_result_t osaudio_drain(osaudio_t audio)
{
    if (audio == NULL) {
        return OSAUDIO_INVALID_ARGS;
    }

    /* This cannot be called while the device is in a paused state. */
    if (ma_atomic_bool32_get(&audio->isPaused)) {
        return OSAUDIO_DEVICE_STOPPED;
    }

    /* For capture we want to stop the device immediately or else we won't ever drain the buffer because miniaudio will be constantly filling it. */
    if (audio->info.direction == OSAUDIO_INPUT) {
        ma_device_stop(&audio->device);
    }

    /*
    Mark the device as inactive *before* releasing the semaphore. When read/write completes waiting
    on the semaphore, they'll check this flag and abort.
    */
    ma_atomic_bool32_set(&audio->isActive, MA_FALSE);

    /*
    Again, in capture mode we need to release the semaphore before waiting for the drain lock because
    there's a chance read() will be waiting on the semaphore and will need to be woken up in order for
    it to be given a chance to return.
    */
    if (audio->info.direction == OSAUDIO_INPUT) {
        ma_semaphore_release(&audio->bufferSemaphore);
    }

    /* Now we need to wait for any pending reads or writes to complete. */
    ma_mutex_lock(&audio->drainLock);
    {
        /* No processing should be happening on the buffer at this point. Wait for miniaudio to consume the buffer. */
        while (ma_pcm_rb_available_read(&audio->buffer) > 0) {
            ma_sleep(1);
        }

        /*
        At this point the buffer should be empty, and we shouldn't be in any read or write calls. If
        it's a playback device, we'll want to stop the device. There's no need to release the semaphore.
        */
        if (audio->info.direction == OSAUDIO_OUTPUT) {
            ma_device_stop(&audio->device);
        }
    }
    ma_mutex_unlock(&audio->drainLock);

    return OSAUDIO_SUCCESS;
}
|
||||
|
||||
osaudio_result_t osaudio_flush(osaudio_t audio)
{
    if (audio == NULL) {
        return OSAUDIO_INVALID_ARGS;
    }

    /*
    First stop the device. This ensures the miniaudio background thread doesn't try modifying the
    buffer from under us while we're trying to flush it.
    */
    ma_device_stop(&audio->device);

    /*
    Mark the device as inactive *before* releasing the semaphore. When read/write finish waiting
    on the semaphore, they'll check this flag and abort.
    */
    ma_atomic_bool32_set(&audio->isActive, MA_FALSE);

    /*
    Release the semaphore after marking the device as inactive. This needs to be released in order
    to wake up osaudio_read() and osaudio_write().
    */
    ma_semaphore_release(&audio->bufferSemaphore);

    /*
    The buffer should only be modified by osaudio_read() or osaudio_write(), or the miniaudio
    background thread. Therefore, we don't actually clear the buffer here. Instead we'll clear it
    in osaudio_activate(), depending on whether or not the below flag is set.
    */
    ma_atomic_bool32_set(&audio->isFlushed, MA_TRUE);

    return OSAUDIO_SUCCESS;
}

osaudio_result_t osaudio_pause(osaudio_t audio)
{
    osaudio_result_t result = OSAUDIO_SUCCESS;

    if (audio == NULL) {
        return OSAUDIO_INVALID_ARGS;
    }

    ma_spinlock_lock(&audio->activateLock);
    {
        if (ma_atomic_bool32_get(&audio->isPaused) == MA_FALSE) {
            ma_atomic_bool32_set(&audio->isPaused, MA_TRUE);

            /* No need to stop the device if it's not active. */
            if (ma_atomic_bool32_get(&audio->isActive)) {
                result = osaudio_result_from_miniaudio(ma_device_stop(&audio->device));
            }
        }
    }
    ma_spinlock_unlock(&audio->activateLock);

    return result;
}

osaudio_result_t osaudio_resume(osaudio_t audio)
{
    osaudio_result_t result = OSAUDIO_SUCCESS;

    if (audio == NULL) {
        return OSAUDIO_INVALID_ARGS;
    }

    ma_spinlock_lock(&audio->activateLock);
    {
        if (ma_atomic_bool32_get(&audio->isPaused)) {
            ma_atomic_bool32_set(&audio->isPaused, MA_FALSE);

            /* Don't start the device unless it's active. */
            if (ma_atomic_bool32_get(&audio->isActive)) {
                result = osaudio_result_from_miniaudio(ma_device_start(&audio->device));
            }
        }
    }
    ma_spinlock_unlock(&audio->activateLock);

    return result;
}

unsigned int osaudio_get_avail(osaudio_t audio)
{
    if (audio == NULL) {
        return 0;
    }

    if (audio->info.direction == OSAUDIO_OUTPUT) {
        return ma_pcm_rb_available_write(&audio->buffer);
    } else {
        return ma_pcm_rb_available_read(&audio->buffer);
    }
}

const osaudio_info_t* osaudio_get_info(osaudio_t audio)
{
    if (audio == NULL) {
        return NULL;
    }

    return &audio->info;
}

#endif  /* osaudio_miniaudio_c */
196
thirdparty/miniaudio-0.11.24/extras/osaudio/tests/osaudio_deviceio.c
vendored
Normal file
@@ -0,0 +1,196 @@
#include "../osaudio.h"

/* This example uses miniaudio for decoding audio files. */
#define MINIAUDIO_IMPLEMENTATION
#include "../../../miniaudio.h"

#include <stdlib.h>
#include <stdio.h>
#include <string.h>

#define MODE_PLAYBACK 0
#define MODE_CAPTURE 1
#define MODE_DUPLEX 2

void enumerate_devices()
{
    int result;
    unsigned int iDevice;
    unsigned int count;
    osaudio_info_t* pDeviceInfos;

    result = osaudio_enumerate(&count, &pDeviceInfos);
    if (result != OSAUDIO_SUCCESS) {
        printf("Failed to enumerate audio devices.\n");
        return;
    }

    for (iDevice = 0; iDevice < count; iDevice += 1) {
        printf("(%s) %s\n", (pDeviceInfos[iDevice].direction == OSAUDIO_OUTPUT) ? "Playback" : "Capture", pDeviceInfos[iDevice].name);
    }

    free(pDeviceInfos);
}

osaudio_t open_device(int direction)
{
    int result;
    osaudio_t audio;
    osaudio_config_t config;

    osaudio_config_init(&config, direction);
    config.format = OSAUDIO_FORMAT_F32;
    config.channels = 2;
    config.rate = 48000;
    config.flags = OSAUDIO_FLAG_REPORT_XRUN;

    result = osaudio_open(&audio, &config);
    if (result != OSAUDIO_SUCCESS) {
        printf("Failed to open audio device.\n");
        return NULL;
    }

    return audio;
}

void do_playback(int argc, char** argv)
{
    int result;
    osaudio_t audio;
    const osaudio_config_t* config;
    const char* pFilePath = NULL;
    ma_result resultMA;
    ma_decoder_config decoderConfig;
    ma_decoder decoder;

    audio = open_device(OSAUDIO_OUTPUT);
    if (audio == NULL) {
        printf("Failed to open audio device.\n");
        return;
    }

    config = &osaudio_get_info(audio)->configs[0];

    /* We want to always use f32. */
    if (config->format == OSAUDIO_FORMAT_F32) {
        if (argc > 1) {
            pFilePath = argv[1];

            decoderConfig = ma_decoder_config_init(ma_format_f32, (ma_uint32)config->channels, (ma_uint32)config->rate);

            resultMA = ma_decoder_init_file(pFilePath, &decoderConfig, &decoder);
            if (resultMA == MA_SUCCESS) {
                /* Now just keep looping over each sample until we get to the end. */
                for (;;) {
                    float frames[1024];
                    ma_uint64 frameCount;

                    resultMA = ma_decoder_read_pcm_frames(&decoder, frames, ma_countof(frames) / config->channels, &frameCount);
                    if (resultMA != MA_SUCCESS) {
                        break;
                    }

                    result = osaudio_write(audio, frames, (unsigned int)frameCount); /* Safe cast. */
                    if (result != OSAUDIO_SUCCESS && result != OSAUDIO_XRUN) {
                        printf("Error writing to audio device.\n");
                        break;
                    }

                    if (result == OSAUDIO_XRUN) {
                        printf("WARNING: An xrun occurred while writing to the playback device.\n");
                    }
                }
            } else {
                printf("Failed to open file: %s\n", pFilePath);
            }
        } else {
            printf("No input file.\n");
        }
    } else {
        printf("Unsupported device format.\n");
    }

    /* Getting here means we're done and we can tear down. */
    osaudio_close(audio);
}

void do_duplex()
{
    int result;
    osaudio_t capture;
    osaudio_t playback;

    capture = open_device(OSAUDIO_INPUT);
    if (capture == NULL) {
        printf("Failed to open capture device.\n");
        return;
    }

    playback = open_device(OSAUDIO_OUTPUT);
    if (playback == NULL) {
        osaudio_close(capture);
        printf("Failed to open playback device.\n");
        return;
    }

    for (;;) {
        float frames[1024];
        unsigned int frameCount;

        frameCount = ma_countof(frames) / osaudio_get_info(capture)->configs[0].channels;

        /* Capture. */
        result = osaudio_read(capture, frames, frameCount);
        if (result != OSAUDIO_SUCCESS && result != OSAUDIO_XRUN) {
            printf("Error reading from capture device.\n");
            break;
        }

        if (result == OSAUDIO_XRUN) {
            printf("WARNING: An xrun occurred while reading from the capture device.\n");
        }


        /* Playback. */
        result = osaudio_write(playback, frames, frameCount);
        if (result != OSAUDIO_SUCCESS && result != OSAUDIO_XRUN) {
            printf("Error writing to playback device.\n");
            break;
        }

        if (result == OSAUDIO_XRUN) {
            printf("WARNING: An xrun occurred while writing to the playback device.\n");
        }
    }

    osaudio_close(capture);
    osaudio_close(playback);
}

int main(int argc, char** argv)
{
    int mode = MODE_PLAYBACK;
    int iarg;

    enumerate_devices();

    for (iarg = 0; iarg < argc; iarg += 1) {
        if (strcmp(argv[iarg], "capture") == 0) {
            mode = MODE_CAPTURE;
        } else if (strcmp(argv[iarg], "duplex") == 0) {
            mode = MODE_DUPLEX;
        }
    }

    switch (mode)
    {
        case MODE_PLAYBACK: do_playback(argc, argv); break;
        case MODE_CAPTURE: break;
        case MODE_DUPLEX: do_duplex(); break;
    }

    (void)argc;
    (void)argv;

    return 0;
}
283
thirdparty/miniaudio-0.11.24/extras/osaudio/tests/osaudio_sine.c
vendored
Normal file
@@ -0,0 +1,283 @@
#include "../osaudio.h"
#include "../../decoders/litewav/litewav.c"

#include <stdio.h>
#include <stdlib.h> /* free() */

#if defined(__MSDOS__) || defined(__DOS__)
#include <dos.h>
#define OSAUDIO_DOS
#endif

const char* format_to_string(osaudio_format_t format)
{
    switch (format)
    {
        case OSAUDIO_FORMAT_F32: return "F32";
        case OSAUDIO_FORMAT_U8:  return "U8";
        case OSAUDIO_FORMAT_S16: return "S16";
        case OSAUDIO_FORMAT_S24: return "S24";
        case OSAUDIO_FORMAT_S32: return "S32";
        default: return "Unknown Format";
    }
}

void enumerate_devices()
{
    osaudio_result_t result;
    osaudio_info_t* pDeviceInfos;
    unsigned int deviceCount;
    unsigned int iDevice;

    result = osaudio_enumerate(&deviceCount, &pDeviceInfos);
    if (result != OSAUDIO_SUCCESS) {
        printf("Failed to enumerate devices.");
        return;
    }

    for (iDevice = 0; iDevice < deviceCount; iDevice += 1) {
        osaudio_info_t* pDeviceInfo = &pDeviceInfos[iDevice];

        printf("Device %u: [%s] %s\n", iDevice, (pDeviceInfo->direction == OSAUDIO_OUTPUT) ? "Playback" : "Capture", pDeviceInfo->name);

        #if 0
        {
            unsigned int iFormat;

            printf(" Native Formats\n");
            for (iFormat = 0; iFormat < pDeviceInfo->config_count; iFormat += 1) {
                osaudio_config_t* pConfig = &pDeviceInfo->configs[iFormat];
                printf(" %s %uHz %u channels\n", format_to_string(pConfig->format), pConfig->rate, pConfig->channels);
            }
        }
        #endif
    }

    free(pDeviceInfos);
}

extern int g_TESTING;

#include <string.h>

/* Sine wave generation. */
#include <math.h>

#if defined(OSAUDIO_DOS)
/* Far memory allocation for DOS builds (via _dos_allocmem()). */
static void OSAUDIO_FAR* far_malloc(unsigned int sz)
{
    unsigned int segment;
    unsigned int err;

    err = _dos_allocmem((sz + 15) >> 4, &segment);  /* Size is in 16-byte paragraphs. Round up. */
    if (err == 0) {
        return MK_FP(segment, 0);
    } else {
        return NULL;
    }
}
#else
#define far_malloc malloc
#endif

static char OSAUDIO_FAR* gen_sine_u8(unsigned long frameCount, unsigned int channels, unsigned int sampleRate)
{
    float phase = 0;
    float phaseIncrement = 2 * 3.14159265f * 220.0f / (float)sampleRate;   /* 220Hz tone. */
    unsigned long iFrame;
    char OSAUDIO_FAR* pData;
    char OSAUDIO_FAR* pRunningData;

    pData = (char OSAUDIO_FAR*)far_malloc(frameCount * channels);
    if (pData == NULL) {
        return NULL;
    }

    pRunningData = pData;

    for (iFrame = 0; iFrame < frameCount; iFrame += 1) {
        unsigned int iChannel;
        float sample = (float)sin(phase) * 0.2f;
        sample = (sample + 1.0f) * 127.5f;

        for (iChannel = 0; iChannel < channels; iChannel += 1) {
            pRunningData[iChannel] = (unsigned char)sample;
        }

        pRunningData += channels;
        phase += phaseIncrement;
    }

    return pData;
}

static short OSAUDIO_FAR* gen_sine_s16(unsigned long frameCount, unsigned int channels, unsigned int sampleRate)
{
    float phase = 0;
    float phaseIncrement = 2 * 3.14159265f * 220.0f / (float)sampleRate;   /* 220Hz tone. */
    unsigned long iFrame;
    short OSAUDIO_FAR* pData;
    short OSAUDIO_FAR* pRunningData;

    pData = (short OSAUDIO_FAR*)far_malloc(frameCount * channels * sizeof(short));
    if (pData == NULL) {
        return NULL;
    }

    pRunningData = pData;

    for (iFrame = 0; iFrame < frameCount; iFrame += 1) {
        unsigned int iChannel;
        float sample = (float)sin(phase) * 0.2f;
        sample = sample * 32767.5f;

        for (iChannel = 0; iChannel < channels; iChannel += 1) {
            pRunningData[iChannel] = (short)sample;
        }

        pRunningData += channels;
        phase += phaseIncrement;
    }

    return pData;
}

//
//
//float sinePhase = 0;
//float sinePhaseIncrement = 0;
//float sineVolume = 0.2f;
//
//static void sine_init()
//{
//    sinePhase = 0;
//    sinePhaseIncrement = 2 * 3.14159265f * 440.0f / 44100.0f;
//}
//
//static void sine_u8(unsigned char* dst, unsigned int frameCount, unsigned int channels)
//{
//    unsigned int iFrame;
//
//    for (iFrame = 0; iFrame < frameCount; iFrame += 1) {
//        unsigned int iChannel;
//        float sample = (float)sin(sinePhase) * sineVolume;
//        sample = (sample + 1.0f) * 127.5f;
//
//        for (iChannel = 0; iChannel < channels; iChannel += 1) {
//            dst[iChannel] = (unsigned char)sample;
//        }
//
//        dst += channels;
//        sinePhase += sinePhaseIncrement;
//    }
//}
//
//
//unsigned char data[4096];

int main(int argc, char** argv)
{
    osaudio_result_t result;
    osaudio_t audio;
    osaudio_config_t config;
    void OSAUDIO_FAR* pSineWave;
    unsigned long sineWaveFrameCount;
    unsigned long sineWaveCursor = 0;

    enumerate_devices();

    osaudio_config_init(&config, OSAUDIO_OUTPUT);
    config.format = OSAUDIO_FORMAT_S16;
    config.channels = 2;
    config.rate = 44100;

    result = osaudio_open(&audio, &config);
    if (result != OSAUDIO_SUCCESS) {
        printf("Failed to initialize audio.\n");
        return -1;
    }

    printf("Device: %s (%s %uHz %u channels)\n", osaudio_get_info(audio)->name, format_to_string(config.format), config.rate, config.channels);

    //printf("sizeof(void*) = %u\n", (unsigned int)sizeof(void far *));

    /* 1 second. */
    sineWaveFrameCount = config.rate * 1;

    if (config.format == OSAUDIO_FORMAT_U8) {
        pSineWave = gen_sine_u8(sineWaveFrameCount, config.channels, config.rate);
    } else {
        pSineWave = gen_sine_s16(sineWaveFrameCount, config.channels, config.rate);
    }

    if (pSineWave == NULL) {
        printf("Failed to generate sine wave.\n");
        return -1;
    }

    if (config.format == OSAUDIO_FORMAT_U8) {
        /*unsigned int framesToSilence = config.rate;
        while (framesToSilence > 0) {
            unsigned int framesToWrite;
            char silence[256];
            memset(silence, 128, sizeof(silence));

            framesToWrite = framesToSilence;
            if (framesToWrite > sizeof(silence) / config.channels) {
                framesToWrite = sizeof(silence) / config.channels;
            }

            osaudio_write(audio, silence, framesToWrite);
            framesToSilence -= framesToWrite;
        }*/

        while (sineWaveCursor < sineWaveFrameCount) {
            unsigned long framesToWrite = sineWaveFrameCount - sineWaveCursor;
            if (framesToWrite > 0xFFFF) {
                framesToWrite = 0xFFFF;
            }

            //printf("Writing sine wave: %u\n", (unsigned int)framesToWrite);

            osaudio_write(audio, (char OSAUDIO_FAR*)pSineWave + (sineWaveCursor * config.channels), (unsigned int)framesToWrite);
            sineWaveCursor += framesToWrite;

            //printf("TRACE 0\n");
            //sine_u8(data, frameCount, config.channels);
            //printf("TRACE: %d\n", frameCount);
            //osaudio_write(audio, data, frameCount);
            //printf("DONE LOOP\n");
        }
    } else if (config.format == OSAUDIO_FORMAT_S16) {
        while (sineWaveCursor < sineWaveFrameCount) {
            unsigned long framesToWrite = sineWaveFrameCount - sineWaveCursor;
            if (framesToWrite > 0xFFFF) {
                framesToWrite = 0xFFFF;
            }

            osaudio_write(audio, (short OSAUDIO_FAR*)pSineWave + (sineWaveCursor * config.channels), (unsigned int)framesToWrite);
            sineWaveCursor += framesToWrite;
        }
    }

#if defined(OSAUDIO_DOS)
    printf("Processing...\n");
    for (;;) {
        /* Temporary. Just spinning here to ensure the program stays active. */
        //delay(1);
        if (g_TESTING > 0) {
            //printf("TESTING: %d\n", g_TESTING);
        }
    }
#endif

    printf("Shutting down... ");
    osaudio_close(audio);
    printf("Done.\n");

    (void)argc;
    (void)argv;

    return 0;
}
5584
thirdparty/miniaudio-0.11.24/extras/stb_vorbis.c
vendored
Normal file
2
thirdparty/miniaudio-0.11.24/miniaudio.c
vendored
Normal file
@@ -0,0 +1,2 @@
#define MINIAUDIO_IMPLEMENTATION
#include "miniaudio.h"
95844
thirdparty/miniaudio-0.11.24/miniaudio.h
vendored
Normal file
15
thirdparty/miniaudio-0.11.24/miniaudio.pc.in
vendored
Normal file
@@ -0,0 +1,15 @@
prefix=@CMAKE_INSTALL_PREFIX@
exec_prefix=${prefix}
includedir=@MINIAUDIO_PC_INCLUDEDIR@
libdir=@MINIAUDIO_PC_LIBDIR@

Name: miniaudio
Description: An audio playback and capture library.
URL: https://miniaud.io/
License: Unlicense OR MIT-0
Version: @PROJECT_VERSION@

Requires.private: @MINIAUDIO_PC_REQUIRES_PRIVATE@
Cflags: -I${includedir} @MINIAUDIO_PC_CFLAGS@
Libs: -L${libdir} -lminiaudio
Libs.private: @MINIAUDIO_PC_LIBS_PRIVATE@
3
thirdparty/miniaudio-0.11.24/research/README.txt
vendored
Normal file
@@ -0,0 +1,3 @@
This folder contains code that I'm experimenting with outside of the main miniaudio library. It's just for
my own research and experimenting which I'm putting into the repository for version control purposes and
to get feedback from the community. You should not consider any of this code to be production quality.
BIN
thirdparty/miniaudio-0.11.24/resources/branding/icon-128x128.png
vendored
Normal file
After Width: | Height: | Size: 1.2 KiB |
BIN
thirdparty/miniaudio-0.11.24/resources/branding/icon-256x256.png
vendored
Normal file
After Width: | Height: | Size: 2.2 KiB |
BIN
thirdparty/miniaudio-0.11.24/resources/branding/icon-64x64.png
vendored
Normal file
After Width: | Height: | Size: 677 B |
BIN
thirdparty/miniaudio-0.11.24/resources/branding/miniaudio_logo1.png
vendored
Normal file
After Width: | Height: | Size: 183 B |
BIN
thirdparty/miniaudio-0.11.24/resources/branding/miniaudio_logo_400.png
vendored
Normal file
After Width: | Height: | Size: 3.1 KiB |
25
thirdparty/miniaudio-0.11.24/tests/_build/README.md
vendored
Normal file
@@ -0,0 +1,25 @@
Building
========
Build and run from this directory. Example:

    gcc ../test_deviceio/ma_test_deviceio.c -o bin/test_deviceio -ldl -lm -lpthread -Wall -Wextra -Wpedantic -std=c89
    ./bin/test_deviceio

Output files will be placed in the "res/output" folder.


Emscripten
----------
On Linux, do `source ~/emsdk/emsdk_env.sh` before compiling.

On Windows, you need to move into the build directory and run emsdk_env.bat from a command prompt using an absolute
path like "C:\emsdk\emsdk_env.bat". Note that PowerShell doesn't work for me for some reason. Example:

    emcc ../test_emscripten/ma_test_emscripten.c -o bin/test_emscripten.html -sAUDIO_WORKLET=1 -sWASM_WORKERS=1 -sASYNCIFY -DMA_ENABLE_AUDIO_WORKLETS -Wall -Wextra

If you output WASM it may not work when running the web page locally. To test you can run with something
like this:

    emrun ./bin/test_emscripten.html

If you want to see stdout on the command line when running from emrun, add `--emrun` to your emcc command.
2
thirdparty/miniaudio-0.11.24/tests/_build/djgpp/djgpp_env.bat
vendored
Normal file
@@ -0,0 +1,2 @@
set DJGPP=C:\DJGPP\DJGPP.ENV
set PATH=C:\DJGPP\BIN;%PATH%
BIN
thirdparty/miniaudio-0.11.24/tests/_build/res/benchmarks/pcm_f32_to_f32__mono_8000.raw
vendored
Normal file
BIN
thirdparty/miniaudio-0.11.24/tests/_build/res/benchmarks/pcm_f32_to_s16__mono_8000.raw
vendored
Normal file
BIN
thirdparty/miniaudio-0.11.24/tests/_build/res/benchmarks/pcm_f32_to_s24__mono_8000.raw
vendored
Normal file
BIN
thirdparty/miniaudio-0.11.24/tests/_build/res/benchmarks/pcm_f32_to_s32__mono_8000.raw
vendored
Normal file
After Width: | Height: | Size: 320 B |
1
thirdparty/miniaudio-0.11.24/tests/_build/res/benchmarks/pcm_f32_to_u8__mono_8000.raw
vendored
Normal file
@@ -0,0 +1 @@
BIN
thirdparty/miniaudio-0.11.24/tests/_build/res/benchmarks/pcm_s16_to_f32__mono_8000.raw
vendored
Normal file
BIN
thirdparty/miniaudio-0.11.24/tests/_build/res/benchmarks/pcm_s16_to_s16__mono_8000.raw
vendored
Normal file
BIN
thirdparty/miniaudio-0.11.24/tests/_build/res/benchmarks/pcm_s16_to_s24__mono_8000.raw
vendored
Normal file
BIN
thirdparty/miniaudio-0.11.24/tests/_build/res/benchmarks/pcm_s16_to_s32__mono_8000.raw
vendored
Normal file
After Width: | Height: | Size: 320 B |