
AudioChatKit2

A Swifty iOS audio toolkit providing best practices for difficult audio communication operations.

About

AudioChatKit2 is a Swift Package meant to establish and solidify the important audio standards that our app must follow to be NowPlayable. In so doing, we set up iOS app developers for success by removing the need to maintain a thorough understanding of the Apple ecosystem's demanding audio implementation requirements.

The code in this package is meant to be relatively static and unchanging. Modifications should only be needed when our application's business requirements expand such that new standards must be met, or when Apple changes its own standards and requirements.

VIPER Architecture

AudioChatKit2 pursues the only logical iOS app architecture suitable for a real-time, offline-first audio messaging use case: VIPER.

Successful deployment of AudioChatKit2 will depend upon the implementer's discipline to retain this app architecture and to consider it a primary design goal.

Development

Once this library is implemented into Chat by Storyboard for iOS, all changes to it will be required to go through code review via GitHub pull request.

Features

  • Automated AVAudioEngine and AVAudioSession configuration management
  • Data models for Storyboard's defined product entities
  • Audio playback with full NowPlayable and RemoteCommandCenter support on iOS
  • Audio recording node that works in the background and transcodes on-the-fly
  • Real-time whisper.cpp transcription of audio recording sessions and bulk transcription of audio files
  • CoreML Support for Whisper model (in progress)
  • Playback and recording time tracking managed headlessly
  • Audio output processing that enhances speech clarity
  • Audio input processing that reduces file size and improves transcription quality (in progress)
  • Properly-threaded audio routines that never exceed 15% CPU
  • Audio downloading and asset caching modules that enable offline playback
  • Support for AirPlay and AirPlay 2

Usage Guide

Types Implemented

Dates & Times

| Use Case | Type |
| --- | --- |
| Audio message duration | TimeInterval |
| Message creation date | ISO8601DateFormatter |
| Playback progress tracking | AVAudioTime |
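
As a quick, hypothetical illustration of how the duration and date types above are used (the values are made up, not taken from the package):

import Foundation

// Hypothetical message metadata using the types from the table above.
let messageDuration: TimeInterval = 12.5   // audio message duration, in seconds

let isoFormatter = ISO8601DateFormatter()
let creationDate = isoFormatter.date(from: "2023-04-01T16:20:00Z")   // parse a message creation date
let creationString = isoFormatter.string(from: Date())               // serialize a new creation date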

Audio Nodes

| Use Case | Type |
| --- | --- |
| Audio engine | AudioKit.AudioEngine |
| Audio player | AudioKit.AudioPlayer |
| Audio recorder | AudioKit.NodeRecorder |
| Mixer | AudioKit.Mixer |
| Playback tap | AudioKit.RawDataTap |
| Record tap | AudioKit.AmplitudeTap |
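
For orientation, a minimal AudioKit node graph using these types typically looks something like the sketch below; this is illustrative only and does not reflect AudioConductor's actual wiring:

import AudioKit

// Minimal illustrative node graph (not AudioConductor's internals).
let engine = AudioEngine()
let player = AudioPlayer()
let mixer = Mixer(player)
engine.output = mixer

do {
    try engine.start()
} catch {
    print("Audio engine failed to start: \(error)")
}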

Data Models Implemented

Structs

| Model | Source File |
| --- | --- |
| Message | Models/Message.swift |
| Transcript | Models/Transcript.swift |

Enums

| Enum | Source File |
| --- | --- |
| AudioFormats | Models/AudioFormats.swift |
| AudioKitSettings | Models/AudioKitSettings.swift |
| PlaybackEvents | Models/PlaybackEvents.swift |
| PlaybackState | Models/PlaybackState.swift |

Setup

Always do these things first!

Immediately after app launch, create an instance of AudioConfigHelper inside the app's init() function.

init() {
    let configHelper = AudioConfigHelper()
    if configHelper.sessionPreferencesAreValid {
        Log("Session is configured correctly for longForm spoken audio playback")
    } else {
        Log("Uh oh! Something is wrong with our audio configuration.")
    }
}

We always start the app in its default session configuration for long-form spoken audio playback.
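
The exact defaults are managed by AudioConfigHelper, but for context, a long-form spoken-audio session configuration on iOS generally looks something like this sketch (the category, mode, and policy shown here are assumptions, not a verbatim copy of the helper's internals):

import AVFoundation

// Illustrative long-form spoken-audio session setup; AudioConfigHelper's
// actual defaults may differ.
func configureDefaultSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback,
                            mode: .spokenAudio,
                            policy: .longFormAudio,
                            options: [])
    try session.setActive(true)
}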

Launch the AudioConductor

Once we have done the necessary setup of our audio configuration, we can init the AudioConductor which is the simple class we use to manage AudioKit's audio objects and resources.

struct AViewOfSomeKind: View {
    @StateObject var conductor = AudioConductor()

    var body: some View {
        // set up UI view stuff here
        EmptyView()
    }
}

By doing this you now have access to all audio features provided by AudioKit.

Starting and Stopping AudioConductor

DON'T DO IT

We implement automated start/stop functionality inside the classes so you don't have to manage these concerns in the first place.

Using the PlaybackManager

The best way to access and utilize our playback tools is via AudioConductor.playerMan.

You must use this manager in order to access the class-managed timing metadata needed for UI relating to playback progress.

NEVER use a Timer or Date to do anything related to playback time tracking!! Instead, read on to learn how to access the class-managed values.

Time Metadata for UI

Formatted Time String

For publishing real-time seconds elapsed to UI

We publish time elapsed in seconds at PlaybackManager.currentTimeString.

Progress Float

For use in SwiftUI as a CGFloat

We also publish the float used to increment playback progress at PlaybackManager.currentProgress.
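
Putting the two published values together, a SwiftUI view might consume them as in the hypothetical sketch below (the view itself is illustrative; only currentTimeString and currentProgress come from PlaybackManager):

import SwiftUI

// Hypothetical view that displays the class-managed playback metadata.
struct PlaybackProgressView: View {
    @StateObject var conductor = AudioConductor()

    var body: some View {
        VStack {
            // Formatted elapsed-time string published by PlaybackManager
            Text(conductor.playerMan.currentTimeString)
            // Progress float published by PlaybackManager, driving the bar
            ProgressView(value: Double(conductor.playerMan.currentProgress))
        }
    }
}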

Examples

Create and play a new Message

let fileURL: URL = // get local file URL somehow...
let msg = Message(audioFile: try AVAudioFile(forReading: fileURL))
try conductor.playerMan.newLocalMessage(msg: msg)

Note: if the player is currently playing something, playerMan will add the new Message to its queue and execute playback in order received.
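
For example, a second call made while the first message is still playing simply enqueues the new Message:

// Queued behind the currently playing message and played in order received.
let followUp = Message(audioFile: try AVAudioFile(forReading: fileURL))
try conductor.playerMan.newLocalMessage(msg: followUp)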

Introspect the currently-loaded Message

// picking up from previous message playback example above
try conductor.playerMan.newLocalMessage(msg: msg)
Log(conductor.playerMan.nowPlayableMessage) // this is the "now playing" audio message
// Some example properties of the `Message` struct:
Log(conductor.playerMan.nowPlayableMessage.audioFile.url)
Log(conductor.playerMan.nowPlayableMessage.audioFile.duration)
Log(conductor.playerMan.nowPlayableMessage.author)
Log(conductor.playerMan.nowPlayableMessage.staticMetadata)
