Lab 1 (iOS): Software flow control

Goal

In this lab you will get familiar with the iOS implementation of the software flow control that we will use throughout this course.

After completing this lab you will be able to:

  • Create audio files from Matlab in a format suitable to integrate with iOS applications
  • Understand how the software flow control is implemented in iOS
    • Identify the C++ callback functions where you can implement DSP algorithms
  • Use the console to debug your code

Introduction

In this lab, you will experiment with a very simple application that implements the following system block diagram.  The microphone input is added to the audio file and output to the smartphone speaker.  Note that there is a switch that controls whether or not the microphone signal is actually fed to the adder.  We recommend that you use a headset (headphone and microphone) to avoid feedback.

[Figure: ee264_lab_1_ios.png — system block diagram]

The user interface for the app is shown below.

[Figure: ee264_lab_ios_app.png — app user interface]

You can interact with the application's text fields, switches, and buttons:

  • Mic/Speaker rate fs (Hz): sampling rate for the input/output audio interface
  • Mic/Speaker block size (N): number of samples in the input and output arrays
  • File up-sample ratio (L): used in the next lab to implement the system shown at the bottom of the app user interface
  • File down-sample ratio (M): used in the next lab to implement the system shown at the bottom of the app user interface
  • Select one of the available audio files to read data from
  • Enable or disable the microphone input (Mic On)
  • Enable or disable the speaker output (Speaker On)
  • Press the Test button
  • Specify a test Mode (integer)

The File average block size text field is an output; it is calculated internally from the provided parameters and is equal to NM/L. For example, with N = 512, M = 1, and L = 3, the file average block size is 512/3 ≈ 170.7 samples.

Run the application

Download and expand the ee264_lab_1_ios.zip archive on your computer.

You should see the following files and directories:

  • ee264_lab_1_ios.png: block diagram of the starter code app (shown above)
  • ee264_lab_app: Xcode project
  • tone_generator: Matlab script to generate audio files

Here is a brief description of the steps to get the app to run on your device (see Lab 0 for detailed instructions):

  • Double-click the EE264_lab.xcodeproj in the ee264_lab_1_ios folder; this should open the project in Xcode.
  • Select the project, EE264_lab, on the upper left part of Xcode and update your signing credentials
    • You might need to update the Bundle Identifier, e.g., EE264_lab-v1, so that a valid certificate can be created
  • Connect your iOS device to your computer and compile the project
  • You will need to "trust" this certificate in your device
  • Connect a headset, turn the micOn switch on and turn the "Start" switch on
    • Always start with a low volume in your device and/or place the headphone at a safe distance from your ears.
  • There should be no sound coming out of the headphone (speaker) at this point

Test the Audio File Processing

  • Turn the "Start" switch off
    • You should not hear any audio file playing
    • Select one of the audio files and turn the "Start" switch on again
  • Note that the signals in the audio files were sampled at 16 kHz, but the default sampling frequency of the app is 48 kHz.  The file naming convention is tone_Fc_Fs.caf, where Fc and Fs are the tone frequency and sampling frequency, respectively.  What would be the frequency of the tone produced at the output?

Code Organization

Open the Xcode project.  The Xcode IDE should look as follows:

[Figure: ee264_lab_1_ios_ide.png — Xcode IDE with the project open]

Note how the files are organized:

  • Lab1: contains the DSP implementation C++ code
    • There is one class implementation: AudioProcessing
  • Lab2-5: you will be adding code to these directories in the next labs 
  • BasicAudio: contains the Swift code used for the GUI and audio interface configuration
    • Main.storyboard: user interface elements
    • ViewController.swift: user interface control
      • Here is where the setup functions are called when the "start" switch is turned on
    • CPP-wrapper.h/mm: Objective-C wrappers for the C++ callback (processAudio) and setup functions.
  • Audio Files: directory with the audio files available to the application
    • You can add .caf audio files to this folder and they will show up in the app audio file picker.

DSP C++ Code

The DSP code is implemented in the AudioProcessing class.  The declaration and definition of the Audio Processing class can be found in the AudioProcessing.hpp and AudioProcessing.cpp files, respectively.

The setup function is called once after the audio chain is enabled (Start switch turned on). In the setup() state of the software flow control, the application configures the device audio interfaces (e.g., input and output sampling rates and suggested buffer block sizes), prepares the audio file for reading, etc.

The fileNumSamplesNeededFor(outputNumSamples) function returns the number of samples to read from the audio file for a given number of output samples (outputNumSamples).  It is called by the AudioController Swift object to determine how many samples to request from the audio file.
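For intuition, the quantity this function returns tracks the NM/L relation described earlier. The following is a minimal sketch only, not the project's implementation: in the actual class the function takes a single argument, so the extra L and M parameters and the round-up choice here are assumptions for illustration.

    #include <cassert>

    // Hypothetical sketch: the average number of file samples needed per
    // output block of N samples is N*M/L (the File average block size).
    int fileSamplesNeeded(int outputNumSamples, int L, int M)
    {
        assert(L > 0 && M > 0);
        // Round up so the file reader never comes up short when N*M/L is fractional
        return (outputNumSamples * M + L - 1) / L;
    }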

The processAudio() function is called by the iOS audio interface callback implemented in the AudioController class (Swift). When a block of N data samples is needed by the iOS audio interface, the AudioController output callback performs the following steps (a sketch of the resulting C++ call follows the list):

  • Retrieves a block of N data samples from the audio input (microphone) interface
  • Calls the C++ fileNumSamplesNeededFor(N) to determine how many samples to read from the audio file
  • Reads the specified number of samples from the audio file
  • Calls the C++ processAudio function and passes the input (microphone) and audio file data array; corresponding array sizes; GUI parameters and a pointer to the output (speaker) data array.
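Here is a hedged sketch of what that C++ call might look like end to end. The real signature is declared in AudioProcessing.hpp and may differ; all names and types below are assumptions, and the body is a simple file pass-through rather than the mixing you will implement in Lab 1.2.

    #include <cstdint>
    #include <cstddef>

    // Hypothetical sketch of the C++ callback: copy the audio file block to
    // the output block. Lab 1.2 replaces the body with microphone/file mixing.
    void processAudio(const int16_t *micData,  size_t micNumSamples,
                      const int16_t *fileData, size_t fileNumSamples,
                      int16_t *outputData,     size_t outputNumSamples)
    {
        for (size_t n = 0; n < outputNumSamples; ++n)
            outputData[n] = (n < fileNumSamples) ? fileData[n] : 0;
        (void)micData;        // unused until the mixing is implemented
        (void)micNumSamples;
    }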

Use Known and Simple Input Signals

  • You can use a tone generator smartphone app or an online tone generator to create clean tones as input to your application.
  • You can also use tone audio files to test your output processing, and spectrum-analysis apps to verify the frequency of the generated tones.  See the Useful Tools section in DSP Implementation Topics: Introduction.
  • Create specific test signals in Matlab and write them to an audio file to use as input to your processing.

Exercises

Lab 1.1: Create test audio files using Matlab (20% credit)

We use Apple's Core Audio Format (CAF) [1] to store audio files inside iOS apps.  The tone_generator_script.m file shows an example of how to use the audiowrite() function to create .wav files, which are then converted to .caf format by the macOS afconvert command-line utility.

The Matlab script creates a tone at frequency fc and sampling rate fs for a given duration with a linear window at the beginning and end of the sequence.
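The script itself is Matlab, but the computation it performs can be sketched in C++ for reference; the function name, parameters, and ramp handling below are illustrative assumptions, not a translation of the actual script.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Illustrative sketch (the lab uses the provided Matlab script instead):
    // a tone at fc Hz, sampled at fs Hz, with linear fade-in/fade-out ramps.
    std::vector<double> makeTone(double fc, double fs, double durationSec,
                                 double rampSec)
    {
        const double pi = std::acos(-1.0);
        const size_t numSamples  = static_cast<size_t>(durationSec * fs);
        const size_t rampSamples = static_cast<size_t>(rampSec * fs);
        std::vector<double> x(numSamples);
        for (size_t n = 0; n < numSamples; ++n) {
            double w = 1.0;                                // window gain
            if (n < rampSamples)                           // linear fade-in
                w = double(n) / rampSamples;
            else if (n + rampSamples >= numSamples)        // linear fade-out
                w = double(numSamples - 1 - n) / rampSamples;
            x[n] = w * std::sin(2.0 * pi * fc * n / fs);
        }
        return x;
    }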

Your assignment:

  • Create tone files at 250, 500, and 1000 Hz with a sampling frequency of 32 kHz.
    • Run the script and make sure that you can play back the created .caf files on your computer.
    • Note that, irrespective of the sampling frequency specified, you should hear the tone play back at the correct frequency.  This is because Matlab performs sampling-rate conversion between the audio file and the audio interface sampling frequencies.
  • Add the created files to the "Audio files" folder of the Xcode project.
    • You will need these files in the next lab.

Experiment:

  • Use a spectrum analysis app (see Useful Tools section in DSP Implementation Topics: Introduction) to test that the tones are reproduced at the specified frequency
    • You will find such an app useful in testing your own apps
  • Change the sampling frequency
    • You will note that the Audio framework will select the closest sampling frequency, sampleRate, supported by the hardware
      • The selected sampleRate is reported in the console.
    • What are the highest and lowest sampling frequencies supported by your device?  What are the steps in between?
  • Change the input/output block size (N)
    • Note this is a suggested parameter.  The audio API will select the most appropriate block size, speakerNumSamples, for the given hardware.
    • You will note that the Audio framework will select, in most cases, the closest power of two.
    • What are the largest and smallest values of outputNumFrames that you see reported as you change the input/output block size parameter?

Lab 1.2: Mix microphone and audio file data (40% credit)

In this lab you will mix (add) the microphone data (if MicOn is enabled) with the audio file data (if a file is selected), and store the result in the output data array for playback.

Your assignment:

  • Open the AudioProcessing.cpp file
  • Follow the comments and implement the missing functionality

Avoid overflow by properly scaling the data before storing the result in the output array.

#include <cstdint>  // for int16_t

int16_t a, b, c, d;

// Potential overflow (requires 17 bit container)
c = a + b;

// No overflow possible, equivalent to d = a/2 + b/2
d = (a >> 1) + (b >> 1);
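
Applied across a whole block, the same halving idea might look like the sketch below; the function and array names are assumptions, not the project's actual identifiers.

    #include <cstdint>
    #include <cstddef>

    // Hypothetical sketch: mix two int16_t blocks using the halving trick above
    void mixBlocks(const int16_t *micData, const int16_t *fileData,
                   int16_t *outputData, size_t numSamples, bool micOn)
    {
        for (size_t n = 0; n < numSamples; ++n) {
            int16_t mic = micOn ? micData[n] : 0;
            outputData[n] = (mic >> 1) + (fileData[n] >> 1);  // no overflow
        }
    }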

Lab 1.3: Debug Strategies for Real-Time Applications (40%)

Debugging applications that process data in real time requires special strategies.  By the time you finish pressing "Stop", or the moment a breakpoint triggers, you will have missed many samples of the input signals.

You can use the console to print debug messages and even intermediate results for later analysis.

Open the Xcode console and inspect the messages:

  • There is a lot of debug information printed by the Swift AudioController class
    • As you saw in Lab 0, the Swift syntax to display text in the console is: print("Text")
  • You can also print messages to the console using the C++ standard output stream (std::cout):

    std::cout << "Test\n";
  • When running an app in the Xcode simulator, you can print debug messages to a file on the host Mac instead of the console: 

    // Create an output file stream (requires #include <fstream>); naming it
    // cout makes the unqualified cout << statements below write to the file
    std::ofstream cout("/tmp/debug.txt");

    cout << "Hello World!\n";

    // Close the output file stream
    cout.close();
  • Search the console for the messages printed by the C++ setup function:
    • "AudioProcessing::setup"
  • Uncomment the debug code at the end of the C++ processAudio() function
    • This debug code prints a tab delimited table with a call index, sample index and sample value for the output data.
    • Note that you might hear glitches in the audio as the data is printed; this is the reason the text is printed only every 100 iterations (this rate-limiting pattern is sketched after the list).
    • You can copy and paste this table directly into Matlab's variable editor or a spreadsheet for further analysis.
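The rate-limiting pattern behind that debug code can be sketched as follows; the function name, counter, and table layout are illustrative assumptions.

    #include <cstdint>
    #include <cstddef>
    #include <iostream>

    // Hypothetical sketch of rate-limited debug printing: emit the table only
    // every 100th call so console I/O does not starve the audio callback.
    void printOutputTable(const int16_t *outputData, size_t numSamples)
    {
        static int callIndex = 0;            // persists across invocations
        if (++callIndex % 100 != 0)
            return;
        for (size_t n = 0; n < numSamples; ++n)
            std::cout << callIndex << '\t' << n << '\t'
                      << outputData[n] << '\n';
    }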

Your assignment:

  • Create a member function called debug that is called when the Test button in the GUI is pressed.  When the Mode is set to 1, the function should print a tab-delimited table to the console with the following fields: call index, sample index, microphone data, audio file data, and output data.  The call syntax should be:

    debug(mode, outputNumSamples, outputData, micData, fileData);
    • Add the appropriate declaration and definition to the AudioProcessing.hpp and AudioProcessing.cpp files, respectively.
    • Since this function does not modify any of its parameters, use the const keyword for the array parameters (one possible declaration is sketched after this list).
    • Hint: use the processAudio() function as a template
  • Same as above, but when the Mode is set to 2, print the table to the file /tmp/debug.txt on the host Mac instead of the console.
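One possible declaration for AudioProcessing.hpp, assuming the sample buffers are int16_t arrays; match whatever types processAudio() actually uses.

    // Hypothetical declaration; adjust the types to match processAudio()
    void debug(int mode, int outputNumSamples,
               const int16_t *outputData,
               const int16_t *micData,
               const int16_t *fileData);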

Deliverables

  • Submit an archive with the Xcode project implementing exercises 1.1, 1.2 and 1.3.
  • Demonstrate your code functionality and answer questions about your implementations during the lab session or one of the TA office hours.

References

[1] Apple, Core Audio Format Specification 1.0 [link]