1/4" Jack Of All Trades

MSc Final Project Weblog 3: Prototyping Audio-Visual Interactions & Project Architecture Thoughts

<02/08/21 - 19/08/21>


Mad abstraction ramblings to bring disparate concepts together in a modular and expandable program.


At first I tried to directly mirror Prion's world-to-file position mapping, but found that controlling the playhead position of multiple audio files does not work as effectively with Unity and FMOD due to a significantly greater number of points (the previous project found 50,000 to be a good balance between visual fidelity and computational efficiency) - this is far too many to create an individual event for each point. My first thought was to implement clustering in some way.
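One way that clustering could work: snap each of the ~50,000 points to a coarse 3D grid and spawn one FMOD event per occupied cell rather than per point. A minimal sketch of the idea (not the final implementation - it uses System.Numerics.Vector3 so it runs outside Unity, and the cell size is an arbitrary assumption):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Numerics;

static class PointClustering
{
    // Group points into cubic grid cells and return one centroid per
    // occupied cell - each centroid is a candidate position for a
    // single FMOD event instead of thousands of per-point events.
    public static List<Vector3> GridCluster(IEnumerable<Vector3> points, float cellSize)
    {
        return points
            .GroupBy(p => ((int)MathF.Floor(p.X / cellSize),
                           (int)MathF.Floor(p.Y / cellSize),
                           (int)MathF.Floor(p.Z / cellSize)))
            .Select(g => g.Aggregate(Vector3.Zero, (a, b) => a + b) / g.Count())
            .ToList();
    }
}
```

A larger cell size trades spatial accuracy of the sound sources for fewer events.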


Next avenue of exploration was the dichotomy of dots and lines. Began working with the Shapes library to achieve this, resulting in some chaotic (but efficient) scenes. While devising various, more interesting line-drawing behaviours and a method to switch between them, I started to think about project abstraction and architecture.


Began by experimenting with vertex identification via raycast/collision, but Unity does not support this for point clouds as they have no triangles (which is how the mesh collider would normally calculate these interactions).


Slightly chaotic AV line drawing to test Shapes library using Nodus Tollens as starting point.


Audio Integration


Thinking back to the Ryoji Ikeda exhibition - connecting lines often had two sounds associated with them: a drone, and a percussive sound when a connection was made. Putting this into practice with the capabilities of FMOD in mind: localise a looping 3D audio source featuring a drone element, modulated (somehow - different files? DSP? both?) by controlling global parameters for each axis. The obvious problem is that not all point clouds have the same dimensions, which makes choosing min and max values for the FMOD parameters difficult. I wrote the following functions to sort the mesh vertices array on each axis and find a minimum and maximum value for each, allowing the creation of a normalization function (returning the position as values between 0 and 1 regardless of the cloud's size). The plan is to feed this function the position of the next point to be connected by the line and so alter the sound of the drone element. The only issue is that the normalized number is only accurate to one decimal place, which limits the interaction slightly; however, given the large number of possible points, this seems an acceptable trade-off.


The sorted arrays also have the additional bonus of allowing movement through the mesh in a particular direction, offering a more structured alternative to the random approach. I also recently discovered the power of LINQ statements and feel they could be a powerful way to find points in a point cloud given an approximate local/world coordinate for the AV line-drawing logic.
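For example, the nearest-vertex lookup could be a single LINQ query. A sketch of the idea (using System.Numerics.Vector3 so it runs outside Unity; in the project this would operate on the mesh's vertex array):

```csharp
using System.Linq;
using System.Numerics;

static class PointQuery
{
    // Return the vertex closest to an approximate target coordinate.
    // DistanceSquared avoids computing a square root per point.
    public static Vector3 Nearest(Vector3[] vertices, Vector3 target)
    {
        return vertices.OrderBy(v => Vector3.DistanceSquared(v, target)).First();
    }
}
```

OrderBy sorts the whole array (O(n log n)); for 50,000 points a single linear scan for the minimum would be cheaper, but the LINQ version reads well for prototyping.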



 
using System.Linq; // required for OrderBy / ToArray

Vector3[] SortArrayX(Vector3[] toSort)
{
    return toSort.OrderBy(v => v.x).ToArray();
}
// Above is repeated for Y and Z

// Min/Max calculations receive the sorted arrays
Vector3 GetMinValues(Vector3[] x, Vector3[] y, Vector3[] z)
{
    Vector3 min;

    min.x = x[0].x;
    min.y = y[0].y;
    min.z = z[0].z;

    return min;
}

Vector3 GetMaxValues(Vector3[] x, Vector3[] y, Vector3[] z)
{
    Vector3 max;

    max.x = x[x.Length - 1].x;
    max.y = y[y.Length - 1].y;
    max.z = z[z.Length - 1].z;

    return max;
}

// Min and max are then stored as private fields (MinPos / MaxPos)
// as they are needed for future normalization calculations
Vector3 NormalizeVector(Vector3 current)
{
    Vector3 normalized;

    normalized.x = (current.x - MinPos.x) / (MaxPos.x - MinPos.x);
    normalized.y = (current.y - MinPos.y) / (MaxPos.y - MinPos.y);
    normalized.z = (current.z - MinPos.z) / (MaxPos.z - MinPos.z);

    return normalized;
}
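To close the loop, the normalized position would then drive FMOD's global parameters - something like the following sketch, assuming global parameters named "X", "Y" and "Z" (with a 0-1 range) exist in the FMOD project; the parameter names and method are placeholders for whatever the final event design uses:

```csharp
// Feed the normalized next-point position into FMOD global parameters,
// modulating the drone element as the line moves through the cloud.
// Assumes global parameters "X", "Y", "Z" exist with a 0-1 range.
void UpdateDroneParameters(Vector3 nextPoint)
{
    Vector3 n = NormalizeVector(nextPoint);
    FMODUnity.RuntimeManager.StudioSystem.setParameterByName("X", n.x);
    FMODUnity.RuntimeManager.StudioSystem.setParameterByName("Y", n.y);
    FMODUnity.RuntimeManager.StudioSystem.setParameterByName("Z", n.z);
}
```

This Unity/FMOD fragment depends on the engine and so is not runnable on its own.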
 

Project Architecture


After a lot of hours watching tutorials on more advanced C# concepts and their integration into typical Unity use cases (abstract classes, interfaces, structs, delegates, LINQ statements), I have started to draw up plans for an abstraction architecture that will allow me to wrap complex algorithms in a way that requires little to no code change outside the algorithm itself.


As both a personal challenge and an exercise in best practice, I am going to attempt to make my code as modular as possible. My previous code has not been written in a way that allows for quick integration into future projects, and usually serves only as well-hidden syntax reminders - moving forward I want to build a collection of utility scripts that can be copied into any new project, allowing rapid prototyping and easy expansion of the collection.
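As a concrete example of the kind of abstraction planned, the line-drawing behaviours could share an interface so that swapping algorithms never touches the calling code. A sketch under that assumption (the interface and class names here are hypothetical, not from the project):

```csharp
using System;

// Any drawing algorithm implements this; the caller only ever
// sees the interface, so new behaviours need no external changes.
interface ILineDrawingBehaviour
{
    string Name { get; }
    int NextPointIndex(int current, int pointCount);
}

// Random walk through the cloud (fixed seed for reproducibility here).
class RandomWalkBehaviour : ILineDrawingBehaviour
{
    readonly Random rng = new Random(0);
    public string Name => "RandomWalk";
    public int NextPointIndex(int current, int pointCount) => rng.Next(pointCount);
}

// Ordered sweep, e.g. walking a sorted-axis array in sequence.
class AxisSweepBehaviour : ILineDrawingBehaviour
{
    public string Name => "AxisSweep";
    public int NextPointIndex(int current, int pointCount) => (current + 1) % pointCount;
}
```

Switching behaviours at runtime is then just assigning a different implementation to a single ILineDrawingBehaviour field.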






