{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "SN9yRjJYaXg_" }, "source": [ "# CS535/EE514 Machine Learning - Spring 2023 - PA01" ] }, { "cell_type": "markdown", "metadata": { "id": "COHamatQbS3b" }, "source": [ "## Instructions \n", "\n", "* Submit your code both as notebook file (.ipynb) and python script (.py) on LMS. The name of both files should be 'RollNo_PA01', for example: \"23100214_PA01\". Failing to submit any one of them will result in the reduction of marks.\n", "* All the cells must be run once before submission and should be displaying the results(graphs/plots etc). If output of the cells is not being displayed, marks will be dedcuted.\n", "* The code MUST be implemented independently. Any plagiarism or cheating of work from others or the internet will be immediately referred to the DC.\n", "* 10% penalty per day for 3 days after due date. No submissions will be accepted\n", "after that. \n", "* Use procedural programming style and comment your code properly.\n", "* **Deadline to submit this assignment is 12/02/2023 (23:55).**" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "mkrg5yzIaz6G" }, "outputs": [], "source": [ "# you can add further imports if you want\n", "\n", "import numpy as np\n", "import python_speech_features as psf\n", "import librosa\n", "import matplotlib.pyplot as plt\n", "from sklearn.datasets import make_blobs" ] }, { "cell_type": "markdown", "metadata": { "id": "6I7Nxj1paz6I" }, "source": [ "# Part 1: Feature Extraction\n", "You will use the [MNIST audio dataset](https://www.kaggle.com/datasets/sripaadsrinivasan/audio-mnist?resource=download).\n", "It is an open source dataset and you can download it from kaggle.\n", "\n", "* The dataset consists of 30000 audio samples of spoken digits (0-9) of 60 folders and 500 files each.\n", "* There is one directory per speaker holding the audio recordings.\n", "* Additionally \"audioMNIST_meta.txt\" provides meta information such as gender or age of each speaker.\n", "\n", "Use the following line of code to load the audio files\n", "```python\n", "audio, sr = librosa.load(file_path, sr=48000)\n", "```\n", "You will use the MFCC features for representing the audio.
\n", "MFCCs are a common feature representation for audio classification in machine learning. They capture the spectral envelope of the sound by converting the audio signal to the frequency domain, applying a Mel-scale filterbank, taking the logarithm of the filterbank energies and applying DCT. This results in a compact, low-dimensional representation that captures important spectral information and is used as input features for training models for tasks such as speech recognition, music genre classification and sound event detection.
\n", "Dont worry if none of this makes any sense, you can still use them as your features. But if you want to understand them in more detail you can read about them from [here](https://medium.com/@tanveer9812/mfccs-made-easy-7ef383006040). \n", "
\n", "Length of each feature vector will be $n$, where $n$ is the number of mfcc features (out of a total of 40) you decide to use. \n", "
\n", "Your dataset will be a $m \\times (n+1$) matrix, where each row will represent an audio file and each column wil represent 1 feature. The last column will be the label of the digit spoken. \n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "IB2lNf8eaz6K" }, "outputs": [], "source": [ "# Use this code to extract MFCC features from audio file\n", "\n", "def get_MFCC(audio, sr, numFeatures):\n", " features = psf.mfcc(audio, sr, 0.025, 0.01, numFeatures, appendEnergy = True)\n", " return np.mean(features, axis=0)" ] }, { "cell_type": "markdown", "metadata": { "id": "q2wnIoIHaz6K" }, "source": [ "* Extract the label of the digit from the file name\n", "
\n", "*After loading all files with their labels, do a Test-Train Split of your choice" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "pV8u2Yzvaz6L" }, "outputs": [], "source": [ "# create your data matrix\n", "\n", "### YOUR CODE HERE ### \n", "\n" ] }, { "cell_type": "markdown", "metadata": { "id": "EwEFJwM4az6L" }, "source": [ "#### ***Question: The features we get here are numerical, what if we also had categorical features e.g., even vs odd digit. How would knn deal with those?***\n", "\n", "Ans: Double click to enter your answer here" ] }, { "cell_type": "markdown", "metadata": { "id": "vcL3jFhFehKY" }, "source": [ "# Part 2: Implement K-NN classifier from scratch \n", "The goal of this assignment is to get you familiar with k-NN classification and to give hands on experience of basic python tools and libraries which will be used in implementing the algorithm.\n", "You are **not** allowed to use scikit-learn or any other machine learning toolkit for this part.\n", "You have to implement your own k-NN classifier from scratch. You may use Pandas, NumPy, Matplotlib and other standard python libraries. " ] }, { "cell_type": "markdown", "metadata": { "id": "2BiCgc77uglx" }, "source": [ "### TASK 1: \n", "Create your own k-Nearest Neighbors classifier function by performing following\n", "tasks: \n", "\n", "* For a test data point, find its distance from all training instances.\n", "* Sort the calculated distances in ascending order based on distance values.\n", "* Choose k training samples with minimum distances from the test data point.\n", "* Return the most frequent class of these samples. (Incase of ties, break them by backing off to k-1 values. For example, for a particular audio, incase of k=4, if you get two '3' labels and two '7' labels you will break tie by backing off to k=3. If tie occurs again you will keep backing off until tie is broken or you reach k=1.)\n", "* Note: Your function should work with Euclidean distance as well as Manhattan\n", "distance. Pass the distance metric as a parameter in k-NN classifier function.\n", "Your function should also be general enough to work with any value of k." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "mb_-UV2Jufm1" }, "outputs": [], "source": [ "### YOUR CODE HERE ###\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "id": "pZ1rOuBLzH9Q" }, "source": [ "### TASK 2:\n", "Run your k-NN function for different values of k (atleast three) on test data. Do this for both the Euclidean distance and the Manhattan distance for each value of k. Plot three graphs displaying following:\n", "\n", "* k-values vs accuracy for both euclidean and manhattan distance (k-values on x-axis and accuracy values on y-axis)\n", "* k-values vs macro-average precision for both euclidean and manhattan distance (k-values on x-axis and precision values on y-axis)\n", "* k-values vs macro-average recall for both euclidean and manhattan distance (k-values on x-axis and recall values on y-axis)\n", "\n", "All of your graphs should be properly labelled.\n", "\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "yLwqidDQGKPV" }, "outputs": [], "source": [ "### YOUR CODE HERE ###\n", "\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "#### ***Question: What is the best K according to accuracy, precision and recall? 
If you choose a different K for different metrics, comment on the difference between the metrics.***\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Answer: Double-click to type your answer" ] }, { "cell_type": "markdown", "metadata": { "id": "-tftsLRfwlGU" }, "source": [ "# Part 3: k-NN classifier using scikit-learn \n" ] }, { "cell_type": "markdown", "metadata": { "id": "ewQnzp9EH8nQ" }, "source": [ "### TASK 1:\n", "In this part you have to use [scikit-learn’s k-NN implementation](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html) to train and test your\n", "classifier on the dataset used in Part 2. Run the k-NN classifier for values of\n", "k = 1, 2, 3, 4, 5, 6, 7 using both Euclidean and Manhattan distance. Use scikit-learn to calculate and print the accuracy, F1 score and confusion matrix on the test data.
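The scikit-learn pieces involved look roughly like the sketch below (the names `X_train`, `y_train`, `X_test`, `y_test` are assumed to come from your earlier train-test split, and macro averaging is just one reasonable choice of F1 averaging for this 10-class problem):

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix

# Rough sketch, not the required solution
for metric in ['euclidean', 'manhattan']:
    for k in range(1, 8):
        knn = KNeighborsClassifier(n_neighbors=k, metric=metric)
        knn.fit(X_train, y_train)
        y_pred = knn.predict(X_test)
        print(metric, k, accuracy_score(y_test, y_pred), f1_score(y_test, y_pred, average='macro'))
        print(confusion_matrix(y_test, y_pred))
```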
\n", "Plot a graph with k values on x-axis and F1 score on y-axis for both distance metrics\n", "in a single plot. " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "B2292-nLRhBW" }, "outputs": [], "source": [ "### YOUR CODE HERE ###\n", "\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "#### ***Question: What can we interpret from the F1 score that we can not from accuracy?***\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Answer: Double-click to type your answer" ] }, { "cell_type": "markdown", "metadata": { "id": "OPVhAwcAVDMw" }, "source": [ "### TASK 2:\n", "For this task you have been given a synthetic dataset of 1000 samples which is divided into 6 classes. Visualization of this dataset has also been given. Now you need to find the optimum value of k for this dataset. This can be done using GridSearchCV function provided by Scikit-learn. This function allows us to check easily for multiple values of k. You need to check for all values of k and report the best value. Use any distance metric you wish, or you can also try to find which metric works best for you." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ZX69_RiDWEar" }, "outputs": [], "source": [ "X, y = make_blobs(n_samples = 1000, n_features = 2, centers = 6, cluster_std = 2.5, random_state = 4)\n", "plt.figure(figsize=(10,6))\n", "plt.scatter(X[:,0], X[:,1], c=y, marker= 'o', s=50)\n", "plt.show()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "2-h_yskwbbAB" }, "outputs": [], "source": [ "### YOUR CODE HERE ###\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "id": "JsMoH9oeT5pu" }, "source": [ "#### ***Question: Why is standardization (feature scaling) required for KNN to work properly.***\n", "\n", "Ans: Double click to enter your answer here\n", "\n", "#### ***Question: How does the choice of 'k' affect overfitting and underfitting?***\n", "\n", "Ans: Double click to enter your answer here" ] }, { "cell_type": "markdown", "metadata": { "id": "hsh4SrDcaz6g" }, "source": [ "# Part 4: Principal Component Analysis\n", "First you will have to implement PCA from scratch and visualize it on a simple 2-D dataset. " ] }, { "cell_type": "markdown", "metadata": { "id": "nRQ33TJIaz6g" }, "source": [ "#### Dataset\n", "Here is a synthetic dataset, the points are sampled from a Bivariate Gaussian Distribution\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "SHTPzQdgaz6h" }, "outputs": [], "source": [ "# We will generate a dataset with 2 features so we can plot and visualize it easily.\n", "mean = [0, 3] #mean vector, mean[0] is the mean of the first feature, and mean[1] is the mean of the second feature.\n", "cov = [[3, 0.8], [0.8, 1]] #covariance matrix\n", "n_samples = 1000\n", "X = np.random.multivariate_normal(mean, cov, n_samples)\n", "print('X.shape:', X.shape)\n", "\n", "#visualize the dataset\n", "plt.figure(figsize=(10, 6))\n", "plt.scatter(X[:, 0], X[:, 1], marker= 'o')\n", "plt.xlabel(\"first feature\")\n", "plt.ylabel(\"second feature\")\n", "plt.title(\"data\")\n", "plt.show()\n" ] }, { "cell_type": "markdown", "metadata": { "id": "FZxzrFWXaz6h" }, "source": [ "###### Once done with the tasks, try different values of mean vector and the covariance matrix to try out different scenarios." 
] }, { "cell_type": "markdown", "metadata": { "id": "pNxYjtA0az6h" }, "source": [ "### Task 1: PCA from scratch\n", "Create your own PCA function by performing the following tasks:-
\n", "* Standardize the data
\n", "* Compute the covariance matrix
\n", "* Compute EVD of the covariance matrix
\n", "* Sort the eigenvectors according to decreasing magnitude of eigenvalues
\n", "* Project data on the eigenvectors
" ] }, { "cell_type": "markdown", "metadata": { "id": "hhwE44KBaz6i" }, "source": [ "\n", " " ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "r6PeWjRoaz6i" }, "outputs": [], "source": [ "### YOUR CODE HERE ###\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "id": "jcntdr0Uaz6i" }, "source": [ "### Task 2: Visualizing the Principle Components\n", "Plot the principal components of the data before and after applying your PCA function.
\n", "Plot these on the same graph with the data. Properly label your plots." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ewtrLTrraz6j" }, "outputs": [], "source": [ "### YOUR CODE HERE ###\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "id": "buU2Gyv4az6j" }, "source": [ "### Task 3: Dimensionality Reduction\n", "\n", "You should notice that untill now you have only transformed the data onto a different basis,\n", "however it is still in 2-D.
\n", "* Perform dimensionality reduction by projecting the data onto only the first eigenvector.
\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "8jHTdxqraz6k" }, "outputs": [], "source": [ "### YOUR CODE HERE ###\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "id": "rv3ztaq4az6m" }, "source": [ "### Task 4: Reconstructing Data and efficiency of PCA\n", "* Reconstruct the original data from data projected onto the first PC (to reconstruct 2-D data use inverse PCA) and compute the reconstruction loss i.e., $||{X_{reconstructed} - X_{original}}||$. Where $|| \\cdot{} ||$ is the L2-norm
\n", "* Now try a naive dimensionality reduction technique i.e., ignoring the second feature and compute the reconstruction loss again .
\n", "* Compare the two and explain the difference. Is there a case possible when they are equal?" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "DbQCPdwOaz6n" }, "outputs": [], "source": [ "### YOUR CODE HERE ###\n", "\n" ] }, { "cell_type": "markdown", "metadata": { "id": "YPY7LcJ0az6n" }, "source": [ "#### ***Question: Briefly explain the curse of dimensionality and how PCA can be used to tackle it.***\n", "\n", "Ans: Double click to enter your answer here" ] }, { "cell_type": "markdown", "metadata": { "id": "byRCr_nHaz6n" }, "source": [ "# Part 5: Applying PCA to our problem\n", "\n", "#### For this part you can use the PCA function you made in part 4 or implement it from any library.\n", "Think of ways you can incorporate PCA into your implementation of KNN to classify mnist audio data.\n", "One idea that comes to mind is to extract a greater number of mfccs from the audio files and reduce them using PCA. \n", "

\n", "If you believe that PCA will not be beneficial and you choose to not use it, justify your choice." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "JCCNRw5Daz6o" }, "outputs": [], "source": [ "### YOUR CODE HERE ###\n", "\n" ] } ], "metadata": { "colab": { "provenance": [] }, "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.13" } }, "nbformat": 4, "nbformat_minor": 1 }