
RESULTS

Once the system is online and connected to its peripherals, it begins to gather data about the surroundings. The webcam first captures the amount of motion in a given time frame, then the number of people in that frame is identified. The LDR records the level of ambient lighting, and the time is logged. An example of the data recorded by the Intel Edison is shown.
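One way these observations could be assembled is sketched below. The motion measure is a simple frame-differencing estimate between consecutive grayscale frames, and the field names, the 0-1023 LDR range, and the threshold value are assumptions for illustration, not the exact processing used on the Edison.

```python
import numpy as np
from datetime import datetime

def motion_amount(prev_frame, curr_frame, threshold=25):
    """Fraction of pixels whose intensity changed by more than
    `threshold` between two consecutive grayscale frames."""
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    return float(np.mean(diff > threshold))

def make_record(prev_frame, curr_frame, people_count, ldr_reading):
    """Bundle one observation: motion level, people count, ambient
    light, and hour of day, mirroring the features described above."""
    return {
        "motion": motion_amount(prev_frame, curr_frame),
        "people": people_count,
        "light": ldr_reading,      # raw LDR analog reading, 0-1023 assumed
        "hour": datetime.now().hour,
    }

# Two toy 4x4 grayscale frames: the top half of the image brightens,
# so exactly half of the pixels register as motion.
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[:2, :] = 200
record = make_record(prev, curr, people_count=12, ldr_reading=512)
print(record["motion"])  # → 0.5
```

In a real deployment the frames would come from the webcam capture loop and the LDR value from the Edison's analog input; the record dictionary is what would be logged for training.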

Next, on the AWS user interface, machine learning models are generated from a training dataset, as shown in the figure below, and are used to predict song attributes that are then fed into Spotify's API.

Feeding this data into the machine learning models with, for example, a low level of motion, average lighting, 12 people in the room, and a time of 2 pm, the predicted values are as follows:
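The prediction step can be illustrated locally. The real models were trained on AWS, so the least-squares fit, the toy training rows, and the choice of energy and valence as the two predicted attributes below are all placeholders showing the shape of the computation, not the actual models or data.

```python
import numpy as np

# Hypothetical training data: rows are [motion, light, people, hour];
# targets are [energy, valence] on a 0-1 scale.
X = np.array([
    [0.1, 300,  2, 22],
    [0.8, 900, 15, 14],
    [0.3, 500,  6, 18],
    [0.6, 700, 12, 12],
    [0.2, 400,  4, 20],
])
Y = np.array([
    [0.2, 0.3],
    [0.9, 0.8],
    [0.4, 0.5],
    [0.7, 0.7],
    [0.3, 0.4],
])

# Append a bias column and solve the least-squares problem min ||Xb·W - Y||.
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)

def predict(motion, light, people, hour):
    """Predict song attributes from one sensor record, clipped to [0, 1]."""
    features = np.array([motion, light, people, hour, 1.0])
    return np.clip(features @ W, 0.0, 1.0)

# The example inputs from the text: low motion, average light, 12 people, 2 pm.
energy, valence = predict(motion=0.1, light=500, people=12, hour=14)
```

Any regression model exposing a predict-from-features call would slot into the same place in the pipeline.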

The predicted values are entered into Spotify, and the playlist is updated with 10 new songs representative of these values.
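A sketch of how predicted attributes could be mapped onto a Spotify request is shown below. The parameter names follow the spotipy client's recommendations endpoint; the "chill" genre seed, the attribute values, and the helper function itself are illustrative assumptions, and the authenticated calls are left as comments since they require credentials.

```python
def recommendation_params(genre, energy, valence, limit=10):
    """Map predicted song attributes onto the keyword arguments of
    Spotify's recommendations endpoint (spotipy naming)."""
    return {
        "seed_genres": [genre],
        "target_energy": round(energy, 2),
        "target_valence": round(valence, 2),
        "limit": limit,
    }

# Hypothetical predicted values for one sensor record.
params = recommendation_params("chill", energy=0.35, valence=0.5)

# With an authenticated spotipy client, the playlist update would look like:
#   import spotipy
#   sp = spotipy.Spotify(auth_manager=...)          # credentials omitted
#   tracks = sp.recommendations(**params)["tracks"]
#   sp.playlist_replace_items(PLAYLIST_ID, [t["uri"] for t in tracks])
```

Because `limit` defaults to 10, each update swaps in exactly 10 tracks matching the predicted attributes.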

If the input parameters change, the playlist is regenerated. In this particular example the genre changes, and the entire playlist is therefore replaced with a new set of 10 songs, as shown below.
